Dataset schema (per record): corpus_id (string, 7–12 chars), paper_id (string, 9–16 chars), title (string, 1–261 chars), abstract (string, 70–4.02k chars), source (string, 1 class: arxiv), bibtex (string, 208–20.9k chars), citation_key (string, 6–100 chars).
arxiv-665201
2410.02736
Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge
<|reference_start|>Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge: LLM-as-a-Judge has been widely utilized as an evaluation method in various benchmarks and has served as a supervised reward signal in model training. However, despite its excellence in many domains, potential issues remain under-explored, undermining its reliability and the scope of its utility. Therefore, we identify 12 key potential biases and propose a new automated bias quantification framework, CALM, which systematically quantifies and analyzes each type of bias in LLM-as-a-Judge by using automated and principle-guided modification. Our experiments cover multiple popular language models, and the results indicate that while advanced models have achieved commendable overall performance, significant biases persist in certain specific tasks. Empirical results suggest that there remains room for improvement in the reliability of LLM-as-a-Judge. Moreover, we also discuss the explicit and implicit influence of these biases and give some suggestions for the reliable application of LLM-as-a-Judge. Our work highlights the need for stakeholders to address these issues and reminds users to exercise caution in LLM-as-a-Judge applications.<|reference_end|>
arxiv
@article{ye2024justice, title={Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge}, author={Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, Xiangliang Zhang}, journal={arXiv preprint arXiv:2410.02736}, year={2024}, archivePrefix={arXiv}, eprint={2410.02736}, primaryClass={cs.CL cs.AI} }
ye2024justice
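CALM's implementation is not spelled out in the abstract above; as a minimal sketch of what a principle-guided modification looks like, the snippet below probes one of the listed bias types (position bias) by swapping the order of two candidate answers and measuring how often the judge's verdict flips. The `judge` callable is a hypothetical stand-in for any LLM judging API, not the paper's interface.

```python
# Minimal sketch of a principle-guided position-bias probe for an LLM judge.
# `judge` is a hypothetical callable wrapping any LLM API; given a question and
# two candidates shown as "A" (first) and "B" (second), it must return "A" or "B".

def position_bias_rate(judge, question, pairs):
    """Fraction of answer pairs whose verdict flips when candidate order is
    swapped (0.0 = position-consistent judge)."""
    flips = 0
    for answer_1, answer_2 in pairs:
        verdict_ab = judge(question, first=answer_1, second=answer_2)  # answer_1 is "A"
        verdict_ba = judge(question, first=answer_2, second=answer_1)  # answer_1 is "B"
        # A consistent judge picks the same underlying answer in both orderings.
        if (verdict_ab == "A") != (verdict_ba == "B"):
            flips += 1
    return flips / len(pairs)
```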
arxiv-665202
2410.02740
Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models
<|reference_start|>Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models: Recent advancements in multimodal models highlight the value of rewritten captions for improving performance, yet key challenges remain. For example, while synthetic captions often provide superior quality and image-text alignment, it is not clear whether they can fully replace AltTexts: the role of synthetic captions and their interaction with original web-crawled AltTexts in pre-training is still not well understood. Moreover, different multimodal foundation models may have unique preferences for specific caption formats, but efforts to identify the optimal captions for each model remain limited. In this work, we propose a novel, controllable, and scalable captioning pipeline designed to generate diverse caption formats tailored to various multimodal models. By examining Short Synthetic Captions (SSC) towards Dense Synthetic Captions (DSC+) as case studies, we systematically explore their effects and interactions with AltTexts across models such as CLIP, multimodal LLMs, and diffusion models. Our findings reveal that a hybrid approach that keeps both synthetic captions and AltTexts can outperform the use of synthetic captions alone, improving both alignment and performance, with each model demonstrating preferences for particular caption formats. This comprehensive analysis provides valuable insights into optimizing captioning strategies, thereby advancing the pre-training of multimodal foundation models.<|reference_end|>
arxiv
@article{lai2024revisit, title={Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models}, author={Zhengfeng Lai, Vasileios Saveris, Chen Chen, Hong-You Chen, Haotian Zhang, Bowen Zhang, Juan Lao Tebar, Wenze Hu, Zhe Gan, Peter Grasch, Meng Cao, Yinfei Yang}, journal={arXiv preprint arXiv:2410.02740}, year={2024}, archivePrefix={arXiv}, eprint={2410.02740}, primaryClass={cs.CV cs.AI cs.LG} }
lai2024revisit
arxiv-665203
2410.02741
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization
<|reference_start|>Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization: Large language models (LLMs) can generate fluent summaries across domains using prompting techniques, reducing the need to train models for summarization applications. However, crafting effective prompts that guide LLMs to generate summaries with the appropriate level of detail and writing style remains a challenge. In this paper, we explore the use of salient information extracted from the source document to enhance summarization prompts. We show that adding keyphrases in prompts can improve ROUGE F1 and recall, making the generated summaries more similar to the reference and more complete. The number of keyphrases can control the precision-recall trade-off. Furthermore, our analysis reveals that incorporating phrase-level salient information is superior to word- or sentence-level. However, the impact on hallucination is not universally positive across LLMs. To conduct this analysis, we introduce Keyphrase Signal Extractor (SigExt), a lightweight model that can be finetuned to extract salient keyphrases. By using SigExt, we achieve consistent ROUGE improvements across datasets and open-weight and proprietary LLMs without any LLM customization. Our findings provide insights into leveraging salient information in building prompt-based summarization systems.<|reference_end|>
arxiv
@article{xu2024salient, title={Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization}, author={Lei Xu, Mohammed Asad Karim, Saket Dingliwal, Aparna Elangovan}, journal={arXiv preprint arXiv:2410.02741}, year={2024}, archivePrefix={arXiv}, eprint={2410.02741}, primaryClass={cs.CL cs.AI cs.LG} }
xu2024salient
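SigExt itself is a finetuned extractor, so the sketch below only illustrates the prompting pattern the abstract describes: salient keyphrases are extracted from the source document and injected into the summarization prompt, with the number of keyphrases `k` as the precision-recall control knob. The frequency-based extractor is an assumption standing in for SigExt.

```python
# Sketch of keyphrase-augmented summarization prompting. The frequency-based
# extractor below is only a stand-in for the paper's finetuned SigExt model;
# k controls how many keyphrases are injected (the precision-recall knob).
import re
from collections import Counter

def extract_keyphrases(document, k=5):
    words = re.findall(r"[a-z][a-z-]{3,}", document.lower())
    return [w for w, _ in Counter(words).most_common(k)]

def build_prompt(document, k=5):
    keyphrases = ", ".join(extract_keyphrases(document, k))
    return (
        "Summarize the following document.\n"
        f"Make sure the summary covers these salient phrases: {keyphrases}.\n\n"
        f"Document:\n{document}\n\nSummary:"
    )

print(build_prompt("Water evaporates from oceans, condenses into clouds, "
                   "and returns as rain, completing the water cycle."))
```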
arxiv-665204
2410.02742
Grounding Large Language Models In Embodied Environment With Imperfect World Models
<|reference_start|>Grounding Large Language Models In Embodied Environment With Imperfect World Models: Despite widespread success in various applications, large language models (LLMs) often stumble when tackling basic physical reasoning or executing robotics tasks, due to a lack of direct experience with the physical nuances of the real world. To address these issues, we propose Grounding Large language models with Imperfect world MOdels (GLIMO), which utilizes proxy world models such as simulators to collect and synthesize training data. GLIMO incorporates an LLM agent-based data generator to automatically create high-quality and diverse instruction datasets. The generator includes an iterative self-refining module for temporally consistent experience sampling, a diverse set of question-answering instruction seeds, and a retrieval-augmented generation module for reflecting on prior experiences. Comprehensive experiments show that our approach improves the performance of strong open-source LLMs like LLaMA-3, with performance boosts of 2.04$\times$, 1.54$\times$, and 1.82$\times$ across three different benchmarks, respectively. The resulting performance can compete with or surpass that of larger counterparts such as GPT-4.<|reference_end|>
arxiv
@article{liu2024grounding, title={Grounding Large Language Models In Embodied Environment With Imperfect World Models}, author={Haolan Liu, Jishen Zhao}, journal={arXiv preprint arXiv:2410.02742}, year={2024}, archivePrefix={arXiv}, eprint={2410.02742}, primaryClass={cs.CL cs.LG cs.RO} }
liu2024grounding
arxiv-665205
2410.02743
MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions
<|reference_start|>MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions: Reinforcement learning from human feedback (RLHF) has demonstrated effectiveness in aligning large language models (LLMs) with human preferences. However, token-level RLHF suffers from the credit assignment problem over long sequences, where delayed rewards make it challenging for the model to discern which actions contributed to successful outcomes. This hinders learning efficiency and slows convergence. In this paper, we propose MA-RLHF, a simple yet effective RLHF framework that incorporates macro actions -- sequences of tokens or higher-level language constructs -- into the learning process. By operating at this higher level of abstraction, our approach reduces the temporal distance between actions and rewards, facilitating faster and more accurate credit assignment. This results in more stable policy gradient estimates and enhances learning efficiency within each episode, all without increasing computational complexity during training or inference. We validate our approach through extensive experiments across various model sizes and tasks, including text summarization, dialogue generation, question answering, and program synthesis. Our method achieves substantial performance improvements over standard RLHF, with performance gains of up to 30% in text summarization and code generation, 18% in dialogue, and 8% in question answering tasks. Notably, our approach reaches parity with vanilla RLHF 1.7x to 2x faster in terms of training time and continues to outperform it with further training. We will make our code and data publicly available at https://github.com/ernie-research/MA-RLHF .<|reference_end|>
arxiv
@article{chai2024ma-rlhf:, title={MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions}, author={Yekun Chai, Haoran Sun, Huang Fang, Shuohuan Wang, Yu Sun, Hua Wu}, journal={arXiv preprint arXiv:2410.02743}, year={2024}, archivePrefix={arXiv}, eprint={2410.02743}, primaryClass={cs.CL} }
chai2024ma-rlhf:
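The abstract above leaves the exact macro-action construction open (it mentions sequences of tokens or higher-level language constructs); the sketch below uses the simplest variant, fixed-size token chunks, to show how per-token log-probabilities and rewards can be aggregated into macro-level quantities before a policy-gradient update. It is an illustrative simplification, not the authors' training recipe.

```python
# Illustrative macro-action grouping for RLHF credit assignment: per-token
# log-probs are summed within fixed-size chunks (the joint log-prob of the
# chunk), and per-token rewards are likewise aggregated, so the policy
# gradient operates over fewer, coarser actions.
import torch
import torch.nn.functional as F

def to_macro_actions(token_logprobs, token_rewards, macro_size=5):
    """Both inputs are 1-D tensors of length T; zero-padding is harmless here."""
    pad = (-token_logprobs.shape[0]) % macro_size
    lp = F.pad(token_logprobs, (0, pad)).view(-1, macro_size).sum(dim=1)
    rw = F.pad(token_rewards, (0, pad)).view(-1, macro_size).sum(dim=1)
    return lp, rw

macro_lp, macro_rw = to_macro_actions(torch.randn(23), torch.randn(23))
loss = -(macro_lp * macro_rw.detach()).sum()  # REINFORCE-style surrogate at macro level
```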
arxiv-665206
2410.02744
Neutral residues: revisiting adapters for model extension
<|reference_start|>Neutral residues: revisiting adapters for model extension: We address the problem of extending a pretrained large language model to a new domain that was not seen at training time, such as adding a language for which the original model has seen little or no training data. Popular solutions like fine-tuning or low-rank adaptation are successful at domain adaptation, but formally they do not add any extra capacity, and they degrade performance in the original domain. Our paper analyzes this extension problem from three angles: data, architecture, and training procedure, which are advantageously considered jointly. In particular, we improve adapters and make it possible to learn an entirely new language while ensuring that the output of the neural network is almost unchanged in the original domain. For this purpose, we modify the new residual blocks in a way that leads each new residual block to output near-zeros in the original domain. This solution of neutral residues, which borrows architectural components from mixture of experts, is effective: with only 20% extra learnable weights compared to an original model trained on English, we get results that are significantly better than concurrent approaches (fine-tuning, low-rank or vanilla adapters) in terms of the trade-off between learning a new language and not forgetting English.<|reference_end|>
arxiv
@article{talla2024neutral, title={Neutral residues: revisiting adapters for model extension}, author={Franck Signe Talla and Herve Jegou and Edouard Grave}, journal={arXiv preprint arXiv:2410.02744}, year={2024}, archivePrefix={arXiv}, eprint={2410.02744}, primaryClass={cs.CL cs.AI cs.LG} }
talla2024neutral
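A concrete way to make a new residual block output near-zero in the original domain, as described above, is to zero-initialize the adapter's output projection so the block starts as an exact identity. The sketch below shows this in PyTorch; the mixture-of-experts-style components the paper borrows are omitted, so this is only the basic pattern.

```python
# A residual adapter whose output projection is zero-initialized: the block is
# an exact identity at the start of training and only learns to deviate where
# new-domain data pushes it to. MoE-style routing from the paper is omitted.
import torch
import torch.nn as nn

class NeutralResidualAdapter(nn.Module):
    def __init__(self, d_model, d_bottleneck):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        nn.init.zeros_(self.up.weight)  # residual branch outputs exactly zero at init
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

block = NeutralResidualAdapter(512, 64)
x = torch.randn(2, 10, 512)
assert torch.equal(block(x), x)  # neutral before any training step
```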
arxiv-665207
2410.02745
AVG-LLaVA: A Large Multimodal Model with Adaptive Visual Granularity
<|reference_start|>AVG-LLaVA: A Large Multimodal Model with Adaptive Visual Granularity: Recently, when dealing with high-resolution images, dominant LMMs usually divide them into multiple local images and one global image, which leads to a large number of visual tokens. In this work, we introduce AVG-LLaVA, an LMM that can adaptively select the appropriate visual granularity based on the input image and instruction. This approach not only reduces the number of visual tokens and speeds up inference, but also improves the overall model performance. Specifically, we introduce the following modules based on LLaVA-NeXT: (a) a visual granularity scaler that includes multiple pooling layers to obtain visual tokens with different granularities; (b) a visual granularity router, which includes a Transformer layer, an MLP layer, and a voter layer, used to select the appropriate visual granularity based on the image and instruction. Furthermore, we propose RGLF, a novel training paradigm that aims at aligning the granularity predicted by the router with the preferences of the LMM, without the need for additional manually annotated data. Extensive experiments and analysis show that AVG-LLaVA achieves superior performance across 11 benchmarks, as well as significantly reduces the number of visual tokens and speeds up inference (e.g., an 85.3% reduction in visual tokens and a 2.53$\times$ increase in inference speed on the AI2D benchmark).<|reference_end|>
arxiv
@article{lan2024avg-llava:, title={AVG-LLaVA: A Large Multimodal Model with Adaptive Visual Granularity}, author={Zhibin Lan, Liqiang Niu, Fandong Meng, Wenbo Li, Jie Zhou, Jinsong Su}, journal={arXiv preprint arXiv:2410.02745}, year={2024}, archivePrefix={arXiv}, eprint={2410.02745}, primaryClass={cs.CV cs.AI cs.CL} }
lan2024avg-llava:
arxiv-665208
2410.02746
Contrastive Localized Language-Image Pre-Training
<|reference_start|>Contrastive Localized Language-Image Pre-Training: Contrastive Language-Image Pre-training (CLIP) has been a celebrated method for training vision encoders to generate image/text representations facilitating various applications. Recently, CLIP has been widely adopted as the vision backbone of multimodal large language models (MLLMs) to connect image inputs for language interactions. The success of CLIP as a vision-language foundation model relies on aligning web-crawled noisy text annotations at image levels. Nevertheless, such criteria may become insufficient for downstream tasks in need of fine-grained vision representations, especially when region-level understanding is demanding for MLLMs. In this paper, we improve the localization capability of CLIP with several advances. We propose a pre-training method called Contrastive Localized Language-Image Pre-training (CLOC) by complementing CLIP with region-text contrastive loss and modules. We formulate a new concept, promptable embeddings, of which the encoder produces image embeddings easy to transform into region representations given spatial hints. To support large-scale pre-training, we design a visually-enriched and spatially-localized captioning framework to effectively generate region-text pseudo-labels at scale. By scaling up to billions of annotated images, CLOC enables high-quality regional embeddings for image region recognition and retrieval tasks, and can be a drop-in replacement of CLIP to enhance MLLMs, especially on referring and grounding tasks.<|reference_end|>
arxiv
@article{chen2024contrastive, title={Contrastive Localized Language-Image Pre-Training}, author={Hong-You Chen, Zhengfeng Lai, Haotian Zhang, Xinze Wang, Marcin Eichner, Keen You, Meng Cao, Bowen Zhang, Yinfei Yang, Zhe Gan}, journal={arXiv preprint arXiv:2410.02746}, year={2024}, archivePrefix={arXiv}, eprint={2410.02746}, primaryClass={cs.CV cs.LG} }
chen2024contrastive
arxiv-665209
2410.02748
CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation
<|reference_start|>CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation: Existing automatic prompt engineering methods are typically designed for discriminative tasks, where new task prompts are iteratively refined with limited feedback from a single metric reflecting a single aspect. However, these approaches are suboptimal for generative tasks, which require more nuanced guidance beyond a single numeric metric to improve the prompt and optimize multiple aspects of the generated text. To address these challenges, we propose a novel multi-aspect Critique-Suggestion-guided automatic Prompt Optimization (CriSPO) approach. CriSPO introduces a critique-suggestion module as its core component. This module spontaneously discovers aspects and compares generated and reference texts across these aspects, providing specific suggestions for prompt modification. These clear critiques and actionable suggestions guide a receptive optimizer module to make more substantial changes, exploring a broader and more effective search space. To further improve CriSPO with multi-metric optimization, we introduce an Automatic Suffix Tuning (AST) extension to enhance the performance of task prompts across multiple metrics. We evaluate CriSPO on 4 state-of-the-art LLMs across 4 summarization and 5 QA datasets. Extensive experiments show 3-4% ROUGE score improvement on summarization and substantial improvement of various metrics on QA.<|reference_end|>
arxiv
@article{he2024crispo:, title={CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation}, author={Han He, Qianchu Liu, Lei Xu, Chaitanya Shivade, Yi Zhang, Sundararajan Srinivasan, Katrin Kirchhoff}, journal={arXiv preprint arXiv:2410.02748}, year={2024}, archivePrefix={arXiv}, eprint={2410.02748}, primaryClass={cs.CL cs.AI cs.LG} }
he2024crispo:
arxiv-665210
2410.02749
Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
<|reference_start|>Training Language Models on Synthetic Edit Sequences Improves Code Synthesis: Software engineers mainly write code by editing existing programs. In contrast, large language models (LLMs) autoregressively synthesize programs in a single pass. One explanation for this is the scarcity of open-sourced edit data. While high-quality instruction data for code synthesis is already scarce, high-quality edit data is even scarcer. To fill this gap, we develop a synthetic data generation algorithm called LintSeq. This algorithm refactors existing code into a sequence of code edits by using a linter to procedurally sample across the error-free insertions that can be used to sequentially write programs. It outputs edit sequences as text strings consisting of consecutive program diffs. To test LintSeq, we use it to refactor a dataset of instruction + program pairs into instruction + program-diff-sequence tuples. Then, we instruction finetune a series of smaller LLMs ranging from 2.6B to 14B parameters on both the refactored and original versions of this dataset, comparing zero-shot performance on code synthesis benchmarks. We show that during repeated sampling, edit sequence finetuned models produce more diverse programs than baselines. This results in better inference-time scaling for benchmark coverage as a function of samples, i.e., the fraction of problems "pass@k" solved by any attempt given "k" tries. For example, on HumanEval pass@50, small LLMs finetuned on synthetic edit sequences are competitive with GPT-4 and outperform models finetuned on the baseline dataset by +20% (+/-3%) in absolute score. Finally, we also pretrain our own tiny LMs for code understanding. We show that finetuning tiny models on synthetic code edits results in state-of-the-art code synthesis for the on-device model class. Our 150M parameter edit sequence LM matches or outperforms code models with twice as many parameters, both with and without repeated sampling, including Codex and AlphaCode.<|reference_end|>
arxiv
@article{piterbarg2024training, title={Training Language Models on Synthetic Edit Sequences Improves Code Synthesis}, author={Ulyana Piterbarg and Lerrel Pinto and Rob Fergus}, journal={arXiv preprint arXiv:2410.02749}, year={2024}, archivePrefix={arXiv}, eprint={2410.02749}, primaryClass={cs.LG cs.CL} }
piterbarg2024training
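The linter-in-the-loop machinery cannot be reproduced from the abstract alone; as a hedged simplification, the sketch below generates an edit sequence for a program by snapshotting it line by line and emitting consecutive unified diffs. This matches the described output format (text strings of consecutive program diffs), though not LintSeq's linter-verified sampling procedure.

```python
# Hedged simplification of edit-sequence generation: the real LintSeq samples
# linter-verified insertions, while this sketch just snapshots a program line
# by line and emits the consecutive unified diffs between snapshots.
import difflib

def edit_sequence(program: str):
    lines = program.splitlines(keepends=True)
    snapshots = ["".join(lines[:i]) for i in range(len(lines) + 1)]
    diffs = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        diff = difflib.unified_diff(prev.splitlines(keepends=True),
                                    curr.splitlines(keepends=True),
                                    fromfile="before", tofile="after")
        diffs.append("".join(diff))
    return diffs  # text strings of consecutive program diffs

for d in edit_sequence("def f(x):\n    return x + 1\n"):
    print(d)
```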
arxiv-665211
2410.02750
An Online Automatic Modulation Classification Scheme Based on Isolation Distributional Kernel
<|reference_start|>An Online Automatic Modulation Classification Scheme Based on Isolation Distributional Kernel: Automatic Modulation Classification (AMC), as a crucial technique in modern non-cooperative communication networks, plays a key role in various civil and military applications. However, existing AMC methods are usually complicated and, due to their high computational complexity, can work only in batch mode. This paper introduces a new online AMC scheme based on the Isolation Distributional Kernel. Our method stands out in two aspects. Firstly, it is the first proposal to represent baseband signals using a distributional kernel. Secondly, it introduces a pioneering AMC technique that works well in online settings under realistic time-varying channel conditions. Through extensive experiments in online settings, we demonstrate the effectiveness of the proposed classifier. Our results indicate that the proposed approach outperforms existing baseline models, including two state-of-the-art deep learning classifiers. Moreover, it distinguishes itself as the first online classifier for AMC with linear time complexity, which marks a significant efficiency boost for real-time applications.<|reference_end|>
arxiv
@article{li2024an, title={An Online Automatic Modulation Classification Scheme Based on Isolation Distributional Kernel}, author={Xinpeng Li, Zile Jiang, Kai Ming Ting, Ye Zhu}, journal={arXiv preprint arXiv:2410.02750}, year={2024}, archivePrefix={arXiv}, eprint={2410.02750}, primaryClass={cs.LG} }
li2024an
arxiv-665212
2410.02751
ReLIC: A Recipe for 64k Steps of In-Context Reinforcement Learning for Embodied AI
<|reference_start|>ReLIC: A Recipe for 64k Steps of In-Context Reinforcement Learning for Embodied AI: Intelligent embodied agents need to quickly adapt to new scenarios by integrating long histories of experience into decision-making. For instance, a robot in an unfamiliar house initially wouldn't know the locations of objects needed for tasks and might perform inefficiently. However, as it gathers more experience, it should learn the layout of its environment and remember where objects are, allowing it to complete new tasks more efficiently. To enable such rapid adaptation to new tasks, we present ReLIC, a new approach for in-context reinforcement learning (RL) for embodied agents. With ReLIC, agents are capable of adapting to new environments using 64,000 steps of in-context experience with full attention while being trained through self-generated experience via RL. We achieve this by proposing a novel policy update scheme for on-policy RL called "partial updates" as well as a Sink-KV mechanism that enables effective utilization of a long observation history for embodied agents. Our method outperforms a variety of meta-RL baselines in adapting to unseen houses in an embodied multi-object navigation task. In addition, we find that ReLIC is capable of few-shot imitation learning despite never being trained with expert demonstrations. We also provide a comprehensive analysis of ReLIC, highlighting that the combination of large-scale RL training, the proposed partial updates scheme, and the Sink-KV are essential for effective in-context learning. The code for ReLIC and all our experiments is at https://github.com/aielawady/relic<|reference_end|>
arxiv
@article{elawady2024relic:, title={ReLIC: A Recipe for 64k Steps of In-Context Reinforcement Learning for Embodied AI}, author={Ahmad Elawady, Gunjan Chhablani, Ram Ramrakhya, Karmesh Yadav, Dhruv Batra, Zsolt Kira, Andrew Szot}, journal={arXiv preprint arXiv:2410.02751}, year={2024}, archivePrefix={arXiv}, eprint={2410.02751}, primaryClass={cs.LG} }
elawady2024relic:
arxiv-665213
2410.02755
SIEVE: General Purpose Data Filtering System Matching GPT-4o Accuracy at 1% the Cost
<|reference_start|>SIEVE: General Purpose Data Filtering System Matching GPT-4o Accuracy at 1% the Cost: Creating specialized large language models requires vast amounts of clean, special purpose data for training and fine-tuning. With only a handful of existing large-scale, domain-specific datasets, creation of new datasets is required in most applications. This requires the development of new application-specific filtering of web-scale data. Filtering with a high-performance, general-purpose LLM such as GPT-4o can be highly effective, but this is extremely expensive at web-scale. This paper proposes SIEVE, a lightweight alternative that matches GPT-4o accuracy at a fraction of the cost. SIEVE can perform up to 500 filtering operations for the cost of one GPT-4o filtering call. The key to SIEVE is a seamless integration of GPT-4o and lightweight T5 models, using active learning to fine-tune T5 in the background with a small number of calls to GPT-4o. Once trained, it performs as well as GPT-4o at a tiny fraction of the cost. We experimentally validate SIEVE on the OpenWebText dataset, using five highly customized filter tasks targeting high quality and domain-specific content. Our results demonstrate the effectiveness and efficiency of our method in curating large, high-quality datasets for language model training at a substantially lower cost (1%) than existing techniques. To further validate SIEVE, experiments show that SIEVE and GPT-4o achieve similar accuracy, with human evaluators preferring SIEVE's filtering results to those of GPT-4o.<|reference_end|>
arxiv
@article{zhang2024sieve:, title={SIEVE: General Purpose Data Filtering System Matching GPT-4o Accuracy at 1% the Cost}, author={Jifan Zhang, Robert Nowak}, journal={arXiv preprint arXiv:2410.02755}, year={2024}, archivePrefix={arXiv}, eprint={2410.02755}, primaryClass={cs.CL cs.LG} }
zhang2024sieve:
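The abstract describes an active-learning loop between an expensive judge and a cheap student; the sketch below shows that loop in miniature, with a logistic-regression student over hashed n-gram features standing in for the finetuned T5 and a hypothetical `expensive_filter` callable standing in for a GPT-4o query returning 0/1. This is the general pattern only, not SIEVE's implementation.

```python
# Miniature SIEVE-style loop: label only the student's most uncertain examples
# with the expensive judge, then refit the cheap student on all labels so far.
# Assumes the seed batch contains both classes; `expensive_filter` is hypothetical.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

def sieve_loop(texts, expensive_filter, rounds=5, batch=16):
    X = HashingVectorizer(n_features=2**14).transform(texts)
    labeled = list(range(batch))                      # seed queries to the judge
    labels = [expensive_filter(texts[i]) for i in labeled]
    student = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        student.fit(X[labeled], labels)
        proba = student.predict_proba(X)[:, 1]
        uncertainty = -np.abs(proba - 0.5)            # closest to 0.5 = most uncertain
        seen = set(labeled)
        new = [i for i in np.argsort(uncertainty)[::-1] if i not in seen][:batch]
        for i in new:                                 # query the expensive judge sparingly
            labeled.append(i)
            labels.append(expensive_filter(texts[i]))
    return student

texts = [f"sample document number {i}" for i in range(200)]
student = sieve_loop(texts, lambda t: int(len(t) % 2 == 0))
```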
arxiv-665214
2410.02756
CorPipe at CRAC 2024: Predicting Zero Mentions from Raw Text
<|reference_start|>CorPipe at CRAC 2024: Predicting Zero Mentions from Raw Text: We present CorPipe 24, the winning entry to the CRAC 2024 Shared Task on Multilingual Coreference Resolution. In this third iteration of the shared task, a novel objective is to also predict empty nodes needed for zero coreference mentions (while the empty nodes were given on input in previous years). This way, coreference resolution can be performed on raw text. We evaluate two model variants: a two-stage approach (where the empty nodes are predicted first using a pretrained encoder model and then processed together with sentence words by another pretrained model) and a single-stage approach (where a single pretrained encoder model generates empty nodes, coreference mentions, and coreference links jointly). In both settings, CorPipe surpasses other participants by a large margin of 3.9 and 2.8 percentage points, respectively. The source code and the trained model are available at https://github.com/ufal/crac2024-corpipe .<|reference_end|>
arxiv
@article{straka2024corpipe, title={CorPipe at CRAC 2024: Predicting Zero Mentions from Raw Text}, author={Milan Straka}, journal={arXiv preprint arXiv:2410.02756}, year={2024}, archivePrefix={arXiv}, eprint={2410.02756}, primaryClass={cs.CL} }
straka2024corpipe
arxiv-665215
2410.02757
Loong: Generating Minute-level Long Videos with Autoregressive Language Models
<|reference_start|>Loong: Generating Minute-level Long Videos with Autoregressive Language Models: It is desirable but challenging to generate content-rich long videos on the scale of minutes. Autoregressive large language models (LLMs) have achieved great success in generating coherent and long sequences of tokens in the domain of natural language processing, while the exploration of autoregressive LLMs for video generation is limited to generating short videos of several seconds. In this work, we conduct a deep analysis of the challenges that prevent autoregressive LLM-based video generators from generating long videos. Based on the observations and analysis, we propose Loong, a new autoregressive LLM-based video generator that can generate minute-long videos. Specifically, we model the text tokens and video tokens as a unified sequence for autoregressive LLMs and train the model from scratch. We propose progressive short-to-long training with a loss re-weighting scheme to mitigate the loss imbalance problem for long video training. We further investigate inference strategies, including video token re-encoding and sampling strategies, to diminish error accumulation during inference. Our proposed Loong can be trained on 10-second videos and be extended to generate minute-level long videos conditioned on text prompts, as demonstrated by the results. More samples are available at: https://epiphqny.github.io/Loong-video.<|reference_end|>
arxiv
@article{wang2024loong:, title={Loong: Generating Minute-level Long Videos with Autoregressive Language Models}, author={Yuqing Wang, Tianwei Xiong, Daquan Zhou, Zhijie Lin, Yang Zhao, Bingyi Kang, Jiashi Feng, Xihui Liu}, journal={arXiv preprint arXiv:2410.02757}, year={2024}, archivePrefix={arXiv}, eprint={2410.02757}, primaryClass={cs.CV} }
wang2024loong:
arxiv-665216
2410.02759
Forecasting Smog Clouds With Deep Learning
<|reference_start|>Forecasting Smog Clouds With Deep Learning: In this proof-of-concept study, we conduct multivariate time series forecasting for the concentrations of nitrogen dioxide (NO2), ozone (O3), and (fine) particulate matter (PM10 & PM2.5) with meteorological covariates between two locations using various deep learning models, with a focus on long short-term memory (LSTM) and gated recurrent unit (GRU) architectures. In particular, we propose an integrated, hierarchical model architecture inspired by air pollution dynamics and atmospheric science that employs multi-task learning and is benchmarked by unidirectional and fully-connected models. Results demonstrate that, above all, the hierarchical GRU proves itself as a competitive and efficient method for forecasting the concentration of smog-related pollutants.<|reference_end|>
arxiv
@article{oldenburg2024forecasting, title={Forecasting Smog Clouds With Deep Learning}, author={Valentijn Oldenburg, Juan Cardenas-Cartagena, Matias Valdenegro-Toro}, journal={Oldenburg, V., Cardenas-Cartagena, J., & Valdenegro-Toro, M. (2024). Forecasting smog clouds with deep learning: A proof-of-concept. In ICML 2024 AI for Science Workshop. https://openreview.net/forum?id=UQa2PEVHMF}, year={2024}, archivePrefix={arXiv}, eprint={2410.02759}, primaryClass={cs.LG} }
oldenburg2024forecasting
arxiv-665217
2410.02760
Erasing Conceptual Knowledge from Language Models
<|reference_start|>Erasing Conceptual Knowledge from Language Models: Concept erasure in language models has traditionally lacked a comprehensive evaluation framework, leading to incomplete assessments of effectiveness of erasure methods. We propose an evaluation paradigm centered on three critical criteria: innocence (complete knowledge removal), seamlessness (maintaining conditional fluent generation), and specificity (preserving unrelated task performance). Our evaluation metrics naturally motivate the development of Erasure of Language Memory (ELM), a new method designed to address all three dimensions. ELM employs targeted low-rank updates to alter output distributions for erased concepts while preserving overall model capabilities including fluency when prompted for an erased concept. We demonstrate ELM's efficacy on biosecurity, cybersecurity, and literary domain erasure tasks. Comparative analysis shows that ELM achieves superior performance across our proposed metrics, including near-random scores on erased topic assessments, generation fluency, maintained accuracy on unrelated benchmarks, and robustness under adversarial attacks. Our code, data, and trained models are available at https://elm.baulab.info<|reference_end|>
arxiv
@article{gandikota2024erasing, title={Erasing Conceptual Knowledge from Language Models}, author={Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau}, journal={arXiv preprint arXiv:2410.02760}, year={2024}, archivePrefix={arXiv}, eprint={2410.02760}, primaryClass={cs.CL cs.LG} }
gandikota2024erasing
arxiv-665218
2410.02761
FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models
<|reference_start|>FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models: The rapid development of generative AI is a double-edged sword, which not only facilitates content creation but also makes image manipulation easier and more difficult to detect. Although current image forgery detection and localization (IFDL) methods are generally effective, they tend to face two challenges: (1) a black-box nature with an unknown detection principle, and (2) limited generalization across diverse tampering methods (e.g., Photoshop, DeepFake, AIGC-Editing). To address these issues, we propose the explainable IFDL task and design FakeShield, a multi-modal framework capable of evaluating image authenticity, generating tampered region masks, and providing a judgment basis based on pixel-level and image-level tampering clues. Additionally, we leverage GPT-4o to enhance existing IFDL datasets, creating the Multi-Modal Tamper Description dataSet (MMTD-Set) for training FakeShield's tampering analysis capabilities. Meanwhile, we incorporate a Domain Tag-guided Explainable Forgery Detection Module (DTE-FDM) and a Multi-modal Forgery Localization Module (MFLM) to address various types of tamper detection interpretation and achieve forgery localization guided by detailed textual descriptions. Extensive experiments demonstrate that FakeShield effectively detects and localizes various tampering techniques, offering an explainable and superior solution compared to previous IFDL methods.<|reference_end|>
arxiv
@article{xu2024fakeshield:, title={FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models}, author={Zhipei Xu, Xuanyu Zhang, Runyi Li, Zecheng Tang, Qing Huang, Jian Zhang}, journal={arXiv preprint arXiv:2410.02761}, year={2024}, archivePrefix={arXiv}, eprint={2410.02761}, primaryClass={cs.CV cs.AI} }
xu2024fakeshield:
arxiv-665219
2410.02762
Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations
<|reference_start|>Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations: We investigate the internal representations of vision-language models (VLMs) to address hallucinations, a persistent challenge despite advances in model size and training. We project VLMs' internal image representations to their language vocabulary and observe more confident output probabilities on real objects than hallucinated objects. We additionally use these output probabilities to spatially localize real objects. Building on this approach, we introduce a knowledge erasure algorithm that removes hallucinations by linearly orthogonalizing image features with respect to hallucinated object features. We show that targeted edits to a model's latent representations can reduce hallucinations by up to 25.7% on the COCO2014 dataset while preserving performance. Our findings demonstrate how a deeper understanding of VLMs' latent representations can enhance reliability and enable novel capabilities, such as zero-shot segmentation.<|reference_end|>
arxiv
@article{jiang2024interpreting, title={Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations}, author={Nick Jiang, Anish Kachinthaya, Suzie Petryk, Yossi Gandelsman}, journal={arXiv preprint arXiv:2410.02762}, year={2024}, archivePrefix={arXiv}, eprint={2410.02762}, primaryClass={cs.CV cs.LG} }
jiang2024interpreting
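The linear orthogonalization step described above can be stated in a few lines: subtract from an image feature its component along a hallucinated object's direction. The sketch below assumes `halluc_dir` has already been obtained; the paper derives it by projecting internal representations to the language vocabulary, which is not reproduced here.

```python
# Remove the component of an image feature lying along a hallucinated object's
# direction: x <- x - (x . v) v with v unit-norm. How `halluc_dir` is found
# (projection to the language vocabulary) is paper-specific and omitted.
import numpy as np

def orthogonalize(image_feat, halluc_dir):
    v = halluc_dir / np.linalg.norm(halluc_dir)
    return image_feat - (image_feat @ v) * v

feat, direction = np.random.randn(768), np.random.randn(768)
edited = orthogonalize(feat, direction)
assert abs(edited @ direction) / np.linalg.norm(direction) < 1e-8
```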
arxiv-665220
2410.02763
Vinoground: Scrutinizing LMMs over Dense Temporal Reasoning with Short Videos
<|reference_start|>Vinoground: Scrutinizing LMMs over Dense Temporal Reasoning with Short Videos: There has been growing sentiment recently that modern large multimodal models (LMMs) have addressed most of the key challenges related to short video comprehension. As a result, both academia and industry are gradually shifting their attention towards the more complex challenges posed by understanding long-form videos. However, is this really the case? Our studies indicate that LMMs still lack many fundamental reasoning capabilities even when dealing with short videos. We introduce Vinoground, a temporal counterfactual LMM evaluation benchmark encompassing 1000 short and natural video-caption pairs. We demonstrate that existing LMMs severely struggle to distinguish temporal differences between different actions and object transformations. For example, the best model GPT-4o only obtains ~50% on our text and video scores, showing a large gap compared to the human baseline of ~90%. All open-source multimodal models and CLIP-based models perform much worse, producing mostly random-chance performance. Through this work, we shed light on the fact that temporal reasoning in short videos is a problem yet to be fully solved. The dataset and evaluation code are available at https://vinoground.github.io.<|reference_end|>
arxiv
@article{zhang2024vinoground:, title={Vinoground: Scrutinizing LMMs over Dense Temporal Reasoning with Short Videos}, author={Jianrui Zhang, Mu Cai, Yong Jae Lee}, journal={arXiv preprint arXiv:2410.02763}, year={2024}, archivePrefix={arXiv}, eprint={2410.02763}, primaryClass={cs.CV cs.AI cs.CL cs.LG} }
zhang2024vinoground:
arxiv-665221
2410.02764
Flash-Splat: 3D Reflection Removal with Flash Cues and Gaussian Splats
<|reference_start|>Flash-Splat: 3D Reflection Removal with Flash Cues and Gaussian Splats: We introduce a simple yet effective approach for separating transmitted and reflected light. Our key insight is that the powerful novel view synthesis capabilities provided by modern inverse rendering methods (e.g., 3D Gaussian splatting) allow one to perform flash/no-flash reflection separation using unpaired measurements -- this relaxation dramatically simplifies image acquisition over conventional paired flash/no-flash reflection separation methods. Through extensive real-world experiments, we demonstrate that our method, Flash-Splat, accurately reconstructs both transmitted and reflected scenes in 3D. Our method outperforms existing 3D reflection separation methods, which do not leverage illumination control, by a large margin. Our project webpage is at https://flash-splat.github.io/.<|reference_end|>
arxiv
@article{xie2024flash-splat:, title={Flash-Splat: 3D Reflection Removal with Flash Cues and Gaussian Splats}, author={Mingyang Xie, Haoming Cai, Sachin Shah, Yiran Xu, Brandon Y. Feng, Jia-Bin Huang, Christopher A. Metzler}, journal={arXiv preprint arXiv:2410.02764}, year={2024}, archivePrefix={arXiv}, eprint={2410.02764}, primaryClass={cs.CV cs.LG eess.IV} }
xie2024flash-splat:
arxiv-665222
2410.02766
A concise introduction to Koopman operator theory and the Extended Dynamic Mode Decomposition
<|reference_start|>A concise introduction to Koopman operator theory and the Extended Dynamic Mode Decomposition: The framework of Koopman operator theory is discussed along with its connections to Dynamic Mode Decomposition (DMD) and (Kernel) Extended Dynamic Mode Decomposition (EDMD). This paper provides a succinct overview with consistent notation. The authors hope to provide an exposition that more naturally emphasizes the connections between theory and algorithms, bringing a sense of clarity to the subject.<|reference_end|>
arxiv
@article{patyn2024a, title={A concise introduction to Koopman operator theory and the Extended Dynamic Mode Decomposition}, author={Christophe Patyn and Geert Deconinck}, journal={arXiv preprint arXiv:2410.02766}, year={2024}, archivePrefix={arXiv}, eprint={2410.02766}, primaryClass={math.NA cs.NA math.DS} }
patyn2024a
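For readers who want the algorithmic core alongside the exposition, a minimal EDMD implementation fits in a few lines: lift snapshot pairs with a dictionary of observables and solve a least-squares problem for the finite-dimensional Koopman approximation. The monomial dictionary below is an arbitrary illustrative choice, not one prescribed by the paper.

```python
# Minimal EDMD: lift snapshot pairs (x_t, x_{t+1}) with a dictionary of
# observables Psi, then solve least squares for K with Psi(X') ~= Psi(X) K.
import numpy as np

def dictionary(x):
    """Monomial observables [1, x, x^2] for a scalar state (arbitrary choice)."""
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

def edmd(X, Xnext):
    PsiX, PsiY = dictionary(X), dictionary(Xnext)
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K

X = np.random.rand(200)              # snapshots of the linear map x' = 0.9 x
K = edmd(X, 0.9 * X)
print(np.linalg.eigvals(K))          # Koopman eigenvalue estimates: 1, 0.9, 0.81
```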
arxiv-665223
2410.02767
A mathematical model for Nordic skiing
<|reference_start|>A mathematical model for Nordic skiing: Nordic skiing provides fascinating opportunities for mathematical modelling studies that exploit methods and insights from physics, applied mathematics, data analysis, scientific computing and sports science. A typical ski course winds over varied terrain with frequent changes in elevation and direction, and so its geometry is naturally described by a three-dimensional space curve. The skier travels along a course under the influence of various forces, and their dynamics can be described using a nonlinear system of ordinary differential equations (ODEs) that are derived from Newton's laws of motion. We develop an algorithm for solving the governing equations that combines Hermite spline interpolation, numerical quadrature and a high-order ODE solver. Numerical simulations are compared with measurements of skiers on actual courses to demonstrate the effectiveness of the model.<|reference_end|>
arxiv
@article{macdonald2024a, title={A mathematical model for Nordic skiing}, author={Jane Shaw MacDonald and Rafael Ordoñez Cardales and John M. Stockie}, journal={arXiv preprint arXiv:2410.02767}, year={2024}, archivePrefix={arXiv}, eprint={2410.02767}, primaryClass={physics.class-ph cs.NA math.NA} }
macdonald2024a
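A hedged sketch of the modelling pipeline the abstract describes follows, with a cubic spline for the course profile and an ODE for Newton's second law along the track. The forces (constant propulsion, gravity, friction, quadratic drag) and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
# Course profile via cubic spline + Newton's second law along arc length s:
# m v' = F_prop - m g sin(theta) - mu m g cos(theta) - k v|v|. All parameter
# values are illustrative assumptions, not fitted to measured skiers.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import solve_ivp

s_pts = np.linspace(0, 1000, 11)          # arc-length samples along the course (m)
z_pts = 50 * np.sin(s_pts / 200)          # synthetic elevation profile (m)
elevation = CubicSpline(s_pts, z_pts)

m, g, mu, k, F_prop = 75.0, 9.81, 0.04, 0.3, 80.0

def rhs(t, y):
    s, v = y
    theta = np.arctan(elevation(s, 1))    # track angle from the spline slope dz/ds
    a = (F_prop - m * g * np.sin(theta)
         - mu * m * g * np.cos(theta) - k * v * abs(v)) / m
    return [v, a]

sol = solve_ivp(rhs, (0, 300), [0.0, 1.0], max_step=0.5)
print(f"distance covered after {sol.t[-1]:.0f} s: {sol.y[0, -1]:.1f} m")
```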
arxiv-665224
2410.02768
BoViLA: Bootstrapping Video-Language Alignment via LLM-Based Self-Questioning and Answering
<|reference_start|>BoViLA: Bootstrapping Video-Language Alignment via LLM-Based Self-Questioning and Answering: The development of multi-modal models has been rapidly advancing, with some demonstrating remarkable capabilities. However, annotating video-text pairs remains expensive and insufficient. Taking video question answering (VideoQA) tasks as an example, human-annotated questions and answers often cover only part of the video, and similar semantics can also be expressed through different text forms, leading to underutilization of the video. To address this, we propose BoViLA, a self-training framework that augments question samples during training through LLM-based self-questioning and answering, which helps the model exploit video information and the internal knowledge of LLMs more thoroughly to improve modality alignment. To filter bad self-generated questions, we introduce Evidential Deep Learning (EDL) to estimate uncertainty and assess the quality of self-generated questions by evaluating the modality alignment within the context. To the best of our knowledge, this work is the first to explore LLM-based self-training frameworks for modality alignment. We evaluate BoViLA on five strong VideoQA benchmarks, where it outperforms several state-of-the-art methods, demonstrating its effectiveness and generality. Additionally, we provide extensive analyses of the self-training framework and the EDL-based uncertainty filtering mechanism. The code will be made available at https://github.com/dunknsabsw/BoViLA.<|reference_end|>
arxiv
@article{chen2024bovila:, title={BoViLA: Bootstrapping Video-Language Alignment via LLM-Based Self-Questioning and Answering}, author={Jin Chen, Kaijing Ma, Haojian Huang, Jiayu Shen, Han Fang, Xianghao Zang, Chao Ban, Zhongjiang He, Hao Sun, Yanmei Kang}, journal={arXiv preprint arXiv:2410.02768}, year={2024}, archivePrefix={arXiv}, eprint={2410.02768}, primaryClass={cs.CV cs.AI} }
chen2024bovila:
arxiv-665225
2410.02769
Fundamentals of legislation for autonomous artificial intelligence systems
<|reference_start|>Fundamentals of legislation for autonomous artificial intelligence systems: The article proposes a method for forming a dedicated operational context in the course of developing and implementing autonomous corporate management systems, based on the example of autonomous systems for a board of directors. A significant part of the operational context for autonomous company management systems is the regulatory and legal environment within which corporations operate. In order to create a special operational context for autonomous artificial intelligence systems, the wording of local regulatory documents can be simultaneously presented in two versions: for use by people and for use by autonomous systems. In this case, the artificial intelligence system will get a well-defined operational context that allows such a system to perform functions within the required standards. Local regulations that provide for the specifics of the joint work of individuals and autonomous artificial intelligence systems can create the basis of the relevant legislation governing the development and implementation of autonomous systems.<|reference_end|>
arxiv
@article{romanova2024fundamentals, title={Fundamentals of legislation for autonomous artificial intelligence systems}, author={Anna Romanova}, journal={Dependability 2024;3:10-17}, year={2024}, doi={10.21683/1729-2646-2024-24-3-10-17}, archivePrefix={arXiv}, eprint={2410.02769}, primaryClass={cs.CY cs.AI} }
romanova2024fundamentals
arxiv-665226
2410.02770
Insightful Railway Track Evaluation: Leveraging NARX Feature Interpretation
<|reference_start|>Insightful Railway Track Evaluation: Leveraging NARX Feature Interpretation: The classification of time series is essential for extracting meaningful insights and aiding decision-making in engineering domains. Parametric modeling techniques like NARX are invaluable for comprehending intricate processes, such as environmental time series, owing to their easily interpretable and transparent structures. This article introduces a classification algorithm, Logistic-NARX Multinomial, which merges the NARX methodology with logistic regression. This approach not only produces interpretable models but also effectively tackles challenges associated with multiclass classification. Furthermore, this study introduces an innovative methodology tailored for the railway sector, offering a tool by employing NARX models to interpret the multitude of features derived from onboard sensors. This solution provides profound insights through feature importance analysis, enabling informed decision-making regarding safety and maintenance.<|reference_end|>
arxiv
@article{silva2024insightful, title={Insightful Railway Track Evaluation: Leveraging NARX Feature Interpretation}, author={P. H. O. Silva, A. S. Cerqueira, E. G. Nepomuceno}, journal={arXiv preprint arXiv:2410.02770}, year={2024}, archivePrefix={arXiv}, eprint={2410.02770}, primaryClass={eess.SP cs.LG} }
silva2024insightful
arxiv-665227
2410.02771
Complex-valued convolutional neural network classification of hand gesture from radar images
<|reference_start|>Complex-valued convolutional neural network classification of hand gesture from radar images: Hand gesture recognition systems have yielded many exciting advancements in the last decade and have become more popular in HCI (human-computer interaction), with application areas spanning from safety and security to the automotive field. Various deep neural network architectures have already been investigated for hand gesture recognition systems, including the multi-layer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), and a cascade of the last two architectures known as CNN-RNN. However, a major problem still exists: most existing ML algorithms, building blocks, and techniques are designed and developed for real-valued (RV) data. Researchers have applied various RV techniques to complex-valued (CV) radar images, for example by converting a CV optimisation problem into a RV one by splitting the complex numbers into their real and imaginary parts. However, the major disadvantage of this method is that the resulting algorithm doubles the network dimensions. Recent work on RNNs and other fundamental theoretical analyses suggests that CV numbers have a richer representational capacity, but due to the absence of the building blocks required to design such models, the performance of CV networks is marginalised. In this report, we propose a fully CV-CNN, including all building blocks, forward and backward operations, and derivatives, all in the complex domain. We explore our proposed classification model on two sets of CV hand gesture radar images in comparison with the equivalent RV model. In chapter five, we propose a CV forward residual network for binary classification of the two sets of CV hand gesture radar datasets and compare its performance with our proposed CV-CNN and a baseline CV forward CNN.<|reference_end|>
arxiv
@article{khandan2024complex-valued, title={Complex-valued convolutional neural network classification of hand gesture from radar images}, author={Shokooh Khandan}, journal={arXiv preprint arXiv:2410.02771}, year={2024}, archivePrefix={arXiv}, eprint={2410.02771}, primaryClass={cs.CV cs.AI} }
khandan2024complex-valued
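The core building block the report proposes can be illustrated compactly: a complex-valued convolution implemented with two real-valued convolutions, following (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r). The sketch below shows this standard construction, not the report's full CV-CNN with complex backward passes.

```python
# Complex-valued 2-D convolution built from two real-valued convolutions:
# real part = conv_r(x_r) - conv_i(x_i), imag part = conv_r(x_i) + conv_i(x_r).
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # real weights
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # imaginary weights

    def forward(self, x_r, x_i):
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_r(x_i) + self.conv_i(x_r)
        return real, imag

layer = ComplexConv2d(1, 8, 3, padding=1)
real, imag = layer(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32))
```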
arxiv-665228
2410.02772
Efficient Numerical Calibration of Water Delivery Network Using Short-Burst Hydrant Trials
<|reference_start|>Efficient Numerical Calibration of Water Delivery Network Using Short-Burst Hydrant Trials: Calibration is a critical process for reducing uncertainty in Water Distribution Network Hydraulic Models (WDN HM). However, features of certain WDNs, such as oversized pipelines, lead to shallow pressure gradients under normal daily conditions, posing a challenge for effective calibration. This study proposes a calibration methodology using short hydrant trials conducted at night, which increase the pressure gradient in the WDN. The data is resampled to align with hourly consumption patterns. In a unique real-world case study of a WDN zone, we demonstrate the statistically significant superiority of our method compared to calibration based on daily usage. The experimental methodology, inspired by a machine learning cross-validation framework, utilises two state-of-the-art calibration algorithms, achieving a reduction in absolute error of up to 45% in the best scenario.<|reference_end|>
arxiv
@article{kołodziej2024efficient, title={Efficient Numerical Calibration of Water Delivery Network Using Short-Burst Hydrant Trials}, author={Katarzyna Kołodziej (1), Michał Cholewa (1), Przemysław Głomb (1), Wojciech Koral (2), Michał Romaszewski (1) ((1) Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Gliwice, Poland, (2) AIUT Sp. z o.o. Gliwice, Poland)}, journal={arXiv preprint arXiv:2410.02772}, year={2024}, archivePrefix={arXiv}, eprint={2410.02772}, primaryClass={eess.SP cs.LG} }
kołodziej2024efficient
arxiv-665229
2410.02773
Mind the Uncertainty in Human Disagreement: Evaluating Discrepancies between Model Predictions and Human Responses in VQA
<|reference_start|>Mind the Uncertainty in Human Disagreement: Evaluating Discrepancies between Model Predictions and Human Responses in VQA: Large vision-language models frequently struggle to accurately predict responses provided by multiple human annotators, particularly when those responses exhibit human uncertainty. In this study, we focus on the Visual Question Answering (VQA) task, and we comprehensively evaluate how well the state-of-the-art vision-language models correlate with the distribution of human responses. To do so, we categorize our samples based on their levels (low, medium, high) of human uncertainty in disagreement (HUD) and employ not only accuracy but also three new human-correlated metrics in VQA, to investigate the impact of HUD. To better align models with humans, we also verify the effect of common calibration and human calibration. Our results show that even BEiT3, currently the best model for this task, struggles to capture the multi-label distribution inherent in diverse human responses. Additionally, we observe that the commonly used accuracy-oriented calibration technique adversely affects BEiT3's ability to capture HUD, further widening the gap between model predictions and human distributions. In contrast, we show the benefits of calibrating models towards human distributions for VQA, better aligning model confidence with human uncertainty. Our findings highlight that for VQA, the consistent alignment between human responses and model predictions is understudied and should become the next crucial target of future studies.<|reference_end|>
arxiv
@article{lan2024mind, title={Mind the Uncertainty in Human Disagreement: Evaluating Discrepancies between Model Predictions and Human Responses in VQA}, author={Jian Lan, Diego Frassinelli, Barbara Plank}, journal={arXiv preprint arXiv:2410.02773}, year={2024}, archivePrefix={arXiv}, eprint={2410.02773}, primaryClass={cs.CV cs.AI cs.CL} }
lan2024mind
arxiv-665230
2410.02774
Estimating the Unobservable Components of Electricity Demand Response with Inverse Optimization
<|reference_start|>Estimating the Unobservable Components of Electricity Demand Response with Inverse Optimization: Understanding and predicting the electricity demand responses to prices are critical activities for system operators, retailers, and regulators. While conventional machine learning and time series analyses have been adequate for the routine demand patterns that have adapted only slowly over many years, the emergence of active consumers with flexible assets such as solar-plus-storage systems and electric vehicles introduces new challenges. These active consumers exhibit more complex consumption patterns, the drivers of which are often unobservable to the retailers and system operators. In practice, system operators and retailers can only monitor the net demand (metered at grid connection points), which reflects the overall energy consumption or production exchanged with the grid. As a result, all "behind-the-meter" activities, such as the use of flexibility, remain hidden from these entities. Such behind-the-meter behavior may be controlled by third-party agents or incentivized by tariffs; in either case, the retailer's revenue and the system loads would be impacted by these activities behind the meter, but their details can only be inferred. We define the main components of net demand as baseload, flexible, and self-generation, each having nonlinear responses to market price signals. As flexible demand response and self-generation are increasing, this raises a pressing question of whether existing methods still perform well and, if not, whether there is an alternative way to understand and project the unobserved components of behavior. In response to this practical challenge, we evaluate the potential of a data-driven inverse optimization (IO) methodology. This approach characterizes decomposed consumption patterns without requiring direct observation of behind-the-meter behavior or device-level metering [...]<|reference_end|>
arxiv
@article{esteban-perez2024estimating, title={Estimating the Unobservable Components of Electricity Demand Response with Inverse Optimization}, author={Adrian Esteban-Perez, Derek Bunn, Yashar Ghiassi-Farrokhfal}, journal={arXiv preprint arXiv:2410.02774}, year={2024}, archivePrefix={arXiv}, eprint={2410.02774}, primaryClass={eess.SP cs.CE cs.LG math.OC stat.AP stat.ME} }
esteban-perez2024estimating
arxiv-665231
2410.02775
A Deep Learning Approach for User-Centric Clustering in Cell-Free Massive MIMO Systems
<|reference_start|>A Deep Learning Approach for User-Centric Clustering in Cell-Free Massive MIMO Systems: Contrary to conventional massive MIMO cellular configurations plagued by inter-cell interference, cell-free massive MIMO systems distribute network resources across the coverage area, enabling users to connect with multiple access points (APs) and boosting both system capacity and fairness across users. In such systems, one critical functionality is the association between APs and users: determining the optimal association is indeed a combinatorial problem of prohibitive complexity. In this paper, a solution based on deep learning is thus proposed to solve the user clustering problem aimed at maximizing the sum spectral efficiency while controlling the number of active connections. The proposed solution can scale effectively with the number of users, leveraging long short-term memory cells to operate without the need for retraining. Numerical results show the effectiveness of the proposed solution, even in the presence of imperfect channel state information due to pilot contamination.<|reference_end|>
arxiv
@article{di gennaro2024a, title={A Deep Learning Approach for User-Centric Clustering in Cell-Free Massive MIMO Systems}, author={Giovanni Di Gennaro and Amedeo Buonanno and Gianmarco Romano and Stefano Buzzi and Francesco A. N. Palmieri}, journal={arXiv preprint arXiv:2410.02775}, year={2024}, archivePrefix={arXiv}, eprint={2410.02775}, primaryClass={eess.SP cs.IT cs.LG math.IT} }
di gennaro2024a
arxiv-665232
2410.02776
Bypassing the Popularity Bias: Repurposing Models for Better Long-Tail Recommendation
<|reference_start|>Bypassing the Popularity Bias: Repurposing Models for Better Long-Tail Recommendation: Recommender systems play a crucial role in shaping information we encounter online, whether on social media or when using content platforms, thereby influencing our beliefs, choices, and behaviours. Many recent works address the issue of fairness in recommender systems, typically focusing on topics like ensuring equal access to information and opportunities for all individual users or user groups, promoting diverse content to avoid filter bubbles and echo chambers, enhancing transparency and explainability, and adhering to ethical and sustainable practices. In this work, we aim to achieve a more equitable distribution of exposure among publishers on an online content platform, with a particular focus on those who produce high quality, long-tail content that may be unfairly disadvantaged. We propose a novel approach of repurposing existing components of an industrial recommender system to deliver valuable exposure to underrepresented publishers while maintaining high recommendation quality. To demonstrate the efficiency of our proposal, we conduct large-scale online AB experiments, report results indicating desired outcomes and share several insights from long-term application of the approach in the production setting.<|reference_end|>
arxiv
@article{blahut2024bypassing, title={Bypassing the Popularity Bias: Repurposing Models for Better Long-Tail Recommendation}, author={Václav Blahut, Karel Koupil}, journal={arXiv preprint arXiv:2410.02776}, year={2024}, archivePrefix={arXiv}, eprint={2410.02776}, primaryClass={cs.IR cs.LG} }
blahut2024bypassing
arxiv-665233
2410.02777
OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness
<|reference_start|>OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness: Though there is much interest in fair AI systems, the problem of fairness noncompliance -- which concerns whether fair models are used in practice -- has received lesser attention. Zero-Knowledge Proofs of Fairness (ZKPoF) address fairness noncompliance by allowing a service provider to verify to external parties that their model serves diverse demographics equitably, with guaranteed confidentiality over proprietary model parameters and data. They have great potential for building public trust and effective AI regulation, but no previous techniques for ZKPoF are fit for real-world deployment. We present OATH, the first ZKPoF framework that is (i) deployably efficient with client-facing communication comparable to in-the-clear ML as a Service query answering, and an offline audit phase that verifies an asymptotically constant quantity of answered queries, (ii) deployably flexible with modularity for any score-based classifier given a zero-knowledge proof of correct inference, (iii) deployably secure with an end-to-end security model that guarantees confidentiality and fairness across training, inference, and audits. We show that OATH obtains strong robustness against malicious adversaries at concretely efficient parameter settings. Notably, OATH provides a 1343x improvement to runtime over previous work for neural network ZKPoF, and scales up to much larger models -- even DNNs with tens of millions of parameters.<|reference_end|>
arxiv
@article{franzese2024oath:, title={OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness}, author={Olive Franzese, Ali Shahin Shamsabadi, Hamed Haddadi}, journal={arXiv preprint arXiv:2410.02777}, year={2024}, archivePrefix={arXiv}, eprint={2410.02777}, primaryClass={cs.CY cs.LG} }
franzese2024oath:
arxiv-665234
2410.02778
Physical Layer Mutual Authentication in RIS-Aided Monostatic Backscatter Communications: A Dual-Edged Analysis
<|reference_start|>Physical Layer Mutual Authentication in RIS-Aided Monostatic Backscatter Communications: A Dual-Edged Analysis: Backscatter communication (BC) emerges as a pivotal technology for ultra-low-power energy harvesting applications, but its practical deployment is often hampered by notable security vulnerabilities. Physical layer authentication (PLA) offers a promising solution for securing BC by leveraging the unique characteristics of the communication medium. However, existing PLA approaches often fall short due to limited signal strength in practical BC scenarios and performance deterioration with increasing distance between the tag and the reader. Moreover, achieving mutual authentication has been largely neglected in current PLA schemes, given the passive nature of tags and their limited computational and energy resources. This paper introduces a reconfigurable intelligent surfaces (RIS)-aided PLA scheme based on the physical features of received signals at legitimate endpoints through cascade links in monostatic BC (MBC) systems. By considering a RIS operating in its near-optimal conditions between a tag and a reader, the proposed PLA leverages the RIS-enhanced power delivery detected by the tag's energy detector and the optimized received signal strength (RSS) at the reader's signal processing unit, addressing the conventional challenges of mutual authentication, low PLA performance, and limited secure coverage area inherent in BC systems. Through theoretical analysis and extensive simulations, we show that as long as the RIS is controlled by a trusted party in the network, it can boost authentication performance across different system settings and strengthen the security features. Additionally, we analyze the potential security threats that arise when the RIS is compromised by an adversary, assessing their impact on the system's PLA performance and secrecy capacity to provide a comprehensive understanding of the security implications for RIS-aided MBC under such circumstances.<|reference_end|>
arxiv
@article{kaveh2024physical, title={Physical Layer Mutual Authentication in RIS-Aided Monostatic Backscatter Communications: A Dual-Edged Analysis}, author={Masoud Kaveh, Farshad Rostami Ghadi, Yishan Yang, Zheng Yan, and Riku Jantti}, journal={arXiv preprint arXiv:2410.02778}, year={2024}, archivePrefix={arXiv}, eprint={2410.02778}, primaryClass={cs.IT math.IT} }
kaveh2024physical
arxiv-665235
2410.02779
Learning variant product relationship and variation attributes from e-commerce website structures
<|reference_start|>Learning variant product relationship and variation attributes from e-commerce website structures: We introduce VARM, a variant relationship matcher strategy, to identify pairs of variant products in e-commerce catalogs. Traditional definitions of entity resolution are concerned with whether product mentions refer to the same underlying product. However, this fails to capture product relationships that are critical for e-commerce applications, such as having similar, but not identical, products listed on the same webpage or sharing reviews. Here, we formulate a new type of entity resolution in variant product relationships to capture these similar e-commerce product links. In contrast with the traditional definition, the new definition requires both identifying whether two products are variant matches of each other and which attributes vary between them. To satisfy these two requirements, we developed a strategy that leverages the strengths of both encoding and generative AI models. First, we construct a dataset that captures webpage product links, and therefore variant product relationships, to train an encoding LLM to predict variant matches for any given pair of products. Second, we use RAG-prompted generative LLMs to extract variation and common attributes amongst groups of variant products. To validate our strategy, we evaluated model performance using real data from one of the world's leading e-commerce retailers. The results showed that our strategy outperforms alternative solutions and paves the way to exploiting this new type of product relationship.<|reference_end|>
arxiv
@article{herrero-vidal2024learning, title={Learning variant product relationship and variation attributes from e-commerce website structures}, author={Pedro Herrero-Vidal, You-Lin Chen, Cris Liu, Prithviraj Sen and Lichao Wang}, journal={arXiv preprint arXiv:2410.02779}, year={2024}, archivePrefix={arXiv}, eprint={2410.02779}, primaryClass={cs.IR cs.AI cs.CL cs.LG} }
herrero-vidal2024learning
arxiv-665236
2410.02780
Guess What I Think: Streamlined EEG-to-Image Generation with Latent Diffusion Models
<|reference_start|>Guess What I Think: Streamlined EEG-to-Image Generation with Latent Diffusion Models: Generating images from brain waves is gaining increasing attention due to its potential to advance brain-computer interface (BCI) systems by understanding how brain signals encode visual cues. Most of the literature has focused on fMRI-to-Image tasks as fMRI is characterized by high spatial resolution. However, fMRI is an expensive neuroimaging modality and does not allow for real-time BCI. On the other hand, electroencephalography (EEG) is a low-cost, non-invasive, and portable neuroimaging technique, making it an attractive option for future real-time applications. Nevertheless, EEG presents inherent challenges due to its low spatial resolution and susceptibility to noise and artifacts, which makes generating images from EEG more difficult. In this paper, we address these problems with a streamlined framework based on the ControlNet adapter for conditioning a latent diffusion model (LDM) through EEG signals. We conduct experiments and ablation studies on popular benchmarks to demonstrate that the proposed method beats other state-of-the-art models. Unlike these methods, which often require extensive preprocessing, pretraining, different losses, and captioning models, our approach is efficient and straightforward, requiring only minimal preprocessing and a few components. Code will be available after publication.<|reference_end|>
arxiv
@article{lopez2024guess, title={Guess What I Think: Streamlined EEG-to-Image Generation with Latent Diffusion Models}, author={Eleonora Lopez, Luigi Sigillo, Federica Colonnese, Massimo Panella and Danilo Comminiello}, journal={arXiv preprint arXiv:2410.02780}, year={2024}, archivePrefix={arXiv}, eprint={2410.02780}, primaryClass={cs.CV cs.AI cs.LG} }
lopez2024guess
arxiv-665237
2410.02781
Enhancing ICT Literacy and Sustainable Practices in the Hospitality Industry: Insights from Mnquma Municipality
<|reference_start|>Enhancing ICT Literacy and Sustainable Practices in the Hospitality Industry: Insights from Mnquma Municipality: The leisure and hospitality industry is a significant driver of the global economy, with the adoption of new technologies transforming service delivery and customer experience. Despite the transformative potential and benefits associated with adopting technology, there remains a low level of adoption in rural areas, particularly among small-scale players. This study explores the role of ICT literacy and sustainable practices in influencing ICT adoption among small-scale players in the hospitality industry in rural Eastern Cape Province, South Africa, specifically focusing on Mnquma Municipality. The study employs a non-probability, purposive sampling technique, utilising a case study research design within a positivist paradigm. A sample of 21 small-scale players (BnBs, guest houses, and non-serviced accommodations) was selected, and data were collected through face-to-face interviews and a questionnaire featuring closed-ended questions. The data were analysed using descriptive statistics and the Kruskal-Wallis H Test to examine differences in ICT usage levels. The test yielded a Kruskal-Wallis H of 2.57 with a p-value of 0.277. The findings reveal that businesses with more educated workforces demonstrate higher ICT adoption levels. Moreover, key factors such as ICT literacy, awareness of sustainable practices, access to ICT resources, and contextual challenges significantly impact ICT adoption. Recommendations include integrating ICT literacy and sustainability education into training programs and developing targeted policies and support mechanisms to enhance ICT integration.<|reference_end|>
arxiv
@article{lukose2024enhancing, title={Enhancing ICT Literacy and Sustainable Practices in the Hospitality Industry: Insights from Mnquma Municipality}, author={Jose Lukose and Abayomi Agbeyangi}, journal={arXiv preprint arXiv:2410.02781}, year={2024}, archivePrefix={arXiv}, eprint={2410.02781}, primaryClass={cs.CY} }
lukose2024enhancing
arxiv-665238
2410.02782
High School Summer Camps Help Democratize Coding, Data Science, and Deep Learning
<|reference_start|>High School Summer Camps Help Democratize Coding, Data Science, and Deep Learning: This study documents the impact of a summer camp series that introduces high school students to coding, data science, and deep learning. Hosted on-campus, the camps provide an immersive university experience, fostering technical skills, collaboration, and inspiration through interactions with mentors and faculty. Campers' experiences are documented through interviews and pre- and post-camp surveys. Key lessons include the importance of personalized feedback, diverse mentorship, and structured collaboration. Survey data reveals increased confidence in coding, with 68.6% expressing interest in AI and data science careers. The camps also play a crucial role in addressing disparities in STEM education for underrepresented minorities. These findings underscore the value of such initiatives in shaping future technology education and promoting diversity in STEM fields.<|reference_end|>
arxiv
@article{gonzalez2024high, title={High School Summer Camps Help Democratize Coding, Data Science, and Deep Learning}, author={Rosemarie Santa Gonzalez, Tsion Fitsum, Michael Butros}, journal={arXiv preprint arXiv:2410.02782}, year={2024}, archivePrefix={arXiv}, eprint={2410.02782}, primaryClass={cs.CY} }
gonzalez2024high
arxiv-665239
2410.02783
Enhancing Mental Health Support through Human-AI Collaboration: Toward Secure and Empathetic AI-enabled chatbots
<|reference_start|>Enhancing Mental Health Support through Human-AI Collaboration: Toward Secure and Empathetic AI-enabled chatbots: Access to mental health support remains limited, particularly in marginalized communities where structural and cultural barriers hinder timely care. This paper explores the potential of AI-enabled chatbots as a scalable solution, focusing on advanced large language models (LLMs), namely GPT v4, Mistral Large, and LLama V3.1, and assessing their ability to deliver empathetic, meaningful responses in mental health contexts. While these models show promise in generating structured responses, they fall short in replicating the emotional depth and adaptability of human therapists. Additionally, trustworthiness, bias, and privacy challenges persist due to unreliable datasets and limited collaboration with mental health professionals. To address these limitations, we propose a federated learning framework that ensures data privacy, reduces bias, and integrates continuous validation from clinicians to enhance response quality. This approach aims to develop a secure, evidence-based AI chatbot capable of offering trustworthy, empathetic, and bias-reduced mental health support, advancing AI's role in digital mental health care.<|reference_end|>
arxiv
@article{almakinah2024enhancing, title={Enhancing Mental Health Support through Human-AI Collaboration: Toward Secure and Empathetic AI-enabled chatbots}, author={Rawan AlMakinah, Andrea Norcini-Pala, Lindsey Disney, M. Abdullah Canbaz}, journal={arXiv preprint arXiv:2410.02783}, year={2024}, archivePrefix={arXiv}, eprint={2410.02783}, primaryClass={cs.CY cs.AI cs.HC} }
almakinah2024enhancing
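The chatbot entry above rests on federated learning for privacy. A minimal sketch of the federated-averaging step it implies, assuming PyTorch and toy linear "heads" standing in for local models; the client count and model shape are placeholders.

```python
# Minimal sketch of federated averaging: clients train locally on private
# conversations and only model weights are aggregated, so raw mental-health
# data never leaves the site. Model size and client count are assumptions.
import torch
import torch.nn as nn

def fedavg(client_models: list) -> dict:
    # average parameters across clients, weight-by-weight
    states = [m.state_dict() for m in client_models]
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

clients = [nn.Linear(16, 2) for _ in range(5)]  # stand-ins for local model heads
server = nn.Linear(16, 2)
server.load_state_dict(fedavg(clients))          # one aggregation round
```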
arxiv-665240
2410.02784
A Müntz-collocation spectral method for weakly singular Volterra delay-integro-differential equations
<|reference_start|>A Müntz-collocation spectral method for weakly singular Volterra delay-integro-differential equations: A Müntz spectral collocation method is implemented for solving weakly singular Volterra integro-differential equations (VDIEs) with proportional delays. After constructing the numerical scheme to seek an approximate solution, we derive error estimates in weighted $L^2$ and $L^{\infty}$-norms. A rigorous proof reveals that the proposed method can handle the weak singularity of the exact solution at the initial point $t=0$, with the numerical errors decaying exponentially in certain cases. Moreover, several numerical examples illustrate our convergence analysis.<|reference_end|>
arxiv
@article{zhao2024a, title={A M\"untz-collocation spectral method for weakly singular Volterra delay-integro-differential equations}, author={Borui Zhao}, journal={arXiv preprint arXiv:2410.02784}, year={2024}, archivePrefix={arXiv}, eprint={2410.02784}, primaryClass={math.NA cs.NA} }
zhao2024a
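For orientation on the Müntz-collocation entry above, a schematic form of a weakly singular Volterra delay-integro-differential equation with proportional delay; this generic template is my assumption, not the paper's exact problem statement.

```latex
% A schematic weakly singular Volterra integro-differential equation with
% proportional delay qt (0 < q < 1); the kernel exponent mu in (0,1) is what
% creates the weak singularity at t = 0. This generic form is only for
% orientation and need not match the paper's exact model.
\[
  y'(t) = a(t)\,y(t) + b(t)\,y(qt)
        + \int_0^t (t-s)^{-\mu}\,K(t,s)\,y(s)\,\mathrm{d}s,
  \qquad y(0) = y_0,\quad 0 < \mu < 1 .
\]
```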
arxiv-665241
2410.02785
Dynamic Road Management in the Era of CAV
<|reference_start|>Dynamic Road Management in the Era of CAV: Traffic management and on-road safety have been a concern for the transportation authorities and the engineering communities for many years. Most of the implemented technologies for intelligent highways focus on safety measures and increased driver awareness, and expect a centralized management for the vehicular traffic flow. Leveraging recent advances in wireless communication, researchers have proposed solutions based on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication in order to detect traffic jams and better disseminate data from on-road and on-vehicle sensors. Moreover, the development of connected autonomous vehicles (CAVs) has motivated a paradigm shift in how traffic will be managed. Overall, these major technological advances have motivated the notion of dynamic traffic management (DTM), where smart road reconfiguration capabilities, e.g., dynamic lane reversal, adaptive traffic light timing, etc., will be exploited in real-time to improve traffic flow and adapt to unexpected incidents. This chapter discusses the challenges in realizing DTM and covers how CAVs have revolutionized traffic management. Moreover, we highlight the issues in handling human-driven vehicles while roads are transitioning to CAV-only traffic. Particularly, we articulate a new vision for inter-vehicle communication and assessment of road conditions, and promote a novel system for traffic management. Vehicle-to-on-road-sensor as well as inter-vehicle connectivity will be enabled through the use of handheld devices such as smartphones. This not only enables real-time data sharing but also expedites the adoption of DTM without awaiting the dominant presence of autonomous vehicles on the road. ...<|reference_end|>
arxiv
@article{younis2024dynamic, title={Dynamic Road Management in the Era of CAV}, author={Mohamed Younis, Sookyoung Lee, Wassila Lalouani, Dayuan Tan, Sanket Gupte}, journal={Connected and Autonomous Vehicles in Smart Cities. CRC Press, 2020. 133-172}, year={2024}, doi={10.1201/9780429329401-5}, archivePrefix={arXiv}, eprint={2410.02785}, primaryClass={cs.NI} }
younis2024dynamic
arxiv-665242
2410.02786
Robust Symmetry Detection via Riemannian Langevin Dynamics
<|reference_start|>Robust Symmetry Detection via Riemannian Langevin Dynamics: Symmetries are ubiquitous across all kinds of objects, whether in nature or in man-made creations. While these symmetries may seem intuitive to the human eye, detecting them with a machine is nontrivial due to the vast search space. Classical geometry-based methods work by aggregating "votes" for each symmetry but struggle with noise. In contrast, learning-based methods may be more robust to noise, but often overlook partial symmetries due to the scarcity of annotated data. In this work, we address this challenge by proposing a novel symmetry detection method that marries classical symmetry detection techniques with recent advances in generative modeling. Specifically, we apply Langevin dynamics to a redefined symmetry space to enhance robustness against noise. We provide empirical results on a variety of shapes that suggest our method is not only robust to noise, but can also identify both partial and global symmetries. Moreover, we demonstrate the utility of our detected symmetries in various downstream tasks, such as compression and symmetrization of noisy shapes.<|reference_end|>
arxiv
@article{je2024robust, title={Robust Symmetry Detection via Riemannian Langevin Dynamics}, author={Jihyeon Je, Jiayi Liu, Guandao Yang, Boyang Deng, Shengqu Cai, Gordon Wetzstein, Or Litany, Leonidas Guibas}, journal={arXiv preprint arXiv:2410.02786}, year={2024}, doi={10.1145/3680528.3687682}, archivePrefix={arXiv}, eprint={2410.02786}, primaryClass={cs.CV cs.AI cs.GR} }
je2024robust
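The symmetry-detection entry above applies Langevin dynamics to a space of symmetry votes. A minimal sketch of the underlying unadjusted Langevin update, assuming NumPy and a toy Gaussian-mixture "vote density" in place of the paper's redefined symmetry space.

```python
# Minimal sketch of the Langevin-dynamics idea the paper builds on: iterates
# drift toward high-density regions of a symmetry "vote" distribution while
# injected noise keeps the search robust. The Gaussian-mixture density, step
# size, and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
modes = np.array([[0.0, 0.0], [2.0, 2.0]])  # hypothetical symmetry "votes"

def score(x, sigma=0.3):
    # gradient of the log of a Gaussian mixture centered at the vote modes
    diffs = modes - x                                   # (K, 2)
    w = np.exp(-0.5 * (diffs ** 2).sum(1) / sigma**2)   # unnormalized weights
    return (w[:, None] * diffs).sum(0) / (w.sum() * sigma**2)

x = rng.normal(size=2)   # random initialization
eps = 1e-2               # step size (assumed)
for _ in range(2000):
    # unadjusted Langevin update: drift along the score plus Gaussian noise
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.normal(size=2)
print(x)                 # ends hovering near one of the vote modes
```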
arxiv-665243
2410.02787
Navigation with VLM framework: Go to Any Language
<|reference_start|>Navigation with VLM framework: Go to Any Language: Navigating towards fully open language goals and exploring open scenes in a manner akin to human exploration have always posed significant challenges. Recently, Vision Large Language Models (VLMs) have demonstrated remarkable capabilities in reasoning with both language and visual data. While many works have focused on leveraging VLMs for navigation in open scenes and with open vocabularies, these efforts often fall short of fully utilizing the potential of VLMs or require substantial computational resources. We introduce Navigation with VLM (NavVLM), a framework that harnesses equipment-level VLMs to enable agents to navigate towards any language goal, specific or non-specific, in open scenes, emulating human exploration behaviors without any prior training. The agent leverages the VLM as its cognitive core to perceive environmental information based on any language goal and constantly provides exploration guidance during navigation until it reaches the target location or area. Our framework not only achieves state-of-the-art performance in Success Rate (SR) and Success weighted by Path Length (SPL) in traditional specific-goal settings but also extends the navigation capabilities to any open-set language goal. We evaluate NavVLM in richly detailed environments from the Matterport 3D (MP3D), Habitat Matterport 3D (HM3D), and Gibson datasets within the Habitat simulator. With the power of VLMs, navigation has entered a new era.<|reference_end|>
arxiv
@article{yin2024navigation, title={Navigation with VLM framework: Go to Any Language}, author={Zecheng Yin and Chonghao Cheng and Lizhen}, journal={arXiv preprint arXiv:2410.02787}, year={2024}, archivePrefix={arXiv}, eprint={2410.02787}, primaryClass={cs.CV cs.AI cs.CL} }
yin2024navigation
arxiv-665244
2410.02788
RoMo: A Robust Solver for Full-body Unlabeled Optical Motion Capture
<|reference_start|>RoMo: A Robust Solver for Full-body Unlabeled Optical Motion Capture: Optical motion capture (MoCap) is the "gold standard" for accurately capturing full-body motions. To make use of raw MoCap point data, the system labels the points with corresponding body part locations and solves the full-body motions. However, MoCap data often contains mislabeling, occlusion and positional errors, requiring extensive manual correction. To alleviate this burden, we introduce RoMo, a learning-based framework for robustly labeling and solving raw optical motion capture data. In the labeling stage, RoMo employs a divide-and-conquer strategy to break down the complex full-body labeling challenge into manageable subtasks: alignment, full-body segmentation and part-specific labeling. To utilize the temporal continuity of markers, RoMo generates marker tracklets using a K-partite graph-based clustering algorithm, where markers serve as nodes, and edges are formed based on positional and feature similarities. For motion solving, to prevent error accumulation along the kinematic chain, we introduce a hybrid inverse kinematic solver that utilizes joint positions as intermediate representations and adjusts the template skeleton to match estimated joint positions. We demonstrate that RoMo achieves high labeling and solving accuracy across multiple metrics and various datasets. Extensive comparisons show that our method outperforms state-of-the-art research methods. On a real dataset, RoMo improves the F1 score of hand labeling from 0.94 to 0.98, and reduces joint position error of body motion solving by 25%. Furthermore, RoMo can be applied in scenarios where commercial systems are inadequate. The code and data for RoMo are available at https://github.com/non-void/RoMo.<|reference_end|>
arxiv
@article{pan2024romo:, title={RoMo: A Robust Solver for Full-body Unlabeled Optical Motion Capture}, author={Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Zijiao Zeng, Qilong Kou, He Wang, Xiaogang Jin}, journal={arXiv preprint arXiv:2410.02788}, year={2024}, doi={10.1145/3680528.3687615}, archivePrefix={arXiv}, eprint={2410.02788}, primaryClass={cs.CV cs.GR} }
pan2024romo:
arxiv-665245
2410.02789
Logic-Free Building Automation: Learning the Control of Room Facilities with Wall Switches and Ceiling Camera
<|reference_start|>Logic-Free Building Automation: Learning the Control of Room Facilities with Wall Switches and Ceiling Camera: Artificial intelligence enables smarter control in building automation through its ability to learn users' preferences on facility control. Reinforcement learning (RL) was one of the approaches to this, but it has many challenges in real-world implementations. We propose a new architecture for logic-free building automation (LFBA) that leverages deep learning (DL) to control room facilities without predefined logic. Our approach differs from RL in that it uses wall switches as supervised signals and a ceiling camera to monitor the environment, allowing the DL model to learn users' preferred controls directly from the scenes and switch states. The LFBA system is tested on our testbed under various conditions and user activities. The results demonstrate its efficacy, achieving 93%-98% control accuracy with VGG, outperforming other DL models such as Vision Transformer and ResNet. This indicates that LFBA can achieve smarter and more user-friendly control by learning from the observable scenes and user interactions.<|reference_end|>
arxiv
@article{ochiai2024logic-free, title={Logic-Free Building Automation: Learning the Control of Room Facilities with Wall Switches and Ceiling Camera}, author={Hideya Ochiai, Kohki Hashimoto, Takuya Sakamoto, Seiya Watanabe, Ryosuke Hara, Ryo Yagi, Yuji Aizono, Hiroshi Esaki}, journal={arXiv preprint arXiv:2410.02789}, year={2024}, archivePrefix={arXiv}, eprint={2410.02789}, primaryClass={cs.CV cs.AI cs.HC cs.RO} }
ochiai2024logic-free
arxiv-665246
2410.02790
Raising the Bar(ometer): Identifying a User's Stair and Lift Usage Through Wearable Sensor Data Analysis
<|reference_start|>Raising the Bar(ometer): Identifying a User's Stair and Lift Usage Through Wearable Sensor Data Analysis: Many users are confronted multiple times daily with the choice of whether to take the stairs or the elevator. Whereas taking the stairs could be beneficial for cardiovascular health and wellness, taking the elevator might be more convenient, but it also consumes energy. By precisely tracking and boosting users' stair and elevator usage through their wearables, users might gain health insights and motivation, encouraging a healthy lifestyle and lowering the risk of sedentary-related health problems. This research describes a new exploratory dataset to examine the patterns and behaviors related to using stairs and lifts. We collected data from 20 participants while climbing and descending stairs and taking a lift in a variety of scenarios. The aim is to provide insights and demonstrate the practicality of using wearable sensor data for such a scenario. Our collected dataset was used to train and test a Random Forest machine learning model, and the results show that our method is highly accurate at classifying stair and lift operations, with an accuracy of 87.61% and a multi-class weighted F1-score of 87.56% over 8-second time windows. Furthermore, we investigate the effect of various types of sensors and data attributes on the model's performance. Our findings show that combining inertial and pressure sensors yields a viable solution for real-time activity detection.<|reference_end|>
arxiv
@article{karande2024raising, title={Raising the Bar(ometer): Identifying a User's Stair and Lift Usage Through Wearable Sensor Data Analysis}, author={Hrishikesh Balkrishna Karande, Ravikiran Arasur Thippeswamy Shivalingappa, Abdelhafid Nassim Yaici, Iman Haghbin, Niravkumar Bavadiya, Robin Burchard and Kristof Van Laerhoven}, journal={arXiv preprint arXiv:2410.02790}, year={2024}, archivePrefix={arXiv}, eprint={2410.02790}, primaryClass={eess.SP cs.LG} }
karande2024raising
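The stair/lift entry above trains a Random Forest on 8-second windows of wearable data. A minimal sketch of that pipeline shape, assuming scikit-learn, synthetic accelerometer-plus-barometer windows at a guessed 25 Hz rate, and simple per-channel statistics as features.

```python
# Minimal sketch of the pipeline shape: per-window statistics from inertial
# and barometric-pressure streams feed a Random Forest classifier. The
# synthetic data, sampling rate, and feature set are placeholders for the
# real wearable recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, win = 25, 8 * 25                      # 25 Hz (assumed), 8-second windows
X_raw = rng.normal(size=(400, win, 4))    # 3-axis accel + barometer channel
y = rng.integers(0, 3, size=400)          # stairs up / stairs down / lift

# simple per-channel window features: mean, std, min, max
feats = np.concatenate(
    [X_raw.mean(1), X_raw.std(1), X_raw.min(1), X_raw.max(1)], axis=1)

Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))   # meaningless here: labels are random
```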
arxiv-665247
2410.02791
DifFaiRec: Generative Fair Recommender with Conditional Diffusion Model
<|reference_start|>DifFaiRec: Generative Fair Recommender with Conditional Diffusion Model: Although recommenders can ship items to users automatically based on the users' preferences, they often cause unfairness to groups or individuals. For instance, when users can be divided into two groups according to a sensitive social attribute and there is a significant difference in terms of activity between the two groups, the learned recommendation algorithm will result in a recommendation gap between the two groups, which causes group unfairness. In this work, we propose a novel recommendation algorithm named Diffusion-based Fair Recommender (DifFaiRec) to provide fair recommendations. DifFaiRec is built upon the conditional diffusion model and hence has a strong ability to learn the distribution of user preferences from their ratings on items and is able to generate diverse recommendations effectively. To guarantee fairness, we design a counterfactual module to reduce the model sensitivity to protected attributes and provide mathematical explanations. The experiments on benchmark datasets demonstrate the superiority of DifFaiRec over competitive baselines.<|reference_end|>
arxiv
@article{jiang2024diffairec:, title={DifFaiRec: Generative Fair Recommender with Conditional Diffusion Model}, author={Zhenhao Jiang, Jicong Fan}, journal={arXiv preprint arXiv:2410.02791}, year={2024}, archivePrefix={arXiv}, eprint={2410.02791}, primaryClass={cs.IR cs.AI cs.LG} }
jiang2024diffairec:
arxiv-665248
2410.02792
Travel Experience in Public Transport Dataset
<|reference_start|>Travel Experience in Public Transport Dataset: The transportation sector holds the potential to change the world towards a greener future if it aligns with increasing mobility needs. One solution is to make public transport an attractive alternative to individual transportation. Real-world data is needed to investigate reasons for and indicators of positive and negative travel experience. Here, we present a GPS-tagged dataset where participants wore an electrocardiogram and reported experience sampling that measured stress, satisfaction, events, and emotions while traveling by tram, train, and bus. An interactive experience map helps to visually explore the data. As benchmark analysis for future users of the dataset, we report significant stress hot spots and satisfaction cold spots during the participants' journeys. The reported events and emotions, especially in such hot and cold spots, can be analyzed to highlight points of positive and negative travel experience in an ecologically highly valid setting. Data on age and self-identified gender offers insights to differences between user groups. Overall, our dataset enables the combination of qualitative and quantitative methods to identify user's needs in public transportation.<|reference_end|>
arxiv
@article{bosch2024travel, title={Travel Experience in Public Transport Dataset}, author={Esther Bosch, Ricarda Luther, Klas Ihme}, journal={arXiv preprint arXiv:2410.02792}, year={2024}, archivePrefix={arXiv}, eprint={2410.02792}, primaryClass={cs.HC} }
bosch2024travel
arxiv-665249
2410.02793
Boundary Interpolation on Triangles via Neural Network Operators
<|reference_start|>Boundary Interpolation on Triangles via Neural Network Operators: The primary objective of this study is to develop novel interpolation operators that interpolate the boundary values of a function defined on a triangle. This is accomplished by constructing a new Generalized Boolean sum neural network operator $\mathcal{B}_{n_1, n_2, \xi }$ using a class of activation functions. Its interpolation properties are established, and the estimates for the error of approximation corresponding to the operator $\mathcal{B}_{n_1, n_2, \xi }$ are computed in terms of the mixed modulus of continuity. The advantage of our method is that it does not require training the network; instead, the weights and biases are adjusted through the number of hidden neurons. Numerical examples are illustrated to show the efficacy of these newly constructed operators. Further, with the help of MATLAB, comparative and graphical analysis is given to show the validity and efficiency of the results obtained for these operators.<|reference_end|>
arxiv
@article{bhat2024boundary, title={Boundary Interpolation on Triangles via Neural Network Operators}, author={Aaqib Ayoub Bhat and Asif Khan}, journal={arXiv preprint arXiv:2410.02793}, year={2024}, archivePrefix={arXiv}, eprint={2410.02793}, primaryClass={math.NA cs.NA math.FA} }
bhat2024boundary
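For the boundary-interpolation entry above, the classical Boolean sum template that "Generalized Boolean sum" operators build on; the paper's exact operator $\mathcal{B}_{n_1, n_2, \xi}$ would need its definitions, so this is only the standard construction.

```latex
% The classical Boolean sum of two operators P and Q, the template that
% Generalized Boolean sum neural network operators extend; this is the
% standard construction, not the paper's specific operator.
\[
  (P \oplus Q)f \;=\; Pf + Qf - PQf ,
\]
% so that if P interpolates f on part of the triangle's boundary and Q on
% the rest, the Boolean sum matches f on the whole boundary.
```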
arxiv-665250
2410.02795
TaCIE: Enhancing Instruction Comprehension in Large Language Models through Task-Centred Instruction Evolution
<|reference_start|>TaCIE: Enhancing Instruction Comprehension in Large Language Models through Task-Centred Instruction Evolution: Large Language Models (LLMs) require precise alignment with complex instructions to optimize their performance in real-world applications. As the demand for refined instruction tuning data increases, traditional methods that evolve simple seed instructions often struggle to effectively enhance complexity or manage difficulty scaling across various domains. Our innovative approach, Task-Centered Instruction Evolution (TaCIE), addresses these shortcomings by redefining instruction evolution from merely evolving seed instructions to a more dynamic and comprehensive combination of elements. TaCIE starts by deconstructing complex instructions into their fundamental components. It then generates and integrates new elements with the original ones, reassembling them into more sophisticated instructions that progressively increase in difficulty, diversity, and complexity. Applied across multiple domains, LLMs fine-tuned with these evolved instructions have substantially outperformed those tuned with conventional methods, marking a significant advancement in instruction-based model fine-tuning.<|reference_end|>
arxiv
@article{yang2024tacie:, title={TaCIE: Enhancing Instruction Comprehension in Large Language Models through Task-Centred Instruction Evolution}, author={Jiuding Yang, Shengyao Lu, Weidong Guo, Xiangyang Li, Kaitong Yang, Yu Xu and Di Niu}, journal={arXiv preprint arXiv:2410.02795}, year={2024}, archivePrefix={arXiv}, eprint={2410.02795}, primaryClass={cs.CY cs.AI cs.CL} }
yang2024tacie:
arxiv-665251
2410.02796
Toward Adaptive Tracking and Communication via an Airborne Maneuverable Bi-Static ISAC System
<|reference_start|>Toward Adaptive Tracking and Communication via an Airborne Maneuverable Bi-Static ISAC System: In this letter, we propose an airborne maneuverable bi-static integrated sensing and communication (ISAC) system where both the transmitter and receiver are unmanned aerial vehicles (UAVs). By timely forming a dynamic bi-static range based on the motion information of the target, such a system can provide adaptive two-dimensional tracking and communication services. Towards this end, a trajectory optimization problem for both the transmit and receive UAVs is formulated to achieve highly accurate motion state estimation by minimizing the time-variant Cramér-Rao bound, subject to a sufficient communication signal-to-noise ratio that keeps the communication channel prediction error in check. Then we develop an efficient approach based on the successive convex approximation technique and the S-procedure to address the problem. Numerical results demonstrate that our proposed airborne maneuverable bi-static ISAC system is able to obtain higher tracking accuracy compared with static or semi-dynamic ISAC systems.<|reference_end|>
arxiv
@article{wei2024toward, title={Toward Adaptive Tracking and Communication via an Airborne Maneuverable Bi-Static ISAC System}, author={Mingliang Wei, Ruoguang Li, Li Wang, Lianming Xu, and Zhu Han}, journal={arXiv preprint arXiv:2410.02796}, year={2024}, archivePrefix={arXiv}, eprint={2410.02796}, primaryClass={eess.SP cs.ET cs.IT cs.NI math.IT} }
wei2024toward
arxiv-665252
2410.02797
Constrained B-Spline Based Everett Map Construction for Modeling Static Hysteresis Behavior
<|reference_start|>Constrained B-Spline Based Everett Map Construction for Modeling Static Hysteresis Behavior: This work presents a simple and robust method to construct a B-spline based Everett map, for application in the Preisach model of hysteresis, to predict static hysteresis behavior. Its strength comes from the ability to directly capture the Everett map as a well-founded closed-form B-spline surface expression, while also eliminating model artifacts that plague Everett map based Preisach models. Contrary to other works, that applied numerical descriptions for the Everett map, the presented approach is of completely analytic nature. In this work the B-spline surface fitting procedure and the necessary set of constraints are explained. Furthermore, the B-spline based Everett map is validated by ensuring that model artifacts were properly eliminated. Additionally, the model was compared with four benchmark excitations. Namely, a degaussing signal, a set of first-order reversal curves, an arbitrary excitation with high-order reversal curves, and a PWM like signal. The model was able to reproduce all benchmarks with high accuracy.<|reference_end|>
arxiv
@article{daniels2024constrained, title={Constrained B-Spline Based Everett Map Construction for Modeling Static Hysteresis Behavior}, author={Bram Daniels (1), Reza Zeinali (1), Timo Overboom (2), Mitrofan Curti (1), Elena Lomonova (1) ((1) Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands, (2) Royal SMIT Transformers (SGB-SMIT Group), Nijmegen, The Netherlands)}, journal={arXiv preprint arXiv:2410.02797}, year={2024}, archivePrefix={arXiv}, eprint={2410.02797}, primaryClass={cs.CE cond-mat.mtrl-sci} }
daniels2024constrained
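The Everett-map entry above fits a closed-form B-spline surface. A minimal sketch with SciPy's bivariate spline fit, assuming toy Everett-like samples; the paper's monotonicity and boundary constraints are deliberately omitted here, so this only illustrates the closed-form surface representation.

```python
# Minimal sketch of fitting a smooth B-spline surface to Everett-map samples
# with SciPy; the constraints the paper imposes to remove model artifacts
# are NOT enforced here. The toy data and smoothing factor are assumptions.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

alpha, beta = np.meshgrid(np.linspace(-1, 1, 25), np.linspace(-1, 1, 25))
E = np.maximum(alpha - beta, 0.0) ** 2        # toy Everett-like data (assumed)

# cubic-by-cubic B-spline surface, lightly smoothed
tck = bisplrep(alpha.ravel(), beta.ravel(), E.ravel(), kx=3, ky=3, s=1e-4)
print(bisplev(0.5, -0.5, tck))                # evaluate the surface at a point
```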
arxiv-665253
2410.02799
A Data Envelopment Analysis Approach for Assessing Fairness in Resource Allocation: Application to Kidney Exchange Programs
<|reference_start|>A Data Envelopment Analysis Approach for Assessing Fairness in Resource Allocation: Application to Kidney Exchange Programs: Kidney exchange programs have significantly increased transplantation rates but raise pressing questions about fairness in organ allocation. We present a novel framework leveraging Data Envelopment Analysis (DEA) to evaluate multiple fairness criteria--Priority, Access, and Outcome--within a single model, capturing complexities that may be overlooked in single-metric analyses. Using data from the United Network for Organ Sharing, we analyze these criteria individually, measuring Priority fairness through waitlist durations, Access fairness through Kidney Donor Profile Index scores, and Outcome fairness through graft lifespan. We then apply our DEA model to demonstrate significant disparities in kidney allocation efficiency across ethnic groups. To quantify uncertainty, we employ conformal prediction within the DEA framework, yielding group-conditional prediction intervals with finite sample coverage guarantees. Our findings show notable differences in efficiency distributions between ethnic groups. Our study provides a rigorous framework for evaluating fairness in complex resource allocation systems, where resource scarcity and mutual compatibility constraints exist. All code for using the proposed method and reproducing results is available on GitHub.<|reference_end|>
arxiv
@article{kaazempur-mofrad2024a, title={A Data Envelopment Analysis Approach for Assessing Fairness in Resource Allocation: Application to Kidney Exchange Programs}, author={Ali Kaazempur-Mofrad and Xiaowu Dai}, journal={arXiv preprint arXiv:2410.02799}, year={2024}, archivePrefix={arXiv}, eprint={2410.02799}, primaryClass={cs.CY cs.LG stat.ME} }
kaazempur-mofrad2024a
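The kidney-exchange entry above scores each group with a DEA model. A minimal sketch of the standard input-oriented CCR DEA linear program, assuming SciPy and toy inputs/outputs in place of UNOS data.

```python
# Minimal sketch of an input-oriented CCR DEA model of the kind the paper
# builds on: each decision-making unit's efficiency is one small linear
# program. The toy inputs/outputs are placeholders, not UNOS data.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0]])        # (m inputs, n units), e.g. waitlist years
Y = np.array([[10.0, 12.0, 14.0]])     # (s outputs, n units), e.g. graft lifespan
m, n = X.shape
s, _ = Y.shape

def ccr_efficiency(o: int) -> float:
    # decision vector z = [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],              # X @ lam <= theta * x_o
                     [np.zeros((s, 1)), -Y]])      # Y @ lam >= y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]   # theta = 1 means the unit lies on the efficient frontier

print([round(ccr_efficiency(o), 3) for o in range(n)])
```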
arxiv-665254
2410.02800
Estimating Body Volume and Height Using 3D Data
<|reference_start|>Estimating Body Volume and Height Using 3D Data: Accurate body weight estimation is critical in emergency medicine for proper dosing of weight-based medications, yet direct measurement is often impractical in urgent situations. This paper presents a non-invasive method for estimating body weight by calculating total body volume and height using 3D imaging technology. A RealSense D415 camera is employed to capture high-resolution depth maps of the patient, from which 3D models are generated. The Convex Hull Algorithm is then applied to calculate the total body volume, with enhanced accuracy achieved by segmenting the point cloud data into multiple sections and summing their individual volumes. The height is derived from the 3D model by identifying the distance between key points on the body. This combined approach provides an accurate estimate of body weight, improving the reliability of medical interventions where precise weight data is unavailable. The proposed method demonstrates significant potential to enhance patient safety and treatment outcomes in emergency settings.<|reference_end|>
arxiv
@article{sonar2024estimating, title={Estimating Body Volume and Height Using 3D Data}, author={Vivek Ganesh Sonar, Muhammad Tanveer Jan, Mike Wells, Abhijit Pandya, Gabriela Engstrom, Richard Shih, Borko Furht}, journal={arXiv preprint arXiv:2410.02800}, year={2024}, archivePrefix={arXiv}, eprint={2410.02800}, primaryClass={cs.CV cs.AI} }
sonar2024estimating
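The body-volume entry above sums convex-hull volumes over point-cloud segments. A minimal sketch of that step, assuming SciPy and a synthetic cloud; the slab count and axis conventions are guesses.

```python
# Minimal sketch of the volume step described above: segment a body point
# cloud into horizontal slabs and sum per-slab convex-hull volumes, which
# tracks concave bodies better than one global hull. The synthetic cloud
# and slab count are assumptions for illustration.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.normal(size=(5000, 3)) * [0.2, 0.15, 0.9]   # rough body-like cloud

def segmented_volume(pts: np.ndarray, n_slices: int = 10) -> float:
    z_edges = np.linspace(pts[:, 2].min(), pts[:, 2].max(), n_slices + 1)
    total = 0.0
    for lo, hi in zip(z_edges[:-1], z_edges[1:]):
        slab = pts[(pts[:, 2] >= lo) & (pts[:, 2] <= hi)]
        if len(slab) >= 4:                    # ConvexHull needs >= 4 points in 3D
            total += ConvexHull(slab).volume
    return total

height = points[:, 2].max() - points[:, 2].min()   # key-point distance stand-in
print(f"volume ~ {segmented_volume(points):.3f} m^3, height ~ {height:.2f} m")
```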
arxiv-665255
2410.02801
Biases in gendered citation practices: an exploratory study and some reflections on the Matthew and Matilda effects
<|reference_start|>Biases in gendered citation practices: an exploratory study and some reflections on the Matthew and Matilda effects: Recent studies conducted in different scientific disciplines have concluded that researchers belonging to some socio-cultural groups (e.g., women, racialized people) are usually less cited than other researchers belonging to dominant groups. This is usually due to the presence of citation biases in reference lists. These citation biases towards researchers from some socio-cultural groups may inevitably cause unfairness and inaccuracy in the assessment of articles' impact. These citation biases may therefore translate into significant disparities in promotion, retention, grant funding, awards, collaborative opportunities, and publications. In this paper, we conduct the first study aiming at analyzing gendered citation practices in the software engineering (SE) literature. Our study allows reflecting on citation practices adopted in the SE field and serves as a starting point for more robust empirical studies on the analyzed topic. Our results show that some efforts still need to be done to achieve fairness in citation practices in the SE field. Such efforts may notably consist in the inclusion of citation diversity statements in manuscripts submitted for publication in SE journals and conferences.<|reference_end|>
arxiv
@article{tchilinguirova2024biases, title={Biases in gendered citation practices: an exploratory study and some reflections on the Matthew and Matilda effects}, author={Karolina Tchilinguirova, Alvine Boaye Belle, Gouled Mahamud}, journal={arXiv preprint arXiv:2410.02801}, year={2024}, archivePrefix={arXiv}, eprint={2410.02801}, primaryClass={cs.DL cs.SE} }
tchilinguirova2024biases
arxiv-665256
2410.02802
A Soft Robotic Exosuit For Knee Extension Using Hyper-Bending Actuators
<|reference_start|>A Soft Robotic Exosuit For Knee Extension Using Hyper-Bending Actuators: Movement disorders impact muscle strength and mobility, and despite therapeutic efforts, many people with movement disorders have challenges functioning independently. Soft wearable robots, or exosuits, offer a promising solution for continuous daily support, however, commercially viable devices are not widely available. Here, we introduce a design framework for lower limb exosuits centered on a soft pneumatically driven fabric-based actuator. Our design consists of a novel multi-material textile sleeve that incorporates braided mesh and knit-elastic materials to realize hyper-bending actuators. The actuators incorporate 3D-printed self-sealing end caps that are attached to a semi-rigid human-robot interface to secure them to the body. We will demonstrate the effectiveness of our exosuit in generating enough force to assist during sit-to-stand transitions.<|reference_end|>
arxiv
@article{liu2024a, title={A Soft Robotic Exosuit For Knee Extension Using Hyper-Bending Actuators}, author={Tuo Liu and Jonathan Realmuto}, journal={arXiv preprint arXiv:2410.02802}, year={2024}, archivePrefix={arXiv}, eprint={2410.02802}, primaryClass={cs.RO} }
liu2024a
arxiv-665257
2410.02803
On the Effect of Quantization on Extended Dynamic Mode Decomposition
<|reference_start|>On the Effect of Quantization on Extended Dynamic Mode Decomposition: Extended Dynamic Mode Decomposition (EDMD) is a widely used data-driven algorithm for estimating the Koopman Operator. EDMD extends Dynamic Mode Decomposition (DMD) by lifting the snapshot data using nonlinear dictionary functions before performing the estimation. This letter investigates how the estimation process is affected when the data is quantized. Specifically, we examine the fundamental connection between estimates of the operator obtained from unquantized data and those from quantized data via EDMD. Furthermore, using the law of large numbers, we demonstrate that, under a large data regime, the quantized estimate can be considered a regularized version of the unquantized estimate. We also explore the relationship between the two estimates in the finite data regime. We further analyze the effect of nonlinear lifting functions on this regularization due to quantization. The theory is validated through repeated numerical experiments conducted on two different dynamical systems.<|reference_end|>
arxiv
@article{maity2024on, title={On the Effect of Quantization on Extended Dynamic Mode Decomposition}, author={Dipankar Maity and Debdipta Goswami}, journal={arXiv preprint arXiv:2410.02803}, year={2024}, archivePrefix={arXiv}, eprint={2410.02803}, primaryClass={eess.SY cs.SY} }
maity2024on
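The quantization entry above studies EDMD estimates from quantized snapshots. A minimal sketch of EDMD with a monomial dictionary plus a uniform quantizer, assuming NumPy, logistic-map dynamics, and an arbitrary quantizer step.

```python
# Minimal sketch of EDMD on snapshot pairs, plus a crude uniform quantizer,
# to mirror the comparison the paper studies. The logistic-map dynamics,
# monomial dictionary, and quantizer resolution are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=2000)
y = 3.7 * x * (1 - x)                      # one-step dynamics (logistic map)

def lift(z):                               # monomial dictionary {1, z, z^2, z^3}
    return np.stack([np.ones_like(z), z, z**2, z**3], axis=1)

def edmd(xs, ys):
    Px, Py = lift(xs), lift(ys)            # (M, N) lifted snapshot matrices
    return np.linalg.pinv(Px) @ Py         # least-squares Koopman estimate

def quantize(z, step=1 / 32):              # uniform quantizer (assumed step)
    return np.round(z / step) * step

K_clean = edmd(x, y)
K_quant = edmd(quantize(x), quantize(y))
print("||K_clean - K_quant||_F =", np.linalg.norm(K_clean - K_quant))
```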
arxiv-665258
2410.02804
Leveraging Retrieval Augment Approach for Multimodal Emotion Recognition Under Missing Modalities
<|reference_start|>Leveraging Retrieval Augment Approach for Multimodal Emotion Recognition Under Missing Modalities: Multimodal emotion recognition (MER) utilizes complete multimodal information and robust multimodal joint representations to achieve high performance. However, the ideal condition of full modality integrity often does not hold in reality, and situations frequently arise in which some modalities are missing. For example, video, audio, or text data may be missing due to sensor failure or network bandwidth problems, which presents a great challenge to MER research. Traditional methods extract useful information from the complete modalities and reconstruct the missing modalities to learn robust multimodal joint representations. These methods have laid a solid foundation for research in this field and, to a certain extent, alleviated the difficulty of multimodal emotion recognition under missing modalities. However, relying solely on internal reconstruction and multimodal joint learning has its limitations, especially when the missing information is critical for emotion recognition. To address this challenge, we propose a novel framework, Retrieval Augment for Missing Modality Multimodal Emotion Recognition (RAMER), which introduces similar multimodal emotion data to enhance the performance of emotion recognition under missing modalities. By leveraging databases that contain related multimodal emotion data, we can retrieve similar multimodal emotion information to fill in the gaps left by missing modalities. Various experimental results demonstrate that our framework is superior to existing state-of-the-art approaches in missing modality MER tasks. Our whole project is publicly available at https://github.com/WooyoohL/Retrieval_Augment_MER.<|reference_end|>
arxiv
@article{fan2024leveraging, title={Leveraging Retrieval Augment Approach for Multimodal Emotion Recognition Under Missing Modalities}, author={Qi Fan, Hongyu Yuan, Haolin Zuo, Rui Liu and Guanglai Gao}, journal={arXiv preprint arXiv:2410.02804}, year={2024}, archivePrefix={arXiv}, eprint={2410.02804}, primaryClass={cs.CV cs.AI} }
fan2024leveraging
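The RAMER entry above retrieves similar samples to fill missing modalities. A minimal sketch of that retrieval step, assuming NumPy, random feature banks, and cosine similarity in place of the paper's learned representations.

```python
# Minimal sketch of the retrieval idea: when a modality is missing, look up
# the most similar stored sample using the modalities that are present and
# borrow its features for the gap. The random feature bank, cosine retrieval,
# and concatenation fusion are stand-ins for the paper's learned components.
import numpy as np

rng = np.random.default_rng(0)
bank_text = rng.normal(size=(100, 32))     # stored text features
bank_audio = rng.normal(size=(100, 32))    # stored audio features (aligned rows)

def retrieve_missing_audio(text_feat: np.ndarray) -> np.ndarray:
    # cosine similarity against the text side of the database
    sims = bank_text @ text_feat / (
        np.linalg.norm(bank_text, axis=1) * np.linalg.norm(text_feat) + 1e-9)
    return bank_audio[int(np.argmax(sims))]   # audio of the nearest neighbor

query_text = rng.normal(size=32)           # sample whose audio is missing
audio_fill = retrieve_missing_audio(query_text)
fused = np.concatenate([query_text, audio_fill])  # simple late fusion
print(fused.shape)                          # (64,)
```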
arxiv-665259
2410.02805
Trust-informed Decision-Making Through An Uncertainty-Aware Stacked Neural Networks Framework: Case Study in COVID-19 Classification
<|reference_start|>Trust-informed Decision-Making Through An Uncertainty-Aware Stacked Neural Networks Framework: Case Study in COVID-19 Classification: This study presents an uncertainty-aware stacked neural networks model for the reliable classification of COVID-19 from radiological images. The model addresses the critical gap in uncertainty-aware modeling by focusing on accurately identifying confidently correct predictions while alerting users to confidently incorrect and uncertain predictions, which can promote trust in automated systems. The architecture integrates uncertainty quantification methods, including Monte Carlo dropout and ensemble techniques, to enhance predictive reliability by assessing the certainty of diagnostic predictions. Within a two-tier model framework, the first-tier model generates initial predictions and associated uncertainties, which the second-tier model uses to produce a trust indicator alongside the diagnostic outcome. This dual-output model not only predicts COVID-19 cases but also provides a trust flag, indicating the reliability of each diagnosis and aiming to minimize the need for retesting and expert verification. The effectiveness of this approach is demonstrated through extensive experiments on the COVIDx CXR-4 dataset, showing a novel approach in identifying and handling confidently incorrect cases and uncertain cases, thus enhancing the trustworthiness of automated diagnostics in clinical settings.<|reference_end|>
arxiv
@article{gharoun2024trust-informed, title={Trust-informed Decision-Making Through An Uncertainty-Aware Stacked Neural Networks Framework: Case Study in COVID-19 Classification}, author={Hassan Gharoun, Mohammad Sadegh Khorshidi, Fang Chen, and Amir H. Gandomi}, journal={arXiv preprint arXiv:2410.02805}, year={2024}, archivePrefix={arXiv}, eprint={2410.02805}, primaryClass={eess.IV cs.AI cs.CV} }
gharoun2024trust-informed
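The COVID-19 entry above combines Monte Carlo dropout with ensembling. A minimal sketch of the MC-dropout half, assuming PyTorch; the tiny network and the 30 forward passes are illustrative.

```python
# Minimal sketch of Monte Carlo dropout, one of the uncertainty tools the
# framework combines: dropout stays active at inference and the spread of
# repeated predictions serves as the uncertainty estimate. Architecture and
# sample count are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 2))

def mc_dropout_predict(x: torch.Tensor, T: int = 30):
    net.train()                          # keep dropout stochastic at test time
    with torch.no_grad():
        probs = torch.stack([net(x).softmax(-1) for _ in range(T)])
    return probs.mean(0), probs.std(0)   # mean prediction and its spread

x = torch.randn(1, 64)                   # stand-in for image features
mean, std = mc_dropout_predict(x)
print("p(COVID) =", float(mean[0, 1]), "uncertainty =", float(std[0, 1]))
```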
arxiv-665260
2410.02806
Investigating the Impact of Randomness on Reproducibility in Computer Vision: A Study on Applications in Civil Engineering and Medicine
<|reference_start|>Investigating the Impact of Randomness on Reproducibility in Computer Vision: A Study on Applications in Civil Engineering and Medicine: Reproducibility is essential for scientific research. However, in computer vision, achieving consistent results is challenging due to various factors. One influential, yet often unrecognized, factor is CUDA-induced randomness. Despite CUDA's advantages for accelerating algorithm execution on GPUs, its behavior across multiple executions remains non-deterministic if not controlled. While reproducibility issues in ML are being researched, the implications of CUDA-induced randomness in applications have yet to be understood. Our investigation focuses on this randomness across one standard benchmark dataset and two real-world datasets in an isolated environment. Our results show that CUDA-induced randomness can account for differences of up to 4.77% in performance scores. We find that managing this variability for reproducibility may entail increased runtime or reduced performance, but that the disadvantages are not as significant as reported in previous studies.<|reference_end|>
arxiv
@article{eryılmaz2024investigating, title={Investigating the Impact of Randomness on Reproducibility in Computer Vision: A Study on Applications in Civil Engineering and Medicine}, author={Bahadır Eryılmaz, Osman Alperen Koraş, Jörg Schlötterer, Christin Seifert}, journal={arXiv preprint arXiv:2410.02806}, year={2024}, archivePrefix={arXiv}, eprint={2410.02806}, primaryClass={cs.CV cs.AI} }
eryılmaz2024investigating
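The reproducibility entry above measures CUDA-induced variance. A minimal sketch of the determinism controls typically toggled to pin it down in PyTorch; the seed is arbitrary and, as the paper notes, these flags can cost runtime.

```python
# Minimal sketch of the controls usually needed to pin down CUDA-induced
# nondeterminism in PyTorch runs; this is the kind of variability the study
# quantifies. The seed value is arbitrary.
import os
import random
import numpy as np
import torch

def make_deterministic(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # force deterministic cuDNN kernels (may cost runtime, as the paper notes)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # opt into deterministic implementations where they exist
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True, warn_only=True)

make_deterministic()
```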
arxiv-665261
2410.02807
AutoPETIII: The Tracer Frontier. What Frontier?
<|reference_start|>AutoPETIII: The Tracer Frontier. What Frontier?: For the last three years, the AutoPET competition has gathered the medical imaging community around a hot topic: lesion segmentation on Positron Emission Tomography (PET) scans. Each year a different aspect of the problem is presented; in 2024, the multiplicity of existing and used tracers was at the core of the challenge. Specifically, this year's edition aims to develop a fully automatic algorithm capable of performing lesion segmentation on a PET/CT scan without knowing the tracer, which can be either an FDG- or PSMA-based tracer. In this paper, we describe how we used the nnUNetv2 framework to train two sets of 6-fold ensembles of models to perform fully automatic PET/CT lesion segmentation, as well as a MIP-CNN to choose which set of models to use for segmentation.<|reference_end|>
arxiv
@article{mesbah2024autopetiii:, title={AutoPETIII: The Tracer Frontier. What Frontier?}, author={Zacharia Mesbah, Léo Mottay, Romain Modzelewski, Pierre Decazes, Sébastien Hapdey, Su Ruan, Sébastien Thureau}, journal={arXiv preprint arXiv:2410.02807}, year={2024}, archivePrefix={arXiv}, eprint={2410.02807}, primaryClass={eess.IV cs.AI cs.CV} }
mesbah2024autopetiii:
arxiv-665262
2410.02808
KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation
<|reference_start|>KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation: AI-based vascular segmentation is becoming increasingly common in enhancing the screening and treatment of ophthalmic diseases. Deep learning structures based on U-Net have achieved relatively good performance in vascular segmentation. However, small blood vessels and capillaries tend to be lost during segmentation when passed through the traditional U-Net downsampling module. To address this gap, this paper proposes a novel Kalman filter based Linear Deformable Diffusion (KLDD) model for retinal vessel segmentation. Our model employs a diffusion process that iteratively refines the segmentation, leveraging the flexible receptive fields of deformable convolutions in feature extraction modules to adapt to the detailed tubular vascular structures. More specifically, we first employ a feature extractor with linear deformable convolution to capture vascular structure information from the input images. To better optimize the coordinate positions of the deformable convolution, we employ the Kalman filter to enhance the perception of vascular structures in the linear deformable convolution. Subsequently, the features of the vascular structures extracted are utilized as a conditioning element within a diffusion model by the Cross-Attention Aggregation module (CAAM) and the Channel-wise Soft Attention module (CSAM). These aggregations are designed to enhance the diffusion model's capability to generate vascular structures. Experiments are conducted on retinal fundus image datasets (DRIVE, CHASE_DB1) as well as the 3mm and 6mm subsets of the OCTA-500 dataset, and the results show that the diffusion model proposed in this paper outperforms other methods.<|reference_end|>
arxiv
@article{zhao2024kldd:, title={KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation}, author={Zhihao Zhao, Yinzheng Zhao, Junjie Yang, Kai Huang, Nassir Navab, M. Ali Nasseri}, journal={arXiv preprint arXiv:2410.02808}, year={2024}, archivePrefix={arXiv}, eprint={2410.02808}, primaryClass={eess.IV cs.AI cs.CV} }
zhao2024kldd:
arxiv-665263
2410.02809
TREBLE: Fast Software Updates by Creating an Equilibrium in an Active Software Ecosystem of Globally Distributed Stakeholders
<|reference_start|>TREBLE: Fast Software Updates by Creating an Equilibrium in an Active Software Ecosystem of Globally Distributed Stakeholders: This paper presents our experience with TREBLE, a two-year initiative to build the modular base in Android, a Java-based mobile platform running on the Linux kernel. Our TREBLE architecture splits the hardware-independent core framework written in Java from the hardware-dependent vendor implementations (e.g., user-space device drivers, vendor native libraries, and kernel written in C/C++). Cross-layer communications between them are done via versioned, stable inter-process communication interfaces whose backward compatibility is tested using two API compliance suites. Based on this architecture, we repackage the key Android software components that suffered from crucial post-launch security bugs as separate images. That enables not only separate ownership but also independent updates of each image by interested ecosystem entities. We discuss our experience of delivering TREBLE architectural changes to silicon vendors and device makers using a yearly release model. Our experiments and industry rollouts support our hypothesis that giving more freedom to all ecosystem entities and creating an equilibrium are a transformation necessary to further scale the world's largest open ecosystem with over two billion active devices.<|reference_end|>
arxiv
@article{yim2024treble:, title={TREBLE: Fast Software Updates by Creating an Equilibrium in an Active Software Ecosystem of Globally Distributed Stakeholders}, author={Keun Soo Yim, Iliyan Malchev, Andrew Hsieh, Dave Burke}, journal={arXiv preprint arXiv:2410.02809}, year={2024}, archivePrefix={arXiv}, eprint={2410.02809}, primaryClass={cs.SE cs.CR} }
yim2024treble:
arxiv-665264
2410.02810
StateAct: State Tracking and Reasoning for Acting and Planning with Large Language Models
<|reference_start|>StateAct: State Tracking and Reasoning for Acting and Planning with Large Language Models: Planning and acting to solve `real' tasks using large language models (LLMs) in interactive environments has become a new frontier for AI methods. While recent advances allowed LLMs to interact with online tools, solve robotics tasks and much more, long-range reasoning tasks remain a problem for LLMs. Existing methods to address this issue are very resource-intensive and require additional data or human-crafted rules; instead, we propose a simple method based on few-shot in-context learning alone to enhance `chain-of-thought' with state-tracking for planning and acting with LLMs. We show that our method establishes the new state-of-the-art on Alfworld for in-context learning methods (\textbf{+14\%} over the previous best few-shot in-context learning method) and performs on par with methods that use additional training data and additional tools such as code execution. We also demonstrate that our enhanced `chain-of-states' allows the agent both to solve longer-horizon problems and to be more efficient in the number of steps required to solve a task. We show that our method works across a variety of LLMs, both API-based and open-source ones. Finally, we also conduct ablation studies and show that `chain-of-thoughts' helps state-tracking accuracy, while a JSON structure harms overall performance. We open-source our code and annotations at \url{https://github.com/ai-nikolai/StateAct}.<|reference_end|>
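To make the `chain-of-states' idea above concrete, a minimal sketch follows; the field names, example task, and prompt layout are illustrative assumptions of ours, not the paper's actual prompts (those are in the linked repository).

    # Hypothetical "chain-of-states" step format: each turn re-emits the
    # tracked state so the LLM need not infer it from a long history.
    STEP_TEMPLATE = (
        "state: location=kitchen; holding=nothing; goal=clean mug on desk\n"
        "thought: the mug is dirty, so I should rinse it at the sink first\n"
        "action: take mug 1 from countertop 2\n"
    )

    def build_prompt(few_shot_steps, state, observation):
        # few_shot_steps: list of strings shaped like STEP_TEMPLATE
        return ("\n".join(few_shot_steps)
                + f"\nstate: {state}\nobservation: {observation}\nthought:")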
arxiv
@article{rozanov2024stateact:, title={StateAct: State Tracking and Reasoning for Acting and Planning with Large Language Models}, author={Nikolai Rozanov and Marek Rei}, journal={arXiv preprint arXiv:2410.02810}, year={2024}, archivePrefix={arXiv}, eprint={2410.02810}, primaryClass={cs.AI cs.CL cs.LG} }
rozanov2024stateact:
arxiv-665265
2410.02811
SAC-KG: Exploiting Large Language Models as Skilled Automatic Constructors for Domain Knowledge Graphs
<|reference_start|>SAC-KG: Exploiting Large Language Models as Skilled Automatic Constructors for Domain Knowledge Graphs: Knowledge graphs (KGs) play a pivotal role in knowledge-intensive tasks across specialized domains, where the acquisition of precise and dependable knowledge is crucial. However, existing KG construction methods heavily rely on human intervention to attain qualified KGs, which severely hinders their practical applicability in real-world scenarios. To address this challenge, we propose a general KG construction framework, named SAC-KG, to exploit large language models (LLMs) as Skilled Automatic Constructors for domain Knowledge Graphs. SAC-KG effectively involves LLMs as domain experts to generate specialized and precise multi-level KGs. Specifically, SAC-KG consists of three components: Generator, Verifier, and Pruner. For a given entity, Generator produces its relations and tails from raw domain corpora, to construct a specialized single-level KG. Verifier and Pruner then work together to ensure precision by correcting generation errors and determining whether newly produced tails require further iteration for the next-level KG. Experiments demonstrate that SAC-KG automatically constructs a domain KG at the scale of over one million nodes and achieves a precision of 89.32%, leading to superior performance with over 20% increase in precision rate compared to existing state-of-the-art methods for the KG construction task.<|reference_end|>
arxiv
@article{chen2024sac-kg:, title={SAC-KG: Exploiting Large Language Models as Skilled Automatic Constructors for Domain Knowledge Graphs}, author={Hanzhu Chen, Xu Shen, Qitan Lv, Jie Wang, Xiaoqi Ni, Jieping Ye}, journal={arXiv preprint arXiv:2410.02811}, year={2024}, archivePrefix={arXiv}, eprint={2410.02811}, primaryClass={cs.AI cs.CL cs.LG} }
chen2024sac-kg:
arxiv-665266
2410.02812
Decision support system for photovoltaic fault detection avoiding meteorological conditions
<|reference_start|>Decision support system for photovoltaic fault detection avoiding meteorological conditions: A fundamental issue in the installation of photovoltaic solar power stations is the optimization of energy generation and fault detection, for which different techniques and methodologies have already been developed considering meteorological conditions. This implies the use of unstable and difficult-to-predict variables, which may undermine the plausibility of the proposed techniques and methodologies under particular conditions. In this line, our goal is to provide a decision support system for photovoltaic fault detection that avoids meteorological conditions. This paper develops a mathematical mechanism based on fuzzy sets in order to optimize the energy production in photovoltaic facilities, detecting anomalous behaviors in the energy generated by the facilities over time. Specifically, the incorrect and correct behaviors of the photovoltaic facilities have been modeled through the use of different membership mappings. From these mappings, a decision support system based on OWA operators reports the daily performance of each facility using natural language. Moreover, a state machine is also designed to determine the stage of each facility based on the stages and performances from previous days. The main advantage of the designed system is that it solves the problem of "constant loss of energy production" without considering meteorological conditions, while being more profitable. Moreover, the system is also scalable and portable, and complements previous works on energy production optimization. Finally, the proposed mechanism has been tested with real data, provided by Grupo Energ\'etico de Puerto Real S.A., an enterprise in charge of the management of six photovoltaic facilities in Puerto Real, C\'adiz, Spain, and good results have been obtained for fault detection.<|reference_end|>
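For background on the aggregation step: an OWA operator weights values by their rank after sorting, not by their source. A minimal sketch of the standard definition (the paper's particular weight vectors are not specified in the abstract and are not assumed here):

    import numpy as np

    def owa(values, weights):
        # Ordered Weighted Averaging: sort values in descending order,
        # then take the dot product with a weight vector (weights >= 0,
        # summing to 1); weights act on ranks, not on specific inputs.
        v = np.sort(np.asarray(values, dtype=float))[::-1]
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w, v))

    # Example: three daily performance scores of one facility.
    # Weights (1, 0, 0) pick the best day; (1/3, 1/3, 1/3) the average.
    print(owa([0.7, 0.95, 0.4], [0.5, 0.3, 0.2]))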
arxiv
@article{aragón2024decision, title={Decision support system for photovoltaic fault detection avoiding meteorological conditions}, author={Roberto G. Arag\'on, M. Eugenia Cornejo, Jes\'us Medina, Juan Moreno-Garc\'ia, Elo\'isa Ram\'irez-Poussa}, journal={International Journal of Information Technology & Decision Making, 21(3) (2022) 911-932}, year={2024}, doi={10.1142/S0219622022500080}, archivePrefix={arXiv}, eprint={2410.02812}, primaryClass={eess.SY cs.SY} }
aragón2024decision
arxiv-665267
2410.02813
Mathematical Considerations on Randomized Orthogonal Decomposition Method for Developing Twin Data Models
<|reference_start|>Mathematical Considerations on Randomized Orthgonal Decomposition Method for Developing Twin Data Models: This paper introduces the approach of Randomized Orthogonal Decomposition (ROD) for producing twin data models in order to overcome the drawbacks of existing reduced order modelling techniques. When compared to Fourier empirical decomposition, ROD provides orthonormal shape modes that maximize their projection on the data space, which is a significant benefit. A shock wave event described by the viscous Burgers equation model is used to illustrate and evaluate the novel method. The new twin data model is thoroughly evaluated using certain criteria of numerical accuracy and computational performance.<|reference_end|>
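The abstract does not spell out the ROD algorithm itself; methods in this family typically obtain orthonormal modes from a randomized sketch of the snapshot matrix. As orientation, a standard randomized range finder in the style of Halko et al., offered as a sketch of the common building block rather than the paper's exact procedure:

    import numpy as np

    def randomized_orthonormal_modes(A, r, oversample=10, seed=0):
        # Orthonormal basis whose span approximates the range of the
        # snapshot matrix A (n_space x n_time).
        rng = np.random.default_rng(seed)
        sketch = A @ rng.standard_normal((A.shape[1], r + oversample))
        Q, _ = np.linalg.qr(sketch)  # orthonormalize the sketched range
        return Q[:, :r]              # r orthonormal shape modes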
arxiv
@article{bistrian2024mathematical, title={Mathematical Considerations on Randomized Orthogonal Decomposition Method for Developing Twin Data Models}, author={Diana A. Bistrian}, journal={Transylvanian Journal of Mathematics and Mechanics, 14 (2), 105-115, 2022}, year={2024}, archivePrefix={arXiv}, eprint={2410.02813}, primaryClass={math.NA cs.NA} }
bistrian2024mathematical
arxiv-665268
2410.02814
Neural Networks in Numerical Analysis and Approximation Theory
<|reference_start|>Neural Networks in Numerical Analysis and Approximation Theory: In this Master's Thesis, we study the approximation capabilities of Neural Networks in the context of the numerical resolution of elliptic PDEs and Approximation Theory. First, in Chapter 1, we introduce the mathematical definition of Neural Networks and derive some basic estimates on their composition and parallelization. Then, in Chapter 2, we implement the Galerkin method using Neural Networks. In particular, we manage to build a Neural Network that approximates the inverse of positive-definite symmetric matrices, which allows us to obtain a Galerkin numerical solution of elliptic PDEs. Finally, in Chapter 3, we introduce the approximation space of Neural Networks, a space consisting of functions in $L^p$ that are approximated at a certain rate as the number of weights of the Neural Networks increases. We relate this space to the Besov space: the smoother a function is, the faster it can be approximated with Neural Networks as the number of weights increases.<|reference_end|>
arxiv
@article{romera2024neural, title={Neural Networks in Numerical Analysis and Approximation Theory}, author={Gonzalo Romera}, journal={arXiv preprint arXiv:2410.02814}, year={2024}, archivePrefix={arXiv}, eprint={2410.02814}, primaryClass={math.NA cs.NA} }
romera2024neural
arxiv-665269
2410.02815
Estimate of Koopman modes and eigenvalues with Kalman Filter
<|reference_start|>Estimate of Koopman modes and eigenvalues with Kalman Filter: Dynamic mode decomposition (DMD) is a data-driven method for extracting spatial-temporal coherent modes from complex systems and providing an equation-free architecture to model and predict systems. However, in practical applications, the accuracy of DMD can be limited in extracting dynamical features due to sensor noise in measurements. We develop an adaptive method to continually update dynamic modes and eigenvalues from noisy measurements arising from discrete systems. Our method is based on the Ensemble Kalman filter, owing to its capability of handling time-varying systems and nonlinear observables. Our method can be extended to non-autonomous dynamical systems, accurately recovering short-time eigenvalue-eigenvector pairs and observables. Theoretical analysis shows that the estimation is accurate in terms of long-term data misfit. We demonstrate the method on both autonomous and non-autonomous dynamical systems to show its effectiveness.<|reference_end|>
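For orientation, the exact-DMD baseline that such a filtered estimate refines looks as follows; the Ensemble Kalman filter update described in the abstract is the paper's contribution and is omitted here.

    import numpy as np

    def exact_dmd(X, Y, r):
        # Exact DMD from snapshot pairs with Y[:, k] = F(X[:, k]);
        # returns rank-r Koopman eigenvalue estimates and DMD modes.
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        # Low-rank representation of the linear operator A = Y V S^{-1} U*
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
        eigvals, W = np.linalg.eig(A_tilde)
        # Lift eigenvectors back to state space to obtain the modes
        modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
        return eigvals, modes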
arxiv
@article{liu2024estimate, title={Estimate of Koopman modes and eigenvalues with Kalman Filter}, author={Ningxin Liu, Shuigen Liu, Xin T. Tong, and Lijian Jiang}, journal={arXiv preprint arXiv:2410.02815}, year={2024}, archivePrefix={arXiv}, eprint={2410.02815}, primaryClass={eess.SY cs.SY math.PR} }
liu2024estimate
arxiv-665270
2410.02816
Bipolar fuzzy relation equations systems based on the product t-norm
<|reference_start|>Bipolar fuzzy relation equations systems based on the product t-norm: Bipolar fuzzy relation equations arise as a generalization of fuzzy relation equations that considers unknown variables together with their logical negations. The simultaneous occurrence of a variable and its negation can give very useful information in frameworks where human reasoning plays a key role. Hence, the resolution of systems of bipolar fuzzy relation equations is a research topic of great interest. This paper focuses on the study of bipolar fuzzy relation equations systems based on the max-product t-norm composition. Specifically, the solvability and the algebraic structure of the set of solutions of these bipolar equations systems will be studied, including the case in which such systems are composed of equations whose independent term is equal to zero. As a consequence, this paper complements the contribution carried out by the authors on the solvability of bipolar max-product fuzzy relation equations.<|reference_end|>
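For concreteness, a bipolar max-product fuzzy relation equation system is commonly written as follows, with each unknown $x_j$ appearing alongside its negation $1 - x_j$; this is a standard formulation in the literature, and the paper's exact notation may differ:

\[
\max_{j \in \{1, \dots, n\}} \max\bigl( a^{+}_{ij} \cdot x_j,\; a^{-}_{ij} \cdot (1 - x_j) \bigr) = b_i, \qquad i = 1, \dots, m,
\]

where $a^{+}_{ij}, a^{-}_{ij}, b_i, x_j \in [0,1]$ and the product t-norm serves as the composition.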
arxiv
@article{cornejo2024bipolar, title={Bipolar fuzzy relation equations systems based on the product t-norm}, author={M. Eugenia Cornejo, David Lobo, Jes\'us Medina}, journal={Mathematical Methods in the Applied Sciences 42(17) (2019) 5779-5793}, year={2024}, doi={10.1002/mma.5646}, archivePrefix={arXiv}, eprint={2410.02816}, primaryClass={cs.AI} }
cornejo2024bipolar
arxiv-665271
2410.02817
Neural Coordination and Capacity Control for Inventory Management
<|reference_start|>Neural Coordination and Capacity Control for Inventory Management: This paper addresses the capacitated periodic review inventory control problem, focusing on a retailer managing multiple products with limited shared resources, such as storage or inbound labor at a facility. Specifically, this paper is motivated by the questions of (1) what it means to backtest a capacity control mechanism, and (2) whether we can devise and backtest a capacity control mechanism that is compatible with recent advances in deep reinforcement learning for inventory management. First, because we only have a single historic sample path of Amazon's capacity limits, we propose a method that samples from a distribution of possible constraint paths covering a space of real-world scenarios. This novel approach allows for more robust and realistic testing of inventory management strategies. Second, we extend the exo-IDP (Exogenous Decision Process) formulation of Madeka et al. 2022 to capacitated periodic review inventory control problems and show that certain capacitated control problems are no harder than supervised learning. Third, we introduce a `neural coordinator', designed to produce forecasts of capacity prices, guiding the system to adhere to target constraints in place of a traditional model predictive controller. Finally, we apply a modified DirectBackprop algorithm for learning a deep RL buying policy and for training the neural coordinator. Our methodology is evaluated through large-scale backtests, demonstrating that RL buying policies with a neural coordinator outperform classic baselines both in terms of cumulative discounted reward and capacity adherence (we see improvements of up to 50% in some cases).<|reference_end|>
arxiv
@article{eisenach2024neural, title={Neural Coordination and Capacity Control for Inventory Management}, author={Carson Eisenach and Udaya Ghai and Dhruv Madeka and Kari Torkkola and Dean Foster and Sham Kakade}, journal={arXiv preprint arXiv:2410.02817}, year={2024}, archivePrefix={arXiv}, eprint={2410.02817}, primaryClass={eess.SY cs.LG cs.SY stat.ML} }
eisenach2024neural
arxiv-665272
2410.02819
Physics-Informed Graph-Mesh Networks for PDEs: A hybrid approach for complex problems
<|reference_start|>Physics-Informed Graph-Mesh Networks for PDEs: A hybrid approach for complex problems: The recent rise of deep learning has led to numerous applications, including solving partial differential equations using Physics-Informed Neural Networks. This approach has proven highly effective in several academic cases. However, their lack of physical invariances, coupled with other significant weaknesses, such as an inability to handle complex geometries or their lack of generalization capabilities, make them unable to compete with classical numerical solvers in industrial settings. In this work, a limitation regarding the use of automatic differentiation in the context of physics-informed learning is highlighted. A hybrid approach combining physics-informed graph neural networks with numerical kernels from finite elements is introduced. After studying the theoretical properties of our model, we apply it to complex geometries, in two and three dimensions. Our choices are supported by an ablation study, and we evaluate the generalisation capacity of the proposed approach.<|reference_end|>
arxiv
@article{chenaud2024physics-informed, title={Physics-Informed Graph-Mesh Networks for PDEs: A hybrid approach for complex problems}, author={Marien Chenaud, Fr\'ed\'eric Magoul\`es, Jos\'e Alves}, journal={arXiv preprint arXiv:2410.02819}, year={2024}, doi={10.1016/j.advengsoft.2024.103758}, archivePrefix={arXiv}, eprint={2410.02819}, primaryClass={math.NA cs.LG cs.NA} }
chenaud2024physics-informed
arxiv-665273
2410.02820
GPT's Judgements Under Uncertainty
<|reference_start|>GPT's Judgements Under Uncertainty: We investigate whether biases inherent in human cognition, such as loss aversion, framing effects, and the conjunction fallacy, manifest in how GPT-4o judges and makes decisions in probabilistic scenarios. By conducting 1350 experiments across nine cognitive biases and analyzing the responses for statistical versus heuristic reasoning, we demonstrate GPT-4o's contradictory approaches when responding to prompts with similar underlying probability notations. Our findings also reveal mixed performance, with the AI demonstrating both human-like heuristic errors and statistically sound decisions, even across identical iterations of the same prompt.<|reference_end|>
arxiv
@article{saeedi2024gpt's, title={GPT's Judgements Under Uncertainty}, author={Payam Saeedi and Mahsa Goodarzi}, journal={arXiv preprint arXiv:2410.02820}, year={2024}, archivePrefix={arXiv}, eprint={2410.02820}, primaryClass={cs.AI cs.CL cs.LG} }
saeedi2024gpt's
arxiv-665274
2410.02823
DANA: Domain-Aware Neurosymbolic Agents for Consistency and Accuracy
<|reference_start|>DANA: Domain-Aware Neurosymbolic Agents for Consistency and Accuracy: Large Language Models (LLMs) have shown remarkable capabilities, but their inherent probabilistic nature often leads to inconsistency and inaccuracy in complex problem-solving tasks. This paper introduces DANA (Domain-Aware Neurosymbolic Agent), an architecture that addresses these issues by integrating domain-specific knowledge with neurosymbolic approaches. We begin by analyzing current AI architectures, including AutoGPT, LangChain ReAct and OpenAI's ChatGPT, through a neurosymbolic lens, highlighting how their reliance on probabilistic inference contributes to inconsistent outputs. In response, DANA captures and applies domain expertise in both natural-language and symbolic forms, enabling more deterministic and reliable problem-solving behaviors. We implement a variant of DANA using Hierarchical Task Plans (HTPs) in the open-source OpenSSA framework. This implementation achieves over 90\% accuracy on the FinanceBench financial-analysis benchmark, significantly outperforming current LLM-based systems in both consistency and accuracy. Application of DANA in physical industries such as semiconductors shows that its flexible architecture for incorporating knowledge is effective in mitigating the probabilistic limitations of LLMs and has potential in tackling complex, real-world problems that require reliability and precision.<|reference_end|>
arxiv
@article{luong2024dana:, title={DANA: Domain-Aware Neurosymbolic Agents for Consistency and Accuracy}, author={Vinh Luong, Sang Dinh, Shruti Raghavan, William Nguyen, Zooey Nguyen, Quynh Le, Hung Vo, Kentaro Maegaito, Loc Nguyen, Thao Nguyen, Anh Hai Ha, Christopher Nguyen}, journal={arXiv preprint arXiv:2410.02823}, year={2024}, archivePrefix={arXiv}, eprint={2410.02823}, primaryClass={cs.AI cs.LG} }
luong2024dana:
arxiv-665275
2410.02824
Inverse Design of Copolymers Including Stoichiometry and Chain Architecture
<|reference_start|>Inverse Design of Copolymers Including Stoichiometry and Chain Architecture: The demand for innovative synthetic polymers with improved properties is high, but their structural complexity and vast design space hinder rapid discovery. Machine learning-guided molecular design is a promising approach to accelerate polymer discovery. However, the scarcity of labeled polymer data and the complex hierarchical structure of synthetic polymers make generative design particularly challenging. We advance the current state-of-the-art approaches to generate not only repeating units, but monomer ensembles including their stoichiometry and chain architecture. We build upon a recent polymer representation that includes stoichiometries and chain architectures of monomer ensembles and develop a novel variational autoencoder (VAE) architecture encoding a graph and decoding a string. Using a semi-supervised setup, we enable the handling of partly labelled datasets, which can be beneficial for domains with a small corpus of labelled data. Our model learns a continuous, well-organized latent space (LS) that enables de novo generation of copolymer structures including different monomer stoichiometries and chain architectures. In an inverse design case study, we demonstrate our model for the in-silico discovery of novel conjugated copolymer photocatalysts for hydrogen production by optimizing the polymer's electron affinity and ionization potential in the latent space.<|reference_end|>
arxiv
@article{vogel2024inverse, title={Inverse Design of Copolymers Including Stoichiometry and Chain Architecture}, author={Gabriel Vogel, Jana M. Weber}, journal={arXiv preprint arXiv:2410.02824}, year={2024}, archivePrefix={arXiv}, eprint={2410.02824}, primaryClass={cond-mat.soft cs.LG} }
vogel2024inverse
arxiv-665276
2410.02825
Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG
<|reference_start|>Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG: This paper presents new methods that have the potential to improve privacy process efficiency with LLMs and RAG. To reduce hallucination, we continually pre-train the base LLM model with a privacy-specific knowledge base and then augment it with a semantic RAG layer. Our evaluations demonstrate that this approach enhances the model's performance (with metrics as much as doubled compared to the out-of-the-box LLM) in handling privacy-related queries, by grounding responses with factual information, which reduces inaccuracies.<|reference_end|>
arxiv
@article{fang2024ingest-and-ground:, title={Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG}, author={Chenhao Fang, Derek Larson, Shitong Zhu, Sophie Zeng, Wendy Summer, Yanqing Peng, Yuriy Hulovatyy, Rajeev Rao, Gabriel Forgues, Arya Pudota, Alex Goncalves, Herv\'e Robert}, journal={arXiv preprint arXiv:2410.02825}, year={2024}, archivePrefix={arXiv}, eprint={2410.02825}, primaryClass={cs.CL cs.CR} }
fang2024ingest-and-ground:
arxiv-665277
2410.02826
LinkThief: Combining Generalized Structure Knowledge with Node Similarity for Link Stealing Attack against GNN
<|reference_start|>LinkThief: Combining Generalized Structure Knowledge with Node Similarity for Link Stealing Attack against GNN: Graph neural networks (GNNs) have a wide range of applications in multimedia. Recent studies have shown that GNNs are vulnerable to link stealing attacks, which infer the existence of edges in the target GNN's training graph. Existing attacks are usually based on the assumption that links exist between two nodes that share similar posteriors; however, they fail to focus on links that do not hold under this assumption. To this end, we propose LinkThief, an improved link stealing attack that combines generalized structure knowledge with node similarity, in a scenario where the attacker's background knowledge contains a partially leaked target graph and a shadow graph. Specifically, to equip the attack model with insights into the link structure spanning both the shadow graph and the target graph, we introduce the idea of creating a Shadow-Target Bridge Graph and extracting edge subgraph structure features from it. Through theoretical analysis from the perspective of privacy theft, we first explore how to implement the aforementioned ideas. Building upon the findings, we design the Bridge Graph Generator to construct the Shadow-Target Bridge Graph. Then, the subgraph around the link is sampled by the Edge Subgraph Preparation Module. Finally, the Edge Structure Feature Extractor is designed to obtain generalized structure knowledge, which is combined with node similarity to form the features provided to the attack model. Extensive experiments validate the correctness of the theoretical analysis and demonstrate that LinkThief effectively steals links without extra assumptions.<|reference_end|>
arxiv
@article{zhang2024linkthief:, title={LinkThief: Combining Generalized Structure Knowledge with Node Similarity for Link Stealing Attack against GNN}, author={Yuxing Zhang, Siyuan Meng, Chunchun Chen, Mengyao Peng, Hongyan Gu and Xinli Huang}, journal={arXiv preprint arXiv:2410.02826}, year={2024}, doi={10.1145/3664647.3681381}, archivePrefix={arXiv}, eprint={2410.02826}, primaryClass={cs.CR cs.AI cs.LG} }
zhang2024linkthief:
arxiv-665278
2410.02827
Effective Intrusion Detection for UAV Communications using Autoencoder-based Feature Extraction and Machine Learning Approach
<|reference_start|>Effective Intrusion Detection for UAV Communications using Autoencoder-based Feature Extraction and Machine Learning Approach: This paper proposes a novel intrusion detection method for unmanned aerial vehicles (UAV), evaluated on a recent real-world UAV intrusion dataset. In particular, in the first stage of our method, we design an autoencoder architecture for effectively extracting important features, which are then fed into various machine learning models in the second stage for detecting and classifying attack types. To the best of our knowledge, this is the first attempt to propose such an autoencoder-based machine learning intrusion detection method for UAVs using a real dataset, whereas most existing works consider only simulated datasets or datasets irrelevant to UAV communications. Our experiment results show that the proposed method outperforms baselines such as feature selection schemes in both binary and multi-class classification tasks.<|reference_end|>
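A minimal sketch of the two-stage idea (autoencoder features, then a separate classifier); the layer sizes and latent dimension below are illustrative assumptions, not the paper's architecture:

    import torch
    import torch.nn as nn

    class AEFeatureExtractor(nn.Module):
        # Stage 1: train with a reconstruction loss (e.g. nn.MSELoss());
        # the latent code z is the compressed feature vector.
        def __init__(self, n_in, n_latent=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
            self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_in))

        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), z

    # Stage 2: fit any classifier (random forest, SVM, ...) on the
    # latent codes z for binary or multi-class attack detection.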
arxiv
@article{vuong2024effective, title={Effective Intrusion Detection for UAV Communications using Autoencoder-based Feature Extraction and Machine Learning Approach}, author={Tuan-Cuong Vuong, Cong Chi Nguyen, Van-Cuong Pham, Thi-Thanh-Huyen Le, Xuan-Nam Tran, and Thien Van Luong}, journal={NOLTA 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.02827}, primaryClass={cs.RO cs.AI cs.LG eess.SP} }
vuong2024effective
arxiv-665279
2410.02828
PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System
<|reference_start|>PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System: Generative Artificial Intelligence (GenAI) is becoming ubiquitous in our daily lives. The increase in computational power and data availability has led to a proliferation of both single- and multi-modal models. As the GenAI ecosystem matures, the need for extensible and model-agnostic risk identification frameworks is growing. To meet this need, we introduce the Python Risk Identification Toolkit (PyRIT), an open-source framework designed to enhance red teaming efforts in GenAI systems. PyRIT is a model- and platform-agnostic tool that enables red teamers to probe for and identify novel harms, risks, and jailbreaks in multimodal generative AI models. Its composable architecture facilitates the reuse of core building blocks and allows for extensibility to future models and modalities. This paper details the challenges specific to red teaming generative AI systems, the development and features of PyRIT, and its practical applications in real-world scenarios.<|reference_end|>
arxiv
@article{munoz2024pyrit:, title={PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System}, author={Gary D. Lopez Munoz, Amanda J. Minnich, Roman Lutz, Richard Lundeen, Raja Sekhar Rao Dheekonda, Nina Chikanov, Bolor-Erdene Jagdagdorj, Martin Pouliot, Shiven Chawla, Whitney Maxwell, Blake Bullwinkel, Katherine Pratt, Joris de Gruyter, Charlotte Siska, Pete Bryan, Tori Westerhoff, Chang Kawaguchi, Christian Seifert, Ram Shankar Siva Kumar, Yonatan Zunger}, journal={arXiv preprint arXiv:2410.02828}, year={2024}, archivePrefix={arXiv}, eprint={2410.02828}, primaryClass={cs.CR cs.AI cs.CL} }
munoz2024pyrit:
arxiv-665280
2410.02829
LLMs May Not Be Human-Level Players, But They Can Be Testers: Measuring Game Difficulty with LLM Agents
<|reference_start|>LLMs May Not Be Human-Level Players, But They Can Be Testers: Measuring Game Difficulty with LLM Agents: Recent advances in Large Language Models (LLMs) have demonstrated their potential as autonomous agents across various tasks. One emerging application is the use of LLMs in playing games. In this work, we explore a practical problem for the gaming industry: Can LLMs be used to measure game difficulty? We propose a general game-testing framework using LLM agents and test it on two widely played strategy games: Wordle and Slay the Spire. Our results reveal an interesting finding: although LLMs may not perform as well as the average human player, their performance, when guided by simple, generic prompting techniques, shows a statistically significant and strong correlation with difficulty indicated by human players. This suggests that LLMs could serve as effective agents for measuring game difficulty during the development process. Based on our experiments, we also outline general principles and guidelines for incorporating LLMs into the game testing process.<|reference_end|>
arxiv
@article{xiao2024llms, title={LLMs May Not Be Human-Level Players, But They Can Be Testers: Measuring Game Difficulty with LLM Agents}, author={Chang Xiao, Brenda Z. Yang}, journal={arXiv preprint arXiv:2410.02829}, year={2024}, archivePrefix={arXiv}, eprint={2410.02829}, primaryClass={cs.AI cs.HC cs.LG} }
xiao2024llms
arxiv-665281
2410.02830
YouTube Video Analytics for Patient Engagement: Evidence from Colonoscopy Preparation Videos
<|reference_start|>YouTube Video Analytics for Patient Engagement: Evidence from Colonoscopy Preparation Videos: Videos can be an effective way to deliver contextualized, just-in-time medical information for patient education. However, video analysis, from topic identification and retrieval to extraction and analysis of medical information and understandability from a patient perspective, is extremely challenging. This study demonstrates a data analysis pipeline that utilizes methods to retrieve medical information from YouTube videos on preparing for a colonoscopy exam, a much maligned and disliked procedure that patients find challenging to get adequately prepared for. We first use the YouTube Data API to collect metadata of desired videos on select search keywords and the Google Video Intelligence API to analyze text, frame, and object data. Then we annotate the YouTube video materials on medical information, video understandability and overall recommendation. We develop a bidirectional long short-term memory (BiLSTM) model to identify medical terms in videos and build three classifiers to group videos based on the levels of encoded medical information and video understandability, and whether the videos are recommended or not. Our study provides healthcare stakeholders with guidelines and a scalable approach for generating new educational video content to enhance management of a vast number of health conditions.<|reference_end|>
arxiv
@article{guo2024youtube, title={YouTube Video Analytics for Patient Engagement: Evidence from Colonoscopy Preparation Videos}, author={Yawen Guo, Xiao Liu, Anjana Susarla, Padman Rema}, journal={arXiv preprint arXiv:2410.02830}, year={2024}, archivePrefix={arXiv}, eprint={2410.02830}, primaryClass={eess.IV cs.CV cs.IR cs.MM} }
guo2024youtube
arxiv-665282
2410.02831
Skill Issues: An Analysis of CS:GO Skill Rating Systems
<|reference_start|>Skill Issues: An Analysis of CS:GO Skill Rating Systems: The meteoric rise of online games has created a need for accurate skill rating systems for tracking improvement and fair matchmaking. Although many skill rating systems are deployed, with various theoretical foundations, less work has been done on analysing the real-world performance of these algorithms. In this paper, we perform an empirical analysis of Elo, Glicko2 and TrueSkill through the lens of surrogate modelling, where skill ratings influence future matchmaking with a configurable acquisition function. We look at both overall performance and data efficiency, and perform a sensitivity analysis based on a large dataset of Counter-Strike: Global Offensive matches.<|reference_end|>
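Of the three systems compared, Elo is the simplest; one rating update looks as follows (textbook Elo with a fixed K-factor, shown for orientation rather than as the paper's tuned configuration):

    def elo_update(r_a, r_b, score_a, k=32.0):
        # One Elo update: score_a is 1.0 if player A wins, 0.5 for a
        # draw, 0.0 for a loss; B's rating moves by the opposite amount.
        expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
        delta = k * (score_a - expected_a)
        return r_a + delta, r_b - delta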
arxiv
@article{bober-irizar2024skill, title={Skill Issues: An Analysis of CS:GO Skill Rating Systems}, author={Mikel Bober-Irizar, Naunidh Dua, Max McGuinness}, journal={arXiv preprint arXiv:2410.02831}, year={2024}, archivePrefix={arXiv}, eprint={2410.02831}, primaryClass={cs.AI cs.LG} }
bober-irizar2024skill
arxiv-665283
2410.02832
FlipAttack: Jailbreak LLMs via Flipping
<|reference_start|>FlipAttack: Jailbreak LLMs via Flipping: This paper proposes a simple yet effective jailbreak attack named FlipAttack against black-box LLMs. First, drawing on the autoregressive nature of LLMs, we reveal that they tend to understand text from left to right and find that they struggle to comprehend text when noise is added to the left side. Motivated by these insights, we propose to disguise the harmful prompt by constructing left-side noise merely based on the prompt itself, then generalize this idea to 4 flipping modes. Second, we verify the strong ability of LLMs to perform the text-flipping task, and then develop 4 variants to guide LLMs to denoise, understand, and execute harmful behaviors accurately. These designs keep FlipAttack universal, stealthy, and simple, allowing it to jailbreak black-box LLMs within only 1 query. Experiments on 8 LLMs demonstrate the superiority of FlipAttack. Remarkably, it achieves $\sim$98\% attack success rate on GPT-4o, and $\sim$98\% bypass rate against 5 guardrail models on average. The codes are available at GitHub\footnote{https://github.com/yueliu1999/FlipAttack}.<|reference_end|>
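The flipping operations themselves are plain string manipulations. The abstract names four modes without listing them, so the three below are plausible illustrations rather than the paper's exact set:

    def flip_chars(prompt):
        # Reverse every character in the prompt.
        return prompt[::-1]

    def flip_words(prompt):
        # Reverse the word order, keeping each word intact.
        return " ".join(prompt.split()[::-1])

    def flip_chars_in_words(prompt):
        # Reverse the characters inside each word, keeping word order.
        return " ".join(w[::-1] for w in prompt.split())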
arxiv
@article{liu2024flipattack:, title={FlipAttack: Jailbreak LLMs via Flipping}, author={Yue Liu, Xiaoxin He, Miao Xiong, Jinlan Fu, Shumin Deng, Bryan Hooi}, journal={arXiv preprint arXiv:2410.02832}, year={2024}, archivePrefix={arXiv}, eprint={2410.02832}, primaryClass={cs.CR cs.AI} }
liu2024flipattack:
arxiv-665284
2410.02833
Asymmetry of the Relative Entropy in the Regularization of Empirical Risk Minimization
<|reference_start|>Asymmetry of the Relative Entropy in the Regularization of Empirical Risk Minimization: The effect of relative entropy asymmetry is analyzed in the context of empirical risk minimization (ERM) with relative entropy regularization (ERM-RER). Two regularizations are considered: $(a)$ the relative entropy of the measure to be optimized with respect to a reference measure (Type-I ERM-RER); or $(b)$ the relative entropy of the reference measure with respect to the measure to be optimized (Type-II ERM-RER). The main result is the characterization of the solution to the Type-II ERM-RER problem and its key properties. By comparing the well-understood Type-I ERM-RER with Type-II ERM-RER, the effects of entropy asymmetry are highlighted. The analysis shows that in both cases, regularization by relative entropy forces the solution's support to collapse into the support of the reference measure, introducing a strong inductive bias that can overshadow the evidence provided by the training data. Finally, it is shown that Type-II regularization is equivalent to Type-I regularization with an appropriate transformation of the empirical risk function.<|reference_end|>
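In symbols, writing $\mathsf{L}$ for the empirical risk functional, $Q$ for the reference measure, and $\lambda > 0$ for the regularization factor (notation ours), the two problems contrast as

\[
\text{Type-I:} \quad \min_{P} \; \mathbb{E}_{\theta \sim P}\bigl[\mathsf{L}(\theta)\bigr] + \lambda\, D(P \,\|\, Q), \qquad \text{Type-II:} \quad \min_{P} \; \mathbb{E}_{\theta \sim P}\bigl[\mathsf{L}(\theta)\bigr] + \lambda\, D(Q \,\|\, P),
\]

and the asymmetry of $D(\cdot \,\|\, \cdot)$ is precisely what makes the two solutions differ.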
arxiv
@article{daunas2024asymmetry, title={Asymmetry of the Relative Entropy in the Regularization of Empirical Risk Minimization}, author={Francisco Daunas, I\~naki Esnaola, Samir M. Perlaza, H. Vincent Poor}, journal={arXiv preprint arXiv:2410.02833}, year={2024}, archivePrefix={arXiv}, eprint={2410.02833}, primaryClass={stat.ML cs.IT cs.LG math.IT} }
daunas2024asymmetry
arxiv-665285
2410.02835
The MLE is minimax optimal for LGC
<|reference_start|>The MLE is minimax optimal for LGC: We revisit the recently introduced Local Glivenko-Cantelli setting, which studies distribution-dependent uniform convergence rates of the Maximum Likelihood Estimator (MLE). In this work, we investigate generalizations of this setting where arbitrary estimators are allowed rather than just the MLE. Can a strictly larger class of measures be learned? Can better risk decay rates be obtained? We provide exhaustive answers to these questions -- which are both negative, provided the learner is barred from exploiting some infinite-dimensional pathologies. On the other hand, allowing such exploits does lead to a strictly larger class of learnable measures.<|reference_end|>
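For orientation: over a countable domain the MLE is simply the empirical measure, and the Local Glivenko-Cantelli question concerns the distribution-dependent rate at which its uniform risk decays (notation ours):

\[
\hat{\mu}_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{X_i = x\}, \qquad \mathbb{E} \, \sup_{x} \bigl| \hat{\mu}_n(x) - \mu(x) \bigr| \longrightarrow 0 \quad (n \to \infty).
\]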
arxiv
@article{cohen2024the, title={The MLE is minimax optimal for LGC}, author={Doron Cohen, Aryeh Kontorovich, Roi Weiss}, journal={arXiv preprint arXiv:2410.02835}, year={2024}, archivePrefix={arXiv}, eprint={2410.02835}, primaryClass={math.ST cs.LG stat.ME stat.TH} }
cohen2024the
arxiv-665286
2410.02838
Modelling the longevity of complex living systems
<|reference_start|>Modelling the longevity of complex living systems: This extended abstract was presented at the Nectar Track of ECML PKDD 2024 in Vilnius, Lithuania. The content supplements a recently published paper "Laws of Macroevolutionary Expansion" in the Proceedings of the National Academy of Sciences (PNAS).<|reference_end|>
arxiv
@article{žliobaitė2024modelling, title={Modelling the longevity of complex living systems}, author={Indr\.e \v{Z}liobait\.e}, journal={arXiv preprint arXiv:2410.02838}, year={2024}, archivePrefix={arXiv}, eprint={2410.02838}, primaryClass={q-bio.PE cs.LG q-bio.QM stat.AP} }
žliobaitė2024modelling
arxiv-665287
2410.02840
Overcoming Representation Bias in Fairness-Aware Data Repair using Optimal Transport
<|reference_start|>Overcoming Representation Bias in Fairness-Aware data Repair using Optimal Transport: Optimal transport (OT) has an important role in transforming data distributions in a manner which engenders fairness. Typically, the OT operators are learnt from the unfair attribute-labelled data, and then used for their repair. Two significant limitations of this approach are as follows: (i) the OT operators for underrepresented subgroups are poorly learnt (i.e. they are susceptible to representation bias); and (ii) these OT repairs cannot be effected on identically distributed but out-of-sample (i.e.\ archival) data. In this paper, we address both of these problems by adopting a Bayesian nonparametric stopping rule for learning each attribute-labelled component of the data distribution. The induced OT-optimal quantization operators can then be used to repair the archival data. We formulate a novel definition of the fair distributional target, along with quantifiers that allow us to trade fairness against damage in the transformed data. These are used to reveal excellent performance of our representation-bias-tolerant scheme in simulated and benchmark data sets.<|reference_end|>
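As a concrete building block: in one dimension the OT repair map is quantile matching (the comonotone transport). A minimal sketch of that step alone; the paper's Bayesian nonparametric stopping rule, which provides the bias tolerance and applicability to archival data, is not modeled here:

    import numpy as np

    def ot_repair_1d(x, x_target, n_quantiles=100):
        # Map samples x onto the target distribution by matching
        # quantiles; in 1-D this is the OT map for any convex cost.
        qs = np.linspace(0.0, 1.0, n_quantiles)
        tgt_q = np.quantile(x_target, qs)
        ranks = np.searchsorted(np.sort(x), x, side="right") / len(x)
        return np.interp(ranks, qs, tgt_q)  # repaired values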
arxiv
@article{langbridge2024overcoming, title={Overcoming Representation Bias in Fairness-Aware Data Repair using Optimal Transport}, author={Abigail Langbridge and Anthony Quinn and Robert Shorten}, journal={arXiv preprint arXiv:2410.02840}, year={2024}, archivePrefix={arXiv}, eprint={2410.02840}, primaryClass={cs.LG cs.CY math.ST stat.TH} }
langbridge2024overcoming
arxiv-665288
2410.02841
Demonstration Attack against In-Context Learning for Code Intelligence
<|reference_start|>Demonstration Attack against In-Context Learning for Code Intelligence: Recent advancements in large language models (LLMs) have revolutionized code intelligence by improving programming productivity and alleviating challenges faced by software developers. To further improve the performance of LLMs on specific code intelligence tasks and reduce training costs, researchers reveal a new capability of LLMs: in-context learning (ICL). ICL allows LLMs to learn from a few demonstrations within a specific context, achieving impressive results without parameter updating. However, the rise of ICL introduces new security vulnerabilities in the code intelligence field. In this paper, we explore a novel security scenario based on the ICL paradigm, where attackers act as third-party ICL agencies and provide users with bad ICL content to mislead LLMs outputs in code intelligence tasks. Our study demonstrates the feasibility and risks of such a scenario, revealing how attackers can leverage malicious demonstrations to construct bad ICL content and induce LLMs to produce incorrect outputs, posing significant threats to system security. We propose a novel method to construct bad ICL content called DICE, which is composed of two stages: Demonstration Selection and Bad ICL Construction, constructing targeted bad ICL content based on the user query and transferable across different query inputs. Ultimately, our findings emphasize the critical importance of securing ICL mechanisms to protect code intelligence systems from adversarial manipulation.<|reference_end|>
arxiv
@article{ge2024demonstration, title={Demonstration Attack against In-Context Learning for Code Intelligence}, author={Yifei Ge, Weisong Sun, Yihang Lou, Chunrong Fang, Yiran Zhang, Yiming Li, Xiaofang Zhang, Yang Liu, Zhihong Zhao, Zhenyu Chen}, journal={arXiv preprint arXiv:2410.02841}, year={2024}, archivePrefix={arXiv}, eprint={2410.02841}, primaryClass={cs.CR cs.SE} }
ge2024demonstration
arxiv-665289
2410.02843
Neural DDEs with Learnable Delays for Partially Observed Dynamical Systems
<|reference_start|>Neural DDEs with Learnable Delays for Partially Observed Dynamical Systems: Many successful methods to learn dynamical systems from data have recently been introduced. Such methods often rely on the availability of the system's full state. However, this underlying hypothesis is rather restrictive as it is typically not confirmed in practice, leaving us with partially observed systems. Utilizing the Mori-Zwanzig (MZ) formalism from statistical physics, we demonstrate that Constant Lag Neural Delay Differential Equations (NDDEs) naturally serve as suitable models for partially observed states. In empirical evaluation, we show that such models outperform existing methods on both synthetic and experimental data.<|reference_end|>
arxiv
@article{monsel2024neural, title={Neural DDEs with Learnable Delays for Partially Observed Dynamical Systems}, author={Thibault Monsel, Emmanuel Menier, Onofrio Semeraro, Lionel Mathelin, Guillaume Charpiat}, journal={arXiv preprint arXiv:2410.02843}, year={2024}, archivePrefix={arXiv}, eprint={2410.02843}, primaryClass={cs.LG cs.AI physics.comp-ph} }
monsel2024neural
arxiv-665290
2410.02844
CAnDOIT: Causal Discovery with Observational and Interventional Data from Time-Series
<|reference_start|>CAnDOIT: Causal Discovery with Observational and Interventional Data from Time-Series: The study of cause-and-effect is of the utmost importance in many branches of science, but also for many practical applications of intelligent systems. In particular, identifying causal relationships in situations that include hidden factors is a major challenge for methods that rely solely on observational data for building causal models. This paper proposes CAnDOIT, a causal discovery method to reconstruct causal models using both observational and interventional time-series data. The use of interventional data in the causal analysis is crucial for real-world applications, such as robotics, where the scenario is highly complex and observational data alone are often insufficient to uncover the correct causal structure. Validation of the method is performed initially on randomly generated synthetic models and subsequently on a well-known benchmark for causal structure learning in a robotic manipulation environment. The experiments demonstrate that the approach can effectively handle data from interventions and exploit them to enhance the accuracy of the causal analysis. A Python implementation of CAnDOIT has also been developed and is publicly available on GitHub: https://github.com/lcastri/causalflow.<|reference_end|>
arxiv
@article{castri2024candoit:, title={CAnDOIT: Causal Discovery with Observational and Interventional Data from Time-Series}, author={Luca Castri, Sariah Mghames, Marc Hanheide, Nicola Bellotto}, journal={arXiv preprint arXiv:2410.02844}, year={2024}, archivePrefix={arXiv}, eprint={2410.02844}, primaryClass={stat.ML cs.AI cs.LG cs.RO} }
castri2024candoit:
arxiv-665291
2410.02845
Towards Layer-Wise Personalized Federated Learning: Adaptive Layer Disentanglement via Conflicting Gradients
<|reference_start|>Towards Layer-Wise Personalized Federated Learning: Adaptive Layer Disentanglement via Conflicting Gradients: In personalized Federated Learning (pFL), high data heterogeneity can cause significant gradient divergence across devices, adversely affecting the learning process. This divergence, especially when gradients from different users form an obtuse angle during aggregation, can negate progress, leading to severe weight and gradient update degradation. To address this issue, we introduce a new approach to pFL design, namely Federated Learning with Layer-wise Aggregation via Gradient Analysis (FedLAG), utilizing the concept of gradient conflict at the layer level. Specifically, when layer-wise gradients of different clients form acute angles, those gradients align in the same direction, enabling updates across different clients toward identifying client-invariant features. Conversely, when layer-wise gradient pairs form obtuse angles, the layers tend to focus on client-specific tasks. In light of this, FedLAG assigns layers for personalization based on the extent of layer-wise gradient conflicts. Specifically, layers with gradient conflicts are excluded from the global aggregation process. The theoretical evaluation demonstrates that when integrated into other pFL baselines, FedLAG enhances pFL performance by a certain margin. Therefore, our proposed method achieves superior convergence behavior compared with other baselines. Extensive experiments show that our FedLAG outperforms several state-of-the-art methods and can be easily incorporated with many existing methods to further enhance performance.<|reference_end|>
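The core test is a layer-wise cosine check between clients' gradients. A minimal two-client sketch; the paper's aggregation of this signal across all clients is omitted:

    import torch

    def conflicting_layers(grads_a, grads_b, eps=1e-12):
        # Return names of layers where the two clients' gradients form
        # an obtuse angle (cosine < 0): candidates for personalization
        # and for exclusion from global aggregation.
        conflicts = []
        for name, ga in grads_a.items():
            gb = grads_b[name]
            cos = torch.dot(ga.flatten(), gb.flatten()) / (
                ga.norm() * gb.norm() + eps)
            if cos < 0:
                conflicts.append(name)
        return conflicts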
arxiv
@article{nguyen2024towards, title={Towards Layer-Wise Personalized Federated Learning: Adaptive Layer Disentanglement via Conflicting Gradients}, author={Minh Duong Nguyen, Khanh Le, Khoi Do, Nguyen H.Tran, Duc Nguyen, Chien Trinh, Zhaohui Yang}, journal={arXiv preprint arXiv:2410.02845}, year={2024}, archivePrefix={arXiv}, eprint={2410.02845}, primaryClass={cs.LG cs.AI} }
nguyen2024towards
arxiv-665292
2410.02846
A Spatio-Temporal Machine Learning Model for Mortgage Credit Risk: Default Probabilities and Loan Portfolios
<|reference_start|>A Spatio-Temporal Machine Learning Model for Mortgage Credit Risk: Default Probabilities and Loan Portfolios: We introduce a novel machine learning model for credit risk by combining tree-boosting with a latent spatio-temporal Gaussian process model accounting for frailty correlation. This allows for modeling non-linearities and interactions among predictor variables in a flexible data-driven manner and for accounting for spatio-temporal variation that is not explained by observable predictor variables. We also show how estimation and prediction can be done in a computationally efficient manner. In an application to a large U.S. mortgage credit risk data set, we find that both predictive default probabilities for individual loans and predictive loan portfolio loss distributions obtained with our novel approach are more accurate compared to conventional independent linear hazard models and also linear spatio-temporal models. Using interpretability tools for machine learning models, we find that the likely reasons for this outperformance are strong interaction and non-linear effects in the predictor variables and the presence of large spatio-temporal frailty effects.<|reference_end|>
arxiv
@article{kündig2024a, title={A Spatio-Temporal Machine Learning Model for Mortgage Credit Risk: Default Probabilities and Loan Portfolios}, author={Pascal K\"undig, Fabio Sigrist}, journal={arXiv preprint arXiv:2410.02846}, year={2024}, archivePrefix={arXiv}, eprint={2410.02846}, primaryClass={q-fin.RM cs.LG q-fin.ST} }
kündig2024a
arxiv-665293
2410.02847
Deep Signature: Characterization of Large-Scale Molecular Dynamics
<|reference_start|>Deep Signature: Characterization of Large-Scale Molecular Dynamics: Understanding protein dynamics is essential for deciphering protein functional mechanisms and developing molecular therapies. However, the complex high-dimensional dynamics and interatomic interactions of biological processes pose a significant challenge for existing computational techniques. In this paper, we approach this problem for the first time by introducing Deep Signature, a novel computationally tractable framework that characterizes complex dynamics and interatomic interactions based on their evolving trajectories. Specifically, our approach incorporates soft spectral clustering that locally aggregates cooperative dynamics to reduce the size of the system, as well as a signature transform that collects iterated integrals to provide a global characterization of the non-smooth interactive dynamics. Theoretical analysis demonstrates that Deep Signature exhibits several desirable properties, including invariance to translation, near invariance to rotation, equivariance to permutation of atomic coordinates, and invariance under time reparameterization. Furthermore, experimental results on three benchmarks of biological processes verify that our approach achieves superior performance compared to baseline methods.<|reference_end|>
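At depth 2 the signature transform reduces to path increments plus ordered second-order integrals. A minimal numpy sketch of this building block; the soft spectral clustering stage of the full pipeline is omitted:

    import numpy as np

    def signature_level2(path):
        # Depth-2 signature of a piecewise-linear path of shape (T, d):
        # level1[i] is the total increment of coordinate i, and
        # level2[i, j] is the iterated integral over ordered times s < t.
        dx = np.diff(path, axis=0)
        level1 = dx.sum(axis=0)
        x_prev = np.vstack([np.zeros(path.shape[1]),
                            np.cumsum(dx, axis=0)[:-1]])
        level2 = x_prev.T @ dx + 0.5 * (dx.T @ dx)
        return level1, level2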
arxiv
@article{qin2024deep, title={Deep Signature: Characterization of Large-Scale Molecular Dynamics}, author={Tiexin Qin and Mengxu Zhu and Chunyang Li and Terry Lyons and Hong Yan and Haoliang Li}, journal={arXiv preprint arXiv:2410.02847}, year={2024}, archivePrefix={arXiv}, eprint={2410.02847}, primaryClass={q-bio.QM cs.AI} }
qin2024deep
arxiv-665294
2410.02854
MQT Qudits: A Software Framework for Mixed-Dimensional Quantum Computing
<|reference_start|>MQT Qudits: A Software Framework for Mixed-Dimensional Quantum Computing: Quantum computing holds great promise for surpassing the limits of classical devices in many fields. Despite impressive developments, however, current research is primarily focused on qubits. At the same time, quantum hardware based on multi-level (qudit) systems offers a range of advantages, including expanded gate sets, higher information density, and improved computational efficiency, which might play a key role in overcoming not only the limitations of classical machines but also those of current qubit-based quantum devices. However, working with qudits faces challenges not only in experimental control but particularly in algorithm development and quantum software. In this work, we introduce MQT Qudits, an open-source tool which, as part of the Munich Quantum Toolkit (MQT), is built to assist in designing and implementing applications for mixed-dimensional qudit devices. We specify a standardized language for mixed-dimensional systems and discuss circuit specification, compilation to hardware gate sets, efficient circuit simulation, and open challenges. MQT Qudits is available at github.com/cda-tum/mqt-qudits and on pypi at pypi.org/project/mqt.qudits.<|reference_end|>
arxiv
@article{mato2024mqt, title={MQT Qudits: A Software Framework for Mixed-Dimensional Quantum Computing}, author={Kevin Mato and Martin Ringbauer and Lukas Burgholzer and Robert Wille}, journal={arXiv preprint arXiv:2410.02854}, year={2024}, archivePrefix={arXiv}, eprint={2410.02854}, primaryClass={quant-ph cs.ET hep-th} }
mato2024mqt
arxiv-665295
2410.02857
Reconstructing Galaxy Cluster Mass Maps using Score-based Generative Modeling
<|reference_start|>Reconstructing Galaxy Cluster Mass Maps using Score-based Generative Modeling: We present a novel approach to reconstruct gas and dark matter projected density maps of galaxy clusters using score-based generative modeling. Our diffusion model takes in mock SZ and X-ray images as conditional observations, and generates realizations of corresponding gas and dark matter maps by sampling from a learned data posterior. We train and validate the performance of our model by using mock data from a hydrodynamical cosmological simulation. The model accurately reconstructs both the mean and spread of the radial density profiles in the spatial domain to within 5\%, indicating that the model is able to distinguish between clusters of different sizes. In the spectral domain, the model achieves close-to-unity values for the bias and cross-correlation coefficients, indicating that the model can accurately probe cluster structures on both large and small scales. Our experiments demonstrate the ability of score models to learn a strong, nonlinear, and unbiased mapping between input observables and fundamental density distributions of galaxy clusters. These diffusion models can be further fine-tuned and generalized to not only take in additional observables as inputs, but also real observations and predict unknown density distributions of galaxy clusters.<|reference_end|>
arxiv
@article{hsu2024reconstructing, title={Reconstructing Galaxy Cluster Mass Maps using Score-based Generative Modeling}, author={Alan Hsu, Matthew Ho, Joyce Lin, Carleen Markey, Michelle Ntampaka, Hy Trac, Barnab\'as P\'oczos}, journal={arXiv preprint arXiv:2410.02857}, year={2024}, archivePrefix={arXiv}, eprint={2410.02857}, primaryClass={astro-ph.CO cs.LG} }
hsu2024reconstructing
arxiv-665296
2410.02867
FAIR Universe HiggsML Uncertainty Challenge Competition
<|reference_start|>FAIR Universe HiggsML Uncertainty Challenge Competition: The FAIR Universe -- HiggsML Uncertainty Challenge focuses on measuring the physics properties of elementary particles with imperfect simulators due to differences in modelling systematic errors. Additionally, the challenge is leveraging a large-compute-scale AI platform for sharing datasets, training models, and hosting machine learning competitions. Our challenge brings together the physics and machine learning communities to advance our understanding and methodologies in handling systematic (epistemic) uncertainties within AI techniques.<|reference_end|>
arxiv
@article{bhimji2024fair, title={FAIR Universe HiggsML Uncertainty Challenge Competition}, author={Wahid Bhimji and Paolo Calafiura and Ragansu Chakkappai and Yuan-Tang Chou and Sascha Diefenbacher and Jordan Dudley and Steven Farrell and Aishik Ghosh and Isabelle Guyon and Chris Harris and Shih-Chieh Hsu and Elham E Khoda and R\'emy Lyscar and Alexandre Michon and Benjamin Nachman and Peter Nugent and Mathis Reymond and David Rousseau and Benjamin Sluijter and Benjamin Thorne and Ihsan Ullah and Yulei Zhang}, journal={arXiv preprint arXiv:2410.02867}, year={2024}, archivePrefix={arXiv}, eprint={2410.02867}, primaryClass={hep-ph cs.LG hep-ex physics.data-an} }
bhimji2024fair
arxiv-665297
2410.02870
Individuation of 3D perceptual units from neurogeometry of binocular cells
<|reference_start|>Individuation of 3D perceptual units from neurogeometry of binocular cells: We model the functional architecture of the early stages of three-dimensional vision by extending the neurogeometric sub-Riemannian model for stereo-vision introduced in \cite{BCSZ23}. A new framework for correspondence is introduced that integrates a neural-based algorithm to achieve stereo correspondence locally while, simultaneously, organizing the corresponding points into global perceptual units. The result is an effective scene segmentation. We achieve this using harmonic analysis on the sub-Riemannian structure and show, in a comparison against Riemannian distance, that the sub-Riemannian metric is central to the solution.<|reference_end|>
arxiv
@article{bolelli2024individuation, title={Individuation of 3D perceptual units from neurogeometry of binocular cells}, author={Maria Virginia Bolelli, Giovanna Citti, Alessandro Sarti, Steven W. Zucker}, journal={arXiv preprint arXiv:2410.02870}, year={2024}, archivePrefix={arXiv}, eprint={2410.02870}, primaryClass={q-bio.NC cs.CV math.DG} }
bolelli2024individuation
arxiv-665298
2410.02874
Real-World Cooking Robot System from Recipes Based on Food State Recognition Using Foundation Models and PDDL
<|reference_start|>Real-World Cooking Robot System from Recipes Based on Food State Recognition Using Foundation Models and PDDL: Although cooking is in growing demand as an expected task for robots, robots have not yet realised sequences of cooking behaviours from new recipe descriptions in the real world. In this study, we propose a robot system that integrates real-world executable cooking behaviour planning, using a Large Language Model (LLM) together with classical planning over PDDL descriptions, with food-ingredient state recognition learned from a small amount of data using a Vision-Language Model (VLM). In experiments, PR2, a dual-armed wheeled robot, successfully cooked newly arranged recipes in a real-world environment, confirming the effectiveness of the proposed system.<|reference_end|>
arxiv
@article{kanazawa2024real-world, title={Real-World Cooking Robot System from Recipes Based on Food State Recognition Using Foundation Models and PDDL}, author={Naoaki Kanazawa, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba}, journal={arXiv preprint arXiv:2410.02874}, year={2024}, doi={10.1080/01691864.2024.2407136}, archivePrefix={arXiv}, eprint={2410.02874}, primaryClass={cs.RO cs.AI} }
kanazawa2024real-world
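A hedged sketch of the LLM-plus-classical-planning split described above: recipe_to_goal is a placeholder for the LLM call, and the toy kitchen domain is not the paper's actual PDDL; the generated problem could be handed to any off-the-shelf planner.
```python
# Sketch of turning recipe text into a PDDL problem. The domain, predicates,
# and recipe_to_goal heuristic are illustrative assumptions, not the paper's.
PDDL_DOMAIN = """(define (domain kitchen)
  (:predicates (raw ?x) (cut ?x) (cooked ?x))
  (:action cut-food
    :parameters (?x) :precondition (raw ?x)
    :effect (and (cut ?x) (not (raw ?x))))
  (:action cook-food
    :parameters (?x) :precondition (cut ?x)
    :effect (cooked ?x)))"""

def recipe_to_goal(recipe_text: str) -> str:
    # Stand-in for an LLM call mapping free-form recipe steps to PDDL goals.
    ingredients = ["carrot"] if "carrot" in recipe_text else ["onion"]
    return "(and " + " ".join(f"(cooked {i})" for i in ingredients) + ")"

def make_problem(goal: str) -> str:
    return f"""(define (problem dinner) (:domain kitchen)
  (:objects carrot onion)
  (:init (raw carrot) (raw onion))
  (:goal {goal}))"""

problem = make_problem(recipe_to_goal("Slice the carrot, then simmer."))
print(problem)  # feed to a PDDL planner, e.g. Fast Downward, to obtain a plan
```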
arxiv-665299
2410.02879
Position: LLM Unlearning Benchmarks are Weak Measures of Progress
<|reference_start|>Position: LLM Unlearning Benchmarks are Weak Measures of Progress: Unlearning methods have the potential to improve the privacy and safety of large language models (LLMs) by removing sensitive or harmful information post hoc. The LLM unlearning research community has increasingly turned toward empirical benchmarks to assess the effectiveness of such methods. In this paper, we find that existing benchmarks provide an overly optimistic and potentially misleading view on the effectiveness of candidate unlearning methods. By introducing simple, benign modifications to a number of popular benchmarks, we expose instances where supposedly unlearned information remains accessible, or where the unlearning process has degraded the model's performance on retained information to a much greater extent than indicated by the original benchmark. We identify that existing benchmarks are particularly vulnerable to modifications that introduce even loose dependencies between the forget and retain information. Further, we show that ambiguity in unlearning targets in existing benchmarks can easily lead to the design of methods that overfit to the given test queries. Based on our findings, we urge the community to be cautious when interpreting benchmark results as reliable measures of progress, and we provide several recommendations to guide future LLM unlearning research.<|reference_end|>
arxiv
@article{thaker2024position:, title={Position: LLM Unlearning Benchmarks are Weak Measures of Progress}, author={Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith}, journal={arXiv preprint arXiv:2410.02879}, year={2024}, archivePrefix={arXiv}, eprint={2410.02879}, primaryClass={cs.CL} }
thaker2024position:
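In the spirit of the "benign modifications" the abstract describes, the sketch below probes a supposedly unlearned model with paraphrased forget-set queries; query_logprob and the toy model are assumptions for illustration, not the paper's benchmarks or methods.
```python
# If paraphrased queries recover much higher answer likelihoods than the
# original benchmark queries, the benchmark overstated how much was unlearned.
def query_logprob(model, question: str, answer: str) -> float:
    # Stand-in for scoring log p(answer | question) under a language model.
    return model.get(question, {}).get(answer, -20.0)

def unlearning_gap(model, forget_qas, paraphrase):
    """Mean log-likelihood gain from rephrasing forget-set questions."""
    direct = [query_logprob(model, q, a) for q, a in forget_qas]
    probed = [query_logprob(model, paraphrase(q), a) for q, a in forget_qas]
    return sum(probed) / len(probed) - sum(direct) / len(direct)

toy_model = {"Who wrote X?": {"Alice": -15.0},
             "X was written by whom?": {"Alice": -2.0}}  # leakage via rephrasing
qas = [("Who wrote X?", "Alice")]
print(unlearning_gap(toy_model, qas, lambda q: "X was written by whom?"))
```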
arxiv-665300
2410.02881
Computational Modeling of Artistic Inspiration: A Framework for Predicting Aesthetic Preferences in Lyrical Lines Using Linguistic and Stylistic Features
<|reference_start|>Computational Modeling of Artistic Inspiration: A Framework for Predicting Aesthetic Preferences in Lyrical Lines Using Linguistic and Stylistic Features: Artistic inspiration remains one of the least understood aspects of the creative process. It plays a crucial role in producing works that resonate deeply with audiences, but the complexity and unpredictability of aesthetic stimuli that evoke inspiration have eluded systematic study. This work proposes a novel framework for computationally modeling artistic preferences in different individuals through key linguistic and stylistic properties, with a focus on lyrical content. In addition to the framework, we introduce \textit{EvocativeLines}, a dataset of annotated lyric lines, categorized as either "inspiring" or "not inspiring," to facilitate the evaluation of our framework across diverse preference profiles. Our computational model leverages the proposed linguistic and poetic features and applies a calibration network on top of it to accurately forecast artistic preferences among different creative individuals. Our experiments demonstrate that our framework outperforms an out-of-the-box LLaMA-3-70b, a state-of-the-art open-source language model, by nearly 18 points. Overall, this work contributes an interpretable and flexible framework that can be adapted to analyze any type of artistic preferences that are inherently subjective across a wide spectrum of skill levels.<|reference_end|>
arxiv
@article{sahu2024computational, title={Computational Modeling of Artistic Inspiration: A Framework for Predicting Aesthetic Preferences in Lyrical Lines Using Linguistic and Stylistic Features}, author={Gaurav Sahu, Olga Vechtomova}, journal={arXiv preprint arXiv:2410.02881}, year={2024}, archivePrefix={arXiv}, eprint={2410.02881}, primaryClass={cs.CL} }
sahu2024computational
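A minimal sketch of the framework's overall shape, assuming hand-crafted stylistic features feeding a shared scorer with a small per-annotator calibration on top; the features and toy data are stand-ins, not the paper's EvocativeLines dataset or exact model.
```python
# Shared logistic scorer over stylistic features, plus a per-annotator bias
# that calibrates predictions to individual aesthetic preferences.
import numpy as np

def stylistic_features(line: str) -> np.ndarray:
    words = line.split()
    return np.array([
        len(words),                                        # line length
        sum(len(w) for w in words) / max(len(words), 1),   # mean word length
        sum(line.count(v) for v in "aeiou") / max(len(line), 1),  # vowel density
    ])

class CalibratedScorer:
    def __init__(self, n_features=3):
        self.w = np.zeros(n_features)   # shared preference weights
        self.bias = {}                  # per-annotator calibration offsets
    def fit(self, lines, labels, annotators, lr=0.1, epochs=200):
        X = np.array([stylistic_features(l) for l in lines])
        y = np.array(labels, dtype=float)
        for _ in range(epochs):
            b = np.array([self.bias.get(a, 0.0) for a in annotators])
            p = 1.0 / (1.0 + np.exp(-(X @ self.w + b)))    # sigmoid
            g = p - y                                      # logistic gradient
            self.w -= lr * X.T @ g / len(y)
            for a, gi in zip(annotators, g):
                self.bias[a] = self.bias.get(a, 0.0) - lr * gi
    def predict(self, line, annotator):
        b = self.bias.get(annotator, 0.0)
        return 1.0 / (1.0 + np.exp(-(stylistic_features(line) @ self.w + b)))

scorer = CalibratedScorer()
scorer.fit(["the moon hums low", "buy milk today"], [1, 0], ["u1", "u1"])
print(scorer.predict("the river sings", "u1"))  # preference score for user u1
```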