corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-666001 | 2410.04064 | Text2Chart31: Instruction Tuning for Chart Generation with Automatic Feedback | <|reference_start|>Text2Chart31: Instruction Tuning for Chart Generation with Automatic Feedback: Large language models (LLMs) have demonstrated strong capabilities across various language tasks, notably through instruction-tuning methods. However, LLMs face challenges in visualizing complex, real-world data through charts and plots. Firstly, existing datasets rarely cover a full range of chart types, such as 3D, volumetric, and gridded charts. Secondly, supervised fine-tuning methods do not fully leverage the intricate relationships within rich datasets, including text, code, and figures. To address these challenges, we propose a hierarchical pipeline and a new dataset for chart generation. Our dataset, Text2Chart31, includes 31 unique plot types referring to the Matplotlib library, with 11.1K tuples of descriptions, code, data tables, and plots. Moreover, we introduce a reinforcement learning-based instruction tuning technique for chart generation tasks without requiring human feedback. Our experiments show that this approach significantly enhances the model performance, enabling smaller models to outperform larger open-source models and be comparable to state-of-the-art proprietary models in data visualization tasks. We make the code and dataset available at https://github.com/fatemehpesaran310/Text2Chart31.<|reference_end|> | arxiv | @article{zadeh2024text2chart31:,
title={Text2Chart31: Instruction Tuning for Chart Generation with Automatic
Feedback},
author={Fatemeh Pesaran Zadeh, Juyeon Kim, Jin-Hwa Kim, Gunhee Kim},
journal={arXiv preprint arXiv:2410.04064},
year={2024},
archivePrefix={arXiv},
eprint={2410.04064},
primaryClass={cs.LG cs.AI}
} | zadeh2024text2chart31: |
arxiv-666002 | 2410.04066 | Exploring 5G Network Performance: Comparison of Inner and Outer City Areas in Phetchaburi Province | <|reference_start|>Exploring 5G Network Performance: Comparison of Inner and Outer City Areas in Phetchaburi Province: The advancement of 5G technology has transformed various aspects of life, including tourism, by enabling people worldwide to communicate and travel with ease. Traveling to different places and countries is now seamless, removing language barriers and facilitating easy access to information on culture, accommodation, and tourist attractions. Additionally, access to applications that facilitate quicker language translation further enhances the travel experience. Phetchaburi Province holds significant importance as a global tourist destination. UNESCO has recognized Phetchaburi as a member of the UNESCO Creative Cities Network (UCCN), comprising one of 49 cities worldwide acknowledged for their creative city initiatives. Phetchaburi Province stands as the 5th city in Thailand to receive this designation. This research investigated 5G performance in Phetchaburi Province, both the inner and outer city, focusing on download and upload speeds. The results indicate that there is widespread 5G coverage throughout Phetchaburi Province, including urban and rural areas, especially for the 5G network with a good performance provided by one of the mobile network operators. In addition, the statistical analysis reveals differences in 5G performances between the inner city and the outer city of Phetchaburi Province, particularly for download speeds (p-value < 0.001).<|reference_end|> | arxiv | @article{pornpongtechavanich2024exploring,
title={Exploring 5G Network Performance: Comparison of Inner and Outer City
Areas in Phetchaburi Province},
author={Phisit Pornpongtechavanich and Therdpong Daengsi},
journal={arXiv preprint arXiv:2410.04066},
year={2024},
archivePrefix={arXiv},
eprint={2410.04066},
primaryClass={cs.NI}
} | pornpongtechavanich2024exploring |
arxiv-666003 | 2410.04068 | ECon: On the Detection and Resolution of Evidence Conflicts | <|reference_start|>ECon: On the Detection and Resolution of Evidence Conflicts: The rise of large language models (LLMs) has significantly influenced the quality of information in decision-making systems, leading to the prevalence of AI-generated content and challenges in detecting misinformation and managing conflicting information, or "inter-evidence conflicts." This study introduces a method for generating diverse, validated evidence conflicts to simulate real-world misinformation scenarios. We evaluate conflict detection methods, including Natural Language Inference (NLI) models, factual consistency (FC) models, and LLMs, on these conflicts (RQ1) and analyze LLMs' conflict resolution behaviors (RQ2). Our key findings include: (1) NLI and LLM models exhibit high precision in detecting answer conflicts, though weaker models suffer from low recall; (2) FC models struggle with lexically similar answer conflicts, while NLI and LLM models handle these better; and (3) stronger models like GPT-4 show robust performance, especially with nuanced conflicts. For conflict resolution, LLMs often favor one piece of conflicting evidence without justification and rely on internal knowledge if they have prior beliefs.<|reference_end|> | arxiv | @article{jiayang2024econ:,
title={ECon: On the Detection and Resolution of Evidence Conflicts},
author={Cheng Jiayang, Chunkit Chan, Qianqian Zhuang, Lin Qiu, Tianhang Zhang,
Tengxiao Liu, Yangqiu Song, Yue Zhang, Pengfei Liu, Zheng Zhang},
journal={arXiv preprint arXiv:2410.04068},
year={2024},
archivePrefix={arXiv},
eprint={2410.04068},
primaryClass={cs.CL cs.AI}
} | jiayang2024econ: |
arxiv-666004 | 2410.04070 | PAD: Personalized Alignment at Decoding-Time | <|reference_start|>PAD: Personalized Alignment at Decoding-Time: Aligning with personalized preferences, which vary significantly across cultural, educational, and political differences, poses a significant challenge due to the computational costs and data demands of traditional alignment methods. In response, this paper presents Personalized Alignment at Decoding-time (PAD), a novel framework designed to align LLM outputs with diverse personalized preferences during the inference phase, eliminating the need for additional training. By introducing a unique personalized reward modeling strategy, this framework decouples the text generation process from personalized preferences, facilitating the generation of generalizable token-level personalized rewards. The PAD algorithm leverages these rewards to guide the decoding process, dynamically tailoring the base model's predictions to personalized preferences. Extensive experimental results demonstrate that PAD not only outperforms existing training-based alignment methods in terms of aligning with diverse preferences but also shows significant generalizability to preferences unseen during training and scalability across different base models. This work advances the capability of LLMs to meet user needs in real-time applications, presenting a substantial step forward in personalized LLM alignment.<|reference_end|> | arxiv | @article{chen2024pad:,
title={PAD: Personalized Alignment of LLMs at Decoding-Time},
author={Ruizhe Chen, Xiaotian Zhang, Meng Luo, Wenhao Chai, and Zuozhu Liu},
journal={arXiv preprint arXiv:2410.04070},
year={2024},
archivePrefix={arXiv},
eprint={2410.04070},
primaryClass={cs.CL cs.AI}
} | chen2024pad: |
arxiv-666005 | 2410.04071 | Pseudo-Deterministic Construction of Irreducible Polynomials over Finite Fields | <|reference_start|>Pseudo-Deterministic Construction of Irreducible Polynomials over Finite Fields: We present a polynomial-time pseudo-deterministic algorithm for constructing irreducible polynomial of degree $d$ over finite field $\mathbb{F}_q$. A pseudo-deterministic algorithm is allowed to use randomness, but with high probability it must output a canonical irreducible polynomial. Our construction runs in time $\tilde{O}(d^4 \log^4{q})$. Our construction extends Shoup's deterministic algorithm (FOCS 1988) for the same problem, which runs in time $\tilde{O}(d^4 p^{\frac{1}{2}} \log^4{q})$ (where $p$ is the characteristic of the field $\mathbb{F}_q$). Shoup had shown a reduction from constructing irreducible polynomials to factoring polynomials over finite fields. We show that by using a fast randomized factoring algorithm, the above reduction yields an efficient pseudo-deterministic algorithm for constructing irreducible polynomials over finite fields.<|reference_end|> | arxiv | @article{rai2024pseudo-deterministic,
title={Pseudo-Deterministic Construction of Irreducible Polynomials over Finite
Fields},
author={Shanthanu S Rai},
journal={arXiv preprint arXiv:2410.04071},
year={2024},
archivePrefix={arXiv},
eprint={2410.04071},
primaryClass={cs.DS cs.CC math.NT}
} | rai2024pseudo-deterministic |
arxiv-666006 | 2410.04072 | Multi-Round Region-Based Optimization for Scene Sketching | <|reference_start|>Multi-Round Region-Based Optimization for Scene Sketching: Scene sketching is to convert a scene into a simplified, abstract representation that captures the essential elements and composition of the original scene. It requires semantic understanding of the scene and consideration of different regions within the scene. Since scenes often contain diverse visual information across various regions, such as foreground objects, background elements, and spatial divisions, dealing with these different regions poses unique difficulties. In this paper, we define a sketch as some sets of Bezier curves. We optimize the different regions of input scene in multiple rounds. In each round of optimization, strokes sampled from the next region can seamlessly be integrated into the sketch generated in the previous round of optimization. We propose additional stroke initialization method to ensure the integrity of the scene and the convergence of optimization. A novel CLIP-Based Semantic loss and a VGG-Based Feature loss are utilized to guide our multi-round optimization. Extensive experimental results on the quality and quantity of the generated sketches confirm the effectiveness of our method.<|reference_end|> | arxiv | @article{liang2024multi-round,
title={Multi-Round Region-Based Optimization for Scene Sketching},
author={Yiqi Liang, Ying Liu, Dandan Long, Ruihui Li},
journal={arXiv preprint arXiv:2410.04072},
year={2024},
archivePrefix={arXiv},
eprint={2410.04072},
primaryClass={cs.CV cs.AI}
} | liang2024multi-round |
arxiv-666007 | 2410.04074 | On Eliciting Syntax from Language Models via Hashing | <|reference_start|>On Eliciting Syntax from Language Models via Hashing: Unsupervised parsing, also known as grammar induction, aims to infer syntactic structure from raw text. Recently, binary representation has exhibited remarkable information-preserving capabilities at both lexicon and syntax levels. In this paper, we explore the possibility of leveraging this capability to deduce parsing trees from raw text, relying solely on the implicitly induced grammars within models. To achieve this, we upgrade the bit-level CKY from zero-order to first-order to encode the lexicon and syntax in a unified binary representation space, switch training from supervised to unsupervised under the contrastive hashing framework, and introduce a novel loss function to impose stronger yet balanced alignment signals. Our model shows competitive performance on various datasets, therefore, we claim that our method is effective and efficient enough to acquire high-quality parsing trees from pre-trained language models at a low cost.<|reference_end|> | arxiv | @article{wang2024on,
title={On Eliciting Syntax from Language Models via Hashing},
author={Yiran Wang, Masao Utiyama},
journal={arXiv preprint arXiv:2410.04074},
year={2024},
archivePrefix={arXiv},
eprint={2410.04074},
primaryClass={cs.CL cs.AI cs.LG}
} | wang2024on |
arxiv-666008 | 2410.04075 | PsFuture: A Pseudo-Future-based Zero-Shot Adaptive Policy for Simultaneous Machine Translation | <|reference_start|>PsFuture: A Pseudo-Future-based Zero-Shot Adaptive Policy for Simultaneous Machine Translation: Simultaneous Machine Translation (SiMT) requires target tokens to be generated in real-time as streaming source tokens are consumed. Traditional approaches to SiMT typically require sophisticated architectures and extensive parameter configurations for training adaptive read/write policies, which in turn demand considerable computational power and memory. We propose PsFuture, the first zero-shot adaptive read/write policy for SiMT, enabling the translation model to independently determine read/write actions without the necessity for additional training. Furthermore, we introduce a novel training strategy, Prefix-to-Full (P2F), specifically tailored to adjust offline translation models for SiMT applications, exploiting the advantages of the bidirectional attention mechanism inherent in offline models. Experiments across multiple benchmarks demonstrate that our zero-shot policy attains performance on par with strong baselines and the P2F method can further enhance performance, achieving an outstanding trade-off between translation quality and latency.<|reference_end|> | arxiv | @article{zhao2024psfuture:,
title={PsFuture: A Pseudo-Future-based Zero-Shot Adaptive Policy for
Simultaneous Machine Translation},
author={Libo Zhao, Jing Li, Ziqian Zeng},
journal={arXiv preprint arXiv:2410.04075},
year={2024},
archivePrefix={arXiv},
eprint={2410.04075},
primaryClass={cs.CL}
} | zhao2024psfuture: |
arxiv-666009 | 2410.04078 | TeachTune: Reviewing Pedagogical Agents Against Diverse Student Profiles with Simulated Students | <|reference_start|>TeachTune: Reviewing Pedagogical Agents Against Diverse Student Profiles with Simulated Students: Large language models (LLMs) can empower educators to build pedagogical conversational agents (PCAs) customized for their students. As students have different prior knowledge and motivation levels, educators must evaluate the adaptivity of their PCAs to diverse students. Existing chatbot evaluation methods (e.g., direct chat and benchmarks) are either manually intensive for multiple iterations or limited to testing only single-turn interactions. We present TeachTune, where educators can create simulated students and review PCAs by observing automated chats between PCAs and simulated students. Our technical pipeline instructs an LLM-based student to simulate prescribed knowledge levels and characteristics, helping educators explore diverse conversation patterns. Our pipeline could produce simulated students whose behaviors correlate highly to their input knowledge and motivation levels within 5% and 10% accuracy gaps. Thirty science teachers designed PCAs in a between-subjects study, and using TeachTune resulted in a lower task load and higher student profile coverage over a baseline.<|reference_end|> | arxiv | @article{jin2024teachtune:,
title={TeachTune: Reviewing Pedagogical Agents Against Diverse Student Profiles
with Simulated Students},
author={Hyoungwook Jin, Minju Yoo, Jeongeon Park, Yokyung Lee, Xu Wang, and
Juho Kim},
journal={arXiv preprint arXiv:2410.04078},
year={2024},
archivePrefix={arXiv},
eprint={2410.04078},
primaryClass={cs.HC}
} | jin2024teachtune: |
arxiv-666010 | 2410.04080 | High Probability Bound for Cross-Learning Contextual Bandits with Unknown Context Distributions | <|reference_start|>High Probability Bound for Cross-Learning Contextual Bandits with Unknown Context Distributions: Motivated by applications in online bidding and sleeping bandits, we examine the problem of contextual bandits with cross learning, where the learner observes the loss associated with the action across all possible contexts, not just the current round's context. Our focus is on a setting where losses are chosen adversarially, and contexts are sampled i.i.d. from a specific distribution. This problem was first studied by Balseiro et al. (2019), who proposed an algorithm that achieves near-optimal regret under the assumption that the context distribution is known in advance. However, this assumption is often unrealistic. To address this issue, Schneider and Zimmert (2023) recently proposed a new algorithm that achieves nearly optimal expected regret. It is well-known that expected regret can be significantly weaker than high-probability bounds. In this paper, we present a novel, in-depth analysis of their algorithm and demonstrate that it actually achieves near-optimal regret with high probability. There are steps in the original analysis by Schneider and Zimmert (2023) that lead only to an expected bound by nature. In our analysis, we introduce several new insights. Specifically, we make extensive use of the weak dependency structure between different epochs, which was overlooked in previous analyses. Additionally, standard martingale inequalities are not directly applicable, so we refine martingale inequalities to complete our analysis.<|reference_end|> | arxiv | @article{huang2024high,
title={High Probability Bound for Cross-Learning Contextual Bandits with
Unknown Context Distributions},
author={Ruiyuan Huang, Zengfeng Huang},
journal={arXiv preprint arXiv:2410.04080},
year={2024},
archivePrefix={arXiv},
eprint={2410.04080},
primaryClass={cs.LG}
} | huang2024high |
arxiv-666011 | 2410.04081 | $\epsilon$-VAE: Denoising as Visual Decoding | <|reference_start|>$\epsilon$-VAE: Denoising as Visual Decoding: In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space. For high-dimensional visual data, it reduces redundancy and emphasizes key features for high-quality generation. Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input. In this work, we offer a new perspective by proposing denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder. We evaluate our approach by assessing both reconstruction (rFID) and generation quality (FID), comparing it to state-of-the-art autoencoding approach. We hope this work offers new insights into integrating iterative generation and autoencoding for improved compression and generation.<|reference_end|> | arxiv | @article{zhao2024$\epsilon$-vae:,
title={$\epsilon$-VAE: Denoising as Visual Decoding},
author={Long Zhao, Sanghyun Woo, Ziyu Wan, Yandong Li, Han Zhang, Boqing Gong,
Hartwig Adam, Xuhui Jia, Ting Liu},
journal={arXiv preprint arXiv:2410.04081},
year={2024},
archivePrefix={arXiv},
eprint={2410.04081},
primaryClass={cs.CV cs.AI eess.IV}
} | zhao2024$\epsilon$-vae: |
arxiv-666012 | 2410.04084 | Taming the Tail: Leveraging Asymmetric Loss and Pade Approximation to Overcome Medical Image Long-Tailed Class Imbalance | <|reference_start|>Taming the Tail: Leveraging Asymmetric Loss and Pade Approximation to Overcome Medical Image Long-Tailed Class Imbalance: Long-tailed problems in healthcare emerge from data imbalance due to variability in the prevalence and representation of different medical conditions, warranting the requirement of precise and dependable classification methods. Traditional loss functions such as cross-entropy and binary cross-entropy are often inadequate due to their inability to address the imbalances between the classes with high representation and the classes with low representation found in medical image datasets. We introduce a novel polynomial loss function based on Pade approximation, designed specifically to overcome the challenges associated with long-tailed classification. This approach incorporates asymmetric sampling techniques to better classify under-represented classes. We conducted extensive evaluations on three publicly available medical datasets and a proprietary medical dataset. Our implementation of the proposed loss function is open-sourced in the public repository:https://github.com/ipankhi/ALPA.<|reference_end|> | arxiv | @article{kashyap2024taming,
title={Taming the Tail: Leveraging Asymmetric Loss and Pade Approximation to
Overcome Medical Image Long-Tailed Class Imbalance},
author={Pankhi Kashyap, Pavni Tandon, Sunny Gupta, Abhishek Tiwari, Ritwik
Kulkarni, Kshitij Sharad Jadhav},
journal={arXiv preprint arXiv:2410.04084},
year={2024},
archivePrefix={arXiv},
eprint={2410.04084},
primaryClass={cs.CV cs.AI cs.LG}
} | kashyap2024taming |
arxiv-666013 | 2410.04087 | GlobeSumm: A Challenging Benchmark Towards Unifying Multi-lingual, Cross-lingual and Multi-document News Summarization | <|reference_start|>GlobeSumm: A Challenging Benchmark Towards Unifying Multi-lingual, Cross-lingual and Multi-document News Summarization: News summarization in today's global scene can be daunting with its flood of multilingual content and varied viewpoints from different sources. However, current studies often neglect such real-world scenarios as they tend to focus solely on either single-language or single-document tasks. To bridge this gap, we aim to unify Multi-lingual, Cross-lingual and Multi-document Summarization into a novel task, i.e., MCMS, which encapsulates the real-world requirements all-in-one. Nevertheless, the lack of a benchmark inhibits researchers from adequately studying this invaluable problem. To tackle this, we have meticulously constructed the GLOBESUMM dataset by first collecting a wealth of multilingual news reports and restructuring them into event-centric format. Additionally, we introduce the method of protocol-guided prompting for high-quality and cost-effective reference annotation. In MCMS, we also highlight the challenge of conflicts between news reports, in addition to the issues of redundancies and omissions, further enhancing the complexity of GLOBESUMM. Through extensive experimental analysis, we validate the quality of our dataset and elucidate the inherent challenges of the task. We firmly believe that GLOBESUMM, given its challenging nature, will greatly contribute to the multilingual communities and the evaluation of LLMs.<|reference_end|> | arxiv | @article{ye2024globesumm:,
title={GlobeSumm: A Challenging Benchmark Towards Unifying Multi-lingual,
Cross-lingual and Multi-document News Summarization},
author={Yangfan Ye, Xiachong Feng, Xiaocheng Feng, Weitao Ma, Libo Qin,
Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin},
journal={arXiv preprint arXiv:2410.04087},
year={2024},
archivePrefix={arXiv},
eprint={2410.04087},
primaryClass={cs.CL cs.AI}
} | ye2024globesumm: |
arxiv-666014 | 2410.04088 | Cross Resolution Encoding-Decoding For Detection Transformers | <|reference_start|>Cross Resolution Encoding-Decoding For Detection Transformers: Detection Transformers (DETR) are renowned object detection pipelines, however computationally efficient multiscale detection using DETR is still challenging. In this paper, we propose a Cross-Resolution Encoding-Decoding (CRED) mechanism that allows DETR to achieve the accuracy of high-resolution detection while having the speed of low-resolution detection. CRED is based on two modules; Cross Resolution Attention Module (CRAM) and One Step Multiscale Attention (OSMA). CRAM is designed to transfer the knowledge of low-resolution encoder output to a high-resolution feature. While OSMA is designed to fuse multiscale features in a single step and produce a feature map of a desired resolution enriched with multiscale information. When used in prominent DETR methods, CRED delivers accuracy similar to the high-resolution DETR counterpart in roughly 50% fewer FLOPs. Specifically, state-of-the-art DN-DETR, when used with CRED (calling CRED-DETR), becomes 76% faster, with ~50% reduced FLOPs than its high-resolution counterpart with 202 G FLOPs on MS-COCO benchmark. We plan to release pretrained CRED-DETRs for use by the community. Code: https://github.com/ashishkumar822/CRED-DETR<|reference_end|> | arxiv | @article{kumar2024cross,
title={Cross Resolution Encoding-Decoding For Detection Transformers},
author={Ashish Kumar, Jaesik Park},
journal={arXiv preprint arXiv:2410.04088},
year={2024},
archivePrefix={arXiv},
eprint={2410.04088},
primaryClass={cs.CV}
} | kumar2024cross |
arxiv-666015 | 2410.04089 | Designing Concise ConvNets with Columnar Stages | <|reference_start|>Designing Concise ConvNets with Columnar Stages: In the era of vision Transformers, the recent success of VanillaNet shows the huge potential of simple and concise convolutional neural networks (ConvNets). Where such models mainly focus on runtime, it is also crucial to simultaneously focus on other aspects, e.g., FLOPs, parameters, etc, to strengthen their utility further. To this end, we introduce a refreshing ConvNet macro design called Columnar Stage Network (CoSNet). CoSNet has a systematically developed simple and concise structure, smaller depth, low parameter count, low FLOPs, and attention-less operations, well suited for resource-constrained deployment. The key novelty of CoSNet is deploying parallel convolutions with fewer kernels fed by input replication, using columnar stacking of these convolutions, and minimizing the use of 1x1 convolution layers. Our comprehensive evaluations show that CoSNet rivals many renowned ConvNets and Transformer designs under resource-constrained scenarios. Code: https://github.com/ashishkumar822/CoSNet<|reference_end|> | arxiv | @article{kumar2024designing,
title={Designing Concise ConvNets with Columnar Stages},
author={Ashish Kumar, Jaesik Park},
journal={arXiv preprint arXiv:2410.04089},
year={2024},
archivePrefix={arXiv},
eprint={2410.04089},
primaryClass={cs.CV}
} | kumar2024designing |
arxiv-666016 | 2410.04090 | High-Speed Stereo Visual SLAM for Low-Powered Computing Devices | <|reference_start|>High-Speed Stereo Visual SLAM for Low-Powered Computing Devices: We present an accurate and GPU-accelerated Stereo Visual SLAM design called Jetson-SLAM. It exhibits frame-processing rates above 60FPS on NVIDIA's low-powered 10W Jetson-NX embedded computer and above 200FPS on desktop-grade 200W GPUs, even in stereo configuration and in the multiscale setting. Our contributions are threefold: (i) a Bounded Rectification technique to prevent tagging many non-corner points as a corner in FAST detection, improving SLAM accuracy. (ii) A novel Pyramidal Culling and Aggregation (PyCA) technique that yields robust features while suppressing redundant ones at high speeds by harnessing a GPU device. PyCA uses our new Multi-Location Per Thread culling strategy (MLPT) and Thread-Efficient Warp-Allocation (TEWA) scheme for GPU to enable Jetson-SLAM achieving high accuracy and speed on embedded devices. (iii) Jetson-SLAM library achieves resource efficiency by having a data-sharing mechanism. Our experiments on three challenging datasets: KITTI, EuRoC, and KAIST-VIO, and two highly accurate SLAM backends: Full-BA and ICE-BA show that Jetson-SLAM is the fastest available accurate and GPU-accelerated SLAM system (Fig. 1).<|reference_end|> | arxiv | @article{kumar2024high-speed,
title={High-Speed Stereo Visual SLAM for Low-Powered Computing Devices},
author={Ashish Kumar, Jaesik Park, Laxmidhar Behera},
journal={IEEE Robotics & Automation Letters, 2023},
year={2024},
archivePrefix={arXiv},
eprint={2410.04090},
primaryClass={cs.RO cs.CV}
} | kumar2024high-speed |
arxiv-666017 | 2410.04091 | Cross-Lingual Query-by-Example Spoken Term Detection: A Transformer-Based Approach | <|reference_start|>Cross-Lingual Query-by-Example Spoken Term Detection: A Transformer-Based Approach: Query-by-example spoken term detection (QbE-STD) is typically constrained by transcribed data scarcity and language specificity. This paper introduces a novel, language-agnostic QbE-STD model leveraging image processing techniques and transformer architecture. By employing a pre-trained XLSR-53 network for feature extraction and a Hough transform for detection, our model effectively searches for user-defined spoken terms within any audio file. Experimental results across four languages demonstrate significant performance gains (19-54%) over a CNN-based baseline. While processing time is improved compared to DTW, accuracy remains inferior. Notably, our model offers the advantage of accurately counting query term repetitions within the target audio.<|reference_end|> | arxiv | @article{fatemeh2024cross-lingual,
title={Cross-Lingual Query-by-Example Spoken Term Detection: A
Transformer-Based Approach},
author={Allahdadi Fatemeh, Mahdian Toroghi Rahil, Zareian Hassan},
journal={arXiv preprint arXiv:2410.04091},
year={2024},
archivePrefix={arXiv},
eprint={2410.04091},
primaryClass={cs.LG cs.SD eess.AS}
} | fatemeh2024cross-lingual |
arxiv-666018 | 2410.04094 | BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts | <|reference_start|>BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts: Despite the continuous progress of Large Language Models (LLMs) across various tasks, their performance on mathematical problems and reasoning tasks remains limited. This limitation can be attributed, among other factors, to the inherent difficulty of these problems and the fact that solutions often consist of multiple steps, potentially of varying nature, making it challenging for a single prompting technique to execute all required steps. To address this, we introduce BloomWise, a new prompting technique, inspired by Bloom's Taxonomy, aiming to improve LLMs' performance in solving such problems by encouraging them to approach the problem starting from simple, i.e., remembering, and progressing to higher cognitive skills, i.e., analyzing, until the correct solution is reached. The decision regarding the need to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM. Thus, we encourage the LLM to deploy the appropriate cognitive processes. In extensive experiments across 4 popular math reasoning datasets, we have demonstrated the effectiveness of our proposed approach. We also present extensive ablations, analyzing the strengths of each module within our system.<|reference_end|> | arxiv | @article{zoumpoulidi2024bloomwise:,
title={BloomWise: Enhancing Problem-Solving capabilities of Large Language
Models using Bloom's-Taxonomy-Inspired Prompts},
author={Maria-Eleni Zoumpoulidi, Georgios Paraskevopoulos, Alexandros
Potamianos},
journal={arXiv preprint arXiv:2410.04094},
year={2024},
archivePrefix={arXiv},
eprint={2410.04094},
primaryClass={cs.CL}
} | zoumpoulidi2024bloomwise: |
arxiv-666019 | 2410.04096 | Sinc Kolmogorov-Arnold Network and Its Applications on Physics-informed Neural Networks | <|reference_start|>Sinc Kolmogorov-Arnold Network and Its Applications on Physics-informed Neural Networks: In this paper, we propose to use Sinc interpolation in the context of Kolmogorov-Arnold Networks, neural networks with learnable activation functions, which recently gained attention as alternatives to multilayer perceptron. Many different function representations have already been tried, but we show that Sinc interpolation proposes a viable alternative, since it is known in numerical analysis to represent well both smooth functions and functions with singularities. This is important not only for function approximation but also for the solutions of partial differential equations with physics-informed neural networks. Through a series of experiments, we show that SincKANs provide better results in almost all of the examples we have considered.<|reference_end|> | arxiv | @article{yu2024sinc,
title={Sinc Kolmogorov-Arnold Network and Its Applications on Physics-informed
Neural Networks},
author={Tianchi Yu, Jingwei Qiu, Jiang Yang, Ivan Oseledets},
journal={arXiv preprint arXiv:2410.04096},
year={2024},
archivePrefix={arXiv},
eprint={2410.04096},
primaryClass={cs.LG cs.AI cs.NA cs.NE math.NA physics.comp-ph}
} | yu2024sinc |
arxiv-666020 | 2410.04097 | TV-based Deep 3D Self Super-Resolution for fMRI | <|reference_start|>TV-based Deep 3D Self Super-Resolution for fMRI: While functional Magnetic Resonance Imaging (fMRI) offers valuable insights into cognitive processes, its inherent spatial limitations pose challenges for detailed analysis of the fine-grained functional architecture of the brain. More specifically, MRI scanner and sequence specifications impose a trade-off between temporal resolution, spatial resolution, signal-to-noise ratio, and scan time. Deep Learning (DL) Super-Resolution (SR) methods have emerged as a promising solution to enhance fMRI resolution, generating high-resolution (HR) images from low-resolution (LR) images typically acquired with lower scanning times. However, most existing SR approaches depend on supervised DL techniques, which require training ground truth (GT) HR data, which is often difficult to acquire and simultaneously sets a bound for how far SR can go. In this paper, we introduce a novel self-supervised DL SR model that combines a DL network with an analytical approach and Total Variation (TV) regularization. Our method eliminates the need for external GT images, achieving competitive performance compared to supervised DL techniques and preserving the functional maps.<|reference_end|> | arxiv | @article{pérez-bueno2024tv-based,
title={TV-based Deep 3D Self Super-Resolution for fMRI},
  author={Fernando Pérez-Bueno, Hongwei Bran Li, Shahin Nasr, Cesar
Caballero-Gaudes, Juan Eugenio Iglesias},
journal={arXiv preprint arXiv:2410.04097},
year={2024},
archivePrefix={arXiv},
eprint={2410.04097},
primaryClass={eess.IV cs.CV}
} | pérez-bueno2024tv-based |
arxiv-666021 | 2410.04098 | The OCON model: an old but green solution for distributable supervised classification for acoustic monitoring in smart cities | <|reference_start|>The OCON model: an old but green solution for distributable supervised classification for acoustic monitoring in smart cities: This paper explores a structured application of the One-Class approach and the One-Class-One-Network model for supervised classification tasks, focusing on vowel phonemes classification and speakers recognition for the Automatic Speech Recognition (ASR) domain. For our case-study, the ASR model runs on a proprietary sensing and lightning system, exploited to monitor acoustic and air pollution on urban streets. We formalize combinations of pseudo-Neural Architecture Search and Hyper-Parameters Tuning experiments, using an informed grid-search methodology, to achieve classification accuracy comparable to nowadays most complex architectures, delving into the speaker recognition and energy efficiency aspects. Despite its simplicity, our model proposal has a very good chance to generalize the language and speaker genders context for widespread applicability in computational constrained contexts, proved by relevant statistical and performance metrics. Our experiments code is openly accessible on our GitHub.<|reference_end|> | arxiv | @article{giacomelli2024the,
title={The OCON model: an old but green solution for distributable supervised
classification for acoustic monitoring in smart cities},
author={Stefano Giacomelli, Marco Giordano, Claudia Rinaldi},
journal={in Proceedings of the 5th IEEE International Symposium on the
Internet of Sounds (IEEE IS2 2024, https://internetofsounds.net/is2_2024/)},
year={2024},
archivePrefix={arXiv},
eprint={2410.04098},
primaryClass={cs.SD cs.AI eess.AS}
} | giacomelli2024the |
arxiv-666022 | 2410.04103 | A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models | <|reference_start|>A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models: Due to the continuous emergence of new data, version updates have become an indispensable requirement for Large Language Models (LLMs). The training paradigms for version updates of LLMs include pre-training from scratch (PTFS) and continual pre-training (CPT). Preliminary experiments demonstrate that PTFS achieves better pre-training performance, while CPT has lower training cost. Moreover, their performance and training cost gaps widen progressively with version updates. To investigate the underlying reasons for this phenomenon, we analyze the effect of learning rate adjustments during the two stages of CPT: preparing an initialization checkpoint and continual pre-training based on this checkpoint. We find that a large learning rate in the first stage and a complete learning rate decay process in the second stage are crucial for version updates of LLMs. Hence, we propose a learning rate path switching training paradigm. Our paradigm comprises one main path, where we pre-train a LLM with the maximal learning rate, and multiple branching paths, each of which corresponds to an update of the LLM with newly-added training data. Extensive experiments demonstrate the effectiveness and generalization of our paradigm. Particularly, when training four versions of LLMs, our paradigm reduces the total training cost to 58% compared to PTFS, while maintaining comparable pre-training performance.<|reference_end|> | arxiv | @article{wang2024a,
title={A Learning Rate Path Switching Training Paradigm for Version Updates of
Large Language Models},
author={Zhihao Wang, Shiyu Liu, Jianheng Huang, Zheng Wang, Yixuan Liao,
Xiaoxin Chen, Junfeng Yao, Jinsong Su},
journal={arXiv preprint arXiv:2410.04103},
year={2024},
archivePrefix={arXiv},
eprint={2410.04103},
primaryClass={cs.CL}
} | wang2024a |
arxiv-666023 | 2410.04107 | TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | <|reference_start|>TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions: Large Vision-Language Models (LVLMs) have achieved remarkable progress on visual perception and linguistic interpretation. Despite their impressive capabilities across various tasks, LVLMs still suffer from the issue of hallucination, which involves generating content that is incorrect or unfaithful to the visual or textual inputs. Traditional benchmarks, such as MME and POPE, evaluate hallucination in LVLMs within the scope of Visual Question Answering (VQA) using answerable questions. However, some questions are unanswerable due to insufficient information in the images, and the performance of LVLMs on such unanswerable questions remains underexplored. To bridge this research gap, we propose TUBench, a benchmark specifically designed to evaluate the reliability of LVLMs using unanswerable questions. TUBench comprises an extensive collection of high-quality, unanswerable questions that are meticulously crafted using ten distinct strategies. To thoroughly evaluate LVLMs, the unanswerable questions in TUBench are based on images from four diverse domains as visual contexts: screenshots of code snippets, natural images, geometry diagrams, and screenshots of statistical tables. These unanswerable questions are tailored to test LVLMs' trustworthiness in code reasoning, commonsense reasoning, geometric reasoning, and mathematical reasoning related to tables, respectively. We conducted a comprehensive quantitative evaluation of 28 leading foundational models on TUBench, with Gemini-1.5-Pro, the top-performing model, achieving an average accuracy of 69.2%, and GPT-4o, the third-ranked model, reaching 66.7% average accuracy, in determining whether questions are answerable. TUBench is available at https://github.com/NLPCode/TUBench.<|reference_end|> | arxiv | @article{he2024tubench:,
title={TUBench: Benchmarking Large Vision-Language Models on Trustworthiness
with Unanswerable Questions},
author={Xingwei He, Qianru Zhang, A-Long Jin, Yuan Yuan, Siu-Ming Yiu},
journal={arXiv preprint arXiv:2410.04107},
year={2024},
archivePrefix={arXiv},
eprint={2410.04107},
primaryClass={cs.CV cs.CL}
} | he2024tubench: |
arxiv-666024 | 2410.04108 | On the Sample Complexity of a Policy Gradient Algorithm with Occupancy Approximation for General Utility Reinforcement Learning | <|reference_start|>On the Sample Complexity of a Policy Gradient Algorithm with Occupancy Approximation for General Utility Reinforcement Learning: Reinforcement learning with general utilities has recently gained attention thanks to its ability to unify several problems, including imitation learning, pure exploration, and safe RL. However, prior work for solving this general problem in a unified way has mainly focused on the tabular setting. This is restrictive when considering larger state-action spaces because of the need to estimate occupancy measures during policy optimization. In this work, we address this issue and propose to approximate occupancy measures within a function approximation class using maximum likelihood estimation (MLE). We propose a simple policy gradient algorithm (PG-OMA) where an actor updates the policy parameters to maximize the general utility objective whereas a critic approximates the occupancy measure using MLE. We provide a sample complexity analysis of PG-OMA showing that our occupancy measure estimation error only scales with the dimension of our function approximation class rather than the size of the state action space. Under suitable assumptions, we establish first order stationarity and global optimality performance bounds for the proposed PG-OMA algorithm for nonconcave and concave general utilities respectively. We complement our methodological and theoretical findings with promising empirical results showing the scalability potential of our approach compared to existing tabular count-based approaches.<|reference_end|> | arxiv | @article{barakat2024on,
title={On the Sample Complexity of a Policy Gradient Algorithm with Occupancy
Approximation for General Utility Reinforcement Learning},
author={Anas Barakat, Souradip Chakraborty, Peihong Yu, Pratap Tokekar, Amrit
Singh Bedi},
journal={arXiv preprint arXiv:2410.04108},
year={2024},
archivePrefix={arXiv},
eprint={2410.04108},
primaryClass={cs.LG cs.AI}
} | barakat2024on |
arxiv-666025 | 2410.04111 | 180 Days After EIP-4844: Will Blob Sharing Solve Dilemma for Small Rollups? | <|reference_start|>180 Days After EIP-4844: Will Blob Sharing Solve Dilemma for Small Rollups?: The introduction of blobs through EIP-4844 has significantly reduced the Data Availability (DA) costs for rollups on Ethereum. However, due to the fixed size of blobs at 128 KB, rollups with low data throughput face a dilemma: they either use blobs inefficiently or decrease the frequency of DA submissions. Blob sharing, where multiple rollups share a single blob, has been proposed as a solution to this problem. This paper examines the effectiveness of blob sharing based on real-world data collected approximately six months after the implementation of EIP-4844. By simulating cost changes using a simple blob sharing format, we demonstrate that blob sharing can substantially improve the costs and DA service quality for small rollups, effectively resolving their dilemma. Notably, we observed cost reductions in USD exceeding 90% for most of the rollups when they cooperate, attributable to the smoothing effect of the blob base fee achieved through blob sharing.<|reference_end|> | arxiv | @article{lee2024180,
title={180 Days After EIP-4844: Will Blob Sharing Solve Dilemma for Small
Rollups?},
author={Suhyeon Lee},
journal={arXiv preprint arXiv:2410.04111},
year={2024},
archivePrefix={arXiv},
eprint={2410.04111},
primaryClass={cs.CR cs.DC}
} | lee2024180 |
arxiv-666026 | 2410.04112 | Exploring LLM-based Data Annotation Strategies for Medical Dialogue Preference Alignment | <|reference_start|>Exploring LLM-based Data Annotation Strategies for Medical Dialogue Preference Alignment: This research examines the use of Reinforcement Learning from AI Feedback (RLAIF) techniques to improve healthcare dialogue models, with the aim of tackling the challenges of preference-aligned data annotation while reducing the reliance on medical experts. We argue that the primary challenges in current RLAIF research for healthcare are the limitations of automated evaluation methods and the difficulties in accurately representing physician preferences. To address these challenges, we present a new evaluation framework based on standardized patient examinations. This framework is designed to objectively assess the effectiveness of large language models (LLMs) in guiding users and following instructions, enabling a comprehensive comparison across different models. Furthermore, our investigation of effective ways to express physician preferences using Constitutional AI algorithms highlighted the particular effectiveness of flowcharts. Utilizing this finding, we introduce an innovative agent-based approach for annotating preference data. This approach autonomously creates medical dialogue flows tailored to the patient's condition, demonstrates strong generalization abilities, and reduces the need for expert involvement. Our results show that the agent-based approach outperforms existing RLAIF annotation methods in standardized patient examinations and surpasses current open source medical dialogue LLMs in various test scenarios.<|reference_end|> | arxiv | @article{dou2024exploring,
title={Exploring LLM-based Data Annotation Strategies for Medical Dialogue
Preference Alignment},
author={Chengfeng Dou, Ying Zhang, Zhi Jin, Wenpin Jiao, Haiyan Zhao,
Yongqiang Zhao, Zhengwei Tao},
journal={arXiv preprint arXiv:2410.04112},
year={2024},
archivePrefix={arXiv},
eprint={2410.04112},
primaryClass={cs.CL}
} | dou2024exploring |
arxiv-666027 | 2410.04114 | Transport-Embedded Neural Architecture: Redefining the Landscape of physics aware neural models in fluid mechanics | <|reference_start|>Transport-Embedded Neural Architecture: Redefining the Landscape of physics aware neural models in fluid mechanics: This work introduces a new neural model which follows the transport equation by design. A physical problem, the Taylor-Green vortex, defined on a bi-periodic domain, is used as a benchmark to evaluate the performance of both the standard physics-informed neural network and our model (transport-embedded neural network). Results exhibit that while the standard physics-informed neural network fails to predict the solution accurately and merely returns the initial condition for the entire time span, our model successfully captures the temporal changes in the physics, particularly for high Reynolds numbers of the flow. Additionally, the ability of our model to prevent false minima can pave the way for addressing multiphysics problems, which are more prone to false minima, and help them accurately predict complex physics.<|reference_end|> | arxiv | @article{jafari2024transport-embedded,
title={Transport-Embedded Neural Architecture: Redefining the Landscape of
physics aware neural models in fluid mechanics},
author={Amirmahdi Jafari},
journal={arXiv preprint arXiv:2410.04114},
year={2024},
archivePrefix={arXiv},
eprint={2410.04114},
primaryClass={cs.CE cs.AI}
} | jafari2024transport-embedded |
arxiv-666028 | 2410.04117 | Logical Expressibility of Syntactic NL for Complementarity, Monotonicity, and Maximization | <|reference_start|>Logical Expressibility of Syntactic NL for Complementarity, Monotonicity, and Maximization: In a discussion on the computational complexity of ``parameterized'' NL (nondeterministic logarithmic-space complexity class), Syntactic NL or succinctly SNL was first introduced in 2017 as a ``syntactically''-defined natural subclass of NL using a restricted form of logical sentences, starting with second-order ``functional'' existential quantifiers followed by first-order universal quantifiers, in close connection to the so-called linear space hypothesis. We further explore various properties of this complexity class SNL. In particular, we consider the expressibility of ``complementary'' problems of SNL problems and introduce $\mu\mathrm{SNL}$, a variant of SNL by allowing the use of $\mu$-terms. As natural extensions of SNL, we further study the computational complexity of its monotone and optimization versions, respectively called MonoSNL and MAXSNL. While SNL does not enjoy the dichotomy theorem unless L$=$NL, we prove the dichotomy theorem for MonoSNL. We also consider a natural subclass of MAXSNL, called MAX$\tau$SNL, and show that all maximization problems in MAX$\tau$SNL are log-space approximable with only constant approximation ratios.<|reference_end|> | arxiv | @article{yamakami2024logical,
title={Logical Expressibility of Syntactic NL for Complementarity,
Monotonicity, and Maximization},
author={Tomoyuki Yamakami},
journal={arXiv preprint arXiv:2410.04117},
year={2024},
archivePrefix={arXiv},
eprint={2410.04117},
primaryClass={cs.CC}
} | yamakami2024logical |
arxiv-666029 | 2410.04118 | Riemann Sum Optimization for Accurate Integrated Gradients Computation | <|reference_start|>Riemann Sum Optimization for Accurate Integrated Gradients Computation: Integrated Gradients (IG) is a widely used algorithm for attributing the outputs of a deep neural network to its input features. Due to the absence of closed-form integrals for deep learning models, inaccurate Riemann Sum approximations are used to calculate IG. This often introduces undesirable errors in the form of high levels of noise, leading to false insights in the model's decision-making process. We introduce a framework, RiemannOpt, that minimizes these errors by optimizing the sample point selection for the Riemann Sum. Our algorithm is highly versatile and applicable to IG as well as its derivatives like Blur IG and Guided IG. RiemannOpt achieves up to 20% improvement in Insertion Scores. Additionally, it enables its users to curtail computational costs by up to four folds, thereby making it highly functional for constrained environments.<|reference_end|> | arxiv | @article{swain2024riemann,
title={Riemann Sum Optimization for Accurate Integrated Gradients Computation},
author={Swadesh Swain and Shree Singhi},
journal={arXiv preprint arXiv:2410.04118},
year={2024},
archivePrefix={arXiv},
eprint={2410.04118},
primaryClass={cs.LG cs.AI math.OC}
} | swain2024riemann |
arxiv-666030 | 2410.04120 | Rethinking Fair Representation Learning for Performance-Sensitive Tasks | <|reference_start|>Rethinking Fair Representation Learning for Performance-Sensitive Tasks: We investigate the prominent class of fair representation learning methods for bias mitigation. Using causal reasoning to define and formalise different sources of dataset bias, we reveal important implicit assumptions inherent to these methods. We prove fundamental limitations on fair representation learning when evaluation data is drawn from the same distribution as training data and run experiments across a range of medical modalities to examine the performance of fair representation learning under distribution shifts. Our results explain apparent contradictions in the existing literature and reveal how rarely considered causal and statistical aspects of the underlying data affect the validity of fair representation learning. We raise doubts about current evaluation practices and the applicability of fair representation learning methods in performance-sensitive settings. We argue that fine-grained analysis of dataset biases should play a key role in the field moving forward.<|reference_end|> | arxiv | @article{jones2024rethinking,
title={Rethinking Fair Representation Learning for Performance-Sensitive Tasks},
  author={Charles Jones, Fabio de Sousa Ribeiro, Mélanie Roschewitz, Daniel C.
Castro, Ben Glocker},
journal={arXiv preprint arXiv:2410.04120},
year={2024},
archivePrefix={arXiv},
eprint={2410.04120},
primaryClass={cs.LG cs.CY stat.ML}
} | jones2024rethinking |
arxiv-666031 | 2410.04122 | A branch-&-price approach to the unrooted maximum agreement forest problem | <|reference_start|>A branch-&-price approach to the unrooted maximum agreement forest problem: We propose the first branch-&-price algorithm for the maximum agreement forest problem on unrooted binary trees: given two unrooted X-labelled binary trees we seek to partition X into a minimum number of blocks such that the induced subtrees are disjoint and have the same topologies in both trees. We provide a dynamic programming algorithm for the weighted maximum agreement subtree problem to solve the pricing problem. When combined with rigorous polynomial-time pre-processing our branch-&-price algorithm exhibits (beyond) state-of-the-art performance.<|reference_end|> | arxiv | @article{frohn2024a,
title={A branch-&-price approach to the unrooted maximum agreement forest
problem},
author={Martin Frohn, Steven Kelk, Simona Vychytilova},
journal={arXiv preprint arXiv:2410.04122},
year={2024},
archivePrefix={arXiv},
eprint={2410.04122},
primaryClass={cs.DS math.OC q-bio.PE}
} | frohn2024a |
arxiv-666032 | 2410.04123 | WAVE-UNET: Wavelength based Image Reconstruction method using attention UNET for OCT images | <|reference_start|>WAVE-UNET: Wavelength based Image Reconstruction method using attention UNET for OCT images: In this work, we propose to leverage a deep-learning (DL) based reconstruction framework for high quality Swept-Source Optical Coherence Tomography (SS-OCT) images, by incorporating wavelength ({\lambda}) space interferometric fringes. Generally, the SS-OCT captured fringe is linear in wavelength space and if Inverse Discrete Fourier Transform (IDFT) is applied to extract depth-resolved spectral information, the resultant images are blurred due to the broadened Point Spread Function (PSF). Thus, the recorded wavelength space fringe is to be scaled to uniform grid in wavenumber (k) space using k-linearization and calibration involving interpolations which may result in loss of information along with increased system complexity. Another challenge in OCT is the speckle noise, inherent in the low coherence interferometry-based systems. Hence, we propose a systematic design methodology WAVE-UNET to reconstruct the high-quality OCT images directly from the {\lambda}-space to reduce the complexity. The novel design paradigm surpasses the linearization procedures and uses DL to enhance the realism and quality of raw {\lambda}-space scans. This framework uses modified UNET having attention gating and residual connections, with IDFT processed {\lambda}-space fringes as the input. The method consistently outperforms the traditional OCT system by generating good-quality B-scans with highly reduced time-complexity.<|reference_end|> | arxiv | @article{viqar2024wave-unet:,
title={WAVE-UNET: Wavelength based Image Reconstruction method using attention
UNET for OCT images},
author={Maryam Viqar, Erdem Sahin, Violeta Madjarova, Elena Stoykova, Keehoon
Hong},
journal={arXiv preprint arXiv:2410.04123},
year={2024},
doi={10.1117/12.3006615},
archivePrefix={arXiv},
eprint={2410.04123},
primaryClass={eess.IV cs.CV cs.LG physics.comp-ph physics.optics}
} | viqar2024wave-unet: |
arxiv-666033 | 2410.04128 | Optimizing Medical Image Segmentation with Advanced Decoder Design | <|reference_start|>Optimizing Medical Image Segmentation with Advanced Decoder Design: U-Net is widely used in medical image segmentation due to its simple and flexible architecture design. To address the challenges of scale and complexity in medical tasks, several variants of U-Net have been proposed. In particular, methods based on Vision Transformer (ViT), represented by Swin UNETR, have gained widespread attention in recent years. However, these improvements often focus on the encoder, overlooking the crucial role of the decoder in optimizing segmentation details. This design imbalance limits the potential for further enhancing segmentation performance. To address this issue, we analyze the roles of various decoder components, including upsampling method, skip connection, and feature extraction module, as well as the shortcomings of existing methods. Consequently, we propose Swin DER (i.e., Swin UNETR Decoder Enhanced and Refined) by specifically optimizing the design of these three components. Swin DER performs upsampling using learnable interpolation algorithm called offset coordinate neighborhood weighted up sampling (Onsampling) and replaces traditional skip connection with spatial-channel parallel attention gate (SCP AG). Additionally, Swin DER introduces deformable convolution along with attention mechanism in the feature extraction module of the decoder. Our model design achieves excellent results, surpassing other state-of-the-art methods on both the Synapse and the MSD brain tumor segmentation task. Code is available at: https://github.com/WillBeanYang/Swin-DER<|reference_end|> | arxiv | @article{yang2024optimizing,
title={Optimizing Medical Image Segmentation with Advanced Decoder Design},
author={Weibin Yang, Zhiqi Dong, Mingyuan Xu, Longwei Xu, Dehua Geng, Yusong
Li, Pengwei Wang},
journal={arXiv preprint arXiv:2410.04128},
year={2024},
archivePrefix={arXiv},
eprint={2410.04128},
primaryClass={eess.IV cs.CV}
} | yang2024optimizing |
arxiv-666034 | 2410.04129 | Trajectory elongation strategies with minimum curvature discontinuities for a Dubins vehicle | <|reference_start|>Trajectory elongation strategies with minimum curvature discontinuities for a Dubins vehicle: In this paper, we present strategies for designing curvature-bounded trajectories of any desired length between any two given oriented points. The proposed trajectory is constructed by the concatenation of three circular arcs of varying radii. Such a trajectory guarantees a complete coverage of the maximum set of reachable lengths while minimising the number of changeover points in the trajectory to a maximum of two under all scenarios. Additionally, by using the notion of internally tangent circles, we expand the set of Circle-Circle-Circle trajectories to eight kinds, consisting of {LLL, LLR, LRR, LRL, RRL, RLL, RLR, RRR} paths. The paper presents a mathematical formulation of the proposed trajectory and the conditions for the existence and classification of each kind of trajectory. We also analyse the variation of the length of the trajectory using suitable elongation strategies and derive the set of reachable lengths for all pairs of oriented points. Finally, the results of this paper are illustrated using numerical simulations.<|reference_end|> | arxiv | @article{rao2024trajectory,
title={Trajectory elongation strategies with minimum curvature discontinuities
for a Dubins vehicle},
author={Aditya K. Rao, Twinkle Tripathy},
journal={arXiv preprint arXiv:2410.04129},
year={2024},
archivePrefix={arXiv},
eprint={2410.04129},
primaryClass={eess.SY cs.RO cs.SY math.OC}
} | rao2024trajectory |
arxiv-666035 | 2410.04133 | From Hospital to Portables: A Universal ECG Foundation Model Built on 10+ Million Diverse Recordings | <|reference_start|>From Hospital to Portables: A Universal ECG Foundation Model Built on 10+ Million Diverse Recordings: Artificial Intelligence (AI) has shown great promise in electrocardiogram (ECG) analysis and cardiovascular disease detection. However, developing a general AI-ECG model has been challenging due to inter-individual variability and the diversity of ECG diagnoses, limiting existing models to specific diagnostic tasks and datasets. Moreover, current AI-ECG models struggle to achieve comparable performance between single-lead and 12-lead ECGs, limiting the application of AI-ECG to portable and wearable ECG devices. To address these limitations, we introduce an ECG Foundation Model (ECGFounder), a general-purpose model that leverages real-world ECG annotations from cardiology experts to broaden the diagnostic capabilities of ECG analysis. ECGFounder is trained on over 10 million ECGs with 150 label categories from the Harvard-Emory ECG Database, enabling comprehensive cardiovascular disease diagnosis through ECG analysis. The model is designed to be both effective out-of-the-box and fine-tunable for downstream tasks, maximizing usability. More importantly, we extend its application to single-lead ECGs, enabling complex condition diagnoses and supporting various downstream tasks in mobile and remote monitoring scenarios. Experimental results demonstrate that ECGFounder achieves expert-level performance on internal validation sets for both 12-lead and single-lead ECGs, while also exhibiting strong classification performance and generalization across various diagnoses on external validation sets. When fine-tuned, ECGFounder outperforms baseline models in demographics detection, clinical event detection, and cross-modality cardiac rhythm diagnosis. The trained model and data will be publicly released upon publication through the bdsp.io. Our code is available at https://github.com/bdsp-core/ECGFounder.<|reference_end|> | arxiv | @article{li2024an,
title={An Electrocardiogram Foundation Model Built on over 10 Million
Recordings with External Evaluation across Multiple Domains},
author={Jun Li, Aaron Aguirre, Junior Moura, Che Liu, Lanhai Zhong, Chenxi
Sun, Gari Clifford, Brandon Westover, Shenda Hong},
journal={arXiv preprint arXiv:2410.04133},
year={2024},
archivePrefix={arXiv},
eprint={2410.04133},
primaryClass={cs.LG cs.AI eess.SP}
} | li2024an |
arxiv-666036 | 2410.04135 | IceCloudNet: 3D reconstruction of cloud ice from Meteosat SEVIRI | <|reference_start|>IceCloudNet: 3D reconstruction of cloud ice from Meteosat SEVIRI: IceCloudNet is a novel method based on machine learning able to predict high-quality vertically resolved cloud ice water contents (IWC) and ice crystal number concentrations (N$_\textrm{ice}$). The predictions come at the spatio-temporal coverage and resolution of geostationary satellite observations (SEVIRI) and the vertical resolution of active satellite retrievals (DARDAR). IceCloudNet consists of a ConvNeXt-based U-Net and a 3D PatchGAN discriminator model and is trained by predicting DARDAR profiles from co-located SEVIRI images. Despite the sparse availability of DARDAR data due to its narrow overpass, IceCloudNet is able to predict cloud occurrence, spatial structure, and microphysical properties with high precision. The model has been applied to ten years of SEVIRI data, producing a dataset of vertically resolved IWC and N$_\textrm{ice}$ of clouds containing ice with a 3 kmx3 kmx240 mx15 minute resolution in a spatial domain of 30{\deg}W to 30{\deg}E and 30{\deg}S to 30{\deg}N. The produced dataset increases the availability of vertical cloud profiles, for the period when DARDAR is available, by more than six orders of magnitude and moreover, IceCloudNet is able to produce vertical cloud profiles beyond the lifetime of the recently ended satellite missions underlying DARDAR.<|reference_end|> | arxiv | @article{jeggle2024icecloudnet:,
title={IceCloudNet: 3D reconstruction of cloud ice from Meteosat SEVIRI},
author={Kai Jeggle, Mikolaj Czerkawski, Federico Serva, Bertrand Le Saux,
David Neubauer, Ulrike Lohmann},
journal={arXiv preprint arXiv:2410.04135},
year={2024},
archivePrefix={arXiv},
eprint={2410.04135},
primaryClass={physics.ao-ph cs.AI cs.CV}
} | jeggle2024icecloudnet: |
arxiv-666037 | 2410.04136 | The convergence of sequences in terms of positive and alternating Perron expansions | <|reference_start|>The convergence of sequences in terms of positive and alternating Perron expansions: We consider conditions for the convergence of sequences in terms of positive and alternating Perron expansions ($P$-representation and $P^-$-representation). These conditions are crucial to determine the continuity of functions that are defined using $P$-representation or $P^-$-representation of real numbers.<|reference_end|> | arxiv | @article{moroz2024the,
title={The convergence of sequences in terms of positive and alternating Perron
expansions},
author={Mykola Moroz},
journal={arXiv preprint arXiv:2410.04136},
year={2024},
archivePrefix={arXiv},
eprint={2410.04136},
primaryClass={math.CA cs.NA math.NA}
} | moroz2024the |
arxiv-666038 | 2410.04139 | From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression | <|reference_start|>From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression: Large language models (LLMs) have achieved significant performance gains using advanced prompting techniques over various tasks. However, the increasing length of prompts leads to high computational costs and often obscures crucial information. Prompt compression has been proposed to alleviate these issues, but it faces challenges in (i) capturing the global context and (ii) training the compressor effectively. To tackle these challenges, we introduce a novel prompt compression method, namely Reading To Compressing (R2C), utilizing the Fusion-in-Decoder (FiD) architecture to identify the important information in the prompt. Specifically, the cross-attention scores of the FiD are used to discern essential chunks and sentences from the prompt. R2C effectively captures the global context without compromising semantic consistency while detouring the necessity of pseudo-labels for training the compressor. Empirical results show that R2C retains key contexts, enhancing the LLM performance by 6% in out-of-domain evaluations while reducing the prompt length by 80%.<|reference_end|> | arxiv | @article{choi2024from,
title={From Reading to Compressing: Exploring the Multi-document Reader for
Prompt Compression},
author={Eunseong Choi, Sunkyung Lee, Minjin Choi, June Park, Jongwuk Lee},
journal={arXiv preprint arXiv:2410.04139},
year={2024},
archivePrefix={arXiv},
eprint={2410.04139},
primaryClass={cs.CL cs.AI}
} | choi2024from |
arxiv-666039 | 2410.04140 | Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher | <|reference_start|>Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher: Knowledge distillation aims to transfer knowledge from a large teacher model to a compact student counterpart, often coming with a significant performance gap between them. We find that a too-large performance gap can hamper the training process, which is also verified in recent studies. To address this, we propose a Gap Preserving Distillation (GPD) method that trains an additional dynamic teacher model from scratch along with training the student to bridge this gap. In this way, it becomes possible to maintain a reasonable performance gap between teacher and student during the whole distillation process. To further strengthen distillation from the dynamic teacher to the student, we develop a hard strategy by enforcing them to share parameters and encouraging parameter inheritance. Besides hard strategy, we also build the soft bidirectional mappings between them which are built on an Inverse Reparameterization (IR) method and a Channel-Branch Reparameterization (CBR) strategy. We highlight that our IR is able to initialize a larger dynamic teacher with an arbitrary expansion ratio, while preserving exactly the same accuracy as the given student model. In this way, it guarantees that the dynamic teacher and student start from the same point and avoid a too large gap in early stage of training. As for our CBR, with parameter-sharing, it directly extracts an effective student model from the well-learned dynamic teacher without any post-training, making our method highly flexible for model deployment. In the experiments, GPD significantly outperforms existing distillation methods on top of both CNNs and transformers architectures, achieving up to 1.58% accuracy improvement. Interestingly, GPD also generalizes well to the scenarios without a pre-trained teacher, including training from scratch and fine-tuning, yielding a large improvement of 1.80% and 0.89% on ResNet18, respectively.<|reference_end|> | arxiv | @article{guo2024gap,
title={Gap Preserving Distillation by Building Bidirectional Mappings with A
Dynamic Teacher},
author={Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen},
journal={arXiv preprint arXiv:2410.04140},
year={2024},
archivePrefix={arXiv},
eprint={2410.04140},
primaryClass={cs.CV}
} | guo2024gap |
arxiv-666040 | 2410.04144 | ConDa: Fast Federated Unlearning with Contribution Dampening | <|reference_start|>ConDa: Fast Federated Unlearning with Contribution Dampening: Federated learning (FL) has enabled collaborative model training across decentralized data sources or clients. While adding new participants to a shared model does not pose great technical hurdles, the removal of a participant and their related information contained in the shared model remains a challenge. To address this problem, federated unlearning has emerged as a critical research direction, seeking to remove information from globally trained models without harming the model performance on the remaining data. Most modern federated unlearning methods use costly approaches such as the use of remaining clients data to retrain the global model or methods that would require heavy computation on client or server side. We introduce Contribution Dampening (ConDa), a framework that performs efficient unlearning by tracking down the parameters which affect the global model for each client and performs synaptic dampening on the parameters of the global model that have privacy infringing contributions from the forgetting client. Our technique does not require clients data or any kind of retraining and it does not put any computational overhead on either the client or server side. We perform experiments on multiple datasets and demonstrate that ConDa is effective to forget a client's data. In experiments conducted on the MNIST, CIFAR10, and CIFAR100 datasets, ConDa proves to be the fastest federated unlearning method, outperforming the nearest state of the art approach by at least 100x. Our emphasis is on the non-IID Federated Learning setting, which presents the greatest challenge for unlearning. Additionally, we validate ConDa's robustness through backdoor and membership inference attacks. We envision this work as a crucial component for FL in adhering to legal and ethical requirements.<|reference_end|> | arxiv | @article{chundawat2024conda:,
title={ConDa: Fast Federated Unlearning with Contribution Dampening},
author={Vikram S Chundawat, Pushkar Niroula, Prasanna Dhungana, Stefan
Schoepf, Murari Mandal, Alexandra Brintrup},
journal={arXiv preprint arXiv:2410.04144},
year={2024},
archivePrefix={arXiv},
eprint={2410.04144},
primaryClass={cs.LG cs.CR}
} | chundawat2024conda: |
arxiv-666041 | 2410.04146 | The discrete octonionic Stokes' formula revisited | <|reference_start|>The discrete octonionic Stokes' formula revisited: In a previous work [arXiv:2211.02945] we made an attempt to set up a discrete octonionic Stokes' formula. Due to an algebraic property that we have not considered in that attempt, the formula however turned out to involve an associator term in addition to a change of sign that we already observed earlier. This associator term has an impact on the final result. In this paper we carefully revise this discrete Stokes formula taking into account this additional term. In fact the result that we now obtain is much more in line with the results that one has in the continuous setting.<|reference_end|> | arxiv | @article{kraußhar2024the,
title={The discrete octonionic Stokes' formula revisited},
  author={Rolf S\"oren Krau{\ss}har, Anastasiia Legatiuk, Dmitrii Legatiuk},
journal={arXiv preprint arXiv:2410.04146},
year={2024},
archivePrefix={arXiv},
eprint={2410.04146},
primaryClass={math.AP cs.NA math.NA}
} | kraußhar2024the |
arxiv-666042 | 2410.04147 | Can the Variation of Model Weights be used as a Criterion for Self-Paced Multilingual NMT? | <|reference_start|>Can the Variation of Model Weights be used as a Criterion for Self-Paced Multilingual NMT?: Many-to-one neural machine translation systems improve over one-to-one systems when training data is scarce. In this paper, we design and test a novel algorithm for selecting the language of minibatches when training such systems. The algorithm changes the language of the minibatch when the weights of the model do not evolve significantly, as measured by the smoothed KL divergence between all layers of the Transformer network. This algorithm outperforms the use of alternating monolingual batches, but not the use of shuffled batches, in terms of translation quality (measured with BLEU and COMET) and convergence speed.<|reference_end|> | arxiv | @article{atrio2024can,
title={Can the Variation of Model Weights be used as a Criterion for Self-Paced
Multilingual NMT?},
  author={\`Alex R. Atrio, Alexis Allemann, Ljiljana Dolamic, Andrei
Popescu-Belis},
journal={arXiv preprint arXiv:2410.04147},
year={2024},
archivePrefix={arXiv},
eprint={2410.04147},
primaryClass={cs.CL}
} | atrio2024can |
arxiv-666043 | 2410.04148 | Reasoning with Natural Language Explanations | <|reference_start|>Reasoning with Natural Language Explanations: Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation, and representing one of the media supporting scientific discovery and communication. Due to the importance of explanations in human reasoning, an increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference, attempting to build explanation-based NLI models that can effectively encode and use natural language explanations on downstream tasks. Research in explanation-based NLI, however, presents specific challenges and opportunities, as explanatory reasoning reflects aspects of both material and formal inference, making it a particularly rich setting to model and deliver complex reasoning. In this tutorial, we provide a comprehensive introduction to the field of explanation-based NLI, grounding this discussion on the epistemological-linguistic foundations of explanations, systematically describing the main architectural trends and evaluation methodologies that can be used to build systems capable of explanatory reasoning.<|reference_end|> | arxiv | @article{valentino2024reasoning,
title={Reasoning with Natural Language Explanations},
  author={Marco Valentino, Andr\'e Freitas},
journal={arXiv preprint arXiv:2410.04148},
year={2024},
archivePrefix={arXiv},
eprint={2410.04148},
primaryClass={cs.CL cs.AI}
} | valentino2024reasoning |
arxiv-666044 | 2410.04149 | Mov-Avg: Codeless time series analysis using moving averages | <|reference_start|>Mov-Avg: Codeless time series analysis using moving averages: This paper introduces Mov-Avg, the Python software package for time series analysis that requires little computer programming experience from the user. The package allows the identification of trends, patterns, and the prediction of future events based on data collected over time. In this regard, the Mov-Avg implementation provides three indicators to apply, namely: Simple Moving Average, Weighted Moving Average and Exponential Moving Average. Due to its generic design, the Mov-Avg software package can be used in any field where the application of moving averages is valid. In general, the Mov-Avg library for time series analysis contributes to a better understanding of data-driven processes over time by taking advantage of moving averages in any way adapted to the research context.<|reference_end|> | arxiv | @article{weichbroth2024mov-avg:,
title={Mov-Avg: Codeless time series analysis using moving averages},
  author={Pawe{\l} Weichbroth, Jakub Buczkowski},
journal={arXiv preprint arXiv:2410.04149},
year={2024},
archivePrefix={arXiv},
eprint={2410.04149},
primaryClass={cs.OH}
} | weichbroth2024mov-avg: |
arxiv-666045 | 2410.04151 | Trajectory Design and Resource Allocation for Multi-UAV-Assisted Sensing, Communication, and Edge Computing Integration | <|reference_start|>Trajectory Design and Resource Allocation for Multi-UAV-Assisted Sensing, Communication, and Edge Computing Integration: In this paper, we propose a multi-unmanned aerial vehicle (UAV)-assisted integrated sensing, communication, and computation network. Specifically, the treble-functional UAVs are capable of offering communication and edge computing services to mobile users (MUs) in proximity, alongside their target sensing capabilities by using multi-input multi-output arrays. For the purpose of enhance the computation efficiency, we consider task compression, where each MU can partially compress their offloaded data prior to transmission to trim its size. The objective is to minimize the weighted energy consumption by jointly optimizing the transmit beamforming, the UAVs' trajectories, the compression and offloading partition, the computation resource allocation, while fulfilling the causal-effect correlation between communication and computation as well as adhering to the constraints on sensing quality. To tackle it, we first reformulate the original problem as a multi-agent Markov decision process (MDP), which involves heterogeneous agents to decompose the large state spaces and action spaces of MDP. Then, we propose a multi-agent proximal policy optimization algorithm with attention mechanism to handle the decision-making problem. Simulation results validate the significant effectiveness of the proposed method in reducing energy consumption. Moreover, it demonstrates superior performance compared to the baselines in relation to resource utilization and convergence speed.<|reference_end|> | arxiv | @article{peng2024trajectory,
title={Trajectory Design and Resource Allocation for Multi-UAV-Assisted
Sensing, Communication, and Edge Computing Integration},
author={Sicong Peng, Bin Li, Lei Liu, Zesong Fei, Dusit Niyato},
journal={arXiv preprint arXiv:2410.04151},
year={2024},
archivePrefix={arXiv},
eprint={2410.04151},
primaryClass={cs.IT math.IT}
} | peng2024trajectory |
arxiv-666046 | 2410.04152 | DAMMI:Daily Activities in a Psychologically Annotated Multi-Modal IoT dataset | <|reference_start|>DAMMI:Daily Activities in a Psychologically Annotated Multi-Modal IoT dataset: The growth in the elderly population and the shift in the age pyramid have increased the demand for healthcare and well-being services. To address this concern, alongside the rising cost of medical care, the concept of ageing at home has emerged, driven by recent advances in medical and technological solutions. Experts in computer science, communication technology, and healthcare have collaborated to develop affordable health solutions by employing sensors in living environments, wearable devices, and smartphones, in association with advanced data mining and intelligent systems with learning capabilities, to monitor, analyze, and predict the health status of elderly individuals. However, implementing intelligent healthcare systems and developing analytical techniques requires testing and evaluating algorithms on real-world data. Despite the need, there is a shortage of publicly available datasets that meet these requirements. To address this gap, we present the DAMMI dataset in this work, designed to support researchers in the field. The dataset includes daily activity data of an elderly individual collected via home-installed sensors, smartphone data, and a wristband over 146 days. It also contains daily psychological reports provided by a team of psychologists. Furthermore, the data collection spans significant events such as the COVID-19 pandemic, New Year's holidays, and the religious month of Ramadan, offering additional opportunities for analysis. In this paper, we outline detailed information about the data collection system, the types of data recorded, and pre-processed event logs. This dataset is intended to assist professionals in IoT and data mining in evaluating and implementing their research ideas.<|reference_end|> | arxiv | @article{rad2024dammi:daily,
title={DAMMI:Daily Activities in a Psychologically Annotated Multi-Modal IoT
dataset},
author={Mohsen Falah Rad, Kamrad Khoshhal Roudposhti, Mohammad Hassan
Khoobkar, Mohsen Shirali, Zahra Ahmadi, Carlos Fernandez-Llatas},
journal={arXiv preprint arXiv:2410.04152},
year={2024},
archivePrefix={arXiv},
eprint={2410.04152},
primaryClass={cs.AI}
} | rad2024dammi:daily |
arxiv-666047 | 2410.04153 | Neuro-Symbolic Entity Alignment via Variational Inference | <|reference_start|>Neuro-Symbolic Entity Alignment via Variational Inference: Entity alignment (EA) aims to merge two knowledge graphs (KGs) by identifying equivalent entity pairs. Existing methods can be categorized into symbolic and neural models. Symbolic models, while precise, struggle with substructure heterogeneity and sparsity, whereas neural models, although effective, generally lack interpretability and cannot handle uncertainty. We propose NeuSymEA, a probabilistic neuro-symbolic framework that combines the strengths of both methods. NeuSymEA models the joint probability of all possible pairs' truth scores in a Markov random field, regulated by a set of rules, and optimizes it with the variational EM algorithm. In the E-step, a neural model parameterizes the truth score distributions and infers missing alignments. In the M-step, the rule weights are updated based on the observed and inferred alignments. To facilitate interpretability, we further design a path-ranking-based explainer upon this framework that generates supporting rules for the inferred alignments. Experiments on benchmarks demonstrate that NeuSymEA not only significantly outperforms baselines in terms of effectiveness and robustness, but also provides interpretable results.<|reference_end|> | arxiv | @article{chen2024neuro-symbolic,
title={Neuro-Symbolic Entity Alignment via Variational Inference},
author={Shengyuan Chen, Qinggang Zhang, Junnan Dong, Wen Hua, Jiannong Cao,
Xiao Huang},
journal={arXiv preprint arXiv:2410.04153},
year={2024},
archivePrefix={arXiv},
eprint={2410.04153},
primaryClass={cs.AI}
} | chen2024neuro-symbolic |
arxiv-666048 | 2410.04154 | Applying Quantum Autoencoders for Time Series Anomaly Detection | <|reference_start|>Applying Quantum Autoencoders for Time Series Anomaly Detection: Anomaly detection is an important problem with applications in various domains such as fraud detection, pattern recognition or medical diagnosis. Several algorithms have been introduced using classical computing approaches. However, using quantum computing for solving anomaly detection problems in time series data is a widely unexplored research field. This paper explores the application of quantum autoencoders to time series anomaly detection. We investigate two primary techniques for classifying anomalies: (1) Analyzing the reconstruction error generated by the quantum autoencoder and (2) latent representation analysis. Our simulated experimental results, conducted across various ansaetze, demonstrate that quantum autoencoders consistently outperform classical deep learning-based autoencoders across multiple datasets. Specifically, quantum autoencoders achieve superior anomaly detection performance while utilizing 60-230 times fewer parameters and requiring five times fewer training iterations. In addition, we implement our quantum encoder on real quantum hardware. Our experimental results demonstrate that quantum autoencoders achieve anomaly detection performance on par with their simulated counterparts.<|reference_end|> | arxiv | @article{frehner2024applying,
title={Applying Quantum Autoencoders for Time Series Anomaly Detection},
author={Robin Frehner and Kurt Stockinger},
journal={arXiv preprint arXiv:2410.04154},
year={2024},
archivePrefix={arXiv},
eprint={2410.04154},
primaryClass={cs.LG cs.AI cs.ET quant-ph}
} | frehner2024applying |
arxiv-666049 | 2410.04155 | Toxic Subword Pruning for Dialogue Response Generation on Large Language Models | <|reference_start|>Toxic Subword Pruning for Dialogue Response Generation on Large Language Models: How to defend large language models (LLMs) from generating toxic content is an important research area. Yet, most research focused on various model training techniques to remediate LLMs by updating their weights. A typical related research area is safety alignment. This however is often costly and tedious and can expose the model to even more problems such as catastrophic forgetting if the trainings are not carefully handled by experienced NLP practitioners. We thus propose a simple yet effective and novel algorithm, namely \textbf{Tox}ic Subword \textbf{Prun}ing (ToxPrune) to prune the subword contained by the toxic words from BPE in trained LLMs. In contrast to the previous work that demonstrates pruning BPE tokens as harmful to the task of machine translation, we surprisingly found its usefulness in preventing toxic content from being generated on LLMs. Fortunately, our findings suggest that ToxPrune simultaneously improves the toxic language model NSFW-3B on the task of dialogue response generation obviously. We surprisingly found that ToxPrune can even obviously improve official Llama-3.1-6B in the metric of dialogue diversity. Extensive automatic results and human evaluation indicate that ToxPrune could be helpful for both remediating toxic LLMs and improving non-toxic LLMs on the task of dialogue response generation.\footnote{We plan to release the resources to facilitate future work.}<|reference_end|> | arxiv | @article{lu2024toxic,
title={Toxic Subword Pruning for Dialogue Response Generation on Large Language
Models},
author={Hongyuan Lu and Wai Lam},
journal={arXiv preprint arXiv:2410.04155},
year={2024},
archivePrefix={arXiv},
eprint={2410.04155},
primaryClass={cs.CL}
} | lu2024toxic |
arxiv-666050 | 2410.04159 | Efficient and Robust Long-Form Speech Recognition with Hybrid H3-Conformer | <|reference_start|>Efficient and Robust Long-Form Speech Recognition with Hybrid H3-Conformer: Recently, Conformer has achieved state-of-the-art performance in many speech recognition tasks. However, the Transformer-based models show significant deterioration for long-form speech, such as lectures, because the self-attention mechanism becomes unreliable with the computation of the square order of the input length. To solve the problem, we incorporate a kind of state-space model, Hungry Hungry Hippos (H3), to replace or complement the multi-head self-attention (MHSA). H3 allows for efficient modeling of long-form sequences with a linear-order computation. In experiments using two datasets of CSJ and LibriSpeech, our proposed H3-Conformer model performs efficient and robust recognition of long-form speech. Moreover, we propose a hybrid of H3 and MHSA and show that using H3 in higher layers and MHSA in lower layers provides significant improvement in online recognition. We also investigate a parallel use of H3 and MHSA in all layers, resulting in the best performance.<|reference_end|> | arxiv | @article{honda2024efficient,
title={Efficient and Robust Long-Form Speech Recognition with Hybrid
H3-Conformer},
author={Tomoki Honda, Shinsuke Sakai, Tatsuya Kawahara},
journal={arXiv preprint arXiv:2410.04159},
year={2024},
doi={10.21437/Interspeech.2024-258},
archivePrefix={arXiv},
eprint={2410.04159},
primaryClass={cs.SD eess.AS}
} | honda2024efficient |
arxiv-666051 | 2410.04161 | Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model | <|reference_start|>Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model: We introduce a novel Multi-modal Guided Real-World Face Restoration (MGFR) technique designed to improve the quality of facial image restoration from low-quality inputs. Leveraging a blend of attribute text prompts, high-quality reference images, and identity information, MGFR can mitigate the generation of false facial attributes and identities often associated with generative face restoration methods. By incorporating a dual-control adapter and a two-stage training strategy, our method effectively utilizes multi-modal prior information for targeted restoration tasks. We also present the Reface-HQ dataset, comprising over 23,000 high-resolution facial images across 5,000 identities, to address the need for reference face training images. Our approach achieves superior visual quality in restoring facial details under severe degradation and allows for controlled restoration processes, enhancing the accuracy of identity preservation and attribute correction. Including negative quality samples and attribute prompts in the training further refines the model's ability to generate detailed and perceptually accurate images.<|reference_end|> | arxiv | @article{tao2024overcoming,
title={Overcoming False Illusions in Real-World Face Restoration with
Multi-Modal Guided Diffusion Model},
author={Keda Tao, Jinjin Gu, Yulun Zhang, Xiucheng Wang, Nan Cheng},
journal={arXiv preprint arXiv:2410.04161},
year={2024},
archivePrefix={arXiv},
eprint={2410.04161},
primaryClass={cs.CV}
} | tao2024overcoming |
arxiv-666052 | 2410.04164 | Towards Effective Counter-Responses: Aligning Human Preferences with Strategies to Combat Online Trolling | <|reference_start|>Towards Effective Counter-Responses: Aligning Human Preferences with Strategies to Combat Online Trolling: Trolling in online communities typically involves disruptive behaviors such as provoking anger and manipulating discussions, leading to a polarized atmosphere and emotional distress. Robust moderation is essential for mitigating these negative impacts and maintaining a healthy and constructive community atmosphere. However, effectively addressing trolls is difficult because their behaviors vary widely and require different response strategies (RSs) to counter them. This diversity makes it challenging to choose an appropriate RS for each specific situation. To address this challenge, our research investigates whether humans have preferred strategies tailored to different types of trolling behaviors. Our findings reveal a correlation between the types of trolling encountered and the preferred RS. In this paper, we introduce a methodology for generating counter-responses to trolls by recommending appropriate RSs, supported by a dataset aligning these strategies with human preferences across various troll contexts. The experimental results demonstrate that our proposed approach guides constructive discussion and reduces the negative effects of trolls, thereby enhancing the online community environment.<|reference_end|> | arxiv | @article{lee2024towards,
title={Towards Effective Counter-Responses: Aligning Human Preferences with
Strategies to Combat Online Trolling},
author={Huije Lee, Hoyun Song, Jisu Shin, Sukmin Cho, SeungYoon Han, Jong C.
Park},
journal={arXiv preprint arXiv:2410.04164},
year={2024},
archivePrefix={arXiv},
eprint={2410.04164},
primaryClass={cs.CL}
} | lee2024towards |
arxiv-666053 | 2410.04166 | Preference Optimization as Probabilistic Inference | <|reference_start|>Preference Optimization as Probabilistic Inference: Existing preference optimization methods are mainly designed for directly learning from human feedback with the assumption that paired examples (preferred vs. dis-preferred) are available. In contrast, we propose a method that can leverage unpaired preferred or dis-preferred examples, and works even when only one type of feedback (positive or negative) is available. This flexibility allows us to apply it in scenarios with varying forms of feedback and models, including training generative language models based on human feedback as well as training policies for sequential decision-making problems, where learned (value) functions are available. Our approach builds upon the probabilistic framework introduced in (Dayan and Hinton, 1997), which proposes to use expectation-maximization (EM) to directly optimize the probability of preferred outcomes (as opposed to classic expected reward maximization). To obtain a practical algorithm, we identify and address a key limitation in current EM-based methods: when applied to preference optimization, they solely maximize the likelihood of preferred examples, while neglecting dis-preferred samples. We show how one can extend EM algorithms to explicitly incorporate dis-preferred outcomes, leading to a novel, theoretically grounded, preference optimization algorithm that offers an intuitive and versatile way to learn from both positive and negative feedback.<|reference_end|> | arxiv | @article{abdolmaleki2024preference,
title={Preference Optimization as Probabilistic Inference},
author={Abbas Abdolmaleki, Bilal Piot, Bobak Shahriari, Jost Tobias
Springenberg, Tim Hertweck, Rishabh Joshi, Junhyuk Oh, Michael Bloesch,
Thomas Lampe, Nicolas Heess, Jonas Buchli, Martin Riedmiller},
journal={arXiv preprint arXiv:2410.04166},
year={2024},
archivePrefix={arXiv},
eprint={2410.04166},
primaryClass={cs.LG stat.ML}
} | abdolmaleki2024preference |
arxiv-666054 | 2410.04167 | Beyond Language: Applying MLX Transformers to Engineering Physics | <|reference_start|>Beyond Language: Applying MLX Transformers to Engineering Physics: Transformer Neural Networks are driving an explosion of activity and discovery in the field of Large Language Models (LLMs). In contrast, there have been only a few attempts to apply Transformers in engineering physics. Aiming to offer an easy entry point to physics-centric Transformers, we introduce a physics-informed Transformer model for solving the heat conduction problem in a 2D plate with Dirichlet boundary conditions. The model is implemented in the machine learning framework MLX and leverages the unified memory of Apple M-series processors. The use of MLX means that the models can be trained and perform predictions efficiently on personal machines with only modest memory requirements. To train, validate and test the Transformer model we solve the 2D heat conduction problem using central finite differences. Each finite difference solution in these sets is initialized with four random Dirichlet boundary conditions, a uniform but random internal temperature distribution and a randomly selected thermal diffusivity. Validation is performed in-line during training to monitor against over-fitting. The excellent performance of the trained model is demonstrated by predicting the evolution of the temperature field to steady state for the unseen test set of conditions.<|reference_end|> | arxiv | @article{kassinos2024beyond,
title={Beyond Language: Applying MLX Transformers to Engineering Physics},
author={Stavros Kassinos, Alessio Alexiadis},
journal={arXiv preprint arXiv:2410.04167},
year={2024},
archivePrefix={arXiv},
eprint={2410.04167},
primaryClass={cs.CE cs.LG physics.comp-ph}
} | kassinos2024beyond |
arxiv-666055 | 2410.04168 | Robust Task-Oriented Communication Framework for Real-Time Collaborative Vision Perception | <|reference_start|>Robust Task-Oriented Communication Framework for Real-Time Collaborative Vision Perception: Cooperative perception enhances sensing in multi-robot and vehicular networks by aggregating information from multiple agents, improving perception accuracy and range. However, mobility and non-rigid sensor mounts introduce extrinsic calibration errors, necessitating online calibration, which is complicated by limited overlap in sensing regions. Maintaining fresh information is crucial for timely and accurate sensing. To address calibration errors and ensure both perception accuracy and transmission timeliness, we propose a Robust Task-Oriented Communication framework (R-TOCOM) that optimizes calibration and feature transmission in both deployment and streaming phases. First, we formulate an Age of Perceived Targets (AoPT) minimization problem to capture information freshness. Then, in the deployment phase, we introduce a channel-aware self-calibration technique based on re-identification (Re-ID). This technique adaptively compresses key-point features according to channel capacities, effectively addressing calibration issues via spatial and temporal cross-camera correlations. In the streaming phase, we tackle the trade-off between bandwidth and inference accuracy by integrating an Information Bottleneck (IB)-based encoding method that adjusts video compression rates based on task relevance, thereby reducing communication overhead and latency. To mitigate performance degradation from packet loss, we introduce a priority network that filters corrupted features. Extensive studies demonstrate our framework outperforms five baselines, improving multiple object detection accuracy (MODA) by 25.49% and reducing communication costs by 51.36% under severe channel condition.<|reference_end|> | arxiv | @article{fang2024r-acp:,
title={R-ACP: Real-Time Adaptive Collaborative Perception Leveraging Robust
Task-Oriented Communications},
author={Zhengru Fang, Jingjing Wang, Yanan Ma, Yihang Tao, Yiqin Deng, Xianhao
Chen, Yuguang Fang},
journal={arXiv preprint arXiv:2410.04168},
year={2024},
archivePrefix={arXiv},
eprint={2410.04168},
primaryClass={cs.NI}
} | fang2024r-acp: |
arxiv-666056 | 2410.04171 | IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced Video Synthesis | <|reference_start|>IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced Video Synthesis: The multi-step sampling mechanism, a key feature of visual diffusion models, has significant potential to replicate the success of OpenAI's Strawberry in enhancing performance by increasing the inference computational cost. Sufficient prior studies have demonstrated that correctly scaling up computation in the sampling process can successfully lead to improved generation quality, enhanced image editing, and compositional generalization. While there have been rapid advancements in developing inference-heavy algorithms for improved image generation, relatively little work has explored inference scaling laws in video diffusion models (VDMs). Furthermore, existing research shows only minimal performance gains that are perceptible to the naked eye. To address this, we design a novel training-free algorithm IV-Mixed Sampler that leverages the strengths of image diffusion models (IDMs) to assist VDMs surpass their current capabilities. The core of IV-Mixed Sampler is to use IDMs to significantly enhance the quality of each video frame and VDMs ensure the temporal coherence of the video during the sampling process. Our experiments have demonstrated that IV-Mixed Sampler achieves state-of-the-art performance on 4 benchmarks including UCF-101-FVD, MSR-VTT-FVD, Chronomagic-Bench-150, and Chronomagic-Bench-1649. For example, the open-source Animatediff with IV-Mixed Sampler reduces the UMT-FVD score from 275.2 to 228.6, closing to 223.1 from the closed-source Pika-2.0.<|reference_end|> | arxiv | @article{shao2024iv-mixed,
title={IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced Video
Synthesis},
author={Shitong Shao, Zikai Zhou, Lichen Bai, Haoyi Xiong, Zeke Xie},
journal={arXiv preprint arXiv:2410.04171},
year={2024},
archivePrefix={arXiv},
eprint={2410.04171},
primaryClass={cs.CV cs.AI}
} | shao2024iv-mixed |
arxiv-666057 | 2410.04172 | DB-SAM: Delving into High Quality Universal Medical Image Segmentation | <|reference_start|>DB-SAM: Delving into High Quality Universal Medical Image Segmentation: Recently, the Segment Anything Model (SAM) has demonstrated promising segmentation capabilities in a variety of downstream segmentation tasks. However in the context of universal medical image segmentation there exists a notable performance discrepancy when directly applying SAM due to the domain gap between natural and 2D/3D medical data. In this work, we propose a dual-branch adapted SAM framework, named DB-SAM, that strives to effectively bridge this domain gap. Our dual-branch adapted SAM contains two branches in parallel: a ViT branch and a convolution branch. The ViT branch incorporates a learnable channel attention block after each frozen attention block, which captures domain-specific local features. On the other hand, the convolution branch employs a light-weight convolutional block to extract domain-specific shallow features from the input medical image. To perform cross-branch feature fusion, we design a bilateral cross-attention block and a ViT convolution fusion block, which dynamically combine diverse information of two branches for mask decoder. Extensive experiments on large-scale medical image dataset with various 3D and 2D medical segmentation tasks reveal the merits of our proposed contributions. On 21 3D medical image segmentation tasks, our proposed DB-SAM achieves an absolute gain of 8.8%, compared to a recent medical SAM adapter in the literature. The code and model are available at https://github.com/AlfredQin/DB-SAM.<|reference_end|> | arxiv | @article{qin2024db-sam:,
title={DB-SAM: Delving into High Quality Universal Medical Image Segmentation},
author={Chao Qin, Jiale Cao, Huazhu Fu, Fahad Shahbaz Khan, Rao Muhammad Anwer},
journal={arXiv preprint arXiv:2410.04172},
year={2024},
archivePrefix={arXiv},
eprint={2410.04172},
primaryClass={eess.IV cs.CV}
} | qin2024db-sam: |
arxiv-666058 | 2410.04173 | Fast Object Detection with a Machine Learning Edge Device | <|reference_start|>Fast Object Detection with a Machine Learning Edge Device: This machine learning study investigates a lowcost edge device integrated with an embedded system having computer vision and resulting in an improved performance in inferencing time and precision of object detection and classification. A primary aim of this study focused on reducing inferencing time and low-power consumption and to enable an embedded device of a competition-ready autonomous humanoid robot and to support real-time object recognition, scene understanding, visual navigation, motion planning, and autonomous navigation of the robot. This study compares processors for inferencing time performance between a central processing unit (CPU), a graphical processing unit (GPU), and a tensor processing unit (TPU). CPUs, GPUs, and TPUs are all processors that can be used for machine learning tasks. Related to the aim of supporting an autonomous humanoid robot, there was an additional effort to observe whether or not there was a significant difference in using a camera having monocular vision versus stereo vision capability. TPU inference time results for this study reflect a 25% reduction in time over the GPU, and a whopping 87.5% reduction in inference time compared to the CPU. Much information in this paper is contributed to the final selection of Google's Coral brand, Edge TPU device. The Arduino Nano 33 BLE Sense Tiny ML Kit was also considered for comparison but due to initial incompatibilities and in the interest of time to complete this study, a decision was made to review the kit in a future experiment.<|reference_end|> | arxiv | @article{rodriguez2024fast,
title={Fast Object Detection with a Machine Learning Edge Device},
author={Richard C. Rodriguez, Jonah Elijah P. Bardos},
journal={arXiv preprint arXiv:2410.04173},
year={2024},
archivePrefix={arXiv},
eprint={2410.04173},
primaryClass={cs.RO cs.CV}
} | rodriguez2024fast |
arxiv-666059 | 2410.04177 | The Impact of Surface Co-location and Eye-tracking on Mixed Reality Typing | <|reference_start|>The Impact of Surface Co-location and Eye-tracking on Mixed Reality Typing: Accuracy and speed are pivotal when typing. We hypothesized that the lack of tactile feedback on midair mixed reality keyboards may adversely impact performance. Our first experiment assessed the potential to provide tactile feedback to users typing in mixed reality by co-locating the virtual keyboard on a table or a wall. The keyboard was deterministic (without auto-correct), relied only on the headset's egocentric cameras for sensing, and included symbol keys. Users preferred and had the highest entry rate of 12 words-per-minute using a midair keyboard. Error rates were similar in all conditions. Based on user feedback, our second experiment explored ten-finger typing. We used a novel eye-tracking technique to mitigate accidental key presses. The technique halved the number of times backspace was pressed and was preferred by users. However, participants were faster using only their index fingers without eye-tracking at 11 words-per-minute.<|reference_end|> | arxiv | @article{schmitz2024the,
title={The Impact of Surface Co-location and Eye-tracking on Mixed Reality
Typing},
author={Cecilia Schmitz, Joshua Reynolds, Scott Kuhl, Keith Vertanen},
journal={arXiv preprint arXiv:2410.04177},
year={2024},
archivePrefix={arXiv},
eprint={2410.04177},
primaryClass={cs.HC}
} | schmitz2024the |
arxiv-666060 | 2410.04179 | Computing Most Equitable Voting Rules | <|reference_start|>Computing Most Equitable Voting Rules: How to design fair and (computationally) efficient voting rules is a central challenge in Computational Social Choice. In this paper, we aim at designing efficient algorithms for computing most equitable rules for large classes of preferences and decisions, which optimally satisfy two fundamental fairness/equity axioms: anonymity (every voter being treated equally) and neutrality (every alternative being treated equally). By revealing a natural connection to the graph isomorphism problem and leveraging recent breakthroughs by Babai [2019], we design quasipolynomial-time algorithms that compute most equitable rules with verifications, which also compute verifications about whether anonymity and neutrality are satisfied at the input profile. Further extending this approach, we propose the canonical-labeling tie-breaking, which runs in quasipolynomial-time and optimally breaks ties to preserve anonymity and neutrality. As for the complexity lower bound, we prove that even computing verifications for most equitable rules is GI-complete (i.e., as hard as the graph isomorphism problem), and sometimes GA-complete (i.e., as hard as the graph automorphism problem), for many commonly studied combinations of preferences and decisions. To the best of our knowledge, these are the first problems in computational social choice that are known to be complete in the class GI or GA.<|reference_end|> | arxiv | @article{xia2024computing,
title={Computing Most Equitable Voting Rules},
author={Lirong Xia},
journal={arXiv preprint arXiv:2410.04179},
year={2024},
archivePrefix={arXiv},
eprint={2410.04179},
primaryClass={cs.GT econ.TH}
} | xia2024computing |
arxiv-666061 | 2410.04182 | Artistic Portrait Drawing with Vector Strokes | <|reference_start|>Artistic Portrait Drawing with Vector Strokes: In this paper, we present a method, VectorPD, for converting a given human face image into a vector portrait sketch. VectorPD supports different levels of abstraction by simply controlling the number of strokes. Since vector graphics are composed of different shape primitives, it is challenging for rendering complex faces to accurately express facial details and structure. To address this, VectorPD employs a novel two-round optimization mechanism. We first initialize the strokes with facial keypoints, and generate a basic portrait sketch by a CLIP-based Semantic Loss. Then we complete the face structure through VGG-based Structure Loss, and propose a novel Crop-based Shadow Loss to enrich the shadow details of the sketch, achieving a visually pleasing portrait sketch. Quantitative and qualitative evaluations both demonstrate that the portrait sketches generated by VectorPD can produce better visual effects than existing state-of-the-art methods, maintaining as much fidelity as possible at different levels of abstraction.<|reference_end|> | arxiv | @article{liang2024artistic,
title={Artistic Portrait Drawing with Vector Strokes},
author={Yiqi Liang, Ying Liu, Dandan Long, Ruihui Li},
journal={arXiv preprint arXiv:2410.04182},
year={2024},
archivePrefix={arXiv},
eprint={2410.04182},
primaryClass={cs.CV}
} | liang2024artistic |
arxiv-666062 | 2410.04183 | Unsupervised Assessment of Landscape Shifts Based on Persistent Entropy and Topological Preservation | <|reference_start|>Unsupervised Assessment of Landscape Shifts Based on Persistent Entropy and Topological Preservation: Concept drift typically refers to the analysis of changes in data distribution. A drift in the input data can have negative consequences on a learning predictor and the system's stability. The majority of concept drift methods emphasize the analysis of statistical changes in non-stationary data over time. In this context, we consider another perspective, where the concept drift also integrates substantial changes in the topological characteristics of the data stream. In this article, we introduce a novel framework for monitoring changes in multi-dimensional data streams. We explore a generalization of the standard concept drift focusing on the changes in the topological characteristics of the data. Our developed approach is based on persistent entropy and topology-preserving projections in a continual learning scenario. The framework operates in both unsupervised and supervised environments. To demonstrate the utility of the proposed framework, we analyze the model across three scenarios using data streams generated with MNIST samples. The obtained results reveal the potential of applying topological data analysis for shift detection and encourage further research in this area.<|reference_end|> | arxiv | @article{basterrech2024unsupervised,
title={Unsupervised Assessment of Landscape Shifts Based on Persistent Entropy
and Topological Preservation},
author={Sebastian Basterrech},
journal={arXiv preprint arXiv:2410.04183},
year={2024},
archivePrefix={arXiv},
eprint={2410.04183},
primaryClass={cs.LG cs.CV}
} | basterrech2024unsupervised |
arxiv-666063 | 2410.04184 | Non-monotonic Extensions to Formal Concept Analysis via Object Preferences | <|reference_start|>Non-monotonic Extensions to Formal Concept Analysis via Object Preferences: Formal Concept Analysis (FCA) is an approach to creating a conceptual hierarchy in which a \textit{concept lattice} is generated from a \textit{formal context}. That is, a triple consisting of a set of objects, $G$, a set of attributes, $M$, and an incidence relation $I$ on $G \times M$. A \textit{concept} is then modelled as a pair consisting of a set of objects (the \textit{extent}), and a set of shared attributes (the \textit{intent}). Implications in FCA describe how one set of attributes follows from another. The semantics of these implications closely resemble that of logical consequence in classical logic. In that sense, it describes a monotonic conditional. The contributions of this paper are two-fold. First, we introduce a non-monotonic conditional between sets of attributes, which assumes a preference over the set of objects. We show that this conditional gives rise to a consequence relation that is consistent with the postulates for non-monotonicty proposed by Kraus, Lehmann, and Magidor (commonly referred to as the KLM postulates). We argue that our contribution establishes a strong characterisation of non-monotonicity in FCA. Typical concepts represent concepts where the intent aligns with expectations from the extent, allowing for an exception-tolerant view of concepts. To this end, we show that the set of all typical concepts is a meet semi-lattice of the original concept lattice. This notion of typical concepts is a further introduction of KLM-style typicality into FCA, and is foundational towards developing an algebraic structure representing a concept lattice of prototypical concepts.<|reference_end|> | arxiv | @article{carr2024non-monotonic,
title={Non-monotonic Extensions to Formal Concept Analysis via Object
Preferences},
author={Lucas Carr, Nicholas Leisegang, Thomas Meyer, Sebastian Rudolph},
journal={arXiv preprint arXiv:2410.04184},
year={2024},
archivePrefix={arXiv},
eprint={2410.04184},
primaryClass={cs.LO cs.AI}
} | carr2024non-monotonic |
arxiv-666064 | 2410.04188 | DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | <|reference_start|>DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech: Dementia is a sensitive neurocognitive disorder affecting tens of millions of people worldwide and its cases are expected to triple by 2050. Alarmingly, recent advancements in dementia classification make it possible for adversaries to violate affected individuals' privacy and infer their sensitive condition from speech transcriptions. Existing obfuscation methods in text have never been applied for dementia and depend on the availability of large labeled datasets which are challenging to collect for sensitive medical attributes. In this work, we bridge this research gap and tackle the above issues by leveraging Large-Language-Models (LLMs) with diverse prompt designs (zero-shot, few-shot, and knowledge-based) to obfuscate dementia in speech transcripts. Our evaluation shows that LLMs are more effective dementia obfuscators compared to competing methods. However, they have billions of parameters which renders them hard to train, store and share, and they are also fragile suffering from hallucination, refusal and contradiction effects among others. To further mitigate these, we propose a novel method, DiDOTS. DiDOTS distills knowledge from LLMs using a teacher-student paradigm and parameter-efficient fine-tuning. DiDOTS has one order of magnitude fewer parameters compared to its teacher LLM and can be fine-tuned using three orders of magnitude less parameters compared to full fine-tuning. Our evaluation shows that compared to prior work DiDOTS retains the performance of LLMs achieving 1.3x and 2.2x improvement in privacy performance on two datasets, while humans rate it as better in preserving utility even when compared to state-of-the-art paraphrasing models.<|reference_end|> | arxiv | @article{woszczyk2024didots:,
title={DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia
Obfuscation in Transcribed Speech},
author={Dominika Woszczyk and Soteris Demetriou},
journal={arXiv preprint arXiv:2410.04188},
year={2024},
archivePrefix={arXiv},
eprint={2410.04188},
primaryClass={cs.CL cs.CR}
} | woszczyk2024didots: |
arxiv-666065 | 2410.04190 | Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models | <|reference_start|>Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models: Large Language Models (LLMs) remain vulnerable to jailbreak attacks that bypass their safety mechanisms. Existing attack methods are fixed or specifically tailored for certain models and cannot flexibly adjust attack strength, which is critical for generalization when attacking models of various sizes. We introduce a novel scalable jailbreak attack that preempts the activation of an LLM's safety policies by occupying its computational resources. Our method involves engaging the LLM in a resource-intensive preliminary task - a Character Map lookup and decoding process - before presenting the target instruction. By saturating the model's processing capacity, we prevent the activation of safety protocols when processing the subsequent instruction. Extensive experiments on state-of-the-art LLMs demonstrate that our method achieves a high success rate in bypassing safety measures without requiring gradient access or manual prompt engineering. We verified that our approach offers a scalable attack that quantifies attack strength and adapts to different model scales at the optimal strength. We show that safety policies of LLMs might be more susceptible to resource constraints. Our findings reveal a critical vulnerability in current LLM safety designs, highlighting the need for more robust defense strategies that account for resource-intensive conditions.<|reference_end|> | arxiv | @article{dong2024harnessing,
title={Harnessing Task Overload for Scalable Jailbreak Attacks on Large
Language Models},
author={Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng},
journal={arXiv preprint arXiv:2410.04190},
year={2024},
archivePrefix={arXiv},
eprint={2410.04190},
primaryClass={cs.CR cs.CL}
} | dong2024harnessing |
arxiv-666066 | 2410.04191 | Accelerating Diffusion Models with One-to-Many Knowledge Distillation | <|reference_start|>Accelerating Diffusion Models with One-to-Many Knowledge Distillation: Significant advancements in image generation have been made with diffusion models. Nevertheless, when contrasted with previous generative models, diffusion models face substantial computational overhead, leading to failure in real-time generation. Recent approaches have aimed to accelerate diffusion models by reducing the number of sampling steps through improved sampling techniques or step distillation. However, the methods to diminish the computational cost for each timestep remain a relatively unexplored area. Observing the fact that diffusion models exhibit varying input distributions and feature distributions at different timesteps, we introduce one-to-many knowledge distillation (O2MKD), which distills a single teacher diffusion model into multiple student diffusion models, where each student diffusion model is trained to learn the teacher's knowledge for a subset of continuous timesteps. Experiments on CIFAR10, LSUN Church, CelebA-HQ with DDPM and COCO30K with Stable Diffusion show that O2MKD can be applied to previous knowledge distillation and fast sampling methods to achieve significant acceleration. Codes will be released in Github.<|reference_end|> | arxiv | @article{zhang2024accelerating,
title={Accelerating Diffusion Models with One-to-Many Knowledge Distillation},
author={Linfeng Zhang, Kaisheng Ma},
journal={arXiv preprint arXiv:2410.04191},
year={2024},
archivePrefix={arXiv},
eprint={2410.04191},
primaryClass={cs.CV cs.AI}
} | zhang2024accelerating |
arxiv-666067 | 2410.04193 | Parametric Taylor series based latent dynamics identification neural networks | <|reference_start|>Parametric Taylor series based latent dynamics identification neural networks: Numerically solving parameterised partial differential equations (P-PDEs) is highly practical yet computationally expensive, driving the development of reduced-order models (ROMs). Recently, methods that combine latent space identification techniques with deep learning algorithms (e.g., autoencoders) have shown great potential in describing the dynamical system in the lower dimensional latent space, for example, LaSDI, gLaSDI and GPLaSDI. In this paper, a new parametric latent identification of nonlinear dynamics neural networks, P-TLDINets, is introduced, which relies on a novel neural network structure based on Taylor series expansion and ResNets to learn the ODEs that govern the reduced space dynamics. During the training process, Taylor series-based Latent Dynamic Neural Networks (TLDNets) and identified equations are trained simultaneously to generate a smoother latent space. In order to facilitate the parameterised study, a $k$-nearest neighbours (KNN) method based on an inverse distance weighting (IDW) interpolation scheme is introduced to predict the identified ODE coefficients using local information. Compared to other latent dynamics identification methods based on autoencoders, P-TLDINets retain the interpretability of the model. Additionally, they circumvent the building of explicit autoencoders, avoid dependency on specific grids, and feature a more lightweight structure, which is easy to train with high generalisation capability and accuracy. They are also capable of using meshes of different scales. P-TLDINets improve training speeds by nearly a hundred times compared to GPLaSDI and gLaSDI, while maintaining an $L_2$ error below $2\%$ compared to high-fidelity models.<|reference_end|> | arxiv | @article{lin2024parametric,
title={Parametric Taylor series based latent dynamics identification neural
networks},
author={Xinlei Lin and Dunhui Xiao},
journal={arXiv preprint arXiv:2410.04193},
year={2024},
archivePrefix={arXiv},
eprint={2410.04193},
primaryClass={cs.LG cs.NE math.DS}
} | lin2024parametric |
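The entry above (lin2024parametric) predicts identified ODE coefficients at new parameter values with a k-nearest-neighbours scheme using inverse distance weighting (IDW). The following is a generic NumPy sketch of KNN-IDW interpolation under assumed array shapes and an assumed power parameter; it is not the paper's implementation.

```python
import numpy as np

def idw_knn_interpolate(params, coeffs, query, k=3, power=2.0, eps=1e-12):
    """Interpolate coefficient vectors at `query` from the k nearest
    training parameters using inverse distance weighting.

    params : (N, d) training parameter values
    coeffs : (N, c) coefficient vectors identified at each parameter
    query  : (d,)   new parameter value
    """
    dists = np.linalg.norm(params - query, axis=1)   # (N,) distances to query
    nearest = np.argsort(dists)[:k]                  # indices of the k nearest
    w = 1.0 / (dists[nearest] ** power + eps)        # inverse-distance weights
    w /= w.sum()
    return w @ coeffs[nearest]                       # weighted average, shape (c,)

# Toy usage: 1-D parameter, 2 coefficients per parameter (illustrative only).
params = np.array([[0.0], [1.0], [2.0], [3.0]])
coeffs = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 2.0], [4.0, 3.0]])
print(idw_knn_interpolate(params, coeffs, np.array([1.2])))
```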
arxiv-666068 | 2410.04194 | Consistent Autoformalization for Constructing Mathematical Libraries | <|reference_start|>Consistent Autoformalization for Constructing Mathematical Libraries: Autoformalization is the task of automatically translating mathematical content written in natural language to a formal language expression. The growing language interpretation capabilities of Large Language Models (LLMs), including in formal languages, are lowering the barriers for autoformalization. However, LLMs alone are not capable of consistently and reliably delivering autoformalization, in particular as the complexity and specialization of the target domain grows. As the field evolves in the direction of systematically applying autoformalization towards large mathematical libraries, the need to improve syntactic, terminological and semantic control increases. This paper proposes the coordinated use of three mechanisms: most-similar retrieval augmented generation (MS-RAG), denoising steps, and auto-correction with syntax error feedback (Auto-SEF), to improve autoformalization quality. The empirical analysis, across different models, demonstrates that these mechanisms can deliver autoformalization results which are syntactically, terminologically and semantically more consistent. These mechanisms can be applied across different LLMs and have been shown to deliver improved results across different model types.<|reference_end|> | arxiv | @article{zhang2024consistent,
title={Consistent Autoformalization for Constructing Mathematical Libraries},
author={Lan Zhang, Xin Quan, Andre Freitas},
journal={arXiv preprint arXiv:2410.04194},
year={2024},
archivePrefix={arXiv},
eprint={2410.04194},
primaryClass={cs.CL}
} | zhang2024consistent |
arxiv-666069 | 2410.04195 | LLMTemporalComparator: A Tool for Analysing Differences in Temporal Adaptations of Large Language Models | <|reference_start|>LLMTemporalComparator: A Tool for Analysing Differences in Temporal Adaptations of Large Language Models: This study addresses the challenges of analyzing temporal discrepancies in large language models (LLMs) trained on data from different time periods. To facilitate the automatic exploration of these differences, we propose a novel system that compares in a systematic way the outputs of two LLM versions based on user-defined queries. The system first generates a hierarchical topic structure rooted in a user-specified keyword, allowing for an organized comparison of topical categories. Subsequently, it evaluates the generated text by both LLMs to identify differences in vocabulary, information presentation, and underlying themes. This fully automated approach not only streamlines the identification of shifts in public opinion and cultural norms but also enhances our understanding of the adaptability and robustness of machine learning applications in response to temporal changes. By fostering research in continual model adaptation and comparative summarization, this work contributes to the development of more transparent machine learning models capable of capturing the nuances of evolving societal contexts.<|reference_end|> | arxiv | @article{fritsch2024llmtemporalcomparator:,
title={LLMTemporalComparator: A Tool for Analysing Differences in Temporal
Adaptations of Large Language Models},
author={Reinhard Friedrich Fritsch and Adam Jatowt},
journal={arXiv preprint arXiv:2410.04195},
year={2024},
archivePrefix={arXiv},
eprint={2410.04195},
primaryClass={cs.IR}
} | fritsch2024llmtemporalcomparator: |
arxiv-666070 | 2410.04196 | Improving Generalization with Flat Hilbert Bayesian Inference | <|reference_start|>Improving Generalization with Flat Hilbert Bayesian Inference: We introduce Flat Hilbert Bayesian Inference (FHBI), an algorithm designed to enhance generalization in Bayesian inference. Our approach involves an iterative two-step procedure with an adversarial functional perturbation step and a functional descent step within the reproducing kernel Hilbert spaces. This methodology is supported by a theoretical analysis that extends previous findings on generalization ability from finite-dimensional Euclidean spaces to infinite-dimensional functional spaces. To evaluate the effectiveness of FHBI, we conduct comprehensive comparisons against seven baseline methods on the VTAB-1K benchmark, which encompasses 19 diverse datasets across various domains with diverse semantics. Empirical results demonstrate that FHBI consistently outperforms the baselines by notable margins, highlighting its practical efficacy.<|reference_end|> | arxiv | @article{truong2024improving,
title={Improving Generalization with Flat Hilbert Bayesian Inference},
author={Tuan Truong, Quyen Tran, Quan Pham-Ngoc, Nhat Ho, Dinh Phung, Trung Le},
journal={arXiv preprint arXiv:2410.04196},
year={2024},
archivePrefix={arXiv},
eprint={2410.04196},
primaryClass={cs.LG stat.ML}
} | truong2024improving |
arxiv-666071 | 2410.04197 | CS4: Measuring the Creativity of Large Language Models Automatically by Controlling the Number of Story-Writing Constraints | <|reference_start|>CS4: Measuring the Creativity of Large Language Models Automatically by Controlling the Number of Story-Writing Constraints: Evaluating the creativity of large language models (LLMs) in story writing is difficult because LLM-generated stories could seemingly look creative but be very similar to some existing stories in their huge and proprietary training corpus. To overcome this challenge, we introduce a novel benchmark dataset with varying levels of prompt specificity: CS4 ($\mathbf{C}$omparing the $\mathbf{S}$kill of $\mathbf{C}$reating $\mathbf{S}$tories by $\mathbf{C}$ontrolling the $\mathbf{S}$ynthesized $\mathbf{C}$onstraint $\mathbf{S}$pecificity). By increasing the number of requirements/constraints in the prompt, we can increase the prompt specificity and hinder LLMs from retelling high-quality narratives in their training data. Consequently, CS4 empowers us to indirectly measure the LLMs' creativity without human annotations. Our experiments on LLaMA, Gemma, and Mistral not only highlight the creativity challenges LLMs face when dealing with highly specific prompts but also reveal that different LLMs perform very differently under different numbers of constraints and achieve different balances between the model's instruction-following ability and narrative coherence. Additionally, our experiments on OLMo suggest that Learning from Human Feedback (LHF) can help LLMs select better stories from their training data but has limited influence in boosting LLMs' ability to produce creative stories that are unseen in the training corpora. The benchmark is released at https://github.com/anirudhlakkaraju/cs4_benchmark.<|reference_end|> | arxiv | @article{atmakuru2024cs4:,
title={CS4: Measuring the Creativity of Large Language Models Automatically by
Controlling the Number of Story-Writing Constraints},
author={Anirudh Atmakuru, Jatin Nainani, Rohith Siddhartha Reddy Bheemreddy,
Anirudh Lakkaraju, Zonghai Yao, Hamed Zamani, Haw-Shiuan Chang},
journal={arXiv preprint arXiv:2410.04197},
year={2024},
archivePrefix={arXiv},
eprint={2410.04197},
primaryClass={cs.CL}
} | atmakuru2024cs4: |
arxiv-666072 | 2410.04199 | LongGenBench: Long-context Generation Benchmark | <|reference_start|>LongGenBench: Long-context Generation Benchmark: Current long-context benchmarks primarily focus on retrieval-based tests, requiring Large Language Models (LLMs) to locate specific information within extensive input contexts, such as the needle-in-a-haystack (NIAH) benchmark. Long-context generation refers to the ability of a language model to generate coherent and contextually accurate text that spans across lengthy passages or documents. While recent studies show strong performance on NIAH and other retrieval-based long-context benchmarks, there is a significant lack of benchmarks for evaluating long-context generation capabilities. To bridge this gap and offer a comprehensive assessment, we introduce a synthetic benchmark, LongGenBench, which allows for flexible configurations of customized generation context lengths. LongGenBench advances beyond traditional benchmarks by redesigning the format of questions and necessitating that LLMs respond with a single, cohesive long-context answer. Upon extensive evaluation using LongGenBench, we observe that: (1) both API accessed and open source models exhibit performance degradation in long-context generation scenarios, ranging from 1.2% to 47.1%; (2) different series of LLMs exhibit varying trends of performance degradation, with the Gemini-1.5-Flash model showing the least degradation among API accessed models, and the Qwen2 series exhibiting the least degradation in LongGenBench among open source models.<|reference_end|> | arxiv | @article{liu2024longgenbench:,
title={LongGenBench: Long-context Generation Benchmark},
author={Xiang Liu, Peijie Dong, Xuming Hu, Xiaowen Chu},
journal={arXiv preprint arXiv:2410.04199},
year={2024},
archivePrefix={arXiv},
eprint={2410.04199},
primaryClass={cs.CL cs.AI}
} | liu2024longgenbench: |
arxiv-666073 | 2410.04201 | IT$^3$: Idempotent Test-Time Training | <|reference_start|>IT$^3$: Idempotent Test-Time Training: This paper introduces Idempotent Test-Time Training (IT$^3$), a novel approach to addressing the challenge of distribution shift. While supervised-learning methods assume matching train and test distributions, this is rarely the case for machine learning systems deployed in the real world. Test-Time Training (TTT) approaches address this by adapting models during inference, but they are limited by a domain specific auxiliary task. IT$^3$ is based on the universal property of idempotence. An idempotent operator is one that can be applied sequentially without changing the result beyond the initial application, that is $f(f(x))=f(x)$. At training, the model receives an input $x$ along with another signal that can either be the ground truth label $y$ or a neutral "don't know" signal $0$. At test time, the additional signal can only be $0$. When sequentially applying the model, first predicting $y_0 = f(x, 0)$ and then $y_1 = f(x, y_0)$, the distance between $y_0$ and $y_1$ measures certainty and indicates out-of-distribution input $x$ if high. We use this distance, that can be expressed as $||f(x, f(x, 0)) - f(x, 0)||$ as our TTT loss during inference. By carefully optimizing this objective, we effectively train $f(x,\cdot)$ to be idempotent, projecting the internal representation of the input onto the training distribution. We demonstrate the versatility of our approach across various tasks, including corrupted image classification, aerodynamic predictions, tabular data with missing information, age prediction from face, and large-scale aerial photo segmentation. Moreover, these tasks span different architectures such as MLPs, CNNs, and GNNs.<|reference_end|> | arxiv | @article{durasov2024it$^3$:,
title={IT$^3$: Idempotent Test-Time Training},
author={Nikita Durasov, Assaf Shocher, Doruk Oner, Gal Chechik, Alexei A.
Efros, Pascal Fua},
journal={arXiv preprint arXiv:2410.04201},
year={2024},
archivePrefix={arXiv},
eprint={2410.04201},
primaryClass={cs.CV}
} | durasov2024it$^3$: |
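The entry above (durasov2024it$^3$) states its test-time objective explicitly as $||f(x, f(x, 0)) - f(x, 0)||$. The PyTorch sketch below shows one way such a step could look; the toy model, the zero-tensor encoding of the "don't know" signal, and the optimiser settings are assumptions for illustration rather than the authors' code.

```python
import torch
import torch.nn as nn

class ToyIdempotentNet(nn.Module):
    """Illustrative f(x, y): the input is concatenated with a label signal y,
    where an all-zero y plays the role of the 'don't know' signal."""
    def __init__(self, in_dim=4, out_dim=2, hidden=32):
        super().__init__()
        self.out_dim = out_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim + out_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def it3_test_time_step(model, x, optimizer):
    """One test-time step on the objective ||f(x, f(x, 0)) - f(x, 0)||.
    Whether gradients are stopped through y0 is a design choice the abstract
    does not specify; here gradients flow through both passes."""
    zero = torch.zeros(x.shape[0], model.out_dim)
    y0 = model(x, zero)                 # first pass with the "don't know" signal
    y1 = model(x, y0)                   # second pass conditioned on y0
    loss = torch.linalg.vector_norm(y1 - y0, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return y0.detach(), loss.item()     # a large loss suggests out-of-distribution x

model = ToyIdempotentNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(8, 4)
pred, ttt_loss = it3_test_time_step(model, x, opt)
```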
arxiv-666074 | 2410.04202 | Deep Transfer Learning Based Peer Review Aggregation and Meta-review Generation for Scientific Articles | <|reference_start|>Deep Transfer Learning Based Peer Review Aggregation and Meta-review Generation for Scientific Articles: Peer review is the quality assessment of a manuscript by one or more peer experts. Papers are submitted by the authors to scientific venues, and these papers must be reviewed by peers or other authors. The meta-reviewers then gather the peer reviews, assess them, and create a meta-review and decision for each manuscript. As the number of papers submitted to these venues has grown in recent years, it becomes increasingly challenging for meta-reviewers to collect these peer evaluations on time while still maintaining the quality that is the primary goal of meta-review creation. In this paper, we address two peer review aggregation challenges a meta-reviewer faces: paper acceptance decision-making and meta-review generation. Firstly, we propose to automate the process of acceptance decision prediction by applying traditional machine learning algorithms. We use pre-trained word embedding techniques BERT to process the reviews written in natural language text. For the meta-review generation, we propose a transfer learning model based on the T5 model. Experimental results show that BERT is more effective than the other word embedding techniques, and the recommendation score is an important feature for the acceptance decision prediction. In addition, we figure out that fine-tuned T5 outperforms other inference models. Our proposed system takes peer reviews and other relevant features as input to produce a meta-review and make a judgment on whether or not the paper should be accepted. In addition, experimental results show that the acceptance decision prediction system of our task outperforms the existing models, and the meta-review generation task shows significantly improved scores compared to the existing models. For the statistical test, we utilize the Wilcoxon signed-rank test to assess whether there is a statistically significant improvement between paired observations.<|reference_end|> | arxiv | @article{hasan2024deep,
title={Deep Transfer Learning Based Peer Review Aggregation and Meta-review
Generation for Scientific Articles},
author={Md. Tarek Hasan, Mohammad Nazmush Shamael, H. M. Mutasim Billah, Arifa
Akter, Md Al Emran Hossain, Sumayra Islam, Salekul Islam, Swakkhar Shatabda},
journal={arXiv preprint arXiv:2410.04202},
year={2024},
archivePrefix={arXiv},
eprint={2410.04202},
primaryClass={cs.LG}
} | hasan2024deep |
arxiv-666075 | 2410.04203 | RainbowPO: A Unified Framework for Combining Improvements in Preference Optimization | <|reference_start|>RainbowPO: A Unified Framework for Combining Improvements in Preference Optimization: Recently, numerous preference optimization algorithms have been introduced as extensions to the Direct Preference Optimization (DPO) family. While these methods have successfully aligned models with human preferences, there is a lack of understanding regarding the contributions of their additional components. Moreover, fair and consistent comparisons are scarce, making it difficult to discern which components genuinely enhance downstream performance. In this work, we propose RainbowPO, a unified framework that demystifies the effectiveness of existing DPO methods by categorizing their key components into seven broad directions. We integrate these components into a single cohesive objective, enhancing the performance of each individual element. Through extensive experiments, we demonstrate that RainbowPO outperforms existing DPO variants. Additionally, we provide insights to guide researchers in developing new DPO methods and assist practitioners in their implementations.<|reference_end|> | arxiv | @article{zhao2024rainbowpo:,
title={RainbowPO: A Unified Framework for Combining Improvements in Preference
Optimization},
author={Hanyang Zhao, Genta Indra Winata, Anirban Das, Shi-Xiong Zhang, David
D. Yao, Wenpin Tang, Sambit Sahu},
journal={arXiv preprint arXiv:2410.04203},
year={2024},
archivePrefix={arXiv},
eprint={2410.04203},
primaryClass={cs.AI}
} | zhao2024rainbowpo: |
arxiv-666076 | 2410.04205 | Exploring Strengths and Weaknesses of Super-Resolution Attack in Deepfake Detection | <|reference_start|>Exploring Strengths and Weaknesses of Super-Resolution Attack in Deepfake Detection: Image manipulation is rapidly evolving, allowing the creation of credible content that can be used to bend reality. Although the results of deepfake detectors are promising, deepfakes can be made even more complicated to detect through adversarial attacks. They aim to further manipulate the image to camouflage deepfakes' artifacts or to insert signals making the image appear pristine. In this paper, we further explore the potential of super-resolution attacks based on different super-resolution techniques and with different scales that can impact the performance of deepfake detectors with more or less intensity. We also evaluated the impact of the attack on more diverse datasets discovering that the super-resolution process is effective in hiding the artifacts introduced by deepfake generation models but fails in hiding the traces contained in fully synthetic images. Finally, we propose some changes to the detectors' training process to improve their robustness to this kind of attack.<|reference_end|> | arxiv | @article{coccomini2024exploring,
title={Exploring Strengths and Weaknesses of Super-Resolution Attack in
Deepfake Detection},
author={Davide Alessandro Coccomini, Roberto Caldelli, Fabrizio Falchi,
Claudio Gennaro, Giuseppe Amato},
journal={arXiv preprint arXiv:2410.04205},
year={2024},
archivePrefix={arXiv},
eprint={2410.04205},
primaryClass={cs.CV eess.IV}
} | coccomini2024exploring |
arxiv-666077 | 2410.04207 | Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces for Large Finetuned Models | <|reference_start|>Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces for Large Finetuned Models: Low-rank adaptations (LoRAs) have revolutionized the finetuning of large foundation models, enabling efficient adaptation even with limited computational resources. The resulting proliferation of LoRAs presents exciting opportunities for applying machine learning techniques that take these low-rank weights themselves as inputs. In this paper, we investigate the potential of Learning on LoRAs (LoL), a paradigm where LoRA weights serve as input to machine learning models. For instance, an LoL model that takes in LoRA weights as inputs could predict the performance of the finetuned model on downstream tasks, detect potentially harmful finetunes, or even generate novel model edits without traditional training methods. We first identify the inherent parameter symmetries of low rank decompositions of weights, which differ significantly from the parameter symmetries of standard neural networks. To efficiently process LoRA weights, we develop several symmetry-aware invariant or equivariant LoL models, using tools such as canonicalization, invariant featurization, and equivariant layers. We finetune thousands of text-to-image diffusion models and language models to collect datasets of LoRAs. In numerical experiments on these datasets, we show that our LoL architectures are capable of processing low rank weight decompositions to predict CLIP score, finetuning data attributes, finetuning data membership, and accuracy on downstream tasks.<|reference_end|> | arxiv | @article{putterman2024learning,
title={Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces
for Large Finetuned Models},
author={Theo Putterman, Derek Lim, Yoav Gelberg, Stefanie Jegelka, Haggai
Maron},
journal={arXiv preprint arXiv:2410.04207},
year={2024},
archivePrefix={arXiv},
eprint={2410.04207},
primaryClass={cs.LG stat.ML}
} | putterman2024learning |
arxiv-666078 | 2410.04208 | Assessing the Impact of Disorganized Background Noise on Timed Stress Task Performance Through Attention Using Machine-Learning Based Eye-Tracking Techniques | <|reference_start|>Assessing the Impact of Disorganized Background Noise on Timed Stress Task Performance Through Attention Using Machine-Learning Based Eye-Tracking Techniques: Noise pollution has been rising alongside urbanization. Literature shows that disorganized background noise decreases attention. Timed testing, an attention-demanding stress task, has become increasingly important in assessing students' academic performance. However, there is insufficient research on how background noise affects performance in timed stress tasks by impacting attention, which this study aims to address. The paper-based SAT math test under increased time pressure was administered twice: once in silence and once with conversational and traffic background noise. Attention is negatively associated with blink rate, which is measured using eye landmarks from dlib's machine-learning facial-detection model. First, the study affirms that background noise impairs attention and performance. Attention, through blink rate, is established as an indicator of stress task performance. Second, the study finds that participants whose blink rates increased due to background noise differed in performance compared to those whose blink rates decreased, possibly correlating with their self-perception of noise's impact on attention. Third, using a case study, the study finds that a student with ADHD had enhanced performance and attention from background noise. Fourth, the study finds that although both groups began with similar blink rates, the group exposed to noise had significantly increased blink rate near the end, indicating that noise reduces attention over time. While schools can generally provide quiet settings for timed stress tasks, the study recommends personalized treatments for students based on how noise affects them. Future research can use different attention indices to consolidate this study's findings or conduct this study with different background noises.<|reference_end|> | arxiv | @article{huang2024assessing,
title={Assessing the Impact of Disorganized Background Noise on Timed Stress
Task Performance Through Attention Using Machine-Learning Based Eye-Tracking
Techniques},
author={Hubert Huang, Jeffrey Huang},
journal={arXiv preprint arXiv:2410.04208},
year={2024},
archivePrefix={arXiv},
eprint={2410.04208},
primaryClass={cs.CY}
} | huang2024assessing |
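The entry above (huang2024assessing) measures blink rate from eye landmarks produced by dlib's facial landmark model, but does not state how blinks are detected from those landmarks. A common heuristic is the eye-aspect-ratio (EAR); the sketch below computes it from pre-extracted six-point eye landmarks and counts blinks with an assumed threshold, which may differ from the study's actual procedure.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for one eye given six (x, y) landmarks ordered p1..p6
    (outer corner, two upper-lid points, inner corner, two lower-lid points).
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|); it drops sharply when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2, min_consecutive=2):
    """Count blinks as runs of at least `min_consecutive` frames below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_consecutive:
                blinks += 1
            run = 0
    if run >= min_consecutive:
        blinks += 1
    return blinks

# Blink rate is then blinks / recording duration, compared across conditions.
```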
arxiv-666079 | 2410.04209 | Equivariant Neural Functional Networks for Transformers | <|reference_start|>Equivariant Neural Functional Networks for Transformers: This paper systematically explores neural functional networks (NFN) for transformer architectures. NFN are specialized neural networks that treat the weights, gradients, or sparsity patterns of a deep neural network (DNN) as input data and have proven valuable for tasks such as learnable optimizers, implicit data representations, and weight editing. While NFN have been extensively developed for MLP and CNN, no prior work has addressed their design for transformers, despite the importance of transformers in modern deep learning. This paper aims to address this gap by providing a systematic study of NFN for transformers. We first determine the maximal symmetric group of the weights in a multi-head attention module as well as a necessary and sufficient condition under which two sets of hyperparameters of the multi-head attention module define the same function. We then define the weight space of transformer architectures and its associated group action, which leads to the design principles for NFN in transformers. Based on these, we introduce Transformer-NFN, an NFN that is equivariant under this group action. Additionally, we release a dataset of more than 125,000 Transformers model checkpoints trained on two datasets with two different tasks, providing a benchmark for evaluating Transformer-NFN and encouraging further research on transformer training and performance.<|reference_end|> | arxiv | @article{tran2024equivariant,
title={Equivariant Neural Functional Networks for Transformers},
author={Viet-Hoang Tran, Thieu N. Vo, An Nguyen The, Tho Tran Huu, Minh-Khoi
Nguyen-Nhat, Thanh Tran, Duy-Tung Pham, Tan Minh Nguyen},
journal={arXiv preprint arXiv:2410.04209},
year={2024},
archivePrefix={arXiv},
eprint={2410.04209},
primaryClass={cs.LG}
} | tran2024equivariant |
arxiv-666080 | 2410.04211 | Correlation-Aware Select and Merge Attention for Efficient Fine-Tuning and Context Length Extension | <|reference_start|>Correlation-Aware Select and Merge Attention for Efficient Fine-Tuning and Context Length Extension: Modeling long sequences is crucial for various large-scale models; however, extending existing architectures to handle longer sequences presents significant technical and resource challenges. In this paper, we propose an efficient and flexible attention architecture that enables the extension of context lengths in large language models with reduced computational resources and fine-tuning time compared to other excellent methods. Specifically, we introduce correlation-aware selection and merging mechanisms to facilitate efficient sparse attention. In addition, we also propose a novel data augmentation technique involving positional encodings to enhance generalization to unseen positions. The results are as follows: First, using a single A100, we achieve fine-tuning on Llama2-7B with a sequence length of 32K, which is more efficient than other methods that rely on subsets for regression. Second, we present a comprehensive method for extending context lengths across the pre-training, fine-tuning, and inference phases. During pre-training, our attention mechanism partially breaks translation invariance during token selection, so we apply positional encodings only to the selected tokens. This approach achieves relatively high performance and significant extrapolation capabilities. For fine-tuning, we introduce Cyclic, Randomly Truncated, and Dynamically Growing NTK Positional Embedding (CRD NTK). This design allows fine-tuning with a sequence length of only 16K, enabling models such as Llama2-7B and Mistral-7B to perform inference with context lengths of up to 1M or even arbitrary lengths. Our method achieves 100\% accuracy on the passkey task with a context length of 4M and maintains stable perplexity at a 1M context length. This represents at least a 64-fold reduction in resource requirements compared to traditional full-attention mechanisms, while still achieving competitive performance.<|reference_end|> | arxiv | @article{wang2024correlation-aware,
title={Correlation-Aware Select and Merge Attention for Efficient Fine-Tuning
and Context Length Extension},
author={Ning Wang, Zekun Li, Tongxin Bai, Guoqi Li},
journal={arXiv preprint arXiv:2410.04211},
year={2024},
archivePrefix={arXiv},
eprint={2410.04211},
primaryClass={cs.CL cs.AI}
} | wang2024correlation-aware |
arxiv-666081 | 2410.04213 | Equivariant Polynomial Functional Networks | <|reference_start|>Equivariant Polynomial Functional Networks: Neural Functional Networks (NFNs) have gained increasing interest due to their wide range of applications, including extracting information from implicit representations of data, editing network weights, and evaluating policies. A key design principle of NFNs is their adherence to the permutation and scaling symmetries inherent in the connectionist structure of the input neural networks. Recent NFNs have been proposed with permutation and scaling equivariance based on either graph-based message-passing mechanisms or parameter-sharing mechanisms. However, graph-based equivariant NFNs suffer from high memory consumption and long running times. On the other hand, parameter-sharing-based NFNs built upon equivariant linear layers exhibit lower memory consumption and faster running time, yet their expressivity is limited due to the large size of the symmetric group of the input neural networks. The challenge of designing a permutation and scaling equivariant NFN that maintains low memory consumption and running time while preserving expressivity remains unresolved. In this paper, we propose a novel solution with the development of MAGEP-NFN (Monomial mAtrix Group Equivariant Polynomial NFN). Our approach follows the parameter-sharing mechanism but differs from previous works by constructing a nonlinear equivariant layer represented as a polynomial in the input weights. This polynomial formulation enables us to incorporate additional relationships between weights from different input hidden layers, enhancing the model's expressivity while keeping memory consumption and running time low, thereby addressing the aforementioned challenge. We provide empirical evidence demonstrating that MAGEP-NFN achieves competitive performance and efficiency compared to existing baselines.<|reference_end|> | arxiv | @article{vo2024equivariant,
title={Equivariant Polynomial Functional Networks},
author={Thieu N. Vo, Viet-Hoang Tran, Tho Tran Huu, An Nguyen The, Thanh Tran,
Minh-Khoi Nguyen-Nhat, Duy-Tung Pham, Tan Minh Nguyen},
journal={arXiv preprint arXiv:2410.04213},
year={2024},
archivePrefix={arXiv},
eprint={2410.04213},
primaryClass={cs.LG}
} | vo2024equivariant |
arxiv-666082 | 2410.04214 | Boosting Visual Fidelity in Driving Simulations through Diffusion Models | <|reference_start|>Boosting Visual Fidelity in Driving Simulations through Diffusion Models: Diffusion models have made substantial progress in facilitating image generation and editing. As the technology matures, we see its potential in the context of driving simulations to enhance the simulated experience. In this paper, we explore this potential through the introduction of a novel system designed to boost visual fidelity. Our system, DRIVE (Diffusion-based Realism Improvement for Virtual Environments), leverages a diffusion model pipeline to give a simulated environment a photorealistic view, with the flexibility to be adapted for other applications. We conducted a preliminary user study to assess the system's effectiveness in rendering realistic visuals and supporting participants in performing driving tasks. Our work not only lays the groundwork for future research on the integration of diffusion models in driving simulations but also provides practical guidelines and best practices for their application in this context.<|reference_end|> | arxiv | @article{bu2024boosting,
title={Boosting Visual Fidelity in Driving Simulations through Diffusion Models},
author={Fanjun Bu, Hiroshi Yasuda},
journal={arXiv preprint arXiv:2410.04214},
year={2024},
archivePrefix={arXiv},
eprint={2410.04214},
primaryClass={cs.HC}
} | bu2024boosting |
arxiv-666083 | 2410.04216 | A class of ternary codes with few weights | <|reference_start|>A class of ternary codes with few weights: Let $\ell^m$ be a power with $\ell$ a prime greater than $3$ and $m$ a positive integer such that $3$ is a primitive root modulo $2\ell^m$. Let $\mathbb{F}_3$ be the finite field of order $3$, and let $\mathbb{F}$ be the $\ell^{m-1}(\ell-1)$-th extension field of $\mathbb{F}_3$. Denote by $\text{Tr}$ the absolute trace map from $\mathbb{F}$ to $\mathbb{F}_3$. For any $\alpha \in \mathbb{F}_3$ and $\beta \in\mathbb{F}$, let $D$ be the set of nonzero solutions in $\mathbb{F}$ to the equation $\text{Tr}(x^{\frac{q-1}{2\ell^m}} + \beta x) = \alpha$. In this paper, we investigate a ternary code $\mathcal{C}$ of length $n$, defined by $\mathcal{C} := \{(\text{Tr}(d_1x), \text{Tr}(d_2x), \dots, \text{Tr}(d_nx)) : x \in \mathbb{F}\}$ when we rewrite $D = \{d_1, d_2, \dots, d_n\}$. Using recent results on explicit evaluations of exponential sums, the Weil bound, and combinatorial techniques, we determine the Hamming weight distribution of the code $\mathcal{C}$. Furthermore, we show that when $\alpha = \beta =0$, the dual code of $\mathcal{C}$ is optimal with respect to the Hamming bound.<|reference_end|> | arxiv | @article{cheng2024a,
title={A class of ternary codes with few weights},
author={Kaimin Cheng},
journal={arXiv preprint arXiv:2410.04216},
year={2024},
archivePrefix={arXiv},
eprint={2410.04216},
primaryClass={cs.CR math.NT}
} | cheng2024a |
arxiv-666084 | 2410.04217 | Improving Portfolio Optimization Results with Bandit Networks | <|reference_start|>Improving Portfolio Optimization Results with Bandit Networks: In Reinforcement Learning (RL), multi-armed Bandit (MAB) problems have found applications across diverse domains such as recommender systems, healthcare, and finance. Traditional MAB algorithms typically assume stationary reward distributions, which limits their effectiveness in real-world scenarios characterized by non-stationary dynamics. This paper addresses this limitation by introducing and evaluating novel Bandit algorithms designed for non-stationary environments. First, we present the Adaptive Discounted Thompson Sampling (ADTS) algorithm, which enhances adaptability through relaxed discounting and sliding window mechanisms to better respond to changes in reward distributions. We then extend this approach to the Portfolio Optimization problem by introducing the Combinatorial Adaptive Discounted Thompson Sampling (CADTS) algorithm, which addresses computational challenges within Combinatorial Bandits and improves dynamic asset allocation. Additionally, we propose a novel architecture called Bandit Networks, which integrates the outputs of ADTS and CADTS, thereby mitigating computational limitations in stock selection. Through extensive experiments using real financial market data, we demonstrate the potential of these algorithms and architectures in adapting to dynamic environments and optimizing decision-making processes. For instance, the proposed bandit network instances present superior performance when compared to classic portfolio optimization approaches, such as the capital asset pricing model, equal weights, risk parity, and Markowitz, with the best network presenting an out-of-sample Sharpe Ratio 20% higher than the best performing classical model.<|reference_end|> | arxiv | @article{fonseca2024improving,
title={Improving Portfolio Optimization Results with Bandit Networks},
  author={Gustavo de Freitas Fonseca, Lucas Coelho e Silva, and Paulo André
Lima de Castro},
journal={arXiv preprint arXiv:2410.04217},
year={2024},
archivePrefix={arXiv},
eprint={2410.04217},
primaryClass={cs.AI q-fin.PM}
} | fonseca2024improving |
arxiv-666085 | 2410.04218 | Quantum Kolmogorov-Arnold networks by combining quantum signal processing circuits | <|reference_start|>Quantum Kolmogorov-Arnold networks by combining quantum signal processing circuits: In this paper, we show that an equivalent implementation of KAN can be done on quantum computers by simply combining quantum signal processing circuits in layers. This provides a powerful and robust path for the applications of KAN on quantum computers.<|reference_end|> | arxiv | @article{daskin2024quantum,
title={Quantum Kolmogorov-Arnold networks by combining quantum signal
processing circuits},
author={Ammar Daskin},
journal={arXiv preprint arXiv:2410.04218},
year={2024},
archivePrefix={arXiv},
eprint={2410.04218},
primaryClass={quant-ph cs.LG}
} | daskin2024quantum |
arxiv-666086 | 2410.04221 | TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio Motion Embedding and Diffusion Interpolation | <|reference_start|>TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio Motion Embedding and Diffusion Interpolation: We present TANGO, a framework for generating co-speech body-gesture videos. Given a few-minute, single-speaker reference video and target speech audio, TANGO produces high-fidelity videos with synchronized body gestures. TANGO builds on Gesture Video Reenactment (GVR), which splits and retrieves video clips using a directed graph structure - representing video frames as nodes and valid transitions as edges. We address two key limitations of GVR: audio-motion misalignment and visual artifacts in GAN-generated transition frames. In particular, (i) we propose retrieving gestures using latent feature distance to improve cross-modal alignment. To ensure the latent features could effectively model the relationship between speech audio and gesture motion, we implement a hierarchical joint embedding space (AuMoCLIP); (ii) we introduce the diffusion-based model to generate high-quality transition frames. Our diffusion model, Appearance Consistent Interpolation (ACInterp), is built upon AnimateAnyone and includes a reference motion module and homography background flow to preserve appearance consistency between generated and reference videos. By integrating these components into the graph-based retrieval framework, TANGO reliably produces realistic, audio-synchronized videos and outperforms all existing generative and retrieval methods. Our codes and pretrained models are available: \url{https://pantomatrix.github.io/TANGO/}<|reference_end|> | arxiv | @article{liu2024tango:,
title={TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio
Motion Embedding and Diffusion Interpolation},
author={Haiyang Liu, Xingchao Yang, Tomoya Akiyama, Yuantian Huang, Qiaoge Li,
Shigeru Kuriyama, Takafumi Taketomi},
journal={arXiv preprint arXiv:2410.04221},
year={2024},
archivePrefix={arXiv},
eprint={2410.04221},
primaryClass={cs.CV}
} | liu2024tango: |
arxiv-666087 | 2410.04223 | Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning | <|reference_start|>Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning: While large language models (LLMs) have integrated images, adapting them to graphs remains challenging, limiting their applications in materials and drug design. This difficulty stems from the need for coherent autoregressive generation across texts and graphs. To address this, we introduce Llamole, the first multimodal LLM capable of interleaved text and graph generation, enabling molecular inverse design with retrosynthetic planning. Llamole integrates a base LLM with the Graph Diffusion Transformer and Graph Neural Networks for multi-conditional molecular generation and reaction inference within texts, while the LLM, with enhanced molecular understanding, flexibly controls activation among the different graph modules. Additionally, Llamole integrates A* search with LLM-based cost functions for efficient retrosynthetic planning. We create benchmarking datasets and conduct extensive experiments to evaluate Llamole against in-context learning and supervised fine-tuning. Llamole significantly outperforms 14 adapted LLMs across 12 metrics for controllable molecular design and retrosynthetic planning.<|reference_end|> | arxiv | @article{liu2024multimodal,
title={Multimodal Large Language Models for Inverse Molecular Design with
Retrosynthetic Planning},
author={Gang Liu, Michael Sun, Wojciech Matusik, Meng Jiang, Jie Chen},
journal={arXiv preprint arXiv:2410.04223},
year={2024},
archivePrefix={arXiv},
eprint={2410.04223},
primaryClass={cs.LG physics.chem-ph q-bio.BM}
} | liu2024multimodal |
arxiv-666088 | 2410.04224 | Distillation-Free One-Step Diffusion for Real-World Image Super-Resolution | <|reference_start|>Distillation-Free One-Step Diffusion for Real-World Image Super-Resolution: Diffusion models have been achieving excellent performance for real-world image super-resolution (Real-ISR) with considerable computational costs. Current approaches are trying to derive one-step diffusion models from multi-step counterparts through knowledge distillation. However, these methods incur substantial training costs and may constrain the performance of the student model by the teacher's limitations. To tackle these issues, we propose DFOSD, a Distillation-Free One-Step Diffusion model. Specifically, we propose a noise-aware discriminator (NAD) to participate in adversarial training, further enhancing the authenticity of the generated content. Additionally, we improve the perceptual loss with edge-aware DISTS (EA-DISTS) to enhance the model's ability to generate fine details. Our experiments demonstrate that, compared with previous diffusion-based methods requiring dozens or even hundreds of steps, our DFOSD attains comparable or even superior results in both quantitative metrics and qualitative evaluations. Our DFOSD also achieves higher performance and efficiency compared with other one-step diffusion methods. We will release code and models at https://github.com/JianzeLi-114/DFOSD.<|reference_end|> | arxiv | @article{li2024distillation-free,
title={Distillation-Free One-Step Diffusion for Real-World Image
Super-Resolution},
author={Jianze Li, Jiezhang Cao, Zichen Zou, Xiongfei Su, Xin Yuan, Yulun
Zhang, Yong Guo, Xiaokang Yang},
journal={arXiv preprint arXiv:2410.04224},
year={2024},
archivePrefix={arXiv},
eprint={2410.04224},
primaryClass={cs.CV}
} | li2024distillation-free |
arxiv-666089 | 2410.04225 | AIM 2024 Challenge on Video Super-Resolution Quality Assessment: Methods and Results | <|reference_start|>AIM 2024 Challenge on Video Super-Resolution Quality Assessment: Methods and Results: This paper presents the Video Super-Resolution (SR) Quality Assessment (QA) Challenge that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024. The task of this challenge was to develop an objective QA method for videos upscaled 2x and 4x by modern image- and video-SR algorithms. QA methods were evaluated by comparing their output with aggregate subjective scores collected from >150,000 pairwise votes obtained through crowd-sourced comparisons across 52 SR methods and 1124 upscaled videos. The goal was to advance the state-of-the-art in SR QA, which had proven to be a challenging problem with limited applicability of traditional QA methods. The challenge had 29 registered participants, and 5 teams had submitted their final results, all outperforming the current state-of-the-art. All data, including the private test subset, has been made publicly available on the challenge homepage at https://challenges.videoprocessing.ai/challenges/super-resolution-metrics-challenge.html<|reference_end|> | arxiv | @article{molodetskikh2024aim,
title={AIM 2024 Challenge on Video Super-Resolution Quality Assessment: Methods
and Results},
author={Ivan Molodetskikh, Artem Borisov, Dmitriy Vatolin, Radu Timofte,
Jianzhao Liu, Tianwu Zhi, Yabin Zhang, Yang Li, Jingwen Xu, Yiting Liao, Qing
Luo, Ao-Xiang Zhang, Peng Zhang, Haibo Lei, Linyan Jiang, Yaqing Li, Yuqin
Cao, Wei Sun, Weixia Zhang, Yinan Sun, Ziheng Jia, Yuxin Zhu, Xiongkuo Min,
Guangtao Zhai, Weihua Luo, Yupeng Z., and Hong Y},
journal={arXiv preprint arXiv:2410.04225},
year={2024},
archivePrefix={arXiv},
eprint={2410.04225},
primaryClass={eess.IV cs.CV cs.MM}
} | molodetskikh2024aim |
arxiv-666090 | 2410.04228 | SGD with memory: fundamental properties and stochastic acceleration | <|reference_start|>SGD with memory: fundamental properties and stochastic acceleration: An important open problem is the theoretically feasible acceleration of mini-batch SGD-type algorithms on quadratic problems with power-law spectrum. In the non-stochastic setting, the optimal exponent $\xi$ in the loss convergence $L_t\sim C_Lt^{-\xi}$ is double that in plain GD and is achievable using Heavy Ball (HB) with a suitable schedule; this no longer works in the presence of mini-batch noise. We address this challenge by considering first-order methods with an arbitrary fixed number $M$ of auxiliary velocity vectors (*memory-$M$ algorithms*). We first prove an equivalence between two forms of such algorithms and describe them in terms of suitable characteristic polynomials. Then we develop a general expansion of the loss in terms of signal and noise propagators. Using it, we show that losses of stationary stable memory-$M$ algorithms always retain the exponent $\xi$ of plain GD, but can have different constants $C_L$ depending on their effective learning rate that generalizes that of HB. We prove that in memory-1 algorithms we can make $C_L$ arbitrarily small while maintaining stability. As a consequence, we propose a memory-1 algorithm with a time-dependent schedule that we show heuristically and experimentally to improve the exponent $\xi$ of plain SGD.<|reference_end|> | arxiv | @article{yarotsky2024sgd,
title={SGD with memory: fundamental properties and stochastic acceleration},
author={Dmitry Yarotsky, Maksim Velikanov},
journal={arXiv preprint arXiv:2410.04228},
year={2024},
archivePrefix={arXiv},
eprint={2410.04228},
primaryClass={cs.LG math.OC}
} | yarotsky2024sgd |
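For the entry above (yarotsky2024sgd), the snippet below sketches the classical Heavy Ball method, the canonical memory-1 algorithm with a single auxiliary velocity vector, on a toy quadratic. The constant hyperparameters are illustrative assumptions; the paper's noise analysis and time-dependent schedule are not reproduced.

```python
import numpy as np

def heavy_ball(grad, w0, lr=0.1, momentum=0.9, steps=500):
    """Memory-1 first-order method: one auxiliary velocity vector v."""
    w, v = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        v = momentum * v - lr * grad(w)   # update the memory (velocity)
        w = w + v                         # move the iterate
    return w

# Toy quadratic L(w) = 0.5 * w^T H w with a mildly ill-conditioned spectrum.
H = np.diag([1.0, 0.1, 0.01])
grad = lambda w: H @ w
w_final = heavy_ball(grad, w0=np.ones(3))
print(0.5 * w_final @ H @ w_final)   # loss after optimisation
```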
arxiv-666091 | 2410.04231 | Metadata-based Data Exploration with Retrieval-Augmented Generation for Large Language Models | <|reference_start|>Metadata-based Data Exploration with Retrieval-Augmented Generation for Large Language Models: Developing the capacity to effectively search for requisite datasets is an urgent requirement to assist data users in identifying relevant datasets considering the very limited available metadata. For this challenge, the utilization of third-party data is emerging as a valuable source for improvement. Our research introduces a new architecture for data exploration which employs a form of Retrieval-Augmented Generation (RAG) to enhance metadata-based data discovery. The system integrates large language models (LLMs) with external vector databases to identify semantic relationships among diverse types of datasets. The proposed framework offers a new method for evaluating semantic similarity among heterogeneous data sources and for improving data exploration. Our study includes experimental results on four critical tasks: 1) recommending similar datasets, 2) suggesting combinable datasets, 3) estimating tags, and 4) predicting variables. Our results demonstrate that RAG can enhance the selection of relevant datasets, particularly from different categories, when compared to conventional metadata approaches. However, performance varied across tasks and models, which confirms the significance of selecting appropriate techniques based on specific use cases. The findings suggest that this approach holds promise for addressing challenges in data exploration and discovery, although further refinement is necessary for estimation tasks.<|reference_end|> | arxiv | @article{hayashi2024metadata-based,
title={Metadata-based Data Exploration with Retrieval-Augmented Generation for
Large Language Models},
author={Teruaki Hayashi, Hiroki Sakaji, Jiayi Dai, Randy Goebel},
journal={arXiv preprint arXiv:2410.04231},
year={2024},
archivePrefix={arXiv},
eprint={2410.04231},
primaryClass={cs.IR}
} | hayashi2024metadata-based |
arxiv-666092 | 2410.04232 | Be There, Be Together, Be Streamed! AR Scenic Live-Streaming for an Interactive and Collective Experience | <|reference_start|>Be There, Be Together, Be Streamed! AR Scenic Live-Streaming for an Interactive and Collective Experience: Scenic Live-Streaming (SLS), capturing real-world scenic sites from fixed cameras without streamers, combines scene immersion and the social and real-time characteristics of live-streaming into a unique experience. However, existing SLS affords limited audience interactions to engage them in a collective experience compared to many other live-streaming genres. It is also difficult for SLS to recreate important but intangible constituents of in-person trip experiences, such as cultural activities. To offer a more interactive, engaging, and meaningful experience, we propose ARSLS (Augmented Reality Scenic Live-Streaming). Culturally grounded AR objects with awareness of the live-streamed environment can be overlaid over camera views to provide additional interactive features while maintaining consistency with the live-streamed scene. To explore the design space of this new medium, we developed an ARSLS prototype for a famous landscape in China. A preliminary study (N=15) provided initial insights for ARSLS design.<|reference_end|> | arxiv | @article{huang2024be,
title={Be There, Be Together, Be Streamed! AR Scenic Live-Streaming for an
Interactive and Collective Experience},
author={Zeyu Huang, Zuyu Xu, Yuanhao Zhang, Chengzhong Liu, Yanwei Zhao,
Chuhan Shi, Jason Chen Zhao, Xiaojuan Ma},
journal={arXiv preprint arXiv:2410.04232},
year={2024},
archivePrefix={arXiv},
eprint={2410.04232},
primaryClass={cs.HC}
} | huang2024be |
arxiv-666093 | 2410.04234 | Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks | <|reference_start|>Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks: Optimization methods are widely employed in deep learning to identify and mitigate undesired model responses. While gradient-based techniques have proven effective for image models, their application to language models is hindered by the discrete nature of the input space. This study introduces a novel optimization approach, termed the \emph{functional homotopy} method, which leverages the functional duality between model training and input generation. By constructing a series of easy-to-hard optimization problems, we iteratively solve these problems using principles derived from established homotopy methods. We apply this approach to jailbreak attack synthesis for large language models (LLMs), achieving a $20\%-30\%$ improvement in success rate over existing methods in circumventing established safe open-source models such as Llama-2 and Llama-3.<|reference_end|> | arxiv | @article{wang2024functional,
title={Functional Homotopy: Smoothing Discrete Optimization via Continuous
Parameters for LLM Jailbreak Attacks},
author={Zi Wang, Divyam Anshumaan, Ashish Hooda, Yudong Chen, Somesh Jha},
journal={arXiv preprint arXiv:2410.04234},
year={2024},
archivePrefix={arXiv},
eprint={2410.04234},
primaryClass={cs.LG cs.AI cs.CR}
} | wang2024functional |
arxiv-666094 | 2410.04235 | Improving Distribution Alignment with Diversity-based Sampling | <|reference_start|>Improving Distribution Alignment with Diversity-based Sampling: Domain shifts are ubiquitous in machine learning, and can substantially degrade a model's performance when deployed to real-world data. To address this, distribution alignment methods aim to learn feature representations which are invariant across domains, by minimising the discrepancy between the distributions. However, the discrepancy estimates can be extremely noisy when training via stochastic gradient descent (SGD), and shifts in the relative proportions of different subgroups can lead to domain misalignments; these can both stifle the benefits of the method. This paper proposes to improve these estimates by inducing diversity in each sampled minibatch. This simultaneously balances the data and reduces the variance of the gradients, thereby enhancing the model's generalisation ability. We describe two options for diversity-based data samplers, based on the k-determinantal point process (k-DPP) and the k-means++ algorithm, which can function as drop-in replacements for a standard random sampler. On a real-world domain shift task of bioacoustic event detection, we show that both options 1) yield minibatches which are more representative of the full dataset; 2) reduce the distance estimation error between distributions, for a given sample size; and 3) improve out-of-distribution accuracy for two distribution alignment algorithms, as well as standard ERM.<|reference_end|> | arxiv | @article{napoli2024improving,
title={Improving Distribution Alignment with Diversity-based Sampling},
author={Andrea Napoli, Paul White},
journal={arXiv preprint arXiv:2410.04235},
year={2024},
archivePrefix={arXiv},
eprint={2410.04235},
primaryClass={cs.LG}
} | napoli2024improving |
arxiv-666095 | 2410.04236 | Overview of Factify5WQA: Fact Verification through 5W Question-Answering | <|reference_start|>Overview of Factify5WQA: Fact Verification through 5W Question-Answering: Researchers have found that fake news spreads many times faster than real news. This is a major problem, especially in today's world where social media is the key source of news for many among the younger population. Fact verification, thus, becomes an important task, and many media sites contribute to the cause. Manual fact verification is a tedious task, given the volume of fake news online. The Factify5WQA shared task aims to increase research towards automated fake news detection by providing a dataset with a fact verification method based on aspect-based question answering. Each claim and its supporting document is associated with 5W questions that help compare the two information sources. Objective performance in the task is measured by comparing answers using BLEU score to assess answer accuracy, followed by an accuracy measure of the classification. The task received submissions using custom training setups and pre-trained language models, among others. The best-performing team posted an accuracy of 69.56%, nearly a 35% improvement over the baseline.<|reference_end|> | arxiv | @article{suresh2024overview,
title={Overview of Factify5WQA: Fact Verification through 5W Question-Answering},
author={Suryavardan Suresh and Anku Rani and Parth Patwa and Aishwarya Reganti
and Vinija Jain and Aman Chadha and Amitava Das and Amit Sheth and Asif Ekbal},
journal={arXiv preprint arXiv:2410.04236},
year={2024},
archivePrefix={arXiv},
eprint={2410.04236},
primaryClass={cs.CL cs.AI cs.LG}
} | suresh2024overview |
arxiv-666096 | 2410.04238 | Towards the Best Solution for Complex System Reliability: Can Statistics Outperform Machine Learning? | <|reference_start|>Towards the Best Solution for Complex System Reliability: Can Statistics Outperform Machine Learning?: Studying the reliability of complex systems using machine learning techniques involves facing a series of technical and practical challenges, ranging from the intrinsic nature of the system and data to the difficulties in modeling and effectively deploying models in real-world scenarios. This study compares the effectiveness of classical statistical techniques and machine learning methods for improving complex system analysis in reliability assessments. We aim to demonstrate that classical statistical algorithms often yield more precise and interpretable results than black-box machine learning approaches in many practical applications. The evaluation is conducted using both real-world data and simulated scenarios. We report the results obtained from statistical modeling algorithms, as well as from machine learning methods including neural networks, K-nearest neighbors, and random forests.<|reference_end|> | arxiv | @article{gamiz2024towards,
title={Towards the Best Solution for Complex System Reliability: Can Statistics
Outperform Machine Learning?},
author={Maria Luz Gamiz, Fernando Navas-Gomez, Rafael Nozal-Ca\~nadas, Rocio
Raya-Miranda},
journal={arXiv preprint arXiv:2410.04238},
year={2024},
archivePrefix={arXiv},
eprint={2410.04238},
primaryClass={cs.LG}
} | gamiz2024towards |
arxiv-666097 | 2410.04239 | Persona Knowledge-Aligned Prompt Tuning Method for Online Debate | <|reference_start|>Persona Knowledge-Aligned Prompt Tuning Method for Online Debate: Debate is the process of exchanging viewpoints or convincing others on a particular issue. Recent research has provided empirical evidence that the persuasiveness of an argument is determined not only by language usage but also by communicator characteristics. Researchers have paid much attention to aspects of languages, such as linguistic features and discourse structures, but combining argument persuasiveness and impact with the social personae of the audience has not been explored due to the difficulty and complexity. We have observed the impressive simulation and personification capability of ChatGPT, indicating a giant pre-trained language model may function as an individual to provide personae and exert unique influences based on diverse background knowledge. Therefore, we propose a persona knowledge-aligned framework for argument quality assessment tasks from the audience side. This is the first work that leverages the emergence of ChatGPT and injects such audience personae knowledge into smaller language models via prompt tuning. The performance of our pipeline demonstrates significant and consistent improvement compared to competitive architectures.<|reference_end|> | arxiv | @article{chan2024persona,
title={Persona Knowledge-Aligned Prompt Tuning Method for Online Debate},
author={Chunkit Chan, Cheng Jiayang, Xin Liu, Yauwai Yim, Yuxin Jiang, Zheye
Deng, Haoran Li, Yangqiu Song, Ginny Y. Wong, Simon See},
journal={arXiv preprint arXiv:2410.04239},
year={2024},
archivePrefix={arXiv},
eprint={2410.04239},
primaryClass={cs.CL}
} | chan2024persona |
arxiv-666098 | 2410.04241 | Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations | <|reference_start|>Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations: Resolving knowledge conflicts is a crucial challenge in Question Answering (QA) tasks, as the internet contains numerous conflicting facts and opinions. While some research has made progress in tackling ambiguous settings where multiple valid answers exist, these approaches often neglect to provide source citations, leaving users to evaluate the factuality of each answer. On the other hand, existing work on citation generation has focused on unambiguous settings with single answers, failing to address the complexity of real-world scenarios. Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. To facilitate research in this area, we create a comprehensive framework consisting of: (1) five novel datasets, obtained by augmenting three existing reading comprehension datasets with citation meta-data across various ambiguous settings, such as distractors and paraphrasing; (2) the first ambiguous multi-hop QA dataset featuring real-world, naturally occurring contexts; (3) two new metrics to evaluate models' performances; and (4) several strong baselines using rule-based, prompting, and finetuning approaches over five large language models. We hope that this new task, datasets, metrics, and baselines will inspire the community to push the boundaries of QA research and develop more trustworthy and interpretable systems.<|reference_end|> | arxiv | @article{shaier2024adaptive,
title={Adaptive Question Answering: Enhancing Language Model Proficiency for
Addressing Knowledge Conflicts with Source Citations},
author={Sagi Shaier, Ari Kobren, Philip Ogren},
journal={arXiv preprint arXiv:2410.04241},
year={2024},
archivePrefix={arXiv},
eprint={2410.04241},
primaryClass={cs.CL}
} | shaier2024adaptive |
arxiv-666099 | 2410.04242 | A Framework for Reproducible Benchmarking and Performance Diagnosis of SLAM Systems | <|reference_start|>A Framework for Reproducible Benchmarking and Performance Diagnosis of SLAM Systems: We propose SLAMFuse, an open-source SLAM benchmarking framework that provides consistent cross-platform environments for evaluating multi-modal SLAM algorithms, along with tools for data fuzzing, failure detection, and diagnosis across different datasets. Our framework introduces a fuzzing mechanism to test the resilience of SLAM algorithms against dataset perturbations. This enables the assessment of pose estimation accuracy under varying conditions and identifies critical perturbation thresholds. SLAMFuse improves diagnostics with failure detection and analysis tools, examining algorithm behaviour against dataset characteristics. SLAMFuse uses Docker to ensure reproducible testing conditions across diverse datasets and systems by streamlining dependency management. Emphasizing the importance of reproducibility and introducing advanced tools for algorithm evaluation and performance diagnosis, our work sets a new precedent for reliable benchmarking of SLAM systems. We provide ready-to-use, Docker-compatible versions of the algorithms and datasets used in the experiments, together with guidelines for integrating and benchmarking new algorithms. Code is available at https://github.com/nikolaradulov/slamfuse<|reference_end|> | arxiv | @article{radulov2024a,
title={A Framework for Reproducible Benchmarking and Performance Diagnosis of
SLAM Systems},
author={Nikola Radulov (1), Yuhao Zhang (1), Mihai Bujanca (2), Ruiqi Ye (1),
Mikel Luj\'an (1) ((1) Department of Computer Science, University of
Manchester, UK, (2) Qualcomm Technologies XR Labs, Austria)},
journal={arXiv preprint arXiv:2410.04242},
year={2024},
archivePrefix={arXiv},
eprint={2410.04242},
primaryClass={cs.RO}
} | radulov2024a |
arxiv-666100 | 2410.04244 | A Two-Stage Optimization Method for Real-Time Parameterization of PV-Farm Digital Twin | <|reference_start|>A Two-Stage Optimization Method for Real-Time Parameterization of PV-Farm Digital Twin: Digital twins (DTs) are high-fidelity virtual models of physical systems. This paper details a novel two-stage optimization method for real-time parameterization of photovoltaic digital twins (PVDTs) using field measurements. Initially, the method estimates equivalent irradiance from PV power, voltage, and current data, eliminating the need for direct irradiance sensors. This is crucial for tuning the DT's parameters to actual environmental conditions, thereby improving power prediction accuracy. The second stage focuses on refining these parameters by minimizing discrepancies between measured and predicted outputs. This optimization utilizes the estimated equivalent irradiance as a model input, maintaining synchronization with real-world conditions. Parameter updates are event-triggered, launched when deviations exceed predefined thresholds. This strategy optimizes prediction accuracy and manages communication loads efficiently. Validated with extensive data from a PV farm, this approach outperforms existing methodologies in predictive accuracy and operational efficiency, significantly improving the performance of DTs in real-time grid operations.<|reference_end|> | arxiv | @article{woo2024a,
title={A Two-Stage Optimization Method for Real-Time Parameterization of
PV-Farm Digital Twin},
author={Jong Ha Woo, Qi Xiao, Victor Daldegan Paduani, and Ning Lu},
journal={arXiv preprint arXiv:2410.04244},
year={2024},
archivePrefix={arXiv},
eprint={2410.04244},
primaryClass={eess.SY cs.SY}
} | woo2024a |