Dataset schema (per record):
corpus_id: string, 7-12 chars
paper_id: string, 9-16 chars
title: string, 1-261 chars
abstract: string, 70-4.02k chars
source: string, 1 distinct value
bibtex: string, 208-20.9k chars
citation_key: string, 6-100 chars
arxiv-667601
2410.07023
Mechanism Design for Exchange Markets
<|reference_start|>Mechanism Design for Exchange Markets: Exchange markets are a significant type of market economy, in which each agent holds a budget and certain (divisible) resources available for trading. Most research on equilibrium in exchange economies is based on an environment of completely free competition. However, the orderly operation of markets also relies on effective economic regulatory mechanisms. This paper initiates the study of the mechanism design problem in exchange markets, exploring the potential to establish truthful market rules and mechanisms. This task poses a significant challenge as unlike auctioneers in auction design, the mechanism designer in exchange markets lacks centralized authority to fully control the allocation of resources. In this paper, the mechanism design problem is formalized as a two-stage game. In stage 1, agents submit their private information to the manager, who then formulates market trading rules based on the submitted information. In stage 2, agents are free to engage in transactions within these rules, ultimately reaching an equilibrium. We generalize the concept of liquid welfare from classical budget-feasible auctions and use market liquid welfare as a measure to evaluate the performance of the designed mechanism. Moreover, an extra concept called profitability is introduced to assess whether the market is money-making (profitable) or money-losing (unprofitable). Our goal is to design a truthful mechanism that achieves an (approximate) optimal welfare while minimizing unprofitability as much as possible. Two mechanisms for the problem are proposed. The first one guarantees truthfulness and profitability while approaching an approximation ratio of 1/2 in large markets. The second one is also truthful and achieves 1/2 approximation in general markets but incurs bounded unprofitability. Our aim is for both mechanisms to provide valuable insights into the truthful market design problem.<|reference_end|>
arxiv
@article{zheng2024mechanism, title={Mechanism Design for Exchange Markets}, author={Yusen Zheng, Yukun Cheng, Chenyang Xu, Xiaotie Deng}, journal={arXiv preprint arXiv:2410.07023}, year={2024}, archivePrefix={arXiv}, eprint={2410.07023}, primaryClass={cs.GT} }
zheng2024mechanism
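The record above generalizes liquid welfare from budget-feasible auctions to exchange markets. For orientation, here is a minimal sketch of the classical liquid-welfare quantity it builds on, in which each agent's contribution to welfare is capped by her budget; the function and toy numbers are illustrative, and the paper's market-specific variant is not reproduced.

```python
def liquid_welfare(budgets, values):
    """Classical liquid welfare from budget-feasible auctions:
    each agent contributes min(budget, value of her allocation)."""
    return sum(min(b, v) for b, v in zip(budgets, values))

# Toy example: value beyond an agent's budget does not count.
print(liquid_welfare([10.0, 5.0], [7.0, 12.0]))  # 7.0 + 5.0 = 12.0
```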
arxiv-667602
2410.07025
Preference Fine-Tuning for Factuality in Chest X-Ray Interpretation Models Without Human Feedback
<|reference_start|>Preference Fine-Tuning for Factuality in Chest X-Ray Interpretation Models Without Human Feedback: Radiologists play a crucial role by translating medical images into medical reports. However, the field faces staffing shortages and increasing workloads. While automated approaches using vision-language models (VLMs) show promise as assistants, they require exceptionally high accuracy. Most current VLMs in radiology rely solely on supervised fine-tuning (SFT). Meanwhile, in the general domain, additional preference fine-tuning has become standard practice. The challenge in radiology lies in the prohibitive cost of obtaining radiologist feedback. We propose a scalable automated preference alignment technique for VLMs in radiology, focusing on chest X-ray (CXR) report generation. Our method leverages publicly available datasets with an LLM-as-a-Judge mechanism, eliminating the need for additional expert radiologist feedback. We evaluate and benchmark five direct alignment algorithms (DAAs). Our results show up to a 57.4% improvement in average GREEN scores, an LLM-based metric for evaluating CXR reports, and a 9.2% increase in the average across six metrics (domain-specific and general), compared to the SFT baseline. We study reward overoptimization via length exploitation, with reports lengthening by up to 3.2x. To assess a potential alignment tax, we benchmark on six additional diverse tasks, finding no significant degradations. A reader study involving four board-certified radiologists indicates win rates of up to 0.62 over the SFT baseline, while significantly penalizing verbosity. Our analysis provides actionable insights for the development of VLMs in high-stakes fields like radiology.<|reference_end|>
arxiv
@article{hein2024preference, title={Preference Fine-Tuning for Factuality in Chest X-Ray Interpretation Models Without Human Feedback}, author={Dennis Hein, Zhihong Chen, Sophie Ostmeier, Justin Xu, Maya Varma, Eduardo Pontes Reis, Arne Edward Michalson, Christian Bluethgen, Hyun Joo Shin, Curtis Langlotz, Akshay S Chaudhari}, journal={arXiv preprint arXiv:2410.07025}, year={2024}, archivePrefix={arXiv}, eprint={2410.07025}, primaryClass={cs.CV cs.CL} }
hein2024preference
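The record above benchmarks five direct alignment algorithms (DAAs) without naming them in the abstract. As a hedged illustration of one widely used DAA, the sketch below computes the DPO objective from per-sequence log-probabilities; the tensor names, toy values, and beta are assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss over summed sequence log-probs.

    Each argument has shape (batch,) and holds log pi(report | image)
    for the preferred / dispreferred report under the trained policy
    or the frozen reference model.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # Push the preferred report's implicit reward above the dispreferred one's.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy batch of two preference pairs.
loss = dpo_loss(torch.tensor([-5.0, -6.0]), torch.tensor([-7.0, -6.5]),
                torch.tensor([-5.5, -6.2]), torch.tensor([-6.8, -6.4]))
print(loss.item())
```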
arxiv-667603
2410.07027
Evaluation of Run-Time Energy Efficiency using Controlled Approximation in a RISC-V Core
<|reference_start|>Evaluation of Run-Time Energy Efficiency using Controlled Approximation in a RISC-V Core: The limited energy available in most embedded systems poses a significant challenge in enhancing the performance of embedded processors and microcontrollers. One promising approach to address this challenge is the use of approximate computing, which can be implemented in both hardware and software layers to balance the trade-off between performance and power consumption. In this study, the impact of dynamic hardware approximation methods on the run-time energy efficiency of a RISC-V embedded processor with specialized features for approximate computing is investigated. The results indicate that the platform achieves an average energy efficiency of 13.3 pJ/instruction at a 500MHz clock frequency with approximation applied, in 45nm CMOS technology. Compared to accurate circuits and computation, the approximate computing techniques in the processing core resulted in a significant improvement of 9.21% in overall energy efficiency, 60.83% in multiplication instructions, 14.64% in the execution stage, and 9.23% in overall power consumption.<|reference_end|>
arxiv
@article{delavari2024evaluation, title={Evaluation of Run-Time Energy Efficiency using Controlled Approximation in a RISC-V Core}, author={Arvin Delavari, Faraz Ghoreishy, Hadi Shahriar Shahhoseini, Sattar Mirzakuchaki}, journal={arXiv preprint arXiv:2410.07027}, year={2024}, archivePrefix={arXiv}, eprint={2410.07027}, primaryClass={cs.AR} }
delavari2024evaluation
arxiv-667604
2410.07030
Clean Evaluations on Contaminated Visual Language Models
<|reference_start|>Clean Evaluations on Contaminated Visual Language Models: How to evaluate large language models (LLMs) cleanly has been established as an important research area to genuinely report the performance of possibly contaminated LLMs. Yet, how to cleanly evaluate the visual language models (VLMs) is an under-studied problem. We propose a novel approach to achieve such goals through data augmentation methods on the visual input information. We then craft a new visual clean evaluation benchmark with thousands of data instances. Through extensive experiments, we found that the traditional visual data augmentation methods are useful, but they are at risk of being used as a part of the training data as a workaround. We further propose using BGR augmentation to switch the colour channel of the visual information. We found that it is a simple yet effective method for reducing the effect of data contamination and, fortunately, it is also harmful when used as a data augmentation method during training. This means that it is hard for malicious trainers to integrate such data augmentation into training, and it could be a promising technique to cleanly evaluate visual LLMs. Our code, data, and model weights will be released upon publication.<|reference_end|>
arxiv
@article{lu2024clean, title={Clean Evaluations on Contaminated Visual Language Models}, author={Hongyuan Lu, Shujie Miao, and Wai Lam}, journal={arXiv preprint arXiv:2410.07030}, year={2024}, archivePrefix={arXiv}, eprint={2410.07030}, primaryClass={cs.CV cs.CL} }
lu2024clean
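The BGR augmentation proposed above amounts to reversing the colour channels of the visual input. A minimal sketch, assuming an H x W x 3 RGB array layout:

```python
import numpy as np

def bgr_augment(image_rgb: np.ndarray) -> np.ndarray:
    """Swap the colour channels of an H x W x 3 image (RGB -> BGR)."""
    return image_rgb[..., ::-1].copy()

# Toy 1x1 "image": pure red becomes pure blue.
img = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(bgr_augment(img))  # [[[  0   0 255]]]
```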
arxiv-667605
2410.07035
PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness
<|reference_start|>PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness: Large Language Models (LLMs) demonstrate impressive capabilities across various domains, including role-playing, creative writing, mathematical reasoning, and coding. Despite these advancements, LLMs still encounter challenges with length control, frequently failing to adhere to specific length constraints due to their token-level operations and insufficient training on data with strict length limitations. We identify this issue as stemming from a lack of positional awareness and propose novel approaches--PositionID Prompting and PositionID Fine-Tuning--to address it. These methods enhance the model's ability to continuously monitor and manage text length during generation. Additionally, we introduce PositionID CP Prompting to enable LLMs to perform copy and paste operations accurately. Furthermore, we develop two benchmarks for evaluating length control and copy-paste abilities. Our experiments demonstrate that our methods significantly improve the model's adherence to length constraints and copy-paste accuracy without compromising response quality.<|reference_end|>
arxiv
@article{wang2024positionid:, title={PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness}, author={Zekun Wang, Feiyu Duan, Yibo Zhang, Wangchunshu Zhou, Ke Xu, Wenhao Huang, Jie Fu}, journal={arXiv preprint arXiv:2410.07035}, year={2024}, archivePrefix={arXiv}, eprint={2410.07035}, primaryClass={cs.CL cs.AI} }
wang2024positionid:
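The abstract above does not spell out the PositionID prompt format, so the sketch below is only a plausible rendering of the core idea: annotate the text with explicit position indices so the model can monitor length while it generates. The marker syntax is an assumption.

```python
def annotate_positions(text: str) -> str:
    """Prefix each word with an explicit position marker, giving the
    model a running count it can condition on for length control."""
    return " ".join(f"[{i}] {w}" for i, w in enumerate(text.split(), start=1))

print(annotate_positions("write a summary in ten words"))
# [1] write [2] a [3] summary [4] in [5] ten [6] words
```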
arxiv-667606
2410.07039
Distributionally Robust Clustered Federated Learning: A Case Study in Healthcare
<|reference_start|>Distributionally Robust Clustered Federated Learning: A Case Study in Healthcare: In this paper, we address the challenge of heterogeneous data distributions in cross-silo federated learning by introducing a novel algorithm, which we term Cross-silo Robust Clustered Federated Learning (CS-RCFL). Our approach leverages the Wasserstein distance to construct ambiguity sets around each client's empirical distribution that capture possible distribution shifts in the local data, enabling evaluation of worst-case model performance. We then propose a model-agnostic integer fractional program to determine the optimal distributionally robust clustering of clients into coalitions so that possible biases in the local models caused by statistically heterogeneous client datasets are avoided, and analyze our method for linear and logistic regression models. Finally, we discuss a federated learning protocol that ensures the privacy of client distributions, a critical consideration, for instance, when clients are healthcare institutions. We evaluate our algorithm on synthetic and real-world healthcare data.<|reference_end|>
arxiv
@article{konti2024distributionally, title={Distributionally Robust Clustered Federated Learning: A Case Study in Healthcare}, author={Xenia Konti, Hans Riess, Manos Giannopoulos, Yi Shen, Michael J. Pencina, Nicoleta J. Economou-Zavlanos, Michael M. Zavlanos}, journal={arXiv preprint arXiv:2410.07039}, year={2024}, archivePrefix={arXiv}, eprint={2410.07039}, primaryClass={cs.LG} }
konti2024distributionally
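The ambiguity sets above are built from Wasserstein distances around each client's empirical distribution. As one small building block, the sketch below computes pairwise 1-Wasserstein distances between clients' samples, simplified to one-dimensional data; the paper's construction and the clustering program itself are not reproduced.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def pairwise_wasserstein(client_samples):
    """Pairwise 1-Wasserstein distances between clients' (1-D) empirical
    distributions; distances like these can feed a distributionally
    robust clustering of clients into coalitions."""
    n = len(client_samples)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = wasserstein_distance(
                client_samples[i], client_samples[j])
    return dist

rng = np.random.default_rng(0)
clients = [rng.normal(mu, 1.0, size=500) for mu in (0.0, 0.1, 3.0)]
print(pairwise_wasserstein(clients).round(2))  # clients 0 and 1 are close
```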
arxiv-667607
2410.07040
The Euler-Lagrange equation and optimal control: Preliminary results
<|reference_start|>The Euler-Lagrange equation and optimal control: Preliminary results: Algebraically speaking, linear time-invariant (LTI) systems can be considered as modules. In this framework, controllability is translated as the freeness of the system module. Optimal control mainly relies on quadratic Lagrangians and the consideration of any basis of the system module leads to an open-loop control strategy via a linear Euler-Lagrange equation. In this approach, the endpoint is easily assignable and time horizon can be chosen to minimize the criterion. The loop is closed via an intelligent controller derived from model-free control, which exhibits excellent performances concerning model mismatches and disturbances. The extension to nonlinear systems is briefly discussed.<|reference_end|>
arxiv
@article{join2024the, title={The Euler-Lagrange equation and optimal control: Preliminary results}, author={C\'edric Join, Emmanuel Delaleau, Michel Fliess}, journal={arXiv preprint arXiv:2410.07040}, year={2024}, archivePrefix={arXiv}, eprint={2410.07040}, primaryClass={math.OC cs.SY eess.SY} }
join2024the
arxiv-667608
2410.07041
Emergent properties with repeated examples
<|reference_start|>Emergent properties with repeated examples: We study the performance of transformers as a function of the number of repetitions of training examples with algorithmically generated datasets. On three problems of mathematics: the greatest common divisor, modular multiplication, and matrix eigenvalues, we show that for a fixed number of training steps, models trained on smaller sets of repeated examples outperform models trained on larger sets of single-use examples. We also demonstrate that two-set training - repeated use of a small random subset of examples, alongside normal sampling on the rest of the training set - provides for faster learning and better performance. This highlights that the benefits of repetition can outweigh those of data diversity. These datasets and problems provide a controlled setting to shed light on the still poorly understood interplay between generalization and memorization in deep learning.<|reference_end|>
arxiv
@article{charton2024emergent, title={Emergent properties with repeated examples}, author={Fran\c{c}ois Charton and Julia Kempe}, journal={arXiv preprint arXiv:2410.07041}, year={2024}, archivePrefix={arXiv}, eprint={2410.07041}, primaryClass={cs.LG cs.AI} }
charton2024emergent
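Two-set training, as described above, mixes repeated draws from a small fixed subset with ordinary sampling from the rest of the training set. A minimal sketch, assuming a uniform mixing probability; the actual subset size and ratio in the paper may differ.

```python
import random

def two_set_sampler(dataset, repeated_size=100, p_repeat=0.25, seed=0):
    """Yield training examples, drawing from a small fixed subset with
    probability p_repeat and from the remaining pool otherwise."""
    rng = random.Random(seed)
    shuffled = list(dataset)
    rng.shuffle(shuffled)
    repeated, rest = shuffled[:repeated_size], shuffled[repeated_size:]
    while True:
        pool = repeated if rng.random() < p_repeat else rest
        yield rng.choice(pool)

# Toy usage over integer "examples".
gen = two_set_sampler(range(10000))
print([next(gen) for _ in range(8)])
```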
arxiv-667609
2410.07043
Z-upscaling: Optical Flow Guided Frame Interpolation for Isotropic Reconstruction of 3D EM Volumes
<|reference_start|>Z-upscaling: Optical Flow Guided Frame Interpolation for Isotropic Reconstruction of 3D EM Volumes: We propose a novel optical flow based approach to enhance the axial resolution of anisotropic 3D EM volumes to achieve isotropic 3D reconstruction. Assuming spatial continuity of 3D biological structures in well aligned EM volumes, we reasoned that optical flow estimation techniques, often applied for temporal resolution enhancement in videos, can be utilized. Pixel level motion is estimated between neighboring 2D slices along z, using spatial gradient flow estimates to interpolate and generate new 2D slices resulting in isotropic voxels. We leverage recent state-of-the-art learning methods for video frame interpolation and transfer learning techniques, and demonstrate the success of our approach on publicly available ultrastructure EM volumes.<|reference_end|>
arxiv
@article{ferede2024z-upscaling:, title={Z-upscaling: Optical Flow Guided Frame Interpolation for Isotropic Reconstruction of 3D EM Volumes}, author={Fisseha A. Ferede, Ali Khalighifar, Jaison John, Krishnan Venkataraman, Khaled Khairy}, journal={arXiv preprint arXiv:2410.07043}, year={2024}, archivePrefix={arXiv}, eprint={2410.07043}, primaryClass={eess.IV cs.CV} }
ferede2024z-upscaling:
arxiv-667610
2410.07046
S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning
<|reference_start|>S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning: Recently, differentiable mask pruning methods optimize the continuous relaxation architecture (soft network) as the proxy of the pruned discrete network (hard network) for superior sub-architecture search. However, due to the agnostic impact of the discretization process, the hard network struggles to match the representational capacity of the soft network, namely the discretization gap, which severely spoils the pruning performance. In this paper, we first investigate the discretization gap and propose a novel structural differentiable mask pruning framework named S2HPruner to bridge the discretization gap in a one-stage manner. In the training procedure, S2HPruner forwards both the soft network and its corresponding hard network, then distills the hard network under the supervision of the soft network. To optimize the mask and prevent performance degradation, we propose a decoupled bidirectional knowledge distillation. It blocks the weight updating from the hard to the soft network while maintaining the gradient corresponding to the mask. Compared with existing pruning approaches, S2HPruner achieves surpassing pruning performance without fine-tuning on comprehensive benchmarks, including CIFAR-100, Tiny ImageNet, and ImageNet with a variety of network architectures. Besides, investigation and analysis experiments explain the effectiveness of S2HPruner. Codes will be released soon.<|reference_end|>
arxiv
@article{lin2024s2hpruner:, title={S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning}, author={Weihao Lin, Shengji Tang, Chong Yu, Peng Ye, Tao Chen}, journal={arXiv preprint arXiv:2410.07046}, year={2024}, archivePrefix={arXiv}, eprint={2410.07046}, primaryClass={cs.CV} }
lin2024s2hpruner:
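The decoupled bidirectional distillation above is described only at a high level. One common way to realize "hard network in the forward pass, gradient kept for the mask in the backward pass" is a straight-through estimator, shown below as an illustration rather than the paper's exact mechanism.

```python
import torch

def hard_mask_with_soft_grad(soft_mask: torch.Tensor) -> torch.Tensor:
    """Forward a binarized (hard) mask while routing gradients to the
    continuous (soft) mask: a straight-through estimator."""
    hard = (soft_mask > 0.5).float()
    return hard + soft_mask - soft_mask.detach()

soft = torch.tensor([0.2, 0.7, 0.9], requires_grad=True)
hard_mask_with_soft_grad(soft).sum().backward()
print(soft.grad)  # ones: the gradient flows through the soft mask
```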
arxiv-667611
2410.07051
Exponents for Shared Randomness-Assisted Channel Simulation
<|reference_start|>Exponents for Shared Randomness-Assisted Channel Simulation: We determine the exact error and strong converse exponents of shared randomness-assisted channel simulation in worst case total-variation distance. Namely, we find that these exponents can be written as simple optimizations over the R\'enyi channel mutual information. Strikingly, and in stark contrast to channel coding, there are no critical rates, allowing a tight characterization for arbitrary rates below and above the simulation capacity. We derive our results by asymptotically expanding the meta-converse for channel simulation [Cao {\it et al.}, IEEE Trans.~Inf.~Theory (2024)], which corresponds to non-signaling assisted codes. We prove this to be asymptotically tight by employing the approximation algorithms from [Berta {\it et al.}, Proc.~IEEE ISIT (2024)], which show how to round any non-signaling assisted strategy to a strategy that only uses shared randomness. Notably, this implies that any additional quantum entanglement-assistance does not change the error or the strong converse exponents.<|reference_end|>
arxiv
@article{oufkir2024exponents, title={Exponents for Shared Randomness-Assisted Channel Simulation}, author={Aadil Oufkir, Michael X. Cao, Hao-Chung Cheng, Mario Berta}, journal={arXiv preprint arXiv:2410.07051}, year={2024}, archivePrefix={arXiv}, eprint={2410.07051}, primaryClass={cs.IT math.IT quant-ph} }
oufkir2024exponents
arxiv-667612
2410.07053
Robots in the Middle: Evaluating LLMs in Dispute Resolution
<|reference_start|>Robots in the Middle: Evaluating LLMs in Dispute Resolution: Mediation is a dispute resolution method featuring a neutral third-party (mediator) who intervenes to help the individuals resolve their dispute. In this paper, we investigate to which extent large language models (LLMs) are able to act as mediators. We investigate whether LLMs are able to analyze dispute conversations, select suitable intervention types, and generate appropriate intervention messages. Using a novel, manually created dataset of 50 dispute scenarios, we conduct a blind evaluation comparing LLMs with human annotators across several key metrics. Overall, the LLMs showed strong performance, even outperforming our human annotators across dimensions. Specifically, in 62% of the cases, the LLMs chose intervention types that were rated as better than or equivalent to those chosen by humans. Moreover, in 84% of the cases, the intervention messages generated by the LLMs were rated as better than or equal to the intervention messages written by humans. LLMs likewise performed favourably on metrics such as impartiality, understanding and contextualization. Our results demonstrate the potential of integrating AI in online dispute resolution (ODR) platforms.<|reference_end|>
arxiv
@article{tan2024robots, title={Robots in the Middle: Evaluating LLMs in Dispute Resolution}, author={Jinzhe Tan, Hannes Westermann, Nikhil Reddy Pottanigari, Jarom\'ir \v{S}avelka, S\'ebastien Mee\`us, Mia Godet, Karim Benyekhlef}, journal={arXiv preprint arXiv:2410.07053}, year={2024}, archivePrefix={arXiv}, eprint={2410.07053}, primaryClass={cs.HC cs.CL} }
tan2024robots
arxiv-667613
2410.07054
Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing
<|reference_start|>Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing: Large Language Models (LLMs) have recently revolutionized the NLP field, while they still fall short in some specific down-stream tasks. In this work, we focus on utilizing LLMs to perform machine translation, where we observe that two patterns of errors frequently occur and drastically affect the translation quality: language mismatch and repetition. The work sets out to explore the potential for mitigating these two issues by leveraging model editing methods, e.g., by locating the Feed-Forward Network (FFN) neurons responsible for the errors and deactivating them at inference time. We find that directly applying such methods either has limited effect on the targeted errors or has significant negative side-effects on the general translation quality, indicating that the located components may also be crucial for keeping LLM-based machine translation on the rails. To this end, we propose to refine the located components by fetching the intersection of the locating results under different language settings, filtering out the located components that are irrelevant to the targeted errors. The experimental results empirically demonstrate that our methods can effectively reduce the language mismatch and repetition ratios while enhancing or preserving the general translation quality in most cases.<|reference_end|>
arxiv
@article{wang2024mitigating, title={Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing}, author={Weichuan Wang, Zhaoyi Li, Defu Lian, Chen Ma, Linqi Song, Ying Wei}, journal={arXiv preprint arXiv:2410.07054}, year={2024}, archivePrefix={arXiv}, eprint={2410.07054}, primaryClass={cs.CL cs.LG} }
wang2024mitigating
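The refinement step above, fetching the intersection of locating results under different language settings, reduces to a set intersection over located component indices. A minimal sketch with hypothetical (layer, neuron) pairs:

```python
def refine_located_neurons(located_per_setting):
    """Keep only the (layer, neuron) pairs located under *every* language
    setting, filtering out components irrelevant to the targeted errors.

    located_per_setting -- one set of (layer, neuron) indices per setting.
    """
    refined = set(located_per_setting[0])
    for located in located_per_setting[1:]:
        refined &= located
    return refined

en_de = {(3, 17), (5, 42), (7, 9)}
en_fr = {(5, 42), (7, 9), (11, 3)}
print(refine_located_neurons([en_de, en_fr]))  # {(5, 42), (7, 9)}
```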
arxiv-667614
2410.07059
Online Epsilon Net and Piercing Set for Geometric Concepts
<|reference_start|>Online Epsilon Net and Piercing Set for Geometric Concepts: VC-dimension and $\varepsilon$-nets are key concepts in Statistical Learning Theory. Intuitively, VC-dimension is a measure of the size of a class of sets. The famous $\varepsilon$-net theorem, a fundamental result in Discrete Geometry, asserts that if the VC-dimension of a set system is bounded, then a small sample exists that intersects all sufficiently large sets. In online learning scenarios where data arrives sequentially, the VC-dimension helps to bound the complexity of the set system, and $\varepsilon$-nets ensure the selection of a small representative set. This sampling framework is crucial in various domains, including spatial data analysis, motion planning in dynamic environments, optimization of sensor networks, and feature extraction in computer vision, among others. Motivated by these applications, we study the online $\varepsilon$-net problem for geometric concepts with bounded VC-dimension. While the offline version of this problem has been extensively studied, surprisingly, there are no known theoretical results for the online version to date. We present the first deterministic online algorithm with an optimal competitive ratio for intervals in $\mathbb{R}$. Next, we give a randomized online algorithm with a near-optimal competitive ratio for axis-aligned boxes in $\mathbb{R}^d$, for $d\le 3$. Furthermore, we introduce a novel technique to analyze similar-sized objects of constant description complexity in $\mathbb{R}^d$, which may be of independent interest. Next, we focus on the continuous version of this problem, where ranges of the set system are geometric concepts in $\mathbb{R}^d$ arriving in an online manner, but the universe is the entire space, and the objective is to choose a small sample that intersects all the ranges.<|reference_end|>
arxiv
@article{bhore2024online, title={Online Epsilon Net and Piercing Set for Geometric Concepts}, author={Sujoy Bhore, Devdan Dey and Satyam Singh}, journal={arXiv preprint arXiv:2410.07059}, year={2024}, archivePrefix={arXiv}, eprint={2410.07059}, primaryClass={cs.LG cs.CG} }
bhore2024online
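For contrast with the online setting studied above: the offline minimum piercing set for intervals on the line is solved exactly by a right-endpoint greedy. The sketch below shows that classical offline baseline; the paper's online algorithms, where ranges arrive sequentially, are not captured here.

```python
def pierce_intervals(intervals):
    """Minimum piercing set for closed intervals on the line.

    Greedy: process intervals by increasing right endpoint; whenever an
    interval is not yet pierced, pierce it at its right endpoint.
    """
    points = []
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < left:
            points.append(right)
    return points

print(pierce_intervals([(1, 4), (2, 5), (6, 8), (7, 9)]))  # [4, 8]
```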
arxiv-667615
2410.07060
Token sliding independent set reconfiguration on block graphs
<|reference_start|>Token sliding independent set reconfiguration on block graphs: Let $S$ be an independent set of a simple undirected graph $G$. Suppose that each vertex of $S$ has a token placed on it. The tokens are allowed to be moved, one at a time, by sliding along the edges of $G$, so that after each move, the vertices having tokens always form an independent set of $G$. We would like to determine whether the tokens can be eventually brought to stay on the vertices of another independent set $S'$ of $G$ in this manner. In other words, we would like to decide if we can transform $S$ into $S'$ through a sequence of steps, each of which involves substituting a vertex in the current independent set with one of its neighbours to obtain another independent set. This problem of determining if one independent set of a graph ``is reachable'' from another independent set of it is known to be PSPACE-hard even for split graphs, planar graphs, and graphs of bounded treewidth. Polynomial time algorithms have been obtained for certain graph classes like trees, interval graphs, claw-free graphs, and bipartite permutation graphs. We present a polynomial time algorithm for the problem on block graphs, which are the graphs in which every maximal 2-connected subgraph is a clique. Our algorithm is the first generalization of the known polynomial time algorithm for trees to a larger class of graphs (note that trees form a proper subclass of block graphs).<|reference_end|>
arxiv
@article{francis2024token, title={Token sliding independent set reconfiguration on block graphs}, author={Mathew C. Francis and Veena Prabhakaran}, journal={arXiv preprint arXiv:2410.07060}, year={2024}, archivePrefix={arXiv}, eprint={2410.07060}, primaryClass={cs.DM} }
francis2024token
arxiv-667616
2410.07062
TinyEmo: Scaling down Emotional Reasoning via Metric Projection
<|reference_start|>TinyEmo: Scaling down Emotional Reasoning via Metric Projection: This paper introduces TinyEmo, a family of small multi-modal language models for emotional reasoning and classification. Our approach features: (1) a synthetic emotional instruct dataset for both pre-training and fine-tuning stages, (2) a Metric Projector that delegates classification from the language model allowing for more efficient training and inference, (3) a multi-modal large language model (MM-LLM) for emotional reasoning, and (4) a semi-automated framework for bias detection. TinyEmo is able to perform emotion classification and emotional reasoning, all while using substantially fewer parameters than comparable models. This efficiency allows us to freely incorporate more diverse emotional datasets, enabling strong performance on classification tasks, with our smallest model (700M parameters) outperforming larger state-of-the-art models based on general-purpose MM-LLMs with over 7B parameters. Additionally, the Metric Projector allows for interpretability and indirect bias detection in large models without additional training, offering an approach to understand and improve AI systems. We release code, models, and dataset at https://github.com/ggcr/TinyEmo<|reference_end|>
arxiv
@article{gutierrez2024tinyemo:, title={TinyEmo: Scaling down Emotional Reasoning via Metric Projection}, author={Cristian Gutierrez}, journal={arXiv preprint arXiv:2410.07062}, year={2024}, archivePrefix={arXiv}, eprint={2410.07062}, primaryClass={cs.CV} }
gutierrez2024tinyemo:
arxiv-667617
2410.07063
InAttention: Linear Context Scaling for Transformers
<|reference_start|>InAttention: Linear Context Scaling for Transformers: VRAM requirements for transformer models scale quadratically with context length due to the self-attention mechanism. In this paper we modify the decoder-only transformer, replacing self-attention with InAttention, which scales linearly with context length during inference by having tokens attend only to initial states. Benchmarking shows that InAttention significantly reduces VRAM usage during inference, enabling handling of long sequences on consumer GPUs. We corroborate that fine-tuning extends context length efficiently, improving performance on long sequences without high training costs. InAttention offers a scalable solution for long-range dependencies in transformer models, paving the way for further optimization.<|reference_end|>
arxiv
@article{eisner2024inattention:, title={InAttention: Linear Context Scaling for Transformers}, author={Joseph Eisner}, journal={arXiv preprint arXiv:2410.07063}, year={2024}, archivePrefix={arXiv}, eprint={2410.07063}, primaryClass={cs.LG} }
eisner2024inattention:
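The record above replaces self-attention with attention restricted to initial states, so the key/value set no longer grows with the context. A minimal single-head sketch in which every query attends only to the first num_initial positions; shapes and the cut-off are illustrative, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def in_attention(q, k, v, num_initial: int):
    """Single-head attention where every query attends only to the first
    `num_initial` key/value positions (the "initial states"), keeping
    memory for keys and values fixed regardless of sequence length.

    q, k, v -- tensors of shape (seq_len, d_model)
    """
    k0, v0 = k[:num_initial], v[:num_initial]
    scores = q @ k0.T / (q.shape[-1] ** 0.5)   # (seq_len, num_initial)
    return F.softmax(scores, dim=-1) @ v0      # (seq_len, d_model)

q = torch.randn(128, 64)
kv = torch.randn(128, 64)
print(in_attention(q, kv, kv, num_initial=16).shape)  # torch.Size([128, 64])
```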
arxiv-667618
2410.07064
Data Selection via Optimal Control for Language Models
<|reference_start|>Data Selection via Optimal Control for Language Models: This work investigates the selection of high-quality pre-training data from massive corpora to enhance LMs' capabilities for downstream usage. We formulate data selection as a generalized Optimal Control problem, which can be solved theoretically by Pontryagin's Maximum Principle (PMP), yielding a set of necessary conditions that characterize the relationship between optimal data selection and LM training dynamics. Based on these theoretical results, we introduce PMP-based Data Selection (PDS), a framework that approximates optimal data selection by solving the PMP conditions. In our experiments, we adopt PDS to select data from CommonCrawl and show that the PDS-selected corpus accelerates the learning of LMs and constantly boosts their performance on a wide range of downstream tasks across various model sizes. Moreover, the benefits of PDS extend to ~400B models trained on ~10T tokens, as evidenced by the extrapolation of the test loss curves according to the Scaling Laws. PDS also improves data utilization when the pre-training data is limited, by reducing the data demand by 1.8 times, which mitigates the quick exhaustion of available web-crawled corpora. Our code, data, and model checkpoints can be found in https://github.com/microsoft/LMOps/tree/main/data_selection.<|reference_end|>
arxiv
@article{gu2024data, title={Data Selection via Optimal Control for Language Models}, author={Yuxian Gu, Li Dong, Hongning Wang, Yaru Hao, Qingxiu Dong, Furu Wei, Minlie Huang}, journal={arXiv preprint arXiv:2410.07064}, year={2024}, archivePrefix={arXiv}, eprint={2410.07064}, primaryClass={cs.CL} }
gu2024data
arxiv-667619
2410.07066
A Gentle Introduction and Tutorial on Deep Generative Models in Transportation Research
<|reference_start|>A Gentle Introduction and Tutorial on Deep Generative Models in Transportation Research: Deep Generative Models (DGMs) have rapidly advanced in recent years, becoming essential tools in various fields due to their ability to learn complex data distributions and generate synthetic data. Their importance in transportation research is increasingly recognized, particularly for applications like traffic data generation, prediction, and feature extraction. This paper offers a comprehensive introduction and tutorial on DGMs, with a focus on their applications in transportation. It begins with an overview of generative models, followed by detailed explanations of fundamental models, a systematic review of the literature, and practical tutorial code to aid implementation. The paper also discusses current challenges and opportunities, highlighting how these models can be effectively utilized and further developed in transportation research. This paper serves as a valuable reference, guiding researchers and practitioners from foundational knowledge to advanced applications of DGMs in transportation research.<|reference_end|>
arxiv
@article{choi2024a, title={A Gentle Introduction and Tutorial on Deep Generative Models in Transportation Research}, author={Seongjin Choi, Zhixiong Jin, Seung Woo Ham, Jiwon Kim, Lijun Sun}, journal={arXiv preprint arXiv:2410.07066}, year={2024}, archivePrefix={arXiv}, eprint={2410.07066}, primaryClass={cs.LG} }
choi2024a
arxiv-667620
2410.07067
Further remarks on the dual negation in team logics
<|reference_start|>Further remarks on the dual negation in team logics: The dual or game-theoretical negation $\lnot$ of independence-friendly logic (IF) and dependence logic (D) exhibits an extreme degree of semantic indeterminacy in that for any pair of sentences $\phi$ and $\psi$ of IF/D, if $\phi$ and $\psi$ are incompatible in the sense that they share no models, there is a sentence $\theta$ of IF/D such that $\phi\equiv \theta$ and $\psi\equiv \lnot \theta$ (as shown originally by Burgess in the equivalent context of the prenex fragment of Henkin quantifier logic). We show that by adjusting the notion of incompatibility employed, analogues of this result can be established for a number of modal and propositional team logics, including Aloni's bilateral state-based modal logic, Hawke and Steinert-Threlkeld's semantic expressivist logic for epistemic modals, as well as propositional dependence logic with the dual negation. Together with its converse, a result of this type can be seen as an expressive completeness theorem with respect to the relevant incompatibility notion; we formulate a notion of expressive completeness for pairs of properties to make this precise.<|reference_end|>
arxiv
@article{anttila2024further, title={Further remarks on the dual negation in team logics}, author={Aleksi Anttila}, journal={arXiv preprint arXiv:2410.07067}, year={2024}, archivePrefix={arXiv}, eprint={2410.07067}, primaryClass={math.LO cs.LO} }
anttila2024further
arxiv-667621
2410.07069
ReIFE: Re-evaluating Instruction-Following Evaluation
<|reference_start|>ReIFE: Re-evaluating Instruction-Following Evaluation: The automatic evaluation of instruction following typically involves using large language models (LLMs) to assess response quality. However, there is a lack of comprehensive evaluation of these LLM-based evaluators across two dimensions: the base LLMs and the evaluation protocols. Therefore, we present a thorough meta-evaluation of instruction following, including 25 base LLMs and 15 recently proposed evaluation protocols, on 4 human-annotated datasets, assessing the evaluation accuracy of the LLM-evaluators. Our evaluation allows us to identify the best-performing base LLMs and evaluation protocols with a high degree of robustness. Moreover, our large-scale evaluation reveals: (1) Base LLM performance ranking remains largely consistent across evaluation protocols, with less capable LLMs showing greater improvement from protocol enhancements; (2) Robust evaluation of evaluation protocols requires many base LLMs with varying capability levels, as protocol effectiveness can depend on the base LLM used; (3) Evaluation results on different datasets are not always consistent, so a rigorous evaluation requires multiple datasets with distinctive features. We release our meta-evaluation suite ReIFE, which provides the codebase and evaluation result collection for more than 500 LLM-evaluator configurations, to support future research in instruction-following evaluation.<|reference_end|>
arxiv
@article{liu2024reife:, title={ReIFE: Re-evaluating Instruction-Following Evaluation}, author={Yixin Liu, Kejian Shi, Alexander R. Fabbri, Yilun Zhao, Peifeng Wang, Chien-Sheng Wu, Shafiq Joty, Arman Cohan}, journal={arXiv preprint arXiv:2410.07069}, year={2024}, archivePrefix={arXiv}, eprint={2410.07069}, primaryClass={cs.CL cs.AI cs.LG} }
liu2024reife:
arxiv-667622
2410.07071
Retrieval-Augmented Decision Transformer: External Memory for In-context RL
<|reference_start|>Retrieval-Augmented Decision Transformer: External Memory for In-context RL: In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings. Prior in-context RL methods, however, require entire episodes in the agent's context. Given that complex environments typically lead to long episodes with sparse rewards, these methods are constrained to simple environments with short episodes. To address these challenges, we introduce Retrieval-Augmented Decision Transformer (RA-DT). RA-DT employs an external memory mechanism to store past experiences from which it retrieves only sub-trajectories relevant for the current situation. The retrieval component in RA-DT does not require training and can be entirely domain-agnostic. We evaluate the capabilities of RA-DT on grid-world environments, robotics simulations, and procedurally-generated video games. On grid-worlds, RA-DT outperforms baselines, while using only a fraction of their context length. Furthermore, we illuminate the limitations of current in-context RL methods on complex environments and discuss future directions. To facilitate future research, we release datasets for four of the considered environments.<|reference_end|>
arxiv
@article{schmied2024retrieval-augmented, title={Retrieval-Augmented Decision Transformer: External Memory for In-context RL}, author={Thomas Schmied, Fabian Paischer, Vihang Patil, Markus Hofmarcher, Razvan Pascanu, Sepp Hochreiter}, journal={arXiv preprint arXiv:2410.07071}, year={2024}, archivePrefix={arXiv}, eprint={2410.07071}, primaryClass={cs.LG cs.AI} }
schmied2024retrieval-augmented
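RA-DT's retrieval component, as stated above, needs no training and can be domain-agnostic; a natural stand-in is nearest-neighbour search over embedded sub-trajectories. The sketch below uses cosine similarity with NumPy; the embedding function and memory layout are assumptions.

```python
import numpy as np

def retrieve_subtrajectories(query_emb, memory_embs, memory_trajs, top_k=3):
    """Return the top_k stored sub-trajectories whose embeddings are most
    similar (cosine) to the embedding of the current context."""
    q = query_emb / np.linalg.norm(query_emb)
    m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    best = np.argsort(m @ q)[::-1][:top_k]
    return [memory_trajs[i] for i in best]

rng = np.random.default_rng(1)
memory = rng.normal(size=(100, 8))
trajs = [f"traj_{i}" for i in range(100)]
print(retrieve_subtrajectories(memory[7], memory, trajs))  # traj_7 first
```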
arxiv-667623
2410.07072
Towards xAI: Configuring RNN Weights using Domain Knowledge for MIMO Receive Processing
<|reference_start|>Towards xAI: Configuring RNN Weights using Domain Knowledge for MIMO Receive Processing: Deep learning is making a profound impact in the physical layer of wireless communications. Despite exhibiting outstanding empirical performance in tasks such as MIMO receive processing, the reasons behind the demonstrated superior performance improvement remain largely unclear. In this work, we advance the field of Explainable AI (xAI) in the physical layer of wireless communications utilizing signal processing principles. Specifically, we focus on the task of MIMO-OFDM receive processing (e.g., symbol detection) using reservoir computing (RC), a framework within recurrent neural networks (RNNs), which outperforms both conventional and other learning-based MIMO detectors. Our analysis provides a signal processing-based, first-principles understanding of the corresponding operation of the RC. Building on this fundamental understanding, we are able to systematically incorporate the domain knowledge of wireless systems (e.g., channel statistics) into the design of the underlying RNN by directly configuring the untrained RNN weights for MIMO-OFDM symbol detection. The introduced RNN weight configuration has been validated through extensive simulations demonstrating significant performance improvements. This establishes a foundation for explainable RC-based architectures in MIMO-OFDM receive processing and provides a roadmap for incorporating domain knowledge into the design of neural networks for NextG systems.<|reference_end|>
arxiv
@article{jere2024towards, title={Towards xAI: Configuring RNN Weights using Domain Knowledge for MIMO Receive Processing}, author={Shashank Jere, Lizhong Zheng, Karim Said and Lingjia Liu}, journal={arXiv preprint arXiv:2410.07072}, year={2024}, archivePrefix={arXiv}, eprint={2410.07072}, primaryClass={eess.SP cs.LG} }
jere2024towards
arxiv-667624
2410.07073
Pixtral 12B
<|reference_start|>Pixtral 12B: We introduce Pixtral-12B, a 12-billion-parameter multimodal language model. Pixtral-12B is trained to understand both natural images and documents, achieving leading performance on various multimodal benchmarks, surpassing a number of larger models. Unlike many open-source models, Pixtral is also a cutting-edge text model for its size, and does not compromise on natural language performance to excel in multimodal tasks. Pixtral uses a new vision encoder trained from scratch, which allows it to ingest images at their natural resolution and aspect ratio. This gives users flexibility on the number of tokens used to process an image. Pixtral is also able to process any number of images in its long context window of 128K tokens. Pixtral 12B substantially outperforms other open models of similar sizes (Llama-3.2 11B & Qwen-2-VL 7B). It also outperforms much larger open models like Llama-3.2 90B while being 7x smaller. We further contribute an open-source benchmark, MM-MT-Bench, for evaluating vision-language models in practical scenarios, and provide detailed analysis and code for standardized evaluation protocols for multimodal LLMs. Pixtral-12B is released under Apache 2.0 license.<|reference_end|>
arxiv
@article{agrawal2024pixtral, title={Pixtral 12B}, author={Pravesh Agrawal, Szymon Antoniak, Emma Bou Hanna, Baptiste Bout, Devendra Chaplot, Jessica Chudnovsky, Diogo Costa, Baudouin De Monicault, Saurabh Garg, Theophile Gervet, Soham Ghosh, Am\'elie H\'eliou, Paul Jacob, Albert Q. Jiang, Kartik Khandelwal, Timoth\'ee Lacroix, Guillaume Lample, Diego Las Casas, Thibaut Lavril, Teven Le Scao, Andy Lo, William Marshall, Louis Martin, Arthur Mensch, Pavankumar Muddireddy, Valera Nemychnikova, Marie Pellat, Patrick Von Platen, Nikhil Raghuraman, Baptiste Rozi\`ere, Alexandre Sablayrolles, Lucile Saulnier, Romain Sauvestre, Wendy Shang, Roman Soletskyi, Lawrence Stewart, Pierre Stock, Joachim Studnia, Sandeep Subramanian, Sagar Vaze, Thomas Wang, Sophia Yang}, journal={arXiv preprint arXiv:2410.07073}, year={2024}, archivePrefix={arXiv}, eprint={2410.07073}, primaryClass={cs.CV cs.CL} }
agrawal2024pixtral
arxiv-667625
2410.07074
Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning
<|reference_start|>Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning: Textual Attributed Graphs (TAGs) are crucial for modeling complex real-world systems, yet leveraging large language models (LLMs) for TAGs presents unique challenges due to the gap between sequential text processing and graph-structured data. We introduce AskGNN, a novel approach that bridges this gap by leveraging In-Context Learning (ICL) to integrate graph data and task-specific information into LLMs. AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs, incorporating complex graph structures and their supervision signals. Our learning-to-retrieve algorithm optimizes the retriever to select example nodes that maximize LLM performance on graph tasks. Experiments across three tasks and seven LLMs demonstrate AskGNN's superior effectiveness in graph task performance, opening new avenues for applying LLMs to graph-structured data without extensive fine-tuning.<|reference_end|>
arxiv
@article{hu2024let's, title={Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning}, author={Zhengyu Hu, Yichuan Li, Zhengyu Chen, Jingang Wang, Han Liu, Kyumin Lee, Kaize Ding}, journal={arXiv preprint arXiv:2410.07074}, year={2024}, archivePrefix={arXiv}, eprint={2410.07074}, primaryClass={cs.LG} }
hu2024let's
arxiv-667626
2410.07076
MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses
<|reference_start|>MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses: Scientific discovery contributes largely to human society's prosperity, and recent progress shows that LLMs could potentially catalyze this process. However, it is still unclear whether LLMs can discover novel and valid hypotheses in chemistry. In this work, we investigate this central research question: Can LLMs automatically discover novel and valid chemistry research hypotheses given only a chemistry research background (consisting of a research question and/or a background survey), without limitation on the domain of the research question? After extensive discussions with chemistry experts, we propose an assumption that a majority of chemistry hypotheses can result from a research background and several inspirations. With this key insight, we break the central question into three smaller fundamental questions. In brief, they are: (1) given a background question, whether LLMs can retrieve good inspirations; (2) with background and inspirations, whether LLMs can derive a hypothesis; and (3) whether LLMs can identify good hypotheses to rank them higher. To investigate these questions, we construct a benchmark consisting of 51 chemistry papers published in 2024 in Nature, Science, or venues of a similar level (all papers are only available online since 2024). Every paper is divided by chemistry PhD students into three components: background, inspirations, and hypothesis. The goal is to rediscover the hypothesis, given only the background and a large randomly selected chemistry literature corpus containing the ground-truth inspiration papers, with LLMs trained with data up to 2023. We also develop an LLM-based multi-agent framework that leverages the assumption, consisting of three stages reflecting the three smaller questions. The proposed method can rediscover many hypotheses with very high similarity to the ground-truth ones, covering the main innovations.<|reference_end|>
arxiv
@article{yang2024moose-chem:, title={MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses}, author={Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik Cambria, Dongzhan Zhou}, journal={arXiv preprint arXiv:2410.07076}, year={2024}, archivePrefix={arXiv}, eprint={2410.07076}, primaryClass={cs.CL cs.AI cs.LG} }
yang2024moose-chem:
arxiv-667627
2410.07078
FlowBotHD: History-Aware Diffuser Handling Ambiguities in Articulated Objects Manipulation
<|reference_start|>FlowBotHD: History-Aware Diffuser Handling Ambiguities in Articulated Objects Manipulation: We introduce a novel approach to manipulate articulated objects with ambiguities, such as opening a door, in which multi-modality and occlusions create ambiguities about the opening side and direction. Multi-modality occurs when the method to open a fully closed door (push, pull, slide) is uncertain, or the side from which it should be opened is uncertain. Occlusions further obscure the door's shape from certain angles, creating further ambiguities during the occlusion. To tackle these challenges, we propose a history-aware diffusion network that models the multi-modal distribution of the articulated object and uses history to disambiguate actions and make stable predictions under occlusions. Experiments and analysis demonstrate the state-of-the-art performance of our method, and specifically the improvements in ambiguity-caused failure modes. Our project website is available at https://flowbothd.github.io/.<|reference_end|>
arxiv
@article{li2024flowbothd:, title={FlowBotHD: History-Aware Diffuser Handling Ambiguities in Articulated Objects Manipulation}, author={Yishu Li, Wen Hui Leng, Yiming Fang, Ben Eisner, David Held}, journal={arXiv preprint arXiv:2410.07078}, year={2024}, archivePrefix={arXiv}, eprint={2410.07078}, primaryClass={cs.RO} }
li2024flowbothd:
arxiv-667628
2410.07079
Automated and Complete Generation of Traffic Scenarios at Road Junctions Using a Multi-level Danger Definition
<|reference_start|>Automated and Complete Generation of Traffic Scenarios at Road Junctions Using a Multi-level Danger Definition: To ensure their safe use, autonomous vehicles (AVs) must meet rigorous certification criteria that involve executing maneuvers safely within (arbitrary) scenarios where other actors perform their intended maneuvers. For that purpose, existing scenario generation approaches optimize search to derive scenarios with high probability of dangerous situations. In this paper, we hypothesise that at road junctions, potential danger predominantly arises from overlapping paths of individual actors carrying out their designated high-level maneuvers. As a step towards AV certification, we propose an approach to derive a complete set of (potentially dangerous) abstract scenarios at any given road junction, i.e. all permutations of overlapping abstract paths assigned to actors (including the AV) for a given set of possible abstract paths. From these abstract scenarios, we derive exact paths that actors must follow to guide simulation-based testing towards potential collisions. We conduct extensive experiments to evaluate the behavior of a state-of-the-art learning based AV controller on scenarios generated over two realistic road junctions with increasing number of external actors. Results show that the AV-under-test is involved in increasing percentages of unsafe behaviors in simulation, which vary according to functional- and logical-level scenario properties.<|reference_end|>
arxiv
@article{babikian2024automated, title={Automated and Complete Generation of Traffic Scenarios at Road Junctions Using a Multi-level Danger Definition}, author={Aren A. Babikian, Attila Ficsor, Oszk\'ar Semer\'ath, Gunter Mussbacher and D\'aniel Varr\'o}, journal={arXiv preprint arXiv:2410.07079}, year={2024}, archivePrefix={arXiv}, eprint={2410.07079}, primaryClass={cs.SE} }
babikian2024automated
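The hypothesis above derives abstract scenarios as permutations of overlapping abstract paths assigned to actors. A toy sketch with itertools; the overlap predicate (shared lane segments) and the requirement that every actor's path overlap the AV's path are illustrative stand-ins for the paper's junction model.

```python
from itertools import permutations

def abstract_scenarios(paths, overlaps, num_actors):
    """Enumerate assignments of distinct abstract paths to actors (actor 0
    is the AV) in which every other actor's path overlaps the AV's path."""
    for assignment in permutations(paths, num_actors):
        av_path = assignment[0]
        if all(overlaps(av_path, p) for p in assignment[1:]):
            yield assignment

# Toy junction: paths overlap when they share a lane segment.
segments = {"left": {1, 2}, "straight": {2, 3}, "right": {4}}
overlap = lambda a, b: bool(segments[a] & segments[b])
print(list(abstract_scenarios(list(segments), overlap, 2)))
# [('left', 'straight'), ('straight', 'left')]
```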
arxiv-667629
2410.07080
Gaussian to log-normal transition for independent sets in a percolated hypercube
<|reference_start|>Gaussian to log-normal transition for independent sets in a percolated hypercube: Independent sets in graphs, i.e., subsets of vertices where no two are adjacent, have long been studied, for instance as a model of hard-core gas. The $d$-dimensional hypercube, $\{0,1\}^d$, with the nearest neighbor structure, has been a particularly appealing choice for the base graph, owing in part to its many symmetries. Results go back to the work of Korshunov and Sapozhenko who proved sharp results on the count of such sets as well as structure theorems for random samples drawn uniformly. Of much interest is the behavior of such Gibbs measures in the presence of disorder. In this direction, Kronenberg and Spinka [KS] initiated the study of independent sets in a random subgraph of the hypercube obtained by considering an instance of bond percolation with probability $p$. Relying on tools from statistical mechanics they obtained a detailed understanding of the moments of the partition function, say $\mathcal{Z}$, of the hard-core model on such random graphs and consequently deduced certain fluctuation information, as well as posed a series of interesting questions. In particular, they showed in the uniform case that there is a natural phase transition at $p=2/3$ where $\mathcal{Z}$ transitions from being concentrated for $p>2/3$ to not concentrated at $p=2/3$. In this article, developing a probabilistic framework, as well as relying on certain cluster expansion inputs from [KS], we present a detailed picture of both the fluctuations of $\mathcal{Z}$ as well as the geometry of a randomly sampled independent set. In particular, we establish that $\mathcal{Z}$, properly centered and scaled, converges to a standard Gaussian for $p>2/3$, and to a sum of two i.i.d. log-normals at $p=2/3$. A particular step in the proof which could be of independent interest involves a non-uniform birthday problem for which collisions emerge at $p=2/3$.<|reference_end|>
arxiv
@article{chowdhury2024gaussian, title={Gaussian to log-normal transition for independent sets in a percolated hypercube}, author={Mriganka Basu Roy Chowdhury, Shirshendu Ganguly and Vilas Winstein}, journal={arXiv preprint arXiv:2410.07080}, year={2024}, archivePrefix={arXiv}, eprint={2410.07080}, primaryClass={math.PR cond-mat.stat-mech cs.DM math-ph math.CO math.MP} }
chowdhury2024gaussian
arxiv-667630
2410.07081
JPEG Inspired Deep Learning
<|reference_start|>JPEG Inspired Deep Learning: Although it is traditionally believed that lossy image compression, such as JPEG compression, has a negative impact on the performance of deep neural networks (DNNs), it is shown by recent works that well-crafted JPEG compression can actually improve the performance of deep learning (DL). Inspired by this, we propose JPEG-DL, a novel DL framework that prepends any underlying DNN architecture with a trainable JPEG compression layer. To make the quantization operation in JPEG compression trainable, a new differentiable soft quantizer is employed at the JPEG layer, and then the quantization operation and underlying DNN are jointly trained. Extensive experiments show that in comparison with the standard DL, JPEG-DL delivers significant accuracy improvements across various datasets and model architectures while enhancing robustness against adversarial attacks. Particularly, on some fine-grained image classification datasets, JPEG-DL can increase prediction accuracy by as much as 20.9%. Our code is available on https://github.com/JpegInspiredDl/JPEG-Inspired-DL.git.<|reference_end|>
arxiv
@article{salamah2024jpeg, title={JPEG Inspired Deep Learning}, author={Ahmed H. Salamah, Kaixiang Zheng, Yiwen Liu and En-Hui Yang}, journal={arXiv preprint arXiv:2410.07081}, year={2024}, archivePrefix={arXiv}, eprint={2410.07081}, primaryClass={cs.CV} }
salamah2024jpeg
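JPEG-DL's trainable layer hinges on a differentiable soft quantizer so that gradients can pass through the rounding step. The abstract does not give the quantizer's form, so the sketch below uses one common tanh-based soft-rounding surrogate purely as an illustration.

```python
import math
import torch

def soft_round(x: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Smooth, monotone surrogate for round(); it approaches true
    rounding as alpha grows while keeping useful gradients everywhere."""
    m = torch.floor(x) + 0.5
    return m + 0.5 * torch.tanh(alpha * (x - m)) / math.tanh(alpha / 2)

def soft_quantize(coeffs, step, alpha: float = 5.0):
    """JPEG-style quantization Q(c) = step * round(c / step), with round()
    replaced by its soft surrogate so gradients reach both the transform
    coefficients and a trainable quantization step."""
    return step * soft_round(coeffs / step, alpha)

c = torch.tensor([3.7, -1.2, 0.4], requires_grad=True)
step = torch.tensor(2.0, requires_grad=True)
soft_quantize(c, step).sum().backward()
print(c.grad, step.grad)  # both receive gradients
```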
arxiv-667631
2410.07083
Stanceformer: Target-Aware Transformer for Stance Detection
<|reference_start|>Stanceformer: Target-Aware Transformer for Stance Detection: The task of Stance Detection involves discerning the stance expressed in a text towards a specific subject or target. Prior works have relied on existing transformer models that lack the capability to prioritize targets effectively. Consequently, these models yield similar performance regardless of whether we utilize or disregard target information, undermining the task's significance. To address this challenge, we introduce Stanceformer, a target-aware transformer model that incorporates enhanced attention towards the targets during both training and inference. Specifically, we design a \textit{Target Awareness} matrix that increases the self-attention scores assigned to the targets. We demonstrate the efficacy of the Stanceformer with various BERT-based models, including state-of-the-art models and Large Language Models (LLMs), and evaluate its performance across three stance detection datasets, alongside a zero-shot dataset. Our approach Stanceformer not only provides superior performance but also generalizes even to other domains, such as Aspect-based Sentiment Analysis. We make the code publicly available.\footnote{\scriptsize\url{https://github.com/kgarg8/Stanceformer}}<|reference_end|>
arxiv
@article{garg2024stanceformer:, title={Stanceformer: Target-Aware Transformer for Stance Detection}, author={Krishna Garg, Cornelia Caragea}, journal={arXiv preprint arXiv:2410.07083}, year={2024}, archivePrefix={arXiv}, eprint={2410.07083}, primaryClass={cs.CL} }
garg2024stanceformer:
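The Target Awareness matrix above increases the self-attention scores assigned to target tokens. A minimal single-head sketch that adds a bias on key positions inside the target span; the additive form and the bias value are assumptions based on the abstract's description.

```python
import torch
import torch.nn.functional as F

def target_aware_attention(q, k, v, target_mask, bias: float = 1.0):
    """Single-head self-attention with raised scores on target tokens.

    q, k, v     -- (seq_len, d_model) tensors
    target_mask -- (seq_len,) bool tensor, True where the token belongs
                   to the stance target
    """
    scores = q @ k.T / (q.shape[-1] ** 0.5)       # (seq_len, seq_len)
    scores = scores + bias * target_mask.float()  # boost attention to targets
    return F.softmax(scores, dim=-1) @ v

seq_len, d_model = 6, 8
q = k = v = torch.randn(seq_len, d_model)
mask = torch.tensor([False, False, True, True, False, False])
print(target_aware_attention(q, k, v, mask).shape)  # torch.Size([6, 8])
```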
arxiv-667632
2410.07087
Towards Realistic UAV Vision-Language Navigation: Platform, Benchmark, and Methodology
<|reference_start|>Towards Realistic UAV Vision-Language Navigation: Platform, Benchmark, and Methodology: Developing agents capable of navigating to a target location based on language instructions and visual information, known as vision-language navigation (VLN), has attracted widespread interest. Most research has focused on ground-based agents, while UAV-based VLN remains relatively underexplored. Recent efforts in UAV vision-language navigation predominantly adopt ground-based VLN settings, relying on predefined discrete action spaces and neglecting the inherent disparities in agent movement dynamics and the complexity of navigation tasks between ground and aerial environments. To address these disparities and challenges, we propose solutions from three perspectives: platform, benchmark, and methodology. To enable realistic UAV trajectory simulation in VLN tasks, we propose the OpenUAV platform, which features diverse environments, realistic flight control, and extensive algorithmic support. We further construct a target-oriented VLN dataset consisting of approximately 12k trajectories on this platform, serving as the first dataset specifically designed for realistic UAV VLN tasks. To tackle the challenges posed by complex aerial environments, we propose an assistant-guided UAV object search benchmark called UAV-Need-Help, which provides varying levels of guidance information to help UAVs better accomplish realistic VLN tasks. We also propose a UAV navigation LLM that, given multi-view images, task descriptions, and assistant instructions, leverages the multimodal understanding capabilities of the MLLM to jointly process visual and textual information, and performs hierarchical trajectory generation. The evaluation results of our method significantly outperform the baseline models, while there remains a considerable gap between our results and those achieved by human operators, underscoring the challenge presented by the UAV-Need-Help task.<|reference_end|>
arxiv
@article{wang2024towards, title={Towards Realistic UAV Vision-Language Navigation: Platform, Benchmark, and Methodology}, author={Xiangyu Wang, Donglin Yang, Ziqin Wang, Hohin Kwan, Jinyu Chen, Wenjun Wu, Hongsheng Li, Yue Liao, Si Liu}, journal={arXiv preprint arXiv:2410.07087}, year={2024}, archivePrefix={arXiv}, eprint={2410.07087}, primaryClass={cs.CV cs.RO} }
wang2024towards
arxiv-667633
2410.07091
Collusion Detection with Graph Neural Networks
<|reference_start|>Collusion Detection with Graph Neural Networks: Collusion is a complex phenomenon in which companies secretly collaborate to engage in fraudulent practices. This paper presents an innovative methodology for detecting and predicting collusion patterns in different national markets using neural networks (NNs) and graph neural networks (GNNs). GNNs are particularly well suited to this task because they can exploit the inherent network structures present in collusion and many other economic problems. Our approach consists of two phases: In Phase I, we develop and train models on individual market datasets from Japan, the United States, two regions in Switzerland, Italy, and Brazil, focusing on predicting collusion in single markets. In Phase II, we extend the models' applicability through zero-shot learning, employing a transfer learning approach that can detect collusion in markets in which training data is unavailable. This phase also incorporates out-of-distribution (OOD) generalization to evaluate the models' performance on unseen datasets from other countries and regions. In our empirical study, we show that GNNs outperform NNs in detecting complex collusive patterns. This research contributes to the ongoing discourse on preventing collusion and optimizing detection methodologies, providing valuable guidance on the use of NNs and GNNs in economic applications to enhance market fairness and economic welfare.<|reference_end|>
arxiv
@article{gomes2024collusion, title={Collusion Detection with Graph Neural Networks}, author={Lucas Gomes, Jannis Kueck, Mara Mattes, Martin Spindler, Alexey Zaytsev}, journal={arXiv preprint arXiv:2410.07091}, year={2024}, archivePrefix={arXiv}, eprint={2410.07091}, primaryClass={econ.EM cs.LG stat.ML} }
gomes2024collusion
arxiv-667634
2410.07093
LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning
<|reference_start|>LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning: Language plays a vital role in the realm of human motion. Existing methods have largely depended on CLIP text embeddings for motion generation, yet they fall short in effectively aligning language and motion due to CLIP's pretraining on static image-text pairs. This work introduces LaMP, a novel Language-Motion Pretraining model, which transitions from a language-vision to a more suitable language-motion latent space. It addresses key limitations by generating motion-informative text embeddings, significantly enhancing the relevance and semantics of generated motion sequences. With LaMP, we advance three key tasks: text-to-motion generation, motion-text retrieval, and motion captioning through aligned language-motion representation learning. For generation, we utilize LaMP to provide the text condition instead of CLIP, and an autoregressive masked prediction is designed to achieve mask modeling without rank collapse in transformers. For retrieval, motion features from LaMP's motion transformer interact with query tokens to retrieve text features from the text transformer, and vice versa. For captioning, we finetune a large language model with the language-informative motion features to develop a strong motion captioning model. In addition, we introduce the LaMP-BertScore metric to assess the alignment of generated motions with textual descriptions. Extensive experimental results on multiple datasets demonstrate substantial improvements over previous methods across all three tasks. The code of our method will be made public.<|reference_end|>
arxiv
@article{li2024lamp:, title={LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning}, author={Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong Gu, Weichao Shen, Yuan Dong, Zilong Dong, Laurence T. Yang}, journal={arXiv preprint arXiv:2410.07093}, year={2024}, archivePrefix={arXiv}, eprint={2410.07093}, primaryClass={cs.CV} }
li2024lamp:
arxiv-667635
2410.07094
An Approach for Auto Generation of Labeling Functions for Software Engineering Chatbots
<|reference_start|>An Approach for Auto Generation of Labeling Functions for Software Engineering Chatbots: Software engineering (SE) chatbots are increasingly gaining attention for their role in enhancing development processes. At the core of chatbots are the Natural Language Understanding platforms (NLUs), which enable them to comprehend and respond to user queries. Before deploying NLUs, there is a need to train them with labeled data. However, acquiring such labeled data for SE chatbots is challenging due to the scarcity of high-quality datasets. This challenge arises because training SE chatbots requires specialized vocabulary and phrases not found in typical language datasets. Consequently, chatbot developers often resort to manually annotating user queries to gather the data necessary for training effective chatbots, a process that is both time-consuming and resource-intensive. Previous studies propose approaches to support chatbot practitioners in annotating users' queries. However, these approaches require human intervention to generate rules, called labeling functions (LFs), that identify and categorize user queries based on specific patterns in the data. To address this issue, we propose an approach to automatically generate LFs by extracting patterns from labeled user queries. We evaluate the effectiveness of our approach by applying it to the queries of four diverse SE datasets (namely AskGit, MSA, Ask Ubuntu, and Stack Overflow) and measure the performance improvement gained from training the NLU on the queries labeled by the generated LFs. We find that the generated LFs effectively label data with AUC scores of up to 85.3%, and that training the NLU on these labeled queries improves its performance by up to 27.2% across the studied datasets. Furthermore, our results show that the number of generated LFs affects the labeling performance. We believe that our approach can save time and resources in labeling users' queries, allowing practitioners to focus on core chatbot functionalities.<|reference_end|>
arxiv
@article{alor2024an, title={An Approach for Auto Generation of Labeling Functions for Software Engineering Chatbots}, author={Ebube Alor, Ahmad Abdellatif, SayedHassan Khatoonabadi, Emad Shihab}, journal={arXiv preprint arXiv:2410.07094}, year={2024}, archivePrefix={arXiv}, eprint={2410.07094}, primaryClass={cs.SE cs.AI cs.CL cs.LG} }
alor2024an
arxiv-667636
2410.07095
MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering
<|reference_start|>MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering: We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle's publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup--OpenAI's o1-preview with AIDE scaffolding--achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code (github.com/openai/mle-bench/) to facilitate future research in understanding the ML engineering capabilities of AI agents.<|reference_end|>
arxiv
@article{chan2024mle-bench:, title={MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering}, author={Jun Shern Chan, Neil Chowdhury, Oliver Jaffe, James Aung, Dane Sherburn, Evan Mays, Giulio Starace, Kevin Liu, Leon Maksin, Tejal Patwardhan, Lilian Weng, Aleksander Mądry}, journal={arXiv preprint arXiv:2410.07095}, year={2024}, archivePrefix={arXiv}, eprint={2410.07095}, primaryClass={cs.CL} }
chan2024mle-bench:
arxiv-667637
2410.07096
Identifying and Addressing Delusions for Target-Directed Decision-Making
<|reference_start|>Identifying and Addressing Delusions for Target-Directed Decision-Making: We are interested in target-directed agents, which produce targets during decision-time planning, to guide their behaviors and achieve better generalization during evaluation. Improper training of these agents can result in delusions: the agent may come to hold false beliefs about the targets, which cannot be properly rejected, leading to unwanted behaviors and damaging out-of-distribution generalization. We identify different types of delusions by using intuitive examples in carefully controlled environments, and investigate their causes. We demonstrate how delusions can be addressed for agents trained by hindsight relabeling, a mainstream approach for training target-directed RL agents. We empirically validate the effectiveness of the proposed solutions in correcting delusional behaviors and improving out-of-distribution generalization.<|reference_end|>
arxiv
@article{zhao2024identifying, title={Identifying and Addressing Delusions for Target-Directed Decision-Making}, author={Mingde Zhao, Tristan Sylvain, Doina Precup, Yoshua Bengio}, journal={arXiv preprint arXiv:2410.07096}, year={2024}, archivePrefix={arXiv}, eprint={2410.07096}, primaryClass={cs.AI} }
zhao2024identifying
arxiv-667638
2410.07097
A Law of Large Numbers for SIR on the Stochastic Block Model: A Proof via Herd Immunity
<|reference_start|>A Law of Large Numbers for SIR on the Stochastic Block Model: A Proof via Herd Immunity: In this paper, we study the dynamics of the susceptible-infected-recovered (SIR) model on a network with community structure, namely the stochastic block model (SBM). As usual, the SIR model is a stochastic model for an epidemic where infected vertices infect susceptible neighbors at some rate $\eta$ and recover at rate $\gamma$, and the SBM is a random graph model where vertices are partitioned into a finite number of communities. The connection probability between two vertices depends on their community affiliation, here scaled so that the average degrees have a finite limit as the network grows. We prove laws of large numbers (LLN) for the epidemic's trajectory to a system of ordinary differential equations over any time horizon (finite or infinite), including in particular a LLN for the final size of the infection. Our proofs rely on two main ingredients: (i) a new coupling of the SIR epidemic and the randomness of the SBM, revealing a vector-valued random variable that drives the epidemic (related to what is usually called the ``force of the infection'' via a linear transformation), and (ii) a novel technique for analyzing the limiting behavior of the infinite time horizon for the infection, using the fact that once the infection passes the herd immunity threshold it dies out quickly and has a negligible impact on the overall size of the infection.<|reference_end|>
arxiv
@article{borgs2024a, title={A Law of Large Numbers for SIR on the Stochastic Block Model: A Proof via Herd Immunity}, author={Christian Borgs, Karissa Huang, Christian Ikeokwu}, journal={arXiv preprint arXiv:2410.07097}, year={2024}, archivePrefix={arXiv}, eprint={2410.07097}, primaryClass={math.PR cs.SI} }
borgs2024a
arxiv-667639
2410.07102
Non-linear Control of the Power Injected Into a Weak Grid by a Self-Synchronized Inverter
<|reference_start|>Non-linear Control of the Power Injected Into a Weak Grid by a Self-Synchronized Inverter: In this work, a non-linear controller designed using linearization via a non-linear transformation and feedback is proposed for an inverter connected to a weak grid through a single-stage inductive filter. The proposed strategy is self-synchronized, so no voltage sensor is required at the Point of Common Coupling (PCC). It robustifies, in the presence of a weak grid, a strategy that has already been demonstrated to allow a significant reduction in the size of the DC-link capacitor of the converter. For this purpose, a state observer is designed that estimates the voltage at the PCC from the measurement of the output inductor current. A start-up controller is also included, which enables synchronization from system start-up. Simulation results are presented for different operating cases, including start-up, normal operation, and grid-voltage sags and swells. In all these cases, the exact parameters of the grid to which the inverter is connected are assumed to be unknown.<|reference_end|>
arxiv
@article{jorge2024non-linear, title={Non-linear Control of the Power Injected Into a Weak Grid by a Self-Synchronized Inverter}, author={Sebastian Gomez Jorge, Jorge A. Solsona, Claudio A. Busada, Leire C. Aguirre-Larrayoz, M. Itsaso Martínez and Gerardo Tapia-Otaegui}, journal={arXiv preprint arXiv:2410.07102}, year={2024}, archivePrefix={arXiv}, eprint={2410.07102}, primaryClass={eess.SY cs.SY} }
jorge2024non-linear
arxiv-667640
2410.07103
Unleashing Multi-Hop Reasoning Potential in Large Language Models through Repetition of Misordered Context
<|reference_start|>Unleashing Multi-Hop Reasoning Potential in Large Language Models through Repetition of Misordered Context: Multi-hop reasoning, which requires multi-step reasoning based on the supporting documents within a given context, remains challenging for large language models (LLMs). LLMs often struggle to filter out irrelevant documents within the context, and their performance is sensitive to the position of supporting documents within that context. In this paper, we identify an additional challenge: LLMs' performance is also sensitive to the order in which the supporting documents are presented. We refer to this as the misordered context problem. To address this issue, we propose a simple yet effective method called context repetition (CoRe), which involves prompting the model by repeatedly presenting the context to ensure the supporting documents are presented in the optimal order for the model. Using CoRe, we improve the F1 score by up to 30%p on multi-hop QA tasks and increase accuracy by up to 70%p on a synthetic task. Additionally, CoRe helps mitigate the well-known "lost-in-the-middle" problem in LLMs and can be effectively combined with retrieval-based approaches utilizing Chain-of-Thought (CoT) reasoning.<|reference_end|>
arxiv
@article{yu2024unleashing, title={Unleashing Multi-Hop Reasoning Potential in Large Language Models through Repetition of Misordered Context}, author={Sangwon Yu, Ik-hwan Kim, Jongyoon Song, Saehyung Lee, Junsung Park, Sungroh Yoon}, journal={arXiv preprint arXiv:2410.07103}, year={2024}, archivePrefix={arXiv}, eprint={2410.07103}, primaryClass={cs.CL} }
yu2024unleashing
arxiv-667641
2410.07108
FAIR GPT: A virtual consultant for research data management in ChatGPT
<|reference_start|>FAIR GPT: A virtual consultant for research data management in ChatGPT: FAIR GPT is the first virtual consultant in ChatGPT designed to help researchers and organizations make their data and metadata compliant with the FAIR (Findable, Accessible, Interoperable, Reusable) principles. It provides guidance on metadata improvement, dataset organization, and repository selection. To ensure accuracy, FAIR GPT uses external APIs to assess dataset FAIRness, retrieve controlled vocabularies, and recommend repositories, minimizing hallucination and improving precision. It also assists in creating documentation (data and software management plans, README files, and codebooks) and in selecting proper licenses. This paper describes its features, applications, and limitations.<|reference_end|>
arxiv
@article{shigapov2024fair, title={FAIR GPT: A virtual consultant for research data management in ChatGPT}, author={Renat Shigapov, Irene Schumm}, journal={arXiv preprint arXiv:2410.07108}, year={2024}, archivePrefix={arXiv}, eprint={2410.07108}, primaryClass={cs.DL cs.AI cs.IR} }
shigapov2024fair
arxiv-667642
2410.07109
I Want to Break Free! Anti-Social Behavior and Persuasion Ability of LLMs in Multi-Agent Settings with Social Hierarchy
<|reference_start|>I Want to Break Free! Anti-Social Behavior and Persuasion Ability of LLMs in Multi-Agent Settings with Social Hierarchy: As Large Language Model (LLM)-based agents become increasingly autonomous and will more freely interact with each other, studying interactions between them becomes crucial to anticipate emergent phenomena and potential risks. Drawing inspiration from the widely popular Stanford Prison Experiment, we contribute to this line of research by studying interaction patterns of LLM agents in a context characterized by strict social hierarchy. We do so by specifically studying two types of phenomena: persuasion and anti-social behavior in simulated scenarios involving a guard and a prisoner agent who seeks to achieve a specific goal (i.e., obtaining additional yard time or escaping from prison). Leveraging 200 experimental scenarios for a total of 2,000 machine-machine conversations across five different popular LLMs, we provide a set of noteworthy findings. We first document how some models consistently fail in carrying out a conversation in our multi-agent setup where power dynamics are at play. Then, for the models that were able to engage in successful interactions, we empirically show how the goal that an agent is set to achieve primarily impacts its persuasiveness, while having a negligible effect on the agent's anti-social behavior. Third, we highlight how agents' personas, and particularly the guard's personality, drive both the likelihood of successful persuasion from the prisoner and the emergence of anti-social behaviors. Fourth, we show that even without explicitly prompting for specific personalities, anti-social behavior emerges by simply assigning agents' roles. These results bear implications for the development of interactive LLM agents as well as the debate on their societal impact.<|reference_end|>
arxiv
@article{campedelli2024i, title={I Want to Break Free! Anti-Social Behavior and Persuasion Ability of LLMs in Multi-Agent Settings with Social Hierarchy}, author={Gian Maria Campedelli, Nicolò Penzo, Massimo Stefan, Roberto Dessì, Marco Guerini, Bruno Lepri, Jacopo Staiano}, journal={arXiv preprint arXiv:2410.07109}, year={2024}, archivePrefix={arXiv}, eprint={2410.07109}, primaryClass={cs.CL cs.AI cs.CY cs.MA} }
campedelli2024i
arxiv-667643
2410.07110
Continual Learning: Less Forgetting, More OOD Generalization via Adaptive Contrastive Replay
<|reference_start|>Continual Learning: Less Forgetting, More OOD Generalization via Adaptive Contrastive Replay: Machine learning models often suffer from catastrophic forgetting of previously learned knowledge when learning new classes. Various methods have been proposed to mitigate this issue. However, rehearsal-based learning, which retains samples from previous classes, typically achieves good performance but tends to memorize specific instances, struggling with Out-of-Distribution (OOD) generalization. This often leads to high forgetting rates and poor generalization. Surprisingly, the OOD generalization capabilities of these methods have been largely unexplored. In this paper, we highlight this issue and propose a simple yet effective strategy inspired by contrastive learning and data-centric principles to address it. We introduce Adaptive Contrastive Replay (ACR), a method that employs dual optimization to simultaneously train both the encoder and the classifier. ACR adaptively populates the replay buffer with misclassified samples while ensuring a balanced representation of classes and tasks. By refining the decision boundary in this way, ACR achieves a balance between stability and plasticity. Our method significantly outperforms previous approaches in terms of OOD generalization, achieving an improvement of 13.41\% on Split CIFAR-100, 9.91\% on Split Mini-ImageNet, and 5.98\% on Split Tiny-ImageNet.<|reference_end|>
arxiv
@article{rezaei2024continual, title={Continual Learning: Less Forgetting, More OOD Generalization via Adaptive Contrastive Replay}, author={Hossein Rezaei, Mohammad Sabokrou}, journal={arXiv preprint arXiv:2410.07110}, year={2024}, archivePrefix={arXiv}, eprint={2410.07110}, primaryClass={cs.LG cs.CV} }
rezaei2024continual
arxiv-667644
2410.07111
Utility of Multimodal Large Language Models in Analyzing Chest X-ray with Incomplete Contextual Information
<|reference_start|>Utility of Multimodal Large Language Models in Analyzing Chest X-ray with Incomplete Contextual Information: Background: Large language models (LLMs) are gaining use in clinical settings, but their performance can suffer with incomplete radiology reports. We tested whether multimodal LLMs (using text and images) could improve accuracy and understanding in chest radiography reports, making them more effective for clinical decision support. Purpose: To assess the robustness of LLMs in generating accurate impressions from chest radiography reports using both incomplete data and multimodal data. Material and Methods: We used 300 radiology image-report pairs from the MIMIC-CXR database. Three LLMs (OpenFlamingo, MedFlamingo, IDEFICS) were tested in both text-only and multimodal formats. Impressions were first generated from the full text, then tested by removing 20%, 50%, and 80% of the text. The impact of adding images was evaluated using chest x-rays, and model performance was compared using three metrics with statistical analysis. Results: The text-only models (OpenFlamingo, MedFlamingo, IDEFICS) had similar performance (ROUGE-L: 0.39 vs. 0.21 vs. 0.21; F1RadGraph: 0.34 vs. 0.17 vs. 0.17; F1CheXbert: 0.53 vs. 0.40 vs. 0.40), with OpenFlamingo performing best on complete text (p<0.001). Performance declined with incomplete data across all models. However, adding images significantly boosted the performance of MedFlamingo and IDEFICS (p<0.001), equaling or surpassing OpenFlamingo, even with incomplete text. Conclusion: LLMs may produce low-quality outputs with incomplete radiology data, but multimodal LLMs can improve reliability and support clinical decision-making. Keywords: Large language model; multimodal; semantic analysis; Chest Radiography; Clinical Decision Support;<|reference_end|>
arxiv
@article{kim2024utility, title={Utility of Multimodal Large Language Models in Analyzing Chest X-ray with Incomplete Contextual Information}, author={Choonghan Kim, Seonhee Cho, Joo Heung Yoon}, journal={arXiv preprint arXiv:2410.07111}, year={2024}, archivePrefix={arXiv}, eprint={2410.07111}, primaryClass={eess.IV cs.CL cs.CV} }
kim2024utility
arxiv-667645
2410.07112
VHELM: A Holistic Evaluation of Vision Language Models
<|reference_start|>VHELM: A Holistic Evaluation of Vision Language Models: Current benchmarks for assessing vision-language models (VLMs) often focus on their perception or problem-solving capabilities and neglect other critical aspects such as fairness, multilinguality, or toxicity. Furthermore, they differ in their evaluation procedures and the scope of the evaluation, making it difficult to compare models. To address these issues, we extend the HELM framework to VLMs to present the Holistic Evaluation of Vision Language Models (VHELM). VHELM aggregates various datasets to cover one or more of the 9 aspects: visual perception, knowledge, reasoning, bias, fairness, multilinguality, robustness, toxicity, and safety. In doing so, we produce a comprehensive, multi-dimensional view of the capabilities of the VLMs across these important factors. In addition, we standardize the inference parameters, methods of prompting, and evaluation metrics to enable fair comparisons across models. Our framework is designed to be lightweight and automatic so that evaluation runs are cheap and fast. Our initial run evaluates 22 VLMs on 21 existing datasets to provide a holistic snapshot of the models. We uncover new key findings, such as the fact that efficiency-focused models (e.g., Claude 3 Haiku or Gemini 1.5 Flash) perform significantly worse than their full models (e.g., Claude 3 Opus or Gemini 1.5 Pro) on the bias benchmark but not when evaluated on the other aspects. For transparency, we release the raw model generations and complete results on our website (https://crfm.stanford.edu/helm/vhelm/v2.0.1). VHELM is intended to be a living benchmark, and we hope to continue adding new datasets and models over time.<|reference_end|>
arxiv
@article{lee2024vhelm:, title={VHELM: A Holistic Evaluation of Vision Language Models}, author={Tony Lee, Haoqin Tu, Chi Heem Wong, Wenhao Zheng, Yiyang Zhou, Yifan Mai, Josselin Somerville Roberts, Michihiro Yasunaga, Huaxiu Yao, Cihang Xie, Percy Liang}, journal={arXiv preprint arXiv:2410.07112}, year={2024}, archivePrefix={arXiv}, eprint={2410.07112}, primaryClass={cs.CV cs.AI} }
lee2024vhelm:
arxiv-667646
2410.07113
Personalized Visual Instruction Tuning
<|reference_start|>Personalized Visual Instruction Tuning: Recent advancements in multimodal large language models (MLLMs) have demonstrated significant progress; however, these models exhibit a notable limitation, which we refer to as "face blindness". Specifically, they can engage in general conversations but fail to conduct personalized dialogues targeting specific individuals. This deficiency hinders the application of MLLMs in personalized settings, such as tailored visual assistants on mobile devices, or domestic robots that need to recognize members of the family. In this paper, we introduce Personalized Visual Instruction Tuning (PVIT), a novel data curation and training framework designed to enable MLLMs to identify target individuals within an image and engage in personalized and coherent dialogues. Our approach involves the development of a sophisticated pipeline that autonomously generates training data containing personalized conversations. This pipeline leverages the capabilities of various visual experts, image generation models, and (multi-modal) large language models. To evaluate the personalized potential of MLLMs, we present a benchmark called P-Bench, which encompasses various question types with different levels of difficulty. The experiments demonstrate a substantial personalized performance enhancement after fine-tuning with our curated dataset.<|reference_end|>
arxiv
@article{pi2024personalized, title={Personalized Visual Instruction Tuning}, author={Renjie Pi, Jianshu Zhang, Tianyang Han, Jipeng Zhang, Rui Pan, Tong Zhang}, journal={arXiv preprint arXiv:2410.07113}, year={2024}, archivePrefix={arXiv}, eprint={2410.07113}, primaryClass={cs.CV} }
pi2024personalized
arxiv-667647
2410.07114
System 2 thinking in OpenAI's o1-preview model: Near-perfect performance on a mathematics exam
<|reference_start|>System 2 thinking in OpenAI's o1-preview model: Near-perfect performance on a mathematics exam: The processes underlying human cognition are often divided into two systems: System 1, which involves fast, intuitive thinking, and System 2, which involves slow, deliberate reasoning. Previously, large language models were criticized for lacking the deeper, more analytical capabilities of System 2. In September 2024, OpenAI introduced the O1 model series, specifically designed to handle System 2-like reasoning. While OpenAI's benchmarks are promising, independent validation is still needed. In this study, we tested the O1-preview model twice on the Dutch 'Mathematics B' final exam. It achieved near-perfect scores of 76 and 73 out of 76 points. For context, only 24 out of 16,414 students in the Netherlands achieved a perfect score. By comparison, the GPT-4o model scored 66 and 61 out of 76, well above the Dutch average of 40.63 points. The O1-preview model completed the exam in around 10 minutes, while GPT-4o took 3 minutes, and neither model had access to the exam figures. Although O1-preview had the ability to achieve a perfect score, its performance showed some variability, as it made occasional mistakes with repeated prompting. This suggests that the self-consistency method, where the consensus output is selected, could improve accuracy. We conclude that while OpenAI's new model series holds great potential, certain risks must be considered.<|reference_end|>
arxiv
@article{dewinter2024system, title={System 2 thinking in OpenAI's o1-preview model: Near-perfect performance on a mathematics exam}, author={Joost de Winter, Dimitra Dodou, Yke Bauke Eisma}, journal={Computers 13 (2024) 278}, year={2024}, doi={10.3390/computers13110278}, archivePrefix={arXiv}, eprint={2410.07114}, primaryClass={cs.CY cs.AI cs.CL} }
dewinter2024system
arxiv-667648
2410.07115
Generating Topologically and Geometrically Diverse Manifold Data in Dimensions Four and Below
<|reference_start|>Generating Topologically and Geometrically Diverse Manifold Data in Dimensions Four and Below: Understanding the topological characteristics of data is important to many areas of research. Recent work has demonstrated that synthetic 4D image-type data can be useful to train 4D convolutional neural network models to see topological features in these data. These models also appear to tolerate the use of image preprocessing techniques where existing topological data analysis techniques such as persistent homology do not. This paper investigates how methods from algebraic topology, combined with image processing techniques such as morphology, can be used to generate topologically sophisticated and diverse-looking 2-, 3-, and 4D image-type data with topological labels in simulation. These approaches are illustrated in 2D and 3D with the aim of providing a roadmap towards achieving this in 4D.<|reference_end|>
arxiv
@article{hannouch2024generating, title={Generating Topologically and Geometrically Diverse Manifold Data in Dimensions Four and Below}, author={Khalil Mathieu Hannouch and Stephan Chalup}, journal={arXiv preprint arXiv:2410.07115}, year={2024}, archivePrefix={arXiv}, eprint={2410.07115}, primaryClass={cs.CV cs.GR} }
hannouch2024generating
arxiv-667649
2410.07117
Classification of Buried Objects from Ground Penetrating Radar Images by using Second Order Deep Learning Models
<|reference_start|>Classification of Buried Objects from Ground Penetrating Radar Images by using Second Order Deep Learning Models: In this paper, a new classification model based on covariance matrices is built in order to classify buried objects. The inputs of the proposed models are the hyperbola thumbnails obtained with a classical Ground Penetrating Radar (GPR) system. These thumbnails are fed into the first layers of a classical CNN, which produces a covariance matrix from the outputs of the convolutional filters. Next, the covariance matrix is given to a network composed of specific layers for classifying Symmetric Positive Definite (SPD) matrices. We show on a large database that our approach outperforms shallow networks designed for GPR data and conventional CNNs typically used in computer vision applications, particularly when the amount of training data decreases and in the presence of mislabeled data. We also illustrate the benefit of our models when the training and test sets are obtained under different weather modes or conditions.<|reference_end|>
arxiv
@article{jafuno2024classification, title={Classification of Buried Objects from Ground Penetrating Radar Images by using Second Order Deep Learning Models}, author={Douba Jafuno, Ammar Mian, Guillaume Ginolhac, Nickolas Stelzenmuller}, journal={arXiv preprint arXiv:2410.07117}, year={2024}, archivePrefix={arXiv}, eprint={2410.07117}, primaryClass={cs.CV cs.LG stat.AP} }
jafuno2024classification
arxiv-667650
2410.07118
Exploring the Readiness of Prominent Small Language Models for the Democratization of Financial Literacy
<|reference_start|>Exploring the Readiness of Prominent Small Language Models for the Democratization of Financial Literacy: The use of small language models (SLMs), herein defined as models with less than three billion parameters, is increasing across various domains and applications. Due to their ability to run on more accessible hardware and preserve user privacy, SLMs possess the potential to democratize access to language models for individuals of different socioeconomic status and with different privacy preferences. This study assesses several state-of-the-art SLMs (e.g., Apple's OpenELM, Microsoft's Phi, Google's Gemma, and the Tinyllama project) for use in the financial domain to support the development of financial literacy LMs. Democratizing access to quality financial information for those who are financially undereducated is greatly needed in society, particularly as new financial markets and products emerge and participation in financial markets increases due to ease of access. We are the first to examine the use of open-source SLMs to democratize access to financial question answering capabilities for individuals and students. To this end, we provide an analysis of the memory usage, inference time, similarity comparisons to ground-truth answers, and output readability of prominent SLMs to determine which models are most accessible and capable of supporting access to financial information. We analyze zero-shot and few-shot learning variants of the models. The results suggest that some off-the-shelf SLMs merit further exploration and fine-tuning to prepare them for individual use, while others may have limits to their democratization.<|reference_end|>
arxiv
@article{kosireddy2024exploring, title={Exploring the Readiness of Prominent Small Language Models for the Democratization of Financial Literacy}, author={Tagore Rao Kosireddy and Jeffrey D. Wall and Evan Lucas}, journal={arXiv preprint arXiv:2410.07118}, year={2024}, archivePrefix={arXiv}, eprint={2410.07118}, primaryClass={cs.CL} }
kosireddy2024exploring
arxiv-667651
2410.07119
Thing2Reality: Transforming 2D Content into Conditioned Multiviews and 3D Gaussian Objects for XR Communication
<|reference_start|>Thing2Reality: Transforming 2D Content into Conditioned Multiviews and 3D Gaussian Objects for XR Communication: During remote communication, participants often share both digital and physical content, such as product designs, digital assets, and environments, to enhance mutual understanding. Recent advances in augmented communication have enabled users to swiftly create and share digital 2D copies of physical objects from video feeds into a shared space. However, conventional 2D representations of digital objects restrict users' ability to spatially reference items in a shared immersive environment. To address this, we propose Thing2Reality, an Extended Reality (XR) communication platform that enhances spontaneous discussions of both digital and physical items during remote sessions. With Thing2Reality, users can quickly materialize ideas or physical objects in immersive environments and share them as conditioned multiview renderings or 3D Gaussians. Thing2Reality enables users to interact with remote objects or discuss concepts in a collaborative manner. Our user study revealed that the ability to interact with and manipulate 3D representations of objects significantly enhances the efficiency of discussions, with the potential to augment discussion of 2D artifacts.<|reference_end|>
arxiv
@article{hu2024thing2reality:, title={Thing2Reality: Transforming 2D Content into Conditioned Multiviews and 3D Gaussian Objects for XR Communication}, author={Erzhen Hu, Mingyi Li, Jungtaek Hong, Xun Qian, Alex Olwal, David Kim, Seongkook Heo, Ruofei Du}, journal={arXiv preprint arXiv:2410.07119}, year={2024}, archivePrefix={arXiv}, eprint={2410.07119}, primaryClass={cs.HC cs.AI cs.CV} }
hu2024thing2reality:
arxiv-667652
2410.07120
Sequential Decoding of Multiple Traces Over the Syndrome Trellis for Synchronization Errors
<|reference_start|>Sequential Decoding of Multiple Traces Over the Syndrome Trellis for Synchronization Errors: Standard decoding approaches for convolutional codes, such as the Viterbi and BCJR algorithms, entail significant complexity when correcting synchronization errors. The situation worsens when multiple received sequences should be jointly decoded, as in DNA storage. Previous work has attempted to address this via separate-BCJR decoding, i.e., combining the results of decoding each received sequence separately. Another attempt to reduce complexity adapted sequential decoders for use over channels with insertion and deletion errors. However, these decoding alternatives remain prohibitively expensive for high-rate convolutional codes. To address this, we adapt sequential decoders to decode multiple received sequences jointly over the syndrome trellis. For the short blocklength regime, this decoding strategy can outperform separate-BCJR decoding under certain channel conditions, in addition to reducing decoding complexity. To mitigate the occurrence of a decoding timeout, formally called erasure, we also extend this approach to work bidirectionally, i.e., deploying two independent stack decoders that simultaneously operate in the forward and backward directions.<|reference_end|>
arxiv
@article{banerjee2024sequential, title={Sequential Decoding of Multiple Traces Over the Syndrome Trellis for Synchronization Errors}, author={Anisha Banerjee, Lorenz Welter, Alexandre Graell i Amat, Antonia Wachter-Zeh and Eirik Rosnes}, journal={arXiv preprint arXiv:2410.07120}, year={2024}, archivePrefix={arXiv}, eprint={2410.07120}, primaryClass={cs.IT math.IT} }
banerjee2024sequential
arxiv-667653
2410.07121
Transfer Learning for E-commerce Query Product Type Prediction
<|reference_start|>Transfer Learning for E-commerce Query Product Type Prediction: Getting a good understanding of customer intent is essential in e-commerce search engines. In particular, associating the correct product type to a search query plays a vital role in surfacing correct products to the customers. Query product type classification (Q2PT) is a particularly challenging task because search queries are short and ambiguous, and the number of existing product categories is extremely large, spanning thousands of values. Moreover, international marketplaces face additional challenges, such as language and dialect diversity and cultural differences, influencing the interpretation of the query. In this work we focus on Q2PT prediction in the global multilocale e-commerce markets. The common approach of training Q2PT models for each locale separately shows significant performance drops in low-resource stores. Moreover, this method does not allow for a smooth expansion to a new country, requiring to collect the data and train a new locale-specific Q2PT model from scratch. To tackle this, we propose to use transfer learning from the high-resource to the low-resource locales, to achieve global parity of Q2PT performance. We benchmark the per-locale Q2PT model against the unified one, which shares the training data and model structure across all worldwide stores. Additionally, we compare locale-aware and locale-agnostic Q2PT models, showing the task's dependency on country-specific traits. We conduct extensive quantitative and qualitative analysis of Q2PT models on a large-scale e-commerce dataset across 20 worldwide locales, which shows that the unified locale-aware Q2PT model has superior performance over the alternatives.<|reference_end|>
arxiv
@article{tigunova2024transfer, title={Transfer Learning for E-commerce Query Product Type Prediction}, author={Anna Tigunova, Thomas Ricatte, Ghadir Eraisha}, journal={arXiv preprint arXiv:2410.07121}, year={2024}, archivePrefix={arXiv}, eprint={2410.07121}, primaryClass={cs.IR cs.AI cs.LG} }
tigunova2024transfer
arxiv-667654
2410.07122
End-Cloud Collaboration Framework for Advanced AI Customer Service in E-commerce
<|reference_start|>End-Cloud Collaboration Framework for Advanced AI Customer Service in E-commerce: In recent years, the e-commerce industry has seen a rapid increase in the demand for advanced AI-driven customer service solutions. Traditional cloud-based models face limitations in terms of latency, personalized services, and privacy concerns. Furthermore, end devices often lack the computational resources to deploy large AI models effectively. In this paper, we propose an innovative End-Cloud Collaboration (ECC) framework for advanced AI customer service in e-commerce. This framework integrates the advantages of large cloud models and mid/small-sized end models by deeply exploring the generalization potential of cloud models and effectively utilizing the computing power resources of terminal chips, alleviating the strain on computing resources to some extent. Specifically, the large cloud model acts as a teacher, guiding and promoting the learning of the end model, which significantly reduces the end model's reliance on large-scale, high-quality data and thereby addresses the data bottleneck in traditional end model training, offering a new paradigm for the rapid deployment of industry applications. Additionally, we introduce an online evolutive learning strategy that enables the end model to continuously iterate and upgrade based on guidance from the cloud model and real-time user feedback. This strategy ensures that the model can flexibly adapt to the rapid changes in application scenarios while avoiding the uploading of sensitive information by performing local fine-tuning, achieving the dual goals of privacy protection and personalized service. To conclude, we implement in-depth corpus collection (e.g., data organization, cleaning, and preprocessing) and train an ECC-based industry-specific model for e-commerce customer service.<|reference_end|>
arxiv
@article{teng2024end-cloud, title={End-Cloud Collaboration Framework for Advanced AI Customer Service in E-commerce}, author={Liangyu Teng, Yang Liu, Jing Liu, Liang Song}, journal={arXiv preprint arXiv:2410.07122}, year={2024}, archivePrefix={arXiv}, eprint={2410.07122}, primaryClass={cs.DC cs.AI cs.CL cs.LG} }
teng2024end-cloud
arxiv-667655
2410.07123
Transforming disaster risk reduction with AI and big data: Legal and interdisciplinary perspectives
<|reference_start|>Transforming disaster risk reduction with AI and big data: Legal and interdisciplinary perspectives: Managing complex disaster risks requires interdisciplinary efforts. Breaking down silos between law, social sciences, and natural sciences is critical for all processes of disaster risk reduction. This enables adaptive systems for the rapid evolution of AI technology, which has significantly impacted the intersection of law and natural environments. Exploring how AI influences legal frameworks and environmental management, while also examining how legal and environmental considerations can confine AI within the socioeconomic domain, is essential. From a co-production review perspective, drawing on insights from lawyers, social scientists, and environmental scientists, principles for responsible data mining are proposed based on safety, transparency, fairness, accountability, and contestability. This discussion offers a blueprint for interdisciplinary collaboration to create adaptive law systems based on AI integration of knowledge from environmental and social sciences. Discrepancies in the use of language between environmental scientists and decision-makers in terms of usefulness and accuracy hamper how AI can be used based on the principles of legal considerations for a safe, trustworthy, and contestable disaster management framework. When social networks are useful for mitigating disaster risks based on AI, the legal implications related to privacy and liability of the outcomes of disaster management must be considered. Fair and accountable principles emphasise environmental considerations and foster socioeconomic discussions related to public engagement. AI also has an important role to play in education, bringing together the next generations of law, social sciences, and natural sciences to work on interdisciplinary solutions in harmony.<|reference_end|>
arxiv
@article{chun2024transforming, title={Transforming disaster risk reduction with AI and big data: Legal and interdisciplinary perspectives}, author={Kwok P Chun, Thanti Octavianti, Nilay Dogulu, Hristos Tyralis, Georgia Papacharalampous, Ryan Rowberry, Pingyu Fan, Mark Everard, Maria Francesch-Huidobro, Wellington Migliari, David M. Hannah, John Travis Marshall, Rafael Tolosana Calasanz, Chad Staddon, Ida Ansharyani, Bastien Dieppois, Todd R Lewis, Juli Ponce, Silvia Ibrean, Tiago Miguel Ferreira, Chinkie Peliño-Golle, Ye Mu, Manuel Delgado, Elizabeth Silvestre Espinoza, Martin Keulertz, Deepak Gopinath, Cheng Li}, journal={arXiv preprint arXiv:2410.07123}, year={2024}, archivePrefix={arXiv}, eprint={2410.07123}, primaryClass={cs.CY cs.LG} }
chun2024transforming
arxiv-667656
2410.07124
Cross-Task Pretraining for Cross-Organ Cross-Scanner Adenocarcinoma Segmentation
<|reference_start|>Cross-Task Pretraining for Cross-Organ Cross-Scanner Adenocarcinoma Segmentation: This short abstract describes a solution to the COSAS 2024 competition on Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation from histopathological image patches. The main challenge in the task of segmenting this type of cancer is a noticeable domain shift encountered when changing acquisition devices (microscopes) and also when tissue comes from different organs. The two tasks proposed in COSAS were to train on a dataset of images from three different organs and then predict segmentations on data from unseen organs (dataset T1), and to train on a dataset of images acquired on three different scanners and then segment images acquired with another unseen microscope (dataset T2). We attempted to bridge the domain shift gap by experimenting with three different strategies: standard training for each dataset, pretraining on dataset T1 and then fine-tuning on dataset T2 (and vice-versa, a strategy we call \textit{Cross-Task Pretraining}), and training on the combination of datasets T1 and T2. Our experiments showed that Cross-Task Pretraining is a more promising approach to domain generalization.<|reference_end|>
arxiv
@article{galdran2024cross-task, title={Cross-Task Pretraining for Cross-Organ Cross-Scanner Adenocarcinoma Segmentation}, author={Adrian Galdran}, journal={arXiv preprint arXiv:2410.07124}, year={2024}, archivePrefix={arXiv}, eprint={2410.07124}, primaryClass={cs.CV cs.AI} }
galdran2024cross-task
arxiv-667657
2410.07125
A Simplified Positional Cell Type Visualization using Spatially Aggregated Clusters
<|reference_start|>A Simplified Positional Cell Type Visualization using Spatially Aggregated Clusters: We introduce a novel method for overlaying cell type proportion data onto tissue images. This approach preserves spatial context while avoiding visual clutter or excessively obscuring the underlying slide. Our proposed technique involves clustering the data and aggregating neighboring points of the same cluster into polygons.<|reference_end|>
arxiv
@article{mason2024a, title={A Simplified Positional Cell Type Visualization using Spatially Aggregated Clusters}, author={Lee Mason and Jonas Almeida}, journal={arXiv preprint arXiv:2410.07125}, year={2024}, archivePrefix={arXiv}, eprint={2410.07125}, primaryClass={cs.CV cs.GR} }
mason2024a
arxiv-667658
2410.07126
2022 Flood Impact in Pakistan: Remote Sensing Assessment of Agricultural and Urban Damage
<|reference_start|>2022 Flood Impact in Pakistan: Remote Sensing Assessment of Agricultural and Urban Damage: Pakistan was hit by the world's deadliest flood in June 2022, causing agriculture and infrastructure damage across the country. Remote sensing technology offers a cost-effective and efficient method for flood impact assessment. This study aims to assess the impact of flooding on crops and built-up areas. Landsat 9 imagery, European Space Agency-Land Use/Land Cover (ESA-LULC) and Soil Moisture Active Passive (SMAP) data are used to identify and quantify the extent of flood-affected areas, crop damage, and built-up area destruction. The findings indicate that Sindh, a province in Pakistan, suffered the most. The flood destroyed most Kharif season crops, which are typically cultivated from March to November. Using the SMAP satellite data, it is assessed that the high soil moisture after the flood also caused a significant delay in the cultivation of Rabi crops. The findings of this study provide valuable information for decision-makers and stakeholders involved in flood risk management and disaster response.<|reference_end|>
arxiv
@article{younas20242022, title={2022 Flood Impact in Pakistan: Remote Sensing Assessment of Agricultural and Urban Damage}, author={Aqs Younas, Arbaz Khan, Hafiz Muhammad Abubakar, Zia Tahseen, Aqeel Arshad, Murtaza Taj, Usman Nazir}, journal={arXiv preprint arXiv:2410.07126}, year={2024}, archivePrefix={arXiv}, eprint={2410.07126}, primaryClass={cs.CY} }
younas20242022
arxiv-667659
2410.07127
Multi-body dynamic evolution sequence-assisted PSO for interval analysis
<|reference_start|>Multi-body dynamic evolution sequence-assisted PSO for interval analysis: When the exact probability distribution of input conditions cannot be obtained in practical engineering problems, interval analysis methods are often used to analyze the upper and lower bounds of output responses. Essentially, this can be regarded as an optimization problem, solvable by optimization algorithms. This paper proposes a novel interval analysis method, i.e., multi-body dynamic evolution sequence-assisted PSO (abbreviated as DES-PSO), which combines a dynamical evolutionary sequence with the heterogeneous comprehensive learning particle swarm optimization algorithm (HCLPSO). By introducing the dynamical evolutionary sequence instead of the random sequence, the proposed method addresses the difficulty HCLPSO faces in covering the search space, making it suitable for interval analysis problems. To verify the accuracy and efficiency of the proposed DES-PSO method, this paper solves two case studies using both the DES-PSO and HCLPSO methods. The first case study employs an optimization algorithm to determine the solution domain of a linear interval equation system, and the second case study analyzes the collision and heat conduction of a smartwatch using an optimization method. The results of the case studies demonstrate that DES-PSO can significantly improve the computational speed of interval analysis while ensuring accuracy, providing a new approach to solving complex interval analysis problems.<|reference_end|>
arxiv
@article{wu2024multi-body, title={Multi-body dynamic evolution sequence-assisted PSO for interval analysis}, author={Xuanlong Wu, Peng Zhong, Weihao Lin}, journal={arXiv preprint arXiv:2410.07127}, year={2024}, archivePrefix={arXiv}, eprint={2410.07127}, primaryClass={cs.NE} }
wu2024multi-body
arxiv-667660
2410.07128
Neural Differential Appearance Equations
<|reference_start|>Neural Differential Appearance Equations: We propose a method to reproduce dynamic appearance textures with space-stationary but time-varying visual statistics. While most previous work decomposes dynamic textures into static appearance and motion, we focus on dynamic appearance that results not from motion but variations of fundamental properties, such as rusting, decaying, melting, and weathering. To this end, we adopt the neural ordinary differential equation (ODE) to learn the underlying dynamics of appearance from a target exemplar. We simulate the ODE in two phases. In the "warm-up" phase, the ODE diffuses random noise to an initial state. We then constrain the further evolution of this ODE to replicate the evolution of visual feature statistics in the exemplar during the generation phase. The particular innovation of this work is the neural ODE achieving both denoising and evolution for dynamics synthesis, with a proposed temporal training scheme. We study both relightable (BRDF) and non-relightable (RGB) appearance models. For both we introduce new pilot datasets, allowing, for the first time, the study of such phenomena: for RGB, we provide 22 dynamic textures acquired from free online sources; for BRDFs, we further acquire a dataset of 21 flash-lit videos of time-varying materials, enabled by a simple-to-construct setup. Our experiments show that our method consistently yields realistic and coherent results, whereas prior works falter under pronounced temporal appearance variations. A user study confirms our approach is preferred to previous work for such exemplars.<|reference_end|>
arxiv
@article{liu2024neural, title={Neural Differential Appearance Equations}, author={Chen Liu, Tobias Ritschel}, journal={arXiv preprint arXiv:2410.07128}, year={2024}, archivePrefix={arXiv}, eprint={2410.07128}, primaryClass={cs.CV cs.GR cs.LG} }
liu2024neural
arxiv-667661
2410.07129
Mental Disorders Detection in the Era of Large Language Models
<|reference_start|>Mental Disorders Detection in the Era of Large Language Models: This paper compares the effectiveness of traditional machine learning methods, encoder-based models, and large language models (LLMs) on the task of detecting depression and anxiety. Five datasets were considered, each differing in format and the method used to define the target pathology class. We tested AutoML models based on linguistic features, several variations of encoder-based Transformers such as BERT, and state-of-the-art LLMs as pathology classification models. The results demonstrated that LLMs outperform traditional methods, particularly on noisy and small datasets where training examples vary significantly in text length and genre. However, psycholinguistic features and encoder-based models can achieve performance comparable to language models when trained on texts from individuals with clinically confirmed depression, highlighting their potential effectiveness in targeted clinical applications.<|reference_end|>
arxiv
@article{kuzmin2024mental, title={Mental Disorders Detection in the Era of Large Language Models}, author={Gleb Kuzmin, Petr Strepetov, Maksim Stankevich, Artem Shelmanov, Ivan Smirnov}, journal={arXiv preprint arXiv:2410.07129}, year={2024}, archivePrefix={arXiv}, eprint={2410.07129}, primaryClass={cs.CL cs.AI} }
kuzmin2024mental
arxiv-667662
2410.07130
Analysis of vessel traffic flow characteristics in inland restricted waterways using multi-source data
<|reference_start|>Analysis of vessel traffic flow characteristics in inland restricted waterways using multi-source data: To effectively manage vessel traffic and alleviate congestion on busy inland waterways, a comprehensive understanding of vessel traffic flow characteristics is crucial. However, limited data availability has resulted in minimal research on the traffic flow characteristics of inland waterway vessels. This study addresses this gap by conducting vessel-following experiments and fixed-point video monitoring in inland waterways, collecting multi-source data to analyze vessel traffic flow characteristics. First, the analysis of vessel speed distribution identifies the economic speed for vessels operating in these environments. Next, the relationship between microscopic vessel speed and gap distance is examined, with the logarithmic model emerging as the most accurate among various tested models. Additionally, the study explores the relationships among macroscopic speed, density, and flow rate, proposing a novel piecewise fundamental diagram model to describe these relationships. Lastly, the inland vessel traffic states are categorized using the K-means clustering algorithm and applied to vessel navigation services. These findings provide valuable insights for enhancing inland waterway transportation and advancing the development of an integrated waterway transportation system.<|reference_end|>
arxiv
@article{yang2024analysis, title={Analysis of vessel traffic flow characteristics in inland restricted waterways using multi-source data}, author={Wenzhang Yang, Peng Liao, Shangkun Jiang, Hao Wang}, journal={arXiv preprint arXiv:2410.07130}, year={2024}, archivePrefix={arXiv}, eprint={2410.07130}, primaryClass={cs.CE stat.AP} }
yang2024analysis
arxiv-667663
2410.07131
Stochastic Process Turing Machines
<|reference_start|>Stochastic Process Turing Machines: Computer science theory provides many different measures of complexity of a system including Kolmogorov complexity, logical depth, computational depth, and Levin complexity. However, these measures are all defined only for deterministic Turing machines, i.e., deterministic dynamics of the underlying generative process whose output we are interested in. Therefore, by construction they cannot capture the complexity of the output of stochastic processes - like those in the real world. Motivated by this observation, we combine probabilistic Turing machines with a prior over the inputs to the Turing machine to define a complete stochastic process of Turing machines. We call this a stochastic process Turing machine. Stochastic process Turing machines allow us to formalize a stochastic generalization of logical depth called stochastic depth, and also to apply stochastic thermodynamics to the analysis of Turing machines. Stochastic process Turing machines and stochastic depth allow us to study complex, stochastic systems like the human brain, societies, and evolution all from within the framework of formal computation.<|reference_end|>
arxiv
@article{wolpert2024stochastic, title={Stochastic Process Turing Machines}, author={David Wolpert, Jordan Scharnhorst}, journal={arXiv preprint arXiv:2410.07131}, year={2024}, archivePrefix={arXiv}, eprint={2410.07131}, primaryClass={cs.CC cs.FL cs.LO} }
wolpert2024stochastic
arxiv-667664
2410.07132
Evaluation of waterway lock service quality in Yangtze Delta: from the perspectives of customer and supplier
<|reference_start|>Evaluation of waterway lock service quality in Yangtze Delta: from the perspectives of customer and supplier: In recent decades, the waterway locks in the Yangtze Delta, China, have become major traffic bottlenecks. To gain a comprehensive understanding of the crew's perspectives and primary concerns regarding lock services during vessel lockage, and to enhance customer satisfaction and improve vessel lockage efficiency, it is necessary to assess the waterway lock service quality (WLSQ). This paper presents an evaluation system for WLSQ from various stakeholders' viewpoints. Firstly, by employing questionnaire surveys and the structural equation model method, in conjunction with factor analysis, the WLSQ and its influencing factors in the Yangtze River Delta region are analyzed from a customer perspective. Secondly, the Analytic Hierarchy Process method is utilized, along with a dedicated questionnaire for service suppliers, to examine their concerns regarding the performance of vessel lock services. The findings indicate that there exists a cognitive bias towards factors influencing the WLSQ. Crew members express the greatest concern over vessel lockage delays, whereas vessel lockage safety is the primary concern for management department administrators. Furthermore, enhancing the supporting facilities of waterway locks can significantly increase crew members' satisfaction during vessel lockage. Improving staff skills, and safety conditions can also greatly enhance customers' tolerance for lockage delays. The results of this study will provide valuable insights for the lock management department, operators, and the government in formulating relevant policies to improve WLSQ and implementing ongoing service quality evaluations.<|reference_end|>
arxiv
@article{yang2024evaluation, title={Evaluation of waterway lock service quality in Yangtze Delta: from the perspectives of customer and supplier}, author={Wenzhang Yang, Peng Liao, Shangkun Jiang, Hao Wang}, journal={arXiv preprint arXiv:2410.07132}, year={2024}, archivePrefix={arXiv}, eprint={2410.07132}, primaryClass={cs.CY} }
yang2024evaluation
arxiv-667665
2410.07133
EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models
<|reference_start|>EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models: Recent advancements in generation models have showcased remarkable capabilities in generating fantastic content. However, most of them are trained on proprietary high-quality data, and some models withhold their parameters and only provide accessible application programming interfaces (APIs), limiting their benefits for downstream tasks. To explore the feasibility of training a text-to-image generation model comparable to advanced models using publicly available resources, we introduce EvolveDirector. This framework interacts with advanced models through their public APIs to obtain text-image data pairs to train a base model. Our experiments with extensive data indicate that the model trained on generated data of the advanced model can approximate its generation capability. However, it requires large-scale samples of 10 million or more. This incurs significant expenses in time, computational resources, and especially the costs associated with calling fee-based APIs. To address this problem, we leverage pre-trained large vision-language models (VLMs) to guide the evolution of the base model. VLM continuously evaluates the base model during training and dynamically updates and refines the training dataset by the discrimination, expansion, deletion, and mutation operations. Experimental results show that this paradigm significantly reduces the required data volume. Furthermore, when approaching multiple advanced models, EvolveDirector can select the best samples generated by them to learn powerful and balanced abilities. The final trained model Edgen is demonstrated to outperform these advanced models. The code and model weights are available at https://github.com/showlab/EvolveDirector.<|reference_end|>
arxiv
@article{zhao2024evolvedirector:, title={EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models}, author={Rui Zhao, Hangjie Yuan, Yujie Wei, Shiwei Zhang, Yuchao Gu, Lingmin Ran, Xiang Wang, Zhangjie Wu, Junhao Zhang, Yingya Zhang, Mike Zheng Shou}, journal={arXiv preprint arXiv:2410.07133}, year={2024}, archivePrefix={arXiv}, eprint={2410.07133}, primaryClass={cs.CV} }
zhao2024evolvedirector:
arxiv-667666
2410.07135
Causal Inference with Double/Debiased Machine Learning for Evaluating the Health Effects of Multiple Mismeasured Pollutants
<|reference_start|>Causal Inference with Double/Debiased Machine Learning for Evaluating the Health Effects of Multiple Mismeasured Pollutants: One way to quantify exposure to air pollution and its constituents in epidemiologic studies is to use an individual's nearest monitor. This strategy results in potential inaccuracy in the actual personal exposure, introducing bias in estimating the health effects of air pollution and its constituents, especially when evaluating the causal effects of correlated multi-pollutant constituents measured with correlated error. This paper addresses estimation and inference for the causal effect of one constituent in the presence of other PM2.5 constituents, accounting for measurement error and correlations. We used a linear regression calibration model, fitted with generalized estimating equations in an external validation study, and extended a double/debiased machine learning (DML) approach to correct for measurement error and estimate the effect of interest in the main study. We demonstrated that the DML estimator with regression calibration is consistent and derived its asymptotic variance. Simulations showed that the proposed estimator reduced bias and attained nominal coverage probability across most simulation settings. We applied this method to assess the causal effects of PM2.5 constituents on cognitive function in the Nurses' Health Study and identified two PM2.5 constituents, Br and Mn, that showed a negative causal effect on cognitive function after measurement error correction.<|reference_end|>
arxiv
@article{xu2024causal, title={Causal Inference with Double/Debiased Machine Learning for Evaluating the Health Effects of Multiple Mismeasured Pollutants}, author={Gang Xu, Xin Zhou, Molin Wang, Boya Zhang, Wenhao Jiang, Francine Laden, Helen H. Suh, Adam A. Szpiro, Donna Spiegelman, Zuoheng Wang}, journal={arXiv preprint arXiv:2410.07135}, year={2024}, archivePrefix={arXiv}, eprint={2410.07135}, primaryClass={stat.AP cs.LG stat.ML} }
xu2024causal
arxiv-667667
2410.07137
Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates
<|reference_start|>Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates: Automatic LLM benchmarks, such as AlpacaEval 2.0, Arena-Hard-Auto, and MT-Bench, have become popular for evaluating language models due to their cost-effectiveness and scalability compared to human evaluation. Achieving high win rates on these benchmarks can significantly boost the promotional impact of newly released language models. This promotional benefit may motivate tricks, such as manipulating model output length or style to game win rates, even though several mechanisms have been developed to control length and disentangle style to reduce gameability. Nonetheless, we show that even a "null model" that always outputs a constant response (irrelevant to input instructions) can cheat automatic benchmarks and achieve top-ranked win rates: an 86.5% LC win rate on AlpacaEval 2.0; an 83.0 score on Arena-Hard-Auto; and a 9.55 score on MT-Bench. Moreover, the crafted cheating outputs are transferable because we assume that the instructions of these benchmarks (e.g., 805 samples of AlpacaEval 2.0) are private and cannot be accessed. While our experiments are primarily proof-of-concept, an adversary could use LLMs to generate more imperceptible cheating responses, unethically benefiting from high win rates and promotional impact. Our findings call for the development of anti-cheating mechanisms for reliable automatic benchmarks. The code is available at https://github.com/sail-sg/Cheating-LLM-Benchmarks.<|reference_end|>
arxiv
@article{zheng2024cheating, title={Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates}, author={Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Jing Jiang, Min Lin}, journal={arXiv preprint arXiv:2410.07137}, year={2024}, archivePrefix={arXiv}, eprint={2410.07137}, primaryClass={cs.CL cs.AI cs.CR cs.LG} }
zheng2024cheating
arxiv-667668
2410.07138
Diagnosis and Pathogenic Analysis of Autism Spectrum Disorder Using Fused Brain Connection Graph
<|reference_start|>Diagnosis and Pathogenic Analysis of Autism Spectrum Disorder Using Fused Brain Connection Graph: We propose a model for diagnosing Autism spectrum disorder (ASD) using multimodal magnetic resonance imaging (MRI) data. Our approach integrates brain connectivity data from diffusion tensor imaging (DTI) and functional MRI (fMRI), employing graph neural networks (GNNs) for fused graph classification. To improve diagnostic accuracy, we introduce a loss function that maximizes inter-class and minimizes intra-class margins. We also analyze network node centrality, calculating degree, subgraph, and eigenvector centralities on a bimodal fused brain graph to identify pathological regions linked to ASD. Two non-parametric tests assess the statistical significance of these centralities between ASD patients and healthy controls. Our results reveal consistency between the tests, yet the identified regions differ significantly across centralities, suggesting distinct physiological interpretations. These findings enhance our understanding of ASD's neurobiological basis and offer new directions for clinical diagnosis.<|reference_end|>
arxiv
@article{wei2024diagnosis, title={Diagnosis and Pathogenic Analysis of Autism Spectrum Disorder Using Fused Brain Connection Graph}, author={Lu Wei, Yi Huang, Guosheng Yin, Fode Zhang, Manxue Zhang, Bin Liu}, journal={arXiv preprint arXiv:2410.07138}, year={2024}, archivePrefix={arXiv}, eprint={2410.07138}, primaryClass={q-bio.NC cs.LG stat.AP} }
wei2024diagnosis
arxiv-667669
2410.07139
Advancing Global South University Education with Large Language Models
<|reference_start|>Advancing Global South University Education with Large Language Models: In recent years, it has been observed that the center of gravity for the volume of higher education has shifted to the Global South. However, research indicates a widening disparity in the quality of higher education between the Global South and the Global North. Although investments in higher education within the Global South have increased, the rapid surge in student numbers has resulted in a decline in public expenditure per student. For instance, the student-to-teacher ratio in the Global South is significantly higher than in the Global North, which poses a substantial barrier to the implementation of creative education. In response, Telkom University in Indonesia has embarked on an experiment to enhance the quality of learning and teaching by integrating large language models (LLMs) such as ChatGPT into five of its courses: Mathematics, English, Computing, Computer Systems, and Creative Media. This article elucidates the ongoing experimental plan and explores how the integration of LLMs could contribute to addressing the challenges currently faced by higher education in the Global South.<|reference_end|>
arxiv
@article{l2024advancing, title={Advancing Global South University Education with Large Language Models}, author={Kemas Muslim L, Toru Ishida, Aditya Firman Ihsan, Rikman Aherliwan Rudawan}, journal={arXiv preprint arXiv:2410.07139}, year={2024}, archivePrefix={arXiv}, eprint={2410.07139}, primaryClass={cs.CY} }
l2024advancing
arxiv-667670
2410.07140
DSparsE: Dynamic Sparse Embedding for Knowledge Graph Completion
<|reference_start|>DSparsE: Dynamic Sparse Embedding for Knowledge Graph Completion: Addressing the incompleteness problem in knowledge graphs remains a significant challenge. Current knowledge graph completion methods have their limitations. For example, ComDensE is prone to overfitting and degrades as network depth increases, while InteractE is limited in feature interaction and interpretability. To this end, we propose a new method called dynamic sparse embedding (DSparsE) for knowledge graph completion. The proposed model embeds the input entity-relation pairs by a shallow encoder composed of a dynamic layer and a relation-aware layer. Subsequently, the concatenated output of the dynamic layer and relation-aware layer is passed through a projection layer and a deep decoder with a residual connection structure. This model ensures network robustness and maintains the capability of feature extraction. Furthermore, the conventional dense layers are replaced by randomly initialized sparse connection layers in the proposed method, which can mitigate model overfitting. Finally, comprehensive experiments are conducted on the datasets of FB15k-237, WN18RR and YAGO3-10. It was demonstrated that the proposed method achieves state-of-the-art performance in terms of Hits@1 compared to the existing baseline approaches. An ablation study is performed to examine the effects of the dynamic layer and relation-aware layer, where the combined model achieves the best performance.<|reference_end|>
arxiv
@article{yang2024dsparse:, title={DSparsE: Dynamic Sparse Embedding for Knowledge Graph Completion}, author={Chuhong Yang, Bin Li, Nan Wu}, journal={arXiv preprint arXiv:2410.07140}, year={2024}, archivePrefix={arXiv}, eprint={2410.07140}, primaryClass={cs.IR cs.DB cs.GR} }
yang2024dsparse:
arxiv-667671
2410.07142
Graph Network Surrogate Model for Optimizing the Placement of Horizontal Injection Wells for CO2 Storage
<|reference_start|>Graph Network Surrogate Model for Optimizing the Placement of Horizontal Injection Wells for CO2 Storage: Optimizing the locations of multiple CO2 injection wells will be essential as we proceed from demonstration-scale to large-scale carbon storage operations. Well placement optimization is, however, a computationally intensive task because the flow responses associated with many potential configurations must be evaluated. There is thus a need for efficient surrogate models for this application. In this work we develop and apply a graph network surrogate model (GNSM) to predict the global pressure and CO2 saturation fields in 3D geological models for arbitrary configurations of four horizontal wells. The GNSM uses an encoding-processing-decoding framework where the problem is represented in terms of computational graphs. Separate networks are applied for pressure and saturation predictions, and a multilayer perceptron is used to provide bottom-hole pressure (BHP) for each well at each time step. The GNSM is shown to achieve median relative errors of 4\% for pressure and 6\% for saturation over a test set involving very different plume shapes and dynamics. Speedup is about a factor of $120\times$ relative to high-fidelity simulation. The GNSM is applied for optimization using a differential evolution algorithm, where the goal is to minimize the CO2 footprint subject to constraints on the well configuration, plume location and well BHPs. Optimization results using the GNSM are shown to be comparable to those achieved using (much more expensive) high-fidelity simulation.<|reference_end|>
arxiv
@article{tang2024graph, title={Graph Network Surrogate Model for Optimizing the Placement of Horizontal Injection Wells for CO2 Storage}, author={Haoyu Tang, Louis J. Durlofsky}, journal={arXiv preprint arXiv:2410.07142}, year={2024}, archivePrefix={arXiv}, eprint={2410.07142}, primaryClass={cs.CE} }
tang2024graph
arxiv-667672
2410.07143
SARF: Enhancing Stock Market Prediction with Sentiment-Augmented Random Forest
<|reference_start|>SARF: Enhancing Stock Market Prediction with Sentiment-Augmented Random Forest: Stock trend forecasting, a challenging problem in the financial domain, involves extensive data and related indicators. Relying solely on empirical analysis often yields unsustainable and ineffective results. Machine learning researchers have demonstrated that the application of the random forest algorithm can enhance predictions in this context, playing a crucial auxiliary role in forecasting stock trends. This study introduces a new approach to stock market prediction by integrating sentiment analysis using the FinGPT generative AI model with the traditional Random Forest model. The proposed technique aims to optimize the accuracy of stock price forecasts by leveraging the nuanced understanding of financial sentiments provided by FinGPT. We present a new methodology called "Sentiment-Augmented Random Forest" (SARF), which incorporates sentiment features into the Random Forest framework. Our experiments demonstrate that SARF outperforms conventional Random Forest and LSTM models with an average accuracy improvement of 9.23% and lower prediction errors in predicting stock market movements.<|reference_end|>
arxiv
@article{talazadeh2024sarf:, title={SARF: Enhancing Stock Market Prediction with Sentiment-Augmented Random Forest}, author={Saber Talazadeh, Dragan Perakovic}, journal={arXiv preprint arXiv:2410.07143}, year={2024}, archivePrefix={arXiv}, eprint={2410.07143}, primaryClass={q-fin.ST cs.LG} }
talazadeh2024sarf:
arxiv-667673
2410.07144
Natural Language Query Engine for Relational Databases using Generative AI
<|reference_start|>Natural Language Query Engine for Relational Databases using Generative AI: The growing reliance on data-driven decision-making highlights the need for more intuitive ways to access and analyze information stored in relational databases. However, the requirement of SQL knowledge has long been a significant barrier for non-technical users. This article introduces an innovative solution that leverages Generative AI to bridge this gap, enabling users to query databases using natural language. Our approach automatically translates natural language queries into SQL, ensuring both syntactic and semantic correctness, while also generating clear, natural language responses from the retrieved data. By streamlining the interaction between users and databases, this method empowers individuals without technical expertise to engage with data directly and efficiently, democratizing access to valuable insights and enhancing productivity.<|reference_end|>
arxiv
@article{fotso2024natural, title={Natural Language Query Engine for Relational Databases using Generative AI}, author={Steve Tueno Fotso}, journal={arXiv preprint arXiv:2410.07144}, year={2024}, archivePrefix={arXiv}, eprint={2410.07144}, primaryClass={cs.DB cs.AI cs.LG cs.SE} }
fotso2024natural
arxiv-667674
2410.07145
Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling
<|reference_start|>Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling: One essential advantage of recurrent neural networks (RNNs) over transformer-based language models is their linear computational complexity concerning the sequence length, which makes them much faster in handling long sequences during inference. However, most publicly available RNNs (e.g., Mamba and RWKV) are trained on sequences with less than 10K tokens, and their effectiveness in longer contexts remains largely unsatisfying so far. In this paper, we study the cause of the inability to process long context for RNNs and suggest critical mitigations. We examine two practical concerns when applying state-of-the-art RNNs to long contexts: (1) the inability to extrapolate to inputs longer than the training length and (2) the upper bound of memory capacity. Addressing the first concern, we first investigate *state collapse* (SC), a phenomenon that causes severe performance degradation on sequence lengths not encountered during training. With controlled experiments, we attribute this to overfitting due to the recurrent state being overparameterized for the training length. For the second concern, we train a series of Mamba-2 models on long documents to empirically estimate the recurrent state capacity in language modeling and passkey retrieval. Then, three SC mitigation methods are proposed to improve Mamba-2's length generalizability, allowing the model to process more than 1M tokens without SC. We also find that the recurrent state capacity in passkey retrieval scales exponentially to the state size, and we empirically train a Mamba-2 370M with near-perfect passkey retrieval accuracy on 256K context length. This suggests a promising future for RNN-based long-context modeling.<|reference_end|>
arxiv
@article{chen2024stuffed, title={Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling}, author={Yingfa Chen, Xinrong Zhang, Shengding Hu, Xu Han, Zhiyuan Liu, Maosong Sun}, journal={arXiv preprint arXiv:2410.07145}, year={2024}, archivePrefix={arXiv}, eprint={2410.07145}, primaryClass={cs.CL cs.AI cs.LG} }
chen2024stuffed
arxiv-667675
2410.07146
Estimation and Confidence Intervals for Mutual Information: Issues in Convergence for Non-Normal Distributions
<|reference_start|>Estimation and Confidence Intervals for Mutual Information: Issues in Convergence for Non-Normal Distributions: By employing various empirical estimators for the Mutual Information (MI) measure, we calculate and compare the estimates and their confidence intervals for both normal and non-normal bivariate data samples. We find that certain nonlinear invertible transformations of the random variables can significantly affect both the estimated MI value and the precision and asymptotic behavior of its confidence intervals. Generally, for non-normal samples, the confidence intervals are larger than those for normal samples, and the convergence of the confidence intervals is slower even as the data sample size increases. In some cases, due to strong biases, the estimated confidence interval may not contain the true value at all. We discuss various strategies to improve the precision of the estimated Mutual Information.<|reference_end|>
arxiv
@article{grigorenko2024estimation, title={Estimation and Confidence Intervals for Mutual Information: Issues in Convergence for Non-Normal Distributions}, author={Theo Grigorenko and Leo Grigorenko}, journal={arXiv preprint arXiv:2410.07146}, year={2024}, archivePrefix={arXiv}, eprint={2410.07146}, primaryClass={cs.IT math.IT} }
grigorenko2024estimation
arxiv-667676
2410.07147
Taking a turn for the better: Conversation redirection throughout the course of mental-health therapy
<|reference_start|>Taking a turn for the better: Conversation redirection throughout the course of mental-health therapy: Mental-health therapy involves a complex conversation flow in which patients and therapists continuously negotiate what should be talked about next. For example, therapists might try to shift the conversation's direction to keep the therapeutic process on track and avoid stagnation, or patients might push the discussion towards issues they want to focus on. How do such patient and therapist redirections relate to the development and quality of their relationship? To answer this question, we introduce a probabilistic measure of the extent to which a certain utterance immediately redirects the flow of the conversation, accounting for both the intention and the actual realization of such a change. We apply this new measure to characterize the development of patient-therapist relationships over multiple sessions in a very large, widely-used online therapy platform. Our analysis reveals that (1) patient control of the conversation's direction generally increases relative to that of the therapist as their relationship progresses; and (2) patients who have less control in the first few sessions are significantly more likely to eventually express dissatisfaction with their therapist and terminate the relationship.<|reference_end|>
arxiv
@article{nguyen2024taking, title={Taking a turn for the better: Conversation redirection throughout the course of mental-health therapy}, author={Vivian Nguyen, Sang Min Jung, Lillian Lee, Thomas D. Hull, Cristian Danescu-Niculescu-Mizil}, journal={arXiv preprint arXiv:2410.07147}, year={2024}, archivePrefix={arXiv}, eprint={2410.07147}, primaryClass={cs.CL cs.AI cs.CY} }
nguyen2024taking
arxiv-667677
2410.07148
Lateral Ventricle Shape Modeling using Peripheral Area Projection for Longitudinal Analysis
<|reference_start|>Lateral Ventricle Shape Modeling using Peripheral Area Projection for Longitudinal Analysis: The deformation of the lateral ventricle (LV) shape is widely studied to identify specific morphometric changes associated with diseases. Since LV enlargement is considered a relative change due to brain atrophy, local longitudinal LV deformation can indicate deformation in adjacent brain areas. However, conventional methods for LV shape analysis focus on modeling the solely segmented LV mask. In this work, we propose a novel deep learning-based approach using peripheral area projection, which is the first attempt to analyze LV considering its surrounding areas. Our approach matches the baseline LV mesh by deforming the shape of follow-up LVs, while optimizing the corresponding points of the same adjacent brain area between the baseline and follow-up LVs. Furthermore, we quantitatively evaluated the deformation of the left LV in normal (n=10) and demented subjects (n=10), and we found that each surrounding area (thalamus, caudate, hippocampus, amygdala, and right LV) projected onto the surface of LV shows noticeable differences between normal and demented subjects.<|reference_end|>
arxiv
@article{park2024lateral, title={Lateral Ventricle Shape Modeling using Peripheral Area Projection for Longitudinal Analysis}, author={Wonjung Park, Suhyun Ahn, Jinah Park}, journal={arXiv preprint arXiv:2410.07148}, year={2024}, archivePrefix={arXiv}, eprint={2410.07148}, primaryClass={eess.IV cs.CV cs.GR} }
park2024lateral
arxiv-667678
2410.07149
Towards Interpreting Visual Information Processing in Vision-Language Models
<|reference_start|>Towards Interpreting Visual Information Processing in Vision-Language Models: Vision-Language Models (VLMs) are powerful tools for processing and understanding text and images. We study the processing of visual tokens in the language model component of LLaVA, a prominent VLM. Our approach focuses on analyzing the localization of object information, the evolution of visual token representations across layers, and the mechanism of integrating visual information for predictions. Through ablation studies, we demonstrated that object identification accuracy drops by over 70\% when object-specific tokens are removed. We observed that visual token representations become increasingly interpretable in the vocabulary space across layers, suggesting an alignment with textual tokens corresponding to image content. Finally, we found that the model extracts object information from these refined representations at the last token position for prediction, mirroring the process in text-only language models for factual association tasks. These findings provide crucial insights into how VLMs process and integrate visual information, bridging the gap between our understanding of language and vision models, and paving the way for more interpretable and controllable multimodal systems.<|reference_end|>
arxiv
@article{neo2024towards, title={Towards Interpreting Visual Information Processing in Vision-Language Models}, author={Clement Neo, Luke Ong, Philip Torr, Mor Geva, David Krueger, Fazl Barez}, journal={arXiv preprint arXiv:2410.07149}, year={2024}, archivePrefix={arXiv}, eprint={2410.07149}, primaryClass={cs.CV cs.LG} }
neo2024towards
arxiv-667679
2410.07150
Graph Network Models To Detect Illicit Transactions In Block Chain
<|reference_start|>Graph Network Models To Detect Illicit Transactions In Block Chain: The use of cryptocurrencies has led to an increase in illicit activities such as money laundering, with traditional rule-based approaches becoming less effective in detecting and preventing such activities. In this paper, we propose a novel approach to tackling this problem by applying graph attention networks with a residual network-like architecture (GAT-ResNet) to detect illicit transactions related to anti-money laundering/combating the financing of terrorism (AML/CFT) in blockchains. We train various models on the Elliptic Bitcoin Transaction dataset, implementing logistic regression, Random Forest, XGBoost, GCN, GAT, and our proposed GAT-ResNet model. Our results demonstrate that the GAT-ResNet model has the potential to outperform the existing graph network models in terms of accuracy, reliability and scalability. Our research sheds light on the potential of graph-related machine learning models to improve efforts to combat financial crime and lays the foundation for further research in this area.<|reference_end|>
arxiv
@article{adloori2024graph, title={Graph Network Models To Detect Illicit Transactions In Block Chain}, author={Hrushyang Adloori, Vaishnavi Dasanapu, Abhijith Chandra Mergu}, journal={arXiv preprint arXiv:2410.07150}, year={2024}, archivePrefix={arXiv}, eprint={2410.07150}, primaryClass={cs.LG cs.AI cs.NE} }
adloori2024graph
arxiv-667680
2410.07151
FaceVid-1K: A Large-Scale High-Quality Multiracial Human Face Video Dataset
<|reference_start|>FaceVid-1K: A Large-Scale High-Quality Multiracial Human Face Video Dataset: Generating talking face videos from various conditions has recently become a highly popular research area within generative tasks. However, building a high-quality face video generation model requires a well-performing pre-trained backbone, a key obstacle that universal models fail to adequately address. Most existing works rely on universal video or image generation models and optimize control mechanisms, but they neglect the evident upper bound in video quality due to the limited capabilities of the backbones, which is a result of the lack of high-quality human face video datasets. In this work, we investigate the unsatisfactory results from related studies, gather and trim existing public talking face video datasets, and additionally collect and annotate a large-scale dataset, resulting in a comprehensive, high-quality multiracial face collection named \textbf{FaceVid-1K}. Using this dataset, we craft several effective pre-trained backbone models for face video generation. Specifically, we conduct experiments with several well-established video generation models, including text-to-video, image-to-video, and unconditional video generation, under various settings. We obtain the corresponding performance benchmarks and compared them with those trained on public datasets to demonstrate the superiority of our dataset. These experiments also allow us to investigate empirical strategies for crafting domain-specific video generation tasks with cost-effective settings. We will make our curated dataset, along with the pre-trained talking face video generation models, publicly available as a resource contribution to hopefully advance the research field.<|reference_end|>
arxiv
@article{di2024facevid-1k:, title={FaceVid-1K: A Large-Scale High-Quality Multiracial Human Face Video Dataset}, author={Donglin Di, He Feng, Wenzhang Sun, Yongjia Ma, Hao Li, Wei Chen, Xiaofei Gou, Tonghua Su, Xun Yang}, journal={arXiv preprint arXiv:2410.07151}, year={2024}, archivePrefix={arXiv}, eprint={2410.07151}, primaryClass={cs.CV} }
di2024facevid-1k:
arxiv-667681
2410.07153
CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition
<|reference_start|>CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition: Skeleton-based multi-entity action recognition is a challenging task aiming to identify interactive actions or group activities involving multiple diverse entities. Existing models for individuals often fall short in this task due to the inherent distribution discrepancies among entity skeletons, leading to suboptimal backbone optimization. To this end, we introduce a Convex Hull Adaptive Shift based multi-Entity action recognition method (CHASE), which mitigates inter-entity distribution gaps and unbiases subsequent backbones. Specifically, CHASE comprises a learnable parameterized network and an auxiliary objective. The parameterized network achieves plausible, sample-adaptive repositioning of skeleton sequences through two key components. First, the Implicit Convex Hull Constrained Adaptive Shift ensures that the new origin of the coordinate system is within the skeleton convex hull. Second, the Coefficient Learning Block provides a lightweight parameterization of the mapping from skeleton sequences to their specific coefficients in convex combinations. Moreover, to guide the optimization of this network for discrepancy minimization, we propose the Mini-batch Pair-wise Maximum Mean Discrepancy as the additional objective. CHASE operates as a sample-adaptive normalization method to mitigate inter-entity distribution discrepancies, thereby reducing data bias and improving the subsequent classifier's multi-entity action recognition performance. Extensive experiments on six datasets, including NTU Mutual 11/26, H2O, Assembly101, Collective Activity and Volleyball, consistently verify our approach by seamlessly adapting to single-entity backbones and boosting their performance in multi-entity scenarios. Our code is publicly available at https://github.com/Necolizer/CHASE .<|reference_end|>
arxiv
@article{wen2024chase:, title={CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition}, author={Yuhang Wen, Mengyuan Liu, Songtao Wu, Beichen Ding}, journal={arXiv preprint arXiv:2410.07153}, year={2024}, archivePrefix={arXiv}, eprint={2410.07153}, primaryClass={cs.CV cs.LG} }
wen2024chase:
arxiv-667682
2410.07154
The Transparent Relations Ontology (TRO): a vocabulary to describe conflicts of interest
<|reference_start|>The Transparent Relations Ontology (TRO): a vocabulary to describe conflicts of interest: The Transparent Relations Ontology (TRO) offers a vocabulary to publish data about relations between powerful parties that should be more transparent, in order to detect possible conflicts of interest. TRO is based on minimal modelling, reusing common vocabularies to offer a simple yet useful resource to publish interoperable data about pointers to relations that might result in corruption cases. Additionally, best practices have been followed in order to sustain a technically rigorous ontology development process. A usage example with real data is mentioned, integrating information from Basque Government's Open Data services and a news outlet. Building upon its foundational design, future enhancements of TRO could significantly amplify its utility in uncovering and scrutinizing opaque relationships that may lead to corruption.<|reference_end|>
arxiv
@article{aranguren2024the, title={The Transparent Relations Ontology (TRO): a vocabulary to describe conflicts of interest}, author={Mikel Egaña Aranguren}, journal={arXiv preprint arXiv:2410.07154}, year={2024}, archivePrefix={arXiv}, eprint={2410.07154}, primaryClass={cs.DL cs.LO} }
aranguren2024the
arxiv-667683
2410.07155
Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis
<|reference_start|>Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis: Recent advances in diffusion models have demonstrated exceptional capabilities in image and video generation, further improving the effectiveness of 4D synthesis. Existing 4D generation methods can generate high-quality 4D objects or scenes based on user-friendly conditions, benefiting the gaming and video industries. However, these methods struggle to synthesize significant object deformation of complex 4D transitions and interactions within scenes. To address this challenge, we propose Trans4D, a novel text-to-4D synthesis framework that enables realistic complex scene transitions. Specifically, we first use multi-modal large language models (MLLMs) to produce a physics-aware scene description for 4D scene initialization and effective transition timing planning. Then we propose a geometry-aware 4D transition network to realize a complex scene-level 4D transition based on the plan, which involves expressive geometrical object deformation. Extensive experiments demonstrate that Trans4D consistently outperforms existing state-of-the-art methods in generating 4D scenes with accurate and high-quality transitions, validating its effectiveness. Code: https://github.com/YangLing0818/Trans4D<|reference_end|>
arxiv
@article{zeng2024trans4d:, title={Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis}, author={Bohan Zeng, Ling Yang, Siyu Li, Jiaming Liu, Zixiang Zhang, Juanxi Tian, Kaixin Zhu, Yongzhen Guo, Fu-Yun Wang, Minkai Xu, Stefano Ermon, Wentao Zhang}, journal={arXiv preprint arXiv:2410.07155}, year={2024}, archivePrefix={arXiv}, eprint={2410.07155}, primaryClass={cs.CV} }
zeng2024trans4d:
arxiv-667684
2410.07157
InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
<|reference_start|>InstructG2I: Synthesizing Images from Multimodal Attributed Graphs: In this paper, we approach an overlooked yet critical task Graph2Image: generating images from multimodal attributed graphs (MMAGs). This task poses significant challenges due to the explosion in graph size, dependencies among graph entities, and the need for controllability in graph conditions. To address these challenges, we propose a graph context-conditioned diffusion model called InstructG2I. InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling by combining personalized page rank and re-ranking based on vision-language features. Then, a Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process of diffusion. Finally, we propose graph classifier-free guidance, enabling controllable generation by varying the strength of graph guidance and multiple connected edges to a node. Extensive experiments conducted on three datasets from different domains demonstrate the effectiveness and controllability of our approach. The code is available at https://github.com/PeterGriffinJin/InstructG2I.<|reference_end|>
arxiv
@article{jin2024instructg2i:, title={InstructG2I: Synthesizing Images from Multimodal Attributed Graphs}, author={Bowen Jin, Ziqi Pang, Bingjun Guo, Yu-Xiong Wang, Jiaxuan You, Jiawei Han}, journal={NeurIPS 2024}, year={2024}, archivePrefix={arXiv}, eprint={2410.07157}, primaryClass={cs.AI cs.CL cs.CV cs.LG cs.SI} }
jin2024instructg2i:
arxiv-667685
2410.07158
Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond
<|reference_start|>Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond: In recent years, training data attribution (TDA) methods have emerged as a promising direction for the interpretability of neural networks. While research around TDA is thriving, limited effort has been dedicated to the evaluation of attributions. Similar to the development of evaluation metrics for traditional feature attribution approaches, several standalone metrics have been proposed to evaluate the quality of TDA methods across various contexts. However, the lack of a unified framework that allows for systematic comparison limits trust in TDA methods and stunts their widespread adoption. To address this research gap, we introduce Quanda, a Python toolkit designed to facilitate the evaluation of TDA methods. Beyond offering a comprehensive set of evaluation metrics, Quanda provides a uniform interface for seamless integration with existing TDA implementations across different repositories, thus enabling systematic benchmarking. The toolkit is user-friendly, thoroughly tested, well-documented, and available as an open-source library on PyPi and under https://github.com/dilyabareeva/quanda.<|reference_end|>
arxiv
@article{bareeva2024quanda:, title={Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond}, author={Dilyara Bareeva, Galip Ümit Yolcu, Anna Hedström, Niklas Schmolenski, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin}, journal={arXiv preprint arXiv:2410.07158}, year={2024}, archivePrefix={arXiv}, eprint={2410.07158}, primaryClass={cs.LG cs.AI} }
bareeva2024quanda:
arxiv-667686
2410.07159
On the Spectral Efficiency of D-MIMO Networks under Rician Fading
<|reference_start|>On the Spectral Efficiency of D-MIMO Networks under Rician Fading: Contemporary wireless communications systems adopt the Multi-User Multiple-Input Multiple-Output (MU-MIMO) technique: a single base station or Access Point (AP) equipped with multiple antenna elements serves multiple active users simultaneously. Aiming at providing a more uniform wireless coverage, industry and academia have been working towards the evolution from centralized MIMO to Distributed-MIMO. That is, instead of having all the antenna elements co-located at a single AP, multiple APs, each equipped with a few or a single antenna element, jointly cooperate to serve the active users in the coverage area. In this work, we evaluate the performance of different D-MIMO setups under Rician fading, and considering different receive combining schemes. Note that the Rician fading model is convenient for MU-MIMO performance assessment, as it encompasses a wide variety of scenarios. Our numerical results show that the correlation among the channel vectors of different users increases with the Rician factor, which leads to a reduction on the achievable Spectral Efficiency (SE). Moreover, given a total number of antenna elements, there is an optimal number of APs and antenna elements per AP that provides the best performance. This "sweet spot" depends on the Rician factor and on the adopted receive combining scheme.<|reference_end|>
arxiv
@article{tominaga2024on, title={On the Spectral Efficiency of D-MIMO Networks under Rician Fading}, author={Eduardo Noboro Tominaga, Onel Luis Alcaraz López, Tommy Svensson, Richard Demo Souza and Hirley Alves}, journal={arXiv preprint arXiv:2410.07159}, year={2024}, archivePrefix={arXiv}, eprint={2410.07159}, primaryClass={cs.IT eess.SP math.IT} }
tominaga2024on
arxiv-667687
2410.07160
TextToon: Real-Time Text Toonify Head Avatar from Single Video
<|reference_start|>TextToon: Real-Time Text Toonify Head Avatar from Single Video: We propose TextToon, a method to generate a drivable toonified avatar. Given a short monocular video sequence and a written instruction about the avatar style, our model can generate a high-fidelity toonified avatar that can be driven in real-time by another video with arbitrary identities. Existing related works heavily rely on multi-view modeling to recover geometry via texture embeddings, presented in a static manner, leading to control limitations. The multi-view video input also makes it difficult to deploy these models in real-world applications. To address these issues, we adopt a conditional embedding Tri-plane to learn realistic and stylized facial representations in a Gaussian deformation field. Additionally, we expand the stylization capabilities of 3D Gaussian Splatting by introducing an adaptive pixel-translation neural network and leveraging patch-aware contrastive learning to achieve high-quality images. To push our work into consumer applications, we develop a real-time system that can operate at 48 FPS on a GPU machine and 15-18 FPS on a mobile machine. Extensive experiments demonstrate the efficacy of our approach in generating textual avatars over existing methods in terms of quality and real-time animation. Please refer to our project page for more details: https://songluchuan.github.io/TextToon/.<|reference_end|>
arxiv
@article{song2024texttoon:, title={TextToon: Real-Time Text Toonify Head Avatar from Single Video}, author={Luchuan Song, Lele Chen, Celong Liu, Pinxin Liu, Chenliang Xu}, journal={arXiv preprint arXiv:2410.07160}, year={2024}, archivePrefix={arXiv}, eprint={2410.07160}, primaryClass={cs.CV cs.GR} }
song2024texttoon:
arxiv-667688
2410.07163
Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning
<|reference_start|>Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning: In this work, we address the problem of large language model (LLM) unlearning, aiming to remove unwanted data influences and associated model capabilities (e.g., copyrighted data or harmful content generation) while preserving essential model utilities, without the need for retraining from scratch. Despite the growing need for LLM unlearning, a principled optimization framework remains lacking. To this end, we revisit the state-of-the-art approach, negative preference optimization (NPO), and identify the issue of reference model bias, which could undermine NPO's effectiveness, particularly when unlearning forget data of varying difficulty. Given that, we propose a simple yet effective unlearning optimization framework, called SimNPO, showing that 'simplicity' in removing the reliance on a reference model (through the lens of simple preference optimization) benefits unlearning. We also provide deeper insights into SimNPO's advantages, supported by analysis using mixtures of Markov chains. Furthermore, we present extensive experiments validating SimNPO's superiority over existing unlearning baselines in benchmarks like TOFU and MUSE, and robustness against relearning attacks. Codes are available at https://github.com/OPTML-Group/Unlearn-Simple.<|reference_end|>
arxiv
@article{fan2024simplicity, title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning}, author={Chongyu Fan, Jiancheng Liu, Licong Lin, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu}, journal={arXiv preprint arXiv:2410.07163}, year={2024}, archivePrefix={arXiv}, eprint={2410.07163}, primaryClass={cs.CL cs.AI cs.LG} }
fan2024simplicity
arxiv-667689
2410.07164
AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation
<|reference_start|>AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation: Recent advancements in diffusion models have led to significant improvements in the generation and animation of 4D full-body human-object interactions (HOI). Nevertheless, existing methods primarily focus on SMPL-based motion generation, which is limited by the scarcity of realistic large-scale interaction data. This constraint affects their ability to create everyday HOI scenes. This paper addresses this challenge using a zero-shot approach with a pre-trained diffusion model. Despite this potential, achieving our goals is difficult due to the diffusion model's lack of understanding of ''where'' and ''how'' objects interact with the human body. To tackle these issues, we introduce AvatarGO, a novel framework designed to generate animatable 4D HOI scenes directly from textual inputs. Specifically, 1) for the ''where'' challenge, we propose LLM-guided contact retargeting, which employs Lang-SAM to identify the contact body part from text prompts, ensuring precise representation of human-object spatial relations. 2) For the ''how'' challenge, we introduce correspondence-aware motion optimization that constructs motion fields for both human and object models using the linear blend skinning function from SMPL-X. Our framework not only generates coherent compositional motions, but also exhibits greater robustness in handling penetration issues. Extensive experiments with existing methods validate AvatarGO's superior generation and animation capabilities on a variety of human-object pairs and diverse poses. As the first attempt to synthesize 4D avatars with object interactions, we hope AvatarGO could open new doors for human-centric 4D content creation.<|reference_end|>
arxiv
@article{cao2024avatargo:, title={AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation}, author={Yukang Cao, Liang Pan, Kai Han, Kwan-Yee K. Wong, Ziwei Liu}, journal={arXiv preprint arXiv:2410.07164}, year={2024}, archivePrefix={arXiv}, eprint={2410.07164}, primaryClass={cs.CV} }
cao2024avatargo:
arxiv-667690
2410.07165
Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models
<|reference_start|>Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models: Complex logical query answering (CLQA) is a challenging task that involves finding answer entities for complex logical queries over incomplete knowledge graphs (KGs). Previous research has explored the use of pre-trained knowledge graph completion (KGC) models, which can predict the missing facts in KGs, to answer complex logical queries. However, KGC models are typically evaluated using ranking evaluation metrics, which may result in values of predictions of KGC models that are not well-calibrated. In this paper, we propose a method for calibrating KGC models, namely CKGC, which enables KGC models to adapt to answering complex logical queries. Notably, CKGC is lightweight and effective. The adaptation function is simple, allowing the model to quickly converge during the adaptation process. The core concept of CKGC is to map the values of predictions of KGC models to the range [0, 1], ensuring that values associated with true facts are close to 1, while values linked to false facts are close to 0. Through experiments on three benchmark datasets, we demonstrate that our proposed calibration method can significantly boost model performance in the CLQA task. Moreover, our approach can enhance the performance of CLQA while preserving the ranking evaluation metrics of KGC models. The code is available at https://github.com/changyi7231/CKGC.<|reference_end|>
arxiv
@article{xiao2024complex, title={Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models}, author={Changyi Xiao, Yixin Cao}, journal={arXiv preprint arXiv:2410.07165}, year={2024}, archivePrefix={arXiv}, eprint={2410.07165}, primaryClass={cs.AI cs.LG} }
xiao2024complex
arxiv-667691
2410.07166
Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making
<|reference_start|>Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making: We aim to evaluate Large Language Models (LLMs) for embodied decision making. While a significant body of work has been leveraging LLMs for decision making in embodied environments, we still lack a systematic understanding of their performance because they are usually applied in different domains, for different purposes, and built based on different inputs and outputs. Furthermore, existing evaluations tend to rely solely on a final success rate, making it difficult to pinpoint what ability is missing in LLMs and where the problem lies, which in turn blocks embodied agents from leveraging LLMs effectively and selectively. To address these limitations, we propose a generalized interface (Embodied Agent Interface) that supports the formalization of various types of tasks and input-output specifications of LLM-based modules. Specifically, it allows us to unify 1) a broad set of embodied decision-making tasks involving both state and temporally extended goals, 2) four commonly-used LLM-based modules for decision making: goal interpretation, subgoal decomposition, action sequencing, and transition modeling, and 3) a collection of fine-grained metrics which break down evaluation into various types of errors, such as hallucination errors, affordance errors, various types of planning errors, etc. Overall, our benchmark offers a comprehensive assessment of LLMs' performance for different subtasks, pinpointing the strengths and weaknesses in LLM-powered embodied AI systems, and providing insights for effective and selective use of LLMs in embodied decision making.<|reference_end|>
arxiv
@article{li2024embodied, title={Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making}, author={Manling Li, Shiyu Zhao, Qineng Wang, Kangrui Wang, Yu Zhou, Sanjana Srivastava, Cem Gokmen, Tony Lee, Li Erran Li, Ruohan Zhang, Weiyu Liu, Percy Liang, Li Fei-Fei, Jiayuan Mao and Jiajun Wu}, journal={arXiv preprint arXiv:2410.07166}, year={2024}, archivePrefix={arXiv}, eprint={2410.07166}, primaryClass={cs.CL cs.AI cs.LG cs.RO} }
li2024embodied
arxiv-667692
2410.07167
Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate
<|reference_start|>Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate: We present the Modality Integration Rate (MIR), an effective, robust, and generalized metric to indicate the multi-modal pre-training quality of Large Vision Language Models (LVLMs). Large-scale pre-training plays a critical role in building capable LVLMs, while evaluating its training quality without the costly supervised fine-tuning stage is under-explored. Loss, perplexity, and in-context evaluation results are commonly used pre-training metrics for Large Language Models (LLMs), while we observed that these metrics are less indicative when aligning a well-trained LLM with a new modality. Due to the lack of proper metrics, the research of LVLMs in the critical pre-training stage is hindered greatly, including the training data choice, efficient module design, etc. In this paper, we propose evaluating the pre-training quality from the inter-modal distribution distance perspective and present MIR, the Modality Integration Rate, which is 1) \textbf{Effective} to represent the pre-training quality and show a positive relation with the benchmark performance after supervised fine-tuning. 2) \textbf{Robust} toward different training/evaluation data. 3) \textbf{Generalize} across training configurations and architecture choices. We conduct a series of pre-training experiments to explore the effectiveness of MIR and observe satisfactory results that MIR is indicative about training data selection, training strategy schedule, and model architecture design to get better pre-training results. We hope MIR could be a helpful metric for building capable LVLMs and inspire the following research about modality alignment in different areas. Our code is at: https://github.com/shikiw/Modality-Integration-Rate.<|reference_end|>
arxiv
@article{huang2024deciphering, title={Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate}, author={Qidong Huang, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Jiaqi Wang, Dahua Lin, Weiming Zhang, Nenghai Yu}, journal={arXiv preprint arXiv:2410.07167}, year={2024}, archivePrefix={arXiv}, eprint={2410.07167}, primaryClass={cs.CV cs.CL} }
huang2024deciphering
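The abstract measures pre-training quality via an inter-modal distribution distance but does not spell out MIR's formula here. As a hedged illustration of the general idea only, the sketch below computes a Gaussian Fréchet distance between vision-token and text-token hidden states; the function name and feature shapes are assumptions, and this is not MIR itself.

```python
import torch

def intermodal_frechet_distance(vision_feats: torch.Tensor,
                                text_feats: torch.Tensor) -> torch.Tensor:
    # vision_feats, text_feats: (num_tokens, hidden_dim) hidden states taken
    # from the same LVLM layer for vision and text tokens, respectively.
    mu_v, mu_t = vision_feats.mean(0), text_feats.mean(0)
    cov_v, cov_t = torch.cov(vision_feats.T), torch.cov(text_feats.T)
    # Trace of the matrix square root of cov_v @ cov_t, approximated via
    # eigenvalues (real parts clamped to be non-negative for stability).
    eigvals = torch.linalg.eigvals(cov_v @ cov_t).real.clamp(min=0)
    mean_term = (mu_v - mu_t).pow(2).sum()
    return mean_term + cov_v.trace() + cov_t.trace() - 2 * eigvals.sqrt().sum()
```

A smaller distance would indicate that the new visual modality has been pulled closer to the LLM's text distribution, which is the intuition the abstract attributes to MIR.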
arxiv-667693
2410.07168
Sylber: Syllabic Embedding Representation of Speech from Raw Audio
<|reference_start|>Sylber: Syllabic Embedding Representation of Speech from Raw Audio: Syllables are compositional units of spoken language that play a crucial role in human speech perception and production. However, current neural speech representations lack structure, resulting in dense token sequences that are costly to process. To bridge this gap, we propose a new model, Sylber, that produces speech representations with clean and robust syllabic structure. Specifically, we propose a self-supervised model that regresses features on syllabic segments distilled from a teacher model that is an exponential moving average of the model being trained. This results in a highly structured representation of speech features, offering three key benefits: 1) a fast, linear-time syllable segmentation algorithm, 2) efficient syllabic tokenization with an average of 4.27 tokens per second, and 3) syllabic units better suited for lexical and syntactic understanding. We also train token-to-speech generative models with our syllabic units and show that fully intelligible speech can be reconstructed from these tokens. Lastly, we observe that categorical perception, a linguistic phenomenon of speech perception, emerges naturally in our model, making the embedding space more categorical and sparse than that of previous self-supervised learning approaches. Together, we present a novel self-supervised approach for representing speech as syllables, with significant potential for efficient speech tokenization and spoken language modeling.<|reference_end|>
arxiv
@article{cho2024sylber, title={Sylber: Syllabic Embedding Representation of Speech from Raw Audio}, author={Cheol Jun Cho, Nicholas Lee, Akshat Gupta, Dhruv Agarwal, Ethan Chen, Alan W Black, Gopala K. Anumanchipalli}, journal={arXiv preprint arXiv:2410.07168}, year={2024}, archivePrefix={arXiv}, eprint={2410.07168}, primaryClass={cs.CL cs.SD eess.AS} }
cho2024sylber
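The teacher in this self-distillation setup is described as an exponential moving average of the model being trained. A minimal sketch of that EMA update follows; the decay value is an illustrative choice, not taken from the paper.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999):  # decay value is an illustrative choice
    # The teacher tracks an exponential moving average of the student's
    # weights, providing stable regression targets for the syllabic segments.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)
```

Calling this after each optimizer step keeps the teacher a smoothed, lagged copy of the student, which is what makes its distilled segment targets stable enough to regress against.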
arxiv-667694
2410.07169
VIRT: Vision Instructed Transformer for Robotic Manipulation
<|reference_start|>VIRT: Vision Instructed Transformer for Robotic Manipulation: Robotic manipulation, owing to its multi-modal nature, often faces significant training ambiguity, necessitating explicit instructions to clearly delineate the manipulation details in tasks. In this work, we highlight that vision instruction is naturally more comprehensible to recent robotic policies than the commonly adopted text instruction, as these policies, like human infants, are born with some vision understanding ability. Building on this premise and drawing inspiration from cognitive science, we introduce the robotic imagery paradigm, which realizes large-scale robotic data pre-training without text annotations. Additionally, we propose the robotic gaze strategy that emulates the human eye gaze mechanism, thereby guiding subsequent actions and focusing the attention of the policy on the manipulated object. Leveraging these innovations, we develop VIRT, a fully Transformer-based policy. We design comprehensive tasks using both a physical robot and simulated environments to assess the efficacy of VIRT. The results indicate that VIRT can complete highly challenging tasks such as ``opening the lid of a tightly sealed bottle'', and the proposed techniques boost the success rates of the baseline policy on diverse challenging tasks from nearly 0% to more than 65%.<|reference_end|>
arxiv
@article{li2024virt, title={VIRT: Vision Instructed Transformer for Robotic Manipulation}, author={Zhuoling Li, Liangliang Ren, Jinrong Yang, Yong Zhao, Xiaoyang Wu, Zhenhua Xu, Xiang Bai, Hengshuang Zhao}, journal={arXiv preprint arXiv:2410.07169}, year={2024}, archivePrefix={arXiv}, eprint={2410.07169}, primaryClass={cs.RO} }
li2024virt
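One plausible reading of the robotic gaze strategy is to crop a high-resolution window around the manipulated object's predicted 2D location and let the policy attend to that crop. The sketch below is that reading only; the function name, window size, and interface are assumptions, not details from the paper.

```python
import torch

def gaze_crop(image: torch.Tensor, center_xy, size: int = 128) -> torch.Tensor:
    # image: (C, H, W). center_xy: predicted (x, y) pixel location of the
    # manipulated object. Clamp the window so it stays inside the frame,
    # then return a fixed-size crop for the policy to attend to.
    _, H, W = image.shape
    cx = int(min(max(center_xy[0], size // 2), W - size // 2))
    cy = int(min(max(center_xy[1], size // 2), H - size // 2))
    return image[:, cy - size // 2: cy + size // 2,
                    cx - size // 2: cx + size // 2]
```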
arxiv-667695
2410.07170
One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation
<|reference_start|>One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation: Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned on a downstream task for a specific application. The most successful and most commonly used fine-tuning method is to update the pre-trained weights via low-rank adaptation (LoRA). LoRA introduces new weight matrices that are usually initialized at random with a uniform rank distribution across model weights. Recent works focus on weight-driven initialization or learning of adaptive ranks during training. Both approaches have only been investigated in isolation, resulting in slow convergence or a uniform rank distribution, which in turn leads to sub-optimal performance. We propose to enhance LoRA by initializing the new weights in a data-driven manner, computing a singular value decomposition on minibatches of activation vectors. We then initialize the LoRA matrices with the obtained right-singular vectors, re-distribute ranks among all weight matrices to explain the maximal amount of variance, and continue the standard LoRA fine-tuning procedure. This results in our new method Explained Variance Adaptation (EVA). We apply EVA to a variety of fine-tuning tasks ranging from language generation and understanding to image classification and reinforcement learning. EVA exhibits faster convergence than competitors and attains the highest average score across a multitude of tasks per domain.<|reference_end|>
arxiv
@article{paischer2024one, title={One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation}, author={Fabian Paischer, Lukas Hauzenberger, Thomas Schmied, Benedikt Alkin, Marc Peter Deisenroth, Sepp Hochreiter}, journal={arXiv preprint arXiv:2410.07170}, year={2024}, archivePrefix={arXiv}, eprint={2410.07170}, primaryClass={cs.LG cs.AI cs.CL stat.ML} }
paischer2024one
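The data-driven initialization described here admits a compact sketch: run an SVD over a minibatch of a layer's input activations and take the top right-singular vectors as the LoRA A factor. Function and variable names below are illustrative, and the cross-layer rank redistribution step is omitted.

```python
import torch

def eva_style_lora_A(activations: torch.Tensor, rank: int):
    # activations: (num_tokens, in_features) minibatch of inputs to one
    # linear layer. The top right-singular vectors span the directions of
    # maximal variance in that layer's input distribution.
    _, s, vh = torch.linalg.svd(activations, full_matrices=False)
    A = vh[:rank]                                   # (rank, in_features)
    explained = (s[:rank] ** 2).sum() / (s ** 2).sum()
    return A, explained.item()
```

Since standard LoRA zero-initializes the B matrix, a data-driven A leaves the model's outputs unchanged at step 0 while orienting the adapter toward high-variance input directions; the returned explained-variance ratio is the kind of per-layer signal that could drive the rank redistribution the abstract describes.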
arxiv-667696
2410.07171
IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation
<|reference_start|>IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation: Advanced diffusion models like RPG, Stable Diffusion 3, and FLUX have made notable strides in compositional text-to-image generation. However, these methods typically exhibit distinct strengths for compositional generation, with some excelling in handling attribute binding and others in spatial relationships. This disparity highlights the need for an approach that can leverage the complementary strengths of various models to comprehensively improve composition capability. To this end, we introduce IterComp, a novel framework that aggregates composition-aware model preferences from multiple models and employs an iterative feedback learning approach to enhance compositional generation. Specifically, we curate a gallery of six powerful open-source diffusion models and evaluate them on three key compositional metrics: attribute binding, spatial relationships, and non-spatial relationships. Based on these metrics, we develop a composition-aware model preference dataset comprising numerous image-rank pairs to train composition-aware reward models. We then propose an iterative feedback learning method that enhances compositionality in a closed-loop manner, enabling the progressive self-refinement of both the base diffusion model and the reward models over multiple iterations. Theoretical proof demonstrates the effectiveness of this design, and extensive experiments show our significant superiority over previous SOTA methods (e.g., Omost and FLUX), particularly in multi-category object composition and complex semantic alignment. IterComp opens new research avenues in reward feedback learning for diffusion models and compositional generation. Code: https://github.com/YangLing0818/IterComp<|reference_end|>
arxiv
@article{zhang2024itercomp, title={IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation}, author={Xinchen Zhang, Ling Yang, Guohao Li, Yaqi Cai, Jiake Xie, Yong Tang, Yujiu Yang, Mengdi Wang, Bin Cui}, journal={arXiv preprint arXiv:2410.07171}, year={2024}, archivePrefix={arXiv}, eprint={2410.07171}, primaryClass={cs.CV} }
zhang2024itercomp
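The closed-loop refinement described above can be sketched as a simple control flow. Every callable below is a hypothetical stand-in supplied by the caller, not the paper's API, and ranking two samples per prompt is a simplification of the image-rank data the abstract mentions.

```python
from typing import Callable, List, Sequence, Tuple

def itercomp_style_loop(
    generate: Callable[[str], object],                    # sample an image
    score: Callable[[object], float],                     # composition-aware reward
    finetune_on: Callable[[List[Tuple[object, object]]], None],
    refresh_reward: Callable[[List[Tuple[object, object]]], None],
    prompts: Sequence[str],
    iterations: int = 3,
) -> None:
    for _ in range(iterations):
        pairs: List[Tuple[object, object]] = []
        for p in prompts:
            a, b = generate(p), generate(p)               # two samples per prompt
            # Order the pair by the composition-aware reward (winner first).
            pairs.append((a, b) if score(a) >= score(b) else (b, a))
        finetune_on(pairs)       # improve the base diffusion model ...
        refresh_reward(pairs)    # ... and the reward models, in a closed loop
```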
arxiv-667697
2410.07172
Glider: Global and Local Instruction-Driven Expert Router
<|reference_start|>Glider: Global and Local Instruction-Driven Expert Router: The availability of performant pre-trained models has led to a proliferation of fine-tuned expert models that are specialized to particular domains. This has enabled the creation of powerful and adaptive routing-based "Model MoErging" methods with the goal of using expert modules to create an aggregate system with improved performance or generalization. However, existing MoErging methods often prioritize generalization to unseen tasks at the expense of performance on held-in tasks, which limits their practical applicability in real-world deployment scenarios. We observe that current token-level routing mechanisms neglect the global semantic context of the input task. This token-wise independence hinders effective expert selection for held-in tasks, as routing decisions fail to incorporate the semantic properties of the task. To address this, we propose the Global and Local Instruction-Driven Expert Router (GLIDER), which integrates a multi-scale routing mechanism encompassing a semantic global router and a learned local router. The global router leverages an LLM's advanced reasoning capabilities over semantically related contexts to enhance expert selection. Given the input query and the LLM, the router generates semantic task instructions that guide the retrieval of the most relevant experts across all layers. This global guidance is complemented by a local router that facilitates token-level routing decisions within each module, enabling finer control and enhanced performance on unseen tasks. Our experiments using T5-based models on T0 and FLAN tasks demonstrate that GLIDER achieves substantially improved held-in performance while maintaining strong generalization on held-out tasks. We also perform ablation experiments to dive deeper into the components of GLIDER. Our experiments highlight the importance of our multi-scale routing, which leverages LLM-driven semantic reasoning for MoErging methods.<|reference_end|>
arxiv
@article{li2024glider, title={Glider: Global and Local Instruction-Driven Expert Router}, author={Pingzhi Li, Prateek Yadav, Jaehong Yoon, Jie Peng, Yi-Lin Sung, Mohit Bansal, Tianlong Chen}, journal={arXiv preprint arXiv:2410.07172}, year={2024}, archivePrefix={arXiv}, eprint={2410.07172}, primaryClass={cs.LG} }
li2024glider
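A minimal sketch of the multi-scale routing idea: combine a global, instruction-level expert score with per-token router logits. The mixing weight and tensor shapes are illustrative assumptions, not details from the paper.

```python
import torch

def glider_style_routing(global_scores: torch.Tensor,
                         local_logits: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    # global_scores: (num_experts,) similarity between an LLM-generated task
    # instruction embedding and each expert's description embedding.
    # local_logits: (num_tokens, num_experts) learned per-token router logits.
    # alpha is an illustrative mixing weight.
    combined = alpha * global_scores.unsqueeze(0) + (1 - alpha) * local_logits
    return torch.softmax(combined, dim=-1)  # per-token expert weights
```

The global term biases every token toward experts that match the task's semantics, while the local term preserves token-level flexibility, which is the held-in/held-out trade-off the abstract targets.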
arxiv-667698
2410.07173
Do better language models have crisper vision?
<|reference_start|>Do better language models have crisper vision?: How well do text-only Large Language Models (LLMs) grasp the visual world? As LLMs are increasingly used in computer vision, addressing this question becomes both fundamental and pertinent. However, existing studies have primarily focused on limited scenarios, such as their ability to generate visual content or cluster multimodal data. To address this, we propose the Visual Text Representation Benchmark (ViTeRB) to isolate key properties that make language models well-aligned with the visual world. With this, we identify large-scale decoder-based LLMs as ideal candidates for representing text in vision-centric contexts, counter to the current practice of utilizing text encoders. Building on these findings, we propose ShareLock, an ultra-lightweight CLIP-like model. By leveraging precomputable frozen features from strong vision and language models, ShareLock achieves an impressive 51% accuracy on ImageNet despite utilizing just 563k image-caption pairs. Moreover, training requires only 1 GPU hour (or 10 hours including the precomputation of features), orders of magnitude less than prior methods. Code will be released.<|reference_end|>
arxiv
@article{ruthardt2024do, title={Do better language models have crisper vision?}, author={Jona Ruthardt, Gertjan J. Burghouts, Serge Belongie, Yuki M. Asano}, journal={arXiv preprint arXiv:2410.07173}, year={2024}, archivePrefix={arXiv}, eprint={2410.07173}, primaryClass={cs.CL cs.AI cs.CV} }
ruthardt2024do
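ShareLock's recipe, as described, trains a light head over frozen, precomputed vision and LLM features with a CLIP-style objective. The sketch below assumes illustrative feature dimensions and a standard symmetric InfoNCE loss; it is not the released implementation.

```python
import torch
import torch.nn.functional as F

class ShareLockStyleHead(torch.nn.Module):
    # Light projection heads over frozen, precomputed vision and LLM text
    # features; only these heads are trained. Dimensions are assumptions.
    def __init__(self, vis_dim: int = 1024, txt_dim: int = 4096,
                 embed_dim: int = 512):
        super().__init__()
        self.vis_proj = torch.nn.Linear(vis_dim, embed_dim)
        self.txt_proj = torch.nn.Linear(txt_dim, embed_dim)
        self.logit_scale = torch.nn.Parameter(torch.tensor(2.659))  # log(1/0.07)

    def forward(self, vis_feats: torch.Tensor, txt_feats: torch.Tensor):
        v = F.normalize(self.vis_proj(vis_feats), dim=-1)
        t = F.normalize(self.txt_proj(txt_feats), dim=-1)
        logits = self.logit_scale.exp() * v @ t.T
        labels = torch.arange(len(v), device=v.device)
        # Symmetric InfoNCE over the in-batch image-caption pairs.
        return (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.T, labels)) / 2
```

Because both backbones stay frozen, their features can be precomputed once, which is what makes the reported 1 GPU hour of training plausible.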
arxiv-667699
2410.07174
Neural Circuit Architectural Priors for Quadruped Locomotion
<|reference_start|>Neural Circuit Architectural Priors for Quadruped Locomotion: Learning-based approaches to quadruped locomotion commonly adopt generic policy architectures like fully connected MLPs. As such architectures contain few inductive biases, it is common in practice to incorporate priors in the form of rewards, training curricula, imitation data, or trajectory generators. In nature, animals are born with priors in the form of their nervous system's architecture, which has been shaped by evolution to confer innate ability and efficient learning. For instance, a horse can walk within hours of birth and can quickly improve with practice. Such architectural priors can also be useful in ANN architectures for AI. In this work, we explore the advantages of a biologically inspired ANN architecture for quadruped locomotion based on neural circuits in the limbs and spinal cord of mammals. Our architecture achieves good initial performance and final performance comparable to MLPs, while using less data and orders of magnitude fewer parameters. Our architecture also exhibits better generalization to task variations, even allowing deployment on a physical robot without standard sim-to-real methods. This work shows that neural circuits can provide valuable architectural priors for locomotion and encourages future work on other sensorimotor skills.<|reference_end|>
arxiv
@article{bhattasali2024neural, title={Neural Circuit Architectural Priors for Quadruped Locomotion}, author={Nikhil X. Bhattasali, Venkatesh Pattabiraman, Lerrel Pinto, Grace W. Lindsay}, journal={arXiv preprint arXiv:2410.07174}, year={2024}, archivePrefix={arXiv}, eprint={2410.07174}, primaryClass={q-bio.NC cs.AI cs.LG cs.NE cs.RO} }
bhattasali2024neural
arxiv-667700
2410.07176
Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models
<|reference_start|>Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models: Retrieval-Augmented Generation (RAG), while effective in integrating external knowledge to address the limitations of large language models (LLMs), can be undermined by imperfect retrieval, which may introduce irrelevant, misleading, or even malicious information. Despite its importance, previous studies have rarely explored the behavior of RAG through a joint analysis of how errors from imperfect retrieval are attributed and propagated, and how potential conflicts arise between the LLMs' internal knowledge and external sources. Through controlled analysis under realistic conditions, we find that imperfect retrieval augmentation may be inevitable and quite harmful. We identify the knowledge conflicts between LLM-internal and external knowledge from retrieval as a bottleneck to overcome in the post-retrieval stage of RAG. To render LLMs resilient to imperfect retrieval, we propose Astute RAG, a novel RAG approach that adaptively elicits essential information from LLMs' internal knowledge, iteratively consolidates internal and external knowledge with source-awareness, and finalizes the answer according to information reliability. Our experiments using Gemini and Claude demonstrate that Astute RAG significantly outperforms previous robustness-enhanced RAG methods. Notably, Astute RAG is the only approach that matches or exceeds the performance of LLMs without RAG under worst-case scenarios. Further analysis reveals that Astute RAG effectively resolves knowledge conflicts, improving the reliability and trustworthiness of RAG systems.<|reference_end|>
arxiv
@article{wang2024astute, title={Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models}, author={Fei Wang, Xingchen Wan, Ruoxi Sun, Jiefeng Chen, Sercan \"O. Ar{\i}k}, journal={arXiv preprint arXiv:2410.07176}, year={2024}, archivePrefix={arXiv}, eprint={2410.07176}, primaryClass={cs.CL cs.AI cs.LG} }
wang2024astute
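The three stages named in the abstract (eliciting internal knowledge, source-aware iterative consolidation, reliability-based finalization) can be sketched as a prompt-driven control flow. `llm` below is a hypothetical text-in/text-out callable, and every prompt string is illustrative, not from the paper.

```python
from typing import Callable, List

def astute_rag_style_answer(llm: Callable[[str], str], question: str,
                            retrieved_passages: List[str],
                            iterations: int = 2) -> str:
    # Step 1: elicit the model's internal knowledge as an extra passage.
    internal = llm(f"Write what you know that answers: {question}")
    sources = [("internal", internal)] + [("external", p)
                                          for p in retrieved_passages]
    # Step 2: iteratively consolidate, keeping source labels so conflicts
    # between internal and retrieved knowledge stay visible.
    for _ in range(iterations):
        doc_list = "\n".join(f"[{tag}] {text}" for tag, text in sources)
        consolidated = llm(
            "Group consistent information, flag conflicts, and discard "
            f"unreliable content:\n{doc_list}\nQuestion: {question}"
        )
        sources = [("consolidated", consolidated)]
    # Step 3: answer from the most reliable consolidated information.
    return llm(f"Using only the reliable information below, answer "
               f"{question}:\n{sources[0][1]}")
```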