corpus_id
stringlengths 7
12
| paper_id
stringlengths 9
16
| title
stringlengths 1
261
| abstract
stringlengths 70
4.02k
| source
stringclasses 1
value | bibtex
stringlengths 208
20.9k
| citation_key
stringlengths 6
100
|
---|---|---|---|---|---|---|
arxiv-661401
|
2409.16133
|
Implicit assessment of language learning during practice as accurate as explicit testing
|
<|reference_start|>Implicit assessment of language learning during practice as accurate as explicit testing: Assessment of proficiency of the learner is an essential part of Intelligent Tutoring Systems (ITS). We use Item Response Theory (IRT) in computer-aided language learning for assessment of student ability in two contexts: in test sessions, and in exercises during practice sessions. Exhaustive testing across a wide range of skills can provide a detailed picture of proficiency, but may be undesirable for a number of reasons. Therefore, we first aim to replace exhaustive tests with efficient but accurate adaptive tests. We use learner data collected from exhaustive tests under imperfect conditions, to train an IRT model to guide adaptive tests. Simulations and experiments with real learner data confirm that this approach is efficient and accurate. Second, we explore whether we can accurately estimate learner ability directly from the context of practice with exercises, without testing. We transform learner data collected from exercise sessions into a form that can be used for IRT modeling. This is done by linking the exercises to {\em linguistic constructs}; the constructs are then treated as "items" within IRT. We present results from large-scale studies with thousands of learners. Using teacher assessments of student ability as "ground truth," we compare the estimates obtained from tests vs. those from exercises. The experiments confirm that the IRT models can produce accurate ability estimation based on exercises.<|reference_end|>
|
arxiv
|
@article{hou2024implicit,
title={Implicit assessment of language learning during practice as accurate as
explicit testing},
author={Jue Hou, Anisia Katinskaia, Anh-Duc Vu, Roman Yangarber},
journal={arXiv preprint arXiv:2409.16133},
year={2024},
archivePrefix={arXiv},
eprint={2409.16133},
primaryClass={cs.AI cs.CL cs.CY}
}
|
hou2024implicit
|
arxiv-661402
|
2409.16135
|
Evaluation of state-of-the-art ASR Models in Child-Adult Interactions
|
<|reference_start|>Evaluation of state-of-the-art ASR Models in Child-Adult Interactions: The ability to reliably transcribe child-adult conversations in a clinical setting is valuable for diagnosis and understanding of numerous developmental disorders such as Autism Spectrum Disorder. Recent advances in deep learning architectures and availability of large scale transcribed data has led to development of speech foundation models that have shown dramatic improvements in ASR performance. However, the ability of these models to translate well to conversational child-adult interactions is under studied. In this work, we provide a comprehensive evaluation of ASR performance on a dataset containing child-adult interactions from autism diagnostic sessions, using Whisper, Wav2Vec2, HuBERT, and WavLM. We find that speech foundation models show a noticeable performance drop (15-20% absolute WER) for child speech compared to adult speech in the conversational setting. Then, we employ LoRA on the best performing zero shot model (whisper-large) to probe the effectiveness of fine-tuning in a low resource setting, resulting in ~8% absolute WER improvement for child speech and ~13% absolute WER improvement for adult speech.<|reference_end|>
|
arxiv
|
@article{ashvin2024evaluation,
title={Evaluation of state-of-the-art ASR Models in Child-Adult Interactions},
author={Aditya Ashvin, Rimita Lahiri, Aditya Kommineni, Somer Bishop,
Catherine Lord, Sudarsana Reddy Kadiri, Shrikanth Narayanan},
journal={arXiv preprint arXiv:2409.16135},
year={2024},
archivePrefix={arXiv},
eprint={2409.16135},
primaryClass={eess.AS cs.LG cs.SD}
}
|
ashvin2024evaluation
|
arxiv-661403
|
2409.16136
|
HA-FGOVD: Highlighting Fine-grained Attributes via Explicit Linear Composition for Open-Vocabulary Object Detection
|
<|reference_start|>HA-FGOVD: Highlighting Fine-grained Attributes via Explicit Linear Composition for Open-Vocabulary Object Detection: Open-vocabulary object detection (OVD) models are considered to be Large Multi-modal Models (LMM), due to their extensive training data and a large number of parameters. Mainstream OVD models prioritize object coarse-grained category rather than focus on their fine-grained attributes, e.g., colors or materials, thus failed to identify objects specified with certain attributes. However, OVD models are pretrained on large-scale image-text pairs with rich attribute words, whose latent feature space can represent the global text feature as a linear composition of fine-grained attribute tokens without highlighting them. Therefore, we propose in this paper a universal and explicit approach for frozen mainstream OVD models that boosts their attribute-level detection capabilities by highlighting fine-grained attributes in explicit linear space. Firstly, a LLM is leveraged to highlight attribute words within the input text as a zero-shot prompted task. Secondly, by strategically adjusting the token masks, the text encoders of OVD models extract both global text and attribute-specific features, which are then explicitly composited as two vectors in linear space to form the new attribute-highlighted feature for detection tasks, where corresponding scalars are hand-crafted or learned to reweight both two vectors. Notably, these scalars can be seamlessly transferred among different OVD models, which proves that such an explicit linear composition is universal. Empirical evaluation on the FG-OVD dataset demonstrates that our proposed method uniformly improves fine-grained attribute-level OVD of various mainstream models and achieves new state-of-the-art performance.<|reference_end|>
|
arxiv
|
@article{ma2024ha-fgovd:,
title={HA-FGOVD: Highlighting Fine-grained Attributes via Explicit Linear
Composition for Open-Vocabulary Object Detection},
author={Yuqi Ma, Mengyin Liu, Chao Zhu, Xu-Cheng Yin},
journal={arXiv preprint arXiv:2409.16136},
year={2024},
archivePrefix={arXiv},
eprint={2409.16136},
primaryClass={cs.CV cs.AI cs.CL cs.MM}
}
|
ma2024ha-fgovd:
|
arxiv-661404
|
2409.16140
|
Metamorphic Debugging for Accountable Software
|
<|reference_start|>Metamorphic Debugging for Accountable Software: As the laws have become more complicated and enormous, the role of software systems in navigating and understanding these intricacies has become more critical. Given their socio-economic and legally critical implications, ensuring software accountability -- encompassing qualities such as legal compliance, explainability, perceptions of procedural justice, fairness of outcomes, and confidentiality/privacy -- is of paramount social importance. Moreover, software that accurately interprets its requirements, complies with legal standards and upholds social fairness can serve as a surrogate for legal and social norms, enabling policymakers to inquire about the law as seamlessly as a software engineer conducts a test. However, ensuring software accountability faces three key challenges: i) Translating legalese into formal specifications, ii) Lack of a definitive 'truth' for queries (the oracle problem), and iii) Scarcity of trustworthy datasets due to privacy and legal concerns. Drawing from the experiences in debugging U.S. tax preparation software, we propose that these challenges can be tackled by focusing on relational specifications. While the exact output for a given input may be unknown, the relationship between the outputs of two related inputs may be easier to express. This observation resembles i) the legal doctrine of precedent, meaning that similar cases must yield similar rulings; and ii) metamorphic relation (MR) in software engineering that requires a specific relation between software inputs and outputs. We propose metamorphic debugging as the foundation for detecting, explaining, and repairing socio-legal software for these relations. We showcase recent results that leverage metamorphic debugging to detect and explain accountability bugs in tax prep and poverty management software systems.<|reference_end|>
|
arxiv
|
@article{tizpaz-niari2024metamorphic,
title={Metamorphic Debugging for Accountable Software},
author={Saeid Tizpaz-Niari, Shiva Darian, Ashutosh Trivedi},
journal={arXiv preprint arXiv:2409.16140},
year={2024},
archivePrefix={arXiv},
eprint={2409.16140},
primaryClass={cs.SE cs.CY cs.PL}
}
|
tizpaz-niari2024metamorphic
|
arxiv-661405
|
2409.16141
|
Sensitivity of $m$-ary functions and low degree partitions of Hamming graphs
|
<|reference_start|>Sensitivity of $m$-ary functions and low degree partitions of Hamming graphs: The study of complexity measures of Boolean functions led Nisan and Szegedy to state the sensitivity conjecture in 1994, claiming a polynomial relation between degree and sensitivity. This problem remained unsolved until 2019, when Huang proved the conjecture via an equivalent graph theoretical reformulation due to Gotsman and Linial. We study $m$-ary functions, i.e., functions $f: T^n \rightarrow T$ where $T\subseteq \mathbb{C}$ is a finite alphabet of cardinality $|T| = m $ and extend the notions of degree $\mathrm{deg}(f)$ and sensitivity $s(f)$ to $m$-ary functions and show $s(f)\in O(\mathrm{deg}(f)^2)$. This generalizes results of Nisan and Szegedy. Conversely, we introduce the $m$-ary sensitivity conjecture, claiming a polynomial upper bound for $\mathrm{deg}(f)$ in terms of $s(f)$. Analogously to results of Gotsman and Linial, we provide a formulation of the conjecture in terms of imbalanced partitions of Hamming graphs into low degree subgraphs. Combining this with ideas of Chung, F\"uredi, Graham and Seymour, we show that for any prime $p$ the bound in the $p$-ary sensitivity conjecture has to be at least quadratic: there exist $p$-ary functions $f$ of arbitrarily large degree and $\mathrm{deg}(f)\in \Omega(s(f)^2)$.<|reference_end|>
|
arxiv
|
@article{asensio2024sensitivity,
title={Sensitivity of $m$-ary functions and low degree partitions of Hamming
graphs},
author={Sara Asensio, Ignacio Garc'ia-Marco, Kolja Knauer},
journal={arXiv preprint arXiv:2409.16141},
year={2024},
archivePrefix={arXiv},
eprint={2409.16141},
primaryClass={math.CO cs.DM}
}
|
asensio2024sensitivity
|
arxiv-661406
|
2409.16143
|
Seeing Faces in Things: A Model and Dataset for Pareidolia
|
<|reference_start|>Seeing Faces in Things: A Model and Dataset for Pareidolia: The human visual system is well-tuned to detect faces of all shapes and sizes. While this brings obvious survival advantages, such as a better chance of spotting unknown predators in the bush, it also leads to spurious face detections. ``Face pareidolia'' describes the perception of face-like structure among otherwise random stimuli: seeing faces in coffee stains or clouds in the sky. In this paper, we study face pareidolia from a computer vision perspective. We present an image dataset of ``Faces in Things'', consisting of five thousand web images with human-annotated pareidolic faces. Using this dataset, we examine the extent to which a state-of-the-art human face detector exhibits pareidolia, and find a significant behavioral gap between humans and machines. We find that the evolutionary need for humans to detect animal faces, as well as human faces, may explain some of this gap. Finally, we propose a simple statistical model of pareidolia in images. Through studies on human subjects and our pareidolic face detectors we confirm a key prediction of our model regarding what image conditions are most likely to induce pareidolia. Dataset and Website: https://aka.ms/faces-in-things<|reference_end|>
|
arxiv
|
@article{hamilton2024seeing,
title={Seeing Faces in Things: A Model and Dataset for Pareidolia},
author={Mark Hamilton, Simon Stent, Vasha DuTell, Anne Harrington, Jennifer
Corbett, Ruth Rosenholtz, William T. Freeman},
journal={arXiv preprint arXiv:2409.16143},
year={2024},
archivePrefix={arXiv},
eprint={2409.16143},
primaryClass={cs.CV cs.AI cs.HC cs.IR cs.LG}
}
|
hamilton2024seeing
|
arxiv-661407
|
2409.16145
|
Learning to Localize Actions in Instructional Videos with LLM-Based Multi-Pathway Text-Video Alignment
|
<|reference_start|>Learning to Localize Actions in Instructional Videos with LLM-Based Multi-Pathway Text-Video Alignment: Learning to localize temporal boundaries of procedure steps in instructional videos is challenging due to the limited availability of annotated large-scale training videos. Recent works focus on learning the cross-modal alignment between video segments and ASR-transcripted narration texts through contrastive learning. However, these methods fail to account for the alignment noise, i.e., irrelevant narrations to the instructional task in videos and unreliable timestamps in narrations. To address these challenges, this work proposes a novel training framework. Motivated by the strong capabilities of Large Language Models (LLMs) in procedure understanding and text summarization, we first apply an LLM to filter out task-irrelevant information and summarize task-related procedure steps (LLM-steps) from narrations. To further generate reliable pseudo-matching between the LLM-steps and the video for training, we propose the Multi-Pathway Text-Video Alignment (MPTVA) strategy. The key idea is to measure alignment between LLM-steps and videos via multiple pathways, including: (1) step-narration-video alignment using narration timestamps, (2) direct step-to-video alignment based on their long-term semantic similarity, and (3) direct step-to-video alignment focusing on short-term fine-grained semantic similarity learned from general video domains. The results from different pathways are fused to generate reliable pseudo step-video matching. We conducted extensive experiments across various tasks and problem settings to evaluate our proposed method. Our approach surpasses state-of-the-art methods in three downstream tasks: procedure step grounding, step localization, and narration grounding by 5.9\%, 3.1\%, and 2.8\%.<|reference_end|>
|
arxiv
|
@article{chen2024learning,
title={Learning to Localize Actions in Instructional Videos with LLM-Based
Multi-Pathway Text-Video Alignment},
author={Yuxiao Chen, Kai Li, Wentao Bao, Deep Patel, Yu Kong, Martin Renqiang
Min and Dimitris N. Metaxas},
journal={arXiv preprint arXiv:2409.16145},
year={2024},
archivePrefix={arXiv},
eprint={2409.16145},
primaryClass={cs.CV}
}
|
chen2024learning
|
arxiv-661408
|
2409.16146
|
Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework
|
<|reference_start|>Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework: Retrieval-augmented generation (RAG) has emerged as a popular solution to mitigate the hallucination issues of large language models. However, existing studies on RAG seldom address the issue of predictive uncertainty, i.e., how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications. In this work, we emphasize the importance of risk control, ensuring that RAG models proactively refuse to answer questions with low confidence. Our research identifies two critical latent factors affecting RAG's confidence in its predictions: the quality of the retrieved results and the manner in which these results are utilized. To guide RAG models in assessing their own confidence based on these two latent factors, we develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers. We also introduce a benchmarking procedure to collect answers with the option to abstain, facilitating a series of experiments. For evaluation, we introduce several risk-related metrics and the experimental results demonstrate the effectiveness of our approach.<|reference_end|>
|
arxiv
|
@article{chen2024controlling,
title={Controlling Risk of Retrieval-augmented Generation: A Counterfactual
Prompting Framework},
author={Lu Chen, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Xueqi Cheng},
journal={arXiv preprint arXiv:2409.16146},
year={2024},
archivePrefix={arXiv},
eprint={2409.16146},
primaryClass={cs.CL}
}
|
chen2024controlling
|
arxiv-661409
|
2409.16147
|
Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with Enhanced Generalization and Personalization Abilities
|
<|reference_start|>Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with Enhanced Generalization and Personalization Abilities: Recent advancements in 3D Gaussian Splatting (3DGS) have unlocked significant potential for modeling 3D head avatars, providing greater flexibility than mesh-based methods and more efficient rendering compared to NeRF-based approaches. Despite these advancements, the creation of controllable 3DGS-based head avatars remains time-intensive, often requiring tens of minutes to hours. To expedite this process, we here introduce the ``Gaussian D\'ej\`a-vu" framework, which first obtains a generalized model of the head avatar and then personalizes the result. The generalized model is trained on large 2D (synthetic and real) image datasets. This model provides a well-initialized 3D Gaussian head that is further refined using a monocular video to achieve the personalized head avatar. For personalizing, we propose learnable expression-aware rectification blendmaps to correct the initial 3D Gaussians, ensuring rapid convergence without the reliance on neural networks. Experiments demonstrate that the proposed method meets its objectives. It outperforms state-of-the-art 3D Gaussian head avatars in terms of photorealistic quality as well as reduces training time consumption to at least a quarter of the existing methods, producing the avatar in minutes.<|reference_end|>
|
arxiv
|
@article{yan2024gaussian,
title={Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with
Enhanced Generalization and Personalization Abilities},
author={Peizhi Yan, Rabab Ward, Qiang Tang, Shan Du},
journal={arXiv preprint arXiv:2409.16147},
year={2024},
archivePrefix={arXiv},
eprint={2409.16147},
primaryClass={cs.CV}
}
|
yan2024gaussian
|
arxiv-661410
|
2409.16149
|
MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving
|
<|reference_start|>MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving: This paper introduces MCTrack, a new 3D multi-object tracking method that achieves state-of-the-art (SOTA) performance across KITTI, nuScenes, and Waymo datasets. Addressing the gap in existing tracking paradigms, which often perform well on specific datasets but lack generalizability, MCTrack offers a unified solution. Additionally, we have standardized the format of perceptual results across various datasets, termed BaseVersion, facilitating researchers in the field of multi-object tracking (MOT) to concentrate on the core algorithmic development without the undue burden of data preprocessing. Finally, recognizing the limitations of current evaluation metrics, we propose a novel set that assesses motion information output, such as velocity and acceleration, crucial for downstream tasks. The source codes of the proposed method are available at this link: https://github.com/megvii-research/MCTrack}{https://github.com/megvii-research/MCTrack<|reference_end|>
|
arxiv
|
@article{wang2024mctrack:,
title={MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous
Driving},
author={Xiyang Wang, Shouzheng Qi, Jieyou Zhao, Hangning Zhou, Siyu Zhang,
Guoan Wang, Kai Tu, Songlin Guo, Jianbo Zhao, Jian Li, Mu Yang},
journal={arXiv preprint arXiv:2409.16149},
year={2024},
archivePrefix={arXiv},
eprint={2409.16149},
primaryClass={cs.CV}
}
|
wang2024mctrack:
|
arxiv-661411
|
2409.16151
|
Operator-difference approximations on two-dimensional merged Voronoi-Delaunay grids
|
<|reference_start|>Operator-difference approximations on two-dimensional merged Voronoi-Delaunay grids: Formulating boundary value problems for multidimensional partial derivative equations in terms of invariant operators of vector (tensor) analysis is convenient. Computational algorithms for approximate solutions are based on constructing grid analogs of vector analysis operators. This is most easily done by dividing the computational domain into rectangular cells when the grid nodes coincide with the cell vertices or are the cell centers. Grid operators of vector analysis for irregular regions are constructed using Delaunay triangulations or Voronoi partitions. This paper uses two-dimensional merged Voronoi-Delaunay grids to represent the grid cells as orthodiagonal quadrilaterals. Consistent approximations of the gradient, divergence, and rotor operators are proposed. On their basis, operator-difference approximations for typical stationary scalar and vector problems are constructed.<|reference_end|>
|
arxiv
|
@article{vabishchevich2024operator-difference,
title={Operator-difference approximations on two-dimensional merged
Voronoi-Delaunay grids},
author={Petr N. Vabishchevich},
journal={arXiv preprint arXiv:2409.16151},
year={2024},
archivePrefix={arXiv},
eprint={2409.16151},
primaryClass={math.NA cs.NA}
}
|
vabishchevich2024operator-difference
|
arxiv-661412
|
2409.16153
|
A Strong Separation for Adversarially Robust $\ell_0$ Estimation for Linear Sketches
|
<|reference_start|>A Strong Separation for Adversarially Robust $\ell_0$ Estimation for Linear Sketches: The majority of streaming problems are defined and analyzed in a static setting, where the data stream is any worst-case sequence of insertions and deletions that is fixed in advance. However, many real-world applications require a more flexible model, where an adaptive adversary may select future stream elements after observing the previous outputs of the algorithm. Over the last few years, there has been increased interest in proving lower bounds for natural problems in the adaptive streaming model. In this work, we give the first known adaptive attack against linear sketches for the well-studied $\ell_0$-estimation problem over turnstile, integer streams. For any linear streaming algorithm $\mathcal{A}$ that uses sketching matrix $\mathbf{A}\in \mathbb{Z}^{r \times n}$ where $n$ is the size of the universe, this attack makes $\tilde{\mathcal{O}}(r^8)$ queries and succeeds with high constant probability in breaking the sketch. We also give an adaptive attack against linear sketches for the $\ell_0$-estimation problem over finite fields $\mathbb{F}_p$, which requires a smaller number of $\tilde{\mathcal{O}}(r^3)$ queries. Finally, we provide an adaptive attack over $\mathbb{R}^n$ against linear sketches $\mathbf{A} \in \mathbb{R}^{r \times n}$ for $\ell_0$-estimation, in the setting where $\mathbf{A}$ has all nonzero subdeterminants at least $\frac{1}{\textrm{poly}(r)}$. Our results provide an exponential improvement over the previous number of queries known to break an $\ell_0$-estimation sketch.<|reference_end|>
|
arxiv
|
@article{gribelyuk2024a,
title={A Strong Separation for Adversarially Robust $\ell_0$ Estimation for
Linear Sketches},
author={Elena Gribelyuk, Honghao Lin, David P. Woodruff, Huacheng Yu, Samson
Zhou},
journal={arXiv preprint arXiv:2409.16153},
year={2024},
archivePrefix={arXiv},
eprint={2409.16153},
primaryClass={cs.DS}
}
|
gribelyuk2024a
|
arxiv-661413
|
2409.16154
|
Efficient Motion Prediction: A Lightweight & Accurate Trajectory Prediction Model With Fast Training and Inference Speed
|
<|reference_start|>Efficient Motion Prediction: A Lightweight & Accurate Trajectory Prediction Model With Fast Training and Inference Speed: For efficient and safe autonomous driving, it is essential that autonomous vehicles can predict the motion of other traffic agents. While highly accurate, current motion prediction models often impose significant challenges in terms of training resource requirements and deployment on embedded hardware. We propose a new efficient motion prediction model, which achieves highly competitive benchmark results while training only a few hours on a single GPU. Due to our lightweight architectural choices and the focus on reducing the required training resources, our model can easily be applied to custom datasets. Furthermore, its low inference latency makes it particularly suitable for deployment in autonomous applications with limited computing resources.<|reference_end|>
|
arxiv
|
@article{prutsch2024efficient,
title={Efficient Motion Prediction: A Lightweight & Accurate Trajectory
Prediction Model With Fast Training and Inference Speed},
author={Alexander Prutsch, Horst Bischof, Horst Possegger},
journal={arXiv preprint arXiv:2409.16154},
year={2024},
archivePrefix={arXiv},
eprint={2409.16154},
primaryClass={cs.RO cs.CV}
}
|
prutsch2024efficient
|
arxiv-661414
|
2409.16159
|
ComiCap: A VLMs pipeline for dense captioning of Comic Panels
|
<|reference_start|>ComiCap: A VLMs pipeline for dense captioning of Comic Panels: The comic domain is rapidly advancing with the development of single- and multi-page analysis and synthesis models. Recent benchmarks and datasets have been introduced to support and assess models' capabilities in tasks such as detection (panels, characters, text), linking (character re-identification and speaker identification), and analysis of comic elements (e.g., dialog transcription). However, to provide a comprehensive understanding of the storyline, a model must not only extract elements but also understand their relationships and generate highly informative captions. In this work, we propose a pipeline that leverages Vision-Language Models (VLMs) to obtain dense, grounded captions. To construct our pipeline, we introduce an attribute-retaining metric that assesses whether all important attributes are identified in the caption. Additionally, we created a densely annotated test set to fairly evaluate open-source VLMs and select the best captioning model according to our metric. Our pipeline generates dense captions with bounding boxes that are quantitatively and qualitatively superior to those produced by specifically trained models, without requiring any additional training. Using this pipeline, we annotated over 2 million panels across 13,000 books, which will be available on the project page https://github.com/emanuelevivoli/ComiCap.<|reference_end|>
|
arxiv
|
@article{vivoli2024comicap:,
title={ComiCap: A VLMs pipeline for dense captioning of Comic Panels},
author={Emanuele Vivoli, Niccol`o Biondi, Marco Bertini, Dimosthenis Karatzas},
journal={arXiv preprint arXiv:2409.16159},
year={2024},
archivePrefix={arXiv},
eprint={2409.16159},
primaryClass={cs.CV}
}
|
vivoli2024comicap:
|
arxiv-661415
|
2409.16160
|
MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling
|
<|reference_start|>MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling: Character video synthesis aims to produce realistic videos of animatable characters within lifelike scenes. As a fundamental problem in the computer vision and graphics community, 3D works typically require multi-view captures for per-case training, which severely limits their applicability of modeling arbitrary characters in a short time. Recent 2D methods break this limitation via pre-trained diffusion models, but they struggle for pose generality and scene interaction. To this end, we propose MIMO, a novel framework which can not only synthesize character videos with controllable attributes (i.e., character, motion and scene) provided by simple user inputs, but also simultaneously achieve advanced scalability to arbitrary characters, generality to novel 3D motions, and applicability to interactive real-world scenes in a unified framework. The core idea is to encode the 2D video to compact spatial codes, considering the inherent 3D nature of video occurrence. Concretely, we lift the 2D frame pixels into 3D using monocular depth estimators, and decompose the video clip to three spatial components (i.e., main human, underlying scene, and floating occlusion) in hierarchical layers based on the 3D depth. These components are further encoded to canonical identity code, structured motion code and full scene code, which are utilized as control signals of synthesis process. The design of spatial decomposed modeling enables flexible user control, complex motion expression, as well as 3D-aware synthesis for scene interactions. Experimental results demonstrate effectiveness and robustness of the proposed method.<|reference_end|>
|
arxiv
|
@article{men2024mimo:,
title={MIMO: Controllable Character Video Synthesis with Spatial Decomposed
Modeling},
author={Yifang Men, Yuan Yao, Miaomiao Cui, Liefeng Bo},
journal={arXiv preprint arXiv:2409.16160},
year={2024},
archivePrefix={arXiv},
eprint={2409.16160},
primaryClass={cs.CV}
}
|
men2024mimo:
|
arxiv-661416
|
2409.16163
|
The anonymization problem in social networks
|
<|reference_start|>The anonymization problem in social networks: In this paper we introduce a general version of the anonymization problem in social networks, in which the goal is to maximize the number of anonymous nodes by altering a given graph. We define three variants of this optimization problem, being full, partial and budgeted anonymization. In each, the objective is to maximize the number of k-anonymous nodes, i.e., nodes for which there are at least k-1 equivalent nodes, according to a particular anonymity measure of structural node equivalence. We propose six new heuristic algorithms for solving the anonymization problem which we implement into the reusable ANO-NET computational framework. As a baseline, we use an edge sampling method introduced in previous work. Experiments on both graph models and 17 real-world network datasets result in three empirical findings. First, we demonstrate that edge deletion is the most effective graph alteration operation. Second, we compare four commonly used anonymity measures from the literature and highlight how the choice of anonymity measure has a tremendous effect on both the achieved anonymity as well as the difficulty of solving the anonymization problem. Third, we find that the proposed algorithms that preferentially delete edges with a larger effect on nodes at a structurally unique position consistently outperform heuristics solely based on network structure. With similar runtimes, our algorithms retain on average 17 times more edges, ensuring higher data utility after full anonymization. In the budgeted variant, they achieve 4.4 times more anonymous nodes than the baseline. This work lays important foundations for future development of algorithms for anonymizing social networks.<|reference_end|>
|
arxiv
|
@article{de jong2024the,
title={The anonymization problem in social networks},
author={Rachel G. de Jong, Mark P. J. van der Loo and Frank W. Takes},
journal={arXiv preprint arXiv:2409.16163},
year={2024},
archivePrefix={arXiv},
eprint={2409.16163},
primaryClass={cs.SI}
}
|
de jong2024the
|
arxiv-661417
|
2409.16165
|
EnIGMA: Enhanced Interactive Generative Model Agent for CTF Challenges
|
<|reference_start|>EnIGMA: Enhanced Interactive Generative Model Agent for CTF Challenges: Although language model (LM) agents are demonstrating growing potential in many domains, their success in cybersecurity has been limited due to simplistic design and the lack of fundamental features for this domain. We present EnIGMA, an LM agent for autonomously solving Capture The Flag (CTF) challenges. EnIGMA introduces new Agent-Computer Interfaces (ACIs) to improve the success rate on CTF challenges. We establish the novel Interactive Agent Tool concept, which enables LM agents to run interactive command-line utilities essential for these challenges. Empirical analysis of EnIGMA on over 350 CTF challenges from three different benchmarks indicates that providing a robust set of new tools with demonstration of their usage helps the LM solve complex problems and achieves state-of-the-art results on the NYU CTF and Intercode-CTF benchmarks. Finally, we discuss insights on ACI design and agent behavior on cybersecurity tasks that highlight the need to adapt real-world tools for LM agents.<|reference_end|>
|
arxiv
|
@article{abramovich2024enigma:,
title={EnIGMA: Enhanced Interactive Generative Model Agent for CTF Challenges},
author={Talor Abramovich, Meet Udeshi, Minghao Shao, Kilian Lieret, Haoran Xi,
Kimberly Milner, Sofija Jancheska, John Yang, Carlos E. Jimenez, Farshad
Khorrami, Prashanth Krishnamurthy, Brendan Dolan-Gavitt, Muhammad Shafique,
Karthik Narasimhan, Ramesh Karri, Ofir Press},
journal={arXiv preprint arXiv:2409.16165},
year={2024},
archivePrefix={arXiv},
eprint={2409.16165},
primaryClass={cs.AI}
}
|
abramovich2024enigma:
|
arxiv-661418
|
2409.16167
|
Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering
|
<|reference_start|>Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering: Low-Rank Adaptation (LoRA) has emerged as a popular technique for fine-tuning large language models (LLMs) to various domains due to its modular design and widespread availability on platforms like Huggingface. This modularity has sparked interest in combining multiple LoRAs to enhance LLM capabilities. However, existing methods for LoRA composition primarily focus on task-specific adaptations that require additional training, and current model merging techniques often fail to fully leverage LoRA's modular nature, leading to parameter interference and performance degradation. In this paper, we investigate the feasibility of disassembling and reassembling multiple LoRAs at a finer granularity, analogous to assembling LEGO blocks. We introduce the concept of Minimal Semantic Units (MSUs), where the parameters corresponding to each rank in LoRA function as independent units. These MSUs demonstrate permutation invariance and concatenation-summation equivalence properties, enabling flexible combinations to create new LoRAs. Building on these insights, we propose the LoRA-LEGO framework. This framework conducts rank-wise parameter clustering by grouping MSUs from different LoRAs into $k$ clusters. The centroid of each cluster serves as a representative MSU, enabling the assembly of a merged LoRA with an adjusted rank of $k$. Additionally, we apply a dual reweighting strategy to optimize the scale of the merged LoRA. Experiments across various benchmarks demonstrate that our method outperforms existing approaches in LoRA merging.<|reference_end|>
|
arxiv
|
@article{zhao2024merging,
title={Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to
Extremes Through Rank-Wise Clustering},
author={Ziyu Zhao, Tao Shen, Didi Zhu, Zexi Li, Jing Su, Xuwu Wang, Kun Kuang,
Fei Wu},
journal={arXiv preprint arXiv:2409.16167},
year={2024},
archivePrefix={arXiv},
eprint={2409.16167},
primaryClass={cs.LG cs.AI cs.CL}
}
|
zhao2024merging
|
arxiv-661419
|
2409.16168
|
A Simple Distributed Algorithm for Sparse Fractional Covering and Packing Problems
|
<|reference_start|>A Simple Distributed Algorithm for Sparse Fractional Covering and Packing Problems: This paper presents a distributed algorithm in the CONGEST model that achieves a $(1+\epsilon)$-approximation for row-sparse fractional covering problems (RS-FCP) and the dual column-sparse fraction packing problems (CS-FPP). Compared with the best-known $(1+\epsilon)$-approximation CONGEST algorithm for RS-FCP/CS-FPP developed by Kuhn, Moscibroda, and Wattenhofer (SODA'06), our algorithm is not only much simpler but also significantly improves the dependency on $\epsilon$.<|reference_end|>
|
arxiv
|
@article{li2024a,
title={A Simple Distributed Algorithm for Sparse Fractional Covering and
Packing Problems},
author={Qian Li, Minghui Ouyang, and Yuyi Wang},
journal={arXiv preprint arXiv:2409.16168},
year={2024},
archivePrefix={arXiv},
eprint={2409.16168},
primaryClass={cs.DS cs.DC}
}
|
li2024a
|
arxiv-661420
|
2409.16172
|
A new interpolated pseudodifferential preconditioner for the Helmholtz equation in heterogeneous media
|
<|reference_start|>A new interpolated pseudodifferential preconditioner for the Helmholtz equation in heterogeneous media: This paper introduces a new pseudodifferential preconditioner for the Helmholtz equation in variable media with absorption. The pseudodifferential operator is associated with the multiplicative inverse to the symbol of the Helmholtz operator. This approach is well-suited for the intermediate and high-frequency regimes. The main novel idea for the fast evaluation of the preconditioner is to interpolate its symbol, not as a function of the (high-dimensional) phase-space variables, but as a function of the wave speed itself. Since the wave speed is a real-valued function, this approach allows us to interpolate in a univariate setting even when the original problem is posed in a multidimensional physical space. As a result, the needed number of interpolation points is small, and the interpolation coefficients can be computed using the fast Fourier transform. The overall computational complexity is log-linear with respect to the degrees of freedom as inherited from the fast Fourier transform. We present some numerical experiments to illustrate the effectiveness of the preconditioner to solve the discrete Helmholtz equation using the GMRES iterative method. The implementation of an absorbing layer for scattering problems using a complex-valued wave speed is also developed. Limitations and possible extensions are also discussed.<|reference_end|>
|
arxiv
|
@article{acosta2024a,
title={A new interpolated pseudodifferential preconditioner for the Helmholtz
equation in heterogeneous media},
author={Sebastian Acosta and Tahsin Khajah and Benjamin Palacios},
journal={arXiv preprint arXiv:2409.16172},
year={2024},
archivePrefix={arXiv},
eprint={2409.16172},
primaryClass={math.NA cs.NA physics.comp-ph}
}
|
acosta2024a
|
arxiv-661421
|
2409.16173
|
Extending Stable and Popular Matching Algorithms from Bipartite to Arbitrary Instances
|
<|reference_start|>Extending Stable and Popular Matching Algorithms from Bipartite to Arbitrary Instances: We consider stable and popular matching problems in arbitrary graphs, which are referred to as stable roommates instances. We extend the 3/2-approximation algorithm for the maximum size weakly stable matching problem to the roommates case, which solves a more than 20 year old open question of Irving and Manlove about the approximability of maximum size weakly stable matchings in roommates instances with ties [Irving and Manlove 2002] and has nice applications for the problem of matching residents to hospitals in the presence of couples. We also extend the algorithm that finds a maximum size popular matching in bipartite graphs in the case of strict preferences and the algorithm to find a popular matching among maximum weight matchings. While previous attempts to extend the idea of promoting the agents or duplicating the edges from bipartite instances to arbitrary ones failed, these results show that with the help of a simple observation, we can indeed bridge the gap and extend these algorithms<|reference_end|>
|
arxiv
|
@article{csáji2024extending,
title={Extending Stable and Popular Matching Algorithms from Bipartite to
Arbitrary Instances},
author={Gergely Cs'aji},
journal={arXiv preprint arXiv:2409.16173},
year={2024},
archivePrefix={arXiv},
eprint={2409.16173},
primaryClass={cs.DS cs.DM cs.GT cs.MA}
}
|
csáji2024extending
|
arxiv-661422
|
2409.16174
|
Fine Tuning Text-to-Image Diffusion Models for Correcting Anomalous Images
|
<|reference_start|>Fine Tuning Text-to-Image Diffusion Models for Correcting Anomalous Images: Since the advent of GANs and VAEs, image generation models have continuously evolved, opening up various real-world applications with the introduction of Stable Diffusion and DALL-E models. These text-to-image models can generate high-quality images for fields such as art, design, and advertising. However, they often produce aberrant images for certain prompts. This study proposes a method to mitigate such issues by fine-tuning the Stable Diffusion 3 model using the DreamBooth technique. Experimental results targeting the prompt "lying on the grass/street" demonstrate that the fine-tuned model shows improved performance in visual evaluation and metrics such as Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Frechet Inception Distance (FID). User surveys also indicated a higher preference for the fine-tuned model. This research is expected to make contributions to enhancing the practicality and reliability of text-to-image models.<|reference_end|>
|
arxiv
|
@article{yoo2024fine,
title={Fine Tuning Text-to-Image Diffusion Models for Correcting Anomalous
Images},
author={Hyunwoo Yoo},
journal={arXiv preprint arXiv:2409.16174},
year={2024},
archivePrefix={arXiv},
eprint={2409.16174},
primaryClass={cs.CV}
}
|
yoo2024fine
|
arxiv-661423
|
2409.16176
|
Cyber Knowledge Completion Using Large Language Models
|
<|reference_start|>Cyber Knowledge Completion Using Large Language Models: The integration of the Internet of Things (IoT) into Cyber-Physical Systems (CPSs) has expanded their cyber-attack surface, introducing new and sophisticated threats with potential to exploit emerging vulnerabilities. Assessing the risks of CPSs is increasingly difficult due to incomplete and outdated cybersecurity knowledge. This highlights the urgent need for better-informed risk assessments and mitigation strategies. While previous efforts have relied on rule-based natural language processing (NLP) tools to map vulnerabilities, weaknesses, and attack patterns, recent advancements in Large Language Models (LLMs) present a unique opportunity to enhance cyber-attack knowledge completion through improved reasoning, inference, and summarization capabilities. We apply embedding models to encapsulate information on attack patterns and adversarial techniques, generating mappings between them using vector embeddings. Additionally, we propose a Retrieval-Augmented Generation (RAG)-based approach that leverages pre-trained models to create structured mappings between different taxonomies of threat patterns. Further, we use a small hand-labeled dataset to compare the proposed RAG-based approach to a baseline standard binary classification model. Thus, the proposed approach provides a comprehensive framework to address the challenge of cyber-attack knowledge graph completion.<|reference_end|>
|
arxiv
|
@article{webb2024cyber,
title={Cyber Knowledge Completion Using Large Language Models},
author={Braden K Webb, Sumit Purohit, Rounak Meyur},
journal={arXiv preprint arXiv:2409.16176},
year={2024},
number={PNNL-SA-203400},
archivePrefix={arXiv},
eprint={2409.16176},
primaryClass={cs.CR cs.AI}
}
|
webb2024cyber
|
arxiv-661424
|
2409.16178
|
SDFit: 3D Object Pose and Shape by Fitting a Morphable SDF to a Single Image
|
<|reference_start|>SDFit: 3D Object Pose and Shape by Fitting a Morphable SDF to a Single Image: We focus on recovering 3D object pose and shape from single images. This is highly challenging due to strong (self-)occlusions, depth ambiguities, the enormous shape variance, and lack of 3D ground truth for natural images. Recent work relies mostly on learning from finite datasets, so it struggles generalizing, while it focuses mostly on the shape itself, largely ignoring the alignment with pixels. Moreover, it performs feed-forward inference, so it cannot refine estimates. We tackle these limitations with a novel framework, called SDFit. To this end, we make three key observations: (1) Learned signed-distance-function (SDF) models act as a strong morphable shape prior. (2) Foundational models embed 2D images and 3D shapes in a joint space, and (3) also infer rich features from images. SDFit exploits these as follows. First, it uses a category-level morphable SDF (mSDF) model, called DIT, to generate 3D shape hypotheses. This mSDF is initialized by querying OpenShape's latent space conditioned on the input image. Then, it computes 2D-to-3D correspondences, by extracting and matching features from the image and mSDF. Last, it fits the mSDF to the image in an render-and-compare fashion, to iteratively refine estimates. We evaluate SDFit on the Pix3D and Pascal3D+ datasets of real-world images. SDFit performs roughly on par with state-of-the-art learned methods, but, uniquely, requires no re-training. Thus, SDFit is promising for generalizing in the wild, paving the way for future research. Code will be released<|reference_end|>
|
arxiv
|
@article{antić2024sdfit:,
title={SDFit: 3D Object Pose and Shape by Fitting a Morphable SDF to a Single
Image},
author={Dimitrije Anti'c, Sai Kumar Dwivedi, Shashank Tripathi, Theo Gevers,
Dimitrios Tzionas},
journal={arXiv preprint arXiv:2409.16178},
year={2024},
archivePrefix={arXiv},
eprint={2409.16178},
primaryClass={cs.CV}
}
|
antić2024sdfit:
|
arxiv-661425
|
2409.16181
|
SPIBOT: A Drone-Tethered Mobile Gripper for Robust Aerial Object Retrieval in Dynamic Environments
|
<|reference_start|>SPIBOT: A Drone-Tethered Mobile Gripper for Robust Aerial Object Retrieval in Dynamic Environments: In real-world field operations, aerial grasping systems face significant challenges in dynamic environments due to strong winds, shifting surfaces, and the need to handle heavy loads. Particularly when dealing with heavy objects, the powerful propellers of the drone can inadvertently blow the target object away as it approaches, making the task even more difficult. To address these challenges, we introduce SPIBOT, a novel drone-tethered mobile gripper system designed for robust and stable autonomous target retrieval. SPIBOT operates via a tether, much like a spider, allowing the drone to maintain a safe distance from the target. To ensure both stable mobility and secure grasping capabilities, SPIBOT is equipped with six legs and sensors to estimate the robot's and mission's states. It is designed with a reduced volume and weight compared to other hexapod robots, allowing it to be easily stowed under the drone and reeled in as needed. Designed for the 2024 MBZIRC Maritime Grand Challenge, SPIBOT is built to retrieve a 1kg target object in the highly dynamic conditions of the moving deck of a ship. This system integrates a real-time action selection algorithm that dynamically adjusts the robot's actions based on proximity to the mission goal and environmental conditions, enabling rapid and robust mission execution. Experimental results across various terrains, including a pontoon on a lake, a grass field, and rubber mats on coastal sand, demonstrate SPIBOT's ability to efficiently and reliably retrieve targets. SPIBOT swiftly converges on the target and completes its mission, even when dealing with irregular initial states and noisy information introduced by the drone.<|reference_end|>
|
arxiv
|
@article{kang2024spibot:,
title={SPIBOT: A Drone-Tethered Mobile Gripper for Robust Aerial Object
Retrieval in Dynamic Environments},
author={Gyuree Kang, Ozan G"unec{s}, Seungwook Lee, Maulana Bisyir Azhari,
and David Hyunchul Shim},
journal={arXiv preprint arXiv:2409.16181},
year={2024},
archivePrefix={arXiv},
eprint={2409.16181},
primaryClass={cs.RO}
}
|
kang2024spibot:
|
arxiv-661426
|
2409.16182
|
TiM4Rec: An Efficient Sequential Recommendation Model Based on Time-Aware Structured State Space Duality Model
|
<|reference_start|>TiM4Rec: An Efficient Sequential Recommendation Model Based on Time-Aware Structured State Space Duality Model: Sequential recommendation represents a pivotal branch of recommendation systems, centered around dynamically analyzing the sequential dependencies between user preferences and their interactive behaviors. Despite the Transformer architecture-based models achieving commendable performance within this domain, their quadratic computational complexity relative to the sequence dimension impedes efficient modeling. In response, the innovative Mamba architecture, characterized by linear computational complexity, has emerged. Mamba4Rec further pioneers the application of Mamba in sequential recommendation. Nonetheless, Mamba 1's hardware-aware algorithm struggles to efficiently leverage modern matrix computational units, which lead to the proposal of the improved State Space Duality (SSD), also known as Mamba 2. While the SSD4Rec successfully adapts the SSD architecture for sequential recommendation, showing promising results in high-dimensional contexts, it suffers significant performance drops in low-dimensional scenarios crucial for pure ID sequential recommendation tasks. Addressing this challenge, we propose a novel sequential recommendation backbone model, TiM4Rec, which ameliorates the low-dimensional performance loss of the SSD architecture while preserving its computational efficiency. Drawing inspiration from TiSASRec, we develop a time-aware enhancement method tailored for the linear computation demands of the SSD architecture, thereby enhancing its adaptability and achieving state-of-the-art (SOTA) performance in both low and high-dimensional modeling. The code for our model is publicly accessible at https://github.com/AlwaysFHao/TiM4Rec.<|reference_end|>
|
arxiv
|
@article{fan2024tim4rec:,
title={TiM4Rec: An Efficient Sequential Recommendation Model Based on
Time-Aware Structured State Space Duality Model},
author={Hao Fan, Mengyi Zhu, Yanrong Hu, Hailin Feng, Zhijie He, Hongjiu Liu,
Qingyang Liu},
journal={arXiv preprint arXiv:2409.16182},
year={2024},
archivePrefix={arXiv},
eprint={2409.16182},
primaryClass={cs.IR}
}
|
fan2024tim4rec:
|
arxiv-661427
|
2409.16183
|
Expert-level vision-language foundation model for real-world radiology and comprehensive evaluation
|
<|reference_start|>Expert-level vision-language foundation model for real-world radiology and comprehensive evaluation: Radiology is a vital and complex component of modern clinical workflow and covers many tasks. Recently, vision-language (VL) foundation models in medicine have shown potential in processing multimodal information, offering a unified solution for various radiology tasks. However, existing studies either pre-trained VL models on natural data or did not fully integrate vision-language architecture and pretraining, often neglecting the unique multimodal complexity in radiology images and their textual contexts. Additionally, their practical applicability in real-world scenarios remains underexplored. Here, we present RadFound, a large and open-source vision-language foundation model tailored for radiology, that is trained on the most extensive dataset of over 8.1 million images and 250,000 image-text pairs, covering 19 major organ systems and 10 imaging modalities. To establish expert-level multimodal perception and generation capabilities, RadFound introduces an enhanced vision encoder to capture intra-image local features and inter-image contextual information, and a unified cross-modal learning design tailored to radiology. To fully assess the models' capability, we construct a benchmark, RadVLBench, including radiology interpretation tasks like medical vision-language question-answering, as well as text generation tasks ranging from captioning to report generation. We also propose a human evaluation framework. When evaluated on the real-world benchmark involving three representative modalities, 2D images (chest X-rays), multi-view images (mammograms), and 3D images (thyroid CT scans), RadFound significantly outperforms other VL foundation models on both quantitative metrics and human evaluation. In summary, the development of RadFound represents an advancement in radiology generalists, demonstrating broad applicability potential for integration into clinical workflows.<|reference_end|>
|
arxiv
|
@article{liu2024expert-level,
title={Expert-level vision-language foundation model for real-world radiology
and comprehensive evaluation},
author={Xiaohong Liu, Guoxing Yang, Yulin Luo, Jiaji Mao, Xiang Zhang, Ming
Gao, Shanghang Zhang, Jun Shen, Guangyu Wang},
journal={arXiv preprint arXiv:2409.16183},
year={2024},
archivePrefix={arXiv},
eprint={2409.16183},
primaryClass={cs.CV}
}
|
liu2024expert-level
|
arxiv-661428
|
2409.16185
|
Refactoring-aware Block Tracking in Commit History
|
<|reference_start|>Refactoring-aware Block Tracking in Commit History: Tracking statements in the commit history of a project is in many cases useful for supporting various software maintenance, comprehension, and evolution tasks. A high level of accuracy can facilitate the adoption of code tracking tools by developers and researchers. To this end, we propose CodeTracker, a refactoring-aware tool that can generate the commit change history for code blocks. To evaluate its accuracy, we created an oracle with the change history of 1,280 code blocks found within 200 methods from 20 popular open-source project repositories. Moreover, we created a baseline based on the current state-of-the-art Abstract Syntax Tree diff tool, namely GumTree 3.0, in order to compare the accuracy and execution time. Our experiments have shown that CodeTracker has a considerably higher precision/recall and faster execution time than the GumTree-based baseline, and can extract the complete change history of a code block with a precision and recall of 99.5% within 3.6 seconds on average.<|reference_end|>
|
arxiv
|
@article{hasan2024refactoring-aware,
title={Refactoring-aware Block Tracking in Commit History},
author={Mohammed Tayeeb Hasan, Nikolaos Tsantalis, Pouria Alikhanifard},
journal={arXiv preprint arXiv:2409.16185},
year={2024},
archivePrefix={arXiv},
eprint={2409.16185},
primaryClass={cs.SE}
}
|
hasan2024refactoring-aware
|
arxiv-661429
|
2409.16186
|
System-Level Performance Metrics Sensitivity of an Electrified Heavy-Duty Mobile Manipulator
|
<|reference_start|>System-Level Performance Metrics Sensitivity of an Electrified Heavy-Duty Mobile Manipulator: The shift to electric and hybrid powertrains in vehicular systems has propelled advancements in mobile robotics and autonomous vehicles. This paper examines the sensitivity of key performance metrics in a electrified heavy-duty mobile manipulator (HDMM) driven by electromechanical linear actuators (EMLAs) powered by permanent magnet synchronous motors (PMSMs). The study evaluates power delivery, force dynamics, energy consumption, and overall efficiency of the actuation mechanisms. By computing partial derivatives (PD) with respect to the payload mass at the tool center point (TCP), it provides insights into these factors under various loading conditions. This research aids in the appropriate choice or design of EMLAs for HDMM electrification, addressing actuation mechanism selection challenge in vehicular system with mounted manipulator and determines the necessary battery capacity requirements.<|reference_end|>
|
arxiv
|
@article{bahari2024system-level,
title={System-Level Performance Metrics Sensitivity of an Electrified
Heavy-Duty Mobile Manipulator},
author={Mohammad Bahari, Alvaro Paz, and Jouni Mattila},
journal={arXiv preprint arXiv:2409.16186},
year={2024},
archivePrefix={arXiv},
eprint={2409.16186},
primaryClass={eess.SY cs.SY}
}
|
bahari2024system-level
|
arxiv-661430
|
2409.16190
|
A Universal Multi-Vehicle Cooperative Decision-Making Approach in Structured Roads by Mixed-Integer Potential Game
|
<|reference_start|>A Universal Multi-Vehicle Cooperative Decision-Making Approach in Structured Roads by Mixed-Integer Potential Game: Due to the intricate of real-world road topologies and the inherent complexity of autonomous vehicles, cooperative decision-making for multiple connected autonomous vehicles (CAVs) remains a significant challenge. Currently, most methods are tailored to specific scenarios, and the efficiency of existing optimization and learning methods applicable to diverse scenarios is hindered by the complexity of modeling and data dependency, which limit their real-world applicability. To address these issues, this paper proposes a universal multi-vehicle cooperative decision-making method in structured roads with game theory. We transform the decision-making problem into a graph path searching problem within a way-point graph framework. The problem is formulated as a mixed-integer linear programming problem (MILP) first and transformed into a mixed-integer potential game (MIPG), which reduces the scope of problem and ensures that no player needs to sacrifice for the overall cost. Two Gauss-Seidel algorithms for cooperative decision-making are presented to solve the MIPG problem and obtain the Nash equilibrium solutions. Specifically, the sequential Gauss-Seidel algorithm for cooperative decision-making considers the varying degrees of CAV interactions and flexibility in adjustment strategies to determine optimization priorities, which reduces the frequency of ineffective optimizations. Experimental evaluations across various urban traffic scenarios with different topological structures demonstrate the effectiveness and efficiency of the proposed method compared with MILP and comparisons of different optimization sequences validate the efficiency of the sequential Gauss-Seidel algorithm for cooperative decision-making.<|reference_end|>
|
arxiv
|
@article{meng2024a,
title={A Universal Multi-Vehicle Cooperative Decision-Making Approach in
Structured Roads by Mixed-Integer Potential Game},
author={Chengzhen Meng, Zhenmin Huang, and Jun Ma},
journal={arXiv preprint arXiv:2409.16190},
year={2024},
archivePrefix={arXiv},
eprint={2409.16190},
primaryClass={cs.RO}
}
|
meng2024a
|
arxiv-661431
|
2409.16191
|
HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models
|
<|reference_start|>HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models: In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities in various tasks (e.g., long-context understanding), and many benchmarks have been proposed. However, we observe that long text generation capabilities are not well investigated. Therefore, we introduce the Hierarchical Long Text Generation Benchmark (HelloBench), a comprehensive, in-the-wild, and open-ended benchmark to evaluate LLMs' performance in generating long text. Based on Bloom's Taxonomy, HelloBench categorizes long text generation tasks into five subtasks: open-ended QA, summarization, chat, text completion, and heuristic text generation. Besides, we propose Hierarchical Long Text Evaluation (HelloEval), a human-aligned evaluation method that significantly reduces the time and effort required for human evaluation while maintaining a high correlation with human evaluation. We have conducted extensive experiments across around 30 mainstream LLMs and observed that the current LLMs lack long text generation capabilities. Specifically, first, regardless of whether the instructions include explicit or implicit length constraints, we observe that most LLMs cannot generate text that is longer than 4000 words. Second, we observe that while some LLMs can generate longer text, many issues exist (e.g., severe repetition and quality degradation). Third, to demonstrate the effectiveness of HelloEval, we compare HelloEval with traditional metrics (e.g., ROUGE, BLEU, etc.) and LLM-as-a-Judge methods, which show that HelloEval has the highest correlation with human evaluation. We release our code in https://github.com/Quehry/HelloBench.<|reference_end|>
|
arxiv
|
@article{que2024hellobench:,
title={HelloBench: Evaluating Long Text Generation Capabilities of Large
Language Models},
author={Haoran Que, Feiyu Duan, Liqun He, Yutao Mou, Wangchunshu Zhou, Jiaheng
Liu, Wenge Rong, Zekun Moore Wang, Jian Yang, Ge Zhang, Junran Peng,
Zhaoxiang Zhang, Songyang Zhang, Kai Chen},
journal={arXiv preprint arXiv:2409.16191},
year={2024},
archivePrefix={arXiv},
eprint={2409.16191},
primaryClass={cs.CL}
}
|
que2024hellobench:
|
arxiv-661432
|
2409.16195
|
On the tractability and approximability of non-submodular cardinality-based $s$-$t$ cut problems in hypergraphs
|
<|reference_start|>On the tractability and approximability of non-submodular cardinality-based $s$-$t$ cut problems in hypergraphs: A minimum $s$-$t$ cut in a hypergraph is a bipartition of vertices that separates two nodes $s$ and $t$ while minimizing a hypergraph cut function. The cardinality-based hypergraph cut function assigns a cut penalty to each hyperedge based on the number of nodes in the hyperedge that are on each side of the split. Previous work has shown that when hyperedge cut penalties are submodular, this problem can be reduced to a graph $s$-$t$ cut problem and hence solved in polynomial time. NP-hardness results are also known for a certain class of non-submodular penalties, though the complexity remained open in many parameter regimes. In this paper we highlight and leverage a connection to Valued Constraint Satisfaction Problems to show that the problem is NP-hard for all non-submodular hyperedge cut penalty, except for one trivial case where a 0-cost solution is always possible. We then turn our attention to approximation strategies and approximation hardness results in the non-submodular case. We design a strategy for projecting non-submodular penalties to the submodular region, which we prove gives the optimal approximation among all such projection strategies. We also show that alternative approaches are unlikely to provide improved guarantees, by showing it is UGC-hard to obtain a better approximation in the simplest setting where all hyperedges have exactly 4 nodes.<|reference_end|>
|
arxiv
|
@article{bengali2024on,
title={On the tractability and approximability of non-submodular
cardinality-based $s$-$t$ cut problems in hypergraphs},
author={Vedangi Bengali and Nate Veldt},
journal={arXiv preprint arXiv:2409.16195},
year={2024},
archivePrefix={arXiv},
eprint={2409.16195},
primaryClass={cs.DS cs.CC cs.DM}
}
|
bengali2024on
|
arxiv-661433
|
2409.16197
|
Second Order Bounds for Contextual Bandits with Function Approximation
|
<|reference_start|>Second Order Bounds for Contextual Bandits with Function Approximation: Many works have developed algorithms no-regret algorithms for contextual bandits with function approximation, where the mean rewards over context-action pairs belongs to a function class. Although there are many approaches to this problem, one that has gained in importance is the use of algorithms based on the optimism principle such as optimistic least squares. It can be shown the regret of this algorithm scales as square root of the product of the eluder dimension (a statistical measure of the complexity of the function class), the logarithm of the function class size and the time horizon. Unfortunately, even if the variance of the measurement noise of the rewards at each time is changing and is very small, the regret of the optimistic least squares algorithm scales with square root of the time horizon. In this work we are the first to develop algorithms that satisfy regret bounds of scaling not with the square root of the time horizon, but the square root of the sum of the measurement variances in the setting of contextual bandits with function approximation when the variances are unknown. These bounds generalize existing techniques for deriving second order bounds in contextual linear problems.<|reference_end|>
|
arxiv
|
@article{pacchiano2024second,
title={Second Order Bounds for Contextual Bandits with Function Approximation},
author={Aldo Pacchiano},
journal={arXiv preprint arXiv:2409.16197},
year={2024},
archivePrefix={arXiv},
eprint={2409.16197},
primaryClass={cs.LG cs.AI stat.ML}
}
|
pacchiano2024second
|
arxiv-661434
|
2409.16198
|
Leveraging Estimated Transferability Over Human Intuition for Model Selection in Text Ranking
|
<|reference_start|>Leveraging Estimated Transferability Over Human Intuition for Model Selection in Text Ranking: Text ranking has witnessed significant advancements, attributed to the utilization of dual-encoder enhanced by Pre-trained Language Models (PLMs). Given the proliferation of available PLMs, selecting the most effective one for a given dataset has become a non-trivial challenge. As a promising alternative to human intuition and brute-force fine-tuning, Transferability Estimation (TE) has emerged as an effective approach to model selection. However, current TE methods are primarily designed for classification tasks, and their estimated transferability may not align well with the objectives of text ranking. To address this challenge, we propose to compute the expected rank as transferability, explicitly reflecting the model's ranking capability. Furthermore, to mitigate anisotropy and incorporate training dynamics, we adaptively scale isotropic sentence embeddings to yield an accurate expected rank score. Our resulting method, Adaptive Ranking Transferability (AiRTran), can effectively capture subtle differences between models. On challenging model selection scenarios across various text ranking datasets, it demonstrates significant improvements over previous classification-oriented TE methods, human intuition, and ChatGPT with minor time consumption.<|reference_end|>
|
arxiv
|
@article{bai2024leveraging,
title={Leveraging Estimated Transferability Over Human Intuition for Model
Selection in Text Ranking},
author={Jun Bai, Zhuofan Chen, Zhenzi Li, Hanhua Hong, Jianfei Zhang, Chen Li,
Chenghua Lin, Wenge Rong},
journal={arXiv preprint arXiv:2409.16198},
year={2024},
archivePrefix={arXiv},
eprint={2409.16198},
primaryClass={cs.AI}
}
|
bai2024leveraging
|
arxiv-661435
|
2409.16200
|
Upper-body free-breathing Magnetic Resonance Fingerprinting applied to the quantification of water T1 and fat fraction
|
<|reference_start|>Upper-body free-breathing Magnetic Resonance Fingerprinting applied to the quantification of water T1 and fat fraction: Over the past decade, Magnetic Resonance Fingerprinting (MRF) has emerged as an efficient paradigm for the rapid and simultaneous quantification of multiple MRI parameters, including fat fraction (FF), water T1 ($T1_{H2O}$), water T2 ($T2_{H2O}$), and fat T1 ($T1_{fat}$). These parameters serve as promising imaging biomarkers in various anatomical targets such as the heart, liver, and skeletal muscles. However, measuring these parameters in the upper body poses challenges due to physiological motion, particularly respiratory motion. In this work, we propose a novel approach, motion-corrected (MoCo) MRF T1-FF, which estimates the motion field using an optimized preliminary motion scan and uses it to correct the MRF acquisition data before dictionary search for reconstructing motion-corrected FF and $T1_{H2O}$ parametric maps of the upper-body region. We validated this framework using an $\textit{in vivo}$ dataset comprising ten healthy volunteers and a 10-year-old boy with Duchenne muscular dystrophy. At the ROI level, in regions minimally affected by motion, no significant bias was observed between the uncorrected and MoCo reconstructions for FF (mean difference of -0.7%) and $T1_{H2O}$ (-4.9 ms) values. Moreover, MoCo MRF T1-FF significantly reduced the standard deviations of distributions assessed in these regions, indicating improved precision. Notably, in regions heavily affected by motion, such as respiratory muscles, liver, and kidneys, the MRF parametric maps exhibited a marked reduction in motion blurring and streaking artifacts after motion correction. Furthermore, the diaphragm was consistently discernible on parametric maps after motion correction. This approach lays the groundwork for the joint 3D quantification of FF and $T1_{H2O}$ in regions that are rarely studied, such as the respiratory muscles, particularly the intercostal muscles and diaphragm.<|reference_end|>
|
arxiv
|
@article{slioussarenko2024upper-body,
title={Upper-body free-breathing Magnetic Resonance Fingerprinting applied to
the quantification of water T1 and fat fraction},
author={Constantin Slioussarenko, Pierre-Yves Baudin, Marc Lapert, Benjamin
Marty},
journal={arXiv preprint arXiv:2409.16200},
year={2024},
archivePrefix={arXiv},
eprint={2409.16200},
primaryClass={eess.IV cs.CV}
}
|
slioussarenko2024upper-body
|
arxiv-661436
|
2409.16202
|
CJEval: A Benchmark for Assessing Large Language Models Using Chinese Junior High School Exam Data
|
<|reference_start|>CJEval: A Benchmark for Assessing Large Language Models Using Chinese Junior High School Exam Data: Online education platforms have significantly transformed the dissemination of educational resources by providing a dynamic and digital infrastructure. With the further enhancement of this transformation, the advent of Large Language Models (LLMs) has elevated the intelligence levels of these platforms. However, current academic benchmarks provide limited guidance for real-world industry scenarios. This limitation arises because educational applications require more than mere test question responses. To bridge this gap, we introduce CJEval, a benchmark based on Chinese Junior High School Exam Evaluations. CJEval consists of 26,136 samples across four application-level educational tasks covering ten subjects. These samples include not only questions and answers but also detailed annotations such as question types, difficulty levels, knowledge concepts, and answer explanations. By utilizing this benchmark, we assessed LLMs' potential applications and conducted a comprehensive analysis of their performance by fine-tuning on various educational tasks. Extensive experiments and discussions have highlighted the opportunities and challenges of applying LLMs in the field of education.<|reference_end|>
|
arxiv
|
@article{zhang2024cjeval:,
title={CJEval: A Benchmark for Assessing Large Language Models Using Chinese
Junior High School Exam Data},
author={Qian-Wen Zhang, Haochen Wang, Fang Li, Siyu An, Lingfeng Qiao,
Liangcai Gao, Di Yin, Xing Sun},
journal={arXiv preprint arXiv:2409.16202},
year={2024},
archivePrefix={arXiv},
eprint={2409.16202},
primaryClass={cs.AI}
}
|
zhang2024cjeval:
|
arxiv-661437
|
2409.16203
|
Facial Expression-Enhanced TTS: Combining Face Representation and Emotion Intensity for Adaptive Speech
|
<|reference_start|>Facial Expression-Enhanced TTS: Combining Face Representation and Emotion Intensity for Adaptive Speech: We propose FEIM-TTS, an innovative zero-shot text-to-speech (TTS) model that synthesizes emotionally expressive speech, aligned with facial images and modulated by emotion intensity. Leveraging deep learning, FEIM-TTS transcends traditional TTS systems by interpreting facial cues and adjusting to emotional nuances without dependence on labeled datasets. To address sparse audio-visual-emotional data, the model is trained using LRS3, CREMA-D, and MELD datasets, demonstrating its adaptability. FEIM-TTS's unique capability to produce high-quality, speaker-agnostic speech makes it suitable for creating adaptable voices for virtual characters. Moreover, FEIM-TTS significantly enhances accessibility for individuals with visual impairments or those who have trouble seeing. By integrating emotional nuances into TTS, our model enables dynamic and engaging auditory experiences for webcomics, allowing visually impaired users to enjoy these narratives more fully. Comprehensive evaluation evidences its proficiency in modulating emotion and intensity, advancing emotional speech synthesis and accessibility. Samples are available at: https://feim-tts.github.io/.<|reference_end|>
|
arxiv
|
@article{chu2024facial,
title={Facial Expression-Enhanced TTS: Combining Face Representation and
Emotion Intensity for Adaptive Speech},
author={Yunji Chu, Yunseob Shim, and Unsang Park},
journal={arXiv preprint arXiv:2409.16203},
year={2024},
archivePrefix={arXiv},
eprint={2409.16203},
primaryClass={cs.SD cs.AI eess.AS}
}
|
chu2024facial
|
arxiv-661438
|
2409.16204
|
AUGUR, A flexible and efficient optimization algorithm for identification of optimal adsorption sites
|
<|reference_start|>AUGUR, A flexible and efficient optimization algorithm for identification of optimal adsorption sites: In this paper, we propose a novel flexible optimization pipeline for determining the optimal adsorption sites, named AUGUR (Aware of Uncertainty Graph Unit Regression). Our model combines graph neural networks and Gaussian processes to create a flexible, efficient, symmetry-aware, translation, and rotation-invariant predictor with inbuilt uncertainty quantification. This predictor is then used as a surrogate for a data-efficient Bayesian Optimization scheme to determine the optimal adsorption positions. This pipeline determines the optimal position of large and complicated clusters with far fewer iterations than current state-of-the-art approaches. Further, it does not rely on hand-crafted features and can be seamlessly employed on any molecule without any alterations. Additionally, the pooling properties of graphs allow for the processing of molecules of different sizes by the same model. This allows the energy prediction of computationally demanding systems by a model trained on comparatively smaller and less expensive ones<|reference_end|>
|
arxiv
|
@article{kouroudis2024augur,,
title={AUGUR, A flexible and efficient optimization algorithm for
identification of optimal adsorption sites},
author={Ioannis Kouroudis, Poonam, Neel Misciaci, Felix Mayr, Leon M"uller,
Zhaosu Gu, and Alessio Gagliardi},
journal={arXiv preprint arXiv:2409.16204},
year={2024},
archivePrefix={arXiv},
eprint={2409.16204},
primaryClass={physics.chem-ph cs.LG}
}
|
kouroudis2024augur,
|
arxiv-661439
|
2409.16205
|
Segmentation Strategies in Deep Learning for Prostate Cancer Diagnosis: A Comparative Study of Mamba, SAM, and YOLO
|
<|reference_start|>Segmentation Strategies in Deep Learning for Prostate Cancer Diagnosis: A Comparative Study of Mamba, SAM, and YOLO: Accurate segmentation of prostate cancer histopathology images is crucial for diagnosis and treatment planning. This study presents a comparative analysis of three deep learning-based methods, Mamba, SAM, and YOLO, for segmenting prostate cancer histopathology images. We evaluated the performance of these models on two comprehensive datasets, Gleason 2019 and SICAPv2, using Dice score, precision, and recall metrics. Our results show that the High-order Vision Mamba UNet (H-vmunet) model outperforms the other two models, achieving the highest scores across all metrics on both datasets. The H-vmunet model's advanced architecture, which integrates high-order visual state spaces and 2D-selective-scan operations, enables efficient and sensitive lesion detection across different scales. Our study demonstrates the potential of the H-vmunet model for clinical applications and highlights the importance of robust validation and comparison of deep learning-based methods for medical image analysis. The findings of this study contribute to the development of accurate and reliable computer-aided diagnosis systems for prostate cancer. The code is available at http://github.com/alibdz/prostate-segmentation.<|reference_end|>
|
arxiv
|
@article{badiezadeh2024segmentation,
title={Segmentation Strategies in Deep Learning for Prostate Cancer Diagnosis:
A Comparative Study of Mamba, SAM, and YOLO},
author={Ali Badiezadeh, Amin Malekmohammadi, Seyed Mostafa Mirhassani, Parisa
Gifani, Majid Vafaeezadeh},
journal={arXiv preprint arXiv:2409.16205},
year={2024},
archivePrefix={arXiv},
eprint={2409.16205},
primaryClass={cs.CV}
}
|
badiezadeh2024segmentation
|
arxiv-661440
|
2409.16208
|
Context-Based Meta Reinforcement Learning for Robust and Adaptable Peg-in-Hole Assembly Tasks
|
<|reference_start|>Context-Based Meta Reinforcement Learning for Robust and Adaptable Peg-in-Hole Assembly Tasks: Peg-in-hole assembly in unknown environments is a challenging task due to onboard sensor errors, which result in uncertainty and variations in task parameters such as the hole position and orientation. Meta Reinforcement Learning (Meta RL) has been proposed to mitigate this problem as it learns how to quickly adapt to new tasks with different parameters. However, previous approaches either depend on a sample-inefficient procedure or human demonstrations to perform the task in the real world. Our work modifies the data used by the Meta RL agent and uses simple features that can be easily measured in the real world even with an uncalibrated camera. We further adapt the Meta RL agent to use data from a force/torque sensor, instead of the camera, to perform the assembly, using a small amount of training data. Finally, we propose a fine-tuning method that consistently and safely adapts to out-of-distribution tasks with parameters that differ by a factor of 10 from the training tasks. Our results demonstrate that the proposed data modification significantly enhances the training and adaptation efficiency and enables the agent to achieve 100% success in tasks with different hole positions and orientations. Experiments on a real robot confirm that both camera- and force/torque sensor-equipped agents achieve 100% success in tasks with unknown hole positions, matching their simulation performance and validating the approach's robustness and applicability. Compared to the previous work with sample-inefficient adaptation, our proposed methods are 10 times more sample-efficient in the real-world tasks.<|reference_end|>
|
arxiv
|
@article{shokry2024context-based,
title={Context-Based Meta Reinforcement Learning for Robust and Adaptable
Peg-in-Hole Assembly Tasks},
author={Ahmed Shokry, Walid Gomaa, Tobias Zaenker, Murad Dawood, Shady A.
Maged, Mohammed I. Awad, Maren Bennewitz},
journal={arXiv preprint arXiv:2409.16208},
year={2024},
archivePrefix={arXiv},
eprint={2409.16208},
primaryClass={cs.RO}
}
|
shokry2024context-based
|
arxiv-661441
|
2409.16209
|
LLMCount: Enhancing Stationary mmWave Detection with Multimodal-LLM
|
<|reference_start|>LLMCount: Enhancing Stationary mmWave Detection with Multimodal-LLM: Millimeter wave sensing provides people with the capability of sensing the surrounding crowds in a non-invasive and privacy-preserving manner, which holds huge application potential. However, detecting stationary crowds remains challenging due to several factors such as minimal movements (like breathing or casual fidgets), which can be easily treated as noise clusters during data collection and consequently filtered in the following processing procedures. Additionally, the uneven distribution of signal power due to signal power attenuation and interferences resulting from external reflectors or absorbers further complicates accurate detection. To address these challenges and enable stationary crowd detection across various application scenarios requiring specialized domain adaption, we introduce LLMCount, the first system to harness the capabilities of large-language models (LLMs) to enhance crowd detection performance. By exploiting the decision-making capability of LLM, we can successfully compensate the signal power to acquire a uniform distribution and thereby achieve a detection with higher accuracy. To assess the system's performance, comprehensive evaluations are conducted under diversified scenarios like hall, meeting room, and cinema. The evaluation results show that our proposed approach reaches high detection accuracy with lower overall latency compared with previous methods.<|reference_end|>
|
arxiv
|
@article{li2024llmcount:,
title={LLMCount: Enhancing Stationary mmWave Detection with Multimodal-LLM},
author={Boyan Li, Shengyi Ding, Deen Ma, Yixuan Wu, Hongjie Liao, Kaiyuan Hu},
journal={arXiv preprint arXiv:2409.16209},
year={2024},
archivePrefix={arXiv},
eprint={2409.16209},
primaryClass={cs.CV}
}
|
li2024llmcount:
|
arxiv-661442
|
2409.16211
|
MaskBit: Embedding-free Image Generation via Bit Tokens
|
<|reference_start|>MaskBit: Embedding-free Image Generation via Bit Tokens: Masked transformer models for class-conditional image generation have become a compelling alternative to diffusion models. Typically comprising two stages - an initial VQGAN model for transitioning between latent space and image space, and a subsequent Transformer model for image generation within latent space - these frameworks offer promising avenues for image synthesis. In this study, we present two primary contributions: Firstly, an empirical and systematic examination of VQGANs, leading to a modernized VQGAN. Secondly, a novel embedding-free generation network operating directly on bit tokens - a binary quantized representation of tokens with rich semantics. The first contribution furnishes a transparent, reproducible, and high-performing VQGAN model, enhancing accessibility and matching the performance of current state-of-the-art methods while revealing previously undisclosed details. The second contribution demonstrates that embedding-free image generation using bit tokens achieves a new state-of-the-art FID of 1.52 on the ImageNet 256x256 benchmark, with a compact generator model of mere 305M parameters.<|reference_end|>
|
arxiv
|
@article{weber2024maskbit:,
title={MaskBit: Embedding-free Image Generation via Bit Tokens},
author={Mark Weber, Lijun Yu, Qihang Yu, Xueqing Deng, Xiaohui Shen, Daniel
Cremers, Liang-Chieh Chen},
journal={arXiv preprint arXiv:2409.16211},
year={2024},
archivePrefix={arXiv},
eprint={2409.16211},
primaryClass={cs.CV cs.LG}
}
|
weber2024maskbit:
|
arxiv-661443
|
2409.16213
|
Deep Learning for Precision Agriculture: Post-Spraying Evaluation and Deposition Estimation
|
<|reference_start|>Deep Learning for Precision Agriculture: Post-Spraying Evaluation and Deposition Estimation: Precision spraying evaluation requires automation primarily in post-spraying imagery. In this paper we propose an eXplainable Artificial Intelligence (XAI) computer vision pipeline to evaluate a precision spraying system post-spraying without the need for traditional agricultural methods. The developed system can semantically segment potential targets such as lettuce, chickweed, and meadowgrass and correctly identify if targets have been sprayed. Furthermore, this pipeline evaluates using a domain-specific Weakly Supervised Deposition Estimation task, allowing for class-specific quantification of spray deposit weights in {\mu}L. Estimation of coverage rates of spray deposition in a class-wise manner allows for further understanding of effectiveness of precision spraying systems. Our study evaluates different Class Activation Mapping techniques, namely AblationCAM and ScoreCAM, to determine which is more effective and interpretable for these tasks. In the pipeline, inference-only feature fusion is used to allow for further interpretability and to enable the automation of precision spraying evaluation post-spray. Our findings indicate that a Fully Convolutional Network with an EfficientNet-B0 backbone and inference-only feature fusion achieves an average absolute difference in deposition values of 156.8 {\mu}L across three classes in our test set. The dataset curated in this paper is publicly available at https://github.com/Harry-Rogers/PSIE<|reference_end|>
|
arxiv
|
@article{rogers2024deep,
title={Deep Learning for Precision Agriculture: Post-Spraying Evaluation and
Deposition Estimation},
author={Harry Rogers, Tahmina Zebin, Grzegorz Cielniak, Beatriz De La Iglesia,
Ben Magri},
journal={arXiv preprint arXiv:2409.16213},
year={2024},
archivePrefix={arXiv},
eprint={2409.16213},
primaryClass={cs.CV cs.LG}
}
|
rogers2024deep
|
arxiv-661444
|
2409.16214
|
TE-PINN: Quaternion-Based Orientation Estimation using Transformer-Enhanced Physics-Informed Neural Networks
|
<|reference_start|>TE-PINN: Quaternion-Based Orientation Estimation using Transformer-Enhanced Physics-Informed Neural Networks: This paper introduces a Transformer-Enhanced Physics-Informed Neural Network (TE-PINN) designed for accurate quaternion-based orientation estimation in high-dynamic environments, particularly within the field of robotics. By integrating transformer networks with physics-informed learning, our approach innovatively captures temporal dependencies in sensor data while enforcing the fundamental physical laws governing rotational motion. TE-PINN leverages a multi-head attention mechanism to handle sequential data from inertial sensors, such as accelerometers and gyroscopes, ensuring temporal consistency. Simultaneously, the model embeds quaternion kinematics and rigid body dynamics into the learning process, aligning the network's predictions with mechanical principles like Euler's laws of motion. The physics-informed loss function incorporates the dynamics of angular velocity and external forces, enhancing the network's ability to generalize in complex scenarios. Our experimental evaluation demonstrates that TE-PINN consistently outperforms traditional methods such as Extended Kalman Filters (EKF) and LSTM-based estimators, particularly in scenarios characterized by high angular velocities and noisy sensor data. The results show a significant reduction in mean quaternion error and improved gyroscope bias estimation compared to the state-of-the-art. An ablation study further isolates the contributions of both the transformer architecture and the physics-informed constraints, highlighting the synergistic effect of both components in improving model performance. The proposed model achieves real-time performance on embedded systems typical of mobile robots, offering a scalable and efficient solution for orientation estimation in autonomous systems.<|reference_end|>
|
arxiv
|
@article{golroudbari2024te-pinn:,
title={TE-PINN: Quaternion-Based Orientation Estimation using
Transformer-Enhanced Physics-Informed Neural Networks},
author={Arman Asgharpoor Golroudbari},
journal={arXiv preprint arXiv:2409.16214},
year={2024},
archivePrefix={arXiv},
eprint={2409.16214},
primaryClass={cs.RO cs.SY eess.SP eess.SY}
}
|
golroudbari2024te-pinn:
|
arxiv-661445
|
2409.16215
|
Tiny Robotics Dataset and Benchmark for Continual Object Detection
|
<|reference_start|>Tiny Robotics Dataset and Benchmark for Continual Object Detection: Detecting objects in mobile robotics is crucial for numerous applications, from autonomous navigation to inspection. However, robots are often required to perform tasks in different domains with respect to the training one and need to adapt to these changes. Tiny mobile robots, subject to size, power, and computational constraints, encounter even more difficulties in running and adapting these algorithms. Such adaptability, though, is crucial for real-world deployment, where robots must operate effectively in dynamic and unpredictable settings. In this work, we introduce a novel benchmark to evaluate the continual learning capabilities of object detection systems in tiny robotic platforms. Our contributions include: (i) Tiny Robotics Object Detection (TiROD), a comprehensive dataset collected using a small mobile robot, designed to test the adaptability of object detectors across various domains and classes; (ii) an evaluation of state-of-the-art real-time object detectors combined with different continual learning strategies on this dataset, providing detailed insights into their performance and limitations; and (iii) we publish the data and the code to replicate the results to foster continuous advancements in this field. Our benchmark results indicate key challenges that must be addressed to advance the development of robust and efficient object detection systems for tiny robotics.<|reference_end|>
|
arxiv
|
@article{pasti2024tiny,
title={Tiny Robotics Dataset and Benchmark for Continual Object Detection},
author={Francesco Pasti, Riccardo De Monte, Davide Dalle Pezze, Gian Antonio
Susto, Nicola Bellotto},
journal={arXiv preprint arXiv:2409.16215},
year={2024},
archivePrefix={arXiv},
eprint={2409.16215},
primaryClass={cs.RO cs.CV}
}
|
pasti2024tiny
|
arxiv-661446
|
2409.16217
|
Twinning Commercial Network Traces on Experimental Open RAN Platforms
|
<|reference_start|>Twinning Commercial Network Traces on Experimental Open RAN Platforms: While the availability of large datasets has been instrumental to advance fields like computer vision and natural language processing, this has not been the case in mobile networking. Indeed, mobile traffic data is often unavailable due to privacy or regulatory concerns. This problem becomes especially relevant in Open Radio Access Network (RAN), where artificial intelligence can potentially drive optimization and control of the RAN, but still lags behind due to the lack of training datasets. While substantial work has focused on developing testbeds that can accurately reflect production environments, the same level of effort has not been put into twinning the traffic that traverse such networks. To fill this gap, in this paper, we design a methodology to twin real-world cellular traffic traces in experimental Open RAN testbeds. We demonstrate our approach on the Colosseum Open RAN digital twin, and publicly release a large dataset (more than 500 hours and 450 GB) with PHY-, MAC-, and App-layer Key Performance Measurements (KPMs), and protocol stack logs. Our analysis shows that our dataset can be used to develop and evaluate a number of Open RAN use cases, including those with strict latency requirements.<|reference_end|>
|
arxiv
|
@article{bonati2024twinning,
title={Twinning Commercial Network Traces on Experimental Open RAN Platforms},
author={Leonardo Bonati, Ravis Shirkhani, Claudio Fiandrino, Stefano Maxenti,
Salvatore D'Oro, Michele Polese, Tommaso Melodia},
journal={arXiv preprint arXiv:2409.16217},
year={2024},
doi={10.1145/3636534.3697320},
archivePrefix={arXiv},
eprint={2409.16217},
primaryClass={cs.NI}
}
|
bonati2024twinning
|
arxiv-661447
|
2409.16218
|
Problem-oriented AutoML in Clustering
|
<|reference_start|>Problem-oriented AutoML in Clustering: The Problem-oriented AutoML in Clustering (PoAC) framework introduces a novel, flexible approach to automating clustering tasks by addressing the shortcomings of traditional AutoML solutions. Conventional methods often rely on predefined internal Clustering Validity Indexes (CVIs) and static meta-features, limiting their adaptability and effectiveness across diverse clustering tasks. In contrast, PoAC establishes a dynamic connection between the clustering problem, CVIs, and meta-features, allowing users to customize these components based on the specific context and goals of their task. At its core, PoAC employs a surrogate model trained on a large meta-knowledge base of previous clustering datasets and solutions, enabling it to infer the quality of new clustering pipelines and synthesize optimal solutions for unseen datasets. Unlike many AutoML frameworks that are constrained by fixed evaluation metrics and algorithm sets, PoAC is algorithm-agnostic, adapting seamlessly to different clustering problems without requiring additional data or retraining. Experimental results demonstrate that PoAC not only outperforms state-of-the-art frameworks on a variety of datasets but also excels in specific tasks such as data visualization, and highlight its ability to dynamically adjust pipeline configurations based on dataset complexity.<|reference_end|>
|
arxiv
|
@article{da silva2024problem-oriented,
title={Problem-oriented AutoML in Clustering},
author={Matheus Camilo da Silva, Gabriel Marques Tavares, Eric Medvet and
Sylvio Barbon Junior},
journal={arXiv preprint arXiv:2409.16218},
year={2024},
archivePrefix={arXiv},
eprint={2409.16218},
primaryClass={cs.LG cs.AI}
}
|
da silva2024problem-oriented
|
arxiv-661448
|
2409.16220
|
Towards Enhancing Linked Data Retrieval in Conversational UIs using Large Language Models
|
<|reference_start|>Towards Enhancing Linked Data Retrieval in Conversational UIs using Large Language Models: Despite the recent broad adoption of Large Language Models (LLMs) across various domains, their potential for enriching information systems in extracting and exploring Linked Data (LD) and Resource Description Framework (RDF) triplestores has not been extensively explored. This paper examines the integration of LLMs within existing systems, emphasising the enhancement of conversational user interfaces (UIs) and their capabilities for data extraction by producing more accurate SPARQL queries without the requirement for model retraining. Typically, conversational UI models necessitate retraining with the introduction of new datasets or updates, limiting their functionality as general-purpose extraction tools. Our approach addresses this limitation by incorporating LLMs into the conversational UI workflow, significantly enhancing their ability to comprehend and process user queries effectively. By leveraging the advanced natural language understanding capabilities of LLMs, our method improves RDF entity extraction within web systems employing conventional chatbots. This integration facilitates a more nuanced and context-aware interaction model, critical for handling the complex query patterns often encountered in RDF datasets and Linked Open Data (LOD) endpoints. The evaluation of this methodology shows a marked enhancement in system expressivity and the accuracy of responses to user queries, indicating a promising direction for future research in this area. This investigation not only underscores the versatility of LLMs in enhancing existing information systems but also sets the stage for further explorations into their potential applications within more specialised domains of web information systems.<|reference_end|>
|
arxiv
|
@article{mussa2024towards,
title={Towards Enhancing Linked Data Retrieval in Conversational UIs using
Large Language Models},
author={Omar Mussa, Omer Rana, Beno^it Goossens, Pablo Orozco-Terwengel and
Charith Perera},
journal={arXiv preprint arXiv:2409.16220},
year={2024},
archivePrefix={arXiv},
eprint={2409.16220},
primaryClass={cs.IR cs.AI cs.CL}
}
|
mussa2024towards
|
arxiv-661449
|
2409.16223
|
Fine-Tuning is Fine, if Calibrated
|
<|reference_start|>Fine-Tuning is Fine, if Calibrated: Fine-tuning is arguably the most straightforward way to tailor a pre-trained model (e.g., a foundation model) to downstream applications, but it also comes with the risk of losing valuable knowledge the model had learned in pre-training. For example, fine-tuning a pre-trained classifier capable of recognizing a large number of classes to master a subset of classes at hand is shown to drastically degrade the model's accuracy in the other classes it had previously learned. As such, it is hard to further use the fine-tuned model when it encounters classes beyond the fine-tuning data. In this paper, we systematically dissect the issue, aiming to answer the fundamental question, "What has been damaged in the fine-tuned model?" To our surprise, we find that the fine-tuned model neither forgets the relationship among the other classes nor degrades the features to recognize these classes. Instead, the fine-tuned model often produces more discriminative features for these other classes, even if they were missing during fine-tuning! {What really hurts the accuracy is the discrepant logit scales between the fine-tuning classes and the other classes}, implying that a simple post-processing calibration would bring back the pre-trained model's capability and at the same time unveil the feature improvement over all classes. We conduct an extensive empirical study to demonstrate the robustness of our findings and provide preliminary explanations underlying them, suggesting new directions for future theoretical analysis. Our code is available at https://github.com/OSU-MLB/Fine-Tuning-Is-Fine-If-Calibrated.<|reference_end|>
|
arxiv
|
@article{mai2024fine-tuning,
title={Fine-Tuning is Fine, if Calibrated},
author={Zheda Mai, Arpita Chowdhury, Ping Zhang, Cheng-Hao Tu, Hong-You Chen,
Vardaan Pahuja, Tanya Berger-Wolf, Song Gao, Charles Stewart, Yu Su, Wei-Lun
Chao},
journal={arXiv preprint arXiv:2409.16223},
year={2024},
archivePrefix={arXiv},
eprint={2409.16223},
primaryClass={cs.LG cs.AI cs.CV}
}
|
mai2024fine-tuning
|
arxiv-661450
|
2409.16225
|
VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection
|
<|reference_start|>VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection: Video anomaly detection (VAD) is a crucial task in video analysis and surveillance within computer vision. Currently, VAD is gaining attention with memory techniques that store the features of normal frames. The stored features are utilized for frame reconstruction, identifying an abnormality when a significant difference exists between the reconstructed and input frames. However, this approach faces several challenges due to the simultaneous optimization required for both the memory and encoder-decoder model. These challenges include increased optimization difficulty, complexity of implementation, and performance variability depending on the memory size. To address these challenges,we propose an effective memory method for VAD, called VideoPatchCore. Inspired by PatchCore, our approach introduces a structure that prioritizes memory optimization and configures three types of memory tailored to the characteristics of video data. This method effectively addresses the limitations of existing memory-based methods, achieving good performance comparable to state-of-the-art methods. Furthermore, our method requires no training and is straightforward to implement, making VAD tasks more accessible. Our code is available online at github.com/SkiddieAhn/Paper-VideoPatchCore.<|reference_end|>
|
arxiv
|
@article{ahn2024videopatchcore:,
title={VideoPatchCore: An Effective Method to Memorize Normality for Video
Anomaly Detection},
author={Sunghyun Ahn, Youngwan Jo, Kijung Lee, Sanghyun Park},
journal={arXiv preprint arXiv:2409.16225},
year={2024},
archivePrefix={arXiv},
eprint={2409.16225},
primaryClass={cs.CV}
}
|
ahn2024videopatchcore:
|
arxiv-661451
|
2409.16227
|
Low-degree Security of the Planted Random Subgraph Problem
|
<|reference_start|>Low-degree Security of the Planted Random Subgraph Problem: The planted random subgraph detection conjecture of Abram et al. (TCC 2023) asserts the pseudorandomness of a pair of graphs $(H, G)$, where $G$ is an Erdos-Renyi random graph on $n$ vertices, and $H$ is a random induced subgraph of $G$ on $k$ vertices. Assuming the hardness of distinguishing these two distributions (with two leaked vertices), Abram et al. construct communication-efficient, computationally secure (1) 2-party private simultaneous messages (PSM) and (2) secret sharing for forbidden graph structures. We prove the low-degree hardness of detecting planted random subgraphs all the way up to $k\leq n^{1 - \Omega(1)}$. This improves over Abram et al.'s analysis for $k \leq n^{1/2 - \Omega(1)}$. The hardness extends to $r$-uniform hypergraphs for constant $r$. Our analysis is tight in the distinguisher's degree, its advantage, and in the number of leaked vertices. Extending the constructions of Abram et al, we apply the conjecture towards (1) communication-optimal multiparty PSM protocols for random functions and (2) bit secret sharing with share size $(1 + \epsilon)\log n$ for any $\epsilon > 0$ in which arbitrary minimal coalitions of up to $r$ parties can reconstruct and secrecy holds against all unqualified subsets of up to $\ell = o(\epsilon \log n)^{1/(r-1)}$ parties.<|reference_end|>
|
arxiv
|
@article{bogdanov2024low-degree,
title={Low-degree Security of the Planted Random Subgraph Problem},
author={Andrej Bogdanov, Chris Jones, Alon Rosen, Ilias Zadik},
journal={arXiv preprint arXiv:2409.16227},
year={2024},
archivePrefix={arXiv},
eprint={2409.16227},
primaryClass={cs.CR cs.DS math.ST stat.TH}
}
|
bogdanov2024low-degree
|
arxiv-661452
|
2409.16228
|
Fast Extrinsic Calibration for Multiple Inertial Measurement Units in Visual-Inertial System
|
<|reference_start|>Fast Extrinsic Calibration for Multiple Inertial Measurement Units in Visual-Inertial System: In this paper, we propose a fast extrinsic calibration method for fusing multiple inertial measurement units (MIMU) to improve visual-inertial odometry (VIO) localization accuracy. Currently, data fusion algorithms for MIMU highly depend on the number of inertial sensors. Based on the assumption that extrinsic parameters between inertial sensors are perfectly calibrated, the fusion algorithm provides better localization accuracy with more IMUs, while neglecting the effect of extrinsic calibration error. Our method builds two non-linear least-squares problems to estimate the MIMU relative position and orientation separately, independent of external sensors and inertial noises online estimation. Then we give the general form of the virtual IMU (VIMU) method and propose its propagation on manifold. We perform our method on datasets, our self-made sensor board, and board with different IMUs, validating the superiority of our method over competing methods concerning speed, accuracy, and robustness. In the simulation experiment, we show that only fusing two IMUs with our calibration method to predict motion can rival nine IMUs. Real-world experiments demonstrate better localization accuracy of the VIO integrated with our calibration method and VIMU propagation on manifold.<|reference_end|>
|
arxiv
|
@article{yu2024fast,
title={Fast Extrinsic Calibration for Multiple Inertial Measurement Units in
Visual-Inertial System},
author={Youwei Yu, Yanqing Liu, Fengjie Fu, Sihan He, Dongchen Zhu, Lei Wang,
Xiaolin Zhang, and Jiamao Li},
journal={arXiv preprint arXiv:2409.16228},
year={2024},
doi={10.1109/ICRA48891.2023.10161187},
archivePrefix={arXiv},
eprint={2409.16228},
primaryClass={cs.RO}
}
|
yu2024fast
|
arxiv-661453
|
2409.16231
|
Predicting Deterioration in Mild Cognitive Impairment with Survival Transformers, Extreme Gradient Boosting and Cox Proportional Hazard Modelling
|
<|reference_start|>Predicting Deterioration in Mild Cognitive Impairment with Survival Transformers, Extreme Gradient Boosting and Cox Proportional Hazard Modelling: The paper proposes a novel approach of survival transformers and extreme gradient boosting models in predicting cognitive deterioration in individuals with mild cognitive impairment (MCI) using metabolomics data in the ADNI cohort. By leveraging advanced machine learning and transformer-based techniques applied in survival analysis, the proposed approach highlights the potential of these techniques for more accurate early detection and intervention in Alzheimer's dementia disease. This research also underscores the importance of non-invasive biomarkers and innovative modelling tools in enhancing the accuracy of dementia risk assessments, offering new avenues for clinical practice and patient care. A comprehensive Monte Carlo simulation procedure consisting of 100 repetitions of a nested cross-validation in which models were trained and evaluated, indicates that the survival machine learning models based on Transformer and XGBoost achieved the highest mean C-index performances, namely 0.85 and 0.8, respectively, and that they are superior to the conventional survival analysis Cox Proportional Hazards model which achieved a mean C-Index of 0.77. Moreover, based on the standard deviations of the C-Index performances obtained in the Monte Carlo simulation, we established that both survival machine learning models above are more stable than the conventional statistical model.<|reference_end|>
|
arxiv
|
@article{musto2024predicting,
title={Predicting Deterioration in Mild Cognitive Impairment with Survival
Transformers, Extreme Gradient Boosting and Cox Proportional Hazard Modelling},
author={Henry Musto, Daniel Stamate, Doina Logofatu, Daniel Stahl},
journal={arXiv preprint arXiv:2409.16231},
year={2024},
archivePrefix={arXiv},
eprint={2409.16231},
primaryClass={cs.LG cs.AI cs.NE}
}
|
musto2024predicting
|
arxiv-661454
|
2409.16235
|
EuroLLM: Multilingual Language Models for Europe
|
<|reference_start|>EuroLLM: Multilingual Language Models for Europe: The quality of open-weight LLMs has seen significant improvement, yet they remain predominantly focused on English. In this paper, we introduce the EuroLLM project, aimed at developing a suite of open-weight multilingual LLMs capable of understanding and generating text in all official European Union languages, as well as several additional relevant languages. We outline the progress made to date, detailing our data collection and filtering process, the development of scaling laws, the creation of our multilingual tokenizer, and the data mix and modeling configurations. Additionally, we release our initial models: EuroLLM-1.7B and EuroLLM-1.7B-Instruct and report their performance on multilingual general benchmarks and machine translation.<|reference_end|>
|
arxiv
|
@article{martins2024eurollm:,
title={EuroLLM: Multilingual Language Models for Europe},
author={Pedro Henrique Martins, Patrick Fernandes, Jo~ao Alves, Nuno M.
Guerreiro, Ricardo Rei, Duarte M. Alves, Jos'e Pombal, Amin Farajian, Manuel
Faysse, Mateusz Klimaszewski, Pierre Colombo, Barry Haddow, Jos'e G. C. de
Souza, Alexandra Birch, Andr'e F. T. Martins},
journal={arXiv preprint arXiv:2409.16235},
year={2024},
archivePrefix={arXiv},
eprint={2409.16235},
primaryClass={cs.CL}
}
|
martins2024eurollm:
|
arxiv-661455
|
2409.16238
|
Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules
|
<|reference_start|>Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules: Probabilistic logical models are a core component of neurosymbolic AI and are important models in their own right for tasks that require high explainability. Unlike neural networks, logical models are often handcrafted using domain expertise, making their development costly and prone to errors. While there are algorithms that learn logical models from data, they are generally prohibitively expensive, limiting their applicability in real-world settings. In this work, we introduce precision and recall for logical rules and define their composition as rule utility -- a cost-effective measure to evaluate the predictive power of logical models. Further, we introduce SPECTRUM, a scalable framework for learning logical models from relational data. Its scalability derives from a linear-time algorithm that mines recurrent structures in the data along with a second algorithm that, using the cheap utility measure, efficiently ranks rules built from these structures. Moreover, we derive theoretical guarantees on the utility of the learnt logical model. As a result, SPECTRUM learns more accurate logical models orders of magnitude faster than previous methods on real-world datasets.<|reference_end|>
|
arxiv
|
@article{feldstein2024efficiently,
title={Efficiently Learning Probabilistic Logical Models by Cheaply Ranking
Mined Rules},
author={Jonathan Feldstein, Dominic Phillips, Efthymia Tsamoura},
journal={arXiv preprint arXiv:2409.16238},
year={2024},
archivePrefix={arXiv},
eprint={2409.16238},
primaryClass={cs.AI}
}
|
feldstein2024efficiently
|
arxiv-661456
|
2409.16239
|
Label-Augmented Dataset Distillation
|
<|reference_start|>Label-Augmented Dataset Distillation: Traditional dataset distillation primarily focuses on image representation while often overlooking the important role of labels. In this study, we introduce Label-Augmented Dataset Distillation (LADD), a new dataset distillation framework enhancing dataset distillation with label augmentations. LADD sub-samples each synthetic image, generating additional dense labels to capture rich semantics. These dense labels require only a 2.5% increase in storage (ImageNet subsets) with significant performance benefits, providing strong learning signals. Our label generation strategy can complement existing dataset distillation methods for significantly enhancing their training efficiency and performance. Experimental results demonstrate that LADD outperforms existing methods in terms of computational overhead and accuracy. With three high-performance dataset distillation algorithms, LADD achieves remarkable gains by an average of 14.9% in accuracy. Furthermore, the effectiveness of our method is proven across various datasets, distillation hyperparameters, and algorithms. Finally, our method improves the cross-architecture robustness of the distilled dataset, which is important in the application scenario.<|reference_end|>
|
arxiv
|
@article{kang2024label-augmented,
title={Label-Augmented Dataset Distillation},
author={Seoungyoon Kang, Youngsun Lim and Hyunjung Shim},
journal={arXiv preprint arXiv:2409.16239},
year={2024},
archivePrefix={arXiv},
eprint={2409.16239},
primaryClass={cs.CV cs.AI}
}
|
kang2024label-augmented
|
arxiv-661457
|
2409.16241
|
LLM Echo Chamber: personalized and automated disinformation
|
<|reference_start|>LLM Echo Chamber: personalized and automated disinformation: Recent advancements have showcased the capabilities of Large Language Models like GPT4 and Llama2 in tasks such as summarization, translation, and content review. However, their widespread use raises concerns, particularly around the potential for LLMs to spread persuasive, humanlike misinformation at scale, which could significantly influence public opinion. This study examines these risks, focusing on LLMs ability to propagate misinformation as factual. To investigate this, we built the LLM Echo Chamber, a controlled digital environment simulating social media chatrooms, where misinformation often spreads. Echo chambers, where individuals only interact with like minded people, further entrench beliefs. By studying malicious bots spreading misinformation in this environment, we can better understand this phenomenon. We reviewed current LLMs, explored misinformation risks, and applied sota finetuning techniques. Using Microsoft phi2 model, finetuned with our custom dataset, we generated harmful content to create the Echo Chamber. This setup, evaluated by GPT4 for persuasiveness and harmfulness, sheds light on the ethical concerns surrounding LLMs and emphasizes the need for stronger safeguards against misinformation.<|reference_end|>
|
arxiv
|
@article{ma2024llm,
title={LLM Echo Chamber: personalized and automated disinformation},
author={Tony Ma},
journal={arXiv preprint arXiv:2409.16241},
year={2024},
archivePrefix={arXiv},
eprint={2409.16241},
primaryClass={cs.AI cs.CY}
}
|
ma2024llm
|
arxiv-661458
|
2409.16243
|
A fast and sound tagging method for discontinuous named-entity recognition
|
<|reference_start|>A fast and sound tagging method for discontinuous named-entity recognition: We introduce a novel tagging scheme for discontinuous named entity recognition based on an explicit description of the inner structure of discontinuous mentions. We rely on a weighted finite state automaton for both marginal and maximum a posteriori inference. As such, our method is sound in the sense that (1) well-formedness of predicted tag sequences is ensured via the automaton structure and (2) there is an unambiguous mapping between well-formed sequences of tags and (discontinuous) mentions. We evaluate our approach on three English datasets in the biomedical domain, and report comparable results to state-of-the-art while having a way simpler and faster model.<|reference_end|>
|
arxiv
|
@article{corro2024a,
title={A fast and sound tagging method for discontinuous named-entity
recognition},
author={Caio Corro},
journal={arXiv preprint arXiv:2409.16243},
year={2024},
archivePrefix={arXiv},
eprint={2409.16243},
primaryClass={cs.CL}
}
|
corro2024a
|
arxiv-661459
|
2409.16252
|
Fields of The World: A Machine Learning Benchmark Dataset For Global Agricultural Field Boundary Segmentation
|
<|reference_start|>Fields of The World: A Machine Learning Benchmark Dataset For Global Agricultural Field Boundary Segmentation: Crop field boundaries are foundational datasets for agricultural monitoring and assessments but are expensive to collect manually. Machine learning (ML) methods for automatically extracting field boundaries from remotely sensed images could help realize the demand for these datasets at a global scale. However, current ML methods for field instance segmentation lack sufficient geographic coverage, accuracy, and generalization capabilities. Further, research on improving ML methods is restricted by the lack of labeled datasets representing the diversity of global agricultural fields. We present Fields of The World (FTW) -- a novel ML benchmark dataset for agricultural field instance segmentation spanning 24 countries on four continents (Europe, Africa, Asia, and South America). FTW is an order of magnitude larger than previous datasets with 70,462 samples, each containing instance and semantic segmentation masks paired with multi-date, multi-spectral Sentinel-2 satellite images. We provide results from baseline models for the new FTW benchmark, show that models trained on FTW have better zero-shot and fine-tuning performance in held-out countries than models that aren't pre-trained with diverse datasets, and show positive qualitative zero-shot results of FTW models in a real-world scenario -- running on Sentinel-2 scenes over Ethiopia.<|reference_end|>
|
arxiv
|
@article{kerner2024fields,
title={Fields of The World: A Machine Learning Benchmark Dataset For Global
Agricultural Field Boundary Segmentation},
author={Hannah Kerner, Snehal Chaudhari, Aninda Ghosh, Caleb Robinson, Adeel
Ahmad, Eddie Choi, Nathan Jacobs, Chris Holmes, Matthias Mohr, Rahul Dodhia,
Juan M. Lavista Ferres, Jennifer Marcus},
journal={arXiv preprint arXiv:2409.16252},
year={2024},
archivePrefix={arXiv},
eprint={2409.16252},
primaryClass={cs.CV cs.AI cs.LG}
}
|
kerner2024fields
|
arxiv-661460
|
2409.16253
|
Learning To Help: Training Models to Assist Legacy Devices
|
<|reference_start|>Learning To Help: Training Models to Assist Legacy Devices: Machine learning models implemented in hardware on physical devices may be deployed for a long time. The computational abilities of the device may be limited and become outdated with respect to newer improvements. Because of the size of ML models, offloading some computation (e.g. to an edge cloud) can help such legacy devices. We cast this problem in the framework of learning with abstention (LWA) in which the expert (edge) must be trained to assist the client (device). Prior work on LWA trains the client assuming the edge is either an oracle or a human expert. In this work, we formalize the reverse problem of training the expert for a fixed (legacy) client. As in LWA, the client uses a rejection rule to decide when to offload inference to the expert (at a cost). We find the Bayes-optimal rule, prove a generalization bound, and find a consistent surrogate loss function. Empirical results show that our framework outperforms confidence-based rejection rules.<|reference_end|>
|
arxiv
|
@article{wu2024learning,
title={Learning To Help: Training Models to Assist Legacy Devices},
author={Yu Wu, Anand Sarwate},
journal={arXiv preprint arXiv:2409.16253},
year={2024},
archivePrefix={arXiv},
eprint={2409.16253},
primaryClass={cs.LG}
}
|
wu2024learning
|
arxiv-661461
|
2409.16256
|
A Critical Review of Safe Reinforcement Learning Techniques in Smart Grid Applications
|
<|reference_start|>A Critical Review of Safe Reinforcement Learning Techniques in Smart Grid Applications: The high penetration of distributed energy resources (DERs) in modern smart power systems introduces unforeseen uncertainties for the electricity sector, leading to increased complexity and difficulty in the operation and control of power systems. As a cutting-edge machine learning technology, deep reinforcement learning (DRL) has been widely implemented in recent years to handle the uncertainty in power systems. However, in critical infrastructures such as power systems, safety issues always receive top priority, while DRL may not always meet the safety requirements of power system operators. The concept of safe reinforcement learning (safe RL) is emerging as a potential solution to overcome the shortcomings of conventional DRL in the operation and control of power systems. This study provides a rigorous review of the latest research efforts focused on safe RL to derive power system control policies while accounting for the unique safety requirements of power grids. Furthermore, this study highlights various safe RL algorithms applied in diverse applications within the power system sector, from single grid-connected power converters, residential smart homes, and buildings to large power distribution networks. For all methods outlined, a discussion on their bottlenecks, research challenges, and potential opportunities in the operation and control of power system applications is also presented. This review aims to support research in the area of safe RL algorithms, embracing smart power system operation with safety constraints amid high uncertainty from DERs.<|reference_end|>
|
arxiv
|
@article{bui2024a,
title={A Critical Review of Safe Reinforcement Learning Techniques in Smart
Grid Applications},
author={Van-Hai Bui, Srijita Das, Akhtar Hussain, Guilherme Vieira Hollweg,
and Wencong Su},
journal={arXiv preprint arXiv:2409.16256},
year={2024},
archivePrefix={arXiv},
eprint={2409.16256},
primaryClass={eess.SY cs.SY}
}
|
bui2024a
|
arxiv-661462
|
2409.16257
|
Pressure stability in explicitly coupled simulations of poromechanics with application to CO$_2$ sequestration
|
<|reference_start|>Pressure stability in explicitly coupled simulations of poromechanics with application to CO$_2$ sequestration: We study in detail the pressure stabilizing effects of the non-iterated fixed-stress splitting in poromechanical problems which are nearly undrained and incompressible. When applied in conjunction with a spatial discretization which does not satisfy the discrete inf-sup condition, namely a mixed piecewise linear - piecewise constant spatial discretization, the explicit fixed-stress scheme can have a pressure stabilizing effect in transient problems. This effect disappears, however, upon time step refinement or the attainment of steady state. The interpretation of the scheme as an Augmented Lagrangian method similar to Uzawa iteration for incompressible flow helps explain these results. Moreover, due to the slowly evolving solution within undrained seal regions, we show that the explicit fixed-stress scheme requires very large time steps to reveal its pressure stabilizing effect in examples of geologic CO$_2$ sequestration. We note that large time steps can result in large errors in drained regions, such as the aquifer or reservoir regions of these examples, and can prevent convergence of nonlinear solvers in the case of multiphase flows, which can make the explicit scheme an unreliable source of pressure stabilization. We conclude by demonstrating that pressure jump stabilization is as effective in the explicit fixed-stress setting as in the fully implicit setting for undrained problems, while maintaining the stability and convergence of the fixed-stress split for drained problems.<|reference_end|>
|
arxiv
|
@article{aronson2024pressure,
title={Pressure stability in explicitly coupled simulations of poromechanics
with application to CO$_2$ sequestration},
author={Ryan M. Aronson, Pavel Tomin, Nicola Castelletto, Franc{c}ois P.
Hamon, J. A. White and Hamdi A. Tchelepi},
journal={arXiv preprint arXiv:2409.16257},
year={2024},
archivePrefix={arXiv},
eprint={2409.16257},
primaryClass={math.NA cs.NA}
}
|
aronson2024pressure
|
arxiv-661463
|
2409.16258
|
SWARM: Replicating Shared Disaggregated-Memory Data in No Time
|
<|reference_start|>SWARM: Replicating Shared Disaggregated-Memory Data in No Time: Memory disaggregation is an emerging data center architecture that improves resource utilization and scalability. Replication is key to ensure the fault tolerance of applications, but replicating shared data in disaggregated memory is hard. We propose SWARM (Swift WAit-free Replication in disaggregated Memory), the first replication scheme for in-disaggregated-memory shared objects to provide (1) single-roundtrip reads and writes in the common case, (2) strong consistency (linearizability), and (3) strong liveness (wait-freedom). SWARM makes two independent contributions. The first is Safe-Guess, a novel wait-free replication protocol with single-roundtrip operations. The second is In-n-Out, a novel technique to provide conditional atomic update and atomic retrieval of large buffers in disaggregated memory in one roundtrip. Using SWARM, we build SWARM-KV, a low-latency, strongly consistent and highly available disaggregated key-value store. We evaluate SWARM-KV and find that it has marginal latency overhead compared to an unreplicated key-value store, and that it offers much lower latency and better availability than FUSEE, a state-of-the-art replicated disaggregated key-value store.<|reference_end|>
|
arxiv
|
@article{murat2024swarm:,
title={SWARM: Replicating Shared Disaggregated-Memory Data in No Time},
author={Antoine Murat, Cl'ement Burgelin, Athanasios Xygkis, Igor Zablotchi,
Marcos K. Aguilera, Rachid Guerraoui},
journal={arXiv preprint arXiv:2409.16258},
year={2024},
doi={10.1145/3694715.3695945},
archivePrefix={arXiv},
eprint={2409.16258},
primaryClass={cs.DC}
}
|
murat2024swarm:
|
arxiv-661464
|
2409.16261
|
CDChat: A Large Multimodal Model for Remote Sensing Change Description
|
<|reference_start|>CDChat: A Large Multimodal Model for Remote Sensing Change Description: Large multimodal models (LMMs) have shown encouraging performance in the natural image domain using visual instruction tuning. However, these LMMs struggle to describe the content of remote sensing images for tasks such as image or region grounding, classification, etc. Recently, GeoChat make an effort to describe the contents of the RS images. Although, GeoChat achieves promising performance for various RS tasks, it struggles to describe the changes between bi-temporal RS images which is a key RS task. This necessitates the development of an LMM that can describe the changes between the bi-temporal RS images. However, there is insufficiency of datasets that can be utilized to tune LMMs. In order to achieve this, we introduce a change description instruction dataset that can be utilized to finetune an LMM and provide better change descriptions for RS images. Furthermore, we show that the LLaVA-1.5 model, with slight modifications, can be finetuned on the change description instruction dataset and achieve favorably better performance.<|reference_end|>
|
arxiv
|
@article{noman2024cdchat:,
title={CDChat: A Large Multimodal Model for Remote Sensing Change Description},
author={Mubashir Noman and Noor Ahsan and Muzammal Naseer and Hisham Cholakkal
and Rao Muhammad Anwer and Salman Khan and Fahad Shahbaz Khan},
journal={arXiv preprint arXiv:2409.16261},
year={2024},
archivePrefix={arXiv},
eprint={2409.16261},
primaryClass={cs.CV}
}
|
noman2024cdchat:
|
arxiv-661465
|
2409.16262
|
Extended one-dimensional reduced model for blood flow within a stenotic artery
|
<|reference_start|>Extended one-dimensional reduced model for blood flow within a stenotic artery: In this paper, we introduce an adapted one-dimensional (1D) reduced model aimed at analyzing blood flow within stenosed arteries. Differing from the prevailing 1D model \cite{Formaggia2003, Sherwin2003_2, Sherwin2003, Quarteroni2004, 10.1007/978-3-642-56288-4_10}, our approach incorporates the variable radius of the blood vessel. Our methodology begins with the non-dimensionalization of the Navier-Stokes equations for axially symmetric flow in cylindrical coordinates and then derives the extended 1D reduced model, by making additional adjustments to accommodate the effects of variable radii of the vessel along the longitudinal direction. Additionally, we propose a method to extract radial velocity information from the 1D results during post-processing, enabling the generation of two-dimensional (2D) velocity data. We validate our model by conducting numerical simulations of blood flow through stenotic arteries with varying severities, ranging from 23% to 50%. The results were compared to those from the established 1D model and a full three-dimensional (3D) simulation, highlighting the potential and importance of this model for arteries with variable radius. All the code used to generate the results presented in the paper is available at https://github.com/qcutexu/Extended-1D-AQ-system.git.<|reference_end|>
|
arxiv
|
@article{canic2024extended,
title={Extended one-dimensional reduced model for blood flow within a stenotic
artery},
author={Suncica Canic, Shihan Guo, Yifan Wang, Xiaohe Yue, Haibiao Zheng},
journal={arXiv preprint arXiv:2409.16262},
year={2024},
archivePrefix={arXiv},
eprint={2409.16262},
primaryClass={math.NA cs.NA}
}
|
canic2024extended
|
arxiv-661466
|
2409.16266
|
REBEL: Rule-based and Experience-enhanced Learning with LLMs for Initial Task Allocation in Multi-Human Multi-Robot Teams
|
<|reference_start|>REBEL: Rule-based and Experience-enhanced Learning with LLMs for Initial Task Allocation in Multi-Human Multi-Robot Teams: Multi-human multi-robot teams combine the complementary strengths of humans and robots to tackle complex tasks across diverse applications. However, the inherent heterogeneity of these teams presents significant challenges in initial task allocation (ITA), which involves assigning the most suitable tasks to each team member based on their individual capabilities before task execution. While current learning-based methods have shown promising results, they are often computationally expensive to train, and lack the flexibility to incorporate user preferences in multi-objective optimization and adapt to last-minute changes in real-world dynamic environments. To address these issues, we propose REBEL, an LLM-based ITA framework that integrates rule-based and experience-enhanced learning. By leveraging Retrieval-Augmented Generation, REBEL dynamically retrieves relevant rules and past experiences, enhancing reasoning efficiency. Additionally, REBEL can complement pre-trained RL-based ITA policies, improving situational awareness and overall team performance. Extensive experiments validate the effectiveness of our approach across various settings. More details are available at https://sites.google.com/view/ita-rebel .<|reference_end|>
|
arxiv
|
@article{gupte2024rebel:,
title={REBEL: Rule-based and Experience-enhanced Learning with LLMs for Initial
Task Allocation in Multi-Human Multi-Robot Teams},
author={Arjun Gupte, Ruiqi Wang, Vishnunandan L.N. Venkatesh, Taehyeon Kim,
Dezhong Zhao, Byung-Cheol Min},
journal={arXiv preprint arXiv:2409.16266},
year={2024},
archivePrefix={arXiv},
eprint={2409.16266},
primaryClass={cs.RO}
}
|
gupte2024rebel:
|
arxiv-661467
|
2409.16267
|
Performance Comparison of HTTP/3 and HTTP/2: Proxy vs Non-Proxy Environments
|
<|reference_start|>Performance Comparison of HTTP/3 and HTTP/2: Proxy vs Non-Proxy Environments: This paper provides a systematic evaluation of the performance of QUIC/HTTP3 (H3) and TCP/HTTP2 (H2) protocols in proxy-enhanced environments. By leveraging features such as UDP-based flow-controlled streams, integrated TLS, multiplexed connections, and connection migration, H3 promises enhanced web communication. Despite extensive research, the impact of proxy integration and connection migration remains underexplored. This study addresses this gap by evaluating these protocols across various scenarios in noisy networks and proxy setups. Our findings reveal that H3 excels under high loss and latency conditions, significantly benefiting from its connection migration and multiplexing features. H3's connection migration remains robust, maintaining stable performance even in proxy-enhanced environments, ensuring seamless network transitions. The proxy has a more neutral impact on H3, while it significantly enhances H2 performance, especially when using BBR. Any improvements observed in H3 under a proxy are minor and do not fundamentally alter H3's performance as they do for H2. Importantly, while H2 with the right congestion control algorithm (CCA) can achieve performance comparable to H3, H3's performance is more robust, as it is less impacted by network conditions, proxy settings, and CCA variations.<|reference_end|>
|
arxiv
|
@article{liu2024performance,
title={Performance Comparison of HTTP/3 and HTTP/2: Proxy vs. Non-Proxy
Environments},
author={Fan Liu, John Dehart, Jyoti Parwatikar, Behrooz Farkiani, Patrick
Crowley},
journal={arXiv preprint arXiv:2409.16267},
year={2024},
archivePrefix={arXiv},
eprint={2409.16267},
primaryClass={cs.NI}
}
|
liu2024performance
|
arxiv-661468
|
2409.16269
|
Bound-preserving OEDG schemes for Aw-Rascle-Zhang traffic models on networks
|
<|reference_start|>Bound-preserving OEDG schemes for Aw-Rascle-Zhang traffic models on networks: Physical solutions to the widely used Aw-Rascle-Zhang (ARZ) traffic model and the adapted pressure (AP) ARZ model should satisfy the positivity of density, the minimum and maximum principles with respect to the velocity $v$ and other Riemann invariants. Many numerical schemes suffer from instabilities caused by violating these bounds, and the only existing bound-preserving (BP) numerical scheme (for ARZ model) is random, only first-order accurate, and not strictly conservative. This paper introduces arbitrarily high-order provably BP DG schemes for these two models, preserving all the aforementioned bounds except the maximum principle of $v$, which has been rigorously proven to conflict with the consistency and conservation of numerical schemes. Although the maximum principle of $v$ is not directly enforced, we find that the strictly preserved maximum principle of another Riemann invariant $w$ actually enforces an alternative upper bound on $v$. At the core of this work, analyzing and rigorously proving the BP property is a particularly nontrivial task: the Lax-Friedrichs (LF) splitting property, usually expected for hyperbolic conservation laws and employed to construct BP schemes, does not hold for these two models. To overcome this challenge, we formulate a generalized version of the LF splitting property, and prove it via the geometric quasilinearization (GQL) approach [Kailiang Wu and Chi-Wang Shu, SIAM Review, 65: 1031-1073, 2023]. To suppress spurious oscillations in the DG solutions, we employ the oscillation-eliminating (OE) technique, recently proposed in [Manting Peng, Zheng Sun, and Kailiang Wu, Mathematics of Computation, in press], which is based on the solution operator of a novel damping equation. Several numerical examples are included to demonstrate the effectiveness, accuracy, and BP properties of our schemes, with applications to traffic simulations on road networks.<|reference_end|>
|
arxiv
|
@article{chen2024bound-preserving,
title={Bound-preserving OEDG schemes for Aw-Rascle-Zhang traffic models on
networks},
author={Wei Chen, Shumo Cui, Kailiang Wu, Tao Xiong},
journal={arXiv preprint arXiv:2409.16269},
year={2024},
archivePrefix={arXiv},
eprint={2409.16269},
primaryClass={math.NA cs.NA}
}
|
chen2024bound-preserving
|
arxiv-661469
|
2409.16271
|
AIM 2024 Challenge on UHD Blind Photo Quality Assessment
|
<|reference_start|>AIM 2024 Challenge on UHD Blind Photo Quality Assessment: We introduce the AIM 2024 UHD-IQA Challenge, a competition to advance the No-Reference Image Quality Assessment (NR-IQA) task for modern, high-resolution photos. The challenge is based on the recently released UHD-IQA Benchmark Database, which comprises 6,073 UHD-1 (4K) images annotated with perceptual quality ratings from expert raters. Unlike previous NR-IQA datasets, UHD-IQA focuses on highly aesthetic photos of superior technical quality, reflecting the ever-increasing standards of digital photography. This challenge aims to develop efficient and effective NR-IQA models. Participants are tasked with creating novel architectures and training strategies to achieve high predictive performance on UHD-1 images within a computational budget of 50G MACs. This enables model deployment on edge devices and scalable processing of extensive image collections. Winners are determined based on a combination of performance metrics, including correlation measures (SRCC, PLCC, KRCC), absolute error metrics (MAE, RMSE), and computational efficiency (G MACs). To excel in this challenge, participants leverage techniques like knowledge distillation, low-precision inference, and multi-scale training. By pushing the boundaries of NR-IQA for high-resolution photos, the UHD-IQA Challenge aims to stimulate the development of practical models that can keep pace with the rapidly evolving landscape of digital photography. The innovative solutions emerging from this competition will have implications for various applications, from photo curation and enhancement to image compression.<|reference_end|>
|
arxiv
|
@article{hosu2024aim,
title={AIM 2024 Challenge on UHD Blind Photo Quality Assessment},
author={Vlad Hosu and Marcos V. Conde and Lorenzo Agnolucci and Nabajeet
Barman and Saman Zadtootaghaj and Radu Timofte},
journal={arXiv preprint arXiv:2409.16271},
year={2024},
archivePrefix={arXiv},
eprint={2409.16271},
primaryClass={cs.CV}
}
|
hosu2024aim
|
arxiv-661470
|
2409.16275
|
Generative Factor Chaining: Coordinated Manipulation with Diffusion-based Factor Graph
|
<|reference_start|>Generative Factor Chaining: Coordinated Manipulation with Diffusion-based Factor Graph: Learning to plan for multi-step, multi-manipulator tasks is notoriously difficult because of the large search space and the complex constraint satisfaction problems. We present Generative Factor Chaining~(GFC), a composable generative model for planning. GFC represents a planning problem as a spatial-temporal factor graph, where nodes represent objects and robots in the scene, spatial factors capture the distributions of valid relationships among nodes, and temporal factors represent the distributions of skill transitions. Each factor is implemented as a modular diffusion model, which are composed during inference to generate feasible long-horizon plans through bi-directional message passing. We show that GFC can solve complex bimanual manipulation tasks and exhibits strong generalization to unseen planning tasks with novel combinations of objects and constraints. More details can be found at: https://generative-fc.github.io/<|reference_end|>
|
arxiv
|
@article{mishra2024generative,
title={Generative Factor Chaining: Coordinated Manipulation with
Diffusion-based Factor Graph},
author={Utkarsh A. Mishra and Yongxin Chen and Danfei Xu},
journal={arXiv preprint arXiv:2409.16275},
year={2024},
archivePrefix={arXiv},
eprint={2409.16275},
primaryClass={cs.RO}
}
|
mishra2024generative
|
arxiv-661471
|
2409.16277
|
Compressed Depth Map Super-Resolution and Restoration: AIM 2024 Challenge Results
|
<|reference_start|>Compressed Depth Map Super-Resolution and Restoration: AIM 2024 Challenge Results: The increasing demand for augmented reality (AR) and virtual reality (VR) applications highlights the need for efficient depth information processing. Depth maps, essential for rendering realistic scenes and supporting advanced functionalities, are typically large and challenging to stream efficiently due to their size. This challenge introduces a focus on developing innovative depth upsampling techniques to reconstruct high-quality depth maps from compressed data. These techniques are crucial for overcoming the limitations posed by depth compression, which often degrades quality, loses scene details and introduces artifacts. By enhancing depth upsampling methods, this challenge aims to improve the efficiency and quality of depth map reconstruction. Our goal is to advance the state-of-the-art in depth processing technologies, thereby enhancing the overall user experience in AR and VR applications.<|reference_end|>
|
arxiv
|
@article{conde2024compressed,
title={Compressed Depth Map Super-Resolution and Restoration: AIM 2024
Challenge Results},
author={Marcos V. Conde and Florin-Alexandru Vasluianu and Jinhui Xiong and
Wei Ye and Rakesh Ranjan and Radu Timofte},
journal={arXiv preprint arXiv:2409.16277},
year={2024},
archivePrefix={arXiv},
eprint={2409.16277},
primaryClass={eess.IV cs.CV}
}
|
conde2024compressed
|
arxiv-661472
|
2409.16278
|
Semantic Refocused Tuning for Open-Vocabulary Panoptic Segmentation
|
<|reference_start|>Semantic Refocused Tuning for Open-Vocabulary Panoptic Segmentation: Open-vocabulary panoptic segmentation is an emerging task aiming to accurately segment the image into semantically meaningful masks based on a set of texts. Despite existing efforts, it remains challenging to develop a high-performing method that generalizes effectively across new domains and requires minimal training resources. Our in-depth analysis of current methods reveals a crucial insight: mask classification is the main performance bottleneck for open-vocab. panoptic segmentation. Based on this, we propose Semantic Refocused Tuning (SMART), a novel framework that greatly enhances open-vocab. panoptic segmentation by improving mask classification through two key innovations. First, SMART adopts a multimodal Semantic-guided Mask Attention mechanism that injects task-awareness into the regional information extraction process. This enables the model to capture task-specific and contextually relevant information for more effective mask classification. Second, it incorporates Query Projection Tuning, which strategically fine-tunes the query projection layers within the Vision Language Model (VLM) used for mask classification. This adjustment allows the model to adapt the image focus of mask tokens to new distributions with minimal training resources, while preserving the VLM's pre-trained knowledge. Extensive ablation studies confirm the superiority of our approach. Notably, SMART sets new state-of-the-art results, demonstrating improvements of up to +1.3 PQ and +5.4 mIoU across representative benchmarks, while reducing training costs by nearly 10x compared to the previous best method. Our code and data will be released.<|reference_end|>
|
arxiv
|
@article{chng2024semantic,
title={Semantic Refocused Tuning for Open-Vocabulary Panoptic Segmentation},
author={Yong Xien Chng, Xuchong Qiu, Yizeng Han, Kai Ding, Wan Ding, Gao Huang},
journal={arXiv preprint arXiv:2409.16278},
year={2024},
archivePrefix={arXiv},
eprint={2409.16278},
primaryClass={cs.CV}
}
|
chng2024semantic
|
arxiv-661473
|
2409.16279
|
On 1-Planar Graphs with Bounded Cop-Number
|
<|reference_start|>On 1-Planar Graphs with Bounded Cop-Number: Cops and Robbers is a type of pursuit-evasion game played on a graph where a set of cops try to capture a single robber. The cops first choose their initial vertex positions, and later the robber chooses a vertex. The cops and robbers make their moves in alternate turns: in the cops' turn, every cop can either choose to move to an adjacent vertex or stay on the same vertex, and likewise the robber in his turn. If the cops can capture the robber in a finite number of rounds, the cops win, otherwise the robber wins. The cop-number of a graph is the minimum number of cops required to catch a robber in the graph. It has long been known that graphs embedded on surfaces (such as planar graphs and toroidal graphs) have a small cop-number. Recently, Durocher et al. [Graph Drawing, 2023] investigated the problem of cop-number for the class of $1$-planar graphs, which are graphs that can be embedded in the plane such that each edge is crossed at most once. They showed that unlike planar graphs which require just three cops, 1-planar graphs have an unbounded cop-number. On the positive side, they showed that maximal 1-planar graphs require only three cops by crucially using the fact that the endpoints of every crossing in an embedded maximal 1-planar graph induce a $K_4$. In this paper, we show that the cop-number remains bounded even under the relaxed condition that the endpoints induce at least three edges. More precisely, let an $\times$-crossing of an embedded 1-planar graph be a crossing whose endpoints induce a matching; i.e., there is no edge connecting the endpoints apart from the crossing edges themselves. We show that any 1-planar graph that can be embedded without $\times$-crossings has cop-number at most 21. Moreover, any 1-planar graph that can be embedded with at most $\gamma$ $\times$-crossings has cop-number at most $\gamma + 21$.<|reference_end|>
|
arxiv
|
@article{bose2024on,
title={On 1-Planar Graphs with Bounded Cop-Number},
author={Prosenjit Bose, Jean-Lou De Carufel, Anil Maheshwari and Karthik
Murali},
journal={arXiv preprint arXiv:2409.16279},
year={2024},
archivePrefix={arXiv},
eprint={2409.16279},
primaryClass={cs.DM math.CO}
}
|
bose2024on
|
arxiv-661474
|
2409.16280
|
MonoFormer: One Transformer for Both Diffusion and Autoregression
|
<|reference_start|>MonoFormer: One Transformer for Both Diffusion and Autoregression: Most existing multimodality methods use separate backbones for autoregression-based discrete text generation and diffusion-based continuous visual generation, or the same backbone by discretizing the visual data to use autoregression for both text and visual generation. In this paper, we propose to study a simple idea: share one transformer for both autoregression and diffusion. The feasibility comes from two main aspects: (i) Transformer is successfully applied to diffusion for visual generation, and (ii) transformer training for autoregression and diffusion is very similar, and the difference merely lies in that diffusion uses bidirectional attention mask and autoregression uses causal attention mask. Experimental results show that our approach achieves comparable image generation performance to current state-of-the-art methods as well as maintains the text generation capability. The project is publicly available at https://monoformer.github.io/.<|reference_end|>
|
arxiv
|
@article{zhao2024monoformer:,
title={MonoFormer: One Transformer for Both Diffusion and Autoregression},
author={Chuyang Zhao, Yuxing Song, Wenhao Wang, Haocheng Feng, Errui Ding,
Yifan Sun, Xinyan Xiao, Jingdong Wang},
journal={arXiv preprint arXiv:2409.16280},
year={2024},
archivePrefix={arXiv},
eprint={2409.16280},
primaryClass={cs.CV}
}
|
zhao2024monoformer:
|
arxiv-661475
|
2409.16282
|
An Explicit Consistency-Preserving Loss Function for Phase Reconstruction and Speech Enhancement
|
<|reference_start|>An Explicit Consistency-Preserving Loss Function for Phase Reconstruction and Speech Enhancement: In this work, we propose a novel consistency-preserving loss function for recovering the phase information in the context of phase reconstruction (PR) and speech enhancement (SE). Different from conventional techniques that directly estimate the phase using a deep model, our idea is to exploit ad-hoc constraints to directly generate a consistent pair of magnitude and phase. Specifically, the proposed loss forces a set of complex numbers to be a consistent short-time Fourier transform (STFT) representation, i.e., to be the spectrogram of a real signal. Our approach thus avoids the difficulty of estimating the original phase, which is highly unstructured and sensitive to time shift. The influence of our proposed loss is first assessed on a PR task, experimentally demonstrating that our approach is viable. Next, we show its effectiveness on an SE task, using both the VB-DMD and WSJ0-CHiME3 data sets. On VB-DMD, our approach is competitive with conventional solutions. On the challenging WSJ0-CHiME3 set, the proposed framework compares favourably over those techniques that explicitly estimate the phase.<|reference_end|>
|
arxiv
|
@article{ku2024an,
title={An Explicit Consistency-Preserving Loss Function for Phase
Reconstruction and Speech Enhancement},
author={Pin-Jui Ku, Chun-Wei Ho, Hao Yen, Sabato Marco Siniscalchi, and
Chin-Hui Lee},
journal={arXiv preprint arXiv:2409.16282},
year={2024},
archivePrefix={arXiv},
eprint={2409.16282},
primaryClass={eess.AS cs.SD}
}
|
ku2024an
|
arxiv-661476
|
2409.16283
|
Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable Robot Manipulation
|
<|reference_start|>Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable Robot Manipulation: How can robot manipulation policies generalize to novel tasks involving unseen object types and new motions? In this paper, we provide a solution in terms of predicting motion information from web data through human video generation and conditioning a robot policy on the generated video. Instead of attempting to scale robot data collection which is expensive, we show how we can leverage video generation models trained on easily available web data, for enabling generalization. Our approach Gen2Act casts language-conditioned manipulation as zero-shot human video generation followed by execution with a single policy conditioned on the generated video. To train the policy, we use an order of magnitude less robot interaction data compared to what the video prediction model was trained on. Gen2Act doesn't require fine-tuning the video model at all and we directly use a pre-trained model for generating human videos. Our results on diverse real-world scenarios show how Gen2Act enables manipulating unseen object types and performing novel motions for tasks not present in the robot data. Videos are at https://homangab.github.io/gen2act/<|reference_end|>
|
arxiv
|
@article{bharadhwaj2024gen2act:,
title={Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable
Robot Manipulation},
author={Homanga Bharadhwaj, Debidatta Dwibedi, Abhinav Gupta, Shubham
Tulsiani, Carl Doersch, Ted Xiao, Dhruv Shah, Fei Xia, Dorsa Sadigh, Sean
Kirmani},
journal={arXiv preprint arXiv:2409.16283},
year={2024},
archivePrefix={arXiv},
eprint={2409.16283},
primaryClass={cs.RO cs.CV cs.LG eess.IV}
}
|
bharadhwaj2024gen2act:
|
arxiv-661477
|
2409.16285
|
Age of Gossip in Networks with Multiple Views of a Source
|
<|reference_start|>Age of Gossip in Networks with Multiple Views of a Source: We consider the version age of information (AoI) in a network where a subset of nodes act as sensing nodes, sampling a source that in general can follow a continuous distribution. Any sample of the source constitutes a new version of the information and the version age of the information is defined with respect to the most recent version of the information available for the whole network. We derive a recursive expression for the average version AoI between different subsets of the nodes which can be used to evaluate the average version AoI for any subset of the nodes including any single node. We derive asymptotic behavior of the average AoI on any single node of the network for various topologies including line, ring, and fully connected networks. The prior art result on version age of a network by Yates [ISIT'21] can be interpreted as in our derivation as a network with a single view of the source, e.g., through a Poisson process with rate $\lambda_{00}$. Our result indicates that there is no loss in the average version AoI performance by replacing a single view of the source with distributed sensing across multiple nodes by splitting the same rate $\lambda_{00}$. Particularly, we show that asymptotically, the average AoI scales with $O(\log(n))$ and $O(\sqrt{n})$ for fully connected and ring networks, respectively. More interestingly, we show that for the ring network the same $O(\sqrt{n})$ asymptotical performance on average AoI is still achieved with distributed sensing if the number of sensing nodes only scales with $O(\sqrt{n})$ instead of prior known result which requires $O(n)$. Our results indicate that the sensing nodes can be arbitrarily chosen as long as the maximum number of consecutive non-sensing nodes also scales as $O(\sqrt{n})$.<|reference_end|>
|
arxiv
|
@article{khojastepour2024age,
title={Age of Gossip in Networks with Multiple Views of a Source},
author={Kian J. Khojastepour and Matin Mortaheb and Sennur Ulukus},
journal={arXiv preprint arXiv:2409.16285},
year={2024},
archivePrefix={arXiv},
eprint={2409.16285},
primaryClass={cs.IT cs.NI cs.SY eess.SP eess.SY math.IT}
}
|
khojastepour2024age
|
arxiv-661478
|
2409.16287
|
Articulated Object Manipulation using Online Axis Estimation with SAM2-Based Tracking
|
<|reference_start|>Articulated Object Manipulation using Online Axis Estimation with SAM2-Based Tracking: Articulated object manipulation requires precise object interaction, where the object's axis must be carefully considered. Previous research employed interactive perception for manipulating articulated objects, but typically, open-loop approaches often suffer from overlooking the interaction dynamics. To address this limitation, we present a closed-loop pipeline integrating interactive perception with online axis estimation from segmented 3D point clouds. Our method leverages any interactive perception technique as a foundation for interactive perception, inducing slight object movement to generate point cloud frames of the evolving dynamic scene. These point clouds are then segmented using Segment Anything Model 2 (SAM2), after which the moving part of the object is masked for accurate motion online axis estimation, guiding subsequent robotic actions. Our approach significantly enhances the precision and efficiency of manipulation tasks involving articulated objects. Experiments in simulated environments demonstrate that our method outperforms baseline approaches, especially in tasks that demand precise axis-based control. Project Page: https://hytidel.github.io/video-tracking-for-axis-estimation/.<|reference_end|>
|
arxiv
|
@article{wang2024articulated,
title={Articulated Object Manipulation using Online Axis Estimation with
SAM2-Based Tracking},
author={Xi Wang, Tianxing Chen, Qiaojun Yu, Tianling Xu, Zanxin Chen, Yiting
Fu, Cewu Lu, Yao Mu and Ping Luo},
journal={arXiv preprint arXiv:2409.16287},
year={2024},
archivePrefix={arXiv},
eprint={2409.16287},
primaryClass={cs.RO cs.AI cs.GR cs.LG}
}
|
wang2024articulated
|
arxiv-661479
|
2409.16288
|
Self-Supervised Any-Point Tracking by Contrastive Random Walks
|
<|reference_start|>Self-Supervised Any-Point Tracking by Contrastive Random Walks: We present a simple, self-supervised approach to the Tracking Any Point (TAP) problem. We train a global matching transformer to find cycle consistent tracks through video via contrastive random walks, using the transformer's attention-based global matching to define the transition matrices for a random walk on a space-time graph. The ability to perform "all pairs" comparisons between points allows the model to obtain high spatial precision and to obtain a strong contrastive learning signal, while avoiding many of the complexities of recent approaches (such as coarse-to-fine matching). To do this, we propose a number of design decisions that allow global matching architectures to be trained through self-supervision using cycle consistency. For example, we identify that transformer-based methods are sensitive to shortcut solutions, and propose a data augmentation scheme to address them. Our method achieves strong performance on the TapVid benchmarks, outperforming previous self-supervised tracking methods, such as DIFT, and is competitive with several supervised methods.<|reference_end|>
|
arxiv
|
@article{shrivastava2024self-supervised,
title={Self-Supervised Any-Point Tracking by Contrastive Random Walks},
author={Ayush Shrivastava, Andrew Owens},
journal={arXiv preprint arXiv:2409.16288},
year={2024},
archivePrefix={arXiv},
eprint={2409.16288},
primaryClass={cs.CV}
}
|
shrivastava2024self-supervised
|
arxiv-661480
|
2409.16290
|
Computer Aided Detection and Classification of mammograms using Convolutional Neural Network
|
<|reference_start|>Computer Aided Detection and Classification of mammograms using Convolutional Neural Network: Breast cancer is one of the most major causes of death among women, after lung cancer. Breast cancer detection advancements can increase the survival rate of patients through earlier detection. Breast cancer that can be detected by using mammographic imaging is now considered crucial step for computer aided systems. Researchers have explained many techniques for the automatic detection of initial tumors. The early breast cancer symptoms include masses and micro-calcifications. Because there is the variation in the tumor shape, size and position it is difficult to extract abnormal region from normal tissues. So, machine learning can help medical professionals make more accurate diagnoses of the disease whereas deep learning or neural networks are one of the methods that can be used to distinguish regular and irregular breast identification. In this study the extraction method for the classification of breast masses as normal and abnormal we have used is convolutional neural network (CNN) on mammograms. DDSM dataset has been used in which nearly 460 images are of normal and 920 of abnormal breasts.<|reference_end|>
|
arxiv
|
@article{ishaq2024computer,
title={Computer Aided Detection and Classification of mammograms using
Convolutional Neural Network},
author={Kashif Ishaq, Muhammad Mustagis},
journal={arXiv preprint arXiv:2409.16290},
year={2024},
archivePrefix={arXiv},
eprint={2409.16290},
primaryClass={eess.IV cs.CV}
}
|
ishaq2024computer
|
arxiv-661481
|
2409.16291
|
Beyond Following: Mixing Active Initiative into Computational Creativity
|
<|reference_start|>Beyond Following: Mixing Active Initiative into Computational Creativity: Generative Artificial Intelligence (AI) encounters limitations in efficiency and fairness within the realm of Procedural Content Generation (PCG) when human creators solely drive and bear responsibility for the generative process. Alternative setups, such as Mixed-Initiative Co-Creative (MI-CC) systems, exhibited their promise. Still, the potential of an active mixed initiative, where AI takes a role beyond following, is understudied. This work investigates the influence of the adaptive ability of an active and learning AI agent on creators' expectancy of creative responsibilities in an MI-CC setting. We built and studied a system that employs reinforcement learning (RL) methods to learn the creative responsibility preferences of a human user during online interactions. Situated in story co-creation, we develop a Multi-armed-bandit agent that learns from the human creator, updates its collaborative decision-making belief, and switches between its capabilities during an MI-CC experience. With 39 participants joining a human subject study, Our developed system's learning capabilities are well recognized compared to the non-learning ablation, corresponding to a significant increase in overall satisfaction with the MI-CC experience. These findings indicate a robust association between effective MI-CC collaborative interactions, particularly the implementation of proactive AI initiatives, and deepened understanding among all participants.<|reference_end|>
|
arxiv
|
@article{lin2024beyond,
title={Beyond Following: Mixing Active Initiative into Computational Creativity},
author={Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth,
Mark Riedl},
journal={arXiv preprint arXiv:2409.16291},
year={2024},
archivePrefix={arXiv},
eprint={2409.16291},
primaryClass={cs.HC cs.AI}
}
|
lin2024beyond
|
arxiv-661482
|
2409.16292
|
Explaining Human Comparisons using Alignment-Importance Heatmaps
|
<|reference_start|>Explaining Human Comparisons using Alignment-Importance Heatmaps: We present a computational explainability approach for human comparison tasks, using Alignment Importance Score (AIS) heatmaps derived from deep-vision models. The AIS reflects a feature-map's unique contribution to the alignment between Deep Neural Network's (DNN) representational geometry and that of humans. We first validate the AIS by showing that prediction of out-of-sample human similarity judgments is improved when constructing representations using only higher-scoring AIS feature maps identified from a training set. We then compute image-specific heatmaps that visually indicate the areas that correspond to feature-maps with higher AIS scores. These maps provide an intuitive explanation of which image areas are more important when it is compared to other images in a cohort. We observe a correspondence between these heatmaps and saliency maps produced by a gaze-prediction model. However, in some cases, meaningful differences emerge, as the dimensions relevant for comparison are not necessarily the most visually salient. To conclude, Alignment Importance improves prediction of human similarity judgments from DNN embeddings, and provides interpretable insights into the relevant information in image space.<|reference_end|>
|
arxiv
|
@article{truong2024explaining,
title={Explaining Human Comparisons using Alignment-Importance Heatmaps},
author={Nhut Truong, Dario Pesenti, Uri Hasson},
journal={arXiv preprint arXiv:2409.16292},
year={2024},
archivePrefix={arXiv},
eprint={2409.16292},
primaryClass={cs.CV cs.AI}
}
|
truong2024explaining
|
arxiv-661483
|
2409.16293
|
Excitation Waveforms for Maximum Instantaneous Power Delivery
|
<|reference_start|>Excitation Waveforms for Maximum Instantaneous Power Delivery: This paper introduces a computational approach to identify performance constraints in the time-domain based on optimizing the excitation waveform. The method builds on an optimization algorithm that has been employed for decades to establish fundamental limits in the frequency domain and this paper showcases its first comprehensive application to time-domain pulses. The method is applied to arbitrarily polarized multiport antennas and arrays. The demonstration performed is based on finding an antenna's maximum peak radiation intensity in a given direction and time with limited total input energy available. To highlight the generality of the approach, an analysis on finding optimal illumination for antiferromagnetic memory switching is conducted.<|reference_end|>
|
arxiv
|
@article{liska2024excitation,
title={Excitation Waveforms for Maximum Instantaneous Power Delivery},
author={Jakub Liska and Lukas Jelinek and Miloslav Capek},
journal={arXiv preprint arXiv:2409.16293},
year={2024},
archivePrefix={arXiv},
eprint={2409.16293},
primaryClass={cs.IT math.IT}
}
|
liska2024excitation
|
arxiv-661484
|
2409.16294
|
GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors
|
<|reference_start|>GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors: The creation of manufacturable and editable 3D shapes through Computer-Aided Design (CAD) remains a highly manual and time-consuming task, hampered by the complex topology of boundary representations of 3D solids and unintuitive design tools. This paper introduces GenCAD, a generative model that employs autoregressive transformers and latent diffusion models to transform image inputs into parametric CAD command sequences, resulting in editable 3D shape representations. GenCAD integrates an autoregressive transformer-based architecture with a contrastive learning framework, enhancing the generation of CAD programs from input images and providing a representation learning framework for multiple data modalities relevant to engineering designs. Extensive evaluations demonstrate that GenCAD significantly outperforms existing state-of-the-art methods in terms of the precision and modifiability of generated 3D shapes. Notably, GenCAD shows a marked improvement in the accuracy of 3D shape generation for long sequences, supporting its application in complex design tasks. Additionally, the contrastive embedding feature of GenCAD facilitates the retrieval of CAD models using image queries from databases which is a critical challenge within the CAD community. While most work in the 3D shape generation literature focuses on representations like meshes, voxels, or point clouds, practical engineering applications demand modifiability and the ability for multi-modal conditional generation. Our results provide a significant step forward in this direction, highlighting the potential of generative models to expedite the entire design-to-production pipeline and seamlessly integrate different design modalities.<|reference_end|>
|
arxiv
|
@article{alam2024gencad:,
title={GenCAD: Image-Conditioned Computer-Aided Design Generation with
Transformer-Based Contrastive Representation and Diffusion Priors},
author={Md Ferdous Alam, Faez Ahmed},
journal={arXiv preprint arXiv:2409.16294},
year={2024},
archivePrefix={arXiv},
eprint={2409.16294},
primaryClass={cs.CV cs.GR cs.LG}
}
|
alam2024gencad:
|
arxiv-661485
|
2409.16295
|
Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget
|
<|reference_start|>Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget: Despite their impressive success, training foundation models remains computationally costly. This paper investigates how to efficiently train speech foundation models with self-supervised learning (SSL) under a limited compute budget. We examine critical factors in SSL that impact the budget, including model architecture, model size, and data size. Our goal is to make analytical steps toward understanding the training dynamics of speech foundation models. We benchmark SSL objectives in an entirely comparable setting and find that other factors contribute more significantly to the success of SSL. Our results show that slimmer model architectures outperform common small architectures under the same compute and parameter budget. We demonstrate that the size of the pre-training data remains crucial, even with data augmentation during SSL training, as performance suffers when iterating over limited data. Finally, we identify a trade-off between model size and data size, highlighting an optimal model size for a given compute budget.<|reference_end|>
|
arxiv
|
@article{liu2024efficient,
title={Efficient Training of Self-Supervised Speech Foundation Models on a
Compute Budget},
author={Andy T. Liu, Yi-Cheng Lin, Haibin Wu, Stefan Winkler, Hung-yi Lee},
journal={arXiv preprint arXiv:2409.16295},
year={2024},
archivePrefix={arXiv},
eprint={2409.16295},
primaryClass={eess.AS cs.CL cs.LG cs.SD}
}
|
liu2024efficient
|
arxiv-661486
|
2409.16296
|
LiDAR-3DGS: LiDAR Reinforced 3D Gaussian Splatting for Multimodal Radiance Field Rendering
|
<|reference_start|>LiDAR-3DGS: LiDAR Reinforced 3D Gaussian Splatting for Multimodal Radiance Field Rendering: In this paper, we explore the capabilities of multimodal inputs to 3D Gaussian Splatting (3DGS) based Radiance Field Rendering. We present LiDAR-3DGS, a novel method of reinforcing 3DGS inputs with LiDAR generated point clouds to significantly improve the accuracy and detail of 3D models. We demonstrate a systematic approach of LiDAR reinforcement to 3DGS to enable capturing of important features such as bolts, apertures, and other details that are often missed by image-based features alone. These details are crucial for engineering applications such as remote monitoring and maintenance. Without modifying the underlying 3DGS algorithm, we demonstrate that even a modest addition of LiDAR generated point cloud significantly enhances the perceptual quality of the models. At 30k iterations, the model generated by our method resulted in an increase of 7.064% in PSNR and 0.565% in SSIM, respectively. Since the LiDAR used in this research was a commonly used commercial-grade device, the improvements observed were modest and can be further enhanced with higher-grade LiDAR systems. Additionally, these improvements can be supplementary to other derivative works of Radiance Field Rendering and also provide a new insight for future LiDAR and computer vision integrated modeling.<|reference_end|>
|
arxiv
|
@article{lim2024lidar-3dgs:,
title={LiDAR-3DGS: LiDAR Reinforced 3D Gaussian Splatting for Multimodal
Radiance Field Rendering},
author={Hansol Lim, Hanbeom Chang, Jongseong Brad Choi, Chul Min Yeum},
journal={arXiv preprint arXiv:2409.16296},
year={2024},
archivePrefix={arXiv},
eprint={2409.16296},
primaryClass={cs.CV cs.GR eess.IV}
}
|
lim2024lidar-3dgs:
|
arxiv-661487
|
2409.16297
|
Analyzing Recursiveness in Multimodal Generative Artificial Intelligence: Stability or Divergence?
|
<|reference_start|>Analyzing Recursiveness in Multimodal Generative Artificial Intelligence: Stability or Divergence?: One of the latest trends in generative Artificial Intelligence is tools that generate and analyze content in different modalities, such as text and images, and convert information from one to the other. From a conceptual point of view, it is interesting to study whether these modality changes incur information loss and to what extent. This is analogous to variants of the classical game telephone, where players alternate between describing images and creating drawings based on those descriptions leading to unexpected transformations of the original content. In the case of AI, modality changes can be applied recursively, starting from an image to extract a text that describes it; using the text to generate a second image, extracting a text that describes it, and so on. As this process is applied recursively, AI tools are generating content from one mode to use them to create content in another mode and so on. Ideally, the embeddings of all of them would remain close to those of the original content so that only small variations are observed in the generated content versus the original one. However, it may also be the case the distance to the original embeddings increases in each iteration leading to a divergence in the process and to content that is barely related to the original one. In this paper, we present the results of an empirical study on the impact of recursive modality changes using GPT-4o, a state-of-the-art AI multimodal tool, and DALL-E 3. The results show that the multimodality loop diverges from the initial image without converging to anything specific. We have observed differences depending on the type of initial image and the configuration of the models. These findings are particularly relevant due to the increasing use of these tools for content generation, reconstruction, and adaptation, and their potential implications for the content on the Internet of the future.<|reference_end|>
|
arxiv
|
@article{conde2024analyzing,
title={Analyzing Recursiveness in Multimodal Generative Artificial
Intelligence: Stability or Divergence?},
author={Javier Conde, Tobias Cheung, Gonzalo Mart'inez, Pedro Reviriego, Rik
Sarkar},
journal={arXiv preprint arXiv:2409.16297},
year={2024},
archivePrefix={arXiv},
eprint={2409.16297},
primaryClass={cs.MM}
}
|
conde2024analyzing
|
arxiv-661488
|
2409.16298
|
BetterBodies: Reinforcement Learning guided Diffusion for Antibody Sequence Design
|
<|reference_start|>BetterBodies: Reinforcement Learning guided Diffusion for Antibody Sequence Design: Antibodies offer great potential for the treatment of various diseases. However, the discovery of therapeutic antibodies through traditional wet lab methods is expensive and time-consuming. The use of generative models in designing antibodies therefore holds great promise, as it can reduce the time and resources required. Recently, the class of diffusion models has gained considerable traction for their ability to synthesize diverse and high-quality samples. In their basic form, however, they lack mechanisms to optimize for specific properties, such as binding affinity to an antigen. In contrast, the class of offline Reinforcement Learning (RL) methods has demonstrated strong performance in navigating large search spaces, including scenarios where frequent real-world interaction, such as interaction with a wet lab, is impractical. Our novel method, BetterBodies, which combines Variational Autoencoders (VAEs) with RL guided latent diffusion, is able to generate novel sets of antibody CDRH3 sequences from different data distributions. Using the Absolut! simulator, we demonstrate the improved affinity of our novel sequences to the SARS-CoV spike receptor-binding domain. Furthermore, we reflect biophysical properties in the VAE latent space using a contrastive loss and add a novel Q-function based filtering to enhance the affinity of generated sequences. In conclusion, methods such as ours have the potential to have great implications for real-world biological sequence design, where the generation of novel high-affinity binders is a cost-intensive endeavor.<|reference_end|>
|
arxiv
|
@article{vogt2024betterbodies:,
title={BetterBodies: Reinforcement Learning guided Diffusion for Antibody
Sequence Design},
author={Yannick Vogt, Mehdi Naouar, Maria Kalweit, Christoph Cornelius
Miething, Justus Duyster, Joschka Boedecker, Gabriel Kalweit},
journal={arXiv preprint arXiv:2409.16298},
year={2024},
archivePrefix={arXiv},
eprint={2409.16298},
primaryClass={q-bio.BM cs.LG}
}
|
vogt2024betterbodies:
|
arxiv-661489
|
2409.16299
|
HyperAgent: Generalist Software Engineering Agents to Solve Coding Tasks at Scale
|
<|reference_start|>HyperAgent: Generalist Software Engineering Agents to Solve Coding Tasks at Scale: Large Language Models (LLMs) have revolutionized software engineering (SE), demonstrating remarkable capabilities in various coding tasks. While recent efforts have produced autonomous software agents based on LLMs for end-to-end development tasks, these systems are typically designed for specific SE tasks. We introduce HyperAgent, a novel generalist multi-agent system designed to address a wide spectrum of SE tasks across different programming languages by mimicking human developers' workflows. Comprising four specialized agents - Planner, Navigator, Code Editor, and Executor. HyperAgent manages the full lifecycle of SE tasks, from initial conception to final verification. Through extensive evaluations, HyperAgent achieves state-of-the-art performance across diverse SE tasks: it attains a 25.01% success rate on SWE-Bench-Lite and 31.40% on SWE-Bench-Verified for GitHub issue resolution, surpassing existing methods. Furthermore, HyperAgent demonstrates SOTA performance in repository-level code generation (RepoExec), and in fault localization and program repair (Defects4J), often outperforming specialized systems. This work represents a significant advancement towards versatile, autonomous agents capable of handling complex, multi-step SE tasks across various domains and languages, potentially transforming AI-assisted software development practices.<|reference_end|>
|
arxiv
|
@article{phan2024hyperagent:,
title={HyperAgent: Generalist Software Engineering Agents to Solve Coding Tasks
at Scale},
author={Huy Nhat Phan, Tien N. Nguyen, Phong X. Nguyen, Nghi D. Q. Bui},
journal={arXiv preprint arXiv:2409.16299},
year={2024},
archivePrefix={arXiv},
eprint={2409.16299},
primaryClass={cs.SE cs.AI}
}
|
phan2024hyperagent:
|
arxiv-661490
|
2409.16301
|
Gait Switching and Enhanced Stabilization of Walking Robots with Deep Learning-based Reachability: A Case Study on Two-link Walker
|
<|reference_start|>Gait Switching and Enhanced Stabilization of Walking Robots with Deep Learning-based Reachability: A Case Study on Two-link Walker: Learning-based approaches have recently shown notable success in legged locomotion. However, these approaches often lack accountability, necessitating empirical tests to determine their effectiveness. In this work, we are interested in designing a learning-based locomotion controller whose stability can be examined and guaranteed. This can be achieved by verifying regions of attraction (RoAs) of legged robots to their stable walking gaits. This is a non-trivial problem for legged robots due to their hybrid dynamics. Although previous work has shown the utility of Hamilton-Jacobi (HJ) reachability to solve this problem, its practicality was limited by its poor scalability. The core contribution of our work is the employment of a deep learning-based HJ reachability solution to the hybrid legged robot dynamics, which overcomes the previous work's limitation. With the learned reachability solution, first, we can estimate a library of RoAs for various gaits. Second, we can design a one-step predictive controller that effectively stabilizes to an individual gait within the verified RoA. Finally, we can devise a strategy that switches gaits, in response to external perturbations, whose feasibility is guided by the RoA analysis. We demonstrate our method in a two-link walker simulation, whose mathematical model is well established. Our method achieves improved stability than previous model-based methods, while ensuring transparency that was not present in the existing learning-based approaches.<|reference_end|>
|
arxiv
|
@article{xia2024gait,
title={Gait Switching and Enhanced Stabilization of Walking Robots with Deep
Learning-based Reachability: A Case Study on Two-link Walker},
author={Xingpeng Xia, Jason J. Choi, Ayush Agrawal, Koushil Sreenath, Claire
J. Tomlin, Somil Bansal},
journal={arXiv preprint arXiv:2409.16301},
year={2024},
archivePrefix={arXiv},
eprint={2409.16301},
primaryClass={cs.RO cs.LG cs.SY eess.SY}
}
|
xia2024gait
|
arxiv-661491
|
2409.16302
|
How Redundant Is the Transformer Stack in Speech Representation Models?
|
<|reference_start|>How Redundant Is the Transformer Stack in Speech Representation Models?: Self-supervised speech representation models, particularly those leveraging transformer architectures, have demonstrated remarkable performance across various tasks such as speech recognition, speaker identification, and emotion detection. Recent studies on transformer models revealed a high redundancy between layers and the potential for significant pruning, which we will investigate here for transformer-based speech representation models. We perform a detailed analysis of layer similarity in speech representation models using three similarity metrics: cosine similarity, centered kernel alignment, and mutual nearest-neighbor alignment. Our findings reveal a block-like structure of high similarity, suggesting two main processing steps and significant redundancy of layers. We demonstrate the effectiveness of pruning transformer-based speech representation models without the need for post-training, achieving up to 40% reduction in transformer layers while maintaining over 95% of the model's predictive capacity. Furthermore, we employ a knowledge distillation method to substitute the entire transformer stack with mimicking layers, reducing the network size 95-98% and the inference time by up to 94%. This substantial decrease in computational load occurs without considerable performance loss, suggesting that the transformer stack is almost completely redundant for downstream applications of speech representation models.<|reference_end|>
|
arxiv
|
@article{dorszewski2024how,
title={How Redundant Is the Transformer Stack in Speech Representation Models?},
author={Teresa Dorszewski, Albert Kj{o}ller Jacobsen, Lenka Tv{e}tkov'a,
Lars Kai Hansen},
journal={arXiv preprint arXiv:2409.16302},
year={2024},
archivePrefix={arXiv},
eprint={2409.16302},
primaryClass={eess.AS cs.CL cs.LG cs.SD}
}
|
dorszewski2024how
|
arxiv-661492
|
2409.16305
|
Damage detection in an uncertain nonlinear beam based on stochastic Volterra series: an experimental application
|
<|reference_start|>Damage detection in an uncertain nonlinear beam based on stochastic Volterra series: an experimental application: The damage detection problem becomes a more difficult task when the intrinsically nonlinear behavior of the structures and the natural data variation are considered in the analysis because both phenomena can be confused with damage if linear and deterministic approaches are implemented. Therefore, this work aims the experimental application of a stochastic version of the Volterra series combined with a novelty detection approach to detect damage in an initially nonlinear system taking into account the measured data variation, caused by the presence of uncertainties. The experimental setup is composed by a cantilever beam operating in a nonlinear regime of motion, even in the healthy condition, induced by the presence of a magnet near to the free extremity. The damage associated with mass changes in a bolted connection (nuts loosed) is detected based on the comparison between linear and nonlinear contributions of the stochastic Volterra kernels in the total response, estimated in the reference and damaged conditions. The experimental measurements were performed on different days to add natural variation to the data measured. The results obtained through the stochastic proposed approach are compared with those obtained by the deterministic version of the Volterra series, showing the advantage of the stochastic model use when we consider the experimental data variation with the capability to detect the presence of the damage with statistical confidence. Besides, the nonlinear metric used presented a higher sensitivity to the occurrence of the damage compared with the linear one, justifying the application of a nonlinear metric when the system exhibits intrinsically nonlinear behavior.<|reference_end|>
|
arxiv
|
@article{villani2024damage,
title={Damage detection in an uncertain nonlinear beam based on stochastic
Volterra series: an experimental application},
author={Luis Gustavo Gioacon Villani, Samuel da Silva, Americo Cunha Jr, and
Michael D. Todd},
journal={Mechanical Systems and Signal Processing, vol. 128, pp. 463-478,
2019},
year={2024},
doi={10.1016/j.ymssp.2019.03.045},
archivePrefix={arXiv},
eprint={2409.16305},
primaryClass={cs.CE cs.CV cs.LG math.PR stat.AP}
}
|
villani2024damage
|
arxiv-661493
|
2409.16307
|
DeepScore: A Comprehensive Approach to Measuring Quality in AI-Generated Clinical Documentation
|
<|reference_start|>DeepScore: A Comprehensive Approach to Measuring Quality in AI-Generated Clinical Documentation: Medical practitioners are rapidly adopting generative AI solutions for clinical documentation, leading to significant time savings and reduced stress. However, evaluating the quality of AI-generated documentation is a complex and ongoing challenge. This paper presents an overview of DeepScribe's methodologies for assessing and managing note quality, focusing on various metrics and the composite "DeepScore", an overall index of quality and accuracy. These methodologies aim to enhance the quality of patient care documentation through accountability and continuous improvement.<|reference_end|>
|
arxiv
|
@article{oleson2024deepscore:,
title={DeepScore: A Comprehensive Approach to Measuring Quality in AI-Generated
Clinical Documentation},
author={Jon Oleson},
journal={arXiv preprint arXiv:2409.16307},
year={2024},
archivePrefix={arXiv},
eprint={2409.16307},
primaryClass={cs.CL cs.AI stat.AP}
}
|
oleson2024deepscore:
|
arxiv-661494
|
2409.16308
|
Probabilistic Spatiotemporal Modeling of Day-Ahead Wind Power Generation with Input-Warped Gaussian Processes
|
<|reference_start|>Probabilistic Spatiotemporal Modeling of Day-Ahead Wind Power Generation with Input-Warped Gaussian Processes: We design a Gaussian Process (GP) spatiotemporal model to capture features of day-ahead wind power forecasts. We work with hourly-scale day-ahead forecasts across hundreds of wind farm locations, with the main aim of constructing a fully probabilistic joint model across space and hours of the day. To this end, we design a separable space-time kernel, implementing both temporal and spatial input warping to capture the non-stationarity in the covariance of wind power. We conduct synthetic experiments to validate our choice of the spatial kernel and to demonstrate the effectiveness of warping in addressing nonstationarity. The second half of the paper is devoted to a detailed case study using a realistic, fully calibrated dataset representing wind farms in the ERCOT region of Texas.<|reference_end|>
|
arxiv
|
@article{li2024probabilistic,
title={Probabilistic Spatiotemporal Modeling of Day-Ahead Wind Power Generation
with Input-Warped Gaussian Processes},
author={Qiqi Li and Mike Ludkovski},
journal={arXiv preprint arXiv:2409.16308},
year={2024},
archivePrefix={arXiv},
eprint={2409.16308},
primaryClass={cs.LG cs.SY eess.SY physics.ao-ph physics.data-an stat.AP}
}
|
li2024probabilistic
|
arxiv-661495
|
2409.16310
|
A Survey on Codes from Simplicial Complexes
|
<|reference_start|>A Survey on Codes from Simplicial Complexes: In the field of mathematics, a purely combinatorial equivalent to a simplicial complex, or more generally, a down-set, is an abstract structure known as a family of sets. This family is closed under the operation of taking subsets, meaning that every subset of a set within the family is also included in the family. The purpose of this paper is two-fold. Firstly, it aims to present a comprehensive survey of recent results in the field. This survey intends to provide an overview of the advancements made in codes constructed from simplicial complexes. Secondly, the paper seeks to propose open problems that are anticipated to stimulate further research in this area. By highlighting these open problems, the paper aims to encourage and inspire future investigations and developments in the field of codes derived from simplicial complexes.<|reference_end|>
|
arxiv
|
@article{wu2024a,
title={A Survey on Codes from Simplicial Complexes},
author={Yansheng Wu, Chao Li, Jong Yoon Hyun},
journal={arXiv preprint arXiv:2409.16310},
year={2024},
archivePrefix={arXiv},
eprint={2409.16310},
primaryClass={cs.IT math.CO math.IT}
}
|
wu2024a
|
arxiv-661496
|
2409.16311
|
New Insights into Global Warming: End-to-End Visual Analysis and Prediction of Temperature Variations
|
<|reference_start|>New Insights into Global Warming: End-to-End Visual Analysis and Prediction of Temperature Variations: Global warming presents an unprecedented challenge to our planet however comprehensive understanding remains hindered by geographical biases temporal limitations and lack of standardization in existing research. An end to end visual analysis of global warming using three distinct temperature datasets is presented. A baseline adjusted from the Paris Agreements one point five degrees Celsius benchmark based on data analysis is employed. A closed loop design from visualization to prediction and clustering is created using classic models tailored to the characteristics of the data. This approach reduces complexity and eliminates the need for advanced feature engineering. A lightweight convolutional neural network and long short term memory model specifically designed for global temperature change is proposed achieving exceptional accuracy in long term forecasting with a mean squared error of three times ten to the power of negative six and an R squared value of zero point nine nine nine nine. Dynamic time warping and KMeans clustering elucidate national level temperature anomalies and carbon emission patterns. This comprehensive method reveals intricate spatiotemporal characteristics of global temperature variations and provides warming trend attribution. The findings offer new insights into climate change dynamics demonstrating that simplicity and precision can coexist in environmental analysis.<|reference_end|>
|
arxiv
|
@article{zhou2024new,
title={New Insights into Global Warming: End-to-End Visual Analysis and
Prediction of Temperature Variations},
author={Meihua Zhou, Nan Wan, Tianlong Zheng, Hanwen Xu, Li Yang, and Tingting
Wang},
journal={arXiv preprint arXiv:2409.16311},
year={2024},
archivePrefix={arXiv},
eprint={2409.16311},
primaryClass={physics.ao-ph cs.HC stat.AP}
}
|
zhou2024new
|
arxiv-661497
|
2409.16312
|
SEE: Semantically Aligned EEG-to-Text Translation
|
<|reference_start|>SEE: Semantically Aligned EEG-to-Text Translation: Decoding neurophysiological signals into language is of great research interest within brain-computer interface (BCI) applications. Electroencephalography (EEG), known for its non-invasiveness, ease of use, and cost-effectiveness, has been a popular method in this field. However, current EEG-to-Text decoding approaches face challenges due to the huge domain gap between EEG recordings and raw texts, inherent data bias, and small closed vocabularies. In this paper, we propose SEE: Semantically Aligned EEG-to-Text Translation, a novel method aimed at improving EEG-to-Text decoding by seamlessly integrating two modules into a pre-trained BART language model. These two modules include (1) a Cross-Modal Codebook that learns cross-modal representations to enhance feature consolidation and mitigate domain gap, and (2) a Semantic Matching Module that fully utilizes pre-trained text representations to align multi-modal features extracted from EEG-Text pairs while considering noise caused by false negatives, i.e., data from different EEG-Text pairs that have similar semantic meanings. Experimental results on the Zurich Cognitive Language Processing Corpus (ZuCo) demonstrate the effectiveness of SEE, which enhances the feasibility of accurate EEG-to-Text decoding.<|reference_end|>
|
arxiv
|
@article{tao2024see:,
title={SEE: Semantically Aligned EEG-to-Text Translation},
author={Yitian Tao, Yan Liang, Luoyu Wang, Yongqing Li, Qing Yang, and Han
Zhang},
journal={arXiv preprint arXiv:2409.16312},
year={2024},
archivePrefix={arXiv},
eprint={2409.16312},
primaryClass={q-bio.QM cs.AI eess.SP}
}
|
tao2024see:
|
arxiv-661498
|
2409.16313
|
SEA-ViT: Sea Surface Currents Forecasting Using Vision Transformer and GRU-Based Spatio-Temporal Covariance Modeling
|
<|reference_start|>SEA-ViT: Sea Surface Currents Forecasting Using Vision Transformer and GRU-Based Spatio-Temporal Covariance Modeling: Forecasting sea surface currents is essential for applications such as maritime navigation, environmental monitoring, and climate analysis, particularly in regions like the Gulf of Thailand and the Andaman Sea. This paper introduces SEA-ViT, an advanced deep learning model that integrates Vision Transformer (ViT) with bidirectional Gated Recurrent Units (GRUs) to capture spatio-temporal covariance for predicting sea surface currents (U, V) using high-frequency radar (HF) data. The name SEA-ViT is derived from ``Sea Surface Currents Forecasting using Vision Transformer,'' highlighting the model's emphasis on ocean dynamics and its use of the ViT architecture to enhance forecasting capabilities. SEA-ViT is designed to unravel complex dependencies by leveraging a rich dataset spanning over 30 years and incorporating ENSO indices (El Ni\~no, La Ni\~na, and neutral phases) to address the intricate relationship between geographic coordinates and climatic variations. This development enhances the predictive capabilities for sea surface currents, supporting the efforts of the Geo-Informatics and Space Technology Development Agency (GISTDA) in Thailand's maritime regions. The code and pretrained models are available at \url{https://github.com/kaopanboonyuen/gistda-ai-sea-surface-currents}.<|reference_end|>
|
arxiv
|
@article{panboonyuen2024sea-vit:,
title={SEA-ViT: Sea Surface Currents Forecasting Using Vision Transformer and
GRU-Based Spatio-Temporal Covariance Modeling},
author={Teerapong Panboonyuen},
journal={arXiv preprint arXiv:2409.16313},
year={2024},
archivePrefix={arXiv},
eprint={2409.16313},
primaryClass={physics.ao-ph cs.LG}
}
|
panboonyuen2024sea-vit:
|
arxiv-661499
|
2409.16316
|
Surface solar radiation: AI satellite retrieval can outperform Heliosat and generalizes well to other climate zones
|
<|reference_start|>Surface solar radiation: AI satellite retrieval can outperform Heliosat and generalizes well to other climate zones: Accurate estimates of surface solar irradiance (SSI) are essential for solar resource assessments and solar energy forecasts in grid integration and building control applications. SSI estimates for spatially extended regions can be retrieved from geostationary satellites such as Meteosat. Traditional SSI satellite retrievals like Heliosat rely on physical radiative transfer modelling. We introduce the first machine-learning-based satellite retrieval for instantaneous SSI and demonstrate its capability to provide accurate and generalizable SSI estimates across Europe. Our deep learning retrieval provides near real-time SSI estimates based on data-driven emulation of Heliosat and fine-tuning on pyranometer networks. By including SSI from ground stations, our SSI retrieval model can outperform Heliosat accuracy and generalize well to regions with other climates and surface albedos in cloudy conditions (clear-sky index < 0.8). We also show that the SSI retrieved from Heliosat exhibits large biases in mountain regions, and that training and fine-tuning our retrieval models on SSI data from ground stations strongly reduces these biases, outperforming Heliosat. Furthermore, we quantify the relative importance of the Meteosat channels and other predictor variables like solar zenith angle for the accuracy of our deep learning SSI retrieval model in different cloud conditions. We find that in cloudy conditions multiple near-infrared and infrared channels enhance the performance. Our results can facilitate the development of more accurate satellite retrieval models of surface solar irradiance.<|reference_end|>
|
arxiv
|
@article{schuurman2024surface,
title={Surface solar radiation: AI satellite retrieval can outperform Heliosat
and generalizes well to other climate zones},
author={K. R. Schuurman and A. Meyer},
journal={arXiv preprint arXiv:2409.16316},
year={2024},
archivePrefix={arXiv},
eprint={2409.16316},
primaryClass={physics.ao-ph cs.AI cs.LG}
}
|
schuurman2024surface
|
arxiv-661500
|
2409.16317
|
A Literature Review of Keyword Spotting Technologies for Urdu
|
<|reference_start|>A Literature Review of Keyword Spotting Technologies for Urdu: This literature review surveys the advancements of keyword spotting (KWS) technologies, specifically focusing on Urdu, Pakistan's low-resource language (LRL), which has complex phonetics. Despite the global strides in speech technology, Urdu presents unique challenges requiring more tailored solutions. The review traces the evolution from foundational Gaussian Mixture Models to sophisticated neural architectures like deep neural networks and transformers, highlighting significant milestones such as integrating multi-task learning and self-supervised approaches that leverage unlabeled data. It examines emerging technologies' role in enhancing KWS systems' performance within multilingual and resource-constrained settings, emphasizing the need for innovations that cater to languages like Urdu. Thus, this review underscores the need for context-specific research addressing the inherent complexities of Urdu and similar URLs and the means of regions communicating through such languages for a more inclusive approach to speech technology.<|reference_end|>
|
arxiv
|
@article{rizvi2024a,
title={A Literature Review of Keyword Spotting Technologies for Urdu},
author={Syed Muhammad Aqdas Rizvi},
journal={arXiv preprint arXiv:2409.16317},
year={2024},
archivePrefix={arXiv},
eprint={2409.16317},
primaryClass={eess.AS cs.AI cs.CL cs.LG cs.SD}
}
|
rizvi2024a
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.