forum_id (stringlengths 9–20) | forum_title (stringlengths 3–179) | forum_authors (sequencelengths 0–82) | forum_abstract (stringlengths 1–3.52k) | forum_keywords (sequencelengths 1–29) | forum_decision (stringclasses, 22 values) | forum_pdf_url (stringlengths 39–50) | forum_url (stringlengths 41–52) | venue (stringclasses, 46 values) | year (stringdate, 2013-01-01 to 2025-01-01) | reviews (sequence)
---|---|---|---|---|---|---|---|---|---|---
Su1I7Z64hC | LoFTPat: Low-Rank Subspace Optimization for Parameter-Efficient Fine-Tuning of Genomic Language Models in Pathogenicity Identification | [
"Sajib Acharjee Dip"
] | Pathogen identification from genomic sequences is vital for disease surveillance, antimicrobial resistance monitoring, and vaccine development. While Large Language Models (LLMs) excel in genomic sequence modeling, existing approaches prioritize accuracy over efficiency, leading to high memory overhead, long training times, and scalability issues. We introduce LoFTPat, a structurally constrained fine-tuning framework that integrates Low-Rank Adaptation within PathoLM’s self-attention layers, enabling efficient task-specific weight modulation.
LoFTPat reduces training time by 4.02\%, GPU memory usage by 64.3\%, and trainable parameters by 99.24\%, while surpassing full fine-tuning approaches with +0.44\% accuracy, +0.44\% F1 score, +0.02\% AUC-ROC, and +0.52\% balanced accuracy. It efficiently adapts to short- and long-read sequences, demonstrating strong generalization across bacterial and viral pathogens. By optimizing feature transformations with minimal parameter overhead, LoFTPat offers a scalable, computationally efficient framework for large-scale pathogen classification and genomic analysis. | [
"loftpat",
"subspace optimization",
"genomic language models",
"approaches",
"accuracy",
"pathogenicity identification loftpat",
"genomic sequences",
"vital",
"disease surveillance"
] | Accept | https://openreview.net/pdf?id=Su1I7Z64hC | https://openreview.net/forum?id=Su1I7Z64hC | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"5d53cCV1GX"
],
"note_type": [
"decision"
],
"note_created": [
1741071728606
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"comment\": \"Decision is based on reviews submitted on both (duplicate) submissions.\", \"title\": \"Paper Decision\"}"
]
} |
SrPaXJdYmP | Hierarchical Assembly of Long DNA Libraries from Short Oligonucleotide Pools | [
"Shaozhong Zou",
"zhien wu",
"Chunfu Xu"
] | Large-scale screening and high-throughput experimental data generation are essential for advancing AI-driven genomics research. However, these processes are generally constrained by the length limitation of chip-synthesized oligo-pools ($<300$ bp). In addition, synthesizing gene-sized DNA sequences at scale remains economically unfeasible, making it difficult to validate the experimental performance of certain machine learning models or to generate new datasets for further training. To address this challenge, we developed a novel method for the high-throughput assembly of gene-sized DNA sequences, starting from cost-effective chip-synthesized oligo-pools. In contrast to Polymerase Cycling Assembly (PCA), we employed Golden Gate Assembly (GGA) to facilitate the ligation of short DNA fragments. This approach enabled us to successfully assemble high-quality DNA libraries containing up to 96 gene-sized sequences (600 bp) in a single-pot reaction, with convenient retrieval of individual sequences. If numerous reactions are conducted in parallel---for example, in a 96-well plate---we can readily assemble up to 9,216 (96 x 96) genes. When combined with advances in automation technologies, this enables the efficient and cost-effective synthesis of gene-sized DNA sequences at scale, thereby accelerating the generation of experimental data for the Machine Learning community. | [
"dna sequences",
"hierarchical assembly",
"long dna libraries",
"short oligonucleotide",
"scale",
"screening",
"experimental data generation",
"essential",
"genomics research"
] | Accept | https://openreview.net/pdf?id=SrPaXJdYmP | https://openreview.net/forum?id=SrPaXJdYmP | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"QVzPEf2dUu"
],
"note_type": [
"decision"
],
"note_created": [
1741031685398
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
Sh88LK85AR | SPELL: Spatial Prompting with Chain-of-Thought for Zero-Shot Learning in Spatial Transcriptomics | [
"Sumeer Ahmad Khan",
"Xabier Martinez de Morentin",
"Vincenzo Lagani",
"Robert Lehmann",
"Abdel Rahman Alsabbagh",
"Mahmoud Zahran",
"Narsis A. Kiani",
"David Gomez-Cabrero",
"Jesper Tegnér"
] | Zero-shot learning (ZSL) for cell-type classification in spatially resolved transcriptomics remains underexplored, particularly when integrating spatial context with marker gene semantics. Here, we introduce SPELL (Spatial Prompt-Enhanced Zero-Shot Learning), combining graph autoencoder (GAE)-derived spatial embeddings with chain-of-thought (CoT) prompting for zero-shot classification.
SPELL uses a spatial k-nearest neighbor graph to encode local cellular neighborhoods and generates interpretable prompts that integrate marker gene expression and the spatial embedding norms. We evaluated SPELL across five state-of-the-art zero-shot LLM classifiers on MERFISH, MIBI-TOF, and Stereo-seq datasets for cell-type classification.
Guided by only expression values and spatial context, the two BART models solved the classification task surprisingly well (distilbart-mnli-12-1 achieved 64\% accuracy on the MERFISH dataset, bart-large-mnli achieved 52\% accuracy on the MIBI-TOF dataset). Interestingly, removing the spatial context from the CoT prompt revealed a significant performance drop (20–24\% drop in accuracy), underscoring spatial information's critical role in zero-shot learning.
Our work bridges spatial omics with LLM reasoning, enabling flexible adaptation and offering robust cell-type classification across diverse datasets without task-specific fine-tuning while maintaining biological interpretability. | [
"learning",
"spatial",
"classification",
"spell",
"spatial context",
"accuracy",
"merfish",
"spatial transcriptomics spell",
"spatial transcriptomics",
"zsl"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=Sh88LK85AR | https://openreview.net/forum?id=Sh88LK85AR | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"Zbv3wjFFMd"
],
"note_type": [
"decision"
],
"note_created": [
1741143642465
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
Rr2opkycEu | AI-Powered Virtual Tissues from Spatial Proteomics for Clinical Diagnostics and Biomedical Discovery | [
"Johann Wenckstern",
"Eeshaan Jain",
"Kiril Vasilev",
"Matteo Pariset",
"Andreas Wicki",
"Gabriele Gut",
"Charlotte Bunne"
] | Spatial proteomics technologies have transformed our understanding of complex tissue architectures by enabling simultaneous analysis of multiple molecular markers and their spatial organization. The high dimensionality of these data, varying marker combinations across experiments and heterogeneous study designs pose unique challenges for computational analysis. Here, we present Virtual Tissues (VirTues), a foundation model framework for biological tissues that operates across the molecular, cellular and tissue scale. VirTues introduces innovations in transformer architecture design, including a novel tokenization scheme that captures both spatial and marker dimensions, and attention mechanisms that scale to high-dimensional multiplex data while maintaining interpretability. Trained on diverse cancer and non-cancer tissue datasets, VirTues demonstrates strong generalization capabilities without task-specific fine-tuning, enabling cross-study analysis and novel marker integration. As a generalist model, VirTues outperforms existing approaches across clinical diagnostics, biological discovery and patient case retrieval tasks, while providing insights into tissue function and disease mechanisms. | [
"virtual tissues",
"clinical diagnostics",
"virtues",
"spatial proteomics",
"biomedical discovery",
"understanding",
"complex tissue architectures",
"simultaneous analysis",
"multiple molecular markers"
] | Accept (Spotlight) | https://openreview.net/pdf?id=Rr2opkycEu | https://openreview.net/forum?id=Rr2opkycEu | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"ldT0rnAXiY"
],
"note_type": [
"decision"
],
"note_created": [
1740961565419
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
RdHLANURna | A data-driven recommendation framework for genomic discovery | [
"Ying Yang",
"Zhaoying Pan",
"Jinge Ma",
"Daniel J. Klionsky"
] | Data-driven approaches to genomic discovery have been accelerated by emerging efforts in machine learning. However, due to the inherent complexity of genomic data, it can be challenging to model or utilize the data and their intricate relationships. In this work, we propose a framework for genomic prediction utilizing information from various genomic databases. We use a knowledge graph following existing work to extract gene representations and either use XGBoost or construct a graph to rank feature importance. By filtering key features and computing relevancy scores with genes that are known to be associated or unassociated with a specified area, we recommend unlabeled gene candidates with a high likelihood of association for further genomic research. We demonstrate how this framework works by applying it to autophagy genomics, illustrating its potential as a powerful recommendation system for genomic discovery. | [
"genomic discovery",
"recommendation framework",
"work",
"framework",
"approaches",
"discovery",
"efforts",
"machine learning",
"due",
"inherent complexity"
] | Accept | https://openreview.net/pdf?id=RdHLANURna | https://openreview.net/forum?id=RdHLANURna | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"IVtsbOXwEv"
],
"note_type": [
"decision"
],
"note_created": [
1740961101763
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
R2LTPle31d | Piloting Structure-Based Drug Design via Modality-Specific Optimal Schedule | [
"Keyue Qiu",
"Yuxuan Song",
"Zhehuan Fan",
"Peidong Liu",
"Zhe Zhang",
"Mingyue Zheng",
"Hao Zhou",
"Wei-Ying Ma"
] | Structure-Based Drug Design (SBDD) is crucial for identifying bioactive molecules. Recent deep generative models are faced with challenges in geometric structure modeling. A major bottleneck lies in the twisted probability path of multi-modalities—continuous 3D positions and discrete 2D topologies—which jointly determine molecular geometries. By establishing the fact that noise schedules decide the Variational Lower Bound (VLB) for the twisted probability path, we propose VLB-Optimal Scheduling (VOS) strategy in this under-explored area, which optimizes VLB as a path integral for SBDD. Our model effectively enhances molecular geometries and interaction modeling, achieving state-of-the-art PoseBusters passing rate of 95.9\% on CrossDock, more than 10\% improvement upon strong baselines, while unlocking the potential of repurposing SBDD model as docking method, with 44.0\% RMSD $<$ 2\r{A} on PoseBusters V2. | [
"drug design",
"optimal schedule",
"sbdd",
"twisted probability path",
"molecular geometries",
"vlb",
"crucial",
"bioactive molecules",
"challenges"
] | Accept | https://openreview.net/pdf?id=R2LTPle31d | https://openreview.net/forum?id=R2LTPle31d | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"Dld7c6i8Bz"
],
"note_type": [
"decision"
],
"note_created": [
1740860973979
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
QcujughqCJ | DrugAgent: Multi-Agent Large Language Model-Based Reasoning for Drug-Target Interaction Prediction | [
"Yoshitaka Inoue",
"Tianci Song",
"Xinling Wang",
"Augustin Luna",
"Tianfan Fu"
] | Advancements in large language models (LLMs) allow them to address diverse questions using human-like interfaces. Still, limitations in their training prevent them from answering accurately in scenarios that could benefit from multiple perspectives. Multi-agent systems allow the resolution of questions to enhance result consistency and reliability. While drug-target interaction (DTI) prediction is important for drug discovery, existing approaches face challenges due to complex biological systems and the lack of interpretability needed for clinical applications.
DrugAgent is a multi-agent LLM system for DTI prediction that combines multiple specialized perspectives with transparent reasoning. Our system adapts and extends existing multi-agent frameworks by (1) applying coordinator-based architecture to the DTI domain, (2) integrating domain-specific data sources, including ML predictions, knowledge graphs, and literature evidence, and (3) incorporating Chain-of-Thought (CoT) and ReAct (Reason+Act) frameworks for transparent DTI reasoning.
We conducted comprehensive experiments using a kinase inhibitor dataset, where our multi-agent LLM method outperformed the non-reasoning multi-agent model (GPT-4o mini) by 45% in F1 score (0.514 vs 0.355). Through ablation studies, we demonstrated the contributions of each agent, with the AI agent being the most impactful, followed by the KG agent and search agent. Most importantly, our approach provides detailed, human-interpretable reasoning for each prediction by combining evidence from multiple sources - a critical feature for biomedical applications where understanding the rationale behind predictions is essential for clinical decision-making and regulatory compliance. Code is available at https://anonymous.4open.science/r/DrugAgent-B2EA. | [
"reasoning",
"drugagent",
"large language",
"prediction",
"interaction prediction drugagent",
"interaction prediction advancements",
"large language models",
"llms",
"diverse questions",
"interfaces"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=QcujughqCJ | https://openreview.net/forum?id=QcujughqCJ | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"ticYPytMBy"
],
"note_type": [
"decision"
],
"note_created": [
1740961985648
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
Q74mey5NZl | Enhancing E. coli Genomic Analysis with Retrieval-Augmented Generation | [
"KRITIKA CHUGH"
] | This study presents a framework that leverages retrieval augmented generation (RAG) to enhance the interpretation and analysis of complex bioinformatics data in Escherichia coli (E.coli) genomics. By integrating bioinformatics tools including pairwise alignment, NCBI annotation, multiple sequence alignment (MSA) with large language models (LLMs) such as GPT o3-mini, Gemini 2.0 Advanced Flash Thinking Experimental model, and Grok 3, our approach combines real-time data retrieval with dynamic natural language generation. This integration enables the conversion of raw computational output into coherent and accessible narratives, facilitating a deeper understanding of genomic organization and gene function. The RAG framework augments LLM capabilities by retrieving the latest domain-specific knowledge, which is then used to refine and contextualize the insights generated. Through custom prompt engineering, our system synthesizes diverse datasets to highlight key aspects of genomic variation, conserved synteny, and annotation consistency across multiple E. coli strains. In general, our work demonstrates that integrating RAG with traditional bioinformatics methods offers a powerful, scalable solution to transform complex genomic datasets into actionable biological insights, paving the way for more efficient and accurate genomic analysis in microbial research. | [
"coli genomic analysis",
"generation",
"rag",
"study",
"framework",
"retrieval augmented generation",
"interpretation",
"analysis",
"complex bioinformatics data",
"escherichia coli"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=Q74mey5NZl | https://openreview.net/forum?id=Q74mey5NZl | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"Elo55Kj1z1"
],
"note_type": [
"decision"
],
"note_created": [
1741143597767
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
Q15Dg5lQou | Benchmarking Fine-Tuned RNA Language Models for Intronic Branch Point Prediction | [
"Pablo Rodenas Ruiz",
"Ali Saadat",
"Timothy T. Tran",
"Oliver Müller Smedt",
"Peng Zhang",
"Jacques Fellay"
] | Accurate prediction of RNA branch points is critical for understanding splicing mechanisms and identifying variants that may lead to genetic diseases. Despite their biological importance, few computational methods have been developed for reliably identifying branch points. In this work, we fine-tune several RNA language models for branch point prediction. The top-performing model, ERNIE-RNA, achieved an $F_1$ score of 0.811, a sequence accuracy of 0.790, and an average precision score of 0.868, outperforming previous leading models. These results showcase the potential of RNA-specific language models in capturing the subtle sequence features relevant to splicing. Our findings suggest that extended training and hyperparameter tuning could yield additional performance gains, positioning this study as a strong baseline for future research in RNA splicing. | [
"rna language models",
"rna branch points",
"critical",
"mechanisms",
"variants",
"genetic diseases",
"biological importance",
"computational methods"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=Q15Dg5lQou | https://openreview.net/forum?id=Q15Dg5lQou | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"rfHylvUvuM"
],
"note_type": [
"decision"
],
"note_created": [
1740950643263
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
PdbfAKWZs3 | Multi-modal single-cell foundation models via dynamic token adaptation | [
"Wenmin Zhao",
"Ana Solaguren-Beascoa",
"Grant Neilson",
"Louwai Muhammed",
"Liisi Laaniste",
"Aylin Cakiroglu"
] | Recent advances in applying deep learning in genomics include DNA-language and single-cell foundation models. However, these models take only one data type as input. We introduce dynamic token adaptation and demonstrate how it allows combining these models to predict gene regulation at single-cell level in different genetic contexts. Although the method is generalisable, we focus on an illustrative example by training an adapter from DNA-sequence embeddings to a single-cell foundation model's token embedding space. As qualitative evaluation, we assess the impact of DNA sequence changes on the model’s learned gene regulatory networks by mutating the transcriptional start site of the transcription factor \textit{GATA4} \textit{in silico}, observing predicted expression changes in its target genes in fetal cardiomyocytes. | [
"foundation models",
"dynamic token adaptation",
"models",
"deep learning",
"genomics",
"data type",
"input",
"gene regulation",
"level"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=PdbfAKWZs3 | https://openreview.net/forum?id=PdbfAKWZs3 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"NU4XJF1NBu"
],
"note_type": [
"decision"
],
"note_created": [
1740961024196
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
NYumxueWx2 | What do single-cell models already know about perturbations? | [
"Andreas Bjerregaard",
"Vivek Das",
"Anders Krogh"
] | Generative models implicitly learn underlying dynamics of data and can do more than just reconstruction. By leveraging output gradients with respect to the latent dimensions, we explore a simple approach to infer arbitrary perturbation effects which generates interpretive flow maps within high-dimensional biological datasets. By applying this method to several cases in single-cell RNA-sequencing, we demonstrate its use in inferring effects from knockdown, overexpression, toxin response and embryonic development. This approach can further add global structure to dimensionality reductions which normally only preserve local patterns. Needing only a decoder, our method simplifies analyses, is applicable to already trained models, and offers clearer insights into cellular dynamics without complex setups. In turn, this gives a more straightforward interpretation of results, making it easier to discern underlying biological pathways with easily understandable visual representations. Code available on https://github.com/yhsure/perturbations. | [
"models",
"perturbations",
"generative models",
"dynamics",
"data",
"reconstruction",
"output gradients",
"respect",
"latent dimensions",
"simple"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=NYumxueWx2 | https://openreview.net/forum?id=NYumxueWx2 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"D8v37FHblz"
],
"note_type": [
"decision"
],
"note_created": [
1741143628053
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
N9uuyDEUIY | Featurization of single cell trajectories through kernel mean embedding of optimal transport maps | [
"Alec Plotkin",
"Justin Milner",
"Natalie Stanley"
Longitudinal single-cell data has spurred the development of computational trajectory models with the power to make time-resolved, testable predictions about cell fates. As "real-time" trajectory inference methods proliferate, there is a growing need for tools that integrate their inherently high-dimensional outputs. In this work, we propose a novel strategy to facilitate downstream analysis of single-cell optimal-transport trajectory models, by constructing feature vectors that contain information about a cell's state across the entirety of its trajectory. This approach leverages kernel mean embedding of distributions to create trajectory features with applications in several domains, including cell clustering and comparison of perturbation response trajectories. We demonstrate how k-means clustering on trajectory features produces interpretable clusters that respect the underlying cell trajectories. Furthermore, we develop a divergence metric between single-cell trajectories based on the maximum mean discrepancy (MMD). We use this trajectory divergence to show that modeling perturbation trajectories may help uncover experimentally interesting perturbations at higher significance levels than by comparing perturbation responses at only a single time point. | [
"kernel mean embedding",
"single cell trajectories",
"trajectory features",
"featurization",
"data",
"development",
"computational trajectory models",
"power"
] | Accept | https://openreview.net/pdf?id=N9uuyDEUIY | https://openreview.net/forum?id=N9uuyDEUIY | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"KpSmh0hBgT"
],
"note_type": [
"decision"
],
"note_created": [
1741071308310
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
N5nhleVGAi | Structure-based metabolite function prediction using graph neural networks | [
"Tancredi Cogne",
"Mariam Ait Oumelloul",
"Ali Saadat",
"Janna Hastings",
"Jacques Fellay"
] | Being able to broadly predict the function of novel metabolites based on their structures has applications in systems biology, environmental monitoring and drug discovery. To date, machine learning models aiming to predict functional characteristics of metabolites have largely been limited in scope to predicting single functions, or only a small number of functions simultaneously. Using the Human Metabolome Database as a source for a wider range of functional annotations, we assess the feasibility of predicting metabolite functions more broadly, as defined by four elements, namely location, role, the process it is involved in, and its physiological effect. We evaluated three graph neural network architectures to predict available functional ontology terms. Among the models tested, the Graph Attention Network, incorporating embeddings from the pre-trained ChemBERTa model to predict the process metabolites are involved in, achieved the highest performance with an F1-score of 0.889 and a recall of 0.903. The model identified function-associated structural patterns within metabolite families, demonstrating the potential for interpretably predicting metabolite functions from structural information. | [
"metabolite function prediction",
"graph neural networks",
"metabolite functions",
"able",
"function",
"novel metabolites",
"structures",
"applications",
"systems biology",
"environmental monitoring"
] | Accept | https://openreview.net/pdf?id=N5nhleVGAi | https://openreview.net/forum?id=N5nhleVGAi | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"5w9CrDTwqP"
],
"note_type": [
"decision"
],
"note_created": [
1740948020223
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
M5fIK5mDqd | Multi-omic Causal Discovery using Genotypes and Gene Expression | [
"Stephen M. Asiedu",
"David Watson"
] | Causal discovery in multi-omic datasets is crucial for understanding the bigger picture of gene regulatory mechanisms but remains challenging due to high dimensionality, differentiation of direct from indirect relationships, and hidden confounders. We introduce GENESIS (GEne Network inference from Expression SIgnals and SNPs), a constraint-based algorithm that leverages the natural causal precedence of genotypes to infer ancestral relationships in transcriptomic data. Unlike traditional causal discovery methods that start with a fully connected graph, GENESIS initializes an empty ancestrality matrix and iteratively populates it with direct, indirect or non-causal relationships using a series of provably sound marginal and conditional independence tests. By integrating genotypes as fixed causal anchors, GENESIS provides a principled “head start” to classical causal discovery algorithms, restricting the search space to biologically plausible edges. We test GENESIS on synthetic and real-world genomic datasets. This framework offers a powerful avenue for uncovering causal pathways in complex traits, with promising applications to functional genomics, drug discovery, and precision medicine. | [
"genotypes",
"genesis",
"causal discovery",
"direct",
"gene expression",
"datasets",
"crucial",
"bigger picture",
"gene regulatory mechanisms"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=M5fIK5mDqd | https://openreview.net/forum?id=M5fIK5mDqd | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"gwu0BZtfrh"
],
"note_type": [
"decision"
],
"note_created": [
1740961508839
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
L2jj5ekPxa | Leveraging GPT Continual Fine-Tuning for Improved RNA Editing Site Prediction | [
"Zohar Rosenwasser",
"Erez Levanon",
"Michael Levitt",
"Gal Oren"
] | RNA editing is a critical regulatory process that diversifies the transcriptome by altering nucleotide sequences in messenger RNA molecules. We propose a novel framework for predicting adenosine-to-inosine (A-to-I) RNA editing sites by leveraging a specialized fine-tuned GPT-4o-mini model and a tissue-specific liver dataset. Grounding our approach in the high expression levels of ADAR1 in liver tissue, we avoid confounding factors from other ADAR isoforms and complex multi-tissue data. We categorize editing levels into progressively narrower thresholds (1%, 5%, 10%, and 15%) and introduce continual fine-tuning (CFT) to guide the model step-by-step from low-editing (1%) to high-editing (15%) scenarios. Compared to static fine-tuning (SFT) on a single threshold, our multi-stage method incrementally refines the model's ability to distinguish editing features and demonstrates superior performance over base GPT-3.5/4o-mini models across various configurations. We further show that employing strict, non-overlapping threshold bins facilitates clearer distinctions between edited and non-edited sites, consequently improves performance. In contrast, reducing the distinction between edited and non-edited classes significantly degrades classification accuracy. These findings underscore the importance of biologically appropriate data partitioning and continual, threshold-based fine-tuning in enhancing the predictive power of generative language models for RNA editing. Our study paves the way for future work on building more nuanced models that incorporate tissue-specific constraints, ultimately broadening the applicability of generative AI in post-transcriptional regulation analysis.
The sources of this work are available at our repository: https://zenodo.org/records/14873200. | [
"model",
"gpt continual",
"improved rna",
"sites",
"site prediction",
"critical regulatory process",
"transcriptome",
"nucleotide sequences",
"messenger rna molecules"
] | Accept | https://openreview.net/pdf?id=L2jj5ekPxa | https://openreview.net/forum?id=L2jj5ekPxa | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"yUm7btHfcL"
],
"note_type": [
"decision"
],
"note_created": [
1740961736242
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
Ky0CkFiVhu | Helix-mRNA: A Hybrid Foundation Model For Full Sequence mRNA Therapeutics | [
"Matthew Wood",
"Mathieu Klop",
"Maxime Allard"
] | mRNA-based vaccines have become a major focus in the pharmaceutical industry. The coding sequence as well as the Untranslated Regions (UTRs) of an mRNA can strongly influence translation efficiency, stability, degradation, and other factors that collectively determine a vaccine’s effectiveness. However, optimizing mRNA sequences for those properties remains a complex challenge. Existing deep learning models often focus solely on coding region optimization, overlooking the UTRs. We present Helix-mRNA, a structured state-space-based and attention hybrid model to address these challenges. In addition to a first pre-training, a second pre-training stage allows us to specialise the model with high-quality data. We employ single nucleotide tokenization of mRNA sequences with codon separation, ensuring prior biological and structural information from the original mRNA sequence is not lost. Our model, Helix-mRNA, outperforms existing methods in analysing both UTRs and coding region properties. It can process sequences 6x longer than current approaches while using only 10\% of the parameters of existing foundation models. Its predictive capabilities extend to all mRNA regions.
We open-source the model (https://github.com/helicalAI/helical) and model weights (https://huggingface.co/helical-ai/helix-mRNA) | [
"utrs",
"model",
"hybrid foundation model",
"mrna sequences",
"https",
"vaccines",
"major focus",
"pharmaceutical industry",
"sequence"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=Ky0CkFiVhu | https://openreview.net/forum?id=Ky0CkFiVhu | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"7X9z9GTWZz"
],
"note_type": [
"decision"
],
"note_created": [
1740872622726
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
Kx3tpVG4M7 | Enhancing DNA Foundation Models to Address Masking Inefficiencies | [
"Monireh Safari",
"Pablo Andres Millan Arias",
"Scott C. Lowe",
"Lila Kari",
"Angel X Chang",
"Graham W. Taylor"
] | Masked language modelling (MLM) as a pretraining objective has been widely adopted in genomic sequence modelling. While pretrained models can successfully serve as encoders for various downstream tasks, the distribution shift between pretraining and inference detrimentally impacts performance, as the pretraining task is to map [MASK] tokens to predictions, yet the [MASK] is absent during downstream applications. This means the encoder does not prioritize its encodings of non-[MASK] tokens, and expends parameters and compute on work only relevant to the MLM task, despite this being irrelevant at deployment time.
In this work, we propose a modified encoder-decoder architecture based on the masked autoencoder framework, designed to address this inefficiency within a BERT-based transformer. We empirically show that the resulting mismatch is particularly detrimental in genomic pipelines where models are often used for feature extraction without fine-tuning. We evaluate our approach on the BIOSCAN-5M dataset, comprising over 2 million unique DNA barcodes. We achieve substantial performance gains in both closed-world and open-world classification tasks when compared against causal models and bidirectional architectures pretrained with MLM tasks. The code repository is available at https://github.com/bioscan-ml/BarcodeMAE. | [
"mask",
"dna foundation models",
"inefficiencies",
"work",
"language modelling",
"mlm",
"pretraining objective",
"genomic sequence modelling",
"pretrained models",
"encoders"
] | Accept | https://openreview.net/pdf?id=Kx3tpVG4M7 | https://openreview.net/forum?id=Kx3tpVG4M7 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"FEgwMaWfVM"
],
"note_type": [
"decision"
],
"note_created": [
1741031155151
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
HrU10YCDoh | FACA-GEN: Investigating Bias and Generalization in Active Learning for Genomics AI | [
"Amber Qayum Hawabaz"
] | In the rapidly evolving field of Genomics AI, fairness and generalization are critical challenges, especially when AI systems rely on Active Learning (AL) to optimize data selection. Traditional AL methods, while effective in selecting informative samples, often overlook fairness considerations, leading to biased models that fail to generalize across diverse populations. This paper introduces Fairness-Aware Causal Active Learning for Genomics AI (FACA-GEN), a novel framework that integrates fairness-aware AL, Causal Representation Learning (CRL), and Reinforcement Learning (RL) to address these issues. FACA-GEN dynamically selects training samples while optimizing for both fairness and causal validity, ensuring that models do not rely on biased proxies like race or ethnicity. We employ multi-objective optimization to balance informativeness, fairness, and causal validity, using RL to adaptively adjust fairness constraints over time. Additionally, we introduce Causal Consistency Loss to enforce the learning of true genetic markers and mitigate shortcut biases. Our approach actively selects samples based on informativeness, fairness, and causal relevance, overcoming bias and shortcut learning prevalent in genomics AI. Through experiments on genomics datasets, we demonstrate that FACA-GEN significantly improves model fairness and generalization, offering a more robust and equitable solution for AI-driven biology. The results show significant improvements in fairness metrics (Demographic Parity, Equalized Odds) and causal validity compared to existing methods. | [
"generalization",
"genomics",
"fairness",
"active learning",
"causal validity",
"investigating bias",
"models",
"informativeness"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=HrU10YCDoh | https://openreview.net/forum?id=HrU10YCDoh | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"PcCRPP4WsI"
],
"note_type": [
"decision"
],
"note_created": [
1740861201583
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
Gg0850bgBy | BirdieDNA: Reward-Based Pre-Training for Genomic Sequence Modeling | [
"Sam Blouir",
"Defne Circi",
"Asher Moldwin",
"Amarda Shehu"
] | Transformer-based language models have shown promise in genomics but face challenges unique to DNA, such as sequence lengths spanning hundreds of millions of base pairs and subtle long-range dependencies. Although next-token prediction remains the predominant pre-training objective (inherited from NLP), recent research suggests that multi-objective frameworks can better capture complex structure. In this work, we explore whether the Birdie framework, a reinforcement learning-based, mixture-of-objectives pre-training strategy, can similarly benefit genomic foundation models. We compare a slightly modified Birdie approach with a purely autoregressive, next token prediction baseline on standard Nucleotide Transformer benchmarks. Our results show performance gains in the DNA domain, indicating that mixture-of-objectives training could be a promising alternative to next token prediction only pre-training for genomic sequence modeling. | [
"birdiedna",
"genomic sequence",
"language models",
"promise",
"genomics",
"face challenges",
"sequence lengths",
"hundreds",
"millions",
"base pairs"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=Gg0850bgBy | https://openreview.net/forum?id=Gg0850bgBy | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"ZbkurZIGfl"
],
"note_type": [
"decision"
],
"note_created": [
1740861114555
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
G2zzdbgKxl | Predicting Drug-likeness via Biomedical Knowledge Alignment and EM-like One-Class Boundary Optimization | [
"Dongmin Bang",
"Inyoung Sung",
"Yinhua Piao",
"Sangseon Lee",
"Sun Kim"
] | The advent of generative AI now enables large-scale $\textit{de novo}$ design of molecules, but identifying viable drug candidates among them remains an open problem. Existing drug-likeness prediction methods often rely on ambiguous negative sets or purely structural features, limiting their ability to accurately classify drugs from non-drugs. In this work, we introduce BounDr.E: a novel modeling of drug-likeness as a compact space surrounding approved drugs through a dynamic deep one-class boundary approach. Specifically, we enrich the chemical space through biomedical knowledge alignment, and then iteratively tighten the drug-like boundary by pushing non-drug-like compounds outside via an Expectation-Maximization (EM)-like process. Empirically, BounDr.E achieves 10\% F1-score improvement over the previous state-of-the-art and demonstrates robust cross-dataset performance, including zero-shot toxic compound filtering. Additionally, we showcase its effectiveness through comprehensive case studies in large-scale $\textit{in silico}$ screening. | [
"biomedical knowledge alignment",
"boundary optimization",
"drugs",
"boundary",
"advent",
"generative ai",
"design",
"molecules",
"viable drug candidates",
"open problem"
] | Accept | https://openreview.net/pdf?id=G2zzdbgKxl | https://openreview.net/forum?id=G2zzdbgKxl | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"kpIfmna5Bk"
],
"note_type": [
"decision"
],
"note_created": [
1740947681527
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
F8bxhsUsLO | PRISM: Enhancing Protein Inverse Folding through Fine-Grained Retrieval on Structure-Sequence Multimodal Representations | [
"Sazan Mahbub",
"Souvik Kundu",
"Eric P. Xing"
] | 3D structure-conditioned protein sequence generation, also known as protein inverse folding, is a key challenge in computational biology. While large language models for proteins have made significant strides, they cannot dynamically integrate rich multimodal representations from existing datasets, specifically the combined information of 3D structure and 1D sequence. Additionally, as datasets grow, these models require retraining, leading to inefficiencies. In this paper, we introduce PRISM, a novel retrieval-augmented generation (RAG) framework that enhances protein sequence design by dynamically incorporating fine-grained multimodal representations from a larger set of known structure-sequence pairs. Our experiments demonstrate that PRISM significantly outperforms state-of-the-art techniques in sequence recovery, emphasizing the advantages of incorporating fine-grained, multimodal retrieval-augmented generation in protein design. | [
"prism",
"protein inverse",
"retrieval",
"multimodal representations",
"datasets",
"generation",
"multimodal representations prism",
"protein sequence generation",
"protein inverse folding",
"key challenge"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=F8bxhsUsLO | https://openreview.net/forum?id=F8bxhsUsLO | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"OfNK1jjAYI"
],
"note_type": [
"decision"
],
"note_created": [
1740872507716
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
EXq8TWHJHI | RNAGym: Benchmarks for RNA Fitness and Structure Prediction | [
"Rohit Arora",
"Murphy Angelo",
"Christian Andrew Choe",
"Aaron W Kollasch",
"Fiona Qu",
"Courtney A. Shearer",
"Ruben Weitzman",
"Artem Gazizov",
"Sarah Gurev",
"Erik Xie",
"Debora Susan Marks",
"Pascal Notin"
] | Predicting the structure and the effects of mutations in RNA are pivotal for numerous biological and medical applications. However, the evaluation of machine learning-based RNA models has been hampered by disparate and limited experimental datasets, along with inconsistent model performances across different RNA types. To address these limitations, we introduce RNAGym, a comprehensive and large-scale benchmark specifically tailored for RNA fitness and structure prediction. This benchmark suite includes over 30 standardized deep mutational scanning assays, covering hundreds of thousands of mutations, and curated RNA structure datasets. We have developed a robust evaluation framework that integrates multiple metrics suitable for both predictive tasks while accounting for the inherent limitations of experimental methods. RNAGym is designed to facilitate a systematic comparison of RNA models, offering an essential resource to enhance the development and understanding of these models within the computational biology community. | [
"rnagym",
"rna fitness",
"benchmarks",
"structure prediction",
"mutations",
"rna models",
"structure prediction rnagym",
"structure",
"effects",
"rna"
] | Accept | https://openreview.net/pdf?id=EXq8TWHJHI | https://openreview.net/forum?id=EXq8TWHJHI | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"IbBup8YWb5"
],
"note_type": [
"decision"
],
"note_created": [
1740961959768
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
EIiM0eCjoz | PIONEER: a virtual platform for iterative improvement of genomic deep learning | [
"Alessandro Crnjar",
"John J Desmarais",
"Justin Kinney",
"Peter K Koo"
] | Deep neural networks (DNNs) have improved our ability to predict regulatory activity from DNA sequences, providing valuable insights into gene regulation. However, these models often fail to generalize to sequences underrepresented in their training data, limiting applications like variant effect prediction and de novo sequence design. This limitation reflects a bias toward natural variation across the genome, making DNNs vulnerable to covariate shifts, where test sequences diverge statistically from the training distribution. Here, we introduce PIONEER, a computational platform that simulates functional genomics experiments to systematically benchmark and optimize training data composition through iterative AI-experiment cycles. Using PIONEER, we compare sequence proposal strategies—including active learning and random baselines—evaluating their impact on model generalization across increasing levels of covariate shift. To ensure a fair comparison, we also assess each approach within a fixed experimental budget, accounting for DNA synthesis costs. PIONEER provides a scalable and extensible framework for optimizing training data composition to enhance model generalization, advancing applications in regulatory genomics, synthetic biology, and precision medicine. | [
"pioneer",
"virtual platform",
"iterative improvement",
"applications",
"training data composition",
"model generalization",
"genomic deep",
"dnns",
"ability"
] | Accept | https://openreview.net/pdf?id=EIiM0eCjoz | https://openreview.net/forum?id=EIiM0eCjoz | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"6fy9m3CCZX"
],
"note_type": [
"decision"
],
"note_created": [
1740961615809
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
DzlhvkuFqz | NOLAN: SELF-SUPERVISED FRAMEWORK FOR MAPPING CONTINUOUS TISSUE ORGANIZATION | [
"Artemy Bakulin",
"Nathan Levy",
"Can Ergen",
"Jonas Maaskola",
"Nir Yosef"
] | Spatial transcriptomics offers unprecedented insights into tissue organization, yet current methods often overlook transitional zones between cellular niches. We introduce NOLAN, a self-supervised framework that goes beyond detecting discrete niches to capture the continuous spectrum of tissue organization patterns. NOLAN learns cell representations informed by their neighborhoods, capturing variation within niches and across their boundaries. Using these representations, NOLAN constructs a graph-based abstraction of the tissue, modeling it as a network of interconnected regions bridged by transitional zones. Applying NOLAN to a multi-cancer spatial transcriptomics atlas, we uncover a landscape of both tissue-specific and shared cellular niches. Crucially, NOLAN reveals the continuous gradients of gene expression and cell type composition across these transitional zones, showcasing the ability of NOLAN to build a common coordinate system of tissues in an integrative analysis. | [
"nolan",
"framework",
"transitional zones",
"cellular niches",
"tissue organization",
"current methods",
"discrete niches",
"continuous spectrum"
] | Accept | https://openreview.net/pdf?id=DzlhvkuFqz | https://openreview.net/forum?id=DzlhvkuFqz | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"LnJtDHBUGo"
],
"note_type": [
"decision"
],
"note_created": [
1740962139674
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
DjxLkshuSx | AI AGENT FOR DATA-DRIVEN HYPOTHESIS EXPLORATION IN SINGLE-CELL TRANSCRIPTOMICS | [
"Artemy Bakulin",
"Pierre Boyeau",
"Nir Yosef"
] | Large Language Models (LLMs) have the ability to utilize expert knowledge and simulate human thinking, which potentially makes them instrumental for a variety of scientific tasks. However, since scientific data is heterogeneous, often presented in the form of unordered tables, bridging the gap between unstructured non-textual data and the language processing capabilities of LLMs remains an open challenge. Agentic AI offers a promising approach by enabling LLMs to interactively query datasets for relevant information. Here, we explore the application of this agentic paradigm to single-cell transcriptomic analysis, with a specific focus on cell type annotation. Our results show that when LLMs are equipped with data-querying capabilities, their performance in annotating cell types improves significantly compared to single-shot prompting. Furthermore, we provide a proof of concept illustration of how our method can be used to integrate diverse single-cell datasets (e.g., cell census), ensuring consistent annotation across multiple sources, facilitating meta-analysis across big sample cohorts. | [
"llms",
"hypothesis exploration",
"ai agent",
"transcriptomics",
"agent",
"ability",
"expert knowledge",
"human thinking",
"instrumental"
] | Accept (Spotlight) | https://openreview.net/pdf?id=DjxLkshuSx | https://openreview.net/forum?id=DjxLkshuSx | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"CLDCjj36wV"
],
"note_type": [
"decision"
],
"note_created": [
1741030985437
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
Cv84fXtQPJ | Curly Flow Matching for Learning Non-gradient Field Dynamics | [
"Katarina Petrović",
"Lazar Atanackovic",
"Kacper Kapusniak",
"Michael M. Bronstein",
"Joey Bose",
"Alexander Tong"
] | Modeling the transport dynamics of natural processes from population-level observations is a ubiquitous problem that arises in the natural sciences. A key challenge in these settings is making important modeling assumptions over the scientific process at hand that enable faithful learning of governing dynamics that mimic actual system behavior. Traditionally, the de-facto assumption present in approaches relies on the principle of least action, which results in gradient field dynamics that lead to trajectories minimizing an energy functional between two probability measures. However, many real world systems such as cell cycles in single-cell RNA are known to exhibit non-gradient, periodic behavior, which fundamentally cannot be captured by current state-of-the-art methods such as optimal transport based conditional flow matching. In this paper, we introduce Curly Flow Matching (Curly-FM), a novel approach that is capable of learning non-gradient field dynamics by designing and solving a Schrödinger bridge problem with a reference process with non-zero drift---in stark contrast to zero-drift reference processes---which is constructed using inferred velocities in addition to population snapshot data. We instantiate Curly-FM by solving the single-cell trajectory inference problem with approximate velocities inferred using RNA velocity. We demonstrate that Curly-FM can learn trajectories that match both RNA velocity and population marginals. Curly-FM expands flow matching models beyond the modeling of populations and towards modeling the known periodic behavior observed in cells. | [
"field dynamics",
"trajectories",
"periodic behavior",
"rna velocity",
"flow",
"transport dynamics",
"natural processes",
"observations",
"ubiquitous problem",
"natural sciences"
] | Accept | https://openreview.net/pdf?id=Cv84fXtQPJ | https://openreview.net/forum?id=Cv84fXtQPJ | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"k3Q0e5FIZr"
],
"note_type": [
"decision"
],
"note_created": [
1740970698096
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ClewUE4sUK | A Scalable LLM Framework for Therapeutic Biomarker Discovery: Grounding Q/A Generation in Knowledge Graphs and Literature | [
"Marc Boubnovski Martell",
"Kaspar Märtens",
"Lawrence Phillips",
"Daniel Keitley",
"Maria Dermit",
"Julien Fauqueur"
] | Therapeutic biomarkers are crucial in biomedical research and clinical decision-making, yet the field lacks standardized datasets and evaluation methods for complex, context-dependent questions. To address this, we integrate large language models (LLMs) with knowledge graphs (KGs) to filter PubMed abstracts, summarize biomarker contexts, and generate a high-quality synthetic Q/A dataset. Our approach mirrors biomarker scientists' workflows, decomposing question generation into classification, named entity recognition (NER), and summarization. We release a 24k high quality Q/A dataset and show through ablation studies that incorporating NER and summarization improves performance over using abstracts alone. Evaluating multiple LLMs, we find that while models achieve 96\% accuracy on multiple-choice questions, performance drops to 69\% on open-ended Q/A, highlighting the need for synthetic data to address the issue of novel discovery. By addressing a critical resource gap, this work provides a scalable tool for biomarker research and demonstrates AI’s broader potential in scientific discovery. | [
"knowledge graphs",
"scalable llm framework",
"therapeutic biomarker discovery",
"grounding",
"generation",
"questions",
"dataset",
"ner",
"literature",
"literature therapeutic biomarkers"
] | Accept | https://openreview.net/pdf?id=ClewUE4sUK | https://openreview.net/forum?id=ClewUE4sUK | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"W7FeIM17tA"
],
"note_type": [
"decision"
],
"note_created": [
1740860877447
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
BjAxwETx9V | MolCap-Arena: A Comprehensive Captioning Benchmark on Language-Enhanced Molecular Property Prediction | [
"Carl Edwards",
"Ziqing Lu",
"Ehsan Hajiramezanali",
"Tommaso Biancalani",
"Heng Ji",
"Gabriele Scalia"
] | Bridging biomolecular modeling with natural language information, particularly through large language models (LLMs), has recently emerged as a promising interdisciplinary research area. LLMs, having been trained on large corpora of scientific documents, demonstrate significant potential in understanding and reasoning about biomolecules by providing enriched contextual and domain knowledge. However, the extent to which LLM-driven insights can improve performance on complex predictive tasks (e.g., toxicity) remains unclear. Further, the extent to which $\textit{relevant}$ knowledge can be extracted from LLMs also remains unknown. In this study, we present Molecule Caption Arena: the first comprehensive benchmark of LLM-augmented molecular property prediction. We evaluate over twenty LLMs, including both general-purpose and domain-specific molecule captioners, across diverse prediction tasks. To this goal, we introduce a novel, battle-based rating system for molecule captioners. Our findings confirm the ability of LLM-extracted knowledge to enhance state-of-the-art molecular representations, with notable model-, prompt-, and dataset-specific variations. | [
"molecular property prediction",
"llms",
"comprehensive captioning benchmark",
"extent",
"knowledge",
"molecule captioners",
"biomolecular modeling",
"natural language information",
"large language models"
] | Accept | https://openreview.net/pdf?id=BjAxwETx9V | https://openreview.net/forum?id=BjAxwETx9V | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"tT2A5O6DPv"
],
"note_type": [
"decision"
],
"note_created": [
1741146544941
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
BZIxe22wEO | Spatially-Informed Sampling Enables Accurate Prediction of Large-Scale Mutational Effects | [
"Maxime Basse",
"Dianzhuo Wang",
"Eugene Shakhnovich"
] | Predicting protein binding affinities across large combinatorial mutation spaces remains a critical challenge in molecular biology, particularly for understanding viral evolution and antibody interactions. While combinatorial mutagenesis experiments provide valuable data for training predictive models, they are typically limited due to experimental constraints. This creates a significant gap in our ability to predict the effects of more extensive mutation combinations, such as those observed in emerging SARS-CoV-2 variants. We present ProxiClust, which strategically combines smaller combinatorial mutagenesis experiments to enable accurate predictions across larger combinatorial spaces. Our approach leverages the spatial proximity of amino acid residues to identify potential epistatic interactions, using these relationships to optimize the design of manageable-sized combinatorial experiments. By combining just two small combinatorial datasets, we achieve accurate binding affinity predictions across substantially larger mutation spaces ($R^2\approx0.8$), with performance strongly correlated with capture of high-order epistatic effects. We validated our method in five different protein-protein interaction datasets, including binding of SARS-CoV-2 receptor binding domain (RBD) to various antibodies and cellular receptors, as well as influenza RBD-antibody interactions. This work provides a practical framework for extending the predictive power of combinatorial mutagenesis beyond current experimental constraints, offering applications in viral surveillance and antibody engineering. | [
"mutational effects",
"sampling enables",
"prediction",
"enables accurate prediction",
"protein",
"affinities",
"critical challenge",
"molecular biology",
"viral evolution"
] | Accept | https://openreview.net/pdf?id=BZIxe22wEO | https://openreview.net/forum?id=BZIxe22wEO | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"QGSXG9iATr"
],
"note_type": [
"decision"
],
"note_created": [
1740959311720
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
BJ2uCHIfEW | HybriDNA: A Hybrid Transformer-Mamba2 Long-Range DNA Language Model | [
"Mingqian Ma",
"Guoqing Liu",
"Chuan Cao",
"Pan Deng",
"Tri Dao",
"Albert Gu",
"Peiran Jin",
"Zhao Yang",
"Yingce Xia",
"Renqian Luo",
"Pipi Hu",
"Zun Wang",
"Yuan-Jyue Chen",
"Haiguang Liu",
"Tao Qin"
] | Advances in natural language processing and large language models have sparked growing interest in modeling DNA, often referred to as the “language of life”. However, DNA modeling poses unique challenges. First, it requires the ability to process ultra-long DNA sequences while preserving single-nucleotide resolution, as individual nucleotides play a critical role in DNA function. Second, success in this domain requires excelling at both generative and understanding tasks: generative tasks hold potential for therapeutic and industrial applications, while understanding tasks provide crucial insights into biological mechanisms and diseases. To address these challenges, we propose HybriDNA, a decoder-only DNA language model that incorporates a hybrid Transformer-Mamba2 architecture, seamlessly integrating the strengths of attention mechanisms with selective state-space models. This hybrid design enables HybriDNA to efficiently process DNA sequences up to 131kb in length with single-nucleotide resolution. HybriDNA achieves state-of-the-art performance across 33 DNA understanding datasets curated from the BEND, GUE, and LRB benchmarks, and demonstrates exceptional capability in generating synthetic cis-regulatory elements (CREs) with desired properties. Furthermore, we show that HybriDNA adheres to expected scaling laws, with performance improving consistently as the model scales from 300M to 3B and 7B parameters. These findings underscore HybriDNA’s versatility and its potential to advance DNA research and applications, paving the way for innovations in understanding and engineering the “language of life”. | [
"hybridna",
"dna",
"hybird",
"language",
"life",
"resolution",
"potential",
"performance"
] | Accept | https://openreview.net/pdf?id=BJ2uCHIfEW | https://openreview.net/forum?id=BJ2uCHIfEW | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"RZNcTPZuLa"
],
"note_type": [
"decision"
],
"note_created": [
1740846409528
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
BBXNA1mQCW | Knockoff Statistics-Driven Interpretable Deep Learning Models for Uncovering Potential Biomarkers for COVID-19 Severity Prediction | [
"Qian Liu",
"Daryl Fung",
"Huanjing Liu",
"Pingzhao Hu"
] | COVID-19 affects individuals differently, with some experiencing severe symptoms while others remain asymptomatic. Identifying genetic determinants behind this variability can improve disease management, resource allocation, and public health decisions. Traditional approaches like genome-wide association studies and polygenic risk scores offer limited interpretability and predictive accuracy. In this
study, we developed a computational framework that involves a deep generative model and xAI to predict COVID-19 severity based on whole-genome data. Our framework identified 72 significant genetic markers and achieved an improved prediction performance (ROC-AUC = 0.64) using whole-genome data from 6752 samples in Canada’s CGEn HostSeq project. Among these markers, 50 are novel, linked to hematopoietic stem cell differentiation, lung fibrosis, and SARS-CoV-2 mitochondrial interactions. This study introduces an interpretable AI tool for personalized COVID-19 severity prediction. | [
"severity prediction",
"potential biomarkers",
"study",
"data",
"knockoff",
"affects individuals",
"severe symptoms",
"others",
"asymptomatic"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=BBXNA1mQCW | https://openreview.net/forum?id=BBXNA1mQCW | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"rjWL93ngEn"
],
"note_type": [
"decision"
],
"note_created": [
1740959462837
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
A8EkdmRBYX | SpaceDX: A Bayesian test for localized differential expression in population-level spatial transcriptomics datasets | [
"Niklas Stotzem",
"Simon Chang",
"Na Cai",
"Francesco Paolo Casale"
] | Spatial transcriptomics allows for the study of gene expression within its spatial context, yet current spatial methods for differential expression require the definition of specific discrete regions of interest across the analyzed sections, which limits their applicability and statistical power. To address this limitation, we introduce SpaceDX, the first framework for spatial differential expression that automatically localizes regions of interest without requiring tissue registration or manual annotations. SpaceDX employs an attention mechanism to detect tissue contexts exhibiting differential gene expression and uses a hierarchical Bayesian framework to overcome the typical challenge of low sample sizes in spatial datasets.
We first applied SpaceDX to a structured mouse brain dataset consisting of Visium sections from 38 animals, comparing stressed and control groups.
Since the brain has well-defined anatomical regions, we could benchmark SpaceDX against traditional differential expression methods that rely on predefined regions, showing a 110% increase in significant gene detection and the automatic localization of regions exhibiting these differences. Next, we tested SpaceDX on a less structured dataset, specifically using sections from patients with inflammatory skin disease, where it successfully identified regions of interest exhibiting differential gene expression, demonstrating its broad applicability. | [
"spacedx",
"interest",
"regions",
"bayesian test",
"localized differential expression",
"differential gene expression",
"spatial transcriptomics datasets",
"study",
"gene expression"
] | Accept | https://openreview.net/pdf?id=A8EkdmRBYX | https://openreview.net/forum?id=A8EkdmRBYX | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"lT3KQJK7gW"
],
"note_type": [
"decision"
],
"note_created": [
1740962038426
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
8kZSO4WbTh | Relaxed Equivariance via Multitask Learning | [
"Ahmed A. A. Elhag",
"T. Konstantin Rusch",
"Francesco Di Giovanni",
"Michael M. Bronstein"
] | Incorporating equivariance as an inductive bias into deep learning architectures to take advantage of the data symmetry has been successful in multiple applications, such as chemistry and dynamical systems. In particular, roto-translations are crucial for effectively modeling geometric graphs and molecules, where understanding the 3D structures enhances generalization. However, equivariant models often pose challenges due to their higher computational complexity. In this paper, we introduce REMUL, a training procedure for approximating equivariance with multitask learning. We show that unconstrained models (which do not build equivariance into the architecture) can learn approximate symmetries by minimizing an additional simple equivariance loss. By formulating equivariance as a new learning objective, we can control the level of approximate equivariance in the model. Our method achieves competitive performance compared to equivariant baselines while being $10 \times$ faster at inference and $2.5 \times$ at training. | [
"equivariance",
"multitask",
"relaxed equivariance",
"inductive bias",
"deep learning architectures",
"advantage",
"data symmetry",
"successful",
"multiple applications",
"chemistry"
] | Accept (Spotlight) | https://openreview.net/pdf?id=8kZSO4WbTh | https://openreview.net/forum?id=8kZSO4WbTh | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"WjY2MdccVk"
],
"note_type": [
"decision"
],
"note_created": [
1740861002168
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
8SBZU35mxb | Flexible Models of Functional Annotations to Variant Effects using Accelerated Linear Algebra | [
"Alan Nawzad Amin",
"Andres Potapczynski",
"Andrew Gordon Wilson"
] | To predict and understand the causes of disease, geneticists build models that predict how a genetic variant impacts phenotype from genomic features. There is a vast amount of data available from the large projects that have sequenced hundreds of thousands of genomes; yet, state-of-the-art models, like LD score regression, cannot leverage this data as they lack flexibility due to their simplifying assumptions. These models use simplifying assumptions to avoid solving the large linear algebra problems introduced by the genomic correlation matrices. In this paper, we leverage modern fast linear algebra techniques to develop WASP (genome Wide Association Studies with Preconditioned iteration), a method to train large and flexible neural network models. On semi-synthetic and real data we show that WASP better predicts phenotype and better recovers its functional causes compared to LD score regression. Finally, we show that training larger WASP models on larger data leads to better explanations of phenotypes. | [
"models",
"functional annotations",
"variant effects",
"flexible models",
"accelerated linear algebra",
"causes",
"disease",
"genetic variant impacts",
"genomic features"
] | Accept | https://openreview.net/pdf?id=8SBZU35mxb | https://openreview.net/forum?id=8SBZU35mxb | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"QRR1xDPnbo"
],
"note_type": [
"decision"
],
"note_created": [
1740846202929
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
7Jxb5zfcZn | 2DE: a probabilistic method for differential expression across niches in spatial transcriptomics data | [
"Nathan Levy",
"Florian Ingelfinger",
"Artemy Bakulin",
"Giacomo Cinnirella",
"Pierre Boyeau",
"Can Ergen",
"Nir Yosef"
] | Spatial transcriptomics enables studying cellular interactions by measuring gene expression in situ while preserving tissue context. Within tissues, distinct cellular niches define micro-environments that influence cell states and function. A fundamental task in spatial transcriptomics is identifying differentially expressed genes within a specific cell type across different niches to quantify context-dependent cell state variation. Despite advances in cell segmentation algorithms, the persisting problem of the wrong assignment of molecules to cells can obscure the analysis by introducing spurious differentially expressed genes that originate from neighboring cells rather than the group of interest. Here, we introduce 2DE, a probabilistic framework designed to refine spatial differential expression analyses by filtering out genes that are over-expressed due to local contamination rather than true cell-intrinsic expression. 2DE operates downstream of any differential expression method, filtering irrelevant genes by considering gene over-expression relative to the expression in the neighborhood and returning marker confidence scores. In a study of human breast cancer, we demonstrate that 2DE improves the precision of the discoveries. | [
"differential expression",
"genes",
"probabilistic",
"niches",
"spatial transcriptomics",
"cells",
"expression",
"spatial transcriptomics data",
"spatial transcriptomics enables",
"cellular interactions"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=7Jxb5zfcZn | https://openreview.net/forum?id=7Jxb5zfcZn | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"hVZ5jZNbFs"
],
"note_type": [
"decision"
],
"note_created": [
1740961583628
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
7Fk9OnBziL | Interpretable prediction of DNA replication origins in S. cerevisiae using attention-based motif discovery | [
"Zohreh Piroozeh",
"Ildem Akerman",
"Stefan Kesselheim",
"Olga Kalinina",
"Alina Bazarova"
] | In a living cell, DNA replication begins at multiple genomic sites called replication origins. Identifying these origins and their underlying base sequence composition is crucial for understanding the replication process. Existing machine learning methods for origin prediction often require labor-intensive feature engineering or lack interpretability. Here, we employ DNABERT to predict yeast replication origins and uncover sequence motifs by combining attention maps with MEME, a classical bioinformatics tool. Our approach eliminates manual feature extraction and identifies biologically relevant motifs across datasets of varying complexity. This work advances interpretable machine learning in genomics, offering a potentially generalizable framework for origin prediction and motif discovery. | [
"dna replication",
"cerevisiae",
"motif discovery",
"origin prediction",
"interpretable prediction",
"living cell",
"multiple genomic sites",
"replication origins",
"origins"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=7Fk9OnBziL | https://openreview.net/forum?id=7Fk9OnBziL | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"F8eIIJ11Mf"
],
"note_type": [
"decision"
],
"note_created": [
1740963053273
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
4nh51vZXUr | WASSERSTEIN CYCLEGAN FOR SINGLE-CELL RNA- SEQ DATA GENERATION USING CROSS-MODALITY TRANSLATION | [
"Sajib Acharjee Dip",
"Liqing Zhang"
] | Single-nucleus RNA sequencing (snRNA-seq) provides insights into gene expression in complex tissues but suffers from lower resolution compared to single-cell RNA sequencing (scRNA-seq). To bridge this gap, we propose scWC-GAN, a Wasserstein CycleGAN-based model that translates snRNA-seq data into high-resolution scRNA-seq profiles. Our method leverages Earth Mover’s Distance (EMD) for cycle consistency and a latent feature-preserving generator to better capture transcriptomic structures. Through extensive evaluation, scWC-GAN outperforms baseline models in FID score and SSIM, demonstrating its ability to generate biologically meaningful data. While challenges remain in fine-grained cell-type resolution, our results suggest scWC-GAN as a promising tool for cross-modality single-cell data translation, enhancing downstream analysis in genomics. | [
"seq data generation",
"wasserstein cyclegan",
"translation wasserstein cyclegan",
"translation",
"rna sequencing",
"insights",
"gene expression",
"complex tissues",
"suffers",
"lower resolution"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=4nh51vZXUr | https://openreview.net/forum?id=4nh51vZXUr | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"LJGpfqxmop"
],
"note_type": [
"decision"
],
"note_created": [
1740861089009
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
4YMUZ49lB3 | Integrating Protein Language Model and Active Learning for Few-Shot Viral Variant Detection | [
"Marian Huot",
"Dianzhuo Wang",
"Jiacheng Liu",
"Eugene Shakhnovich"
] | Early detection of high-fitness SARS-CoV-2 variants is crucial for pandemic response, yet limited experimental resources hinder timely identification. We propose an active learning framework that integrates a protein language model, a Gaussian process with uncertainty estimation, and a biophysical model to predict the fitness of novel receptor-binding domain (RBD) variants in a few-shot learning setting. Our approach prioritizes the most informative variants for experimental characterization, accelerating high-fitness variant detection by up to 5× compared to random sampling while testing fewer than 1\% of all possible variants. Benchmarking on deep mutational scans, we show that our method identifies evolutionarily significant sites, particularly those facilitating antibody escape. We systematically compare different acquisition strategies and demonstrate that incorporating uncertainty-driven exploration enhances coverage of the mutational landscape, enabling the discovery of evolutionarily distant yet high-risk variants. Our results suggest that this framework could serve as an efficient early warning system for identifying concerning SARS-CoV-2 variants before they achieve widespread circulation. | [
"variants",
"protein language model",
"active learning",
"viral variant detection",
"crucial",
"pandemic response",
"experimental resources",
"timely identification",
"active learning framework"
] | Accept | https://openreview.net/pdf?id=4YMUZ49lB3 | https://openreview.net/forum?id=4YMUZ49lB3 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"BN4AobM14f"
],
"note_type": [
"decision"
],
"note_created": [
1740961231578
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
4JGIrEGfYz | COLOR: A COMPOSITIONAL LINEAR OPERATION BASED REPRESENTATION OF PROTEIN SEQUENCES FOR IDENTIFICATION OF MONOMER CONTRIBUTIONS TO PROPERTIES | [
"Akash Pandey",
"Wei Chen",
"Sinan Keten"
] | The properties of biological materials like proteins and nucleic acids are largely determined by their primary sequence. Certain segments in the sequence strongly influence specific functions; identifying these segments, or so-called motifs, is challenging due to the complexity of sequential data. While deep learning (DL) models can accurately capture sequence-property relationships, the degree of nonlinearity in these models limits the assessment of monomer contributions to a property - a critical step in identifying key motifs. Recent advances in explainable AI (XAI) offer attention and gradient-based methods for estimating monomeric contributions. However, these methods are primarily applied to classification tasks, such as binding site identification, where they achieve limited accuracy (40–45\%) and rely on qualitative evaluations. To address these limitations, we introduce a DL model with interpretable steps, enabling direct tracing of monomeric contributions. Inspired by the masking technique commonly used in vision and language processing domains, we propose a new metric ($\mathcal{I}$) for quantitative evaluation on datasets mainly containing distinct properties of anti-cancer peptides (ACP), antimicrobial peptides (AMP), and collagen. Our model exhibits 22\% higher explainability than the gradient and attention-based state-of-the-art models, recognizes critical motifs (RRR, RRI, and RSS) that significantly destabilize ACPs, and identifies motifs in AMPs that are 50\% more effective in converting non-AMPs to AMPs. These findings highlight the potential of our model in guiding mutation strategies for designing protein-based biomaterials. | [
"monomer contributions",
"models",
"compositional linear operation",
"representation",
"protein sequences",
"identification",
"properties",
"methods",
"monomeric contributions",
"model"
] | Accept | https://openreview.net/pdf?id=4JGIrEGfYz | https://openreview.net/forum?id=4JGIrEGfYz | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"IS8gS9H95u"
],
"note_type": [
"decision"
],
"note_created": [
1740896102750
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
433YQo03lM | Decision Tree Induction with Dynamic Feature Generation: A Framework for Interpretable DNA Sequence Analysis | [
"Nicolas Huynh",
"Krzysztof Kacprzyk",
"Ryan M Sheridan",
"David L. Bentley",
"Mihaela van der Schaar"
] | The analysis of DNA sequences has become increasingly critical in numerous fields, from evolutionary biology to understanding gene regulation and disease mechanisms. While machine learning approaches to DNA sequence classification, particularly deep neural networks, achieve remarkable performance, they typically operate as black boxes, severely limiting their utility for scientific discovery and biological insight. Decision trees offer a promising direction for interpretable DNA sequence analysis, yet they suffer from a fundamental limitation: considering individual raw features in isolation at each split limits their expressivity, which results in prohibitive tree depths that hinder both interpretability and generalization performance. We address this challenge by introducing $\texttt{DEFT}$, a novel framework that adaptively generates high-level sequence features during tree construction. $\texttt{DEFT}$ leverages large language models to propose biologically-informed features tailored to the local sequence distributions at each node and to iteratively refine them with a reflection mechanism. Through a comprehensive case study on RNA polymerase II pausing prediction, we demonstrate that $\texttt{DEFT}$ discovers human-interpretable sequence features which are highly predictive of pausing, providing insights into this complex phenomenon. | [
"deft",
"dynamic feature generation",
"framework",
"sequence features",
"decision tree induction",
"analysis",
"dna sequences",
"critical"
] | Accept (Spotlight) | https://openreview.net/pdf?id=433YQo03lM | https://openreview.net/forum?id=433YQo03lM | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"2CzfriofN1"
],
"note_type": [
"decision"
],
"note_created": [
1740861028179
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
36mqwnxtnK | To trap or not to trap--analyzing the trade-offs in diffusion transport models | [
"Rushmila Shehreen Khan",
"Md. Shahriar Karim"
] | Information transmission by diffusing particles is crucial in many biophysical and artificial systems. The factors that make a diffusive model an optimal choice in a given context remain elusive, and identifying them is vital for narrowing the search space for context-specific applications. This study explores a class of diffusion-reaction paradigms on different performance objectives. Precisely, we compare the robustness, characteristic length scale, and stochastic variability of the competing transport models considering the mesoscopic and microscopic views of the transport, asking whether the entrapment of diffusing molecules improves the reliability of the diffusive transport models. | [
"diffusion transport models",
"particles",
"crucial",
"many biophysical",
"artificial systems",
"factors",
"diffusive model",
"optimal choice",
"context remain elusive"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=36mqwnxtnK | https://openreview.net/forum?id=36mqwnxtnK | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"qH4gi9hOkC"
],
"note_type": [
"decision"
],
"note_created": [
1740861240874
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
1MPTY6JaQc | Gradient-Based Gene Selection for Multimodal scRNA-seq Foundation Models | [
"Pakaphol Thadawasin",
"Farhan khodaee",
"Rohola Zandie",
"Elazer R Edelman"
] | Foundation models have emerged as powerful tools for analyzing single-cell RNA sequencing (scRNA-seq) data. However, selecting informative gene features for both input to the model and analysis in the output remains a critical challenge. Traditional feature selection methods filter on the basis of highly variable genes and analyze them using differential distribution, but they often struggle with scalability and robustness in heterogeneous, high-dimensional datasets. In this study, we explore the limitations of conventional feature selection techniques in the context of a multimodal foundation model and propose alternative gradient-based attribution techniques on learned feature embeddings to improve feature selection. We demonstrate how our selection strategy enhances model performance, overcomes the limitations of traditional approaches, and holds the potential to reveal the inherent polygenicity of diseases. | [
"gene selection",
"multimodal",
"limitations",
"foundation models",
"powerful tools",
"rna sequencing",
"data",
"informative gene features",
"input"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=1MPTY6JaQc | https://openreview.net/forum?id=1MPTY6JaQc | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"2csdEaNY51"
],
"note_type": [
"decision"
],
"note_created": [
1740961869944
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
0X2jafslBR | Uncovering BioLOGICAL Motifs and Syntax via Sufficient and Necessary Explanations | [
"Beepul Bharti",
"Gabriele Scalia",
"Tommaso Biancalani",
"Alex M Tseng"
] | Deep neural networks (DNNs) have achieved remarkable success in predicting transcription factor (TF) binding from high-throughput genome profiling data. Since TF binding is primarily driven by sequence motifs, understanding how DNNs make accurate predictions could help identify these motifs and their logical syntax. However, the black-box nature of DNNs complicates interpretation. Most post-hoc methods evaluate the importance of each base pair in isolation, often resulting in noise since they overlook the fact that motifs are contiguous regions. Additionally, these methods fail to capture the complex interactions between different motifs. To address these challenges, we propose Motif Explainer Models (MEMs), a novel explanation method that uses sufficiency and necessity to identify important motifs and their syntax. MEMs excel at identifying multiple disjoint motifs across DNA sequences, overcoming limitations of existing methods. Moreover, by accurately pinpointing sufficient and necessary motifs, MEMs can reveal the logical syntax that governs genomic regulation. | [
"syntax",
"sufficient",
"methods",
"biological motifs",
"dnns",
"motifs",
"logical syntax",
"mems",
"necessary explanations"
] | Accept | https://openreview.net/pdf?id=0X2jafslBR | https://openreview.net/forum?id=0X2jafslBR | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"XBy7fvWAFZ"
],
"note_type": [
"decision"
],
"note_created": [
1741071357966
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
0Tivf6L5eA | InfoSEM: A Deep Generative Model with Informative Priors for Gene Regulatory Network Inference | [
"Tianyu Cui",
"Song-Jun Xu",
"Artem Moskalev",
"Shuwei Li",
"Tommaso Mansi",
"Mangal Prakash",
"Rui Liao"
] | Inferring Gene Regulatory Networks (GRNs) from gene expression data is crucial for understanding biological processes. While supervised models are reported to achieve high performance for this task, they rely on costly ground truth (GT) labels and risk learning gene-specific biases—such as class imbalances of GT interactions—rather than true regulatory mechanisms. To address these issues, we introduce InfoSEM, an unsupervised generative model that leverages textual gene embeddings as informative priors, improving GRN inference without GT labels. InfoSEM can also integrate GT labels as an additional prior when available, avoiding biases and further enhancing performance. Additionally, we propose a biologically motivated benchmarking framework that better reflects real-world applications such as biomarker discovery and reveals learned biases of existing supervised methods. InfoSEM outperforms existing models by 38.5% across four datasets using textual embeddings prior and further boosts performance by 11.1% when integrating labeled data as priors. | [
"infosem",
"informative priors",
"deep generative model",
"gt labels",
"biases",
"gene regulatory networks",
"grns"
] | Accept (Spotlight) | https://openreview.net/pdf?id=0Tivf6L5eA | https://openreview.net/forum?id=0Tivf6L5eA | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"psYsjmP7za"
],
"note_type": [
"decision"
],
"note_created": [
1740846267313
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
0I1LsQEMin | Learning Non-Equilibrium Signaling Dynamics in Single-Cell Perturbation Dynamics | [
"Heman Shakeri"
] | Cancer cells exploit non-equilibrium signaling dynamics to develop transient drug resistance through mechanisms that conventional equilibrium-based analyses cannot detect. We present a probabilistic framework integrating live-cell biosensor data with asynchronous multi-omics snapshots to learn these adaptive states. Using data from BRAF-V600E melanoma as a model system, we demonstrate how such a learning scheme characterizes the competing timescales that drive resistance mechanisms: rapid post-translational feedback (minutes) versus delayed transcriptional regulation (hours), including RAF dimer rewiring, DUSP-mediated ERK reactivation pulses, and NRAS^Q61K-dependent EGFR recycling. Our approach further combines multi-marginal Schrödinger bridges for distribution alignment with the extracted dynamical patterns from live-cell trajectories. Each step of the algorithm is validated with real data, and further validation is performed through in silico melanoma models. This framework could help identify therapeutic windows that delay progression to persistent resistant states and target adaptive plasticity across cancer types. | [
"dynamics",
"perturbation dynamics",
"transient drug resistance",
"mechanisms",
"conventional",
"analyses",
"probabilistic framework",
"biosensor data",
"asynchronous"
] | Accept (Spotlight) | https://openreview.net/pdf?id=0I1LsQEMin | https://openreview.net/forum?id=0I1LsQEMin | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"0eOOWBqZOI"
],
"note_type": [
"decision"
],
"note_created": [
1740961438827
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
z3H20lc5tR | SpectralFlowNet: Resolution-Invariant Continuous Neural Dynamics for Mesh-Based PDE Modeling | [
"Tianrun Yu",
"Fang Sun",
"Haixin Wang",
"Xiao Luo",
"Yizhou Sun"
] | Accurate mesh-based simulation is central to modeling phenomena governed by PDEs, such as flow, elasticity, and climate. Recent machine learning solutions, including Graph Neural Networks (GNNs) and Fourier Neural Operators (FNOs), enable faster approximations but can struggle with long-range interactions, irregular mesh topologies, or fixed time steps.
To address the above challenges, we introduce SpectralFlowNet, a unified framework for mesh-based PDE simulation that marries graph spectral methods with continuous-time neural dynamics. By projecting mesh data onto an intrinsic spectral basis via the Graph Fourier Transform (GFT) and evolving these spectral coefficients using Neural Ordinary Differential Equations (ODEs), our model naturally handles multiscale spatial structures and temporal dynamics. This resolution-invariant, multiscale approach achieves state-of-the-art performance on plastic deformation tasks and demonstrates robust zero-shot transfer across resolutions. | [
"Multiscale Modeling",
"Graph Fourier Transform",
"Neural Ordinary Differential Equations"
] | Accept (Poster) | https://openreview.net/pdf?id=z3H20lc5tR | https://openreview.net/forum?id=z3H20lc5tR | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"xBNCkj0Ymj"
],
"note_type": [
"decision"
],
"note_created": [
1741244762138
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
xqjs2ideRD | Flow Domain Parameterization and Training of Generalized Physics-Informed Neural Networks for Solving Navier-Stokes Equations | [
"Ivan Stebakov",
"Alexei Kornaev",
"Elena Kornaeva"
] | Physics-informed neural networks (PINNs) have shown promise in solving the Navier-Stokes equations for fluid flow problems, but most existing approaches require retraining for each new flow case, limiting their applicability to a wide range of scenarios. This study addresses the challenge of multi-dimensional parameterization of flow domain geometries, which has not been extensively explored in previous research. We propose an approach for parameterizing the flow domain for solving the stationary Navier-Stokes equations for Newtonian fluid flow using PINNs. The proposed approach allows scaling PINNs to new cases not considered in training and significantly reduces computational costs in comparison with the numerical solution. | [
"Navier–Stokes equations",
"physics-informed neural networks",
"incompressible fluid",
"domain parametrization"
] | Accept (Poster) | https://openreview.net/pdf?id=xqjs2ideRD | https://openreview.net/forum?id=xqjs2ideRD | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"NSjVcXQS4Q"
],
"note_type": [
"decision"
],
"note_created": [
1741244689906
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
xnHjZt21BS | Scalable and Efficient Multi-Weather Classification for Autonomous Driving with Coresets, Pruning, and Resolution Scaling | [
"Tushar Shinde",
"AVINASH KUMAR SHARMA"
] | Autonomous vehicles require robust perception systems capable of operating in diverse weather conditions, including snow, rain, fog, and storms. In this work, we present a scalable and efficient approach for multi-weather classification in autonomous driving, leveraging the WEDGE (WEather images by DALL-E GEneration) dataset. Our study investigates three complementary techniques to enhance classification performance and efficiency: Coreset Selection, Resolution Scaling, and Model Compression via Adaptive Pruning and Quantization. Specifically, we evaluate the impact of coreset selection methods (random and margin-based) at varying data fractions (e.g., 1, 0.75, 0.5, 0.25, 0.1), assess model robustness under low-resolution settings (224x224, 112x112, 56x56), and demonstrate that adaptive pruning combined with 8-bit quantization can reduce model size by up to 85% while maintaining competitive classification accuracy. Experimental results validate the effectiveness of our integrated approach, providing a scalable and robust solution for multi-weather classification. This work advances the feasibility of deploying perception models in real-world autonomous driving systems operating under adverse weather conditions and limited computational resources. | [
"Multi-weather classification",
"coreset selection",
"resolution scaling",
"pruning",
"model compression"
] | Accept (Poster) | https://openreview.net/pdf?id=xnHjZt21BS | https://openreview.net/forum?id=xnHjZt21BS | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"EjqBEjmcfJ"
],
"note_type": [
"decision"
],
"note_created": [
1741245041446
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
wzt7hZo82u | Scaling Dynamic Mode Decomposition For Real Time Analysis Of Infant Movements | [
"Navya Annapareddy",
"Lisa Letzkus",
"Santina Zanelli",
"Stephen Baek"
] | The analysis of characteristic motions in infants plays a pivotal role in quantifying developmental progress and clinical risk for neurodevelopmental and musculoskeletal abnormalities. Traditional methods often rely on resource-intensive manual motion assessments carried out by clinicians, while computer-assisted approaches frequently utilize computationally expensive simulations or black-box classification models. These approaches struggle to efficiently capture and differentiate the highly correlated dynamics of infant motion, limiting their ability to deliver actionable insights in a clinically viable decision time frame. In response to these challenges, we introduce the use of Dynamic Mode Decomposition (DMD) as a transformative approach for decomposing complex infant motion into interpretable, independent components that are linearly additive in nature. DMD not only enables extraction of large-scale clinically meaningful patterns but also can integrate with existing computer-assisted interventions with regard to standardized motion features. We assess an optimized DMD formulation on 275,000 frames of infant motion in clinical settings that have undergone manual motion assessment by clinicians. Our experimental results show that DMD modes used as predictive components not only result in equal or superior accuracy in predicting abnormal clinical motion assessments compared to traditional manual or computer-assisted methods but also serve as highly data-rich features that can be used as a novel basis for personalized clinical analysis and uncertainty quantification at scale. | [
"Computer Vision",
"Digital Twin",
"Healthcare",
"Machine Learning",
"Pose Estimation",
"Dynamical Systems",
"ODE"
] | Accept (Poster) | https://openreview.net/pdf?id=wzt7hZo82u | https://openreview.net/forum?id=wzt7hZo82u | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"gPP2hZL027"
],
"note_type": [
"decision"
],
"note_created": [
1741244867472
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
w0zlwNHFox | NeuralDEM: Real-time Simulation of Industrial Particulate Flows | [
"Benedikt Alkin",
"Tobias Kronlachner",
"Samuele Papa",
"Stefan Pirker",
"Thomas Lichtenegger",
"Johannes Brandstetter"
] | Advancements in computing power have made it possible to numerically simulate large-scale fluid-mechanical and/or particulate systems, many of which are integral to core industrial processes. The discrete element method (DEM) provides one of the most accurate representations of a wide range of physical systems involving granular materials. Additionally, DEM can be integrated with grid-based computational fluid dynamics (CFD) methods, enabling the simulation of chemical processes taking place, e.g., in fluidized beds.
However, DEM is computationally intensive because of the intrinsic multiscale nature of particulate systems, restricting either the duration of simulations or the number of particles that can be simulated. Towards this end, NeuralDEM presents a first end-to-end approach to replace slow and computationally demanding DEM routines with fast deep learning surrogates.
NeuralDEM treats the Lagrangian discretization of DEM as an underlying continuous field, while simultaneously modeling macroscopic behavior directly as additional auxiliary fields using ``multi-branch neural operators'', enabling fast and scalable neural surrogates.
NeuralDEM will open many new doors to advanced engineering and much faster process cycles. | [
"Particle Simulation",
"Neural Operator",
"Industrial Simulation"
] | Accept (Poster) | https://openreview.net/pdf?id=w0zlwNHFox | https://openreview.net/forum?id=w0zlwNHFox | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"sWlMUem9y9"
],
"note_type": [
"decision"
],
"note_created": [
1741243989034
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\"}"
]
} |
u3sEGcYyqw | UPT++: Latent Point Set Neural Operators for Modeling System State Transitions | [
"Andreas Fürst",
"Florian Sestak",
"Artur P. Toshev",
"Benedikt Alkin",
"Nikolaus A. Adams",
"Andreas Mayr",
"Günter Klambauer",
"Johannes Brandstetter"
] | Particle methods comprise a wide spectrum of numerical algorithms, ranging from computational fluid dynamics governed by the Navier-Stokes equations to molecular dynamics governed by the many-body Schrödinger equation. At its core, these methods represent the continuum as a collection of discrete particles, on which the respective PDE is solved. We introduce UPT++, a latent point set neural operator for modeling the dynamics of such particle systems by mapping a particle set back to a continuous (latent) representation, instead of operating on the particles directly. We argue via what we call the discretization paradox that continuous modeling is advantageous even if the reference numerical discretization scheme comprises particles. Algorithmically, UPT++ extends Universal Physics Transformers -- a framework for efficiently scaling neural operators -- by novel importance-based encoding and decoding. Furthermore, our encoding and decoding enable outputs that remain consistent across varying input sampling resolutions, i.e., UPT++ is a neural operator. We discuss two types of UPT++ operators: (i) time-evolution operator for fluid dynamics, and (ii) sampling operator for molecular dynamics tasks. Experimentally, we demonstrate that our method reliably models complex physics phenomena of fluid dynamics and exhibits beneficial scaling properties, tested on simulations of up to 200k particles. Furthermore, we showcase on molecular dynamics simulations that UPT++ can effectively explore the metastable conformation states of unseen peptide molecules. | [
"neural operator",
"navier-stokes",
"molecular dynamics",
"latent space",
"particle simulations"
] | Accept (Poster) | https://openreview.net/pdf?id=u3sEGcYyqw | https://openreview.net/forum?id=u3sEGcYyqw | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"JrzJ3KXgHE"
],
"note_type": [
"decision"
],
"note_created": [
1741244447350
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
rnxAKI2kRD | Analysis of Neural ODE Performance in Long-term PDE Sequence Modeling | [
"Fang Sun",
"Maxwell Dalton",
"Yizhou Sun"
] | Predicting solutions to partial differential equations (PDEs) from limited snapshots is central to many scientific and engineering disciplines. However, standard autoregressive neural models often suffer from compounding errors when rolled out over large time scales. In this paper, we investigate Neural Ordinary Differential Equations (Neural ODEs) as an alternative to purely discrete, stepwise approaches for PDE sequence modeling. We propose a continuous-time neural solver architecture that combines a graph encoder to process local spatial features, a learned ODE network to integrate node embeddings forward in time, and a decoder that produces PDE field predictions at future timesteps. We compare this Neural ODE solver to a baseline autoregressive GNN on a long-horizon sequence-to-sequence prediction task for Burgers' equation without diffusion. Empirical results indicate that our Neural ODE approach significantly reduces error growth over extended time scale and achieves improved stability. These findings highlight the promise of continuous-time neural modeling for robust PDE simulation and pave the way for applying learned surrogates to complex scientific systems. Our implementation is available at https://github.com/FrancoTSolis/neural-ode-pde. | [
"PDE Simulation; Long Timescale Prediction; Neural ODE"
] | Accept (Poster) | https://openreview.net/pdf?id=rnxAKI2kRD | https://openreview.net/forum?id=rnxAKI2kRD | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"O8SQiBGPgP"
],
"note_type": [
"decision"
],
"note_created": [
1741245238484
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
n64e2mKzTg | Leveraging LLM-based sentiment analysis for portfolio allocation with proximal policy optimization | [
"Kemal Kirtac",
"Guido Germano"
] | Portfolio optimization requires adaptive strategies to maximize returns while managing risk. Reinforcement learning (RL) has gained traction in financial decision-making, and proximal policy optimization (PPO) has demonstrated strong performance in dynamic asset allocation. However, traditional PPO relies solely on historical price data, ignoring market sentiment, which plays a crucial role in asset movements. We propose a sentiment-augmented PPO (SAPPO) model that integrates daily financial news sentiment extracted from Refinitiv using LLaMA 3.3, a large language model optimized for financial text analysis. The sentiment layer refines portfolio allocation by incorporating real-time market sentiment alongside price movements. We evaluate both PPO and SAPPO on a three-stock portfolio consisting of Google, Microsoft and Meta, and we compare performance against standard market benchmarks. Results show that SAPPO improves risk-adjusted returns with a superior Sharpe ratio and reduced drawdowns. Our findings highlight the value of integrating sentiment analysis into RL-driven portfolio management. | [
"Deep reinforcement learning",
"stock return forecasting",
"large language models",
"portfolio allocation"
] | Accept (Poster) | https://openreview.net/pdf?id=n64e2mKzTg | https://openreview.net/forum?id=n64e2mKzTg | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"OPW0FZYdDb"
],
"note_type": [
"decision"
],
"note_created": [
1741245165432
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
lRVhXYoKQ7 | Physics-Transfer Learning: A Framework to Address the Accuracy-Performance Dilemma in Modeling Morphological Complexities in Brain Development | [
"Yingjie Zhao",
"Zhiping Xu"
] | The development of theoretical science follows an observation-assumption-model approach, effective for simple systems but hindered by complexity in engineering. Artificial intelligence (AI) and machine learning (ML) offer a data-driven alternative for making inferences when direct solutions are elusive. Feature engineering extends dimensional analysis, revealing hidden physics from data. We present a physics-transfer (PT) framework to predict physics across digitally varied models, addressing the accuracy-performance trade-off in multiscale challenges. This is exemplified in modeling brain morphology development, essential for disease diagnosis and prognosis. Nonlinear deformation physics from basic geometries is encoded into a neural network and applied to complex brain models. Results agree with longitudinal magnetic resonance imaging (MRI) data, and learned variables correlate with physical descriptors, such as undetectable stress states and submicroscopic characteristics, demonstrating the effectiveness of PT in understanding multiscale problems. | [
"Physics-Transfer Learning; Accuracy-Performance Dilemma; Morphological Complexity; Brain Development; Digital libraries"
] | Accept (Poster) | https://openreview.net/pdf?id=lRVhXYoKQ7 | https://openreview.net/forum?id=lRVhXYoKQ7 | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"xlv1FhrqWr"
],
"note_type": [
"decision"
],
"note_created": [
1741245698984
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
jKdZsWdRLZ | Hard-constraining Neumann boundary conditions in physics-informed neural networks via Fourier feature embeddings | [
"Christopher Straub",
"Philipp Brendel",
"Vlad Medvedev",
"Andreas Rosskopf"
] | We present a novel approach to hard-constrain Neumann boundary conditions in physics-informed neural networks (PINNs) using Fourier feature embeddings. Neumann boundary conditions are used to described critical processes in various application, yet they are more challenging to hard-constrain in PINNs than Dirichlet conditions. Our method employs specific Fourier feature embeddings to directly incorporate Neumann boundary conditions into the neural network's architecture instead of learning them. The embedding can be naturally extended by high frequency modes to better capture high frequency phenomena. We demonstrate the efficacy of our approach through experiments on a diffusion problem, for which our method outperforms existing hard-constraining methods and classical PINNs, particularly in multiscale and high frequency scenarios. | [
"Physics-informed neural networks",
"hard-constraint",
"Neumann boundary condition",
"Fourier feature embedding"
] | Accept (Poster) | https://openreview.net/pdf?id=jKdZsWdRLZ | https://openreview.net/forum?id=jKdZsWdRLZ | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"xKC4lWxfwB"
],
"note_type": [
"decision"
],
"note_created": [
1741244020292
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
iPoX2qIJaj | An Property-prompted Multi-scale Data Augmentation Approach for Crystal Representation | [
"Zhongyi Deng",
"Shuzhou LI",
"Tong Zhang",
"C. L. Philip Chen"
] | The inverse design of crystals with multiple objectives represents a significant challenge in materials science. The interplay among various desired properties often results in unbalanced crystal structure generation. In schemes based on generative language models, this issue primarily stems from the models' limited capability to learn continuous property values, compounded by the scarcity of high-quality material data for training. To address these challenges, a property prompt-based scheme has been proposed to achieve multi-scale data augmentation for crystal representation. This scheme constructs learnable prompt templates for the single property and extends them to multiple properties. The property prompt introduces learnable templates that map continuous property values to discrete prompt spaces, enhancing the learning ability of generative language models for discrete property values. Multi-scale data augmentation disentangles the interactions between various material properties and transforms them into mutual promotion through end-to-end pre-training, thereby alleviating the problem of insufficient high-quality material data. The scheme has been validated for key properties that affect the crystal structure composition, including the formation energy and the band gap, as well as their various combinations. Experimental results demonstrate that the proposed model achieves significant performance improvements across multiple target property combinations, showcasing its robust representation and generalization capabilities in the inverse design of crystals with multiple objectives. | [
"Crystal Representation",
"Prompt Learning",
"Data Enhancement",
"Generative Language Model",
"SLICES"
] | Accept (Poster) | https://openreview.net/pdf?id=iPoX2qIJaj | https://openreview.net/forum?id=iPoX2qIJaj | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"448MU77vKl"
],
"note_type": [
"decision"
],
"note_created": [
1741244371627
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
i8vGPXrDMa | Towards Interpretable Structure Prediction With Sparse Autoencoders | [
"Nithin Parsan",
"David J Yang",
"John Jingxuan Yang"
] | Protein language models have revolutionized structure prediction, but their nonlinear nature obscures how sequence representations inform structure prediction. While sparse autoencoders (SAEs) offer a path to interpretability here by learning linear representations in high-dimensional space, their application has been limited to smaller protein language models unable to perform structure prediction. In this work, we make two key advances: (1) we scale SAEs to ESM2-3B, the base model for ESMFold, enabling mechanistic interpretability of protein structure prediction for the first time, and (2) we adapt Matryoshka SAEs for protein language models, which learn hierarchically organized features by forcing nested groups of latents to reconstruct inputs independently. We demonstrate that our Matryoshka SAEs achieve comparable or better performance than standard architectures. Through comprehensive evaluations, we show that SAEs trained on ESM2-3B significantly outperform those trained on smaller models for both biological concept discovery and contact map prediction. Finally, we present an initial case study demonstrating how our approach enables targeted steering of ESMFold predictions, increasing structure solvent accessibility while fixing the input sequence. To facilitate further investigation by the broader community, we open-source our code, dataset, pretrained models, and visualizer. | [
"protein language models",
"sparse autoencoders",
"ESM2-3B",
"ESMFold",
"mechanistic interpretability",
"Matryoshka architecture",
"hierarchical features",
"structure prediction",
"contact map prediction",
"model steering",
"solvent accessibility",
"biological concept discovery",
"feature representation",
"high-dimensional space",
"linear representations"
] | Accept (Poster) | https://openreview.net/pdf?id=i8vGPXrDMa | https://openreview.net/forum?id=i8vGPXrDMa | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"BLOqlHzasx"
],
"note_type": [
"decision"
],
"note_created": [
1741245823098
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
dGhfVsgs01 | Generating $\pi$-Functional Molecules Using STGG+ with Active Learning | [
"Alexia Jolicoeur-Martineau",
"Yan Zhang",
"Boris Knyazev",
"Aristide Baratin",
"Cheng-Hao Liu"
] | Generating novel molecules with out-of-distribution properties is a major challenge in molecular discovery. While supervised learning methods generate high-quality molecules similar to those in a dataset, they struggle to generalize to out-of-distribution properties. Reinforcement learning can explore novel spaces but often conducts 'reward-hacking' and generates non-synthesizable molecules. In this work, we address this problem by integrating a state-of-the-art supervised learning method, STGG+, in an active learning loop. Our approach iteratively generates, evaluates, and fine-tunes STGG+ to continuously expand its knowledge. We apply this method to the design of organic $\pi$-functional materials, specifically two challenging tasks: 1) generating highly absorptive molecules characterized by a large oscillator strength and 2) designing absorptive molecules with reasonable oscillator strength in the near-infrared range. The generated molecules are validated and rationalized \textit{in-silico} with time-dependent density functional theory. Our results demonstrate that our method is highly effective in generating novel molecules with high oscillator strength, contrary to existing methods such as reinforcement learning (RL) methods. | [
"Active learning",
"Molecular Design",
"Optoelectronics",
"DFT"
] | Accept (Poster) | https://openreview.net/pdf?id=dGhfVsgs01 | https://openreview.net/forum?id=dGhfVsgs01 | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"0liKIuW5pC"
],
"note_type": [
"decision"
],
"note_created": [
1741244822249
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
ck3vM17ckg | Unifying Renormalization with Markov Categories | [
"Paolo Perrone",
"Andrey E Ustyuzhanin"
] | This paper explores a novel approach for modeling renormalization processes using Markov categories, a formalism rooted in category theory. By leveraging the abstraction provided by Markov categories, we aim to provide a coherent framework that bridges stochastic processes with renormalization theory, potentially enhancing the interpretability and application of these crucial transformations. Our study elucidates theoretical insights, outlines computational benefits, and suggests interdisciplinary applications, espe cially in conjunction with machine learning methodologies. Key comparisons with existing models highlight the advantages in terms of flexibility and abstraction. | [
"Renormalization",
"scaling",
"Markov categories",
"category theory"
] | Accept (Poster) | https://openreview.net/pdf?id=ck3vM17ckg | https://openreview.net/forum?id=ck3vM17ckg | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"O2OCTvOro0"
],
"note_type": [
"decision"
],
"note_created": [
1741245109412
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)).\", \"title\": \"Paper Decision\"}"
]
} |
YM3koX4nHp | Compute-Adaptive Surrogate Modeling of Partial Differential Equations | [
"Payel Mukhopadhyay",
"Michael McCabe",
"Ruben Ohana",
"Miles Cranmer"
] | Modeling dynamical systems governed by partial differential equations presents significant challenges for machine learning-based surrogate models. While transformers have shown potential in capturing complex spatial dy- namics, their reliance on fixed-size patches limits flexibility and scalability. In this work, we introduce two convolutional encoder and decoder architec- tural blocks—Convolutional Kernel Modulator (CKM) and Convolutional Stride Modulator (CSM)—designed for patch embedding and reconstruction in autoregressive prediction tasks. These blocks unlock dynamic patching and striding strategies to balance accuracy and computational efficiency during inference. Furthermore, we propose a rollout strategy that adap- tively adjusts patching and striding configurations throughout temporally sequential predictions, mitigating patch artifacts and long-term error accu- mulation while improving the capture of fine-scale structures. We show that our approaches enable dynamic control over patch sizes at inference time without losing accuracy over fixed patch baselines. | [
"Vision transformers",
"convolution kernel modulator",
"convolution stride modulator",
"spatio-temporal data"
] | Accept (Poster) | https://openreview.net/pdf?id=YM3koX4nHp | https://openreview.net/forum?id=YM3koX4nHp | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"kcz79T8vJM"
],
"note_type": [
"decision"
],
"note_created": [
1741244039744
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\"}"
]
} |
WPj94Xn1qx | Exploring Thermodynamic Behavior of Spin Glasses with Machine Learning | [
"Vitalii Kapitan",
"Dmitrii Kapitan",
"Petr Andriushchenko"
] | In this paper, we consider the regression problem of predicting thermodynamic quantities - specifically the average energy $\langle E \rangle$ - as a function of temperature $T$ for spin glasses on a square lattice. The spin glass is represented as a weighted graph, where exchange interactions define the edge weights. We investigate how the spatial distribution of these interactions relates to $\langle E \rangle$, leveraging several machine learning approaches that we specifically developed for this task. While $\langle E \rangle$ is used to demonstrate the approach, our framework is general and can be applicable to the prediction of other thermodynamic characteristics. | [
"spin glass",
"Ising model",
"neural network"
] | Accept (Poster) | https://openreview.net/pdf?id=WPj94Xn1qx | https://openreview.net/forum?id=WPj94Xn1qx | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"3xnrjhB1MS"
],
"note_type": [
"decision"
],
"note_created": [
1741244335183
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
WNwRteM7H9 | Improved Sampling of Diffusion Models in Fluid Dynamics with Tweedie's Formula | [
"Youssef Shehata",
"Benjamin Holzschuh",
"Nils Thuerey"
] | Denoising Diffusion Probabilistic Models (DDPMs), while powerful, require extensive sampling due to a high number of function evaluations (NFEs) for accurate predictions, hindering their use in long-term spatio-temporal physics predictions. We address this limitation by introducing two novel sampling strategies: 1) Truncated Sampling Models, which achieve high-fidelity single/few-step sampling by truncating the diffusion process, bridging the gap with deterministic methods; and 2) Iterative Refinement, a reformulation of DDPM sampling as a few-step refinement process. We demonstrate that both methods significantly improve accuracy over DDPMs, DDIMs, and EDMs with NFEs $\leq$ 10 for compressible transonic flows over a cylinder and provide stable long-term predictions. | [
"fluid dynamics",
"diffusion models"
] | Accept (Poster) | https://openreview.net/pdf?id=WNwRteM7H9 | https://openreview.net/forum?id=WNwRteM7H9 | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"IHPW0ZCqWp"
],
"note_type": [
"decision"
],
"note_created": [
1741245456659
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
VTKPOXKXlx | FreeFlow: Latent Flow Matching for Free Energy Difference Estimation | [
"Ege Erdogan",
"Radoslav Ralev",
"Mika Rebensburg",
"Céline Marquet",
"Leon Klein",
"Hannes Stark"
] | Estimating free energy differences between molecular systems is fundamental for understanding molecular interactions and accelerating drug discovery. Current techniques use molecular dynamics to sample the Boltzmann distributions of the two systems and of several intermediate "alchemical" distributions that interpolate between them. From the resulting ensembles, free energy differences can be estimated by averaging importance weight analogs for multiple distributions. We replace slow alchemical simulations with a fast-to-train flow model bridging two systems. After training, we obtain free energy differences by integrating the flow's instantaneous change of variables when transporting samples between the two distributions. To map between molecular systems with different numbers of atoms, we replace the previous solutions of simulating auxiliary "dummy atoms" by additionally training two autoencoders that project the systems to a same-dimensional latent space in which our flow operates. A generalized change of variables formula for trans-dimensional mappings motivates the use of autoencoders in our free energy estimation pipeline. We validate our approach on pharmaceutically relevant ligands in solvent and results show strong agreement with reference values. | [
"free energy",
"flow matching",
"free energy perturbation",
"computational biology"
] | Accept (Poster) | https://openreview.net/pdf?id=VTKPOXKXlx | https://openreview.net/forum?id=VTKPOXKXlx | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"BjgziCr54Z"
],
"note_type": [
"decision"
],
"note_created": [
1741245290132
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
T86jIuSi5A | DoMiNO: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations | [
"Fang Sun",
"Zijie Huang",
"Yadi Cao",
"Xiao Luo",
"Wei Wang",
"Yizhou Sun"
] | Molecular dynamics (MD) simulations are crucial for understanding and predicting the behavior of molecular systems in biology and chemistry. However, their wide adoption is hindered by two main challenges: (1) computational cost, because fine-grained simulations often require millions of small timesteps, and (2) lack of flexibility, as existing machine-learning-based surrogates typically operate at either a single small or single large timestep. These approaches either accumulate significant rollout errors or lose the ability to produce finegrained results if the timestep is large. To address these issues, we propose DoMiNO: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations, a hierarchical framework that models multi-scale dynamics. Specifically, DoMiNO performs down-scaling by progressively up-sampling the trajectory across multiple temporal resolutions, equipping each level with a Neural Graph ODE to capture that scale's dominant behavior. At inference, DoMiNO flexibly combines different timestep sizes to predict both short- and longrange dynamics with high fidelity. Empirical results on challenging MD bench-marks-ranging from small molecules to proteins-demonstrate the method's longterm stability, flexibility, and accuracy. Our implementation is available at https://github.com/FrancoTSolis/domino-code. | [
"Multi-scale Modeling; Molecular Dynamics; Neural ODE"
] | Accept (Oral) | https://openreview.net/pdf?id=T86jIuSi5A | https://openreview.net/forum?id=T86jIuSi5A | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"HRQFwL9Fug"
],
"note_type": [
"decision"
],
"note_created": [
1741245860951
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}"
]
} |
SRCsyJafgP | On Incorporating Scale into Graph Networks | [
"Christian Koke",
"Yuesong Shen",
"Abhishek Saroha",
"Marvin Eisenberger",
"Bastian Rieck",
"Michael M. Bronstein",
"Daniel Cremers"
] | Standard graph neural networks assign vastly different latent embeddings to graphs describing the same physical system at different resolution scales. This precludes consistency in applications and prevents generalization between scales as would fundamentally be needed in many scientific applications. We uncover the underlying obstruction, investigate its origin and show how to overcome it. | [
"Generalization",
"(Resolution-)Scale",
"Graph Neural Networks"
] | Accept (Oral) | https://openreview.net/pdf?id=SRCsyJafgP | https://openreview.net/forum?id=SRCsyJafgP | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"ER7BLYzZ5X"
],
"note_type": [
"decision"
],
"note_created": [
1741244658406
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Oral)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for workshop name in the header.\", \"title\": \"Paper Decision\"}"
]
} |
SGg5SSZGu2 | 5D Neural Surrogates for Nonlinear Gyrokinetic Simulations of Plasma Turbulence | [
"Gianluca Galletti",
"Fabian Paischer",
"Paul Setinek",
"William Hornsby",
"Lorenzo Zanisi",
"Naomi Carey",
"Stanislas Pamela",
"Johannes Brandstetter"
] | Nuclear fusion plays a pivotal role in the quest for reliable and sustainable energy production. A major roadblock to achieving commercially viable fusion power is understanding plasma turbulence, which can significantly degrade plasma confinement. Modelling turbulence is crucial to design performing plasma scenarios for next-generation reactor-class devices and current experimental machines. The nonlinear gyrokinetic equation underpinning turbulence modelling evolves a 5D distribution function over time. Solving this equation numerically is extremely expensive, requiring up to weeks for a single run to converge, making it unfeasible for iterative optimisation and control studies. In this work, we propose a method for training neural surrogates for 5D gyrokinetic simulations. Our method extends a hierarchical vision transformer to five dimensions and is trained on the 5D distribution function for the adiabatic electron approximation. We demonstrate that our model can accurately infer downstream physical quantities such as heat flux time trace and electrostatic potentials for single-step predictions two orders of magnitude faster than numerical codes. Our work paves the way towards neural surrogates for plasma turbulence simulations to accelerate deployment of commercial energy production via nuclear fusion. | [
"Neural surrogates",
"Surrogate models",
"Gyrokinetics"
] | Accept (Oral) | https://openreview.net/pdf?id=SGg5SSZGu2 | https://openreview.net/forum?id=SGg5SSZGu2 | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"NDC3Ot1K01"
],
"note_type": [
"decision"
],
"note_created": [
1741244057568
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}"
]
} |
R1Lrk1EffC | On Learning Quasi-Lagrangian Turbulence | [
"Artur P. Toshev",
"Teodor Kalinov",
"Nicholas Gao",
"Stephan Günnemann",
"Nikolaus A. Adams"
] | Lagrangian, or particle-based, fluid mechanics methods are the dominant numerical tool for simulating complex boundaries, solid-fluid interactions, and multi-phase flows. While their counterpart, the Eulerian framework, has seen significant progress in learning turbulence closures – such as large eddy simulation (LES) modeling – turbulence modeling in the Lagrangian framework has been far less successful. In this paper, we first explain why preserving the correct energy spectrum, crucial for analyzing turbulence, is fundamentally impossible in a fully Lagrangian description. This limitation necessitates using quasi-Lagrangian schemes – methods that adjust the evolution of fluid particle positions beyond their physical velocity to improve accuracy. However, manually designing such corrections is challenging, motivating data-driven approaches. To this end, we are the first to investigate machine-learned quasi-Lagrangian fluid dynamics surrogates. Our experiments are on a new quasi-Lagrangian 2D turbulent Kolmogorov dataset, where velocities from a high-fidelity direct numerical simulation (DNS) solver are spectrally interpolated onto fluid particles, interleaved with particle relaxations to achieve weakly compressible fluid dynamics. We compare six machine-learning parametrizations for evolving the positions and velocities of particles. Our results show that learning simple unconstrained correction terms yields coarse-grained simulations that align well with the reference high-fidelity simulation. | [
"fluid mechanics",
"Lagrangian",
"particle-based",
"graph neural networks"
] | Accept (Poster) | https://openreview.net/pdf?id=R1Lrk1EffC | https://openreview.net/forum?id=R1Lrk1EffC | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"UOCU6Ja9A7"
],
"note_type": [
"decision"
],
"note_created": [
1741244898162
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
OCM7OkVg9C | LOGLO-FNO: Efficient Learning of Local and Global Features in Fourier Neural Operators | [
"Marimuthu Kalimuthu",
"David Holzmüller",
"Mathias Niepert"
] | Modeling high-frequency information is a critical challenge in scientific machine learning. For instance, fully turbulent flow simulations of Navier-Stokes equations at Reynolds numbers 3500 and above can generate high-frequency signals due to swirling fluid motions caused by eddies and vortices. Faithfully modeling such signals using neural networks depends on the accurate reconstruction of moderate to high frequencies. However, it has been well known that deep neural nets exhibit the so-called spectral bias toward learning low-frequency components. Meanwhile, Fourier Neural Operators (FNOs) have emerged as a popular class of data-driven models in recent years for solving Partial Differential Equations (PDEs) and for surrogate modeling in general. Although impressive results have been achieved on several PDE benchmark problems, FNOs often perform poorly in learning non-dominant frequencies characterized by local features. This limitation stems from the spectral bias inherent in neural networks and the explicit exclusion of high-frequency modes in FNOs and their variants. Therefore, to mitigate these issues and improve FNO's spectral learning capabilities to represent a broad range of frequency components, we propose two key architectural enhancements: (i) a parallel branch performing local spectral convolutions and (ii) a high-frequency propagation module. Moreover, we propose a novel frequency-sensitive loss term based on radially binned spectral errors. This introduction of a parallel branch for local convolutions reduces the number of trainable parameters by up to 50% while achieving the accuracy of baseline FNO that relies solely on global convolutions. Experiments on three challenging PDE problems in fluid mechanics and biological pattern formation, and the qualitative and spectral analysis of predictions show the effectiveness of our method over the state-of-the-art neural operator baselines. | [
"Fourier Neural Operators",
"Multi-Scale Modeling",
"High-Frequency Modeling",
"Partial Differential Equations",
"Spectral Loss",
"Operator Learning"
] | Accept (Oral) | https://openreview.net/pdf?id=OCM7OkVg9C | https://openreview.net/forum?id=OCM7OkVg9C | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"XcaKXdph3t"
],
"note_type": [
"decision"
],
"note_created": [
1741244781840
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}"
]
} |
MjhOqgiDju | Symbolic Regression for Learning Scale Transition Equations in Synthetic Fractal Surface Roughness | [
"Aneesh Chatrathi",
"Zayan Hasan"
] | Modeling surface roughness in materials science is a challenging multiscale problem, as surface textures often exhibit hierarchical (fractal-like) structure across multiple scales. In this work, we present a synthetic data-driven approach to studying scale transitions in surface roughness using fractal data generation and symbolic regression. We construct coarse-grained representations of synthetic fractal surfaces and apply symbolic regression to derive interpretable mathematical expressions that map fine-scale features to coarse-scale behavior. On controlled synthetic data, our approach achieves high predictive accuracy (R² near 1, low MSE), serving as a baseline validation. While the data is idealized, these results suggest that symbolic regression can capture scale-transition relationships in hierarchical surface structures and may also be able to support future efforts in data-driven multiscale modeling. This work highlights the potential of symbolic learning in accelerating modeling workflows for complex physical systems. | [
"Symbolic Regression",
"Surface Roughness",
"Fractal Geometry",
"Coarse Graining",
"Multiscale Modeling",
"Machine Learning",
"Uncertainty Quantification"
] | Accept (Poster) | https://openreview.net/pdf?id=MjhOqgiDju | https://openreview.net/forum?id=MjhOqgiDju | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"a6y374Bt6l"
],
"note_type": [
"decision"
],
"note_created": [
1741245263388
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
MByFkdS8SI | Boosting Protein Graph Representations through Static-Dynamic Fusion | [
"Pengkang Guo",
"Bruno Correia",
"Pierre Vandergheynst",
"Daniel Probst"
] | Machine learning for protein modeling faces significant challenges due to proteins' inherently dynamic nature, yet most graph-based machine learning methods rely solely on static structural information. Recently, the growing availability of molecular dynamics trajectories provides new opportunities for understanding the dynamic behavior of proteins; however, computational methods for utilizing this dynamic information remain limited. We propose a novel graph representation that integrates both static structural information and dynamic correlations from molecular dynamics trajectories, enabling more comprehensive modeling of proteins. By applying relational graph neural networks (RGNNs) to process this heterogeneous representation, we demonstrate significant improvements over structure-based approaches across three distinct tasks: atomic adaptability prediction, binding site detection, and binding affinity prediction. Our results validate that combining static and dynamic information provides complementary signals for understanding protein-ligand interactions, offering new possibilities for drug design and structural biology applications. | [
"Graph Neural Networks",
"Protein Modeling",
"Molecular Dynamics",
"Heterogeneous Graph"
] | Accept (Poster) | https://openreview.net/pdf?id=MByFkdS8SI | https://openreview.net/forum?id=MByFkdS8SI | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"IVQ0UDKOqo"
],
"note_type": [
"decision"
],
"note_created": [
1741245151477
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
Eb7fQBdA3D | ViNE-GATr: scaling geometric algebra transformers with virtual nodes embeddings | [
"Julian Suk",
"Thomas Hehn",
"Arash Behboodi",
"Gabriele Cesa"
] | Equivariant neural networks can effectively model physical systems by naturally handling the underlying geometric quantities and preserving their symmetries, but scaling them to large geometric data remains challenging. Naive downsampling typically disrupts features’ transformation laws, limiting their applicability in large scale settings. In this work, we propose a scalable equivariant transformer that efficiently processes geometric data in a coarse-grained latent space while preserving E(3) symmetries of the problem. In particular, by building on the Geometric Algebra Transformer (GATr) and PerceiverIO architectures, our method learns equivariant latent tokens which allow us to decouple the processing complexity from the input data representation while maintaining global equivariance. | [
"Geometric deep learning",
"Transformers",
"Virtual nodes learning",
"Equivariance"
] | Accept (Poster) | https://openreview.net/pdf?id=Eb7fQBdA3D | https://openreview.net/forum?id=Eb7fQBdA3D | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"lS7CN93pzJ"
],
"note_type": [
"decision"
],
"note_created": [
1741244922695
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
CvFc81LQcE | When is Bayesian Optimization Beneficial? A Critical Assessment of Optimization Strategies in High-Throughput Organic Photovoltaic Manufacturing | [
"Matthew Osvaldo",
"Leonard Ng Wei Tat"
] | We present a systematic evaluation of optimization strategies for high-throughput organic photovoltaic (OPV) manufacturing. Analyzing 11,587 PBF-QxF:Y6 devices across 11 manufacturing parameters through 25 optimization iterations, we compared Bayesian Optimization (BO) and Random Search (RS). While BO achieved 7.69% PCE versus RS's 7.66%, this 0.03% advantage required 20x more computational overhead. Statistical analysis revealed no significant performance difference between methods (t-stat = 0.53, p > 0.05). Environmental factors, particularly humidity (r = 0.380), showed stronger correlation with performance than optimization strategy choice. Manufacturing process control, rather than algorithmic sophistication, emerges as the critical factor for high-throughput OPV optimization. These findings suggest prioritizing robust process control systems over complex optimization algorithms in manufacturing environments. | [
"Organic photovoltaics",
"Bayesian optimization",
"High-throughput manufacturing",
"Self-Driving Labs"
] | Accept (Poster) | https://openreview.net/pdf?id=CvFc81LQcE | https://openreview.net/forum?id=CvFc81LQcE | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"LsngMFnZbF"
],
"note_type": [
"decision"
],
"note_created": [
1741244422491
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)).\", \"title\": \"Paper Decision\"}"
]
} |
BrsPVXO6f5 | A Joint Space-Time Encoder for Geographic Time-Series Data | [
"David Mickisch",
"Konstantin Klemmer",
"Mélisande Teng",
"David Rolnick"
] | Many real-world processes are characterized by complex spatio-temporal dependencies, from climate dynamics to disease spread. Here, we introduce a new neural network architecture to model such dynamics at scale: the \emph{Space-Time Encoder}. Building on recent advances in \emph{location encoders}, models that take as inputs geographic coordinates, we develop a method that takes in geographic and temporal information simultaneously and learns smooth, continuous functions in both space and time. The inputs are first transformed using positional encoding functions and then fed into neural networks that allow the learning of complex functions. We implement a prototype of the \emph{Space-Time Encoder}, discuss the design choices of the novel temporal encoding, and demonstrate its utility in climate model emulation. We discuss the potential of the method across use cases, as well as promising avenues for further methodological innovation. | [
"climate and weather",
"surrogate modelling",
"geographic time series",
"location encodings",
"deep learning regularization"
] | Accept (Poster) | https://openreview.net/pdf?id=BrsPVXO6f5 | https://openreview.net/forum?id=BrsPVXO6f5 | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"tZkVmOrNYB"
],
"note_type": [
"decision"
],
"note_created": [
1741244846779
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
9niAAZES5o | Multi-Scale Modeling of Financial Systems Using Neural Differential Equations: Applications to High-Frequency Trading, Regime Switching, and Portfolio Optimization | [
"Tao Qiu"
] | This paper explores the application of neural differential equations (NDEs) to model the multi-scale dynamics of financial systems, with a focus on high-frequency trading, regime-switching asset prices, and portfolio optimization. We propose a novel framework that integrates stochastic volatility and hierarchical architectures to capture both short-term fluctuations and long-term trends. We demonstrate the effectiveness of NDEs in predicting prices, identifying regime transitions, and optimizing portfolios across multiple time scales. The framework is compared with traditional methods such as GARCH and LSTMs, showing superior performance in terms of predictive accuracy, computational efficiency, and risk-adjusted returns. The results highlight the potential of NDEs for real-time applications in financial markets, offering a scalable and interpretable solution for modeling complex systems. | [
"Neural Differential Equations",
"Multi-Scale Modeling",
"High-Frequency Trading",
"Regime Switching",
"Portfolio Optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=9niAAZES5o | https://openreview.net/forum?id=9niAAZES5o | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"hKR42mouOX"
],
"note_type": [
"decision"
],
"note_created": [
1741245191893
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Conditional on the authors fixing the formatting to use the workshop LaTeX style ([file](https://multiscale-ai.github.io/ICLR_2025_MLMP_Workshop.zip), [Overleaf](https://www.overleaf.com/read/wrsbgszhyntq#61c4bc)). It is almost identical to the main track templates, except for the header.\", \"title\": \"Paper Decision\"}"
]
} |
8y57m1AEiQ | Generative subgrid-scale modeling | [
"Jiaxi Zhao",
"Sohei Arisaka",
"Qianxiao Li"
] | The mismatch between the a-priori and a-posteriori error is ubiquitous in data-driven subgrid-scale (SGS) modeling, which is an important ingredient in large eddy simulations. In this work, we investigate the cause of this mismatch in depth and attribute it to two issues: data imbalance and multi-valuedness. Based on this understanding, we propose a generative modeling approach for the SGS stresses that resolves the issue of multi-valuedness and demonstrate its effectiveness in the Kuramoto-Sivashinsky equation. | [
"Subgrid-scale model",
"generative model",
"chaotic system"
] | Accept (Poster) | https://openreview.net/pdf?id=8y57m1AEiQ | https://openreview.net/forum?id=8y57m1AEiQ | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"Zeo7i9uhja"
],
"note_type": [
"decision"
],
"note_created": [
1741244728974
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
8aTDqamK5u | AdS-GNN - a Conformally Equivariant Graph Neural Network | [
"Maksim Zhdanov",
"Nabil Iqbal",
"Erik J Bekkers",
"Patrick Forré"
] | Conformal symmetries, i.e. coordinate transformations that preserve angles, play a key role in many fields, including physics, mathematics, computer vision and (geometric) machine learning. Here we build a neural network that is equivariant under general conformal transformations. To achieve this, we lift data from flat Euclidean space to Anti de Sitter (AdS) space. This allows us to exploit a known correspondence between conformal transformations of flat space and isometric transformations on the Anti de Sitter space. We then build upon the fact that such isometric transformations have been extensively studied on general geometries in the geometric deep learning literature. In particular, we then employ message-passing layers conditioned on the proper distance, yielding a computationally efficient framework. We validate our model on point cloud classification (SuperPixel MNIST) and semantic segmentation (PascalVOC-SP). | [
"scale-equivariance; conformal group; equivariance"
] | Accept (Poster) | https://openreview.net/pdf?id=8aTDqamK5u | https://openreview.net/forum?id=8aTDqamK5u | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"SWnU0YtdOE"
],
"note_type": [
"decision"
],
"note_created": [
1741244710670
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
3pj6salA3Y | PRISM: Enhancing Protein Inverse Folding through Fine-Grained Retrieval on Structure-Sequence Multimodal Representations | [
"Sazan Mahbub",
"Souvik Kundu",
"Eric P. Xing"
] | 3D structure-conditioned protein sequence generation, also known as protein inverse folding, is a key challenge in computational biology. While large language models for proteins have made significant strides, they cannot dynamically integrate rich multimodal representations from existing datasets, specifically the combined information of 3D structure and 1D sequence. Additionally, as datasets grow, these models require retraining, leading to inefficiencies. In this paper, we introduce PRISM, a novel retrieval-augmented generation (RAG) framework that enhances protein sequence design by dynamically incorporating fine-grained multimodal representations from a larger set of known structure-sequence pairs. Our experiments demonstrate that PRISM significantly outperforms state-of-the-art techniques in sequence recovery, emphasizing the advantages of incorporating fine-grained, multimodal retrieval-augmented generation in protein design. | [
"Retrieval Augmented Generation",
"Protein Inverse Folding",
"Protein Sequence Design",
"Multimodal Representation"
] | Accept (Poster) | https://openreview.net/pdf?id=3pj6salA3Y | https://openreview.net/forum?id=3pj6salA3Y | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"jSZYMSjtSD"
],
"note_type": [
"decision"
],
"note_created": [
1741245677139
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"A small comment, in the OpenReview console the title shows as `$\\\\textbf{PRISM:}$...` Not sure how it will be displayed in the list of accepted papers, but you might want to check.\", \"title\": \"Paper Decision\"}"
]
} |
02VgrmjZwd | Connecting Scales: Learning Dynamics for Efficient Ionic Conductivity Predictions with Graphs | [
"Volha Turchyna",
"Artem Maevskiy",
"Alexandra Carvalho",
"Andrey E Ustyuzhanin"
] | Multiscale approaches are crucial for advancing our understanding of material properties, particularly in the search for novel solid electrolytes essential for solid-state batteries. Estimating ionic conductivity through traditional molecular dynamics (MD) simulations is computationally intensive, requiring significant time to capture macro-scale behavior from micro-scale interatomic interactions. This work addresses the challenge of connecting micro-scale interatomic potentials with macro-scale conductivity measurements. We propose using equivariant graph neural networks to develop a faster mapping between these scales, significantly enhancing the efficiency of ionic diffusion predictions. This proof-of-concept demonstrates the potential to accelerate material discovery for solid electrolytes, addressing a critical need in energy storage technology. | [
"Solid Electrolytes",
"Ionic Conductivity",
"Accelerated Material Discovery",
"Equivariant Graph Neural Networks"
] | Accept (Poster) | https://openreview.net/pdf?id=02VgrmjZwd | https://openreview.net/forum?id=02VgrmjZwd | ICLR.cc/2025/Workshop/MLMP | 2025 | {
"note_id": [
"uisx22FyAI"
],
"note_type": [
"decision"
],
"note_created": [
1741245488905
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLMP/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
e4uwE1muhR | Is API Access to LLMs Useful for Generating Private Synthetic Tabular Data? | [
"Marika Swanberg",
"Ryan McKenna",
"Edo Roth",
"Albert Cheu",
"Peter Kairouz"
] | Differentially private (DP) synthetic data is a versatile tool for enabling the analysis of private data. Recent advancements in large language models (LLMs) have inspired a number of algorithm techniques for improving DP synthetic data generation. One family of approaches uses DP finetuning on the foundation model weights; however, the model weights for state-of-the-art models may not be public. In this work we propose two DP synthetic tabular data algorithms that only require API access to the foundation model. We adapt the Private Evolution algorithm (Lin et al., 2023; Xie et al., 2024) --- which was designed for image and text data---to the tabular data domain. In our extension of Private Evolution, we define a query workload-based distance measure, which may be of independent interest. We propose a family of algorithms that use one-shot API access to LLMs, rather than adaptive queries to the LLM. Our findings reveal that API-access to powerful LLMs does not always improve the quality of DP synthetic data compared to established baselines that operate without such access. We provide insights into the underlying reasons and propose improvements to LLMs that could make them more effective for this application. | [
"differential privacy",
"tabular",
"text",
"synthetic",
"llm",
"api"
] | Accept | https://openreview.net/pdf?id=e4uwE1muhR | https://openreview.net/forum?id=e4uwE1muhR | ICLR.cc/2025/Workshop/SynthData | 2025 | {
"note_id": [
"tUED4Fw0ZL"
],
"note_type": [
"decision"
],
"note_created": [
1741094280772
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/SynthData/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"Thanks for your submission!\"}"
]
} |
zVbFfPuqPi | STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings | [
"Saksham Rastogi",
"Pratyush Maini",
"Danish Pruthi"
] | Given how large parts of the publicly available text are crawled to pretrain large language models (LLMs), creators increasingly worry about the inclusion of their proprietary data for model training without attribution or licensing. Their concerns are also shared by benchmark curators whose test-sets might be compromised. In this paper, we present STAMP, a framework for detecting dataset membership—i.e., determining the inclusion of a dataset in the pretraining corpora of LLMs. Given an original piece of content, our proposal involves generating multiple watermarked rephrases such that a distinct watermark is embedded in each rephrasing. One version is released publicly while others are kept private. Subsequently, creators can compare model likelihoods between public and private versions using paired statistical tests to prove membership. We show that our framework can successfully detect contamination across four benchmarks which appear only once in the training data and constitute less than 0.001% of the total tokens, outperforming several contamination detection and dataset inference baselines. We verify that our approach preserves both the semantic meaning and the utility of benchmarks in comparing different models. We apply STAMP to two real-world scenarios to confirm the inclusion of paper abstracts and blog articles in the pretraining corpora. | [
"watermarking",
"test set contamination",
"membership inference",
"dataset inference",
"benchmarks",
"synthetic data"
] | Accept | https://openreview.net/pdf?id=zVbFfPuqPi | https://openreview.net/forum?id=zVbFfPuqPi | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"BklTctHXXK"
],
"note_type": [
"decision"
],
"note_created": [
1741250134616
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
ygE0U21vxM | ON-DEVICE WATERMARKING: A SOCIO-TECHNICAL IMPERATIVE FOR AUTHENTICITY IN THE AGE OF GENERATIVE AI | [
"Houssam Kherraz"
] | As generative AI models produce increasingly realistic output, both academia and industry are focusing on the ability to detect whether an output was generated by an AI model or not. Many of the research efforts and policy discourse are centered around robust watermarking of AI outputs. While plenty of progress has been made, all watermarking and AI detection techniques face severe limitations. In this position paper, we argue that we are adopting the wrong approach, and should instead focus on watermarking trustworthy content rather than AI generated ones. For audio-visual content, in particular, all real content is grounded in the physical world and captured via hardware sensors. This presents a unique opportunity to watermark at the hardware layer, and we lay out a socio-technical framework and draw parallels with HTTPS certification and Blu-Ray verification protocols. While acknowledging implementation challenges, we contend that hardware-based authentication offers a more tractable path forward, particularly from a policy perspective. As generative models approach perceptual indistinguishability, the research community should be wary of being overly optimistic with AI watermarking, and we argue that AI watermarking research efforts are better spent in the text and LLM space, which are ultimately not traceable to a physical sensor. | [
"AI",
"watermarking",
"C2PA",
"cryptography",
"SoC",
"policy",
"hardware",
"diffusion models",
"generative",
"misinformation"
] | Accept | https://openreview.net/pdf?id=ygE0U21vxM | https://openreview.net/forum?id=ygE0U21vxM | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"4nxZO78mcm"
],
"note_type": [
"decision"
],
"note_created": [
1741250134365
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
yXKnzFxNWK | Learning Self-Supervised Style Representations for Detecting AI-Generated Faces | [
"Tharun Anand",
"Sanjeev Manivannan",
"Siva Sankar S",
"Kaushik Mitra"
] | AI-generated photorealistic faces—from GANs to diffusion models—have become indistinguishable from authentic images. This poses significant privacy and security risks, enabling misinformation and identity fraud at scale on social media and other platforms. To detect these AI-generated faces effectively, we propose a fundamentally new approach inspired by the intrinsic stylistic discrepancies between authentic and synthetic images. Our key insight is that even highly realistic AI-generated faces exhibit persistent differences in style representations, which manifest as distinguishable patterns in the W+ Style Space. We introduce a self-supervised style representation learning approach that captures intrinsic differences between actual and synthetic faces. By first learning the style distribution of authentic images, our method identifies deviations indicative of AI generation without relying on explicit generative watermarks. This enables strong generalization across unseen generators, including diffusion-based models. Experiments show high detection accuracy (93\%+) across multiple generative datasets and significant improvements in cross-domain settings. | [
"Representation-Learning",
"Self-Supervised Learning",
"Anomaly Detection"
] | Accept | https://openreview.net/pdf?id=yXKnzFxNWK | https://openreview.net/forum?id=yXKnzFxNWK | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"7RGcExqGzv"
],
"note_type": [
"decision"
],
"note_created": [
1741250135829
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
yENK6DYuXv | Towards A Correct Usage of Cryptography in Semantic Watermarks for Diffusion Models | [
"Jonas Thietke",
"Andreas Müller",
"Denis Lukovnikov",
"Asja Fischer",
"Erwin Quiring"
] | Semantic watermarking methods enable the direct integration of watermarks into the generation process of latent diffusion models by only modifying the initial latent noise. One line of approaches building on Gaussian Shading relies on cryptographic primitives to steer the sampling process of the latent noise. However, we identify several issues in the usage of cryptographic techniques in Gaussian Shading, particularly in its proof of lossless performance and key management, causing ambiguity in follow-up works, too. In this work, we therefore revisit the cryptographic primitives for semantic watermarking. We introduce a novel, general proof of lossless performance based on IND\$-CPA security for semantic watermarks. We then discuss the configuration of the cryptographic primitives in semantic watermarks with respect to security, efficiency, and generation quality. | [
"Gaussian Shading",
"watermarking",
"lossless watermarking"
] | Accept | https://openreview.net/pdf?id=yENK6DYuXv | https://openreview.net/forum?id=yENK6DYuXv | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"GbQ4cIbISZ"
],
"note_type": [
"decision"
],
"note_created": [
1741250136423
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
xetVzmw9dW | Watermarking Language Models with Error Correcting Codes | [
"Patrick Chao",
"Yan Sun",
"Edgar Dobriban",
"Hamed Hassani"
] | Recent progress in large language models enables the creation of realistic machine-generated content. Watermarking is a promising approach to distinguish machine-generated text from human text, embedding statistical signals in the output that are ideally undetectable to humans. We propose a watermarking framework that encodes such signals through an error correcting code. Our method, termed robust binary code (RBC) watermark, introduces no noticeable degradation in quality. We evaluate our watermark on base and instruction fine-tuned models and find our watermark is robust to edits, deletions, and translations. We provide an information-theoretic perspective on watermarking, a powerful statistical test for detection and for generating $p$-values, and theoretical guarantees. Our empirical findings suggest our watermark is fast, powerful, and robust, comparing favorably to the state-of-the-art. | [
"Large Language Model; Watermarking; Error Correcting Code"
] | Accept | https://openreview.net/pdf?id=xetVzmw9dW | https://openreview.net/forum?id=xetVzmw9dW | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"sRVCLFEvTm"
],
"note_type": [
"decision"
],
"note_created": [
1741250135119
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
wXrrxqQGzq | SWA-LDM: Toward Stealthy Watermarks for Latent Diffusion Models | [
"Zhonghao Yang",
"Linye Lyu",
"Xuanhang Chang",
"Daojing He",
"YU LI"
] | Latent Diffusion Models (LDMs) have established themselves as powerful tools in the rapidly evolving field of image generation, capable of producing highly realistic images. However, their widespread adoption raises critical concerns about copyright infringement and the misuse of generated content. Watermarking techniques have emerged as a promising solution, enabling copyright identification and misuse tracing through imperceptible markers embedded in generated images. Among these, latent-based watermarking techniques are particularly promising, as they embed watermarks directly into the latent noise without altering the underlying LDM architecture.
In this work, we demonstrate—for the first time—that such latent-based watermarks are practically vulnerable to detection and compromise through systematic analysis of output images' statistical patterns. To counter this, we propose SWA-LDM (Stealthy Watermark for LDM), a lightweight framework that enhances stealth by dynamically randomizing the embedded watermarks using the Gaussian-distributed latent noise inherent to diffusion models.
By embedding unique, pattern-free signatures per image, SWA-LDM eliminates detectable artifacts while preserving image quality and extraction robustness. Experiments demonstrate an average of 20\% improvement in stealth over state-of-the-art methods, enabling secure deployment of watermarked generative AI in real-world applications. | [
"Watermarking",
"AI Safety",
"Latent Diffusion Models",
"Generative AI"
] | Accept | https://openreview.net/pdf?id=wXrrxqQGzq | https://openreview.net/forum?id=wXrrxqQGzq | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"JvXASqrgyz"
],
"note_type": [
"decision"
],
"note_created": [
1741250135889
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
wLaP37BrhE | First-Place Solution to NeurIPS 2024 Invisible Watermark Removal Challenge | [
"Fahad Shamshad",
"Tameem Bakr",
"Yahia Salaheldin Shaaban",
"Noor Hazim Hussein",
"Karthik Nandakumar",
"Nils Lukas"
] | Content watermarking is an important tool for the authentication and copyright protection of digital media.
However, it is unclear whether existing watermarks are robust against adversarial attacks.
We present the \textbf{winning solution} to the NeurIPS 2024 \textit{Erasing the Invisible} challenge, which stress-tests watermark robustness under varying degrees of an adversary's knowledge. The challenge consisted of two tracks: a black-box and beige-box track, depending on whether the adversary knows which watermarking method was used by the provider.
For the \textbf{beige-box} track, we leverage an \textit{adaptive} VAE-based evasion attack, with a test-time optimization and color-contrast restoration in CIELAB space to preserve the image's quality. For the \textbf{black-box} track, we first cluster images based on their artifacts in the spatial or frequency-domain. Then, we apply image-to-image diffusion models with controlled noise injection and semantic priors from ChatGPT-generated captions to each cluster with optimized parameter settings. Empirical evaluations demonstrate that our method successfully \textbf{achieves near-perfect watermark removal} (95.7\%) with negligible impact on the residual image's quality. We hope that our attacks inspire the development of more robust image watermarking methods. | [
"watermarking",
"robustness",
"neurips competition erasing the invisible"
] | Accept | https://openreview.net/pdf?id=wLaP37BrhE | https://openreview.net/forum?id=wLaP37BrhE | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"NxyX96Aerl"
],
"note_type": [
"decision"
],
"note_created": [
1741250136552
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
uVWme6Rk9D | Optimizing Adaptive Attacks against Content Watermarks for Language Models | [
"Abdulrahman Diaa",
"Toluwani Aremu",
"Nils Lukas"
] | Large Language Models (LLMs) can be misused to spread online spam and misinformation. Content watermarking deters misuse by hiding a message in generated outputs, enabling detection using a secret \emph{watermarking key}. Robustness is a core security property, stating that evading detection requires (significant) degradation of the content's quality. Many LLM watermarking methods have been proposed, but robustness is tested only against \emph{non-adaptive} attackers who lack knowledge of the provider's watermarking method and can find only suboptimal attacks. We formulate the robustness of LLM watermarking as an objective function and use preference-based optimization to tune \emph{adaptive} attacks against the specific watermarking method. Our evaluation shows that: (i) adaptive attacks evade detection against all surveyed watermarking methods. (ii) Even in a non-adaptive setting, attacks optimized adaptively against known watermarks remain effective when tested on unseen watermarks, and (iii) optimization-based attacks are scalable and use limited computational resources of less than seven GPU hours. Our findings underscore the need to test robustness against adaptive attacks. | [
"Generative AI",
"LLM",
"Watermark",
"Adversarial Attack"
] | Accept | https://openreview.net/pdf?id=uVWme6Rk9D | https://openreview.net/forum?id=uVWme6Rk9D | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"aQAS70YAQY"
],
"note_type": [
"decision"
],
"note_created": [
1741250136089
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
tm1pZhec2e | CoheMark: A Novel Sentence-Level Watermark for Enhanced Text Quality | [
"Junyan Zhang",
"Shuliang Liu",
"Aiwei Liu",
"Yubo Gao",
"Jungang Li",
"Xiaojie Gu",
"Xuming Hu"
] | Watermarking technology is a method used to trace the usage of content generated by large language models. Sentence-level watermarking aids in preserving the semantic integrity within individual sentences while maintaining greater robustness. However, many existing sentence-level watermarking techniques depend on arbitrary segmentation or generation processes to embed watermarks, which can limit the availability of appropriate sentences. This limitation, in turn, compromises the quality of the generated response. To address the challenge of balancing high text quality with robust watermark detection, we propose CoheMark, an advanced sentence-level watermarking technique that exploits the cohesive relationships between sentences for better logical fluency. The core methodology of CoheMark involves selecting sentences through trained fuzzy c-means clustering and applying specific next sentence selection criteria. Experimental evaluations demonstrate that CoheMark achieves strong watermark strength while exerting minimal impact on text quality. | [
"NLP applications",
"watermarking",
"security"
] | Accept | https://openreview.net/pdf?id=tm1pZhec2e | https://openreview.net/forum?id=tm1pZhec2e | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"cyhskdNIK9"
],
"note_type": [
"decision"
],
"note_created": [
1741250134805
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
rUs5ryYqZe | The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses | [
"Grzegorz Gluch",
"Berkant Turan",
"Sai Ganesh Nagarajan",
"Sebastian Pokutta"
] | We formalize and analyze the trade-off between backdoor-based watermarks and adversarial defenses, framing it as an interactive protocol between a verifier and a prover. While previous works have primarily focused on this trade-off, our analysis extends it by identifying transferable attacks as a third, counterintuitive but necessary option. Our main result shows that for all learning tasks, at least one of the three exists: a watermark, an adversarial defense, or a transferable attack. By transferable attack, we refer to an efficient algorithm that generates queries indistinguishable from the data distribution and capable of fooling _all_ efficient defenders. Using cryptographic techniques, specifically fully homomorphic encryption, we construct a transferable attack and prove its necessity in this trade-off. Furthermore, we show that any task that satisfies our notion of a transferable attack implies a cryptographic primitive, thus requiring the underlying task to be computationally complex. Finally, we show that tasks of bounded VC-dimension allow adversarial defenses against all attackers, while a subclass allows watermarks secure against fast adversaries. | [
"Watermarks",
"Adversarial Defenses",
"Transferable Attacks",
"Interactive Proof Systems",
"Cryptography",
"Backdooring",
"Game Theory",
"Learning Theory"
] | Accept | https://openreview.net/pdf?id=rUs5ryYqZe | https://openreview.net/forum?id=rUs5ryYqZe | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"cu1vYofMIa"
],
"note_type": [
"decision"
],
"note_created": [
1741250136481
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
pFMNK403AH | Are Semantic Watermarks for Diffusion Models Resilient to Layout Control? | [
"Denis Lukovnikov",
"Andreas Müller",
"Jonas Thietke",
"Erwin Quiring",
"Asja Fischer"
] | Semantic watermarking methods embed information into generated images by modifying the initial latent noise, subtly modifying the output images. However, the widespread use of layout control techniques, such as ControlNets, raises questions about the applicability of semantic watermarking with layout control. After all, if semantic watermarks are really realized as meaningful changes in images (such as its layout), external layout specifications (e.g. through edge maps), could destroy the watermark information during denoising. This work empirically evaluates two semantic watermarking approaches---Tree-Ring Watermarking and Gaussian Shading---under various ControlNet-guided generation settings. Our results show that while ControlNets can slightly degrade watermark strength, both watermarking approaches remain largely detectable, demonstrating the potential viability of semantic watermarks even under strong layout constraints. | [
"semantic watermarking",
"controlnets",
"gaussian shading",
"treering"
] | Accept | https://openreview.net/pdf?id=pFMNK403AH | https://openreview.net/forum?id=pFMNK403AH | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"kPqXtWlzzY"
],
"note_type": [
"decision"
],
"note_created": [
1741250135940
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
ot30XyrcnZ | Watermarking for AI Content Detection: A Review on Text, Visual, and Audio Modalities | [
"Lele Cao"
] | The rapid advancement of generative artificial intelligence (GenAI) has revolutionized content creation across text, visual, and audio domains, simultaneously introducing significant risks such as misinformation, identity fraud, and content manipulation. This paper presents a practical survey of watermarking techniques designed to proactively detect GenAI content. We develop a structured taxonomy categorizing watermarking methods for text, visual, and audio modalities and critically evaluate existing approaches based on their effectiveness, robustness, and practicality. Additionally, we identify key challenges, including resistance to adversarial attacks, lack of standardization across different content types, and ethical considerations related to privacy and content ownership. Finally, we discuss potential future research directions aimed at enhancing watermarking strategies to ensure content authenticity and trustworthiness. This survey serves as a foundational resource for researchers and practitioners seeking to understand and advance watermarking techniques for AI-generated content detection. | [
"watermarking",
"AI-generated content",
"detection",
"text",
"visual",
"audio"
] | Accept | https://openreview.net/pdf?id=ot30XyrcnZ | https://openreview.net/forum?id=ot30XyrcnZ | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"MsBMd3RhGK"
],
"note_type": [
"decision"
],
"note_created": [
1741250135405
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
mRCXybDMF6 | Watermarks vs. Perturbations for Preventing AI-based Style Editing | [
"Qiuyu Tang",
"Aparna Bharati"
] | The remarkable image editing capabilities of generative models have led to growing concerns regarding unauthorized editing of multimedia. To mitigate against such misuse, artists and creators can utilize traditional image watermarking and more recent adversarial perturbation-based protection techniques to protect media assets. Watermarks generally protect the origin by establishing ownership, but can be easily removed. However, perturbation-based protection is aimed at disrupting editing and is harder to remove. In this paper, we evaluate the effectiveness of the two methods against Stable Diffusion in preventing the generation of usable edits. | [
"Diffusion",
"watermark",
"asset protection",
"style editing"
] | Accept | https://openreview.net/pdf?id=mRCXybDMF6 | https://openreview.net/forum?id=mRCXybDMF6 | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"9VGReKA6W6"
],
"note_type": [
"decision"
],
"note_created": [
1741250135188
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
lLEi0I4wDu | Robust Multi-bit Text Watermark with LLM-based Paraphrasers | [
"Xiaojun Xu",
"Jinghan Jia",
"Yuanshun Yao",
"Yang Liu",
"Hang Li"
] | We propose an imperceptible multi-bit text watermark embedded by paraphrasing with LLMs. We fine-tune a pair of LLM paraphrasers that are designed to behave differently so that their paraphrasing difference reflected in the text semantics can be identified by a trained decoder. To embed our multi-bit watermark, we use two paraphrasers alternatively to encode the pre-defined binary code at the sentence level. Then we use a text classifier as the decoder to decode each bit of the watermark. Through extensive experiments, we show that our watermarks can achieve over 99.99\% detection AUC with small (1.1B) text paraphrasers while keeping the semantic information of the original sentence. More importantly, our pipeline is robust under word substitution and sentence paraphrasing perturbations and generalizes well to out-of-distributional data. We also show the stealthiness of our watermark with LLM-based evaluation. We will open-source our code and watermark demo once the paper is accepted. | [
"Text Watermarking",
"LLM"
] | Accept | https://openreview.net/pdf?id=lLEi0I4wDu | https://openreview.net/forum?id=lLEi0I4wDu | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"5TSiyk28x5"
],
"note_type": [
"decision"
],
"note_created": [
1741250136447
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
jUokwqdemZ | On the Coexistence and Ensembling of Watermarks | [
"Aleksandar Petrov",
"Shruti Agarwal",
"Philip Torr",
"Adel Bibi",
"John Collomosse"
] | Watermarking, the practice of embedding imperceptible information into media such as images, videos, audio, and text, is essential for intellectual property protection, content provenance and attribution. The growing complexity of digital ecosystems necessitates watermarks for different uses to be embedded in the same media. However, in order to be able to detect and decode all watermarks, they need to coexist well with one another. We perform the first study of coexistence of deep image watermarking methods and, contrary to intuition, we find that various open-source watermarks can coexist with only minor impacts on image quality and decoding robustness. The coexistence of watermarks also opens the avenue for ensembling watermarking methods. We show how ensembling can increase the overall message capacity and enable new trade-offs between capacity, accuracy, robustness and image quality, without needing to retrain the base models. | [
"watermarking",
"watermark",
"ensembles",
"content provenance"
] | Accept | https://openreview.net/pdf?id=jUokwqdemZ | https://openreview.net/forum?id=jUokwqdemZ | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"ohPJuQ5qC7"
],
"note_type": [
"decision"
],
"note_created": [
1741250135346
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
ik00ZDnB3B | A Watermark for Black-Box Language Models | [
"Dara Bahri",
"John Frederick Wieting"
] | Watermarking has recently emerged as an effective strategy for detecting the outputs of large language models (LLMs). Most existing schemes require \emph{white-box} access to the model's next-token probability distribution, which is typically not accessible to downstream users of an LLM API. In this work, we propose a principled watermarking scheme that requires only the ability to sample sequences from the LLM (i.e. \emph{black-box} access), boasts a \emph{distortion-free} property, and can be chained or nested using multiple secret keys. We provide performance guarantees, demonstrate how it can be leveraged when white-box access is available, and show when it can outperform existing white-box schemes via comprehensive experiments. | [
"watermarking",
"AI-text detection",
"black-box"
] | Accept | https://openreview.net/pdf?id=ik00ZDnB3B | https://openreview.net/forum?id=ik00ZDnB3B | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"lb0SGh1qnZ"
],
"note_type": [
"decision"
],
"note_created": [
1741250134994
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
gtXbVRMwQh | Are Watermarks For Diffusion Models Radioactive? | [
"Jan Dubiński",
"Michel Meintz",
"Franziska Boenisch",
"Adam Dziedzic"
] | As generative artificial intelligence (AI) models become increasingly widespread, ensuring transparency and provenance in AI-generated content has become a critical challenge. Watermarking techniques have been proposed to embed imperceptible yet detectable signals in AI-generated images, enabling provenance tracking and copyright enforcement. However, a second party can repurpose images generated by an existing model to train their own diffusion model, potentially disregarding the ownership rights of the original model creator.
Recent research in language models has explored the concept of watermark \textit{radioactivity}, where embedded signals persist when training or fine-tuning a new model, enabling the detection of models trained on watermarked data. In this work, we investigate whether similar persistence occurs in diffusion models. Our findings reveal that none of the tested watermarking methods transfer their signal when used for fine-tuning a second model. This means that images generated by this new model exhibit detection results for the watermarks of the original model indistinguishable from random guessing. These results indicate that existing techniques are insufficient for ensuring watermark propagation through the model derivation chain and that novel approaches are needed to achieve effective and resilient watermark transfer in diffusion models. | [
"diffusion models",
"watermarking",
"radioactivity"
] | Accept | https://openreview.net/pdf?id=gtXbVRMwQh | https://openreview.net/forum?id=gtXbVRMwQh | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"zPpVCuGThw"
],
"note_type": [
"decision"
],
"note_created": [
1741250134881
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
f1YEcePhSs | IConMark: Robust Interpretable Concept-Based Watermark For AI Images | [
"Vinu Sankar Sadasivan",
"Mehrdad Saberi",
"Soheil Feizi"
] | With the rapid rise of generative AI and synthetic media, distinguishing AI-generated images from real ones has become crucial in safeguarding against misinformation and ensuring digital authenticity. Traditional watermarking techniques have shown vulnerabilities to adversarial attacks, undermining their effectiveness in the presence of attackers. We propose IConMark, a novel in-generation robust semantic watermarking method that embeds interpretable concepts into AI-generated images. Unlike traditional methods, which rely on adding noise or perturbations to AI-generated images, IConMark incorporates meaningful semantic attributes, making it interpretable to humans and, hence, resilient to adversarial manipulation. This method is not only robust against various image augmentations but also human-readable, enabling manual verification of watermarks. We present a detailed evaluation of IConMark's effectiveness, demonstrating its superiority in terms of detection accuracy and maintaining image quality. Moreover, IConMark can be combined with existing watermarking techniques to further enhance and complement its robustness. We introduce IConMark+SS, a hybrid approach combining IConMark with StegaStamp, to further bolster robustness against multiple types of image manipulations. | [
"AI-image watermarking",
"Interpretable",
"Robust",
"Semantic"
] | Accept | https://openreview.net/pdf?id=f1YEcePhSs | https://openreview.net/forum?id=f1YEcePhSs | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"fkHLJf6Thy"
],
"note_type": [
"decision"
],
"note_created": [
1741250134083
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |
erXPKrUsoD | Proactive Detection of Speaker Identity Manipulation with Neural Watermarking | [
"Wanying Ge",
"Xin Wang",
"Junichi Yamagishi"
] | We propose a neural network-based watermarking approach for defending against speaker identity manipulation attacks. Our method extracts a source speaker embedding from a carrier waveform and embeds it back into the waveform before transmission. After undergoing various channel transmissions and potential identity manipulation attacks, the receiver reconstructs the source speaker embedding from the extracted watermark and compares it with the embedding obtained from the received waveform to assess the likelihood of identity manipulation. Experimental results demonstrate the robustness of the proposed framework against multiple digital signal processing based transmissions and attacks. However, we observe that while neural codec algorithms have minimal impact on manipulating speaker identity, they significantly degrade watermark detection accuracy, leading to failures in detecting identity manipulation. | [
"Neural watermarking",
"speaker embedding",
"speaker identity manipulation",
"speech security"
] | Accept | https://openreview.net/pdf?id=erXPKrUsoD | https://openreview.net/forum?id=erXPKrUsoD | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"MNbKPnBT6G"
],
"note_type": [
"decision"
],
"note_created": [
1741250134036
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |
dTlCMiEOdk | Learning to watermark LLM-Generated Text Via Reinforcement Learning | [
"Xiaojun Xu",
"Yuanshun Yao",
"Yang Liu"
] | We study how to watermark LLM outputs, i.e. embedding algorithmically detectable signals into LLM-generated text to track misuse. Unlike the current mainstream methods that work with a fixed LLM, we expand the watermark design space by including the LLM tuning stage in the watermark pipeline. We propose a co-training framework based on reinforcement learning that iteratively (1) trains a detector to detect the generated watermarked text and (2) tunes the LLM to generate text easily detectable by the detector while keeping its normal utility. We empirically show that our watermarks are more accurate, robust, and adaptable (to new attacks) with no generation overhead. It also allows watermarked model open-sourcing. In addition, if used together with alignment, the extra overhead introduced is low -- we only need to train an extra reward model (i.e. our detector). We hope our work can bring more effort into studying a broader watermark design that is not limited to working with LLMs with unchanged model weights. | [
"LLM Watermark"
] | Accept | https://openreview.net/pdf?id=dTlCMiEOdk | https://openreview.net/forum?id=dTlCMiEOdk | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"iASgXDpZim"
],
"note_type": [
"decision"
],
"note_created": [
1741250135443
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
d1v2hkB0Vj | Can you Finetune your Binoculars? Embedding Text Watermarks into the Weights of Large Language Models | [
"Fay Elhassan",
"Niccolò Ajroldi",
"Antonio Orvieto",
"Jonas Geiping"
] | The indistinguishability of AI-generated content from human text raises challenges in transparency and accountability. While several methods exist to watermark models behind APIs, embedding watermark strategies directly into model weights that are later reflected in the outputs of the model is challenging.
In this study, we propose a strategy to finetune a pair of low-rank adapters of a model, one serving as the text-generating model and the other as the detector, so that a subtle watermark is embedded into the text generated by the first model and simultaneously optimized for detectability by the second. In this way, the watermarking strategy is fully learned end-to-end. This process imposes an optimization challenge, as balancing watermark robustness, naturalness, and task performance requires trade-offs. We discuss strategies for optimizing this min-max objective and present results showing the effect of this modification on instruction finetuning. | [
"Watermarking",
"Binocular Score",
"Large Language Models",
"Low-Rank Adaptation (LoRA)"
] | Accept | https://openreview.net/pdf?id=d1v2hkB0Vj | https://openreview.net/forum?id=d1v2hkB0Vj | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"CKTWGN45LV"
],
"note_type": [
"decision"
],
"note_created": [
1741250133775
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |