forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---|
hDzVxEUN5C | How Effective Are AI Models in Translating English Scientific Texts to Nigerian Pidgin: A Low-resource Language? | [
"Flora Oladipupo",
"Anthony Soronnadi",
"Ife Adebara",
"Olubayo Adekanmbi"
] | This research explores the challenges and limitations of applying deep learning models to the translation of scientific texts from English to Nigerian Pidgin, a widely spoken but low-resource language in West Africa. Despite advancements in machine translation, translating domain-specific content such as biological research papers presents unique obstacles, including data scarcity, linguistic complexity, and model generalization issues. We investigate the performance of AI models, including Pidgin-UNMT, the mt5-base model, AfriTeVa base, the Afri-mt5 base model, and the GPT 4.0 model, through a comparative analysis using BLEU, CHRF, TER, and AfriCOMET metrics on a newly created Eng-PidginBioData dataset of biological texts. Our findings reveal significant gaps in model performance, emphasizing the need for more domain-specific fine-tuning, improved dataset creation, and collaboration with native speakers to enhance translation accuracy. By presenting real-world challenges encountered in applying deep learning to low-resource languages, this research suggests strategies to overcome these barriers. Our study provides valuable insights into the persistent challenges faced by AI-driven translation systems, from limited data to domain mismatches, and highlights ways to enhance their effectiveness for underrepresented languages. By addressing these constraints, we offer actionable strategies for more inclusive and impactful scientific knowledge dissemination. | [
"Machine translation",
"Nigerian Pidgin",
"Scientific Texts"
] | Accept | https://openreview.net/pdf?id=hDzVxEUN5C | https://openreview.net/forum?id=hDzVxEUN5C | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"hr5peHCarc"
],
"note_type": [
"decision"
],
"note_created": [
1741192696336
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
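In each record above, the `reviews` column stores its note content as an escaped JSON string in the `structured_content_str` field. A minimal sketch of recovering the decision from such a row, assuming Python's standard `json` module (the literal string is copied from the first record; field names follow the schema shown above):

```python
import json

# A row's review metadata, as it appears in the table above.
row = {
    "note_id": ["hr5peHCarc"],
    "note_type": ["decision"],
    "structured_content_str": [
        "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
    ],
}

# Each entry of structured_content_str is itself a JSON document;
# decode it to get the decision note as a dictionary.
note = json.loads(row["structured_content_str"][0])
print(note["decision"])  # Accept
```

The same pattern applies to every record, since all rows in this dump carry a single decision note per paper.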
gGM8zxxFnS | Impact of Task Phrasing on Presumptions in Large Language Models | [
"Kenneth J. K. Ong"
] | Concerns with the safety and reliability of applying large-language models (LLMs) in unpredictable real-world applications motivate this study, which examines how task phrasing can lead to presumptions in LLMs, making it difficult for them to adapt when the task deviates from these assumptions. We investigated the impact of these presumptions on the performance of LLMs using the iterated prisoner's dilemma as a case study. Our experiments reveal that LLMs are susceptible to presumptions when making decisions, even with reasoning steps. However, when the task phrasing was neutral, the models demonstrated logical reasoning without many presumptions. These findings highlight the importance of proper task phrasing to reduce the risk of presumptions in LLMs. | [
"AI Agents",
"Large Language Models",
"Decision Making",
"Bias and Presumptions",
"Reasoning"
] | Accept | https://openreview.net/pdf?id=gGM8zxxFnS | https://openreview.net/forum?id=gGM8zxxFnS | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"yOn3g52rhE"
],
"note_type": [
"decision"
],
"note_created": [
1741192625308
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
fQxYC3WOYb | Data Mixing can Induce Phase Transitions in Knowledge Acquisition | [
"Xinran Gu",
"Kaifeng Lyu",
"Jiazheng Li",
"Jingzhao Zhang"
] | Large Language Models (LLMs) are typically trained on data mixtures: most data come from web scrapes, while a small portion is curated from high-quality sources with dense domain-specific knowledge. In this paper, we show that when training LLMs on such data mixtures, knowledge acquisition from knowledge-dense datasets does not always follow a smooth scaling law but can exhibit phase transitions with respect to the mixing ratio and model size. First, through controlled experiments on a synthetic biography dataset mixed with web-scraped data, we demonstrate that: (1) as we increase the model size to a critical value, the model suddenly transitions from memorizing very few to most of the biographies; (2) below a critical mixing ratio, the model memorizes almost nothing even with extensive training, but beyond this threshold, it rapidly memorizes more biographies. We then adopt an information-theoretic perspective to understand and characterize the existence and value of the thresholds. Based on these insights, we identify two mitigation strategies that improve the efficiency of knowledge acquisition from knowledge-dense datasets, and validate their effectiveness on both synthetic and real-world Wikipedia datasets. | [
"memorization",
"scaling laws",
"large language models"
] | Accept | https://openreview.net/pdf?id=fQxYC3WOYb | https://openreview.net/forum?id=fQxYC3WOYb | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"zKN3z0TUM8"
],
"note_type": [
"decision"
],
"note_created": [
1741192570842
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
dVzxB1isCQ | From Fog to Failure: The Unintended Consequences of Dehazing on Object Detection in Clear Images | [
"Ashutosh Kumar",
"Aman Chadha"
] | This study explores the challenges of integrating human visual cue-based dehazing into object detection, given the selective nature of human perception. While human vision adapts dynamically to environmental conditions, computational dehazing does not always enhance detection uniformly. We propose a multi-stage framework where a lightweight detector identifies regions of interest (RoIs), which are then improved via spatial attention-based dehazing before final detection by a heavier model. Though effective in foggy conditions, this approach unexpectedly degrades the performance on clear images. We analyze this phenomenon, investigate possible causes, and offer insights for designing hybrid pipelines that balance enhancement and detection. Our findings highlight the need for selective preprocessing and challenge assumptions about universal benefits from cascading transformations. | [
"dehazing",
"deep learning",
"bio-inspired",
"object detection"
] | Accept | https://openreview.net/pdf?id=dVzxB1isCQ | https://openreview.net/forum?id=dVzxB1isCQ | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"FXY5szPRyW"
],
"note_type": [
"decision"
],
"note_created": [
1741192661952
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ccOqMvAdav | Do We Need Too Much Attention? A Time Series Perspective | [] | The present work proposes a method for time series prediction with applications across domains, such as optimal crop timing in agriculture, stock market forecasting, and e-commerce. Studies suggest that with slight modification, Large Language Models (LLMs) can be adapted for time series prediction. In the telecom sector, this approach could enable significant energy conservation during network operations. In this work, various models are evaluated for this purpose and their performances compared, including traditional Machine Learning and Deep Learning methods such as ARIMA, RNNs, and LSTMs. More recent LLM-based models were also explored, such as Chronos and PatchTST, the latter of which utilizes fewer attention layers than Chronos. It was surprising to observe that among these models, PatchTST achieved the best performance, but only after fine-tuning. While Chronos is designed for zero-shot forecasting and captures some intricate temporal dependencies, PatchTST’s multiscale input helps the model understand macro- and micro-level trends and therefore might help it perform better than other methods. The results seem to indicate that effective forecasting could be achieved with fewer attention layers when supported by well-engineered input contextual representations. | [
"Time Series",
"Large Language Models",
"Energy Optimization",
"Wireless Networks",
"6G"
] | Reject | https://openreview.net/pdf?id=ccOqMvAdav | https://openreview.net/forum?id=ccOqMvAdav | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"D9l1wdFmjX"
],
"note_type": [
"decision"
],
"note_created": [
1741192568076
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
cTckUeh0Sw | On the Power of Heuristics in Temporal Graphs | [
"Filip Cornell",
"Oleg Smirnov",
"Gabriela Zarzar Gandler",
"Lele Cao"
] | Dynamic graph datasets often exhibit strong temporal patterns, such as recency, which prioritizes recent interactions, and popularity, which favors frequently occurring nodes. We demonstrate that simple heuristics leveraging only these patterns can perform on par or outperform state-of-the-art neural network models under standard evaluation protocols. To further explore these dynamics, we introduce metrics that quantify the impact of recency and popularity across datasets. Our experiments on BenchTemp and the Temporal Graph Benchmark show that our approaches achieve state-of-the-art performance across all datasets in the latter and secure top ranks on multiple datasets in the former. These results emphasize the importance of refined evaluation schemes to enable fair comparisons and promote the development of more robust temporal graph models. Additionally, they reveal that current deep learning methods often struggle to capture the key patterns underlying predictions in real-world temporal graphs. For reproducibility, we have made our code publicly available. | [
"temporal graphs",
"graph neural networks",
"dynamic link prediction",
"heuristic algorithms"
] | Accept | https://openreview.net/pdf?id=cTckUeh0Sw | https://openreview.net/forum?id=cTckUeh0Sw | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"WOKZ2tm2G8"
],
"note_type": [
"decision"
],
"note_created": [
1741192701136
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
bpBDx2OprB | Reassessing the Utility of Topology Based Losses for Image Segmentation | [] | Image segmentation is an important and widely performed task in computer vision. Accomplishing effective image segmentation in diverse settings often requires custom model architectures and loss functions. One family of approaches specialized for segmenting thin tubular structures relies on topology-preservation-based loss functions. These approaches often utilize a pixel skeletonization process claimed to generate more precise segmentation masks of thin tubes and to better capture structures that other models often miss. One such method, Skeleton Recall Loss (SRL), proposed by Kirchhoff et al. (2024), was stated to produce state-of-the-art results on benchmark tubular datasets. In this work, we tested the validity of the SRL loss using two approaches: empirical and theoretical. Upon comparing the performance of the proposed method on some of the tubular datasets (used in the original work, along with some additional datasets), we found that the performance of SRL-based segmentation models did not exceed that of traditional baseline models. We then examine and provide a theoretical explanation as to why losses based on topology-based enhancements (including SRL) fail to fulfill their objective. | [
"Segmentation",
"Thin-tubular structures",
"Skeleton Recall Loss"
] | Reject | https://openreview.net/pdf?id=bpBDx2OprB | https://openreview.net/forum?id=bpBDx2OprB | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"9yZ2rQVfUs"
],
"note_type": [
"decision"
],
"note_created": [
1741192607188
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
an0UOai3A3 | Performance of Zero-Shot Time Series Foundation Models on Cloud Data | [
"William Toner",
"Thomas L Lee",
"Artjom Joosen",
"Rajkarn Singh",
"Martin Asenov"
] | Time series foundation models (FMs) have emerged as a popular paradigm for zero-shot multi-domain forecasting. FMs are trained on numerous diverse datasets and claim to be effective forecasters across multiple different time series domains, including cloud data. In this work we investigate this claim, exploring the effectiveness of FMs on *cloud data*. We demonstrate that many well-known FMs fail to generate meaningful or accurate zero-shot forecasts in this setting. We support this claim empirically, showing that FMs are outperformed consistently by simple linear baselines. We also illustrate a number of interesting pathologies, including instances where FMs suddenly output seemingly erratic, random-looking forecasts. Our results suggest a widespread failure of FMs to model cloud data. | [
"Time Series",
"Foundation Models",
"Zero-shot",
"Cloud"
] | Accept | https://openreview.net/pdf?id=an0UOai3A3 | https://openreview.net/forum?id=an0UOai3A3 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"nE0XaP5j2g"
],
"note_type": [
"decision"
],
"note_created": [
1741192411081
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ZwGpFCUvoQ | On the Limitations of Neural Networks for Option Pricing: Analysis of Volatility Regime Sensitivity | [
"Tarun Raheja",
"Nilay Pochhi"
] | Recent work demonstrates neural networks' theoretical ability to approximate option pricing functions, but empirical evidence regarding robustness to market regime shifts remains limited. Motivated by practical scenarios where the classical deterministic Black-Scholes equation becomes computationally challenging in high-dimensional settings or under complex market conditions, we examine neural network performance during volatility regime transitions. Models trained on low-volatility regimes ($\sigma=0.2$) show significant errors under higher volatility ($\sigma=0.3$). We provide detailed theoretical and empirical analyses indicating that these errors reflect fundamental representational limits of current architectures rather than optimization issues. | [
"Option Pricing",
"Neural Networks",
"Volatility Regimes",
"Distribution Shift",
"Financial Machine Learning",
"Model Robustness",
"Black-Scholes Approximation",
"Deep Learning Limitations"
] | Accept | https://openreview.net/pdf?id=ZwGpFCUvoQ | https://openreview.net/forum?id=ZwGpFCUvoQ | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"KbyuByImaZ"
],
"note_type": [
"decision"
],
"note_created": [
1741192511914
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
WpYdiLd5Fm | On the Role of Structure in Hierarchical Graph Neural Networks | [
"Luca Sbicego",
"Sevda Öğüt",
"Manuel Madeira",
"Yiming QIN",
"Dorina Thanou",
"Pascal Frossard"
] | Hierarchical Graph Neural Networks (GNNs) integrate pooling layers to generate graph representations by progressively coarsening graphs. These GNNs are provably more expressive than traditional GNNs that solely rely on message passing. While prior work shows that hierarchical architectures do not exhibit empirical performance gains, these findings are based on small datasets where structure-unaware baselines often perform well, limiting their generalizability. In this work, we comprehensively investigate the role of graph structure in pooling-based GNNs. Our analysis includes: (1) reproducing previous studies on larger, more diverse datasets, (2) assessing the robustness of different architectures to structural perturbations of the graphs at varying depths of the network layers, and (3) comparing against structure-agnostic baselines. Our results confirm previous findings and demonstrate that they hold across newly tested datasets, even when graph structure is meaningful for the task. Interestingly, we observe that hierarchical GNNs exhibit improved performance recovery to structural perturbations compared to their flat counterparts. These findings highlight both the potential and limitations of pooling-based GNNs, motivating the need for more structure-sensitive benchmarks and evaluation frameworks. | [
"Graph Neural Networks",
"Graph Pooling"
] | Accept | https://openreview.net/pdf?id=WpYdiLd5Fm | https://openreview.net/forum?id=WpYdiLd5Fm | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"rQkkEfWJeU"
],
"note_type": [
"decision"
],
"note_created": [
1741192610013
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
TKydQh6koc | Rethinking Evaluation for Temporal Link Prediction through Counterfactual Analysis | [
"Aniq Ur Rahman",
"Alexander Modell",
"Justin Coon"
] | In response to critiques of existing evaluation methods for temporal link prediction (TLP) models, we propose a novel approach to verify if these models truly capture temporal patterns in the data. Our method involves a sanity check formulated as a counterfactual question: ``What if a TLP model is tested on a temporally distorted version of the data instead of the real data?'' Ideally, a TLP model that effectively learns temporal patterns should perform worse on temporally distorted data compared to real data. We analyse this hypothesis and introduce two temporal distortion techniques to assess six well-known TLP models. | [
"temporal link prediction",
"graph learning"
] | Accept | https://openreview.net/pdf?id=TKydQh6koc | https://openreview.net/forum?id=TKydQh6koc | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"ogwnjS71b9"
],
"note_type": [
"decision"
],
"note_created": [
1741192481284
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ScSW6vmdlO | Lost-in-distance: Impact of Contextual Proximity on LLM Performance in Graph Tasks | [
"Hamed Firooz",
"Maziar Sanjabi",
"Wenlong Jiang",
"Xiaoling Zhai"
] | Large Language Models (LLMs) exhibit blind spots that impair their ability to retrieve and process relevant contextual data effectively. We demonstrate that LLM performance in graph tasks with complexities beyond the "needle-in-a-haystack" scenario—where solving the problem requires cross-referencing and reasoning across multiple subproblems *jointly*—is influenced by the proximity of relevant information within the context, a phenomenon we term "lost-in-distance". We examine two fundamental graph tasks: identifying common connections between two nodes and assessing similarity among three nodes, and show that the model's performance in these tasks significantly depends on the relative positioning of common edges. We evaluate three publicly available LLMs using various graph encoding techniques that represent graph structures for LLM input. Results indicate that model accuracy can decline by up to 6x as the distance between node connections increases, independent of graph encoding and model size. | [
"Large Language Models",
"Graph Tasks",
"Long Context"
] | Accept | https://openreview.net/pdf?id=ScSW6vmdlO | https://openreview.net/forum?id=ScSW6vmdlO | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"3Reme1cHOP"
],
"note_type": [
"decision"
],
"note_created": [
1741192524925
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
SKZuWQOHTh | An Integrated YOLO and VLM System for Fire Detection in Enclosed Environments | [
"Joanne Kim",
"Yejin Lee",
"DongSik Yoon",
"Chansung Jung",
"Gunhee Lee"
] | While YOLO models show promise in car fire detection, they remain insufficient for real-world deployment in confined parking environments due to dataset limitations, evaluation gaps, and deployment constraints. We first fine-tune YOLO on a fire/smoke-augmented dataset, but analysis reveals its struggles with ambiguous fire-smoke boundaries, leading to false predictions. To address this, we propose a real-time end-to-end framework integrating YOLOv8s with Florence2 VLM, combining object detection with contextual reasoning. While YOLOv8s with VLM improves detection reliability, challenges are still ongoing. Our findings highlight YOLO’s limitations in fire detection and the need for a more adaptive, environment-aware approach. | [
"car park fire detection",
"yolo model",
"Vision-Language-Model",
"End-to-End Framework"
] | Accept | https://openreview.net/pdf?id=SKZuWQOHTh | https://openreview.net/forum?id=SKZuWQOHTh | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"2Hj8FP3RA8"
],
"note_type": [
"decision"
],
"note_created": [
1741192674753
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ODY9UitugC | Not constructing Ramsey Graphs using Deep Reinforcement Learning | [
"David Berghaus"
] | We consider the problem of constructing Ramsey graphs using deep reinforcement learning. We introduce a novel permutation invariant architecture that combines ideas from GNNs with self-attention algorithms over the cliques, which shows promising results in a related regression task. To generate graphs, we train our model using established reinforcement learning algorithms such as PPO and A2C. Our results are however very poor compared to traditional local-search algorithms, indicating that this problem is not well-suited for neural networks yet. | [
"ramsey graphs",
"reinforcement learning",
"machine learning for mathematics",
"graph construction"
] | Accept | https://openreview.net/pdf?id=ODY9UitugC | https://openreview.net/forum?id=ODY9UitugC | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"G1sOoylrV5"
],
"note_type": [
"decision"
],
"note_created": [
1741192538996
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
N5n6SAfnU0 | Graph Networks Struggle With Variable Scale | [
"Christian Koke",
"Yuesong Shen",
"Abhishek Saroha",
"Marvin Eisenberger",
"Bastian Rieck",
"Michael M. Bronstein",
"Daniel Cremers"
] | Standard graph neural networks assign vastly different latent embeddings to graphs describing the same object at different resolution scales. This precludes consistency in applications and prevents generalization between scales as would fundamentally be needed e.g. in AI4Science. We uncover the underlying obstruction, investigate its origin and show how to overcome it by modifying the message passing paradigm. | [
"Generalization",
"(Resolution-)Scale",
"Graph Neural Networks"
] | Accept | https://openreview.net/pdf?id=N5n6SAfnU0 | https://openreview.net/forum?id=N5n6SAfnU0 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"NzmgUaHJtQ"
],
"note_type": [
"decision"
],
"note_created": [
1741192765359
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
LT3Av8WoPq | Deep learning on cloud | [] | Deep learning has shown promise in optimizing cloud resource management by enabling dynamic workload scheduling, auto-scaling, and cost-efficient operations. However, our real-world deployment of a deep reinforcement learning-based (DRL) scheduler for virtual machine (VM) allocation and scaling in a multi-cloud environment revealed unexpected failures. Despite extensive training on historical workload data, the model underperformed compared to rule-based heuristics due to distribution shifts, delayed feedback loops, and computational inefficiencies. This paper investigates the root causes of these failures, highlights key challenges in applying deep learning to cloud infrastructure, and provides actionable recommendations for improving robustness, scalability, and interpretability in real-world AI-driven cloud management systems. | [
"Deep Learning",
"Cloud Computing",
"Resource Management",
"Reinforcement Learning",
"Virtual Machine Allocation",
"Workload Optimization",
"Model Deployment",
"Distribution Shift",
"Scalability",
"Interpretability",
"Computational Efficiency",
"Auto-Scaling",
"Scheduling",
"Cost Optimization",
"Adaptive Learning"
] | Reject | https://openreview.net/pdf?id=LT3Av8WoPq | https://openreview.net/forum?id=LT3Av8WoPq | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"JE8qVFeREa"
],
"note_type": [
"decision"
],
"note_created": [
1741192566575
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
I45DDAWDwH | EXPLORING ADAPTIVE STRUCTURE LEARNING FOR HETEROPHILIC GRAPHS | [] | Graph Convolutional Networks (GCNs) gained traction for graph representation learning, with recent attention on improving performance on heterophilic graphs for various real-world applications. The localized feature aggregation in a typical message-passing paradigm hinders the capturing of long-range dependencies between non-local nodes of the same class. The inherent connectivity structure in heterophilic graphs often conflicts with information sharing between distant nodes of the same class. We propose structure learning to rewire edges in shallow GCNs themselves to avoid performance degradation in downstream discriminative tasks due to oversmoothing. Parameterizing the adjacency matrix to learn connections between non-local nodes and extend the hop span of shallow GCNs facilitates the capturing of long-range dependencies. However, our method is not generalizable across heterophilic graphs and performs inconsistently on the node classification task, contingent on the graph structure. | [
"Graph Machine Learning",
"Structure Learning",
"Geometric Deep Learning",
"Representation Learning"
] | Reject | https://openreview.net/pdf?id=I45DDAWDwH | https://openreview.net/forum?id=I45DDAWDwH | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"5dGGdGxeCO"
],
"note_type": [
"decision"
],
"note_created": [
1741192539822
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
HsjHGNYv2O | Are We Really Unlearning? The Presence of Residual Knowledge in Machine Unlearning | [
"Hsiang Hsu",
"Pradeep Niroula",
"Zichang He",
"Chun-Fu Chen"
] | Machine unlearning seeks to remove a set of forget samples from a pre-trained model to comply with emerging privacy regulations. While existing machine unlearning algorithms focus on effectiveness by either achieving indistinguishability from a re-trained model or closely matching its accuracy, they often overlook the vulnerability of unlearned models to slight perturbations of forget samples. In this paper, we identify a novel privacy vulnerability in unlearning, which we term residual knowledge. We find that even when an unlearned model no longer recognizes a forget sample---effectively removing direct knowledge of the sample---residual knowledge often persists in its vicinity, which a re-trained model does not recognize at all. Addressing residual knowledge should become a key consideration in the design of future unlearning algorithms. | [
"machine unlearning",
"residual knowledge",
"adversarial attacks"
] | Accept | https://openreview.net/pdf?id=HsjHGNYv2O | https://openreview.net/forum?id=HsjHGNYv2O | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"XL4Xl6WGcS"
],
"note_type": [
"decision"
],
"note_created": [
1741192674416
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
HfNI0Zg5b7 | Failure Modes of Time Series Interpretability Algorithms for Critical Care Applications | [] | Interpretability is a crucial aspect of deploying deep learning models in critical care, especially in constantly evolving conditions that influence patient survival. However, common interpretability algorithms face unique challenges when applied to dynamic prediction tasks, where patient trajectories evolve over time. Gradient, Occlusion, and Permutation-based methods often struggle with time-varying target dependency and temporal smoothness. This paper systematically analyzes these failure modes and supports learnable mask-based interpretability frameworks as alternatives, which can incorporate temporal continuity and label consistency constraints to learn feature importance over time. We argue that learnable mask-based approaches for dynamic time-series prediction problems provide more reliable and consistent interpretations for applications in critical care and similar domains. | [
"time series interpretability",
"critical care",
"deep learning",
"circulatory failure"
] | Reject | https://openreview.net/pdf?id=HfNI0Zg5b7 | https://openreview.net/forum?id=HfNI0Zg5b7 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"YNwEY01Y14"
],
"note_type": [
"decision"
],
"note_created": [
1741192619209
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
HHr30FMGxO | Challenges of Multi-Modal Coreset Selection for Depth Prediction | [
"Viktor Moskvoretskii",
"Narek Alvandian"
] | Coreset selection methods are effective in accelerating training and reducing memory requirements but remain largely unexplored in applied multimodal settings. We adapt a state-of-the-art (SoTA) coreset selection technique for multimodal data, focusing on the depth prediction task. Our experiments with embedding aggregation and dimensionality reduction approaches reveal the challenges of extending unimodal algorithms to multimodal scenarios, highlighting the need for specialized methods to better capture inter-modal relationships. | [
"Multimodal",
"Coreset selection",
"Depth prediction"
] | Accept | https://openreview.net/pdf?id=HHr30FMGxO | https://openreview.net/forum?id=HHr30FMGxO | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"MhNNgIizhk"
],
"note_type": [
"decision"
],
"note_created": [
1741192601720
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
7JHHL11TDE | Fantastic Allosteric Binding Sites and Why Deep Learning Cannot Find Them | [
"Dhvani S. Vora",
"Shashank Yadav"
] | The discovery of druggable and structurally distinct allosteric sites across various protein classes has introduced new avenues for small molecules to modulate protein activity and, hence, cellular functions. Ligands that target allosteric sites may provide advantages like enhanced selectivity and often exhibit the possibility of targeting existing drug-resistant mutations. However, recent deep learning approaches show limited effectiveness in predicting allosteric sites, as demonstrated in the present study. We compare the performance of two deep learning methods, PUResNetV2.0 and VNEGNN, with Fpocket, a traditional geometry-based method and P2Rank, a geometry and machine learning ensemble approach. | [
"Deep Learning",
"Allosteric",
"Orthosteric",
"Ligand Binding Site",
"Prediction"
] | Accept | https://openreview.net/pdf?id=7JHHL11TDE | https://openreview.net/forum?id=7JHHL11TDE | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"vVTnzvn8LV"
],
"note_type": [
"decision"
],
"note_created": [
1741192634094
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
723nBHZffD | In Search of Forgotten Domain Generalization | [
"Prasanna Mayilvahanan",
"Roland S. Zimmermann",
"Thaddäus Wiedemer",
"Evgenia Rusak",
"Attila Juhos",
"Matthias Bethge",
"Wieland Brendel"
] | Out-of-Domain (OOD) generalization is the ability of a model trained on one or more domains to generalize to unseen domains. In the ImageNet era of computer vision, evaluation sets for measuring a model's OOD performance were designed to be strictly OOD with respect to style. However, the emergence of foundation models and expansive web-scale datasets has obfuscated this evaluation process, as datasets cover a broad range of domains and risk test domain contamination. In search of the forgotten domain generalization, we create large-scale datasets subsampled from LAION---LAION-Natural and LAION-Rendition---that are strictly OOD to corresponding ImageNet and DomainNet test sets in terms of style. Training CLIP models on these datasets reveals that a significant portion of their performance is explained by in-domain examples. This indicates that the OOD generalization challenges from the ImageNet era still prevail and that training on web-scale data merely creates the illusion of OOD generalization. Furthermore, through a systematic exploration of combining natural and rendition datasets in varying proportions, we identify optimal mixing ratios for model generalization across these domains. Our datasets and results re-enable meaningful assessment of OOD robustness at scale---a crucial prerequisite for improving model robustness. | [
"OOD generalization",
"CLIP",
"Robustness"
] | Accept | https://openreview.net/pdf?id=723nBHZffD | https://openreview.net/forum?id=723nBHZffD | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"KWHyU48Myf"
],
"note_type": [
"decision"
],
"note_created": [
1741192589006
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
6BCO7bqnjO | Bridging the Language Gap: Evaluating Machine Translation for Animal Health in Low-Resource Settings | [
"Godwin Adegbehingbe",
"Anthony Soronnadi",
"Ife Adebara",
"Olubayo Adekanmbi"
] | Machine translation (MT) has made significant progress in high-resource languages, but translating technical texts into low-resource languages remains an open challenge. This study investigates the ability of state-of-the-art multilingual models to translate animal health reports from English to Yoruba, a crucial task for enhancing veterinary communication in underserved regions. Although previous research has explored low-resource MT, domain-specific translation for animal health has been largely overlooked. Using a curated dataset of 1,468 parallel sentences, we evaluated several MT models in zero-shot and fine-tuned settings. Despite the promise of multilingual models, we find substantial limitations in their ability to generalize to this domain, raising concerns about their applicability in specialized, low-resource contexts. We analyze potential causes, including vocabulary mismatch, training data scarcity, and constraints of model architecture. Our
findings highlight the need for more targeted approaches to low-resource domain-specific MT and emphasize the broader implications for AI deployment in real-world applications. | [
"machine translation",
"animal health",
"domain adaptation"
] | Accept | https://openreview.net/pdf?id=6BCO7bqnjO | https://openreview.net/forum?id=6BCO7bqnjO | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"96MqcnT3I4"
],
"note_type": [
"decision"
],
"note_created": [
1741192494357
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
14BmcWqqXC | Tiny Expert? Architectural Optimization for Resource-Constrained Domain Tasks | [] | Recent advances in large language models have led to increased adoption across specialized domains, but their effectiveness on tasks with limited training data remains unclear. We investigate this question through bias detection in medical curriculum text, comparing models ranging from DistilBERT (67M parameters) to Llama-3.2 (1.2B parameters) using both sequence classification and causal language modeling approaches. Our findings challenge conventional assumptions about model scaling: while the instruction-tuned Llama achieved the strongest screening performance (AUC: 0.7904, F2: 0.5760), architectural choices proved more critical than model size. DistilBERT demonstrated competitive performance through targeted architectural choices, achieving the second-highest AUC (0.8857) despite its smaller size. These results suggest that for specialized classification tasks with limited training data, architectural alignment and instruction tuning may be more crucial than increased model capacity. Our work provides practical insights for deploying language models in domain-specific applications where expert annotation is expensive and dataset size is necessarily limited. | [
"ML Architecture",
"SME",
"Domain Expertise"
] | Reject | https://openreview.net/pdf?id=14BmcWqqXC | https://openreview.net/forum?id=14BmcWqqXC | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"NCoHg7xNlj"
],
"note_type": [
"decision"
],
"note_created": [
1741192466372
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
00CGdb6CSh | ViT Registers and Fractal ViT | [] | Drawing inspiration from recent findings including surprisingly decent performance of transformers without positional encoding (NoPE) in the domain of language models and how registers (additional throwaway tokens not tied to input) may improve the performance of large vision transformers (ViTs), we invent and test a variant of ViT called fractal ViT that breaks permutation invariance among the tokens by applying an attention mask between the regular tokens and "summary tokens" similar to registers, in isolation or in combination with various positional encodings. These models do not improve upon the baseline performance, highlighting the fact that these findings may be scale, domain, or application-specific. | [
"ViT",
"positional encoding",
"ImageNet",
"registers"
] | Reject | https://openreview.net/pdf?id=00CGdb6CSh | https://openreview.net/forum?id=00CGdb6CSh | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"ZfMD7x7qog"
],
"note_type": [
"decision"
],
"note_created": [
1741192656194
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
zEWqLMReyr | Collective Model Intelligence Requires Compatible Specialization | [
"Jyothish Pari",
"Samy Jelassi",
"Pulkit Agrawal"
] | In this work, we explore the limitations of combining models by averaging intermediate features, referred to as $\textit{model merging}$, and propose a new direction for achieving collective model intelligence through what we call $\textit{compatible specialization}$. Current methods for model merging, such as parameter and feature averaging, struggle to effectively combine specialized models due to representational divergence during fine-tuning. As models specialize to their individual domains, their internal feature representations become increasingly incompatible, leading to poor performance when attempting to merge them for new tasks. We analyze this phenomenon using centered kernel alignment (CKA) and show that as models specialize, the similarity in their feature space structure diminishes, hindering their capacity for collective use. To address these challenges, we investigate routing-based merging strategies, which offer more flexible methods for combining specialized models by dynamically routing across different layers. This allows us to improve on existing methods by combining features from multiple layers rather than relying on fixed, layer-wise combinations. However, we find that these approaches still face limitations when layers within models are representationally incompatible. Our findings highlight the importance of designing new approaches for model merging that operate on well-defined input and output spaces, similar to how humans communicate through language rather than intermediate neural activations. | [
"MoE",
"Model Merging",
"CKA"
] | Accept | https://openreview.net/pdf?id=zEWqLMReyr | https://openreview.net/forum?id=zEWqLMReyr | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"ykoudoKfPd",
"rEIkfusnGa",
"lm79OBrKTG",
"T4aTVM55eu"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226299008,
1740724742514,
1740611785380,
1740809901499
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission10/Reviewer_ZKPw"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission10/Reviewer_P1KA"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission10/Reviewer_1Loe"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper argues that specialized models develop incompatible internal representations leading to difficulties in merging specialized models to achieve collective intelligence. Most of the reviewers liked the paper, found it relevant to the workshop, and recommended acceptance. We suggest the authors incorporate the comments of the reviewers to further strengthen the paper. Overall, we recommend accepting this work to the workshop.\"}",
"{\"summary\": \"This paper explores compatible specialization as a fundamental prerequisite for successfully merging models fine-tuned on distinct tasks. The authors argue that parameter or feature-based averaging is inherently limited due to increasing representational divergence as specialization progresses. Specifically, models fine-tuned on disparate domains (e.g., mathematics and programming) develop distinct internal feature structures, rendering direct interpolation suboptimal. To mitigate this, the authors propose a mixture-of-experts (MoE) framework that leverages routing mechanisms to dynamically select and integrate specialized layers. While this approach demonstrates improvements over simple averaging in controlled settings, it remains constrained by the extent of representational dissimilarity among specialized layers. Through empirical analysis on both in-domain and cross-domain tasks, as well as Centered Kernel Alignment (CKA) evaluations, the study illustrates the challenges of model mergeability and posits that future research should explore architectural innovations that facilitate structured model communication rather than relying on direct feature alignment.\\n\\nThis study contributes to the growing discourse on model fusion, highlighting the limitations of traditional parameter aggregation techniques and advocating for alternative strategies that emphasize structured knowledge transfer. 
The findings suggest that future advancements in model merging should prioritize compatibility mechanisms over naive averaging methods, leveraging explicit pathways for model interaction.\", \"strengths_and_weaknesses\": [\"### Strengths\", \"Relevant Topic: The paper addresses a key challenge in model merging, a growing concern as pre-trained models become more widespread.\", \"Clear Motivation: The study convincingly argues for routing-based merging over naive feature interpolation.\", \"Empirical Support: The experiments demonstrate that simple feature averaging is insufficient for specialized models, reinforcing the need for improved merging techniques.\", \"### Weaknesses\", \"Abstract Definition of Compatibility: The concept of compatible specialization is important but lacks precise formalization.\", \"Limited Baseline Comparisons: The paper does not benchmark against more advanced merging methods, reducing its comparative insight.\", \"Performance Trade-offs: Even with routing, the merged models do not always outperform fine-tuning, highlighting unresolved challenges.\"], \"suggestions\": [\"Discuss Specialization vs. Mergeability Trade-offs: The paper underscores the challenge of merging highly specialized models, but further discussion on the trade-offs between preserving specialization and ensuring mergeability would be valuable. Addressing whether these trade-offs can be systematically optimized would be particularly beneficial.\", \"Expand Baseline Comparisons: The current baselines primarily focus on feature and parameter averaging. Comparing the proposed approach against alternative merging techniques such as linear mode connectivity or ensemble distillation would better contextualize its performance.\", \"Analyze Computational Overheads: While routing provides more flexibility in merging, it introduces increased computational complexity. 
A discussion on the trade-offs between performance gains and efficiency costs would aid practitioners in evaluating the feasibility of routing-based merging.\"], \"reason_for_giving_a_higher_score\": [\"Timely Topic: Model merging and collaborative intelligence are active areas of research, and the paper addresses a real challenge: how to combine specialized models effectively.\", \"Useful Empirical Findings: The experiments demonstrate that naive feature interpolation has inherent limitations, and routing improves performance in several cases. These insights could spark new ideas for future model-merging methods.\"], \"reason_for_giving_a_lower_score\": [\"Lack of Robust Formalism: While \\u201ccompatible specialization\\u201d is introduced, it remains somewhat abstract. A more rigorous framework or metric for what compatibility entails would strengthen the paper\\u2019s impact.\", \"Limited Exploration of Alternatives: The paper largely compares routing methods to basic averaging without broader baselines, making it hard to position this work among other existing or emerging techniques for merging or modularizing specialized models.\", \"Performance Gap: Even with routing, the merged models typically underperform standard fine-tuning for new tasks. That limitation, though well-disclosed, might temper excitement about practical utility in real-world multi-task scenarios.\"], \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"This paper explores the limitations of model merging through parameter and feature averaging, identifying representational divergence as a major barrier to achieving collective model intelligence. The authors introduce the concept of compatible specialization, arguing that models must not only be specialized but also maintain compatibility in feature representations to be effectively merged. Using centered kernel alignment (CKA), they show there's a critical point during fine-tuning where model representations become too divergent for effective merging.\\n\\nThe authors explore routing-based merging strategies as a more flexible alternative to simple parameter averaging. They test several approaches with increasing degrees of freedom, from simple interpolation strategies to more complex routing methods. While more complex routing strategies demonstrate improved performance, they still underperform direct fine-tuning. The authors suggest shifting from feature-space merging to input-output space routing, similar to how software libraries are composed, to achieve true collective model intelligence.\", \"strengths_and_weaknesses\": [\"**Strengths:**\", \"The paper introduces \\\"compatible specialization\\\" a key challenge in model merging, highlighting the trade-off between specialization and representation compatibility and its effect on the performance of model merging.\", \"The use of CKA to analyze representational similarity provides good evidence of why merging becomes difficult as models specialize. The identification of a critical threshold (t) where merging fails is valuable. The comparisons between interpolation-based and routing-based merging provide useful insights.\", \"The authors extend standard MoE routing to allow experts to be reused across different layers, which is a novel contribution. 
Results suggest that increasing routing complexity improves performance, though with diminishing returns.\", \"The paper suggests promising future directions for achieving compatible specialization through input-output space routing instead of feature-space merging.\", \"**Weaknesses:**\", \"Limited Evaluation Scope: The evaluation is restricted to math and coding domains. It would be great to extend it to more diverse domains, languages, or even modalities to make a more generalizable conclusion. The definition of cross-domain tasks is unclear. It would be great to see the train and validation data examples. It is not evident how merging performs on each task separately rather than in a combined setting. The study only considers GPT-2, raising concerns about generalizability to other architectures (e.g., LLaMA, T5, Mistral, etc.).\", \"Unfair Comparison Between Routing and Interpolation: The MoE routing approach requires training with adaptation data, while interpolation-based methods do not, making the comparison potentially unfair. When comparing finetuning with routing and merging, it is always important to consider the trade-off between memory+computation cost and performance.\", \"Missing Baseline Comparisons: The study only tests LERP, SLERP, and Activation interpolation, while several state-of-the-art (SoTA) merging techniques exist such as TIES-Merging, Dare-merging, DellaMerging, or Evolutionary Model Merging (parameter + data-space merging). Without these baselines, it is difficult to assess whether routing is better than model merging. Also, the performance of each merging technique is so sensitive to its parameter setting. 
It is not easy without the help of an appendix to understand the details.\", \"Lack of Clear Methodology: Understanding the finetuning details, pretraining or adaptation datasets, parameter settings, different routing, and merging techniques requires reading the appendix, whereas these details should be clear from the main text\"], \"suggestions\": [\"Expand task diversity: Test on a broader range of tasks, languages, or domains to strengthen generalizability claims. Even, try more than two domains for multi-domain settings. This might be insightful when comparing routing versus interpolation approaches as load-balancing issues might appear in more diverse task settings.\", \"Include larger models and more diverse model architectures: Evaluate whether the findings hold for other state-of-the-art models beyond GPT-2.\", \"Compare with additional baselines: Include comparisons with other recent model merging techniques. Also, show how you selected the merging parameters\", \"Must-Do: Refine the paper\\u2019s structure and figure orders. Some of the important information about the methodology is hidden in the Appendix and this significantly reduces the readability of this paper.\", \"Nice-to-Have: Develop prototype solution: Implement at least a preliminary approach for compatibility-aware model merging or input-output routing to demonstrate feasibility.\", \"Nice-to-Have: Extend to non-language models: Test whether the findings generalize to other model types like vision transformers or multimodal models.\", \"Nice-to-Have: Analyze attention mechanisms: Investigate whether attention layers show similar compatibility issues as MLP layers.\", \"**Minor Comments:**\", \"Page 3, line 117: It is import -> It is important\", \"Page 5, line 266: that is is -> that is\"], \"reason_for_giving_a_higher_score\": \"The paper provides a pretty good theoretical and empirical foundation for \\\"compatible specialization\\\" in model merging. 
The use of CKA analysis, the identification of a critical threshold for merging failure, and the insightful comparisons between interpolation-based and routing-based merging are valuable contributions. The novel extension of MoE routing enhances model flexibility, and the proposed future directions offer promising avenues for improving specialization compatibility.\", \"reason_for_giving_a_lower_score\": \"Most importantly, key methodological details are buried in the appendix, making the paper less readable. This paper requires major refinement and restructuring between the main text and the appendix. The evaluation is limited in scope, focusing only on math and coding and only on GPT-2 architecture, raising concerns about generalizability. As the paper is not focused on the solution but explores the compatible specialization issue, I expect an expansion of the experiments to collect more evidence.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper examines the difficulties of merging specialized models to achieve collective intelligence. It finds that simply averaging model features fails because specialized models develop incompatible internal representations. The authors analyze this divergence and test routing-based merging strategies but find that even these flexible methods have limitations. The paper highlights that effective model merging requires structured input-output communication rather than direct integration of neural activations. Experiments show that increased complexity in routing strategies does improve adaptation performance, but a plateau is eventually reached due to persistent representational incompatibility between layers. The authors suggest that future work should focus on enabling models to communicate more like humans---using language instead of merging internal representations---and propose approaches like RL-based routers and clear model descriptions to improve collaboration.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The paper highlights that current model merging methods face challenges in effectively combining specialized models because fine-tuning causes representational divergence. Using centered kernel alignment (CKA), the paper shows that as models become more specialized, their feature space similarity decreases.\", \"The paper explores routing-based merging strategies as a more flexible approach to combining specialized models by dynamically routing across different layers. It shows that increasing the complexity and capacity of model merging through routing results in performance gains.\", \"The paper emphasizes the need for new model merging approaches that operate within well-defined input and output spaces, akin to human communication through language. 
This shift moves away from merging internal representations and instead focuses on enabling models to exchange information effectively.\", \"The paper provides insights for future work by advocating a shift toward routing models in their input-output spaces, treating them as specialized functions within a shared space. It also suggests key design considerations for routers, including an RL-based approach to routing and the use of clear model functionality descriptions.\"], \"weaknesses\": [\"The paper discusses a trade-off between specialization and compatibility. It suggests that beyond a certain point, increased specialization can hinder a model's ability to merge effectively, resulting in diminishing returns. This suggests that achieving the right balance between specialization and the ability to merge models is a challenge that the paper highlights but does not fully resolve.\", \"The paper highlights the need for compatible specialization but offers no specific solution for achieving it. It primarily focuses on highlighting the shortcomings of current model merging techniques without providing a clear path forward or a practical implementation.\", \"The experiments primarily focus on specific tasks such as math, coding, and cross-domain adaptation using GPT-2 models. The generalizability of the findings to other tasks, models, and domains may be limited.\"], \"suggestions\": [\"The authors could explore regularization techniques during fine-tuning to encourage models to maintain a degree of representational similarity.\", \"The authors propose shifting from feature-space merging to routing models in their input-output spaces. 
They could provide a more detailed exploration of how this could be implemented, potentially including a preliminary implementation or simulation.\", \"The authors should expand their experiments to cover a broader range of tasks and models, including larger models.\"], \"reason_for_giving_a_higher_score\": \"See the strengths\", \"reason_for_giving_a_lower_score\": \"See the weaknesses\", \"rating\": \"5\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}"
]
} |
yyo54Z8VTy | Training Plug n' Play Knowledge Modules with Deep Context Distillation | [
"Lucas Caccia",
"Alan Ansell",
"Ivan Vulić",
"Edoardo Ponti",
"Alessandro Sordoni"
] | Dynamically integrating new or rapidly evolving information after Language Model (LM) pre-training remains challenging, particularly in low-data scenarios or when dealing with private and specialized documents. In-context learning and retrieval-augmented generation (RAG) face limitations, including their high inference costs and their inability to capture global document information. In this paper, we propose a way of modularizing knowledge by training Knowledge Modules (KMs). KMs are lightweight components implemented as parameter-efficient LoRA modules, which are trained to store information about new documents and can be easily plugged into models on demand. We show that next-token prediction performs poorly in training KMs. We instead propose Deep Context Distillation: we learn KMs parameters such as to simulate hidden states and logits of a teacher that takes the document in context. Our method outperforms standard next-token prediction and pre-instruction training techniques, across two datasets. Finally, we highlight synergies between KMs and retrieval-augmented generation. | [
"Distillation",
"Modularity",
"RAG"
] | Accept | https://openreview.net/pdf?id=yyo54Z8VTy | https://openreview.net/forum?id=yyo54Z8VTy | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"xBzg0GCrb3",
"gW6diNOGtA",
"f4YMgRE4kN",
"ZaXthd6q8f"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740972178635,
1740617427157,
1741226299191,
1740780977378
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission45/Reviewer_8VhE"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission45/Reviewer_Ygxj"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission45/Reviewer_FuH9"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a method for efficiently integrating knowledge from new or specialized documents into LLMs by training plug-and-play Knowledge Modules. KMs are lightweight LoRA-based modules trained using DCD to emulate the behavior of LLMs within a document context, thereby encoding knowledge into parameters without requiring access to the document. DCD combines output probability and hidden state distillation while leveraging synthetic summary data to enhance learning signals. Experiments show that KMs outperform traditional next-token prediction and pre-instruction training methods in both closed-book and open-book question-answering tasks.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The idea of leveraging LoRA as a Knowledge Module to embed knowledge from new documents is interesting.\", \"The experimental results demonstrate the effectiveness of the proposed method.\"], \"weaknesses\": [\"Training LoRA to encode document knowledge requires additional training, which can be costly in scenarios with frequent knowledge updates.\", \"Finding a suitable teacher model that contains new knowledge is challenging, especially when the teacher and student models must share the same architecture. In such cases, it may be more efficient to use the teacher model directly for downstream tasks rather than distilling a student model from it.\"], \"suggestions\": \"* Provide a more in-depth discussion of the second weakness to better illustrate the necessity of the proposed method.\\nIf multiple documents need to be incorporated, consider constructing a Multi-LoRA system for knowledge management. 
\\n* Adding relevant literature and discussions, such as [1-3], could strengthen the argument.\\n\\n[1] LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild\\n\\n[2] A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning\\n\\n[3] Towards Modular LLMs by Building and Reusing a Library of LoRAs\", \"reason_for_giving_a_higher_score\": \"See weaknesses. In particular, the proposed method relies on a teacher model with the same architecture that already contains the knowledge from new documents. This raises the question of whether distilling a separate student model is truly necessary.\", \"reason_for_giving_a_lower_score\": \"The idea of incorporating document knowledge through LoRA is highly intriguing.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"3\"}",
"{\"summary\": \"The authors propose a novel method of modularising document knowledge in Language Models using LoRA adapters. They name these lightweight components as Knowledge Modules (KM) and train them using Deep Context Distillation, an alternative training procedure to next-token prediction. Their experiments show improved performance over baseline conditions under different settings.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. A novel, lightweight design of modular knowledge storage that is plug-and-play and so can be easy to use when adapting a language model to understand different document knowledge without re-training and storing multiple models. \\n2. Interesting method of training these KMs using deep context distillation, it is an intuitive method since we want the model to comprehend the entire document and its specific context. While next token prediction mainly trains the model to be able to understand the structure of language in general. \\n3. Compatible with standard methods like RAG and has significant performance improvements in the closed book setting and when combining with RAG for open book.\", \"weaknesses\": \"1. KMs do not seem transferable to different models, and each set of KMs must be re-trained to fit each model. \\n2. In the closed book setting, KMs do not seem to perform too well on its own, even when combined with RAG, but requires an additional training step with Knowledge Extractors (KEs).\", \"suggestions\": \"1. In document DCD, does the teacher predict the first N/2 tokens from the last N/2 tokens? (Eqn. 3) perhaps the other way around would be better? or using a masked token prediction for the teacher.\\n2. Could it be possible to train a KM for the open book setting? \\n3. 
Is it possible to use other adapters besides LoRA?\", \"minor\": \"typos on line 127, 138, table 1 (right), under open book, should ICL be bolded instead?\", \"reason_for_giving_a_higher_score\": \"The concept is novel, with great experimental results, and fits the theme of using modular components to maximize the potential of language models with plug-and-play fine-tuned adapters.\", \"reason_for_giving_a_lower_score\": \"One limitation is lack of transferability between different models, even with the same architecture, since each KM has to be trained to adapt to the model's pre-trained weights.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper is a good fit for the workshop and has been positively received by all the reviewers. We encourage the authors to take reviewers' comments and suggestions into consideration, especially ablations proposed by FuH9, for the final version of the paper.\"}",
"{\"summary\": \"In this paper, the authors propose a way of modularizing knowledge by training Knowledge Modules (KMs), which are trained to store information about new documents and can be easily plugged into pre-trained models on demand. The authors claim that this method is more effective than the traditional approaches of RAG and ICL, especially in cases when documents are too long, leading to high inference cost. Although I agree with the premise and the problem described, I am not fully convinced about the novelty of the solution presented in the paper, which seems marginally incremental to existing works on knowledge distillation and, more recently, context distillation, both of which are referred to in the paper. Similarly, the results presented in the open-book setting aren't particularly impressive, which calls for additional evidence in terms of ablation -- what's the added benefit of Knowledge Extractors (KEs) on top of KMs? Although the results in Table 1 show that KE+KM+RAG is superior, I'd like the authors to dive deeper into explaining it further.\", \"strengths_and_weaknesses\": \"Strength: The problem discussed in the paper fits well with the theme of the workshop and is a real one, as approaches like RAG and ICL do hit their limits pretty soon when the documents are long or the memory of sequence-calling of LLMs gets bloated. The paper is reasonably well written and I was able to easily follow it for the most part.\", \"weakness\": \"The contribution doesn't meet the bar. The proposed way of training KMs is adopted from works like Context Distillation, using a combination of KL loss and L1 loss on hidden states. The notion of backpropagating the hidden states loss was demonstrated by Sanh et al., 2019. The notable change made was switching the cosine loss to an L1 loss. Furthermore, I am not fully convinced of the value added by KEs on top of KMs. 
More ablation studies could help here.\", \"suggestions\": \"The paper has potential but in its current form doesn't meet the bar from my perspective. My concerns are around the lack of ablation studies, the importance of KMs, KMs+RAG on top of KEs. More experiments and intuitive explanation on why these components improve the KEs will help address the concerns.\", \"reason_for_giving_a_higher_score\": \"N/A\", \"reason_for_giving_a_lower_score\": \"The paper has potential but in its current form doesn't meet the bar from my perspective. My concerns are around the lack of ablation studies, the importance of KMs, KMs+RAG on top of KEs. More experiments and intuitive explanation on why these components improve the KEs will help address the concerns.\", \"rating\": \"5\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}"
]
} |
u89LDBIyDe | Exact Unlearning of Finetuning Data via Model Merging at Scale | [
"Kevin Kuo",
"Amrith Setlur",
"Kartik Srinivas",
"Aditi Raghunathan",
"Virginia Smith"
] | Approximate unlearning has gained popularity as an approach to efficiently update a model so it (roughly) behaves as if it was not trained on a subset of data. However, approximate unlearning methods have been shown to be quite brittle in practice. In fact, such approaches can easily be attacked to reveal supposedly unlearned information. To address this issue, we instead propose a *model merging* approach, **ClAMU**, which produces combined models that can support both *efficient and exact* deletion of unlearning data. In addition to leveraging techniques from model merging and localization, **ClAMU** relies on two key innovations. First, we cluster tasks together and serve per-cluster models, balancing the tradeoff between the utility of local models versus the storage cost of a global model. Second, unlike existing localization methods which compress local models into masks, we propose directly optimizing local (or cluster-level) masks, which greatly improves utility. Relative to model merging and localization baselines, **ClAMU** serves models with up to 20% improved accuracy while reducing storage costs by up to 75%. | [
"merging",
"localization",
"task arithmetic",
"unlearning",
"masking"
] | Accept | https://openreview.net/pdf?id=u89LDBIyDe | https://openreview.net/forum?id=u89LDBIyDe | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"oT8EdySBZ6",
"ezHuhzHXJm",
"KCoz0iC8X9",
"06o2R8Wsm6"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740289327232,
1740520630696,
1741226299187,
1740806853195
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission36/Reviewer_t9VV"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission36/Reviewer_wc1z"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission36/Reviewer_uHZu"
]
],
"structured_content_str": [
"{\"summary\": \"The authors propose ClAMU for efficient and exact data unlearning for merged models. ClAMU proceeds in two steps, 1) tasks are clustered together and cluster-level masks are learnt, and then 2) masks are optimized on training data. The authors propose that this method improves overall utility of merged models, reduces storage requirements, and reduces unlearning costs.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The paper tackles an important topic area (unlearning), and the use of merging and focus on scalability are interesting and relevant.\\n\\n2. Experimental validation of the benefits of their ClAMU over baselines with regard to accuracy and storage appears thorough.\", \"weaknesses\": \"1. The problem setting is not clearly defined. The authors note that a major limitation of unlearning methods is their brittleness such that models can easily be attacked to reveal the unlearnt information. But then the paper doesn't seem to address this identified shortcoming, instead focusing on storage, utility and cost.\\n\\n2. Lack of clarity in presentation and unsupported claims. In the introduction, the authors claim to propose the technique of model merging for unlearning (line 37), however the experimental results centre on their masking approach, as opposed to justifying the more core concept of merging for unlearning. Furthermore, the authors state outright merging is well-suited to unlearning (line 132), but, again, this is a proposition that needs justification and not something that can just be asserted. \\n\\n3. Missing experiments. I would've expected experiments demonstrating the efficacy of removing unwanted knowledge from the model using the proposed approach vs baseline unlearning methods. 
However, experiments seem to center on the improved accuracy-storage tradeoff of their masking method and then later unlearning cost.\\n\\n(More minor point): line 48 notes that prior work has considered merging small numbers of models, but there's no citation provided.\", \"suggestions\": \"1. The authors should consider rewriting the motivation of their method, as the claimed brittleness of unlearning methods is not addressed.\\n\\n2. The authors should consider clarifying whether they are proposing the concept of model merging for unlearning, or whether they are taking merging for unlearning as a given and proposing a novel masking technique. If the former, then the authors should consider experimental justification for this claim, including experiments demonstrating the efficacy of removing knowledge by forms of merging. If the latter, then the authors need to rewrite the introduction to clarify that they do not propose merging for unlearning, and just propose a novel masking technique. This would also then require some discussion of merging for unlearning in the related work.\", \"reason_for_giving_a_higher_score\": \"Unlearning is a timely and important topic, and the focus on scalability and the application of model merging is interesting.\", \"reason_for_giving_a_lower_score\": \"I think there are quite large issues regarding what the paper claims to propose (both model merging for unlearning and a novel masking method), the problems the method aims to solve (brittleness in unlearning methods), and how the authors use experiments to justify its claims. If I have misunderstood the logic of the paper, then I would appreciate clarification from the authors on the identified weaknesses and I would be happy to re-evaluate my score.\", \"rating\": \"4\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper tackles exact unlearning in the context of finetuned models. The key idea is to store a single global model obtained by merging many local finetuned models. Because each local model\\u2019s contribution is an additive task vector [1], unlearning that subset can be done exactly by subtracting its vector. While conceptually straightforward, merging can degrade performance across many heterogeneous tasks. The main contributions lie in introducing (1) Clustering tasks to reduce storage overhead and (2) Masking to recover local performance from a single merged checkpoint.\\n\\n\\n***\\n[1] Ilharco et al, \\u201cEditing Models with Task Arithmetic\\u201d, ICLR, 2023.\", \"strengths_and_weaknesses\": \"**Strengths**\\n- The paper is well-written and easy to follow. Its motivation is also clear and straightforward. \\n- The paper suggests concise methods (e.g., clustering and masking) to address the scalability of merging-based unlearning approaches, which are sound and easy to understand. \\n- The paper shares a number of interesting, sound empirical results that support the authors\\u2019 claim. \\n\\n***\\n\\n**Weaknesses**\\n- The objective of the paper is puzzling. For instance, why is Sec 3.1 presented in the main paper? Is it closely related to the other parts of the paper? Furthermore, the paper opens Sec. 3 by illustrating the issues associated with merging a large number of models (500) and suggests that existing approaches (e.g., localization) show some significant cost issues. If localization is indeed the cause of this issue, can we avoid using them? Refining the paper and clarifying the objectives would strengthen the paper substantially. \\n- The idea of unlearning using task arithmetic is not novel [1], and several concurrent works [2,3] are present. 
While the paper assumes a unique setting where a large number of models are merged, there is a lack of elaboration on why this setting is probable and important.\\n\\n- The paper claims that it can reduce the storage cost of saving model weights by substituting them with optimizable masks. However, wouldn\\u2019t this create additional costs in optimization (training)? While I believe the optimization costs would not surpass the gains of using masks, I would like to see how much they cost. Furthermore, I would like to know the details of clustering high-dimensional task vectors (e.g., is every layer compared? How is the cost of clustering?)\\n\\n- Lack of theoretical analysis. While post-training (especially task arithmetic) is a dominantly experimental field, I suggest the authors add a theoretical analysis. Again, consider this a minor weakness as the reviewer is aware of the experimental nature of the post-training literature.\\n\\n\\n***\\n[1] Ilharco et al, \\u201cEditing Models with Task Arithmetic\\u201d, ICLR, 2023.\\n\\n[2] Kim et al, \\u201cNegMerge: Consensual Weight Negation for Strong Machine Unlearning\\u201d, ArXiv, 2024.\\n\\n[3] Kadhe et al, \\u201cSplit, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs\\u201d, ArXiv, 2024.\", \"suggestions\": \"The topic of exact unlearning is an important topic that requires significant attention, as it is directly related to illuminating how learned knowledge is stored in the model. While this paper shares very insightful empirical results, there is a lack of theoretical analysis on the topic. We strongly recommend the authors to include a theoretical analysis or at least a more detailed empirical analysis on the model's weight/parameter space.\", \"reason_for_giving_a_higher_score\": \"Please refer to the strengths.\", \"reason_for_giving_a_lower_score\": \"Please refer to the weaknesses.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper tackles exact unlearning in the context of finetuned models by leveraging model merging and localization. Most of the reviewers liked the paper, found it relevant to the workshop, and recommended acceptance. We suggest the authors incorporate the comments on reframing the paper brought up by reviewer t9VV to further strengthen the paper. Overall, we recommend accepting this work to the workshop.\"}",
"{\"summary\": \"This paper presents ClAMU, a novel framework for improving machine unlearning through model merging and localization. ClAMU addresses the challenge of efficiently removing data influence from fine-tuned models, especially when handling numerous tasks. At a high level, it employs clustering to group similar tasks and optimizes masks at the cluster level to improve utility and reduce storage costs. The framework's effectiveness is validated across both vision and language tasks, demonstrating improved accuracy and efficiency in unlearning. Experimental results show that ClAMU outperforms existing baselines in utility, storage, and unlearning cost. Additionally, the paper examines how data heterogeneity affects merging quality, suggesting future research directions for improving model merging in unlearning.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The paper addresses the challenge of efficiently updating a fine-tuned model to eliminate the influence of specific training data. This is crucial for complying with privacy regulations and reducing risks associated with fine-tuning.\", \"The paper introduces ClAMU, a novel framework for improving machine unlearning by integrating model merging and localization techniques. It introduces two key innovations: (1) task clustering, where similar tasks are grouped into clusters, and masks are learned at the cluster level instead of the task level. (2) improved localization, which optimizes masks directly on the training data for improved performance. These innovations allow ClAMU to outperform existing baselines in utility, storage, and unlearning cost.\", \"The paper investigates model merging with a large number of models (up to 500) and examines when merging can effectively achieve exact unlearning. 
It finds that the success of merging depends on data heterogeneity\\u2014performance remains high when data is relatively homogeneous but degrades significantly when data varies widely across tasks.\", \"The paper evaluates the combination of clustering and masking, demonstrating an improved efficiency-utility tradeoff compared to existing baselines.\"], \"weaknesses\": [\"The paper demonstrates that merging quality varies significantly across tasks, with a noticeable degradation when the data is highly heterogeneous. However, the paper lacks a thorough analysis of how varying degrees of heterogeneity impact ClAMU's performance and how ClAMU handles extreme cases of data heterogeneity.\", \"ClAMU addresses localization costs through clustering tasks and learning cluster-level masks rather than task-level masks. However, localization still introduces costs since local masks need to be relearned after unlearning and the storage cost scales with the number of clusters.\", \"The combination of clustering and masking generally sacrifices some utility compared to storing all local models.\", \"It is unclear how ClAMU handles scenarios where unlearning involves multiple tasks, particularly when it is difficult to determine which specific tasks to unlearn or the extent to which each task should be unlearned.\"], \"suggestions\": [\"The authors should provide a more in-depth analysis of ClAMU's limitations and potential failure cases when handling highly heterogeneous data. 
Specifically, they should examine how data heterogeneity affects performance and offer clear guidelines for practitioners on managing such challenges effectively.\", \"The authors should clarify how ClAMU reduces costs compared to other localization methods.\", \"The authors should discuss the conditions under which sacrificing utility is acceptable and provide guidance on how practitioners can balance utility and storage trade-offs.\", \"It would be beneficial to explore how ClAMU handles unlearning in scenarios involving multiple tasks, as mentioned earlier.\"], \"reason_for_giving_a_higher_score\": \"Please see the strengths.\", \"reason_for_giving_a_lower_score\": \"Please see the weaknesses.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}"
]
} |
stv0Fqxekz | A Framework for Double-Blind Federated Adaptation of Foundation Models | [
"Nurbek Tastan",
"Karthik Nandakumar"
] | The availability of foundational models (FMs) pre-trained on large-scale data has advanced the state-of-the-art in many computer vision tasks. While FMs have demonstrated good zero-shot performance on many image classification tasks, there is often scope for performance improvement by adapting the FM to the downstream task. However, the data that is required for this adaptation typically exists in silos across multiple entities (data owners) and cannot be collated at a central location due to regulations and privacy concerns. At the same time, a learning service provider (LSP) who owns the FM cannot share the model with the data owners due to proprietary reasons. In some cases, the data owners may not have the resources to even store such large FMs. Hence, there is a need for algorithms to **adapt the FM in a double-blind federated manner**, i.e., the data owners do not know the FM or each other's data and the LSP does not see the data for the downstream tasks. In this work, we propose a framework for double-blind federated adaptation of FMs using fully homomorphic encryption (FHE). The proposed framework first decomposes the FM into a sequence of FHE-friendly blocks through knowledge distillation. The resulting FHE-friendly model is adapted for the downstream task via low-rank parallel adapters that can be learned without backpropagation through the FM. Since the proposed framework requires the LSP to share intermediate representations with the data owners, we design a privacy-preserving permutation scheme to prevent the data owners from learning the FM through model extraction attacks. Finally, a secure aggregation protocol is employed for federated learning of the low-rank parallel adapters. Empirical results on four datasets demonstrate the practical feasibility of the proposed framework. | [
"federated learning",
"federated fine-tuning",
"double-blind federated adaptation"
] | Accept | https://openreview.net/pdf?id=stv0Fqxekz | https://openreview.net/forum?id=stv0Fqxekz | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"ZMUQotMZfy",
"LDp9xJLMYk",
"5IveSH5bHg"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1741028991150,
1740518038789,
1741226299299
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission49/Reviewer_bxzT"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission49/Reviewer_7JNg"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"The authors propose a method for adapting foundation models to downstream tasks in a federated learning framework that ensures \\\"double-blind\\\" privacy, meaning the model owner never reveals the model parameters, and the data owners do not expose their raw data. Their approach first distills the foundation model into a version that is compatible with fully homomorphic encryption, enabling secure encrypted computations. Then, they train low-rank parallel adapters that efficiently fine-tune the distilled model for specific tasks without compromising privacy.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The paper tackles an important problem: adapting the increasing number of foundation models to local settings using FL while ensuring privacy for both model and data.\", \"The paper is very comprehensive, detailing each component of the approach.\", \"The work combines very modern techniques (distillation, FHE, MPC, ) and discusses practical issues (e.g. mitigating malicious clients) along with potential solutions.\"], \"weaknesses\": [\"The structure of the paper is strange; it seems to have been shortened from a longer version without fully ensuring a coherent result. For example, no experiments/results are presented in the main text, even though they are listed as main contributions.\", \"Similarly, for a non-domain expert it is not clear which methods are new and which are simply applied. For instance, half a page in the main text explains the vanilla FL setting and FedAVG. This could be moved to the appendix to allow space to highlight novel frameworks and results.\", \"The writing is dense, with very long sentences and paragraphs that make it hard to identify the key concepts (like the \\\"encrypted inference\\\" and \\\"local learning\\\" sections).\"], \"suggestions\": \"To address the weaknesses, I would suggest restructuring the paper so that the key ideas and results fit within six pages. 
Move detailed descriptions of established methods to the appendix, allowing the main text to focus on the novel contributions. The description of the new approach should be more concise and clearly highlight the critical elements. Additionally, it is essential to include at least some experimental results in the main text so readers can properly evaluate the effectiveness of the approach\", \"reason_for_giving_a_higher_score\": \"I am not deeply familiar with the related work in this area, so it is hard to judge how novel it is to integrate techniques like distillation and fully homomorphic encryption into a single framework.\", \"reason_for_giving_a_lower_score\": \"Similarly, since I am not an expert in each of the frameworks used (distillation, FHE, ..) I couldn't fully assess the soundness of how the authors combined these techniques.\", \"rating\": \"6\", \"confidence\": \"2\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper introduces a double-blind federated adaptation framework for foundation models (FMs), ensuring that data privacy and model privacy are preserved simultaneously. The primary challenge addressed is adapting large FMs for downstream tasks when data is distributed across multiple entities and cannot be centralized due to privacy regulations, while the FM owner (Learning Service Provider - LSP) cannot share the model due to proprietary concerns.\\n\\nThe proposed framework leverages Fully Homomorphic Encryption (FHE) and Secure Multi-Party Computation (MPC) to enable encrypted inference and privacy-preserving training. The FM is first distilled into a sequence of FHE-friendly blocks, enabling inference without exposing model parameters. Adaptation is achieved using low-rank parallel adapters that do not require backpropagation through the FM, reducing computational costs. A privacy-preserving permutation scheme prevents clients from extracting model information, and secure aggregation ensures the confidentiality of model updates.\\n\\nEmpirical results on four datasets (CIFAR-10, CIFAR-100, SVHN, and Fed-ISIC2019) demonstrate that the proposed approach achieves competitive accuracy while ensuring privacy. Compared to full fine-tuning and linear probing, the method offers a balance between efficiency, scalability, and privacy, making it suitable for real-world federated learning applications. 
The paper provides a theoretically sound and practically feasible solution for privacy-preserving FM adaptation in decentralized settings.\", \"strengths_and_weaknesses\": \"Strength:\\n\\nThe framework avoids full fine-tuning, which is computationally expensive, by using low-rank parallel adapters that do not require backpropagation through the FM.\\nThe FHE-friendly model decomposition enables efficient inference while maintaining privacy.\\n\\nEmpirical results on CIFAR-10, CIFAR-100, SVHN, and Fed-ISIC2019 demonstrate that the method scales well across different datasets.\\nThe approach is tested under varying degrees of data heterogeneity (Dirichlet partitioning).\\n\\nThe method outperforms linear probing in most settings while being more efficient than full fine-tuning.\\nShows resilience to non-i.i.d. data, making it suitable for real-world federated learning (FL) applications.\\n\\nThe privacy-preserving permutation scheme prevents clients from reconstructing FM parameters, mitigating potential model extraction attacks.\", \"weaknesses\": \"While FHE ensures strong privacy, it is computationally expensive. 
The paper does not discuss practical latency implications in detail, which could be a concern for large-scale deployments.\\nBootstrapping operations in FHE are computationally demanding, potentially limiting real-time applications.\\n\\nThe framework is evaluated on relatively small datasets (CIFAR-10, CIFAR-100, SVHN, and Fed-ISIC2019).\\nNo evaluation on larger FMs that are typically used in foundation model applications.\\n\\nThe framework requires frequent encrypted intermediate data exchanges between the server and clients, which may result in high communication costs in real-world federated settings.\\nWhile the authors acknowledge this limitation, further analysis is needed to quantify the impact.\\n\\nThe paper does not evaluate how the framework performs under adversarial attacks, such as malicious clients sending incorrect updates or inference-time perturbations.\\nAdditional robustness experiments could strengthen the paper\\u2019s claims on secure adaptation.\", \"suggestions\": \"Optimize FHE Computation: Explore techniques such as quantized FHE, hybrid encryption schemes, or alternative secure computation methods to reduce computational cost.\", \"evaluate_on_larger_models\": \"Test the approach on larger-scale FMs to demonstrate scalability.\", \"quantify_communication_overhead\": \"Include a detailed communication cost analysis to assess the feasibility of deploying the framework in real-world federated settings.\", \"security_testing_against_adversaries\": \"Conduct robustness experiments to evaluate resistance to model inversion attacks, gradient manipulation attacks, or adversarial perturbations.\\n\\nOverall, the paper presents a strong and novel approach for privacy-preserving FM adaptation but could be further improved with additional large-scale evaluations and security analyses.\", \"reason_for_giving_a_higher_score\": \"The paper presents a strong and novel contribution to the field of federated learning by addressing the challenge of 
adapting foundation models (FMs) in a double-blind setting, ensuring both model and data privacy. The proposed approach leverages Fully Homomorphic Encryption (FHE) and Secure Multi-Party Computation (MPC) to enable privacy-preserving inference and adaptation without exposing sensitive data or model parameters. The methodology is well-founded, introducing low-rank parallel adapters to avoid the computational overhead of full fine-tuning while maintaining strong adaptation performance. The empirical validation across multiple datasets (CIFAR-10, CIFAR-100, SVHN, and Fed-ISIC2019) demonstrates the method\\u2019s effectiveness, showing that it outperforms linear probing while being more efficient than full fine-tuning. Additionally, the framework remains scalable under varying levels of data heterogeneity, making it suitable for real-world federated learning applications. The introduction of a privacy-preserving permutation scheme further strengthens security by mitigating model extraction attacks. While the computational overhead of FHE and the lack of evaluation on larger-scale foundation models present areas for improvement, the balance between privacy, efficiency, and accuracy justifies a high score.\", \"reason_for_giving_a_lower_score\": \"One major concern is the computational overhead of Fully Homomorphic Encryption (FHE), which, while ensuring strong privacy guarantees, is known to be computationally expensive. The paper does not provide a detailed analysis of the practical latency and efficiency trade-offs, which raises concerns about its feasibility for large-scale deployment. Additionally, the communication overhead caused by frequent encrypted exchanges between the server and clients is not thoroughly analyzed, potentially making the approach impractical for real-world federated learning settings with limited bandwidth. 
Another limitation is the lack of evaluation on large-scale foundation models\\u2014the experiments focus on relatively small datasets (CIFAR-10, CIFAR-100, SVHN, and Fed-ISIC2019), but do not assess scalability to more complex models. Furthermore, the paper does not include a robustness analysis against adversarial settings, such as clients sending incorrect updates or attempting to infer model parameters through side-channel attacks. Without such evaluations, the security claims of the framework remain incomplete. Lastly, while the proposed privacy-preserving permutation scheme helps defend against model extraction attacks, its effectiveness is not empirically tested against known adversarial techniques. Given these shortcomings, particularly regarding scalability, computational cost, and robustness analysis, the paper\\u2019s contributions, while valuable, may require further refinement before being fully applicable to large-scale federated learning scenarios.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper introduces a method for double-blind FL to fine-tune foundation models without sharing model parameters or data. The paper is relevant to the topic of decentralized training. Overall, we recommend acceptance and suggest the authors take reviewers' comments into consideration.\"}"
]
} |
stFPf3gzq1 | Improving the Efficiency of Distributed Training using Sparse Parameter Averaging | [
"Matt Beton",
"Matthew Reed",
"Seth Howes",
"Alex Cheema",
"Mohamed Baioumy"
] | Large language model (LLM) training is typically distributed across many accelerators to reduce training time, necessitating frequent exchange of information across high-speed, low-latency networks. Federated learning algorithms like DiLoCo have relaxed this requirement by grouping accelerators into islands, between which communication is infrequent. In the case of DiLoCo, synchronization between workers happens every $H$ steps, thus reducing the communication cost by a factor of $H$. However, if $H$ is too large, model convergence is affected as nodes performing local optimization diverge too far. In this work, we explore Sparse Parameter Averaging (referred to as SPARTA), where models asynchronously share a small subset of the parameters (e.g., 0.05\%) at each training iteration. This keeps them within the same basin to reduce divergence between models. The main contribution of this paper, is to combine SPARTA with DiLoCo, which provides two benefits over `pure' DiLoCo. First, using SPARTA increases correlation between nodes. This enables a 100× increase in the DiLoCo interval without incurring additional wall-clock time, whilst still achieving performance gains. Second, we show that SPARTA acts as a regularizer, allowing for a higher learning rate and faster convergence. | [
"Distributed Training",
"Large Language Models",
"Ensemble Methods"
] | Accept | https://openreview.net/pdf?id=stFPf3gzq1 | https://openreview.net/forum?id=stFPf3gzq1 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"ufgyRsIzFI",
"YNL3KCrnbY",
"LzKO1xTAOa",
"8TPZG7NdmF"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740671819202,
1740437713505,
1741043728391,
1741226298926
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission39/Reviewer_mbfU"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission39/Reviewer_sf3z"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission39/Reviewer_aba5"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"This research paper tackles the significant challenge of reducing communication overhead in distributed training of large language models while maintaining their effectiveness. The authors present SPARTA (Sparse Parameter Averaging), a novel approach that works in conjunction with DiLoCo, a distributed low-communication optimization method. DiLoCo minimizes communication by synchronizing parameters less frequently. SPARTA complements this by asynchronously averaging a small subset of model parameters, effectively reducing communication without increasing processing time. When combined, these methods lead to better convergence when communication is less frequent. SPARTA also appears to function as a regularizer, allowing for faster convergence through higher learning rates.\", \"strengths_and_weaknesses\": \"SPARTA is a relatively simple idea that shows significant promise for improving distributed training with DiLoCo. The results shown are quite compelling (particularly Figure 3) and the reduction in inter-node communication is potentially quite significant when comparing H=100 versus H=10000, even when the cost of the per-iteration updates is included. The major omission in the paper is a fuller discussion of the overall performance in terms of wall-clock time. For example, the caption of Figure 3 states \\\"whilst reducing wall-clock time\\\" - by how much? I presume that the loss calculations dominate the overall wall clock time; does the reduction in inter-node communication provide a significant speed increase or is it relatively small because DiLoCo already improves speeds a lot? Even if it is small, there appear to be additional benefits to using this approach. While the paper states that the communication is asynchronous, have there been any experiments to determine how asynchronous it can be? Is it always assumed that the weight communication happens within one gradient update? Or can larger delays be tolerated? 
This would be interesting for situations where inter-node communication is particularly slow (e.g., where resources are not geographically co-located).\", \"suggestions\": \"See strengths and weaknesses.\", \"a_very_minor_point\": \"it would be nice to include the pairwise correlation for DiLoCo with H=100 (page 3, line 158+).\", \"typos\": [\"\\\"papaer\\\" should be \\\"paper\\\".\", \"\\\"communiaiton\\\" should be \\\"communication\\\".\", \"\\\"to additional wall-clock time\\\" should be \\\"no additional wall-clock time\\\".\", \"Algorithm 1, line 7: \\\"do do\\\" should be \\\"do\\\"\"], \"reason_for_giving_a_higher_score\": \"A good paper with a simple idea explained well.\", \"reason_for_giving_a_lower_score\": \"-\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper introduces SPARTA (Sparse Parameter Averaging), a distributed training approach that asynchronously shares a tiny fraction (0.05%) of model parameters between workers at each training step. The authors combine this with DiLoCo, an existing distributed training method, to achieve two key benefits: 1) enabling much less frequent full model synchronization (every 10,000 steps vs 100 steps) without hurting performance, and 2) providing a regularization effect that allows higher learning rates. Using a 124M parameter nano-GPT model, they demonstrate a 14.3% reduction in validation perplexity while reducing communication overhead by 100x.\", \"strengths_and_weaknesses\": \"The proposed approach introduces a novel combination of sparse parameter sharing with DiLoCo that addresses a real pain point in distributed training and the authors show strong empirical results showing improved performance with drastically reduced communication. The paper is well-written and there is a clear explanation of how asynchronous parameter sharing helps maintain model alignment.\\n\\nMy concerns with the paper center mainly around the limited experimental validation; the authors only tested their approach on nano-GPT which is a relatively small model of 124M parameters so it's not clear if this approach will scale to larger models. I also would have liked to see more comparisons against other recent approaches like Async Local-SGD, as well as discussion of potential failure modes, limitations and how the approach scales with model size or number of workers.\", \"suggestions\": \"See concerns listed above.\", \"reason_for_giving_a_higher_score\": \"The paper presents a practical and effective solution to a significant problem in distributed training. The 100x reduction in communication overhead while maintaining or improving performance is impressive. 
The approach is simple to implement and could have immediate impact for organizations training large models with limited networking infrastructure.\", \"reason_for_giving_a_lower_score\": \"The limited experimental validation and lack of theoretical analysis make it hard to fully trust the results will generalize. More rigorous comparisons against competing approaches and testing on larger models would strengthen the paper's claims. The regularization effect, while intriguing, needs better characterization.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper proposes averaging sets of sparse parameters across model replicas within the DiLoCo setting, effectively increasing the global synchronization interval with negligible information exchange overhead. Additionally, this approach appears to introduce a regularization effect, leading to improved model convergence.\", \"strengths_and_weaknesses\": [\"# Strengths\", \"Reducing the communication overhead of DiLoCo is a valuable contribution, particularly since DiLoCo is a widely used baseline for decentralized data parallelism. The proposed approach can also be applied to other DiLoCo variants, enhancing its general applicability.\", \"The regularization effect induced by sparse parameter averaging is intriguing, and its impact on convergence appears to be significant.\", \"The method increases model correlation, which can influence the final aggregated performance\\u2014a beneficial characteristic in many settings.\", \"# Weaknesses\", \"The experiments were conducted on a relatively small model, making it unclear whether the observed characteristics would hold at scale. Additional experiments on larger models would strengthen the paper\\u2019s conclusions.\", \"The writing could be improved for clarity and readability (e.g., Line 050).\", \"The plots could be more polished and high-resolution to enhance professionalism and readability.\"], \"suggestions\": \"Authors should consider adding more scaled up experiments. The current experimental setting can be quite unconvincing given the smaller size of the models.\\n\\nIt is not clear to me if there is a practical reduction of the wall clock time due to this approach since DiLoCo synchronization is already sparse. So other than the regularization effect, is there any practical advantage in the decentralized settings? 
If there is, please make it clearer.\", \"reason_for_giving_a_higher_score\": \"Reducing the communication overhead of DiLoCo is a valuable contribution, particularly since DiLoCo is a widely used baseline for decentralized data parallelism. The proposed approach can also be applied to other DiLoCo variants, enhancing its general applicability.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes to communicate only a sparse set of parameters at each training iteration when using local optimization algorithms such as diloco, while computing full averaging every H steps. Results are promising when increasing H, and reviewers recommend acceptance. We encourage the authors to carefully consider the reviewers' comments and suggestions when preparing the final version.\"}"
]
} |
qKLTBFUBsJ | CAMEx: Curvature-aware Merging of Experts | [
"Dung Viet Nguyen",
"Minh Hoang Nguyen",
"Luc Nguyen",
"Rachel S.Y. Teo",
"Tan Minh Nguyen",
"Linh Duy Tran"
] | Existing methods for merging experts during model training and fine-tuning predominantly rely on Euclidean geometry, which assumes a flat parameter space. This assumption can limit the model's generalization ability, especially during the pre-training phase, where the parameter manifold might exhibit more complex curvature. Curvature-aware merging methods typically require additional information and computational resources to approximate the Fisher Information Matrix, adding memory overhead. In this paper, we introduce CAMEx (Curvature-Aware Merging of Experts), a novel expert merging protocol that incorporates natural gradients to account for the non-Euclidean curvature of the parameter manifold. By leveraging natural gradients, CAMEx adapts more effectively to the structure of the parameter space, improving alignment between model updates and the manifold's geometry. This approach enhances both pre-training and fine-tuning, resulting in better optimization trajectories and improved generalization without the substantial memory overhead typically associated with curvature-aware methods. Our contributions are two-fold: (1) CAMEx significantly outperforms traditional Euclidean-based expert merging techniques across various natural language processing tasks, leading to enhanced performance during pre-training and fine-tuning; (2) we introduce a dynamic merging architecture that optimizes resource utilization, achieving high performance while reducing computational costs, facilitating efficient scaling of large language models. | [
"Sparse Mixture-of-Experts",
"efficiency",
"expert merging"
] | Accept | https://openreview.net/pdf?id=qKLTBFUBsJ | https://openreview.net/forum?id=qKLTBFUBsJ | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"zCNDCLGBcz",
"XrYDRcNUXD",
"KBS3EkvnN7",
"23mtM4CN28"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740677275375,
1740773345711,
1741226298753,
1740595367409
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission6/Reviewer_7ENf"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission6/Reviewer_LSvg"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission6/Reviewer_ud9b"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces CAMEx, a novel technique for merging experts in Sparse Mixture of Experts (SMoE) architectures that leverages natural gradients to incorporate the curvature of the parameter manifold. By moving beyond traditional Euclidean-based merging methods, CAMEx proposes a dynamic merging architecture that not only improves performance\\u2014demonstrated through experiments on different tasks such as language modeling (WikiText), text classification (GLUE), and image classification (ImageNet)\\u2014but also achieves computational efficiency with reduced memory overhead. The method is supported by both theoretical insights and empirical validations, positioning it as a promising direction for scalable and efficient model training.\", \"strengths_and_weaknesses\": \"**Strengths**\\n\\nThe paper's primary strengths lie in its novel use of curvature-aware natural gradients for expert merging, which leads to improved performance on different tasks/domains compared to traditional Euclidean methods. The method is theoretically well-explained and empirically tested across different domains and tasks. \\n\\n**Weaknesses**\\n\\nThe paper relegates many key experimental results and supporting figures to the appendix, which diminishes the clarity and impact of its main claims. There is an inconsistency in the use of models\\u2014switching between T5 and GPT-2 across different evaluations\\u2014without a clear justification, thereby undermining the demonstration of generalizability. Additionally, the lack of significance testing for the improvements in the curvature-aware variants weakens the confidence in the reported gains. 
Finally, the experiments are primarily focused on general datasets, with limited evaluation on domain-specific tasks, and the image classification results, a claimed contribution, are not prominently featured in the main text.\", \"suggestions\": [\"Main Text Integration: Move key figures, tables, and experimental results that support the paper's primary claims from the appendix to the main text for greater clarity and impact.\", \"Consistency in Model Choice: Clarify why the experiments switch between T5 and GPT-2 (e.g., GLUE versus Wikitext perplexity) and consider using both models consistently on the same tasks to better demonstrate generalizability.\", \"Statistical Significance: Include significance testing for the improvements of the curvature-aware variants (as shown in Tables 2, 3, and 4) to robustly validate the novel approach.\", \"Domain-Specific Evaluation: Extend experiments to more domain-specific datasets\\u2014such as those in math, coding, finance, healthcare, or non-English languages (e.g., Korean, Arabic)\\u2014to assess performance across varied contexts.\", \"Image Classification Results: Since image classification is claimed as a key contribution, integrate these results into the main text rather than keeping them exclusively in the appendix.\", \"Convergence Claims: Ensure that claims regarding rapid convergence are substantiated by results presented in the main text\", \"Provide clear motivations for the choice of models and datasets.\", \"**Minor Comments**\", \"Table 1: Rescalling factor -> Rescaling factor\", \"Table 2: Use the same order for -CA counterparts: Swap the rows for DARE-CA and Ties-CA to be in the same order as the non-CA ones.\"], \"reason_for_giving_a_higher_score\": \"I give a higher score because the paper presents a novel curvature-aware merging method that leverages natural gradients to improve model performance and efficiency. 
Its strong theoretical foundation and empirical results across diverse tasks underscore its potential impact in scalable model training.\", \"reason_for_giving_a_lower_score\": \"I give a lower score because key experimental evidence is hidden in the appendix, reducing clarity and impact. Additionally, inconsistent model choices and a lack of statistical validation weaken the robustness of the claims. These issues suggest that further refinement is needed before the approach can be broadly adopted.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper proposes a curvature-aware merging method and the experiments show that CAMEx can outperform Euclidean-based expert merging. Overall, it is a very interesting idea; I suggest the authors conduct more experiments on \\\"combination of the skills\\\". For example, with one expert trained on a math dataset and another expert trained on a code dataset, it would be interesting if the curvature-aware merging can help to solve math problems using code.\", \"strengths_and_weaknesses\": [\"# Strengths\", \"The idea is interesting and the paper is presented clearly.\", \"The experimental results are promising for solving complex tasks.\", \"# Weaknesses\", \"The evaluation benchmark is limited. It would be good to conduct more analysis on zero-shot tasks.\", \"Is the improvement of the results from the \\\"Curvature-Aware\\\" information? It would be good to conduct more analysis.\", \"The results and conclusions are only based on T5. Although there are also some results based on Phi-3, it would be good to have more results based on other backbone models.\"], \"suggestions\": \"As I said in the weaknesses.\", \"reason_for_giving_a_higher_score\": \"I think the method is very interesting for expert or model merging. It would be good to conduct more experiments in this direction.\", \"reason_for_giving_a_lower_score\": \"no\", \"rating\": \"6\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a new merging method, which is very relevant to this workshop. All reviewers recommend acceptance, and we're pleased to accept it to this workshop.\"}",
"{\"summary\": \"The paper proposes CAMEx, a method for merging experts in (sparse) mixture-of-expert models. CAMEx improves over prior work by taking an approximation of the curvature of the parameter space into account.\\n\\nAdditionally, the authors propose a \\\"dynamic merging\\\" strategy which reduces the number of parameters while keeping FLOPs the same by merging into a global expert shared across layers.\\n\\nCAMEx and the dynamic merging strategy are evaluated via a broad range of experiments on natural language as well as image tasks, and shown to improve over prior merging methods.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The proposed method achieves consistent improvements on a broad set of tasks.\", \"Experiments are extensive & reported in a detailed way.\"], \"weaknesses\": [\"The paper is lacking in clarity. The experimental setup is not clearly explained (i.e., what \\\"Vanilla\\\", \\\"SMoE\\\", and \\\"Domain-Specific\\\" refer to in Tables 3 and 4) and the dynamic merging strategy is only introduced in a figure and via a formula but never explained in the text.\", \"The paper skips comparing against some prior methods taking second-order information into account, e.g. [[1]]. The authors' reasoning is that \\\"[these approaches] become costly for large models, as storage and transmission demands increase linearly with model size and the number of tasks as well as the number of experts. Therefore, we choose baselines that are needless of extra information and computational cost to perform comparisons.\\\". However, the proposed method also incurs extra computational cost & needs extra information so this seems like a weak argument.\", \"[1]: https://arxiv.org/abs/2111.09832\"], \"suggestions\": [\"Please introduce the experimental setup more clearly. 
As mentioned above, currently, what precisely the different baselines refer to is hard to understand.\", \"The dynamic merging method should also be given some more space to explain it more clearly.\", \"I would also recommend moving some of the content at the start of the introduction to related work, since it is essentially an enumeration of related work (Lines 33 - 43). This could make the introduction easier to read & more to the point.\"], \"reason_for_giving_a_higher_score\": \"The paper introduces & extensively evaluates a novel method which could be of interest to the community.\", \"reason_for_giving_a_lower_score\": \"-\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}"
]
} |
mGAAoEWOq9 | Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers (Abridged) | [
"Shalev Lifshitz",
"Sheila A. McIlraith",
"Yilun Du"
] | By utilizing more computational resources at test-time, large language models (LLMs) can improve without additional training. One common strategy uses *verifiers* to evaluate candidate outputs. In this work, we propose a novel scaling dimension for test-time compute: *scaling the number of verifiers*. We introduce Multi-Agent Verification (MAV) as a test-time compute paradigm that combines multiple verifiers to improve performance. We propose using Aspect Verifiers (AVs), off-the-shelf LLMs prompted to verify different aspects of outputs, as one possible choice for the verifiers in a MAV system. AVs are a convenient building block for MAV since they can be easily combined without additional training. Moreover, we introduce BoN-MAV, a simple multi-agent verification algorithm that combines best-of-*n* sampling with multiple verifiers. BoN-MAV demonstrates stronger scaling patterns than self-consistency and reward model verification, and we demonstrate both weak-to-strong generalization, where combining weak verifiers improves even stronger LLMs, and self-improvement, where the same base model is used to both generate and verify outputs. Our results establish scaling the number of verifiers as a promising new dimension for improving language model performance at test-time. | [
"large language models",
"test-time compute",
"verification",
"scaling"
] | Accept | https://openreview.net/pdf?id=mGAAoEWOq9 | https://openreview.net/forum?id=mGAAoEWOq9 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"gPH13HptUx",
"cyc3oKBL69",
"CMdnB3QZUA",
"5iNYXTgjGr"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740696125738,
1741226299351,
1740699987902,
1740289975901
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission35/Reviewer_vKyS"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission35/Reviewer_9y4h"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission35/Reviewer_KJJr"
]
],
"structured_content_str": [
"{\"summary\": \"The paper proposes to verify MATH solutions by an ensemble of 'multi-agent' verifiers (MAV): the algorithm goes as:\\n1) generate different MATH solutions for a problem;\\n2) prompt different models (gpt-4o-mini, gemini-flash) with different prompts to predict the quality of each solution;\\n3) aggregate the verifications;\\n4) rerank the solutions.\\nMAV performs better than consistency or RM scoring. The paper is mildly on topic for this workshop, the closest aspect being the usage of different models to verify the solutions (which might be different in architecture from the original generators).\", \"strengths_and_weaknesses\": \"\\\\+ good performance from using an ensemble of verifiers\\n\\n\\\\+ nicely written\\n\\n\\\\- mildly on topic for the workshop\\n\\n\\\\- couldn't find analysis about the diversity of the verifiers\\n\\n\\\\- comparison between using different models vs different GVs for the same model\", \"suggestions\": \"The reason for the low-ish score is workshop topic fit. I think it would be more interesting for this workshop if the paper focused on analyzing the diversity of the verifiers and how performance changes with the diversity of the verifiers. The abstract says: \\\"where performance improves with both the number and diversity of GVs\\\"; it could be my misunderstanding, but I couldn't find an analysis of the diversity of the verifiers and how this impacts the performance, which might be more interesting for this workshop.\\n\\nThe paper seems to be missing a baseline which is basically using gemini or gpt-4o-mini only across GVs?\\n\\nUsing open-weights models, rather than gemini or gpt-4o-mini, might also be interesting. Which models should we choose? Is there a way to know which model would be better for this task if I have a set of them available?\", \"reason_for_giving_a_higher_score\": \"n/a\", \"reason_for_giving_a_lower_score\": \"n/a\", \"rating\": \"5\", \"confidence\": \"2\", \"workshop_fit\": \"3\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposes to use multiple external verifiers to assess the solution quality of a model and provide better verification signals through simple voting mechanisms. This is a novel and timely idea that shows some potential. We encourage the authors to take reviewers' comments and suggestions into consideration for the final version of the paper.\"}",
"{\"summary\": \"This work studies the important problem of improving test-time performance by introducing Goal Verifiers that are LMs prompted to provide binary scores for a generated response. By scaling the number of Goal Verifiers, the generated responses can be judged on several dimensions. A Best of N Multi-Agent Verification mechanism is proposed that samples n responses from the generator and runs them against the Goal Verifiers. Then the solution receiving the highest total score from the Verifiers is chosen as the final response. Experiments are run on datasets like MATH, HumanEval, GPQA, MMLU with 5 different LMs to demonstrate the effectiveness of MAV.\", \"strengths_and_weaknesses\": [\"Strengths\", \"Well-written paper, easy to follow\", \"Several experiments across domains and LMs\", \"Weakness\", \"The idea is itself not new, but has been presented well. Similar solutions of sampling n responses and choosing the best based on some form of critique mechanism has already been well explored in the literature\"], \"suggestions\": [\"\\\"combining multiple relatively weak verifiers together can enhance the performance of stronger generator models\\\" I found this interesting. It would be nice to have some analysis on the cost effectiveness of this approach compared to baselines?\", \"An interesting aspect would be leveraging the multi-agent framework to more effectively score the generations via Debate style methods.\"], \"reason_for_giving_a_higher_score\": \"N/A\", \"reason_for_giving_a_lower_score\": \"Refer weakness\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This work introduces Goal Verifiers (GVs), external large language models (LLMs) that assess the correctness of a solution from different perspectives at test time. GVs do not require additional training and naturally integrate multiple verification signals through simple voting mechanisms. Additionally, this work proposes Multi-Agent Verification (MAV), which leverages multiple GVs to enhance verification reliability. The results demonstrate that model performance improves as both the number and diversity of GVs increase, highlighting the effectiveness of MAV in refining LLM outputs.\", \"strengths_and_weaknesses\": \"### **Strengths**\\n1. The approach of scaling test-time computation by incorporating more verifiers is intuitive. Experimental results confirm that it outperforms self-consistency and reward model verification. \\n2. The verifier requires no additional training and can evaluate solutions from different verification aspects, making it a flexible and adaptable approach. \\n\\n### **Weaknesses** \\n1. The performance **highly depends** on the strength of the verifiers. In this paper, the authors use strong closed-source models, Gemini-1.5-Flash and GPT-4o-mini. The overall performance may degrade when using smaller or weaker models as verifiers. \\n2. Although the verifier does not require additional training, the need for multiple verifiers to cover diverse aspects of verification could still introduce significant computational costs.\", \"suggestions\": \"1. Consider including experimental results using weaker models as verifiers to analyze performance degradation and provide insights into the robustness of the proposed approach.\\n2. It may be useful to explore more stable verification methods, such as static verifiers, to reduce reliance on LLM-based verification. 
However, static verifiers might lack the flexibility that LLM-based verifiers provide.\", \"reason_for_giving_a_higher_score\": \"See strengths.\", \"reason_for_giving_a_lower_score\": \"See weakness.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}"
]
} |
mAy1IbbLoR | Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging | [
"Pierre Ablin",
"Angelos Katharopoulos",
"Skyler Seto",
"David Grangier"
] | Machine learning models are routinely trained on a mixture of different data domains.
Different domain weights yield very different downstream performances.
We propose the Soup-of-Experts, a novel architecture that can instantiate a model at test time for any domain weights with minimal computational cost and without re-training the model.
Our architecture consists of a bank of expert parameters, which are linearly combined to instantiate one model.
We learn the linear combination coefficients as a function of the input domain weights.
To train this architecture, we sample random domain weights, instantiate the corresponding model, and backprop through one batch of data sampled with these domain weights.
We demonstrate how our approach obtains small specialized models on several language modeling tasks quickly.
Soup-of-Experts are particularly appealing when one needs to ship many different specialist models quickly under a model size constraint. | [
"Pretraining",
"data mixing",
"model merging"
] | Accept | https://openreview.net/pdf?id=mAy1IbbLoR | https://openreview.net/forum?id=mAy1IbbLoR | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"mjV8bdGdpq",
"WKbpKwCu7I",
"Urv0bk1jPd",
"Lm6YW67flI",
"H7hcjr3LYi",
"3SjQPaVw8U"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741196049351,
1741226298226,
1741193415684,
1740911370819,
1740910347407,
1740746795707
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission42/Reviewer_rwsU"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission42/Reviewer_dKCr"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission42/Reviewer_KrcX"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission42/Reviewer_NSZr"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission42/Reviewer_mK5j"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a new method, namely, Soup-of-Experts, that pre-trains multiple experts that are inherently designed to be linearly combined, resulting in a single specialized model.\", \"strengths_and_weaknesses\": \"Strength: The paper explores a new use of model merging in a meta-learning setting.\", \"weakness\": \"1. The paper is not particularly novel; it is a combination of existing techniques. \\n\\n2. Experimental results are not sufficient, and much more will be needed to justify the advantages of the proposed method.\\n\\n3. The presentation of the paper needs to be enhanced.\", \"suggestions\": \"1. Experiments on different tasks and with larger models will help improve the paper.\\n\\n2. Providing theoretical guarantees for the proposed method will be a plus.\", \"reason_for_giving_a_higher_score\": \"I give a marginally above acceptance threshold score for this paper since the use of model merging in a meta-learning setting is an interesting direction to explore.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"6\", \"confidence\": \"5\", \"workshop_fit\": \"3\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a soup of experts where a model can be created from a specific merging. This is a relevant topic to this workshop, and all reviewers recommend acceptance; therefore, we're pleased to accept this work to the workshop.\"}",
"{\"summary\": \"This paper proposes a pretraining framework that leverages \\\"trainable\\\" parameter averaging across experts while all of the experts are being trained. After the pretraining, an estimate of the domain weight distribution for the target data (they use the nearest-neighbor method) is sufficient to compute interpolation weights and use for parameter averaging. More concretely, a simple two-layer MLP is used to project domain weights to interpolation weights (which is trained end-to-end during pretraining).\\n\\nThe paper compares the proposed method with naive pretraining and CRISP, where importance sampling is used to pretrain strong specialized models. While CRISP requires pretraining a new model for each domain of interest, the proposed technique only requires lightweight meta-training to learn interpolation weights. The paper shows promising results, showing the best performance on specialized domains while being slightly worse than naive pretraining on the pretraining loss.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. Soup-of-Experts is a very novel framework that leverages parameter averaging at pretraining time, leading to better performance at inference time on specialized domains with a smaller model size. Especially at inference time, the framework does not require any training for computing interpolation weights except domain embeddings for their centroids.\\n\\n2. The paper is very well written, and the framework is described clearly. In this context, the descriptions of domain sampling, learning interpolation weights, and pretraining experts include sufficient detail. \\n\\n3. Experimental results are promising.\", \"weaknesses\": \"1. I think the main weakness of the paper is the baselines and comparison in their experiments. Although the proposed method averages the parameters at inference time, leading to a small model (110M parameters in the experiments), it pre-trains many more parameters (14B). 
Instead of comparing independently trained models (standard pretraining or CRISP), comparing SoE to an MoE architecture with a Top-1 expert (with adjusted parameters to make the total and active parameter counts similar) would be fairer. \\n\\n2. Similarly, in the experiments, SoE was trained with 1+128 experts; however, the Domain expert baseline was trained with 64 experts (64 separate models), which leads to an unfair comparison.\", \"suggestions\": \"As mentioned above, I suggest the authors include a comparison with an MoE-based architecture. More concretely, these two architectures would be best to compare:\\n1. A standard sparse MoE with a Top-1 expert, where the dimensions are adjusted to make the total and active numbers of parameters the same as in the proposed method. \\n\\n2. Branch-Train-Merge (https://arxiv.org/abs/2208.03306), where a base model is first pre-trained, then new branches are trained/finetuned to merge or ensemble at inference time.\", \"reason_for_giving_a_higher_score\": \"I refer to my comments above about the \\\"strengths\\\" of the work as reasons for a high score.\", \"reason_for_giving_a_lower_score\": \"I refer to my comments above about the \\\"weaknesses\\\" of the work as reasons for a low score.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper proposes a new scheme for creating a small specialized model from a larger one without requiring any fine-tuning. This specialized model is targeted toward a specific domain following a specific distribution of the pretrained domains, modelled by domain weights $h$. The specialized model can further be seen as a good initialization and then fine-tuned on the target domain to get better performance with fewer steps.\", \"strengths_and_weaknesses\": [\"**Strengths:**\", \"The idea is straightforward and easy to follow.\", \"The authors provide an efficient training pipeline for learning the mapping function between the domain weights $h$ and the expert parameters $\\\\alpha$.\", \"The specialized model can be created beforehand, thus lowering the computational and memory cost for deploying models in practice.\", \"**Weaknesses:**\", \"Quantitative results are missing; only the loss is provided. I would like to see more meaningful metrics like perplexity or other downstream task metrics.\", \"No direct comparison with other model merging methods, in the zero-shot setting as well as in the fine-tuning setting.\", \"While this idea is interesting, is there any way to use pretrained models instead of training the model from scratch?\"], \"suggestions\": \"See above section.\", \"reason_for_giving_a_higher_score\": \"N/A\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper introduces a method for sampling a specialized model from a warehouse of expert models with the aid of a meta-domain distribution vector. The method shines in scenarios requiring rapid deployment of small, specialized models under size constraints, such as language modeling tasks. By avoiding retraining for each domain, Soup-of-Experts achieves efficiency and flexibility, delivering low loss $L(\\\\theta, h)$ for the instantiated models tailored to specific domain needs.\", \"strengths_and_weaknesses\": [\"**Strengths**:\", \"The motivation of the paper is appealing and practical.\", \"The training pipeline is novel and interesting.\", \"The qualitative results are promising.\", \"**Weaknesses**:\", \"I believe the authors should also consider comparing the proposed method with other fast model merging baselines, such as [1] and [2]. If these baselines are not suitable for comparison, I recommend combining the proposed method with them to see if they can boost each other's performance.\", \"The experiments are conducted only at the GPT2-base model size. I am wondering if the proposed method works well for both small and large model sizes.\", \"[1] Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. 2023. **TIES-Merging: Resolving Interference When Merging Models.** In *NeurIPS 2023*.\", \"[2] Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024. **Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch**. In *ICML 2024*.\"], \"suggestions\": \"1. Add more baseline comparisons.\\n2. Add more information about the models' parameter counts and an ablation on different model scales.\", \"reason_for_giving_a_higher_score\": \"N/A\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper proposes Soup-of-Experts (SoE), a method that pretrains a set of expert parameters that can be linearly combined to instantiate small, specialized language models without extensive retraining. Using a combination of a bank of experts, shared parameters, and an MLP router, they obtain rapid specialization while being competitive with the generic pretraining setup. Evaluated on popular and established datasets, their method outperforms other baselines in the specialised setting while maintaining good general performance.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"SoE leverages parameter averaging to create a modular, scalable solution for model specialization, aligning with the workshop\\u2019s focus on reusable ML components.\", \"The method excels in specialized domains and reduces fine-tuning costs, as shown in Figure 2.\"], \"weaknesses\": [\"The authors choose sparse meta-distributions and don't explore the motivation deeply or test alternative meta-distribution choices. The following question remains: how do these choices affect training and performance? Moreover, robustness to this choice remains unaddressed.\", \"Algorithm 1 estimates the combination of experts from specialization data, but the number of samples used is unspecified. This omission obscures how robust SoE is to limited data compared to baselines\\u2014key for practical deployment where specialization samples may be scarce.\"], \"suggestions\": [\"Analyse the impact of different choices of meta-distribution during training.\", \"Report sample size and test performance with varying amounts of specialization data.\"], \"reason_for_giving_a_higher_score\": \"SoE tackles a timely challenge with an elegant, modular design and promising results. 
Addressing the meta-distribution\\u2019s role and specialization sample size would strengthen its claims, but these gaps don\\u2019t overshadow its innovation or potential.\", \"reason_for_giving_a_lower_score\": \"Lack of ablations and some experimental details.\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}"
]
} |
kWGBPSRtf9 | MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | [
"Rachel S.Y. Teo",
"Tan Minh Nguyen"
] | Large-scale pre-training of deep models, followed by fine-tuning them to adapt to downstream tasks, is currently the cornerstone of natural language processing (NLP). The massive size of these models has led to remarkable success in many NLP tasks. However, a detriment is the expense required to retrain all the base model's parameters for the adaptation to each task or domain. Parameter Efficient Fine-Tuning (PEFT) provides a highly effective solution for this challenge by minimizing the number of parameters required to be trained while maintaining the quality of the model. In this paper, we study layers as extractors of different types of linguistic information that are valuable when used in conjunction with each other. We then propose the Mixture of Layer Experts (MoLEx), a novel sparse mixture of experts (SMoE) whose experts are layers in the pre-trained model. It performs a conditional computation of a mixture of layers during fine-tuning to provide the model with more structural knowledge about the data. By providing an avenue for information exchange between layers, MoLEx enables the model to make a more well-informed prediction for the downstream task, leading to better fine-tuning results with the same number of effective parameters. As experts can be processed in parallel, MoLEx introduces minimal additional computational overhead. We empirically corroborate the advantages of MoLEx when combined with popular PEFT baseline methods on a variety of downstream fine-tuning tasks, including the popular GLUE benchmark and End-to-End Challenge (E2E). | [
"Parameter efficient fine-tuning",
"mixture of experts",
"sparse upcycling"
] | Accept | https://openreview.net/pdf?id=kWGBPSRtf9 | https://openreview.net/forum?id=kWGBPSRtf9 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"mQQw3K5xAb",
"I9DttPCMwe",
"HbDSRzeFx9",
"CiM7gCmUCb",
"4J8BvDS4Yf"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740750074268,
1741022239738,
1741226298842,
1740656311047,
1741177445283
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission29/Reviewer_Jveu"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission29/Reviewer_95dS"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission29/Reviewer_KHtE"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission29/Reviewer_ZpdG"
]
],
"structured_content_str": [
"{\"summary\": \"The paper proposes a new parameter-efficient adaptation technique that is based on sparse upcycling. The method adapts a transformer into a Mixture of Experts, where the experts are the different layers of the transformer. At each layer $t$, the model learns a (shared) linear router which selects, from all other layers, the one that will be processed in parallel with the current layer $t$ and have its output combined with that of layer $t$.\\n\\nFinally, this system is used in conjunction with other PEFT methods, and is shown to outperform the same PEFT method without upcycling. The authors provide some theoretical results to reinforce their method.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The paper is well written, the method is easy to understand, and the results are well presented. \\n2. The proposed method is elegant and enables better communication across layers. \\n3. The zero-shot transfer results are quite interesting.\", \"weaknesses\": \"1. I am not sure the comparison to LoRA is fair; my understanding is that MoLEx is roughly 2x the compute of the base model, while LoRA stays closer to 1x? At the end of the day, you need more resources to run MoLEx, and I think further clarifications in the paper would help.\", \"suggestions\": \"Some questions / clarifications:\\n1. Regarding the statement in the intro on the high computational cost of retraining all parameters: PEFT methods are parameter efficient, but not necessarily compute efficient. For example, the computational cost of training a LoRA is only marginally smaller than full finetuning, as you still need to forward / backward across the full model.\", \"reason_for_giving_a_higher_score\": \"Addressed above.\", \"reason_for_giving_a_lower_score\": \"Addressed above.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper introduces MoLEX, a mixture-of-experts framework to improve the fine-tuning process of language models. MoLEX views different layers of the pre-trained model as experts to provide more structural knowledge of the data, and it delivers performance improvements over LoRA across different evaluation scenarios, models, and benchmarks.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. MoLEX delivers performance improvement over LoRA across different settings, although the improvement on some tasks is moderate.\\n2. A theoretical analysis is provided in a simplified setting for showing the improved robustness brought by MoLEX.\", \"weaknesses\": \"1. MoLEX introduces additional computation costs in both training and evaluation. According to Equation 4, the computation costs seem to be almost doubled. A thorough comparison/discussion of the costs before and after applying MoLEX to PEFT, in both training and evaluation, has to be provided in this case for readers to better evaluate the computation-performance trade-off brought by this method. This is currently missing and only very briefly mentioned in Section 4.2.\", \"suggestions\": \"1. The information provided in Figure 1, the main figure, is somewhat limited. I suggest the authors extend this figure to accurately illustrate the mechanism of MoLEX, e.g., showing the different experts and the selection process.\\n2. Some sentences like \\\"without ... any increase in effective parameter count\\\" could be misleading as the proposed method actually introduces additional trainable parameters.\\n3. 
Any potential explanations for MoLEX having a notably larger improvement in \\\"zero-shot transfer learning\\\" (Table 3) than in the ID accuracy cases (Table 1)?\", \"reason_for_giving_a_higher_score\": \"Empirical evidence for performance improvement.\", \"reason_for_giving_a_lower_score\": \"A computational cost analysis has to be provided and discussed.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes to use a mixture of layers and conditionally select layers within a pre-trained model during fine-tuning. The paper has been positively received by all reviewers. We encourage the authors to carefully consider the reviewers' comments and suggestions when preparing the final version.\"}",
"{\"summary\": \"This paper proposes a parameter-efficient fine-tuning technique, Mixture of Layer Experts (MoLEx), which treats layers as the experts of a conventional MoE model and trains a router to selectively combine them together with skip connections.\", \"strengths_and_weaknesses\": \"Pros:\\n\\n1. The proposed method achieves consistent improvement over the established LoRA method, making the new technique valuable.\\n\\n2. Theoretical analyses are provided to guarantee the robustness of the proposed optimization.\", \"cons\": \"1. The experiments could be substantially improved. (1) Only LoRA is used as the baseline and other parameter-efficient tuning methods are ignored. (2) All the experiments (including those in the appendices) are conducted on very small-scale models. Considering that the proposed technique actually needs much more computation, I doubt that MoLEx can maintain a running time similar to LoRA's when the base model scales up.\\n\\n2. I am not sure if the paper perfectly fits the target of this workshop. Although using a design motivated by MoE, MoLEx actually enhances the coupling of language model modules by connecting different layers together and is definitely not encouraging the modularity of language models.\", \"suggestions\": \"The formal description of the proposed method is unnecessarily complicated to me considering that the proposed method is rather straightforward. I would recommend saving the space and moving more experiments from the appendices, especially the efficiency analyses, into the main content.\", \"reason_for_giving_a_higher_score\": \"Theoretical analyses are impressive.\", \"reason_for_giving_a_lower_score\": \"Does not fit the topic of this workshop very well.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}",
"{\"summary\": \"This paper proposes a parameter-efficient fine-tuning (PEFT) method that leverages the Mixture-of-Experts (MoE) architecture on top of a dense model using sparse upcycling. However, unlike standard MoE, MoLEx uses layers paired with PEFT adapters as experts. The paper presents results where Roberta or Llama (in the appendix) models are used as the backbone LLM, and evaluations include natural language understanding (GLUE benchmark) and generation tasks (E2E NLG Challenge). Furthermore, the paper provides a theoretical perspective and layer-wise feature analysis for the proposed method.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The proposed method is very interesting and includes a novel approach. It has some connections with the Mixture of Depths (https://arxiv.org/abs/2404.02258) paper. \\n2. The paper presents a decent body of experiments and analysis, showing promising results for the proposed method. \\n3. The theoretical perspective supports the robustness claims. \\n4. Beyond the main pages, there are additional experiments with larger model sizes like Llama-3.2-1B on Alpaca eval.\", \"weaknesses\": \"1. The results from the Roberta-based models, in particular, are not conclusive since the improvements are commonly within the error margin. \\n2. Only one LoRA rank has been used for comparison. To validate the robustness of the method, multiple LoRA ranks should be used and compared with naive LoRA. \\n3. What is mixed through MoLEx gating needs a clearer description in Section 2. From equation (4) and the introduction, I understand that the layer outputs from different layers are mixed. However, it is not clear whether the same input passes through different layers (to mix their outputs) or not.\", \"suggestions\": \"1. Regarding the connection with the Mixture of Depths paper, it would be nice to explain the architectural differences.\\n\\n2. 
Since the paper proposes a type of parameter-efficient mixture-of-experts method, an experimental comparison with other MoE-LoRA methods such as MoLoRA (https://arxiv.org/abs/2309.05444) and MoLE (https://arxiv.org/abs/2404.13628) would be good for the paper. \\n\\n3. It would also be nice to include full fine-tuning results in the comparison to see how close the proposed method is to the \\\"upper bound\\\" of the results.\", \"reason_for_giving_a_higher_score\": \"As a reason for a high score, I refer to the \\\"strengths\\\" of the paper mentioned above.\", \"reason_for_giving_a_lower_score\": \"As a reason for a low score, I refer to the \\\"weaknesses\\\" of the paper mentioned above.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}"
]
} |
hDfD4HcDo0 | Rethinking Decentralized Learning: Towards More Realistic Evaluations with a Metadata-Agnostic Approach | [
"Tianyu Zhang",
"Lu Li",
"Tongtian Zhu",
"Suyuchen Wang",
"Can Wang",
"Yong Chen"
] | Decentralized learning has been regarded as a privacy-preserving training paradigm that enables distributed model training without exposing raw data. However, many experimental settings in decentralized learning research assume metadata awareness among participants, which contradicts real-world constraints where participants lack shared metadata knowledge. We distinguish between Metadata-Dependent Supervised Learning (MDSL), which assumes global metadata synchronization, and Metadata-Agnostic Zero-Shot Learning (MAZEL), where participants do not share metadata. Our contributions are (1) highlight the difference between MAZEL and MDSL; (2) present empirical evidence demonstrating that long-held claims of MDSL-based decentralized learning may not hold under MAZEL settings; (3) provide benchmarks using up to 8–16 diverse datasets to rigorously evaluate newly proposed decentralized methods under real metadata-agnostic cases; and (4) propose two-stage and cosine gossip schedulers to optimize communication efficiency. Our code is available at: https://anonymous.4open.science/r/More-Realistic-Evaluations. | [
"Decentralized Learning",
"Metadata-Agnostic"
] | Accept | https://openreview.net/pdf?id=hDfD4HcDo0 | https://openreview.net/forum?id=hDfD4HcDo0 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"vtARDgmD4Q",
"j7bCUdznVm",
"cMga0xHy2o",
"BnhNGzr8TB"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740842855345,
1740502007325,
1741226298569,
1740673162945
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission5/Reviewer_p49a"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission5/Reviewer_sb3X"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission5/Reviewer_Edmg"
]
],
"structured_content_str": [
"{\"summary\": \"The paper critiques current decentralized learning evaluations that assume shared metadata and introduces Metadata-Agnostic Zero-Shot Learning (MAZEL), a more realistic setting where nodes lack metadata synchronization. Empirical results show that long-standing claims about poor generalization and slow convergence do not hold under MAZEL. The paper benchmarks decentralized methods on 8\\u201316 diverse datasets and proposes new gossip schedulers to improve communication efficiency.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. Introduces a realistic, privacy-preserving decentralized learning framework.\\n2. Provides empirical benchmarks with diverse datasets and new communication strategies.\\n3. Challenges existing assumptions and shows that decentralized models generalize well under MAZEL.\", \"weaknesses\": \"1. Relies on CLIP embeddings, which may limit applicability to non-image tasks.\\n2. The computational overhead of metadata-agnostic approaches is not fully analyzed.\", \"suggestions\": \"Refer to the weaknesses.\", \"reason_for_giving_a_higher_score\": \"Well-written paper with a clear motivation that showcases depth of understanding.\", \"reason_for_giving_a_lower_score\": \".\", \"rating\": \"8\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper challenges the metadata awareness used in decentralized learning evaluation, i.e., simulating various non-IID degrees of data distribution through Dirichlet functions. The authors argue that such an evaluation tactic assumes access to the total number of classes as shared global information, breaking privacy constraints. To address this challenge, a new experimental setting called MAZEL (Metadata-Agnostic Zero-Shot Learning) is proposed. This setting relies on image/dataset captioning at each client through a CLIP-based model. The similarity between the predicted embedding and stored class embeddings determines the predicted class.\", \"strengths_and_weaknesses\": \"Strengths:\\n\\n1. The paper is well-structured and the literature review is quite thorough.\\n\\n2. Evaluation is done across a range of datasets.\", \"weaknesses\": \"1. My major issue with this paper is the fundamental assumption in decentralized/federated learning that the authors challenge. The degree of non-IID data (represented by $\\\\alpha$ in most papers) is used for evaluation, but this technique is in no way part of the training loop. For example, state-of-the-art decentralized algorithms tackling data heterogeneity [1-3] do not base their approach on the degree of heterogeneity. $\\\\alpha$ is an evaluation tactic, and even if one is not aware of it, it is still possible to evaluate; this just provides finer control and a more structured evaluation. I do not think that this breaks data privacy.\\n\\n2. The results show that the technique proposed in this paper, MAZEL, performs better than MDSL. However, the reasoning behind it is lacking, and the reader is left to figure that out themselves. After spending some time, I think it performs better because embeddings are a form of soft label compared to the hard labels used in the traditional MDSL setup. 
I may be wrong, but again, the paper doesn't help me understand this.\", \"suggestions\": \"Overall, I believe the motivation is not convincing, but this technique can be explored in its own light, given how well it performs. I would strongly encourage the authors to spend some time explaining their results better for future submissions to other venues.\", \"reason_for_giving_a_higher_score\": \"--\", \"reason_for_giving_a_lower_score\": \"1. The motivation is not well-grounded.\\n\\n2. The results need to be explained better through some qualitative analysis or arguments.\", \"rating\": \"4\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"The paper critiques current decentralized learning evaluations that assume shared metadata by highlighting a discrepancy between research settings and real-world constraints. The paper received scores with high variance. We suggest the authors incorporate the comments and suggestions from reviewer sb3X to strengthen the paper. The paper seems relevant to the topic of decentralized training. Overall, we recommend accepting this work to the workshop.\"}",
"{\"summary\": \"This paper challenges current experimental approaches in decentralized learning research by highlighting a discrepancy between research settings and real-world constraints. The authors identify two distinct paradigms:\\n\\nMetadata-Dependent Supervised Learning (MDSL): The conventional approach where participants share metadata (like class labels) across sites, typically using datasets like CIFAR-100 partitioned with Dirichlet distributions.\\nMetadata-Agnostic Zero-Shot Learning (MAZEL): A proposed approach that better reflects real-world privacy constraints, where participants cannot share metadata across sites. Under MAZEL, local models generalize well to global test sets, contradicting conventional wisdom from MDSL settings. MAZEL settings show faster convergence in terms of Average Local Accuracy (ALA), Average Global Accuracy (AGA), and Gossip Gain compared to MDSL. Different gossip scheduling strategies significantly impact model performance, with early communication being more beneficial than later-stage communication. 
The authors argue that MAZEL provides a more realistic evaluation framework for decentralized learning that better aligns with real-world privacy constraints, and they encourage researchers to adopt this framework for future evaluations.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The authors provide extensive experiments across multiple datasets, models, and settings to support their claims, comparing MDSL and MAZEL approaches directly.\", \"The proposed gossip schedulers (Two-Stage and Cosine) offer concrete solutions to optimize communication efficiency in decentralized learning scenarios.\", \"Weaknesses\", \"While the empirical results are strong, the paper lacks deeper theoretical analysis explaining why MAZEL settings lead to better generalization and faster convergence than MDSL.\"], \"suggestions\": \"Expand the Theoretical Foundation: Develop a theoretical framework that explains why MAZEL settings lead to better generalization and faster convergence than MDSL. This would strengthen your empirical findings and provide deeper insights for the research community.\", \"scale_the_experiments\": \"Test your approach on larger networks (e.g., 50+ sites) to demonstrate scalability.\", \"reason_for_giving_a_higher_score\": [\"The paper introduces a significant paradigm shift in evaluating decentralized learning systems, addressing a genuine gap between research practices and real-world constraints.\", \"The paper goes beyond criticism by offering concrete solutions (MAZEL framework and gossip schedulers) that can improve decentralized learning in practice.\", \"The experimental design is thorough and considers multiple models, datasets, and training configurations to support their claims.\"], \"reason_for_giving_a_lower_score\": [\"The paper relies heavily on empirical results without providing adequate theoretical explanations for why MAZEL outperforms MDSL in key metrics.\", \"While the paper presents a new evaluation framework, the technical 
contributions (gossip schedulers) could be viewed as incremental.\", \"The experiments are limited to visual classification tasks with CLIP models and don't demonstrate broader applicability to other domains or model types.\"], \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"3\"}"
]
} |
fn2U1VYfQ5 | Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models | [
"Weixin Liang",
"LILI YU",
"Liang Luo",
"Srini Iyer",
"Ning Dong",
"Chunting Zhou",
"Gargi Ghosh",
"Mike Lewis",
"Luke Zettlemoyer",
"Xi Victoria Lin"
] | The development of large language models (LLMs) has expanded to multi-modal systems capable of processing text, images, and speech within a unified framework. Training these models demands significantly larger datasets and computational resources compared to text-only LLMs. To address the scaling challenges, we introduce Mixture-of-Transformers (MoT), a sparse multi-modal transformer architecture that significantly reduces pretraining computational costs. MoT decouples non-embedding parameters of the model by modality -- including feed-forward networks, attention matrices, and layer normalization -- enabling modality-specific processing with global self-attention over the full input sequence. We evaluate MoT across multiple settings and model scales. In the Chameleon 7B setting (autoregressive text-and-image generation), MoT matches the dense baseline's performance using only 55.8% of the FLOPs. When extended to include speech, MoT reaches speech performance comparable to the dense baseline with only 37.2% of the FLOPs. In the Transfusion setting, where text and image are trained with different objectives, a 7B MoT model matches the image modality performance of the dense baseline with one third of the FLOPs, and a 760M MoT model outperforms a 1.4B dense baseline across key image generation metrics. System profiling further highlights MoT's practical benefits, achieving dense baseline image quality in 47.2% of the wall-clock time and text quality in 75.6% of the wall-clock time (measured on AWS p4de.24xlarge instances with NVIDIA A100 GPUs). | [
"Sparse architecture",
"Efficient deep architecture",
"Multi-modal foundation models",
"Mixture-of-Experts",
"Transformer"
] | Accept | https://openreview.net/pdf?id=fn2U1VYfQ5 | https://openreview.net/forum?id=fn2U1VYfQ5 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"o1qTdIJYn6",
"LARvv0eTFm",
"L4xpVtRvvk",
"Gql4PiKl8U",
"FH6TpQdo5e"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740606863627,
1741088384457,
1741226298073,
1740727596692,
1741078034660
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission1/Reviewer_NNEU"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission1/Reviewer_ZW75"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission1/Reviewer_1Ff9"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission1/Reviewer_jTYM"
]
],
"structured_content_str": [
"{\"summary\": \"The paper proposes Mixture-of-Transformers, a sparse multi-modal transformer architecture that decouples non-embedding parameters of the model by modality. This design allows modality\\u2010specific processing while maintaining global self-attention, leading to significant reductions in training FLOPs and wall\\u2010clock time compared to dense and MoE baselines across text, image, and speech.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. Innovative use of modality-specific parameter decoupling to reduce computational cost.\\n2. Experiments across multiple settings consistently show MoT matching or outperforming dense models and a 4-expert MoE baseline.\\n3. The PCA of the latent feature space (Figure 3b) reveals natural clustering by modality, which validates the design choice of using modality-specific parameters\", \"weaknesses\": \"1. The paper would benefit from deeper theoretical insights into why decoupling these specific parameters improves training dynamics and cross-modal interactions.\\n2. More discussion is needed on how hyperparameter choices (such as the degree of sparsity or regularization parameters) affect model stability and convergence, as well as potential trade-offs in robustness.\", \"suggestions\": \"This paper could benefit from a deeper theoretical explanation of why the decoupling strategy leads to such notable efficiency gains.\\n\\nAlso, provide more details on hyperparameter tuning and stability analysis\", \"reason_for_giving_a_higher_score\": \"The strong empirical results and significant computational savings support a higher score.\", \"reason_for_giving_a_lower_score\": \"Limited theoretical discussion\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The submission introduces **Mixture-of-Transformers (MoT)**, *a sparse multi-modal transformer architecture* designed for *efficient training of foundation models* across text, image, and speech modalities. MoT decouples non-embedding parameters (feed-forward networks, attention projection matrices, and layer normalization) by modality while maintaining a unified global self-attention mechanism over interleaved tokens.\", \"the_paper_evaluates_the_architecture_in_several_settings\": \"in the *Chameleon setting*, a 7B MoT model achieves comparable performance to a dense baseline using only \\\\~56% of the FLOPs for text and image generation ; in a *Chameleon+Speech setting*, the model integrates speech effectively, achieving similar performance with only \\\\~37% of the FLOPs for the speech modality ; in the *Transfusion setting*, combining autoregressive objectives for text with diffusion-based objectives for images, a 760M MoT model outperforms a 1.4B dense baseline on metrics such as FID, CLIP score, and CIDEr score.\\n\\nThe study provides detailed training loss curves, step matching analyses, and wall-clock time comparisons to validate its efficiency improvements. 
Future work is suggested in exploring hybrid architectures and further scaling.\", \"strengths_and_weaknesses\": [\"**Strengths :**\", \"*Efficiency :* The architecture significantly reduces FLOPs and training time while maintaining or improving performance across modalities.\", \"*Modular design :* Decoupling parameters by modality enables tailored processing for each data type while preserving cross-modal interactions through global self-attention.\", \"*Comprehensive evaluation :* Extensive experiments compare MoT with both dense transformers and traditional Mixture-of-Experts approaches across various scales and objectives.\", \"**Weaknesses :**\", \"*Incremental novelty :* The approach is largely an adaptation of established Mixture-of-Experts techniques to a multi-modal context, which may weaken the conceptual innovation.\", \"*Training complexity :* Additional hyperparameters and modality-specific tuning requirements introduce complexity that could impact reproducibility.\"], \"suggestions\": [\"*Ablation studies :* Include experiments isolating the impact of each modality-specific component (e.g., feed-forward networks, attention projections, layer normalization) to clarify their individual contributions.\", \"*Parameter sharing :* It could be interesting to explore strategies for partial parameter sharing to reduce overall memory footprint while preserving modality-specific advantages.\", \"Expanded evaluations :* Eventually extend the evaluation to include additional real-world datasets and diverse scenarios to better assess the model\\u2019s robustness and generalization.\"], \"reason_for_giving_a_higher_score\": \"The paper presents a well-engineered solution that demonstrates significant computational efficiency improvements for multi-modal training, supported by thorough empirical validation and promising directions for future work.\", \"reason_for_giving_a_lower_score\": \"The contribution is mostly an incremental extension of known 
Mixture-of-Experts methods to multi-modal settings.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposes a mixture of transformers, where each expert is specialized per modality. More modularity is a relevant topic to this workshop, and all reviewers recommend acceptance, therefore we're pleased to accept this work to the workshop.\"}",
"{\"summary\": \"The work presents a compelling approach to addressing the computational challenges of multi-modal pretraining through modality-specific parameter decoupling. A key strength lies in its ability to maintain global self-attention, preserving cross-modal interactions while achieving substantial efficiency gains\\u2014evidenced by consistent FLOP reductions and wall-clock time savings across diverse tasks and scales. The experimental validation is thorough, covering autoregressive and diffusion-based objectives, multiple modalities (text, image, speech), and comparisons to both dense and MoE baselines.\", \"strengths_and_weaknesses\": \"Strengths\\n1.\\tInnovative Architecture: Modality-specific parameter decoupling avoids MoE\\u2019s load imbalance and training instability. Global self-attention preserves cross-modal interactions for interleaved inputs (text, image, speech).\\n2.\\tStrong Empirical Results: A) Consistent efficiency gains across multiple modalities (text, image, speech) and training objectives (autoregressive, diffusion). B) Outperforms MoE-4x baselines in non-text modalities, with larger wall-clock time advantages.\\n3.\\tPractical Impact: System-level measurements (A100 GPU) validate real-world training efficiency. Hybrid MoT+MoE experiments show promise for combining sparse architectures.\", \"weaknesses\": \"1.\\tExperiments focus on text, image, and speech; extension to video, or complex cross-modal tasks (e.g., any-to-any QA) is untested.\\n2.\\tParameter Management Overhead: Modality-specific parameters may complicate deployment as the number of modalities grows\\n3.\\t(Minor) Missing evaluation of inference latency, memory, or throughput, which is critical for deployment.\", \"suggestions\": \"Please see the weaknesses\", \"reason_for_giving_a_higher_score\": \"Please see the strengths\", \"reason_for_giving_a_lower_score\": \"Please see the weaknesses\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"This paper aims to accelerate multi-modal model pretraining by introducing Mixture-of-Transformers (MoT), a sparse and scalable transformer architecture. MoT partitions non-embedding parameters by modality while utilizing global self-attention to maintain cross-modal interactions, thereby improving training efficiency without sacrificing multi-modal capabilities. Experimental results demonstrate that training with MoT significantly reduces FLOPS and wall-clock time compared to using dense models across various multi-modal settings, including text-image and text-image-speech models.\", \"strengths_and_weaknesses\": [\"# Strengths\", \"The paper, especially the introduction, is well-written and easy to follow.\", \"The study includes extensive evaluations across multiple multi-modal tasks and model sizes. The architecture demonstrates efficiency gains compared with dense models across different model scales, from small (37M) to large (7B) models.\", \"# Weaknesses\", \"Many crucial results and conclusions are only presented in the appendix rather than in the main text. This weakens the paper\\u2019s impact and readability.\", \"In line 158, the term \\\"modality-specific weights\\\" is mentioned but is not explicitly defined. Does this refer to attention weights, or does it encompass other components?\", \"The paper does not sufficiently discuss related works, especially in comparison to previous sparse multi-modal transformer models. Prior works have explored different strategies for multi-modal fusion, but the paper does not present how MoT differs from these approaches.\", \"While the authors extensively compare MoT against dense models, they fail to include strong baselines and architectural ablation studies. It is unclear how MoT compares to other sparse multi-modal architectures. 
For example, the choice of global self-attention over cross-attention is not well justified.\"], \"suggestions\": [\"Reorganize the paper structure to highlight key experimental results in the main text rather than relegating them to the appendix.\", \"Expand the related work section to include a broader discussion of previous approaches, particularly sparse multi-modal transformers.\", \"Clarify the definition of modality-specific weights in the architecture.\", \"Compare MoT to other sparse multi-modal architectures, not just dense models. For example, examine the difference between global self-attention and cross-attention to justify its design choices\", \"Provide ablation studies on MoT\\u2019s architecture, including the effect of varying sparsity levels across different modalities and layers. This might provide deeper insights into the trade-offs between performance and computational efficiency.\"], \"reason_for_giving_a_higher_score\": \"None\", \"reason_for_giving_a_lower_score\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}"
]
} |
fb0yjiF7CV | Federated Circuits: A Unified Framework for Scalable and Efficient Federated Learning | [
"Jonas Seng",
"Florian Peter Busch",
"Pooja Prasad",
"Devendra Singh Dhami",
"Martin Mundt",
"Kristian Kersting"
] | Probabilistic circuits (PCs) enable us to represent joint distributions over a set of random variables and can be seen as hierarchical mixture models. This representation allows for various probabilistic queries to be answered in tractable time. However, the properties of PCs so far have only been explored in the realm of tractable probabilistic modeling. In this work, we unveil a deep connection between PCs and federated learning (FL), leading to federated circuits (FCs)---a novel, flexible, modular, and communication-efficient federated learning (FL) framework that unifies for the first time horizontal, vertical, and hybrid FL in one framework by re-framing FL as a density estimation problem over distributed datasets. Also, FCs allow us to scale \textit{tractable} probabilistic models (PCs) to large-scale datasets by recursively partitioning datasets and the model itself across a distributed learning environment. We empirically demonstrate FC's versatility in handling horizontal, vertical, and hybrid FL within a unified framework on multiple classification tasks. Further, we demonstrate FCs' capabilities to scale PCs to large-scale datasets on various real-world image datasets. | [
"federated learning",
"probabilistic circuits"
] | Accept | https://openreview.net/pdf?id=fb0yjiF7CV | https://openreview.net/forum?id=fb0yjiF7CV | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"cPdiry1hRN",
"E0hSTXARBY",
"4E4RoYSW5t",
"30ptbmLhh6"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740700220529,
1741226298236,
1740694599456,
1741032925866
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission18/Reviewer_2rae"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission18/Reviewer_A9Bw"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission18/Reviewer_Jyoh"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces Federated Circuits (FCs), a unified framework for federated learning that leverages the semantics of probabilistic circuits (PCs) to jointly address horizontal, vertical, and hybrid federated learning (FL) settings. By re-framing FL as a density estimation task, the authors propose a novel approach that builds modular, communication\\u2010efficient models\\u2014termed FedPCs\\u2014through the use of sum nodes (for aggregating client-specific distributions) and product nodes (for integrating disjoint feature spaces). A key contribution is the design of a one\\u2010pass training algorithm that significantly reduces communication overhead while scaling up the expressivity of PCs across distributed datasets. The paper supports its claims with extensive experiments on both large-scale image datasets (e.g., Imagenet, CelebA) and tabular datasets (e.g., credit, medical, income), comparing against strong baselines such as EiNets, PyJuice, FedAvg, SplitNN, and FedTree.\", \"strengths_and_weaknesses\": [\"**Strengths:**\", \"Novel Concept: The paper presents an innovative idea by linking the semantics of probabilistic circuits to federated learning, thereby offering a unified framework for multiple FL settings.\", \"Unified Approach: FCs elegantly handle horizontal, vertical, and hybrid FL within a single framework, which could simplify and generalize current FL methodologies.\", \"Communication Efficiency: The one-pass training algorithm significantly reduces communication overhead\\u2014a key advantage in federated scenarios.\", \"Extensive Empirical Evaluation: Experiments on both image and tabular data are thorough, demonstrating scalability and performance gains.\", \"Theoretical Grounding: The paper provides a solid theoretical basis by leveraging properties of PCs and by analyzing communication costs.\", \"**Weaknesses:**\", \"Assumptions: The approach relies on modeling assumptions (e.g., mixture marginals and cluster 
independence) that could benefit from further discussion regarding their practical validity.\", \"Comparative Analysis: Broader comparisons with more recent state-of-the-art FL methods could help position the contribution more clearly within the literature.\", \"Ablation Studies: While extensive experiments are presented, additional ablation studies (e.g., on the effect of the number of clients, sensitivity to hyperparameters) would strengthen the evaluation.\", \"Scalability to Heterogeneous Data: It remains to be seen how robust the method is when faced with highly heterogeneous client data distributions or when scaling to an even larger number of clients.\"], \"suggestions\": [\"Scalability to Massive Client Numbers: Evaluate the framework in settings with thousands of clients to assess communication overhead, model aggregation challenges, and robustness to client dropouts or asynchronous updates.\", \"Handling Extreme Non-IID Data: Extend experiments and analysis to include more severe non-IID scenarios. Consider introducing simulated heterogeneity or using real-world federated datasets to examine convergence, stability, and performance degradation.\", \"Clarification of Assumptions: Provide a more detailed discussion of the underlying assumptions (mixture marginals and cluster independence), including potential limitations when these assumptions are violated\"], \"reason_for_giving_a_higher_score\": \"The paper makes a significant conceptual contribution by unifying disparate FL settings under a single framework and provides strong empirical evidence to support its claims. 
The approach is innovative, and the communication efficiency improvements, coupled with solid experimental results, make it a promising direction for scalable FL.\", \"reason_for_giving_a_lower_score\": \"The framework, though innovative, is limited by its evaluation on a small number of clients and lacks thorough analysis of performance under extreme non-IID conditions.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes to combine probabilitic circuits and federated learning, and modelize a distributed optimization as a density estimation over distributed dataset, a relevant topic to this workshop. All reviewers recommend acceptance and we're pleased to accept this work to the workshop.\"}",
"{\"summary\": \"This paper introduces Federated Circuits (FCs) and Federated Probabilistic Circuits (FedPCs) as a novel FL framework flexible to diverse kind of FL scenarios (horizontal, vertal, hybrid) which promises increased communication efficiency and parallelization in FL.\\nBy showing the similarities between data distribution in various FL settings and concepts in PCs, authors devise a training procedure to learn FCs via a FL procedure.\", \"strengths_and_weaknesses\": [\"Strenghts:\", \"The formalization appears sound, and the parallelism with PCs is mostly clear. The approach seems novel and interesting to me.\", \"The experiments seem convincing in showing the potential of this novel approach\", \"The paper is carefully written and the code is already open source. The discussion in the supplementary is useful and well presented\"], \"weaknesses\": [\"While the effort in writing the manuscript is clear, the proposed is not very clear, in particular for readers who don't know about probabilistic circuits. As a consequence, while the general high level idea is conveyed, the detail of what happens during client training and what it is sent over the network are not clear. 
In particular, while authors claim that the proposed method is compliant with privacy requirements of FL, this matter is not explained.\", \"Relationship with classical approaches is not discussed: this again bears on clarity, because it is not clear if, in a given scenario (let's say horizontal FL), this approach has advantages related to model quality.\", \"It is not clear which kinds of problems this approach can work with: authors presented classification and density estimation, but the paper would benefit from a discussion of pros/cons of the proposed approach\"], \"suggestions\": [\"Suggestions:\", \"Invest more in explanations regarding pros/cons of the algorithm, its applicability and relationship with predominant approaches\", \"Explain in detail what computations clients do, what they exchange and why it is still privacy preserving\", \"Please explain in more detail the practical meaning of the assumptions used and how they relate to standard assumptions\", \"Consider providing a proof about the effect of data heterogeneity: if my understanding is correct, it should be possible to show that FCs are not affected by data heterogeneity, offering a substantial advantage over standard algorithms that require advanced techniques to handle heterogeneity\"], \"reason_for_giving_a_higher_score\": \"Interesting approach, very different from common ones\", \"reason_for_giving_a_lower_score\": \"Some crucial parts are unclear, and this partially impedes reviewing the work.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The submission titled \\\"Federated Circuits: A Unified Framework for Scalable and Efficient Federated Learning\\\" introduces Federated Circuits (FCs), a framework that unifies horizontal, vertical, and hybrid federated learning (FL) by treating FL as a density estimation problem over distributed datasets. The authors claim that FCs enable scalable and communication-efficient learning. The submission includes extensive experimental results showing FCs' performance on multiple tasks compared to existing methods. It also proposes a one-pass training scheme for Federated Probabilistic Circuits (FedPCs).\", \"strengths_and_weaknesses\": [\"Strengths:\", \"Unified Framework: The paper introduces a novel framework that unifies horizontal, vertical, and hybrid federated learning (FL), which to the best of my knowledge is a significant advancement in the field.\", \"Empirical Validation: The paper provides thorough experimental results that show FCs outperform existing methods on large-scale density estimation tasks and achieve competitive results on classification tasks.\", \"Public Availability: The authors have made the code publicly available, promoting transparency and reproducibility in research.\"], \"weakness\": [\"Complexity: The proposed framework and training scheme are complex, which might pose challenges for implementation and understanding by practitioners who are not familiar with probabilistic circuits or federated learning.\"], \"suggestions\": \"Although the paper does not have a dedicated limitations section, it does implicitly discuss limitations in other sections. It would be beneficial to make this discussion more explicit. Similarly, future work is briefly mentioned, but it could be more directly discussed.\\n\\nThe language used is clear and correctly spelled throughout the document, but some phrases could be improved for readability. 
Software like Grammarly could help identify and reformulate such phrases.\", \"reason_for_giving_a_higher_score\": \"The paper presents an innovative and unified framework for federated learning. It also includes a thorough empirical validation.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"7\", \"confidence\": \"2\", \"workshop_fit\": \"5\"}"
]
} |
cizhOu3CZa | ReMod: Learning Structured Sparsity with ReLU Modulation | [
"Wenbo Zhang",
"Xiang Ren"
] | Large language models demand substantial computational resources for training and inference. Leveraging contextual sparsity to convert dense modules into sparsely computed Mixture of Experts (MoE) offers a promising solution, but existing methods face challenges in effectively partitioning modules and handling abrupt, non-differentiable changes during conversion. We introduce ReMod (ReLU Modulation), which creates sparsity smoothly and differentiably while integrating clustering directly into training. Our method trains a small ReLU-gated modulator that scales hidden states to sparsify computation, then clusters modulator weights to create structured sparsity with optimized hardware utilization. When applied to MLPs and Attention projections in Bert-base, ReMod reduces inference FLOPs by up to 93% while maintaining comparable accuracy—significantly outperforming previous approaches. | [
"inference efficiency",
"sparsity",
"sparsification",
"mixture of experts",
"moefication",
"conditional computation",
"dynamic neural network",
"modularity",
"large language models"
] | Accept | https://openreview.net/pdf?id=cizhOu3CZa | https://openreview.net/forum?id=cizhOu3CZa | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"th05rzrGJX",
"dDXJwEeCdp",
"YuDhTmbS2d",
"LpvBPSKmJz"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740295409561,
1740780345006,
1740701086013,
1741226299457
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission51/Reviewer_4qoA"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission51/Reviewer_TDfP"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission51/Reviewer_4HsL"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"This paper presents ReLU Modulation as a novel approach to introduce sparsity and convert dense neural network modules into Mixture of Experts (MoE) post hoc. Unlike existing Moefication methods, which rely on predicting activation sparsity within MLP modules, ReM directly trains a modulator gated by ReLU to sparsify hidden states. This approach generalizes beyond MLPs, applies to linear layers and attention mechanisms, and achieves significant FLOP reductions (up to 93% in inference) while maintaining accuracy with minimal retraining cost.\", \"strengths_and_weaknesses\": \"Pros of the paper:\\n1. Unlike existing Moefication techniques that rely on predicting activation sparsity, ReM directly introduces sparsity through a modulator trained alongside the model.This eliminates the need for ReLUfication of non-ReLU models, making it applicable to a broader range of architectures. \\n\\n2. Prior Moefication methods were mainly applicable to MLPs since they depended on activation sparsity in feedforward layers. ReM removes this limitation and is successfully applied to linear layers and attention mechanisms,making it more versatile.\\n\\n3. Achieves up to 93% FLOP reduction in inference, which is significantly higher than previous MoEfication methods like D2DMoE (62.6% FLOP reduction). Also, instead of sparsifying at the neuron level (which could introduce irregular sparsity), ReM clusters neurons into groups. This makes it easier to convert the dense model into a structured MoE, which is better suited for parallel execution on GPUs\", \"areas_for_improvement\": \"1. The paper introduces a sparsity loss using an Lp norm with p=0.5, but no theoretical explanation is provided on why this particular value was chosen.\\n\\n2. All experiments are only conducted on BERT-base. The method should be tested on decoder-only models (GPT-style transformers) or vision models (ResNet, Swin Transformer) to confirm generalizability. 
It\\u2019s unclear how well ReM would work for models with highly structured activation patterns.\\n\\n3. Theoretically, 93% FLOP reduction should result in more speedup, but the actual speedup is only 2.46x in wall time. The paper attributes this to indexing overhead, attention computations, and layer normalization costs, but further optimization could be done to improve practical gains. \\n\\n4. The paper should compare ReM not only against the dense BERT-base model but also against other optimization techniques beyond just Moefication methods like D2DMoE. Moefication is not the only way to achieve efficiency. Without these baselines, it\\u2019s hard to fully assess whether ReM is the best choice for optimizing inference efficiency.\", \"suggestions\": \"Mentioned in the above section\", \"reason_for_giving_a_higher_score\": \"NA\", \"reason_for_giving_a_lower_score\": \"1. Promising method, but evaluation is too limited.\\n2. Practical speedup does not fully match theoretical FLOP reduction.\\n3. Needs comparisons with other sparsification techniques.\\n4. Would be stronger if evaluated on more model architectures (e.g., decoder models, CNNs).\", \"rating\": \"5\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper develops a new sparsification technique for models which involves using fine-tuned and clustered ReLU modules to induce sparsity patterns that can be well accelerated on modern hardware, such as GPUs.\", \"strengths_and_weaknesses\": \"Strengths:\\nThe idea is simple and compelling.\", \"weaknesses\": \"The experimental setup is on just one dataset that is relatively uncommonly used and only for a BERT model. The benchmark stems from some previous work. In itself, this result makes it difficult to establish if this method works or not. There are too many confounding factors to say how well this method does.\", \"suggestions\": \"For BERT in particular running on the RoBERTa fine-tuning suite is a good start, but BERT models are known to be easily compressible. I would also suggest to try this technique on LLMs. The baseline for other sparsification papers often demonstrate good quality compression and sparsification ratios on models like Llama 2 or 3 while using the LLM eval harness to get results for various datasets. I would recommend going this approach to verify the authors novel method.\", \"reason_for_giving_a_higher_score\": \"I think it is a neat and simple idea which might foster some discussion at the workshop.\", \"reason_for_giving_a_lower_score\": \"I think currently the empirical evidence for this method is to thin to discuss it meaningfully. The idea itself is valuable, but more experiments would need to be run to discuss if this is a method that other researchers can build on. I think for a workshop this is okay though.\", \"rating\": \"6\", \"confidence\": \"5\", \"workshop_fit\": \"2\"}",
"{\"summary\": \"The paper proposes ReM, which aims to MoEfiy the model, to achieve better computational efficiency. Compared to previous approaches that have different components for MoEfications and RELUfication, ReM has one router training that includes all the steps. Experiments are BERT model showing that the method can achieve competitive performance while using fewer FLOPs.\", \"strengths_and_weaknesses\": \"[Strength]\\n1. The experimental results on BERT show the token efficiency and the computation efficiency of the approach.\\n \\n[Weakness]\\n1. In Iine 116, the paper writes \\\"D2DMoE had to replace the attention projections with MLPs to convert them into MoE\\\". I assume the attention projections mean the `v_proj` and `o_proj` (value projection and output projection) in transformer architecture, and aren't they already forming as MLP layers?\\n\\n2. I currently don't get why the paper uses clustering to cluster and merge the weights in modulators. My understanding of using clustering in previous approaches is that: we want to group the weights that are often activated together for the inputs so that we use clustering to find the groups and then only need to use one group, which is a subset of weights (denoting experts), at one time. The only thing we need is to label from clustering. But this paper also merges the modulator's weight based on clustering results, and I am not sure what is the purpose and motivation of it. Won't this merging make the modulator's prediction more inaccurate?\", \"suggestions\": [\"It would be better to introduce the architecture and its function for the modulator more clearly before Sec 2.1 and Sec 2.2. The paper currently has no clear definition of the architecture of the modulator until Sec 3.1, so I was confused about the meaning of the output layer of the modulator in line 145 in Sec 2.2.\", \"The current experimental setup is a little bit obsolete. 
It would be great if the paper could apply the approach to more recent language models and tasks.\"], \"reason_for_giving_a_higher_score\": \"ReM achieves strong token efficiency and the computation efficiency of the approach on BERT models.\", \"reason_for_giving_a_lower_score\": \"Currently, I have some clarification questions about the details. Also, the experimental setups are obsolete.\", \"rating\": \"6\", \"confidence\": \"2\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposes a new moefication/sparsification method to improve computational efficiency. The paper has been positively received by the reviewers who found enough contribution for acceptance at the workshop. We encourage the authors to take reviewers' comments and suggestions into consideration for the final version of the paper.\"}"
]
} |
YgR8U5DSj9 | Mastering Massive Multi-Task Reinforcement Learning via Mixture-of-Expert Decision Transformer | [
"Yilun Kong",
"Guozheng Ma",
"Qi Zhao",
"Haoyu Wang",
"Li Shen",
"Xueqian Wang",
"Dacheng Tao"
] | Despite recent advancements in offline multi-task reinforcement learning (MTRL) have harnessed the powerful capabilities of the Transformer architecture, most approaches focus on a limited number of tasks, with scaling to extremely massive tasks remaining a formidable challenge. In this paper, we first revisit the key impact of task numbers on current MTRL method, and further reveal that naively expanding the parameters proves insufficient to counteract the performance degradation as the number of tasks escalates. Building upon these insights, we propose M3DT, a novel mixture-of-experts (MoE) framework that tackles task scalability by further unlocking the model’s parameter scalability. Specifically, we enhance both the architecture and the optimization of the agent, where we strengthen the Decision Transformer (DT) backbone with MoE to reduce task load on parameter subsets, and introduce a three-stage training mechanism to facilitate efficient training with optimal performance. Experimental results show that, by increasing the number of experts, M3DT not only consistently enhances its performance as model expansion on the fixed task numbers, but also exhibits remarkable task scalability, successfully extending to 160 tasks with superior performance. | [
"Multi-task reinforcement learning",
"offline reinforcement learning",
"mixture-of-expert"
] | Accept | https://openreview.net/pdf?id=YgR8U5DSj9 | https://openreview.net/forum?id=YgR8U5DSj9 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"rC11oqSgPR",
"plcFtdxULt",
"fg0Ebmivzi",
"OBGafO7ni6",
"FJ5tKAUfdu"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1741048655131,
1740778259931,
1741226298066,
1740597136761,
1740989742359
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission34/Reviewer_GWAP"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission34/Reviewer_fY6D"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission34/Reviewer_fBMt"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission34/Reviewer_tKxd"
]
],
"structured_content_str": [
"{\"summary\": \"This paper presents a novel framework for multi-task reinforcement learning (MTRL) which first trains a Prompt-DT backbone on all tasks at once for a limited amount of steps, then adds expert modules to the model which are individually trained on subsets of the tasks with the backbone kept frozen and finally trains a router on all tasks to dynamically use the different experts in a weighted fashion. Their framework scales to a very large number of tasks (160) and outperforms considered baselines for 10, 80 and 160 tasks.\", \"strengths_and_weaknesses\": [\"**Strengths:**\", \"The paper presents an innovative solution to the number of tasks scaling issues encountered in MTRL.\", \"As far as I can tell the method is novel and the results are strong.\", \"Extensive experiments on diverse benchmarks (Meta-World, DMControl, Mujoco) and detailed ablation studies reinforce the empirical claims.\", \"The three-stage training mechanism is well-motivated and addresses the inherent challenges in MoE training.\", \"**Weaknesses:**\", \"The three-stage training process, while effective, introduces additional complexity and computational overhead. I believe the method is still worth presenting even with the added complexity and computational requirements but for a full paper it would be good to discuss this overhead to increase the methods usefulness for practicians.\", \"The abstract and the introduction could benefit from a bit of polishing, there are a couple of typos (e.g. citation problem on line 43) and \\\"number of tasks\\\" is more accurate than \\\"task numbers\\\".\", \"The number of tasks and parameters scaling issues identified don't seem novel, it might be useful to go quicker over those results and more in detail in the rest of the paper.\", \"Citing more related works from the MoE or \\\"routing among experts\\\" literature would be good. (e.g. 
[Learning to Route Among Specialized Experts for Zero-Shot Generalization](https://arxiv.org/abs/2402.05859) or [Towards Modular LLMs by Building and Reusing a Library of LoRAs](https://arxiv.org/abs/2405.11157)\", \"More ablations regarding the different training stages would be interesting. Does it help to train everything together at the end? etc.\"], \"suggestions\": \"See weaknesses above.\", \"reason_for_giving_a_higher_score\": \"While the method is novel for the setting of interest, the elements comprising it are not necessarily novel individually.\", \"reason_for_giving_a_lower_score\": \"The method is well motivated, obtains good results and adequate ablations are conducted. The paper is fairly well written.\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"In this paper the authors propose M3DT, a mixture-of-experts (MoE) framework that tackles task scalability by introducing a mixture-of-experts (MoE) module that given a task picks the right expert to execute the task. Authors claim to strengthen the Decision Transformer (DT) backbone with MoE to reduce task load on parameter subsets, and introduce a three-stage training mechanism to facilitate efficient training with optimal performance. Experiments show M3DT\\u2019s superior performance compared to existing baselines, establishing its\\nstate-of-the-art effectiveness in MTRL scenarios.\", \"strengths_and_weaknesses\": \"Strengths: The authors pick a simple idea of mixture of experts to address the gradient-conflict issue in MTRL. The idea although simple works well in practice. They present a 3-step approach of training the M3DT architecture where they first train a common DT on all tasks and then iteratively fine tune each expert on subset of tasks followed by training only the router for effective routing. Combined together this approach shows effective improvement over baselines. The approach is presented neatly in the paper and is easy to follow. The experiments are diverse and extensive enough to convince me about the effectiveness of the approach.\", \"weaknesses\": \"I'd like to see more detailed study of related work. I found the section to be a bit lacking in coverage. Given the simplicity of the approach I'd like to see how it compares against more SOTA algorithms in MTRL setup that do not rely on DTs. All the baselines discussed in paper are DT-based.\", \"suggestions\": \"Mentioned in weaknesses.\", \"reason_for_giving_a_higher_score\": \"The paper fits well with the theme of this conference where they show that simply scaling the model architecture does not help with mastering variety of tasks. 
They found the root cause of why simple scaling doesn't work, i.e., gradient-conflict, address that issue by a MoE routing technique, and show its effectiveness via experiments. The paper is well written, easy to follow, and backed by experiments.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a new Mixture-of-Experts system for RL that displays superior task scalability. Being able to learn many tasks is critical for continual collaborative learning and relevant for this workshop. All reviewers recommend acceptance and we're happy to accept it to the workshop.\"}",
"{\"summary\": \"Based on their observation that multi-task scaling is challenging for monolithic (decision) transformer architecture, the authors propose a mixture-of-expert model and a three-stage training. The model has both shared parameters as well as parameters that are specific to groups of tasks (experts) and a router, at each transformer block, decides which experts to use based on the incoming representation.\\n\\nThe authors empirically demonstrate the gains of their approach through a primary study and a few additional (ablation studies) in the appendix. The proposed approach seems to scale better to more tasks (up to 160).\", \"strengths_and_weaknesses\": \"The idea of enabling a single model to solve many related tasks in an RL setting is natural and worthwhile exploring. As the authors discuss, much prior work has only studied a relatively modest number of tasks. In that sense, the author's exploration and development push the community forward. I find this to be adequate, especially for a workshop submission.\\n\\nRegarding weaknesses, I did not find anything major. Overall, the 6-page format limits a bit the type of exploration that might yield further insights. Further unpacking the results in Figure 1 could provide additional insights. I made one or two suggestions about it below. I also did not find the precise definition of the router (in addition to the depiction in Figure 2) and whether it's essential in the setup (more on that below).\", \"suggestions\": [\"The paper is relatively easy to understand, but there are typos and sentence formulations that could be improved.\", \"Relatedly, I wasn't sure about the following. Is it really an insight as it's phrased as a question? Also, did you mean to write *maximize* the number of tasks to be learned? 
\\\"Key Insight: How can we effectively minimize the number of tasks to be learned, while efficiently scaling model parameters, thereby maximizing performance?\\\"\", \"In Figure 1 (left), I don't precisely understand why gradient similarity goes down since the set of tasks is the same across experiments.\", \"Gradient conflict. The authors seemingly use conflict and similarity to denote the same concept. Previous work (e.g., [1]) defines conflict as gradients going in opposite directions (i.e., negative cosine). With that in mind, it might be worth separating conflicting gradients from \\\"less similar\\\" gradients to gain additional insights. Other authors (e.g., [2]) have also argued that the gradient magnitude has value. Again, I suggest it might be worth it for the authors to explore/discuss this in their context.\", \"I didn't understand why the random grouping did so well. You mention \\\"task load,\\\" but what does that mean precisely? This relates to my comment above about why gradient similarity is so high when you have only 10 tasks. Perhaps the model can learn a representation that is useful for all tasks.\", \"I wasn't sure of the router's purpose given the current setup (tasks are observable, and the mapping from expert to task is fixed a priori). That might constitute an interesting ablation study.\", \"In terms of baselines, it could be interesting to compare against an approach with no parameter sharing (i.e., every model is trained on a single task to learn whether a multi-task setup helps overall and on which tasks it is most (un-)helpful. I don't think this is in the appendix either, but sorry if I missed it.\", \"At the end of Section 4, you mention \\\"later experiments\\\" without any pointers (maybe you mean Fig 6 in the appendix?).\", \"Looking at Figure 3, I wonder if you have hypotheses to explain the plateau-ing effect of M3DT as you increase the number of tasks. 
I am interested in the impact of other hyperparameters (including expanding the width of the backbone and using a different clustering algorithm). I know that some of those were explored in the appendix.\", \"[1] Gradient Surgery for Multi-Task Learning, Yu et al., NeurIPS'20\", \"[2] Task-agnostic continual reinforcement learning: Gaining insights and overcoming challenges, Caccia et al., Collas'23\"], \"reason_for_giving_a_higher_score\": \"I am not an expert on recent multi-task RL work, so I might have missed some important relevant literature. This uncertainty prevents me from giving it a higher rating. I also find the paper still has some areas for improvement, even though I realize the authors only have 6 pages.\", \"reason_for_giving_a_lower_score\": \"I think this is a solid contribution that is on topic, so I suggest it should be accepted.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper tackles the challenge of scaling multi-task reinforcement learning (MTRL) to a large number of tasks. It first analyzes how performance deteriorates and gradient conflicts arise as task count increases, showing that simply enlarging shared network parameters reaches a performance plateau. To address this, the authors introduce M3DT, which integrates a mixture-of-experts (MoE) architecture within a Decision Transformer (DT) backbone. Using a routing network, it distributes tasks across multiple experts, reducing the effective task load per parameter subset. Experiments across 160 continuous control tasks demonstrate that M3DT mitigates performance degradation as task numbers grow. Ablation studies highlight the impact of explicit task grouping, a staged training strategy, and design choices such as early stopping on the backbone. The paper finally argues that modular parameter separation enables effective scaling and resolves gradient conflicts in large-scale MTRL.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The paper addresses a critical limitation of current multi-task RL approaches - scalability to a large number of tasks.\\n2. The paper introduces a novel combination of MoE and decision transformer architectures tailored for MTRL.\\n3. Extensive experiments on 160 tasks across multiple benchmark domains and detailed ablation studies support the claimed improvements over existing methods.\\n4. The proposed three-stage training method effectively leverages modularity to reduce inter-task interference.\", \"weaknesses\": \"1. MoE models can be computationally expensive, especially when scaling the number of experts. The paper does not discuss potential efficiency bottlenecks at large scales.\\n2. The three-stage training procedure and a complex architecture introduce numerous hyperparameters. The performance might be sensitive to them and better discussions on their tuning and impact would be helpful.\", \"suggestions\": \"1. 
Provide more comprehensive ablation studies on hyperparameter sensitivity\\n2. Consider experiments on additional benchmarks or real-world applications to demonstrate broader applicability.\\n3. Provide a more detailed discussion regarding the computational cost and training time overhead introduced by the MoE components and multi-stage training.\", \"reason_for_giving_a_higher_score\": \"The novel architecture combined with extensive experiments (though simulations) make it an interesting contribution that forms a strong basis for future research.\", \"reason_for_giving_a_lower_score\": \"The architecture and method might be complex to tune due to numerous components and training stages, and its scalability and real-world applicability are unclear. These should be clarified in the revision.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}"
]
} |
Y9WvvAak2D | BICEC: Attachable Classification-Based Intelligent Control for Sustainable Computer Vision Systems | [
"Jonathan Burton-Barr",
"Deepu Rajan",
"Basura Fernando"
] | Computer vision systems can employ multiple vision models to complete a single task or an array of tasks. Reasons may span from no single model being available that meets user requirements, hosting devices lacking the compute to execute a single model that contains the full required functionality, or training a new model requires extensive resources or expertise. Without intelligent input discrimination, these systems risk inefficient processing, leading to increased inference times and energy consumption. This paper investigates the impact of intelligent model activation regulation on energy efficiency and inference speed. We propose BICEC (Branched Image Classification Evaluative Controller), a lightweight solution based on a branched EfficientNetv2 architecture. BICEC adapts to existing vision systems without requiring system retraining by creating model-specific branches optimized for minimal size and near-optimal performance. Results show good performance for identifying when a model is relevant and significant reductions in system inference time and energy cost. While the scope of this work focuses on vision systems, we hope to exemplify how tighter control of AI systems can enhance sustainability and computational efficiency. | [
"Computer Vision",
"Cost-Reduction",
"Modular AI",
"Intelligent Control",
"Sustainable AI"
] | Accept | https://openreview.net/pdf?id=Y9WvvAak2D | https://openreview.net/forum?id=Y9WvvAak2D | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"wQXOuciTkK",
"sR2IKBFTpQ",
"q1jrdW5c9s",
"YCpxiwZPzp",
"CDiWW9vs7t"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1740082208784,
1741226298996,
1741079575459,
1740589130695,
1741015518455
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission20/Reviewer_TvJj"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission20/Reviewer_zmHC"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission20/Reviewer_ZPcd"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission20/Reviewer_msSw"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces BICEC (Branched Image Classification Evaluative Controller), an intelligent control mechanism designed to optimize the activation of computer vision models within multi-model AI systems. The key goal is to enhance energy efficiency and inference speed by reducing unnecessary model activations. The authors propose an attachable, classification-based approach using a branched EfficientNetV2 architecture. The system can be integrated into existing AI models without retraining them and achieves substantial reductions in computational cost.\", \"the_core_contributions_of_this_paper_are\": \"1. A lightweight branched neural network based on EfficientNetV2 that selectively activates relevant models.\\n2. A two-phase training process that balances model efficiency and activation accuracy.\\n3. An attachable design, allowing seamless integration with existing vision pipelines.\\n4. Demonstrated improvements in energy efficiency and inference speed through experimental results.\\n5. The authors compare BICEC with an existing attachable control system, SICEC, highlighting superior accuracy, efficiency, and adaptability.\", \"strengths_and_weaknesses\": \"Pros of the paper:\\n1. The research is very relevant in the context of sustainable AI, and it addresses the increasing computational and energy demands of deep learning models. By regulating model activation dynamically, BICEC aligns with recent efforts to reduce AI's carbon footprint and optimize resource utilization.\\n\\n2. The paper also presents a novel approach by introducing a branched classification-based mechanism that enables an intelligent activation control. Contrast to the existing approaches, BICEC\\n\\n a. Uses a structured two-phase training process to optimize efficiency.\\\\\\n b. Allows seamless branch addition and removal without requiring system retraining.\\\\\\n c. 
Leverages transfer learning and Uniform Element Selection (UES) for efficient weight adaptation.\\n\\n3. The authors have also provided a thorough evaluation using multiple datasets (COCO, Movie, Y-VLOG) and various metrics. The energy and inference time reduction (52.1% and 54.7%, respectively) validate BICEC's effectiveness in improving computational efficiency.\\n\\n a. Performance Metrics: Accuracy, Correct Model Activation (CMA), and Incorrect Model Activation (IMA).\\\\\\n b. Computational Efficiency Metrics: FLOPs reduction, inference speed, and energy consumption.\\\\\\n c. Comparison with SICEC: BICEC shows better scalability and activation accuracy while reducing decision space complexity.\\n\\n\\n4. BICEC is designed to be integrated into real-world vision systems without requiring modifications to existing models. This plug-and-play nature makes it applicable to diverse AI systems where multiple vision models operate in parallel.\", \"areas_for_improvement\": \"1. While the paper compares BICEC with SICEC, it does not benchmark against dynamic model selection techniques such as Mixture of Experts (MoE) or adaptive multi-task learning approaches (AdaMTL, AdaMV-MoE). Including a comparison with these systems would provide a more comprehensive analysis of BICEC\\u2019s advantages and trade-offs.\\n\\n2. The activation conditions defined in the paper(e.g., object detection, segmentation, action recognition) are very limited. They mostly focus on high-level vision tasks. However, many real-world applications require finer-grained decisions, such as:\\n\\n a. Scene-dependent activation (e.g., detecting road signs in autonomous driving).\\\\\\n b. Context-aware model switching in multi-modal systems.\\\\\\n Exploring more complex decision boundaries for model activation would strengthen the generalizability of BICEC.\\n\\n3. 
The paper also discussed the impact of binary threshold adjustment on CMA and IMA but does not explore adaptive thresholding mechanisms that dynamically adjust thresholds based on input uncertainty. A future improvement could involve self-tuning thresholds that optimize trade-offs between false activations and missed activations.\\n\\n4. The paper has estimated the energy consumption based on theoretical FLOPs-to-Watt conversion for an RTX 3070 GPU. However, actual energy usage can vary due to:\\\\\\n a. Memory bandwidth limitations.\\\\\\n b. GPU power management states.\\\\\\n c. System overhead (e.g., data loading time, I/O operations).\\\\\\n Using a more precise energy measurement tool (e.g., NVIDIA's NVML API) would provide a more accurate assessment of BICEC\\u2019s power savings.\\n\\n5. The datasets used (COCO, Movie, Y-VLOG) are well-known but limited in domain coverage. Introducing more diverse datasets (e.g., medical imaging, autonomous driving) would demonstrate BICEC\\u2019s generalizability.\", \"suggestions\": \"1. While the paper compares BICEC with SICEC, it would be beneficial to benchmark against Mixture of Experts (MoE) approaches and adaptive multi-task learning (AdaMTL, AdaMV-MoE). Adding a quantitative comparison (e.g., efficiency gains, accuracy trade-offs) against dynamic model selection techniques would help establish BICEC's relative strengths.\\n\\n2. Expanding BICEC to handle context-dependent activations, such as dynamic scene changes or temporal dependencies, would increase its real-world applicability.\\n\\n3. Implementing self-adjusting thresholds based on confidence scores or input uncertainty could improve Correct Model Activation (CMA) while reducing Incorrect Model Activation (IMA). The study explores fixed binary threshold values but does not investigate adaptive thresholding mechanisms.\\n\\n4. The paper focuses solely on computer vision models. 
Extending BICEC to multi-modal systems (e.g., vision + language, vision + audio) could broaden its impact and make it relevant for LLMs and multi-modal AI applications.\", \"reason_for_giving_a_higher_score\": \"The higher score is justified because BICEC presents an innovative and practical approach to optimizing model activation in multi-model vision systems, significantly improving computational efficiency and sustainability. The experimental results demonstrate notable reductions in inference time (-54.7%) and energy consumption (-52.1%), highlighting its effectiveness in reducing computational overhead without compromising accuracy. Its scalability and flexibility, particularly the ability to add or remove models without retraining, make it a highly adaptable solution for evolving AI systems.\\n\\nAlso, stated the reasons in the pros of the paper\", \"reason_for_giving_a_lower_score\": \"NA\", \"rating\": \"7\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work investigates modularity of models based on speed and energy efficiency; this modular aspect is very relevant to this workshop. Some reviewers have noted possible improvements, which we encourage the authors to attend to. However, the majority recommended the paper to be accepted, and thus we're pleased to accept this paper to the workshop.\"}",
"{\"summary\": \"The proposed paper introduces **BICEC**, an attachable classification-based intelligent controller designed to optimize model selection and use in computer vision systems. The proposed key contribution lies in its *branched neural network architecture* derived from a pre-trained EfficientNetV2 backbone, which employs *shared layers coupled with model-specific branches* that output binary decisions regarding model activation. The system operates in a two-phase training process : the first phase establishes a *base configuration using transfer learning*, and the second phase *iteratively scales each branch via Uniform Element Selection (UES)* to reduce parameters and FLOPs while preserving near-optimal activation performance. BICEC\\u2019s modular design supports non-invasive integration with existing systems, allowing branch removal or addition without necessitating full retraining, thereby addressing dynamic system changes. Experimental evaluations on multiple datasets demonstrate *reductions in inference time* (\\\\~55%) and *energy consumption* (\\\\~52%), while *maintaining a good correct model activation accuracy*. 
Overall, the work contributes to show that tighter, data-driven regulation of model activation can enhance both computational efficiency and sustainability in AI systems.\", \"strengths_and_weaknesses\": [\"**Strengths :**\", \"*Architecture and training strategy :* BICEC\\u2019s design is novel for such computer vision use, employing a two-phase training process that enables attachable and adaptable control without requiring full system retraining.\", \"*Efficiency gains :* The method demonstrates significant reductions in energy consumption and inference time, which is crucial for sustainable AI and resource-constrained environments.\", \"*Adaptability :* The support for branch removal and addition offers flexibility to handle evolving system models with minimal reconfiguration effort.\", \"*Comprehensive internal analysis of the method :* Detailed experiments and network analyses (e.g., branch scaling, binary threshold adjustment, and cost reduction evaluations).\", \"**Weaknesses :**\", \"*Limited comparative analysis :* The paper primarily compares BICEC with SICEC, and this comparison is presented only briefly and largely relegated to the appendix. 
There is a lack of broader contextualization with alternative methods such as dynamic routing or multi-task learning frameworks.\", \"*Overemphasis on self-comparison :* Most experimental results focus on demonstrating improvements relative to BICEC\\u2019s internal baselines rather than contrasting its performance with a wider array of existing architectures.\", \"*Simplified activation mechanism :* The use of a binary classification approach for model activation may not capture more nuanced or multi-label activation scenarios that could be beneficial in complex environments.\", \"*Parameter sensitivity and robustness :* The system\\u2019s performance appears sensitive to key parameters (e.g., accuracy drop thresholds, binary threshold adjustments), and the paper could benefit from a more extensive robustness analysis under varied and noisy input conditions.\"], \"suggestions\": [\"*Expand comparative analysis :* Consider including additional comparisons with other dynamic activation control strategies, such as models that employ dynamic routing, sparse expert selection, or multi-task learning frameworks. A more comprehensive evaluation against diverse baselines would strengthen the claims of improved efficiency and adaptability.\", \"*Enhance discussion on activation mechanisms :* Explore the potential of extending beyond binary classification for model activation. Discussing or experimenting with multi-label or probabilistic approaches might yield insights into handling more complex input conditions.\", \"*Robustness and sensitivity analysis :* Incorporate additional experiments that test the system\\u2019s sensitivity to parameter variations and evaluate its performance under noisy or unexpected input conditions. 
This could help to better understand and mitigate potential limitations in real-world scenarios.\"], \"reason_for_giving_a_higher_score\": \"The paper presents an interesting and practical approach to enhancing energy efficiency and inference speed in computer vision systems. The architecture and training methodology, along with comprehensive internal evaluations, offer a compelling case for the benefits of attachable control. The flexibility to add or remove branches without full retraining is particularly appealing for sustainable applications.\", \"reason_for_giving_a_lower_score\": \"The paper\\u2019s comparative analysis is limited in scope, with most comparisons confined to a single baseline (SICEC) and primarily detailed in the appendix. This narrow focus raises questions about how BICEC stacks up against a broader range of alternative methods. Additionally, the reliance on binary activation and sensitivity to parameter tuning may present challenges in more diverse or real world applications.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper proposes a control system for vision tasks in the research vein of SICEC (Burton-Barr et al, 2024). The proposed methodology aims for a more computationally efficient model selection in terms of time and energy cost. The main contribution is a process for branch creation and scaling, emphasizing weight transfers.\", \"strengths_and_weaknesses\": \"The authors provide a clear explanation of the proposed methodology (if somewhat high-level) and the motivation for the work. The comparison versus existing methods can be expanded but overall the results are supportive of the authors' claim of improving the efficiency (time and energy). The claims of an attachable and adaptable system are only partially substantiated: more results on these two aspects would be great.\", \"suggestions\": \"The main suggestions are:\\n(a) Expand on the use of EfficientNetV2-B0, which seems to be integral to their proposed construct; and\\n(b) Focus on substantiating the claim of adaptability (end of Section 1), which is merely discussed but not tested.\", \"reason_for_giving_a_higher_score\": \"It is a good paper that focuses on the combination & modularization of existing models via an intelligent control. Whilst specific to vision, it may be extendable to other domains. It fits well within the theme of the workshop and the results are encouraging.\", \"reason_for_giving_a_lower_score\": \"The comparison versus existing approaches can be expanded (in addition to discussed). Furthermore, there is some lack of clarity in what parameters are trained and when: further clarification would be beneficial.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper presents BICEC, a method to select which model(s) to apply for a given input image among a pool of available models, to perform several tasks. BICEC is built on using a pretrained EfficientNet-v2 backbone, and has minimal cost compared to the models themselves. It is trained in different phases: First, the last block of EfficientNet-v2 is trained to predict whether to use each of the available models in the pool (using binary cross entropy). Then, the size of the last block is reduced to reduce the total cost of BICEC while trying to keep the classification accuracy. During inference, all the models for which the BICEC score of a given input surpasses a predefined threshold (hparam), are activated. The paper compares BICEC against SICEC, a similar approach published in 2024. BICEC offers more flexibility than SICEC, since it allows for cheap branch removal and addition.\", \"strengths_and_weaknesses\": [\"**Strengths**\", \"The method performs better than the baseline (SICEC) with a smaller model size (9.2M vs 2.47M params), after tuning some hyperparameters. Note however, that there are some caveats with this comparison (see weaknesses regarding cost below).\", \"The proposed method allows adding support for additional models in the future (i.e. see Branch addition experiments in section 3.1).\", \"**Weaknesses**\", \"The (down)scaling of the last block of the EfficientNet-v2, that is copied and used to output a binary classification on whether to use the different models, is not entirely justified. Table 2 shows how the GFLOPs of BICEC are reduced, but this doesn't show the total cost relative to running BICEC plus the actual models. 
Table 6 contains the GFLOPs of the different models and they are in the range of [35.2, 109.1] GFLOPs, thus reducing the cost of BICEC from 2.6 to 2.0 or even 1.2 makes almost no difference in practice: -3.8% total GFLOPs reduction being super optimistic, assuming only the cheapest model is always used; -1.3% GFLOPs if only the most expensive model is selected; or ~1% total GFLOPs if 2 models are selected per input with uniform frequency.\", \"The latter makes the comparison with SICEC a bit incomplete: could BICEC be better than SICEC simply because it activates more models per input?\", \"The paper only compares against a single baseline (SICEC), but it is not clear if the evaluation was done under comparable conditions (same EfficientNet-v2 architecture? same training data & evaluation benchmarks?). In terms of alternative methods, the authors cite a decent amount of different alternatives in Section 4, but they only compare against SICEC.\", \"The architecture used for BICEC is quite old (in relative terms for computer vision research). It uses an EfficientNet-v2, from 2021. It's not clear why this architecture is preferred instead of more modern alternatives (e.g. small ResNets or Transformers). I guess that comparing against SICEC is the obvious answer, but this reviewer keeps wondering if the approach works with other architectures and what the performance would be.\", \"The abstract reads \\\"BICEC adapts to existing vision systems without requiring system retraining\\\". When one reads this one might get the impression that the proposed method is \\\"zero-shot\\\", but it actually requires the tuning of the EfficientNet backbone used to build BICEC.\"], \"suggestions\": \"Most importantly, please make the appropriate changes to address the weaknesses that I mentioned.\", \"other_non_critical_but_nice_improvements_to_the_text\": [\"Address the fine-tuning requirements in the abstract, it's a bit misleading as it is.\", \"Please, fix scientific notation. 
I assume that the \\\"learning rate of $1e^{-4}$\\\" (line 160) actually means $10^{-4} = 0.0001$ and not $e^{-4} = (2.7182...)^{-4} \\\\approx 0.0183156$.\", \"In lines 124-125: $Lb$ and $Ub$ mean $L \\\\cdot b$ and $U \\\\cdot b$ respectively. However $Sl$ is a single symbol, and does not mean $S \\\\cdot l$. This is confusing to the reader. Maybe use $\\\\text{Sl}$, or simply $S$ for the latter?\", \"The symbol $R$ in line 114-115 is not used anywhere. It refers to the \\\"set of scales\\\", but then when this set is defined in Eq. (1) the paper uses \\\"Scales = {\\\" rather than \\\"$R$ = {\\\". So, it is a completely unnecessary symbol.\"], \"reason_for_giving_a_higher_score\": \"I would absolutely need to see a proper evaluation against SICEC and other benchmarks preferably, that takes total cost into account. See also the other weaknesses that I mentioned that would need to be addressed for a higher score.\", \"reason_for_giving_a_lower_score\": \"The proposed method seems to have some benefits that the baseline does not. Thus, I'm not giving a lower score.\", \"rating\": \"4\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}"
]
} |
XXC8RYoPji | Conditioning on Local Statistics for Scalable Heterogeneous Federated Learning (Tiny Paper) | [
"Rickard Brannvall"
] | Federated learning is a distributed machine learning approach where multiple clients collaboratively train a model without sharing their local data, which contributes to preserving privacy. A challenge in federated learning is managing heterogeneous data distributions across clients, which can hinder model convergence and performance due to the need for the global model to generalize well across diverse local datasets. We propose to use local characteristic statistics, by which we mean some statistical properties calculated independently by each client using only their local training dataset. These statistics, such as means, covariances, and higher moments, are used to capture the characteristics of the local data distribution. They are not shared with other clients or a central node. During training, these local statistics help the model learn how to condition on the local data distribution, and during inference, they guide the client's predictions. Our experiments show that this approach allows for efficient handling of heterogeneous data across the federation, has favorable scaling compared to approaches that directly try to identify peer nodes that share distribution characteristics, and maintains privacy as no additional information needs to be communicated. | [
"Heterogeneous Data; Personalized Federated Learning;"
] | Accept | https://openreview.net/pdf?id=XXC8RYoPji | https://openreview.net/forum?id=XXC8RYoPji | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"vvmwXudb8c",
"lIVa2YAxgo",
"jzXOauBhkc",
"9Ab2WIJi9n"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740534076688,
1740952259774,
1741226298237,
1740517096410
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission23/Reviewer_yCfT"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission23/Reviewer_hmvZ"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission23/Reviewer_4LN8"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes an improved federated learning method to address the issue of heterogeneous data distributions across clients. The approach involves calculating local statistics (such as mean, covariance, etc.) at each client, which helps the model adapt to the local data distribution during training. During inference, these statistics assist the client in making more accurate predictions.\", \"strengths_and_weaknesses\": \"## Strengths:\\n1. This approach simplifies communication and computation processes by leveraging statistical information to handle the heterogeneous data in federated learning without directly identifying peer nodes that share distribution characteristics.\\n2. The paper demonstrates the proposed method's effectiveness on both synthetic tasks and the EMNIST dataset.\\n\\n## Weakness:\\nBy using some simple statistical information for distinguishing heterogeneous data, this approach may not necessarily be effective in some complex data scenarios. Furthermore, the method's effectiveness has not been sufficiently validated in more complex real-world settings.\", \"suggestions\": \"The paper has not yet provided a more detailed description of the method. Additionally, the method itself is relatively simple and needs to be validated and further enhanced to demonstrate its effectiveness in more complex scenarios.\", \"reason_for_giving_a_higher_score\": \"The overall content of the paper is not yet sufficiently comprehensive, and the method is relatively simple, lacking a more detailed justification.\", \"reason_for_giving_a_lower_score\": \"Using statistical information to distinguish heterogeneous data distributions is intuitively feasible and can help reduce communication overhead while optimizing the computational process.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper introduces a federated learning strategy designed to handle nodes with heterogeneous feature distributions. The approach conditions the model on statistical properties of local datasets, such as higher-order moments of variable distributions, to account for distribution shifts across nodes. The authors evaluated their approach across three tasks: a regression task and a classification task using synthetic data, and a classification task based on the EMNIST dataset. The results demonstrate that this method significantly outperforms a FL model that omits dataset statistics, and its performance is only slightly below that of a model trained on nodes with uniform data distributions.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"They showcased their approach on different types of simple datasets.\", \"The idea is simple and would be easily applicable to data distributions where such statistics can be computed.\"], \"weaknesses\": \"- As outlined in the suggestions, it would be nice to include some robustness of the results (e.g. including confidence intervals).\\n\\nOtherwise, there are no clear weaknesses; the tiny paper opens up many opportunities for future work, starting with evaluating the approach on more diverse and realistic datasets.\", \"suggestions\": [\"Present the results in a more robust manner by including measures of uncertainty, such as using 5\\u2011fold CV or computing 95% confidence intervals, to better assess the significance of the findings.\", \"Strengthen the related work section by discussing how dataset distribution statistics are used in FL and clarifying how existing approaches differ from the proposed method.\", \"I find the extremely high RMSE for the global model surprising. 
The authors should consider computing the RMSE values on normalized data for a more intuitive comparison across models.\", \"Consider revising terminology by replacing \\\"multi-level perceptron\\\" with \\\"multilayer perceptron\\\"\", \"Change the name of Table 4 in the appendix\"], \"reason_for_giving_a_higher_score\": \"I am not familiar with the existing literature in this domain (i.e. including dataset statistics to train FL models). Thus, if this is the first approach leveraging dataset statistics in such a straightforward way, the paper could deserve a higher score.\", \"reason_for_giving_a_lower_score\": \"Similarly, I am unfamiliar with the novelty (or lack thereof) of the approach. Thus if this problem has already been well studied and more comprehensive papers exist, the score could be lowered.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposes an improved federated learning method to address the issue of heterogeneous data distributions across clients. The topic is relevant to the workshop. Most of the reviewers liked the paper and recommended acceptance. We suggest the authors incorporate the comments of the reviewers to further strengthen the paper. Overall, we recommend accepting this work to the workshop.\"}",
"{\"summary\": \"This paper introduces a novel approach to handling heterogeneous data distributions in federated learning (FL) by conditioning on local characteristic statistics computed independently by each client. Unlike Personalized Federated Learning (PFL) and Clustered Federated Learning (CFL), which introduce significant computational and communication overhead, this method operates without modifying the standard aggregation process or increasing data-sharing requirements. Each client calculates statistical properties such as means, covariances, and higher moments using its local dataset, which are then used during training and inference to tailor predictions to the local distribution without being shared. Experimental results on synthetic regression and classification tasks, as well as the EMNIST handwritten character dataset, demonstrate that this approach effectively improves model performance while preserving privacy. By avoiding explicit client clustering and additional meta-learning steps, it remains scalable and efficient. The results show that local statistical conditioning allows models to generalize better across diverse client distributions while matching or outperforming global and client-specific models. 
Future work could explore compressed representations of local statistics and extend the method to high-dimensional data modalities such as images and speech.\", \"strengths_and_weaknesses\": \"Privacy-Preserving Approach: The method ensures that no raw data or computed statistics are shared between clients, maintaining privacy while effectively handling data heterogeneity.\", \"scalability_and_efficiency\": \"Unlike Personalized Federated Learning (PFL) and Clustered Federated Learning (CFL), this approach does not require additional communication or computational overhead, making it suitable for large-scale FL applications.\", \"theoretical_justification\": \"The use of local statistical moments (means, covariances, higher-order moments) is well-founded, leveraging the property that many multivariate distributions are uniquely determined by their moments.\", \"experimental_validation\": \"The method is rigorously evaluated on both synthetic tasks and real-world data (EMNIST), demonstrating strong performance compared to global, clustered, and client-specific models.\", \"improved_generalization\": \"By conditioning on local statistics, the proposed model generalizes better across diverse client distributions, achieving performance close to clustered models without requiring explicit cluster identification.\", \"suggestions\": \"Limited Scope of Evaluation: The experiments primarily focus on linear regression, logistic regression, and a small-scale image classification task (EMNIST). The approach should be tested on more complex datasets and deep learning models to validate its broader applicability.\", \"potential_sensitivity_to_choice_of_statistics\": \"While the paper suggests using means, covariances, and PCA-based representations, it does not extensively analyze how different statistical choices impact model performance. 
A more thorough ablation study could provide insights into the optimal statistical representations.\", \"lack_of_adaptive_mechanism\": \"The method assumes that the same set of local statistics is useful across all clients and tasks. Introducing an adaptive mechanism that selects or weights statistics dynamically based on data distribution shifts could further enhance its robustness.\", \"computational_cost_of_local_statistics\": \"While the method reduces communication overhead, the computational cost of calculating higher-order statistics (e.g., covariance matrices, PCA components) at the client level is not discussed in detail. This could be a bottleneck in resource-constrained environments.\", \"absence_of_robustness_analysis\": \"The paper does not evaluate how the method performs under adversarial conditions, such as clients providing noisy or biased statistics. A robustness analysis would strengthen its practical viability in real-world FL scenarios.\", \"reason_for_giving_a_higher_score\": \"Novelty and Practicality: The proposed approach to conditioning on local statistics in federated learning is a simple yet effective way to handle heterogeneous data distributions without increasing communication overhead. 
This makes it a valuable contribution to the field.\", \"privacy_preserving_and_scalable\": \"Unlike existing methods such as Personalized Federated Learning (PFL) and Clustered Federated Learning (CFL), this approach does not require sharing additional information between clients, ensuring privacy while maintaining efficiency.\", \"theoretical_soundness\": \"The use of local statistical moments (means, covariances, and PCA representations) is well-justified, leveraging fundamental statistical properties that uniquely characterize distributions.\", \"strong_empirical_results\": \"The experiments on synthetic data and EMNIST demonstrate that the method achieves competitive performance while avoiding the need for explicit cluster formation, making it a promising alternative to traditional FL techniques.\", \"reason_for_giving_a_lower_score\": \"Limited Scope of Evaluation: While the experiments demonstrate effectiveness in small-scale settings, the approach has not been tested on larger and more complex datasets, such as ImageNet or real-world federated applications (e.g., medical or financial data).\", \"lack_of_robustness_analysis\": \"The paper does not evaluate the method's performance under adversarial settings, such as clients providing noisy or biased statistics. Assessing robustness would be crucial for real-world deployment.\", \"potential_computational_overhead\": \"The paper does not discuss the computational cost of computing local statistics, particularly for high-dimensional data, which may limit the approach's applicability in resource-constrained environments.\", \"no_dynamic_adaptation\": \"The method assumes a fixed set of local statistics for all clients and tasks, but some form of adaptive selection based on data distribution shifts could improve generalization and performance.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}"
]
} |
WNzxsWtjUV | Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning | [
"Anthony Kobanda",
"Rémy Portelas",
"Odalric-Ambrym Maillard",
"Ludovic Denoyer"
] | We consider a Continual Reinforcement Learning setup, where a learning agent must continuously adapt to new tasks while retaining previously acquired skill sets, with a focus on the challenge of avoiding forgetting past gathered knowledge and ensuring scalability with the growing number of tasks. Such issues prevail in autonomous robotics and video game simulations, notably for navigation tasks prone to topological or kinematic changes. To address these issues, we introduce HiSPO, a novel hierarchical framework designed specifically for continual learning in navigation settings from offline data. Our method leverages distinct policy subspaces of neural networks to enable flexible and efficient adaptation to new tasks while preserving existing knowledge. We demonstrate, through a careful experimental study, the effectiveness of our method in both classical MuJoCo maze environments and complex video game-like navigation simulations, showcasing competitive performances and satisfying adaptability with respect to classical continual learning metrics, in particular regarding the memory usage and efficiency. | [
"Continual Learning",
"Continual Offline Reinforcement Learning",
"Continual Reinforcement Learning",
"Hierarchical Policies",
"Mazes",
"Navigation",
"Offline Learning",
"Offline Reinforcement Learning",
"Reinforcement Learning",
"Reinforcement Learning for Navigation"
] | Accept | https://openreview.net/pdf?id=WNzxsWtjUV | https://openreview.net/forum?id=WNzxsWtjUV | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"sUKmsC5MMr",
"eWzRp7Lsg3",
"a140H9PX7n",
"MGfxX943bR",
"0cwmi0Gshl"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226299735,
1741078350634,
1740886227069,
1739818292739,
1740583630945
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission13/Reviewer_wsXK"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission13/Reviewer_BYYJ"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission13/Reviewer_Fe4j"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission13/Reviewer_goy4"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a hierarchical continual learning method for RL, which has some relevance to this workshop. 3 out of 4 reviewers recommend acceptance, therefore we are accepting this work to the workshop. However, in the spirit of peer reviewing improving the state of research, we strongly recommend the authors to take note of the suggestions from reviewer Fe4j; they are directly actionable and would strengthen this work.\"}",
"{\"summary\": \"The paper explores the use of Continual Subspace of Policies (CSP) in the context of continual offline goal-conditioned imitation learning. It introduces a subspace for both high-level and low-level policies, aiming to maintain performance while minimizing memory usage.\", \"strengths_and_weaknesses\": \"## Strength\\n1. The paper is clear and well written.\\n2. The proposed hierarchical subspace of policy is novel and performs better than other baselines.\\n\\n## Weakness\\n1. The paper template requires a maximum of 6 pages, but the paper submitted is 8 pages.\\n2. When encountering a new task, it is necessary to first train new parameters before deciding whether to retain them. If they are not retained, the new training process would be completely wasted.\", \"suggestions\": \"Revise the paper to meet the required length of 6 pages.\", \"reason_for_giving_a_higher_score\": \"See Strength\", \"reason_for_giving_a_lower_score\": \"See Weakness\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper presents a new method for continual offline RL, in which \\\"anchor\\\" parameter sets are progressively added as new tasks are seen during training. The decision for whether new anchors are added or not is based on whether the loss is reduced by updating the subspace with the new anchor. The algorithm is fairly simple and provides modest improvements over some previous methods, and the tradeoff between performance and memory usage is explored.\", \"notes_from_reading_the_paper\": \"-Hierarchical subspaces of policies for continual offline-RL. \\n -Task where agent must adapt to new tasks while retaining previously acquired skills. \\n -Challenge is avoiding forgetting past gathered knowledge. Needs to be scalable as new tasks are added. \\n -One domain where this comes up is in navigation, where topology or kinematics can change. \\n -mujoco maze and video game navigation. \\n -Focuses on goal-conditioned rl. \\n -Continual subspace of policies (Gaya 2023). \\n -Learn a simplex weighting over different anchor parameters. \\n -Contribution involves growing separate parameter subspaces for a high-level path planning policy and a low-level path-following policy. \\n -High level policy predicts sub-goal k steps in the future. \\n -Uses hindsight experience replay. \\n -Add new anchor if the new subspace lowers loss. \\n -replay buffer seems fine to me. I'm skeptical that this is so bad, although I understand that this is an established issue in this area.\", \"strengths_and_weaknesses\": \"Strengths:\\n -The paper is well-written and explains both the problem and the method well.\", \"weaknesses\": \"-Performance not much better than SCN baseline, while the SCN baseline seems much simpler. \\n -There is not much analysis of what the anchors learn.\", \"suggestions\": \"Maybe it would be nice to give some analysis of what different anchors learn, to show that the pruning of some new anchors is useful. 
Additionally, it would be nice to see that the anchors are actually diverse and reflect different skills.\", \"reason_for_giving_a_higher_score\": \"The paper is well-written and the method is explained well. The results are solid, establishing a new trade-off between memory usage and performance on learning from multiple tasks.\", \"reason_for_giving_a_lower_score\": \"More analysis could be given, for example showing an example problem showing how different anchors learn diverse subgoals, and how this benefits continual learning. Additionally the results are relatively modest given the complexity of the method.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This submission focuses on using policy subspaces for offline continual reinforcement learning. The setting is sequentially observed tasks, and keeping data from old tasks is not permitted. The proposed algorithm, HISPO, is similar to CSP with the exception that it is a two-level policy; one level focuses on sub-goal selection and the other focuses on reaching sub-goals (or goals more generally). HiSPO is evaluated in two sets of 2D gymnasium grid-world and 3D video game maze environments.\", \"strengths_and_weaknesses\": \"---\\n\\n### Strengths\\n\\n1. The paper includes extended evaluations.\\n2. The paper is mostly well-written, with the exception of \\u00a74.\\n3. The paper includes ample explanations and examples from environments, baselines and training details in the Appendix.\\n\\n---\\n\\n### Weaknesses\\n\\nWhile I like the overall direction of this research, I believe it needs more work before publishing. There are missing baselines, the experimental gains are marginal, and the overall motivation for the paper needs more work.\\n\\n1. The set of baselines is incomplete. The current baselines mostly include strawmen (SC1, SCN, FT1, FTN, FZ, L2) and only two CRL baselines, EWC and PNN. Given the problem setting, the most natural baselines to consider are Progress and Compress [1] (different work than PNN) and Orthogonal Gradient Descent [2]. \\n2. Improvements over baselines are marginal. The FTN strawman has a slight memory footprint growth compared to HiSPO with similar performance (indistinguishable from confidence intervals based on Tables 7-9). Considering how simple this strawman is, this weakens the empirical edge of HiSPO. Positive empirical results are not necessary for an interesting submission and technical novelty can often overcome negative experimental results; but HiSPO is mostly similar to CSP, and the technical novelty of this work cannot overcome the issues with evaluations.\\n3. 
The paper mentions in line 52-53 that CSP is untested in offline CRL and may face new challenges. It would be expected to motivate the investigation with what these new challenges might be. Considering how empirically there does not seem to be much improvement with CSP-like techniques in \\u00a75, motivating why the investigation is necessary becomes even more important.\\n4. I'm somewhat confused on what \\u00a75.5.2 and \\u00a75.5.3 teach us.\\\\\\n\\u00a75.5.2 suggests that were we to use LoRA, memory footprint would be smaller; I feel this is trivial, and authors should mention why they believe that to not be the case.\\\\\\n\\u00a75.5.3 suggests a PAC learnability criterion for zero-shot subspace evaluation. However, the PAC criterion focuses on some distance function in action space, and **proximity in action space does not imply proximity in returns** (unless actions fully match, which means the environments were identical). This is due to the stateful nature of RL; minor deviations in actions in a trajectory can lead to noticeably different outcomes, i.e., the butterfly effect. Thus, this PAC learnability criterion, which is mentioned as a suggestion in this submission, does not prompt good discussion.\\n5. A link is mentioned in the abstract with no explanation. This usually suggests the link includes source code or evaluation runs or examples of policy rollouts. However, this link is empty at time of review, and only includes the abstract. I will give the benefit of the doubt to authors and assume they ran out of time to update the code repo or website, but in general, including a link without any substance is considered misleading, if not malicious.\\n\\n---\\n\\n### Minor issues:\\n\\n1. 
I find the motivation for the problem setting itself to be lacking, though this does not impact my decision; I merely mention this here as feedback.\\\\\\nThe problem setting is sequential offline CRL, where an offline dataset of trajectories is observed from each task one by one, and the dataset must be removed before observing the next task. This removal of datasets is the part that makes this problem setting quite niche.\\\\\\nIf the issue is data storage, the memory footprint of keeping datasets can be included in empirical evaluations. Furthermore, one can be selective when storing trajectories from tasks for long-term use and not keep entire datasets.\\\\\\nIf the issue is privacy, perhaps detailed examples with citations would be more convincing.\\n2. Writing issues: Missing explanation for $\\\\tilde{\\\\theta}_k$ in line 143 or FTN in line 318.\\n\\n[1] Schwarz, Jonathan, et al. \\\"Progress & compress: A scalable framework for continual learning.\\\" International conference on machine learning. PMLR, 2018.\\n\\n[2] Farajtabar, Mehrdad, et al. \\\"Orthogonal gradient descent for continual learning.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\", \"suggestions\": \"I have four suggestions:\\n\\n1. Extend your set of baselines to include more recent baselines as well as P&C and OGD.\\n2. Amend how HiSPO works to improve its empirical edge in the experiments.\\n3. Rewrite some text in the introduction to better motivate this problem setup and why studying CSP in offline CRL is interesting in the first place.\\n4. 
Either remove \\u00a75.5.2/\\u00a75.5.3, or expand on them to include non-trivial discussion.\", \"reason_for_giving_a_higher_score\": \"If the results were better (and included more extensive baselines) or the technical novelty of HiSPO was more significant, I would recommend acceptance.\", \"reason_for_giving_a_lower_score\": \"As I have covered in the weaknesses section, this submission needs more work on all frontiers; writing/motivation, technical novelty and evaluation.\", \"rating\": \"3\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}",
"{\"summary\": \"This paper proposes Hierarchical Subspaces of Policies (HiSPO), which essentially applies Continual Subspace of Policies (CSP) to offline continual reinforcement learning for navigation tasks. HiSPO decomposes the policy into two layers, a high-level path-planning policy and a low-level path-following policy. When training the policy for a new task, HiSPO applies the concept of CSP, which leverages policy subspaces to preserve previously acquired skills while flexibly adapting to new tasks. More specifically, HiSPO first trains a new anchor weight with efficient exploration in the existing weight subspace, and then evaluates whether the existing anchor weights can be reused so that the new weight can be pruned. The paper demonstrates the effectiveness of HiSPO through experiments in both classical and complex video game-like navigation environments, showing competitive performance while maintaining scalability and efficient memory usage.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"The paper provides a comprehensive comparison with many classical CRL algorithms across different navigation tasks in both classical and video game-like environments. The proposed method demonstrates good performance while maintaining a relatively low memory size.\", \"The paper preliminarily validates additional improvements, such as the incorporation of LoRA.\"], \"weaknesses\": [\"The paper is somewhat hard to follow. Section 3 Preliminaries could be expanded to provide clearer definitions and explanations. For example, the definition of $\\\\tilde \\\\theta$ in CRL and the meanings of BWT and FWT are not fully explained.\", \"Section 5.5.3 lacks experimental support.\", \"HiSPO performs badly with regard to FWT. This suggests that HiSPO may struggle with generalizing to entirely new tasks or environments.\"], \"suggestions\": [\"Consider improving the cohesion of the writing to make the paper flow more naturally. 
The authors can provide more detailed descriptions of their methodology and assumptions, especially in the early sections of the paper, so readers unfamiliar with the specific terms can grasp the paper's full meaning.\", \"Section 5.5.3 needs to be supported with experimental results to strengthen the claims made in that part of the paper.\", \"The authors may include real-world navigation tasks to further demonstrate the generalizability and efficiency of HiSPO.\"], \"reason_for_giving_a_higher_score\": \"None\", \"reason_for_giving_a_lower_score\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}"
]
} |
W2ZEYIE7vU | An Empirical Study of Policy Interpolation via Diffusion Models | [
"Yuqing Xie",
"Chao Yu",
"Ya Zhang",
"Yu Wang"
] | Diffusion-based policies have shown great potential in multi-task settings, as they can solve new tasks without additional training through inference-time steering. In this paper, we explore the inference-time composition of diffusion-based policies using various interpolation methods. Our results show that, while existing methods merely switch between predefined action modes, our proposed approach can generate entirely new action patterns by leveraging existing policies, all without the need for further training or tuning. | [
"Diffusion model",
"imitation learning",
"policy merge"
] | Accept | https://openreview.net/pdf?id=W2ZEYIE7vU | https://openreview.net/forum?id=W2ZEYIE7vU | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"nlysHBtFdt",
"IdmgbiU361",
"F7uDOzhdcQ",
"0fbVrZulL6"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740686750470,
1739805560481,
1741226299701,
1739746271395
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission38/Reviewer_wCrU"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission38/Reviewer_DTsb"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission38/Reviewer_rTg6"
]
],
"structured_content_str": [
"{\"summary\": \"The paper investigates merging for test-time steering of diffusion models. This is done by controlling the noise distribution that's added over the input during the iterative diffusion procedure; by making this noise distribution conditioned on additional variable $y$ (e.g. a task id or label), steering is enabled. The contributions of the paper relate to how one can merge / interpolate across two different policies that are obtained via different steering values $y1,y2$. On mujoco experiments, where data from different policies are collected (half-cheetah at different speeds), the authors examine several ways to interpolate across policies. They conclude that Classifier-Free Guidance can achieve strong policy interpolation, effectively showing that the final policy operates at an interpolated speed from the two policies.\", \"strengths_and_weaknesses\": \"**Strengths**\\n1. The experiment proposed in the paper is clear and shows that CFG achieves the expected goal of policy interpolation\\n2. For a 2-page paper, the authors did a good job at presenting it\\n\\n**Weaknesses**\\n1. It would be much better if the authors can expand the current version. It's unclear to me what is the contribution of this paper. Is it the application of CFG / CFG++ for policy interpolation ? What is the related work for policy interpolation with diffusion models in this setting ? The authors mention Decision Diffuser (DD), however DD is not in the experimental section\\n2. The paper could greatly benefit from a paragraph that details how the conditioning on the task is done. (see suggestions)\", \"suggestions\": \"Some questions I have, hopefully addressing them will lead to a clearer paper\\n\\nWhat is \\\"f\\\" here exactly (lines 69-74) ? 
\\nWhat does it mean to have a specific label, or label2|label1 as input?\", \"reason_for_giving_a_higher_score\": \"addressed above\", \"reason_for_giving_a_lower_score\": \"addressed above\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"3\"}",
"{\"summary\": \"In this paper, the authors investigate how to blend or \\u201cinterpolate\\u201d multiple diffusion-based robot policies at inference time to produce new, previously unseen behaviors. They focus on a multi-task setting in which each policy is trained through diffusion modeling on different, discrete tasks (e.g., varying target velocities for a HalfCheetah robot), but they want to combine these tasks without additional training or fine-tuning.\", \"strengths_and_weaknesses\": \"I find the use of classifier free guidance to merge diffusion policies unique and novel to my knowledge. The results in Figure 1 show how CFG provides a consistent merging behavior whereas DD has a harder time producing out of distribution outputs. I think it would be good to scale the experiments up for future work not just with tasks but also more policies.\", \"suggestions\": \"typo eq 1. missing )\", \"reason_for_giving_a_higher_score\": \"I think the method is novel and can scale up to more practical robotics applications to allow for efficient composition and merging of existing base policies.\", \"reason_for_giving_a_lower_score\": \"I think having more experiments where tasks and the number of policies are scaled, especially in manipulation would strengthen the paper. However, given the 2 page tiny format I understand the constraint.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper presents an interesting contribution on merging for test-time steering of diffusion models. The program chairs have reviewed the comment from reviewer rTg6 and strongly encourage the authors to address this point in the final version of the paper.\"}",
"{\"summary\": \"This paper is an empirical investigation into how generative diffusion policies can be used for policy interpolation. A variety of interpolation strategies are evaluated in a MuJoCo HalfCheetah environment; in the offline dataset the actor moves at a fixed speed (velocity 1, 2, 3), and the goal is to guide the conditional diffusion model to run at alternative speeds (1.5, 2.5).\", \"strengths_and_weaknesses\": \"---\\n\\n### Strengths\\n\\nI like the problem motivation, and found the introduction and findings to be concise and well-explained. Except for a few issues with formulations in \\u00a72 and figures in \\u00a73, I liked the presentation.\\n\\nThere did not seem to be any major errors or inconsistencies in the text. \\n\\n---\\n### Weaknesses\\n\\nUnfortunately, I think there is a significant flaw in the problem formulation that limits what we can learn from this work. I will try my best to articulate it here.\\n\\n***TLDR:*** There is a fundamental reason that makes this problem impossible to solve, and strongly suggests that the empirical observations will not generalize. The only way to correct this is to rethink the problem formulation.\\n\\nSuppose the trajectory distribution for velocity $v$ (task $y_v$) is identified with some one-hot encoded variable $z_v \\\\in \\\\\\\\{0, 1\\\\\\\\}$. Based on \\u00a72, we condition the diffusion model with $f(z_1, z_2)$ (I'll ignore velocity 3 for simplicity). Note that there is no information in $z_v$ about $v$, the actual velocity, and what $f(z_1, z_2)$ learns is open-ended.\\n\\nNow, the goal here was to see if you can interpolate $f(1, 0)$ (task $y_1$ with velocity 1) and $f(0, 1)$ (task $y_2$ with velocity 2) to somehow achieve speed $\\\\frac{1+2}{2} = 1.5$. The key question to ask here is **\\\"if the diffusion model or $f(z_1, z_2)$ do not see the velocity, how should there be an interpolation that consistently guides them to 1.5?\\\"**. 
\\n\\nSuppose, out of sheer luck, $f(z_1, z_2)$ actually learns to encode the true speed, i.e., $f(z_1, z_2) = z_1 + z_2 \\\\times 2$. Then, due to this linear relationship, $\\\\frac{f(1, 0)+f(0, 1)}{2} = 1.5$ should condition the diffusion model to generate trajectories at speed 1.5, and the CFG merge function works.\\n\\nBut now suppose the function learns to encode the log of speed, simply because that was easier to encode, i.e., $f(z_1, z_2) = z_1 \\\\log 1 + z_2 \\\\times \\\\log 2 \\\\approx z_2 \\\\log 2$. Now, $\\\\frac{f(1, 0)+f(0, 1)}{2} = 0.5\\\\log 2$ represents a speed of $\\\\sqrt{2}$ to the diffusion model, and CFG merge fails. In fact, none of the merge functions will work.\\n\\nWhat the latent encoder $f(\\\\cdot)$ and the diffusion model will converge to is not unique, since infinitely many models will reach the same exact loss. To see why, consider any invertible function $H(\\\\cdot): \\\\mathcal{R}^d \\\\rightarrow \\\\mathcal{R}^d$. For a given latent encoder $f(z_1, z_2): \\\\mathcal{R}^2 \\\\rightarrow \\\\mathcal{R}^d$, create a new encoder $\\\\hat{f}(z_1, z_2)=H(f(z_1, z_2))$. For the corresponding diffusion model $g(\\\\tau, c, t)$, where $c$ is the latent embedding, create a new diffusion model $\\\\hat{g}(\\\\tau, c, t)=g(\\\\tau, H^{-1}(c), t)$. The pair $<\\\\hat{f}(\\\\cdot), \\\\hat{g}(\\\\cdot)>$ will achieve the same exact training loss as $<f(\\\\cdot), g(\\\\cdot)>$, but will behave differently on all interpolation functions, including those you consider in \\u00a72 and \\u00a73.\\n\\nSo essentially, for every interpolation function that mixes the tasks well in some environment for some seed, there is some training seed that adversarially breaks that merging function for the same environment. 
Even if some merge function works consistently well for some environment due to function regularization, it may not work on another environment.\\n\\nNote that while the DD paper does something similar with merging multiple tasks, the setting there is slightly different. Each \\\"task\\\" is some constraint on trajectories, e.g., the actor cannot move outside of some bounded region in the 2-D state space. The merging there intends to generate trajectories that respect both bounds. From a function approximation perspective, this type of merge is more feasible to achieve consistently empirically, although it suffers from the same adversarial issue.\\n\\n---\\n\\n**Below are minor issues that did not affect my decision. I include them here as feedback.**\\n\\nThere are some issues with formalization in \\u00a72, such as Equation 2 missing a closing parenthesis, or the formula for CFG++ being incorrect.\\n\\nAlso, I think the visualization in Figure 1 could have been better. For example, you could plot CDFs of achieved speed across steps, where each curve is some merging approach. You would then need 2 plots at most to show everything, and this would make comparisons easier. It is also best to include the x and y labels in the figures themselves, even though you include them in the captions.\", \"suggestions\": \"The key issue here, in my opinion, is the problem formulation. Without specifying how you want the tasks merged, no solution exists for this problem, and this statement is not specific to diffusion models. I would suggest changing the problem formulation and revisiting this.\", \"reason_for_giving_a_higher_score\": \"If the issue with problem formulation did not exist, all other issues are minor, and this would be a good paper to include in the workshop.\", \"reason_for_giving_a_lower_score\": \"The most important weakness of the paper is the problem formulation. 
I do not think the problem can be solved as it stands, and the current results will not generalize broadly (I have discussed why). This makes it challenging to find the takeaway message of the submission.\", \"rating\": \"3\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}"
]
} |
UyPDg7ksTM | Beyond Top-K: Structured Sparsification for Compression in Pipeline Parallel | [
"Sameera Ramasinghe",
"Thalaiyasingam Ajanthan",
"Gil Avraham",
"Yan Zuo",
"Alexander Long"
] | In decentralized training, efficient communication is critical, particularly when training large-scale models over low-bandwidth, heterogeneous networks. Although gradient compression techniques have proven effective in Distributed Data-Parallel (DDP) settings, extending them to pipeline parallel (PP) training is challenging due to cumulative compression errors that exacerbate with network depth. In this work, we introduce a novel compression framework for PP that preserves the column space of activations and gradients instead of compressing individual elements. We derive tight theoretical error bounds and demonstrate the effectiveness of our method by training models over 80 Mbps connections, achieving up to 90\% compression along with around $2 \times$ training and $12 \times$ inference throughput improvements. | [
"Decentralised training",
"pipeline parallel",
"compression"
] | Accept | https://openreview.net/pdf?id=UyPDg7ksTM | https://openreview.net/forum?id=UyPDg7ksTM | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"gwBMaKomga",
"e359tT3xBG",
"NRzqolqPsi",
"EvnKMoNjCU"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740718966739,
1741226297768,
1741197338835,
1741192274917
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission30/Reviewer_ebQK"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission30/Reviewer_Ui2Z"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission30/Reviewer_eVPC"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a communication compression technique that works well with pipeline parallelism-based decentralized training for large models. The authors first demonstrate how and why top-k compression fails to do well in a pipeline parallelism setting. Then, a detailed analysis shows the benefits of preserving the column space using column-wise magnitude-based compression. The results show that it is possible to achieve up to 90% compression without any loss in performance, with 2x training and 12x inference throughput benefits.\", \"strengths_and_weaknesses\": \"Strengths:\\n\\n1. The paper is very well-written and motivates the contributions well.\\n\\n2. The detailed analysis of how and why top-k fails is intriguing and necessary to understand the need for better compression strategies for pipeline parallelism.\", \"weaknesses\": \"1. More details about the experimental setup can help to understand the nuances. For example, was error compensation used to fix the performance loss by top-k? How many rounds of communication does it take for models to converge with the new compression technique?\", \"suggestions\": \"Please provide additional experimental details as mentioned above, and compare with quantization-based compression techniques as well.\", \"reason_for_giving_a_higher_score\": \"The paper is well written and clarifies the problem with existing compression techniques quite well. I believe the contributions are unique and can provide interesting insights into designing compression techniques for pipeline parallelism-based training mechanisms.\", \"reason_for_giving_a_lower_score\": \"NA\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper has been appreciated by all reviewers. We recommend taking suggestions into consideration for the final version of the manuscript.\"}",
"{\"summary\": \"This paper proposes a compression method for Pipeline Parallel that drops columns with low norms rather than elements. This method helps preserve the column space of the compressed matrix. The authors showed theoretically and empirically how this method improves upon standard element top-k compression.\", \"strengths_and_weaknesses\": [\"Strength:\", \"The paper is well-motivated. The authors explained how errors can pile up in PP, suggesting a better compression method is necessary for PP, and provided theoretical proof of why selecting columns is helpful.\", \"The result of f/ & b/ pass is very significant. Using column masking made training possible where top-k breaks.\"], \"weakness\": [\"The paper only performs experiments on one model and one dataset. Testing multiple models + datasets or different model scales can better assess the effectiveness of the proposed method.\"], \"suggestions\": [\"Most of the proof connects compression rate with misalignment of the column space. However, I still don't understand how misalignment connects to the error in Theorem 3.1 and the training loss decrease in the experiments. Is it omitted because it is too obvious?\", \"Figure 2 didn't show too much information for the space it occupies. It may be better to use a 2x2 table instead. But, if there are throughput results for other compression ratios, adding them to the figure will be very nice.\", \"It is unclear whether structured sparsification is helpful only for PP or generally suitable for pretrained language models. If it is applied to the gradient in DDP, is it better than Top-k?\", \"In Figure 1, it will be better to use a consistent color scheme for different compression rates, e.g. baseline is black, 10% is red, and then put top-k and column masking in one figure, shown with different line style, e.g. - - line for column masking, and ... 
line for top-k so that it will be easier to compare the two methods.\"], \"reason_for_giving_a_higher_score\": [\"The paper validates the method with both proof and experiment. For example, the advantage of selecting columns is shown with both proof and experiment comparison with the row-masking variant.\"], \"reason_for_giving_a_lower_score\": [\"It will be better to compare with other compression methods mentioned in related works, e.g., low-rank compression and quantization.\"], \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The authors note that while DP uses top-k compression, it's harder to use for PP because the error compounds as the number of layers increases.\\nInstead, they propose to compress by pruning entire columns based on the L2 norm.\", \"strengths_and_weaknesses\": \"# Strengths\\n\\n* The proposed method is extremely simple to implement, and can have a great impact\\n * Empirical results vs top-k (particularly when pruning both the forward and backward pass) are significant\\n* The reasoning behind their method is sound\\n\\n# Weaknesses\\n\\nIt is noted that \\\"the compound error in PP training can grow exponentially with the number of layers\\\". However, it'd make sense to only do compression between pipeline *stages* rather than between every layer. I'd be curious to understand how well the proposed compression would fare then vs top-k. In particular, we could use 4 stages, and thus only compress after the first 1/4th of the layers instead of after the first few layers -- which is much more harmful! In that more realistic case, how badly would top-k fare vs the proposed method?\", \"suggestions\": [\"how many parameters are in the model?\", \"how well top-k vs l2-column-pruning fare as we scale the model?\"], \"reason_for_giving_a_higher_score\": \"Simple method that shows clear empirical gain.\", \"reason_for_giving_a_lower_score\": \"More studies, across different model scales and number of PP stages, would be interesting.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}"
]
} |
UIzvc5u2Eu | NoEsis: A Modular LLM with Differentially Private Knowledge Transfer | [
"Rob Romijnders",
"Stefanos Laskaridis",
"Ali Shahin Shamsabadi",
"Hamed Haddadi"
] | Large Language Models (LLM) are typically trained on vast amounts of data, springing from various sources. Even when designed modularly (e.g., Mixture-of-Experts), LLMs can leak privacy on their sources. Conversely, training such models in isolation arguably prohibits generalization. To this end, we propose a framework, NoEsis, which builds upon the desired properties of modularity, privacy, and knowledge transfer. NoEsis integrates differential privacy with a hybrid two-staged parameter-efficient fine-tuning that combines domain-specific low-rank adapters, acting as experts, with common prompt tokens, acting as a knowledge-sharing backbone. Results from our evaluation on CodeXGLUE showcase that NoEsis can achieve provable privacy guarantees with tangible knowledge transfer across domains, and empirically show protection against Membership Inference Attacks. Finally, on code completion tasks, NoEsis bridges at least 77% of the accuracy gap between the non-shared and the non-private baseline. | [
"Modularity",
"domain experts",
"privacy",
"knowledge transfer"
] | Accept | https://openreview.net/pdf?id=UIzvc5u2Eu | https://openreview.net/forum?id=UIzvc5u2Eu | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"cmeEzJDgc6",
"VmnryE455D",
"7xkEqbJgGH"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1741098494729,
1741226297687,
1740767757106
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission7/Reviewer_3NH4"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission7/Reviewer_cvim"
]
],
"structured_content_str": [
"{\"summary\": \"The authors propose a modular framework that aims to enable both privacy between domains and knowledge transfer.\\n\\nThey note that private learning leads to poor performance, and MoE to leakage of information. Thus, they propose NoEsis:\\n* first, tune prompt tokens, using differentially private learning mechanisms, allowing transfer between tasks/domains\\n* second, train a MixLoRA, with different adapters per tasks/domains\\n\\nThe domains considered, for a LM, are the Python, Java, and Go programming languages, using a 220M decoder model.\\nThe two main baselines are 1) sharing nothing -- a lower bound, and 2) sharing everything and thus leaking -- a potential upper bound.\", \"strengths_and_weaknesses\": \"# Strengths\\n\\nIn no particular order.\\n\\n1. NoEsis tackles efficiently two directions, which are usually opposed:\\n* Enabling privacy-oriented methods is critical for tapping more data and personalization of LM\\n* Mixture-of-Experts reach best performance, but leakage of information between domain experts is a problem\\n\\n2. Main results on table 2 are impressive, where the proposed method reaches better performance than per-domain specialist LM while avoiding unwanted leakage between said domains\\n\\n3. Method is simple, based principally on two well studied methods (prompt tuning + mixture of LoRAs), and would benefit seamlessly from improvements of any of those two methods\\n\\n4. Code is provided\\n\\n# Weaknesses\\n\\nIn no particular order.\\n\\n1. Table 3 evaluates varying rank for the common LoRA and varying prompt size. However, the performance itself doesn't seem to vary much, how much of it is significant?\\n\\n2. Would the results hold on other tasks than coding?\", \"suggestions\": [\"instead of private domains, how would you extend to private tokens? e.g. 
Python knowledge can leak to the Java knowledge, but some specific tokens (let's say user-defined ids) shouldn't be leaked: aka changing the granularity of the privacy\", \"can we streamline the procedure in a single unified stage for simplicity, training at the same time the prompt and lora?\", \"in the deployment phase, what's the performance if all LoRAs are merged together?\", \"what's the performance of DP prompt tuning, without MixLoRA?\"], \"reason_for_giving_a_higher_score\": \"The problem tackled makes sense, the performance (particularly table 2) is good, and the methodology is sound.\", \"reason_for_giving_a_lower_score\": \"I'm not convinced about table 3. I'm not sure how well the results hold at a larger scale than 220M and more diverse tasks than coding. I'd have liked more discussions on some details (see my suggestions).\\n\\nHowever, it's an excellent workshop paper in my opinion.\", \"rating\": \"9\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes to bring private learning to modular systems through a two-stage procedure, reaching good performance while avoiding leaking information from one domain to another. Learning efficiently across several domains without leaking info is relevant for collaborative learning and this work. The reviewers all recommend acceptance, and we're happy to accept it to this workshop.\"}",
"{\"summary\": \"The paper proposes NOESIS, a novel framework that integrates differential privacy into a modular adaptation of large language models. The approach is based on a two-stage, parameter-efficient fine-tuning method. In the first stage, a shared set of trainable prompt tokens is trained under differential privacy, enabling safe knowledge transfer across domains. In the second stage, domain-specific low-rank adapters (Mix-LoRA) are tuned on individual private datasets/domains. The method is evaluated on a code completion task across three programming languages (Python, Java, and Go) and shows that NOESIS bridges a substantial part of the accuracy gap between non-private and non-shared models while reducing privacy leakage, as evidenced by empirical membership inference attacks.\", \"strengths_and_weaknesses\": \"Strengths\\n1. The proposed multi-step approach is novel. While existing work has already explored differential privacy in parameter-efficient fine-tuning or modular learning separately, the combination of Mix-LoRA (modular PEFT) with differentially private prompt tuning is innovative, and NOESIS merges these approaches effectively. \\n2. The work addresses a critical issue in AI, privacy-preserving multi-domain learning, which is highly relevant given rising concerns over LLM data leakage and different regulations across the globe.\\n3. The work has a clear and rigorous empirical methodology, with clear hypotheses and extensive ablation studies on the effect of each component. The method and experimental setting choices are well explained. \\n4. The method also presents strong results, bridging 77% of the accuracy gap between non-private and fully private models, and outperforming DP and PEFT baselines. Also, experiments demonstrate robust knowledge transfer to the low-resource language (Go). \\n\\nWeaknesses\\n1. The experiments on knowledge transfer do not fully substantiate the paper\\u2019s claims. 
The authors should have included baselines that explicitly demonstrate the benefit of having shared parameters across all languages, including the high-resource ones. For instance, what if training on Java+Python alone outperforms Java+Python+Go? Similarly, could Noesis trained only on Python achieve better results than the full multi-domain settings? Analyzing potential cross-domain transfer degradation would strengthen the claims regarding knowledge sharing.\\n2. A key missing baseline is Mix-LoRA with DP-SGD training (on out-of-domain data). This would provide a direct comparison to assess the effectiveness of differentially private prompt tuning and determine whether shared prompts are an efficient contribution to knowledge transfer beyond domain-specific fine-tuning, or if its effects can be reached with a simpler approach. \\n3. The experiments focus on one specific task (code completion) which would not necessarily generalize to all cases (but this is acceptable considering the scope of the workshop).\", \"suggestions\": [\"Could the authors clarify the distinction between \\\"NOESIS with Common LoRA\\\" as shown in Figure 4 and the \\\"Single Common Adapter\\\"? Are these two names referring to the same model, or does \\\"NOESIS with Common LoRA\\\" denote the variant mentioned in line 252 that employs a LoRA as a common knowledge-sharing backbone?\", \"The paper would benefit from a more detailed discussion on how the domain-specific experts are merged during deployment. For example, if I want to use only Python and Java languages in production, are the experts merged? And how does this affect performance?\", \"Can the authors elaborate on the functioning of the domain router? Specifically, is the routing learned and performed dynamically, as in the original MixLoRA? 
Or is it handled manually by loading and unloading the relevant LoRA experts when switching datasets?\", \"There appears to be a notation issue in Equations 1 and 2, what does the W mean?\", \"It would strengthen the claims if the authors considered a stronger baseline where a domain-specific LoRA is also trained on other domains using DP-SGD. Do the authors think such a model might outperform NOESIS? A comparative analysis here could provide valuable insights into the advantages of the proposed approach.\", \"The \\\"Experiments Results\\\" subsection could be better structured for clarity. However, I understand the lack of space given by the workshop constraints.\"], \"reason_for_giving_a_higher_score\": [\"The problem is relevant and the solution is innovative.\", \"Overall the empirical methodology is robust and rigorous.\", \"The results on the particular task are relevant, outperforming previous solutions.\"], \"reason_for_giving_a_lower_score\": [\"The narrow evaluation focus (only code completion) limits the impact.\", \"Baseline choices lacking a comparison to a strong baseline (DP-trained MixLoRA).\", \"No clear discussion of negative transfer effects, which could impact performance in certain domains.\"], \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}"
]
} |
U8V2n9kquU | Truncate without Fear: Module Aggregation and Redistribution in Federated Low-Rank Adaptation | [
"Zhijie Chen",
"Yuxing Liu",
"Arindam Banerjee"
] | While low-rank adaptations (LoRA) have shown promise as an efficient fine-tuning technique in federated learning (FL) to reduce communication complexity, the practical application requires careful attention to the challenges posed by the aggregation schemes on client modules. In this paper, we introduce TFLoRA, which directly optimizes over the adapter weights $W = BA^\top$, and redistributes the LoRA modules using the updated adapter weights. Our theoretical analysis shows the truncation error introduced during the redistribution step is mild and TFLoRA
achieves an $O(1/\sqrt{T})$ convergence rate. Compared to the existing methods, TFLoRA supports a wide range of optimizers on the server side and maintains the advantage of low communication overhead. We show empirical evidence that TFLoRA achieves better performance than the state-of-the-art federated LoRA mechanisms on various benchmarks including image/text classification and commonsense inference. Additionally, TFLoRA is demonstrated to be more favorable as the number of clients increases and with non-i.i.d client data distributions. | [
"Federated Learning",
"Low-rank adaptation",
"model merging"
] | Accept | https://openreview.net/pdf?id=U8V2n9kquU | https://openreview.net/forum?id=U8V2n9kquU | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"mbqm1QoVFr",
"gIJ8fKlzp0",
"OlenisadkR",
"NVNqvgXdrd"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1739932274669,
1740673139976,
1740688649206,
1741226298414
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission33/Reviewer_RY9w"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission33/Reviewer_1AG8"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission33/Reviewer_cd6c"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"Thanks for the interesting paper. This paper proposes an aggregation mechanism for federated low-rank adaptation. Utilizing pseudogradient updates from FedOPT (Reddi et al., 2021), the central theme is to perform a truncation on the server-side via SVD. By Eckart-Young, it is clear that this is the best low-rank approximation of the averaged low-rank updates with respect to Frobenius norm, which are in practice higher rank. The authors then give a convergence analysis of this proposed framework. They empirically validate their approach via GPT2, ViT-B, RoBERTa-Base federated low-rank fine-tuning, which appear to show improvements over other baselines.\", \"strengths_and_weaknesses\": \"The convergence analysis seems reasonably solid. The experiments use larger models, appropriate and pertinent to the proposed work. The discussion is well-cited, and it is easy to see where the assumptions come from, as well as the motivation.\\n\\nI am not too familiar with the federated LoRA literature. However, their paper was an interesting read from a layperson's perspective. I did not carefully check all the details of the mathematics.\", \"suggestions\": \"My personal preference is to have additional evaluation, especially for language models. For example, it would be interesting to see federated language model inference performance via low-rank adaptation, such as by evaluating SuperGLUE with RoBERTa. However, this is a strictly personal preference.\\n\\nAlso, I believe that the recent paper: Federated LLMs Fine-tuned with Adaptive Importance-Aware LoRA (Yang et al) may be relevant to this work.\", \"reason_for_giving_a_higher_score\": \"The paper is a clear read, and well-motivated.\", \"reason_for_giving_a_lower_score\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper introduces TFLoRA, a federated learning method that directly optimizes the adapter weight matrix \\\\(W = BA^\\\\top\\\\) to avoid the aggregation noise incurred when averaging the low-rank matrices \\\\(B\\\\) and \\\\(A\\\\) separately. Instead of forming \\\\(\\\\bar{B}\\\\,\\\\bar{A}^\\\\top\\\\) from client updates, TFLoRA aggregates the individual adapter weights and then employs truncated singular value decomposition (SVD) to project the result back to a low-rank form. The authors provide theoretical guarantees showing that the truncation error remains mild - being at most quadratic in the learning rate - and that the method converges at an \\\\(O(1/\\\\sqrt{T})\\\\) rate under standard assumptions. Empirical studies on vision and language benchmarks further demonstrate that TFLoRA outperforms state-of-the-art federated LoRA approaches, particularly in settings with high client numbers and non-i.i.d. data distributions.\", \"strengths_and_weaknesses\": \"Strengths:\\n- Novel aggregation technique - TFLoRA most certainly is a novel technique\\n\\n- Rigorous theoretical analysis - the work provides solid theoretical guarantees, including a convergence rate of \\n \\\\[\\n O\\\\left(\\\\frac{1}{\\\\sqrt{T}}\\\\right),\\n \\\\]\\n\\n- Comprehensive empirical evaluation - extensive experiments across multiple benchmarks (image/text classification and commonsense inference) give a good initial demonstration that TFLoRA outperforms existing federated LoRA methods. The method shows robustness against increasing client numbers and data heterogeneity.\\n\\n- Flexibility with server optimizers - TFLoRA supports a variety of server-side optimizers, including adaptive methods like Adam. 
This flexibility is advantageous in federated learning settings, where the choice of optimizer can significantly impact communication efficiency and convergence behavior.\", \"weaknesses\": [\"Increased computational overhead - although this submission argues the SVD-based truncation step is computationally efficient (e.g. using methods like Lanczos), it will likely introduce additional overhead on the server side. This might become a bottleneck in very large-scale or resource-constrained deployments.\", \"Strong reliance on theoretical assumptions - the convergence analysis is built on several assumptions (e.g., smoothness, bounded gradients, quadratic growth) that may not always be satisfied in practice, especially in highly non-convex deep learning scenarios\"], \"suggestions\": \"The paper could benefit from clearer exposition in the theoretical sections. A brief, intuitive overview or diagram illustrating the core idea behind TFLoRA and its truncation mechanism would help readers grasp the approach without getting lost in technical details.\\n\\nAdditionally, including an ablation study on key hyperparameters\\u2014especially the role of the redistribution hyperparameter \\u03b1\\u2014could further substantiate the practical advantages of the method.\", \"reason_for_giving_a_higher_score\": \"The work introduces a novel and well-motivated approach to address the challenges of federated low-rank adaptation. The comprehensive theoretical analysis, convergence proofs, and extensive empirical evaluations across diverse benchmarks collectively strengthen the paper\\u2019s contributions.\\n\\nThe method demonstrates clear improvements in test accuracy and robustness, particularly in highly heterogeneous data settings, which is highly valuable for real-world federated learning applications.\", \"reason_for_giving_a_lower_score\": \"Despite its strengths, the presentation in some sections, especially the dense theoretical derivations, could be more accessible. 
This may limit the paper\\u2019s impact on a broader audience not deeply familiar with federated learning or low-rank adaptation techniques.\\n\\nThe discussion on computational overhead, while addressed, would benefit from more detailed analysis on scalability in practical, large-scale deployments.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"TFLoRA is a method that improves LoRA in federated finetuning. While existing baselines directly average the low-rank matrices A and B, this results in aggregation noise, i.e., it does not match the result of first expanding the low rank updates and aggregating in the full-rank space. Other baselines attempt to compute the full rank 'noiseless' update, but this can make future communication expensive. To balance these two limitations, TFLoRA computes the full-rank aggregate update, but then uses truncated SVD to project this update back into the low-rank space. Across 3 datasets, TFLoRA achieves better performance than several other LoRA + FL baselines.\", \"strengths_and_weaknesses\": \"The method is simple and performs well. The experiments test several standard methods and tasks.\", \"suggestions\": \"Could TFLoRA also improve over baselines in settings where the rank is heterogeneous? For example, Heterogeneous LoRA (Cho et al) proposes a similar low-rank aggregation scheme that also suffers from aggregation noise.\\n\\nThe paper makes an interesting point that although TFLoRA introduces truncation noise, this error can be accumulated across multiple rounds at the server. Is there a way we can empirically compare truncation noise versus low-rank aggregation noise, and show that less error w.r.t. a full-rank aggregation correlates with better performance? 
Can we also show that accumulating the truncation error across rounds (which appears built into the method) improves over a naive baseline that does not consider error accumulation?\\n\\nCan the authors provide theoretical justification, ablations, or intuition on why TFLoRA would scale much better with the number of clients / heterogeneity than existing baselines?\", \"reason_for_giving_a_higher_score\": \"The authors present a reasonable problem (noisy aggregation in LoRA), constraints (full rank aggregation costs more communication), and a suitable solution (projecting full rank aggregates back into the low-rank space).\", \"reason_for_giving_a_lower_score\": \"n/a\", \"rating\": \"7\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a distributed adapter method, which is doubly relevant to this workshop through its modular nature of adapters and distributed optimization. The reviewers recommend acceptance and we're happy to accept it to this workshop.\"}"
]
} |
U7fUT9J11Y | On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists | [
"Dongyang Fan",
"Bettina Messmer",
"Nikita Doikov",
"Martin Jaggi"
] | On-device LLMs have gained increasing attention for their ability to enhance privacy and provide a personalized user experience. To facilitate private learning with scarce data, Federated Learning has become a standard approach. However, it faces challenges such as computational resource heterogeneity and data heterogeneity among end users. We propose CoMiGS ($\textbf{Co}$llaborative learning with a $\textbf{Mi}$xture of $\textbf{G}$eneralists and $\textbf{S}$pecialists), the first approach to address both challenges. A key innovation of our method is the bi-level optimization formulation of the Mixture-of-Experts learning objective, where the router is optimized using a separate validation set to ensure alignment with the target distribution. We solve our objective with alternating minimization, for which we provide a theoretical analysis. Our method shares generalist experts across users while localizing a varying number of specialist experts, thereby adapting to users’ computational resources and preserving privacy. Through extensive experiments, we show CoMiGS effectively balances general and personalized knowledge for each token generation. We demonstrate that CoMiGS remains robust against overfitting—due to the generalists' regularizing effect—while adapting to local data through specialist expertise. We open source our codebase for collaborative LLMs. | [
"Federated Learning",
"Collaborative Learning",
"On-device LLMs",
"Mixture of Experts",
"Alternating Minimization"
] | Accept | https://openreview.net/pdf?id=U7fUT9J11Y | https://openreview.net/forum?id=U7fUT9J11Y | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"hZYlmwwBFM",
"WT2ukLSO6g",
"L6pv7KrfG8"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740531154860,
1740691933280,
1741226298529
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission41/Reviewer_q78M"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission41/Reviewer_HJ9M"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes CoMiGS, a modular federated learning architecture for LLM adaptation via a mixture of generalist and specialist LoRA experts. Specifically, CoMiGS is trained using a bi-level optimization objective, alternating between routing and expert parameter optimization. Experiments on GPT-125m and Llama-3.2-1B show superior performance to selected local and federated baselines, while also providing extra insights about the behavior of the framework, theoretically and empirically.\", \"strengths_and_weaknesses\": [\"### Strengths\", \"The approach of having two sets of experts that focus on global vs. local objectives seems well motivated and turns out to work well, under proper routing.\", \"I like how the authors showcase the effectiveness of their method compared to baselines.\", \"I also appreciate the in- and out-of-distribution analyses.\", \"### Weaknesses\", \"The paper title is somewhat misleading, as it quotes \\\"on-device\\\", but no evaluation has been done there. Moreover, a batch size of 64 might be prohibitive in the memory of edge devices.\", \"The federated paradigm put forward does not seem to be focusing on a cross-device setup, but rather to assume small client sizes and full participation.\", \"The selection of certain hyperparameters is not properly explained ($\\\\tau$ for router optimization, calibration set extraction)\"], \"suggestions\": [\"It would be insightful to the reader if the authors could provide additional details on the size and extraction method of the validation set for reproducibility.\", \"It would also be interesting to see the behaviour of CoMiGS with other PEFT methods or adapters (e.g. 
DORA, VERA).\", \"Another interesting avenue for exploration, especially for on-device deployment, would be the interplay of the technique with quantization methods, where the router and adapters may operate on a lossy pretrained model.\", \"Since the paper inherits a federated setup, an interesting question arises wrt the tradeoff of utility and privacy when training the generalists under DP.\", \"$\\\\theta^G$ and $\\\\theta^S$ have not been formally defined as the global and local LoRA parameters.\", \"The assumed federated setup should be part of the main paper.\", \"Baselines could be referenced against their original papers in \\u00a73.1.\", \"It is very unclear from the paper how the authors have federated the datasets and whether the distribution is non-IID amongst clients.\", \"Figures 3-5 font sizes are too small to read.\", \"Figures 4-5 should have their x-axis label annotated.\", \"What does the expert average score represent in Figure 4?\", \"Table 1 should signify what the numbers in parentheses signify.\", \"Does Figure 3 suggest that we might not need the same number of specialists across layers? If so, is there the potential for further optimization?\", \"The modification of the load-balancing loss with importance weighting could be mentioned in the main text.\"], \"reason_for_giving_a_higher_score\": \"Lots of analysis wrt behaviour of model and interaction of router and experts, method seems promising.\", \"reason_for_giving_a_lower_score\": \"On-device and federated setup misaligned with actual evaluation.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The proposed method is a federated/collaborative learning approach where each client is learning a mixture of experts. During a single round, each client locally performs an alternating optimization on its experts and router. Between rounds, a set of generalist experts are sent to a central server for aggregation.\", \"strengths_and_weaknesses\": \"The method appears to perform well -- overall the paper shows that using a mixture of generalist and specialist experts can achieve the benefits of both.\\n\\nMy general criticism is that it is not clear how the generalist experts are determined, nor what the difference is in the generalist vs. expert initialization. During local training, how do you ensure that the experts you have defined as the generalist / expert are indeed learning general / specialized knowledge?\\n\\nFurthermore, is there a specific reason why the router is not aggregated across all clients? My intuition is that the router should also be a general structure. Like the previous question, it is not clear if better results are simply coming from one particularly strong expert or both experts being universally strong.\\n\\nThe efficiency limitations of applying MoEs should also be clarified. Does FedAvg use 1 or 2 experts? How much benefit is there from using 2 experts over 1?\", \"suggestions\": \"While the comparison of FedAvg vs 2G or Local vs 2S shows the effectiveness of using a layer-wise router, this contribution does not seem to be the most important part of the paper. The other ablation I recommend is to use a vanilla router and apply the selective aggregation method e.g. FedAvg-1G1S. 
This would be similar in spirit to applying partial model personalization (https://proceedings.mlr.press/v162/pillutla22a.html) to MoEs.\", \"reason_for_giving_a_higher_score\": \"The method is well justified, and the authors provide lots of analysis and supporting experiments.\", \"reason_for_giving_a_lower_score\": \"The results are a bit dense, and it seems that a significant amount of improvement relies on layer-level routing.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a mixture of generalist and specialist models, which is very relevant to this workshop. All reviewers recommend acceptance, and we're pleased to accept it to this workshop.\"}"
]
} |
PlAQHoW26z | ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation | [
"Rinon Gal",
"Adi Haviv",
"Yuval Alaluf",
"Amit Haim Bermano",
"Daniel Cohen-Or",
"Gal Chechik"
] | The practical use of text-to-image generation has evolved from simple, monolithic models to complex workflows combining multiple specialized components. These components are independently trained by different practitioners to excel at specific tasks – from improving photorealism or anime-style generation to fixing common artifacts like malformed hands. Using these components to craft effective workflows requires significant expertise due to the large number of available models and their complex interdependencies. We introduce prompt-adaptive workflow generation, where the goal is to automatically tailor a workflow to each user prompt by intelligently selecting and combining these specialized components. We propose two LLM-based approaches: a tuning-based method, and an in-context approach. Both approaches lead to improved image quality compared to monolithic models or generic workflows, demonstrating that prompt-dependent flow prediction offers a new pathway to improving text-to-image generation. | [
"Applications of modularity",
"Text-to-image generation"
] | Accept | https://openreview.net/pdf?id=PlAQHoW26z | https://openreview.net/forum?id=PlAQHoW26z | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"V5WSE1uU0v",
"9F78qN2neR",
"8VH2SFFhr3",
"402Xyb0K9H"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740989610026,
1740727942487,
1741226299538,
1740514712293
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission14/Reviewer_PGSn"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission14/Reviewer_7xXh"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission14/Reviewer_hxGf"
]
],
"structured_content_str": [
"{\"summary\": \"ComfyGen automatically generates adaptive workflows for text-to-image generation using Large Language Models. Instead of relying on a single, monolithic model, it dynamically assembles specialized components based on the user\\u2019s prompt. The goal is to improve text-to-image generation quality.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"Prompt-adaptation is an important problem. If we treat all prompts the same, the model loses flexibility; being mindful of prompt contents and dynamically adapting the generation workflow could be used to improve outputs.\", \"In the 2-page space the authors had for the submission they are fairly clear on why prompt-adaptation is important and provide good examples within the text.\", \"I appreciate the idea of using a user interface (ComfyUI) to visualize workflows. When applied correctly, this could offer a user-friendly way for users to tailor their generative systems. However, in this submission, ComfyUI introduced significant complexity with limited expansion on its capabilities.\", \"The results show that even when outperformed by baselines, ComfyGen-FT remains comparable. The baselines often experience more extreme drops in results across conditions compared to their counterparts.\"], \"weaknesses\": [\"I understand that this is a short submission but your methodology and contributions feel unclear. Especially when creating workflows or methods that consist of multiple components, clear explanations on each component, their method, and how they interact are needed. I understand this is difficult but perhaps aiming for a 6-page submission would have helped. For example \\\"collected 500 diverse prompts and tested 310 workflows\\\", what were the criteria and how did you collect them? 
Another example \\\"The LLM analyzes new prompts and matches them to workflows that performed well on similar content.\\\", the idea is good but some expansion would be nice.\", \"\\\"ComfyGen-FT outperforms all baseline approaches\\\" feels like an overstatement. In figure 2 we see different baseline approaches outperforming ComfyGen-FT in position, attribute binding, counting, and single object. The only feature on which ComfyGen-FT consistently achieves higher performance is the two-object comparison. However, your results are good, they would just benefit from a more concise comparative statement.\"], \"suggestions\": \"Again, I understand that the 2-page submission length means a fair amount of content is omitted but make sure in the full study you are careful about the statements you make regarding results. Generally, having more prompt adaptive methodology leads to some additional training and inference costs, discussion and analysis of this would be good. The remaining suggestions I have are all mentioned in strengths and weaknesses.\", \"reason_for_giving_a_higher_score\": \"I believe that the weaknesses I highlighted are partially caused by the page limitation and may not be present in the expanded work. The topic the authors have focused on is important and they explain and show the potential benefits of more prompt-adaptive methods. I believe that the authors are far enough into their work to provide interesting insights at MCDC. I also believe that discussions at MCDC will be beneficial to the authors in future expansion on this work.\", \"reason_for_giving_a_lower_score\": \"As mentioned in the weaknesses section the method contains many components and would have benefitted from more explanation and breakdown of components. Additionally, the results would have benefited from more discussion as opposed to the more generalised statement under figure 2.\", \"rating\": \"6\", \"confidence\": \"2\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper introduces an innovative framework called COMFYGEN, designed to automatically generate text-to-image workflows based on user prompts. This workflow can automatically select and combine the necessary components to significantly improve image generation quality, without requiring users to have expert knowledge of the complex components. The core contribution of the paper lies in proposing two methods that utilize LLMs to generate adaptive workflows: ComfyGen-FT method, which is tuning-based, and the ComfyGen-IC method, which relies on in-context learning. By comparing COMFYGEN to single model approaches, fixed workflows, and other uses of LLMs in generation layout prediction, the research demonstrates that COMFYGEN excels in selecting components suited to the generation task.\", \"strengths_and_weaknesses\": \"Strengths\\n\\u2022\\tThe COMFYGEN framework offers a highly practical solution for enhancing text-to-image generation by automating the selection and integration of specialized components. This eliminates the need for users to have deep expertise in understanding the complex interdependencies among different models, making it accessible and useful for a broader audience.\\n\\u2022\\tThe paper is well-written, with clear and concise explanations of both the challenges in text-to-image generation and the proposed solutions.\\n\\u2022\\tThe effectiveness of COMFYGEN has been validated through user studies, adding credibility to its practical applications. This empirical evidence demonstrates that COMFYGEN not only performs well on automated metrics but also meets user expectations and preferences.\\n\\nWeaknesses\\n1.\\tThe effectiveness of COMFYGEN is inherently dependent on the quality and diversity of the available specialized components. 
If the components are not well-optimized or lack diversity, the overall quality of generated images might be limited, potentially affecting the framework's generalization capability across different styles and requirements.\\n2.\\tThe experiments are primarily conducted on text-to-image generation tasks, leaving questions about the framework's adaptability to other types of generative tasks, such as image-to-image translation. Expanding the scope of evaluation could provide a more comprehensive understanding of the framework's versatility and limitations in different generative contexts.\\n3.\\tThe paper does not thoroughly analyze scenarios where ComfyGen-IC and ComfyGen-FT might make incorrect decisions in selecting components. It also lacks discussion of how the use of inappropriate components might negatively impact the quality of the generated images.\", \"suggestions\": \"1.\\tWhich specific components are used in the COMFYGEN framework? Could you provide details on the specific models and parameter configurations for these components?\\n2.\\tWill the fine-tuning data for Llama 3.1 be released in the future?\\n3.\\tHow does the COMFYGEN framework handle the integration of newly developed components, and is there a mechanism in place for continuously updating the system with the latest advancements in text-to-image generation models?\", \"reason_for_giving_a_higher_score\": \"Please refer to the Strengths.\", \"reason_for_giving_a_lower_score\": \"Please refer to the weaknesses.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper explores the idea of automatically generating text-to-image workflows based on user prompts. This problem can also be seen as how to route the input through different modules and hence it is a good fit for the workshop. Most of the reviewers have a positive opinion of the paper. To further strengthen the paper, we recommend providing more details for the experimental setup and how the data was produced, following reviewer hxGf's suggestion. Overall, we recommend accepting this paper.\"}",
"{\"summary\": \"This work introduces ComfyGen, an LLM-based component introduced at the beginning of the ComfyUI system (* additional detail), that is able to generate the optimal ComfyUI workflow configuration for a given input prompt. This addition greatly simplifies the procedure of selecting the optimal combination of modular components into a workflow for a given prompt, which typically requires significant expertise and experimentation. The authors evaluate their contribution in an end-to-end fashion, comparing their system on the GenEval benchmark against other single model systems and fixed workflow configurations. The addition of their per prompt optimised workflow generation is able to produce images that seem to convincingly beat both single models, and fixed generation workflows in the ComfyUI pipeline results. Additionally, the authors show through a user study that the images produced with the introduction of ComfyGen are able to outperform their baseline results.\\n\\n*an open-source system addressing the problem of modularity in T2I problems by allowing users to create (T2I) generation pipelines using a graph-based interface; in effect, this allows users to wire together monolithic components into a larger workflow to create more impressive results than each of the systems in isolation.\", \"strengths_and_weaknesses\": [\"In short: this paper presents a very interesting mechanism to tie together monolithic components into a text-to-image generation workflow on top of ComfyUI. However, the specific contribution of the paper is not related to the MCDC workshop CFP in its entirety; the contribution is essentially the development of an LLM that is able to predict structured JSON output for the ComfyUI system based on a text prompt. This is not modular, collaborative nor decentralised; in effect it is an add-on to an existing system that enables modularity. 
Furthermore, there are many details that have been omitted from the paper which leads one to doubt the claims made in the paper. Specific details are below:\", \"**Strengths:**\", \"The idea of ComfyGen seems absolutely necessary for the proper use of the ComfyUI. In isolation, the idea is very compelling if one assumes the significance of the results presented without the additional detail that has been requested below. It seems that it will provide great benefit for those who wish to use the system for T2I generation.\", \"Overall, great presentation of the idea. The introductory figure explains the idea of the paper well, and the work is largely well presented.\", \"Great balance between automatic metrics and user study preference: this shows that this work brings improvement to the end user beyond what is quantified in GenEval. Particularly, Figure 3 is well presented.\", \"**Weaknesses:**\", \"Not related to the scope of the MCDC CFP. The authors' contribution is the development of an LLM that predicts structured JSON output for ComfyUI based on textual input. Yes, this then ties together individual components in a modular way with ComfyUI. However, the contribution itself is more to LLM modeling than the MCDC CFP.\", \"More details on ComfyUI would be useful to contextualise the work.\", \"Very little details on their experiment setup related to the creation of ComfyGen-IC and ComfyGen-FT which would prevent its reproduction. Specifically:\", \"How was the data curated? What types of models did you consider? How were the workflows created? Additional details besides the summary statistics would be helpful to contextualise the work.\", \"IC Learning: How did you structure the prompt?\", \"FT: What was the training specification? For how many epochs did training occur ? Which optimiser? What was the learning rate?\", \"Results in the third section:\", \"The results were gathered over one seed. 
This is not statistically significant.\", \"How were the other baselines reproduced?\", \"Ambiguity on the workflow baselines: what is the most popular workflow? Additional detail would be helpful.\", \"Limited details on the user study: How many participants? How were the participants recruited? How did you run the study?\"], \"nb\": \"It is noted that the 2 page limit of the submission does inherently contribute to the brevity in details of any submitted work. However, the lack of detail in the correct paper version is viewed to be independent from this content restriction. In its current form, the work is not recommended for acceptance.\", \"suggestions\": \"Section 1 or 2:\\nIntroducing ComfyUI more specifically here and not in the appendix would be useful. The contribution builds directly on top of ComfyUI.\", \"section_3\": \"\", \"elaborate_on_the_workflow_baselines\": \"what are the most popular workflows? As per the weaknesses, please provide more details about the experiment setup so that it would be reproducible. Why/How did you choose the specific baselines?\", \"figure_2\": \"- Over how many seeds was this run? It would be good to see the confidence intervals for these results.\\n\\nFigures 4 and 5.\", \"the_comparison_here_is_not_clear\": \"which images are from the baseline and which are from ComfyGen?\", \"figure_6\": \"\", \"a_larger_discussion_on_the_outputs_would_be_beneficial\": \"why do these show that ComfyGen is better? ComfyGen-FT generated two green apples which were not in the prompt, for example.\", \"additional_experiment_suggestions\": [\"It would be useful to evaluate how accurate the generation of the ComfyUI JSON is in isolation, since this is the contribution. For example, how much further work is required from the user to ensure that the generated output is parsable by/compatible with the ComfyUI?\", \"It would be interesting to evaluate the typical workflow(s) produced by the LLM? 
What types of routes does it construct in particular? Which components/models come up more frequently? How does this relate to the training data; is this something that an expert would have likely created themselves, or has the LLM exploited an unusual pattern.\"], \"reason_for_giving_a_higher_score\": \"The idea of the paper, in isolation, is very interesting and a fantastic addition to the ComfyUI workflow. Provided it works well, it would provide extreme benefit for the end users of this system.\", \"reason_for_giving_a_lower_score\": \"The paper\\u2019s relevance to the overall workshop theme is marginal.\\nMissing information in the paper puts into question the veracity of the claims.\", \"rating\": \"4\", \"confidence\": \"4\", \"workshop_fit\": \"1\"}"
]
} |
OjKum8isMR | Efficient Distributed Optimization under Heavy-Tailed Noise | [
"Su Hyeong Lee",
"Manzil Zaheer",
"Tian Li"
] | Distributed optimization has become the default training paradigm in modern machine learning due to the growing scale of models and datasets. To mitigate communication overhead, local updates are often applied before global aggregation, resulting in a nested optimization approach with inner and outer steps. However, heavy-tailed stochastic gradient noise remains a significant challenge, particularly in attention-based models, hindering effective training. In this work, we propose TailOPT, an efficient framework designed to address heavy-tailed noise by leveraging adaptive optimization or clipping techniques. We establish convergence guarantees for the TailOPT framework under heavy-tailed noise with potentially unbounded gradient variance and local updates.
Among its variants, we highlight a memory and communication efficient instantiation which we call $Bi^2Clip$, which performs coordinate-wise clipping at both the inner and outer optimizers, achieving adaptive-like performance (e.g., Adam) without the cost of maintaining or transmitting additional gradient statistics. Empirically, TailOPT, including $Bi^2Clip$, demonstrates superior performance on several language tasks and models, outperforming state-of-the-art methods. | [
"Distributed Optimization",
"Adaptive Optimization",
"Scalable Algorithms"
] | Accept | https://openreview.net/pdf?id=OjKum8isMR | https://openreview.net/forum?id=OjKum8isMR | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"uISscer0mX",
"MsFOG2gAEY",
"0Xh8l0iVJc"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740354707309,
1740637398411,
1741226297638
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission4/Reviewer_YyBz"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission4/Reviewer_3djD"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"The authors present TailOPT, a framework for handling heavy-tailed gradient noise in distributed training. The main technical contribution is BiClip, which does coordinate-wise clipping to mimic adaptive optimizers without their memory overhead. While the theoretical analysis aims to prove convergence under heavy-tailed noise with potentially unbounded variance, there are some gaps in the proofs that need addressing.\", \"strengths_and_weaknesses\": \"The key innovation here is BiClip's approach to amplifying small gradients while tempering large ones at a coordinate level. This is clever and the empirical results on GLUE and WMT benchmarks look solid. The implementation is also memory-efficient compared to methods like Adam, which will matter for large-scale training.\\n\\nThat said, I'm concerned about several aspects:\\n1. The asymptotic approximation in eq. 14-15 requires more rigorous justification, particularly in the treatment of boundary terms when integrating by parts and in proving that the claimed limit holds uniformly. The current analysis does not adequately address potential singularities when \\\\nu <= 0 or justify the exchange of limits, which is crucial for establishing the tightness of the convergence bounds\\n2. All experiments are on NLP tasks; I'd like to see empirical analysis on other domains like CV or RL\\n3. No real discussion of how sensitive the method is to the choice of clipping thresholds\", \"suggestions\": \"See about in the weaknesses section.\", \"reason_for_giving_a_higher_score\": \"This is tackling a real problem in distributed training and the solution is both theoretically grounded and practically useful. BiClip is a neat trick for getting adaptive behavior without the memory hit. The GLUE results are impressive and the analysis, while not perfect, establishes meaningful guarantees.\", \"reason_for_giving_a_lower_score\": \"The math needs tightening up, especially in the convergence proofs. 
Experiments are too NLP-focused to really show broad applicability. Missing practical details about implementation and parameter tuning that would be crucial for adoption.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper introduces TailOPT as an efficient framework designed to address heavy-tailed noise. TailOPT utilizes the BiClip mechanism that clips the coordinate-wise values from both above and below. Theoretical guarantees and empirical evidence are given to show the effectiveness of the proposed method.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The paper is well-written, with the motivation clearly described and excellent expositions.\\n2. Theoretical guarantees of TailOPT are provided.\\n3. Experimental configuration is extensive. The abundant ablation studies greatly help in demonstrating the effectiveness of the proposed method.\\n\\nWeakness/Minor Comments:\\n1. Most of the existing works [1] show a one-sided (upper side) clipping is sufficient for convergence under heavy-tailed noises. The authors claim BiClip takes advantage of the adaptivity by incorporating a two-sided clipping mechanism. It might be better to make it clear how/whether the lower side clipping (d) has an effect on the heavy-tailed noise.\\n2. It's a little surprising to see BiClip can outperform Adam in most cases, even without leveraging momentum. Does the BiClip method natively support momentum? If so, how can the momentum mechanism be applied to the client updates? Does applying momentum to BiClip improve the performance (at some additional cost)?\\n\\n[1] Zhang, J., Karimireddy, S.P., Veit, A., Kim, S., Reddi, S., Kumar, S. and Sra, S., 2020. Why are adaptive methods good for attention models?. Advances in Neural Information Processing Systems, 33, pp.15383-15393.\", \"suggestions\": \"1. A proof sketch demonstrating the key challenges in the theories will be informative and helpful.\", \"reason_for_giving_a_higher_score\": \"1. Rigorous theories are provided to show the convergence of BiClip under heavy-tailed noise.\\n2. 
Abundant ablation studies give good insights on how and when BiClip methods outperform traditional adaptive methods.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"9\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This distributed optimization work proposes an adaptive inner clipping that enables good performance with highly noisy gradients with less memory overhead than existing counterparts. Distributed optimization is a relevant topic of this workshop, and the reviewers all recommend acceptance. Therefore, we're happy to accept this work to the workshop.\"}"
]
} |
LBBKIA0NpN | How to Merge Multimodal Models Over Time? | [
"Sebastian Dziadzio",
"Vishaal Udandarao",
"Karsten Roth",
"Ameya Prabhu",
"Zeynep Akata",
"Samuel Albanie",
"Matthias Bethge"
] | Model merging combines multiple expert models finetuned from a base foundation model on diverse tasks and domains into a single, more capable model. However, most existing model merging approaches assume that all experts are available simultaneously. In reality, new tasks and domains emerge progressively over time, requiring strategies to integrate the knowledge of expert models as they become available: a process we call *temporal model merging*. The temporal dimension introduces unique challenges not addressed in prior work, raising new questions such as: when training for a new task, should the expert model start from the merged past experts or from the original base model? Should we merge all models at each time step? Which merging techniques are best suited for temporal merging? Should different strategies be used to initialize the training and deploy the model? To answer these questions, we propose a unified framework called TIME (Temporal Integration of Model Expertise) which defines temporal model merging across three axes: (1) initialization, (2) deployment, and (3) merging technique. Using TIME, we study temporal model merging across model sizes, compute budgets, and learning horizons on the FoMo-in-Flux benchmark. Our comprehensive suite of experiments across TIME allows us to build a better understanding of current challenges and best practices for effective temporal model merging. | [
"model merging",
"continual learning",
"multimodal models"
] | Accept | https://openreview.net/pdf?id=LBBKIA0NpN | https://openreview.net/forum?id=LBBKIA0NpN | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"ziz2E3xdJu",
"fUU0s0POMu",
"ZV4qi8COsg",
"BXIr4Ydlht"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740697797426,
1740801853040,
1740766784813,
1741226297821
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission50/Reviewer_9Py6"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission50/Reviewer_4tfY"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission50/Reviewer_aajb"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"The authors propose a framework, TIME, under which they study temporal merging for continual learning of tasks. They identify three axes: a) initialization, b) deployment, c) merging technique, and study existing approaches for each of them.\", \"strengths_and_weaknesses\": \"- Rigorous evaluation with multiple merging methods\\n\\n- The finding that the choice of merging method does not matter much, while initialization and deployment matter more, is insightful\", \"suggestions\": [\"If time permits, I suggest experimenting with https://arxiv.org/pdf/2312.04339. Here they have a method to merge models using a conjugate gradient algorithm and combine initialization from different merging methods and improve upon them.\", \"It is weird that multitask training is reducing the zero-shot performance of the model in Figure 3. Can you explain this behavior?\", \"(any_init, deployEMA, f_merge) is kind of misleading since there is no EMA. Equation 1 does merging of the previous model with the current expert model with the f_merge method. Where does the exponential moving average part come in?\"], \"reason_for_giving_a_higher_score\": \"Experiments presented in the paper and insights drawn from them are helpful to the community\", \"reason_for_giving_a_lower_score\": \"None\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"In summary, this is a good paper in the area of model merging. The paper introduces a novel framework, TIME (Temporal Integration of Model Expertise), which addresses the understudied problem of temporal (continual) model merging.\", \"strengths_and_weaknesses\": \"Strength:\\n\\n1.The paper introduces a novel framework, TIME (Temporal Integration of Model Expertise), which addresses the understudied problem of temporal model merging.\\n2. The authors break down temporal model merging into three key axes\\u2014initialization, deployment, and merging techniques\\u2014providing a structured approach to understanding and implementing temporal merging. This framework is well-defined and allows for a systematic exploration of the design space.\", \"weaknesses\": \"1. While the paper provides extensive empirical results, it lacks a theoretical analysis of why certain initialization and deployment strategies work better than others. A deeper theoretical understanding could strengthen the paper and provide more generalizable insights.\\n2. The paper concludes that complex merging techniques provide marginal benefits. I don't think that's entirely true. For larger models, simple model merging approaches can work well and complex merging techniques show marginal benefits, but for smaller models, complex approaches can sometimes have significant performance advantages. Therefore, the effectiveness of merging techniques can vary depending on the model size. \\n3. 
I suggest the authors consider citing the recent concurrent work by Anke Tang et al., \\\"Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging,\\\" as it provides complementary insights into continual model merging and could further enrich the discussion on temporal merging strategies.\", \"suggestions\": \"Refer to the weaknesses.\", \"reason_for_giving_a_higher_score\": \"refer to strengths and weaknesses.\", \"reason_for_giving_a_lower_score\": \"refer to strengths and weaknesses.\", \"rating\": \"7\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"summary\": [\"The submission proposes TIME (Temporal Integration of Model Expertise), a unifying framework for temporal model merging\\u2014that is, merging multiple specialist or _expert_ models over time as new tasks arrive and expert checkpoints are produced. The key premise is that standard _offline_ model merging, which merges experts only once (after all tasks), fails in the more realistic continual or temporal scenario. TIME systematically explores three primary design decisions (initialization, deployment, and merging technique) that shape how each new expert is created and eventually merged to form a \\u201cglobal\\u201d model.\", \"The paper\\u2019s experiments, conducted on the FoMo-in-Flux benchmark (63 multimodal datasets) and tested on a CLIP-based model architecture, demonstrate the following:\", \"The _temporal_ dimension is critical. Standard offline merging underperforms in sequential scenarios.\", \"Sophisticated merging methods (like TIES, Task Arithmetic, etc.) offer only marginal gains over simple weighted averaging when used over many time steps.\", \"The choice of initialization (e.g., reusing an exponential moving average of previous experts) and deployment (e.g., also using an EMA across trained checkpoints) is more critical than the minor differences among merging algorithms.\", \"Temporal model merging scales favorably with larger models, more compute, and longer task sequences, often outperforming naive fine-tuning or offline baselines.\", \"This work fills a gap in the literature by showing how to merge multiple models in a long-running, evolving setting, clarifying best practices (especially around initialization/deployment with EMA) and identifying open challenges (e.g., memory constraints or highly divergent tasks).\"], \"strengths_and_weaknesses\": [\"### Strengths\", \"**Well-Structured Framework**. The paper offers a clear taxonomy via TIME, systematically categorizing initialization, deployment, and merging technique. 
This provides a modular approach that can be adapted to other model families and tasks.\", \"**Thorough Empirical Study**. Multiple large-scale experiments on a multimodal continual pretraining benchmark (FoMo-in-Flux) bolster the paper\\u2019s conclusions. The authors carefully compare different strategies and systematically show the trade-offs.\", \"**Useful Practitioner Takeaways**. The finding that simpler merges work well, combined with the strong effect of exponential moving average initialization/deployment, is both practical and broadly relevant for real-world continual learning scenarios.\", \"**Scalability Experiments**. The paper investigates model size, number of tasks, and compute budget, all of which are highly relevant for modern large-scale (foundation) models.\", \"### Weaknesses\", \"**Memory/Storage Constraints**. The presented method retains a buffer of all trained checkpoints (expert models). For extremely long task sequences or very large models, storing hundreds of checkpoints is impractical. The text mentions this concern but does not propose advanced checkpoint selection or compression methods.\", \"**Limited Theoretical Insight**. Although the experiments are comprehensive, the paper is relatively light on rigorous theoretical justifications for why EMA merges so consistently dominate other merging strategies. A deeper discussion or analysis of underlying geometry or mode connectivity would enrich the results.\"], \"suggestions\": [\"To address memory growth, experiment with either pruning older experts or storing only partial snapshots. This would practically demonstrate how TIME might scale in extremely long-horizon settings.\", \"If possible, add an outline or short theoretical argument on how EMA merges preserve \\u201clinear mode connectivity\\u201d (or some variant) across tasks. This would lend more insight into why it consistently outperforms other merges.\", \"Provide (even briefly) a per-task breakdown of performance. 
While the authors do show aggregated metrics, seeing how each new task\\u2019s knowledge is integrated might further clarify merging trade-offs.\", \"In Section 2's first paragraph (Notation), it seems a bit unclear what would the variable $t$ represent: it seems to be simultaneously used for tasks and for time steps. It would be nice to reword both notations.\"], \"reason_for_giving_a_higher_score\": \"1. **Novel Problem Setting**: This is one of the first large-scale, systematic explorations of temporal model merging, bridging offline merging methods with continual learning challenges.\\n2. **Practical Significance**: The paper yields actionable recommendations (particularly around EMA for initialization/deployment) that are likely to benefit real-world practitioners who retrain or continually update large models.\\n3. **Comprehensive Empirical Evidence**: The experiments are extensive, scaling across dimensions (model size, compute, # tasks), which is relatively rare in this domain.\", \"reason_for_giving_a_lower_score\": \"1. **Limited Theory**. Readers looking for deeper theoretical grounding or interpretability of weight merges (beyond empirical success) might find the paper lacking.\\n2. **Memory Constraint Oversight**. Storing every expert could be infeasible for certain real-world or extremely long-horizon applications, and the paper mainly sidesteps that issue.\", \"rating\": \"9\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper has been highly appreciated by all reviewers. We recommend taking suggestions into consideration for the final version of the manuscript.\"}"
]
} |
IwNOUYgtuz | Revisiting Sparse Mixture of Experts for Resource-adaptive Federated Fine-tuning Foundation Models | [
"Van-Tuan Tran",
"Le Huy Khiem",
"Quoc-Viet Pham"
] | Existing federated fine-tuning methods for large-scale foundation models (FMs) assign heterogeneous low-rank adaptation (LoRA) ranks for clients based on their computation capabilities to address system heterogeneity. However, these approaches require merging LoRA matrices into the original model to obtain the full model, causing the computational overhead for resource-constrained clients at inference time. Moreover, their performance is not as effective as that of the homogeneous LoRA, in which the lowest rank is applied to all clients.
To overcome these limitations, we propose a resource-adaptive federated fine-tuning method by revisiting the conditional computation property of Sparsely-activated Mixture-of-Experts (SMoE). The key principle here is to extend the data-conditional computation property of SMoE to a new dimension - resource-conditional computation, where clients can activate a suitable number of experts depending on their available resources. Furthermore, to address the imbalanced expert utilization caused by heterogeneous expert activation patterns, we propose a new Activation-aware aggregation algorithm for SMoE (A3SMoE). This algorithm enhances the aggregation process by incorporating client-specific expert activation patterns. Through experiments across independent and identically distributed (IID) and non-IID scenarios, we demonstrate that our proposed method achieves superior performance compared to both homogeneous- and heterogeneous-LoRA approaches under different computation budgets. We also show that LoRA-based methods can be improved when integrated with A3SMoE. | [
"Federated Learning",
"Resource-adaptive",
"Mixture-of-Experts"
] | Accept | https://openreview.net/pdf?id=IwNOUYgtuz | https://openreview.net/forum?id=IwNOUYgtuz | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"wGMtjGbQ3V",
"KCAB9L7Ixf",
"HuS5Ok3lWb",
"FMexJOOCUH"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740695744400,
1740631131453,
1740514297751,
1741226298379
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission24/Reviewer_oz6X"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission24/Reviewer_HrDf"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission24/Reviewer_g8s7"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a novel approach for federated fine-tuning of large-scale foundation models by extending the sparsely-activated mixture-of-experts (SMoE) paradigm to account for resource heterogeneity. In contrast to standard LoRA methods\\u2014which either use a fixed (homogeneous) rank or assign heterogeneous ranks that necessitate expensive merging at inference\\u2014the proposed method enables each client to activate a number of experts (determined by its computational budget) during both training and inference. To address the resulting imbalance in expert utilization across clients, the authors introduce an activation-aware aggregation algorithm (A3SMoE) that weights client updates by both their local data sizes and the frequency of expert activation. Experimental results on an instruction-tuning task (using the Dolly-15k dataset) under both IID and non-IID conditions show that A3SMoE outperforms both homogeneous and heterogeneous LoRA baselines, particularly at lower computation budgets. 
Moreover, the approach is shown to be complementary, improving performance when integrated with existing LoRA methods.\", \"strengths_and_weaknesses\": [\"**Strengths:**\", \"**Novelty:** The paper introduces a compelling extension of SMoE by incorporating resource-conditional computation, addressing a critical challenge in federated fine-tuning where clients have diverse computational capabilities.\", \"**Activation-aware Aggregation:** The proposed aggregation strategy that factors in client-specific expert activation counts is innovative and effectively tackles the imbalance issue in heterogeneous expert utilization.\", \"**Empirical Validation:** The experimental evaluation is thorough, covering multiple computation budgets and both IID and non-IID data distributions, which convincingly demonstrates the advantages of A3SMoE over baseline methods.\", \"**Integration Potential:** The demonstration that A3SMoE can be integrated with existing LoRA-based methods to further boost performance adds to its practical significance.\", \"**Weaknesses:**\", \"**Limited Dataset Scope:** The experiments are confined to a single instruction-tuning task on the Dolly-15k dataset. 
Broader evaluation across multiple tasks or datasets would strengthen the generality of the findings.\", \"**Clarity of Method Description:** Some parts of the method, particularly the details of the activation-aware aggregation mechanism, could benefit from additional clarification and more intuitive explanation.\", \"**Discussion of Limitations:** The paper would be improved by a more explicit discussion of potential limitations\\u2014for instance, scenarios where extreme heterogeneity might lead to issues that the current aggregation scheme does not fully address.\"], \"suggestions\": [\"**Broaden Experimental Evaluation:** Consider testing A3SMoE on additional datasets or tasks to further validate its robustness.\", \"**Robustness of the Aggregation Strategy:** Elaborate on the activation-aware aggregation algorithm (Equation (7)) by discussing its behavior in edge cases\\u2014such as when certain experts are rarely activated or when client updates vary widely in quality.\", \"**Detailed Analysis of Expert Activation Dynamics:** Provide a more comprehensive discussion on the trade-offs between activating different numbers of experts. For instance, include analyses or visualizations that illustrate how varying \\\\(K_c\\\\) impacts both computational cost and model performance. This can help readers understand the practical implications of resource-conditional computation.\"], \"reason_for_giving_a_higher_score\": \"The work offers a novel and practically significant solution to the problem of system heterogeneity in federated fine-tuning. The extension of SMoE to handle resource constraints, combined with a thoughtful aggregation strategy that leverages client-specific activation patterns, represents a meaningful advancement. 
The promising empirical results across different computational budgets further support the paper\\u2019s contributions, making it a strong candidate for inclusion in the workshop.\", \"reason_for_giving_a_lower_score\": \"Despite its strengths, the paper\\u2019s experimental validation is limited to a single dataset and task, which raises questions about its generalizability.\", \"rating\": \"7\", \"confidence\": \"5\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The authors propose resource-conditional SMoE, which enables clients to activate a suitable number of experts based on their available resources. The aggregation process is enhanced by incorporating client-specific expert activation patterns. Empirical studies show the proposed method achieves superior performance in various scenarios.\", \"strengths_and_weaknesses\": \"Strength:\\n1. The motivation of this work is clear and the studied problem on system heterogeneity is important.\\n2. The experiments are extensive in the sense that various situations, including different budget levels, different client numbers and non-i.i.d data distributions, are considered. \\n3. The proposed method can be effectively integrated with existing LoRA-based methods.\", \"weakness\": \"1. The role of LoRA in the proposed SMoE paradigm is not clear. LoRA is involved only in the local finetuning step (5). It seems the proposed method can be translated to other parameter-efficient fine-tuning methods, such as ControlNet[1]. What are the unique challenges brought up by LoRA in this work?\\n2. The exposition of the proposed paradigm is lacking some details: The LoRA parameters are merged into expert weights after local finetuning by (7). Then, how are the LoRA parameters re-initialized in the next round?\\n\\n[1] Zhang, L., Rao, A. and Agrawala, M., 2023. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 3836-3847).\", \"suggestions\": \"1. A clear algorithm block will be extremely helpful for the readers to understand the workflow of the proposed method.\\n2. The authors can consider extending the method to other finetuning methods, if applicable.\\n3. 
An intuitive explanation of the design and choice of balancing parameters in (7) is highly encouraged.\", \"reason_for_giving_a_higher_score\": \"The method provides a general approach to solving the system heterogeneity issue in federated learning. Overall the work is interesting and is likely to be deployed in production. The authors have also demonstrated the ability of the proposed method to work well with the existing LoRA methods.\", \"reason_for_giving_a_lower_score\": \"The clarity of the exposition can be improved by adding an algorithm block. Also, the relation between LoRA and SMoE is somewhat weak.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper proposes a federated fine-tuning method suitable for networks with system heterogeneity. The proposed method is based on the Sparsely-Activated Mixture of Experts (SMoE) model, where the sparsity level (i.e., the number of activated experts) depends on each client\\u2019s computational resources. One technical challenge in training this model is the imbalance in expert updates, as some experts may be updated more frequently than others. To address this, the authors propose an activation-aware aggregation step (activation refers to the active experts here) that accounts for client-specific updates along with client-specific expert activation counts. Instead of the standard averaging step, the proposed aggregation computes a weighted average of expert parameters where the weights depend on the activation counts. The proposed method is shown to outperform other baseline methods for this problem through numerical experiments with the Dolly-15k dataset.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"Perfect fit for the workshop theme\", \"Well written\", \"Promising empirical performance of the proposed method\"], \"weaknesses\": [\"Heuristic nature of the proposed approach\", \"Very limited numerical experiments (only one dataset, only one splitting, etc.)\", \"Missing details on the numerical experiments (important details like number of local steps, the stopping condition for each method, are the results averaged over random seeds, etc.) 
-- the results are not reproducible unless the authors decide to share the code\"], \"about_the_novelty\": \"Novel in the sense that this model has not been applied before to address system heterogeneity\\nLimited novelty in the sense that the method simply mimics existing techniques to deal with data heterogeneity.\", \"suggestions\": \"Given that the study relies on empirical analysis, a more comprehensive and carefully designed set of numerical experiments is important to reach a clear conclusion when reading the paper.\", \"reason_for_giving_a_higher_score\": \"Please see strengths.\", \"reason_for_giving_a_lower_score\": \"Please see weaknesses.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"The proposed method accounts for resource discrepancy in an FL setting where different clients can have different resources. The resource-conditional SMoE allows clients to activate a suitable number of experts based on their available resources. Most of the reviewers liked the paper, found it relevant to the workshop, and recommended acceptance. We suggest the authors incorporate the comments of the reviewers to further strengthen the paper. Overall, we recommend accepting this work to the workshop.\"}"
]
} |
HxPDWbaK1E | Exploring Asynchronism in SWARM Parallelism | [
"Yan Zuo",
"Gil Avraham",
"Thalaiyasingam Ajanthan",
"Sameera Ramasinghe",
"Alexander Long"
] | SWARM parallelism is a framework that enhances pipeline parallelism in distributed training by incorporating fault tolerance. However, the synchronous nature of this approach introduces inefficiencies that can hinder performance and scalability. We analyze these inefficiencies and propose an asynchronous modification to the framework that enables nodes to perform local updates and periodically average their states. Our results demonstrate that this modified asynchronous SWARM achieves higher throughput without sacrificing model convergence. | [
"Decentralized Training",
"Asynchronous Pipeline Parallel"
] | Accept | https://openreview.net/pdf?id=HxPDWbaK1E | https://openreview.net/forum?id=HxPDWbaK1E | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"vhu2jOSg0K",
"Nnu6ymChyX",
"JSbDrkfRza",
"21hviSb1bz"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740695552034,
1740750551855,
1740569360706,
1741226298058
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission32/Reviewer_cmvd"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission32/Reviewer_m9cB"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission32/Reviewer_h9q3"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"The paper extends SWARM Parallelism, a distributed training framework that allows geographically dispersed nodes with consumer-grade accelerators to participate in collaborative training. It addresses the synchronization bottlenecks caused by gradient accumulation across pipeline stages by introducing an asynchronous execution strategy. However, asynchronous updates can lead to gradient staleness, which degrades convergence. To mitigate this, the authors propose a weight correction technique using Nesterov Accelerated Gradient (NAG). Their experiments demonstrate that this approach improves training stability and efficiency in asynchronous settings, making distributed training more scalable and resilient.\", \"strengths_and_weaknesses\": [\"Strengths\", \"The paper provides experimental validation through multiple ablation studies, demonstrating the effectiveness of the proposed approach under different conditions (e.g., varying batch sizes, warm-up times, and artificial delays).\", \"The results indicate that the NAG-adapted SWARM approach significantly improves training stability and convergence compared to both synchronous and asynchronous baselines.\", \"The asynchronous execution strategy relaxes synchronization constraints, allowing for more efficient training with reduced wall-clock time.\", \"Weaknesses\", \"The paper focuses solely on enabling local updates within pipeline stages but does not explore state averaging techniques, which could further enhance performance.\"], \"suggestions\": [\"Demonstrating how performance scales with number of pipeline stages or nodes would make the results more impactful\", \"\\\"In our context, the velocity component dt acts as a weight correction term, compensating for gradient staleness by anticipating the future weight position\\\" - Would be interesting to perform an analysis to determine whether this is empirically observed.\"], \"reason_for_giving_a_higher_score\": \"N/A\", 
\"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The work studies SWARM parallelism, a distributed parallelism technique that is 1-step-stale synchronous, but the paper studies how SWARM parallelism works with local asynchronous updates, which have the potential to reduce bandwidth requirements of training considerably. The paper finds that Nesterov's accelerated gradient stabilizes asynchronous training.\", \"strengths_and_weaknesses\": \"The paper studies a simple hypothesis and presents a simple and effective solution. This is the hallmark of a good workshop paper.\\n\\nThe weaknesses are mostly that more extensive hyperparameter search, particularly for the baselines, and more extensive experiments would strengthen the paper -- but this is also a usual mark of a workshop paper.\", \"suggestions\": \"I think more diverse experiments and careful comparison under more hyperparameter settings would be useful. Beyond that, asynchronous approaches mostly break down with scale, so training on a larger scale would be important for a full paper.\", \"reason_for_giving_a_higher_score\": \"This is a great workshop paper that will lead to discussion among its participants.\", \"reason_for_giving_a_lower_score\": \"The finding is neat, but compared to other work might not have a wide impact because distributed training over multiple regions is still a relatively uncommon way of training large models. As such, this is solid work that is more niche and thus only interesting to some sub-population in the workshop. Other work might have larger impact.\", \"rating\": \"7\", \"confidence\": \"5\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"In this work, Nesterov Accelerated Gradient is combined with SWARM parallelism to improve performance of asynchronous SWARM parallelism. Inspiration is drawn from papers such as Async Local-SGD (Liu et. al. 2024) to allow for increased hardware utilization.\", \"strengths_and_weaknesses\": \"The concept behind this paper is compelling; to combine a DiLoCo-style distributed training technique with pipeline parallelism. Pipeline parallelism (via SWARM) allows larger models to be trained, whilst DiLoCo reduces communication requirements between nodes. The paper notes the challenge of gradient staleness when applying an inner-outer training technique such as DiLoCo, and is convincing in its application of Nesterov Accelerated Gradient for alleviating these issues.\\n\\nThe experiments provided were compelling, showing the strength of the combined async+NAG training strategy.\\n\\nHowever, I felt there was a lack of explanation of some of the key ideas in this paper. The method was not clearly explained and left quite a lot to the imagination. Further elaboration of the method, algorithm, and rationale would be beneficial.\", \"suggestions\": [\"As mentioned above, whilst this paper targets a very interesting idea, I felt that some of the explanations were quite sparse. I would advise elaboration on the following:\", \"Explain the previous work on this field. DiLoCo is crucial here, and there are works on combining this with SWARM (eg DiLoCo-SWARM, Mika Senghaas, 2025)\", \"Explain how DiLoCo or DiPaCo are implemented in your work. I can't tell from your paper whether you used either of them - if you didn't use them it should be clear how they are relevant and why they weren't used. In general a more full explanation of your method would be beneficial.\", \"Explain where the 'async' step comes in. 
Maybe a written-out algorithm would be informative.\", \"Explain why gradient staleness occurs\"], \"reason_for_giving_a_higher_score\": \"Strong ideas and fundamentals. Well aligned with workshop goal of modularity. Convincing experimental results\", \"reason_for_giving_a_lower_score\": \"Sparse explanation, leaving a lot up to the imagination.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes an asynchronous version of the SWARM pipelining. Enabling inference of models across distributed devices is a relevant topic to this workshop, and all reviewers recommend acceptance; therefore we're pleased to accept this paper to the workshop.\"}"
]
} |
Edn2EigJFP | FedMoDN: Federated Modular Decision Support Networks | [
"Cécile Trottet",
"Michael Krauthammer",
"Mary-Anne Hartley"
] | This work proposes FedMoDN, a novel federated modular neural network architecture for collaborative learning across all features of an imperfectly interoperable distributed dataset. Here, distributed data centers that collect variable combinations of features are able to use the full complement of their features with minimal exposure to biased missingness. Our approach enables data owners collecting different feature subsets to train a joint model without sharing, discarding, or imputing any data.
We evaluate the robustness of our approach through experiments that mirror realistic challenges encountered with medical data, particularly in resource-limited settings. Our results show that this modular approach is significantly more robust than a monolithic neural network when dealing with missing data, systematic bias, or heterogeneous feature subsets. | [
"modular deep learning",
"federated learning",
"clinical decision support"
] | Accept | https://openreview.net/pdf?id=Edn2EigJFP | https://openreview.net/forum?id=Edn2EigJFP | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"zBwMoQjnRT",
"gWFCNpVuuc",
"ddubcJ38GG",
"WyjQDl3N8I",
"FoYUpHXbpE"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226299511,
1741058815440,
1740907791039,
1740533241918,
1740672268366
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission44/Reviewer_iKKr"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission44/Reviewer_6Byv"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission44/Reviewer_xcuT"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission44/Reviewer_p4Lg"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"The paper critiques current decentralized learning evaluations that assume shared metadata, highlighting a discrepancy between research settings and real-world constraints. The paper received borderline scores from most reviewers; however, most of them agree that it is a good fit for the workshop. We suggest the authors perform experiments on real-world data, along with additional baselines suggested by multiple reviewers, to strengthen the paper. It is acceptable for a workshop paper to present only experiments related to its main claim; nevertheless, we agree the experiments could be strengthened further. Overall, this is a borderline case, and we recommend accepting this work to the workshop.\"}",
"{\"summary\": \"This work proposes FedMoDN, a federated modular neural network designed to handle imperfectly interoperable distributed data (IIO), particularly relevant to medical applications. Unlike traditional federated learning (FL), which often assumes complete feature availability across nodes, FedMoDN allows collaborative learning across vertically partitioned datasets without discarding or imputing missing data. The architecture leverages modular encoders and decoders, enabling institutions with different feature sets to contribute to a shared model without exposing their raw data, resulting in a robust model and better performance validated over 2 datasets.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. Sufficient experimentation along with adequate explanation are presented with well-documented, meaningful results.\\n2. Superior handling of systematic bias in data collection is demonstrated.\\n3. Robustness when dealing with limited feature interoperability is demonstrated.\", \"weaknesses\": \"1. Medical data and applications are mentioned at various places but no medical datasets are used.\\n2. While the paper mentions handling scenarios with training instance overlap (where different nodes have data from the same patients), the actual experiments only focus on non-overlapping cases. The proposed approach for overlapping instances is only outlined theoretically, without empirical validation.\\n3. The paper compares mainly against a basic MLP with mean imputation rather than against more sophisticated approaches for handling missing data or vertical federated learning.\\n4. Analysis of how sensitive the model is to hyperparameter choices, such as the state vector dimension, which could significantly impact performance, is missing.\\n5. The paper presentation needs work, especially the sectioning of data.\", \"suggestions\": \"For analysis-based suggestions, refer to Weaknesses.\\n\\nPresentation also needs more work; some key points are:\\n1. 
Personal pronouns like \\\"we\\\" should be avoided.\\n2. Medical data is mentioned but not used anywhere.\\n3. Mathematical formulation/modelling behind the algorithm is missing.\\n4. Consider merging or removing some sections in the Method section.\", \"reason_for_giving_a_higher_score\": \"The idea is novel, well-motivated and the paper presents valid and impressive results. The analysis follows a justifiable trend and is logically sound.\", \"reason_for_giving_a_lower_score\": \"The presentation requires a lot of work, with key experiments central to the work missing.\", \"rating\": \"5\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper introduces a novel federated modular neural network architecture, FedMoDN, which addresses the challenges of learning from imperfectly interoperable distributed datasets. This is well-motivated and particularly relevant in clinical settings where data heterogeneity and missingness are common.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"This work is well-motivated and particularly relevant in clinical settings where data heterogeneity and missingness are common.\", \"The experimental results demonstrates that the proposed FedMoDN is significantly more robust to missing data compared to monolithic neural networks.\", \"The paper is well-written and easy to read.\"], \"weaknesses\": [\"While the paper presents promising results on synthetic and public datasets, it lacks validation on real-world clinical data.\", \"The paper compares FedMoDN with a federated MLP baseline and a centralized baseline. However, it would be beneficial to include comparisons with other state-of-the-art federated learning methods, especially those designed for handling heterogeneous data.\", \"Investigating the scalability of FedMoDN in terms of computational and communication efficiency would provide a more comprehensive understanding of its applicability in large-scale federated learning scenarios.\"], \"suggestions\": \"Refer to the weaknesses.\", \"reason_for_giving_a_higher_score\": \"The paper introduces a novel federated modular neural network architecture, FedMoDN, which addresses a critical challenge in federated learning: learning from imperfectly interoperable distributed datasets. This well-motivated and have positive practical impact.\", \"reason_for_giving_a_lower_score\": \"Refer to the weaknesses.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper introduces FedMoDN, a federated learning approach that leverages Modular Decision Support Networks (MoDN) for collaborative learning across distributed datasets with imperfect interoperability. By extending MoDN (Trottet et al., 2023) to the federated learning (FL) setting, the authors assess its effectiveness through experiments simulating realistic challenges in medical data sharing, particularly in resource-limited settings. Results demonstrate that the proposed modular approach outperforms FL methods on monolithic architectures.\", \"strengths_and_weaknesses\": [\"*Strengths*\", \"The paper addresses a relevant challenge in FL, particularly in medical data applications.\", \"Experimental results indicate performance improvements over monolithic FL architectures.\", \"*Weaknesses*\", \"The MoDN architecture was originally proposed in (Trottet et al., 2023). The main contribution appears to train MoDN via FL, but it is not clear what modifications, if any, were made to adapt it to the federated setting.\", \"The paper shows that MoDN outperforms MLP when both are trained in FL, but this advantage also holds in centralized training. It is unclear whether the modular approach specifically improves FL performance beyond what is expected from MoDN itself.\", \"The naming convention could be improved. 
If FedBaseline is FedAvg, why not call it simply FedAvg and specify which model was used?\", \"Evaluating more federated learning methods could strengthen the analysis.\"], \"suggestions\": [\"Clearly state the technical novelty beyond applying MoDN to an FL setting.\", \"Discuss whether the improvements stem from federated training or are simply an inherent advantage of MoDN.\"], \"reason_for_giving_a_higher_score\": \"Clearly articulating the FL-specific contributions beyond simply applying MoDN in a federated setting, along with including additional FL baselines to contextualize performance gains, would strengthen the paper's impact and make the experimental evaluation more convincing.\", \"reason_for_giving_a_lower_score\": \"If the paper merely reimplements MoDN in an FL setting without meaningful adaptation or insights into FL-specific challenges, its novelty may be limited, and the contribution is unclear, as the observed benefits could stem from MoDN itself rather than any federated learning-specific advantage.\", \"rating\": \"5\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper extends the MoDN architecture (Trottet et al., 2023) to enable federated training while addressing vertical data partitioning in IIO datasets. Unlike standard federated learning, which requires imputation or discarding missing data, FedMoDN allows hospitals/nodes to train and use only the relevant modules for their local dataset, making it more efficient and flexible.\", \"strengths_and_weaknesses\": [\"Strengths\", \"The paper evaluates FedMoDN under different challenges, including missing data, feature interoperability, and systematic bias, providing a well-rounded assessment.\", \"The study benchmarks FedMoDN against both centralized and federated models, highlighting its strengths and trade-offs.\", \"The emphasis on low-resource clinical settings reflects practical constraints, making the approach relevant for real-world use.\", \"Weaknesses\", \"The paper does not provide a detailed computational overhead analysis of FedMoDN compared to FedBaseline, which is crucial for deployment in resource-constrained environments.\", \"The modular nature of FedMoDN suggests potential for improved interpretability, but the paper does not explore this aspect in detail.\", \"Significance is assessed but specific tests of significance are not mentioned or their corresponding values\"], \"suggestions\": [\"The study relies heavily on synthetic data, and while the California Housing dataset is publicly available, further validation on real-world clinical datasets would strengthen the findings.\", \"Exploring model interpretability to understand how modular decisions are made.\", \"Investigating scalability and communication efficiency in larger FL deployments.\", \"More justification of the method and comparison with a broader relevant literature\"], \"reason_for_giving_a_higher_score\": \"N/A\", \"reason_for_giving_a_lower_score\": \"not enough justification of the method, could be better placed in the context of the broader literature. 
The paper heavily relies on Trottet et al. (2023) and does not provide enough detail as a standalone work.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}"
]
} |
EPxzr9WU1R | ROBUST ONLINE INFERENCE USING ADAPTIVE MODEL SWITCHING | [
"Kalpan Mukherjee",
"Vikramank Singh",
"Abishek Sankararaman",
"Balakrishnan Murali Narayanaswamy",
"Tim Kraska"
] | It is well known that Large language models (LLMs) have good zero-shot and few-shot performance which makes them a promising candidate for inference when no or few training samples are available. However, when there is abundant task data, small custom trained models perform as well or are superior in performance to pre-trained LLMs, even after accounting for in-context examples. Further, smaller models are far cheaper and easier to maintain and serve for online traffic. This paper studies algorithms to optimally switch between such models for online inference. In the case when inference traffic is stationary, it makes sense to start with LLMs during the cold-start phase, and then switch over to small custom models once there is sufficient data. However, when distribution shifts are encountered, it is essential to fall back on LLMs while the custom models adapts to the distribution shift. We present an empirical study of such switching behaviors on 3 common real-world tasks like classification, regression, and forecasting across different data modalities like images, text, and time series and show how they can add value from the perspective of both cost and performance. | [
"Large Language Models",
"Inference",
"Model Switching",
"Machine Learning",
"Online Learning",
"Distribution Shift",
"Cold Start"
] | Accept | https://openreview.net/pdf?id=EPxzr9WU1R | https://openreview.net/forum?id=EPxzr9WU1R | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"qScyJqWyt0",
"jQquKUqfrq",
"exJL5gFQy0",
"RtgcnjDeNQ",
"9r3sBA2R9R",
"2k6hDypTvA"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741017537976,
1741017944880,
1741226298440,
1741031544578,
1740939820435,
1740999748260
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission28/Reviewer_VRUF"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission28/Reviewer_uqoV"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission28/Reviewer_3WD4"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission28/Reviewer_MSq3"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission28/Reviewer_zGy8"
]
],
"structured_content_str": [
"{\"summary\": \"This paper examines the potential for switching between small custom models and large pretrained language models. The goal is to solve the long-standing problems of cold start and distribution shift that plague the online inference tasks commonly encountered in machine learning.\\n\\nThe authors demonstrate the problematic cold start and distributional shift phenomena, then outline simple switching strategies. They find that LLMs could serve as effective placeholders during the cold-start and distribution shift periods in place of task-specific models.\", \"strengths_and_weaknesses\": \"### Strengths\\nThe ideas presented in this paper are appealing and should benefit the machine learning community at large.\\nThe methodological approach is sound and well presented. For instance, the obfuscation of suspected LLM training datasets is very important to ensure fair comparison of these online inference mechanisms.\\n\\n\\n### Weaknesses\\nThe main weakness I find with this paper is its presentation. Figures and Tables are referenced very far from where they were presented. For instance, Figure 1 is referenced at the end of the paper, even though it is crucial, particularly for introducing the various baselines. (Also, its caption is quite confusing, and should clarify that the value added here is the performance!)\\n\\nThe various baselines should be briefly described before encountering them in the tables. For instance, TSM-\\u03bb is first introduced at line 239. And the fact that the contribution methods are only described at the end even though they appear throughout is not conducive to clear understanding.\", \"some_questions_remain_unanswered\": [\"Why are some values in bold in Table 2? What do they represent?\", \"(Line 287): How are the labels used to train the binary classifier for HYN obtained?\"], \"suggestions\": \"The main suggestion is to address the questions I pointed out in Weaknesses. 
Additionally, what is $\\\\gamma$ at line 131?\\n\\nFix minor weaknesses and typos. For instance, line 045 is missing some terms, which obfuscates understanding.\", \"reason_for_giving_a_higher_score\": \"The work is important and valuable to this workshop. Interesting future work is envisioned based on principled methods.\", \"reason_for_giving_a_lower_score\": \"The overall paper presentation is lacking. More effort on presentation should help raise the score.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper studies online inference, where the distribution of input data undergoes some change. The paper considers two types of solutions: either using a generic large language model (LLM), using in-context learning to make it perform well on the task at hand without any retraining, or training a small Task Specific Model (TSM) on the fly on the task at hand. Using the LLM yields good out-of-the-box performance, at no training cost. When enough data has been seen, the TSM becomes a specialist and becomes better than the LLM.\\nThe authors experiment in this setting on three different domains, where the task switches mid-training.\\nThe authors then discuss possible switching strategies, in order to get the best of both worlds: when the task switches, first using the LLM, and then the TSM when enough data has been gathered.\\nThe authors focus the discussion on the training costs, assuming that the bottleneck is the cost of backprop -- hence only the TSM pays a non-zero price.\", \"strengths_and_weaknesses\": [\"# Strengths\", \"the paper is well written, the exposition is very clear\", \"the problem studied by the authors is a very important problem for the deployment in the wild of ml models, in the paradigm of large pretrained models.\", \"The experiments are convincing\", \"The discussion is very interesting.\", \"# Weaknesses\", \"The problem studied here has many different parameters, like TSM and LLM scale, LLM performance, type of distribution shift, hardness of the task to address, etc. There are also different types of relevant constraints, on either the cost of backward or forward, or cost of switching models. It is hard to explore this space exhaustively, and the authors already do a good job at clarifying the picture. However, I think that exploring more the direction of model scale would greatly enhance the paper. 
Indeed, if the TSM is small enough, the bottleneck is no longer the cost of backprop through it, but rather the inference of the LLM. The fact that the trained TSM is better than the LLM is also very task and TSM scale dependent (for instance, on a very hard reasoning task, a TSM trained for ages on the task cannot outperform the LLM). I think that having a more thorough discussion about these points would make the paper better\", \"likewise, the assumption that we only pay the cost of backprop is a strong choice, which restricts quite a lot the generality of the paper's discussion (indeed, automatic differentiation theory guarantees that cost of backprop < 3 * cost of inference, and in many cases I expect that cost of backprop through TSM << cost of inference for LLM). Also, in many applications, one only trains a few models but these models are served many times; in that case the cost of inference dominates.\"], \"suggestions\": \"I think that discussing the points mentioned above would clarify the paper.\", \"reason_for_giving_a_higher_score\": \"Not sure what this means\", \"reason_for_giving_a_lower_score\": \"Not sure what this means\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes how to switch between multiple models in inference. This is relevant to this workshop, as how to exploit an ensemble of networks is a form of modularity. The reviewers recommend acceptance, and we're happy to accept it to the workshop.\"}",
"{\"summary\": \"The paper studies when a pre-trained LLM should be used for a particular task as opposed to a task-specific trained model (TSM) in the online setting where a constant stream of observations and labels are observed. The authors study this question predominantly from the lens of the cold-start problem as well as distribution shifts and uncover that leveraging an LLM with in-context examples is beneficial for the cold-start problem (i.e. in low task-specific data regime) as well as in the case of distribution shifts. Overall, they pose an interesting question and show analysis across tasks on different modalities.\", \"strengths_and_weaknesses\": [\"**Strengths**\", \"The tasks considered in the paper are fairly diverse and they show that the conclusions the authors arrive at are shared across different modalities.\", \"The problem studied in the work is quite interesting and relevant, especially from the lens of production.\", \"**Weaknesses**\", \"The overall writing and presentation of the work could be significantly improved (see suggestions).\", \"The experimental details are missing. When the authors put examples in the context of the LLMs, how do they handle increasing number of tokens?\", \"Do their experimental results hold in the case of a different LLM or a different TSM? The authors should provide some form of sensitivity analysis on models as well.\", \"While relevant, why don't the authors consider the more natural setting where the TSM is a fine-tuning of an LLM model on the observations received?\"], \"suggestions\": [\"The writing can be significantly improved. In particular, I found that the description of the general problem misleading and unnecessary given the authors only tackle a particular setting of it. 
While I agree that the setting they studied is non-trivial, it could have just been explained as is without the added complexity of $\\\\beta$ or multiple models, which I believe should only be there if the authors are showing some preliminary analysis on it.\", \"The authors talk about TSM-$\\\\lambda$ before it has even been introduced. They should consider re-ordering some parts of the draft.\", \"The plots are a bit unreadable. The authors should consider a different color-scheme, line-width and introducing some markers.\"], \"reason_for_giving_a_higher_score\": \"The problem formulation fits well with the workshop theme and the work tackles an interesting and relevant problem. The analysis is interesting and highlights clear cases when and where TSM might outperform an LLM, and when not.\", \"reason_for_giving_a_lower_score\": \"The analysis done is a bit preliminary and does not handle a number of interesting cases: e.g., TSMs as fine-tuned LLM models or fine-tuning done on distilled LLM models. It is expected that LLMs would outperform TSMs in the cold-start problem **because** they are pre-trained on language while TSMs have no signal at the start. It is also expected they will do well on distribution shifts **because** they have been trained on large corpora spanning a number of different distribution shifts. In this regard, the findings are not surprising.\", \"rating\": \"6\", \"confidence\": \"5\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This work addresses the problem of adaptive model switching between large language models (LLMs) and smaller task-specific models (TSMs) for online inference. The motivation comes from the observation that LLMs perform well in zero-shot/few-shot settings but are generally expensive, whereas custom TSMs are more cost-effective but struggle with cold-start and distribution shifts. To balance these trade-offs, the paper proposes several adaptive switching algorithms - including Loss-Guided Switching (LGS), a Hypernetwork-based approach (HYN), and Uncertainty-Based Switching (UBS) - that decide, at each inference step, which model to use, aiming to minimize cost while maintaining accuracy. They conduct experiments across classification, regression, and forecasting tasks, using datasets from different modalities (text, images, time-series). Their results show that simple switching heuristics and learned policies can significantly outperform static strategies (always using LLM or TSM) in both performance (area under the learning curve, ALC) and cost efficiency (FLOPs).\", \"strengths_and_weaknesses\": \"Strengths\\n1. The paper tackles a practical and important problem in online inference by leveraging the strengths of both LLMs and TSMs. The motivation is well explained, particularly the challenges of cold-start and distribution shifts.\\n2. The experiments cover diverse tasks and data modalities, making the findings more generalizable across different scenarios.\\n3. The work raises some open research questions, which can help guide future work.\\n\\nWeakness\\n1. While the problem is relevant, the work lacks novelty since it mainly applies simple existing methods to this problem rather than introducing new techniques.\\n2. The experimental methodology is not fully explained, making it difficult to interpret the results. Some important details for reproducibility are missing. 
Additionally, the paper treats some aspects as trivial, such as using an LLM for time-series prediction, which is still an open research challenge.\\n3. The study focuses on an idealized scenario where distribution shifts are sudden and clearly defined. While this can be a valid simplification for the scope of this work, the paper does not discuss how its findings generalize to real-world cases, where shifts are often gradual and unpredictable. It would be stronger if it drew clearer parallels between the idealized setup and real-world challenges.\", \"suggestions\": [\"For reproducibility, it would be helpful to release the exact in-context prompts used for LLMs in the experiments in each of the datasets/modalities.\", \"The paper could discuss how LLMs are used in online learning and how to adapt them for regression tasks, which would add clarity.\", \"It is unclear why LLM performance remains constant if new in-context learning examples are added during the experiment (as mentioned in line 267). As the prompt is changing over time, some explanation is needed.\", \"Why not evaluate switching back to TSMs after some time instead of keeping LLMs indefinitely after a shift? This would significantly reduce costs and align better with adaptive online learning / model selection techniques. Using LLMs forever after is as prohibitively expensive as using them all the time.\", \"The work could explore simple but more effective approaches, such as online reinforcement learning-based policies that actively explore and exploit model choices, and could optimize cost and accuracy together.\", \"The work could discuss the limitations of its scope in more depth. 
For example, using FLOPs as the main cost metric does not capture important factors like latency, memory usage, or energy consumption, which are critical in real-world deployment.\"], \"reason_for_giving_a_higher_score\": [\"The problem is relevant and well motivated.\", \"The work makes a contribution to the field discussing relevant derivative problems.\", \"The work chooses good datasets for evaluation, covering different tasks and modalities.\"], \"reason_for_giving_a_lower_score\": [\"The work lacks novelty beyond empirical analysis of switching strategies.\", \"Its simplifying assumptions (immediate labels, no switching cost) reduce real-world applicability. The work lacks discussion on long-term model drift and gradual shifts.\", \"There is limited comparison with advanced model selection techniques.\"], \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper explores the problem of adaptive model switching in online inference for classification and regression tasks. The study compares the performance and cost trade-offs between using large language models (LLMs) and task-specific models (TSMs), particularly under cold-start conditions and distribution shifts. The authors conduct experiments on MNIST (image classification), sentiment analysis (text classification), and time series forecasting, demonstrating the advantages and limitations of LLMs versus TSMs. Additionally, they propose and evaluate switching methods to dynamically select between models to balance cost and performance.\", \"strengths_and_weaknesses\": \"Strengths:\\n\\nThe problem of cost & performance optimization by proper model selection is a relevant topic.\\nThe paper is easy to follow.\", \"weaknesses\": \"I am concerned by a potential lack of novelty.\\nThe first two experimental questions (RQ1 and RQ2) are showcasing experiments to measure the existence of well known behaviors: pre-trained LLMs can be used for downstream tasks, and they can be better than specialized models, especially under low-data settings (i.e. cold-start or distribution shift).\", \"rq3_is_the_most_interesting_experimental_question\": \"how to design a good switching/routing mechanism to get the best of both worlds ? Here a few simple methods are proposed and tested. I am not expert enough to assess the novelty of these approaches, but it seems that SOTA routing systems are not considered, or at least the paper is lacking a proper positioning among existing solutions.\", \"minor\": \"l.302 \\\"HYN is a relatively costlier algorithm as it learns a mapping from features to right model and performs better\\nif both TSM and LLM are constantly run on all samples in background.\\\" --> what is being shown in the figure then ? HYN with LLM and TSM inference at all time, or HYN with single inference type per sample ? 
By looking at Figure 1 I guess it is the former.\", \"suggestions\": \"Reduce emphasis on RQ1 and RQ2.\\nBetter position your proposed switching/routing solutions with respect to the state of the art.\", \"reason_for_giving_a_higher_score\": \"see Strengths And Weaknesses\", \"reason_for_giving_a_lower_score\": \"see Strengths And Weaknesses\", \"rating\": \"6\", \"confidence\": \"2\", \"workshop_fit\": \"4\"}"
]
} |
EAMnemlXB2 | Adaptive Local Training in Federated Learning | [
"Donald Shenaj",
"Eugene Belilovsky",
"Pietro Zanuttigh"
] | Federated learning is a machine learning paradigm where multiple clients collaboratively train a global model by exchanging their locally trained model weights instead of raw data. In the standard setting, every client trains the local model for the same number of epochs.
We introduce ALT (Adaptive Local Training), a simple yet effective feedback mechanism that can be exploited at the client side to limit unnecessary and degrading computations. ALT dynamically adjusts the number of training epochs for each client based on the similarity between their local representations and the global one, ensuring that well-aligned clients can train longer without experiencing client drift. We evaluated ALT on federated partitions of the CIFAR-10 and Tiny-ImageNet datasets, demonstrating its effectiveness in improving model convergence and stability. | [
"Federated Learning",
"Local Training",
"Adaptive ML"
] | Accept | https://openreview.net/pdf?id=EAMnemlXB2 | https://openreview.net/forum?id=EAMnemlXB2 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"i5lUPyMFve",
"ba7mLfcDIA",
"KILbDAUuVD",
"DkYz5uGDz7"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226299172,
1740233188288,
1740660305253,
1740540218683
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission25/Reviewer_Rakc"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission25/Reviewer_nUYo"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission25/Reviewer_q5KB"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposed an Adaptive Local Training framework for clients in an FL setup that dynamically adjusts the number of local training epochs for each client, addressing challenges like client drift in heterogeneous data settings. The paper received scores with high variance. We suggest the authors incorporate the comments and suggestions from reviewer nUYo to strengthen the paper. The paper seems relevant to the topic of decentralized training. Overall, we recommend accepting this work to the workshop.\"}",
"{\"summary\": \"This paper presents Adaptive Local Training (ALT), a novel method of dynamically adjusting training epochs per client based on cosine similarity measures of client model's embeddings and global model's embeddings. This serves to avoid excessive computation and mitigate client drift in cases of heterogeneous data distribution. The authors demonstrate improved performance and reduced computational costs on CIFAR-10 and TinyImageNet datasets compared to baselines like FedAvg and MOON.\", \"strengths_and_weaknesses\": \"Strengths:\\n\\n1. Novel yet straightforward proposal to reduce computational load and excessive training.\\n2. Well-written paper with a clear motivation.\\n3. Decent generalization shown across datasets and comparison with baseline methods.\", \"weaknesses\": \"1. Results not presented with different threshold functions.\\n2. Limited theoretical analysis of why the method works.\", \"suggestions\": \"1. Consider adding ablation study with threshold functions.\\n2. Provide theoretical explanations of why exactly the method works.\\n3. Consider referencing Chen et al. \\\"Dap-FL: Federated Learning Flourishes by Adaptive Tuning and Secure Aggregation\\\".\", \"reason_for_giving_a_higher_score\": \"1. Novel and straightforward approach with a detailed method.\\n2. Well presented paper\\n3. Sufficient experiments\", \"reason_for_giving_a_lower_score\": \"1. Limited theoretical analysis.\", \"rating\": \"8\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper introduces Adaptive Local Training (ALT) in Federated Learning (FL), a novel method that dynamically adjusts the number of training epochs for each client based on the similarity between local and global model representations.\\nIn traditional FL, all clients train for a fixed number of epochs per round, regardless of how closely their models align with the global model. In contrast, ALT halts training early for clients whose local representations have sufficiently aligned with the global model. More concretely, for each client k, training stops when the cosine similarity falls below an adaptive threshold:\\n\\n\\\\[\\n\\\\cos(\\\\mathbf{p}_s, \\\\mathbf{p}_g) < Th(r),\\n\\\\]\\n\\nwhere\\n\\n\\\\[\\nTh(r) = a + b \\\\frac{r}{R}\\n\\\\]\\n\\nand \\\\(r\\\\) is the current round and \\\\(R\\\\) is the total number of rounds.\\n\\nThe paper evaluates ALT on CIFAR-10 and Tiny ImageNet using its integration into FedAvg and MOON, demonstrating that it reduces total training epochs while maintaining comparable or improved accuracy.\", \"strengths_and_weaknesses\": \"Strengths:\\nKey concepts are supported by figures - (a), (c), and (d) illustrate the effectiveness of ALT in reducing total epochs while maintaining accuracy.\\nEasy integration with other FL algorithms - ALT can be used with existing methods like FedAvg and MOON\\nDemonstrates improvements in computational efficiency without training degradation\", \"weaknesses\": [\"Training instability - figure (b) shows a sinusoidal training curve that has clearly not converged at the limit of the graph, with no evidence therefore included in the paper that training is robust on the Tiny ImageNet dataset.\", \"Lack of clear causality - it is unclear whether ALT\\u2019s improvements stem from the adaptive schedule or simply from reducing the number of training epochs per round. A simple baseline that uniformly decreases the number of epochs per round should have been tested.\", \"Weak conclusion - The conclusions restate general benefits (e.g., energy savings) but do not effectively summarize key empirical results.\", \"Incorrect figure labelling - figure (c) is mislabeled\\u2014the y-axis should be \\\"Total Epochs Per Round\\\" instead of cumulative epochs.\", \"Questionable novelty - adaptive local training in FL has been explored in FedProx (Li et al., 2020) and SCAFFOLD (Karimireddy et al., 2020), though ALT introduces a specific similarity-based criterion\", \"Lack of theoretical justification - there is no explanation for the choice of the adaptive threshold equation beyond empirical results.\", \"No figure references in the text - figures are not referenced within the main paper, making it harder to connect findings to visual results.\", \"Weak justification for parameter selection - the values of \\\\( a \\\\) and \\\\( b \\\\) are not empirically justified, nor is there an ablation study exploring different settings\", \"Limited non-i.i.d. evaluation - the single concentration parameter \\\\alpha = 100 results in a data split that is almost i.i.d. A lower \\\\alpha should have been tested to assess performance under stronger non-i.i.d. partitions.\", \"Throwaway statements included throughout the paper that are non-specific - for example, \\u201cIn addition to that, when the clients\\u2019 data distribution is very heterogeneous, training each client for a fixed pre-defined number of steps leads to client drift and complicates the aggregation.\\u201d What does complicating the aggregation mean?\"], \"suggestions\": [\"Reference figures within the paper\", \"Not clear whether improvement is related to the schedule. You might see similar performances if you simply dropped the number of epochs each client trains for each round. This wasn\\u2019t tested.\", \"Rationale as to the choice of parameters a and b - an empirical analysis would be sufficient\", \"Test a variety of different values for the concentration parameter \\\\alpha to see whether the method is sensitive to partitions which are more strongly non-i.i.d.\"], \"reason_for_giving_a_higher_score\": [\"ALT provides a simple mechanism for reducing computation in FL by dynamically adjusting client training epochs. Its strengths include:\", \"Reduced computational overhead while maintaining accuracy.\", \"Compatibility with existing FL algorithms (FedAvg, MOON).\", \"Clear empirical benefits demonstrated on CIFAR-10 and Tiny ImageNet.\"], \"reason_for_giving_a_lower_score\": [\"The paper suffers from several major weaknesses that limit its impact to the field:\", \"Unclear novelty - adaptive training in FL has been explored before, and ALT does not introduce a fundamentally new concept.\", \"Lack of rigorous theoretical grounding - it is unclear to me what the theoretical rationale is behind the formulation of the linear schedule.\", \"Weak experimental baselines - a simple method that gradually reduces epochs per round was not tested, leaving uncertainty about whether ALT\\u2019s benefits can be similarly met by using a non-adaptive approach that simply reduces the number of epochs per round\", \"Training instability - the curve (Figure b) contradicts claims of improved convergence. The gradient of the curve is still markedly positive at the limit of the graph and has not converged\", \"Superficial conclusions - the results discussion is weak, with a tenuous claim relating to implications for energy efficiency that is not well-supported by the data.\", \"Without addressing these issues, the paper falls short of making a significant contribution to the field.\"], \"rating\": \"3\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper proposes an Adaptive Local Training framework that dynamically adjusts the number of local training epochs for each client, addressing challenges like client drift in heterogeneous data settings. Empirical results on CIFAR-10 and TinyImageNet show the effectiveness of the proposed method.\", \"strengths_and_weaknesses\": \"## Strengths\\nThe proposed ALT framework provides new ideas and methods for addressing client drift and data heterogeneity by dynamically adjusting the number of local training epochs based on the similarity between local and global model representations. Moreover, the proposed method is simple and effective and can be integrated into existing methods like FedAvg.\\n\\n## Weaknesses\\n1. The paper lacks relevant theoretical analysis.\\n2. The paper lacks additional experiments to discuss the impact of varying degrees of data heterogeneity on the performance of the method.\", \"suggestions\": \"See weaknesses. Additionally, more baseline methods could be included to demonstrate the flexibility and adaptability of the proposed ALT.\", \"reason_for_giving_a_higher_score\": \"The proposed method offers new insights for addressing client drift and data heterogeneity.\", \"reason_for_giving_a_lower_score\": \"The paper lacks a convergence analysis of the proposed method, and the experimental validation is insufficient.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}"
]
} |
8wt2eKkVe6 | Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts | [
"Samin Yeasar Arnob",
"Zhan Su",
"Minseon Kim",
"Oleksiy Ostapenko",
"Doina Precup",
"Lucas Caccia",
"Alessandro Sordoni"
] | Model merging aims to integrate knowledge from multiple finetuned experts into a single, unified multi-task model. Merging parameter-efficient task experts has recently gained growing attention as a way to build modular architectures that can be rapidly adapted on the fly for specific downstream tasks, without requiring additional fine-tuning. Typically, LoRA serves as the foundational building block of such parameter-efficient modular architectures, leveraging low-rank weight structures to reduce the number of trainable parameters. In this paper, we study the properties of sparse adapters, which train only a subset of weights in the base neural network, as potential building blocks of modular architectures. First, we propose a simple method for training highly effective sparse adapters, which is conceptually simpler than existing methods in the literature and surprisingly outperforms both LoRA and full fine-tuning in our setting. Next, we investigate the merging properties of these sparse adapters by merging adapters for up to 20 natural language processing tasks, thus scaling beyond what is usually studied in the literature. Our findings demonstrate that sparse adapters yield superior in-distribution performance post-merging compared to LoRA or full model merging. Achieving strong held-out performance remains a challenge for all methods considered. | [
"Sparse adapter",
"Parameter-efficient finetuning",
"Model merging",
"LLM"
] | Accept | https://openreview.net/pdf?id=8wt2eKkVe6 | https://openreview.net/forum?id=8wt2eKkVe6 | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"yOV08kWY7L",
"v2rF4Gwscp",
"pLRhjhsa0u",
"SK29JYtrfW",
"ASygUfoxGl"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226299363,
1740263698135,
1740630278341,
1741196595548,
1740992098995
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission46/Reviewer_RKzo"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission46/Reviewer_9WoG"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission46/Reviewer_CSHa"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission46/Reviewer_YaPN"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper explores the idea of sparse finetuning as a PEFT method. Most of the reviewers agree that the paper is well written and is a good fit for the workshop. Moreover, reviewers are mostly satisfied with the experiments in the paper. We believe that further clarifying the benefits of adopting sparse fine-tuning for memory efficiency would improve the paper. Specifically, how this sparse finetuning reduces the memory footprint during training. Overall, the reviews are positive and hence we recommend acceptance.\"}",
"{\"summary\": \"This work explores the idea of sparse fine-tuning for performing Parameter Efficient Fine-tuning (PEFT) and improving model merging on NLP. The proposed framework uses the SNIP saliency score to periodically calibrate the mask during sparse adaptation. Experiments on NLP are carried out on established benchmarks, considering both held-in and held-out tasks merging performance.\", \"strengths_and_weaknesses\": \"**Strengths**\\n- The proposed approach shows strong results, leaving no doubt in its benefits in terms of performance.\\n- Both held-in and held-out performance is assessed, which is interesting and a (sometimes) overlooked factor in model merging.\\n- The paper is clearly written and easy to follow. The pseudo-code (Algorithm 1) further enhances the understanding of the methodology.\\n\\n------------------\\n\\n**Major Weaknesses**\\n\\n**W1.** The paper's contribution is very limited as Sparse Fine-tuning/Adaptation is not a novel concept [1,2]. The only difference I'm seeing is in the way the sparse mask gets calibrated (using a very well known Pruning-at-Initialization saliency score).\\n\\n**W2.** I'm not clearly seeing the benefit in adopting sparse fine-tuning for memory efficiency (i.e. as a PEFT method), as it costs full memory at least in the beginning and I'm not quite seeing how it reduces the memory footprint after mask calibration. For instance, if a mask of a parameter has at least one non-zero entry, then it is unclear how it is possible to move that parameter on cpu and throw away the mask (as at least one element still requires to be updated). Also, the cost in memory of the masks seemingly is not taken into account, as well as the increased computation required by the element-wise multiplication with the masks.\\n\\n**W3.** A comparison with other sparse fine-tuning approaches [1,2] is missing and would solidify the validity of this study.\\n\\n[1] A. Panda, et al. \\\"Lottery ticket adaptation: Mitigating destructive interference in llms.\\\" arXiv preprint arXiv:2406.16797 (2024).\\\\\\n[2] L. Baohao Liao, et al. \\\"Parameter-Efficient Fine-Tuning without Introducing New Latency.\\\" In ACL, 2023.\\n\\n------------------\\n\\n**Minor Things**\\n- I suggest to highlight better the contributions (eg. with bullet points) at the end of the Introduction.\\n- Also, it is not really clear whether a global or local masking approach is adopted when calibrating the masks.\", \"suggestions\": \"I would suggest the authors to thoroughly examine and report computational costs and memory efficiency analyses, as true sparse masks theoretical savings hardly translate to real improvements. Also, some minor clarifications would improve the treatment (eg. does the method consider a global or layer-wise ranking in the mask calibration logic?)\\n\\nFinally, I would suggest to compare the proposed framework with other sparse fine-tuning approaches (see weaknesses).\", \"reason_for_giving_a_higher_score\": \"See Suggestions.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"4\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The authors propose a novel, effective method to train sparse adapters for fine-tuning. They use an element-wise mask and a block-wise mask to make the weight updates sparse with an efficient training method. Additionally, there are experiments to analyse different merging techniques for various fine-tuning methods, showing the sparse adapters' advantage on held-in tasks.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The training scheme for the sparse adapters is computationally efficient since the mask is only updated during the first epoch and only a small fraction of model weights are changed during fine-tuning, yet the procedure is highly effective in the single task setting as shown experimentally. \\n2. The writing is clear and well-structured, making the paper readable and easy to follow. \\n3. There is a comprehensive comparison with various merging methods to show the advantage of sparse adapters on held-in data when averaging them. This is probably due to the well-designed merging update of the sparse adapter to account for the parameter update overlaps.\", \"weaknesses\": \"1. There is a small ambiguity on the way the mask retains the parameters based on the saliency scores, if Top K or a threshold was used as both are mentioned but are different. \\n2. Regarding the memory requirements, the short discussion was appreciated, but perhaps a concrete comparison, along with an efficiency analysis would improve the paper.\", \"suggestions\": \"1. Having a table for the single task performance (Figure 1) would be more straightforward to understand your results.\\n2. The percentages on line 269 seem a bit confusing for me, I guess it is a 13.52%/11.01%/12.48% increase in performance as compared to the baseline but using the actual difference in rouge-L would be clearer for a reader.\", \"minor\": \"Figure 2 is labelled with left and right instead of a and b, which is inconsistent with the text. Typo (equation) on line 160. Parenthesis citation on line 60.\", \"reason_for_giving_a_higher_score\": \"The paper is well-written with clear motivation and comprehensive experimental results that illustrate the method's effectiveness. Naturally, since this is a PEFT method, it fits well with the theme of the workshop.\", \"reason_for_giving_a_lower_score\": \"Sparse adaptation is not an entirely new idea, even though the author's design of it may be novel. The method is comparable but does not improve on held-out tasks during merging.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"This paper presents a parameter-efficient fine-tuning method based on learned \\\"sparse adapters\\\" where first a sparse mask (either element-wise or block-wise) is learned using a saliency-based technique, then selected parameters (via sparse mask) are fine-tuned for the specific tasks separately. The author finds that using these sparse adapters outperforms the same size LoRA adapters and full fine-tuning.\\n\\nFurthermore, the paper also experiments with merging the trained sparse adapters to evaluate on both held-in and held-out tasks. While merging sparse adapters outperforms all other compared merging methods (Averaging, Task-arithmetic, Ties, Breadcrumbs, and Lora merging) on held-in tasks, it falls slightly behind Ties merging on held-out tasks.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The paper is well written, and experiments are carefully designed in terms of fair comparison for the proposed framework and the evaluation setting. \\n2. Although there are similar methods for sparse fine-tuning (see \\\"weaknesses\\\" below), this paper applies a saliency-based method to compute a sparse mask, unlike the previous work. Also, the authors show that updating the mask multiple times during the first epoch is useful.\\n3. In a single-task setting, the proposed sparse adapters lead to a better performance than LoRA adapters with the same size and interestingly also outperforms the full model fine-tuning.\", \"weaknesses\": \"1. I think the only major weakness of the paper is that similar sparse finetuning frameworks already exist in the literature, as mentioned in the paper. There are certain differences; however, the novelty of the proposed framework is limited.\", \"suggestions\": \"Although similar results have been reported where the sparse finetuning outperforms full model finetuning, it would be good to have a comprehensive analysis of this to validate that it is not a side effect of fine-tuning data size or hyperparameter selection.\", \"reason_for_giving_a_higher_score\": \"I refer to the \\\"strengths\\\" of the paper I mentioned above as reasons for a high score.\", \"reason_for_giving_a_lower_score\": \"I refer to the \\\"weaknesses\\\" of the paper I mentioned above as reasons for a low score.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The paper proposes a method for training sparse adapters, which is demonstrated to be beneficial in the post-training merging process. However, the proposed merging method is simple averaging, and a comparison with more advanced model merging methods is missing.\", \"strengths_and_weaknesses\": \"Strengths:\\n\\n1. The paper is well-structured and clearly presented.\\n2. The paper demonstrates that sparse adapters outperform LoRA and full fine-tuning in certain scenarios, particularly in terms of parameter efficiency and merging properties.\\n3. The incorporation of saliency-based pruning to identify important weights is a well-motivated choice.\", \"weaknesses\": \"1. The proposed merging method is simple averaging. The paper lacks comparison with subspace-based model merging methods, such as Ties-Merging and TALL mask.\\n2. The paper lacks a strong theoretical foundation or analysis. While the empirical results are promising, the authors do not provide a rigorous theoretical justification for why sparse adapters outperform LoRA or full fine-tuning.\\n3. The paper lacks sufficient references to recent works on model merging.\", \"suggestions\": \"Refer to the weaknesses.\", \"reason_for_giving_a_higher_score\": \"Refer to the strengths.\", \"reason_for_giving_a_lower_score\": \"Refer to the weaknesses.\", \"rating\": \"6\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}"
]
} |
5ukL6nPcYe | HDEE: Heterogeneous Domain Expert Ensemble | [
"Oguzhan Ersoy",
"Jari Kolehmainen",
"Gabriel Passamani Andrade"
] | Training dense LLMs requires enormous amounts of data and centralized compute, which introduces fundamental bottlenecks and ever-growing costs for large models.
Several studies aim to reduce this dependency on centralization by reducing the communication overhead of training dense models.
Taking this idea of reducing communication overhead to a natural extreme, by training embarrassingly parallelizable ensembles of small independent experts, has been shown to outperform large dense models trained in traditional centralized settings.
However, existing studies do not take into account underlying differences amongst data domains and treat them as monolithic, regardless of their underlying complexity, size, or distribution.
In this paper, we explore the effects of introducing heterogeneity to these ensembles of domain expert models.
Specifically, by allowing models within the ensemble to vary in size--as well as the number of training steps taken depending on the training data's domain--we study the effect heterogeneity has on these ensembles when evaluated against domains included in, and excluded from, the training set.
We use the same compute budget to train heterogeneous ensembles and homogeneous baselines for comparison.
We show that the heterogeneous ensembles achieve the lowest perplexity scores in $20$ out of the $21$ data domains used in the evaluation. Our code is available at https://github.com/gensyn-ai/hdee. | [
"Large Language models",
"ensemble model",
"heterogeneity"
] | Accept | https://openreview.net/pdf?id=5ukL6nPcYe | https://openreview.net/forum?id=5ukL6nPcYe | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"Un8UG6lNVf",
"Ln6fU0D1cp",
"IzVYTS2eQU",
"EUw1pq5ZLo"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226299585,
1740614226996,
1740623186357,
1740627286693
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission27/Reviewer_Jueg"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission27/Reviewer_sVzH"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission27/Reviewer_HLoU"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposes an improvement on BTM-style methods by exploring the effect of having heterogeneous expert models with different sizes and training budgets. Some reviewers argued that the novelty is somewhat limited but most of the reviewers felt that the paper was clear and is a good fit for the workshop. We recommend fixing the font used in the paper as it deviated from the provided ICLR template font. Overall, we recommend accepting the paper.\"}",
"{\"summary\": \"The paper demonstrates that heterogeneous ensembles achieve lower perplexity across domains versus homogeneous ensembles. Two kinds of heterogeneity are considered: expert model size and expert training steps. They are compared to homogeneous ensemble training. Each ensemble training method is given a fixed compute budget. The paper explores the idea of distributing a fixed compute budget non-uniformly across a set of experts according to the difficulty of the domain.\", \"strengths_and_weaknesses\": \"The strongest aspect is the simplicity and clarity of the idea: take an existing idea of ELMForests and consider what happens when we vary the model size or training time of experts depending on their difficulty.\", \"the_weakest_aspect_is_the_methodology\": \"difficulty is determined based on perplexity, which is also the thing that is being measured. Therefore, this may simply be a tautology: it may be a simple observation about the shape of the perplexity curve for different domains.\", \"suggestions\": \"I would like to have seen more experiments run to confirm the cause and effect of allocating more compute to more difficult domains. It would have been great to consider some ablations. For example, what happens if more compute is allocated to easier domains and less to more difficult domains? Do we get worse performance than the homogeneous baseline? Is it better? Why? What happens if they are randomly assigned different amounts of compute?\", \"reason_for_giving_a_higher_score\": \"Simple, clear idea. Experiments show promise. This would enable more efficient allocation of compute for training of embarrassingly parallelizable ensembles of small independent experts.\", \"reason_for_giving_a_lower_score\": \"More ablations needed. Not a clear cause and effect here of greater difficulty requiring more compute as the paper claims. For example, it could simply be that non-uniform allocation of compute between experts has benefits in itself.\", \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"The authors explore the effect of heterogeneity in model ensembles, where heterogeneity is introduced at the model level (allowing variable model size) and at the training level (allowing varying training steps) depending on the difficulty of the task. The method permits better allocation of compute depending on task difficulty for the domain experts in the ensemble, resulting in lower perplexity scores in almost all domains.\", \"strengths_and_weaknesses\": \"Strengths:\\n\\n1. The paper is clearly written, with a well defined problem setting and logical conclusions drawn from the arguments and experiments in the paper.\", \"weaknesses\": \"1. Lack of novelty and limited contribution. That using a larger model or training for longer on a harder task will produce better results is a well-established result already. Indeed we have well-substantiated results regarding the compute optimal ratios of model size, data size, and training time from scaling laws [1,2], and so this paper seems to just be a direct application of known results. \\n\\n2 (somewhat more minor). Experimental validation is limited. The paper uses quite small models, starting at 5M and going up to 135M params. I appreciate the problem setting selected by the authors requires a lot of training, and so very large models may be overly burdensome, but for an empirical paper I would've hoped for models at larger scale.\\n\\n[1] Kaplan et al. Scaling Laws for Neural Language Models\\n\\n[2] Hoffmann et al. Training Compute-Optimal Large Language Models\", \"suggestions\": \"1. The authors should consider to what extent their contribution is novel. One way to build on the knowledge on compute optimal model size-training-data size could be to more closely consider how ensembling interacts with these properties. For example, are there different ensembling strategies that better leverage heterogeneity?\\n\\n2. For an empirical paper, I would encourage the authors to be as thorough as possible with their experimental validation and to try avoid using very small models.\", \"reason_for_giving_a_higher_score\": \"The paper is clearly written and easy to follow\", \"reason_for_giving_a_lower_score\": \"The main issue is the novelty of the work, as I believe the main lesson of the paper - that using larger models or training them for longer on harder tasks improves performance - is already well-established.\", \"rating\": \"4\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper explores the impact of heterogeneous experts (size and # of training steps) for branch-train-merge type of approaches. I think the paper is perfect fit for the workshop.\", \"strengths_and_weaknesses\": \"\\\\+ perfect topic match for the workshop\\n\\n\\\\+ nicely written until experimental setting\\n\\n\\\\- experimental setting a bit dense and confusing\\n\\n\\\\- results are hard to read and might not be very surprising\", \"suggestions\": [\"Main suggestion is to make the experimental part is a bit easier to read, I found it dense and a bit confusing.\", \"It took me a while to understand what were the three \\\"setups\\\" (Tiny spread, ...) and why they were called as such, it's not written in the paper.\", \"it took me another while to figure out what Tiny spread - M_he I_ho means, given that both I and M vary in the the three settings.\", \"Degree of heterogeneity might be better defined: is there a particular formula you are looking at?\", \"Some sentences are hard to grasp, e.g. \\\"heterogeneous models, when combined, perform better as the degree of heterogeneity increases\\\", or a bit trivial \\\"This implies that when the model sizes are closer, training with more data is more impactful than training with a larger model.\\\"\", \"Hard to read results in Table 3, could you maybe compute an average?\", \"Another suggestion is whether you really need continual training of the experts on the three datasets for each level of difficulty. Training on the three datasets jointly (each column) would make the writing easier and still be valid to demonstrate your results?\"], \"reason_for_giving_a_higher_score\": \"n/a\", \"reason_for_giving_a_lower_score\": [\"The experimental setting might be explained more clearly, had to go back and forth few times and not sure to still have gotten everything right.\", \"All models are pretty small, experiments are still toyish.\"], \"rating\": \"6\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}"
]
} |
5IlxDGpSOl | Tight Clusters Make Specialized Experts | [
"Stefan Nielsen",
"Rachel S.Y. Teo",
"Laziz Abdullaev",
"Tan Minh Nguyen"
] | At the core of Sparse Mixture-of-Experts (MoE) models is the router that learns the clustering structure of the input distribution in order to direct tokens to suitable experts. However these latent clusters may be unidentifiable, causing slow convergence, vulnerability to contamination, and degraded representations. We examine the router through the lens of clustering optimization, deriving optimal feature weights that maximally distinguish these clusters. Using these weights, we compute token-expert assignments in an adaptively transformed space that better separates clusters, helping identify the best-matched expert for each token. In particular, for each expert cluster, we compute weights that scale features according to whether that expert clusters tightly along that feature. We term this novel router the Adaptive Clustering (AC) router. Our AC router confers three connected benefits: 1) faster convergence, 2) better robustness, and 3) overall performance improvement, as experts are specialized in semantically distinct regions of the input space. We empirically demonstrate the advantages of our AC router in language modeling and image classification in both clean and corrupted settings. | [
"Mixture-of-Experts",
"Clustering",
"Robustness"
] | Accept | https://openreview.net/pdf?id=5IlxDGpSOl | https://openreview.net/forum?id=5IlxDGpSOl | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"y9rOeLlAAt",
"gJEj6CTsgI",
"SyfDZ2Qi5j",
"Rglq1A79Ag",
"8REsmS9Wsi"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741226297927,
1740652044380,
1741060065071,
1741194170800,
1741085061782
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission37/Reviewer_ygoQ"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission37/Reviewer_RZE9"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission37/Reviewer_gZGU"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission37/Reviewer_3bPS"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work proposes a new router for Sparse MoE enabling more specialized experts. This is relevant to the workshop, as it improves current modular capabilities. All the reviewers recommend acceptance, and we're pleased to accept it to the workshop.\"}",
"{\"summary\": \"This paper describes an improved token-expert routing method for MoE. It proposes to view the token-expert assignment as a feature-weighted clustering problem and provides a theoretical framework to analyze the robustness and convergence.\", \"strengths_and_weaknesses\": \"Pros:\\n\\n1. The proposed method is supported by rigorous theoretical analyses for its robustness.\\n\\n2. Experiments on relatively large datasets and both language and vision domains validate the effectiveness of the proposed method compared to established baselines.\", \"cons\": \"1. The writing and organization of this paper could be improved. The formal description of the proposed method is unnecessarily long considering that the proposed method is not so complicated. I would recommend saving space by using straightforward descriptions and moving more experiments from the appendix into the main content.\\n\\n2. The experiments are based on rather small-scale models and MoE methods proposed by 2022. It would be a valuable addition if more experiments were conducted on recent MoE models, especially the language models.\", \"suggestions\": \"There are some missing references in the appendices, especially A.1.\", \"reason_for_giving_a_higher_score\": \"The theoretical analysis is impressive.\", \"reason_for_giving_a_lower_score\": \"The experiments are not so hard to argue.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"This paper introduces a novel routing mechanism, the Adaptive Clustering (AC) router, for Mixture-of-Experts (MoE) architectures. The core idea is to determine token-expert assignments within an adaptively transformed space that more effectively uncovers latent data clusters. This transformation is guided by a feature-weighted clustering optimization approach, where features that enhance compact clustering for each expert receive higher weights. The authors present both theoretical and empirical evidence demonstrating the method\\u2019s advantages, including faster convergence, improved robustness to data contamination, and superior performance across language modeling and image classification tasks.\", \"strengths_and_weaknesses\": \"# Strengths\\n\\n1. The paper addresses a very important topic: how to optimally assign tokens to each expert, which is a crucial contribution.\\n2. The paper provides solid theoretical results. \\n3. Experimental results are convincing. Especially Figure 2 is amazing. Faster convergence can translate into massive cost savings at large scale.\\n\\n# Weaknesses\\n\\n1. The provided experiments are conducted with small scale models (220M). It is unclear how these insights would scale with larger models.\\n2. More ablations with hyperparameters would have been helpful. But since this is a workshop paper, that is acceptable.\", \"suggestions\": \"1. At least a single large scale model experiment would have been more illuminating.\\n2. It is unclear how this method works with deeper models (more layers) as there can be an error propagation as the model grows in depth. It is good to include a discussion on this.\", \"reason_for_giving_a_higher_score\": \"The paper is well motivated and well written. It provides a solid theoretical analysis and backs it up with experiments.\", \"reason_for_giving_a_lower_score\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The paper presents an interpretation of routing in MoE models as a form of clustering. From this interpretation, the Adaptive Clustering router is introduced, which computes token-expert assignments in a transformed space. The experiments show that an MoE with this router achieves better results than Switch Transformers and GLaM for language modeling, and better than Swin Transformers for image classification.\", \"strengths_and_weaknesses\": [\"**Strengths**\", \"The proposed approach is well founded, starting from a (not novel but) refreshing perspective.\", \"All the theoretical guarantees and propositions are well explained and proved with rigor, with quite reasonable assumptions (in most cases, see later).\", \"The proposed router is evaluated on different Transformer-based architectures and tasks (language modeling and image classification).\", \"The paper is excellently written, kudos to the authors.\", \"**Weaknesses**\", \"The matrix $\\\\mathbf{M}_{k^*}^{l - 1}$ is based on the token assignments (from the previous layer). This can clearly break causality in auto-regressive models (the weight of a token at time $t$ may depend on the clustering of future tokens), so it's not clear how the authors (correctly) applied this method for language modeling. This means that the PPL and accuracy evaluations could be potentially invalid.\", \"Comparison with other baselines is lacking. For instance, \\\"On the Representation Collapse of Sparse Mixture of Experts\\\" (Chi et al., 2022) also suggests using a (low-rank) linear projection to compute the router weights. \\\"ModuleFormer: Modularity Emerges from Mixture-of-Experts\\\" (Shen et al., 2023) uses an MLP instead of a linear projection. Finally, clustering and Optimal Transport can be related, and there's a plethora of works using different OT-related approaches for MoEs (e.g. see the survey \\\"Routers in Vision Mixture of Experts: An Empirical Study\\\").\", \"Many of the theoretical claims are made based on fixed tokens and routing parameters. However, these are not fixed when we train the model. Thus, claims regarding the \\\"optimality\\\" of AC may be irrelevant in practice (i.e. Proposition 1 and 2).\"], \"suggestions\": \"Address the mentioned weaknesses. In terms of writing suggestions, I find the paper of excellent quality.\", \"reason_for_giving_a_higher_score\": \"In order to give a higher score, I would need to see a convincing explanation on the causality issues that I raised, and a proper comparison with other related works.\", \"reason_for_giving_a_lower_score\": \"Even if the language model results are invalid due to causality breakage, studying the router through the lens of clustering is also interesting. Plus, the method is perfectly applicable for encoder models (e.g. image classification). Thus, I could maybe downgrade the rating a bit due to the causality concerns, but I still find the paper very interesting.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"## Summary\\nThe authors address the problem of expert-token \\\"misrouting\\\" within the router component of Sparse Mixture of Experts (SMoE) models/layers with an \\\"Adaptive clustering\\\" (AC) router. In the AC MoE layer, the expert-token score is modified to include a simple and computationally cheap weighting/scaling transformation applied on the token vectors, as inspired by the clustering optimization technique of finding feature weightings that represent how much a given cluster \\\"cares\\\" about each dimension/feature proportionally. The authors claim that this simple weighting leads to: \\n- faster convergence for training, as the Hessian of the loss function has a lower condition number compared to the loss for the \\\"naive\\\" MoE with a router that doesn't include the weighting transformation\\n- robustness, as a result of the weighting transform the token clusters corresponding to the experts have better separability, thus leading to higher tolerance for noise in the token vectors\\n- better overall performance, because the clusters are more semantically distinct and corresponding experts specialized\\n\\nThe form the transformation takes is a $ d \\\\times d$ diagonal matrix with entries being the reciprocals of each dimension's \\\"spread\\\" (the authors use mean absolute deviation for \\\"spread\\\") within the token vectors assigned to that expert cluster.\", \"strengths_and_weaknesses\": \"## Strong\\nThe claim that the weighting matrix acts as an effective preconditioner on the Hessian with respect to the expert vector $e_i$ is straightforward and convincing. Paired with empirical results in Figure 2 and the fact that the method requires relatively little additional computation, it is compelling.\\n\\nPerformance improvements are modest but consistent against appropriate baselines using appropriate evals, which paints a convincing picture.\\n\\nEmpirical results for robustness support that the AC router has a significant effect in handling noise and adversarial inputs.\\n\\nThe fact that the scaling matrix can approach an identity matrix for uniform values of spread across dimensions frames this method as something that can be implemented \\\"just in case\\\" with negligible downside due to the low resource requirement.\\n\\n\\n## Weak\\n\\nThe cluster separability boost shown in lemma 2 is based on data from a Gaussian mixture model with the same number of sources as the number of clusters. A less idealized case would strengthen the conclusions of the authors.\\n\\nThe derivation of the weights is based on tokens belonging to a single cluster. The cluster mapping function r, which acts as an analogy to the MoE router, is described as a classifier, which would mean our router sends a token to only one expert. Since the paper concerns the more general case of sending tokens to multiple experts, why can we expect these weights derived from single-cluster assignment to be useful? The vision model the authors trained does indeed use a top 1 router, but the text model uses top 2 routing.\\n\\nThe formulation of AcMoE layers in definition 2 uses the clustering/assignment from the previous layer as the scaling matrix for the current layer, which implies that the clustering of the current layer should be similar to the clustering of the previous layer. Why is that assumption not an issue?\\n\\n### Questions\\n\\nThe derivation of this scaling matrix is shown as coming from a clustering optimization, in which optimal weights are found analytically for a fixed clustering scheme. The solution for the weights in this context is shown in equation 5 and the proof backing it is shown in the appendix. However, there is a leap from eqn 5:\\n$$\\nw_{qk} = \\\\frac{\\\\lambda/d}{s_{qk} + \\\\alpha_k}\\n$$\\nto the authors matter-of-factly describing the weights as proportional to the reciprocal of the spread\\n$$\\nw_{qk} \\\\propto \\\\frac{1}{s_{qk}}\\n$$\\nwithout explanation for why we can effectively ignore the $\\\\alpha_k$ constant term. In the proof of lemma 3 in appendix A1, it is shown based on the shape of the $\\\\phi$ functions that there are $d$ values of $\\\\alpha_k$ based on the root finding formulation and that the roots will lie in the intervals $ (s_{q-1,k}, s_{qk}) $ except for the rightmost interval which is $ (s_{d,k}, \\\\infty ) $.\\nSo all put together, I am not sure why the alpha terms in the denominator can be ignored. Is it because you can say that it is similar to one of the $s_{qk}$ and thus we can combine s and alpha and count it as a multiplicative constant?\", \"suggestions\": [\"## Suggestions\", \"Highlight assumptions in theoretical analysis more explicitly and address how these assumptions are not met in experimental data/setup.\", \"Address the load balancing aspect of routing. Does the AC router still allow for experts to be utilized well?\", \"Cover more training convergence results\", \"### Nitpicks\", \"a few times in appendix A1 text refers to \\\"Eqn. ??\\\" which I assume is supposed to be eqn 5\"], \"reason_for_giving_a_higher_score\": \"Very strong experimental results, compelling theoretical analysis, low cost, evaluated against both clean and \\\"corrupt\\\" test sets\", \"reason_for_giving_a_lower_score\": \"assumptions of theoretical analysis not described deeply and experimental set up is not compared to these assumptions\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"5\"}"
]
} |
4O8nzTkHPI | Momentum Look-Ahead for Asynchronous Distributed Low-Communication Training | [
"Thalaiyasingam Ajanthan",
"Sameera Ramasinghe",
"Gil Avraham",
"Yan Zuo",
"Alexander Long"
] | Distributed Low-Communication (DiLoCo) allows large-scale model training across geographically distributed datacenters by reducing the communication overhead in the data parallel setting. Asynchronous DiLoCo further relaxes the requirement to synchronize the model updates, eliminating any bottlenecks due to slow devices or interconnects. Nevertheless, asynchronous updates introduce *stale (or delayed) gradients* as model updates and gradient computation are no longer synchronized. To alleviate staleness, we introduce a look-ahead based delay correction mechanism by *extrapolating the negative direction of momentum*. Our experiments on language modelling tasks with decoder-only architectures demonstrate that our approach consistently outperforms asynchronous and synchronous DiLoCo methods in both homogeneous and heterogeneous settings. | [
"Asynchronous Diloco",
"Nesterov method",
"Momentum look-ahead"
] | Accept | https://openreview.net/pdf?id=4O8nzTkHPI | https://openreview.net/forum?id=4O8nzTkHPI | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"YmwM9kpumV",
"UKgn82XDP4",
"TTWy9nVSGZ",
"NxHu6W1b75"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740569492399,
1741226298768,
1740702027040,
1740670247087
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission26/Reviewer_XQpt"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission26/Reviewer_UTA3"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission26/Reviewer_542C"
]
],
"structured_content_str": [
"{\"summary\": \"This work provides an extension to Asynchronous DiLoCo taking inspiration from Nesterov Accelerated Gradient.\\n\\nBy taking a 'look-ahead' step in the direction of negative momentum, convergence can be accelerated as losses from out-of-sync updates (Async DiLoCo) are minimized.\\n\\nThe paper demonstrates experimentally the strength of DiLoCo+NAG, showing that their technique can even outperform vanilla DiLoCo in both number of iterations to convergence and wall-clock time to convergence.\", \"strengths_and_weaknesses\": \"The mathematics of this paper is strongly defined and well-cited.\\n\\nThe experimental results are persuasive but I felt a lack of explanation of why this method has been shown to be *so* powerful compared with previous works.\", \"suggestions\": \"More explanation of why this method is so strong would be beneficial; instead, I found discussion of the specifics of implementation (eq 7) which were less necessary.\\n\\nI was confused by Eq. 1, as they define the outer-optimization step of DiLoCo as pure SGD - there is no reference to the momentum or other optimizer terms. I think this is just a typo.\", \"reason_for_giving_a_higher_score\": \"Strong experimental results with a clear mathematical foundation and motivation.\", \"reason_for_giving_a_lower_score\": \"Not enough discussion of *why* the results are so strong.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"This work has been praised by some reviewers who recognized its merits, but presentation and clarity could be improved. We recommend that the authors take reviewer UTA3's comments into consideration in their final version.\"}",
"{\"summary\": \"The paper proposes a Momentum Look-Ahead mechanism to address gradient staleness in asynchronous Distributed Low-Communication (DiLoCo) training, adapting Nesterov Accelerated Gradient (NAG) by using an exponential moving average (EMA) of gradients as momentum to correct delays. Experiments on a toy dataset and WikiText language modeling with a 90M-parameter decoder-only transformer show the method outperforms synchronous and asynchronous DiLoCo. However, the contribution\\u2019s clarity and theoretical grounding remain unclear to me.\", \"strengths_and_weaknesses\": \"Strengths:\\n- Empirical Success: Figure 2 demonstrates improved performance over previous DiLoCo methods on WikiText, suggesting practical value.\", \"weaknesses\": [\"Order and Clarity of NAG Discussion: The buffer-based Nesterov approach (Liu et al., 2024) is introduced before NAG in Section 2, disrupting the flow and making it hard to follow the progression to Section 3.1, where the proposed method lacks sufficient detail for me to grasp its implementation.\", \"Inconsistent NAG Definition: Lines 120\\u2013131 describe NAG without scaling the look-ahead d_t by the learning rate, but later (lines 134, Eq 5 and 7) d_t is scaled by learning rate, deviating from typical NAG (e.g., Sutskever et al., 2013). This inconsistency confuses the method\\u2019s alignment with classical NAG.\", \"Experimental Concerns: Figures 3 and 4 suggest asynchronous methods outperform synchronous DiLoCo for the same number of iterations, raising concerns about regularization via noise in async setups potentially speeding optimization unnaturally, which could skew results.\", \"Unclear Intuition for EMA vs. Regular NAG: Replacing NAG\\u2019s momentum with an EMA of gradients isn\\u2019t intuitively justified. The paper doesn\\u2019t clarify why EMA\\u2019s properties (e.g., smoothing) are better than regular NAG momentum in asynchronous DiLoCo, especially given my limited understanding of Liu et al.\\u2019s async setup.\"], \"suggestions\": [\"Address NAG Inconsistency: Explain why learning rate is included in the look-ahead step\", \"Validate Experimental Setup: Investigate whether async noise acts as regularization, potentially explaining faster convergence. Compare training steps across sync and async setups to ensure fairness, and discuss noise effects explicitly.\", \"Reorder and Clarify Sections: Move the NAG background before discussing the buffer-based approach to establish a clear foundation. Expand Section 3.1 with detailed pseudocode or figure of the async DiLoCo setup\"], \"reason_for_giving_a_higher_score\": [\"Empirical Results: Strong performance on WikiText (Figure 2) suggests practical utility, warranting further exploration.\"], \"reason_for_giving_a_lower_score\": [\"Lack of Clarity: Inconsistencies in NAG formulation, poor section ordering, and unclear intuition for EMA vs. NAG momentum make the method hard to understand, especially for someone less familiar with this area.\", \"Experimental Concerns: The potential regularization effect of async noise (Figures 3, 4) raises doubts about the setup\\u2019s validity, undermining confidence in the results.\"], \"rating\": \"4\", \"confidence\": \"2\", \"workshop_fit\": \"5\"}",
"{\"summary\": \"In this paper, authors propose a look-ahead based delay correction to reduce the effect of stale gradients for Async Diloco training. They perform delay corrections by extrapolating the negative direction of momentum. In this way, they take into account the previous gradient steps and avoid staleness.\\n\\nIt is a modification of the Nesterov acc. grad. method where in the momentum computation the gradient term is multiplied by a factor (1-\\\\gamma_j).\\n\\nThe authors tested their modification with a toy dataset over a simple MLP, and WikiText over the NanoGPT (90M) architecture. In both cases, they achieve better results wrt. Async-Diloco and comparable results wrt. sync. Diloco.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"proposed a modified Nesterov method that performs better than existing ones\", \"well structured paper\"], \"weaknesses\": [\"limited novelty\"], \"suggestions\": [\"further experimenting with different datasets and model architectures\"], \"reason_for_giving_a_higher_score\": [\"simple yet effective method improving performance in the async setting\"], \"reason_for_giving_a_lower_score\": [\"limited experiments (regarding model and datasets)\"], \"rating\": \"8\", \"confidence\": \"3\", \"workshop_fit\": \"5\"}"
]
} |
1rOjgUbWus | Disentangling Sequence Memorization and General Capability in Large Language Models | [
"Gaurav Rohit Ghosal",
"Pratyush Maini",
"Aditi Raghunathan"
] | Verbatim memorization in large language models remains a persistent and unsolved challenge, raising critical concerns for privacy, copyright, and responsible deployment. Existing research suggests that effective unlearning requires targeting the specific neurons responsible for memorization, as broad model updates fail to erase content reliably. However, we show that even these approaches rest on a flawed premise. Through controlled experiments, we demonstrate that memorized sequences are not naturally isolated to specific neurons during training, except in cases where the sequences are highly atypical. In this work, we put forward a new training paradigm that attempts to \textbf{isolate memorization to specific neurons by design}. The core challenge is that gradients from the repeated sequences entangle both ``generalizing'' features that improve general capability, in addition to sequence-specific memorization. We show that a simple change to standard training can implicitly disentangle these by leveraging metadata that identifies repeated sequences. We verify the efficacy of our method (\seqtd) in a proof-of-concept natural language setting and unveil the mechanism by which this disentanglement is possible through the training dynamics of memorization. We conclude by discussing the practical considerations of the deployment of \seqtd and highlight potential avenues for incorporating it into large-scale settings. | [
"Memorization",
"Unlearning",
"Localization"
] | Accept | https://openreview.net/pdf?id=1rOjgUbWus | https://openreview.net/forum?id=1rOjgUbWus | ICLR.cc/2025/Workshop/MCDC | 2025 | {
"note_id": [
"gnaXVIspNq",
"dXMkNkQQYt",
"NL1yH5DEB5",
"MgoEJeHNdC",
"CqtS2c3iHY"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740987136151,
1740673637731,
1741029203688,
1741027220326,
1741226298543
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MCDC/Submission48/Reviewer_Au7R"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission48/Reviewer_fh5y"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission48/Reviewer_UMY4"
],
[
"ICLR.cc/2025/Workshop/MCDC/Submission48/Reviewer_gtvR"
],
[
"ICLR.cc/2025/Workshop/MCDC/Program_Chairs"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces a method to address the issue of verbatim memorization in LLMs, generally experienced during pretraining. Memorization has privacy and copyright concerns as the information can be extracted from the model, and hence, effective unlearning methods are important. The authors first show that memorized sequences are not naturally isolated to specific neurons during training, and current methods relying on this premise fail. The authors introduce a novel method, SeqTD, that explicitly disentangles memorization from generalization by partitioning neurons into \\u201cshared\\u201d and \\u201cmemorization\\u201d groups. This approach ensures that memorization accumulates in a designated subset of neurons while preventing it from interfering with generalization. The authors evaluate SeqTD on a modified TinyStories pretraining setup, showing that it significantly outperforms post-hoc localization methods by allowing memorization to be erased without degrading the model\\u2019s generalization performance.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The training method and its basis are both novel. The perspective of having groups of neurons for generalized learning and memorization is interesting.\\n2. The paper presents a wide range of experiments with clear inferences.\\n3. Along with empirical studies, the authors also provide theoretical basis for their formulation of SeqTD and how it is able to isolate memorization.\", \"weaknesses\": \"1. The major weakness of the paper lies in the scale of the experiments - using a controlled setting with TinyStories. Real-world pre-training datasets would be significantly noisier and complex, and it is difficult to conclude what findings would extend to such scenarios.\\n2. SeqTD also relies on access to metadata, and also it being very accurate - this dependency can be problematic in more large-scale pre-training settings.\\n3. The added complexity of SeqTD creates an overhead that might not scale to the settings at which modern LLMs operate.\", \"suggestions\": \"1. Evaluate SeqTD on larger and more diverse datasets or with larger model architectures to better understand its scalability and generalizability.\\n2. Test the dependency of the method on metadata more rigorously and investigate possible substitutes for the same to cater to more realistic scenarios.\\n3. The paper could discuss computational overhead and potential memory constraints when applying SeqTD at large scales.\", \"reason_for_giving_a_higher_score\": \"The paper presents a novel idea to tackle a critical issue in language model training. It also puts forward a new perspective in tackling the issue, while presenting a wide range of experiments and analysis with promising empirical gains. The theoretical analysis further strengthens the claims and findings of the authors and creates a strong foundation for future research.\", \"reason_for_giving_a_lower_score\": \"The evaluation is primarily conducted on a controlled small-scale dataset, which raises concerns about the method\\u2019s scalability and real-world applicability. The method\\u2019s reliance on high-quality sequence metadata may further limit its practicality and scalability. However, as a start, I believe these issues are not very critical, but they still suggest that further work is needed before the approach can be broadly adopted.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"4\"}",
"{\"summary\": \"The authors first demonstrate that post-hoc localization methods (which try to identify and remove specific \\\"memorization neurons\\\") work poorly for typical text sequences, unlike for atypical content like random canaries. They show that simply enforcing isolation through gradient masking hinders cross-sequence learning and model performance. The authors propose Sequence-Tied Dropout (SeqTD), which partitions neurons into \\\"shared\\\" and \\\"memorization\\\" pools, with each repeated sequence consistently activating the same subset of memorization neurons. This approach leverages natural learning-forgetting dynamics: memorization accumulates in sequence-specific neurons while shared neurons focus on general linguistic patterns. In experiments using TinyStories, SeqTD successfully isolates memorization, enabling removal of repeated content without degrading overall model performance. The method is robust to moderate noise in sequence metadata (up to 10%) and works across various model sizes, though smaller models show more performance degradation. The paper provides theoretical analysis of how SeqTD works through learning-forgetting cycles and discusses practical implementation considerations like metadata accuracy and computational requirements.\", \"strengths_and_weaknesses\": \"Strengths:\", \"identifies_a_real_problem_in_llm_training\": \"disentangling memorization from capability+SeqTD handles typical text better than existing solutions\\nClear empirical demonstration that isolation can occur naturally\\nProvides theoretical grounding for the approach\\nMethod is robust to moderate (10%) metadata noise\", \"weknesses\": \"Limited to small-scale proof-of-concept (TinyStories)\\nUntested on multi-token memorization patterns\\nRequires sequence ID metadata during training\\nSmaller models show more degradation when using the technique\\nResource overhead from maintaining memorization neuron pool\", \"suggestions\": \"Test on real-world copyrighted content rather than synthetic repeats\\nQuantify parameter/computation overhead vs standard training\\nExperiment with adaptive neuron pool sizes based on dataset size\\nAdd ablation study on sequence consistency requirements\\nCompare with knowledge editing methods that modify trained models\", \"reason_for_giving_a_higher_score\": \"The paper solves a concrete LLM safety problem by working with the natural dynamics of memorization rather than fighting against them. The approach is theoretically grounded, empirically validated, and addresses a critical gap in the literature. The writing is clear and appropriately scopes both contributions and limitations.\", \"reason_for_giving_a_lower_score\": \"The topic is probably not the best fit for this workshop. The work is limited to small-scale experiments, leaving open questions about real-world viability. It requires additional metadata tracking during training that may be impractical for production systems. The computational overhead might prove prohibitive at scale, and the paper doesn't thoroughly explore this trade-off.\", \"rating\": \"5\", \"confidence\": \"3\", \"workshop_fit\": \"1\"}",
"{\"summary\": \"This paper introduces SeqTD, a training paradigm to isolate sequence memorization. During training, it partitions the learning signal from each example into generalization and memorization components, where the memorization is isolated to specific memorization neurons by design. Results suggest that this effectively disentangles memorization from generalization in controlled settings.\", \"strengths_and_weaknesses\": \"Strengths:\\n1. The paper is well-written and flows naturally. \\n2. The authors provided investigations on reasons why previous approaches, the post-hoc localization methods in particular, fail in some settings and motivated the method well.\\n3. Experimental results in controlled settings demonstrated promising performance.\", \"weaknesses\": \"1. The method is currently evaluated in a small-scale controlled setting; the results can benefit from larger-scale and more practical settings in future work.\", \"suggestions\": \"1. Section 5, the main section to describe the method, can be extended to illustrate the method in more detail rather than deferring details to the appendix. Some details, e.g., how the metadata is used, how the neurons are split, can be explained and discussed more.\\n2. The proposed method can be evaluated across different settings, possibly both synthetic and real-world settings, for a more thorough evaluation of the method.\", \"reason_for_giving_a_higher_score\": \"The method is well-motivated and intuitive.\", \"reason_for_giving_a_lower_score\": \"The experimental evaluation can be of larger scale and more complete in future work.\", \"rating\": \"7\", \"confidence\": \"4\", \"workshop_fit\": \"3\"}",
"{\"summary\": \"The paper \\\"Disentangling Sequence Memorization and General Capability in LLMs\\\" addresses the issue of memorization in large language models (LLMs), which poses risks for privacy and copyright. It finds that memorized sequences are not confined to specific neurons unless they are highly atypical, and existing methods struggle to isolate typical memorized sequences. The authors propose a new training method called Sequence-Tied Dropout (SeqTD), which isolates memorization to specific neurons using metadata to identify repeated sequences. SeqTD splits hidden-layer neurons into shared neurons and memorization neurons, allowing memorization to accumulate in specific neurons while preventing reinforcement in shared neurons. Empirical results show that SeqTD effectively isolates memorization and allows for unlearning repeated sequences without significantly affecting the model's performance on other data. The method can handle some noise in sequence metadata and works across different model sizes.\", \"strengths_and_weaknesses\": [\"Strengths:\", \"Novel Approach: To my knowledge, the Sequence-Tied Dropout (SeqTD) is a novel contribution. I find it a clever approach to use the metadata to partition hidden-layer neurons into shared neurons and memorization neurons.\", \"Empirical Validation: The paper provides thorough empirical validation, demonstrating that SeqTD effectively isolates memorization and allows for unlearning repeated sequences without significantly affecting the model's performance on other data.\", \"Practical Considerations: The paper discusses practical considerations such as the fact that accurate metadata is often lacking, and the impact of model size on SeqTD's effectiveness.\", \"Explores Robustness Assumptions: Experiments demonstrate that SeqTD can handle some noise in sequence metadata and works across different model sizes, indicating that it has potential for real-world deployment.\"], \"weaknesses\": [\"Scalability: The paper would benefit from more extensive experiments on larger-scale models and diverse datasets to strengthen claims about the scalability and generalizability of SeqTD.\", \"Metadata Dependency: SeqTD relies on accurate sequence metadata to identify repeated sequences. The appendix has an interesting discussion about dynamic generation of metadata, but could benefit from exploring more robust metadata annotation in noisy environments.\", \"Impact on Training Efficiency: The impact of SeqTD on training efficiency and computational resources is not thoroughly addressed. A detailed analysis of the computational overhead would provide a clearer picture of its practicality in large-scale settings.\", \"Canary Construction Process: The paper should provide a clearer rationale for the construction of canaries and how they represent real-world scenarios.\", \"Discuss the Realism of 128 Repetitions: Justify the choice of 128 repetitions and discuss whether this number is representative of real-world data. Consider the implications of using fewer or more repetitions and examine the limitations of this choice.\", \"Discussion of Limitations: The limitations of the experimental design could be discussed further.\"], \"suggestions\": [\"Suggestions for improvement\", \"Provide more discussion on how to robustly generate and maintain accurate sequence metadata.\", \"Discuss potential additional scaling experiments with larger datasets and larger language models.\", \"There are some minor language errors that could easily be corrected, for example,\", \"Figure 1 Caption: \\\"Conceptual {Intution} of SeqTD\\\"\", \"Line 125: \\\"these neurons {must also don\\u2019t contribute) to the model\\u2019s general capabilities\\\"\", \"Line 291: \\\"There are two {crucical} requirements in deploying SeqTD\\\"\"], \"reason_for_giving_a_higher_score\": \"The paper introduces a novel approach that appears to isolate memorization in (some smaller) language models. It provides thorough, albeit limited, empirical validation on a smaller dataset and restricted setting. Additionally, the paper discusses practical considerations and the potential for real-world deployment, and contains some analysis of robustness to metadata accuracy.\", \"reason_for_giving_a_lower_score\": \"The paper's limitations could have been discussed in more detail. Additionally, the rationale for the experiment design could have been better justified, such as the construction of canaries and the choice of 128 repetitions.\", \"rating\": \"7\", \"confidence\": \"3\", \"workshop_fit\": \"3\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"The paper introduces a method to address the issue of verbatim memorization in LLMs, generally experienced during pretraining. During training, it partitions the learning signal from each example to generalization and memorization components, where the memorization is isolated to specific memorization neurons by design. Most of the reviewers agree that this is a good paper. However, we had some concerns about the relevance of the paper to the workshop's theme. In the end, we think that information localization can be thought of as some form of modularity and hence recommend accepting the paper to the workshop given the strong reviews.\"}"
]
} |
zeIFBzx1hf | Graph Pseudotime Analysis and Neural Stochastic Differential Equations for Analyzing Retinal Degeneration Dynamics and Beyond | [
"Dai Shi",
"Kuan Yan",
"Lequan Lin",
"Yue Zeng",
"Ting Zhang",
"Jialing zhang",
"Matsypura Dmytro",
"Mark C. Gillies",
"Ling Zhu",
"Junbin Gao"
] | Understanding the progression of disease at the molecular level usually requires capturing both the structural dependencies between pathways and the temporal dynamics of how diseases evolve. In this work, we resolve the former challenge by developing a biologically informed graph-forming method to efficiently construct pathway graphs for subjects from our newly curated transcriptomic dataset of JR5558 mice that spontaneously develop neovascularization beneath their retinas. We then developed Graph-level Pseudotime Analysis (GPA) to infer graph-level trajectories that reveal how the disease progresses at the population level rather than in individual subjects. Based on the trajectories estimated by GPA, we identify the most sensitive pathways that drive transitions between disease stages. In addition, we measure changes in pathway features using neural stochastic differential equations (SDEs), which enable us to formally define and compute pathway stability and disease bifurcation points (points of no return)—two fundamental problems in research on disease progression. We have extended our theory to allow pathways to interact with each other, enabling a more comprehensive and multi-faceted characterization of disease phenotypes. Comprehensive experimental results demonstrate the effectiveness of our framework in reconstructing pathway dynamics, identifying critical transitions, and providing novel insights into the mechanistic understanding of the evolution of disease. | [
"disease",
"graph pseudotime analysis",
"retinal degeneration dynamics",
"pathways",
"gpa",
"trajectories",
"progression",
"molecular level",
"structural dependencies"
] | Accept | https://openreview.net/pdf?id=zeIFBzx1hf | https://openreview.net/forum?id=zeIFBzx1hf | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"8JWOlnVst1"
],
"note_type": [
"decision"
],
"note_created": [
1740846387602
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
xBxjRdkof9 | Detecting cell level transcriptomic changes of Perturb-seq using Contrastive Fine-tuning of Single-Cell Foundation Models | [
"Wenmin Zhao",
"Ana Solaguren-Beascoa",
"Grant Neilson",
"Regina Reynolds",
"Louwai Muhammed",
"Liisi Laaniste",
"Sera Aylin Cakiroglu"
] | Genome-scale perturbation cell atlases are an exciting new resource for understanding the transcriptomic and phenotypic impact of single-gene activation or knockdown. However, in terms of differentially expressed genes identified, the signal detected in these data atlases is low, leading to the exclusion of most data from downstream analyses. Recent advances in single-cell foundation models have shown promise in capturing complex biological insights. However, their application to perturbation analysis, especially in predicting perturbed single-cell transcriptomes, remains limited. In this paper, we focus on learning representations of single-cell transcriptomes that capture subtle, yet important, transcriptome-wide changes, and we propose a novel fine-tuning strategy using contrastive learning to leverage single-cell foundation models for this task. We pre-train a single-cell foundation model and fine-tune on a genome-scale perturbation dataset using a contrastive loss, which minimises the distance between cell embeddings from unperturbed cells while maximising between perturbed and unperturbed cells. We validate and test the model on unseen perturbations,
demonstrating its ability to identify global biologically meaningful transcriptional changes that may not be captured by traditional differential expression methods. Our approach provides a novel framework for analysing single-cell perturbation data and offers a more effective means of identifying perturbations that drive systemic gene expression changes. | [
"foundation models",
"contrastive",
"transcriptomes",
"unperturbed cells",
"perturbation cell atlases",
"exciting new resource",
"transcriptomic",
"phenotypic impact",
"activation"
] | Accept | https://openreview.net/pdf?id=xBxjRdkof9 | https://openreview.net/forum?id=xBxjRdkof9 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"YXZjFMXiXJ"
],
"note_type": [
"decision"
],
"note_created": [
1740860950842
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
wW6jhSMTEg | Exploring the potential of genetic variation and zygosity in DNA language models | [
"Ali Saadat",
"Jacques Fellay"
] | Advancements in DNA language models (DNA-LMs) have improved phenotype prediction from DNA sequences, yet the roles of zygosity and genetic variation (GV) remain underexplored. In this study we quantify their effects on gene expression prediction as an example of variation-sensitive phenotype, showing that baseline models benefit from zygosity- and GV-aware encoding, while DNA-LMs struggle to utilize them. These findings underscore the need for integrating biologically meaningful features like zygosity and GV in DNA-LM pretraining to better capture genetic diversity and improve variant interpretation. | [
"zygosity",
"genetic variation",
"potential",
"dna language models",
"phenotype prediction",
"dna sequences",
"roles",
"remain underexplored"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=wW6jhSMTEg | https://openreview.net/forum?id=wW6jhSMTEg | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"EyRtiVvXDJ"
],
"note_type": [
"decision"
],
"note_created": [
1740950915060
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
v71XmFAHAE | Learning Representations of Instruments for Partial Identification of Treatment Effects | [
"Jonas Schweisthal",
"Dennis Frauen",
"Maresa Schröder",
"Konstantin Hess",
"Niki Kilbertus",
"Stefan Feuerriegel"
] | Reliable estimation of treatment effects from observational data is important in many disciplines such as medicine. However, estimation is challenging when unconfoundedness as a standard assumption in the causal inference literature is violated. In this work, we leverage arbitrary (potentially high-dimensional) instruments to estimate bounds on the conditional average treatment effect (CATE). Our contributions are three-fold: (1) We propose a novel approach for partial identification through a mapping of instruments to a discrete representation space so that we yield valid bounds on the CATE. This is crucial for reliable decision-making in real-world applications. (2) We derive a two-step procedure that learns tight bounds using a tailored neural partitioning of the latent instrument space. As a result, we avoid instability issues due to numerical approximations or adversarial training. Furthermore, our procedure aims to reduce the estimation variance in finite-sample settings to yield more reliable estimates. (3) We show theoretically that our procedure obtains valid bounds while reducing estimation variance. We further perform extensive experiments to demonstrate the effectiveness across various settings. Overall, our procedure offers a novel path for practitioners to make use of potentially high-dimensional instruments (e.g., as in Mendelian randomization). | [
"instruments",
"procedure",
"partial identification",
"representations",
"treatment effects",
"cate",
"valid bounds",
"estimation variance",
"observational data"
] | Accept (Spotlight) | https://openreview.net/pdf?id=v71XmFAHAE | https://openreview.net/forum?id=v71XmFAHAE | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"KGNbIDnZsg"
],
"note_type": [
"decision"
],
"note_created": [
1740883216376
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
uZ6B53QSHZ | Capturing functional context of genetic pathways through hyperedge disentanglement | [
"Yoonho Lee",
"Junseok Lee",
"Sangwoo Seo",
"Sungwon Kim",
"Yeongmin Kim",
"Chanyoung Park"
] | The hypergraph data structure has been used to represent the multiway interactions of a set of genes of a genetic pathway. Since genes within each genetic pathway collaboratively perform a biological function, the functional context of a pathway (i.e., the interaction context of a hyperedge), which is often unannotated, needs to be captured. However, most existing hypergraph neural networks fail to reflect the interaction context of each hyperedge due to their limited ability to capture important or relevant factors. In this paper, we propose a simple yet effective hyperedge disentangling method, Natural-HNN, which captures the interaction context of a hyperedge. We introduce a novel guidance mechanism for hyperedge disentanglement based on the naturality condition in category theory. In our experiments, we applied our model to hypergraphs of genetic pathways for the cancer subtype classification task and demonstrated that our model outperforms baseline approaches by capturing the functional semantic similarity of genetic pathways. | [
"genetic pathways",
"functional context",
"hyperedge disentanglement",
"interaction context",
"genes",
"genetic pathway",
"hyperedge",
"model",
"hypergraph data structure",
"multiway interactions"
] | Accept | https://openreview.net/pdf?id=uZ6B53QSHZ | https://openreview.net/forum?id=uZ6B53QSHZ | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"xvauqIrm7b"
],
"note_type": [
"decision"
],
"note_created": [
1741031611738
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
sla2edDxc3 | GENATATOR: de novo Gene Annotation With DNA Language Model | [
"Aleksei Shmelev",
"Artem Shadskiy",
"Yuri Kuratov",
"Mikhail Burtsev",
"Olga Kardymon",
"Veniamin Fishman"
] | Inference of gene structure and location based on genome sequences, also known as \textit{de novo} gene annotation, is a critical first step in biological research. However, rules of encoding gene structure in the DNA sequence are complex and poorly understood, often necessitating the use of costly transcriptomic data to achieve accurate gene annotation. Here, we present GENATATOR --- Genome Annotator Using the GENA DNA Language Model --- an advanced machine learning tool for inferring gene annotations directly from DNA sequences. Unlike previous approaches that rely on explicitly defined gene segmentation rules derived from protein-coding sequences, GENATATOR learns how to infer gene structure directly from the data. This enables GENATATOR to perform correct segmentation for previously untraceable class of non-coding transcripts and identify subset of protein-coding genes missed by other models, achieving top performance in the gene segmentation benchmarks. Finally, with in-depth analysis of GENATATOR’s model embeddings and predictions, we reveal how DNA language models utilize memory to learn the biological rules underlying gene encoding. | [
"genatator",
"gene annotation",
"gene structure",
"location",
"genome sequences",
"critical first step",
"biological research",
"rules"
] | Accept | https://openreview.net/pdf?id=sla2edDxc3 | https://openreview.net/forum?id=sla2edDxc3 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"litP4KjFXr"
],
"note_type": [
"decision"
],
"note_created": [
1740961668687
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
rDPEBwGVnY | When repeats drive the vocabulary: a Byte-Pair Encoding analysis of T2T primate genomes | [
"Marina Popova",
"Iaroslav Chelombitko",
"Aleksey Komissarov"
] | The emergence of telomere-to-telomere (T2T) genome assemblies has opened new avenues for comparative genomics, yet effective tokenization strategies for genomic sequences remain underexplored. In this pilot study, we apply Byte-Pair Encoding (BPE) to nine T2T primate genomes—including three human assemblies—by training independent BPE tokenizers with a fixed vocabulary of 512,000 tokens using our custom tool, dnaBPE. Our analysis reveals that only 11,569 tokens are shared across all assemblies, while nearly 991,854 tokens are unique to a single genome, indicating a rapid decline in shared vocabulary with increasing assembly comparisons. Moreover, phylogenetic trees derived from token overlap failed to recapitulate established primate relationships, a discrepancy attributed to the disproportionate influence of species-specific high-copy repetitive elements. These findings underscore the dual nature of BPE tokenization: while it effectively compresses repetitive sequences, its sensitivity to high-copy elements limits its utility as a universal tool for comparative genomics. We discuss potential hybrid strategies and repeat-masking approaches to refine genomic tokenization, emphasizing the need for domain-specific adaptations in the development of large-scale genomic language models. The dnaBPE tool used in this study is open-source and available at https://github.com/aglabx/dnaBPE. | [
"vocabulary",
"tokens",
"repeats",
"encoding analysis",
"comparative genomics",
"emergence",
"genome assemblies"
] | Accept | https://openreview.net/pdf?id=rDPEBwGVnY | https://openreview.net/forum?id=rDPEBwGVnY | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"BdLGgRm3AT"
],
"note_type": [
"decision"
],
"note_created": [
1741031713325
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
qWlqfjGVWX | ProteinGPT: Multimodal LLM for Protein Property Prediction and Structure Understanding | [
"Yijia Xiao",
"Edward Sun",
"Yiqiao Jin",
"Qifan Wang",
"Wei Wang"
] | Understanding biological processes, drug development, and biotechnological advancements requires a detailed analysis of protein structures and functions, a task that is inherently complex and time-consuming in traditional protein research. To streamline this process, we introduce ProteinGPT, a state-of-the-art multimodal large language model for proteins, which allows users to upload protein sequences and/or structures for comprehensive proteins analysis and responsive inquiries. ProteinGPT seamlessly integrates protein sequence and structure encoders with linear projection layers to ensure precise representation adaptation. It leverages a large language model (LLM) to generate accurate and contextually relevant responses. To train ProteinGPT, we construct a large-scale dataset of 132,092 proteins, each annotated with 20-30 property tags and 5-10 QA pairs per protein, and optimized the instruction-tuning process using GPT-4o. Experiments demonstrate that ProteinGPT effectively generates informative responses to protein-related questions, achieving high performance on both semantic and lexical metrics. It significantly outperforms baseline models and general-purpose LLMs in understanding and responding to protein-related queries. | [
"proteingpt",
"multimodal llm",
"protein property prediction",
"process",
"proteins",
"structure understanding proteingpt",
"structure",
"understanding biological processes",
"drug development",
"biotechnological advancements"
] | Accept (Spotlight) | https://openreview.net/pdf?id=qWlqfjGVWX | https://openreview.net/forum?id=qWlqfjGVWX | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"bOOKI6LENQ"
],
"note_type": [
"decision"
],
"note_created": [
1740958551331
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
prYPSWtKCC | Transferring Preclinical Drug Response to Patient via Tumor Heterogeneity-Aware Alignment and Perturbation Modeling | [
"Inyoung Sung",
"Dongmin Bang",
"Sun Kim",
"Sangseon Lee"
] | Accurate prediction of personalized drug response is critical for precision oncology, yet limited clinical data forces reliance on preclinical datasets. However, fundamental biological differences between preclinical cell lines and patient tumors hinder direct knowledge transfer. In this work, we introduce THERAPI, a novel tumor heterogeneity-aware Domain Adaptation (DA) framework that represents patient tumors as weighted combinations of multiple cell lines with tissue-specific context. Along with our comprehensive gene expression modeling by integrating drug-induced perturbation-based and rank-based representations, THERAPI outperforms both DA-free and DA-based models and generalizes robustly to an external dataset, highlighting its potential for applications in precision medicine. | [
"preclinical drug response",
"tumor",
"alignment",
"perturbation",
"patient tumors",
"accurate prediction",
"personalized drug response",
"critical",
"precision oncology"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=prYPSWtKCC | https://openreview.net/forum?id=prYPSWtKCC | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"Z2U4X7AM1K"
],
"note_type": [
"decision"
],
"note_created": [
1740872555685
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
o50KFyep7O | SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model | [
"Jiwei Zhu",
"Zhao Yang",
"Bing Su"
] | Inspired by the success of unsupervised pre-training paradigms, researchers have applied these approaches to DNA pre-training. However, we argue that these approaches alone yield suboptimal results because pure DNA sequences lack sufficient information, since their functions are regulated by genomic profiles like chromatin accessibility. Here, we demonstrate that supervised training for genomic profile prediction serves as a more effective alternative to pure sequence pre-training. Furthermore, considering the multi-species and multi-profile nature of genomic profile prediction, we introduce our **S**pecies-**P**rofile **A**daptive **C**ollaborative **E**xperts (SPACE) that leverages Mixture of Experts (MoE) to better capture the relationships between DNA sequences across different species and genomic profiles, thereby learning more effective DNA representations. Through extensive experiments across various tasks, our model achieves state-of-the-art performance, establishing that DNA models trained with supervised genomic profiles serve as powerful DNA representation learners. | [
"space",
"genomic profile predictor",
"approaches",
"genomic profiles",
"genomic profile prediction",
"success",
"unsupervised",
"paradigms"
] | Accept | https://openreview.net/pdf?id=o50KFyep7O | https://openreview.net/forum?id=o50KFyep7O | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"bzjkNDoXGs"
],
"note_type": [
"decision"
],
"note_created": [
1740845492546
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
nZ8IjQcJsD | GraphPINE: Graph importance propagation for interpretable drug response prediction | [
"Yoshitaka Inoue",
"Tianfan Fu",
"Augustin Luna"
] | Explainability is necessary for many tasks in biomedical research. Recent explainability methods have focused on attention, gradient, and Shapley value. These do not handle data with strong associated prior knowledge and fail to constrain explainability results based on known relationships between predictive features.
We propose GraphPINE, a graph neural network (GNN) architecture leveraging domain-specific prior knowledge to initialize node importance optimized during training for drug response prediction. Typically, a manual post-prediction step examines literature (i.e., prior knowledge) to understand returned predictive features. While node importance can be obtained for gradient and attention after prediction, node importance from these methods lacks complementary prior knowledge; GraphPINE seeks to overcome this limitation. GraphPINE differs from other GNN gating methods by utilizing an LSTM-like sequential format. We introduce an importance propagation layer that unifies 1) updates for feature matrix and node importance and 2) uses GNN-based graph propagation of feature values. This initialization and updating mechanism allows for informed feature learning and improved graph representation.
We apply GraphPINE to cancer drug response prediction using drug screening and gene data collected for over 5,000 gene nodes included in a gene-gene graph with a drug-target interaction (DTI) graph for initial importance. The gene-gene graph and DTIs were obtained from curated sources and weighted by article count discussing relationships between drugs and genes. GraphPINE achieves a PR-AUC of 0.894 and ROC-AUC of 0.796 across 952 drugs. Code is available at https://anonymous.4open.science/r/GraphPINE-40DE. | [
"graphpine",
"node importance",
"graph",
"graph importance propagation",
"attention",
"gradient",
"prior knowledge",
"relationships",
"predictive features",
"gnn"
] | Accept | https://openreview.net/pdf?id=nZ8IjQcJsD | https://openreview.net/forum?id=nZ8IjQcJsD | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"BAVO2SS2GD"
],
"note_type": [
"decision"
],
"note_created": [
1740962098833
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
mpvp5KP8fR | Gene Set Function Discovery with LLM-Based Agents and Knowledge Retrieval | [
"Daniela Pinto Veizaga",
"Aécio Santos",
"Juliana Freire",
"Wenke Liu",
"Sarah Keegan",
"David Fenyo"
] | Advancements in high-throughput technologies have generated complex biomedical datasets, posing significant challenges for knowledge discovery. Traditional tools like Gene Set Enrichment Analysis (GSEA) and over-representation analysis (ORA) map gene sets to known pathways but are limited in their ability to uncover novel biological-mechanisms, often relying on manual interpretation to synthesize insights. While large language models (LLMs) aid in summarization, they lack transparency, adaptability to new knowledge, and integration with computational tools. To address these challenges, we introduce $\texttt{Discovera}$, an agentic system that combines LLMs with established computational bioinformatics pipelines, and retrieval-augmented generation (RAG) to support mechanistic discovery. $\texttt{Discovera}$ bridges the gap between computation and interpretation, enabling users to explore hypotheses grounded in data and literature. We demonstrate the utility of $\texttt{Discovera}$ in the context of endometrial carcinoma research, where it supports functional enrichment analysis and the summarization of potential mechanisms of action for gene sets associated with an observed phenotype. | [
"discovera",
"gene",
"function discovery",
"agents",
"gene sets",
"llms",
"summarization",
"knowledge retrieval gene",
"knowledge retrieval advancements",
"technologies"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=mpvp5KP8fR | https://openreview.net/forum?id=mpvp5KP8fR | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"S8pyTEvFIX"
],
"note_type": [
"decision"
],
"note_created": [
1740960411132
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
lwK6AaIAJB | Aligning Molecules and Fragments in a Shared Embedding Space for RL-Based Molecule Generation | [
"Youngkuk Kim",
"Yinhua Piao",
"Sangseon Lee",
"Sun Kim"
] | Drug discovery is a complex and resource-intensive process requiring the design of molecules that possess specific chemical and biological properties, such as high binding affinity and drug-likeness. Fragment-based drug discovery (FBDD) has gained prominence as a strategy for efficiently identifying lead compounds by deconstructing molecules into smaller fragments. However, existing approaches face challenges in fully leveraging the relationships between molecules and their constituent fragments, especially in optimizing molecular properties. In this paper, we introduce Molecule-Fragment Representation Alignment space for RL-based Generation (M-FRAG), a novel framework that harmonizes molecule and fragment embeddings in a shared, property-driven space. By aligning fragments with their molecular context, M-FRAG ensures that fragment selection is optimized both for chemical feasibility and the desired molecular properties. Using reinforcement learning, M-FRAG generates chemically realistic molecules optimized for target properties while also providing interpretability for individual fragments during the molecule generation process. Experimental results demonstrate that M-FRAG outperforms existing methods in terms of optimization, diversity, and chemical validity, positioning it as a powerful tool for the efficient and transparent generation of drug-like molecules. | [
"molecules",
"fragments",
"embedding space",
"molecule generation",
"complex",
"process",
"design",
"specific chemical",
"biological properties"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=lwK6AaIAJB | https://openreview.net/forum?id=lwK6AaIAJB | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"N53ZlB7MiY"
],
"note_type": [
"decision"
],
"note_created": [
1740961926824
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
lHRETQDSVu | Sampling Protein Language Models for Functional Protein Design | [
"Jeremie Theddy Darmawan",
"Yarin Gal",
"Pascal Notin"
] | Protein language models have emerged as powerful tools for learning rich representations of proteins, enhancing performance across various downstream tasks such as structure prediction, mutation effects prediction, and homology detection. Their ability to learn complex distributions over protein sequences also shows significant potential for designing novel and functional proteins, with broad applications in therapeutics, new materials, and sustainability. Given the vastness of the protein sequence space, efficient exploration methods are critical to the success of protein engineering efforts.
However, the methodologies for effectively sampling from these models to achieve core protein design objectives remain underexplored and have predominantly relied on techniques initially developed for Natural Language Processing tasks.
In this work, we first develop a comprehensive *in silico* protein design evaluation framework to systematically compare different sampling methods. After a thorough review of existing sampling strategies for language models, we introduce several approaches specifically tailored for protein design. We then evaluate these strategies using our *in silico* benchmark, investigating the effects of key hyperparameters and providing practical guidance on the relative strengths of each method depending on design objectives. | [
"protein language models",
"silico",
"strategies",
"functional protein design",
"powerful tools",
"rich representations",
"proteins",
"performance",
"various downstream tasks"
] | Accept | https://openreview.net/pdf?id=lHRETQDSVu | https://openreview.net/forum?id=lHRETQDSVu | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"ybNs1Ad6Lw"
],
"note_type": [
"decision"
],
"note_created": [
1740846118241
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
kmLV911L80 | LIMEADE: Local Interpretable Manifold Explanations for Dimension Evaluations | [
"Tarek M Zikry",
"Genevera I. Allen"
] | To visualize and analyze high-dimensional biological data, scientists often turn to manifold learning and dimensionality reduction techniques such as tSNE and UMAP. However, these methods (1) are non-projective, which means that new data cannot be projected on the manifold without refitting, and (2) lack the explainability to help practitioners understand which features drive manifold locations and neighborhoods. In practice, scientists often must turn to marginal distributions along a manifold or expert annotations to explain reduced dimension data. Here, we present Local Interpretable Manifold Explanations for Dimension Evaluations (LIMEADE), a surrogate model integrated with a dimensionality reduction method, similar to the LIME surrogate used in classification and regression models. We define LIMEADE as a group lasso-regularized multi-task regression problem that identifies sparse linear projections of the data aligning with local neighborhoods of the manifold space. When applied to single-cell proteomics data, LIMEADE effectively extracts biologically meaningful features, providing a more interpretable approach to feature selection and dimensionality reduction. | [
"dimension evaluations",
"limeade",
"scientists",
"manifold",
"dimensionality reduction",
"biological data",
"learning",
"dimensionality reduction techniques",
"tsne"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=kmLV911L80 | https://openreview.net/forum?id=kmLV911L80 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"2td4AB1Wlo"
],
"note_type": [
"decision"
],
"note_created": [
1740872588606
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
kY3zOTMtjU | ESM-Effect: An Effective and Efficient Fine-Tuning Framework towards accurate prediction of Mutation's Functional Effect | [
"Moritz Glaser",
"Johannes Brägelmann"
] | Predicting functional properties of mutations like the change in enzyme activity remains challenging and is not well captured by traditional pathogenicity prediction. Yet such functional predictions are crucial in areas like targeted cancer therapy where some drugs may only be administered if a mutation causes an increase in enzyme activity. Current approaches either leverage static Protein-Language Model (PLM) embeddings or complex multi-modal features (e.g., static PLM embeddings, structure, and evolutionary data) and either (1) fall short in accuracy or (2) involve complex data processing and pre-training. Standardized datasets and metrics for robust benchmarking would benefit model development but do not yet exist for functional effect prediction.
To address these challenges we develop ESM-Effect, an optimized PLM-based functional effect prediction framework through extensive ablation studies.
ESM-Effect fine-tunes ESM2 PLM with an inductive bias regression head to achieve state-of-the-art performance. It surpasses the multi-modal state-of-the-art method PreMode, indicating redundancy of structural and evolutionary features, while training 6.7-times faster.
In addition, we develop a benchmarking framework with robust test datasets and strategies, and propose a novel metric for prediction accuracy termed relative Bin-Mean Error (rBME): rBME emphasizes prediction accuracy in challenging, non-clustered, and rare gain-of-function regions and correlates more intuitively with model performance than commonly used Spearman’s rho. Finally, we demonstrate partial generalization of ESM-Effect to unseen mutational regions within the same protein, illustrating its potential in precision medicine applications. Extending this generalization across different proteins remains a promising direction for future research. ESM-Effect is available at: https://github.com/lovelacecode/ESM-Effect. | [
"mutation",
"effective",
"efficient",
"framework towards",
"prediction",
"functional effect",
"enzyme activity",
"prediction accuracy",
"rbme",
"functional properties"
] | Accept (Spotlight) | https://openreview.net/pdf?id=kY3zOTMtjU | https://openreview.net/forum?id=kY3zOTMtjU | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"oqunbj8Sgc"
],
"note_type": [
"decision"
],
"note_created": [
1740846292543
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
kD8LptrZ7v | Reference-free cell-type annotation with LLM agents | [
"Yidi Huang",
"Ivan Cohen",
"Van Quynh-Thi Truong",
"Pedram B Bayat",
"Sameer A Bhatti",
"Luca Paruzzo",
"Mark M. Painter",
"Shirong Zheng",
"Derek Alan Oldridge",
"Joost Wagenaar",
"Allison R Greenplate",
"Dokyoon Kim"
] | Agentic AI research assistants, enabled by augmenting large language models with code-execution and tool-use abilities, promise to transform scientific workflows and accelerate biomedical research. In this study, we share preliminary results from our work in evaluating LLM agent capabilities in genomics. We design a simple bioinformatic research agent augmented with tool calls and code execution and instructed with a high-level task-agnostic system prompt. We implement this agent with three frontier-level LLMs: GPT-4o, o3-mini, and Claude 3.5 Sonnet, and compare their performance. We evaluate the performance of our agents in labeling cell types in clustered high-resolution transcriptomic data, a traditionally time-intensive task requiring both manual effort and domain expertise. Our agents are able to accurately complete this task, although performance fluctuates over multiple iterations due to hallucination. Overall, our results indicate that LLM agents are capable of autonomously planning and executing genomic analyses with only high-level direction. We are encouraged by these early results and look forward to extending these evaluations in future work. | [
"annotation",
"llm agents",
"performance",
"agents",
"task",
"llm agents agentic",
"research assistants",
"large language models",
"abilities",
"promise"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=kD8LptrZ7v | https://openreview.net/forum?id=kD8LptrZ7v | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"z0IcYPxINT"
],
"note_type": [
"decision"
],
"note_created": [
1740961264645
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
k4P5JXv69b | ECG-Nest-FM: A Frequency-Focused ECG Foundation Model with Nested Embeddings | [
"Abhishek Sharma",
"Lin Yang",
"Cory Y McLean",
"Justin Cosentino",
"Farhad I Hormozdiari"
] | Electrocardiograms (ECGs) are fundamental to cardiac diagnostics, providing noninvasive insights into cardiovascular conditions. Recent advancements in deep learning have led to foundation models (FMs) capable of learning powerful representations of ECG signals. However, these models often fail to fully exploit the periodic nature and diagnostic frequency bands of ECGs, leading to inefficiencies in computational cost and interpretability. We propose a novel ECG foundation model that learns nested embeddings, where each subset of dimensions encodes progressively higher-frequency information. By explicitly modeling frequency structures and applying a correlation penalty, the method achieves compact, high-rank representations that reduce model size without sacrificing performance.
We evaluate our approach on two large-scale datasets for embedding redundancy and prediction performance on downstream clinical tasks such as arrhythmia classification, and cardiac condition detection. We observe similar prediction performance AUROC scores and lower embedding redundancy, offering a computationally efficient and interpretable framework for ECG analysis. Finally, the representations obtained from our model in UK Biobank data capture known cardiovascular variants and detect novel loci, which can be applied to drug discovery. | [
"ecg foundation model",
"nested embeddings",
"ecgs",
"representations",
"redundancy",
"nested embeddings electrocardiograms",
"fundamental",
"diagnostics",
"noninvasive insights",
"cardiovascular conditions"
] | Accept | https://openreview.net/pdf?id=k4P5JXv69b | https://openreview.net/forum?id=k4P5JXv69b | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"R3NoviRIvo"
],
"note_type": [
"decision"
],
"note_created": [
1740962117589
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
iLo7Qj6Pfx | MutEmbed: Self-Supervised Learning of Biological Latent Embeddings from Cancer Mutational Profiles | [
"Aakansha Narain",
"Wu Jialun Andy",
"Hannan Wong",
"Vedant Sandhu",
"Jason J. Pitt"
] | Cancer genomes possess diverse mutational patterns across multiple profiles, including single base substitutions (SBS), small insertions and deletions (ID), copy number variations (CN), and structural variants (SV). These profiles provide distinct, yet complementary perspectives to understanding a tumor's genomic landscape, which is essential for optimal patient care. Learning unified representations across this complex mutational landscape can reveal deeper insights into cancer biology, therapeutic interventions, and patient stratification. We present MutEmbed, a self-supervised framework that uses attention mechanisms to weigh and integrate information across mutational profiles, capturing their latent biological interdependencies. We use SBS, ID, CN, and SV calls for samples from the Pan-cancer Analysis of Whole Genomes (PCAWG) dataset (n = 2748). Using MutEmbed, we derive embeddings for each sample and demonstrate their biological relevance by analyzing cancer-type specific clustering patterns and enrichment patterns with DNA damage and repair pathway activities. | [
"mutembed",
"learning",
"biological latent embeddings",
"sbs",
"cancer mutational profiles",
"multiple profiles"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=iLo7Qj6Pfx | https://openreview.net/forum?id=iLo7Qj6Pfx | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"i216PIlVlY"
],
"note_type": [
"decision"
],
"note_created": [
1740902268204
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
i4vevaqugi | RAG-ESM: Improving pretrained protein language models via sequence retrieval | [
"Damiano Sgarbossa",
"Anne-Florence Bitbol"
] | Protein language models are significantly advancing the modeling of sequence-function relationships. However, most of them are not directly informed of homology and evolutionary relationships between protein sequences. Here, we propose a method to make them homology-aware. We introduce RAG‐ESM, a retrieval‐augmented framework that allows to condition pretrained ESM2 protein language models on homologous sequences, using a minimal number of additional cross‐attention parameters and minimal computational cost. We show that RAG‐ESM models outperform larger ESM2 models for masked amino acid prediction. We find that sequence alignment capabilities spontaneously emerge in specific cross‐attention heads of RAG-ESM. By using a discrete diffusion objective for training, and by conditioning on homologs during inference, RAG‐ESM reaches state-of-the-art performance for conditional protein sequence generation and motif scaffolding, among sequence-based models. Our method thus possesses strong potential for scalable, efficient and controlled protein engineering. | [
"improving",
"protein language models",
"models",
"sequence retrieval",
"modeling",
"relationships",
"homology",
"evolutionary relationships",
"protein sequences"
] | Accept (Spotlight) | https://openreview.net/pdf?id=i4vevaqugi | https://openreview.net/forum?id=i4vevaqugi | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"J6y6QbNeYq"
],
"note_type": [
"decision"
],
"note_created": [
1740846083852
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
hy12KXLT0K | Building Foundation Models to Characterize Cellular Interactions via Geometric Self-Supervised Learning on Spatial Genomics | [
"Yuning You",
"Zitong Jerry Wang",
"Kevin Fleisher",
"Rex Liu",
"Matt Thomson"
] | Cellular interactions form the core circuits that drive development, physiology, and disease within tissues. Advances in spatial genomics (SG) and artificial intelligence (AI) offer unprecedented opportunities to computationally analyze and predict the behavior of intricate cell networks, and to identify interactions that drive disease states.
However, challenges arise in both \textit{methodology} and \textit{scalability}: \textbf{(i)} how to computationally characterize
complicated cellular interactions of multi-scale nature, where chemical genes/circuits in individual cells process information and drive interactions among large numbers of diverse cell types,
and \textbf{(ii)} how to scale up the pipeline to accommodate the increasing volumes of SG data that map transcriptome-scale gene expression and spatial proximity across millions of cells.
In this paper, we introduce the \textbf{Cellular Interaction Foundation Model} (\textbf{CIFM}), an AI foundation model functioning to analyze and simulate cellular interactions within living tissues.
In the CIFM pipeline, we explicitly capture and embed interactions of cells within microenvironments by leveraging the powerful and scalable geometric graph neural network model, and optimize the characterization of cellular interactions with a novel self-supervised learning objective -- we train it to infer gene expressions of cells based upon their surrounding microenvironments.
As a result, we construct CIFM with 100 million parameters by consuming SG data of 23 million cells.
Our benchmarking experiments show CIFM effectively infers gene expressions conditional on the microenvironmental contexts:
we achieve a high correlation and a low mismatch error, with 71.4\% of cells being annotated as the same cell type based on their predicted and actual expressions on Visium-HD.
We demonstrate the downstream utility of CIFM by: (i) applying CIFM to embed tumor samples to capture cellular interactions within tumor microenvironments (ROC-AUC score of 0.862 on classifying sample conditions via linear probing on embeddings), and identifying shared signatures across samples; and (ii) using CIFM to simulate changes in microenvironmental composition in response to T cell infiltration, which highlights how CIFM can be leveraged to model cellular responses to tissue perturbations -- an essential step toward constructing ``AI virtual tissues".
Our model is open source and publicly accessible at \url{https://huggingface.co/ynyou/CIFM}. | [
"cifm",
"cellular interactions",
"cells",
"geometric",
"learning",
"spatial genomics",
"tissues",
"sg data",
"building foundation models"
] | Accept | https://openreview.net/pdf?id=hy12KXLT0K | https://openreview.net/forum?id=hy12KXLT0K | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"0myl9dUbnr"
],
"note_type": [
"decision"
],
"note_created": [
1741031189305
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
hs1AWLx6U5 | PREDICTING TIME-VARYING METABOLIC DYNAMICS USING STRUCTURED NEURAL ODE PROCESSES | [
"Santanu Rathod",
"Pietro Lio",
"Xiao Zhang"
] | Genome-scale metabolic modeling enables omic data integration through mathematical simulation and has become an indispensable cornerstone for understanding cellular metabolism. Traditional analysis tools, such as mechanistic modeling and flux balance analysis, require deep domain expertise to specify the kinetic parameters or significant manual effort to acquire fluxomic data to formulate the constrained optimization problem.
To circumvent the above limitations, we develop a novel metabolic dynamics modeling framework, which learns a structured neural ODE process (SNODEP) model to predict the time-varying flux and balance distributions by leveraging the more accessible single-cell RNA sequencing (scRNA-seq) technology. Compared with ML-based alternatives, our method achieves enhanced prediction performance, not only due to the intrinsic suitability of neural ODE for modeling dynamics-governed time series data but also because the design of SNODEP explicitly accounts for the destructive measurement process of scRNA-seq and the sequential dependence between context points. Comprehensive evaluations across $4$ metabolic pathways ($340$ experiments in total) show that our method can predict future gene expression, flux, and balance dynamics well, even generalizing to more challenging settings of irregularly sampled data and unseen gene knockout configurations. We hope our work can catalyze the development of more robust and scalable models for metabolic pathway analysis. | [
"metabolic dynamics",
"structured neural ode",
"snodep",
"flux",
"mathematical simulation",
"indispensable cornerstone",
"cellular metabolism",
"traditional analysis tools",
"mechanistic modeling"
] | Accept | https://openreview.net/pdf?id=hs1AWLx6U5 | https://openreview.net/forum?id=hs1AWLx6U5 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"xvzDRxRg9j"
],
"note_type": [
"decision"
],
"note_created": [
1740883273139
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
gF3pYd4DmN | Searching for Phenotypic Needles in Genomic Haystacks: DNA Language Models for Sex Prediction | [
"Alla Chepurova",
"Yuri Kuratov",
"Polina Belokopytova",
"Mikhail Burtsev",
"Veniamin Fishman"
] | In this study, we explore fine-tuning of Genomic Language Models (GLM) to predict phenotypic traits directly from genomic sequence, without prior knowledge about causative loci or molecular mechanisms linking genotype to phenotype. As a case study, we focus on sex prediction, a well-defined genomic feature associated with the presence of the Y chromosome in most mammals. We adapt a pre-trained GENA-LM model for trait prediction by introducing a sequence chunk classification component with cross-attention, enabling the model to process larger genomic contexts. Training and evaluation on human and mouse genomes demonstrate that the model does not require high-quality reference genome assembly and converges even when the fraction of genomic signal associated with phenotype is below 1%. Prediction accuracy improves with increased sequencing depth, highlighting the scalability of GLMs for genome-wide tasks. Furthermore, a multi-species model effectively learns sex-specific signals for both human and mouse, confirming its cross-species predictive ability. Ablation studies demonstrate that the model relies on the Y chromosome for sex prediction, that aligns with real biological principles. Our findings highlight the applicability of GLMs for trait prediction in long and fragmented genomic data. | [
"sex prediction",
"model",
"phenotypic needles",
"genomic haystacks",
"dna language models",
"chromosome",
"trait prediction",
"human",
"glms",
"study"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=gF3pYd4DmN | https://openreview.net/forum?id=gF3pYd4DmN | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"E2A6oyqupK"
],
"note_type": [
"decision"
],
"note_created": [
1740961757755
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
eZh2UyeMrg | ShortListing Model: A Streamlined Simplex Diffusion for Biological Sequence Generation | [
"Yuxuan Song",
"Zhe Zhang",
"Yu Pei",
"Jingjing Gong",
"Mingxuan Wang",
"Hao Zhou",
"Jingjing Liu",
"Wei-Ying Ma"
] | Generative modeling of discrete variables is challenging yet crucial for applications in natural language processing and biological sequence design. We introduce the Shortlisting Model (SLM), a novel simplex-based diffusion model inspired by progressive candidate pruning. SLM operates on simplex centroids, reducing complexity and enhancing scalability. Additionally, SLM incorporates a flexible implementation of classifier-free guidance, enhancing unconditional generation performance. Extensive experiments in DNA promoter and enhancer design, and protein design demonstrate SLM's competitive performance and scalability. | [
"slm",
"model",
"streamlined simplex diffusion",
"scalability",
"biological sequence generation",
"discrete variables",
"crucial",
"applications",
"natural language processing"
] | Accept | https://openreview.net/pdf?id=eZh2UyeMrg | https://openreview.net/forum?id=eZh2UyeMrg | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"HZbQdUrcTX"
],
"note_type": [
"decision"
],
"note_created": [
1740902350293
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
e5uLntjTeD | CellMemory: Hierarchical Interpretation of Out-of-Distribution Cells Using Bottlenecked Transformer | [
"Qifei Wang"
] | Applying machine learning to cellular data presents several challenges. One such challenge is making the methods interpretable concerning both the cellular information and its context. Another less-explored challenge is the accurate representation of cells outside existing references, referred to as out-of-distribution (OOD) cells. OOD cells arise from physiological conditions (e.g., diseased vs. healthy) or technical variations (e.g., single-cell references vs. spatial queries). Inspired by the Global Workspace Theory in cognitive neuroscience, we introduce CellMemory, a bottlenecked Transformer with improved generalization designed for the hierarchical interpretation of OOD cells. CellMemory outperforms large-scale foundation models pre-trained on tens of millions of cells, even without pre-training. Moreover, it robustly characterizes malignant cells and their founder cells across different patients, revealing cellular changes caused by the diseases. We further propose leveraging CellMemory’s capacity to integrate multi-modalities and phenotypic information, advancing toward the construction of VIRTUAL ORGAN. | [
"cells",
"cellmemory",
"hierarchical interpretation",
"bottlenecked transformer",
"challenge",
"references",
"ood cells",
"bottlenecked transformer cellmemory",
"machine learning"
] | Accept | https://openreview.net/pdf?id=e5uLntjTeD | https://openreview.net/forum?id=e5uLntjTeD | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"59gcdMfGOi"
],
"note_type": [
"decision"
],
"note_created": [
1740961848565
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
dj0Oz1NsAI | Large Language Models for Zero-shot Inference of Causal Structures in Biology | [
"Izzy Newsham",
"Luka Kovačević",
"Richard Moulange",
"Nan Rosemary Ke",
"Sach Mukherjee"
] | Genes, proteins and other biological entities influence one another via causal molecular networks. Causal relationships in such networks are mediated by complex and diverse mechanisms, through latent variables, and are often specific to cellular context. It remains challenging to characterise such networks in practice. Here, we present a novel framework to evaluate the ability of large language models (LLMs) for zero-shot inference of causal relationships in biology. In particular, we systematically evaluate causal claims obtained from an LLM using real-world interventional data. This is done over one hundred variables and thousands of causal hypotheses. Furthermore, we consider several prompting and retrieval-augmentation strategies, including large, and potentially conflicting, collections of articles. Our results show that with tailored augmentation and prompting, even relatively small LLMs can capture meaningful aspects of causal structure in biological systems. This supports the notion that LLMs could act as orchestration tools in biological discovery, by helping to distil current knowledge in ways amenable to downstream analysis. Our approach to assessing LLMs with respect to experimental data is relevant for a broad range of problems at the intersection of causal learning, LLMs and scientific discovery. | [
"llms",
"inference",
"large language models",
"causal structures",
"causal relationships",
"networks",
"biology genes",
"proteins",
"biological entities influence"
] | Accept | https://openreview.net/pdf?id=dj0Oz1NsAI | https://openreview.net/forum?id=dj0Oz1NsAI | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"61Rki9HZUJ"
],
"note_type": [
"decision"
],
"note_created": [
1740902322446
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
cwi0o5rrVG | BEYOND SEQUENCE-ONLY MODELS: LEVERAGING STRUCTURAL CONSTRAINTS FOR ANTIBIOTIC RESISTANCE PREDICTION IN SPARSE GENOMIC DATASETS | [
"Mahbuba Tasmin",
"Anna G. Green"
] | To combat the rise of antibiotic-resistant $\textit{Mycobacterium tuberculosis}$, genotype-based diagnosis of resistance is critical, as it could substantially speed time to treatment. However, machine learning efforts at genotype-based resistance prediction are hindered by limited sequence diversity and high redundancy in genomic datasets, complicating model generalization. Here, we introduce a dataset of $\textit{M. tuberculosis}$ sequences for nine key resistance-associated genes and corresponding resistance phenotypes, performing genotype de-duplication to mitigate the effects of data leakage. This study introduces a Fused Ridge approach that moves beyond sequence-only prediction by introducing protein structure constraints. We compare to baseline Ridge regression and zero-shot mutation effect prediction using ESM-2 embeddings.
Our results show that Fused Ridge achieves the highest mean AUC (0.766), outperforming Ridge regression (0.755) and ESM-2-based log-likelihood ratio scoring (0.603). It also exhibits improved precision and recall in identifying resistance-conferring variants, particularly for genes such as $\textit{gyrA}$ and $\textit{rpoB}$, likely due to the strong association between the 3D location of mutations and resistance. The fusion penalty enforces smoothness in regression coefficients for spatially adjacent residues, embedding biological knowledge into the predictive framework, and improves generalization in sparse and highly redundant datasets. | [
"models",
"structural constraints",
"antibiotic resistance prediction",
"sparse genomic datasets",
"resistance",
"genes",
"ridge regression",
"rise",
"mycobacterium tuberculosis",
"diagnosis"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=cwi0o5rrVG | https://openreview.net/forum?id=cwi0o5rrVG | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"TKBmRey3AG"
],
"note_type": [
"decision"
],
"note_created": [
1740961125608
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
cd8QEKFzOQ | Supervised Contrastive Block Disentanglement | [
"Taro Makino",
"Ji Won Park",
"Natasa Tagasovska",
"TAKAMASA KUDO",
"Paula Coelho",
"Heming Yao",
"Jan-Christian Huetter",
"Ana Carolina Leote",
"Burkhard Hoeckendorf",
"Stephen Ra",
"David Richmond",
"Kyunghyun Cho",
"Aviv Regev",
"Romain Lopez"
] | Real-world datasets often combine data collected under different experimental conditions. This yields larger datasets, but also introduces spurious correlations that make it difficult to model the phenomena of interest. We address this by learning two embeddings to independently represent the phenomena of interest and the spurious correlations. The embedding representing the phenomena of interest is correlated with the target variable $y$, and is invariant to the environment variable $e$. In contrast, the embedding representing the spurious correlations is correlated with $e$. The invariance to $e$ is difficult to achieve on real-world datasets. Our primary contribution is an algorithm called Supervised Contrastive Block Disentanglement (SCBD) that effectively enforces this invariance. It is based purely on Supervised Contrastive Learning, and applies to real-world data better than existing approaches. We empirically validate SCBD on the real-world problem of batch correction. Using a dataset of 26 million Optical Pooled Screening images, we learn embeddings for \num{5050} genetic perturbations that are nearly free of technical artifacts that arise from unintended variation across wells. | [
"contrastive block disentanglement",
"spurious correlations",
"phenomena",
"interest",
"datasets",
"data",
"difficult",
"embeddings",
"invariance"
] | Accept | https://openreview.net/pdf?id=cd8QEKFzOQ | https://openreview.net/forum?id=cd8QEKFzOQ | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"IQR2QiJIiW"
],
"note_type": [
"decision"
],
"note_created": [
1740846061943
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
c8TWaIW0FU | stDiffusion: A Diffusion Based Model for Generative Spatial Transcriptomics | [
"Sumeer Ahmad Khan",
"Xabier Martínez de Morentin",
"Vincenzo Lagani",
"Robert Lehmann",
"Narsis A. Kiani",
"David Gomez-Cabrero",
"Jesper Tegnér"
] | Spatial Transcriptomics (ST) allows deep characterization of the 2D organization of expression data within tissue slices. The ST technology provides a tissue contextualization of deep single-cell profiles. Recently, numerous computational and machine learning methods have addressed challenges such as data quality, augmentation, annotation, and the development of integrative platforms for data analysis. In contrast, here we ask whether unseen spatial transcriptomics data can be predicted and if we can interpolate novel transcriptomic slices. To this end, we adopt a denoising diffusion probabilistic-based model (DDPM) to demonstrate the learning of generative ST models for several tissues. Furthermore, our generative diffusion model interpolates (predicts) unseen slices located “between” the collected finite number of ST slices. This methodology sets the stage for learning predictive deep 3D models of tissues from a finite number of spatial transcriptomics slices, thus heralding the advent of AI-augmented spatial transcriptomics. | [
"model",
"diffusion",
"finite number",
"stdiffusion",
"deep characterization",
"organization",
"expression data"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=c8TWaIW0FU | https://openreview.net/forum?id=c8TWaIW0FU | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"EOC0PMDiM7"
],
"note_type": [
"decision"
],
"note_created": [
1741144055081
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
bgM738EDYS | Enhancing Downstream Analysis in Genome Sequencing: Species Classification While Basecalling | [
"Riselda Kodra",
"Hadjer Benmeziane",
"Irem Boybat",
"William Andrew Simon"
] | The ability to quickly and accurately identify microbial species in a sample, known as metagenomic profiling, is critical across various fields, from healthcare to environmental science. This paper introduces a novel method to profile signals coming from sequencing devices in parallel with determining their nucleotide sequences, a process known as basecalling, via a multi-task deep neural network for simultaneous basecalling and multi-class genome classification. We introduce a new multi-objective loss strategy where basecalling and classification losses are back-propagated separately, with model weights combined for the shared layers, and a pre-configured ranking strategy allowing top-$\textit{K}$ species accuracy, giving users flexibility to choose between higher accuracy or lower latency at identifying the species. We achieve state-of-the-art basecalling accuracies, while multi-class classification accuracies meet and exceed the results of state-of-the-art binary classifiers, attaining an average of 92.5\%/98.9\% accuracy at identifying the top-1/3 species among a total of 17 genomes in the Wick bacterial dataset. This work has implications for future studies in metagenomic profiling by accelerating the bottleneck step of matching the DNA sequence to the correct genome. | [
"species",
"downstream analysis",
"genome sequencing",
"metagenomic profiling",
"basecalling",
"ability",
"microbial species",
"sample",
"critical",
"various fields"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=bgM738EDYS | https://openreview.net/forum?id=bgM738EDYS | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"S9n8lTwrZS"
],
"note_type": [
"decision"
],
"note_created": [
1740951068448
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
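The basecalling record above reports top-1/top-3 species accuracies produced by its pre-configured ranking strategy. As a rough illustration of that metric alone (a sketch with made-up scores, not the paper's code), top-K accuracy over per-class scores can be computed like this:

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: (n_samples, n_classes) array of per-class scores
    labels: (n_samples,) array of true class indices
    """
    # Indices of the k largest scores per row; order within the top-k is irrelevant.
    top_k = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (top_k == labels[:, None]).any(axis=1)
    return hits.mean()

# Three samples over three hypothetical species.
scores = np.array([[0.1, 0.6, 0.3],
                   [0.5, 0.2, 0.3],
                   [0.2, 0.3, 0.5]])
labels = np.array([2, 0, 1])
print(top_k_accuracy(scores, labels, k=1))  # only sample 2's top guess is right: 1/3
print(top_k_accuracy(scores, labels, k=2))  # every true label sits in the top-2: 1.0
```

Raising `k` trades latency (more candidate species to check downstream) for a higher chance the true genome is in the shortlist, which is the flexibility the abstract describes.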
bgGZIBF6Kx | LLM4GRN: Discovering Causal Gene Regulatory Networks with LLMs – Evaluation through Synthetic Data Generation | [
"Tejumade Afonja",
"Ivaxi Sheth",
"Ruta Binkyte",
"Waqar Hanif",
"Shubhi Ambast",
"Charles Mwangi Kaumbutha",
"Matthias Becker",
"Mario Fritz"
] | Gene regulatory networks (GRNs) represent the causal relationships between transcription factors (TFs) and target genes in single-cell RNA sequencing (scRNA-seq) data. Understanding these networks is crucial for uncovering disease mechanisms and identifying therapeutic targets. In this work, we investigate the potential of large language models (LLMs) for GRN discovery, leveraging their learned biological knowledge alone or in combination with traditional statistical methods. We develop a task-based evaluation strategy to address the challenge of unavailable ground truth causal graphs. Specifically, we use the GRNs suggested by LLMs to guide causal synthetic data generation and compare the resulting data against the original dataset. Our statistical and biological assessments show that LLMs can support statistical modeling and data synthesis for biological research. | [
"llms",
"evaluation",
"grns",
"data",
"causal relationships",
"transcription factors"
] | Accept | https://openreview.net/pdf?id=bgGZIBF6Kx | https://openreview.net/forum?id=bgGZIBF6Kx | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"9Ub8Bd2b6E"
],
"note_type": [
"decision"
],
"note_created": [
1740960729732
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
av4QhBNeZo | Talk2Biomodels and Talk2KnowledgeGraph: AI agent-based application for prediction of patient biomarkers and reasoning over biomedical knowledge graphs | [
"Gurdeep Singh",
"Lilija Wehling",
"Ahmad Wisnu Mulyadi",
"Rakesh Hadne Sreenath",
"Thomas Klabunde",
"Tommaso Andreani",
"Douglas McCloskey"
] | In this study, we present Talk2Biomodels (T2B) and Talk2KnowledgeGraphs (T2KG) as open-source, user-friendly, large language model-based agentic AI platforms designed to democratize access to computational models of disease processes using natural language. T2B and T2KG eschew the traditional graphical user interface (GUI) and minimally adaptable workflow in favour of a modern agentic framework to provide a dynamic and immersive experience to explore the biology of disease in silico and how different treatment options can be efficacious in different virtual patient populations. T2B supports models encoded in the open-source community format Systems Biology Markup Language (SBML) for quantitative prediction of patient biomarkers and integrates with biomedical knowledge graphs to provide qualitative insights not captured in the computational model. A use case in precision medicine is presented to demonstrate how experts and non-experts in computational biology and data science can benefit from T2B and T2KG. | [
"patient biomarkers",
"application",
"prediction",
"biomedical knowledge graphs",
"disease"
] | Accept | https://openreview.net/pdf?id=av4QhBNeZo | https://openreview.net/forum?id=av4QhBNeZo | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"HSGaSy2its"
],
"note_type": [
"decision"
],
"note_created": [
1741187317036
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
aNmZ9s6BZV | Test-Time View Selection for Multi-Modal Decision Making | [
"Eeshaan Jain",
"Johann Wenckstern",
"Benedikt von Querfurth",
"Charlotte Bunne"
] | The clinical routine has access to an ever-expanding repertoire of diagnostic tests, ranging from routine imaging to sophisticated molecular profiling technologies. Foundation models have recently emerged as powerful tools for extracting and integrating diagnostic information from these diverse clinical tests, advancing the idea of comprehensive patient digital twins. However, it remains unclear how to select and design tests that ensure foundation models can extract the necessary information for accurate diagnosis. We introduce MAVIS (Multi-modal Active VIew Selection), a reinforcement learning framework that unifies modality selection and feature selection into a single decision process. By leveraging foundation models, MAVIS dynamically determines which diagnostic tests to perform and in what sequence, adapting to individual patient characteristics. Experiments on real-world datasets across multiple clinical tasks demonstrate that MAVIS outperforms conventional approaches in both diagnostic accuracy and uncertainty reduction, while reducing testing costs by over 80%, suggesting a promising direction for optimizing clinical workflows through intelligent test design and selection. | [
"foundation models",
"view selection",
"decision",
"diagnostic tests",
"clinical routine",
"access",
"repertoire",
"routine",
"powerful tools"
] | Accept (Spotlight) | https://openreview.net/pdf?id=aNmZ9s6BZV | https://openreview.net/forum?id=aNmZ9s6BZV | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"Sc7GNE4UYD"
],
"note_type": [
"decision"
],
"note_created": [
1741031655949
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
aDqhVBvHHZ | Pathway-Attentive GAN for Interpretable Biomolecular Design | [
"Azmine Toushik Wasi",
"Mahfuz Ahmed Anik"
] | High-throughput sequencing has greatly advanced cancer research, but a major gap remains in connecting TCGA transcriptomic data with detailed metabolomic profiles. This disconnect limits our understanding of metabolic changes that drive tumor progression and resistance to treatment. To address this, we introduce the Pathway-Attentive GAN (PathGAN), a new framework that combines transformer-based attention mechanisms with a GNN discriminator to generate realistic and biologically relevant metabolite profiles as a case study. We validate these profiles using COBRApy-based flux balance analysis to ensure they align with key metabolic pathways. By linking transcriptomics and metabolomics, PathGAN improves our understanding of tumor metabolism and provides valuable insights for cancer therapy. We believe this work can offer a powerful tool for precision oncology, helping to develop more targeted and effective treatments. | [
"gan",
"interpretable biomolecular design",
"understanding",
"pathgan",
"sequencing",
"advanced cancer research",
"major gap",
"tcga transcriptomic data",
"detailed metabolomic profiles",
"disconnect"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=aDqhVBvHHZ | https://openreview.net/forum?id=aDqhVBvHHZ | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"JuWBvukBkC"
],
"note_type": [
"decision"
],
"note_created": [
1741143610870
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
YaEozn3y0G | A Topologically Guided Machine Learning Framework for Enhanced Fine-Mapping in Whole-Genome Bacterial Studies | [
"Tamsin Emily James",
"Peter Tino",
"Nicole E Wheeler"
] | This paper proposes a feature selection framework for machine learning–based bacterial genome-wide association studies aimed at uncovering resistance-causing traits. Using a well-characterized Staphylococcus aureus pangenome as a ground truth for causal‐variant labels, we demonstrate improved control for population structure and enhanced interpretability through the explicit incorporation of genomic context derived from graph-structured data, based on the compacted de Bruijn graph for an assembled pangenome. Our framework successfully uncovers resistance-causing traits for 9 of 14 antibiotics using a significantly reduced feature set, while preserving genomic marker identifiability via unique mappings between the encoded feature space and sequential representations that tag specific genomic loci. | [
"machine",
"framework",
"enhanced",
"bacterial studies",
"traits",
"feature selection framework",
"bacterial",
"association studies",
"staphylococcus",
"ground truth"
] | Accept | https://openreview.net/pdf?id=YaEozn3y0G | https://openreview.net/forum?id=YaEozn3y0G | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"0NkUggNxgG"
],
"note_type": [
"decision"
],
"note_created": [
1741031631511
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
YZBEBxtXyU | Uncertainty-aware genomic deep learning with knowledge distillation | [
"Jessica Zhou",
"Kaeli Rizzo",
"Ziqi Tang",
"Peter K Koo"
] | Deep neural networks (DNNs) have advanced predictive modeling for regulatory genomics, but challenges remain in ensuring the reliability of their predictions and understanding the key factors behind their decision making. Here, we introduce DEGU (Distilling Ensembles for Genomic Uncertainty-aware models), a method that integrates ensemble learning and knowledge distillation to improve the robustness and explainability of DNN predictions. DEGU distills the predictions of an ensemble of DNNs into a single model, capturing both the average of the ensemble's predictions and the variability across them, with the latter representing epistemic (or model-based) uncertainty. DEGU also includes an optional auxiliary task to estimate aleatoric, or data-based, uncertainty by modeling variability across experimental replicates. By applying DEGU across various functional genomic prediction tasks, we demonstrate that DEGU-trained models inherit the performance benefits of ensembles in a single model, with improved generalization to out-of-distribution sequences and more consistent explanations of cis-regulatory mechanisms through attribution analysis. Moreover, DEGU-trained models provide calibrated uncertainty estimates, with conformal prediction offering coverage guarantees under minimal assumptions. Overall, DEGU paves the way for robust and trustworthy applications of deep learning in genomics research. | [
"degu",
"predictions",
"models",
"genomic deep learning",
"knowledge distillation",
"dnns",
"ensembles",
"ensemble",
"single model",
"variability"
] | Accept | https://openreview.net/pdf?id=YZBEBxtXyU | https://openreview.net/forum?id=YZBEBxtXyU | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"FuTTnaJVmz"
],
"note_type": [
"decision"
],
"note_created": [
1740846151276
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
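DEGU, as summarized above, distills an ensemble of DNNs into a single student that predicts both the ensemble's average and its across-model variability (epistemic uncertainty). A minimal numpy sketch of how such distillation targets could be built — purely illustrative, with random stand-in predictions, and not the released DEGU code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have an ensemble of 5 trained DNNs scored on 8 sequences:
# ensemble_preds[m, i] = prediction of model m on sequence i.
ensemble_preds = rng.normal(loc=1.0, scale=0.2, size=(5, 8))

# Distillation targets for the single student model:
#   - the per-sequence ensemble mean (the "average" prediction to imitate)
#   - the per-sequence across-model std (a stand-in for epistemic uncertainty)
target_mean = ensemble_preds.mean(axis=0)
target_std = ensemble_preds.std(axis=0)

# A student with two heads (mu, sigma) would then regress both targets,
# e.g. with a summed squared-error distillation loss:
def distill_loss(student_mu, student_sigma):
    return np.mean((student_mu - target_mean) ** 2) + \
           np.mean((student_sigma - target_std) ** 2)

# A student that reproduces the targets exactly incurs zero loss.
print(distill_loss(target_mean, target_std))  # 0.0
```

The aleatoric head the abstract mentions would be trained the same way, but with targets built from variability across experimental replicates rather than across ensemble members.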
XjxO0Ayj01 | RAG-Enhanced Collaborative LLM Agents for Drug Discovery | [
"Namkyeong Lee",
"Edward De Brouwer",
"Ehsan Hajiramezanali",
"Tommaso Biancalani",
"Chanyoung Park",
"Gabriele Scalia"
] | Recent advances in large language models (LLMs) have shown great potential to accelerate drug discovery. However, the specialized nature of biochemical data often necessitates costly domain-specific fine-tuning, posing critical challenges. First, it hinders the application of more flexible general-purpose LLMs in cutting-edge drug discovery tasks. More importantly, it impedes the rapid integration of the vast amounts of scientific data continuously generated through experiments and research. To investigate these challenges, we propose CLADD, a retrieval-augmented generation (RAG)-empowered agentic system tailored to drug discovery tasks. Through the collaboration of multiple LLM agents, CLADD dynamically retrieves information from biomedical knowledge bases, contextualizes query molecules, and integrates relevant evidence to generate responses; all without the need for domain-specific fine-tuning. Crucially, we tackle key obstacles in applying RAG workflows to biochemical data, including data heterogeneity, ambiguity, and multi-source integration. We demonstrate the flexibility and effectiveness of this framework across a variety of drug discovery tasks, showing that it outperforms general-purpose and domain-specific LLMs as well as traditional deep learning approaches. | [
"llms",
"drug discovery tasks",
"collaborative llm agents",
"drug discovery",
"biochemical data",
"large language models",
"great potential",
"specialized nature",
"costly"
] | Accept (Spotlight) | https://openreview.net/pdf?id=XjxO0Ayj01 | https://openreview.net/forum?id=XjxO0Ayj01 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"4VaEW1QW0j"
],
"note_type": [
"decision"
],
"note_created": [
1741145036061
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
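CLADD, per the abstract above, retrieves evidence from biomedical knowledge bases before generating a response. The core of any RAG retrieval step is a similarity search over embedded documents; the toy 3-d embeddings below are invented for illustration and nothing here reflects CLADD's actual pipeline:

```python
import numpy as np

def cosine_top_k(query, docs, k=2):
    """Return indices of the k documents most similar (cosine) to the query embedding."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(sims)[::-1][:k]

# Made-up embeddings for four knowledge-base entries.
doc_embeddings = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
query_embedding = np.array([1.0, 0.05, 0.0])

hits = cosine_top_k(query_embedding, doc_embeddings, k=2)
print(hits)  # the two entries pointing closest to the query's direction
```

In a full agentic system the retrieved entries would then be stuffed into the LLM's context to ground its answer; the multi-source and heterogeneity issues the abstract raises live upstream of this step, in how the knowledge bases are embedded and reconciled.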
XYbl2uefvm | EFFICIENT FINE-TUNING OF SINGLE-CELL FOUNDATION MODELS ENABLES ZERO-SHOT MOLECULAR PERTURBATION PREDICTION | [
"Sepideh Maleki",
"Jan-Christian Huetter",
"David Richmond",
"Kangway V. Chuang",
"Gabriele Scalia",
"Tommaso Biancalani"
] | Predicting transcriptional responses to novel drugs provides a unique opportunity to accelerate biomedical research and advance drug discovery efforts. However, the inherent complexity and high dimensionality of cellular responses, combined with the extremely limited available experimental data, makes the task challenging. In this study, we leverage single-cell foundation models (FMs) pre-trained on tens of millions of single cells, encompassing multiple cell types, states, and disease annotations, to address molecular perturbation prediction. We introduce a drug-conditional adapter that allows efficient fine-tuning by training less than 1% of the original foundation model, thus enabling molecular conditioning while preserving the rich biological representation learned during pre-training. The proposed strategy allows not only the prediction of cellular responses to novel drugs, but also the zero-shot generalization to unseen cell lines. We establish a robust evaluation framework to assess model performance across different generalization tasks, demonstrating state-of-the-art results across all settings, with significant improvements in the few-shot and zero-shot generalization to new cell lines compared to existing baselines. | [
"foundation models",
"efficient",
"molecular perturbation prediction",
"drugs",
"cellular responses",
"generalization",
"transcriptional responses",
"unique opportunity",
"biomedical research"
] | Accept (Spotlight) | https://openreview.net/pdf?id=XYbl2uefvm | https://openreview.net/forum?id=XYbl2uefvm | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"gyqP9OXxig"
],
"note_type": [
"decision"
],
"note_created": [
1740846241418
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
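The record above reports fine-tuning less than 1% of the foundation model's parameters via a drug-conditional adapter. One standard way such ratios arise is a low-rank bottleneck adapter; the sketch below is an assumption about that general technique — the widths, the ReLU, and the way the drug condition enters the bottleneck are all invented here, not the paper's architecture:

```python
import numpy as np

d_model = 1024   # hidden width of the frozen foundation model (illustrative)
rank = 4         # adapter bottleneck width

# Frozen layer: a single d_model x d_model weight matrix.
frozen_params = d_model * d_model

# Bottleneck adapter: down-project to `rank`, nonlinearity, up-project back.
W_down = np.zeros((d_model, rank))
W_up = np.zeros((rank, d_model))
adapter_params = W_down.size + W_up.size

def adapter(h, cond):
    # Residual adapter; `cond` stands in for a drug embedding injected in the bottleneck.
    z = np.maximum(h @ W_down + cond, 0.0)  # ReLU bottleneck
    return h + z @ W_up

fraction = adapter_params / frozen_params
print(f"trainable fraction per layer: {fraction:.3%}")  # 2*d*r / d^2, well under 1%
```

With `W_up` initialized to zero the adapter starts as the identity, so the pre-trained representation is untouched until fine-tuning moves the (few) adapter weights — the property the abstract describes as preserving the rich biological representation.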
U3Ejoy1BG2 | HARMONY: A Multi-Representation Framework for RNA Property Prediction | [
"Junjie Xu",
"Artem Moskalev",
"Tommaso Mansi",
"Mangal Prakash",
"Rui Liao"
] | The biological functions of RNA arise from the interplay of sequence (1D), secondary structure (2D), and tertiary structure (3D). While existing machine learning models typically rely on sequence-based representations, recent studies suggest that integrating structural information can improve predictive performance, especially in low-data regimes. However, different representations have trade-offs—3D models are sensitive to noise, whereas sequence-based models are more robust to sequencing noise but lack structural insights. To address this, we introduce HARMONY, a framework that dynamically integrates 1D, 2D, and 3D representations, and seamlessly adapts to diverse real-world scenarios. Our experiments demonstrate that HARMONY consistently outperforms existing baselines across multiple RNA property prediction tasks on established benchmarks, offering a robust and generalizable approach to RNA modeling. | [
"harmony",
"framework",
"representations",
"models",
"robust",
"rna property prediction",
"biological functions",
"rna arise",
"interplay"
] | Accept (Tiny Papers) | https://openreview.net/pdf?id=U3Ejoy1BG2 | https://openreview.net/forum?id=U3Ejoy1BG2 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"iV8XoIYVNp"
],
"note_type": [
"decision"
],
"note_created": [
1740961571046
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Tiny Papers)\", \"title\": \"Paper Decision\"}"
]
} |
Tmx4o3Jg55 | LangPert: LLM-Driven Contextual Synthesis for Unseen Perturbation Prediction | [
"Kaspar Märtens",
"Marc Boubnovski Martell",
"Cesar A. Prada-Medina",
"Rory Donovan-Maiye"
] | Systematic genetic perturbation provides critical insights into cell functioning, yet predicting their cellular effects remains a major challenge. Despite advances in computational approaches, accurately modelling cellular responses to unseen perturbations continues to be difficult. Large Language Models (LLMs) have shown promise in biological applications by synthesizing scientific knowledge, but their direct application to high-dimensional gene expression data has been impractical due to numerical limitations. We propose LangPert, a novel hybrid framework that leverages LLMs to guide a downstream k-nearest neighbors (kNN) aggregator, combining biological reasoning with efficient numerical inference. We demonstrate that LangPert achieves state-of-the-art performance on single-gene perturbation prediction tasks across multiple datasets. | [
"langpert",
"contextual synthesis",
"llms",
"critical insights",
"cell functioning",
"cellular effects",
"major challenge",
"advances"
] | Accept (Spotlight) | https://openreview.net/pdf?id=Tmx4o3Jg55 | https://openreview.net/forum?id=Tmx4o3Jg55 | ICLR.cc/2025/Workshop/MLGenX | 2025 | {
"note_id": [
"tqOZfvDG8i"
],
"note_type": [
"decision"
],
"note_created": [
1741031804551
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/MLGenX/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"title\": \"Paper Decision\"}"
]
} |
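LangPert, per its abstract above, has an LLM propose related perturbations and then lets a k-nearest-neighbors aggregator do the numerical prediction. A toy version of the aggregation step — the gene names, effect vectors, and hard-coded "LLM suggestion" are all invented for illustration, not the authors' implementation:

```python
import numpy as np

# Mean expression shifts measured for perturbations already in the training set:
# key = perturbed gene, value = shift over three measured genes (made-up numbers).
seen_effects = {
    "GENE_A": np.array([1.0, 0.0, -0.5]),
    "GENE_B": np.array([0.8, 0.2, -0.3]),
    "GENE_C": np.array([-1.0, 0.5, 0.0]),
}

# Stand-in for the LLM step: for the unseen perturbation, the LLM nominates
# functionally related seen perturbations (hard-coded here, purely illustrative).
llm_suggested_neighbors = ["GENE_A", "GENE_B"]

def predict_unseen(neighbors, k=2):
    """kNN aggregator: average the effects of the top-k LLM-suggested neighbors."""
    picked = [seen_effects[g] for g in neighbors[:k]]
    return np.mean(picked, axis=0)

pred = predict_unseen(llm_suggested_neighbors)
print(pred)  # [ 0.9  0.1 -0.4]
```

The division of labor is the point of the hybrid design: the LLM contributes biological reasoning (which perturbations are alike) without ever having to emit high-dimensional numbers, and the kNN average handles the numerical inference.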