Search is not available for this dataset
forum_id
stringlengths 8
20
| forum_title
stringlengths 1
899
| forum_authors
sequencelengths 0
174
| forum_abstract
stringlengths 0
4.69k
| forum_keywords
sequencelengths 0
35
| forum_pdf_url
stringlengths 38
50
| forum_url
stringlengths 40
52
| note_id
stringlengths 8
20
| note_type
stringclasses 6
values | note_created
int64 1,360B
1,737B
| note_replyto
stringlengths 4
20
| note_readers
sequencelengths 1
8
| note_signatures
sequencelengths 1
2
| venue
stringclasses 349
values | year
stringclasses 12
values | note_text
stringlengths 10
56.5k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
XCUzATsVdU | One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning | [
"Dhruv Jain",
"Tsiry Mayet",
"Romain HÉRAULT",
"Romain MODZELEWSKI"
] | Recent studies on contrastive learning have emphasized carefully sampling and mixing negative samples.
This study introduces a novel and improved approach for generating synthetic negatives.
We propose a new method using One-Class Support Vector Machine (OCSVM) to guide in the selection process before mixing named as **Mixing OCSVM negatives (MiOC)**.
Our results show that our approach creates more meaningful embeddings, which lead to better classification performance.
We implement our method using publicly available datasets (Imagenet100, Cifar10, Cifar100, Cinic10, and STL10). We observed that MiOC exhibit favorable performance compared to state-of-the-art methods across these datasets.
By presenting a novel approach, this study emphasizes the exploration of alternative mixing techniques that expand the sampling space beyond the conventional confines of hard negatives produced by the ranking of the dot product. | [
"Contrastive Learning",
"Self Supervised Learning",
"One-Class SVM",
"Deep Learning"
] | https://openreview.net/pdf?id=XCUzATsVdU | https://openreview.net/forum?id=XCUzATsVdU | dmskMZMOvU | official_review | 1,728,531,685,099 | XCUzATsVdU | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission33/Reviewer_5y1S"
] | NLDL.org/2025/Conference | 2025 | title: A simple but effective strategy for contrastive learning
summary: The paper introduces a new method that mixes negatives selected with one-class SVM (OCSVM) for image-based contrastive learning. The method, called MiOC (Mixing OCSVM negatives), uses OCSVM to select inlier negative embeddings. These embeddings are combined with query embeddings, resulting in synthetic negatives. By using selected negatives, the method is able to generate better representations. For evaluation, authors have chosen image classification as the downstream task, using a larger dataset (ImageNet100) as well as smaller ones (CIFAR10, CIFAR100, STN10, CINIC10). In all cases, the proposed method has outperformed the baseline MoCov2 - even considering improved versions of it - and other SOTA methods.
strengths: The paper is clear, concise and well-illustrated. In particular, Fig. 1 properly shows a contrast of the proposed strategy and its counterpart dot-product hard negatives. The proposed strategy is simple but interesting. Results indicates that the strategy was effective for the problem. Authors have indicated that the code will be available upon acceptation, which would be appreciated.
weaknesses: I only have two issues about the paper, most of my comments are minors. First, the k-NN strategy only appears in Sec. 4.1.2, but it is not clear the role of its analysis. In which way does it complement Acc Top1 %? What is supposed to be the conclusion of the acc top1 x k-NN comparison? Second, since the dot-product strategy is an important counterpart, a comparison of its results versus MiOC would improve the paper (or an explicit mention if it is already being done).
Other suggestions/questions:
[Eq 6] Is k_i a set? It is not clear, because it is the result of a set concatenation, but it is used in exp() as a unique embedding.
[Line 287] Include the references of the datasets.
[A.2] Which set was used for Hyper-parameter Selection?
Minors:
- All open quotes erroneously appear as close quotes.
- Use punctuation in equations and \nonindent after them if the text continues.
- Do not use 'x' for dimensions (as in 224x224). Use \times instead
- Figures 1 and 2 are in the wrong order in the paper. They should be indexed in the same order that they are referenced.
- [Line 114] "shortly" does not seem to fit in this sentence.
- [Line 132] Include the full form of CLT in the first occurrence.
- [Figure 3] Use math "belongs to" to indicate that z- and zq are in O' and P.
- [Lines 191-199] Use uppercase for sets and lowercase for elements to improve readability. Also, k is used as an index and as the size of set Q.
- [Line 257] between each model -> for each model
- [Line 200] comprises of all -> comprises all
- [Figure 5] visualising -> visualizing
- [Line 331] The paper highlights the potential of OCSVM for a single application (image classification), although it may inspire other tasks.
- Check references, since some of them do not have publication year.
confidence: 4
justification: I mostly indicated minor suggestions, and the other issues do not significantly affect text quality. The strategy is simple and effective, and the problem is relevant and up-to-date. |
XCUzATsVdU | One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning | [
"Dhruv Jain",
"Tsiry Mayet",
"Romain HÉRAULT",
"Romain MODZELEWSKI"
] | Recent studies on contrastive learning have emphasized carefully sampling and mixing negative samples.
This study introduces a novel and improved approach for generating synthetic negatives.
We propose a new method using One-Class Support Vector Machine (OCSVM) to guide in the selection process before mixing named as **Mixing OCSVM negatives (MiOC)**.
Our results show that our approach creates more meaningful embeddings, which lead to better classification performance.
We implement our method using publicly available datasets (Imagenet100, Cifar10, Cifar100, Cinic10, and STL10). We observed that MiOC exhibit favorable performance compared to state-of-the-art methods across these datasets.
By presenting a novel approach, this study emphasizes the exploration of alternative mixing techniques that expand the sampling space beyond the conventional confines of hard negatives produced by the ranking of the dot product. | [
"Contrastive Learning",
"Self Supervised Learning",
"One-Class SVM",
"Deep Learning"
] | https://openreview.net/pdf?id=XCUzATsVdU | https://openreview.net/forum?id=XCUzATsVdU | cUmd8fRYsK | official_review | 1,728,158,102,089 | XCUzATsVdU | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission33/Reviewer_5RzJ"
] | NLDL.org/2025/Conference | 2025 | title: Interesting method, but a few concerns on originality and importance
summary: The paper proposes to use one-class support-vector machine (OCSVM) to guide the selection process for generating synthetic negatives for contrastive learning. The authors base their analysis on the idea of MoCHI
of interpolation between the randomly chosen query and a randomly chosen negative.
strengths: - Soundness/correctness: the model is sound and improves the performance upon the alternative strategy, MoCHI (although there are reservations as described in cons and the questions)
- Quality and clarity: the paper is well-written and clear (however, there are some concerns in questions below)
weaknesses: - Originality: in essence, the paper proposes a combination of two well-known approaches, MoCHI and OCSVM
- Importance: it seems to me that the proposed solution only marginally improves the performance, with the experimental results comparing the solution exclusively with MoCHI
Questions:
To expand upon the importance part, I suggest the following improvements:
- In this work, the authors focus their efforts on comparing the proposed method with MoCHI; it may be important to also present the comparison with other strategies for synthetic negatives generation, for example MixCo and iMix.
- It is also interesting that the results with MoCov2 + MoCHI do not improve upon MoCov2. Does it mean that the impact of MoCHI in this setting is actually negative? What would be the reason for this, and could the same also be a problem for the proposed, closely linked, method?
- Should Davies-Bouldin score and Calinski-Harabasz scores be given with the confidence interval? The same comment applies to Table 2.
- The text around Eq 5-6 could be given as an algorithm to facilitate reading
- Figure 1’s samples of hard negatives do not (obviously) look convincingly better than the dot-product ones. I wonder if the authors could clarify
- (Minor question) Figure 2 comes before Figure 1, the authors need to fix it
- Given that the authors seem to be focus on OCSVM, I wonder if the alternative anomaly detection techniques could be used (e.g., Nizan & Tal, 2024; Guille-Escuret et al (2024)), and what would be their benefit?
Nizan & Tal, k-NNN: Nearest Neighbors of Neighbors for Anomaly Detection, WACV workshops, 2024
Guille-Escuret, C., Rodriguez, P., Vazquez, D., Mitliagkas, I., & Monteiro, J. (2024). CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning. Advances in Neural Information Processing Systems, 36.
confidence: 4
justification: In summary, the paper has a good idea, however the authors should clarify upon the analysis as presented above.
final_rebuttal_confidence: 4
final_rebuttal_justification: I've checked the authors rebuttal, as well as discussion, and I think the authors addressed my comments and the comments from other reviewers. Therefore I suggest acceptance. |
XCUzATsVdU | One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning | [
"Dhruv Jain",
"Tsiry Mayet",
"Romain HÉRAULT",
"Romain MODZELEWSKI"
] | Recent studies on contrastive learning have emphasized carefully sampling and mixing negative samples.
This study introduces a novel and improved approach for generating synthetic negatives.
We propose a new method using One-Class Support Vector Machine (OCSVM) to guide in the selection process before mixing named as **Mixing OCSVM negatives (MiOC)**.
Our results show that our approach creates more meaningful embeddings, which lead to better classification performance.
We implement our method using publicly available datasets (Imagenet100, Cifar10, Cifar100, Cinic10, and STL10). We observed that MiOC exhibit favorable performance compared to state-of-the-art methods across these datasets.
By presenting a novel approach, this study emphasizes the exploration of alternative mixing techniques that expand the sampling space beyond the conventional confines of hard negatives produced by the ranking of the dot product. | [
"Contrastive Learning",
"Self Supervised Learning",
"One-Class SVM",
"Deep Learning"
] | https://openreview.net/pdf?id=XCUzATsVdU | https://openreview.net/forum?id=XCUzATsVdU | b8MlXvlCYH | official_review | 1,727,366,425,467 | XCUzATsVdU | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission33/Reviewer_a8Mf"
] | NLDL.org/2025/Conference | 2025 | title: Review "One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning"
summary: The paper "One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning" discusses a novel approach to generate negative samples for contrastive learning by utilizing the one-class SVM. Contrastive learning deals with the unsupervised learning of embeddings that can later be used for supervised down-stream tasks. In this context, for a given input $x$ (the anchor) a positive example $x^p$ and a set of negative examples $\{x^n_1,x^n_1\dots,\}$ are generated, and the learning task is to move the embeddings of the anchor and positive sample closer together, while increasing the embedding distance between the anchor and the negative samples. Empirically, it has been shown that a good set of negative samples is vital to contrastive learning (CL). To improve CL, the authors propose to use the one class SVM to detect if samples in the embedding space are close together (i.e. inside the SVM's circle) or far away (i.e. outside) and then use this information to construct new (difficult) negative examples. More formally, the authors generate two sets of negative samples: The first set is based on a random interpolation between the anchor and a continuously updated set of negative samples generated via the MiOC method. A second set of negative samples is generated by training a one class SVM on the current batch of anchor points and their negative samples. Points that are inside the SVMs decision boundary are deemed difficult (i.e. the SVM did not rate these points as outliers). These difficult samples are used to generate new negative examples by interpolation (this is similar to MiOC). The authors test their method on the ImageNet100K dataset and use the obtained embeddings for various down-stream tasks such as CIFAR10/100,Cinic10 and STL10. During experiments, the authors show that their OneClass SVM approach improves CL performance. Moreover, in a subsequent analysis they show that the negative samples generated by their approach have a higher diversity, which might explain the improved performance.
strengths: I think the overall idea of using the One-class SVM to generate better negative samples is smart and well-placed in the context of Contrastive Learning. The experiments clearly show the benefit of this method and, despite the comparably small datasets, I think this approach could merit further investigation by the community. Figure 3 helps to overall understanding and additional information in the appendix is also given to further explain some aspects of the method.
weaknesses: I mainly find two weaknesses with this paper, that, I believe, can be addressed during the rebuttal:
1) While I find the general method and intuition easy to follow, I had a difficult time understanding the exact method. First, some variables such as $s_k$ (eq. 3) are explained after they have been presented. Second, I am still a bit unclear on what data the SVM is exactly trained. I would appreciate a better explanation of the overall method / section 3.1. To be more concrete:
a) On what data is the SVM trained?
b) How is $\beta$ chosen exactly?
c) What is the additional insight of using $k$-NN as well for the experiments?
d) What is the exact connection of the data augmentation and the interpolation to generate the final embeddings? I understand that this is partly discussed in the MiOC paper, but I am not 100% sure how this is placed in this paper
2) The one-class SVM is known to be comparably slow, and you need to fit a new SVM for each data batch. While the authors mention the runtime during the experiments, clear numbers are missing. What is the impact in training the SVM? How much time does this cost per batch? How does it scale?
confidence: 3
justification: I think the paper presents a novel method that might be of interest to other readers. I could not find obvious flaws in the method, experiments or presentation of the paper. While the paper itself (i.e. its presentation) can be improved, I don't think the paper's current stage warrants a rejection. Hence, I vote for acceptance. |
XCUzATsVdU | One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning | [
"Dhruv Jain",
"Tsiry Mayet",
"Romain HÉRAULT",
"Romain MODZELEWSKI"
] | Recent studies on contrastive learning have emphasized carefully sampling and mixing negative samples.
This study introduces a novel and improved approach for generating synthetic negatives.
We propose a new method using One-Class Support Vector Machine (OCSVM) to guide in the selection process before mixing named as **Mixing OCSVM negatives (MiOC)**.
Our results show that our approach creates more meaningful embeddings, which lead to better classification performance.
We implement our method using publicly available datasets (Imagenet100, Cifar10, Cifar100, Cinic10, and STL10). We observed that MiOC exhibit favorable performance compared to state-of-the-art methods across these datasets.
By presenting a novel approach, this study emphasizes the exploration of alternative mixing techniques that expand the sampling space beyond the conventional confines of hard negatives produced by the ranking of the dot product. | [
"Contrastive Learning",
"Self Supervised Learning",
"One-Class SVM",
"Deep Learning"
] | https://openreview.net/pdf?id=XCUzATsVdU | https://openreview.net/forum?id=XCUzATsVdU | I2VAGC75Ei | meta_review | 1,730,583,859,996 | XCUzATsVdU | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission33/Area_Chair_yuuB"
] | NLDL.org/2025/Conference | 2025 | metareview: This paper presents MiOC, a novel method that leverages One-Class SVM (OCSVM) to improve contrastive learning through the generation of challenging synthetic negative samples. The approach is promising, showing competitive performance against other methods on several benchmark datasets. However, the paper would benefit from clearer articulation of its methodology and its placement within the existing literature. Additionally, more extensive comparisons to alternative sampling strategies and a deeper examination of the SVM’s computational costs would strengthen the study’s claims of efficiency and scalability.
While the approach shows potential, these improvements would enhance its accessibility and confirm the scalability of MiOC. Overall, the paper contributes a fresh perspective on negative sampling for contrastive learning, though more comprehensive validation is needed to maximise its impact.
recommendation: Accept (Oral)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 5: The area chair is absolutely certain |
XCUzATsVdU | One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning | [
"Dhruv Jain",
"Tsiry Mayet",
"Romain HÉRAULT",
"Romain MODZELEWSKI"
] | Recent studies on contrastive learning have emphasized carefully sampling and mixing negative samples.
This study introduces a novel and improved approach for generating synthetic negatives.
We propose a new method using One-Class Support Vector Machine (OCSVM) to guide in the selection process before mixing named as **Mixing OCSVM negatives (MiOC)**.
Our results show that our approach creates more meaningful embeddings, which lead to better classification performance.
We implement our method using publicly available datasets (Imagenet100, Cifar10, Cifar100, Cinic10, and STL10). We observed that MiOC exhibit favorable performance compared to state-of-the-art methods across these datasets.
By presenting a novel approach, this study emphasizes the exploration of alternative mixing techniques that expand the sampling space beyond the conventional confines of hard negatives produced by the ranking of the dot product. | [
"Contrastive Learning",
"Self Supervised Learning",
"One-Class SVM",
"Deep Learning"
] | https://openreview.net/pdf?id=XCUzATsVdU | https://openreview.net/forum?id=XCUzATsVdU | CmvjKCnC01 | official_review | 1,728,309,742,331 | XCUzATsVdU | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission33/Reviewer_pWvU"
] | NLDL.org/2025/Conference | 2025 | title: Good read, sound approach, a few things could still be improved
summary: The authors present MiOC, an approach for generating synthetic negatives to improve classification performance. They describe the method and evaluate MiOC using various data sets. The results show that it performs better than existing approaches.
strengths: The paper is very concise and gets to the point quickly. The language is very good (except for a few minor mistakes, see below). The approach appears to be sound, which is underlined by the evaluation results.
weaknesses: The paper does not have a dedicated section on related work. I suppose this is due to the fact that little research has been done in this area yet, but at the same time, this makes it hard to assess the paper's contribution. If there's anything the authors can do to improve on this, they should do it, as it would improve the paper's scientific quality! (e.g. not only compare quantitatively with existing approaches but also qualitatively or on a conceptual level).
Some remarks:
1 Introduction: The term "they" is used two times: "They utilise two encoders" and "They construct a dynamic dictionary". The authors should make it clear to what or whom 'they' refers in these cases.
1 Introduction: "[...] in the Figure 2" should be "[...] in Figure 2"
4 Experiments: The text switches back and forth between past and present tense (especially at the beginning). The authors should decide on a tense and then stick to it.
Appendix: I did not really understand how the appendix contributes to the paper. The only reference to it is in the caption of Table 1. Is the appendix really necessary? In particular, since MoCHI has already been described in a previous work [11]? The authors should make this clear and perhaps remove redundant information that has already been published.
confidence: 2
justification: Since I'm not an expert in AI theory, I mainly focused on the presentation. As far as I can tell, the approach is novel and contributes to the state of the art. Therefore, it is worth sharing. However, it could be that I did not understand every single detail. Also, I don't have a full overview of existing literature in this area. That's why I opted for "4 Accept" and not "5 Clear accept".
final_rebuttal_confidence: 2
final_rebuttal_justification: I already accepted the paper in my initial review, so my rating remains the same. |
UQuETmoMQX | Towards concurrent real-time audio-aware agents with deep reinforcement learning | [
"Anton Debner",
"Vesa Hirvisalo"
] | Audio holds significant amount of information about our surroundings. It can be used to navigate, assess threats, communicate, as a source of curiosity, and to separate the sources of different sounds. Still, these rich properties of audio are not fully utilized by current video game agents.
We use spatial audio libraries in combination with deep reinforcement learning to allow agents to observe their surroundings and to navigate in their environment using audio cues. In general, game engines support rendering audio for one agent only. Using a hide-and-seek scenario in our experimentation we show how support for multiple concurrent listeners can be used to parallelize the runtime operation and to enable using multiple agents. Further, we analyze the effects of audio environment complexity to demonstrate the scalability of our approach. | [
"audio",
"game engine",
"deep reinforcement learning",
"unity"
] | https://openreview.net/pdf?id=UQuETmoMQX | https://openreview.net/forum?id=UQuETmoMQX | xlxIJKbigK | official_review | 1,728,514,960,133 | UQuETmoMQX | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission32/Reviewer_ft3k"
] | NLDL.org/2025/Conference | 2025 | title: Review
summary: The paper describes the design of audio-based deep rl agents, in a multi-listener system.
The paper tackles a navigation problem: in a game engine, an agent tries to find the source of a noise. This is formalized as an RL problem, where the observations are visuals and sounds, the actions are simple navigation capabilities, and the reward is the opposite of the target distance. The authors propose to enable a multi-listener support, which allow them to parrallelize the learning procedure.
The performance of the audio server and the agents is then evaluated in the Unity engine.
strengths: The approach of the paper is reasonable: it proposes a sound solution to the multi-listener problem, and uses an simple but effective approach to train the DRL agents.
weaknesses: My main concern with the paper is that the contributions are not very clear. In particular, the paper introduces two contributions, the multi-listener / parallelization part, and the DRL for navugation part, but is not clear how they are linked. I think what could improve this is the following/
First, it would be interesting to see the impact of the number of listeners on the evloution of the reward. I assume the most listener there are, the faster the reward increases ? This could be dicussed or shown as an experiment.
Second, and more generally, there could be a more explicit discussion on what is the link between these two contributions. Is the goal that more listeners will improve sample efficiency? IS the goal being able to develop multi-agent algorithms?
confidence: 2
justification: Overall, I find the paper to be of good quality.
I think it lacks a bit of clarity, not in the techniques presentation, but on the problems it try to solve. |
UQuETmoMQX | Towards concurrent real-time audio-aware agents with deep reinforcement learning | [
"Anton Debner",
"Vesa Hirvisalo"
] | Audio holds significant amount of information about our surroundings. It can be used to navigate, assess threats, communicate, as a source of curiosity, and to separate the sources of different sounds. Still, these rich properties of audio are not fully utilized by current video game agents.
We use spatial audio libraries in combination with deep reinforcement learning to allow agents to observe their surroundings and to navigate in their environment using audio cues. In general, game engines support rendering audio for one agent only. Using a hide-and-seek scenario in our experimentation we show how support for multiple concurrent listeners can be used to parallelize the runtime operation and to enable using multiple agents. Further, we analyze the effects of audio environment complexity to demonstrate the scalability of our approach. | [
"audio",
"game engine",
"deep reinforcement learning",
"unity"
] | https://openreview.net/pdf?id=UQuETmoMQX | https://openreview.net/forum?id=UQuETmoMQX | rWt94lGZXz | official_review | 1,727,879,252,339 | UQuETmoMQX | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission32/Reviewer_PkUU"
] | NLDL.org/2025/Conference | 2025 | title: Official Review
summary: This paper explores the use of deep reinforcement learning (DRL) to train audio-aware agents in video games, addressing the challenge of limited multi-listener support in game engines. The authors propose a distributed architecture where each client runs its own audio engine instance (Figure 1). They train and evaluate their approach using a hide-and-seek task, measuring performance in environments of varying complexity. Results suggest that their method enables the training of multiple concurrent audio-aware agents with good results, when compared to random baselines.
strengths: - Clear Presentation and Structure: The paper is well-organized and written in a clear style. The figures are informative and illustrate key findings. The background information, aka the problem at hand, is well explained, making the work accessible to readers with different backgrounds.
- The optimization task: The hide-and-seek approach is, as described in the manuscript, is clear and easy to understand. It follows popular ideas present in current self-supervised learning methods such as LLMs and vision models, where the model (agent) needs to predict an intentionally hidden property of the data (sound location).
- The evaluation benchmark: The authors study how the complexity of the audio environment (different levels of reflections and occlusions) affects the agent's ability to localize the sound source.
- Limitations: The authors honestly acknowledge the limitations of their work (e.g., reliance on external libraries, lack of faster-than-realtime audio) support.
Question: Could you elaborate on the rationale behind choosing the specific hide-and-seek scenario and the selected audio cues? I can envision many possibilities to increase the difficulty of the task, such as introducing a time constraint for the agent to find the sound and also making the target non-stationary. What are your insist about these?
weaknesses: - While the STFT-H agent appears to achieve slightly higher performance than the STFT-R agent (Figures 5, 6), there is a lack of discussion regarding possible reasons and implications of such results. Why does the STFT-H agent perform slightly better? Is it the simplicity bias from the STFT-R technique? Or is there something fundamentally beneficial in the Hanning window approach used to perform the STFT that collaborates with the results?
- The authors compare two variations of their proposed approaches with a RANDOM agent as a baseline. While the RANDOM agent serves as a reasonably good lower-bound performance, the comparisons lack an upper-bound performance which would demonstrate how well their approach compares with existing methods.
- Lack of Deep Reinforcement Learning Insights: Given the paper's title and focus on audio-aware agents, the analysis of the deep reinforcement learning is somewhat superficial. I missed a bit deeper exploration of the agents' learning dynamics, the impact of hyperparameter choices on the agent's behavior, or a justification for the PPO algorithm.
confidence: 3
justification: The paper addresses a practical and relevant problem in game AI development and provides a functional solution. The experimental methodology is clear and well-defined, allowing for reproducibility. Overall, the paper offers valuable insights, positively contributing to the field.
final_rebuttal_confidence: 3
final_rebuttal_justification: Thank you for the response and additional clarification. |
UQuETmoMQX | Towards concurrent real-time audio-aware agents with deep reinforcement learning | [
"Anton Debner",
"Vesa Hirvisalo"
] | Audio holds significant amount of information about our surroundings. It can be used to navigate, assess threats, communicate, as a source of curiosity, and to separate the sources of different sounds. Still, these rich properties of audio are not fully utilized by current video game agents.
We use spatial audio libraries in combination with deep reinforcement learning to allow agents to observe their surroundings and to navigate in their environment using audio cues. In general, game engines support rendering audio for one agent only. Using a hide-and-seek scenario in our experimentation we show how support for multiple concurrent listeners can be used to parallelize the runtime operation and to enable using multiple agents. Further, we analyze the effects of audio environment complexity to demonstrate the scalability of our approach. | [
"audio",
"game engine",
"deep reinforcement learning",
"unity"
] | https://openreview.net/pdf?id=UQuETmoMQX | https://openreview.net/forum?id=UQuETmoMQX | bE5W5KhdSP | official_review | 1,728,037,958,009 | UQuETmoMQX | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission32/Reviewer_ZpAp"
] | NLDL.org/2025/Conference | 2025 | title: Official Review of Submission32
summary: This paper explores the application of reinforcement learning to train audio-aware agents that utilize both visual and auditory cues in real-time environments, particularly in a hide-and-seek scenario. The paper aims to overcome the limitations of game engines that typically support only a single audio listener by constructing a multi-listener based system. Through several experiments, the authors show the results on multi-listener performance and agent efficiency, while also measuring CPU and memory utilization to evaluate the feasibility of running multiple audio-aware agents simultaneously.
strengths: [Significance]
The authors construct a framework where multiple audio-aware AI agents can simultaneously process sound in a shared environment and ensure audio-agents can run in real-time. The authors also evaluate the performance of AI agents in tasks with various complexity from simple to complex. The results show that their AI agents outperform the random agents. The paper also compares the performance of AI agents trained by different STFT windowing functions which have some effects on the behaviors of the agents.
[Quality]
This paper provides the performance of the framework on the CPU and memory utilization, demonstrating its scalability. It can support up to 40 concurrent audio listeners with minimal performance degradation.
weaknesses: [Methods]
* The novelty of this paper is not enough. For example, compared to [1], both works present similar contributions in the domain of reinforcement learning with audio inputs. Both papers use similar techniques to integrate audio data into the reinforcement learning framework.
* It is unclear how the value loss is calculated, regarding whether the agent is rewarded for decreasing the length of the shortest path to the target.
* The procedure for updating the Navigrid is not described. For example, what is the formula for updating the weights in the Navigrid? What is the range for the weights of nodes in Navigrid? If a node propagates the increase in weights to its neighbors with a decay factor, why the weights around the agent are lower than the weights near the target in Figure B.1?
* In Appendix A (line 602), the paper mentions that "the target could be moving," and the GitHub README states, "Target-related scripts (Target of the audio agent, can move around randomly)." However, in line 254, it is mentioned that "the player is represented as a static object that does not move around in the environment." It is unclear how many targets are used in the environments and whether the targets move randomly during evaluation.
* The paper mentions the faster-than-realtime method in the methodology section, but it appears that it has not been implemented. It would be better to move it to the future work section.
[Experiments]
* The paper does not have comparisons with other established baselines in the field of reinforcement learning with audio inputs.
* The paper does not include any ablation studies that examine the performance of the agent when trained solely on positional data and ray tracking, without incorporating audio data. As a result, it remains unclear whether adding audio information is necessary for this task.
* The README file on GitHub does not provide sufficient detail regarding the setup and execution of the code, which impedes the ability of other researchers to reproduce the experimental results reported in the paper.
Questions for authors:
1. The approach involves reinforcement learning training, which typically needs GPU resources. What are the specifications for GPU? If GPU resources are required, would training multiple agents simultaneously cost a lot of computation resources? Could you provide details on how resource usage is managed during training?
2. In section 3.4, it mentions head related transfer function (HRTF), which models the interaural level differences (ILD) and the interaural time difference (ITD). Is this function also used in the framework?
3. In Section 3.5, does this paper employ the faster-than-realtime technique as described in [1]?
4. Section 4.1 mentions that each Unity instance is running the exact same scene with Steam Audio and one audio agent with a CNN-based audio model. Do multiple agents train by the shared training data collected by all agents, or does each agent only trained by its own training data?
5. Can the agent successfully locate the target if it is trained solely using audio data?
6. Figure 5 shows the training curves obtained with 10 concurrent Unity instances. Do the results in Figure 6 use the same agents that were trained in Figure 5?
7. In Section A.2, do the random agents also use the Navigrid? If they do not use the Navigrid, would it be unfair to compare their performance with that of the trained agents?
[1] Hegde, A. Kanervisto, and A. Petrenko. “Agents that listen: High-throughput reinforcement learning with multiple sensory systems”. In: 2021 IEEE Conference on Games (CoG). IEEE, Aug. 2021. doi: 10.1109/cog52621. 2021.9619096.
confidence: 4
justification: In summary, this paper lacks novelty in the methodology of reinforcement learning related to audio. In addition, the paper falls short of providing detailed methods, affecting the overall clarity and reproducibility of its contributions. Please refer Weaknesses for details.
final_rebuttal_confidence: 3
final_rebuttal_justification: The authors have addressed the main concerns, particularly the motivation of this paper. The primary focus of this paper is on the parallelization of the multi-listener system, rather than advancing methodologies for audio-aware agents. The revision provides several new experiments with detailed explanations. Figure 6 also demonstrates the benefits of running multiple Unity instances concurrently. In conclusion, although the novelty of this paper is limited, the authors have effectively addressed most of my concerns, and I am inclined to accept this paper. |
UQuETmoMQX | Towards concurrent real-time audio-aware agents with deep reinforcement learning | [
"Anton Debner",
"Vesa Hirvisalo"
] | Audio holds significant amount of information about our surroundings. It can be used to navigate, assess threats, communicate, as a source of curiosity, and to separate the sources of different sounds. Still, these rich properties of audio are not fully utilized by current video game agents.
We use spatial audio libraries in combination with deep reinforcement learning to allow agents to observe their surroundings and to navigate in their environment using audio cues. In general, game engines support rendering audio for one agent only. Using a hide-and-seek scenario in our experimentation we show how support for multiple concurrent listeners can be used to parallelize the runtime operation and to enable using multiple agents. Further, we analyze the effects of audio environment complexity to demonstrate the scalability of our approach. | [
"audio",
"game engine",
"deep reinforcement learning",
"unity"
] | https://openreview.net/pdf?id=UQuETmoMQX | https://openreview.net/forum?id=UQuETmoMQX | UODeHSDdLK | meta_review | 1,730,503,257,434 | UQuETmoMQX | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission32/Area_Chair_bUUd"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper addresses the (sometimes) overlooked area of audio based RL agents. The authors propose a solution where they focus on the parallelization of multi-listener systems, with a complete and well-motivated set of experiments. The work is both relevant and timely and can generate interesting discussions within the community.
All the reviewers agree on the value of the contribution. The authors have also carefully considered their suggestions and improved the paper accordingly.
recommendation: Accept (Oral)
suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed
confidence: 4: The area chair is confident but not absolutely certain |
UQuETmoMQX | Towards concurrent real-time audio-aware agents with deep reinforcement learning | [
"Anton Debner",
"Vesa Hirvisalo"
] | Audio holds significant amount of information about our surroundings. It can be used to navigate, assess threats, communicate, as a source of curiosity, and to separate the sources of different sounds. Still, these rich properties of audio are not fully utilized by current video game agents.
We use spatial audio libraries in combination with deep reinforcement learning to allow agents to observe their surroundings and to navigate in their environment using audio cues. In general, game engines support rendering audio for one agent only. Using a hide-and-seek scenario in our experimentation we show how support for multiple concurrent listeners can be used to parallelize the runtime operation and to enable using multiple agents. Further, we analyze the effects of audio environment complexity to demonstrate the scalability of our approach. | [
"audio",
"game engine",
"deep reinforcement learning",
"unity"
] | https://openreview.net/pdf?id=UQuETmoMQX | https://openreview.net/forum?id=UQuETmoMQX | 5EeRlB4iW9 | official_review | 1,728,484,534,585 | UQuETmoMQX | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission32/Reviewer_6bjW"
] | NLDL.org/2025/Conference | 2025 | title: Paper showing why concurrent audio aware agents would be useful in a game engine setup and how DRL is useful in building such agents.
summary: The paper focuses on enabling deep reinforcement learning agents (PPO Based) to use audio cues for navigation. The authors demonstrate a method for incorporating multiple concurrent audio-aware agents in a game engine (Unity) by using a multi-client approach, overcoming the common limitation of supporting only one audio listener per game instance.
The paper provides good motivation for the problem and shows details on the settings considered (along with images of the environment settings) and explanations on NavGrid.
strengths: 1) Clear motivation - on why concurrent audio-aware agents would be useful in game engine setup like Unity
2) Combining RL with Audio Cues - Using Audio specific sensors along with other state representations is a novel contritbution - making it a strength of the work
3) The authors use different environment setups (across varying difficulty levels) to show how their approach out-perfoms a random agent
4) The work has a practical application in game engines and the authors also discuss about the limitations and some potential future works which is good to see.
weaknesses: 1) Lack of gathering results over multiple seed runs - RL typically atleast 3 seed runs are considered necessary. (even though we see a convergence behavior in training pretty fast)
2) Have the authors tested the SPL score without the audio aware agents and other than random? its a bit unclear how those would perform
3) Plots could be made more clearer - eg Fig 3 by showing units of throughput (or defining it before) and Fig 6 - things like how many agents was used in that experiment are necessary (bit unclear)
4) Maybe more explanation of why SPL is the right evaluation metric in this case would be good to add as well. Seems like the paper over relies on this single metric? Some other analysis on agent behaviors would be interesting.
5) Would this behavior also extend to complex scenarios where the player is not stationary and more dynamic ( as in most realistic scenarios?) Has any experiment been done in this direction?
confidence: 4
justification: Confident that this type of work puts in a different perspective for the use of audio cues as an input modality for RL agents - is especially useful in game engine setups and the authors show how concurrent audio aware agents could be designed. |
UQuETmoMQX | Towards concurrent real-time audio-aware agents with deep reinforcement learning | [
"Anton Debner",
"Vesa Hirvisalo"
] | Audio holds significant amount of information about our surroundings. It can be used to navigate, assess threats, communicate, as a source of curiosity, and to separate the sources of different sounds. Still, these rich properties of audio are not fully utilized by current video game agents.
We use spatial audio libraries in combination with deep reinforcement learning to allow agents to observe their surroundings and to navigate in their environment using audio cues. In general, game engines support rendering audio for one agent only. Using a hide-and-seek scenario in our experimentation we show how support for multiple concurrent listeners can be used to parallelize the runtime operation and to enable using multiple agents. Further, we analyze the effects of audio environment complexity to demonstrate the scalability of our approach. | [
"audio",
"game engine",
"deep reinforcement learning",
"unity"
] | https://openreview.net/pdf?id=UQuETmoMQX | https://openreview.net/forum?id=UQuETmoMQX | 1CIbkCaxrc | decision | 1,730,901,556,009 | UQuETmoMQX | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: We recommend an oral and a poster presentation given the AC and reviewers recommendations. |
TP0ASAlrp2 | Deep Active Latent Surfaces for Medical Geometries | [
"Patrick Møller Jensen",
"Udaranga Wickramasinghe",
"Anders Dahl",
"Pascal Fua",
"Vedrana Andersen Dahl"
] | Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks. | [
"Shape Models",
"Medical Image Processing",
"Autodecoders"
] | https://openreview.net/pdf?id=TP0ASAlrp2 | https://openreview.net/forum?id=TP0ASAlrp2 | vfZQbwpkem | official_review | 1,728,451,295,077 | TP0ASAlrp2 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission28/Reviewer_8zCr"
] | NLDL.org/2025/Conference | 2025 | title: Powerful and solid contributions to 3D reconstruction
summary: The 3D reconstruction problem is one of the important application cases in computer vision and medical bio-information processing. This paper makes new contributions to the field of 3D reconstruction, while also following the latest trends in the field. One important strategy for conventional 3D reconstruction is the active contour/surface model. However, since these methods directly model the observed 3D shape, it is not easy to deal with noise and regularization during training. Therefore, the proposed method first proposes a new method that applies regularization to the latent variables of the auto-decoder model. Then, in 3D reconstruction, it shows a method for expressing more flexible 3D shapes by utilizing multiple vectors (i.e., a mechanism that can explicitly combine multiple clues that are effective for multiple observed 3D shapes) while satisfying spatial consistency from the auto-decoder obtained in the learning process. As a result, learning can be done efficiently with a small number of samples, and it is well suited to real-world problems in medical bioinformatics processing, such as the 3D shape of the liver used in experiments.
strengths: - The two major contributions of this paper to the problem of 3D reconstruction are very clear.
1. First, regularization in 3D shape learning is applied to the latent variables expressed by the auto-decoder, rather than to the conventional “observed” shape. This avoids excessive smoothing.
2. Second, instead of using a single latent variable, multiple latent variables are used in the 3D reconstruction. This idea alone has been used in the past. However, when combined with the first contribution, it has the effect of reducing the number of training data in the learning process.
- It is explained that the 3D reconstruction using multiple latent vectors in the proposed method is a generalization of the conventional method using a single vector. Specifically, as explained from line 184, by controlling the weight of a specific term of the objective function, the proposed method can be reduced to the conventional method in special cases. This fact gives us an intuitive insight into why the proposed method works well.
- The survey of previous research is comprehensive, and the issues that the proposed method aims to solve and the motivation behind them are explained very clearly.
- The reproducibility of the proposed method is very high. The hyperparameter settings and how to use third-party tools are explained very clearly, along with their motivations.
- Section 4.2 deals with particularly tough problems specific to biomedical real-world problems (which are somewhat uncommon in broad computer vision tasks), and I can imagine that it will strongly attract the interest of experts in bioinformatics.
weaknesses: I have not been able to find any fatal flaws in this paper. Thank you for sharing your great contribution. The following are some of the simple questions I had while reading the paper. If there are any items that are important for a deeper understanding of the significance of the proposed method, I would be very grateful if you could let me know.
- I certainly think that the method of expanding the training data 100 times using PointWolf [ICCV2021] is appropriate. On the other hand, is such data augmentation really necessary even for relatively static organs like the liver? For example, for organs like the cardiac heart, which constantly undergo major changes in shape, I have a hunch that such data augmentation would work effectively. However, in the case of the liver, I couldn't easily imagine whether data augmentation would work effectively. I wonder if this was an effective process that was tested experimentally. If the authors had investigated this area in their preliminary study, it would be very useful for readers if they could report it briefly in the supplementary materials, etc. (This is not a critical point, so additional experiments, etc. are not necessary.)
- I found it somewhat difficult to follow the explanation of $L_{reg}$ (this seems to be a classic and well-known method, so it may be because of my lack of prior knowledge). As a result, I was only able to understand it after referring to the original paper [Bhatia&Lawrence, 1990]. However, I also understand that it is difficult to explain it further due to the page limit. If the authors could explain this point using simple illustrations in supplementary materials, etc., it may be possible for people like me with a lack of prior knowledge to follow it immediately.
confidence: 3
justification: I think this paper is of high quality in terms of clarity of contribution, reproducibility of methods, accuracy of reporting, and clarity of explanation. As it deals with the topic of 3D reconstruction, which attracts a wide range of interest, I imagine that there will be many readers who are interested in it. In addition, the experimental setup used focuses specifically on biomedical 3D reconstruction, so it will also appeal strongly to experts. Therefore, this paper is considered to have value in that it can be shared with the community.
final_rebuttal_confidence: 3
final_rebuttal_justification: My slight concerns have all been addressed in the authors' response. In particular, the authors have provided supplementary material to explain the technical details so that this paper can be a self-contained explanation. |
TP0ASAlrp2 | Deep Active Latent Surfaces for Medical Geometries | [
"Patrick Møller Jensen",
"Udaranga Wickramasinghe",
"Anders Dahl",
"Pascal Fua",
"Vedrana Andersen Dahl"
] | Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks. | [
"Shape Models",
"Medical Image Processing",
"Autodecoders"
] | https://openreview.net/pdf?id=TP0ASAlrp2 | https://openreview.net/forum?id=TP0ASAlrp2 | mZ34Wm80by | meta_review | 1,730,384,632,718 | TP0ASAlrp2 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission28/Area_Chair_tFPu"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper proposed a novel method for 3D shapes reconstruction by learning a mesh deformation. Reviewers found the proposed approach novel and conceptually interesting with solid experimental results and reproducibility. They also commend great paper presentation and high quality of the manuscript.
Overall, this paper makes a significant contribution and is recommended for acceptance.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 4: The area chair is confident but not absolutely certain |
TP0ASAlrp2 | Deep Active Latent Surfaces for Medical Geometries | [
"Patrick Møller Jensen",
"Udaranga Wickramasinghe",
"Anders Dahl",
"Pascal Fua",
"Vedrana Andersen Dahl"
] | Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks. | [
"Shape Models",
"Medical Image Processing",
"Autodecoders"
] | https://openreview.net/pdf?id=TP0ASAlrp2 | https://openreview.net/forum?id=TP0ASAlrp2 | izoy9iH0bs | official_review | 1,728,790,611,186 | TP0ASAlrp2 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission28/Reviewer_rExU"
] | NLDL.org/2025/Conference | 2025 | title: Interesting approach, method details are not clear enough, evaluated only on one dataset
summary: This paper introduces an active latent shape representation model for shape reconstruction by deforming a triangulated sphere to match the target shape. The proposed approach consists of two stages: training and fitting. While the methodology is intriguing, certain aspects lack clarity and require further elaboration.
strengths: 1. The proposed method is evaluated across various shape reconstruction scenarios, such as 3D point clouds and planar curve annotations.
2. The model consistently performs better than other baselines, particularly in the liver dataset.
weaknesses: 1. Key details of the proposed method are insufficiently explained. For instance, in lines 153-154, the choice of regularization is attributed to the Laplacian regularization being "too smooth," but this might be influenced by the regularization term's weighting. The reasoning behind this choice remains unclear. Additionally, it is ambiguous whether the fitting process is performed in a single step or involves multiple iterations to optimize Eq. 4.
2. The weighting terms in Eq. 4 appear to significantly influence the final results, particularly in terms of smoothness, yet the sensitivity of these terms is not adequately explored. The approach may be sensitive to changes in such hyperparameters. Given the noticeable differences in hyperparameter values between Sections 4.1 and 4.2, the authors should provide a more detailed discussion of how these hyperparameters affect different tasks.
3. The organization of the paper could be improved. For example, the task-specific losses discussed in Section 3.2 should be consolidated into a dedicated section rather than being spread across the experimental sections.
4. It is unclear how the proposed method distinguishes itself from existing literature that uses multiple latent vectors for shape representation. Including an introductory section explaining how shape representation pipelines are typically structured could provide helpful context. Additionally, experimental details (e.g., GPU setup, learning rate) could be moved to an appendix to save some space.
5. To enhance reproducibility, the authors should demonstrate their method on at least one additional dataset, as the current evaluations are all based on a single dataset.
confidence: 3
justification: The proposed approach is conceptually interesting, but the current presentation lacks clarity in articulating both the technical contributions and the evaluation methodology.
final_rebuttal_confidence: 4
final_rebuttal_justification: Thank the authors for the rebuttal, which clarified a couple of things in the paper. I am now leaning toward accepting the paper. |
TP0ASAlrp2 | Deep Active Latent Surfaces for Medical Geometries | [
"Patrick Møller Jensen",
"Udaranga Wickramasinghe",
"Anders Dahl",
"Pascal Fua",
"Vedrana Andersen Dahl"
] | Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks. | [
"Shape Models",
"Medical Image Processing",
"Autodecoders"
] | https://openreview.net/pdf?id=TP0ASAlrp2 | https://openreview.net/forum?id=TP0ASAlrp2 | fIpLOttTKu | official_review | 1,728,671,377,175 | TP0ASAlrp2 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission28/Reviewer_gdUj"
] | NLDL.org/2025/Conference | 2025 | title: Review
summary: This paper proposes DALS - a method for shape reconstruction by learning vertex displacements. The method is based two steps: first, learn an autoencoder for geometric shapes. Second, at inference time, optimize over the latent vectors at each vertex to generate an updated displacement of the vertices, focusing on medical shapes.
strengths: The paper seems novel to me. Although some of the components are repeated from existing methods, the authors discuss the differences and show improved performance.
The paper is nicely written and easy to follow.
Many visualization are provided.
weaknesses: The method seems to only work for 0 genus shapes that start from a sphere. This is limited because it cannot represent all shapes. It would be interesting to know how this method can be extended to more shapes.
Missing discussion of similar work in the literature: "Generating 3D faces using Convolutional Mesh Autoencoders".
Missing comparison with classical results. While it is contemporary to use neural networks for reconstructing shapes, there are many classical techniques. Specifically for methods that could benefit medical shapes, there are level-set approaches that were shown to provide good performance, and it would be interesting to know how this method compares with them. Some examples are "Multimodal 3D Shape Reconstruction under Calibration Uncertainty Using Parametric Level Set Methods" and "Parametric Level-sets Enhanced To Improve Reconstruction (PaLEnTIR)"
Questions:
What if you want to support a varying number of vertices? Can the method support it?
How do you ensure the mesh remains valid after translating its vertices?
confidence: 4
justification: The paper is interesting and offers a contribution in medical shape reconstruction. It is well written and has good contribution, so I think it should be accepted. |
TP0ASAlrp2 | Deep Active Latent Surfaces for Medical Geometries | [
"Patrick Møller Jensen",
"Udaranga Wickramasinghe",
"Anders Dahl",
"Pascal Fua",
"Vedrana Andersen Dahl"
] | Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks. | [
"Shape Models",
"Medical Image Processing",
"Autodecoders"
] | https://openreview.net/pdf?id=TP0ASAlrp2 | https://openreview.net/forum?id=TP0ASAlrp2 | 7NW3nMn3e6 | official_review | 1,726,920,771,961 | TP0ASAlrp2 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission28/Reviewer_byPa"
] | NLDL.org/2025/Conference | 2025 | title: Review "Deep Active Latent Surfaces for Medical Geometries"
summary: The paper proposes a novel auto-decoder method to represent 3D shapes by training a neural net that receives a latent vector and a point on a standard 3D sphere as input and predicts the offset from the 3D sphere vertex location to the actual vertex location for the shape to be modeled. The neural net is a simple 3-layer MLP.
The latent vector is supposed to represent the entire 3D shape and is, therefore, forced to be the same across vertices during training.
At inference time, the optimization is to find latent vectors, separate for each vertex, such that the resulting predicted shape conforms to some loss, such as the agreement with 3D point clouds or human-generated segmentations.
strengths: The strengths of the paper are:
- With medical geometries, a highly relevant practical application case is addressed.
- The paper competently utilized an entire host of mathematical and practical tools from (neural) geometry processing to generate a novel approach.
- The novel approach is conceptually very clear, elegant, and efficient to an inspiring degree. The paper is a great example how even relatively simple neural network architectures (in this case an MLP) can suffice if the problem is represented in a clever way - and the problem representation in this case strikes me as exceedingly clever.
- The empirical results, both numerically and in terms of pictures shown, are highly promising.
weaknesses: If the paper has any weakness, then that it contains so much content that it is, sometimes, hard to follow for non-experts in the specific subfield the paper adresses. As such, I would rather like to pose some questions which were not entirely clear to me when reading:
* in line 125, how is x represented? Is it in 3D coordinate space or in angular space?
* is the output of D (i.e. the offset mentioned in line 125) just a scalar, regulating how far the corresponding vertex is from the center of the sphere or does it need to be a 3D vector?
* How were the layer sizes in line 140 found?
* The Fitting Scheme section (3.2) could, I think, profit from an introductory sentence introducing the problem: namely finding a matrix of latent variables Z, such that the predicted shape via D solves some downstream task, such as fitting a point cloud.
* In the Fitting Scheme in 3.2, how is Z initialized?
confidence: 4
justification: Overall, the paper strikes me as widely applicable with high-stakes practical application domains, conceptually clever (even beautiful), efficient, and empirically effective. As such, I enthusiastically vote for acceptance. If I could propose a best paper award, I would.
final_rebuttal_confidence: 4
final_rebuttal_justification: The reviewers addressed my questions fully and provided additional insight. I stand by the evaluation. |
TP0ASAlrp2 | Deep Active Latent Surfaces for Medical Geometries | [
"Patrick Møller Jensen",
"Udaranga Wickramasinghe",
"Anders Dahl",
"Pascal Fua",
"Vedrana Andersen Dahl"
] | Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting.
In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks. | [
"Shape Models",
"Medical Image Processing",
"Autodecoders"
] | https://openreview.net/pdf?id=TP0ASAlrp2 | https://openreview.net/forum?id=TP0ASAlrp2 | 3ebnkhClrN | decision | 1,730,901,555,748 | TP0ASAlrp2 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: Given the AC positive recommendation, we recommend an oral and a poster presentation given the AC and reviewers recommendations. |
SPRdfOkuHw | Learning anomalies from graph: predicting compute node failures on HPC clusters | [
"Joze M. Rozanec",
"Roy Krumpak",
"Martin Molan",
"Andrea Bartolini"
] | Today, high-performance computing (HPC) systems play a crucial role in advancing artificial intelligence. Nevertheless, the estimated global data center electricity consumption in 2022 was around 1\% of the final global electricity demand. Therefore, as HPC systems advance towards Exascale computing, research is required to ensure their growth is sustainable and environmentally friendly. Data from infrastructure monitoring can be leveraged to predict downtimes, ensure these are treated in time, and increase the overall system's utilization. In this paper, we compare four machine-learning approaches, three of them based on graph embeddings, to predict compute node downtimes. The experiments were performed with data from Marconi 100, a tier-0 production supercomputer at CINECA in Bologna, Italy. Our results show that the machine learning models can accurately predict downtime, matching current state-of-the-art models. | [
"Artificial Intelligence",
"Machine Learning",
"Graphs",
"HPC",
"Data Center",
"Anomalies Forecasting"
] | https://openreview.net/pdf?id=SPRdfOkuHw | https://openreview.net/forum?id=SPRdfOkuHw | sUWbFCyW8k | decision | 1,730,901,556,711 | SPRdfOkuHw | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: We recommend an oral and a poster presentation given the AC and reviewers recommendations. |
SPRdfOkuHw | Learning anomalies from graph: predicting compute node failures on HPC clusters | [
"Joze M. Rozanec",
"Roy Krumpak",
"Martin Molan",
"Andrea Bartolini"
] | Today, high-performance computing (HPC) systems play a crucial role in advancing artificial intelligence. Nevertheless, the estimated global data center electricity consumption in 2022 was around 1\% of the final global electricity demand. Therefore, as HPC systems advance towards Exascale computing, research is required to ensure their growth is sustainable and environmentally friendly. Data from infrastructure monitoring can be leveraged to predict downtimes, ensure these are treated in time, and increase the overall system's utilization. In this paper, we compare four machine-learning approaches, three of them based on graph embeddings, to predict compute node downtimes. The experiments were performed with data from Marconi 100, a tier-0 production supercomputer at CINECA in Bologna, Italy. Our results show that the machine learning models can accurately predict downtime, matching current state-of-the-art models. | [
"Artificial Intelligence",
"Machine Learning",
"Graphs",
"HPC",
"Data Center",
"Anomalies Forecasting"
] | https://openreview.net/pdf?id=SPRdfOkuHw | https://openreview.net/forum?id=SPRdfOkuHw | q5zUKgYh30 | meta_review | 1,730,366,298,167 | SPRdfOkuHw | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission51/Area_Chair_oAFF"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper presents a comparative analysis of various approaches for predicting issues in high-performance computing (HPC) systems. Strengths of the work include a detailed and insightful explanation of the dataset and a clear overview of the results. A key weakness, however, is the limited discussion on the experimental results, although the proposed methodology is sufficiently innovative. Additionally, the motivation behind the study is not sufficiently clear, and the introduction lacks background information, making it challenging for readers outside the domain to grasp the context and significance of the research.
Following the rebuttal, the authors have significantly improved the paper by adding background information and additional experimental insights. They have also refined their discussion of the results, making it more comprehensive and appropriately detailed.
Considering all comments from the reviewers, as well as the authors' constructive responses, I would recommend acceptance of the paper.
recommendation: Accept (Oral)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 4: The area chair is confident but not absolutely certain |
SPRdfOkuHw | Learning anomalies from graph: predicting compute node failures on HPC clusters | [
"Joze M. Rozanec",
"Roy Krumpak",
"Martin Molan",
"Andrea Bartolini"
] | Today, high-performance computing (HPC) systems play a crucial role in advancing artificial intelligence. Nevertheless, the estimated global data center electricity consumption in 2022 was around 1\% of the final global electricity demand. Therefore, as HPC systems advance towards Exascale computing, research is required to ensure their growth is sustainable and environmentally friendly. Data from infrastructure monitoring can be leveraged to predict downtimes, ensure these are treated in time, and increase the overall system's utilization. In this paper, we compare four machine-learning approaches, three of them based on graph embeddings, to predict compute node downtimes. The experiments were performed with data from Marconi 100, a tier-0 production supercomputer at CINECA in Bologna, Italy. Our results show that the machine learning models can accurately predict downtime, matching current state-of-the-art models. | [
"Artificial Intelligence",
"Machine Learning",
"Graphs",
"HPC",
"Data Center",
"Anomalies Forecasting"
] | https://openreview.net/pdf?id=SPRdfOkuHw | https://openreview.net/forum?id=SPRdfOkuHw | jhvapMcRXX | official_review | 1,727,815,356,166 | SPRdfOkuHw | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission51/Reviewer_3Ake"
] | NLDL.org/2025/Conference | 2025 | title: Lack of novelty and mathematical justifications
summary: The paper compares different approaches in predicting issues in high-performance computing systems. For this, the authors utilize the M100 dataset consisting of telemetry data with known anomalies. In the experiments, the authors compare 4 different approaches to predict future anomalies by using a discretized version of the time series and incorporating a natural visibility graph.
strengths: + Interesting idea of using natural visibility graphs, although their motivation is a bit lacking
+ Detailed and insightful explanation of the dataset
+ Clear overview of the results
weaknesses: - Transformation of the data lacks justification (e.g., the discretization)
- Although interesting, it's unclear why natural visibility graphs are used instead of simple lagged versions of columns to incorporate time structures
- Seeing that the model is trained for predicting 1, 2, or 3 steps ahead, it is unclear how this is better than the works discussed in the related work section which seem to be applicable in a similar way
- Lack of comparison with existing approaches in the experiments
- No real novelty aside from the particular approach of data processing and chosen modeling approach
confidence: 4
justification: Some remarks and suggestions:
The paper has some interesting ideas but seems rather arbitrary in many of the data processing choices. For instance:
- Why was a forward-filling imputation strategy chosen?
- Why does the data need to be discrete (and why the particular choice of 5 values)?
- Although an interesting approach, why use natural visibility graphs?
These seem to be design decisions that worked the best in practice but lack mathematical justifications. For instance, do they work particularly well with the chosen ML models? Generally, the work lacks any kind of theoretical statements about made assumptions.
Regarding the visibility graphs, wouldn't it be easier to simply take a lagged representation of past timestamps to incorporate temporal dependencies? For instance, to use timestamp t-1, you can simply make a copy of the column and move it by 1 timestamp. This can then be used in your model. For other ideas on how to represent a time series as a (causal) graph, you might want to check out Section 10 in the book Elements of Causal Inference (https://mitpressbookstore.mit.edu/book/9780262037310).
When comparing the related work, you point out that some are not able to give a concrete timeframe to predict future failures. Based on your setup by defining n+1,..., n+3 as your targets, couldn't this also be done in the related works? In this regard, there is also a lack of comparison, e.g., you can try to bring the problem into a similar setting and compare your methodology accordingly as this is the main contribution.
For the training splitting, if you assume that you have a specific order of a Markov process (e.g., the current timestamp t is independent of the t-n timestamps conditioned on timestamps t-1, ..., t-n-1), you actually don't need to ensure that the training data is before the test data as these would be independent segments. However, your strategy makes sense if you assume long-term dependencies (here, n is large).
In the results, Experiment B performed poorly, but there is a lack of explanation. Adding some thoughts or potential reasons could strengthen the insights.
Generally, it's unclear why you don't treat these as time series in the first place and use time series prediction models for this problem (see e.g., https://www.sktime.net/en/stable/)? Some discussion on this would be very insightful.
final_rebuttal_confidence: 4
final_rebuttal_justification: I want to thank the authors for their careful answers. While a few of my concerns are addressed, there is still a significant lack of theoretical justification, although I appreciate the empirical work and insights. My biggest concern about the justification of using visibility graphs remains, and I believe a more direct graphical representation (e.g., a (dynamic) Bayesian network or a graphical causal model) would be a much clearer formulation and a more direct modeling of domain knowledge. I strongly recommend looking into related works about graphical causal models as this seems to be a great application for it. While I am willing to increase my score, I cannot give a recommendation for acceptance due to the lack of theoretical work. |
SPRdfOkuHw | Learning anomalies from graph: predicting compute node failures on HPC clusters | [
"Joze M. Rozanec",
"Roy Krumpak",
"Martin Molan",
"Andrea Bartolini"
] | Today, high-performance computing (HPC) systems play a crucial role in advancing artificial intelligence. Nevertheless, the estimated global data center electricity consumption in 2022 was around 1\% of the final global electricity demand. Therefore, as HPC systems advance towards Exascale computing, research is required to ensure their growth is sustainable and environmentally friendly. Data from infrastructure monitoring can be leveraged to predict downtimes, ensure these are treated in time, and increase the overall system's utilization. In this paper, we compare four machine-learning approaches, three of them based on graph embeddings, to predict compute node downtimes. The experiments were performed with data from Marconi 100, a tier-0 production supercomputer at CINECA in Bologna, Italy. Our results show that the machine learning models can accurately predict downtime, matching current state-of-the-art models. | [
"Artificial Intelligence",
"Machine Learning",
"Graphs",
"HPC",
"Data Center",
"Anomalies Forecasting"
] | https://openreview.net/pdf?id=SPRdfOkuHw | https://openreview.net/forum?id=SPRdfOkuHw | ZiJaBkIpJS | official_review | 1,727,463,506,733 | SPRdfOkuHw | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission51/Reviewer_PR3L"
] | NLDL.org/2025/Conference | 2025 | title: Review of the "Learning anomalies from graph" paper
summary: This paper addresses the pressing challenge of predicting compute node downtimes in HPC systems, highlighting the importance of predictive maintenance for enhancing system sustainability and efficiency. The authors employ a data-driven approach using publicly available data from the Marconi 100 supercomputer, leveraging advanced ML techniques, particularly focusing on graph embeddings.
One of the significant strengths of the proposed methodology is the innovative representation of computing node data as natural visibility graphs and state graphs, which are subsequently transformed into embeddings. This approach is well-grounded in existing literature, demonstrating its relevance and potential for successful application in related fields, as evidenced by works such as "Understanding Graph Embedding Methods and Their Applications" by Xu (2021) and "Visibility Graph-Based Wireless Anomaly Detection for Digital Twin Edge Networks" by Bertalanic et al. (2024).
Regarding correctness, the preprocessing steps implemented by the authors appear appropriate and methodical. The forward-fill strategy for handling missing values, the application of a change detection algorithm, and the quantization of the data into discrete segments effectively prepare the dataset for subsequent modeling. Additionally, the shift from one-hot encoding to graph-based embeddings allows for the incorporation of historical context, enhancing the depth of information conveyed.
strengths: One of the strengths of this approach is the diversity of setups employed throughout the experimentation. By exploring various data preprocessing techniques (one-hot encoding, natural visibility graph to vector embeddings, the combination of the both of them and this combination plus state graph embedding) the study effectively addresses the complexity of anomaly detection in HPC environments. This variety not only enhances the robustness of the findings but also allows for a comprehensive evaluation of different methodologies, thereby providing valuable insights into the strengths and weaknesses of each setup. The willingness to experiment with multiple data transformations and modeling techniques demonstrates a thorough and thoughtful approach to tackling the challenges associated with anomaly detection, ultimately contributing to a deeper understanding of the problem space.
weaknesses: The proposed methodology is very interesting, but discussions about the experimental results are poor.
The discussion in sections 5 and 6 seems to suggest that the representation as vector embedding needs to be improved, without considering that perhaps the classifier was unable to take advantage of this representation. The poorer performance of CatBoost with natural visibility graph embeddings compared to simple one-hot encoded data may stem from several factors: (i) the transformation into natural visibility graphs and embeddings might have lost critical temporal or contextual information inherent in the original data. This could dilute the model's ability to recognize patterns associated with anomalies.
The embedding process may not have effectively captured the relationships or nuances present in the original dataset, leading to suboptimal representations; (ii) CatBoost is optimized for categorical features and may not perform as well with embeddings that require deeper contextual understanding. If the embeddings do not align well with the model's strengths, performance could suffer.
Alternative models, such as deep learning architectures (e.g., LSTM or CNNs), might have leveraged the embeddings more effectively due to their ability to capture complex patterns and relationships in high-dimensional data.
Table 2 should present in bold the best result for each metric in each predictive horizon.
Typo: sometimes it is written one-hot encoded, other times one hot encoded. It must standardize. On line 316 it just says "hot encoded"
confidence: 3
justification: I assigned a score of 4 out of 5 for the acceptance of this paper due to its strong methodological approach and innovative use of graph-based embeddings. The proposed representation of compute node data as natural visibility graphs and state graphs offers significant advantages, as it enables the capture of complex relationships and temporal dependencies inherent in the data. This method enhances the richness of the features utilized for predicting compute node downtimes, contributing to a more nuanced understanding of the system's behavior.
However, the discussion of the experimental results is somewhat superficial. The authors could strengthen their analysis by providing a more in-depth comparison of the preprocessing setups and correlating the data representations with the topology of the classifiers used. Furthermore, the paper would benefit from a more explicit positioning within the existing literature on high-performance computing, particularly in relation to forecasting and anomaly detection methodologies. For example, insights from the validation techniques discussed in "Data-driven Deep-learning Forecasting for Oil Production and Pressure" by Werneck et al. (2022) could enhance the rigor of their validation approach, especially regarding multi-step-ahead predictions.
While the paper presents a noteworthy methodological innovation and the experiments conducted are intriguing, a deeper critical reflection on the results and a more robust connection to the state of the art would significantly improve its impact. Nonetheless, I support the approval of this paper due to its contributions to the field and its potential for further exploration and development. |
SPRdfOkuHw | Learning anomalies from graph: predicting compute node failures on HPC clusters | [
"Joze M. Rozanec",
"Roy Krumpak",
"Martin Molan",
"Andrea Bartolini"
] | Today, high-performance computing (HPC) systems play a crucial role in advancing artificial intelligence. Nevertheless, the estimated global data center electricity consumption in 2022 was around 1\% of the final global electricity demand. Therefore, as HPC systems advance towards Exascale computing, research is required to ensure their growth is sustainable and environmentally friendly. Data from infrastructure monitoring can be leveraged to predict downtimes, ensure these are treated in time, and increase the overall system's utilization. In this paper, we compare four machine-learning approaches, three of them based on graph embeddings, to predict compute node downtimes. The experiments were performed with data from Marconi 100, a tier-0 production supercomputer at CINECA in Bologna, Italy. Our results show that the machine learning models can accurately predict downtime, matching current state-of-the-art models. | [
"Artificial Intelligence",
"Machine Learning",
"Graphs",
"HPC",
"Data Center",
"Anomalies Forecasting"
] | https://openreview.net/pdf?id=SPRdfOkuHw | https://openreview.net/forum?id=SPRdfOkuHw | IYaBhALfeT | official_review | 1,727,161,730,372 | SPRdfOkuHw | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission51/Reviewer_AqLd"
] | NLDL.org/2025/Conference | 2025 | title: Comment
summary: The paper is about graph anomaly detection, which is an interesting topic. In this paper, authors aim to predict compute node failures on HPC clusters. The paper is well written and well organized. However, there are several concerns in the current version of the paper that addressing them will increase the quality of this paper.
strengths: 1 Reasonable writing logic.
2 Novel ideas.
weaknesses: 1 The authors should provide more background in the abstract to facilitate better understanding by readers outside the field.
2 The authors can consider showing more model structure figures and experimental result figures instead of expanding a simple figure.
3 In the methodology section, have you considered using mathematical formulas to outline and clearly express the model design?
4 Some related works can be considered.
[1] Deep Temporal Graph Clustering. ICLR 2024.
[2] Alleviating structural distribution shift in graph anomaly detection. WSDM 2023.
confidence: 4
justification: The author considered the problem of graph anomaly detection in industrial scenarios, which is worthy of recognition, but there are still some concepts and details that are not clearly introduced. |
SPRdfOkuHw | Learning anomalies from graph: predicting compute node failures on HPC clusters | [
"Joze M. Rozanec",
"Roy Krumpak",
"Martin Molan",
"Andrea Bartolini"
] | Today, high-performance computing (HPC) systems play a crucial role in advancing artificial intelligence. Nevertheless, the estimated global data center electricity consumption in 2022 was around 1\% of the final global electricity demand. Therefore, as HPC systems advance towards Exascale computing, research is required to ensure their growth is sustainable and environmentally friendly. Data from infrastructure monitoring can be leveraged to predict downtimes, ensure these are treated in time, and increase the overall system's utilization. In this paper, we compare four machine-learning approaches, three of them based on graph embeddings, to predict compute node downtimes. The experiments were performed with data from Marconi 100, a tier-0 production supercomputer at CINECA in Bologna, Italy. Our results show that the machine learning models can accurately predict downtime, matching current state-of-the-art models. | [
"Artificial Intelligence",
"Machine Learning",
"Graphs",
"HPC",
"Data Center",
"Anomalies Forecasting"
] | https://openreview.net/pdf?id=SPRdfOkuHw | https://openreview.net/forum?id=SPRdfOkuHw | 7ITj0Lz5zn | official_review | 1,728,321,620,373 | SPRdfOkuHw | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission51/Reviewer_jTLt"
] | NLDL.org/2025/Conference | 2025 | title: The study focuses on predicting compute node failures in HPC clusters with learning form graph approach.
summary: The study compares four machine learning approaches for predicting compute node downtime, three of which are based on graph embeddings, and compares the results.
strengths: Feature extraction methods based on proposed graph embeddings
weaknesses: Poor writing of the study
The abstract does not reflect the study and does not contain the information required in an academic abstract.
The introduction is poorly written and does not include a paragraph about the contribution of the study.
Failure to explain all the steps taken in the study with good examples.
Poor placement of Figures and Tables in the work
confidence: 3
justification: The study was evaluated by considering all the points that should be in an academic study, starting from the summary to the conclusion. Although it is stated that it contains an innovation, it is not clearly emphasized in this study and the work done could not be expressed in a good way.
final_rebuttal_confidence: 3
final_rebuttal_justification: The authors still could not provide the desired academic article summary, especially in the abstract. For example, the contribution of the work is not quantified. Success is expressed in vague words. Again, unfortunately, there is no clear statement in the introduction for a clear contribution to the literature. Despite all these, I can say that it is an acceptable study. |
SBbh4PvJrC | Similarity-Based Intent Detection Using an Enhanced Siamese Network | [] | In Natural Language Understanding (NLU), intent detection is crucial for improving human-computer interaction. However, traditional supervised learning models rely heavily on large annotated datasets, limiting their effectiveness in low-resource scenarios with limited labeled data. Siamese networks, which are effective at learning similarity-based representations, provide a promising alternative by enabling few-shot learning. However, Siamese networks typically rely on contrastive loss or triplet loss, both of which introduce challenges. This study introduces a similarity-based intent detection model using an enhanced Siamese network to address these limitations. Our model employs Manhattan, Euclidean, and Cosine similarity metrics combined with a fusion layer to improve intent classification accuracy. We evaluated the model on the Airline Travel Information System (ATIS) and SNIPS datasets and demonstrated its superiority over state-of-the-art methods, particularly in low-resource and few-shot learning scenarios. The results highlight significant accuracy gains while maintaining computational efficiency, making it a robust solution for real-world dialog systems. | [
"Intent detection",
"Siamese network",
"Dialogue system",
"Similarity metrics"
] | https://openreview.net/pdf?id=SBbh4PvJrC | https://openreview.net/forum?id=SBbh4PvJrC | zbdrr4DM6d | official_review | 1,727,280,810,148 | SBbh4PvJrC | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission12/Reviewer_6Brq"
] | NLDL.org/2025/Conference | 2025 | title: The claim in the conclusion is not well-founded
summary: This work proposes a modification to the contrastive loss, where an extra dense layer is added after the distance calculation. This allows for using several different distance measures and combine the result though the dense layer to a single "similarity score".
The authors claim that this kind of model beats state-of-the-art models in low-resource and zero-shot scenarios.
strengths: The new thing in this work is the added dense layer after measuring the distances, and the fact that this enables using multiple different distance measures in the model.
Questions:
Q1: In 3.4, it says "A logarithmic function was used to normalize the distances for better learning and generalization."
Do you just mean you took the logarithm of the distance? Or did you use the logarithm to do some kind of normalization?
Q2: In the intro, it says: "The fusion layer improves the similarity measures between sentences with the same intent, eliminating the need for triplet selection"
Does this mean that you do not use negative pairs in your training?
And if that is the case, where then is the model's incentive to keep dissimilar things apart?
Should it not just place everything together in that case?
Q3: Since you mention reducing model complexity, did you compare with a model using standard contrastive loss, so without the extra dense layer? If so, please be explicit about the change in performance and size.
weaknesses: **W1: Is the trained models for this work actually zero-shot?**
In the abstract and introduction it is mentioned several times that this work is a zero-shot approach.
However in section 4.2, there is no mention of any intents being left out of the training data.
The compared baseline model from [31] is few-shot and the compared baseline model from [32] is zero-shot,
so if the trained models from the paper are not zero-shot different baseline models should be chosen.
So, it is unclear whether the paper is actually doing what it claims to do.
**W2: Generalizability**
There is no mention in the paper of training multiple seeds or averaging over a number of runs.
If only 1 model was trained for each combination of distance metrics, no conclusions should be drawn about the generalizability of the results.
In particular, it is not possible to conclude from these results that the proposed model will in general achieve better accuracy than all the models used as baselines.
Thus the conclusion of the paper is not well-founded.
**W3: Reproducibility**
More details are needed (could be put in an appendix) to reproduce the experiments.
In 4.2, it says that the train_test_split_function was used, but not from which library.
Used random seeds should be reported.
The paper does not mention whether the code will be released.
**W4: Related work**
Hadsell et al. "Dimensionality Reduction by Learning an Invariant Mapping" should be cited, since this work builds on contrastive loss.
[31] Xia 2021 "Pseudo Siamese Network for Few-shot Intent Generation"
[32] Xue 2021 "Intent-enhanced attentive Bert capsule network for zero-shot intention detection"
confidence: 4
justification: In its current form I do not recommend this paper to be accepted.
It is unclear whether the work is actually zero-shot as it claims to be, and even if it turns out to be zero-shot, the conclusions drawn are not well-founded.
final_rebuttal_confidence: 5
final_rebuttal_justification: I recommend to reject this paper.
The authors admitted that their work is not zero or few-shot, but they did not remove all the places in the article where they imply that it is few-shot (see list in my comments). They also did not remove all comparisons with few-shot work.
On top of this, there are many claims in this article which are not supported by results (see my comments).
If an article cannot be clear about what the proposed method does and does not support its claims with results, it is not ready to be published. |
SBbh4PvJrC | Similarity-Based Intent Detection Using an Enhanced Siamese Network | [] | In Natural Language Understanding (NLU), intent detection is crucial for improving human-computer interaction. However, traditional supervised learning models rely heavily on large annotated datasets, limiting their effectiveness in low-resource scenarios with limited labeled data. Siamese networks, which are effective at learning similarity-based representations, provide a promising alternative by enabling few-shot learning. However, Siamese networks typically rely on contrastive loss or triplet loss, both of which introduce challenges. This study introduces a similarity-based intent detection model using an enhanced Siamese network to address these limitations. Our model employs Manhattan, Euclidean, and Cosine similarity metrics combined with a fusion layer to improve intent classification accuracy. We evaluated the model on the Airline Travel Information System (ATIS) and SNIPS datasets and demonstrated its superiority over state-of-the-art methods, particularly in low-resource and few-shot learning scenarios. The results highlight significant accuracy gains while maintaining computational efficiency, making it a robust solution for real-world dialog systems. | [
"Intent detection",
"Siamese network",
"Dialogue system",
"Similarity metrics"
] | https://openreview.net/pdf?id=SBbh4PvJrC | https://openreview.net/forum?id=SBbh4PvJrC | vP0nfwv4w1 | official_review | 1,726,854,673,523 | SBbh4PvJrC | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission12/Reviewer_Mdjj"
] | NLDL.org/2025/Conference | 2025 | title: Review for "Similarity-Based Intent Detection Using an Enhanced Siamese Network"
summary: The paper proposes a novel variant of Siamese networks for intent detection in which multiple notions of distance (esp. Manhattan, Euclidean, Cosine) are merged to achieve improved results. Experiments on two data sets revealed that combining Euclidean and Cosine distances yields the best results.
strengths: The strengths of the paper are:
- Siamese Networks are a quite popular and flexible architecture, such that progress in this architecture has large potential impact in the community.
- The proposed architectural change is quite straightforward and should be simple to implement in many domains.
- The experimental results appear to show a clear benefit of the proposed change.
weaknesses: The weaknesses are:
- Perhaps most importantly, I think the paper overclaims its contribution. The notion of distance layers and alternative loss functions has already been discussed extensively in the Siamese Network community. Refer, e.g., to Chicco (2021): https://doi.org/10.1007/978-1-0716-0826-5_3 or to Wang and Liu (2021): https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Understanding_the_Behaviour_of_Contrastive_Loss_CVPR_2021_paper.pdf
- By contrast, the core contribution seems to be (to me) the evaluation and fusion of multiple distances, which should be made much more clear.
- Comparing the proposed model to published results in Table 2 may be misleading if different train/test splits were used. This should be specified. Ideally, at least one strong baseline should be trained and tested on the same split.
- The loss function used for training has not been specified. I assume that a contrastive loss has been used, but this should be clarified.
- It might have been instructive to analyze multiple possible embedding architectures, perhaps also using a BERT-based one.
- I am not sure that the many loss curves in Figure 2 contribute sufficiently to the paper to warrant taking that much space.
confidence: 3
justification: Overall, I think the core innovation - fusing multiple distances - is substantial enough to warrant acceptance, although I would appreciate more clarity.
final_rebuttal_confidence: 3
final_rebuttal_justification: I am not convinced that evaluating a BERT model would impose excessive computational load (by nowadays standards, this counts as a rather small model). Nonetheless, most of my points have been addressed so that I remain at my evaluation. |
SBbh4PvJrC | Similarity-Based Intent Detection Using an Enhanced Siamese Network | [] | In Natural Language Understanding (NLU), intent detection is crucial for improving human-computer interaction. However, traditional supervised learning models rely heavily on large annotated datasets, limiting their effectiveness in low-resource scenarios with limited labeled data. Siamese networks, which are effective at learning similarity-based representations, provide a promising alternative by enabling few-shot learning. However, Siamese networks typically rely on contrastive loss or triplet loss, both of which introduce challenges. This study introduces a similarity-based intent detection model using an enhanced Siamese network to address these limitations. Our model employs Manhattan, Euclidean, and Cosine similarity metrics combined with a fusion layer to improve intent classification accuracy. We evaluated the model on the Airline Travel Information System (ATIS) and SNIPS datasets and demonstrated its superiority over state-of-the-art methods, particularly in low-resource and few-shot learning scenarios. The results highlight significant accuracy gains while maintaining computational efficiency, making it a robust solution for real-world dialog systems. | [
"Intent detection",
"Siamese network",
"Dialogue system",
"Similarity metrics"
] | https://openreview.net/pdf?id=SBbh4PvJrC | https://openreview.net/forum?id=SBbh4PvJrC | P12X60N9SI | decision | 1,730,901,554,687 | SBbh4PvJrC | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Reject |
SBbh4PvJrC | Similarity-Based Intent Detection Using an Enhanced Siamese Network | [] | In Natural Language Understanding (NLU), intent detection is crucial for improving human-computer interaction. However, traditional supervised learning models rely heavily on large annotated datasets, limiting their effectiveness in low-resource scenarios with limited labeled data. Siamese networks, which are effective at learning similarity-based representations, provide a promising alternative by enabling few-shot learning. However, Siamese networks typically rely on contrastive loss or triplet loss, both of which introduce challenges. This study introduces a similarity-based intent detection model using an enhanced Siamese network to address these limitations. Our model employs Manhattan, Euclidean, and Cosine similarity metrics combined with a fusion layer to improve intent classification accuracy. We evaluated the model on the Airline Travel Information System (ATIS) and SNIPS datasets and demonstrated its superiority over state-of-the-art methods, particularly in low-resource and few-shot learning scenarios. The results highlight significant accuracy gains while maintaining computational efficiency, making it a robust solution for real-world dialog systems. | [
"Intent detection",
"Siamese network",
"Dialogue system",
"Similarity metrics"
] | https://openreview.net/pdf?id=SBbh4PvJrC | https://openreview.net/forum?id=SBbh4PvJrC | JZCcxrgkfl | meta_review | 1,730,723,324,354 | SBbh4PvJrC | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission12/Area_Chair_hCjo"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper introduces a similarity based Siamese network for intent detection. Several similarities/dissimilarities between latent representations of input text and an intent are computed and combined into one representation, which is further processed for classification. The model outperforms the SOTA on two widely used datasets.
The paper is a borderline case, with somewhat divergent opinions from the reviewers. The main issues have been w.r.t. claims unsupported by results and lacking experimental design. While some concerns were addressed during the rebuttal phase, the majority agrees that the work is not ready for publication in its current state.
recommendation: Reject
suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed
confidence: 3: The area chair is somewhat confident |
SBbh4PvJrC | Similarity-Based Intent Detection Using an Enhanced Siamese Network | [] | In Natural Language Understanding (NLU), intent detection is crucial for improving human-computer interaction. However, traditional supervised learning models rely heavily on large annotated datasets, limiting their effectiveness in low-resource scenarios with limited labeled data. Siamese networks, which are effective at learning similarity-based representations, provide a promising alternative by enabling few-shot learning. However, Siamese networks typically rely on contrastive loss or triplet loss, both of which introduce challenges. This study introduces a similarity-based intent detection model using an enhanced Siamese network to address these limitations. Our model employs Manhattan, Euclidean, and Cosine similarity metrics combined with a fusion layer to improve intent classification accuracy. We evaluated the model on the Airline Travel Information System (ATIS) and SNIPS datasets and demonstrated its superiority over state-of-the-art methods, particularly in low-resource and few-shot learning scenarios. The results highlight significant accuracy gains while maintaining computational efficiency, making it a robust solution for real-world dialog systems. | [
"Intent detection",
"Siamese network",
"Dialogue system",
"Similarity metrics"
] | https://openreview.net/pdf?id=SBbh4PvJrC | https://openreview.net/forum?id=SBbh4PvJrC | FF5Rme3FrJ | official_review | 1,727,861,169,417 | SBbh4PvJrC | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission12/Reviewer_zcwh"
] | NLDL.org/2025/Conference | 2025 | title: Saturating Intent-detection benchmarks
summary: The paper proposes siamese bi-lstm networks combined with both euclidean and cosine similarity to solve intent detection. The suggested model effectively saturates both the ATIS and SNIPS datasets, setting a new SOTA.
strengths: The model performs very well on the described narrow task of intent detection. The paper is easy to read in general. The used architecture is quite simple, yet powerful.
weaknesses: Most papers combine intent detection with slot filling, this one focuses on this very narrow task and on only two (the most common) datasets for it.
More detailed feedback:
- I am surprised that table1 does not include an "all three" option
- Is it really common to not use train/val/test but only train/test in intent detection?
- The left part of figure1 is unclear - this is absolutely standard, why not just display the content on the right of the figure?
- Related work should include that GPT4 reaches around 90% and is thus clearly worse compared to specialized approaches. Furthermore, I would expect the current winners at paperswithcode to be cited here (and hope that you will upload your results there as well). In general, related work feels outdated with the most current paper dating back to 2021, has there really been no new stuff happening since then?
- "computational constraints" were mentioned but without going into details - how slow is training on which kind of hardware?
- tokenization is unclear, what exactly are you doing here?
- it is unclear why one should perform padding when using an LSTM.
- 202: log(dist) is NOT normalization!!! (even though it keeps the numbers low)
Some Latex remarks:
- The correct way of using ``quotation marks'' uses backquotes in the beginning and straight quotes in the end.
- please use booktabs
- its \mathbb R for the reals
- commonly the nonlinearity is called \sigma and not \phi
And some spelling:
- 049: where _a_ model
- 053: comma missing
- 056: an anchor ... a neg example
- 068: _in_ the latent space
- 072: no comma
- 075: _a_ fusion layer
- 077: comparison (without s!), then _has_ good performance (should be further specified that the performance is SOTA and well-above 99%)
- 116: generalize-> generalizing
- 186: for the _complete_ sequences
confidence: 4
justification: Very simple architecture that clearly outperforms competitors. |
SBbh4PvJrC | Similarity-Based Intent Detection Using an Enhanced Siamese Network | [] | In Natural Language Understanding (NLU), intent detection is crucial for improving human-computer interaction. However, traditional supervised learning models rely heavily on large annotated datasets, limiting their effectiveness in low-resource scenarios with limited labeled data. Siamese networks, which are effective at learning similarity-based representations, provide a promising alternative by enabling few-shot learning. However, Siamese networks typically rely on contrastive loss or triplet loss, both of which introduce challenges. This study introduces a similarity-based intent detection model using an enhanced Siamese network to address these limitations. Our model employs Manhattan, Euclidean, and Cosine similarity metrics combined with a fusion layer to improve intent classification accuracy. We evaluated the model on the Airline Travel Information System (ATIS) and SNIPS datasets and demonstrated its superiority over state-of-the-art methods, particularly in low-resource and few-shot learning scenarios. The results highlight significant accuracy gains while maintaining computational efficiency, making it a robust solution for real-world dialog systems. | [
"Intent detection",
"Siamese network",
"Dialogue system",
"Similarity metrics"
] | https://openreview.net/pdf?id=SBbh4PvJrC | https://openreview.net/forum?id=SBbh4PvJrC | 7r5KfP7O1K | official_review | 1,726,667,561,731 | SBbh4PvJrC | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission12/Reviewer_8o6U"
] | NLDL.org/2025/Conference | 2025 | title: Unverified claims, poor methodology, and absent analysis for overused dataset of Intent Detection
summary: This work introduces a study on intent classification through the lens of siamese networks. It uses 2 datasets for intent (ATIS and SNIPS) , trains a BiLSTM in a siamese setting and tests a variety of similarity metrics.
strengths: The setting is relevant
weaknesses: - The paper is full of claims and scenarios that arent verified. This already starts in the abstract, e.g. line 15 talking about low-resource or zero-shot learning scenario's. None of this is actually debated later in the paper. The method is never verified in a zero-shot learning setting, and the computational complexity (or even hardware for that matter) is never discussed.
- Line 67-68: "This approach brings sentence representations of the same intent closer to the latent space, thereby enabling effective intent detection.". This is never specifically shown.
- Line 193, 194, 195: This doesnt contain an actual specific reason for why they were selected.
- The large majority of plots are uninteresting the scaling of axis is consistently bad. Figure (c) reports on a difference in the 4th decimal place for accuracy, where the entire plot is a flat line. This is not a unique occurence for the graphs.
- It is unclear to me how the downstream predictions are actually done. The distance layer measures the distance using the metrics and when multiple are used they are concatenated in the fusion layer?. So then 1 or 2 metrics are used an input for the dense layer? The whole concept is vaguely explained. In the end, figure 1 and the architecture predicts a similarity score over 2 inputs. But nowhere is explained how this then goes to the predictions. The result is also a single dense node, which is also the similarity score? This also results in metrics being learned twice? One for the metric layer and one in the end?
- The https://paperswithcode.com/sota/intent-detection-on-atis mentions many works that are actually much closer to the scores reported, but only matches with 1 of the baselines (LIDSNet). What worries me more is the already high performances on this task. Especially the ATIS dataset is an already milked out benchmark in terms of performance. The work also shows that performance is high from epoch 0, with sometimes differences in the 3rd decimal after 10 epochs.
- In similar fashion, this work would take number 1 position in this benchmark but this could be due to stochastic variations
- The results section and analysis is essentially absent. There is nothing but accuracy in table form or graph form.
- There is no investigation done about any of the representations that are learned or how well the introduced layers work. What are the effects of a distance layer? It seems like the "metric transformation layer" is nothing but a logarithmic transformation? Is this valid here? No arguments are provided for these crucial choices other than "other works do it", despite those being different settings.
- The dataset is also poorly described in general. The original dataset also already comes with a training (4978) and test split (893), so why the additional train_test_split was done is unclear.
The overall story is just incoherent, and the related work and literature search feels like searching "intent classification" on Google Scholar and stringing the sources together. There is not a single thing being specifically addressed in the end. Even the comparisons to mentioned base-lines are largely uninteresting for the same mentioned reason before: ATIS and SNIPS seem to be milked out benchmarks where the measurement over their performance alone has become largely uninteresting
confidence: 4
justification: This work explores siamese networks for intent classification, specifically by using intermediate layers and metrics for learning the final similarity "measure". They largely accomplish this by introducing extra layers before learning the similarity metric (a "distance layer", "metric transformation layer", "fusion layer" and a resulting dense layer). The validity of introducing these layers and what occurs in them is not discussed. The work is very vague in terms of what it tries to show and/or accomplish. Claims are made are largely unsubstantiated (e.g. it working in a zero-shot setting or low resource). The evaluation is nothing but the same 12 uninspiring and uninforming plots and 2 tables comparing to previous results. The final result is a paper that feels like the result of a bachelors project, rather than actual contributing research. |
S5wu3nVT1a | Keeping it Simple – Computational Resources in Deep Generative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare | [] | Synthetic data has emerged as a solution to address data access challenges in healthcare, particularly for accelerating AI tool development. Deep generative methods, including generative adversarial networks, variational autoencoders, and diffusion models, have gained prominence for creating realistic and representative synthetic datasets with low re-identification risk. However, while sustainability of future computational needs is a growing topic, computational needs are often overlooked when benchmarking solutions for tabular data in healthcare.
This study compares traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset.
The findings reveal that while quality performance within this experiment is comparable, the deep generative methods consume significantly more resources, necessitating High Performance Computing resources. We hope researchers will increasingly include computational resources as a parameter when benchmarking methods, to build a bigger canvas of literature to guide the method choice. | [
"synthetic data",
"healthcare data",
"deep generative models",
"traditional statistical generative methods",
"computational resources",
"benchmarking"
] | https://openreview.net/pdf?id=S5wu3nVT1a | https://openreview.net/forum?id=S5wu3nVT1a | XSbGAXZMxl | meta_review | 1,730,472,611,782 | S5wu3nVT1a | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission21/Area_Chair_oV3g"
] | NLDL.org/2025/Conference | 2025 | metareview: This paper addresses an important yet well-known concern regarding the computational resource demands of deep generative models versus traditional statistical approaches for synthetic healthcare data. While the study highlights the efficiency of Gaussian copula models in comparison to deep generative models, the experimental scope is notably narrow. The lack of newer generative models, broader datasets, and thorough statistical testing limits the generalisability and impact of the findings. The paper’s insights into metadata handling and resource constraints in healthcare AI are valuable, yet more comprehensive exploration is needed.
Despite improvements in response to feedback, the paper falls short of advancing novel methodologies or yielding broadly applicable insights.
recommendation: Reject
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 5: The area chair is absolutely certain |
S5wu3nVT1a | Keeping it Simple – Computational Resources in Deep Generative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare | [] | Synthetic data has emerged as a solution to address data access challenges in healthcare, particularly for accelerating AI tool development. Deep generative methods, including generative adversarial networks, variational autoencoders, and diffusion models, have gained prominence for creating realistic and representative synthetic datasets with low re-identification risk. However, while sustainability of future computational needs is a growing topic, computational needs are often overlooked when benchmarking solutions for tabular data in healthcare.
This study compares traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset.
The findings reveal that while quality performance within this experiment is comparable, the deep generative methods consume significantly more resources, necessitating High Performance Computing resources. We hope researchers will increasingly include computational resources as a parameter when benchmarking methods, to build a bigger canvas of literature to guide the method choice. | [
"synthetic data",
"healthcare data",
"deep generative models",
"traditional statistical generative methods",
"computational resources",
"benchmarking"
] | https://openreview.net/pdf?id=S5wu3nVT1a | https://openreview.net/forum?id=S5wu3nVT1a | IhmztxUbjF | official_review | 1,728,892,976,441 | S5wu3nVT1a | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission21/Reviewer_rJLi"
] | NLDL.org/2025/Conference | 2025 | title: The paper lacks novelty and suffers from weak experimental design, insufficient analysis, and limited practical relevance.
summary: The paper explores the computational resource requirements and performance of deep generative models (such as CTGAN and TVAE) compared to traditional methods for generating synthetic tabular healthcare data. The authors benchmark the methods using a synthetic breast cancer dataset and provide an analysis of computational resource requirements, and performance on data similarity.
strengths: 1. The paper discussed an important topic related to the computational costs of various synthetic data generation methods.
2. Choosing different hardware setup (Laptop, Azure) gives good insights about running the models in different setups.
weaknesses: 1. Lack of Originality and Novelty: The paper does not present any new concepts, methodologies, or substantial contributions to the field. The paper claims to explore computational resource requirements, but the results are predictable, confirming the well-known fact that deep generative models are resource-intensive.
2. The paper positions itself as a comparative study, but there’s no evidence that the study significantly builds on the existing work, espically that the related work section is very limited.
3. As mentioned in the limitations section of the paper, the results of the paper depend heavily on synthetic dataset (from the Dutch Cancer Registry), which undermine the validity and generalizability of the findings. In healthcare, it is crucial to assess how these methods perform on real-world data, which contains noise, missing values, and variability that synthetic data often lacks. A robust comparative study requires multiple datasets with different characteristics (e.g., size, feature types, distributions) to demonstrate the generalizability of the findings.
4. The paper compares only limited deep generative models (CTGAN and TVAE) against a single traditional method (Gaussian Copula). These models are not necessarily the best or most representative of their respective classes. For instance, newer and potentially more efficient models (such as diffusion models) should be discussed. Additionally, the choice of methods is not adequately justified. Why these specific models were chosen remains unclear.
5. No Real Statistical Comparisons: The analysis fails to include proper statistical testing or comparisons of the performance results. Simple metrics are reported (e.g., column shape similarity), but there’s no statistical comparison (e.g., t-tests, etc) to validate whether the differences between the models are statistically significant. The authors draw conclusions based on these results without showing whether the differences observed are meaningful.
6. Over-Simplification of Results: The results section lacks depth and detail. For example, when comparing the models on their ability to generate accurate data, the authors do not explore why certain models perform better or worse for specific types of data (e.g., numerical vs. categorical). The results are mostly reported in aggregate, masking any deeper insights that could be gained from a more granular analysis.
confidence: 4
justification: The primary reason for rejecting this paper stems from its lack of novelty and insufficient contribution to the fild. The study fails to introduce any novel methodologies, frameworks, or improvements. |
S5wu3nVT1a | Keeping it Simple – Computational Resources in Deep Generative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare | [] | Synthetic data has emerged as a solution to address data access challenges in healthcare, particularly for accelerating AI tool development. Deep generative methods, including generative adversarial networks, variational autoencoders, and diffusion models, have gained prominence for creating realistic and representative synthetic datasets with low re-identification risk. However, while sustainability of future computational needs is a growing topic, computational needs are often overlooked when benchmarking solutions for tabular data in healthcare.
This study compares traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset.
The findings reveal that while quality performance within this experiment is comparable, the deep generative methods consume significantly more resources, necessitating High Performance Computing resources. We hope researchers will increasingly include computational resources as a parameter when benchmarking methods, to build a bigger canvas of literature to guide the method choice. | [
"synthetic data",
"healthcare data",
"deep generative models",
"traditional statistical generative methods",
"computational resources",
"benchmarking"
] | https://openreview.net/pdf?id=S5wu3nVT1a | https://openreview.net/forum?id=S5wu3nVT1a | GS8rDr8lBr | official_review | 1,727,099,563,916 | S5wu3nVT1a | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission21/Reviewer_6F7G"
] | NLDL.org/2025/Conference | 2025 | title: Nice idea, limited execution, muddled presentation
summary: This paper performs an experiment on synthetic healthcare data, comparing a simple statistical model (Gaussian copula) with two more sophisticated machine learning methods (a generative adversarial network and a variational autoencoder, both designed for tabular data).
The main result of the paper is that on a synthetic dataset mimicking measurements of breast cancer patients, the Gaussian copula generates synthetic data which is of comparable quality to the other methods while requiring far less computational resource.
The effect of including or omitting metadata, which ties different tumours to the same patient, is also investigated. The effect of omitting metadata on runtime is dramatic. The performance of all methods decreases in the absence of metadata, but the copula method is still competitive.
strengths: Coherence and correctness: The basic message of this paper is an important one, and one which has appeared occasionally in other areas of the machine learning literature: the performance of sophisticated machine learning methods is often matched or outperformed by simple statistical models. Those simple models require far fewer computational resources, having important downstream impact (e.g. environmental impact). This is an important message that needs to sink in to the machine learning community.
Clarity and presentation: The main message of the paper is well described. The structure of the paper is fine. The introduction places the experiment in context, and there is a good discussion of limitations and potential for future research.
There are some insights into how the presence or absence of metadata influences output, which I hadn't encountered before. Actually, I would like to see this discussed a bit more. Can you speculate on the mechanism by which metadata influences performance?
weaknesses: 1. Incrementality: The scope of the experiment described is quite limited, making it very difficult to gauge its generalizability. There are no measures of uncertainty given.
(a) The experiment is performed on only one dataset. What should we expect if we were to use a different dataset? I appreciate that it is infeasible to use real data here, but what about other synthetic data? As a minimum there should be some sort of cross-validation to investigate robustness to the training/test split within the dataset used.
(b) There is no exploration of the effect of hyperparameters. It is stated that this is "To streamline and facilitate ... reproducibility" (line 206) but I am not convinced by this. It should not be beyond the wit of a reader to reproduce the experiment with a different set of hyperparameters.
(c) Three methods are compared. I think this is a bit unambitious, particularly since one of them, TVAE, was introduced by its authors partly to demonstrate its **inferiority** compared to their primary method from the same paper. For example, diffusion models are mentioned (line 036) as another method for simulating datasets, and there do exist diffusion models for tabular data - see for example TabDDM (ICML, PMLR 202:17564-17579, 2023). How does this compare?
(d) The study does not address the presence of missing data, which was removed.
2. Clarity and presentation: The description of the methods of this paper are not given in sufficient detail.
(a) The summary the main methods under study (170-184), Gaussian copulas, CTGAN, and TVAE, are too brief. The description of TVAE only gets a single sentence, but I think it would be important for a reader to have a bit more insight into its mechanism. I understand that there is a page limit, but then the fact that their pipeline removes 10 rows of missing data was granted nine lines, so there should be room to tighten the exposition. There should be room for some brief mathematical description, which is entirely lacking in this paper, or even a fuller exposition in a much-expanded Appendix.
(b) It was not clear to me what the role of the test dataset is here. It is mentioned on line 198 as a way to "compare the performance", but then not mentioned again. The quality evaluation talks about similarities with the training data but not test data. For a problem like synthetic data generation, it is important to make a clear conceptual distinction between the main problem at hand (unsupervised) and methods of performance evaluation ('supervised'). The supervised part is lacking in description. Compare ref 10 which has the completely transparent statement: "When training a classifier or a regressor to predict one column using other columns as features, can such classifier or regressor learned from T_syn achieve a similar performance on T_test, as a model learned on T_train?" A statement along those lines (and follow-up details) is needed here.
(c) It would be helpful to give a mathematical description of the quality scores, either in the main text briefly or more fully in an Appendix. The description in 227-238 does not make it clear enough which scores are suitable for which types of data. Clearly distinguishing 'categorical' and 'real-valued' data would help. (Descriptor like 'numerical' are 'real' are unclear to me.)
3. Clarity: The standard of English is adequate, but it could be improved. There are quite a few clunky phrases - I have picked out a few where the choice of words might be making the meaning actively misleading.
065 - "benchmarks ... with deep generative methods" should be "benchmarks ... against deep generative methods" ? Line 088 similarly.
082 - "enhancement of GANs". I am not sure what this means. "enhancement given by the use of GANs"?
101 - "the experiment" -> "an experiment"
124 - "on the are" -> "on the data are" ?
206 - Sentence starting "To streamline" is uninterpretable as the subject is missing its verb.
4. Presentation: Figures are not well presented. Figures 1 and 2 are too small. It took me a while to work out why two different scoring mechanisms, Range and Category, have been interspersed with each other, before I realised that one is for real-valued and one is for categorical data. (See above.) Figure A.3 is mostly in Dutch but the "English" column, I imagine, is supposed to be an English translation. This is needed to be able to interpret the codings.
5. Reproducibility: I did not see any evidence that the code used to generate the experiments has been made available. Will this change?
confidence: 4
justification: This is an interesting idea for a paper with an important central message, but its execution is too limited in scope. I would like to see a more ambitious experiment, including some or all of: inclusion of other methods, estimation of uncertainty, some exploration of relevant parameters. A secondary issue is the main description of the work which is too vague and lacks mathematical precision. Together with a lack of code, this makes reproducibility an issue.
final_rebuttal_confidence: 4
final_rebuttal_justification: The authors have taken some steps towards addressing my and the other reviewers' concerns, and have done what can reasonably be expected given the timeframe. There are modest improvements to the presentation and some expanded explanations. But there is still essentially no mathematical exposition, and the primary weaknesses, identified by more than one reviewer, remain, particularly that both the choice of comparator methods and the scope of the experiments is too narrow. No new experiments have been performed, and the request for a comparison with diffusion models is left for future work.
If the scoring system allowed for fine-grained recommendations I might raise the score slightly, but since I can choose only between accept and reject I feel this paper still clearly falls in the latter category. |
S5wu3nVT1a | Keeping it Simple – Computational Resources in Deep Generative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare | [] | Synthetic data has emerged as a solution to address data access challenges in healthcare, particularly for accelerating AI tool development. Deep generative methods, including generative adversarial networks, variational autoencoders, and diffusion models, have gained prominence for creating realistic and representative synthetic datasets with low re-identification risk. However, while sustainability of future computational needs is a growing topic, computational needs are often overlooked when benchmarking solutions for tabular data in healthcare.
This study compares traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset.
The findings reveal that while quality performance within this experiment is comparable, the deep generative methods consume significantly more resources, necessitating High Performance Computing resources. We hope researchers will increasingly include computational resources as a parameter when benchmarking methods, to build a bigger canvas of literature to guide the method choice. | [
"synthetic data",
"healthcare data",
"deep generative models",
"traditional statistical generative methods",
"computational resources",
"benchmarking"
] | https://openreview.net/pdf?id=S5wu3nVT1a | https://openreview.net/forum?id=S5wu3nVT1a | GP40araFN3 | official_review | 1,728,956,154,402 | S5wu3nVT1a | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission21/Reviewer_42fL"
] | NLDL.org/2025/Conference | 2025 | title: Review of Keeping it Simple – Computational Resources in Deep Gener- ative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare
summary: The purpose of the study is aimed at supporting the broader healthcare community and challenges faced with safe data access with the fast paced growth of AI tool development. Particularly, how generative AI methods can be used to engage synthetic data with realistic results and low identification risk compared with traditional tabular methods or traditional statistical methods of research. The study hopes to include more research into the needs of benchmarking and computational needs for tubular data in healthcare. The researcher undertakes a trial experiment with a synthetic generated dataset from the Dutch cancer registry of 60,000 hypothetical breast cancer patients. The experiment compares the performance of traditional statistical methods and deep generative methods both on similarity and computational resources to guide the practical use of these tools to investigate the following questions:
Q1: What are the practical resource requirements for running the pipelines of deep generative methods for generating tabular data compared to traditional generators, and
Q2: How well does deep generative methods perform for generating tabular data compared to traditional generators in terms of statistical similarity?
The traditional statistical method undertaken in the study is Gaussian Copula (Synthesizer), which is considered the benchmark. This model was used to compare the two generative AI models. The researcher pairs the experiment with the Conditional Tabular Generative Adversarial Network (CTGAN) and Tabular Variational Autoencoder (TVAE). The generative AI is run both on a laptop and virtual machine to explore questions of sustainability (with the laptop being the sustainable approach). The Synthetic Data Vault (SDV) pipeline for evaluation metrics for the models were compared to answer Q2, and practical performance measures (size and runtime) was gathered for answering Q1.
I will discuss data preparation, pipeline requirements and quality evaluation across weaknesses and strengths. The overall outcome was that the deep generative models had potential to run effectively with tabular data but only when the meta data was not defined, yet in the actual experiment the results were less favourable to generative AI than traditional methods. The TVAE outperformed the Gaussian Copula when meta data was not available but when the meta data was available, the results were not available. The CTGAN had to be aborted due to resource overload after two days. The study outlines that for this type of experiment the right choice of model will depend on the nature of the data in the different healthcare scenarios. All pipelines discussed were faster when meta data was manually defined. The study acknowledges it is a synthetic study with a synthetic dataset that can only be used in the context of a methodological test. More research and trialling would be required to develop more succinct evaluation metrics.
strengths: The strengths of the study is that the research does complete part of it's goal to address new areas of research in comparing traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset. The study reveals that while deep generative approaches can produce results of similar (or in some cases with the TVAE) higher results, it comes at the cost of High Computing Resources which impact sustainability concerns related to AI in general. The higher TVAE results were only gained when the meta data was not available which the study acknowledges is not the most optimal approach for healthcare studies.
With a data split of 20/80 (with 20% used to train the generative AI models, 80% used for testing; to compare performance and data) the tests were done. The results relating to the speed of the information gained were interesting. For the test without meta data defined, the traditional GaussianCopula model used 17 seconds, the TVAE used 1200 seconds and the CTGAN used 9600 seconds on the Azure server and without meta data. With meta data the training was faster for all models, rendering the relative differences smaller but with a similar distribution. Storage was notably smaller when the meta data was defined.
A strength of the research is the capacity to report on the results of the generative AI versus traditional statistical methods with tabular data. With meta data, the TVAE performs at a sufficient level, but does not perform better than the simple statistical approach (Gaussian Copula) on the tabular data, with slightly lower scores for column shapes, column pair trends and coverage. The TVAE shows slightly lower quality scores than the CTGAN and lower coverage. When meta data was not manually defined, the
TVAE showed better overall similarity performance scores than the Gaussian Copula. The study also acknowledged that using the Dell laptop was possible on the TVAE generator but took so long it was not practical. This option was still faster than the CTGAN that was stopped for practical reasons after two days. Both deep generative methods required Higher Computing Resources which are not supportive of sustainable research outcomes, and this is a positive result for the study to support the larger framework of research area around computational needs.
weaknesses: The study is open that it used the SDV Evaluation Metrics Library from the original Synthetic Data Vault Project. It was deemed sufficient when using this pipeline (along with the data preparation) to use minimal bias or fairness metrics. The study acknowledges that this is beneficial for benchmarks but the outcome of this choice can obscure deviating performance for specific diagnostic features. The question is pertinent that any research on generative models of AI, particularly in the healthcare system demonstrate a significant attempt to minimise the neglect of bias and crucial neglect in downstream models. While the synthetic data was a sample, and not intended for real life consumption, the reality is that the medical and healthcare industry are rampant with bias against marginalised bodies and this impacts diagnosis statistics and information. The test data chosen was breast cancer data, which is predominantly women, which falls within a the category of marginalised body. The study acknowledges that when choosing a generative method, researchers need to consider which quality dimensions are the most practical and lists sustainability, usability, privacy, bias and fairness as factors in terms of impacting the downstream data set. But in evaluation benchmarking with generative AI, what is the true purpose of studies that evaluate a system that continue to perpetuate systems of oppression, as opposed to engaging with new approaches from the outset?
A weakness of the study is that the results of of the CTGAN were not pursued, as GAN's (unlike their descriptive counterparts in generative models) are designed to learn the underlying probability of distribution in data. This is not to say they are not without limitations (mode collapse and non convergence / instability) but research specifically into healthcare synthetic data supports the use of GAN based synthetic data that enables the generator to learn a fair deterministic transformation based on a well-defined notion of algorithmic fairness. This mode of thought and process is important for healthcare research to support healthcare organizations to improve care delivery in the era of value-based healthcare, digital innovation, and big data. While the logic behind the GAN not being pursued was time management and sustainability factors which are reasonable, is it not the responsibility of the greater AI and research community to ensure that all research dedicated to generative AI (in particular areas that are predisposed to bias amplification) provide significant frameworks to ensure research addresses these risks as a core feature of their study and research?
confidence: 5
justification: The study has been successful in achieving it's goal to explore the outcomes of TVAE generative AI against traditional statistical methods Gaussian Copula. But was not successful in obtaining clear results for the CTGAN generative AI due to time limitations. The results support research into sustainable methods of traditional and deep generative AI and consider factors such as High Performance Computer resources, sustainability and the differences the approaches have with between the training dataset and synthetic dataset.
The study states that as it is a methodological test, the objective quality was not truly determined and the researchers acknowledge that the study should be completed with a real dataset and for a defined downstream application. Other quality metrics and mechanisms should also be added, to evaluate the clinical us- ability and logic of the resulting synthetic data. I agree with this evaluation as the existence of bias in algorithmic forms, which can be cognitive bias, social bias, statistical bias or any other sort of bias, can result in an inaccuracy which is systematically incorrect. Within AI bias research, there is a misconception that a bigger dataset will provide more accurate results. This belief has held because one can usually increase the accuracy by adding more data. But unfortunately, this increased accuracy won’t translate to in-production accuracy if the additional data is biased and not reflective of the real world.
Data scientists and researchers need to be comfortable with the concept that algorithms in general are predisposed to bias (more so descriptive AI than generative) and engage processes that optimise their data to minimize bias amplification. This is specifically imperative in areas where bias is a known problem (ie healthcare.) While this study has chosen to use generative AI methods, which are considered one step to minimise bias amplification, more steps in the data preparation could be taken to ensure that AI is being used an opportunity for social equity. Particularly if the trial data relates largely to marginalised bodies. There are significant studies that discuss the use of AI creating opportunities for accurate, objective and immediate decision support in healthcare with little expert input specially valuable in resource-poor settings where there is shortage of specialist care. Given that AI poorly generalises to cohorts outside those whose data was used to train and validate the algorithms, populations in data-rich regions stand to benefit substantially more vs data-poor regions, entrenching existing healthcare disparities. These considerations need to be taken into account for all studies to be truly of value to the greater healthcare community and larger impact of social equity. |
S5wu3nVT1a | Keeping it Simple – Computational Resources in Deep Generative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare | [] | Synthetic data has emerged as a solution to address data access challenges in healthcare, particularly for accelerating AI tool development. Deep generative methods, including generative adversarial networks, variational autoencoders, and diffusion models, have gained prominence for creating realistic and representative synthetic datasets with low re-identification risk. However, while sustainability of future computational needs is a growing topic, computational needs are often overlooked when benchmarking solutions for tabular data in healthcare.
This study compares traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset.
The findings reveal that while quality performance within this experiment is comparable, the deep generative methods consume significantly more resources, necessitating High Performance Computing resources. We hope researchers will increasingly include computational resources as a parameter when benchmarking methods, to build a bigger canvas of literature to guide the method choice. | [
"synthetic data",
"healthcare data",
"deep generative models",
"traditional statistical generative methods",
"computational resources",
"benchmarking"
] | https://openreview.net/pdf?id=S5wu3nVT1a | https://openreview.net/forum?id=S5wu3nVT1a | CNwgcZPGdL | decision | 1,730,901,555,377 | S5wu3nVT1a | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Reject |
S5wu3nVT1a | Keeping it Simple – Computational Resources in Deep Generative versus Traditional Methods for Synthetic Tabular Data Generation in Healthcare | [] | Synthetic data has emerged as a solution to address data access challenges in healthcare, particularly for accelerating AI tool development. Deep generative methods, including generative adversarial networks, variational autoencoders, and diffusion models, have gained prominence for creating realistic and representative synthetic datasets with low re-identification risk. However, while sustainability of future computational needs is a growing topic, computational needs are often overlooked when benchmarking solutions for tabular data in healthcare.
This study compares traditional and deep generative methods in terms of computational resources needed, relative to differences in statistical similarity between the training dataset and the synthetic dataset.
The findings reveal that while quality performance within this experiment is comparable, the deep generative methods consume significantly more resources, necessitating High Performance Computing resources. We hope researchers will increasingly include computational resources as a parameter when benchmarking methods, to build a bigger canvas of literature to guide the method choice. | [
"synthetic data",
"healthcare data",
"deep generative models",
"traditional statistical generative methods",
"computational resources",
"benchmarking"
] | https://openreview.net/pdf?id=S5wu3nVT1a | https://openreview.net/forum?id=S5wu3nVT1a | 2ydz6z7wlM | official_review | 1,728,247,358,034 | S5wu3nVT1a | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission21/Reviewer_2fBg"
] | NLDL.org/2025/Conference | 2025 | title: The initial review
summary: This paper presents a comparative study that evaluates computational resource consumption and data quality in generating synthetic tabular data. The authors test the traditional Gaussian Copula method and popular deep generative methods, such as TVAE and CTGAN, using the Dutch cancer registry dataset. The comparisons indicate that the traditional method requires fewer computational resources, while the quality of the generated data is comparable to those from generative AI models.
strengths: - This paper investigates an interesting problem: do we really need computationally-intensive generative models for some specific tasks?
- The authors provide detailed information on the experiment setups, evaluation metrics, and result analysis.
- The authors discuss the limitations and future work related to the proposed evaluation metrics.
weaknesses: - The experimental setup should be coherent, but some sections contain repetitive content. For example, the hardware setup is discussed twice. This section could be made more concise.
- To enhance the clarity of the paper, it would be beneficial to briefly describe the deep generative models utilized in the experiments, similar to the descriptions provided for the traditional model.
- I also have some questions listed below:
- For Table 2, why is the CTGAN model not evaluated under the "without meta data" scenario?
- For quality evaluation metrics, why is the SDV library particularly being used? Are there any other measurements that can be included?
confidence: 3
justification: I think this paper has shown a good presentation of the experimental setups, evaluation metrics, and results analysis, which indicates that the authors have conducted a detailed and systematic empirical analysis. However, this paper can also benefit from some refinements, especially in paper organization and clarity. Those would further improve the paper quality.
final_rebuttal_confidence: 2
final_rebuttal_justification: I think the authors have addressed my initial concerns. The pros are that the authors have presented their methods, experiments, and results reasonably well for this specific topic. The concerns are that it seems many of the other reviewers are skeptical about the novelty and the research scope of this paper after reading their comments. I have lowered my confidence due to my limited knowledge of tabular data generation or healthcare data. |
RqdaGXhoTa | Machine Learning-Based Coastal Terrain Classification in Tropical Regions Using Multispectral UAV Imaging: A Comparative Study of Random Forest and SVM Models | [] | Advances in various technologies and machine learning (ML) are transforming the field of remote sensing. This study proposes an ML-centered methodology for classifying coastal terrain in tropical coastal regions using multispectral unmanned aerial vehicle (UAV) image inputs. The objective is to identify suitable ML algorithms for analyzing multispectral images on limited hardware. Multispectral images of the study area were collected using a DJI Mavic 3M UAV in March 2023. K-means clustering was implemented to assist in coastal terrain identification, and the labeled data were used to train pixel-based Support Vector Machine (SVM) and Random Forest (RF) models utilizing a 5-fold block cross-validation scheme. The results showed that the optimized RF model outperformed the SVM model across most metrics. Despite this, the SVM model showed potential for live image classification due to its smaller size and quick classification speed. Additionally, the optimized models effectively classified images from areas set as an independent hold-out test set, demonstrating the applicability of ML in this type of remote sensing problem. | [
"UAV",
"Multispectral Imaging",
"SVM",
"Random Forest"
] | https://openreview.net/pdf?id=RqdaGXhoTa | https://openreview.net/forum?id=RqdaGXhoTa | l7k6z5GMiw | meta_review | 1,730,420,515,635 | RqdaGXhoTa | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission34/Area_Chair_aSEx"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper discusses an interesting topic of using machine learning to identify coastal terrain acquired by UAV multispectral sensors. Reviewers rise a number of questions, with a general negative trend regarding the methodology novelty (application of existing and old methods to a new dataset), the validation and discussion of results, and the lack of more recent, deep-learning based, baselines. Despite the additional work and improvments proposed by Authors in the revised version, acknowledged by reviewers, it remains that the NLDL conference might not be the best choice as a publication venue for this paper, even as an application paper.
recommendation: Reject
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 3: The area chair is somewhat confident |
RqdaGXhoTa | Machine Learning-Based Coastal Terrain Classification in Tropical Regions Using Multispectral UAV Imaging: A Comparative Study of Random Forest and SVM Models | [] | Advances in various technologies and machine learning (ML) are transforming the field of remote sensing. This study proposes an ML-centered methodology for classifying coastal terrain in tropical coastal regions using multispectral unmanned aerial vehicle (UAV) image inputs. The objective is to identify suitable ML algorithms for analyzing multispectral images on limited hardware. Multispectral images of the study area were collected using a DJI Mavic 3M UAV in March 2023. K-means clustering was implemented to assist in coastal terrain identification, and the labeled data were used to train pixel-based Support Vector Machine (SVM) and Random Forest (RF) models utilizing a 5-fold block cross-validation scheme. The results showed that the optimized RF model outperformed the SVM model across most metrics. Despite this, the SVM model showed potential for live image classification due to its smaller size and quick classification speed. Additionally, the optimized models effectively classified images from areas set as an independent hold-out test set, demonstrating the applicability of ML in this type of remote sensing problem. | [
"UAV",
"Multispectral Imaging",
"SVM",
"Random Forest"
] | https://openreview.net/pdf?id=RqdaGXhoTa | https://openreview.net/forum?id=RqdaGXhoTa | cN5QHJsDYC | official_review | 1,728,514,749,365 | RqdaGXhoTa | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission34/Reviewer_mNAm"
] | NLDL.org/2025/Conference | 2025 | title: Comparison of SVM and Random Forest for Coastal Terrain Classification: Lacking Novelty and Comparison with State-of-the-Art Techniques
summary: This paper presents a comparison between Support Vector Machines (SVM) and Random Forest (RF) for coastal terrain classification (segmentation). The authors used a dataset with 2 cm resolution, collected using UAVs, and applied SVM and RF to orthomosaics that combine multiple spectral bands.
strengths: The paper is well-written and easy to follow.
The authors provide a clear comparison between SVM and RF for pixel-level classification.
They gave a detailed description of the methodology and data preparation.
weaknesses: The authors merely applied SVM and RF to their dataset without introducing new techniques.
There is no comparison with state-of-the-art semantic segmentation methods.
The paper lacks a clear contribution beyond the application of existing machine learning algorithms.
The analysis of the results is superficial, focusing on describing the results without providing any justification for them.
confidence: 5
justification: The paper is well-written and presents a comparison between SVM and RF for coastal terrain classification. However, it lacks a significant contribution beyond the application of standard machine learning algorithms to the custom dataset. Furthermore, the authors did not compare their methods with state-of-the-art semantic segmentation techniques. |
RqdaGXhoTa | Machine Learning-Based Coastal Terrain Classification in Tropical Regions Using Multispectral UAV Imaging: A Comparative Study of Random Forest and SVM Models | [] | Advances in various technologies and machine learning (ML) are transforming the field of remote sensing. This study proposes an ML-centered methodology for classifying coastal terrain in tropical coastal regions using multispectral unmanned aerial vehicle (UAV) image inputs. The objective is to identify suitable ML algorithms for analyzing multispectral images on limited hardware. Multispectral images of the study area were collected using a DJI Mavic 3M UAV in March 2023. K-means clustering was implemented to assist in coastal terrain identification, and the labeled data were used to train pixel-based Support Vector Machine (SVM) and Random Forest (RF) models utilizing a 5-fold block cross-validation scheme. The results showed that the optimized RF model outperformed the SVM model across most metrics. Despite this, the SVM model showed potential for live image classification due to its smaller size and quick classification speed. Additionally, the optimized models effectively classified images from areas set as an independent hold-out test set, demonstrating the applicability of ML in this type of remote sensing problem. | [
"UAV",
"Multispectral Imaging",
"SVM",
"Random Forest"
] | https://openreview.net/pdf?id=RqdaGXhoTa | https://openreview.net/forum?id=RqdaGXhoTa | aKRNkn87hD | official_review | 1,728,478,686,852 | RqdaGXhoTa | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission34/Reviewer_Wjcx"
] | NLDL.org/2025/Conference | 2025 | title: The motivation behind the paper is overall clearly presented and contextualized. The goal of the work is also carefully and clearly described. The architecture and the metrics selected are reliable. Big limits on degree of novelty, data collection and validation.
summary: The motivation behind the paper is clearly presented and contextualized. The goal of the work is also carefully and clearly described. The paper identifies a bottleneck in UAV and investigates solutions to implement real/near-real-time image classification. The architectures and the metrics selected are very reasonable and reliable.
The introduction and the conclusions are well-written, with just one thing not explained. While the issue is firstly clearly explained, it is not clear why it is important to classify real-time UAV images and how difficult the challenge is.
Nevertheless, the paper has several weaknesses. Firstly, the lack of novelty. There is plenty of literature applying ML to UAV imaging for the classification of the natural environment, including forestry, wetlands, cultivated areas, or coastal terrains. Several of them also aim at a semi-real-time classification. Secondly, the training data have been collected only within a very small area. More data should have been collected from different locations and in different conditions, in order to improve the amount and quality of the training dataset. Thirdly, the methodology followed to create the labels isn't clear. Finally, an extensive validation is needed. Indeed, the biggest weakness of the work is that the models have been validated in the same area where they have been trained. Validation in a new area is needed to consider the results reliable and publishable.
strengths: As mentioned in the summary, the motivation behind the paper is clearly presented and contextualized overall, with a clear introduction to the topic. The goals of the work are also clearly described.
Other strengths of the work:
- The need for AI is well-motivated.
- The model and architecture selected are standard but reliable ones. SMV and RF are two of the most popular and reliable choices when dealing with multispectral imagery. The specifications of the architecture of the model and its choice are well-motivated.
- The metrics selected are meaningful and reliable. A variety of metrics have been selected and the results are clearly presented using several tables.
- The conclusion is clearly formulated.
weaknesses: The review identified several, substantial weaknesses:
- Lack of novelty. There is plenty of literature applying ML to UAV imaging for the classification of the natural environment, including forestry, wetlands, cultivated areas, or coastal terrains. A lot of it is also aiming on a near-real-time classification of the images.
- Data collection. The data collected are very limited, all within a single campaign in a small area. More data should have been collected from different locations and in different conditions, in order to improve the amount and quality of the dataset. Info on the conditions during which the data are absent.
- Labels: the methodology followed to create the labels isn't clear.
- Validation: this is definitely the biggest weakness of the work. Indeed, the models have been validated in the same small area where they have been trained. Taking 80% of the data within an 8-hectare area for training and 20% for validation and going through training-validation cycles isn't the same as validating on a different area, or randomly selecting an area on which the model hasn't been trained. Perform a real independent validation on a new area, which is considered necessary for considering the paper for publication.
Regarding the structure of the publication:
- The results and their discussion should be more clearly separated.
- The figures need a step forward in terms of explanation and quality.
More in detail:
Abstract
- The abstract doesn't mention one of the main findings of the work: that SVM is better suitable for live clarification. Add it.
Chapter 1
- While the issue is firstly clearly explained, it is not clear why it is important to classify real-time UAV images and how difficult the challenge is. Please elaborate, making it clear.
Chapter 2.1
- Why data have been collected only in one (and such a small) area? When (season, light conditions)? In a single campaign or in several? For reliable training, a few more data campaigns should have been carried out: one of them fully dedicated to validation.
Chapter 2.2
- Line 98-100: Add a figure of one of those orthomosaics.
- Are there any artifacts present in the orthomosaics? This sometimes could be an issue.
Chapter 2.3
- The chapter isn't well written. Try to rephrase, focusing on elaborating on how the labels were classified.
- Line 129: Add reference on silhouette score.
Chapter 2.4
- Remove "of ML" from the title.
Chapter 3
It would be better to have the presentation of the results and their discussion in two separate chapters.
Chapter 3.1
- Reading the chapter, it seems that the classes described are the outcome of the models. My understanding is that those are the labels used to train the models. What is correct? If the latter, then those aren't results and shouldn't been included in a chapter presenting and discussing the results. If they are results, then it needs to be explained much better.
Chapter 3.2
- The chapter is well written, but the issue is how the validation has been carried on: see what is written above.
- Line 218-221: this sentence is useless. Please remove.
- Lines 262-265: Any idea why the RF model largely prevents the frequent misclassification of shallow bare zones as shallow water, which is observed in the SVM model? Please add a sentence elaborating on the possible cause of this misclassification.
- Line 274-277: It is true that the prediction time would be shorter, but what about the resources available for real-time (live) classification? Do they have an impact?
Chapter 3.3
- This section presents results, so it goes before the discussion presented in Chapter 3.2.
- Line 293-295: Rephrase because it is not clear, and point to the figure.
Figures and Tables: All figures' captions need to be extended, containing a clear description of what the figure is. When colours are present, their meaning needs to be clearly explained. Each caption should also contain 1 sentence highlighting the main "take-home" message that the author wants the reader to remember.
- Figure 3: Improve the resolution of the images.
- Table 1: Elaborate pointing out highlights and explaining why some numbers are in red.
- Table 2: See comments for Table 1.
- Figure 6: Elaborate, the caption doesn't allow to understand what the figure is showing.
- Figure 7: What is written on the axis and the percentages are unreadable. Increase the size of the font.
- Table 3: Why green? See comments for Table 1.
confidence: 4
justification: I believe the paper isn't ready to be published. The first reason is that it lacks novelty. Indeed, there is plenty of literature already available applying ML to UAV imaging for the classification of the natural environment, including forestry, wetlands, cultivated areas, or coastal terrains. Even in the lack of novelty, it can be considered for publication because it addresses a specific bottleneck in the application of ML to UAV, but only after more work has been performed in order to back up the results and assess the weaknesses identified during the review. The weaknesses are the following:
- Data collection only took place within a small area. This strongly limits the applicability of the model to other areas/terrains. More data need to be collected from different locations and in different conditions, in order to improve the amount and quality of the dataset. The model should be retrained using this updated dataset.
- Validation: this is definitely the biggest weakness of the work described. Indeed, the models have been validated in the same small area where they have been trained. Taking 80% of the data within an 8-hectare area for training and 20% for validation and going through training-validation cycles isn't the same as validating on a different area, or randomly selecting an area on which the model hasn't been trained. Performing a validation in an area that the model hasn't been trained on is necessary for considering the paper ready for publication. |
RqdaGXhoTa | Machine Learning-Based Coastal Terrain Classification in Tropical Regions Using Multispectral UAV Imaging: A Comparative Study of Random Forest and SVM Models | [] | Advances in various technologies and machine learning (ML) are transforming the field of remote sensing. This study proposes an ML-centered methodology for classifying coastal terrain in tropical coastal regions using multispectral unmanned aerial vehicle (UAV) image inputs. The objective is to identify suitable ML algorithms for analyzing multispectral images on limited hardware. Multispectral images of the study area were collected using a DJI Mavic 3M UAV in March 2023. K-means clustering was implemented to assist in coastal terrain identification, and the labeled data were used to train pixel-based Support Vector Machine (SVM) and Random Forest (RF) models utilizing a 5-fold block cross-validation scheme. The results showed that the optimized RF model outperformed the SVM model across most metrics. Despite this, the SVM model showed potential for live image classification due to its smaller size and quick classification speed. Additionally, the optimized models effectively classified images from areas set as an independent hold-out test set, demonstrating the applicability of ML in this type of remote sensing problem. | [
"UAV",
"Multispectral Imaging",
"SVM",
"Random Forest"
] | https://openreview.net/pdf?id=RqdaGXhoTa | https://openreview.net/forum?id=RqdaGXhoTa | SZAgH6xeTU | official_review | 1,727,009,316,629 | RqdaGXhoTa | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission34/Reviewer_v4h7"
] | NLDL.org/2025/Conference | 2025 | title: Comparison of RF and SVM on k-means clustered Vegetation Indices
summary: In brief the paper describes a pipeline:
- UAV images are processed using onboard capabilities, to genereate several different VI (vegetation indices)
- Labels are generated using k-means, with a k between 2 and 8. (further in the paper it seems to resolve as 6 classes)
- Random Forest and Support Vector Machines are fitted using a k-fold validation split.
- Comparison between the two seem to indicate a stronger fit using Random Forest, albeit at a cost of computation time.
The paper purports to be a demonstration into the effectiveness of ML for classifying coastal areas, but there remains some unclear aspects to this research.
strengths: - The paper is clear and logically structured, describing the approach in an easy to follow manner, with clear figures and tables.
- The experiments are clear, but the description of the data and testing is a bit lacking.
- Explanations of methods used are clear, and the methods for classification are well founded within the field. e.g. RF being a staple for performing pixel-wise classification of nature-types.
- The paper demonstrates an interesting way to tune pixel-wise classifiers that correspond well with clusters identified using k-means, and may therefore be useful for other studies that are planning longer term UAV imaging over a constrained/well-known area.
The overall pipeline of unsupervised clustering to support training SVM or RF classifiers seems reasonable, and worth a qualitative evaluation, especially for use with onboard capabilities of an UAV.
weaknesses: **Missing/Bigger stuff**:
- The paper does not describe how the authors have performed a train/test split, with a hold out test set that they report on. There is mention of a test-image, but this should be made explicit and clear in the paper, otherwise it is tempting to assume that the reported numbers reflect the validation score from the k-fold-validation.
- It would be useful to mention or compare with other similarly used methods, e.g. kNN, Bayes, or xg-boost, but it is pointed out in the paper that the authors seemed to be operating on limited computation budget.
- When the labels are generated using a clustering algorithm like k-means, unless these data are manually verified and adjusted, it seems likely that the comparison performed is one of "How similar is RF/SVM to k-means?". This could probably be clarified better in the paper, along with a justification as to how the labels are qualitatively evaluated. It is perhaps not suprising that RF would be a good fit for separating members of clusters.
- Experiments with smaller forests, since time is mentioned as a constraint.
**Errata/smaller stuff**:
- F1 score should have a capital F (line115)
- LiDAR should have a lower case "i" (line 36)
- line 54-56, "its performance relies ... on" should probably be "their performance rely ... on" (refering to the class of SVMs earlier as they)
- K-fold cross-validation is also a tunable parameter.
- on line 256; "plurality of pixels" seems better stated as "majority of pixels"
- only a partial list of the bands in the UAV seems to be included.
- missing the max depth of random forest and the width / number of trees. This might be desirable to report w.r.t. time being a constraint.
confidence: 3
justification: - Lacking a clear description of Test-set/-data makes it difficult to trust the reported numbers. As the authors have pointed out, RF is liable to overfit, and without being able to verify that the Test-data is clearly separated from the training data, the reported numbers may be tainted.
- Would like some more justification on what the result of fitting RF/SVM to k-means clusters is. It seems here that what is outlined in the paper is a way of tuning on-board capable algorithms to perform what would otherwise be done post-hoc using clustering algorithms. This may have futher utility, but the degree to which it is a novel contribution is unclear. Tuning on-board capable classifiers is an active field within remote-sensing, but the label data will usually include human experts. An unsupervised approach may provide more usable data, but the evaluation of results should probably then include domain experts who validate the k-means generated clusters before these are trained on.
final_rebuttal_confidence: 4
final_rebuttal_justification: This paper seems focused on the application of machine learning, as it pertains to a potential on-board capability of UAVs.
It follows a reasonable, standard methodology for setting up the capabilities described, and it does not seem to introduce any novel approach to the pipelines it outlines.
The authors describe well how they've performed the work, demonstrating that one can use RF to perform segmentation of image-data retrieved from an UAV. They also demonstrate that RF has some advantages over SVMs when it comes to performance in this case, and conclude that machine learning is effective at the proposed task.
Beyond its value as a clear report of an implementation of ML as a possible future on-board pixel-based image segmenter, the paper provides no novel perspective or implementation details. The details describing the constraints of the UAVs, and what the authors suggest would be suitable hardware to support their ML-model, any mention of fitting their approach to these constraints is also lacking.
i.e. it is still unclear whether the task they suggest; The use of ML as an on-board / live image segmentation tool, is feasible within their constraints. (It likely is, yet the authors never return to their stated intentions for performing this work)
It should be mentioned that the authors seem to have taken the feedback given to heart, and improved their paper substantively from the initial submission.
As the paper stands now it is clear what, and how, the work reported has been performed.
There remains some minor grammatical mistakes, but these do not presently affect the final assessment. |
RqdaGXhoTa | Machine Learning-Based Coastal Terrain Classification in Tropical Regions Using Multispectral UAV Imaging: A Comparative Study of Random Forest and SVM Models | [] | Advances in various technologies and machine learning (ML) are transforming the field of remote sensing. This study proposes an ML-centered methodology for classifying coastal terrain in tropical coastal regions using multispectral unmanned aerial vehicle (UAV) image inputs. The objective is to identify suitable ML algorithms for analyzing multispectral images on limited hardware. Multispectral images of the study area were collected using a DJI Mavic 3M UAV in March 2023. K-means clustering was implemented to assist in coastal terrain identification, and the labeled data were used to train pixel-based Support Vector Machine (SVM) and Random Forest (RF) models utilizing a 5-fold block cross-validation scheme. The results showed that the optimized RF model outperformed the SVM model across most metrics. Despite this, the SVM model showed potential for live image classification due to its smaller size and quick classification speed. Additionally, the optimized models effectively classified images from areas set as an independent hold-out test set, demonstrating the applicability of ML in this type of remote sensing problem. | [
"UAV",
"Multispectral Imaging",
"SVM",
"Random Forest"
] | https://openreview.net/pdf?id=RqdaGXhoTa | https://openreview.net/forum?id=RqdaGXhoTa | Gt1lNA7FTl | official_review | 1,726,526,543,516 | RqdaGXhoTa | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission34/Reviewer_cYsE"
] | NLDL.org/2025/Conference | 2025 | title: Dataset represents trivial task, not deep learning
summary: The paper first introduces a new dataset consisting of multi-spectral UAV images, captured in a coastal area of the Philippines. The purpose of this dataset is to investigate different methods for pixel-wise classification of these images into one of several ecological classes. The authors then train and compare SVMs and random forest models on this dataset.
strengths: 1. The collection of a new dataset in an ecologically relevant domain is the biggest strength of this paper. Will the dataset be released?
2. The motivation for using UAVs is very clear.
3. The description of the data collection and anything related to the area of remote sensing was clear and gives the impression that the authors are knowledgeable in this area.
4. The methods used are well-known, using cross-validation is a standard good practice to ensure that experimental results are reliable. See "weaknesses" for a possible caveat on this.
weaknesses: 1. The authors use a cross-validation approach combined with grid search to find the best hyperparameter settings, describing the sets used as training and validation set. Later, they talk about a test set. I'm not sure if that was kept separate (if so, it does not seem to be mentioned), or if they use the validation set from the cross-validation and just call it test set here. In the latter case, the hyperparameters would have been chosen based on the test set during the grid search, which would go against best practices.
2. I'm not completely sure if I understand correctly how the labels were set. If the labels are based only on k-means clustering (choosing the number of clusters via silhouette score), k-means on the training set should be used as a baseline method. Comparing to k vectors is very performant, and should perform very well, if the original labels are determined via k-means. However, this would also mean that trying to predict those labels would not be very challenging, which explains the good scores of the RF model. Manual inspection and postprocessing of the labels, on the basis of k-means, could lead to better results and a more interesting dataset. In its current form, the problem seems trivially solvable.
3. The authors mention that they want to "identify suitable ML algorithms for analyzing multispectral images on limited hardware." I think that's a good goal, I could imagine that there is some kind of onboard processing on the drone while it's capturing images. If that's the case, the requirements that this goal of onboard processing imposes on the models should specified more clearly than just "limited hardware". E.g.: How big can a model be in memory? What is this limited hardware? If this is not specified, it is not possible to judge the results adequately with respect to this goal.
The prediction time in Table 3 would also be very informative if it was somehow related to the onboard prediction time. Given that the random forest takes 15 seconds for a prediction, I assume that the forest or the batch size are rather large. A large batch size is not something to be expected onboard the drone. A large forest, or deep trees, could be trimmed to trade off prediction time for performance. But this all relies on knowing more specific conditions.
4. The authors mention how big the region is that they collected the data in, and the resolution of the images, but not the final size of the dataset, e.g. in number of pixels.
5. Figures 1 to 5 are never referred to in the text.
6. In Figures 4 and 5, I don't know what the different classes are without switching back and forth to the place where they're mentioned. Replacing the integer labels with the class names would make this much more accessible to the reader.
7. Given that the submission concerns a "deep learning conference", I think the paper should use deep learning methods, which it currently does not.
confidence: 4
justification: This paper is clearly an application paper, for which the review guidelines ask that the problems be non-trivial, and the solutions novel, effective and/or practically relevant. Given that a random forest almost perfectly solves the presented problem, the problem appears to be trivial. The solution is not novel, but a well-known standard method. The solution may be effective and practically relevant, but I do not count this towards the paper's strengths, because it is an existing standard method in the field.
I am unfamiliar with the literature on this specific problem but it might be that being able to perform the presented classification this well is a step forward in terms of domain-specific knowledge. In that case, I believe that remote sensing venues would be a much more fitting place for this paper, since it does not provide any value in the area of deep learning.
Using labels based on k-means clustering for supervised learning does not seem like a reasonable approach to me, since the resulting labels are very easy to predict, as demonstrated by the experimental section. If the labels do not provide a meaningful challenge, the experimental results also do not provide any advancement in knowledge, apart from the task being trivial.
In summary, the paper presents a dataset which can be almost perfectly solved by a standard ML method, does not provide any innovation in terms of methodology, and does not belong into the area of deep learning, which is why I a recommend a rejection.
The value that the paper provides is the new dataset. To make this a stronger paper, I would recommend publicizing the dataset and finding tasks that can be solved with it, that are relevant to the application domain and non-trivial. Focusing more on the aspect of onboard processing, with concrete restrictions which the methods have to fulfill, could also be an interesting approach.
final_rebuttal_confidence: 4
final_rebuttal_justification: I commend the work that the authors have put in to respond to the reviewers' comments. The paper has clearly improved after the initial reviews: It is now much clearer how labels were created, and on which data the models were evaluated (Fig. 2). Including information about possible hardware and its limitations also makes the on-board-processing argument stronger. Mentioning the number of pixels included was also important.
My remaining concerns are of the kind that is unfortunately hard to address during a rebuttal phase.
1. The paper does not use deep learning methods, but is submitted to a deep learning conference. In a cursory search, I have not found any papers at past versions of this conference which did not include elements of deep learning, though I may have missed some. While the authors rightfully mention that topics like supervised learning are suggested under "General Machine Learning", that point is mentioned right under the headline of "We invite submissions presenting new and original research on all aspects of Deep Learning." I understand this to mean that the general machine learning topics of supervised or active learning are of interest, insofar as they pertain to deep learning methods, as well as topics like architectures, which are topics specific to deep learning.
2. Since this is an application paper, the review guidelines ask that the problems be non-trivial, and the solutions novel, effective and/or practically relevant. Given that a random forest almost perfectly solves the presented problem, the problem appears to be trivial. The solution is also not novel, but a well-known standard method. I am not a domain expert in this domain of ecology, so I can not judge whether the result is of practical relevance in this field. I think this paper would be better suited for a venue that focuses on remote sensing or the specific ecological domain, where the respective domain experts can make this decision of relevance. |
RqdaGXhoTa | Machine Learning-Based Coastal Terrain Classification in Tropical Regions Using Multispectral UAV Imaging: A Comparative Study of Random Forest and SVM Models | [] | Advances in various technologies and machine learning (ML) are transforming the field of remote sensing. This study proposes an ML-centered methodology for classifying coastal terrain in tropical coastal regions using multispectral unmanned aerial vehicle (UAV) image inputs. The objective is to identify suitable ML algorithms for analyzing multispectral images on limited hardware. Multispectral images of the study area were collected using a DJI Mavic 3M UAV in March 2023. K-means clustering was implemented to assist in coastal terrain identification, and the labeled data were used to train pixel-based Support Vector Machine (SVM) and Random Forest (RF) models utilizing a 5-fold block cross-validation scheme. The results showed that the optimized RF model outperformed the SVM model across most metrics. Despite this, the SVM model showed potential for live image classification due to its smaller size and quick classification speed. Additionally, the optimized models effectively classified images from areas set as an independent hold-out test set, demonstrating the applicability of ML in this type of remote sensing problem. | [
"UAV",
"Multispectral Imaging",
"SVM",
"Random Forest"
] | https://openreview.net/pdf?id=RqdaGXhoTa | https://openreview.net/forum?id=RqdaGXhoTa | 6wbCQ2Z3zU | decision | 1,730,901,555,978 | RqdaGXhoTa | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Reject |
QswzbrMy3R | Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning | [
"Matthijs van der Lende",
"Matthia Sabatelli",
"Juan Cardenas-Cartagena"
] | Estimating value functions in Reinforcement Learning (RL) for continuous spaces is challenging. While traditional function approximators, such as linear models, offer interpretability, they are limited in their complexity. In contrast, deep neural networks can model more complex functions but are less interpretable. Gaussian Process (GP) models bridge this gap by offering interpretable uncertainty estimates while modeling complex nonlinear functions. This work introduces a Bayesian nonparametric framework using GPs, including Sparse Variational (SVGP) and Deep GPs (DGP), for off-policy and on-policy learning. Results on popular classic control environments show that SVGPs/DGPs outperform linear models but converge slower than their neural network counterparts. Nevertheless, they do provide valuable insights when it comes to uncertainty estimation and interpretability for RL. | [
"reinforcement learning",
"gaussian process",
"deep learning",
"uncertainty estimation"
] | https://openreview.net/pdf?id=QswzbrMy3R | https://openreview.net/forum?id=QswzbrMy3R | xQmF0g3QLA | official_review | 1,728,511,249,006 | QswzbrMy3R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission9/Reviewer_WsxQ"
] | NLDL.org/2025/Conference | 2025 | title: A well-writen paper on the use of Gaussian processes as function approximators in RL with no significant shortcomings
summary: In this paper, the authors introduce a novel framework based on Gaussian processes (GP) used for off-policy and on-policy learning. The main motivation is the fact that GPS offers uncertainty estimates, offering the best of both worlds - interpretability similar to that of linear models, and the capacity to capture complex patterns, like deep neural networks. This type of balance makes GPs an appealing middle ground in cases where both transparency and the ability to learn intricate relationships are pivotal. More precisely, two novel GP-based reinforcement learning algorithms are proposed, and shown to outperform linear models, and somewhat underperform DNNs. The results are sound, the paper seems technically correct, while the results themselves offer only an incremental contribution, primarily extending existing work.
strengths: - The paper appears to be technically sound with the methodology being clearly laid out
- It is also logically structured and no major errors or inconsistencies have been discovered (besides what is written under Weaknesses)
- The theoretical background (Gaussian processes, value-based RL) is well presented and understandable
- As for applicability, the authors provide comprehensive implementation details - the values of the hyperparameters are provided as well as the pseudocode with all the steps of the algorithm
- The paper contains information-rich appendices with additional results and theoretical framing, which is commendable. The additional results are also sound and complement the main part of the paper nicely.
- It is interesting (and desirable) to also see energy usage as an evaluation metric (together with space complexity)
- The paper is clearly presented and acknowledges its limitations (e.g., concerning the stability of DQN with an MLP in CartPole)
- In overall, the paper makes a decent contribution, is systematically organized, and contains no major flaws.
weaknesses: - The scientific contributions are relatively incremental (albeit still reasonable)
- The results subsection is a bit short and making it more thorough would improve the paper quality even more, with more focus on interpretability
- The literature review part is somewhat scarce - the authors could have included additional literature related to the use of GP in RL, such as "Grande R, Walsh T, How J. Sample efficient reinforcement learning with Gaussian processes. In International Conference on Machine Learning 2014 Jun 18 (pp. 1332-1340). PMLR." Despite the strict page limit, that paper should be contexualized a bit more.
- Some phrasings are awkward - parts of the paper should be rewritten. For example: "This work focuses on the advantages and limitations of GPs as function approximations for model-free RL. With a particular focus..." Clearly, this should all be part of a single sentence, or alternatively, rephrased.
- The proposed approach would benefit from being tested on a wider range of RL control environments (besides CartPole and Lunar Lander). This would provide a more robust validation of the proposed methods.
confidence: 3
justification: This is a well-written methodologically-sound paper on the use of GPs in the context of value-based model-free reinforcement learning. The methodology is expounded on in great detail, all the hyperparameters and settings needed for implementation seem to be there, and the results show decent (expected) performance. While it doesn't significantly move the boundaries of its subfield, a while the proposed approaches should be tested on a wider range of RL environments, it is in my view a decent manuscript that is clearly above the acceptance threshold. |
QswzbrMy3R | Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning | [
"Matthijs van der Lende",
"Matthia Sabatelli",
"Juan Cardenas-Cartagena"
] | Estimating value functions in Reinforcement Learning (RL) for continuous spaces is challenging. While traditional function approximators, such as linear models, offer interpretability, they are limited in their complexity. In contrast, deep neural networks can model more complex functions but are less interpretable. Gaussian Process (GP) models bridge this gap by offering interpretable uncertainty estimates while modeling complex nonlinear functions. This work introduces a Bayesian nonparametric framework using GPs, including Sparse Variational (SVGP) and Deep GPs (DGP), for off-policy and on-policy learning. Results on popular classic control environments show that SVGPs/DGPs outperform linear models but converge slower than their neural network counterparts. Nevertheless, they do provide valuable insights when it comes to uncertainty estimation and interpretability for RL. | [
"reinforcement learning",
"gaussian process",
"deep learning",
"uncertainty estimation"
] | https://openreview.net/pdf?id=QswzbrMy3R | https://openreview.net/forum?id=QswzbrMy3R | WjTDxrqUGT | meta_review | 1,730,371,555,412 | QswzbrMy3R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission9/Area_Chair_Ae89"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper presents a study that explores the use of GPs for Q-function approximation, which trades-off interpretability vs. performance. While the reviewers raised concerns about the novelty and significance of the outcomes, the analysis is sound, and the insights are useful for the RL community. Hence, with the expectation that the camera-ready version will place a stronger focus on interpretability to better substantiate the claims, I recommend accepting the paper.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 4: The area chair is confident but not absolutely certain |
QswzbrMy3R | Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning | [
"Matthijs van der Lende",
"Matthia Sabatelli",
"Juan Cardenas-Cartagena"
] | Estimating value functions in Reinforcement Learning (RL) for continuous spaces is challenging. While traditional function approximators, such as linear models, offer interpretability, they are limited in their complexity. In contrast, deep neural networks can model more complex functions but are less interpretable. Gaussian Process (GP) models bridge this gap by offering interpretable uncertainty estimates while modeling complex nonlinear functions. This work introduces a Bayesian nonparametric framework using GPs, including Sparse Variational (SVGP) and Deep GPs (DGP), for off-policy and on-policy learning. Results on popular classic control environments show that SVGPs/DGPs outperform linear models but converge slower than their neural network counterparts. Nevertheless, they do provide valuable insights when it comes to uncertainty estimation and interpretability for RL. | [
"reinforcement learning",
"gaussian process",
"deep learning",
"uncertainty estimation"
] | https://openreview.net/pdf?id=QswzbrMy3R | https://openreview.net/forum?id=QswzbrMy3R | TEoG3LKlbJ | official_review | 1,727,982,251,966 | QswzbrMy3R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission9/Reviewer_PyJD"
] | NLDL.org/2025/Conference | 2025 | title: Contribution is weak and writing should be formal
summary: Gaussian processes are a non-standard function approximator for reinforcement learning value functions. RL value functions are usually approximated by deep neural networks or kernel regression. The GP is a flexible non-parametric estimator and has been studied in the past in the context of RL function approximation. This paper uses SVGP and Deep GPs for off-policy and on-policy learning. These provide the advantage of uncertainty estimation and interpretability. Numerical comparisons are performed in the CartPole and Lunar Lander environments.
strengths: The core idea of the paper is interesting and deserves further studies. The experiments are done thoroughly and rigorously. The background material is thoroughly explained giving sufficient context for the reader.
weaknesses: There is a definite improvement needed in the contributions and writing before being published. While the experimental results are unfavorable, a clear illustration of the advantages offered by the paper along with more formal explanations of the math and algorithms would have pushed the paper towards a higher score. The idea of using Gaussian processes for function approximation is not new. More strong experimental evidence on other benchmarks could be useful to ascertain further.
confidence: 3
justification: The contribution is weak and the experimental results are not great. Formal approach to writing would have merited a higher score.
final_rebuttal_confidence: 3
final_rebuttal_justification: Based on the discussion in the committee chat and the discussion with the Area chairs, I am increasing my scores. The paper does not make a significant contribution and the results are weak. However, there is nothing wrong with the method or the evaluation. Hence, I am increasing the scores. |
QswzbrMy3R | Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning | [
"Matthijs van der Lende",
"Matthia Sabatelli",
"Juan Cardenas-Cartagena"
] | Estimating value functions in Reinforcement Learning (RL) for continuous spaces is challenging. While traditional function approximators, such as linear models, offer interpretability, they are limited in their complexity. In contrast, deep neural networks can model more complex functions but are less interpretable. Gaussian Process (GP) models bridge this gap by offering interpretable uncertainty estimates while modeling complex nonlinear functions. This work introduces a Bayesian nonparametric framework using GPs, including Sparse Variational (SVGP) and Deep GPs (DGP), for off-policy and on-policy learning. Results on popular classic control environments show that SVGPs/DGPs outperform linear models but converge slower than their neural network counterparts. Nevertheless, they do provide valuable insights when it comes to uncertainty estimation and interpretability for RL. | [
"reinforcement learning",
"gaussian process",
"deep learning",
"uncertainty estimation"
] | https://openreview.net/pdf?id=QswzbrMy3R | https://openreview.net/forum?id=QswzbrMy3R | MqXSiuu2OR | decision | 1,730,901,554,501 | QswzbrMy3R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Poster)
comment: We recommend a poster presentation given the AC and reviewers recommendations. |
QswzbrMy3R | Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning | [
"Matthijs van der Lende",
"Matthia Sabatelli",
"Juan Cardenas-Cartagena"
] | Estimating value functions in Reinforcement Learning (RL) for continuous spaces is challenging. While traditional function approximators, such as linear models, offer interpretability, they are limited in their complexity. In contrast, deep neural networks can model more complex functions but are less interpretable. Gaussian Process (GP) models bridge this gap by offering interpretable uncertainty estimates while modeling complex nonlinear functions. This work introduces a Bayesian nonparametric framework using GPs, including Sparse Variational (SVGP) and Deep GPs (DGP), for off-policy and on-policy learning. Results on popular classic control environments show that SVGPs/DGPs outperform linear models but converge slower than their neural network counterparts. Nevertheless, they do provide valuable insights when it comes to uncertainty estimation and interpretability for RL. | [
"reinforcement learning",
"gaussian process",
"deep learning",
"uncertainty estimation"
] | https://openreview.net/pdf?id=QswzbrMy3R | https://openreview.net/forum?id=QswzbrMy3R | 1H1eD4mWx2 | official_review | 1,728,628,889,709 | QswzbrMy3R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission9/Reviewer_GeCz"
] | NLDL.org/2025/Conference | 2025 | title: Review
summary: This paper presents a method for using Gaussian processes for approximate Q-learning. It extends GP-Q and GP-SARSA with SVGP and DGP for state-action value approximation. The experiments, though limited to two fairly simple discrete control tasks (CartPole and Lunar Lander), show that the proposed method performs better than DQNs with linear function approximations. However, DQNs with MLPs still demonstrate better performance and stability than the proposed method. The authors argue that their approach, despite lower performance compared to some baselines, is interpretable, which is an added advantage.
strengths: I view this work as a paper that combines some existing work in a fairly straightforward manner to solve the problem of action-value function approximation with interpretability on the table. While the methodological novelty may be somewhat limited, I acknowledge and appreciate the authors' efforts in synthesizing these components into a cohesive approach. I do see the value of interpretable value function approximations, and this work takes a step towards that, which I appreciate. The paper is well-written and easy to comprehend.
weaknesses: In terms of where the paper can be improved, I have a few concerns, primarily regarding the experiments and whether some of the motivations stated by the authors are sufficiently justified through the presented results. The experiments are currently limited to relatively simple control tasks, such as CartPole and Lunar Lander. I believe the authors should consider tasks with larger state and action spaces. My concern is that the performance of the proposed method, which uses GPs for value function approximation, may degrade in higher-dimensional problems, given the known limitations of GPs in handling high-dimensional problems. The current experiments do not adequately address whether this is the case and, if it is, to what extent. If this is the case, while it might be a major limitation of the method, I can still see value in the proposed method. However, in that case, I would like to see revised contributions accordingly.
Second, the authors motivate the work for why GPs are good for the value function approximation with the interpretability they offer. It is not clear to me how I should interpret GPs in value prediction. I did not find any experiments showing how this is done. Since the choice of GPs is central to the method, and given that they underperform compared to DQNs with MLPs, there needs to be a strong justification for selecting GPs as function approximators. Specifically, the interpretability offered by GPs should be clearly demonstrated to support this choice. Perhaps I might be missing something, as the authors may have thought about this more. I would appreciate any thoughts the authors have on this or experiments showing how the interpretability is achieved and how it can be used.
As for baselines, the authors use DQN with linear functions. While this is a necessary baseline, it is also a weak baseline. Another stronger baseline could be linear function approximation with features that are non-linearly projected (ex: polynomial features, simple MLPs as encoders). Such an approach would trade off the interpretability to some extent but offer better performance.
confidence: 4
justification: Overall, I think this work takes on an important problem, but in its current form, it falls short of novelty in the proposed method and empirical rigor in experiments, specifically in terms of number of different tasks considered and their difficulty, how well the experiments show the use of the proposed method for interpretability - the motivation of the work. A better selection of baselines could also strengthen the work. Due to these reasons, I am leaning toward rejecting the paper. |
Q2wVVeOpz8 | Zero-Shot Open-Vocabulary OOD Object Detection and Grounding using Vision Language Models | [
"Poulami Sinhamahapatra",
"Shirsha Bose",
"Karsten Roscher",
"Stephan Günnemann"
] | Automated driving involves complex perception tasks that require a precise understanding of diverse traffic scenarios and confident navigation. Traditional data-driven algorithms trained on closed-set data often fail to generalize upon out-of-distribution (OOD) and edge cases. Recently, Large Vision Language Models (LVLMs) have shown potential in integrating the reasoning capabilities of language
models to understand and reason about complex driving scenes, aiding generalization to OOD scenarios. However, grounding such OOD objects still remains a challenging task. In this work, we propose an automated framework zPROD, for zero-shot promptable open vocabulary OOD object detection, segmentation, and grounding in autonomous driving. We leverage LVLMs with visual grounding capabilities, eliminating the need for lengthy text communication and providing precise indications of OOD objects in the scene or on the track of the ego-centric vehicle. We evaluate our approach on OOD datasets from existing road anomaly segmentation benchmarks such as SMIYC and Fishyscapes. Our zero-shot approach shows superior performance on RoadAnomaly and RoadObstacle and comparable results on the Fishyscapes subset as compared to supervised models and acts a baseline for future zero-shot methods based on open vocabulary OOD detection. | [
"OOD object detection",
"zero-shot",
"open-vocabulary",
"segmentation",
"autonomous driving",
"vision language models"
] | https://openreview.net/pdf?id=Q2wVVeOpz8 | https://openreview.net/forum?id=Q2wVVeOpz8 | yR3PNyixbv | official_review | 1,727,613,603,565 | Q2wVVeOpz8 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission49/Reviewer_irU7"
] | NLDL.org/2025/Conference | 2025 | title: Official Review
summary: The work proposes a zero-shot framework for OOD object detection and grounding in an autonomous vehicle setting. The framework leverages the capabilities of frozen large vision language models (LVLM) to first detect all objects in the image. In-domain objects are then detected via prompting the LVLM with a list of in-domain classes and the list of potential OOD objects is then retrieved by subtracting the masks of in-domain objects from the mask containing all objects. As the resulting OOD mask will contain noise/artifacts, two algorithms are introduced to retrieve the final list of OOD masks. These two algorithms take as input the inverse in-domain mask and the noisy OOD mask and either rely on the assumption that the number of OOD objects is known or that a threshold has been selected that considers the overlap between instances in the two masks. Finally, the LVLM is prompted to return the road mask and object outside the convex hull of the road are removed.
strengths: - Overall the proposed method is sound and while there exist approaches that address the object grounding and detection task, the focus on OOD categories is novel.
- The paper is well written and the methodology is presented in a clear manner.
- The proposed approach is intuitive.
- The work addresses an important problem of OOD object detection for autonomous driving.
weaknesses: - In the evaluation, the authors claim that the model outperforms prior approaches on the RoadAnomaly and RoadObstacle datasets while achieving competitive results on Fishyscapes (FS). However, as no results are included for RoadObstacle for the baseline methods and the difference between the best baseline approach and the proposed method on the FS dataset is quite significant, results appear overstated. Given that the experiments compare the zero-shot approach with supervised models, the reviewer believes that the significant differences on the FS dataset are not a significant problem.
- The authors mention several places that a very low fixed LVLM detector threshold is critical, however, it is not specified what this threshold is set to.
- It would have been beneficial to include a discussion of limitations and/or failure cases.
Minor:
L298 Figure reference missing.
confidence: 4
justification: Overall, the work proposes a solution to an interesting task. While the methodological novelty mostly consists of the application of promotable LVLMs and can thus be considered limited, the reviewer believes that the intuitive solution of this new tasks warrants acceptance. The reviewer does, however, strongly encourage the authors to revise the description of the experimental results to more accurately reflect the obtained results and not overstate them.
final_rebuttal_confidence: 4
final_rebuttal_justification: After going over the rebuttal and changes that the authors have made to the document, I believe that the authors have addressed the majority of the concerns and recommend acceptance of the manuscript. The paper makes an intuitive and simple, but effective contribution and the work is presented in a clear manner. The authors should, however, add a description of the newly added baseline to the manuscript. |
Q2wVVeOpz8 | Zero-Shot Open-Vocabulary OOD Object Detection and Grounding using Vision Language Models | [
"Poulami Sinhamahapatra",
"Shirsha Bose",
"Karsten Roscher",
"Stephan Günnemann"
] | Automated driving involves complex perception tasks that require a precise understanding of diverse traffic scenarios and confident navigation. Traditional data-driven algorithms trained on closed-set data often fail to generalize upon out-of-distribution (OOD) and edge cases. Recently, Large Vision Language Models (LVLMs) have shown potential in integrating the reasoning capabilities of language
models to understand and reason about complex driving scenes, aiding generalization to OOD scenarios. However, grounding such OOD objects still remains a challenging task. In this work, we propose an automated framework zPROD, for zero-shot promptable open vocabulary OOD object detection, segmentation, and grounding in autonomous driving. We leverage LVLMs with visual grounding capabilities, eliminating the need for lengthy text communication and providing precise indications of OOD objects in the scene or on the track of the ego-centric vehicle. We evaluate our approach on OOD datasets from existing road anomaly segmentation benchmarks such as SMIYC and Fishyscapes. Our zero-shot approach shows superior performance on RoadAnomaly and RoadObstacle and comparable results on the Fishyscapes subset as compared to supervised models and acts a baseline for future zero-shot methods based on open vocabulary OOD detection. | [
"OOD object detection",
"zero-shot",
"open-vocabulary",
"segmentation",
"autonomous driving",
"vision language models"
] | https://openreview.net/pdf?id=Q2wVVeOpz8 | https://openreview.net/forum?id=Q2wVVeOpz8 | X5cl5s4HEx | official_review | 1,727,005,085,060 | Q2wVVeOpz8 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission49/Reviewer_XbXC"
] | NLDL.org/2025/Conference | 2025 | title: Good paper which presents a zero-shot framework for OOD object detection
summary: SUMMARY: This paper presents two zero-shot algorithms for out-of-distribution (OOD) object detection using Large Vision Language Models (LVLMs). Experiment shows that the proposed method outperforms its competitors.
strengths: 1. The paper addresses OOD object detection, which is an important problem in computer vision
2. The proposed methods are zero-shot
3. The experiment on standard datasets showed the superior performance of proposed methods.
weaknesses: 1. The proposed methods are still simple
confidence: 3
justification: The proposed methods levarage LVLMs, enabling a zero-shot framework for OOD object detection. The experiment on standard benchmarks showed a superior performance of proposed methods. Although two proposed algorithms are still simple, their strengths outweights weaknesses.
final_rebuttal_confidence: 3
final_rebuttal_justification: I would like to keep my rating |
Q2wVVeOpz8 | Zero-Shot Open-Vocabulary OOD Object Detection and Grounding using Vision Language Models | [
"Poulami Sinhamahapatra",
"Shirsha Bose",
"Karsten Roscher",
"Stephan Günnemann"
] | Automated driving involves complex perception tasks that require a precise understanding of diverse traffic scenarios and confident navigation. Traditional data-driven algorithms trained on closed-set data often fail to generalize upon out-of-distribution (OOD) and edge cases. Recently, Large Vision Language Models (LVLMs) have shown potential in integrating the reasoning capabilities of language
models to understand and reason about complex driving scenes, aiding generalization to OOD scenarios. However, grounding such OOD objects still remains a challenging task. In this work, we propose an automated framework zPROD, for zero-shot promptable open vocabulary OOD object detection, segmentation, and grounding in autonomous driving. We leverage LVLMs with visual grounding capabilities, eliminating the need for lengthy text communication and providing precise indications of OOD objects in the scene or on the track of the ego-centric vehicle. We evaluate our approach on OOD datasets from existing road anomaly segmentation benchmarks such as SMIYC and Fishyscapes. Our zero-shot approach shows superior performance on RoadAnomaly and RoadObstacle and comparable results on the Fishyscapes subset as compared to supervised models and acts a baseline for future zero-shot methods based on open vocabulary OOD detection. | [
"OOD object detection",
"zero-shot",
"open-vocabulary",
"segmentation",
"autonomous driving",
"vision language models"
] | https://openreview.net/pdf?id=Q2wVVeOpz8 | https://openreview.net/forum?id=Q2wVVeOpz8 | RDzLXOPtxl | meta_review | 1,730,469,016,586 | Q2wVVeOpz8 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission49/Area_Chair_733k"
] | NLDL.org/2025/Conference | 2025 | metareview: This paper suggests to rely on large vision-language model (LVLM) to perform zero-shot out-of-domain (OOD) object detection.
All reviewers agreed about the importance of the problem, the relevance of the techniques involved, the clarity of the paper, and the significance (to some extent) of the experimental results. However, a few negative aspects were also identified, lack of technical contributions, overstated results, lack of technical details. The authors provide a systematic rebuttal to reviewers' comments and their replies were judged valuable.
Overall, the paper brings an intuitive and simple, but effective contribution and the work is presented in a clear manner. The authors should, however, be reminded to add a description of the newly added baseline to the manuscript.
All reviewers suggest to accept the paper. Given the strength of the contribution, a poster presentation seems more appropriate.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 5: The area chair is absolutely certain |
Q2wVVeOpz8 | Zero-Shot Open-Vocabulary OOD Object Detection and Grounding using Vision Language Models | [
"Poulami Sinhamahapatra",
"Shirsha Bose",
"Karsten Roscher",
"Stephan Günnemann"
] | Automated driving involves complex perception tasks that require a precise understanding of diverse traffic scenarios and confident navigation. Traditional data-driven algorithms trained on closed-set data often fail to generalize upon out-of-distribution (OOD) and edge cases. Recently, Large Vision Language Models (LVLMs) have shown potential in integrating the reasoning capabilities of language
models to understand and reason about complex driving scenes, aiding generalization to OOD scenarios. However, grounding such OOD objects still remains a challenging task. In this work, we propose an automated framework zPROD, for zero-shot promptable open vocabulary OOD object detection, segmentation, and grounding in autonomous driving. We leverage LVLMs with visual grounding capabilities, eliminating the need for lengthy text communication and providing precise indications of OOD objects in the scene or on the track of the ego-centric vehicle. We evaluate our approach on OOD datasets from existing road anomaly segmentation benchmarks such as SMIYC and Fishyscapes. Our zero-shot approach shows superior performance on RoadAnomaly and RoadObstacle and comparable results on the Fishyscapes subset as compared to supervised models and acts a baseline for future zero-shot methods based on open vocabulary OOD detection. | [
"OOD object detection",
"zero-shot",
"open-vocabulary",
"segmentation",
"autonomous driving",
"vision language models"
] | https://openreview.net/pdf?id=Q2wVVeOpz8 | https://openreview.net/forum?id=Q2wVVeOpz8 | 7YrRo7WfRI | official_review | 1,727,938,970,020 | Q2wVVeOpz8 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission49/Reviewer_a9Xt"
] | NLDL.org/2025/Conference | 2025 | title: Review for submission #49
summary: This work proposes to leverage the Large vision-language model (LVLM) to perform zero-shot out-of-domain (OOD) object detection. The authors first query all the foreground instances within the scenes and propose two algorithms to determine the plausible OOD instances. They also propose a refinement module that enhances the result by focusing on the road area. Experiments show that the proposed approach can surpass some supervised methods in RoadAnomaly.
strengths: 1. I agree that the research paradigm has shifted since the emergence of the large model. Hence, the motivation of this work, using LVLM for OOD detection for AD scenarios, is clear.
2. The proposed techniques are intuitive and make sense to me.
3. Experiments show they can exceed some supervised methods by a large margin.
weaknesses: 1. The proposed pipeline is a post-processing. In contrast to the comparison methods, the technical contribution may not be sufficient.
2. I think Table 1 lacks a baseline method. That is, using APE to predict the OOD objects directly.
3. I think it's better to use mathematic equations to describe the pipeline (line 199-270).
confidence: 4
justification: 1. Although I think the technical contributions of this work may not be significant, it is a first and interesting attempt to prompt the LVLM in the task of OOD detection. And the authors demonstrate competitive results, which can serve as a strong baseline for later research.
2. In general, the proposed algorithms are clear and easy to follow. |
Q2wVVeOpz8 | Zero-Shot Open-Vocabulary OOD Object Detection and Grounding using Vision Language Models | [
"Poulami Sinhamahapatra",
"Shirsha Bose",
"Karsten Roscher",
"Stephan Günnemann"
] | Automated driving involves complex perception tasks that require a precise understanding of diverse traffic scenarios and confident navigation. Traditional data-driven algorithms trained on closed-set data often fail to generalize upon out-of-distribution (OOD) and edge cases. Recently, Large Vision Language Models (LVLMs) have shown potential in integrating the reasoning capabilities of language
models to understand and reason about complex driving scenes, aiding generalization to OOD scenarios. However, grounding such OOD objects still remains a challenging task. In this work, we propose an automated framework zPROD, for zero-shot promptable open vocabulary OOD object detection, segmentation, and grounding in autonomous driving. We leverage LVLMs with visual grounding capabilities, eliminating the need for lengthy text communication and providing precise indications of OOD objects in the scene or on the track of the ego-centric vehicle. We evaluate our approach on OOD datasets from existing road anomaly segmentation benchmarks such as SMIYC and Fishyscapes. Our zero-shot approach shows superior performance on RoadAnomaly and RoadObstacle and comparable results on the Fishyscapes subset as compared to supervised models and acts a baseline for future zero-shot methods based on open vocabulary OOD detection. | [
"OOD object detection",
"zero-shot",
"open-vocabulary",
"segmentation",
"autonomous driving",
"vision language models"
] | https://openreview.net/pdf?id=Q2wVVeOpz8 | https://openreview.net/forum?id=Q2wVVeOpz8 | 2BHAI5vlFo | decision | 1,730,901,556,596 | Q2wVVeOpz8 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: Given the AC positive recommendation, we recommend an oral and a poster presentation given the AC and reviewers recommendations. |
Pyqnc9eWhB | Graph Counterfactual Explainable AI via Latent Space Traversal | [
"Andreas Abildtrup Hansen",
"Paraskevas Pegios",
"Anna Calissano",
"Aasa Feragen"
] | Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. \textit{Counterfactual explanations} aim to explain predictions by finding the ``nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high performing and more robust than the baselines. | [
"Explainable AI",
"Counterfactual explanations",
"graph",
"equivariance",
"invariance",
"symmetry",
"VAE"
] | https://openreview.net/pdf?id=Pyqnc9eWhB | https://openreview.net/forum?id=Pyqnc9eWhB | xF1Ae8ww1g | decision | 1,730,901,556,586 | Pyqnc9eWhB | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: Given the AC positive recommendation, we recommend an oral and a poster presentation given the AC and reviewers recommendations. |
Pyqnc9eWhB | Graph Counterfactual Explainable AI via Latent Space Traversal | [
"Andreas Abildtrup Hansen",
"Paraskevas Pegios",
"Anna Calissano",
"Aasa Feragen"
] | Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. \textit{Counterfactual explanations} aim to explain predictions by finding the ``nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high performing and more robust than the baselines. | [
"Explainable AI",
"Counterfactual explanations",
"graph",
"equivariance",
"invariance",
"symmetry",
"VAE"
] | https://openreview.net/pdf?id=Pyqnc9eWhB | https://openreview.net/forum?id=Pyqnc9eWhB | qAjeonrhDM | official_review | 1,728,308,544,226 | Pyqnc9eWhB | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission48/Reviewer_DH66"
] | NLDL.org/2025/Conference | 2025 | title: This paper presents several intriguing ideas but my current assessment is just below the acceptance threshold.
summary: This paper presents several intriguing ideas. Firstly, it introduces a generative modeling method called PEGVAE, which utilizes VAE for graph instances. Unlike image instances, graph instances require a guarantee of permutation invariance during the learning process. This aspect, which has not been considered in traditional image-based VAEs, is likely to engage readers' interest. The authors effectively detail this novel framework in sections 1.Introduction and 2.1-2.2.
Secondly, the paper proposes an innovative approach for iteratively identifying counterfactual latent representations by utilizing gradients derived from a classifier when input graphs are classified. The quality of the discovered counterfactuals is evaluated using two explainability principles: fidelity and validity.
strengths: Overall, the paper is comprehensive, and the key points are well articulated. If the authors could address the following suggestions during the review process and incorporate them into the camera-ready version, I would be open to reconsidering my current assessment, which is just below the acceptance threshold.
weaknesses: Points to improve:
1. Syntax errors and typos: The manuscript contains numerous syntax errors and typographical mistakes. I urge the authors to thoroughly proofread the entire document to improve clarity. Some errors I noted include:
L175: \rightarrow should be \leftarrow
L197: Appendix B is missing
L199: N(0|1) should be N(0, 1)
L204: Eq. (3): p(E|B,V,E) should be p(E|B,V,A)
2. Algorithmic representation of the process: Including an algorithmic representation of the process would greatly enhance readers' understanding. Currently, the method by which the PEGVAE-classifier is trained (whether end-to-end or separately) and how counterfactual samples are identified for specific instances is not clearly delineated. Additionally, the batching of graphs should be included in the algorithmic description.
3. Discussion section: The discussion is somewhat lacking. It would be beneficial to explore whether graph instances belonging to the same class exhibit similar counterfactuals. Moreover, if latent features are altered using gradients obtained from intentionally misclassified instances within a pre-trained PEGVAE-Classifier setup, what would the reconstructed graphs look like? Addressing these questions would add depth to the analysis.
4. Quality of counterfactual explanations: On L406, the statement that "generative models do not guarantee quality counterfactual explanations" raises a question about whether PEGVAE falls into this category as well. This remark could diminish the persuasive power of the paper. Instead, I recommend visualizing the generated graph samples and including a more detailed examination and analysis of the contexts in which they are produced. If further experiments are conducted in the meantime, it would be beneficial to include those results during this review process.
confidence: 5
justification: Although this paper attempts to address an important permutation invariance problem through an innovative approach, the experimental results do not align with the intended outcomes, leading me to recommend against acceptance.
final_rebuttal_confidence: 5
final_rebuttal_justification: Thank you for your efforts in addressing my questions. Your response is comprehensive and I would like to change my decision to accept this paper for NLDL 2025. Please ensure that the answer regarding the first inquiry, which relates to point 2 in my initial comments, is included in the camera-ready version. |
Pyqnc9eWhB | Graph Counterfactual Explainable AI via Latent Space Traversal | [
"Andreas Abildtrup Hansen",
"Paraskevas Pegios",
"Anna Calissano",
"Aasa Feragen"
] | Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. \textit{Counterfactual explanations} aim to explain predictions by finding the ``nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high performing and more robust than the baselines. | [
"Explainable AI",
"Counterfactual explanations",
"graph",
"equivariance",
"invariance",
"symmetry",
"VAE"
] | https://openreview.net/pdf?id=Pyqnc9eWhB | https://openreview.net/forum?id=Pyqnc9eWhB | XsLVvAxx1Y | official_review | 1,728,313,258,548 | Pyqnc9eWhB | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission48/Reviewer_mmEe"
] | NLDL.org/2025/Conference | 2025 | title: Clear approach to counterfactual explanations for graph classifiers
summary: This paper presents a method that produces counterfactual explanations for graph classifiers. At the core of the method, there is a VAE, which allows the authors to avoid the tricky problem of defining a distance on the space of graphs by moving into VAE's latent space instead.
strengths: **Coherence and clarity**
The paper is well structured, starting with a clear description of the problem and the related difficulties, then proceeding to explain the method, and finally commenting on the experimental results. I will comment on some elements that affect the clarity of some parts of the paper among the weaknesses, but overall the paper is clear.
**Incremental contribution**
The authors clearly state what's new in their work, namely the use of a permutation equivariant GVAE. I'm not super familiar with the specific topic, but a quick search in the literature seems to confirm that this is, indeed, a new element worth of being published and discussed.
**Questions**
I have three questions for the authors:
1. I was wondering how robust is your method against different types of classifiers and how sensitive it is to the reconstruction power of the GVAE.
2. You only considered binary classification tasks. Would your approach work also for more classes?
3. Mutagenicity dataset comes with ground truth, is it possible to use those to measure the performance of your method?
weaknesses: **Clarity**:
- I would recommend to the authors to take the opportunity of this review to fix a few typos. There are not many and they don't affect the overall comprehensibility. I'll make a couple of examples here. Sometimes the latent variable $z$ is written in bold (see eq. 4), sometimes it's not (see section 2.5). Lines 265, 267 and 270 have extra parentheses.
- There are some parts of the paper that could be a bit clearer, maybe you can make use of the extra page to expand and clarify them.
- In line 046 you talk about "*dataset-specific* permutations equivariant GVAE", but it's not clear to me what makes it dataset-specific.
- In line 064 you talk about the necessity for explanations to be "maximally interpretable", without specifying what that means.
- You introduce the difference between equivariance and invariance quite early in the paper (lines 96-98), maybe you could specify that you will explain it better in section 2.2.
- In line 099 it's not clear to me the difference between graph level and population level embeddings, and how that impacts the equivariance of the representation.
- As a topological distance, you use the graph edit distance. Any reason why you use this specific one instead of others?
- It's a bit unclear to me what the Signed Increase in Confidence is. Maybe you could expand it a bit better?
- The readability of table 1 could be improved, for example:
- by introducing acronyms for the metrics, so that you can increase the font size;
- by using bold numbers also in the part related to the fidelity to indicate the best-performing methods.
- The readability of figure 2 could be improved, for example:
- by making them larger;
- introducing the acronyms (GED is not explicitly defined in the text);
- "Mean Absolute Difference" is not defined within the text and "Latent Cossimilarity" has a different name in text;
- maybe you could show only one dataset with larger figures, and move the others in the appendix?
- how should the histograms be interpreted?
- Since the very beginning of the paper I was expecting to see some factual and counterfactual graphs plotted somewhere. It's good to have numbers and metrics, but maybe one example could help ground the discussion.
- You mention that Nearest Neighbor counterfactuals have low validity because they will be *often* misclassified by the classifier. I understand that it can happen, but why does it happen often?
**Reproducibility**
The method and the architectures employed are described in quite detail, but I think it's necessary to have a link to the GitHub repository. Unless it was removed for anonymity, I strongly recommend adding it.
**Incremental contributions**
Could you maybe clarify how does your work relate with the one of [1]?
[1] Ma, Jing, et al. "Clear: Generative counterfactual explanations on graphs." Advances in neural information processing systems 35 (2022): 25895-25907.
confidence: 4
justification: The paper is already well written and clear, most of the concerns that I raised about clarity are either minor flaws, or they could be addressed easily for the camera-ready version.
In my opinion, the elements introduced in this work are novel and incremental. They could be expanded further (see for examples my questions above), but it's definitely worth sharing with the community.
final_rebuttal_confidence: 4
final_rebuttal_justification: My rating was positive even before the rebuttal phase, when some of my concerns were addressed. I see that other weaknesses, pointed out by other reviewers, were also addressed. So I confirm my initial rating. |
Pyqnc9eWhB | Graph Counterfactual Explainable AI via Latent Space Traversal | [
"Andreas Abildtrup Hansen",
"Paraskevas Pegios",
"Anna Calissano",
"Aasa Feragen"
] | Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. \textit{Counterfactual explanations} aim to explain predictions by finding the ``nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high performing and more robust than the baselines. | [
"Explainable AI",
"Counterfactual explanations",
"graph",
"equivariance",
"invariance",
"symmetry",
"VAE"
] | https://openreview.net/pdf?id=Pyqnc9eWhB | https://openreview.net/forum?id=Pyqnc9eWhB | N8P8Rr61AL | official_review | 1,728,296,097,434 | Pyqnc9eWhB | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission48/Reviewer_5Zk4"
] | NLDL.org/2025/Conference | 2025 | title: preliminary but sensible contribution
summary: The paper introduces a pipeline for generating (in-distribution) counterfactual
explanations of GNN decisions. An equivariant generative graph model is used
for the purpose. Specifically, the idea is to model a generative distribution
over in-distribution graphs as a VAE; the output of this VAE is fed to the
classifier; a counterfactual is computed by traversing the latent space (via
gradient ascent/descent) in a direction that crosses the decision boundary.
An empirical evaluation is carried out against three simple but reasonable
baselines on three data sets.
strengths: - Generally well written and structured.
- Tackles a challenging problem: generating counterfactuals for
relational-continuous data.
- The proposed method is intuitively sensible.
- Experiments are relatively basic but set up appropriately.
- I appreciated that the authors do point out limitations of their proposed method.
weaknesses: - There already exist works for generating counterfactuals that rely on
generative models in the context of non-relational data. It would make sense to
mention a couple, and to explain that for graphs the problem is more challenging
due to equivariance etc.
- Section 2.1: it would have helped to provide a basic intuition behind the
layers by Maron et al.
- The notion of "fidelity" (aka "faithfulness" in the literature) normally
refers to the degree by which an explanation is faithful to the reasoning
process of a model, see for instance [1]. I would refrain from using the same
name for a different quantity: it is confusing.
- The choice of competitors is limited and the experiments are run using a single
GNN model; hinting at the fact that this is an
(understandably) preliminary work.
- In a similar vein, it would make sense to discuss the direct competitors more
in details and compare (both conceptually and empirically) against them.
**Suggestions**:
- Please name your method. It makes it easier to reference it in future work.
You can also use it in the plots to make it easier to identify your proposed
approach (the dark blue bar).
- Using $\mathcal{C}$ for the GNN classifier and $\mathcal{F}-\mathcal{D}$ for
the encoder-decoder is a bit odd - calligraphic math symbols are normally used
for denoting sets (as you do for the set of graphs).
- For conditional distribution, consider using $\mid$ instead of $|$.
- Figure 1 should probably be in page 2 (because it helps to figure out what
latent space is being traversed.)
[1] Agarwal, C., Zitnik, M., & Lakkaraju, H. (2022, May). Probing gnn explainers: A rigorous theoretical and empirical analysis of gnn explanation methods. In International Conference on Artificial Intelligence and Statistics (pp. 8969-8996). PMLR.
- The description of the classifier should probably be an experimental detail,
while it currently resides in the Method section.
- What is the reason for reporting the distance in latent space? Does it matter
for practical purposes?
confidence: 4
justification: All in all, a preliminary but promising contribution. The proposed approach is sensible and the (preliminary) evaluation shows promise. The paper is currently missing a discussion of advantages/disadvantages wrt existing works, as well as a thorough empirical comparison.
final_rebuttal_confidence: 4
final_rebuttal_justification: I was quite positive about the paper to begin with, and the rebuttal has managed to clarify the few issues I had pointed out. |
Pyqnc9eWhB | Graph Counterfactual Explainable AI via Latent Space Traversal | [
"Andreas Abildtrup Hansen",
"Paraskevas Pegios",
"Anna Calissano",
"Aasa Feragen"
] | Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. \textit{Counterfactual explanations} aim to explain predictions by finding the ``nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high performing and more robust than the baselines. | [
"Explainable AI",
"Counterfactual explanations",
"graph",
"equivariance",
"invariance",
"symmetry",
"VAE"
] | https://openreview.net/pdf?id=Pyqnc9eWhB | https://openreview.net/forum?id=Pyqnc9eWhB | B3Ksp1fp5P | meta_review | 1,730,553,716,193 | Pyqnc9eWhB | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission48/Area_Chair_XwGk"
] | NLDL.org/2025/Conference | 2025 | metareview: The authors propose a method to generate counterfactual examples of GNN decisions by pushing the output to cross the decision boundary line. Nevertheless, the reviewers and the authors qualify the results as preliminary as the method could be improved.
I, therefore, recommend accepting the paper for a poster session.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 4: The area chair is confident but not absolutely certain |
Pyqnc9eWhB | Graph Counterfactual Explainable AI via Latent Space Traversal | [
"Andreas Abildtrup Hansen",
"Paraskevas Pegios",
"Anna Calissano",
"Aasa Feragen"
] | Explaining the predictions of a deep neural network is a nontrivial task, yet high-quality explanations for predictions are often a prerequisite for practitioners to trust these models. \textit{Counterfactual explanations} aim to explain predictions by finding the ``nearest'' in-distribution alternative input whose prediction changes in a pre-specified way. However, it remains an open question how to define this nearest alternative input, whose solution depends on both the domain (e.g. images, graphs, tabular data, etc.) and the specific application considered. For graphs, this problem is complicated i) by their discrete nature, as opposed to the continuous nature of state-of-the-art graph classifiers; and ii) by the node permutation group acting on the graphs. We propose a method to generate counterfactual explanations for any differentiable black-box graph classifier, utilizing a case-specific permutation equivariant graph variational autoencoder. We generate counterfactual explanations in a continuous fashion by traversing the latent space of the autoencoder across the classification boundary of the classifier, allowing for seamless integration of discrete graph structure and continuous graph attributes. We empirically validate the approach on three graph datasets, showing that our model is consistently high performing and more robust than the baselines. | [
"Explainable AI",
"Counterfactual explanations",
"graph",
"equivariance",
"invariance",
"symmetry",
"VAE"
] | https://openreview.net/pdf?id=Pyqnc9eWhB | https://openreview.net/forum?id=Pyqnc9eWhB | 3hTcwyFAOd | official_review | 1,728,503,439,882 | Pyqnc9eWhB | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission48/Reviewer_BpLm"
] | NLDL.org/2025/Conference | 2025 | title: Review of paper 48
summary: The authors propose a method to generate semantically meaningful counterfactual explanations for graph classifiers by traversing the well regularised latent space learned by PEGVAE. The authors validate their approach and show the its effectiveness on three graph datasets.
strengths: * The paper is extremely well written and was very interesting to read.
* The idea to generate the graph counterfactual explanations by optimally traversing the regularized latent space learned by VAE is novel.
* The results on three datasets are promising towards the applicability of the approach.
weaknesses: * How scalable is the approach to more complex real-world problems with large and sparse graphs?
* The quality of the counterfactuals is directly tied to how well the VAE's latent space reflects the relationships in the original graph, which might not always be optimal.
confidence: 3
justification: The idea is novel, interesting and well executed with the experimentation, as well as the paper is well written. |
PenPJYfmaA | NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks | [
"Bjørn Leth Møller",
"Sepideh Amiri",
"Christian Igel",
"Kristoffer Knutsen Wickstrøm",
"Robert Jenssen",
"Matthias Keicher",
"Mohammad Farid Azampour",
"Nassir Navab",
"Bulat Ibragimov"
] | A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed.
NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification.
In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T | [
"XAI",
"MIA"
] | https://openreview.net/pdf?id=PenPJYfmaA | https://openreview.net/forum?id=PenPJYfmaA | s34acmStoz | official_review | 1,727,791,596,782 | PenPJYfmaA | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission17/Reviewer_kbVf"
] | NLDL.org/2025/Conference | 2025 | title: Review of NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks
summary: The work introduces NEMt, a modification of the NEM method to specifically explain image classification predictions, in contrast to the original model which explains latent representations.
The work also extends the NEM method with a new final layer to remove artifacts dubbed "stochastic weighted neighborhood averaging" which was needed as standard NEM produced a lot of artifacts in classification explanation.
NEMt works by essentially training a segmentation network that explains a given classification network by outputting an attribution score for each pixel.
During inference, the image is predicted by the classification network and the segmentation network is given the image and some representations from the classification network and predicts the attribution map.
During training additional steps are applied where the pixels indicated by the attribution map are occluded and the new image is fed through the classification network.
The segmentation network is then trained to find the attributions that maximize the difference in prediction while a regularization loss forces it to segment a smaller area.
The method is compared to several others using many relevant metrics in the new classification setting as well as the original latent space setting.
The results show that NEMt is still competitive in the original latent space setting, though no comparison to the original NEM is made which leaves it uncertain whether the new artifact removal is useful in this case.
The results also show that NEMt is better than the other methods in the classification setting for most metrics on two different datasets.
strengths: The work is well written and contains all immediately relevant details.
The NEM method is shown to be very useful in the classification setting and promises to be a weakly supervised segmentation method as well, though this is not mentioned in the work.
The artifact removal procedure is shown, though not thoroughly, to be helpful.
Additionally, artifacts are a general issue with attribution maps which I am glad to see tackled.
The evaluation is thorough with many metrics and previous methods, though missing some relevant ones.
In short, the work improves an already promising method which it presents clearly and evaluates thoroughly.
weaknesses: The NEM and NEMt methods do share a lot of similarities with methods developed by the Weakly Supervises Semantic Segmentation (WSSS) field which has also developed methods to extract masks using pretrained classification models, however, these are not mentioned at all in work.
The artifact removal is described as useful and necessary but its impact is not shown through evaluation, something that could have been achieved by evaluating how the method performs without it or with the original NEM regularizing loss instead (especially since one of the test cases is specifically in the domain NEM was developed for).
NEMt is compared against RELAX in the latent space setting. Since RELAX can be described to be a latent space adaption of RISE, it is strange to not also compare it against RISE in the classification setting.
While relevance mass is a useful metric it is biased towards metrics that produce positive attributions for relevant areas and negative otherwise, this is not strictly speaking needed for an attribution. This can be alleviated with normalization, but whether this is done is not described.
confidence: 3
justification: While there are some weaknesses with the study, they mostly have to do with how the method is presented and evaluated, not with the method itself which is sound and well-evaluated.
The work is easy to understand, its contribution is clear, and may even be relevant beyond the XAI domain as a WSSS method.
The work is therefore not only interesting to XAI researchers but computer vision and ML in general which is a great fit for the conference.
final_rebuttal_confidence: 4
final_rebuttal_justification: The original work contributes an evaluation of a novel method (NEM) on a new problem (image classification). Additionally, the method introduced novel solutions (artifact removal) needed for the method to work in the new domain and might be generally useful for other similar methods. There were some missing details like comparisons to similar ideas in other domains, ablation evaluation of their new solutions, a relevant existing method not being compared to, and a tiny bit of implementation details missing. These were all clarified in the rebuttal and promised to be included in the revised version. For these reasons, the work is a clear fit for the conference. |
PenPJYfmaA | NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks | [
"Bjørn Leth Møller",
"Sepideh Amiri",
"Christian Igel",
"Kristoffer Knutsen Wickstrøm",
"Robert Jenssen",
"Matthias Keicher",
"Mohammad Farid Azampour",
"Nassir Navab",
"Bulat Ibragimov"
] | A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed.
NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification.
In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T | [
"XAI",
"MIA"
] | https://openreview.net/pdf?id=PenPJYfmaA | https://openreview.net/forum?id=PenPJYfmaA | SXG54uyicl | decision | 1,730,901,555,074 | PenPJYfmaA | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: We recommend an oral and a poster presentation given the AC and reviewers recommendations. |
PenPJYfmaA | NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks | [
"Bjørn Leth Møller",
"Sepideh Amiri",
"Christian Igel",
"Kristoffer Knutsen Wickstrøm",
"Robert Jenssen",
"Matthias Keicher",
"Mohammad Farid Azampour",
"Nassir Navab",
"Bulat Ibragimov"
] | A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed.
NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification.
In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T | [
"XAI",
"MIA"
] | https://openreview.net/pdf?id=PenPJYfmaA | https://openreview.net/forum?id=PenPJYfmaA | K85z6guUpR | official_review | 1,728,415,394,183 | PenPJYfmaA | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission17/Reviewer_z9b3"
] | NLDL.org/2025/Conference | 2025 | title: Organized paper that expands on existing XAI framework and shows results on two common medical imaging datasets
summary: The paper proposes Neural Explanation Masks with target labels (NEMt), an XAI method which is a variant of the Neural Explanation Mask (NEM) framework. The NEM framework trains an explanation module for self-supervised representations that outputs explanation masks for the parts of the input that influence the representation most. In this article, the authors have adapted the method to
The NEMt loss function differs from NEM by optimizing for the mask that yields the largest change in the predicted class logit, rather than optimizing for the mask that yields the most similar representation. Otherwise, both methods encourage sparse masks by $\ell_1$ regularization. NEMt enforces binary masks by removing artifacts with a stochastic smoothing filter via weighted neighborhood averaging. This is done during the training of the masking network. Originally, the NEM framework encourages binary masks by adding a penalty term to the loss function, but this term is absent and instead replaced by the stochastic smoothing filter in NEMt.
The method is evaluated on two datasets - the 2018 RSNA Pneumonia Detection 250 Challenge dataset and the HAM10000 dataset -using established quantitative metrics for XAI.
strengths: - The objective is clear and based on established work
- The method is evaluated using established XAI metrics for explanation quality
- The work is novel in the sense that it expands an established framework to apply to the supervised setting with labels
- The paper is organized, and the text is well written
- The experiments show improvement against existing XAI methods on two medical imaging datasets.
weaknesses: - Although this work focuses on the application of the method to medical data, the results would have been more significant if NEMt was evaluated on other image datasets as well. In particular, VOC and COCO to compare results with previous work. This could also allow comparison of the performance of the method across various models, e.g. vision transformers. Currently, the results are only shown for frozen ResNet50 architectures.
- Lacking ablations on choice of $\lambda_1$ and $\lambda_2$ and how stable the method is to changes of the training loss parameters
- I wonder if the problem with artifacts which appeared in the initial experiments without the stochastic smoothing filter was due to the choice of the $B(x)$ regularization term, and if the authors have tried different choices?
- The authors do not mention the neighborhood size used for the smoothing filter or if this was ablated on
confidence: 3
justification: The paper proposes an extension of an established framework, which shows promising results in the medical setting. I hope the authors can address the above weaknesses to justify the completeness and thoroughness of their work. |
PenPJYfmaA | NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks | [
"Bjørn Leth Møller",
"Sepideh Amiri",
"Christian Igel",
"Kristoffer Knutsen Wickstrøm",
"Robert Jenssen",
"Matthias Keicher",
"Mohammad Farid Azampour",
"Nassir Navab",
"Bulat Ibragimov"
] | A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed.
NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification.
In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T | [
"XAI",
"MIA"
] | https://openreview.net/pdf?id=PenPJYfmaA | https://openreview.net/forum?id=PenPJYfmaA | HnvaSje4ra | official_review | 1,728,235,032,767 | PenPJYfmaA | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission17/Reviewer_iqmS"
] | NLDL.org/2025/Conference | 2025 | title: An interesting XAI module for artificial neural networks
summary: The paper proposes a variant to a recently introduced Neural Explanation Mask (NEM) framework, called NEMt, where the additional "t" in the name corresponds to the notion of "target labels". The idea is that the original NEM framework works in an unsupervised setting, whereas the proposed framework works for artificial neural networks that operate in a supervised manner. The authors experimentally validate the performance of their approach using two datasets from the medical domain and moreover evaluate the performance of the proposed method using several metrics (relevance rank, relevance mass, complexity, sparseness) that have been introduced and provide justification for the "goodness" of the provided explanations. Furthermore, the method is very lightweight as exhibited by the experiments, thus providing explanations in time comparable to one baseline method (Grad Shape) or, faster compared to the other baselines explored (Smooth Pixel Mask, GradCAM, Integrated Gradients).
strengths: - A new method for XAI thus has good properties on several metrics (including the time to provide an explanation).
- The proposed method works on top of existing trained models, thus making it very appealing for adoption in various settings.
weaknesses: - No source code provided. I would urge the authors to provide a link with the source code for their experiments upon acceptance of their work. This can only benefit the community and the authors.
- Other than the above, I cannot think of any real weaknesses for this work from a scientific point of view. Even though the novelty is not very high since it relies on the NEM framework, nevertheless, the idea is natural and very much important since supervised neural networks are used in so many contexts.
Finally, below are some comments for the presentation.
- Figure 1 seems to be out of order and I do not believe it is referenced from the main text.
- In Section 3.1 where you describe the NEM framework I was a little bit confused in retrospect. In line 137 you say that the output of model $\Phi$ is $y$, but what is really $y$? If we are talking about unsupervised learning, then this $y$ cannot be a label - and $y$ is (almost?) universally accepted as a "label"; so, most likely you want to output the identify function (e.g., as in an auto encoder) or some sort of embedding of the original input? Please note that I was not familiar with the NEM framework before reading this paper and perhaps this is why it is not so clear to me what you are trying to say and how that would fit in an "unsupervised" setting where NEM operates. Along these lines, in line 170, you mention that $d$ is some distance function. Given that $d$ takes as input such $y$'s, it would be nice if you can provide an example of the kind of functions that you have in mind.
The above weaknesses are the reason that I am giving a rating of 4 for this work and not a 5.
confidence: 3
justification: The paper deals with an important problem; that of providing explanations in real-world applications of machine learning where artificial neural networks are used. Furthermore, an important aspect of the approach is the modularity of the proposed method, since it can be applied on any pre-trained artificial neural network (and such applications are pretty everywhere nowadays). Overall, I think the paper is well-written and good discussion has been made throughout. Important baselines are being used and a nice set of metrics is also used for categorizing the method, with respect to other methods.
final_rebuttal_confidence: 4
final_rebuttal_justification: Reading the reviews of others has increased my confidence on the rating of the paper. I believe that this is a paper that has sufficient novelty and should be accepted, especially given the changes and small additions that the reviewers have requested and the authors have promised to include in the final version of the paper. |
PenPJYfmaA | NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks | [
"Bjørn Leth Møller",
"Sepideh Amiri",
"Christian Igel",
"Kristoffer Knutsen Wickstrøm",
"Robert Jenssen",
"Matthias Keicher",
"Mohammad Farid Azampour",
"Nassir Navab",
"Bulat Ibragimov"
] | A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed.
NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification.
In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T | [
"XAI",
"MIA"
] | https://openreview.net/pdf?id=PenPJYfmaA | https://openreview.net/forum?id=PenPJYfmaA | CUfaKDL1KD | meta_review | 1,730,631,839,286 | PenPJYfmaA | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission17/Area_Chair_eGap"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper proposes a method called Neural Explanation Masks with target labels (NEMt), an extension of the existing NEM framework tailored for supervised settings to improve interpretability for neural nets. The method addresses a weakness of the existing NEM and optimizes for target labels, incorporates a stochastic smoothing filter to remove artifacts to provide more reliable explanations for classification. The paper demonstrates improvements over existing baselines across multiple metrics. The authors tested the approach on two datasets and provided a code repository for reproducibility. All reviewers regarded the paper as an interesting contribution but also raised some comments. The reviewers noted that the paper would benefit from a broader evaluation on additional datasets, comparison with more related methods and more details on certain implementation aspects. Most of the comments from the reviewers were addressed during the rebuttal and the authors added some more experiments and ablation studies and gave some additional details for clarification which they plan to add to the manuscript prior to publication. This was well received by the reviewers. Given all strengths and weaknesses, all reviewers rated the paper very positively and suggested accepting it for NLDL.
recommendation: Accept (Oral)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 4: The area chair is confident but not absolutely certain |
Pb47B5t0pr | PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning in Medical Image Analysis | [
"Raghavendra Selvan",
"Bob Pepin",
"Christian Igel",
"Gabrielle Samuel",
"Erik B Dam"
] | The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for medical image analysis tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning $1M$ to $130M$ trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using existing pretrained models that are fine-tuned on new data can significantly reduce the computational resources and data required compared to training models from scratch. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints. | [
"Equitable AI",
"Resource efficiency",
"Image Classification",
"Deep Learning"
] | https://openreview.net/pdf?id=Pb47B5t0pr | https://openreview.net/forum?id=Pb47B5t0pr | qdEB4to0iw | official_review | 1,728,481,478,194 | Pb47B5t0pr | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission23/Reviewer_WeNX"
] | NLDL.org/2025/Conference | 2025 | title: Review of PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning
summary: In this work, the authors propose a metric called PePR, which is a measure of performance per resource-unit and use it to try to show that small-scale Deep Learning has a better trade-off of performance to resource cost. They do this by conducting an empirical experiment in which they train/fine-tune a large class of models and evaluate PePR on them.
strengths: * The PePR metric is highly relevant due to the concerns of resource costs of training machine learning models.
* Although the PePR metric is simple, it is highly interpretable (which the authors also walk through in the paper).
weaknesses: Major Weaknesses:
* There is one primary (large) reason for my low score which is that I am concerned about the fact that the authors only train all their models for 10 epochs across all experiments. I could be wrong, but it seems highly unlikely to me that models of the reported sizes are converged after 10 epochs of training _especially_ when trained from scratch. If they are not converged, then the conclusion to promote small-scale deep learning could be wrong. I would appreciate it if the authors would comment on this.
Minor Weaknesses:
* In the abstract the authors write that using pre-trained models may be more efficient, but (to me) this seems conflicting with the recommendation of using small-scale Deep learning. Pre-training large models is typically highly resource costly and I suppose not considered small-scale. Could the authors comment on this?
* Could the authors comment a bit on how to tune the proposed $\alpha$ in line 162?
Minor corrections/suggestions:
* I think there is a "the" missing in line 041 before medical.
* I think the definitive "the" in line 044 before energy should be removed for grammatical reasons.
* In line 073, it would be nice to write what "a novel measure" is a measure of.
* In Figure 2 (b) the x-axis goes from 0.000 to 0.015. Should it not be from 0.0 to 1.0 after normalization?
* There is a full stop missing in line 219 before "Other".
* Could the authors clarify what they mean about "efficient" versus "not efficient" in line 220-222?
* I think the paragraph in lines 341-349 should be move to the conclusion.
confidence: 4
justification: I have provided a score of 2 primarily due to the concerns with convergence of the presented models which potentially could unravel/change the entire conclusion on model scale. My comments on tuning of $\alpha$ and especially on conflicting messages with regards to what model scale to use, are also a contributor to the score being a 2, but not to the same degree as the issues with model convergence. |
Pb47B5t0pr | PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning in Medical Image Analysis | [
"Raghavendra Selvan",
"Bob Pepin",
"Christian Igel",
"Gabrielle Samuel",
"Erik B Dam"
] | The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for medical image analysis tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning $1M$ to $130M$ trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using existing pretrained models that are fine-tuned on new data can significantly reduce the computational resources and data required compared to training models from scratch. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints. | [
"Equitable AI",
"Resource efficiency",
"Image Classification",
"Deep Learning"
] | https://openreview.net/pdf?id=Pb47B5t0pr | https://openreview.net/forum?id=Pb47B5t0pr | dMLwIHp7pl | official_review | 1,728,262,748,760 | Pb47B5t0pr | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission23/Reviewer_HFyD"
] | NLDL.org/2025/Conference | 2025 | title: I would suggest major revisions before acceptance.
summary: The paper introduces the PePR score, a novel metric that measures performance per resource unit for deep learning (DL) models, with a focus on vision tasks, particularly in resource-constrained settings like medical imaging. The authors argue that large-scale DL models come with high costs in terms of compute, data, and energy, which can create barriers for researchers with limited access to these resources, especially in the Global South. The PePR score is defined in a balanced and objective way that accounts for both performance and resource costs. This method emphasizes resource sensitivity for high-performing models and highlights diminishing returns in resource usage.
strengths: The PePR score is defined in a balanced and objective way that accounts for both performance and resource costs. This method emphasizes resource sensitivity for high-performing models and highlights diminishing returns in resource usage.
weaknesses: Can you provide a more thorough justification for normalizing all resource types equally? Should different types of resources (e.g., memory vs. energy) be weighted differently depending on the application or hardware?
Have you considered evaluating PePR across different domains (e.g., NLP or general object detection tasks) to assess its broader applicability? Are there plans to extend the evaluation beyond medical image datasets?
Do you intend to extend PePR to consider other constraints like monetary cost, hardware access? How would the metric adapt in such contexts?
confidence: 3
justification: While the PePR score is an innovative and important contribution to the discussion on resource efficiency in deep learning, the assumptions behind the PePR score, the narrow scope of datasets, and the absence of empirical comparisons with other resource-aware metrics reduce the immediate practical significance of the work. |
Pb47B5t0pr | PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning in Medical Image Analysis | [
"Raghavendra Selvan",
"Bob Pepin",
"Christian Igel",
"Gabrielle Samuel",
"Erik B Dam"
] | The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for medical image analysis tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning $1M$ to $130M$ trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using existing pretrained models that are fine-tuned on new data can significantly reduce the computational resources and data required compared to training models from scratch. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints. | [
"Equitable AI",
"Resource efficiency",
"Image Classification",
"Deep Learning"
] | https://openreview.net/pdf?id=Pb47B5t0pr | https://openreview.net/forum?id=Pb47B5t0pr | QXQA65p58j | official_review | 1,727,470,557,745 | Pb47B5t0pr | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission23/Reviewer_5oEE"
] | NLDL.org/2025/Conference | 2025 | title: Official Review
summary: The paper titled introduces the PePR score (Performance per Resource Unit) as a novel metric to evaluate deep learning (DL) models, particularly in resource-constrained settings like the Global South. The research focuses on balancing model performance with resource consumption (e.g., compute power, energy, and data). It argues that the trend of large-scale models is unsustainable, creates barriers for resource-limited researchers, and leads to inequity in AI development.
strengths: One of the key strengths of this paper is its introduction of the PePR score, a metric that effectively quantifies the trade-off between model performance and resource consumption, providing a holistic evaluation of deep learning models. This approach addresses a gap in AI research by promoting efficiency, especially for resource-constrained environments, which are often overlooked in conventional performance metrics. Additionally, the paper's comprehensive analysis of 131 different DL models across three medical imaging datasets offers valuable insights into the performance-resource dynamics, making the research both thorough and applicable to real-world scenarios. By highlighting the advantages of smaller, pretrained models, the study challenges the prevailing focus on large-scale models, pushing for a more sustainable and equitable approach to AI. This is particularly important for promoting AI accessibility in the Global South, enhancing the paper’s relevance and societal impact.
weaknesses: One weakness of the paper is that it primarily focuses on the evaluation of deep learning models in the context of medical imaging, which may limit the generalizability of the findings to other domains where the characteristics of datasets and model requirements differ significantly. Additionally, while the PePR score is a novel and useful metric, its effectiveness in optimizing models outside of small-scale or resource-constrained environments is not thoroughly explored, potentially reducing its applicability to high-performance use cases. The paper also relies heavily on pretrained models, and while it demonstrates their advantages, it does not fully investigate the potential trade-offs, such as the limitations in model flexibility or the dependence on the quality and diversity of the pretraining data. Moreover, the experiments are conducted over a relatively short training period (10 epochs), which might not fully capture the long-term performance trends and resource consumption patterns of the models. These factors could affect the broader applicability and robustness of the conclusions.
confidence: 3
justification: The evaluation of the paper highlights both strengths and weaknesses, providing a balanced perspective on its contribution to deep learning research. The strengths lie in the introduction of the PePR score, which effectively promotes resource-efficient AI by balancing performance and resource costs—a particularly valuable contribution for resource-constrained regions. The paper’s comprehensive analysis of a wide range of models across multiple datasets, alongside its emphasis on small-scale models, pushes the AI community to reconsider the prevalent trend of large-scale, resource-heavy models. This makes the work not only innovative but also socially impactful, addressing issues of equity and sustainability in AI.
On the other hand, the weaknesses indicate that the paper’s scope is somewhat narrow, focusing heavily on medical imaging, which may limit the general applicability of its findings. The reliance on pretrained models and short training epochs raises concerns about the broader robustness of the PePR score, especially in domains with different data characteristics or where model flexibility and long-term resource consumption are critical.
Overall, I judge the paper as a valuable contribution to the field, especially for promoting sustainable deep learning practices. However, its impact could be further enhanced by extending its scope beyond medical imaging and offering a deeper exploration of the limitations of pretrained models and the PePR score in various real-world settings. |
Pb47B5t0pr | PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning in Medical Image Analysis | [
"Raghavendra Selvan",
"Bob Pepin",
"Christian Igel",
"Gabrielle Samuel",
"Erik B Dam"
] | The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for medical image analysis tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning $1M$ to $130M$ trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using existing pretrained models that are fine-tuned on new data can significantly reduce the computational resources and data required compared to training models from scratch. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints. | [
"Equitable AI",
"Resource efficiency",
"Image Classification",
"Deep Learning"
] | https://openreview.net/pdf?id=Pb47B5t0pr | https://openreview.net/forum?id=Pb47B5t0pr | DRPuLs5R5R | meta_review | 1,730,473,058,882 | Pb47B5t0pr | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission23/Area_Chair_rJAR"
] | NLDL.org/2025/Conference | 2025 | metareview: This paper introduces the PePR score, a metric designed to evaluate the trade-off between performance and resource consumption in deep learning models, with a particular focus on resource-constrained environments like medical imaging. The study is timely, tackling AI equity concerns by highlighting the resource efficiency of smaller models. The PePR score offers a novel, interpretable way to quantify resource use across various DL models, making a positive impact in promoting sustainable practices in AI.
However, the study’s reliance on short training durations and a specific domain (medical imaging) limits its broader applicability. Furthermore, the absence of statistical validation, alongside the reliance on pre-trained models, raises concerns about the robustness and flexibility of the findings. While the PePR score is a promising metric, extending evaluations to varied datasets, domains, and training regimes would strengthen its credibility and adoption potential.
Overall, the paper presents a valuable contribution, offering insights into efficient DL practices in resource-constrained settings, but would benefit from further validation and refinement to maximize its impact.
recommendation: Accept (Oral)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 5: The area chair is absolutely certain |
Pb47B5t0pr | PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning in Medical Image Analysis | [
"Raghavendra Selvan",
"Bob Pepin",
"Christian Igel",
"Gabrielle Samuel",
"Erik B Dam"
] | The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for medical image analysis tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning $1M$ to $130M$ trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using existing pretrained models that are fine-tuned on new data can significantly reduce the computational resources and data required compared to training models from scratch. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints. | [
"Equitable AI",
"Resource efficiency",
"Image Classification",
"Deep Learning"
] | https://openreview.net/pdf?id=Pb47B5t0pr | https://openreview.net/forum?id=Pb47B5t0pr | 6QHSqwqFyM | decision | 1,730,901,555,403 | Pb47B5t0pr | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: We recommend an oral and a poster presentation given the AC and reviewers recommendations. |
Pb47B5t0pr | PePR: Performance Per Resource Unit as a Metric to Promote Small-scale Deep Learning in Medical Image Analysis | [
"Raghavendra Selvan",
"Bob Pepin",
"Christian Igel",
"Gabrielle Samuel",
"Erik B Dam"
] | The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for medical image analysis tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning $1M$ to $130M$ trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using existing pretrained models that are fine-tuned on new data can significantly reduce the computational resources and data required compared to training models from scratch. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints. | [
"Equitable AI",
"Resource efficiency",
"Image Classification",
"Deep Learning"
] | https://openreview.net/pdf?id=Pb47B5t0pr | https://openreview.net/forum?id=Pb47B5t0pr | 4mqYkXM33l | official_review | 1,728,085,110,760 | Pb47B5t0pr | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission23/Reviewer_uyFP"
] | NLDL.org/2025/Conference | 2025 | title: Interesting problem, but more results would improve the clarity of the paper
summary: This paper discusses the performance-resource tradeoff in small- and large-scale DL models. The authors shed light on the inaccessibility to large-scale computing and data, which hinders researchers' ability, especially in the global south and healthcare-AI sectors. The paper proposed a new metric that considers both performance and resources for DL models. An evaluation of more than 100 models for medical image classification datasets shows that small-scale models can perform similarly to large-scale models under resource-constrained regimes. The authors also claim that fine-tuning using a pretrained model should always be a first option rather than training from scratch.
strengths: - The paper is well-motivated from the angle of compute resource inaccessibility and how this can hinder the ability of some researchers to build high-performing models, especially for the sake of equity in AI-based healthcare systems.
- The proposed performance-resource tradeoff metric is interesting and can be incorporated into closed-loop optimization frameworks.
- The paper is well-written and easy to follow.
weaknesses: - Although the paper is well-motivated, the authors haven't discussed the results more profoundly or mapped their observations onto realistic cases. For instance, additional metrics on the usage and adoption of specific models/datasets for healthcare systems per region would be interesting to emphasize the resource equity barrier problem highlighted by the authors.
- The PePR metric's intended usage is still unclear. How do the authors envisage using this metric in a multi-objective optimization problem, and what would be the realistic use case?
- The process of models selection for the study is not well detailed. What criteria did the authors use to choose CNN and other models? Given the variety of model architectures, training methods, and task complexity, How could the authors ensure a representative coverage of models, tasks, and datasets so as to prevent any unbiased results? This needs to be justified further.
- The paper lacks additional details on the fine-tuning method and the rationale behind the training for 10-epoch. What if the fine-tuning is performed for less/more than 10 epochs? Would the paper observations/results still hold?
- In Figure 3, it's clear that CNN and others show no difference, so such categorization is not needed. The scatter points can reflect DL models in general.
- The discussion section could be more comprehensive (e.g., . what recommendation would the authors propose to tackle the equity problem)
- Ablation studies, particularly on fine-tuning methods and budget, models architectures and scale, and complexity of datasets, are crucial to confirm the generality of the authors' observations. This would further validate the robustness of their results.
confidence: 4
justification: The paper posits an interesting research problem regarding equity in resource access and how this would impact DL-based healthcare systems. However, the authors need to discuss the results from the lens of the problem and add more results to back up their claims. Further details on models selection and fine-tuning method/budget need to be considered to verify the result's robustness. The discussion section needs to emphasize the importance of the observations and discuss limitations and recommendations for the community.
final_rebuttal_confidence: 4
final_rebuttal_justification: The authors provided detailed responses in the rebuttals, which addressed several of my concerns. The paper contains interesting results and insights worth sharing with the ML research community. Hence, I recommend accepting the revised version of the paper. |
M5hbCbJs8R | DP-KAN: Differentially Private Kolmogorov-Arnold Networks | [] | We study the Kolmogorov-Arnold Network (KAN), recently proposed as an alternative to the classical Multilayer Perceptron (MLP), in the application for differentially private model training. Using the DP-SGD algorithm, we demonstrate that KAN can be made private in a straightforward manner and evaluated its performance across several datasets. Our results indicate that the accuracy of KAN is not only comparable with MLP but also experiences similar deterioration due to privacy constraints, making it suitable for differentially private model training. | [
"Kolmogorov-Arnold Networks",
"Differential Privacy"
] | https://openreview.net/pdf?id=M5hbCbJs8R | https://openreview.net/forum?id=M5hbCbJs8R | xDPCIkHd0D | official_review | 1,728,295,617,673 | M5hbCbJs8R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission13/Reviewer_VhGM"
] | NLDL.org/2025/Conference | 2025 | title: Experimental results on DP-SGD in KAN are interesting, but questions remain
summary: The paper studies the use of differentially private SGD on Kolmogorov-Arnold networks and compare results to MLPs and linear regression. The results are, while not surprising, of practical interest but questions remain about the test and the authors interpretation of the results.
strengths: * The paper adress an interesting problem, namely the performance of Kolmogorov Arnold networks using differentially private gradient descent.
* The background and approach are briefly but concisely presented.
* The results provide insight into the performance of KAN using DP-SGD
weaknesses: * The results section, the most interesting part of the paper, is very brief. More details on experiments and datasets used are necessary.
* The paper would benefit from more analysis of the results.
* While interesting, the tests comparing KAN to MLP are not very comprehensive.
* There are questions around the interpretation of the results: Mayby I am missing something, but how is the statement "faster KAN did not suffer as much quality degradation as the MLP models" referring to figure 1 true?
confidence: 3
justification: While potentially providing practical experimental insight on the performance of KAN using DP-SGD, the evaluations are a bit too limited and too briefly described to be truly useful. Further, some interpretation of the results potentially put other interpretations and methodology into question. |
M5hbCbJs8R | DP-KAN: Differentially Private Kolmogorov-Arnold Networks | [] | We study the Kolmogorov-Arnold Network (KAN), recently proposed as an alternative to the classical Multilayer Perceptron (MLP), in the application for differentially private model training. Using the DP-SGD algorithm, we demonstrate that KAN can be made private in a straightforward manner and evaluated its performance across several datasets. Our results indicate that the accuracy of KAN is not only comparable with MLP but also experiences similar deterioration due to privacy constraints, making it suitable for differentially private model training. | [
"Kolmogorov-Arnold Networks",
"Differential Privacy"
] | https://openreview.net/pdf?id=M5hbCbJs8R | https://openreview.net/forum?id=M5hbCbJs8R | rjLcwu4zAR | official_review | 1,728,458,294,473 | M5hbCbJs8R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission13/Reviewer_D8gj"
] | NLDL.org/2025/Conference | 2025 | title: An empirical comparison between privacy-preserving training of MLPs and KANs
summary: This paper studies training of Kolmogorov-Arnold Networks (KANs) under the privacy protection by differential privacy (DP). DP is known to degrade the utility of the learning in order to protect the privacy of the individuals. This paper studies empirically whether this utility hit in deep learning can be reduced by using another function approximator than MLP. Authors apply DP stochastic gradient descent (DP-SGD) for KANs, and empirically compare the obtained accuracy against DP-SGD trained MLPs in regression and classification tasks with multiple data sets. The results suggest that KANs trained with DP-SGD suffer a smaller utility loss, caused by DP, than the MLPs.
strengths: The empirical results demonstrate that for certain machine learning tasks, training a KAN instead of an MLP can help improve the privacy utility trade-off. This is an interesting, and as far as I'm aware, a novel contribution.
The main method is presented clearly in the paper, and as authors use readily available packages for DP deep learning, I also believe the experimental results are correct.
weaknesses: The discussion of the results could be improved. Since DP-SGD introduces stochasticity into the learning outcomes, it would be important to repeat results of multiple repeats of the method in Table 1. This would help to assess if the lower utility degradation of the DP-KAN is actually statistically significant, or is it just a fluke. For the MNIST experiment you seem to report some error bars for the plots, however you do not explicitly state what these bars are.
Regarding the MNIST experiment. You write that "In the differentially private setting, fasterKAN did not suffer as much quality degradation as the MLP models". Is there an error in the labels? To me, it seems that the DP MLP is almost always always slightly above the DP FasterKAN, while the FasterKAN dominates the non-DP results. So how can it be then that DP MLP has a larger degradation in utility?
Although I believe the implementation is correct, it would be great if authors could clearly state the KAN loss function in the paper. As DP-SGD relies on loss function to be composable into a sum over the individuals, it would help the clarity if authors could explicitly say that the target loss is composable over the per-example losses in Eq. (1). This could be fixed by e.g. using different indexing than $x_i$ to denote the $i$th dimension (in Eq. (1)) and $i$th sample (in Alg. 1).
confidence: 4
justification: While the studying the different trade-offs between private training of MLPs vs. KANs is an interesting research direction, I believe the paper should provide more rigorous evidence on its claims. The most crucial thing would be to test multiple repeats of the DP-SGD algorithm to test if the differences between DP KAN and DP MLP are actually statistically significant. |
M5hbCbJs8R | DP-KAN: Differentially Private Kolmogorov-Arnold Networks | [] | We study the Kolmogorov-Arnold Network (KAN), recently proposed as an alternative to the classical Multilayer Perceptron (MLP), in the application for differentially private model training. Using the DP-SGD algorithm, we demonstrate that KAN can be made private in a straightforward manner and evaluated its performance across several datasets. Our results indicate that the accuracy of KAN is not only comparable with MLP but also experiences similar deterioration due to privacy constraints, making it suitable for differentially private model training. | [
"Kolmogorov-Arnold Networks",
"Differential Privacy"
] | https://openreview.net/pdf?id=M5hbCbJs8R | https://openreview.net/forum?id=M5hbCbJs8R | oJ2fuO9SCj | decision | 1,730,901,554,743 | M5hbCbJs8R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Reject |
M5hbCbJs8R | DP-KAN: Differentially Private Kolmogorov-Arnold Networks | [] | We study the Kolmogorov-Arnold Network (KAN), recently proposed as an alternative to the classical Multilayer Perceptron (MLP), in the application for differentially private model training. Using the DP-SGD algorithm, we demonstrate that KAN can be made private in a straightforward manner and evaluated its performance across several datasets. Our results indicate that the accuracy of KAN is not only comparable with MLP but also experiences similar deterioration due to privacy constraints, making it suitable for differentially private model training. | [
"Kolmogorov-Arnold Networks",
"Differential Privacy"
] | https://openreview.net/pdf?id=M5hbCbJs8R | https://openreview.net/forum?id=M5hbCbJs8R | U6q8JghCah | official_review | 1,726,973,346,304 | M5hbCbJs8R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission13/Reviewer_SnD9"
] | NLDL.org/2025/Conference | 2025 | title: Review for DP-KAN: Differentially Private Kolmogorov-Arnold Networks
summary: This study explores the Kolmogorov-Arnold Network (KAN) as an alternative to the classical Multilayer Perceptron (MLP) for differentially private model training. Using the DP-SGD algorithm, KAN was made differentially private and evaluated across several datasets. The results show that KAN achieves accuracy comparable to MLP and experiences similar performance deterioration under privacy constraints, making it suitable for privacy-preserving machine learning.
strengths: - Paper is clearly written
- Contribution is clearly mentioned. "first integration of KANs with differentially private training algorithms"
weaknesses: - Contribution is weak, It is not clear what was the challenge in addressing the research question?
- Experiments could have been more elaborate in the setting,
- From DP side - probably adding more budget, more DP variations - these could have made the paper more concrete in the claims.
- From KAN side - May be more variations in experiments as mentioned in the related works.
- Implementation details are missing. If the implementation produced some challenges, that could have been noted down as well.
confidence: 5
justification: Even though contribution is weak in terms of just addition of the DPSGD (Adam) to KANs, paper could have been more complete from the perspective of experiments. Currently that is lacking. Hence the evaluation of the paper.
With the addition of more comprehensive set of experiments, the paper can be made better.
final_rebuttal_confidence: 5
final_rebuttal_justification: Based on the rebuttal by authors and discussion among the reviewers and AC. |
M5hbCbJs8R | DP-KAN: Differentially Private Kolmogorov-Arnold Networks | [] | We study the Kolmogorov-Arnold Network (KAN), recently proposed as an alternative to the classical Multilayer Perceptron (MLP), in the application for differentially private model training. Using the DP-SGD algorithm, we demonstrate that KAN can be made private in a straightforward manner and evaluated its performance across several datasets. Our results indicate that the accuracy of KAN is not only comparable with MLP but also experiences similar deterioration due to privacy constraints, making it suitable for differentially private model training. | [
"Kolmogorov-Arnold Networks",
"Differential Privacy"
] | https://openreview.net/pdf?id=M5hbCbJs8R | https://openreview.net/forum?id=M5hbCbJs8R | A1VRIgkafe | meta_review | 1,730,317,068,245 | M5hbCbJs8R | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission13/Area_Chair_ibJ4"
] | NLDL.org/2025/Conference | 2025 | metareview: The authors present privacy-aware Kolmogorov-Arnold Networks (KANs), i.e. KANs trained with differential privacy, ensuring privacy without compromising accuracy. KANs exhibit similar accuracy degradation to MLPs when trained with differential privacy, making them a promising option for privacy-preserving model training. Differentially private stochastic gradient descent (DP-SGD), used in this work, is a well-established family of algorithms that allows training models with differential privacy guarantees.
However, reviewers unanimously highlighted that the paper could have been better written, e.g. the appendix could have been part of the main paper, implementation details should have been clearer, some more experiments could have been provided, etc.
This line of work has definitely merits, and hopefully, the authors can submit an improved version of this paper to another venue in the coming period.
recommendation: Reject
suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed
confidence: 4: The area chair is confident but not absolutely certain |
KcBMGkip79 | Bounds on the Generalization Error in Active Learning | [
"Vincent Menden",
"Yahya Saleh",
"Armin Iske"
] | We establish empirical risk minimization principles for active learning by
deriving a family of upper bounds on the generalization error. Aligning with empirical observations, the bounds suggest that superior query algorithms can be obtained by
combining both informativeness and representativeness query strategies, where the latter is assessed using integral probability metrics.
To facilitate the use of these bounds in
application, we systematically link diverse active
learning scenarios, characterized by their loss functions and hypothesis
classes to their corresponding upper bounds. Our results show that
regularization techniques used to constraint the complexity of various hypothesis
classes are sufficient conditions to ensure the validity of the bounds.
The present work enables principled
construction and empirical quality-evaluation of query algorithms in active learning. | [
"Active Learning",
"Empirical Risk Minimization Principle",
"Integral Probability Metric"
] | https://openreview.net/pdf?id=KcBMGkip79 | https://openreview.net/forum?id=KcBMGkip79 | qnEdU49c7J | official_review | 1,728,472,669,276 | KcBMGkip79 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission18/Reviewer_8tjT"
] | NLDL.org/2025/Conference | 2025 | title: Assessment of Theoretical Contributions and Practical Implications
summary: The paper derives a family of upper bounds on generalization error in the context of active learning (AL). It combines query strategies based on informativeness and representativeness to optimize the selection of data points for labeling. The authors propose using Integral Probability Metrics (IPMs) to quantify representativeness and link various active learning scenarios with corresponding generalization bounds. The main result shows that regularization techniques are sufficient to ensure the validity of these bounds, offering theoretical insights into the design of more effective active learning query algorithms.
strengths: The framework is mathematically sound, building on established concepts like empirical risk minimization (ERM), Rademacher complexity, and IPMs. The authors rigorously prove that their bounds hold under specific conditions, especially when applying regularization techniques to control hypothesis complexity.The derivations are logically consistent and follow from well-known theorems in statistical learning theory, such as the use of Rademacher complexity to estimate the generalization error. The assumptions and conditions (e.g., Lipschitz continuity, regularization) are standard in the field, ensuring the correctness of the results under the proposed settings.The authors systematically explore various active learning scenarios, linking them to their corresponding upper bounds. This comprehensive approach allows for a nuanced understanding of how different hypothesis classes and loss functions impact generalization error. Additionally, the paper is well-structured, with a logical flow that guides the reader through complex concepts without losing clarity.
weaknesses: While the paper presents a rigorous mathematical framework, there are potential concerns regarding the assumptions made in the derivations. The reliance on specific conditions for the validity of the upper bounds may limit the applicability of the results. For instance, the assumption that the training data follows a particular distribution could be restrictive in real-world scenarios where data may not conform to these assumptions. Additionally, the paper could benefit from a more detailed discussion on the implications of violating these assumptions and how it might affect the generalization bounds.
The quality of the research is generally high, but there are areas where the depth of analysis could be improved. For example, while the paper discusses the integration of informativeness and representativeness in query strategies, it does not provide a comprehensive exploration of how these strategies can be effectively combined in practice. A more detailed examination of practical implementations or case studies would enhance the quality of the work and provide clearer guidance for practitioners. Furthermore, the paper could benefit from a more thorough comparison with existing methods in active learning, highlighting the advantages and limitations of the proposed approach.
The lack of empirical validation is a notable weakness. While the theoretical framework is robust, the absence of experimental results to support the claims made in the paper raises questions about the practical applicability of the derived bounds. Empirical studies demonstrating the effectiveness of the proposed query strategies in real-world scenarios would significantly enhance the credibility of the findings. The authors should consider including experiments that compare their approach with existing active learning methods to provide a clearer picture of its performance.
confidence: 4
justification: The assessment of the paper "Bounds on the Generalization Error in Active Learning" highlights several areas for improvement. Firstly, the reliance on specific assumptions about data distribution raises concerns about the applicability of the theoretical results in real-world scenarios, necessitating a discussion on these implications. While the theoretical framework is strong, the paper lacks depth in practical guidance for effectively combining informativeness and representativeness in query strategies, and more examples or case studies would enhance its quality. Additionally, certain mathematical notations and concepts could be made more accessible to readers without a strong background in the field, improving clarity. The paper could also better connect its findings to real-world applications in active learning, emphasizing how the derived bounds can be utilized in practice. Lastly, the absence of empirical studies to support the theoretical claims is a significant weakness; conducting experiments to demonstrate the effectiveness of the proposed strategies would strengthen the paper's credibility. Addressing these points would enhance the paper's impact and contribute more effectively to the field of active learning. |
KcBMGkip79 | Bounds on the Generalization Error in Active Learning | [
"Vincent Menden",
"Yahya Saleh",
"Armin Iske"
] | We establish empirical risk minimization principles for active learning by
deriving a family of upper bounds on the generalization error. Aligning with empirical observations, the bounds suggest that superior query algorithms can be obtained by
combining both informativeness and representativeness query strategies, where the latter is assessed using integral probability metrics.
To facilitate the use of these bounds in
application, we systematically link diverse active
learning scenarios, characterized by their loss functions and hypothesis
classes to their corresponding upper bounds. Our results show that
regularization techniques used to constraint the complexity of various hypothesis
classes are sufficient conditions to ensure the validity of the bounds.
The present work enables principled
construction and empirical quality-evaluation of query algorithms in active learning. | [
"Active Learning",
"Empirical Risk Minimization Principle",
"Integral Probability Metric"
] | https://openreview.net/pdf?id=KcBMGkip79 | https://openreview.net/forum?id=KcBMGkip79 | TOPiaZZEbc | official_review | 1,728,320,597,498 | KcBMGkip79 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission18/Reviewer_6img"
] | NLDL.org/2025/Conference | 2025 | title: Interesting math, potentially good information, lackluster communication
summary: This paper presents a theoretically derived upper bound on generalization risk in active learning environments. The authors provide mathematical context and justification for their upper bound, then provide examples by applying their theorem to some common learning cases.
strengths: The authors demonstrate a clear understanding of the mathematics of active learning. The abstract, introduction, and conclusion offer concise and effective summaries of the work. Mathematical notation is mostly very consistent. The authors clearly recognize similar work in the field and what differentiates their presented theorem from established literature.
weaknesses: Several key topics and values are not clearly explained:
- ERM is never defined in the text.
- $Q$ and $X$ are defined with identical properties but are not clearly distinguished from one another.
- By extension, $g$ and $h$ are both defined as mapping $\mathbb{R}^n \rightarrow \mathbb{R}$ (broadly speaking) without clearly distinguishing the difference between them.
- In line 150, the assumption that "$l(y,h(x))\leq k$ for some $k$>0" is stated. It's unclear at this point in the text if this is meant to be read as "$l(y,h(x))$ is bound by an upper limit of $k>0$" or "$l(y,h(x))$ is positive and finite". At first, I interpreted it as the former. This confused me when you introduced "$l_1(y,h(x)) := |y-h(x)|$" in line 307, which is not inherently bound by an upper limit. Only after, in Theorem 3, do you clarify that you derive a condition on $w$ that does constrain the value of $l_1(y,h(x))$. But again, this isn't clearly stated in the text.
- The term "generator" is introduced in line 195 without explaining its relevance for the problem setting.
- This may be a result of one of the above points, but it's unclear to me how the derived theorems and conditions (in Table 1) inform the design of the query strategy, as stated in the conclusion.
In summary, my impression is the authors failed to thoroughly explain some key concepts and terminology which hinders the communication of the main topic of the manuscript.
confidence: 3
justification: I am interested in the ideas proposed in this paper, but I think the communication of the ideas needs work. Some minor revision of the manuscript and/or clarifications from the authors on some key points could go a long way in clarifying the information presented. As a result, it's possible my understanding of the paper is flawed, even after several thorough readings. |
KcBMGkip79 | Bounds on the Generalization Error in Active Learning | [
"Vincent Menden",
"Yahya Saleh",
"Armin Iske"
] | We establish empirical risk minimization principles for active learning by
deriving a family of upper bounds on the generalization error. Aligning with empirical observations, the bounds suggest that superior query algorithms can be obtained by
combining both informativeness and representativeness query strategies, where the latter is assessed using integral probability metrics.
To facilitate the use of these bounds in
application, we systematically link diverse active
learning scenarios, characterized by their loss functions and hypothesis
classes to their corresponding upper bounds. Our results show that
regularization techniques used to constraint the complexity of various hypothesis
classes are sufficient conditions to ensure the validity of the bounds.
The present work enables principled
construction and empirical quality-evaluation of query algorithms in active learning. | [
"Active Learning",
"Empirical Risk Minimization Principle",
"Integral Probability Metric"
] | https://openreview.net/pdf?id=KcBMGkip79 | https://openreview.net/forum?id=KcBMGkip79 | KWJbsd7oBa | official_review | 1,728,233,708,628 | KcBMGkip79 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission18/Reviewer_jtnw"
] | NLDL.org/2025/Conference | 2025 | title: IPM based ERM analysis in Active Learning
summary: The paper aims to understand active learning by deriving new upper bounds on the generalization error. It introduces Integral Probability Metrics (IPMs) to measure how well the selected samples represent the overall data distribution. The key idea is that active learning algorithms can perform better by balancing the selection of informative samples with representative ones. The authors connect different active learning scenarios, defined by various loss functions and hypothesis classes, to these theoretical bounds. They also show that using regularization techniques, which limit the complexity of models, helps ensure these bounds are valid.
strengths: 1. The paper extends the Empirical Risk Minimization (ERM) principle to the active learning setting, providing a solid theoretical foundation for designing active learning algorithms.
2. It emphasizes the importance of combining informativeness and representativeness when selecting samples, which aligns with practical observations in active learning.
3. The paper is mathematically sound, with clear proofs and reliance on established concepts like Rademacher complexity and IPMs.
weaknesses: 1. Calculating IPMs can be resource-intensive, especially in high-dimensional spaces, which might limit practical implementation.
2. The conditions needed for the theoretical bounds, such as specific regularization constraints, may be strict or limited in some real-world applications. It might require further clarification.
**Comments on selected Theorems:**
- Overall, please clarify why 1-Lipschtize is sufficient to prove to a broader audience.
**Theorem 1:**
- The theorem is a well-known result in statistical learning theory.
- However, the constant $c$ is not explicitly defined. Typically, $c$ depends on the loss function's bound $k$ or other parameters.
**Theorem 2:**
- The constant $c$ is not defined in the statement (or $C=c?$ from Theorem 1).
- The condition $\ell^y \in F$ depends on the specific loss function and hypothesis class, and it may not always hold without additional assumptions.
- Since $d_\mathcal{F}(P_X, P_Q)$ involves IPMs, which may be challenging to compute in practice, mentioning methods for estimating or approximating IPMs could enhance the practical usefulness of the theorem in the active learning.
**Theorem 6:**
- Curious if the assumption is valid choice. What happens if the numerator $1 - M_Y - |b|$ could be non-positive if $M_Y + |b| \geq 1$?
**Minor Comments:**
1. Some of the references missed the year. E.g. [5,6,7,14, 18] and so on..
confidence: 3
justification: The paper's theoretical framework is sound. It appropriately extends existing principles to active learning and uses well-established mathematical tools. The assumptions and conditions under which the results hold are clearly stated. By showing that regularization can ensure the validity of the bounds, it effectively connects theoretical findings with practical machine-learning techniques. Overall, the paper's theoretical contributions are solid and provide good insights into active learning.
**Regarding Generalizing Representativeness Measurement with IPMs**:
Earlier studies measured how well samples represent the data using metrics like Maximum Mean Discrepancy (MMD) or the Wasserstein distance. However, these methods worked only under limited conditions. This paper introduces Integral Probability Metrics (IPMs) to measure representativeness. Using different generator classes, IPMs can include various statistical distances, such as Total Variation and Kantorovic distance. This approach makes measuring representativeness more flexible and adaptable. |
KcBMGkip79 | Bounds on the Generalization Error in Active Learning | [
"Vincent Menden",
"Yahya Saleh",
"Armin Iske"
] | We establish empirical risk minimization principles for active learning by
deriving a family of upper bounds on the generalization error. Aligning with empirical observations, the bounds suggest that superior query algorithms can be obtained by
combining both informativeness and representativeness query strategies, where the latter is assessed using integral probability metrics.
To facilitate the use of these bounds in
application, we systematically link diverse active
learning scenarios, characterized by their loss functions and hypothesis
classes to their corresponding upper bounds. Our results show that
regularization techniques used to constraint the complexity of various hypothesis
classes are sufficient conditions to ensure the validity of the bounds.
The present work enables principled
construction and empirical quality-evaluation of query algorithms in active learning. | [
"Active Learning",
"Empirical Risk Minimization Principle",
"Integral Probability Metric"
] | https://openreview.net/pdf?id=KcBMGkip79 | https://openreview.net/forum?id=KcBMGkip79 | ChKLqmglNj | meta_review | 1,730,421,121,801 | KcBMGkip79 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission18/Area_Chair_mwN2"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper established ERM generalization bounds for active learning, based on Integral Probability Metrics. Some issues were raised by reviewers, mostly related to lack of direct applications of the proposed theoretical framework to concrete algorithms. Yet, the derivation of the proposed bound is meaningful and could lead to improvments in active learning algorithms. I believe it can be of interest for the audience of the conference, and therefore I recommend to accept it in the program.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 3: The area chair is somewhat confident |
KcBMGkip79 | Bounds on the Generalization Error in Active Learning | [
"Vincent Menden",
"Yahya Saleh",
"Armin Iske"
] | We establish empirical risk minimization principles for active learning by
deriving a family of upper bounds on the generalization error. Aligning with empirical observations, the bounds suggest that superior query algorithms can be obtained by
combining both informativeness and representativeness query strategies, where the latter is assessed using integral probability metrics.
To facilitate the use of these bounds in
application, we systematically link diverse active
learning scenarios, characterized by their loss functions and hypothesis
classes to their corresponding upper bounds. Our results show that
regularization techniques used to constraint the complexity of various hypothesis
classes are sufficient conditions to ensure the validity of the bounds.
The present work enables principled
construction and empirical quality-evaluation of query algorithms in active learning. | [
"Active Learning",
"Empirical Risk Minimization Principle",
"Integral Probability Metric"
] | https://openreview.net/pdf?id=KcBMGkip79 | https://openreview.net/forum?id=KcBMGkip79 | Ath6ovAO5k | decision | 1,730,901,555,229 | KcBMGkip79 | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Poster)
comment: We recommend a poster presentation given the AC and reviewers recommendations. |
KaZzDtUeJY | Predicting Oligomeric states of Fluorescent Proteins using Mamba | [
"Agney K Rajeev",
"Joel Joseph K B",
"Subhankar Mishra"
] | Fluorescent proteins (FPs) are essential tools in biomedical imaging, known for their ability to absorb and emit light, thereby allowing visualization of biological processes. Understanding the oligomeric state is crucial, as monomeric forms are often preferred in applications to minimize potential artifacts and prevent interference with cellular functions. Experimental methods to find the oligomeric state can be time-consuming and expensive. Most of the current computational model is CPU-based, limiting their speed and scalability. This paper studies the effectiveness of GPU-based deep-learning models in predicting the oligomeric states of fluorescent proteins directly from their amino acid sequences, specifically focusing on the Mamba architecture. Various protein-specific augmentations were also employed to enhance the model's generalizability. Our results indicate that the mamba-based model achieves accuracy and F1 score close to 90\% and an MCC value of 0.8 with in predicting the oligomeric states of fluorescent proteins directly from its amino acid sequence. The code used in this study is available at [GitHub repository](https://github.com/smlab-niser/FluorMamba). | [
"Mamba",
"Fluorescent proteins",
"Oligomerization state"
] | https://openreview.net/pdf?id=KaZzDtUeJY | https://openreview.net/forum?id=KaZzDtUeJY | ycIbYbIl5N | official_review | 1,728,992,720,349 | KaZzDtUeJY | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission50/Reviewer_m2uG"
] | NLDL.org/2025/Conference | 2025 | title: Promising Approach with Mamba Architecture for Predicting Protein Oligomeric States, but Lacks Key Comparisons and Clarity
summary: This paper proposes a method to predict the oligomeric state of fluorescent proteins using a deep learning model based on the Mamba architecture. The authors apply different data augmentation methods to enhance the model’s generalizability and compare it to other sequence-based models like RNN, LSTM, and Transformer. The Mamba-based model achieves high accuracy while reducing computational costs compared to other models.
strengths: The paper contributes Mamba architecture scheme for predicting the oligomeric state of fluorescent proteins, which improves accuracy and computational efficiency compared to traditional models. The introduction of protein-specific data augmentation techniques enhances the model's generalizability, offering a new direction for handling limited datasets in protein classification tasks.
weaknesses: The introduction lacks relevant background on the Mamba architecture and its prior applications, as well as the role of data augmentation in this context. Expanding on these topics would provide a clearer foundation for understanding the paper's contributions.
While the paper highlights the advantages of GPU-based methods, it would benefit from additional experiments comparing the Mamba-based model with CPU-based approaches discussed in the literature review section. This comparison could further emphasize the performance improvements, and the literature review could be condensed to make room for these experiments.
In section 3.1, the model implementation details could be moved to a separate "Implementation" section, making the paper more organized.
Table 2 is not explicitly referenced in the main body, which diminishes its usefulness.
Lastly, the paper does not discuss specific sequence patterns or features in a protein's amino acid sequence that may influence its oligomeric state. Addressing this could provide deeper biological insights and strengthen the interpretability of the model's predictions.
confidence: 4
justification: The paper presents a promising method using the Mamba architecture to predict the oligomeric state of fluorescent proteins, improving computational efficiency and accuracy compared to traditional models. Its use of data augmentation techniques is a strong contribution.
However, the paper lacks sufficient background on Mamba and data augmentation, which are crucial to understanding the approach. Additionally, the absence of direct comparisons with CPU-based methods weakens the evidence for the model's improvements. Organizational issues, such as missing references to key tables, also detract from clarity. While the contributions are notable, these limitations reduce the overall impact of the work. |
KaZzDtUeJY | Predicting Oligomeric states of Fluorescent Proteins using Mamba | [
"Agney K Rajeev",
"Joel Joseph K B",
"Subhankar Mishra"
] | Fluorescent proteins (FPs) are essential tools in biomedical imaging, known for their ability to absorb and emit light, thereby allowing visualization of biological processes. Understanding the oligomeric state is crucial, as monomeric forms are often preferred in applications to minimize potential artifacts and prevent interference with cellular functions. Experimental methods to find the oligomeric state can be time-consuming and expensive. Most of the current computational model is CPU-based, limiting their speed and scalability. This paper studies the effectiveness of GPU-based deep-learning models in predicting the oligomeric states of fluorescent proteins directly from their amino acid sequences, specifically focusing on the Mamba architecture. Various protein-specific augmentations were also employed to enhance the model's generalizability. Our results indicate that the mamba-based model achieves accuracy and F1 score close to 90\% and an MCC value of 0.8 with in predicting the oligomeric states of fluorescent proteins directly from its amino acid sequence. The code used in this study is available at [GitHub repository](https://github.com/smlab-niser/FluorMamba). | [
"Mamba",
"Fluorescent proteins",
"Oligomerization state"
] | https://openreview.net/pdf?id=KaZzDtUeJY | https://openreview.net/forum?id=KaZzDtUeJY | loMI5CxzdX | decision | 1,730,901,556,636 | KaZzDtUeJY | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: We recommend an oral and a poster presentation given the AC and reviewers recommendations. |
KaZzDtUeJY | Predicting Oligomeric states of Fluorescent Proteins using Mamba | [
"Agney K Rajeev",
"Joel Joseph K B",
"Subhankar Mishra"
] | Fluorescent proteins (FPs) are essential tools in biomedical imaging, known for their ability to absorb and emit light, thereby allowing visualization of biological processes. Understanding the oligomeric state is crucial, as monomeric forms are often preferred in applications to minimize potential artifacts and prevent interference with cellular functions. Experimental methods to find the oligomeric state can be time-consuming and expensive. Most of the current computational model is CPU-based, limiting their speed and scalability. This paper studies the effectiveness of GPU-based deep-learning models in predicting the oligomeric states of fluorescent proteins directly from their amino acid sequences, specifically focusing on the Mamba architecture. Various protein-specific augmentations were also employed to enhance the model's generalizability. Our results indicate that the mamba-based model achieves accuracy and F1 score close to 90\% and an MCC value of 0.8 with in predicting the oligomeric states of fluorescent proteins directly from its amino acid sequence. The code used in this study is available at [GitHub repository](https://github.com/smlab-niser/FluorMamba). | [
"Mamba",
"Fluorescent proteins",
"Oligomerization state"
] | https://openreview.net/pdf?id=KaZzDtUeJY | https://openreview.net/forum?id=KaZzDtUeJY | VVDQWZhIcJ | official_review | 1,728,910,223,572 | KaZzDtUeJY | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission50/Reviewer_hT39"
] | NLDL.org/2025/Conference | 2025 | title: Review of the paper "Predicting Oligomeric states of Fluorescent Proteins using Mamba"
summary: This paper presents a method for categorizing the oligomeric states of proteins based on their amino acid sequences. The novelty of the research lies in its use of deep learning frameworks, particularly the recently developed Mamba architecture. The authors demonstrates that this approach outperforms other deep learning methods, such as LSTM, RNN, and Transformer models, in terms of accuracy, sensitivity, and computational efficiency.
strengths: The primary objective of this paper appears to be improving computational efficiency in predicting oligomeric states, with a secondary focus on increasing sensitivity in categorization. While the justification for using the Mamba architecture is not clearly articulated, it seems that Mamba has been tested and validated in DNA modeling, a field closely related to protein sequencing. Given that Mamba is a newly emerged framework, testing it's perfomance towards other deep learning architectures represents a valuable contribution to current research on Mamba's potential use cases. The paper thoroughly explores the use of various data augmentation techniques, including ablation studies. It is well-written and adheres to academic standards, introducing the reader to the topic of oligomeric state determination in a clear and concise manner. Additionally, the inclusion of a reference to the codebase used for testing enhances the paper’s transparency and provides deeper insights into its technical aspects.
weaknesses: The literature review mentions the Gradient Boost decision tree-based method FPredX, which reportedly achieves F1 scores of 93.3%, significantly higher than the results presented in this paper. Why is this not addressed in the conclusion? A decision tree approach would also offer superior explainability, a topic that is notably absent from the discussion. While lower computational costs (GPU vs. CPU-based, as mentioned) are advantageous, should this come at the expense of accuracy and sensitivity? Additionally, no direct metric comparison between FPredX and this method is provided, so can we be certain this approach performs better in terms of computational time? Given the brief format of this conference paper, I recommend shortening the somewhat lengthy literature review and focusing more on discussing these critical aspects.
The paper compares performance to other deep learning frameworks, such as RNN, LSTM, and Transformer, but does not specify which implementations are used. Upon reviewing the code, it appears that standard PyTorch API calls are employed for testing these, but this should be explicitly mentioned in the text.
While the paper claims that hyperparameter optimization was performed, it does not detail the method used. Was grid search employed? From what I gather in section 5.3, the smallest Mamba model achieves the best performance... Furthermore, the sensitivity tests in section 5.3 combine various augmentations to evaluate their effects, but only 11 combinations are tested, despite there being 6 different techniques, which could result in 64 possible combinations. The rationale for selecting these 11 baselines is not explained.
Additionally, the graphs in Figures 3-5 show that the Mamba parameter size stops at 10^8, several orders of magnitude smaller than those of the other methods. Why?
Finally, there are a few minor grammatical issues. Ensure that all acronyms, such as MCC and MAC, are defined. In line 206, the sentence '...overfitting of data, following which a dense layer...' could benefit from rewording. Similarly, in line 264, the phrase '...attempted to reproduce a few test results...' needs clarification for greater precision. Also, in line 147, there is a punctuation mark following a reference, while the sentence continues. Please review for clarity and correctness.
confidence: 4
justification: The paper regrettably demonstrates no comparison towards the stated methods previously developed by research in this field, forming a major drawback. As such, the paper needs to develop these baselines, preferably before being presented at the conference. Also, some minor flaws were highlighted according to the need for detailing more on experimental setup. However, gaining broader knowledge on the application Mamba as a novel, rather unexplored deep learning framework is advantageous for the community. Moreover, Mamba seems justifiable for protein sequencing, where processing time can be a critical factor, even if the evidence for this was lacking.
final_rebuttal_confidence: 3
final_rebuttal_justification: Based on the revision and comments from the authours in the process of rebuttal, and the discussions among the reviewers as followed, I belive the paper can be presented at NLDL 2025. |
KaZzDtUeJY | Predicting Oligomeric states of Fluorescent Proteins using Mamba | [
"Agney K Rajeev",
"Joel Joseph K B",
"Subhankar Mishra"
] | Fluorescent proteins (FPs) are essential tools in biomedical imaging, known for their ability to absorb and emit light, thereby allowing visualization of biological processes. Understanding the oligomeric state is crucial, as monomeric forms are often preferred in applications to minimize potential artifacts and prevent interference with cellular functions. Experimental methods to find the oligomeric state can be time-consuming and expensive. Most of the current computational model is CPU-based, limiting their speed and scalability. This paper studies the effectiveness of GPU-based deep-learning models in predicting the oligomeric states of fluorescent proteins directly from their amino acid sequences, specifically focusing on the Mamba architecture. Various protein-specific augmentations were also employed to enhance the model's generalizability. Our results indicate that the mamba-based model achieves accuracy and F1 score close to 90\% and an MCC value of 0.8 with in predicting the oligomeric states of fluorescent proteins directly from its amino acid sequence. The code used in this study is available at [GitHub repository](https://github.com/smlab-niser/FluorMamba). | [
"Mamba",
"Fluorescent proteins",
"Oligomerization state"
] | https://openreview.net/pdf?id=KaZzDtUeJY | https://openreview.net/forum?id=KaZzDtUeJY | R6q02yJgm9 | official_review | 1,728,517,632,425 | KaZzDtUeJY | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission50/Reviewer_KfRX"
] | NLDL.org/2025/Conference | 2025 | title: A neural network approach to predict state of fluorescnet proteins such as GFP and dsRed.
summary: Fluorophores used in biomedical imaging may consist of a single fluorescing unit, or a cluster of several units, influencing their ability to provide a useful signal. This paper investigates the possibility to predict if proteins exist as single or grouped units, based on the sequence of the protein.
This field of applying deep neural networks is a bit far from my expertise, but I have done my best in asses strengths and weaknesses. First of all, the strengths include approaches for improving on the output from a tool called Mamba, by having large focus on reducing computational time by moving from a slower CPU-based approach to faster and more efficient GPU processing. The number of papers in the field is limited, but the authors claim they outperform a previous studies utilizing models such as RNN and LSTM.
strengths: On the weakness-side is the very limited explanation of the field of protein 'clumping', making it tricky for the reader to understand input as well as output from the proposed models. The discussion becomes even harder to follow for someone with limited understanding of the application area. Improving on the output from a tool called Mamba, by having large focus on reducing computational time by moving from a slower CPU-based approach to faster and more efficient GPU processing, can potentially be useful aslo for other approaches to understand moleccular mechanisms. In general, I believe the paper is sound and well-structured.
weaknesses: Due to its slightly limited novelty and lack of information to convey the purpose of the project, I will rank it lower.
confidence: 3
justification: I justify my positive assessment by the fact that I believe a broad application in the biomedical and cell biology field will benefit from this type of publications. The full assessment of the method is difficult to judge with my limited knowledge in the application field, but strategies for training, validation and testing seem to be sound, and the input data is balanced.
final_rebuttal_confidence: 3
final_rebuttal_justification: Using deep learning methods to categorize proteins as forming aggregates or not based on their amino acid sequence is a rather narrow field, and likely interesting for a limited audience. The major drawback of the paper is the lack of comparison with previous developed methods in the field (other reviewers point to FPredX); however, the authors plan to do this comparison prior to submitting their final photo-ready version, according to the reply to @m2uG. Apart from these two concerns, the paper is now, with some revisions. very well written and nicely structured. I therefore recommend a weak accept. |
KaZzDtUeJY | Predicting Oligomeric states of Fluorescent Proteins using Mamba | [
"Agney K Rajeev",
"Joel Joseph K B",
"Subhankar Mishra"
] | Fluorescent proteins (FPs) are essential tools in biomedical imaging, known for their ability to absorb and emit light, thereby allowing visualization of biological processes. Understanding the oligomeric state is crucial, as monomeric forms are often preferred in applications to minimize potential artifacts and prevent interference with cellular functions. Experimental methods to find the oligomeric state can be time-consuming and expensive. Most of the current computational model is CPU-based, limiting their speed and scalability. This paper studies the effectiveness of GPU-based deep-learning models in predicting the oligomeric states of fluorescent proteins directly from their amino acid sequences, specifically focusing on the Mamba architecture. Various protein-specific augmentations were also employed to enhance the model's generalizability. Our results indicate that the mamba-based model achieves accuracy and F1 score close to 90\% and an MCC value of 0.8 with in predicting the oligomeric states of fluorescent proteins directly from its amino acid sequence. The code used in this study is available at [GitHub repository](https://github.com/smlab-niser/FluorMamba). | [
"Mamba",
"Fluorescent proteins",
"Oligomerization state"
] | https://openreview.net/pdf?id=KaZzDtUeJY | https://openreview.net/forum?id=KaZzDtUeJY | Ew1lXBpYVQ | official_review | 1,728,508,761,932 | KaZzDtUeJY | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission50/Reviewer_yi5B"
] | NLDL.org/2025/Conference | 2025 | title: Predicting Oligomeric states of Fluorescent Proteins using Mamba
summary: This paper test the new Mamba architecture to predict (classify) whether a fluorescent protein's structure will be monomeric or oligomeric based on the amino acid sequence. Two (standard) datasets are used, created and curated by others. The main experiments are run on a dataset (FP) of totally 409 proteins and then to some extent repeated on a separate set of 656 proteins (OSFP set). Some augmentation approaches are compared and the overall performance is compared to other methods/networks previously used for this. The results show slightly higher numbers (translated to ~1 more sequence correctly classified) than previously reported but the model is computationally more effective.
strengths: It is good and important to evaluate and modify new approaches to different applications. It would be good with more details on the application/biological use. Is this an example of protein structure (3D and quarternary) prediction in general? How many fluorescent proteins are there where this would be interesting to use? Who would be a user- the companies selling fluorescent proteins, others? Did it lead to any biological insights, or how could the results be further analyzed to get biological insights?
weaknesses: *Figure 1, copy from another publication- copyright?
*Figure 1- caption is a bit confusing. DsRed is the right figure (tetramer). Also, one cannot say "monomeric oligomerization"
*Explain what the MCC is and why you use it (not standard).
*All parts and abbreviations in the model should be (briefly at least) explained - what is H3 and give ref and/or explain SiLu.
*It is confusing what you call your datasets - the "name" or abbreviation you use should be given when they are described.
*Unclear why you do 10-fold cross-validation AND your ID approach. What do you wish to see/understand from the two approaches?
*Is there really a difference to the baseline for any of the augmentation methods? What is the difference between 0.8875 and eg. 0.8899 in your experimental setup?
*Why don't you show the variance/spread between your repeats (10 and 100 respectively)?
*I lack a proper exploration or more thorough investigation of the results. Which sequences are problematic? Can any biological and/or patterns be identified?
*Same Q also for the other results. How do the numbers relate to your dataset sizes? what do they actually mean? How many more are actually correctly classified? To me it seems like the claim "outperforms" is rather strong for these numbers.....
*Papers on arxiv that are published in journals/conferences should be referenced to the journal/conference.
confidence: 5
justification: This is a typical student project: Try this new model on a dataset and see how it performs. The novelty is very low and so is the significance (very low on biology non-existing on technology). The presentation also lacks in clarity.
The only reason for a non-reject decision is that it is important to encourage students to "test" research approaches and practice on writing scientific papers. The presentation and motivation of the paper can be improved with rather little effort but the very low novelty and significance will not change. |
KaZzDtUeJY | Predicting Oligomeric states of Fluorescent Proteins using Mamba | [
"Agney K Rajeev",
"Joel Joseph K B",
"Subhankar Mishra"
] | Fluorescent proteins (FPs) are essential tools in biomedical imaging, known for their ability to absorb and emit light, thereby allowing visualization of biological processes. Understanding the oligomeric state is crucial, as monomeric forms are often preferred in applications to minimize potential artifacts and prevent interference with cellular functions. Experimental methods to find the oligomeric state can be time-consuming and expensive. Most of the current computational model is CPU-based, limiting their speed and scalability. This paper studies the effectiveness of GPU-based deep-learning models in predicting the oligomeric states of fluorescent proteins directly from their amino acid sequences, specifically focusing on the Mamba architecture. Various protein-specific augmentations were also employed to enhance the model's generalizability. Our results indicate that the mamba-based model achieves accuracy and F1 score close to 90\% and an MCC value of 0.8 with in predicting the oligomeric states of fluorescent proteins directly from its amino acid sequence. The code used in this study is available at [GitHub repository](https://github.com/smlab-niser/FluorMamba). | [
"Mamba",
"Fluorescent proteins",
"Oligomerization state"
] | https://openreview.net/pdf?id=KaZzDtUeJY | https://openreview.net/forum?id=KaZzDtUeJY | 6LWowT7bRK | meta_review | 1,730,561,477,554 | KaZzDtUeJY | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission50/Area_Chair_4qBe"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper proposes a method using the Mamba deep learning architecture to predict the oligomeric state (monomeric vs. oligomeric) of fluorescent proteins based on their amino acid sequences. The model is tested for accuracy, sensitivity, and computational efficiency against traditional methods, including RNN, LSTM, and Transformer models, as well as gradient-boosted decision trees (FPredX), as well as CPU-based models ( in an updated version of the manuscript). Various data augmentation methods are also applied to enhance generalizability.
Strengths:
1) The proposed work effectively applies the Mamba architecture to predict oligomeric states of fluorescent proteins. It achieves high accuracy while reducing computational costs compared to traditional CPU-based methods.
2) Applying Mamba ( a newly emerged framework) to protein sequence analysis offers a new direction in handling limited datasets in protein classification tasks.
3) By shifting from CPU to GPU processing, the model has shown efficiency improvements that could be relevant to various biomedical and molecular research applications.
4) To improve the model's generalizability for limited datasets, the authors have also introduced some novel protein-specific data augmentation techniques, particularly useful in protein classification tasks.
5) Testing Mamba's potential beyond DNA modeling provides new insights, and including a reference to the codebase enhances transparency.
6) The work offers the potential for broader biological application. For example, the proposed work has the potential to contribute insights into protein structure prediction and fluorescent protein use.
7)The manuscript's current version is well-written, well-structured, and meets academic standards.
Weaknesses:
1) Reviewers discuss that the novelty and significance of the work, particularly in terms of biological insights and technical advancement, need some improvements and make the paper potentially interesting to a limited audience.
2) The first version of the manuscript lacks a direct comparison with FPredX and CPU-based methods. However, the authors have addressed these issues in the revised version of the model.
3) The introduction in the first version of the manuscript needs more background/ references on the Mamba architecture, and specific terms (e.g., MCC, SiLu) and dataset details need to be clarified. However, the authors have addressed these issues in the revised manuscript.
4) The first version of the manuscript contains some methodological and structural clarity issues, as well as some issues regarding presentation quality. For example, references for certain figures, more details on hyperparameter optimization, and experimental results are needed. However, these issues have been improved in the revised manuscript.
In sum, in the revised version of the manuscript, the authors have addressed most reviewer concerns, particularly the lack of comparison with previously developed methods in the field, especially FPredX and CPU-based methods. Moreover, the current version of the work is well-written and meets academic standards. Given the authors' efforts to address the major concerns of reviewers and generally positive consensus among the reviewers after revision (one acceptance, two-week acceptance, and one weak rejection), it seems reasonable to lean toward recommending acceptance and oral presentation.
recommendation: Accept (Oral)
suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed
confidence: 4: The area chair is confident but not absolutely certain |
Juf8b4be1Z | Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound | [] | Despite the rapid development of AI models in medical image analysis, their validation in real world clinical settings remains limited. Models are often developed without continuous feedback from clinicians, which can lead to a lack of alignment with the actual needs. To address this, we introduce a generic framework designed for deploying and testing image-based AI models early in such settings. Using this framework, we deployed a trained model for fetal ultrasound standard plane detection and evaluated it in real-time sessions with both novice and expert users. Feedback from these sessions revealed that while the model offers potential benefits to medical practitioners, the need for navigational guidance was identified as a key area for improvement. These findings underscore the importance of early testing of AI models in real-world settings, leading to insights that can guide the refinement of the model and system based on actual user feedback. | [
"deployment",
"fetal ultrasound",
"standard plane detection"
] | https://openreview.net/pdf?id=Juf8b4be1Z | https://openreview.net/forum?id=Juf8b4be1Z | x4Jn1UwqBY | official_review | 1,727,957,411,290 | Juf8b4be1Z | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission44/Reviewer_z8CZ"
] | NLDL.org/2025/Conference | 2025 | title: Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound
summary: The paper introduces a framework for the real-world deployment of deep learning models in medical workflows. The aim is to facilitate real-world clinical validation of AI in medical imaging in clinical settings with integration challenges and complex medical workflows. The framework is designed to allow early-stage testing of AI models and to provide real-time feedback to users. Key components include: (i) HDMI-to-USB converter for capturing real-time ultrasound video feeds, (ii) docker containers to isolate and manage different aspects of the system, and (iii) a wireless display of AI predictions to clinicians via tablets. The framework is evaluated on one particular application: fetal ultrasound standard plane detection in obstetrics. This evaluation was conducted in a hospital where inexperienced medical students and experienced practitioners used the system during live ultrasound sessions.
strengths: The paper demonstrates an understanding of the practical challenges of AI deployment in clinical settings. It carefully documents the process of creating the proposed deployment framework. The methodology section is technically sound providing detailed justifications for key design choice. While most AI studies tend to focus on performance in lab environments, this paper emphasizes the practical implementation in a clinical setting which is both valuable and potentially novel for the particular application (a deep learning model for fetal ultrasound). The focus on containerization and isolating the research code from clinical infrastructure is a smart approach that addresses deployment robustness in clinical settings. The authors also ensure reproducibility by offering access to a public code repository. Finally: The paper is well-written, clearly organized, and organized in a manner that supports understanding and implementation.
weaknesses: The results section lacks comparative analysis with established clinical methods or other AI deployment approaches. There’s limited inclusion and discussion of performance metrics which should be important for evaluating the proposed deployment framework. Instead, the feedback is mostly qualitative, focusing on user experiences rather than objective scientific data. The authors should provide a comparison between their deployment framework and similar frameworks in other medical imaging applications. This could either be in terms of technical performance (e.g., lower latency) or usability (e.g., easier to implement in hospitals).
There’s nothing particularly new in the method itself: no novel algorithms, architectures, or breakthroughs in model performance. The paper applies existing techniques (e.g., the PCBM model for ultrasound image detection), and the claimed novelty comes from the framework implementation, rather than scientific discovery. The framework’s components are mostly engineering solutions rather than novel scientific contributions. A more thorough comparative analysis and discussion could possibly make this paper more than just an engineering report.
confidence: 1
justification: The paper is sufficiently strong on framework motivation and documentation but lacking in validation and scientific novelty. The paper presents a well-implemented solution to a deployment problem, but it reads more like an engineering report rather than a contribution of new knowledge to the field of AI in medical imaging.
final_rebuttal_confidence: 2
final_rebuttal_justification: I remain unconvinced about the paper's scientific contribution. If the primary contribution is the proposed framework, it requires a more thorough and systematic evaluation. If the focus is on a case study, the paper should emphasize this aspect and provide a deeper qualitative analysis. However, I reiterate that case studies are outside my area of expertise. After reviewing their rebuttal, I am inclined to uphold my recommendation for rejection. |
Juf8b4be1Z | Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound | [] | Despite the rapid development of AI models in medical image analysis, their validation in real world clinical settings remains limited. Models are often developed without continuous feedback from clinicians, which can lead to a lack of alignment with the actual needs. To address this, we introduce a generic framework designed for deploying and testing image-based AI models early in such settings. Using this framework, we deployed a trained model for fetal ultrasound standard plane detection and evaluated it in real-time sessions with both novice and expert users. Feedback from these sessions revealed that while the model offers potential benefits to medical practitioners, the need for navigational guidance was identified as a key area for improvement. These findings underscore the importance of early testing of AI models in real-world settings, leading to insights that can guide the refinement of the model and system based on actual user feedback. | [
"deployment",
"fetal ultrasound",
"standard plane detection"
] | https://openreview.net/pdf?id=Juf8b4be1Z | https://openreview.net/forum?id=Juf8b4be1Z | vbLqGN9Nvb | official_review | 1,727,355,223,250 | Juf8b4be1Z | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission44/Reviewer_y175"
] | NLDL.org/2025/Conference | 2025 | title: Unsystematic collection of results, possibly out of scope, unclear definition of what is meant by a framework, no references to approvals
summary: They present a framework for early testing of visual AI-tools in a clinical context and report on an experiment that applies the framework in a fetal ultrasound setting. They measured the responsiveness of the system in terms of delays and also collected (unstructured) feedback from 6 students of medicine, one obstetrician (P7) and one sonographer. While the problem is important for the utilization of deep learning models, the paper gives no contribution to deep learning as such, and does not even describe the methodology behind the system that is tested.
They describe the technical setup they have used, but it is not completely clear to me what constitutes a "framework" in the present context. They do e.g. make a point of the use wireless connections, but it is not clear to me if this should be regarded as a framework defining decision or not.
They have collected some data on the speed of the implementation compared to an off-line version, but apart from this the results they report on are "soft" in the way of summaries of the testers experiences and behaviour. No standardized questionnaires or interview guides are given. The conclusions are related to the given AI tool, rather than the framework.
strengths: The paper addresses the important issue of implementing AI tools in a real life setting, with a focus on the integration of an AI tool in a clinical work flow. They give a clear exposition of the design challenges and their chosen solution.
weaknesses: The paper may be out of scope for a deep learning conference, since the deep learning issues as such are not addressed, and the properties of the tool in question are not presented. It might fit better in health services research or operational research in health services setting.
Although the design issues and the choices made in the given solution are presented clearly, it is not obvious which design features play the role of "framework".
They give quantitative results for the delays that the AI tool has in evaluating images (which may seem too high for practical use) and compare these to an off-line version, but the paper is weak in the way of robust conclusions and findings. They quote a few pieces of feedback from the users, but these seem to be collected in an unsystematic way. A predefined questionnaire would be better, and preferably one that has been validated. The use of more "soft" feedback should at least rely on interview guides and should be analysed with methods from qualitative research. Overall, the findings appear arbitrary. Also, the findings seem to focus on the system at hand and not the framework, which is promoted as the paper's contribution.
To get it published they would also need to produce ethical approvals, but I guess these were maybe skipped in order to make the study anonymous to the reviewers.
confidence: 4
justification: The distinction between specific arbitrary technical solutions and the framework is not clear. The feedback from the users was not collected in a systematic or reproducible way. The findings seem to be related to the given tool, rather then the framework, which is framed as the paper's contribution. There is no deep learning as such, which possibly places the work outside the conference's scope. They provide no ethical approvals, which are definitely needed, but this may be due to the anonymization of the manuscript.
final_rebuttal_confidence: 4
final_rebuttal_justification: I am sceptical about the scientific contribution of this paper. It's unclear which parts of the technical solution constitutes the framework (e.g. the HDMI-to-USB Converter Box?) to which they claim to generalize. Also the collection of feedback and analysis thereof seems arbitrary and anecdotal. While this general issue of implementing and testing medical models is important, I think they need more than one application, a more formal definition of the framework and/or more systematic collection and analysis of feedback in order to draw generalizable conclusions. I vote for rejection although it might be suitable for a poster if they are able to describe the collection of feedback and analysis better. |
Juf8b4be1Z | Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound | [] | Despite the rapid development of AI models in medical image analysis, their validation in real world clinical settings remains limited. Models are often developed without continuous feedback from clinicians, which can lead to a lack of alignment with the actual needs. To address this, we introduce a generic framework designed for deploying and testing image-based AI models early in such settings. Using this framework, we deployed a trained model for fetal ultrasound standard plane detection and evaluated it in real-time sessions with both novice and expert users. Feedback from these sessions revealed that while the model offers potential benefits to medical practitioners, the need for navigational guidance was identified as a key area for improvement. These findings underscore the importance of early testing of AI models in real-world settings, leading to insights that can guide the refinement of the model and system based on actual user feedback. | [
"deployment",
"fetal ultrasound",
"standard plane detection"
] | https://openreview.net/pdf?id=Juf8b4be1Z | https://openreview.net/forum?id=Juf8b4be1Z | r00FAiRL5u | decision | 1,730,901,556,453 | Juf8b4be1Z | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Reject |
Juf8b4be1Z | Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound | [] | Despite the rapid development of AI models in medical image analysis, their validation in real world clinical settings remains limited. Models are often developed without continuous feedback from clinicians, which can lead to a lack of alignment with the actual needs. To address this, we introduce a generic framework designed for deploying and testing image-based AI models early in such settings. Using this framework, we deployed a trained model for fetal ultrasound standard plane detection and evaluated it in real-time sessions with both novice and expert users. Feedback from these sessions revealed that while the model offers potential benefits to medical practitioners, the need for navigational guidance was identified as a key area for improvement. These findings underscore the importance of early testing of AI models in real-world settings, leading to insights that can guide the refinement of the model and system based on actual user feedback. | [
"deployment",
"fetal ultrasound",
"standard plane detection"
] | https://openreview.net/pdf?id=Juf8b4be1Z | https://openreview.net/forum?id=Juf8b4be1Z | j4ZLMOWEoi | official_review | 1,727,856,282,380 | Juf8b4be1Z | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission44/Reviewer_L2uz"
] | NLDL.org/2025/Conference | 2025 | title: Promising research with room for additional detail
summary: The paper presents a framework for deploying a generic deep learning model, providing a case study in clinical obstetric ultrasound settings. It focuses on real-time video processing for enhancing ultrasound imaging through artificial intelligence. The authors discuss the technical infrastructure, including containerization for stability and scalability, and highlight the importance of user feedback from both novice and expert users during the deployment phase.
strengths: The paper sets out a clear objective: deploying a deep learning model in a real-world clinical setting, which is highly relevant for AI applications in healthcare.
The focus on testing the model in a real-world scenario (obstetric ultrasound) makes the work significantly relevant, especially since many AI models face challenges transitioning from lab settings to clinical environments.
Authors use an explainable AI model, addressing a key challenge in medical AI — the need for transparency in decision-making.
By gathering feedback from both novice and expert users in a clinical setting, the study emphasizes the importance of human factors in AI model deployment.
weaknesses: The paper does not seem to explicitly discuss the generalizability of the framework in detail. Although it describes the framework’s application to obstetric ultrasound, it lacks a thorough analysis of how the framework could be adapted or validated in other clinical scenarios or with different imaging modalities. For instance, how would it perform in other areas of medical imaging, such as cardiology or neurology?
The paper does not contain detailed information on the development and training of the deep learning model itself. It jumps into deployment without providing much context on how the model was trained and validated.
The dataset is limited. A larger sample size would strengthen the paper’s findings and conclusions.
The paper mainly discusses user feedback from a small group of users, without incorporating objective performance metrics of the AI model during deployment in the real-world scenario.
**Questions**
- How do you ensure the framework is generalizable to other medical imaging modalities or devices beyond fetal ultrasound?
- How well does the framework scale in terms of integrating multiple models or more complex inferencing pipelines in real-time scenarios?
- How does the latency introduced by the framework impact clinical workflow in practice? Are there any threshold levels of latency beyond which the system becomes unusable?
- You identified the need for navigational guidance in future improvements. Have you explored any strategies or approaches for providing real-time navigation to users during ultrasound scans?
- Can you provide more details on how the deep learning model was trained and validated before deployment? How did the model perform in a controlled environment? What about its performance in clinical practices?
**Extra comments**
- The labeling of subsections in Section 4 is wrong. Consider revising the formatting.
- For readability purposes, I recommend including a period “.” after each item listed in Section 2.
- The main figure illustrating the system architecture is difficult to interpret at standard size; I had to zoom in by 300% to read it properly.
- There are numerous acronyms throughout the paper (e.g., AI, HDMI, PCBM) that are not defined. Although some of them may be commonly known, there are readers who may not be familiar with them.
- Table 1 has formatting issues that make it difficult to read. I recommend adjustments in layout or style to improve its clarity. Additionally, it would be helpful to clarify what the values in the table represent (average and standard deviation?)
- In general, readability and writing style could be improved.
confidence: 3
justification: The paper effectively outlines a clear objective of deploying a deep learning model in a real-world clinical setting, emphasizing the significance of its application in obstetric ultrasound and the importance of transparency through an explainable AI model while considering user feedback from both novice and expert clinicians.
However, the paper lacks a detailed discussion on the generalizability of the framework, provides insufficient information on the development and training of the deep learning model, has a limited dataset that weakens its findings, and primarily relies on feedback from a small user group without including objective performance metrics of the AI model during deployment.
Besides, the paper could benefit from some adjustments to improve the ease of reading, especially for a broader academic audience or those less familiar with all the technical details.
final_rebuttal_confidence: 4
final_rebuttal_justification: The paper presents an interesting perspective to consider when implementing DL models in real-world medical applications. However, the generalizability they claimed in the proposed framework is unclear. The technical contribution of their solution seems to rely on a HDMI-to-USB converter box, which means that medical devices require to be adapted to such video format.
Additionally, the results presented are based on test surveys from user experiences collected in an arbitrary manner. There is no evaluation from a DL perspective, where the performance variation between controlled and real-world environments is analyzed. In that case, the manuscript could benefit from the user experience analysis. |
Juf8b4be1Z | Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound | [] | Despite the rapid development of AI models in medical image analysis, their validation in real world clinical settings remains limited. Models are often developed without continuous feedback from clinicians, which can lead to a lack of alignment with the actual needs. To address this, we introduce a generic framework designed for deploying and testing image-based AI models early in such settings. Using this framework, we deployed a trained model for fetal ultrasound standard plane detection and evaluated it in real-time sessions with both novice and expert users. Feedback from these sessions revealed that while the model offers potential benefits to medical practitioners, the need for navigational guidance was identified as a key area for improvement. These findings underscore the importance of early testing of AI models in real-world settings, leading to insights that can guide the refinement of the model and system based on actual user feedback. | [
"deployment",
"fetal ultrasound",
"standard plane detection"
] | https://openreview.net/pdf?id=Juf8b4be1Z | https://openreview.net/forum?id=Juf8b4be1Z | DGUQaKl49S | official_review | 1,728,508,205,656 | Juf8b4be1Z | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission44/Reviewer_BEct"
] | NLDL.org/2025/Conference | 2025 | title: Framework for running AI models for ultrasound in clinical environment
summary: The paper presents a computational framework for running and assessing AI models for ultrasound images, lowering the barrier for deployment of AI in clinical use. The authors use obstetric ultrasound as a case study to demonstrate the framework. The core of the work is description of the design and implementation of this framework, aiming to make it generic. The actual AI model implemented using the framework for the case study is standard plane detector for fetal ultrasound images. Although framed as presenting a generic framework, the study is more a description of a case study.
strengths: Development of AI models in biomedical applications is booming, but there is an obvious gap in implementation of such modes to real world clinical environment. Thus, a framework lowering the barrier and facilitating such integration to enable rapid testing and feedback is a warmly welcome idea. The attempt to make the framework generic is positive, although more could be done to actually show it is such.
weaknesses: Weight is placed on generic design, but only one case study is presented. It remains unclear whether any other modalities or even application within the domain of ultrasound imaging could be easily tucked in using this framework. It would be great to show other examples, at least give specs on limitations (what type of other modalities, limits to image sizes etc.), or tone down the framing as a generic platform.
Further, the case study is merely descriptive, for example, the authors state they wanted to study whether the explainability of the Ai model in the case studied provided added value, but this answer to this limits to anecdotes by the users. No systematic way to collect the feedback or quantitative values using the framework is presented, leaving the idea that the implementation falls a bit short of the aims.
confidence: 4
justification: Although not fully convincing as a generic framework for implementing AI systems in clinical environments, the study nevertheless takes a valuable step towards lowering the barrier or bridging the gap in implementing AI models for actual use, which is a much welcomed and valuable effort. |
Juf8b4be1Z | Deployment of Deep Learning Model in Real World Clinical Setting: A Case Study in Obstetric Ultrasound | [] | Despite the rapid development of AI models in medical image analysis, their validation in real world clinical settings remains limited. Models are often developed without continuous feedback from clinicians, which can lead to a lack of alignment with the actual needs. To address this, we introduce a generic framework designed for deploying and testing image-based AI models early in such settings. Using this framework, we deployed a trained model for fetal ultrasound standard plane detection and evaluated it in real-time sessions with both novice and expert users. Feedback from these sessions revealed that while the model offers potential benefits to medical practitioners, the need for navigational guidance was identified as a key area for improvement. These findings underscore the importance of early testing of AI models in real-world settings, leading to insights that can guide the refinement of the model and system based on actual user feedback. | [
"deployment",
"fetal ultrasound",
"standard plane detection"
] | https://openreview.net/pdf?id=Juf8b4be1Z | https://openreview.net/forum?id=Juf8b4be1Z | 9eaDfciFUO | meta_review | 1,730,663,360,286 | Juf8b4be1Z | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission44/Area_Chair_zKW7"
] | NLDL.org/2025/Conference | 2025 | metareview: The submitted paper proposes a framework for deploying AI models in clinical routine. The presented framework was used for a case study in which a AI-based standard plane detection model for fetal ultrasound was deployed and tested by clinicians. All reviewers acknowledged that the translation of AI models into clinical routine is indeed an important aspect and early deployment is beneficial. The objective and the workflow of the method is described clearly and to gather clinical feedback is regarded highly relevant.
However, the reviewers raised questions on whether the presented work has enough scientific contribution and novelty. Among others, reviewers noted the lack of details when the framework (i.e. the converter) and the model were described as well as the lack of standardization when evaluating user feedback. The quantitative measurements were focusing on the latency when using the presented pipeline in comparison to the direct execution. Both runtimes seem to be slow considering the application. The qualitative feedback from the clinicians seems to only consider the model output and not the proposed pipeline or the way it is deployed. Some of the reviewers felt that the presented paper is rather as a case study and since the proposed solution is agnostic of the deployed (AI) method, out of scope for NLDL. The reviewers suggested to add more comparisons with other deployment solutions, more than one deployed models, and a standardized feedback questionnaire for clinical evaluation. Taken into consideration all strengths and weaknesses, the submitted paper can unfortunately not be suggested for acceptance for NLDL. The authors are advised to follow the feedback of the reviewers, extend their manuscript and may select a better targeted conference for publication.
recommendation: Reject
suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed
confidence: 4: The area chair is confident but not absolutely certain |
JNxddbPPWt | Locally orderless networks | [
"Jon Sporring",
"Peidi Xu",
"Jiahao Lu",
"Francois Bernard Lauze",
"Sune Darkner"
] | We present Locally Orderless Networks (LON) and the theoretical foundation that links them to Convolutional Neural Networks (CNN), Scale-space histograms, and measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and Scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other and how LON expands the set of functions computable to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks for estimating the gradient magnitude squared, for regressing shape area and perimeter lengths, and for explainability of individual pixels' influence on the result. | [
"Convolutional Neural Networks",
"Locally Orderless Images",
"histograms",
"saliency maps",
"explainability"
] | https://openreview.net/pdf?id=JNxddbPPWt | https://openreview.net/forum?id=JNxddbPPWt | j1dFWdSVGM | decision | 1,730,901,556,344 | JNxddbPPWt | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Oral)
comment: Given the AC positive recommendation, we recommend an oral and a poster presentation given the AC and reviewers recommendations. |
JNxddbPPWt | Locally orderless networks | [
"Jon Sporring",
"Peidi Xu",
"Jiahao Lu",
"Francois Bernard Lauze",
"Sune Darkner"
] | We present Locally Orderless Networks (LON) and the theoretical foundation that links them to Convolutional Neural Networks (CNN), Scale-space histograms, and measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and Scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other and how LON expands the set of functions computable to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks for estimating the gradient magnitude squared, for regressing shape area and perimeter lengths, and for explainability of individual pixels' influence on the result. | [
"Convolutional Neural Networks",
"Locally Orderless Images",
"histograms",
"saliency maps",
"explainability"
] | https://openreview.net/pdf?id=JNxddbPPWt | https://openreview.net/forum?id=JNxddbPPWt | hbvNUD3mL5 | official_review | 1,728,466,698,900 | JNxddbPPWt | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission40/Reviewer_qFuz"
] | NLDL.org/2025/Conference | 2025 | title: Locally Orderless Networks
summary: The paper proposes an architecture called "Locally Orderless Networks" (LONs), which is analogous to CNNs with biases but uses Gaussian functions as activations. The authors note that sigmoid functions, which are common activations in CNNs, can be viewed as integrals of Gaussian functions. Therefore, while a CNN layer performs soft thresholding of the convolved input, a LON layer implements a "soft indicator" of the convolved input. According to the authors, this property makes LONs a better choice for tasks, which involve estimating lengths of curves of the same intensity in images (such as perimeter estimation), while CNNs are more appropriate for area estimation tasks. The paper provides experiments to support this intuition, where LONs marginally outperform CNNs for perimeter estimation. Moreover, the authors suggest that LONs result in more intuitive saliency maps in comparison with CNNs, and thus provide better explainability.
strengths: **S1**: The main idea is simple, clear, and intuitive.
**S2**: Authors propose a concrete area, where LONs can be applied (tasks involving perimeter estimation in images).
**S3**: The figures are overall nicely designed, and the experiments illustrate the paper's message well.
weaknesses: **W1**: The novelty of the paper is marginal, given that the LON architecture amounts to simply using a different activation function in a CNN.
**W2**: The writing could be significantly improved, especially in the introduction and conclusion.
**Questions/suggestions**:
* **Q1**: In the proposed architecture, the outer layer has the number of parameters proportional to the image size $|\Omega|$. I.e., every spatial location of the convolved image is multiplied by an independent set of weights. However, since the goal of the main experiments is just to estimate the perimeter or area, it should not be important where exactly the object appears in the picture. Therefore, shouldn't it be sufficient to use the same parameters for all the spatial locations (i.e., use a variant of a convolution in the outer layer)? Why do the authors choose to train seemingly much more parameters than needed for these tasks?
* **Q2**: Which exact LON is displayed in Figure 1? What are the kernels parameters?
* Figures 4 and 5 would benefit from bigger legend and font sizes.
* **Q3**: Figure 6 only shows correctly classified samples. How would missclassified samples look?
* Classes «0,1,2» should be defined in Figure 6 (I assume they correspond to area/parameter classes from the smallest to the largest?).
* **Q4**: How do the authors interpret the behaviour of LON with 8 bins and 2 kernels in Figure 6? How does it relate to the mentioned "overfitting" in Figure 4? Is it related to the much larger parameters count than needed (see **Q1**)?
**Writing problems/typos**:
* The dimensions of $A$ in line 089 should be transposed.
* There are many unclear/undefined wording choices in the introduction. E.g.: "function width" (I assume it is $\sigma$, but the word choice in confusing in NNs context), "local histogram", "scale space", "activity/activation" (are these used interchangeably?), "sigmoid derivatives» (is this the same as Gaussian functions?).
* In my view, the locally orderless images framework and its relationship to the proposed architecture is not explored and explained enough in the text. However, I understand that it is partially due to the limited space.
* There are some language mistakes. E.g., "network components exists" (line 049), "LON expresses more naturally to some local operators than CNN" (line 123), etc.
* The conclusion is especially poorly written and includes sentences that are incorrect and misleading. E.g., "... layer, called Locally Orderless Networks (LONs), allows CNNs and LONs to model each other and compute non-linear functions, like squaring".
confidence: 4
justification: Since I could not find any major issues with soundness and correctness of the paper, and given strengths S1 and S2, I believe that the paper's contribution is worth sharing with the community. While the novelty and originality of the paper are not very high (W1), I believe that the work is carefully done and is worth publishing after some minor revision of the writing (W2).
final_rebuttal_confidence: 4
final_rebuttal_justification: Since I already recommended acceptance in the initial review, there is no change in my assessment post rebuttal. |
JNxddbPPWt | Locally orderless networks | [
"Jon Sporring",
"Peidi Xu",
"Jiahao Lu",
"Francois Bernard Lauze",
"Sune Darkner"
] | We present Locally Orderless Networks (LON) and the theoretical foundation that links them to Convolutional Neural Networks (CNN), Scale-space histograms, and measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and Scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other and how LON expands the set of functions computable to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks for estimating the gradient magnitude squared, for regressing shape area and perimeter lengths, and for explainability of individual pixels' influence on the result. | [
"Convolutional Neural Networks",
"Locally Orderless Images",
"histograms",
"saliency maps",
"explainability"
] | https://openreview.net/pdf?id=JNxddbPPWt | https://openreview.net/forum?id=JNxddbPPWt | WNOod92mUV | meta_review | 1,730,494,424,283 | JNxddbPPWt | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission40/Area_Chair_h5iB"
] | NLDL.org/2025/Conference | 2025 | metareview: The reviewers agree that the paper is well-motivated and has a good theoretical foundation. Reviewers, however, also differ in their opinions, with two positive reviewers, one neutral, and one that recommends a rejection. The strong aspects of the paper are its theoretical foundation, that it is based on a clear, intuitive, and simple idea, and that it is well described. The highlighted weaknesses are limitations of the experimental validation, limited scope of the method, limited novelty, and insufficient discussion of computational complexity, scalability, and generalizability.
The authors have given a thorough rebuttal that addresses all comments from the reviewers and rewritten the paper accordingly. So, with the primarily positive reviews, I recommend accepting the paper as a poster.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 4: The area chair is confident but not absolutely certain |
JNxddbPPWt | Locally orderless networks | [
"Jon Sporring",
"Peidi Xu",
"Jiahao Lu",
"Francois Bernard Lauze",
"Sune Darkner"
] | We present Locally Orderless Networks (LON) and the theoretical foundation that links them to Convolutional Neural Networks (CNN), Scale-space histograms, and measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and Scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other and how LON expands the set of functions computable to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks for estimating the gradient magnitude squared, for regressing shape area and perimeter lengths, and for explainability of individual pixels' influence on the result. | [
"Convolutional Neural Networks",
"Locally Orderless Images",
"histograms",
"saliency maps",
"explainability"
] | https://openreview.net/pdf?id=JNxddbPPWt | https://openreview.net/forum?id=JNxddbPPWt | UCqYETgaGJ | official_review | 1,728,458,560,783 | JNxddbPPWt | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission40/Reviewer_rzBp"
] | NLDL.org/2025/Conference | 2025 | title: Locally Orderless Networks as a generalization of Convolutional Neural Networks
summary: This paper introduces a novel neural network layer called Locally Orderless Networks (LON) and explores its theoretical connections with Convolutional Neural Networks (CNN) and scale-space histograms. The core of LON lies in the regular sampling of the bias and the derivative of the activation function to create local histograms. By comparing LON, CNN, and scale-space histograms on prototypical single-layer networks, the paper demonstrates LON's advantages in specific tasks such as gradient magnitude squared estimation, shape area, and perimeter length regression, as well as explainability of individual pixel influence on the results.
strengths: The paper's strengths include its solid theoretical foundation linking Locally Orderless Networks (LON) with established concepts like CNNs and scale-space histograms, the introduction of an innovative neural network layer that enhances model capabilities, superior performance in tasks requiring boundary recognition and rotational invariance demonstrated through rigorous experimental comparison with CNNs, improved model explainability which is essential for trust and transparency in AI, and thorough experimental validation that substantiates LON's effectiveness across various tasks.
weaknesses: The weaknesses of the paper encompass the increased complexity and computational demands of LON due to its higher number of parameters compared to CNNs, concerns about its generalization ability beyond the specific tasks and datasets explored, the potential for overfitting highlighted by certain experiments, and the need for broader experimental validation to ascertain LON's versatility and practicality in diverse applications.
confidence: 2
justification: The introduction of a new model is certainly commendable. However, as I am not an expert in modeling, it is difficult for me to assess whether the strengths and weaknesses of the model presented in this paper are critical.
final_rebuttal_confidence: 2
final_rebuttal_justification: It is difficult for me to assess whether the strengths and weaknesses of the model presented in this paper are critical. |
JNxddbPPWt | Locally orderless networks | [
"Jon Sporring",
"Peidi Xu",
"Jiahao Lu",
"Francois Bernard Lauze",
"Sune Darkner"
] | We present Locally Orderless Networks (LON) and the theoretical foundation that links them to Convolutional Neural Networks (CNN), Scale-space histograms, and measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and Scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other and how LON expands the set of functions computable to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks for estimating the gradient magnitude squared, for regressing shape area and perimeter lengths, and for explainability of individual pixels' influence on the result. | [
"Convolutional Neural Networks",
"Locally Orderless Images",
"histograms",
"saliency maps",
"explainability"
] | https://openreview.net/pdf?id=JNxddbPPWt | https://openreview.net/forum?id=JNxddbPPWt | KUqw9swSGw | official_review | 1,727,315,763,437 | JNxddbPPWt | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission40/Reviewer_cJRh"
] | NLDL.org/2025/Conference | 2025 | title: Locally Orderless Networks - Interesting approach with narrow / limited results
summary: The article introduces Locally Orderless Networks; operators with roots in Locally Orderless Image frameworks. The work draws on kernel methods via general Parzen-Rosenblatt windows and local histograms, and connect these to the general convolutional operator from CNNs. The link is well grounded in theory, and the authors reference earlier work with local feature descriptors (SIFT, HoG).
Compared to CNNs, LON emphasizes soft isophotes (boundaries of regions of constant value), whereas CNNs perform thresholding. This can potentially make LON effective in certain tasks where CNNs might theoretically be less well suited, such as shape regression. The authors also investigate LONs for gradient based saliency maps.
strengths: (S1) The general idea behind LONs is quite intuitive and well grounded in theory.
(S2) LONs could potentially be used to extract robust locally permutation invariant features, complementing existing methods.
(S3) LONs provide novelty, and the theoretical justification for their application is well motivated in the manuscript. There is an obvious, but appealing connection to existing convolutional operators.
(S4) The presentation is generally sound, concise, and precise.
weaknesses: (W1) The experiments are limited in scope, and its general applicability to more complex or diverse tasks remains mostly unexplored. It is not clear from the experiments that LONs offer significant benefits in general modelling settings. This is by far the most pressing weakness of the work, as it provides little incentive for direct applications.
(W2) It seems that LON requires more parameters compared to CNNs, especially as the number of bins increases and particularly in the area regressor experiment. While this is somewhat discussed, the current results seem to indicate issues for LONs with higher number of bins. As a result, the reader is left with more questions regarding when exactly this operator could serve as viable modelling tool.
(W3) While the saliency maps provided by LONs are more focused, the definition of saliency with pure gradients is not considered robust. In this reviewers opinion, it somewhat demonstrates that LONs seems well suited to shape regression. However, pure gradients as a general saliency method has been demonstrated to be unreliable in more general settings [(Adebayo et al. 2018)](https://arxiv.org/abs/1810.03292).
confidence: 4
justification: **Strengths**: LONs provide an intuitive method that could likely be purposeful for more effectively extracting certain locally permutation invariant features. In particular, LONs seem promising for extracting statistical features that could require multiple spatial / temporal CNN layers to adequately model. It would be interesting to see LONs compared more directly as an adaptive variant to classic feature extraction methods such as SIFT and HoG; a somewhat missed opportunity in this reviewers opinion. This could also potentially remediate certain issues with the current draft, as the current experiments are a little too narrow in scope to convincingly argue for their practical use in modern vision modelling tasks, see below for details.
**Weaknesses**: Overall; while LONs presents an interesting theoretical and practical tool, they still have challenges regarding generalisation, parameter efficiency, and scalability. Moreover, the combination of histogram-based features with kernel methods, especially RBFs, has been widely studied and applied, and while LON can be seen as a continuation or extension of this approach within the neural network paradigm, the authors should perhaps more extensively acknowledge the body of existing research that intersects with its core concepts. This, in conjunction with slightly more extensive experiments, would strengthen the manuscript substantially. Leaning into a more theoretical / mathematical result for justifying the class of problems where LONs can extract meaningful features could potentially provide a more convincing argument for the method.
**Summary**: While the method is novel, the article is missing that little extra push to make this reviewer confident of the contribution of LONs for the general community. Currently, this reviewer is left with more questions after reading the manuscript than before reading it. Consider adding more experiments to advocate for exactly where practitioners may apply LONs in a modelling context, perhaps by formulating LONs in context of adaptive local feature descriptors, or perhaps as a non-parametric tool with Gaussian Mixtures or Dirichlet processes.
final_rebuttal_confidence: 5
final_rebuttal_justification: We appreciate that the authors are looking to establish the feasibility of a more theoretical approach. We believe the idea could have merit, as discussed in our review, however, the experiments designed around the current approach does not clearly show or delineate significant benefits of the proposed approach.
In summary, we believe the rebuttal fails to sufficiently establish the value of the proposed LON network layer in a modern modeling framework. It is still not sufficiently clear to the reader how LONs provide benefits compared to a standard convolutional layer, and the authors comments are not sufficiently addressing this limitation. For this reason, we will maintain our recommendation to not accept the work as part of the conference.
We wish the authors luck in their future work towards locally orderless network layers. |
JNxddbPPWt | Locally orderless networks | [
"Jon Sporring",
"Peidi Xu",
"Jiahao Lu",
"Francois Bernard Lauze",
"Sune Darkner"
] | We present Locally Orderless Networks (LON) and the theoretical foundation that links them to Convolutional Neural Networks (CNN), Scale-space histograms, and measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and Scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other and how LON expands the set of functions computable to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks for estimating the gradient magnitude squared, for regressing shape area and perimeter lengths, and for explainability of individual pixels' influence on the result. | [
"Convolutional Neural Networks",
"Locally Orderless Images",
"histograms",
"saliency maps",
"explainability"
] | https://openreview.net/pdf?id=JNxddbPPWt | https://openreview.net/forum?id=JNxddbPPWt | 6OhEx0v5aM | official_review | 1,728,464,335,317 | JNxddbPPWt | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission40/Reviewer_xnJP"
] | NLDL.org/2025/Conference | 2025 | title: Locally Orderless Networks - Strengths in theoretical foundations and weaknesses in experimental scope and generalizability
summary: The paper "Locally Orderless Networks" (LONs) introduces a novel neural network architecture that integrates scale-space theory and measure theory, enhancing the computational capabilities of neural networks by employing locally orderless histograms. It establishes LONs as a generalization of the Locally Orderless Image framework, allowing for the computation of non-linear functions like squaring. The research demonstrates that LONs outperform Convolutional Neural Networks (CNNs) in tasks related to edge detection, gradient magnitude estimation, and shape analysis, showcasing superior performance in object classification based on perimeter and improved explainability through saliency maps. Overall, LONs offer a theoretically sound and effective alternative to CNNs, particularly for boundary-focused tasks, with potential for further integration in future work.
strengths: The paper presents a well-founded proposal that integrates scale-space theory and measure theory, demonstrating how LONs generalize the locally orderless image paradigm to compute non-linear functions such as squaring. Empirical evaluations show that LONs outperform CNNs on specific tasks that require nuanced spatial and intensity handling, such as gradient magnitude estimation and shape analysis. The explainability analysis via saliency maps also reveals improvements over CNNs, indicating enhanced explainability by better highlighting task-relevant areas. Furthermore, it underscores the potential applicability of combining LONs with existing architectures in future research. Overall, this work represents a valuable contribution to the field, effectively illustrating the potential of LONs across several domains.
weaknesses: 1. The experiments primarily focus on gradient magnitude estimation and shape analysis, lacking a broader range of tasks and datasets. Expanding the experimental scope would strengthen claims about the versatility and generalizability of LONs.
2. The paper compares LONs with CNNs and does not include a broader analysis against other state-of-the-art architectures.
3. While the experiments indicate that LONs may exhibit overfitting, the paper does not adequately address methods to mitigate this issue. Incorporating techniques like regularization or cross-validation would enhance the reliability of the findings.
4. The paper lacks a thorough discussion on the scalability and computational efficiency of LONs, particularly regarding the added parameters from locally orderless histograms.
5. Although the theoretical foundations are solid, the paper does not explore the limitations and assumptions of LONs in depth. Understanding the conditions under which LONs might fail or produce suboptimal results would enhance credibility.
6. Certain terminology and concepts, such as scale-space and measure theory, should be better explained.
7. The font size of Figures 1, 4, and 5 is very small and cannot be read.
confidence: 4
justification: While the paper has notable strengths, it also has several weaknesses that should be addressed before publication. Although the theoretical foundations are sound, the limited experimental scope raises concerns about the generalizability of the findings. Additionally, the lack of rigorous comparative analysis diminishes the paper's significance. It would also be beneficial to discuss potential overfitting, as well as issues related to scalability and computational complexity. Finally, while the paper is generally clear, some terminology may pose challenges for readers unfamiliar with the underlying theories, and certain figures need correction. Addressing these weaknesses could enhance the paper's overall impact and relevance.
final_rebuttal_confidence: 4
final_rebuttal_justification: The authors have satisfactorily addressed all of my comments and have made appropriate changes to strengthen the critical aspects of the paper. Therefore, my recommendation is to accept the paper for publication. |
JBH3mtjG9I | FreqRISE: Explaining time series using frequency masking | [
"Thea Brüsch",
"Kristoffer Knutsen Wickstrøm",
"Mikkel N. Schmidt",
"Tommy Sonne Alstrøm",
"Robert Jenssen"
] | Time series data is fundamentally important for many critical domains such as healthcare, finance, and climate, where explainable models are necessary for safe automated decision-making. To develop explainable artificial intelligence in these domains therefore implies explaining salient information in the time series. Current methods for obtaining saliency maps assumes localized information in the raw input space. In this paper, we argue that the salient information of a number of time series is more likely to be localized in the frequency domain. We propose FreqRISE, which uses masking based methods to produce explanations in the frequency and time-frequency domain, and outperforms strong baselines across a number of tasks. | [
"Explainability",
"Time series data",
"Audio data"
] | https://openreview.net/pdf?id=JBH3mtjG9I | https://openreview.net/forum?id=JBH3mtjG9I | u3OC3LdQZr | official_review | 1,728,295,319,648 | JBH3mtjG9I | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission11/Reviewer_Bf1T"
] | NLDL.org/2025/Conference | 2025 | title: Promising concept but critical gaps in analysis and discussion
summary: The authors introduce FreqRISE, an extension of the RISE explainability method applied to time series data by transforming it into the frequency and time-frequency domains. FreqRISE outperforms baseline methods in identifying relevant features, particularly in noisy environments and tasks where information is sparse. However, it consistently produces high complexity explanations, which may hinder interpretability and raises concerns about the correctness of the relevance maps, as they may include irrelevant information.
strengths: - **Core idea and novelty**: The paper introduces a novel application of the RISE explainability method to time series data by operating in the frequency and time-frequency domains, which is underexplored. This is a promising approach.
- **Performance in noisy environments**: FreqRISE demonstrates strong performance in noisy settings, in the context of the chosen synthetic dataset. This is an important result, as many real-world applications involve noisy time series data.
- **Clear introduction and methodology**: The paper does a good job of explaining the motivation behind the work and the FreqRISE method. The introduction is well-written, and the explanation of how the method works is detailed and easy to follow. The reasoning behind focusing on the frequency domain for time series analysis is clearly laid out, which helps build a solid foundation for the study.
- **Frequency domain advantage**: The method performs well on the AudioMNIST dataset when explanations are generated in the frequency domain. This indicates that FreqRISE is effective in identifying salient frequency components in this context.
weaknesses: - **Insufficient discussion of performance results**
- In the synthetic dataset’s low-noise case FreqRISE performs the same as baseline methods, yet there is little discussion of what this means. Similarly, the fact that FreqRISE significantly outperforms in high-noise settings but not in low-noise ones raises questions about the method's behaviour and its reliance on noise levels. The implications of these results need to be explored further.
- In the AudioMNIST task FreqRISE performs better in the frequency domain, but not in the time-frequency domain. The paper does not provide sufficient explanation of why FreqRISE’s performance varies between these domains. The authors should address what this discrepancy implies for different types of tasks and whether FreqRISE is generally more suited to purely frequency-based data.
- **High complexity of explanations**: FreqRISE consistently produces explanations with high complexity scores. This complexity can make the relevance maps difficult to interpret, limiting their practical usefulness. The paper doesn’t adequately discuss the consequences of this high complexity. The authors should explore whether the added accuracy is worth the trade-off in interpretability and whether post-processing could reduce this complexity.
- **Unclear argument for superiority**: The high complexity and mixed results make it difficult to conclude that FreqRISE is a superior method to the baselines. The lack of a detailed discussion around these limitations weakens the paper's overall argument. Without addressing how FreqRISE’s complex explanations might reduce its practical value, the claim of superiority over baseline methods is questionable.
- **Minor issues**: There are minor issues such as typos (line 102, “build”; line 177, “allows to”) and insufficient detail in the figure captions (figs. 3, 4, 5). Standalone figure descriptions would improve the clarity of the results and help readers better understand the key findings.
confidence: 4
justification: The paper introduces a promising method for explainable AI in time series data, but its contributions are unclear and unconvincing. Key findings lack in-depth analysis, particularly regarding the variation in performance for different tasks. Additionally, the method produces explanations with high complexity, which undermines interpretability. These shortcomings indicate that the contributions of the work are unclear and insufficiently supported.
final_rebuttal_confidence: 3
final_rebuttal_justification: I recommend accepting this paper, as it introduces a valuable approach to XAI for time series in the frequency domain. FreqRISE offers advantages in noisy data environments, demonstrating potential in challenging applications. While high complexity in relevance maps remains an issue, especially in low-noise contexts, the preliminary post-processing solutions are encouraging steps towards improved interpretability. Additionally, while guidance for selecting between frequency and time-frequency domains is only preliminary, this paper does provide a good basis for further development. Overall, this work is a promising addition to time series explainability and lays groundwork for future research in frequency-based interpretability. |
JBH3mtjG9I | FreqRISE: Explaining time series using frequency masking | [
"Thea Brüsch",
"Kristoffer Knutsen Wickstrøm",
"Mikkel N. Schmidt",
"Tommy Sonne Alstrøm",
"Robert Jenssen"
] | Time series data is fundamentally important for many critical domains such as healthcare, finance, and climate, where explainable models are necessary for safe automated decision-making. To develop explainable artificial intelligence in these domains therefore implies explaining salient information in the time series. Current methods for obtaining saliency maps assumes localized information in the raw input space. In this paper, we argue that the salient information of a number of time series is more likely to be localized in the frequency domain. We propose FreqRISE, which uses masking based methods to produce explanations in the frequency and time-frequency domain, and outperforms strong baselines across a number of tasks. | [
"Explainability",
"Time series data",
"Audio data"
] | https://openreview.net/pdf?id=JBH3mtjG9I | https://openreview.net/forum?id=JBH3mtjG9I | rqvju5ieTM | meta_review | 1,730,500,304,919 | JBH3mtjG9I | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission11/Area_Chair_kTGG"
] | NLDL.org/2025/Conference | 2025 | metareview: The paper presents a new way to do XAI in the frequency domain of models which classify time-series.
The idea is potentially novel but the presentation is significantly unclear.
The basic definition given in equation 3 is mathematically quite unclear,
- even if one tries to match this against the RISE paper, https://arxiv.org/pdf/1806.07421
Given the constraints of time it is quite challenging to ascertain the correctness with confidence,
but I tend to believe that with a significant overhaul of the writing maybe the ambiguities can be resolved.
recommendation: Accept (Poster)
suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down
confidence: 3: The area chair is somewhat confident |
JBH3mtjG9I | FreqRISE: Explaining time series using frequency masking | [
"Thea Brüsch",
"Kristoffer Knutsen Wickstrøm",
"Mikkel N. Schmidt",
"Tommy Sonne Alstrøm",
"Robert Jenssen"
] | Time series data is fundamentally important for many critical domains such as healthcare, finance, and climate, where explainable models are necessary for safe automated decision-making. To develop explainable artificial intelligence in these domains therefore implies explaining salient information in the time series. Current methods for obtaining saliency maps assumes localized information in the raw input space. In this paper, we argue that the salient information of a number of time series is more likely to be localized in the frequency domain. We propose FreqRISE, which uses masking based methods to produce explanations in the frequency and time-frequency domain, and outperforms strong baselines across a number of tasks. | [
"Explainability",
"Time series data",
"Audio data"
] | https://openreview.net/pdf?id=JBH3mtjG9I | https://openreview.net/forum?id=JBH3mtjG9I | gzDJAkArk9 | official_review | 1,728,053,115,234 | JBH3mtjG9I | [
"everyone"
] | [
"NLDL.org/2025/Conference/Submission11/Reviewer_fUhF"
] | NLDL.org/2025/Conference | 2025 | title: Introduction of a new explanation method for time series data
summary: This paper builds upon existing explanation methods for time series data which utilize masking (RISE) and introduces the idea of applying masking to the frequency domain rather than input space. The outputs are then transformed back to the input space to obtain masks in the time domain, allowing explanations to be presented additionally in the time-frequency domain. The authors demonstrate the new technique, FreqRISE, in both an established synthetic dataset and the AudioMNIST dataset.
strengths: The paper is well organized and written, helping to make it accessible to a wide audience. Masking in the frequency domain is both interesting and appears to provide utility. Demonstrating performance in multiple ways on several datasets is useful.
weaknesses: There are some parts requiring further clarity both to the reviewer and future readers:
- In the Intro it is stated that there is a focus on imaging applications of XAI. It is worth adding other frequent applications of them eg on tabular and text data
- Some justification for the statement "We expect the salient information for the gender and digit task to be localized in the frequency and time-frequency domain, respectively." could be useful
- Clarification for why faithfulness was not computed and included in Table 1
- Please provide guidance on choosing parameters for FreqRISE, such as the number of masks. It is not immediately apparent why ~10x more are applied in the AudioMNIST task
- Some additional sample outputs for other classes (eg male speaker or other digits) would be interesting, in the appendices if required for the sake of space is fine
Fig 1
- The figure itself could do with some improvements to provide clarity eg at least some arrows etc to show the data flow, labelling of all parts (eg ŷ)
- More description of the pipeline is required in the caption. Currently it cannot easily be interpreted without first reading the text. For example, the meaning of STDFT must be included and additionally I suggest a brief description of the key stages of processing
Some minor typographical errors including:
- Line 102 misspelling "built"
- Line 227 "Freq" should presumably not be in brackets as currently appears
confidence: 4
justification: The work is interesting and well written. The weaknesses are relatively minor and resolvable so on balance I recommend acceptance. |
JBH3mtjG9I | FreqRISE: Explaining time series using frequency masking | [
"Thea Brüsch",
"Kristoffer Knutsen Wickstrøm",
"Mikkel N. Schmidt",
"Tommy Sonne Alstrøm",
"Robert Jenssen"
] | Time series data is fundamentally important for many critical domains such as healthcare, finance, and climate, where explainable models are necessary for safe automated decision-making. To develop explainable artificial intelligence in these domains therefore implies explaining salient information in the time series. Current methods for obtaining saliency maps assumes localized information in the raw input space. In this paper, we argue that the salient information of a number of time series is more likely to be localized in the frequency domain. We propose FreqRISE, which uses masking based methods to produce explanations in the frequency and time-frequency domain, and outperforms strong baselines across a number of tasks. | [
"Explainability",
"Time series data",
"Audio data"
] | https://openreview.net/pdf?id=JBH3mtjG9I | https://openreview.net/forum?id=JBH3mtjG9I | fpYvMYBEDz | decision | 1,730,901,554,627 | JBH3mtjG9I | [
"everyone"
] | [
"NLDL.org/2025/Conference/Program_Chairs"
] | NLDL.org/2025/Conference | 2025 | title: Paper Decision
decision: Accept (Poster)
comment: We recommend a poster presentation given the AC and reviewers recommendations. |
Subsets and Splits