forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_pdf_url | note_id | note_type | note_created | note_replyto | note_readers | note_signatures | note_text
---|---|---|---|---|---|---|---|---|---|---|---|---
DOUskwCqg5 | SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | [
"Vijay Lingam",
"Atula Tejaswi Neerkaje",
"Aditya Vavre",
"Aneesh Shetty",
"Gautham Krishna Gudur",
"Joydeep Ghosh",
"Eunsol Choi",
"Alex Dimakis",
"Aleksandar Bojchevski",
"sujay sanghavi"
] | Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights \(\mathbf{W}\) and inject learnable matrices \(\mathbf{\Delta W}\). These \(\mathbf{\Delta W}\) matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters.
We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on \(\mathbf{\Delta W}\) depends on the specific weight matrix \(\mathbf{W}\). Specifically, SVFT updates \(\mathbf{W}\) as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to \textbf{96\%} of full fine-tuning performance while training only \textbf{0.006 to 0.25\%} of parameters, outperforming existing methods that only recover up to \textbf{85\%} performance using \textbf{0.03 to 0.8\%} of the trainable parameter budget. | [
"Parameter Efficient Fine-Tuning",
"Large Language Models"
] | https://openreview.net/pdf?id=DOUskwCqg5 | ERYp24rAJy | official_review | 1,718,232,113,194 | DOUskwCqg5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission9/Reviewer_2hmA"
] | title: Good paper, minor clarifications are required
summary: This paper proposes a novel parameter-efficient LLM fine-tuning algorithm with a focus on improving fine-tuning efficiency. The idea is to use frozen singular vectors of pre-trained weight matrices and a trainable scaling matrix to create a fine-tuned weight addition. The results indicate that the proposed method generally requires fewer parameters while maintaining or exceeding the performance of other similar PEFT approaches.
strengths: The paper is well-written and very easy to read.
The method is easy to understand.
The experimental results are extensive.
All claims are backed up by experimental evidence.
Ablation experiments give further insights.
weaknesses: I don't really see any weaknesses in this paper
confidence: 5
limitations: - Singular vectors have to be stored for all pre-trained model weights, thus memory efficiency is noticeably worse than that of LoRA or other similar approaches.
- The method uses singular vectors of pre-trained weights; thus the quality of pre-training heavily influences the fine-tuning results, which is explicitly mentioned in the paper
suggestions: - In the Appendix, Section C.5 mentions using different groups of the pre-trained models' weight matrices when comparing different PEFT approaches (e.g. BOFT - Q and K, VeRA - G and U). In my opinion, this introduces a bias into the PEFT performance comparison for natural language generation tasks. I would like to see a cleaner comparison.
- The choice of vision benchmarks is not fully clear to me. It would be interesting to see results on larger datasets and complex CV tasks such as image generation and image/video captioning. |
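A minimal NumPy sketch of the SVFT update described in the abstract above, in its simplest diagonal form; the shapes, the cutoff `k`, and the coefficient values are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

d_out, d_in, k = 64, 32, 8            # illustrative shapes; k = number of trainable coefficients

W = np.random.randn(d_out, d_in)      # frozen pre-trained weight
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# SVFT expresses delta W as a combination of outer products of W's own
# singular vectors, u_i v_i^T; only the coefficients m_i are trained.
m = 0.01 * np.random.randn(k)         # stand-in for learned coefficient values

delta_W = (U[:, :k] * m) @ Vt[:k, :]  # equals sum_i m_i * u_i v_i^T
W_adapted = W + delta_W
```

Training only `m` (here `k` values per weight matrix) is what yields the very small trainable-parameter fractions quoted in the abstract.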
DOUskwCqg5 | SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | [
"Vijay Lingam",
"Atula Tejaswi Neerkaje",
"Aditya Vavre",
"Aneesh Shetty",
"Gautham Krishna Gudur",
"Joydeep Ghosh",
"Eunsol Choi",
"Alex Dimakis",
"Aleksandar Bojchevski",
"sujay sanghavi"
] | Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights \(\mathbf{W}\) and inject learnable matrices \(\mathbf{\Delta W}\). These \(\mathbf{\Delta W}\) matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters.
We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on \(\mathbf{\Delta W}\) depends on the specific weight matrix \(\mathbf{W}\). Specifically, SVFT updates \(\mathbf{W}\) as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to \textbf{96\%} of full fine-tuning performance while training only \textbf{0.006 to 0.25\%} of parameters, outperforming existing methods that only recover up to \textbf{85\%} performance using \textbf{0.03 to 0.8\%} of the trainable parameter budget. | [
"Parameter Efficient Fine-Tuning",
"Large Language Models"
] | https://openreview.net/pdf?id=DOUskwCqg5 | D1Olo6vwyL | meta_review | 1,718,639,330,833 | DOUskwCqg5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission9/Area_Chair_jqen"
metareview: All reviewers champion the acceptance of this manuscript, given its well-motivated methodology, good writing quality, extensive empirical results, and thorough ablation studies on various CV and NLP tasks.
recommendation: Accept (Oral)
confidence: 4 |
DOUskwCqg5 | SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | [
"Vijay Lingam",
"Atula Tejaswi Neerkaje",
"Aditya Vavre",
"Aneesh Shetty",
"Gautham Krishna Gudur",
"Joydeep Ghosh",
"Eunsol Choi",
"Alex Dimakis",
"Aleksandar Bojchevski",
"sujay sanghavi"
] | Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights \(\mathbf{W}\) and inject learnable matrices \(\mathbf{\Delta W}\). These \(\mathbf{\Delta W}\) matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters.
We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on \(\mathbf{\Delta W}\) depends on the specific weight matrix \(\mathbf{W}\). Specifically, SVFT updates \(\mathbf{W}\) as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to \textbf{96\%} of full fine-tuning performance while training only \textbf{0.006 to 0.25\%} of parameters, outperforming existing methods that only recover up to \textbf{85\%} performance using \textbf{0.03 to 0.8\%} of the trainable parameter budget. | [
"Parameter Efficient Fine-Tuning",
"Large Language Models"
] | https://openreview.net/pdf?id=DOUskwCqg5 | Ac9yrUG4eY | decision | 1,718,650,296,846 | DOUskwCqg5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Oral)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
BnONfFhCd0 | Fisher-aware Quantization for DETR Detectors with Critical-category Objectives | [
"Huanrui Yang",
"Yafeng Huang",
"Zhen Dong",
"Denis A Gudovskiy",
"Tomoyuki Okuno",
"Yohei Nakata",
"Yuan Du",
"Kurt Keutzer",
"Shanghang Zhang"
] | The impact of quantization on the overall performance of deep learning models is a well-studied problem. However, understanding and mitigating its effects on a more fine-grained level is still lacking, especially for harder tasks such as object detection with both classification and regression objectives. This work defines the performance for a subset of task-critical categories, i.e. the critical-category performance, as a crucial yet largely overlooked fine-grained objective for detection tasks. We analyze the impact of quantization at the category-level granularity, and propose methods to improve performance for the critical categories. Specifically, we find that certain critical categories have a higher sensitivity to quantization, and are prone to overfitting after quantization-aware training (QAT). To explain this, we provide theoretical and empirical links between their performance gaps and the corresponding loss landscapes with the Fisher information framework. Using this evidence, we apply a Fisher-aware mixed-precision quantization scheme, and a Fisher-trace regularization for the QAT on the critical-category loss landscape. The proposed methods improve critical-category metrics of the quantized transformer-based DETR detectors. They are even more significant in the case of larger models and a higher number of classes, where the overfitting becomes more severe. For example, our methods lead to 10.4% and 14.5% mAP gains for 4-bit DETR-R50 and Deformable DETR, respectively, on the most impacted critical classes in the COCO Panoptic dataset. | [
"Quantization",
"Detection Transformers",
"Fisher information",
"Finegrained performance"
] | https://openreview.net/pdf?id=BnONfFhCd0 | vfFdCVt5Qv | meta_review | 1,718,630,768,981 | BnONfFhCd0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission10/Area_Chair_VzMA"
metareview: The paper proposes a new detection model that combines DETR-type architectures with low-bit (4-bit) quantization. The reviewers agree that the paper is interesting, with novel elements. The AC agrees and recommends acceptance. Please try to address the feedback received in the camera-ready version.
recommendation: Accept (Poster)
confidence: 4 |
BnONfFhCd0 | Fisher-aware Quantization for DETR Detectors with Critical-category Objectives | [
"Huanrui Yang",
"Yafeng Huang",
"Zhen Dong",
"Denis A Gudovskiy",
"Tomoyuki Okuno",
"Yohei Nakata",
"Yuan Du",
"Kurt Keutzer",
"Shanghang Zhang"
] | The impact of quantization on the overall performance of deep learning models is a well-studied problem. However, understanding and mitigating its effects on a more fine-grained level is still lacking, especially for harder tasks such as object detection with both classification and regression objectives. This work defines the performance for a subset of task-critical categories, i.e. the critical-category performance, as a crucial yet largely overlooked fine-grained objective for detection tasks. We analyze the impact of quantization at the category-level granularity, and propose methods to improve performance for the critical categories. Specifically, we find that certain critical categories have a higher sensitivity to quantization, and are prone to overfitting after quantization-aware training (QAT). To explain this, we provide theoretical and empirical links between their performance gaps and the corresponding loss landscapes with the Fisher information framework. Using this evidence, we apply a Fisher-aware mixed-precision quantization scheme, and a Fisher-trace regularization for the QAT on the critical-category loss landscape. The proposed methods improve critical-category metrics of the quantized transformer-based DETR detectors. They are even more significant in the case of larger models and a higher number of classes, where the overfitting becomes more severe. For example, our methods lead to 10.4% and 14.5% mAP gains for 4-bit DETR-R50 and Deformable DETR, respectively, on the most impacted critical classes in the COCO Panoptic dataset. | [
"Quantization",
"Detection Transformers",
"Fisher information",
"Finegrained performance"
] | https://openreview.net/pdf?id=BnONfFhCd0 | fuzD6f6YHz | decision | 1,718,651,437,237 | BnONfFhCd0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
BnONfFhCd0 | Fisher-aware Quantization for DETR Detectors with Critical-category Objectives | [
"Huanrui Yang",
"Yafeng Huang",
"Zhen Dong",
"Denis A Gudovskiy",
"Tomoyuki Okuno",
"Yohei Nakata",
"Yuan Du",
"Kurt Keutzer",
"Shanghang Zhang"
] | The impact of quantization on the overall performance of deep learning models is a well-studied problem. However, understanding and mitigating its effects on a more fine-grained level is still lacking, especially for harder tasks such as object detection with both classification and regression objectives. This work defines the performance for a subset of task-critical categories, i.e. the critical-category performance, as a crucial yet largely overlooked fine-grained objective for detection tasks. We analyze the impact of quantization at the category-level granularity, and propose methods to improve performance for the critical categories. Specifically, we find that certain critical categories have a higher sensitivity to quantization, and are prone to overfitting after quantization-aware training (QAT). To explain this, we provide theoretical and empirical links between their performance gaps and the corresponding loss landscapes with the Fisher information framework. Using this evidence, we apply a Fisher-aware mixed-precision quantization scheme, and a Fisher-trace regularization for the QAT on the critical-category loss landscape. The proposed methods improve critical-category metrics of the quantized transformer-based DETR detectors. They are even more significant in the case of larger models and a higher number of classes, where the overfitting becomes more severe. For example, our methods lead to 10.4% and 14.5% mAP gains for 4-bit DETR-R50 and Deformable DETR, respectively, on the most impacted critical classes in the COCO Panoptic dataset. | [
"Quantization",
"Detection Transformers",
"Fisher information",
"Finegrained performance"
] | https://openreview.net/pdf?id=BnONfFhCd0 | bGkDyIf5Z1 | official_review | 1,718,445,847,404 | BnONfFhCd0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission10/Reviewer_jV7J"
] | title: Studying the effects of quantization and category importance in object detection
summary: For efficiency, neural networks are often quantized. The article claims that category importance should play an important role in avoiding performance loss in object detection methods. The authors propose the use of the Fisher information matrix (FIM) as a proxy for the Hessian to find a better objective. They also advocate using the trace of the FIM to estimate the sharpness of the loss landscape and utilise this information as a regulariser.
They show convincing improvements in mAP on the detection of critical classes in their results.
strengths: The paper is interesting and on-topic for the conference. It makes a few interesting points and shows good results when critical categories are important.
weaknesses: The paper is complex and, while well-written, not very easy to follow. The appendix is actually important to read in order to better understand the article.
confidence: 4
limitations: I don't think it is true that the importance of critical fine-grained objectives is really overlooked in the literature, and I'm not sure that such objectives should actually be quantized to any large degree in real-world applications (down to 4 bits?). The mAP numbers shown in the results are actually very low. While the proposed methodology helps matters significantly, it does not make the models actually exploitable.
suggestions: The paper is going to be harder to understand if the appendix is not also published. I appreciated the more mathematically-inclined lean of the paper, so unfortunately I'm not quite sure what to suggest as improvement for readability. Perhaps try to gain some room by shortening page 2 of the paper. I didn't find figure 1 actually helpful.
Explain why the MILP formulation of eq. (12) can be efficiently solved. An MILP is typically NP-hard, which is the opposite of efficient. Is the problem always small? Is there a specific method that can be used in this case? |
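As a companion to the review above, here is a hedged PyTorch sketch of the empirical-Fisher approximation to the Fisher trace that a Fisher-trace regularizer can build on; `model`, `loss_fn`, and `batch` are placeholder names, and this is the generic textbook approximation, not the paper's exact implementation.

```python
import torch

def empirical_fisher_trace(model, loss_fn, batch):
    """Approximate tr(F) on one batch via the empirical Fisher: since
    tr(g g^T) = sum_i g_i^2, the trace is the sum of squared gradients."""
    model.zero_grad()
    loss = loss_fn(model, batch)   # e.g., a critical-category loss
    loss.backward()
    return sum((p.grad.detach() ** 2).sum()
               for p in model.parameters() if p.grad is not None)
```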
BnONfFhCd0 | Fisher-aware Quantization for DETR Detectors with Critical-category Objectives | [
"Huanrui Yang",
"Yafeng Huang",
"Zhen Dong",
"Denis A Gudovskiy",
"Tomoyuki Okuno",
"Yohei Nakata",
"Yuan Du",
"Kurt Keutzer",
"Shanghang Zhang"
] | The impact of quantization on the overall performance of deep learning models is a well-studied problem. However, understanding and mitigating its effects on a more fine-grained level is still lacking, especially for harder tasks such as object detection with both classification and regression objectives. This work defines the performance for a subset of task-critical categories, i.e. the critical-category performance, as a crucial yet largely overlooked fine-grained objective for detection tasks. We analyze the impact of quantization at the category-level granularity, and propose methods to improve performance for the critical categories. Specifically, we find that certain critical categories have a higher sensitivity to quantization, and are prone to overfitting after quantization-aware training (QAT). To explain this, we provide theoretical and empirical links between their performance gaps and the corresponding loss landscapes with the Fisher information framework. Using this evidence, we apply a Fisher-aware mixed-precision quantization scheme, and a Fisher-trace regularization for the QAT on the critical-category loss landscape. The proposed methods improve critical-category metrics of the quantized transformer-based DETR detectors. They are even more significant in the case of larger models and a higher number of classes, where the overfitting becomes more severe. For example, our methods lead to 10.4% and 14.5% mAP gains for 4-bit DETR-R50 and Deformable DETR, respectively, on the most impacted critical classes in the COCO Panoptic dataset. | [
"Quantization",
"Detection Transformers",
"Fisher information",
"Finegrained performance"
] | https://openreview.net/pdf?id=BnONfFhCd0 | PSNsLbo6ad | official_review | 1,718,281,717,720 | BnONfFhCd0 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission10/Reviewer_nmpL"
] | title: Interesting Fisher-information based quantization method for detection transformer
summary: The authors consider the quantization of detection transformers (DETR) for object detection tasks. They first introduce the concept of a critical category for the classification objective. Then, they study the impact of quantization on prediction performance for these categories. They further analyze this impact with Fisher information as an approximation of the Hessian. From this analysis, they propose a Fisher-aware quantization technique. Experiments show that this technique leads to improved mAP for DETR models trained on COCO datasets compared to uniform and Hessian-aware (HAWQ-V2) approaches to quantization.
The paper presents a method for the efficient training of transformers in computer vision. It is thus aligned with the topics of the workshop. I suggest accepting the paper.
strengths: - The paper is clear, well organized and easy to follow;
- The idea is simple and mathematically motivated;
- Experiments show consistent improvement in mAP with their method compared to Uniform and HAWQ-V2.
weaknesses: The paper introduces critical categories, which are well motivated in the autonomous-driving application they mention. But how well does this transfer to other applications? For instance, what justifies the choice of _Person_, _Animal_, and _Indoor_ in their experiments?
While the Fisher-based quantization technique is presented in the context of critical categories, the connection between the two concepts could be emphasized. Is the Fisher-based quantization also interesting because it can address the case of critical categories, while other techniques like HAWQ-V2 can only operate at the level of all categories?
It would be interesting to compare the impact of the different quantization schemes in quantization-aware training on additional metrics like latency, especially since Fisher information is easier to obtain than the full Hessian.
confidence: 4 |
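To illustrate how Fisher-trace estimates can drive a mixed-precision bit-width assignment (and as a toy stand-in for the MILP formulation questioned in the earlier review), here is a greedy sketch; the layer names, the 4/8-bit choices, and the budget accounting are illustrative assumptions only.

```python
def assign_bitwidths(fisher_traces, total_budget, low=4, high=8):
    """Toy greedy heuristic: start every layer at low precision, then upgrade
    the most quantization-sensitive layers (largest Fisher trace) while the
    overall bit budget allows. Not the paper's exact MILP solution."""
    bits = {name: low for name in fisher_traces}
    used = low * len(bits)
    for name in sorted(fisher_traces, key=fisher_traces.get, reverse=True):
        if used + (high - low) <= total_budget:
            bits[name] = high
            used += high - low
    return bits

# Example: with a 20-bit budget over three layers, the two most sensitive
# layers get 8 bits and the least sensitive stays at 4.
print(assign_bitwidths({"q_proj": 3.2, "k_proj": 1.1, "v_proj": 2.7}, total_budget=20))
```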
8lZEFFZhcT | Efficient Document Ranking with Learnable Late Interactions | [
"Himanshu Jain",
"Ziwei Ji",
"Ankit Singh Rawat",
"Andreas Veit",
"Sadeep Jayasumana",
"Sashank J. Reddi",
"Aditya Krishna Menon",
"Felix Yu"
] | Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for predicting query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query-document embeddings; usually, the former has higher quality while the latter has lower latency.
Recently, late-interaction models have been proposed to realize more favorable latency-quality trade-offs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden over DE models. In this paper, we propose novel \emph{learnable} late-interaction models (LITE) that resolve these issues.
Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks such as MS MARCO and Natural Questions, and out-of-domain tasks such as BEIR. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25 times storage compared to ColBERT. | [
"Information retrieval",
"Reranking",
"Late-interaction methods"
] | https://openreview.net/pdf?id=8lZEFFZhcT | yX4RLuaeiG | decision | 1,718,650,235,540 | 8lZEFFZhcT | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
8lZEFFZhcT | Efficient Document Ranking with Learnable Late Interactions | [
"Himanshu Jain",
"Ziwei Ji",
"Ankit Singh Rawat",
"Andreas Veit",
"Sadeep Jayasumana",
"Sashank J. Reddi",
"Aditya Krishna Menon",
"Felix Yu"
] | Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for predicting query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query-document embeddings; usually, the former has higher quality while the latter has lower latency.
Recently, late-interaction models have been proposed to realize more favorable latency-quality trade-offs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden over DE models. In this paper, we propose novel \emph{learnable} late-interaction models (LITE) that resolve these issues.
Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks such as MS MARCO and Natural Questions, and out-of-domain tasks such as BEIR. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25 times storage compared to ColBERT. | [
"Information retrieval",
"Reranking",
"Late-interaction methods"
] | https://openreview.net/pdf?id=8lZEFFZhcT | Xc8dTdt2N9 | meta_review | 1,718,642,967,236 | 8lZEFFZhcT | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission1/Area_Chair_Kher"
metareview: This manuscript targets the application of predicting query-document relevance and introduces a learnable yet efficient strategy to automatically trade off latency and quality. The paper is in good shape: it has sufficient technical novelty and extensive numerical results to fit the interest of the workshop. The authors are encouraged to conduct more ablation studies to justify the effectiveness of the proposed LITE, e.g., in terms of domain generalization.
recommendation: Accept (Poster)
confidence: 3 |
8lZEFFZhcT | Efficient Document Ranking with Learnable Late Interactions | [
"Himanshu Jain",
"Ziwei Ji",
"Ankit Singh Rawat",
"Andreas Veit",
"Sadeep Jayasumana",
"Sashank J. Reddi",
"Aditya Krishna Menon",
"Felix Yu"
] | Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for predicting query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query-document embeddings; usually, the former has higher quality while the latter has lower latency.
Recently, late-interaction models have been proposed to realize more favorable latency-quality trade-offs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden over DE models. In this paper, we propose novel \emph{learnable} late-interaction models (LITE) that resolve these issues.
Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks such as MS MARCO and Natural Questions, and out-of-domain tasks such as BEIR. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25 times storage compared to ColBERT. | [
"Information retrieval",
"Reranking",
"Late-interaction methods"
] | https://openreview.net/pdf?id=8lZEFFZhcT | KKbhRzw0se | official_review | 1,718,308,966,410 | 8lZEFFZhcT | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission1/Reviewer_ZTMq"
] | title: Summary of LITE paper
summary: The paper is about document ranking. The authors propose a novel LITE (Learnable Late Interaction) model that has several advantages (generalization, latency, storage) over existing methods. In general, the proposed LITE models offer a novel approach that combines the benefits of CE and DE models, providing high-quality relevance predictions with reduced latency and storage requirements.
strengths: • There is a clear summary of the research objectives, methods, and novelty in the abstract.
• In the introduction, all necessary information is provided, the research question is well-defined, and the contribution is explained.
• The background section summarizes relevant previous studies and findings and identifies gaps in prior work.
• The LITE Scorers section clarifies everything that is needed.
• Several datasets and multiple methods are used in the Experiments section to compare previous methods to LITE and demonstrate its better performance.
• The conclusion sums everything up very well.
weaknesses: 1) While the paper shows promising zero-shot results, the generalization capabilities of LITE across highly diverse or unseen domains are still uncertain.
2) The performance of the LITE model might heavily rely on the quality of pre-trained embeddings. The paper does not extensively explore the impact of different pre-trained models (e.g., BERT, RoBERTa, GPT) on the final performance of LITE.
confidence: 5 |
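For context on the late-interaction scorers discussed above, here is a minimal PyTorch sketch of the hand-crafted ColBERT-style MaxSim scorer that the abstract contrasts with; LITE's contribution is to replace this fixed max-and-sum reduction with a small learned scorer, whose exact architecture is not reproduced here.

```python
import torch

def maxsim_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late interaction. q_emb: (num_q_tokens, dim),
    d_emb: (num_d_tokens, dim). For each query token, take the maximum
    similarity over document tokens, then sum over query tokens."""
    sim = q_emb @ d_emb.T                # token-level similarity matrix
    return sim.max(dim=1).values.sum()   # hand-crafted, non-learnable reduction
```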
7DPNITf7ui | Effective Layer Pruning Through Similarity Metric Perspective | [
"Ian Pons",
"Bruno Yamamoto",
"Anna Helena Reali Costa",
"Artur Jordao"
] | Deep neural networks have been the predominant paradigm in machine learning for solving cognitive tasks. Such models, however, are restricted by a high computational overhead, limiting their applicability and hindering advancements in the field. Extensive research demonstrated that pruning structures from these models is a straightforward approach to reducing network complexity. In this direction, most efforts focus on removing weights or filters. Studies have also been devoted to layer pruning as it promotes superior computational gains. However, layer pruning often hurts the network predictive ability (i.e., accuracy) at high compression rates. This work introduces an effective layer-pruning strategy that meets all underlying properties pursued by pruning methods. Our method estimates the relative importance of a layer using the Centered Kernel Alignment (CKA) metric, employed to measure the similarity between the representations of the unpruned model and a candidate layer for pruning. We confirm the effectiveness of our method on standard architectures and benchmarks, in which it outperforms existing layer-pruning strategies and other state-of-the-art pruning techniques. Particularly, we remove more than 75% of computation while improving predictive ability. At higher compression regimes, our method exhibits negligible accuracy drop, while other methods notably deteriorate model accuracy. Apart from these benefits, our pruned models exhibit robustness to adversarial and out-of-distribution samples. | [
"Layer Pruning",
"Similarity Metric",
"Effcient Deep Learning"
] | https://openreview.net/pdf?id=7DPNITf7ui | Ube4GzddF0 | official_review | 1,718,094,604,626 | 7DPNITf7ui | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission8/Reviewer_sPhz"
] | title: Review
summary: This study investigates layer-pruning strategies and proposes an effective method, which estimates the importance of layers (building blocks) based on the Centered Kernel Alignment (CKA) metric. The authors test the effectiveness of the method using a wide array of model architectures and benchmarks and find the proposed method is more robust to aggressive compression rates and to adversarial and OOD samples.
Pros:
- It demonstrates the effectiveness of the proposed methods across extensive experiments on standard architectures and benchmarks.
- Using a more practical metric, latency, to measure the model efficiency.
Cons:
- The authors only provide a brief description of the CKA criterion, the core of the proposed method, without even giving an intuition for why this similarity-based criterion can work.
- The proposed layer-pruning method seems incompatible with other filter-pruning methods, such as the l1-norm, because it always leads to worse performance with the l1-norm. Can you add test results of CKA + l1-norm for other levels of parameter reduction?
Comments:
- Can you show the evidence (e.g., ablation study or prior work) to support the motivation that layers contributing to similar representations are unimportant layers and can be removed?
- In Table 3, for ResNet50 on ImageNet, -0.83 (WhiteBox, 45.6% parameter reduction) should be in bold instead of CKA_7.
strengths: - extensive experiments on standard architectures and benchmarks.
- comprehensive evaluations, including robustness to OOD and adversarial attack
weaknesses: - lack direct evidence to the motivation
- lack novelty
confidence: 3 |
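Since the review above notes that the paper only briefly describes the CKA criterion, here is a short NumPy sketch of the standard linear-CKA formula for reference (the paper may use a kernelized or minibatch variant; this is the common linear form).

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between representation matrices X (n, d1) and Y (n, d2),
    where rows correspond to the same n inputs. Returns a value in [0, 1];
    higher means the two representations are more similar."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```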
7DPNITf7ui | Effective Layer Pruning Through Similarity Metric Perspective | [
"Ian Pons",
"Bruno Yamamoto",
"Anna Helena Reali Costa",
"Artur Jordao"
] | Deep neural networks have been the predominant paradigm in machine learning for solving cognitive tasks. Such models, however, are restricted by a high computational overhead, limiting their applicability and hindering advancements in the field. Extensive research demonstrated that pruning structures from these models is a straightforward approach to reducing network complexity. In this direction, most efforts focus on removing weights or filters. Studies have also been devoted to layer pruning as it promotes superior computational gains. However, layer pruning often hurts the network predictive ability (i.e., accuracy) at high compression rates. This work introduces an effective layer-pruning strategy that meets all underlying properties pursued by pruning methods. Our method estimates the relative importance of a layer using the Centered Kernel Alignment (CKA) metric, employed to measure the similarity between the representations of the unpruned model and a candidate layer for pruning. We confirm the effectiveness of our method on standard architectures and benchmarks, in which it outperforms existing layer-pruning strategies and other state-of-the-art pruning techniques. Particularly, we remove more than 75% of computation while improving predictive ability. At higher compression regimes, our method exhibits negligible accuracy drop, while other methods notably deteriorate model accuracy. Apart from these benefits, our pruned models exhibit robustness to adversarial and out-of-distribution samples. | [
"Layer Pruning",
"Similarity Metric",
"Effcient Deep Learning"
] | https://openreview.net/pdf?id=7DPNITf7ui | SFj1O1SKiM | decision | 1,718,724,799,434 | 7DPNITf7ui | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
7DPNITf7ui | Effective Layer Pruning Through Similarity Metric Perspective | [
"Ian Pons",
"Bruno Yamamoto",
"Anna Helena Reali Costa",
"Artur Jordao"
] | Deep neural networks have been the predominant paradigm in machine learning for solving cognitive tasks. Such models, however, are restricted by a high computational overhead, limiting their applicability and hindering advancements in the field. Extensive research demonstrated that pruning structures from these models is a straightforward approach to reducing network complexity. In this direction, most efforts focus on removing weights or filters. Studies have also been devoted to layer pruning as it promotes superior computational gains. However, layer pruning often hurts the network predictive ability (i.e., accuracy) at high compression rates. This work introduces an effective layer-pruning strategy that meets all underlying properties pursued by pruning methods. Our method estimates the relative importance of a layer using the Centered Kernel Alignment (CKA) metric, employed to measure the similarity between the representations of the unpruned model and a candidate layer for pruning. We confirm the effectiveness of our method on standard architectures and benchmarks, in which it outperforms existing layer-pruning strategies and other state-of-the-art pruning techniques. Particularly, we remove more than 75% of computation while improving predictive ability. At higher compression regimes, our method exhibits negligible accuracy drop, while other methods notably deteriorate model accuracy. Apart from these benefits, our pruned models exhibit robustness to adversarial and out-of-distribution samples. | [
"Layer Pruning",
"Similarity Metric",
"Effcient Deep Learning"
] | https://openreview.net/pdf?id=7DPNITf7ui | FGc83J7H5n | official_review | 1,718,221,545,319 | 7DPNITf7ui | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission8/Reviewer_28X9"
] | title: Using CKA as a similarity metric in iterative layer pruning leads to accurate and robust networks.
summary: This paper explores the efficacy of using the Centered Kernel Alignment (CKA) metric (which closely borders standard correlation) as a means of measuring similarity between pruned and unpruned networks. In particular, the authors compute the CKA of the final hidden layer representation between a base and a pruned network, $N$ and $N'$, where $N'$ consists of one fewer layer than $N$.
In terms of the algorithm itself, the paper proposes a relatively simple strategy: 1) construct a set of candidate layers, 2) iterate through the set and compare the representations with the base network, 3) remove the layer corresponding to the lowest similarity metric ($1-CKA$), and 4) repeat until the desired number of layers is removed. It is shown through ample experimental results that this method results in better performance across a range of reduction values, as well as generalizing to more robust datasets (e.g., CIFAR-C).
strengths: - The paper is well-written, containing minimal errors and being considerably easy to follow.
- Tables and figures are clear and formatted well, making reading results easy.
- Despite its simplicity, the method is able to demonstrate strong results, outperforming many SOTA pruning techniques across a variety of metrics.
- The studies of adversarial robustness and latency, in addition to standard accuracy and FLOP reduction, make a strong case for the paper's efficacy.
weaknesses: - Using CKA as a metric for pruning is not a novel idea, and has been explored in the past in similar algorithms [1]. This should, at the very least, be noted in the Related Works section.
- Figure 1 seems a little cherry-picked, as one could simply take the worst-performing algorithm at every reduction level as a comparison point. For a stronger graph, I would suggest varying SOTA algorithms across reduction levels as the authors have done for their own method.
[1] Lachance, Alex, "Using Centered Kernel Alignment for Understanding and Pruning Neural Networks" (2022). Open Access Master's Theses. Paper 2283. https://digitalcommons.uri.edu/theses/2283
confidence: 4
limitations: - There is a potentially significant resource use that is not noted in the paper. That is, generating every representation requires two passes through the entire training sample $X$, and every layer is pruned iteratively. In pruning literature, these traits are described as 'data-free' and 'iterative-pruning.' Some methods compared against have the advantage of being data-independent (that is, no passes through samples are required) and non-iterative (one-shot pruning). This marks a limitation of the proposed algorithm.
suggestions: Overall, this was a strong paper and I commend the authors for this. Some potential suggestions are:
- Add the paragraph from the appendix about candidate layers to make clear that not all layers can be removed.
- Adjust figure 1 to account for reduction %s across methods rather than singular points at a particular reduction level. |
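The four-step procedure enumerated in the summary above translates directly into a short loop. In this sketch, `candidate_layers`, `remove_layer`, and `final_representation` are hypothetical helpers (and `linear_cka` is the standard linear-CKA function, as in the sketch accompanying the previous review); they are passed in explicitly to keep the example self-contained.

```python
def cka_layer_pruning(model, X, num_to_remove, candidate_layers, remove_layer,
                      final_representation, linear_cka):
    """Illustrative loop over the review's steps 1-4: score each candidate
    layer by 1 - CKA between the base network's final representation and that
    of the network with the layer dropped, then remove the best-scoring layer."""
    reference = final_representation(model, X)           # unpruned baseline
    for _ in range(num_to_remove):
        scores = {
            layer: 1.0 - linear_cka(reference,
                                    final_representation(remove_layer(model, layer), X))
            for layer in candidate_layers(model)
        }
        model = remove_layer(model, min(scores, key=scores.get))
    return model
```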
7DPNITf7ui | Effective Layer Pruning Through Similarity Metric Perspective | [
"Ian Pons",
"Bruno Yamamoto",
"Anna Helena Reali Costa",
"Artur Jordao"
] | Deep neural networks have been the predominant paradigm in machine learning for solving cognitive tasks. Such models, however, are restricted by a high computational overhead, limiting their applicability and hindering advancements in the field. Extensive research demonstrated that pruning structures from these models is a straightforward approach to reducing network complexity. In this direction, most efforts focus on removing weights or filters. Studies have also been devoted to layer pruning as it promotes superior computational gains. However, layer pruning often hurts the network predictive ability (i.e., accuracy) at high compression rates. This work introduces an effective layer-pruning strategy that meets all underlying properties pursued by pruning methods. Our method estimates the relative importance of a layer using the Centered Kernel Alignment (CKA) metric, employed to measure the similarity between the representations of the unpruned model and a candidate layer for pruning. We confirm the effectiveness of our method on standard architectures and benchmarks, in which it outperforms existing layer-pruning strategies and other state-of-the-art pruning techniques. Particularly, we remove more than 75% of computation while improving predictive ability. At higher compression regimes, our method exhibits negligible accuracy drop, while other methods notably deteriorate model accuracy. Apart from these benefits, our pruned models exhibit robustness to adversarial and out-of-distribution samples. | [
"Layer Pruning",
"Similarity Metric",
"Effcient Deep Learning"
] | https://openreview.net/pdf?id=7DPNITf7ui | BjxuOCJTeZ | meta_review | 1,718,679,328,506 | 7DPNITf7ui | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission8/Area_Chair_MCpC"
] | metareview: This work proposes a novel saliency criterion for layer/depth pruning, CKA, and demonstrates its effectiveness on a range of standard networks and benchmarks. The reviewers note the following high-level strengths and weaknesses.
* (+) Reviewers commend the relative simplicity of the approach; despite the simplicity, the approach appears to work well on real-world networks and tasks.
* (+) One reviewer points out the additional ablations performed for evaluating the effect of the method on adversarial robustness and latency.
* (+) Clear writing and presentation.
* (-) Multiple reviewers point out the lack of novelty of CKA, which is well-known in the literature. A clear comparison to related work would be helpful.
* (-) Relatively weaker approach compared to single-shot (non-iterative) data-independent pruning approaches, which may be more attractive for larger networks such as LLMs.
* (-) No clear explanation of why CKA works better than existing saliency metrics.
Overall, I recommend acceptance (poster). Authors, please incorporate the suggested changes made by reviewers.
recommendation: Accept (Poster)
confidence: 4 |
7DPNITf7ui | Effective Layer Pruning Through Similarity Metric Perspective | [
"Ian Pons",
"Bruno Yamamoto",
"Anna Helena Reali Costa",
"Artur Jordao"
] | Deep neural networks have been the predominant paradigm in machine learning for solving cognitive tasks. Such models, however, are restricted by a high computational overhead, limiting their applicability and hindering advancements in the field. Extensive research demonstrated that pruning structures from these models is a straightforward approach to reducing network complexity. In this direction, most efforts focus on removing weights or filters. Studies have also been devoted to layer pruning as it promotes superior computational gains. However, layer pruning often hurts the network predictive ability (i.e., accuracy) at high compression rates. This work introduces an effective layer-pruning strategy that meets all underlying properties pursued by pruning methods. Our method estimates the relative importance of a layer using the Centered Kernel Alignment (CKA) metric, employed to measure the similarity between the representations of the unpruned model and a candidate layer for pruning. We confirm the effectiveness of our method on standard architectures and benchmarks, in which it outperforms existing layer-pruning strategies and other state-of-the-art pruning techniques. Particularly, we remove more than 75% of computation while improving predictive ability. At higher compression regimes, our method exhibits negligible accuracy drop, while other methods notably deteriorate model accuracy. Apart from these benefits, our pruned models exhibit robustness to adversarial and out-of-distribution samples. | [
"Layer Pruning",
"Similarity Metric",
"Effcient Deep Learning"
] | https://openreview.net/pdf?id=7DPNITf7ui | 8xD1W1pNeJ | official_review | 1,718,110,661,679 | 7DPNITf7ui | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission8/Reviewer_VKxZ"
] | title: This paper revisits layer pruning granularity and proposes a new layer pruning algorithm using a novel criterion.
summary: This paper revisits the layer-pruning granularity and proposes a new layer-pruning algorithm using a novel criterion, CKA. The authors argue that this work is the first paper to bring CKA, most widely used for comparing network representations, into the pruning literature. They show their proposed CKA method achieves better accuracy retention while reducing more FLOPs across different model-dataset pairs, and at the same time achieves better wall-clock speedup compared to methods at the filter-pruning granularity. They also show that their method can be used jointly with pruning methods at other granularities (CKA layer pruning + Lp-norm filter pruning) to produce even better results.
Overall score: Borderline accept
strengths: Quality:
The storytelling and illustration of the algorithm are easy to follow.
Significance:
Revisits the layer-pruning granularity and brings it back to the pruning community, highlighting its neglected potential.
First work to consider CKA as a pruning criterion.
weaknesses: Creativity:
Bringing the layer-pruning granularity back into the recent literature can be interesting, but applying an existing criterion (even one from outside this field) to an existing pruning granularity is somewhat incremental.
Comprehensiveness of the experiments:
Although the reviewer admits that it is hard to make a hundred-percent fair comparison between different pruning techniques in terms of the same:
1. Pretrained baseline model accuracy,
2. Training/fine-tuning pipeline setting,
3. Total cost/budget,
it is important to make everything as transparent as possible.
The authors do not report the pretrained model performance, the training/fine-tuning budget (in terms of epochs), the training/fine-tuning hyperparameters, etc. While it is OK to compare the accuracy change between different methods, it is not fair if the pretrained model performance is weak or not comparable, or if the total budget differs significantly from the other compared methods.
Lack of a speed test of the pruning procedure.
confidence: 4 |
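For completeness on the CKA + Lp-norm combination mentioned above, here is the standard per-filter l1-norm criterion used in filter pruning, as a short PyTorch sketch (a generic formulation, not tied to this paper's exact setup).

```python
import torch

def l1_filter_scores(conv_weight: torch.Tensor) -> torch.Tensor:
    """Per-filter L1 norms for a conv weight of shape (out, in, kh, kw);
    the smallest-norm filters are the usual pruning candidates."""
    return conv_weight.abs().sum(dim=(1, 2, 3))

scores = l1_filter_scores(torch.randn(16, 8, 3, 3))
prune_order = torch.argsort(scores)   # smallest-norm filters first
```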
4pPYdiTvoz | Model-Agnostic Graph Dataset Compression with the Tree Mover’s Distance | [
"Mika Sarkin Jain",
"Stefanie Jegelka",
"Ishani Karmarkar",
"Luana Ruiz",
"Ellen Vitercik"
] | Graph neural networks have demonstrated remarkable success across a variety of domains. However, the acquisition and management of large-scale graph datasets pose several challenges. Acquiring graph-level labels can be prohibitively costly, especially for applications in the biosciences and combinatorial optimization. Storage and privacy constraints can pose additional challenges. In this work, we propose an approach for data subset selection for graph datasets, which downsamples graphs and nodes based on the Tree Mover’s Distance. We provide new efficient methods for computing the TMD in our setting; empirical results showing our approach outperforms other node and graph sampling methods; and theoretical results bounding the decrease in accuracy caused by training on the downsampled graphs. Surprisingly, we find that with our method, we can subsample down to 1% of the number of graphs and 10% of the number of nodes on some datasets, with minimal degradation in model accuracy. | [
"graph neural networks",
"tree mover's distance",
"graph classification"
] | https://openreview.net/pdf?id=4pPYdiTvoz | vXOLKsPShU | official_review | 1,718,360,282,170 | 4pPYdiTvoz | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission44/Reviewer_i5fu"
] | title: Interesting work for decreasing graph dataset size for efficient GNN training
summary: The authors present *model-agnostic* Graph Neural Network (GNN) approaches for *both* (1) node subsampling and (2) graph dataset subsampling, which have a *bounded GNN prediction accuracy decrease*, by developing an algorithm for selecting a subgraph with minimum Tree Mover's Distance (TMD). The authors prove that finding the optimal node subset, i.e. a subset with minimal TMD, is equivalent to finding a subset with maximum tree norm, which can be solved faster since computing the norm takes linear runtime compared to the polynomial runtime of computing the TMD. The experiments show that the proposed method preserves GNN accuracy better or nearly as well (and more consistently) with respect to the fraction of the subsampled training set, compared to existing methods.
strengths: The paper is well structured and presents clearly to the reader 1) the research question/context, 2) the goal, 3) the challenges/problem, and 4) the proposed solutions. The paper fits well in the context of the workshop, as it is related to efficient training of GNNs by compressing the dataset.
Though I did not fact-check the theoretical results, as I am not familiar with graph theory, it looks like a lot of work has been done and presented to the reader (proofs, lemmas, etc.).
The empirical accuracy preserving results are interesting and important.
weaknesses: My main concern is the experimental/comparison part of the paper. I would have liked to see an empirical runtime comparison of the methods, both the algorithms' runtime and the downstream GNN training runtime. Since the accuracy-preserving results are positive but not *extremely* significant (in some cases the method works only as well as random sampling), this would be a valuable addition.
Besides that, though the choice of the methods to compare with is well motivated (e.g., the methods are model-agnostic), it would still be interesting to see how big the gap is in accuracy/runtime compared to non-agnostic approaches (the ones mentioned in the related work section).
confidence: 2
suggestions: If it is possible to share some empirical runtime comparisons as plots, that would be great. |
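As a tiny illustration of the equivalence stated in the summary above (the subset with minimal TMD to the original graph is the one with maximal tree norm), here is a brute-force Python sketch; `tree_norm` is an assumed callable standing in for the paper's linear-time norm computation, and a real implementation would use the paper's efficient algorithm rather than enumeration.

```python
from itertools import combinations

def best_node_subset(nodes, k, tree_norm):
    """Exhaustively pick the k-node subset maximizing the tree norm, i.e.
    (per the reviewed result) the subset with minimal TMD to the original
    graph. Exponential in len(nodes); for illustration only."""
    return max(combinations(nodes, k), key=tree_norm)
```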
4pPYdiTvoz | Model-Agnostic Graph Dataset Compression with the Tree Mover’s Distance | [
"Mika Sarkin Jain",
"Stefanie Jegelka",
"Ishani Karmarkar",
"Luana Ruiz",
"Ellen Vitercik"
] | Graph neural networks have demonstrated remarkable success across a variety of domains. However, the acquisition and management of large-scale graph datasets pose several challenges. Acquiring graph-level labels can be prohibitively costly, especially for applications in the biosciences and combinatorial optimization. Storage and privacy constraints can pose additional challenges. In this work, we propose an approach for data subset selection for graph datasets, which downsamples graphs and nodes based on the Tree Mover’s Distance. We provide new efficient methods for computing the TMD in our setting; empirical results showing our approach outperforms other node and graph sampling methods; and theoretical results bounding the decrease in accuracy caused by training on the downsampled graphs. Surprisingly, we find that with our method, we can subsample down to 1% of the number of graphs and 10% of the number of nodes on some datasets, with minimal degradation in model accuracy. | [
"graph neural networks",
"tree mover's distance",
"graph classification"
] | https://openreview.net/pdf?id=4pPYdiTvoz | f21VZkaspS | official_review | 1,718,196,386,482 | 4pPYdiTvoz | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission44/Reviewer_wdee"
] | title: Model-Agnostic Graph Dataset Compression with the Tree Mover’s Distance
summary: This paper proposes an approach to reduce the size of large graph datasets while maintaining the learning model's performance. Their approach, making use of the Tree Mover's Distance (TMD) between the original graph and subsampled graphs, is based on a recursive computation of the cost measuring the TMD between graphs.
strengths: - The algorithm is based on a recursive measure of the distance between graphs, which helps prove theorems easily by induction.
- Strong empirical arguments support the proposed approach.
weaknesses: - Experiments were conducted on small graph architectures, which does not address the scalability problem that is a key motivation for the paper.
- I am not convinced by some induction proofs, namely the proof of Lemma 4.2: using the result of Lemma 4.1 requires the set $Z$ to be defined as $\{z \in V: z \in T, z \notin S \}$, which is itself contradictory given the fact that $T \subseteq S$; reading the proof gives me the intuition that there is an inequality instead of an equality.
- Recursively defining the cost leads to the intractability of the proposed algorithm; this point was not discussed in the paper.
- In Remark 4.6, you assume knowing $S$ in advance. Isn't this similar to the strong assumption of knowing the downstream task? Also, defining $S$ by a set of k-BFS sets seems to work only for small architectures.
confidence: 4
suggestions: - It would be interesting to add experimental results for large models to truly evaluate the proposed graph compression's performance.
- I'd suggest studying the hardness of the proposed algorithm rigorously: starting with an intractable algorithm, evaluating its hardness, and seeing the degrees of freedom we can play with to make it tractable (e.g. randomize a deterministic intractable algorithm). |
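One plausible reading of the 'k-BFS sets' mentioned in the review above is the set of nodes within k hops of a root node; here is a small self-contained sketch under that assumption (the paper's precise definition may differ).

```python
from collections import deque

def k_bfs_set(adj, root, k):
    """Nodes reachable from `root` within k hops; `adj` maps a node to an
    iterable of its neighbours."""
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# Example on a path graph 0-1-2-3: nodes within 2 hops of 0 are {0, 1, 2}.
print(k_bfs_set({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, root=0, k=2))
```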
4pPYdiTvoz | Model-Agnostic Graph Dataset Compression with the Tree Mover’s Distance | [
"Mika Sarkin Jain",
"Stefanie Jegelka",
"Ishani Karmarkar",
"Luana Ruiz",
"Ellen Vitercik"
] | Graph neural networks have demonstrated remarkable success across a variety of domains. However, the acquisition and management of large-scale graph datasets pose several challenges. Acquiring graph-level labels can be prohibitively costly, especially for applications in the biosciences and combinatorial optimization. Storage and privacy constraints can pose additional challenges. In this work, we propose an approach for data subset selection for graph datasets, which downsamples graphs and nodes based on the Tree Mover’s Distance. We provide new efficient methods for computing the TMD in our setting; empirical results showing our approach outperforms other node and graph sampling methods; and theoretical results bounding the decrease in accuracy caused by training on the downsampled graphs. Surprisingly, we find that with our method, we can subsample down to 1% of the number of graphs and 10% of the number of nodes on some datasets, with minimal degradation in model accuracy. | [
"graph neural networks",
"tree mover's distance",
"graph classification"
] | https://openreview.net/pdf?id=4pPYdiTvoz | Tm906jyTz8 | meta_review | 1,718,628,744,110 | 4pPYdiTvoz | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission44/Area_Chair_EmTP"
] | metareview: ## Strengths
* The paper is well presented
* Theoretical results provide insights about simplifying computations of TMD between a graph and its subgraph
* Empirical results are interesting and important, but mostly for graph sampling
## Weaknesses
* Notation-heavy paper, it might be possible to make the presentation simpler
* Experimental results do not contain any discussion about the computing requirement for doing the
sampling.
* Experimental results for node sampling (which represents most of the theoretical work) are not so convincing. It would be interesting to evaluate the Lipschitz coefficients of the considered networks, to assess how strong the theoretical bounds are.
The general sentiment about this paper is mixed. I recommend acceptance as a poster, provided that the authors refine their experimental evaluation.
recommendation: Accept (Poster)
confidence: 3 |
4pPYdiTvoz | Model-Agnostic Graph Dataset Compression with the Tree Mover’s Distance | [
"Mika Sarkin Jain",
"Stefanie Jegelka",
"Ishani Karmarkar",
"Luana Ruiz",
"Ellen Vitercik"
] | Graph neural networks have demonstrated remarkable success across a variety of domains. However, the acquisition and management of large-scale graph datasets pose several challenges. Acquiring graph-level labels can be prohibitively costly, especially for applications in the biosciences and combinatorial optimization. Storage and privacy constraints can pose additional challenges. In this work, we propose an approach for data subset selection for graph datasets, which downsamples graphs and nodes based on the Tree Mover’s Distance. We provide new efficient methods for computing the TMD in our setting; empirical results showing our approach outperforms other node and graph sampling methods; and theoretical results bounding the decrease in accuracy caused by training on the downsampled graphs. Surprisingly, we find that with our method, we can subsample down to 1% of the number of graphs and 10% of the number of nodes on some datasets, with minimal degradation in model accuracy. | [
"graph neural networks",
"tree mover's distance",
"graph classification"
] | https://openreview.net/pdf?id=4pPYdiTvoz | 5WjHlOtWgV | decision | 1,718,650,830,722 | 4pPYdiTvoz | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
4IWCHWlb6K | Scalify: scale propagation for efficient low-precision LLM training | [
"Paul Balanca",
"Samuel Hosegood",
"Carlo Luschi",
"Andrew W Fitzgibbon"
] | Low-precision formats such as float8 have been introduced in machine learning accelerated hardware to improve computational efficiency for large language model training and inference. Nevertheless, adoption by the ML community has been slowed down by the complex, and sometimes brittle, techniques required to match higher precision training accuracy. In this work, we present Scalify, an end-to-end scale propagation paradigm for computational graphs, generalizing and formalizing existing tensor scaling methods. Experimental results show that Scalify supports out-of-the-box float8 matrix multiplication and gradients representation, as well as float16 optimizer state storage. Our JAX
implementation of Scalify is open-sourced at [github.com/graphcore-research/jax-scalify](https://github.com/graphcore-research/jax-scalify). | [
"llm",
"fp8",
"training",
"low precision"
] | https://openreview.net/pdf?id=4IWCHWlb6K | gGCkd6uCFE | official_review | 1,718,167,548,439 | 4IWCHWlb6K | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission24/Reviewer_FXPW"
] | title: Official review of Submission24
summary: The paper presents a principled way of unifying different low precision training methods under a single paradigm which they call Scalify. It is currently enabled with Jax, with the authors mentioning a similar potential for enabling this in PyTorch via aten ops. The authors present principled methods to enable scaling of tensors in low-precision regimes for both 16-bit and 8-bit training. For 8-bit training, they build on top of unit scaling from Blake et al, and also show ways to unify with the recently proposed OCP formats and more recent work on FP8 training from Peng et al. Another aspect the authors point out is the simplification from writing explicit kernels to integrating the method into the framework itself.
Based on the coherence of the paper and its relevance to training in lower precisions for scaling to larger models, I'd recommend an accept.
strengths: 1. The paper presents the Scalify approach, which is straightforward and easy to integrate with existing libraries such as Jax / PyTorch.
2. The authors provide thorough explanations for the different components for training with the Scalify approach and also mention how existing training workflows can be integrated with Scalify (algo. 1) to enable training with different precisions.
3. The use of unit scaling as a baseline to improve on top of is good, as it provides a more principled approach to integrating their scaling methodology.
4. The paper clearly states when to enable further customizations for normalizations when using lower precision, providing better efficiency + stability for training.
5. The paper also provides primitives for low precision addition, which is useful in the case of biases and residuals (if needed).
6. Finally, they show that combining dynamic per-tensor scaling through the use of two functions makes it easier to handle, and that the scaling can be kept optional in the main training loop (last line of Algo 1; see the sketch below). An interesting experiment the authors may have considered, though, is to show an example of a run with / without dynamic tensor scaling to demonstrate the strengths of their approach.
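To illustrate what such an optional rescaling step could look like, here is a generic sketch (my own illustration; this is not the paper's actual pair of functions nor the jax-scalify API):

```python
import jax.numpy as jnp

def dynamic_rescale(values: jnp.ndarray, scale: jnp.ndarray):
    """Fold the observed magnitude of `values` into `scale`, returning
    the payload to roughly unit scale; a no-op on the represented tensor."""
    s = jnp.maximum(jnp.std(values), 1e-8)
    return values / s, scale * s
```

Because the represented tensor (values * scale) is unchanged, such a call can be inserted into or dropped from the training loop freely, matching the "optional" behavior described above.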
weaknesses: 1. While this is primarily a system + framework paper with the focus on unification of low-precision methods, here are some things I wish the authors presented better:
a. There are very few results. I understand that LLM results can be bottlenecked on compute, but it would have been good to understand how the proposed methods behave on a model at least one order of magnitude larger (~1B params) and whether the approach would scale.
b. The results for low-precision training (FP8 #3 / #4) show some minor degradation in loss (though within the std. deviation of the fp32 baseline); some ICL evaluation would have given a better sense of how the memory + efficiency savings translate to downstream tasks.
c. Given that their methodology provides a systematic way of scaling without gathering statistics, it would have been good to understand, even for their existing training runs, the memory requirements for storing scales across all arrays and any potential speedup from cycles not spent on statistics gathering.
confidence: 4
limitations: One main limitation is that implementing Scalify requires deep knowledge at the framework level (for example, aten ops are not something the everyday practitioner works with when using PyTorch). While there is no incentive for the authors, they might consider releasing their code, allowing at least users of Jax to take advantage of the features the library provides. |
4IWCHWlb6K | Scalify: scale propagation for efficient low-precision LLM training | [
"Paul Balanca",
"Samuel Hosegood",
"Carlo Luschi",
"Andrew W Fitzgibbon"
] | Low-precision formats such as float8 have been introduced in machine learning accelerated hardware to improve computational efficiency for large language model training and inference. Nevertheless, adoption by the ML community has been slowed down by the complex, and sometimes brittle, techniques required to match higher precision training accuracy. In this work, we present Scalify, an end-to-end scale propagation paradigm for computational graphs, generalizing and formalizing existing tensor scaling methods. Experimental results show that Scalify supports out-of-the-box float8 matrix multiplication and gradients representation, as well as float16 optimizer state storage. Our JAX
implementation of Scalify is open-sourced at [github.com/graphcore-research/jax-scalify](https://github.com/graphcore-research/jax-scalify). | [
"llm",
"fp8",
"training",
"low precision"
] | https://openreview.net/pdf?id=4IWCHWlb6K | OjDfx8G5dj | official_review | 1,718,353,488,069 | 4IWCHWlb6K | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission24/Reviewer_enGd"
] | title: Review for Scalify: scale propagation for efficient low-precision LLM training
summary: This paper introduces Scalify, an end-to-end paradigm for scale propagation in computational graphs, which generalizes and formalizes current tensor scaling techniques. Experimental results demonstrate that Scalify enables seamless use of float8 for matrix multiplication and gradient representation, as well as float16 for storing optimizer states.
strengths: The algorithms are described clearly, and the experimental results support the conclusions.
weaknesses: Limitations of the method are not discussed much.
confidence: 1 |
4IWCHWlb6K | Scalify: scale propagation for efficient low-precision LLM training | [
"Paul Balanca",
"Samuel Hosegood",
"Carlo Luschi",
"Andrew W Fitzgibbon"
] | Low-precision formats such as float8 have been introduced in machine learning accelerated hardware to improve computational efficiency for large language model training and inference. Nevertheless, adoption by the ML community has been slowed down by the complex, and sometimes brittle, techniques required to match higher precision training accuracy. In this work, we present Scalify, an end-to-end scale propagation paradigm for computational graphs, generalizing and formalizing existing tensor scaling methods. Experimental results show that Scalify supports out-of-the-box float8 matrix multiplication and gradients representation, as well as float16 optimizer state storage. Our JAX
implementation of Scalify is open-sourced at [github.com/graphcore-research/jax-scalify](https://github.com/graphcore-research/jax-scalify). | [
"llm",
"fp8",
"training",
"low precision"
] | https://openreview.net/pdf?id=4IWCHWlb6K | 98VTcZvocs | official_review | 1,718,022,115,916 | 4IWCHWlb6K | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission24/Reviewer_mP1C"
] | title: Official Review
summary: The paper presents a useful tool for floating-point quantization, where tensors are represented using a value tensor and a shared scales vector.
It covers technical details for the underlying implementation of fundamental operations, and explains how this tool can be used in practice to simplify the workflow with FP8 models.
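For readers unfamiliar with this representation, a minimal sketch of the idea (my own illustration, not the actual jax-scalify API; the `ScaledArray` name and the sqrt(k) propagation rule are assumptions):

```python
import jax.numpy as jnp
from typing import NamedTuple

class ScaledArray(NamedTuple):
    values: jnp.ndarray  # low-precision payload, e.g. float8/float16
    scale: jnp.ndarray   # shared scale kept in higher precision

    def to_dense(self) -> jnp.ndarray:
        return self.values.astype(jnp.float32) * self.scale

def scaled_matmul(a: ScaledArray, b: ScaledArray) -> ScaledArray:
    k = a.values.shape[-1]
    # Propagate scales analytically: dividing the payload by sqrt(k)
    # keeps it near unit variance, while the true magnitude is tracked
    # in the (cheap, higher-precision) scale term.
    return ScaledArray((a.values @ b.values) / jnp.sqrt(k),
                       a.scale * b.scale * jnp.sqrt(k))
```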
strengths: * The work is useful: it addresses a genuine missing component for FP8 quantization in software. Several works have used FP8 with scaling, but most have re-implemented it themselves without clear standards. I can see myself using this tool in the future.
* Detailed: It is very common for FP8 works to suggest scaling as an afterthought, without discussing the implications for computation. This work is helpful in delving into the hidden details of the scaling operations, which have more complexity than others would suggest.
weaknesses: * As a paper describing a practical tool, the work is not, by itself, novel.
* The experimental section lacks enough details to put it in context. For example, I am not sure whether the experiment uses scalar, channel, or MX scaling; all were reported to work well when the tool was explained in previous sections.
confidence: 4
suggestions: * From my experience (and some works), E3M4 can surpass E4M3 when properly scaled, especially in transformer-based networks. I wouldn't brush it aside. |
4IWCHWlb6K | Scalify: scale propagation for efficient low-precision LLM training | [
"Paul Balanca",
"Samuel Hosegood",
"Carlo Luschi",
"Andrew W Fitzgibbon"
] | Low-precision formats such as float8 have been introduced in machine learning accelerated hardware to improve computational efficiency for large language model training and inference. Nevertheless, adoption by the ML community has been slowed down by the complex, and sometimes brittle, techniques required to match higher precision training accuracy. In this work, we present Scalify, an end-to-end scale propagation paradigm for computational graphs, generalizing and formalizing existing tensor scaling methods. Experimental results show that Scalify supports out-of-the-box float8 matrix multiplication and gradients representation, as well as float16 optimizer state storage. Our JAX
implementation of Scalify is open-sourced at [github.com/graphcore-research/jax-scalify](https://github.com/graphcore-research/jax-scalify). | [
"llm",
"fp8",
"training",
"low precision"
] | https://openreview.net/pdf?id=4IWCHWlb6K | 8mlvIZVPMb | decision | 1,718,651,168,767 | 4IWCHWlb6K | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
4IWCHWlb6K | Scalify: scale propagation for efficient low-precision LLM training | [
"Paul Balanca",
"Samuel Hosegood",
"Carlo Luschi",
"Andrew W Fitzgibbon"
] | Low-precision formats such as float8 have been introduced in machine learning accelerated hardware to improve computational efficiency for large language model training and inference. Nevertheless, adoption by the ML community has been slowed down by the complex, and sometimes brittle, techniques required to match higher precision training accuracy. In this work, we present Scalify, an end-to-end scale propagation paradigm for computational graphs, generalizing and formalizing existing tensor scaling methods. Experimental results show that Scalify supports out-of-the-box float8 matrix multiplication and gradients representation, as well as float16 optimizer state storage. Our JAX
implementation of Scalify is open-sourced at [github.com/graphcore-research/jax-scalify](https://github.com/graphcore-research/jax-scalify). | [
"llm",
"fp8",
"training",
"low precision"
] | https://openreview.net/pdf?id=4IWCHWlb6K | 2rpDOBBQdI | meta_review | 1,718,410,426,304 | 4IWCHWlb6K | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission24/Area_Chair_Gwi2"
] | metareview: I recommend acceptance.
To encourage more researchers to use Scalify, I encourage the authors to:
1. Discuss and show more results in the future, especially the large model runs.
2. Discuss precision w.r.t. attention mechanisms (standard softmax attention and newer attention variants, e.g. Linear Attention, Mamba, etc.), as these linear attentions are more sensitive to precision issues.
3. Discuss whether Scalify can handle training/evaluation in the case of state sharding (especially with tensor parallelism) without too much extra effort.
recommendation: Accept (Poster)
confidence: 4 |
44NKKzz1n5 | u-μP: The Unit-Scaled Maximal Update Parametrization | [
"Charlie Blake",
"Constantin Eichenberg",
"Josef Dean",
"Lukas Balles",
"Luke Yuri Prince",
"Björn Deiseroth",
"Andres Felipe Cruz-Salinas",
"Carlo Luschi",
"Samuel Weinbach",
"Douglas Orr"
] | The recent Maximal Update Parametrization (µP) enables the hyperparameters for small models to transfer directly to large ones, substantially reducing the cost of training by avoiding expensive sweeps at scale. We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision. The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that the starting-scale of these activations is one (along with weights and gradients). This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. | [
"maximal update parametrization",
"learning dynamics",
"hyperparameter transfer",
"efficiency",
"training",
"stability",
"scaling",
"numerics",
"fp8",
"low precision"
] | https://openreview.net/pdf?id=44NKKzz1n5 | UFTwJNxLAm | meta_review | 1,718,649,961,503 | 44NKKzz1n5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission30/Area_Chair_3iEh"
] | metareview: This work proposes a new model parametrization scheme that combines Maximal Update Parametrization and Unit Scaling, combining the advantages of hyperparameter transfer and stable training in FP8 precision. Reviewers have appreciated the theoretical analysis of the method, rigorous experiments conducted to validate the approach, and the clarity of presentation. Most of the concerns expressed in reviews were relatively minor and refer mostly to additional analysis or clarifications of specific design choices, which can be done for the version of the work submitted to an archival venue. Thus, I recommend accepting this paper to the WANT workshop.
recommendation: Accept (Poster)
confidence: 4 |
44NKKzz1n5 | u-μP: The Unit-Scaled Maximal Update Parametrization | [
"Charlie Blake",
"Constantin Eichenberg",
"Josef Dean",
"Lukas Balles",
"Luke Yuri Prince",
"Björn Deiseroth",
"Andres Felipe Cruz-Salinas",
"Carlo Luschi",
"Samuel Weinbach",
"Douglas Orr"
] | The recent Maximal Update Parametrization (µP) enables the hyperparameters for small models to transfer directly to large ones, substantially reducing the cost of training by avoiding expensive sweeps at scale. We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision. The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that the starting-scale of these activations is one (along with weights and gradients). This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. | [
"maximal update parametrization",
"learning dynamics",
"hyperparameter transfer",
"efficiency",
"training",
"stability",
"scaling",
"numerics",
"fp8",
"low precision"
] | https://openreview.net/pdf?id=44NKKzz1n5 | ScxjDD1U24 | decision | 1,718,722,197,179 | 44NKKzz1n5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Program_Chairs"
] | decision: Accept (Poster)
comment: We thank the authors for their time and contribution to WANT and we are pleased to share that after the reviewing process the paper has been accepted. Congratulations! We encourage the authors to consider reviewers' feedback for the improvement of the camera-ready version. We hope to see you in person at the workshop and brainstorm on efficient training research together!
title: Paper Decision |
44NKKzz1n5 | u-μP: The Unit-Scaled Maximal Update Parametrization | [
"Charlie Blake",
"Constantin Eichenberg",
"Josef Dean",
"Lukas Balles",
"Luke Yuri Prince",
"Björn Deiseroth",
"Andres Felipe Cruz-Salinas",
"Carlo Luschi",
"Samuel Weinbach",
"Douglas Orr"
] | The recent Maximal Update Parametrization (µP) enables the hyperparameters for small models to transfer directly to large ones, substantially reducing the cost of training by avoiding expensive sweeps at scale. We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision. The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that the starting-scale of these activations is one (along with weights and gradients). This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. | [
"maximal update parametrization",
"learning dynamics",
"hyperparameter transfer",
"efficiency",
"training",
"stability",
"scaling",
"numerics",
"fp8",
"low precision"
] | https://openreview.net/pdf?id=44NKKzz1n5 | ADkTOUtjai | official_review | 1,718,104,796,223 | 44NKKzz1n5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission30/Reviewer_TpYS"
] | title: Simpler, More Efficient Hyperparameter Sweeping
summary: The paper presents a model parameterisation scheme called "u-µP", which combines the principles of Maximal Update Parameterisation (µP), a theoretical basis for optimal model parameterisation, with Unit Scaling, a weight scaling method originally suited to low-precision quantisation. Though the two appear somewhat orthogonal in their use cases, the authors propose that they can be combined to create an effective model parameterisation, performing better than the original µP out of the box while retaining the key property of transferring between models of different widths. This is done by combining the assumptions of both methods to obtain a particular instance of µP parameterisation.
strengths: * Solid and well explained theoretical basis, building on recent research in model parameterisation and theories of model training.
* Evidently useful for anyone combining low precision (FP8) with hyperparameter sweeping, both popular and useful techniques.
* Interesting observations when comparing sweeping results between µP and u-µP, and good discussion of these differences.
* Experiments seem to suggest the scheme performs better with independent search, which is promising as this reduces expense.
* Appreciated level of detail regarding the implementations of scaling with different model layers.
weaknesses: * Somewhat limited evidence of u-µP's improvements over µP when it comes to full-precision training. It is expected that the inclusion of Unit Scaling would improve performance in an FP8 context, as shown, but I would be interested in more data to discern whether there are further benefits.
* The authors chose to perform their experiments on the Llama architecture, which is a good choice, though it would be good to see how it applies to other models and datasets.
* It was observed that u-µP performs well in independent sweeping compared to µP, but there seems to be a lack of explanation or discussion on why this might be the case.
* Experiments, while promising, could be more thorough and compelling, in order to make clear the benefits of choosing u-µP for parameterisation.
confidence: 4 |
44NKKzz1n5 | u-μP: The Unit-Scaled Maximal Update Parametrization | [
"Charlie Blake",
"Constantin Eichenberg",
"Josef Dean",
"Lukas Balles",
"Luke Yuri Prince",
"Björn Deiseroth",
"Andres Felipe Cruz-Salinas",
"Carlo Luschi",
"Samuel Weinbach",
"Douglas Orr"
] | The recent Maximal Update Parametrization (µP) enables the hyperparameters for small models to transfer directly to large ones, substantially reducing the cost of training by avoiding expensive sweeps at scale. We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision. The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that the starting-scale of these activations is one (along with weights and gradients). This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. | [
"maximal update parametrization",
"learning dynamics",
"hyperparameter transfer",
"efficiency",
"training",
"stability",
"scaling",
"numerics",
"fp8",
"low precision"
] | https://openreview.net/pdf?id=44NKKzz1n5 | 5WTVJG7Skd | official_review | 1,718,175,304,034 | 44NKKzz1n5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission30/Reviewer_cJ4s"
] | title: Official review of Submission30
summary: The paper presents a principled method to apply µP in conjunction with unit scaling (called u-µP). The paper showcases the benefits of merging the two independent strategies under a unified framework, enabling hyper-parameter transfer in low-precision training settings. It further shows how to move from an abc-parameterization to absolute scales (which is needed for unit scaling), which can be done via the abc framework that µP enables. The paper finally identifies how moving to unit scaling can change some fundamentals, especially when moving to more recent architectures such as Llama, and identifies necessary fixes for those, specifically in the residual stream and the normalization blocks.
The authors provide detailed proofs for most of their proposed scaling methods. Based on the coherency of the presented proofs and results, I recommend an accept for the paper.
strengths: 1. The paper identifies how to combine unit scaling and µP in a unified framework, enabling the properties of µP transfer for low-precision (FP8) training.
2. The paper presents principled proofs for most of the changes recommended to enable the combination of the ideas and delineates the base rules for the transfer clearly for practitioners in Table 1.
3. Through rigorous experiments, the authors show empirical validation for most of the proofs.
4. The authors combine depth + width scaling for µP in this paper, which enables depth transfer even with unit scaling enabled (as shown in Figure 5). However, if we refer to the original depth scaling paper from Yang et al. [1], it seems that, ideally, transformers should not exhibit good depth transfer due to the block_size >= 2 property of the blocks. On what basis do the authors assume that scaling with sqrt(base_depth)/depth gives the desired transfer properties? (See the note after this list.)
5. The paper gets rid of the dependency on base shapes - which often play a critical role in ensuring good transfer, but can also hinder transfer if the appropriate base widths are not considered.
[1] Tensor Programs VI: Feature Learning in Infinite-Depth Neural Networks (https://arxiv.org/abs/2310.02244)
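For context on the depth-scaling question in point 4, a common depth-µP-style rule (my paraphrase of the residual-branch rescaling idea in [1], not the submission's exact formula) is

$$x_{l+1} = x_l + \sqrt{\frac{L_{\text{base}}}{L}} \, f_l(x_l), \qquad l = 1, \dots, L,$$

so that each of the $L$ branch contributions carries an $L_{\text{base}}/L$ variance factor and the total variance accumulated across the residual stream stays roughly depth-independent.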
weaknesses: 1. While the authors recognize this, some of the math for alpha_attn / alpha_silu is handwavy, and it is difficult to understand how the particular constants were arrived at.
2. The authors identify that the output layers in both the attn / ffn sub-blocks see large growth in weight / gradient magnitudes and propose moving to E5M2 to handle those layers. I wonder whether a principled approach similar to Micikevicius et al. (E4M3 in the forward pass and E5M2 in the backward pass for all layers) would be better.
3. While the intention of enabling simpler hyper-params + low-precision training is understandable, there are many things to account for here to achieve good transfer, which makes me wonder whether there are reliable speedups to be gained from an implementation of the proposed method in actual runtime (not simulated, as the authors have currently done). Note that I'm not expecting the authors to provide any concrete numbers for this - but even high-level projections would be useful.
confidence: 4
suggestions: A naming suggestion for the method: Since this is strictly not a parametrization anymore, it seems like calling it u-µP, which captures the general intent, is also slightly misleading. One recommendation is to potentially name the method as µnit Scaling or µS (muScaling) for short. |
44NKKzz1n5 | u-μP: The Unit-Scaled Maximal Update Parametrization | [
"Charlie Blake",
"Constantin Eichenberg",
"Josef Dean",
"Lukas Balles",
"Luke Yuri Prince",
"Björn Deiseroth",
"Andres Felipe Cruz-Salinas",
"Carlo Luschi",
"Samuel Weinbach",
"Douglas Orr"
] | The recent Maximal Update Parametrization (µP) enables the hyperparameters for small models to transfer directly to large ones, substantially reducing the cost of training by avoiding expensive sweeps at scale. We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision. The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that the starting-scale of these activations is one (along with weights and gradients). This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. | [
"maximal update parametrization",
"learning dynamics",
"hyperparameter transfer",
"efficiency",
"training",
"stability",
"scaling",
"numerics",
"fp8",
"low precision"
] | https://openreview.net/pdf?id=44NKKzz1n5 | 1qcJicAASo | official_review | 1,717,968,995,646 | 44NKKzz1n5 | [
"everyone"
] | [
"ICML.cc/2024/Workshop/WANT/Submission30/Reviewer_9XQP"
] | title: Unit-Scaled Maximal Update Parametrization induces transferable hyperparameters in models while maintaining unit variance between passes, allowing for improved low-precision performance.
summary: This paper introduces *Unit-Scaled Maximal Update Parametrization* (u-μP), which builds upon Maximal Update Parametrization (Yang & Hu, 2021) by combining it with the philosophy of unit variance between weights, activations, and gradients proposed in Unit Scaling (Blake et al., 2023). This allows for the hyperparameter transfer properties of μP to carry over to models that utilize lower precisions, such as FP8 casts.
To achieve this, the authors provide two main contributions over μP: modifying its original scaling scheme to follow unit scaling and simplifying the set of hyperparameters. Additionally, the authors provide per-operation scaling rules for Llama-style architectures to support their experimental results.
strengths: - The paper is well-written, with minimal typos and a clear motivation.
- The paper demonstrates strong empirical results in HP-transfer with FP8 models (See Figs 1., 4.)
- Notable reductions in loss are also demonstrated when compared to its predecessor μP.
- Figures are well-designed and easy to read.
- The introduction and background sections are concise yet informative, and give the reader a solid grasp of the resultant topics.
- The interpretability strengths of the method highlighted at the end of Section 4 were interesting and informative about its value beyond experimental results.
weaknesses: Several things stand out:
- It is sometimes unclear whether a result is derived from a low- or full-precision setting (eg. Fig 5), making statements like "Our u-μP scheme is more principled than that used for μP,..." harder to discern.
- Low-precision experiments are only carried out in FP8 settings. Perhaps it would be advantageous to see if the unit-variance properties proposed also result in improvements in 4bit or even 3bit models.
- To maintain unit variance in dot-product attention, the authors scale the pre-softmax product by $\alpha_{attn}$. However, the original scaled dot-product attention (in theory) already accounts for unit variance by multiplying the dot-product by $\sqrt{d}^{-1}$. It would be informative to specify why μP's choice of scaling by $d^{-1}$ rather than $\sqrt{d}^{-1}$ was kept over returning to the traditional formula (see the note after this list).
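For reference, the usual variance argument behind the two choices (standard reasoning, not taken from the submission): with $q, k \in \mathbb{R}^d$ having i.i.d. zero-mean, unit-variance, independent entries at initialization,

$$\operatorname{Var}(q \cdot k) = \sum_{i=1}^{d} \operatorname{Var}(q_i k_i) = d,$$

so $\sqrt{d}^{-1}$ normalizes the logits at initialization. µP instead argues that, once training correlates $q$ and $k$ (as in the infinite-width limit), the dot product grows like $\Theta(d)$ rather than $\Theta(\sqrt{d})$, motivating the $d^{-1}$ factor.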
confidence: 3
limitations: - Past the scaling schematics outlined in Table 1, implementations of u-μP are model-specific and therefore rely on unique scales for every operation in a particular architecture. This makes the primary contribution of u-μP more of a general practice than a specific algorithm, and thus results on additional models across different architectures could be beneficial.
suggestions: Overall, this was a well-written and interesting paper, and I'd like to thank the authors for this. Following my praises and critiques, some possible improvements would be:
- Experiments in lower-precisions (eg. 4bit or 3bit) models.
- Experiments across varying model architectures.
- There is a minor typo on line 313 (*it's* instead of *its*). It would be good to sweep the paper once to ensure simple mistakes like these do not appear elsewhere. |
zdDCJh35aC | What can machines teach us in our journey of reproducing human scientific creativity? | [] | In the race toward creating a strong AI, we have historically focused on replicating human intelligence. For many advanced tasks such as language and image generation, complex classifications in fields such as medicine, computer vision and other sensor data in self-driving cars, we have been successful. However, for complex behaviours like creativity, we often deem machines incapable. Maybe we are desperate to have something of our own, that machines could never do. Maybe we are too prideful in our own intelligence. What if we were tasked to build a truly creative AI capable of intuition and insight? What should we consider? Would replicating human abilities be the best option, or could we make something even better?
This article holds a mirror up to us and explores scientific creativity. We first explore the many properties that may allow machines to surpass humans in creative insight, such as unbounded effort and lack of competition. We should exploit these, rather than limit them in the attempt to make AI more human-like. In the second half of this article, we realise there are many traits we have overlooked in ourselves, that we should strive to emulate in machines. There is no doubt that machines someday could mimic human creativity. The purpose of this reflection is to realise it is not about what we can build, but what we should build. | [
"scientific creativity"
] | https://openreview.net/pdf?id=zdDCJh35aC | d6eRpZnBGN | official_review | 1,734,736,084,473 | zdDCJh35aC | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission4/Reviewer_oxZB"
] | title: neurmad paper review
review: # Summary
This paper explores the potential of artificial intelligence to replicate or surpass human creativity. It identifies advantages machines possess, such as unbounded effort, lack of emotional bias, and detachment from competition, and contrasts these with uniquely human traits like embodied intelligence, collaboration, and intrinsic motivation. The paper argues that instead of merely mimicking human creativity, AI should be designed to exploit its unique strengths for innovation rather than replication. The paper also briefly discusses ethical considerations, including the implications of machine creativity on ownership and societal roles.
# Strengths
The paper offers a compelling conceptual framework for understanding machine creativity through the lens of both cognitive science and computational advantages. It effectively highlights the strengths of machines in areas where humans are limited, such as scalability and freedom from emotional interference. The discussion around leveraging these machine-specific traits is technically grounded, particularly in ideas like memory graph representations for creative insights and reward function optimization for analogical reasoning. Additionally, its critique of human limitations, such as biases introduced by competition or self-doubt, is well-argued and relevant to AI system design.
# Weaknesses
While the paper presents a strong theoretical narrative, it lacks empirical evidence to validate its claims. The proposed ideas, such as “artificial dreams” or AI societies of mind, remain speculative without detailed implementation strategies. The discussion on creativity metrics is insufficient, failing to address how “Big-C” creativity (paradigm-shifting innovations) could be systematically identified or evaluated in machines. Furthermore, the ethical implications, while noted, are underdeveloped, leaving key questions of accountability and societal impact unresolved.
Overall, the paper provides valuable insights and raises thought-provoking questions about AI creativity. However, its theoretical nature and limited attention to practical methodologies and metrics reduce its overall technical contribution. Addressing these gaps could significantly enhance its impact.
rating: 6
confidence: 3 |
zdDCJh35aC | What can machines teach us in our journey of reproducing human scientific creativity? | [] | In the race toward creating a strong AI, we have historically focused on replicating human intelligence. For many advanced tasks such as language and image generation, complex classifications in fields such as medicine, computer vision and other sensor data in self-driving cars, we have been successful. However, for complex behaviours like creativity, we often deem machines incapable. Maybe we are desperate to have something of our own, that machines could never do. Maybe we are too prideful in our own intelligence. What if we were tasked to build a truly creative AI capable of intuition and insight? What should we consider? Would replicating human abilities be the best option, or could we make something even better?
This article holds a mirror up to us and explores scientific creativity. We first explore the many properties that may allow machines to surpass humans in creative insight, such as unbounded effort and lack of competition. We should exploit these, rather than limit them in the attempt to make AI more human-like. In the second half of this article, we realise there are many traits we have overlooked in ourselves, that we should strive to emulate in machines. There is no doubt that machines someday could mimic human creativity. The purpose of this reflection is to realise it is not about what we can build, but what we should build. | [
"scientific creativity"
] | https://openreview.net/pdf?id=zdDCJh35aC | Q45i6wmqjV | official_review | 1,734,497,595,679 | zdDCJh35aC | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission4/Reviewer_CzZY"
] | title: paper review
review: ## Summary
The paper provides an introspective exploration of the potential for machines to exhibit scientific creativity, comparing and contrasting machine and human capabilities.
It highlights machine advantages such as unbounded effort, immunity to bias, and lack of emotional interference while recognizing human strengths like collaboration, embodied intelligence, and motivation.
The paper proposes a balanced approach to developing creative AI, urging us to consider not just what can be built, but what should be built. Through philosophical, technical, and ethical discussions, it challenges existing benchmarks and notions of creativity while envisioning a complementary role for AI in advancing human knowledge.
## Strengths
1. The writing style is engaging, using compelling analogies and historical examples like Henri Poincaré’s "bus moment".
2. The paper takes a comprehensive approach, combining insights from psychology, philosophy, neuroscience, and AI research. It discusses "Big-C" and "little-c" creativity.
3. The article offers a balanced examination of machine and human capabilities. It highlights areas where machines can surpass humans (e.g., lack of emotional attachment) while acknowledging the importance of human traits like collaboration.
4. The paper emphasizes the importance of creating AI systems that complement rather than replicate human intelligence, providing a clear and meaningful purpose for AI in scientific discovery.
rating: 7
confidence: 3 |
zdDCJh35aC | What can machines teach us in our journey of reproducing human scientific creativity? | [] | In the race toward creating a strong AI, we have historically focused on replicating human intelligence. For many advanced tasks such as language and image generation, complex classifications in fields such as medicine, computer vision and other sensor data in self-driving cars, we have been successful. However, for complex behaviours like creativity, we often deem machines incapable. Maybe we are desperate to have something of our own, that machines could never do. Maybe we are too prideful in our own intelligence. What if we were tasked to build a truly creative AI capable of intuition and insight? What should we consider? Would replicating human abilities be the best option, or could we make something even better?
This article holds a mirror up to us and explores scientific creativity. We first explore the many properties that may allow machines to surpass humans in creative insight, such as unbounded effort and lack of competition. We should exploit these, rather than limit them in the attempt to make AI more human-like. In the second half of this article, we realise there are many traits we have overlooked in ourselves, that we should strive to emulate in machines. There is no doubt that machines someday could mimic human creativity. The purpose of this reflection is to realise it is not about what we can build, but what we should build. | [
"scientific creativity"
] | https://openreview.net/pdf?id=zdDCJh35aC | 3G9aw8ELHy | decision | 1,735,601,291,735 | zdDCJh35aC | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Reject
comment: This is a nice essay, but it lacks the mathematical content expected at this workshop.
title: Paper Decision |
wiKxXgE2MI | Quality and Diversity Both Matters When Merging Models | [] | Generalization to distribution shifts is a primary goal in modern machine learning literature. Ensemble methods, including both output-space ensemble and weight-space ensemble (model merging), are renowned for their robust generalization capabilities over multi-task settings, leveraging the diverse features from source models to improve cross-task transferability. While most studies on model merging focus on constructing diverse pools of task vectors obtained from foundation models trained on different tasks, we also emphasize the quality of each source. In this paper, we introduce a novel method for selectively merging task vectors to achieve superior generalization on target domains. Our approach uniquely considers both the diversity and quality of individual models. Using Determinantal Point Processes (DPP), we propose a probabilistic framework that optimally selects which models to average in a plug-and-play manner, ensuring a balanced consideration of quality and diversity. Theoretical support is provided for our hypothesis that this dual consideration yields a tighter generalization error bound for the unified model. Empirically, we present experiments in an out-of-distribution setting where there is significant violation in identically distributed conditions between the source and target domains. | [
"Model Merging",
"Domain Generalization",
"Robustness",
"Foundation Model"
] | https://openreview.net/pdf?id=wiKxXgE2MI | 3RWo36Kewi | decision | 1,735,601,843,039 | wiKxXgE2MI | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Reject
comment: This work shows only limited novelty. We agree with the reviewer's comments.
title: Paper Decision |
wiKxXgE2MI | Quality and Diversity Both Matters When Merging Models | [] | Generalization to distribution shifts is a primary goal in modern machine learning literature. Ensemble methods, including both output-space ensemble and weight-space ensemble (model merging), are renowned for their robust generalization capabilities over multi-task settings, leveraging the diverse features from source models to improve cross-task transferability. While most studies on model merging focus on constructing diverse pools of task vectors obtained from foundation models trained on different tasks, we also emphasize the quality of each source. In this paper, we introduce a novel method for selectively merging task vectors to achieve superior generalization on target domains. Our approach uniquely considers both the diversity and quality of individual models. Using Determinantal Point Processes (DPP), we propose a probabilistic framework that optimally selects which models to average in a plug-and-play manner, ensuring a balanced consideration of quality and diversity. Theoretical support is provided for our hypothesis that this dual consideration yields a tighter generalization error bound for the unified model. Empirically, we present experiments in an out-of-distribution setting where there is significant violation in identically distributed conditions between the source and target domains. | [
"Model Merging",
"Domain Generalization",
"Robustness",
"Foundation Model"
] | https://openreview.net/pdf?id=wiKxXgE2MI | 350as7b077 | official_review | 1,735,318,121,168 | wiKxXgE2MI | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission16/Reviewer_Db48"
] | title: Clear and rigorous ideas, but with limited novelty
review: This paper addresses the significant challenges and limitations faced by current ensemble methods in achieving robust generalization under distribution shifts. It proposes a framework that selects models based on their error, supported by theoretical analysis, and filters them with Determinantal Point Processes (DPPs), taking into account both diversity and quality.
Strengths:
- The authors propose a novel kernel construction based on DPPs to select models that balance both high performance (quality) and diversity (see the sketch after this list).
- The authors first propose a generalization error bound and provide strong theoretical derivations to demonstrate that previous weight-averaging methods for model aggregation can achieve a tighter generalization error bound by accounting for the contributions of both quality and diversity.
- The method is plug-and-play: This approach is designed to be easily applicable without requiring extensive modifications to existing systems.
- The method overcomes the limitations of traditional assumptions for i.i.d. data: It provides a solution that is not constrained by the i.i.d. assumption, making it more flexible and applicable to a wider range of scenarios.
- The experimental results show improvement, even though the test dataset is small and there is a lack of benchmarks from other merging methods in the field for comparison.
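For readers unfamiliar with the quality-diversity decomposition mentioned above, here is the textbook DPP kernel construction (a generic illustration, not necessarily the submission's exact kernel; the cosine-similarity choice is an assumption):

```python
import numpy as np

def dpp_kernel(quality: np.ndarray, task_vectors: np.ndarray) -> np.ndarray:
    """L = diag(q) S diag(q): det(L_Y) is large for subsets Y of models
    that are individually high-quality (q) and mutually diverse (S)."""
    V = task_vectors / np.linalg.norm(task_vectors, axis=1, keepdims=True)
    S = V @ V.T  # cosine similarity between (flattened) task vectors
    return np.outer(quality, quality) * S

# Toy usage: 3 models, quality scores, and random 16-dim task vectors.
q = np.array([0.9, 0.8, 0.85])
T = np.random.default_rng(0).normal(size=(3, 16))
L = dpp_kernel(q, T)
print(np.linalg.det(L[np.ix_([0, 1], [0, 1])]))  # DPP score of subset {0, 1}
```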
Weaknesses:
- Limited novelty: The proposed method mainly introduces the use of DPP for selecting ensemble models and provides theoretical support. However, the innovation is somewhat limited, as it primarily focuses on applying an existing technique (DPP) without introducing substantial new methodologies.
- Lack of effective comparison: The experiments lack comprehensive comparisons with other established methods in the field. Only the proposed method is added on top of the baseline, which makes it difficult to evaluate its performance relative to other merging techniques. The absence of benchmarks from other merging methods in the field undermines the robustness of the comparison.
- The derivation of the generalization error bound is also based on certain assumptions (such as i.i.d. data), which may not hold in some cases, potentially affecting the practical applicability of the theoretical results.
rating: 5
confidence: 2 |
ocSvfbIjet | Beyond Interpolation: Extrapolative Reasoning with Reinforcement Learning and Graph Neural Networks | [] | Despite incredible progress, many neural architectures fail to properly generalize beyond their training distribution.
As such, learning to reason in a correct and generalizable way is one of the current fundamental challenges in machine learning.
In this respect, logic puzzles provide a great testbed, as we can fully understand and control the learning environment. Thus, they allow us to evaluate performance on previously unseen, larger and more difficult puzzles that follow the same underlying rules.
Since traditional approaches often struggle to represent such scalable logical structures, we propose to model these puzzles using a graph-based approach.
Then, we investigate the key factors enabling the proposed models to learn generalizable solutions in a reinforcement learning setting.
Our study focuses on the impact of the inductive bias of the architecture, different reward systems and the role of recurrent modeling in enabling sequential reasoning.
Through extensive experiments, we demonstrate how these elements contribute to successful extrapolation on increasingly complex puzzles.
These insights and frameworks offer a systematic way to design learning-based systems capable of generalizable reasoning beyond interpolation. | [
"Algorithmic Reasoning",
"Puzzle",
"Benchmark",
"Graphs"
] | https://openreview.net/pdf?id=ocSvfbIjet | lUoPqF4fbz | decision | 1,735,598,401,089 | ocSvfbIjet | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: We agree with the major opinions of reviewers and authors’ comments. |
ocSvfbIjet | Beyond Interpolation: Extrapolative Reasoning with Reinforcement Learning and Graph Neural Networks | [] | Despite incredible progress, many neural architectures fail to properly generalize beyond their training distribution.
As such, learning to reason in a correct and generalizable way is one of the current fundamental challenges in machine learning.
In this respect, logic puzzles provide a great testbed, as we can fully understand and control the learning environment. Thus, they allow us to evaluate performance on previously unseen, larger and more difficult puzzles that follow the same underlying rules.
Since traditional approaches often struggle to represent such scalable logical structures, we propose to model these puzzles using a graph-based approach.
Then, we investigate the key factors enabling the proposed models to learn generalizable solutions in a reinforcement learning setting.
Our study focuses on the impact of the inductive bias of the architecture, different reward systems and the role of recurrent modeling in enabling sequential reasoning.
Through extensive experiments, we demonstrate how these elements contribute to successful extrapolation on increasingly complex puzzles.
These insights and frameworks offer a systematic way to design learning-based systems capable of generalizable reasoning beyond interpolation. | [
"Algorithmic Reasoning",
"Puzzle",
"Benchmark",
"Graphs"
] | https://openreview.net/pdf?id=ocSvfbIjet | hjqUGQlaWw | official_review | 1,735,427,688,618 | ocSvfbIjet | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission12/Reviewer_DeFn"
] | title: A long full paper, lacking key information
review: Summary
This paper integrates a GNN (graph neural network) with RL (reinforcement learning) to address problems from the PUZZLES benchmark. The paper claims the GNN can improve RL performance.
Compared with the RL-only method, the new method reports considerable improvement on small puzzles (2x2 ~ 6x6), but only comparable performance (to the RL-only method) on larger puzzles.
Besides the above comparison, the remaining empirical evaluations compare the new method against itself under different configurations, e.g. iterative vs. partial, recurrent vs. state-less, GNN vs. transformer. All these tests show complete zero solvability (no instance can be solved) at the x9 scale. This empirical result matches the conclusions of other methods and research.
Issues
1. The paper does not provide a comprehensive explanation of how these puzzles are transformed into a graph for the GNN. Also, no runnable program is provided.
2. On large-scale problems, this GNN+RL method only matches the performance of the RL-only method; it performs better only at small scales (2x2 ~ 6x6). Also, it lacks sufficient cross-comparison with the RL-only method.
3. The major reason for its better performance at small scales is its richer node representation, which captures more state relationships. This benefit does not scale as the puzzle size grows.
rating: 4
confidence: 4 |
ocSvfbIjet | Beyond Interpolation: Extrapolative Reasoning with Reinforcement Learning and Graph Neural Networks | [] | Despite incredible progress, many neural architectures fail to properly generalize beyond their training distribution.
As such, learning to reason in a correct and generalizable way is one of the current fundamental challenges in machine learning.
In this respect, logic puzzles provide a great testbed, as we can fully understand and control the learning environment. Thus, they allow us to evaluate performance on previously unseen, larger and more difficult puzzles that follow the same underlying rules.
Since traditional approaches often struggle to represent such scalable logical structures, we propose to model these puzzles using a graph-based approach.
Then, we investigate the key factors enabling the proposed models to learn generalizable solutions in a reinforcement learning setting.
Our study focuses on the impact of the inductive bias of the architecture, different reward systems and the role of recurrent modeling in enabling sequential reasoning.
Through extensive experiments, we demonstrate how these elements contribute to successful extrapolation on increasingly complex puzzles.
These insights and frameworks offer a systematic way to design learning-based systems capable of generalizable reasoning beyond interpolation. | [
"Algorithmic Reasoning",
"Puzzle",
"Benchmark",
"Graphs"
] | https://openreview.net/pdf?id=ocSvfbIjet | ciJXFzQQGV | official_review | 1,734,498,681,108 | ocSvfbIjet | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission12/Reviewer_Z4KK"
] | title: Review
review: #### Summary
This paper tackles the challenge of enabling neural architectures to generalize beyond their training distributions.
By employing logic puzzles as a controlled testbed, the authors propose a novel graph-based approach coupled with reinforcement learning (RL) to model scalable logical structures.
The key contributions include a multi-agent RL framework leveraging Graph Neural Networks (GNNs) for reasoning, insights into the role of architectural inductive biases, reward system designs, and recurrent modeling in achieving extrapolative reasoning.
---
#### Strengths
1. The paper effectively identifies generalization beyond interpolation as a critical challenge in machine learning. Using logic puzzles as a testbed provides a well-defined, scalable, and interpretable framework.
2. The introduction of the PUZZLES benchmark and a graph-based interface for logic puzzles enriches the resources available for studying generalization in controlled environments.
3. Representing puzzles as graphs provides a flexible and scalable way to handle tasks of varying complexity. The dual use of decision and meta-nodes in GNNs captures local and global constraints effectively (see the sketch after this list).
4. The experiments are comprehensive, spanning multiple puzzle types, varying sizes, and different architectural choices (GNNs vs. transformers). The evaluation metrics (e.g., extrapolation to x4, x9, x16 larger puzzles) are robust and directly aligned with the paper's goals.
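One generic way such a decision/meta-node encoding could look (an illustrative bipartite construction of my own, not the paper's exact one):

```python
import networkx as nx

def puzzle_to_graph(cells, constraint_groups):
    """Decision node per cell, meta-node per constraint group; an edge
    links each cell to every constraint it participates in."""
    G = nx.Graph()
    for cell in cells:
        G.add_node(("cell", cell), kind="decision")
    for i, group in enumerate(constraint_groups):
        G.add_node(("meta", i), kind="meta")
        for cell in group:
            G.add_edge(("meta", i), ("cell", cell))
    return G

# Toy usage: 2x2 Latin-square-style grid with row and column constraints.
cells = [(r, c) for r in range(2) for c in range(2)]
rows = [[(r, c) for c in range(2)] for r in range(2)]
cols = [[(r, c) for r in range(2)] for c in range(2)]
G = puzzle_to_graph(cells, rows + cols)
print(G.number_of_nodes(), G.number_of_edges())  # 8 nodes, 8 edges
```

Because the same construction applies to any grid size, it scales naturally with puzzle size, which is exactly the flexibility highlighted in point 3.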
rating: 7
confidence: 3 |
ocSvfbIjet | Beyond Interpolation: Extrapolative Reasoning with Reinforcement Learning and Graph Neural Networks | [] | Despite incredible progress, many neural architectures fail to properly generalize beyond their training distribution.
As such, learning to reason in a correct and generalizable way is one of the current fundamental challenges in machine learning.
In this respect, logic puzzles provide a great testbed, as we can fully understand and control the learning environment. Thus, they allow us to evaluate performance on previously unseen, larger and more difficult puzzles that follow the same underlying rules.
Since traditional approaches often struggle to represent such scalable logical structures, we propose to model these puzzles using a graph-based approach.
Then, we investigate the key factors enabling the proposed models to learn generalizable solutions in a reinforcement learning setting.
Our study focuses on the impact of the inductive bias of the architecture, different reward systems and the role of recurrent modeling in enabling sequential reasoning.
Through extensive experiments, we demonstrate how these elements contribute to successful extrapolation on increasingly complex puzzles.
These insights and frameworks offer a systematic way to design learning-based systems capable of generalizable reasoning beyond interpolation. | [
"Algorithmic Reasoning",
"Puzzle",
"Benchmark",
"Graphs"
] | https://openreview.net/pdf?id=ocSvfbIjet | 2BMtZLownI | official_review | 1,735,468,487,995 | ocSvfbIjet | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission12/Reviewer_tUXR"
] | title: a very interesting approach
review: This is a very interesting approach to solving puzzles. It uses RL and graph neural networks. However, it is outside my area of expertise.
It is a well written paper and the ideas are well presented. I would like my review to be moderated based on the fact that I am not an expert in this field.
rating: 7
confidence: 1 |
ivr6cXXten | From Black Box to Algorithmic Insight: Explainable AI in Graph Neural Networks for Graph Coloring | [] | Despite advances in neural networks for solving combinatorial optimization problems using Graph Neural Networks (GNNs), understanding their learning processes and utilizing acquired knowledge remains elusive, particularly in imperfect models addressing NP-complete problems. This gap underscores the need for Explainable AI (XAI) methodologies. In this study, we undertake the task of elucidating the mechanisms of a specific model named GNN-GCP trained to solve the Graph Coloring Problem (GCP). Our findings reveal that the concepts that underpin the operation of GNN-GCP resemble those of hand-crafted combinatorial optimization heuristics. One prominent example is the concept of ``support of vertex $v$ with respect to a given coloring of the graph", which is the number of neighbors that $v$ has in each color class other than its own. By providing insights into the inner workings of GNN-GCP, we contribute to the larger goal of making AI models more interpretable and trustworthy, even in complex settings such as combinatorial optimization problems. | [
"Explainable AI",
"Combinatorial Optimization"
] | https://openreview.net/pdf?id=ivr6cXXten | kLvQrxyNiu | decision | 1,735,601,453,787 | ivr6cXXten | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Accept
comment: This paper introduces a task that is worth discussing at this workshop. We agree with the reviewer’s opinion.
title: Paper Decision |
ivr6cXXten | From Black Box to Algorithmic Insight: Explainable AI in Graph Neural Networks for Graph Coloring | [] | Despite advances in neural networks for solving combinatorial optimization problems using Graph Neural Networks (GNNs), understanding their learning processes and utilizing acquired knowledge remains elusive, particularly in imperfect models addressing NP-complete problems. This gap underscores the need for Explainable AI (XAI) methodologies. In this study, we undertake the task of elucidating the mechanisms of a specific model named GNN-GCP trained to solve the Graph Coloring Problem (GCP). Our findings reveal that the concepts that underpin the operation of GNN-GCP resemble those of hand-crafted combinatorial optimization heuristics. One prominent example is the concept of ``support of vertex $v$ with respect to a given coloring of the graph", which is the number of neighbors that $v$ has in each color class other than its own. By providing insights into the inner workings of GNN-GCP, we contribute to the larger goal of making AI models more interpretable and trustworthy, even in complex settings such as combinatorial optimization problems. | [
"Explainable AI",
"Combinatorial Optimization"
] | https://openreview.net/pdf?id=ivr6cXXten | 3lMn8WcSiX | official_review | 1,734,701,690,787 | ivr6cXXten | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission11/Reviewer_KdjA"
] | title: Review
review: Summary:
The paper explores the use of concept learning for XAI in a GNN model trained to solve the 3-coloring problem. It identifies two key concepts, i.e., support and confidence, that are geometrically encoded within the GNN's embeddings, providing interpretable insights into the model's learning process. Although innovative, the study's narrow focus on these concepts restricts its exploration of dynamic embedding evolution, graph topology, and broader applications.
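Since support is the central concept here, a minimal sketch of how it can be computed, following the definition quoted in the abstract, may be useful; the adjacency-list data layout is an illustrative assumption:
```python
def support(v, coloring, neighbors, num_colors):
    """Support of vertex v w.r.t. a coloring: the number of neighbors of v
    in each color class other than v's own (per the paper's definition)."""
    counts = [0] * num_colors
    for u in neighbors[v]:
        counts[coloring[u]] += 1
    # Drop v's own color class; keep neighbor counts for all other classes.
    return [c for color, c in enumerate(counts) if color != coloring[v]]
```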
Major concerns:
- The two key concepts analyzed are the support of a vertex and the confidence in the coloring of a specific vertex, evaluated through the 2D PCA projection of node embeddings. This approach appears somewhat limited and may lead to a relatively incomplete exploration of the embedding space. Additional concepts or alternative projections could have been considered for a more comprehensive analysis. Could you please provide insights on this point?
- The paper does not address the different topologies of the graphs or how these topologies might influence the network's learning process. Could you elaborate on this aspect?
- The work discusses the interplay between the static graph structure and dynamic solution states but does not seem to examine how embeddings evolve over time. Additionally, the concepts developed in the study appear to offer a more static representation of the final output generated by the GNN when reaching a solution. A crucial area for further investigation would be to identify specific strategies within the GNN's message-passing framework that dynamically address and resolve color conflicts throughout the solution process. Please add some comments on this.
- Could you please comment on the expected robustness of the introduced concepts compared to typical heuristic methods used for the Graph Coloring Problem?
rating: 7
confidence: 3 |
evDGngFLac | An Evaluation of Approaches to Train Embeddings for Logical Inference | [] | Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous neurosymbolic approaches by improving the training of an embedding model for logical statements that maximizes similarity between unifying atoms and minimizes similarity of non-unifying atoms. In particular, we evaluate three approaches to training this model by increasing the occurrence of atoms with repeated terms, mutating anchor atoms to create positive and negative examples for use in triplet loss, and training with the “hardest” examples | [
"Logical reasoning",
"Neurosymbolic reasoning",
"Explainable AI"
] | https://openreview.net/pdf?id=evDGngFLac | meslkhmEiE | official_review | 1,735,256,091,070 | evDGngFLac | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission19/Reviewer_pGaA"
] | title: Good task
review: This paper describes a framework to train a neural embedding model for logical inference tasks. The core idea is 'anchor mutation': triplets such as (anchor, positive, negative) are generated via mutation, and an embedding model learns to embed unifying atoms (anchor and positive) close together in the embedding space and non-unifying atoms (anchor and negative) far apart. To me, this is close to the idea of contrastive learning, which makes a lot of sense.
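For concreteness, a minimal sketch of the triplet objective as I read it is below; the encoder and the way atoms are featurized are assumptions on my part, not details from the paper:
```python
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0)

def training_step(encoder, anchor, positive, negative):
    # anchor and positive unify; negative (an anchor mutation) does not
    a, p, n = encoder(anchor), encoder(positive), encoder(negative)
    # Pulls unifying atoms together and pushes non-unifying atoms apart.
    return triplet_loss(a, p, n)
```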
Strength:
- well-motivated task
- clearly presented algorithm
Weakness:
- I would love to see a more formal definition of repeated term atoms (RTAs) and perhaps a few more words on its effect on the downstream tasks.
- More explanation of Table 2 is needed. How is the new embedding strategy different from the previous embeddings? Would it be correct to say that over large KBs the new embeddings only improve on the mean nodes metric while staying the same on the median metric? Could they perform worse on even larger KBs (on the median nodes-explored metric)?
rating: 7
confidence: 3 |
evDGngFLac | An Evaluation of Approaches to Train Embeddings for Logical Inference | [] | Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous neurosymbolic approaches by improving the training of an embedding model for logical statements that maximizes similarity between unifying atoms and minimizes similarity of non-unifying atoms. In particular, we evaluate three approaches to training this model by increasing the occurrence of atoms with repeated terms, mutating anchor atoms to create positive and negative examples for use in triplet loss, and training with the “hardest” examples | [
"Logical reasoning",
"Neurosymbolic reasoning",
"Explainable AI"
] | https://openreview.net/pdf?id=evDGngFLac | diU5rwKmmI | decision | 1,735,598,401,162 | evDGngFLac | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This is a well-written paper and also suitable for this workshop. We agree with the opinions of the reviewers. |
evDGngFLac | An Evaluation of Approaches to Train Embeddings for Logical Inference | [] | Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous neurosymbolic approaches by improving the training of an embedding model for logical statements that maximizes similarity between unifying atoms and minimizes similarity of non-unifying atoms. In particular, we evaluate three approaches to training this model by increasing the occurrence of atoms with repeated terms, mutating anchor atoms to create positive and negative examples for use in triplet loss, and training with the “hardest” examples | [
"Logical reasoning",
"Neurosymbolic reasoning",
"Explainable AI"
] | https://openreview.net/pdf?id=evDGngFLac | 9sDOu2mSBj | official_review | 1,734,706,356,135 | evDGngFLac | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission19/Reviewer_gmc8"
] | title: Interesting paper that is quite relevant to the workshop
review: The paper considers the problem of neurosymbolic learning and aims to improve it by enhancing the training of the embedding model for logical statements, evaluating whether the embeddings capture the original semantics, and finally improving reasoning via a downstream scoring function.
The paper is well written and is quite relevant to the topic of the workshop. I would certainly like to see this presented at the workshop and get some more detailed feedback from the poster session.
Of course, standard improvements are possible, including strengthening the empirical and theoretical analyses and adding different types of KGs and baselines, which I am sure the authors would consider before submitting this to a conference.
Overall, this is quite exciting work that I enjoyed reading.
rating: 7
confidence: 5 |
WTtyJGYQo7 | Improved Self-Explanatory Graph Learning Method Based on Controlled Information Compression and Branch Optimization | [] | Graph Neural Networks have gained widespread application across various domains and have motivated research into their explainability. Self-explainable methods consider inherent explanations during prediction and provide insights to reveal the decision-making processes. However, the transparent explainability of these methods often comes at the cost of predictive performance. One reason is that these methods suffer from a distribution shift when directly using explanation subgraphs to make predictions. In this work, we propose Self-explAinable Graph lEarning (SAGE) to improve the performance of self-explainable methods. Specifically, SAGE learns attention weights for edges to guide message-passing process, generating more meaningful and discriminative representations. In this process, we emphasize label-relevant critical structures while diminishing the influence of noisy ones. Additionally, we control the degree of noisy information compression applied to the subgraphs by establishing a lower bound for the attention scores of irrelevant noisy structures, which helps reduce the deviation from the original graph and mitigates the distribution shift. Furthermore, we introduced an optional strategy called branch optimization, exploring the optimal GNN state to improve the model's optimization effectiveness. Experimental results on real-world datasets demonstrate that SAGE can achieve predictive accuracy comparable to or even higher than baselines. Compared to the backbone model, our self-explainable framework attains an average performance improvement of 10.5% across four datasets. | [
"Self-explainable machine learning",
"Graph neural network",
"Information compression"
] | https://openreview.net/pdf?id=WTtyJGYQo7 | meaYP7f5mR | official_review | 1,734,409,835,355 | WTtyJGYQo7 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission1/Reviewer_jnTH"
] | title: Review for Submission 1
review: **Paper summary**
SAGE is a self-explainable graph learning framework designed to mitigate distribution shift by “compressing” rather than discarding irrelevant structures, ensuring more stable explanations. By enforcing a lower bound on attention weights and refining parameters through branch optimization, SAGE achieves strong predictive performance and meaningful, interpretable explanations on multiple benchmark datasets.
------
**Originality**
* **Strengths**: SAGE introduces a novel method to control information compression by setting a probabilistic lower bound on edge attention scores, avoiding abrupt distributional shifts caused by pruning. Its branch optimization technique refines model parameters in a straightforward manner without disrupting the main training loop.
* **Weaknesses**: While the concept of controlling information compression is interesting, it is somewhat incremental over other IB-inspired methods. The idea of partial preservation might be seen as a heuristic. The method relies on a hyperparameter r that must be manually chosen and may vary with the dataset.
--------
**Quality**
* **Strengths**: The technical derivation is sound. The model employs a GIN backbone, Gumbel-softmax reparameterization for edge selection, and a KL divergence penalty that encourages attention weights to approximate a predefined distribution. The experiments show performance gains that support the claim (see the sketch after this list for the edge-selection step).
* **Weaknesses**: The theoretical motivation for why this partial compression (instead of fully dropping edges) leads to better performance could be explored more deeply. The paper lacks rigorous theoretical analysis of how the chosen distribution boundary (r) interacts with distribution shift. The reasoning is intuitive but not extensively justified.
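A minimal sketch of the edge-selection step as I understand it is below; the affine mapping of the Gumbel-softmax output onto the interval [r, 1] is my assumption about how the lower bound could be enforced, not the paper's exact formulation:
```python
import torch.nn.functional as F

def edge_attention(edge_logits, r=0.1, tau=1.0):
    """edge_logits: [E, 2] keep/drop logits per edge; r: lower bound that
    compresses noisy edges instead of dropping them outright."""
    keep = F.gumbel_softmax(edge_logits, tau=tau, hard=False)[:, 1]  # [E]
    return r + (1.0 - r) * keep  # attention scores bounded below by r
```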
---------
**Significance**
* **Strengths**: Improving self-explainable GNN performance is a meaningful contribution. The problem of distribution shift is a recognized challenge. Addressing it by partial compression of noisy information could open a new line of thought for balancing model fidelity and interpretability. For practitioners looking for interpretable GNNs with minimal performance loss, this could be valuable.
* **Weaknesses**: Although the results are good, the performance improvements are not always dramatic except on certain datasets (e.g., NCI1). The method’s broad applicability and how it compares to a wide range of other explainability techniques (e.g., recent state-of-the-art methods) is not thoroughly discussed. More extensive baselines or complexity comparisons would add to significance.
-----------
**Questions and suggestions for the authors**
* The performance appears sensitive to the choice of r. Could the authors provide a heuristic or automated method to select r without heavy tuning?
* Could the authors provide more theoretical insights into why partial compression of noisy edges leads to improved performance and reduced distribution shift? For instance, can they characterize how r affects the divergence between compressed and original distributions mathematically?
* The experiments compare with a handful of methods. It would be helpful to see how SAGE compares against a broader range of self-explanatory or post-hoc methods to strengthen claims about general efficacy.
* The training process includes branch optimization and involves Gumbel-softmax sampling. How does the runtime and computational complexity scale with the size of the graph? Are there any memory constraints?
______
**Limitations**
The authors mention distribution shift but do not provide a formal definition or metric beyond similarity in embeddings and a heuristic. While they test multiple datasets, the approach relies on a hyperparameter (r) that must be tuned. The branch optimization step increases computational load and might not be feasible for very large graphs. Another limitation is that while the model provides attention scores as explanations, the granularity and faithfulness of these explanations depend heavily on how well the attention aligns with truly causal substructures. There is also no strong theoretical guarantee provided that the edges identified are indeed the “correct” explanations.
_______
**Ethics**
There are no obvious direct ethical concerns related to the method as it stands. The paper does not deal with sensitive data or produce sensitive content. The approach is a method improvement and not directly involved in human-facing decision-making applications at the evaluation stage. No unethical dataset or methodology usage is apparent. Thus, no ethical issues need to be flagged for special ethics review.
rating: 7
confidence: 4 |
WTtyJGYQo7 | Improved Self-Explanatory Graph Learning Method Based on Controlled Information Compression and Branch Optimization | [] | Graph Neural Networks have gained widespread application across various domains and have motivated research into their explainability. Self-explainable methods consider inherent explanations during prediction and provide insights to reveal the decision-making processes. However, the transparent explainability of these methods often comes at the cost of predictive performance. One reason is that these methods suffer from a distribution shift when directly using explanation subgraphs to make predictions. In this work, we propose Self-explAinable Graph lEarning (SAGE) to improve the performance of self-explainable methods. Specifically, SAGE learns attention weights for edges to guide message-passing process, generating more meaningful and discriminative representations. In this process, we emphasize label-relevant critical structures while diminishing the influence of noisy ones. Additionally, we control the degree of noisy information compression applied to the subgraphs by establishing a lower bound for the attention scores of irrelevant noisy structures, which helps reduce the deviation from the original graph and mitigates the distribution shift. Furthermore, we introduced an optional strategy called branch optimization, exploring the optimal GNN state to improve the model's optimization effectiveness. Experimental results on real-world datasets demonstrate that SAGE can achieve predictive accuracy comparable to or even higher than baselines. Compared to the backbone model, our self-explainable framework attains an average performance improvement of 10.5% across four datasets. | [
"Self-explainable machine learning",
"Graph neural network",
"Information compression"
] | https://openreview.net/pdf?id=WTtyJGYQo7 | KoiecLuFhe | decision | 1,735,601,223,106 | WTtyJGYQo7 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Reject
comment: This is a good paper. However, it does not fit the scope of this workshop. The SAGE framework has some novelty but is still within the current paradigm of neural networks.
title: Paper Decision |
UzdKSpCjDh | Captioning and Task-Specific Prompting for Improved VLM Performance | [] | Vision-language models (VLMs) have transformed tasks requiring visual and reasoning abilities, such as image retrieval and visual question answering (VQA). Despite their success, VLMs face significant challenges with tasks involving geometric reasoning, algebraic problem-solving, and counting. These limitations stem from difficulties in effectively integrating multiple modalities and accurately interpreting such tasks. We propose an efficient, question-driven image captioning pipeline to enhance visual question answering abilities in mathematical contexts. Our method extracts keywords from the question, generates targeted captions for each image-question pair using those keywords, and uses the caption as a prompt for QnA. We propose utilizing task-specific guidance as an “approach” to enhance the VQA and captioning process. Additionally, we evaluate the robustness of these models against adversarial prompts to ensure that our captioning-based approach does not compromise much on robustness. Our pipeline is tested on diverse math-related and visual reasoning tasks across multiple datasets and VLMs. | [
"Visual Understanding",
"Mathematical Reasoning",
"In-context Learning"
] | https://openreview.net/pdf?id=UzdKSpCjDh | dChzSFUzQv | official_review | 1,734,532,735,233 | UzdKSpCjDh | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission2/Reviewer_UfCJ"
] | title: This work has potential, but needs substantial revision
review: Summary:
This paper discusses improving the performance of vision-language models (VLMs) on tasks involving vision and reasoning, such as image retrieval and visual question answering (VQA). It proposes an efficient, question-driven image captioning pipeline to enhance visual question answering in a mathematical context. The approach extracts keywords from the question, generates targeted captions for each image-question pair, and uses these captions as hints for question answering.
Strength:
- A pipeline is proposed to enhance VQA capabilities by extracting keywords from questions and generating targeted image descriptions.
- Task-specific guidance is used as a “method” to enhance the VQA and description processes.
- The robustness of the model to adversarial prompts was evaluated to ensure that the captioning-based method does not compromise too much on robustness.
Weakness:
- Figure 1 on Page 2 lacks detail about the model. It is hard to see the connection between the three steps: what does each step mean, and how does one improve on the previous?
- In the main text, the authors never refer to Figures 1-4 or Tables 1-2. What information are these meant to convey?
- In Experiments on Page 3, the authors mention that they divide the MathVision dataset into three sub-datasets. How was this done? What criterion was used to split the dataset into these three parts?
- In Results on Pages 3-4, which datasets are used for the 'Model-wise' evaluation? And which VLM is used for the 'Dataset-wise' evaluation?
- The authors state that their prompt-based pipeline reduces computational overhead, yet the study contains no comparison demonstrating the difference in cost between their approach and others.
Suggestions:
- In Figure 1 on Page 2, the authors should consider improving the text formatting for better readability.
- In Experiments on Page 3, the authors mention the models they used. We suggest adding more details about each model to highlight their differences.
- In Result section on Page 4, ‘(Table 4)’ is actually ‘(Table 2)’.
rating: 4
confidence: 4 |
UzdKSpCjDh | Captioning and Task-Specific Prompting for Improved VLM Performance | [] | Vision-language models (VLMs) have transformed tasks requiring visual and reasoning abilities, such as image retrieval and visual question answering (VQA). Despite their success, VLMs face significant challenges with tasks involving geometric reasoning, algebraic problem-solving, and counting. These limitations stem from difficulties in effectively integrating multiple modalities and accurately interpreting such tasks. We propose an efficient, question-driven image captioning pipeline to enhance visual question answering abilities in mathematical contexts. Our method extracts keywords from the question, generates targeted captions for each image-question pair using those keywords, and uses the caption as a prompt for QnA. We propose utilizing task-specific guidance as an “approach” to enhance the VQA and captioning process. Additionally, we evaluate the robustness of these models against adversarial prompts to ensure that our captioning-based approach does not compromise much on robustness. Our pipeline is tested on diverse math-related and visual reasoning tasks across multiple datasets and VLMs. | [
"Visual Understanding",
"Mathematical Reasoning",
"In-context Learning"
] | https://openreview.net/pdf?id=UzdKSpCjDh | U81kpT05yD | decision | 1,735,598,400,637 | UzdKSpCjDh | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: We agree with the opinions of the reviewers. |
UzdKSpCjDh | Captioning and Task-Specific Prompting for Improved VLM Performance | [] | Vision-language models (VLMs) have transformed tasks requiring visual and reasoning abilities, such as image retrieval and visual question answering (VQA). Despite their success, VLMs face significant challenges with tasks involving geometric reasoning, algebraic problem-solving, and counting. These limitations stem from difficulties in effectively integrating multiple modalities and accurately interpreting such tasks. We propose an efficient, question-driven image captioning pipeline to enhance visual question answering abilities in mathematical contexts. Our method extracts keywords from the question, generates targeted captions for each image-question pair using those keywords, and uses the caption as a prompt for QnA. We propose utilizing task-specific guidance as an “approach” to enhance the VQA and captioning process. Additionally, we evaluate the robustness of these models against adversarial prompts to ensure that our captioning-based approach does not compromise much on robustness. Our pipeline is tested on diverse math-related and visual reasoning tasks across multiple datasets and VLMs. | [
"Visual Understanding",
"Mathematical Reasoning",
"In-context Learning"
] | https://openreview.net/pdf?id=UzdKSpCjDh | IT1y9t4pDy | official_review | 1,734,707,951,819 | UzdKSpCjDh | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission2/Reviewer_cKr2"
] | title: A nice work, but a bit premature, about VLMs' capacity to solve simple book-keeping "math" problems
review: The paper tests the performance of different VLMs in solving math problems using various zero-shot prompting techniques. The techniques themselves are nice and elaborate, as well as the variety of datasets and VLMs used. Therefore the results are convincing.
Having said that, I think that the paper is lacking in a few key aspects:
1. The VQA datasets were extensively studied in the past - I would present the performance of the best models that were trained for these tasks. This will allow us to see the gap the VLM is expected to close.
2. The paper only focuses on problems such as counting. I think it is more interesting to ask the VLM to approximate the number of objects rather than give an exact number. A human observer cannot "see" the exact number of objects either; they would have to count, using a finger, which is a completely different task from seeing the image and "counting" how many items there are.
3. Use RAG or few-shot prompting if you want to simulate something more similar to a human process.
4. There is not enough description of the dataset, the types of questions, or error analysis, so it is hard to understand why the model was wrong when it was wrong (or right).
rating: 4
confidence: 4 |
UzdKSpCjDh | Captioning and Task-Specific Prompting for Improved VLM Performance | [] | Vision-language models (VLMs) have transformed tasks requiring visual and reasoning abilities, such as image retrieval and visual question answering (VQA). Despite their success, VLMs face significant challenges with tasks involving geometric reasoning, algebraic problem-solving, and counting. These limitations stem from difficulties in effectively integrating multiple modalities and accurately interpreting such tasks. We propose an efficient, question-driven image captioning pipeline to enhance visual question answering abilities in mathematical contexts. Our method extracts keywords from the question, generates targeted captions for each image-question pair using those keywords, and uses the caption as a prompt for QnA. We propose utilizing task-specific guidance as an “approach” to enhance the VQA and captioning process. Additionally, we evaluate the robustness of these models against adversarial prompts to ensure that our captioning-based approach does not compromise much on robustness. Our pipeline is tested on diverse math-related and visual reasoning tasks across multiple datasets and VLMs. | [
"Visual Understanding",
"Mathematical Reasoning",
"In-context Learning"
] | https://openreview.net/pdf?id=UzdKSpCjDh | B2tSQudShI | official_review | 1,735,429,313,087 | UzdKSpCjDh | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission2/Reviewer_fLTw"
] | title: Preliminary efforts to improve VLM performance in mathematical reasoning and visual understanding tasks
review: Summary:
This paper presents an innovative approach to improving the performance of Vision-Language Models (VLMs) in mathematical reasoning and visual understanding tasks. The authors propose a task-specific captioning pipeline that extracts keywords from the question, generates targeted image captions, and uses these captions as prompts to guide the VLM in solving complex problems. The approach is evaluated across multiple datasets and demonstrates promising improvements in accuracy and robustness.
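To make the pipeline concrete, a minimal sketch is given below; the `vlm` callable, the keyword extractor, and the prompt wording are illustrative assumptions rather than the authors' exact templates:
```python
def answer_with_caption(vlm, image, question, extract_keywords):
    """Question-driven captioning pipeline sketch: extract keywords,
    generate a targeted caption, then answer with the caption as context."""
    keywords = extract_keywords(question)  # e.g., noun phrases from the question
    caption = vlm(image, "Describe the image, focusing on: " + ", ".join(keywords))
    prompt = f"Caption: {caption}\nQuestion: {question}\nAnswer step by step."
    return vlm(image, prompt)
```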
Key Contributions:
* Generates targeted captions using keywords extracted from questions and integrates these captions into the reasoning process.
* Evaluates the pipeline's performance against adversarial prompts to ensure reliability.
* Tests on datasets involving geometry, counting, algebra, and mathematical reasoning to assess generalizability.
Strengths:
* The proposed captioning pipeline improves VLM performance on mathematical and reasoning tasks compared to baseline methods.
* By generating targeted captions and providing task-specific guidance, the method encourages VLMs to focus on visual content, improving their reasoning abilities.
* The pipeline demonstrates consistent improvements across multiple datasets and tasks, including geometry, counting, and algebra.
Limitations:
* The study evaluates only Vision-Language Models (VLMs) and does not compare its approach to other state-of-the-art methods outside the VLM domain, which could provide a more comprehensive understanding of its effectiveness.
* The experiments are confined to smaller datasets and open-source models, which limits the generalizability of the findings to larger-scale, state-of-the-art VLMs such as GPT-4.
* Generating captions from the query introduces potential challenges, as the quality of the captions depends on accurate keyword extraction and relevance to the question. Poorly structured queries or ambiguous keywords could lead to suboptimal performance.
* The study does not provide an analysis of the errors made by the framework. Identifying and categorizing these errors—whether they arise from the captioning process, reasoning steps, or question formulation—could offer critical insights for refining the approach.
The paper would benefit from additional refinement and improvements before it is ready for publication.
rating: 4
confidence: 3 |
TglJgkTlsN | Improving Math Problem Solving in Large Language Models Through Categorization and Strategy Tailoring | [] | In this paper, we investigate how to harness large language models (LLMs) to solve mathematical problems both quickly and accurately. Specifically, we demonstrate the effectiveness of classifying problems into distinct categories and applying category-specific problem-solving strategies to enhance the math performance of LLMs. We develop a straightforward machine learning model for problem categorization and show that its accuracy can be significantly improved through the creation of well-designed training datasets. We believe that our approach works by helping reduce hallucinations in LLMs, which is a critical step toward unlocking their potential to tackle advanced mathematical problems. | [
"Large Language Models",
"Math reasoning"
] | https://openreview.net/pdf?id=TglJgkTlsN | eDZqHKnuvb | official_review | 1,735,504,449,378 | TglJgkTlsN | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission8/Reviewer_nY3f"
] | title: No details of category-wise instructions used; weak problem categorization
review: The paper proposes a novel method to solve mathematical problems by classifying them into categories and applying category-specific strategies.
Key issues:
- Hallucination was mentioned earlier in the paper, but no clear metrics were used to show if the approach reduces hallucination.
- It's not clear what category-wise instructions were given to the model to solve problems.
- The categorization of problems was weak; a more robust approach could have been to use a small transformer-based model for category classification (a minimal sketch follows this list).
- A transformer model trained to determine which strategy is better for a particular problem could have been more effective instead of using a pre-defined distribution.
- Could have provided details of the prompt structure used to convey problems, categories, and instructions to the model.
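As an illustration of the suggested alternative, here is a minimal sketch of a transformer-based categorizer; the checkpoint choice and the four-way label set are illustrative assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)  # algebra / geometry / number theory / combinatorics

inputs = tok("Find all integer solutions of x^2 + y^2 = 25.", return_tensors="pt")
logits = model(**inputs).logits  # fine-tune with cross-entropy on labeled problems
```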
Minor issues:
- Tables could have been more readable.
rating: 5
confidence: 3 |
TglJgkTlsN | Improving Math Problem Solving in Large Language Models Through Categorization and Strategy Tailoring | [] | In this paper, we investigate how to harness large language models (LLMs) to solve mathematical problems both quickly and accurately. Specifically, we demonstrate the effectiveness of classifying problems into distinct categories and applying category-specific problem-solving strategies to enhance the math performance of LLMs. We develop a straightforward machine learning model for problem categorization and show that its accuracy can be significantly improved through the creation of well-designed training datasets. We believe that our approach works by helping reduce hallucinations in LLMs, which is a critical step toward unlocking their potential to tackle advanced mathematical problems. | [
"Large Language Models",
"Math reasoning"
] | https://openreview.net/pdf?id=TglJgkTlsN | aEzG0oc0Fq | decision | 1,735,598,400,880 | TglJgkTlsN | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: We agree with the opinions of the reviewers. |
TglJgkTlsN | Improving Math Problem Solving in Large Language Models Through Categorization and Strategy Tailoring | [] | In this paper, we investigate how to harness large language models (LLMs) to solve mathematical problems both quickly and accurately. Specifically, we demonstrate the effectiveness of classifying problems into distinct categories and applying category-specific problem-solving strategies to enhance the math performance of LLMs. We develop a straightforward machine learning model for problem categorization and show that its accuracy can be significantly improved through the creation of well-designed training datasets. We believe that our approach works by helping reduce hallucinations in LLMs, which is a critical step toward unlocking their potential to tackle advanced mathematical problems. | [
"Large Language Models",
"Math reasoning"
] | https://openreview.net/pdf?id=TglJgkTlsN | CQJp8Jdmnh | official_review | 1,734,702,965,666 | TglJgkTlsN | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission8/Reviewer_hBvu"
] | title: A weak categorization between chain-of-thought and program-of-thought
review: This paper proposes to solve math problems by automatically selecting between chain-of-thought (CT) and program-of-thought (PT) prompting. The selection is implemented with a categorization model using a 3-layer neural network based on word-frequency features.
Despite some interesting insights, the paper has some major weaknesses:
* Weak categorization model. The categorization model is based on word frequency features. As mentioned in the paper, one of the main challenges encountered in categorization is ‘answer extraction,’ which is hard to address with word frequency features. Hence, a stronger sequence model (such as a Transformer) should be taken into account for solving the categorization.
* Why not compute the posterior probability? The current approach takes the argmax of the output distribution of the categorization model and, based on the selected category, samples from P_s(. | c), where s in {CT, PT} and c in {algebra, geometry, number theory, combinatorics}. However, one could also incorporate the output distribution of the categorization model into the computation of the posterior sampling distribution (see the sketch after this list).
Moreover, the definition of the prior sampling distribution {CT, PT} is somewhat arbitrary and not justified.
* It is not clear how the approach is helping to reduce hallucinations in LLMs. If the approach would indeed reduce hallucinations, it should be demonstrated empirically.
* The methodology is not clear: the paper is missing important details about network architecture for the categorization model, example prompts, etc. In its current form, the experimental results of this paper are not reproducible.
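Concretely, the suggested posterior could be obtained by marginalizing over the categorization model's output distribution; a minimal sketch, with the sets taken from the bullet above, is:
```latex
\[
  P(s \mid x) \;=\; \sum_{c \in \mathcal{C}} P(s \mid c)\, P(c \mid x),
  \quad s \in \{\mathrm{CT}, \mathrm{PT}\},\;
  \mathcal{C} = \{\text{algebra}, \text{geometry}, \text{number theory}, \text{combinatorics}\}
\]
```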
Minor weaknesses/corrections
* References of AlphaProof and Alphageometry link to news articles instead of the original papers.
* The section references are broken.
rating: 4
confidence: 4 |
R9XUQ0hWTy | Reinforcement Learning for Locally Checkable Labeling Problems | [] | We address the challenge of solving locally checkable labeling (LCL) problems on graphs using machine learning. Unlike prior supervised approaches that depend on ground-truth algorithms or enforce unique solutions, we propose a reinforcement learning framework that requires only verifiers to evaluate correctness. This formulation allows models to learn solution strategies independently, without bias toward specific algorithmic procedures, and inherently supports the discovery of non-unique solutions. We evaluate our method on four fundamental LCL problems, demonstrating its ability to generalize effectively, outperform supervised baselines, and provide a versatile foundation for learning algorithmic reasoning on graphs. | [
"Algorithmic Reasoning",
"Graph Learning",
"Reinforcement Learning",
"Locally Checkable Problems"
] | https://openreview.net/pdf?id=R9XUQ0hWTy | hBCiyUkMld | decision | 1,735,601,887,436 | R9XUQ0hWTy | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Reject
comment: This is a good paper. However, it does not fit the scope of this workshop.
title: Paper Decision |
R9XUQ0hWTy | Reinforcement Learning for Locally Checkable Labeling Problems | [] | We address the challenge of solving locally checkable labeling (LCL) problems on graphs using machine learning. Unlike prior supervised approaches that depend on ground-truth algorithms or enforce unique solutions, we propose a reinforcement learning framework that requires only verifiers to evaluate correctness. This formulation allows models to learn solution strategies independently, without bias toward specific algorithmic procedures, and inherently supports the discovery of non-unique solutions. We evaluate our method on four fundamental LCL problems, demonstrating its ability to generalize effectively, outperform supervised baselines, and provide a versatile foundation for learning algorithmic reasoning on graphs. | [
"Algorithmic Reasoning",
"Graph Learning",
"Reinforcement Learning",
"Locally Checkable Problems"
] | https://openreview.net/pdf?id=R9XUQ0hWTy | QZAPVIKEEy | official_review | 1,734,505,648,783 | R9XUQ0hWTy | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission17/Reviewer_ZdPW"
] | title: Application of reinforcement learning framework for Locally Checkable Labeling (LCL) problems.
review: **Paper summary**
VARL is a reinforcement learning-based framework designed to solve Locally Checkable Labeling (LCL) problems. Compared to traditional supervised learning methods, this approach eliminates the reliance on ground truth solutions and avoids bias toward specific algorithms. By leveraging local validators to evaluate solutions, it is not constrained to unique solutions and can effectively handle tasks with multiple valid solutions. It has demonstrated effective generalization capabilities across multiple tasks, outperforming supervised learning baselines.
**Originality**
- **Strengths**: VARL follows the framework of the supervised method GraphFSA but relaxes the constraints on the transition function, making it more generalized. Each node and edge is treated as an agent that independently selects its next state, while all nodes or edges share the same policy, effectively reducing the number of parameters. The rewards come from the results of local validators and guide the model to find valid solutions (a minimal example of such a local check is sketched after this list).
- **Weaknesses**: While the idea of allowing the model to freely explore all possible solutions through the reinforcement learning process is interesting, its time complexity needs further consideration. The approach of exploring all valid solutions without relying on specific algorithms could be seen as a heuristic. However, the exploration process in reinforcement learning, how to avoid redundant computations, and how to ensure all solutions are effectively discovered require deeper investigation.
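As an illustration of the role such validators play, a per-node local check for Maximal Independent Set (one of the evaluated tasks) might look as follows; this is a sketch of the standard LCL condition, not the authors' code:
```python
def local_mis_check(v, labels, neighbors):
    """labels[v] == 1 iff v is in the set. A labeling is locally valid iff
    selected nodes have no selected neighbor (independence) and unselected
    nodes have at least one selected neighbor (maximality)."""
    if labels[v] == 1:
        return all(labels[u] == 0 for u in neighbors[v])
    return any(labels[u] == 1 for u in neighbors[v])
```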
**Quality**
- **Strengths**: The technical derivations are sound. The model employs a multi-agent reinforcement learning framework, graph convolution layers for aggregating neighbor states, and MLPs for encoding and decoding. The performance improvements demonstrated in the experiments support this approach.
- **Weaknesses**: This reinforcement learning-based approach does not rely on ground truth or specific algorithms, which might necessitate a complete exploration of the entire graph. The paper lacks a theoretical analysis of the time complexity, and the experiments are conducted solely on small graphs of size 16, without extending to larger graphs. Furthermore, the paper does not provide detailed descriptions of the design and computational cost of the local validator. In theory, the validator would need to be manually designed for different tasks.
**Significance**
- **Strengths**: This method differs from traditional supervised methods as it does not rely on ground truth or specific algorithms, yet achieves better performance across various tasks, demonstrating strong generalization capabilities. It provides a universal foundation for learning algorithmic reasoning on graphs. The experimental results indicate that the combination of reinforcement learning and local validators offers a promising direction for addressing algorithmic problems on graphs, with broader implications for learning and reasoning in discrete domains.
- **Weaknesses**: Although the results are promising, the time complexity is not analyzed, making it difficult to assess the practical applicability of the approach. The broader applicability of this method has not been thoroughly discussed. Incorporating more comprehensive baselines or complexity comparisons would further enhance its significance.
**Questions and suggestions for the authors**
- In addition to supervised learning baselines, could the authors consider introducing other reinforcement learning methods for comparison to more comprehensively evaluate the performance advantages of the proposed approach. Furthermore, incorporating traditional heuristic-based algorithms as additional baselines, particularly well-known classical algorithms in LCL problems, could provide valuable insights.
- Testing on larger-scale graph structures (e.g., with 100+ nodes or more) would help analyze changes in training efficiency and validator performance. Additionally, evaluating the scalability of the proposed method in real-world large-scale network datasets would strengthen its practical applicability.
- Could the authors provide detailed descriptions of the local validators, such as pseudocode or process flows for each task, and analyze their time complexity to give a clearer understanding of their computational requirements.
**Limitations**
The authors repeatedly emphasize that the local validator is a core component, but they do not provide detailed descriptions of its design or computational cost. For different tasks, this module may require manual design, which could limit its general applicability. Another limitation is that, although the model demonstrates strong performance and generalization capabilities, its complexity and training time costs cannot be adequately evaluated. Additionally, the correctness of the solutions relies entirely on the validator, without robust theoretical guarantees to support its reliability.
**Ethics**
There are no obvious direct ethical concerns related to the method as it stands. The paper does not deal with sensitive data or produce sensitive content. The approach is a method improvement and not directly involved in human-facing decision-making applications at the evaluation stage. No unethical dataset or methodology usage is apparent. Thus, no ethical issues need to be flagged for special ethics review.
rating: 7
confidence: 4 |
R9XUQ0hWTy | Reinforcement Learning for Locally Checkable Labeling Problems | [] | We address the challenge of solving locally checkable labeling (LCL) problems on graphs using machine learning. Unlike prior supervised approaches that depend on ground-truth algorithms or enforce unique solutions, we propose a reinforcement learning framework that requires only verifiers to evaluate correctness. This formulation allows models to learn solution strategies independently, without bias toward specific algorithmic procedures, and inherently supports the discovery of non-unique solutions. We evaluate our method on four fundamental LCL problems, demonstrating its ability to generalize effectively, outperform supervised baselines, and provide a versatile foundation for learning algorithmic reasoning on graphs. | [
"Algorithmic Reasoning",
"Graph Learning",
"Reinforcement Learning",
"Locally Checkable Problems"
] | https://openreview.net/pdf?id=R9XUQ0hWTy | PcstoBeKit | official_review | 1,734,580,732,540 | R9XUQ0hWTy | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission17/Reviewer_JAob"
] | title: The paper proposes a novel RL framework for LCL problems and outperforms supervised baselines.
review: Summary
The paper proposes an RL framework for solving LCL problems on graphs, leveraging local verifiers instead of ground-truth labels. This approach avoids algorithmic biases and supports non-unique solutions. The proposed framework significantly outperforms supervised baselines on four LCL problems.
Strengths
- The writing is clear and well-structured.
- The proposed framework achieves impressive performance across diverse LCL problems, surpassing supervised methods by large margins.
- The proposed framework is novel and does not require ground-truth labels or pre-defined algorithms.
Weaknesses
- In the experiments, does the VARL model have a comparable number of parameters to the baseline models? It is unclear how much of the observed improvement is due to having more parameters.
Suggestions
- It would be better to describe your reinforcement learning algorithm in the main text rather than in the appendix, as the paper proposes a reinforcement learning framework.
- The first sentence of the Experiments section, "To test our ...", should be revised to "We test our ..." for grammatical correctness.
- It would be better to evaluate the proposed framework on large-scale datasets.
rating: 6
confidence: 3 |
R9XUQ0hWTy | Reinforcement Learning for Locally Checkable Labeling Problems | [] | We address the challenge of solving locally checkable labeling (LCL) problems on graphs using machine learning. Unlike prior supervised approaches that depend on ground-truth algorithms or enforce unique solutions, we propose a reinforcement learning framework that requires only verifiers to evaluate correctness. This formulation allows models to learn solution strategies independently, without bias toward specific algorithmic procedures, and inherently supports the discovery of non-unique solutions. We evaluate our method on four fundamental LCL problems, demonstrating its ability to generalize effectively, outperform supervised baselines, and provide a versatile foundation for learning algorithmic reasoning on graphs. | [
"Algorithmic Reasoning",
"Graph Learning",
"Reinforcement Learning",
"Locally Checkable Problems"
] | https://openreview.net/pdf?id=R9XUQ0hWTy | CnY54YAgLB | official_review | 1,735,414,537,860 | R9XUQ0hWTy | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission17/Reviewer_Gkbx"
] | title: A Novel Multi-Agent Reinforcement Learning (MARL) Framework for Locally Checkable Labeling (LCL) Problems
review: Summary:
The authors tackle the problem of solving locally checkable labeling (LCL) tasks on graphs using a Multi-Agent Reinforcement Learning framework. In contrast to traditional supervised approaches that rely on ground-truth algorithms or enforce unique solutions, their framework uses verifiers to evaluate correctness. This method allows models to learn solution strategies independently, free from biases toward specific algorithms, and supports the discovery of multiple valid solutions. They validate their framework on four core LCL problems, showcasing its ability to generalize effectively, surpass supervised baselines, and provide a flexible foundation for algorithmic reasoning on graphs.
Key Contributions:
* RL-based Framework: The method trains agents (representing nodes or edges) to learn decision-making policies based on local observations and verifiers.
* Flexibility: It supports problems with multiple valid solutions and avoids biases tied to specific algorithms.
The framework was tested on the four LCL problems:
* Maximal Independent Set (MIS)
* Minimal Vertex Cover (MVC)
* Maximal Matching (MM)
* Minimal Edge Cover (MEC)
Results demonstrate superior performance compared to supervised baselines, particularly in generalization and solving edge-centric problems.
Strengths:
* The paper is clearly and effectively written.
* The proposed framework achieves better performance compared to supervised baseline methods.
* The RL-based approach eliminates the need for predefined labels or unique solutions, making it highly adaptable to a variety of LCL problems.
* The framework effectively handles problems with multiple valid solutions, addressing a significant limitation of many supervised methods.
Limitations:
* The RL-based framework involves tuning multiple parameters (e.g., reward functions, policy networks), which can lead to increased computational complexity.
* It is recommended to move the description of the RL framework from the appendix to the main paper, as it represents the core contribution.
* Clarification is needed regarding the dataset generation process for training, validation, and testing. Were the graphs generated randomly, and what is their distribution?
* Testing the framework on larger graphs with more than 16 vertices would have provided a more comprehensive evaluation.
Typos:
Page 2: “In GraphFSA, ach node...” change ach to each.
Page 4: “Architectures using gGCN show partial success...” change gGCN to GCN
rating: 7
confidence: 3 |
OupEEi1341 | Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search | [] | Large language models (LLMs) have demonstrated their remarkable capacity across a variety of tasks. However, reasoning remains a challenge for LLMs. To improve LLMs’ reasoning ability, process supervision has proven to be better than outcome supervision. In this work, we study using Monte Carlo Tree Search (MCTS) to generate process supervision data with LLMs themselves for training them. We sample reasoning steps with an LLM and assign each step a score that captures its “relative correctness,” and the LLM is then trained by minimizing weighted log-likelihood of generating the reasoning steps. This generate-then-train process is repeated iteratively until convergence. Our experimental results demonstrate that the proposed methods considerably improve the performance of LLMs on two mathematical reasoning datasets. Furthermore, models trained on one dataset also exhibit improved performance on the other, showing the transferability of the enhanced reasoning ability. | [
"Large Language Models; Reasoning; Process Supervision; Monte Carlo Tree Search"
] | https://openreview.net/pdf?id=OupEEi1341 | zrCRJi7SgH | decision | 1,735,598,401,146 | OupEEi1341 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This is an interesting paper in reasoning. We agree with the opinions of reviewers. |
OupEEi1341 | Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search | [] | Large language models (LLMs) have demonstrated their remarkable capacity across a variety of tasks. However, reasoning remains a challenge for LLMs. To improve LLMs’ reasoning ability, process supervision has proven to be better than outcome supervision. In this work, we study using Monte Carlo Tree Search (MCTS) to generate process supervision data with LLMs themselves for training them. We sample reasoning steps with an LLM and assign each step a score that captures its “relative correctness,” and the LLM is then trained by minimizing weighted log-likelihood of generating the reasoning steps. This generate-then-train process is repeated iteratively until convergence. Our experimental results demonstrate that the proposed methods considerably improve the performance of LLMs on two mathematical reasoning datasets. Furthermore, models trained on one dataset also exhibit improved performance on the other, showing the transferability of the enhanced reasoning ability. | [
"Large Language Models; Reasoning; Process Supervision; Monte Carlo Tree Search"
] | https://openreview.net/pdf?id=OupEEi1341 | jii9py4HQS | official_review | 1,735,285,499,708 | OupEEi1341 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission15/Reviewer_c4F6"
] | title: Good Paper, Could be improved with few clarifications.
review: ### Summary
-----
The paper leverages Monte Carlo Tree Search (MCTS) to generate process supervision data for enhancing the step-by-step reasoning capabilities of large language models (LLMs). Improving reasoning in LLMs has been a longstanding challenge, and process supervision has shown better performance than methods that only supervise the final outcome. This work aims to train LLMs without relying on reward models, which are inherently complex, by augmenting data generated by the model itself. The proposed method samples and collects data from the search tree using MCTS, then performs supervised fine-tuning (SFT) on the LLM, iterating until convergence, by minimizing a log-likelihood weighted by the relative correctness of the reasoning steps. The paper is well-written and easy to understand.
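As a concrete reading of the training objective, a per-step sketch follows; the exact mapping from relative-correctness scores to nonnegative weights is an assumption here, not a detail taken from the paper:
```python
import torch.nn.functional as F

def weighted_step_loss(step_logits, step_tokens, weight):
    """step_logits: [T, V] next-token logits for one reasoning step;
    step_tokens: [T] target token ids; weight: nonnegative scalar derived
    from the step's relative-correctness score."""
    nll = F.cross_entropy(step_logits, step_tokens, reduction="mean")
    return weight * nll  # minimized; higher-scored steps are reinforced more
```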
### Strengths
-----
- The experiments are comprehensively conducted with replications, standard error reporting, and notable performance improvements compared to baseline methods.
- The paper introduces a novel approach by suggesting data augmentation methods rather than relying on reward-based approaches, enabling greater training efficiency.
- The transferability evaluation demonstrates the model's ability to generalize effectively to unseen data.
### Suggestions for Improvements
-----
I kindly suggest clarifying the following points to improve the paper:
- Could the notation for the tree $\{(x^i, p_j^i, s_{j,k}^i, r_{j,k}^i)\}$ be embedded into Figure 1 for better visual alignment and clarity?
- Is this method the first approach to tackle data augmentation within the process supervision paradigm? If not, could references to related works be provided?
- What are the possible cases of distribution shift (e.g., label shift, covariate shift, domain shift) in Eq. (2), and can such shifts be minimized by the proposed method? Alternatively, does the term serve as a penalty term just to account for distribution shift?
- What could be the possible reason for the quick convergence observed? Could the use of self-generated data be a contributing factor?
- A more detailed explanation of the MCTS method itself would enhance readers' understanding; for reference, a generic skeleton of the four phases is sketched below. While the introduction mentions its use for annotation in previous works, the application in the proposed methodology appears to differ. Additionally, what are the strengths of MCTS compared to other baseline tools (if any exist) for handling the reasoning process?
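Below is a generic sketch of the four MCTS phases (selection, expansion, simulation, backpropagation); it is the textbook UCT recipe, not the paper's exact variant:
```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(n, c=1.4):
    if n.visits == 0:
        return float("inf")  # visit unexplored children first
    return n.value / n.visits + c * math.sqrt(math.log(n.parent.visits) / n.visits)

def mcts_iteration(root, expand, rollout):
    node = root
    while node.children:                              # 1) selection
        node = max(node.children, key=uct)
    node.children = [Node(s, parent=node) for s in expand(node.state)]  # 2) expansion
    leaf = random.choice(node.children) if node.children else node
    reward = rollout(leaf.state)                      # 3) simulation
    while leaf is not None:                           # 4) backpropagation
        leaf.visits += 1
        leaf.value += reward
        leaf = leaf.parent
```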
rating: 7
confidence: 4 |
OupEEi1341 | Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search | [] | Large language models (LLMs) have demonstrated their remarkable capacity across a variety of tasks. However, reasoning remains a challenge for LLMs. To improve LLMs’ reasoning ability, process supervision has proven to be better than outcome supervision. In this work, we study using Monte Carlo Tree Search (MCTS) to generate process supervision data with LLMs themselves for training them. We
sample reasoning steps with an LLM and assign each step a score that captures its “relative correctness,” and the LLM is then trained by minimizing weighted log-likelihood of generating the reasoning steps. This generate-then-train process is repeated iteratively until convergence. Our experimental results demonstrate that the proposed methods considerably improve the performance of LLMs on two mathematical reasoning datasets. Furthermore, models trained on one dataset also exhibit improved performance on the other, showing the transferability of the enhanced reasoning ability. | [
"Large Language Models; Reasoning; Process Supervision; Monte Carlo Tree Search"
] | https://openreview.net/pdf?id=OupEEi1341 | PsVHUaTzV6 | official_review | 1,735,110,014,124 | OupEEi1341 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission15/Reviewer_Ezd1"
] | title: Good Paper, Could be enhanced by benchmarking against prior research using MCTS
review: ## Summary
---
This paper introduces a method that enhances reasoning abilities in large language models (LLMs) by integrating Monte Carlo Tree Search (MCTS) into process supervision. Unlike traditional approaches that focus on the final answer, this method scores intermediate reasoning steps based on their correctness, allowing for more nuanced training. Experiments on mathematical reasoning datasets (MATH and GSM8K) show significant improvements over baselines, with strong generalization to unseen datasets.
## Strengths
---
- Innovative use of MCTS to generate fine-grained supervision for intermediate reasoning steps, addressing limitations of outcome-based training.
- Consistent and substantial performance improvements are demonstrated on both in-domain and transfer tasks, supported by rigorous evaluation and clear comparisons to baseline methods like Zero-shot CoT and RFT, effectively validating the approach.
## Suggestions for Improvement
---
- Overstated Novelty: While the combination of MCTS and process supervision is novel, the paper could more explicitly acknowledge existing foundational work in these areas. Highlighting prior research on MCTS in reasoning tasks (e.g., its use in planning and decision-making algorithms) and process supervision methods would provide better context and strengthen the positioning of the contribution. Additionally, discussing how this work extends or diverges from existing approaches would clarify its unique value.
rating: 7
confidence: 4 |
OupEEi1341 | Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search | [] | Large language models (LLMs) have demonstrated their remarkable capacity across a variety of tasks. However, reasoning remains a challenge for LLMs. To improve LLMs’ reasoning ability, process supervision has proven to be better than outcome supervision. In this work, we study using Monte Carlo Tree Search (MCTS) to generate process supervision data with LLMs themselves for training them. We
sample reasoning steps with an LLM and assign each step a score that captures its “relative correctness,” and the LLM is then trained by minimizing weighted log-likelihood of generating the reasoning steps. This generate-then-train process is repeated iteratively until convergence. Our experimental results demonstrate that the proposed methods considerably improve the performance of LLMs on two mathematical reasoning datasets. Furthermore, models trained on one dataset also exhibit improved performance on the other, showing the transferability of the enhanced reasoning ability. | [
"Large Language Models; Reasoning; Process Supervision; Monte Carlo Tree Search"
] | https://openreview.net/pdf?id=OupEEi1341 | JhTW22NJ55 | official_review | 1,734,688,382,626 | OupEEi1341 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission15/Reviewer_PeQ7"
] | title: The paper proposes a MCTS based approach to generating process supervision data which is then used in training LLM (Llama 3.1 and DeepSeekMath).
review: The paper explores an MCTS-based approach for automatically generating process supervision data. It would be better if a few things were clarified:
(1) What does 'm' stand for in Equation 1?
(2) Why wasn't the proposed approach compared to other MCTS-based approaches like:
Luo, Liangchen, et al. "Improve Mathematical Reasoning in Language Models by Automated Process Supervision." arXiv preprint arXiv:2406.06592 (2024).
Wang, P.; Li, L.; Shao, Z.; Xu, R.; Dai, D.; Li, Y.; Chen, D.; Wu, Y.; and Sui, Z. 2024a. Math-Shepherd: Verify and Reinforce LLMs Step-by-Step without Human Annotations. (Comparison with this will help evaluate the efficacy of the proposed reward score function.)
(3) Process supervision data size: what is the size of the data used for training? How many problems from MATH or GSM-8K were used? I suppose two sets of training data are used, one for each dataset (MATH and GSM-8K).
rating: 6
confidence: 5 |
KlORk0Z9ai | Enhancing Classification and Calibration via Gaussian Distribution Modeling | [] | Accurate uncertainty estimation is crucial for reliable neural network predictions. Existing methods often struggle with accurate uncertainty estimation, known as \textit{miscalibration}, leading to overconfident or underconfident outputs. We introduce a novel training scheme that significantly enhances model calibration by representing neural network outputs as Gaussian distributions, instead of predicting a point estimation. Our approach also includes a method for calculating uncertainty labels, enabling more effective optimization. This technique is easily adaptable to various neural network architectures and can be combined with other calibration methods for further improvement. Our experiments demonstrate substantial reductions in calibration error and improved performance across different tasks, making our method a valuable tool for building more robust and reliable neural network models. | [
"CNN",
"Reliable",
"Robustness",
"Less Data-Hungry",
"Long-Tailed Learning"
] | https://openreview.net/pdf?id=KlORk0Z9ai | m572pBr7yC | decision | 1,735,598,400,747 | KlORk0Z9ai | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: This work is outside this workshop's scope -- (rigorous) neural reasoning and math. |
KlORk0Z9ai | Enhancing Classification and Calibration via Gaussian Distribution Modeling | [] | Accurate uncertainty estimation is crucial for reliable neural network predictions. Existing methods often struggle with accurate uncertainty estimation, known as \textit{miscalibration}, leading to overconfident or underconfident outputs. We introduce a novel training scheme that significantly enhances model calibration by representing neural network outputs as Gaussian distributions, instead of predicting a point estimation. Our approach also includes a method for calculating uncertainty labels, enabling more effective optimization. This technique is easily adaptable to various neural network architectures and can be combined with other calibration methods for further improvement. Our experiments demonstrate substantial reductions in calibration error and improved performance across different tasks, making our method a valuable tool for building more robust and reliable neural network models. | [
"CNN",
"Reliable",
"Robustness",
"Less Data-Hungry",
"Long-Tailed Learning"
] | https://openreview.net/pdf?id=KlORk0Z9ai | UUrlCTi3Q0 | official_review | 1,735,074,895,952 | KlORk0Z9ai | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission3/Reviewer_91JA"
] | title: Enhancing Classification and Calibration via Gaussian Distribution Modeling
review: The work in this paper attempts to model class predictions as a Gaussian distribution, where the mean of the distribution is mapped to the class label while the standard deviation is mapped to an uncertainty score computed from random crops of the input image. The paper provides a smooth introduction to calibration, assisted by a strong experimental section.
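For readers less familiar with the setup, a minimal sketch of the kind of two-headed Gaussian output layer the paper appears to describe (module and dimension names are my own, not the paper's):

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Predicts a Gaussian per class: the mean acts as the class score,
    while a softplus-positive std acts as the predictive uncertainty."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.mean_head = nn.Linear(feat_dim, num_classes)
        self.std_head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor):
        mean = self.mean_head(feats)                        # class scores
        std = nn.functional.softplus(self.std_head(feats))  # strictly positive
        return mean, std
```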
Weaknesses/Questions
The work looks great as a framework, but I am not entirely convinced by the assumption the paper makes with respect to uncertainty label generation. The work is based on the assumption that a smaller cropped image is less certain, enforced by the 'certainty_s(x')' metric in equation (12). Although this is a good starting point, the assumption that "pixels closer to the center of the image are more important than those near the edges" is not always true.
For example, removing supporting evidence from the edges will reduce certainty in complex real-world scenes, unlike the well-curated datasets experimented upon in this paper. Additionally, since model calibration is aimed at deployment in real-world scenarios, I am not confident about the applicability of this work.
The paper fails to discuss the effect of other kinds of transformations possible. I believe coming up with a better strategy to determine certainty would be valuable; perhaps a certainty metric driven by labeling the cropped regions, or by randomly masking objects, might help.
rating: 6
confidence: 4 |
KlORk0Z9ai | Enhancing Classification and Calibration via Gaussian Distribution Modeling | [] | Accurate uncertainty estimation is crucial for reliable neural network predictions. Existing methods often struggle with accurate uncertainty estimation, known as \textit{miscalibration}, leading to overconfident or underconfident outputs. We introduce a novel training scheme that significantly enhances model calibration by representing neural network outputs as Gaussian distributions, instead of predicting a point estimation. Our approach also includes a method for calculating uncertainty labels, enabling more effective optimization. This technique is easily adaptable to various neural network architectures and can be combined with other calibration methods for further improvement. Our experiments demonstrate substantial reductions in calibration error and improved performance across different tasks, making our method a valuable tool for building more robust and reliable neural network models. | [
"CNN",
"Reliable",
"Robustness",
"Less Data-Hungry",
"Long-Tailed Learning"
] | https://openreview.net/pdf?id=KlORk0Z9ai | 8uwuccqk4k | official_review | 1,734,736,215,093 | KlORk0Z9ai | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission3/Reviewer_rS59"
] | title: Review
review: ## Summary
This work deals with the problem of learning calibrated neural network classifiers, i.e., ensuring that the learned network’s predictive probability for a class corresponds to the fraction of data points belonging to it. The authors approach this problem by proposing a modification to the neural network classifier architecture. Specifically, they replace the single classification head with two heads predicting the means and the variances of Gaussians respectively. The variance of the predicted Gaussian corresponds to predictive uncertainty. The resulting network is trained using data augmented with additional uncertainty labels for each data point.
The authors argue that this architectural change is more effective at predictive uncertainty quantification than post-hoc calibration methods such as temperature scaling (TS), and model-agnostic augmented loss-based calibration methods such as maximum mean calibration error (MMCE) and the difference between confidence and accuracy (DCA). They evaluate this hypothesis empirically by training these models on noisy and long-tailed versions of the CIFAR-10 and CIFAR-100 data sets and computing the accuracy (ACC), expected calibration error (ECE), and adaptive calibration error (ACE). Their results indicate that the proposed method does seem to yield better uncertainty estimates especially in noisy and long-tailed settings.
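For reference, the ECE metric used throughout the evaluation is typically computed as follows (a 15-bin sketch, not the authors' exact implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 15) -> float:
    """ECE: weighted average gap between mean confidence and accuracy per bin.
    Note: confidences of exactly 0 fall into no bin; this is harmless in
    practice since softmax confidence is at least 1/num_classes."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```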
## Strengths
1. Learning calibrated models is an important problem in deep learning and is highly relevant for combining deep models with probabilistic reasoning frameworks.
2. The empirical results look promising despite being limited to the CIFAR benchmark.
## Limitations
1. **Model definition is not clear.** My best guess is that the mean of the Gaussian for the i-th class is defined using the i-th component of the softmax vector.
2. **The plot in Figure 2 is not coherent.** The X-axis is labeled with discrete categories (e.g., bird, cat, deer), but the plot is of a continuous Gaussian probability density.
3. **Uncertainty label generation.** The authors assume that cropping the input image yields a more uncertain data point. This assumption only holds in images with the target object centered that have minimal extraneous elements. This assumption would be violated in many real-world settings (e.g., frames from a video) where cropping might reduce uncertainty by removing extraneous elements.
4. **Conditionally tractable models.** The proposed neural network architecture would be subsumed by conditionally tractable models (Dong et al., 2022; Shao et al., 2022). These models consist of a neural network that outputs the parameters of a tractable joint probabilistic model over a (structured) label space.
5. **Types of uncertainty.** It is unclear if the proposed approach isolates epistemic uncertainty or combines epistemic and aleatoric uncertainty. Kendall and Gal (2017) contextualize the two kinds of uncertainties for computer vision domains.
## References
Shao, X., Molina, A., Vergari, A., Stelzner, K., Peharz, R., Liebig, T., & Kersting, K. (2022). Conditional sum-product networks: Modular probabilistic circuits via gate functions. IJAR.
Dong, H., Roy, C., Rahman, T., Gogate, V., & Ruozzi, N. (2022). Conditionally tractable density estimation using neural networks. AISTATS.
Kendall, A., & Gal, Y. (2017). What uncertainties do we need in bayesian deep learning for computer vision?. NeurIPS.
rating: 6
confidence: 3 |
KlORk0Z9ai | Enhancing Classification and Calibration via Gaussian Distribution Modeling | [] | Accurate uncertainty estimation is crucial for reliable neural network predictions. Existing methods often struggle with accurate uncertainty estimation, known as \textit{miscalibration}, leading to overconfident or underconfident outputs. We introduce a novel training scheme that significantly enhances model calibration by representing neural network outputs as Gaussian distributions, instead of predicting a point estimation. Our approach also includes a method for calculating uncertainty labels, enabling more effective optimization. This technique is easily adaptable to various neural network architectures and can be combined with other calibration methods for further improvement. Our experiments demonstrate substantial reductions in calibration error and improved performance across different tasks, making our method a valuable tool for building more robust and reliable neural network models. | [
"CNN",
"Reliable",
"Robustness",
"Less Data-Hungry",
"Long-Tailed Learning"
] | https://openreview.net/pdf?id=KlORk0Z9ai | 4kcJxPCwcu | official_review | 1,735,563,603,766 | KlORk0Z9ai | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission3/Reviewer_8h2P"
] | title: Review
review: **Paper summary**
This paper enhances neural network classification and calibration by modeling the output as a Gaussian distribution instead of a point estimate, where the mean of the distribution represents the prediction and the variance represents the uncertainty in predictions. The paper uses random crops of images and calculates the uncertainty of each crop to guide model training. The model achieves good performance across multiple datasets.
**Originality**
- **Strengths**:
This paper introduces a new uncertainty modeling approach for neural network calibration, representing predictions as a Gaussian distribution instead of point estimates. The method explicitly incorporates uncertainty into the training process, and it has been evaluated across various tasks, such as standard classification, long-tail learning, and robustness, demonstrating its versatility.
- **Weaknesses**:
While using Gaussian distributions for calibration is interesting, the idea of uncertainty modeling is not entirely new and has been explored in other contexts. The generation of uncertainty labels based on input cropping in this paper might be considered incremental. Moreover, there is a lack of deeper theoretical validation, such as analyzing the impact of different parts of the image (e.g., head, tail, body). Additionally, while the proposed method shows improvements, the results are not consistently significant across all metrics and datasets, especially when compared with temperature-scaled variants.
**Quality**
- **Strengths**:
The technical derivation is sound. The model constructs training data through random cropping and fits the optimal distribution. The performance improvements demonstrated in the experiments support this claim. The evaluation setup is comprehensive, comparing the proposed method with strong baselines and providing results for multiple metrics (e.g., accuracy, ECE, ACE).
- **Weaknesses**:
The paper lacks a deep theoretical justification for why the proposed uncertainty modeling approach, particularly the crop-based uncertainty label generation, is optimal for calibration. There are no experiments on the sensitivity of the method to different parameter settings (e.g., crop size ranges).
**Significance**
- **Strengths**:
By addressing uncertainty explicitly, the proposed method could pave the way for more reliable and interpretable neural networks. Due to the random crops, this method has the potential to achieve high-quality training with limited data.
- **Weaknesses**:
While the improvements in calibration error and accuracy are notable, they are not uniformly dramatic. In addition, the method's applicability to real-world tasks outside of standard benchmarks (e.g., medical imaging) is not explored.
**Questions and Suggestions for the Authors**
- The proposed method relies heavily on the uncertainty labels generated from input crops. Could the authors provide more theoretical justification or empirical analysis of how this approach influences calibration performance?
- The proposed method does not account for semantic differences between images. Even when the same position and size are selected in two images, the uncertainty of these parts can differ. Therefore, the uncertainty labeling process requires further in-depth analysis.
- A discussion of computational complexity would strengthen the paper.
**Limitations**
- The method relies on a handcrafted uncertainty label generation process (based on crop size and position), which might not generalize well to other types of data or tasks.
- Relying solely on the size of random crops and their distance to the center to estimate uncertainty may lack rigor, as the contribution of different regions to the prediction is not always strictly correlated with their proximity to the center. For instance, when predicting animals, the head typically contributes more to the prediction and exhibits lower uncertainty compared to other body parts.
**Ethics**
There are no obvious direct ethical concerns related to the method as it stands. The paper does not deal with sensitive data or produce sensitive content. The approach is a method improvement and not directly involved in human-facing decision-making applications at the evaluation stage. No unethical dataset or methodology usage is apparent. Thus, no ethical issues need to be flagged for special ethics review.
rating: 6
confidence: 3 |
IUaCQe6KKX | Syllogistic Reasoning and Knowledge Discovery | [] | Though syllogistic reasoning is the most widespread and most well-known reasoning, its machine implementation is still an open problem. The widespread modus ponens formalization does not fully capture every reasoning steps that required to reach a syllogistic conclusion. This paper demonstrates that more information and knowledge discovery are necessary to complete a syllogistic reasoning. This paper also demonstrates how a knowledge discovery could be integrated into a syllogistic reasoning. | [
"syllogistic reasoning",
"knowledge discovery",
"causal analysis"
] | https://openreview.net/pdf?id=IUaCQe6KKX | TiUBdVs7ju | decision | 1,735,598,401,247 | IUaCQe6KKX | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: This paper lacks a formal part (math) and necessary references to appear at this workshop. |
IUaCQe6KKX | Syllogistic Reasoning and Knowledge Discovery | [] | Though syllogistic reasoning is the most widespread and most well-known reasoning, its machine implementation is still an open problem. The widespread modus ponens formalization does not fully capture every reasoning steps that required to reach a syllogistic conclusion. This paper demonstrates that more information and knowledge discovery are necessary to complete a syllogistic reasoning. This paper also demonstrates how a knowledge discovery could be integrated into a syllogistic reasoning. | [
"syllogistic reasoning",
"knowledge discovery",
"causal analysis"
] | https://openreview.net/pdf?id=IUaCQe6KKX | 6DPSBvDKSs | official_review | 1,735,118,124,505 | IUaCQe6KKX | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission20/Reviewer_rt6x"
title: A short and clearly written paper, but a bit outdated
review: This paper sketches the idea that more information is necessary to complete syllogistic reasoning, and that this necessary information must be acquired through a process of knowledge discovery. The authors do not list any references.
The authors follow the early method of verbal analysis, and strictly distinguish "a man" from "all men". That is, from "all men have a feature" we can deduce "a man has that feature". To deduce "Socrates has a feature", we must first deduce "Socrates is a man". "All men" denotes a set, which corresponds to a predicate. This predicate applies to an instance: if its value is true, the instance is a member of the set; otherwise, it is not. What the authors propose is the need for, and the discovery of, such a predicate.
In the analysis of the Barbara syllogism, the authors suggest "all B" --> "B" and "all B are A" --> "B be A". In set-theoretic terms, "all B are A" means that set B is a subset of set A, whereas "B be A" means that set B equals set A (let B be A). So the analysis here is incorrect (and unnecessary).
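To make the set-theoretic reading explicit: Barbara is simply transitivity of set inclusion, and no equality ("B be A") step is needed. A minimal formal rendering, e.g. in Lean 4:

```lean
-- Barbara: from "all B are A" (B ⊆ A) and "all C are B" (C ⊆ B),
-- conclude "all C are A" (C ⊆ A), with sets given as predicates.
example {α : Type} {A B C : α → Prop}
    (h₁ : ∀ x, B x → A x) (h₂ : ∀ x, C x → B x) :
    ∀ x, C x → A x :=
  fun x hc => h₁ x (h₂ x hc)
```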
rating: 3
confidence: 5 |
GU4CjUNNb5 | Reflection System for the Abstraction and Reasoning Corpus | [] | The Abstraction and Reasoning Corpus (ARC) benchmarks broad generalization in artificial intelligence, and presents a significant challenge to existing machine learning models and program synthesis solvers. In this work, we introduce a Reflection System for ARC. It combines Large Language Models (LLMs) and a program synthesis solver based on a Domain Specific Language (DSL). We analyse the accuracy of LLMs on ARC and demonstrate unsatisfactory results. We create AugARC, an augmented ARC benchmark, which consistently improves the performance of LLMs compared to the normal ARC benchmark. Using augmented ARC data, we fine-tune LLMs and observe a significant gain in ARC accuracy after training. By utilizing reflection, we combine LLMs and a previous DSL solver into our Reflection System for abstraction and reasoning. Our approach outperforms the previous publicly available ARC systems that consist solely of LLMs or DSL solvers. The proposed Reflection System motivates research to advance previous ARC attempts by combining the advantages of LLMs and program synthesis solvers with reflection. | [
"reflection systems",
"language models with reasoning capabilities",
"abstraction and reasoning corpus"
] | https://openreview.net/pdf?id=GU4CjUNNb5 | vkYrBTnWsP | official_review | 1,735,078,404,748 | GU4CjUNNb5 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission5/Reviewer_ssoG"
] | title: Interesting ideas, incremental improvements
review: Summary:
- This paper proposes an augmentation technique that improves the performance of LLMs on ARC and shows that fine-tuning based on these augmentations further improves performance.
Additionally, it proposes a reflection system that enables the effective combination of multiple independent solvers.
Strength:
- Identification of augmentations that bring improvement to ARC for LLMs
- Evaluation of effect of QLoRA fine-tuning on AugARC for different models
- In-depth analysis of solution overlap of different algorithms
- Proposal of a reflection system that is able to select the most promising outputs among a selection of candidates
- Running eval on complete evaluation set, enabling ease of comparison.
- Good limitations section
Weaknesses:
- Similarities of the test-time augmentation scheme in AugARC to the one proposed by (Bober-Irizar and Banerjee 2024)
- Minor improvements over DSL search using massively more compute (Claude 3 Opus is rumored to be quite large)
- Exact prompting scheme for the reflection model is unclear
- Missing insights into performance of reflection model
- Ablations seem to be missing some scientific rigour
Questions and points for improvement:
- Figure 3: the reflection model chooses the wrong solution in example 1, but the caption claims it is the correct solution
- On page 4, you mention that for the remainder of the experiments, you are going to use AugARC. Since you call AugARC a benchmark, it is then rather confusing when you talk about ARC performance.
It would make more sense to just call AugARC an augmentation for ARC, since it is not really a new benchmark but just an augmentation scheme that can be used for training and at test time.
- Figure 4 would be easier to read if sorted by performance.
- Why is Figure 5a symmetric? Shouldn't it be normalized by the total number of tasks of the model on the bottom? With the current approach, you show overlap only in one direction.
- Also in Figure 5, you claim that the models are ordered by how much gain they add, but this does not seem to be the case?
- Which version of Gemini Pro are you using? I assume 1.5?
- You propose to use permutations as augmentations. If I've read the paper correctly, you never finetune on a dataset that includes permutations, which leaves it unclear whether permutations also yield a benefit for finetuning.
A larger number of permutations could also lead to an imbalanced dataset due to tasks having different numbers of examples, eventually hampering the effectiveness of the augmentation.
- Did you also consider permuting the colors of the puzzles?
- It is commendable that you have evaluated many different models. However, your inconsistent use of different models for the different ablations and their inconsistent ordering in figures leaves a weird impression.
* Ordering in Figures 4 and 5
* Not all models of Figure 4 are present in Figure 5, how are they selected?
* GPT models used as Reflection models, but not used as solvers
- Some more detailed ablations on the main contribution, the reflection system, would be interesting.
* The upper bound of improvement over DSL search would be 23, the reflection system reaches 6.
* In the setting where you use 3 solvers, does the fine-tuned Llama-3 8B even provide any potential new solved tasks to begin with?
* The ablation with a third solver for the reflection system is not really comparable to the two-solver setting due to the different reflection model used.
* Can a reflection model provide a benefit if it is the same model that has been used as a solver, or does it need to be a different model?
- This is interesting work as it improves performance on ARC, but how does this contribute to the original goal of ARC, to achieve more intelligent and more human-like artificial systems?
- The conclusion does not fully reflect the content of the paper
Minor:
- Weird paragraph break at the end of page 1
- page 4: "Each solver solver independently and cannot..."
- Weird paragraph break at the end of page 6
- page 7: "by 6 ARC tasks. the Reflection System..."
- There is also research on LLMs for the BLOOM model series (Camposampiero, Giacomo, et al. "Abstract visual reasoning enabled by language." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.)
- Further, (Wang, Ruocheng, et al. "Hypothesis search: Inductive reasoning with language models." arXiv preprint arXiv:2309.05660 (2023)) could also be included in the related work, as it is quite relevant.
Unfortunately, they only provide results on a random subset of the ARC evaluation set, so direct comparison is not really possible.
- ARC has been renamed to ARC-AGI
rating: 7
confidence: 5 |
GU4CjUNNb5 | Reflection System for the Abstraction and Reasoning Corpus | [] | The Abstraction and Reasoning Corpus (ARC) benchmarks broad generalization in artificial intelligence, and presents a significant challenge to existing machine learning models and program synthesis solvers. In this work, we introduce a Reflection System for ARC. It combines Large Language Models (LLMs) and a program synthesis solver based on a Domain Specific Language (DSL). We analyse the accuracy of LLMs on ARC and demonstrate unsatisfactory results. We create AugARC, an augmented ARC benchmark, which consistently improves the performance of LLMs compared to the normal ARC benchmark. Using augmented ARC data, we fine-tune LLMs and observe a significant gain in ARC accuracy after training. By utilizing reflection, we combine LLMs and a previous DSL solver into our Reflection System for abstraction and reasoning. Our approach outperforms the previous publicly available ARC systems that consist solely of LLMs or DSL solvers. The proposed Reflection System motivates research to advance previous ARC attempts by combining the advantages of LLMs and program synthesis solvers with reflection. | [
"reflection systems",
"language models with reasoning capabilities",
"abstraction and reasoning corpus"
] | https://openreview.net/pdf?id=GU4CjUNNb5 | sC3ZqpC6bc | decision | 1,735,601,370,917 | GU4CjUNNb5 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Reject
comment: This is a good paper. However, the method is still within the current paradigm of neural networks and does not fit the scope of this workshop.
title: Paper Decision |
GU4CjUNNb5 | Reflection System for the Abstraction and Reasoning Corpus | [] | The Abstraction and Reasoning Corpus (ARC) benchmarks broad generalization in artificial intelligence, and presents a significant challenge to existing machine learning models and program synthesis solvers. In this work, we introduce a Reflection System for ARC. It combines Large Language Models (LLMs) and a program synthesis solver based on a Domain Specific Language (DSL). We analyse the accuracy of LLMs on ARC and demonstrate unsatisfactory results. We create AugARC, an augmented ARC benchmark, which consistently improves the performance of LLMs compared to the normal ARC benchmark. Using augmented ARC data, we fine-tune LLMs and observe a significant gain in ARC accuracy after training. By utilizing reflection, we combine LLMs and a previous DSL solver into our Reflection System for abstraction and reasoning. Our approach outperforms the previous publicly available ARC systems that consist solely of LLMs or DSL solvers. The proposed Reflection System motivates research to advance previous ARC attempts by combining the advantages of LLMs and program synthesis solvers with reflection. | [
"reflection systems",
"language models with reasoning capabilities",
"abstraction and reasoning corpus"
] | https://openreview.net/pdf?id=GU4CjUNNb5 | du0RJDuZiK | official_review | 1,735,485,348,370 | GU4CjUNNb5 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission5/Reviewer_FLwR"
] | title: LLM with a program synthesis solver.
review: This work proposes a method combines ARC solutions from Large Language Models and a
Program Synthesis solver. Its performances are verified on the ARC and AugARC dataset.
As an ensemble method, it is not surprising to see better performances than methods that only use LLM or DSL solver. This work goes one step further by combing the 2 together to complement each other to achieve better performances. This is achieved by a reflection model.
Q: if the reflection model and the solver are the same LLM, will the reflection model prefer its own answer?
I agree with the authors that Claude 3 (Sonnet) may have seen the ARC data and should be treated separately. However, this does not impact the final conclusion.
rating: 6
confidence: 4 |
GU4CjUNNb5 | Reflection System for the Abstraction and Reasoning Corpus | [] | The Abstraction and Reasoning Corpus (ARC) benchmarks broad generalization in artificial intelligence, and presents a significant challenge to existing machine learning models and program synthesis solvers. In this work, we introduce a Reflection System for ARC. It combines Large Language Models (LLMs) and a program synthesis solver based on a Domain Specific Language (DSL). We analyse the accuracy of LLMs on ARC and demonstrate unsatisfactory results. We create AugARC, an augmented ARC benchmark, which consistently improves the performance of LLMs compared to the normal ARC benchmark. Using augmented ARC data, we fine-tune LLMs and observe a significant gain in ARC accuracy after training. By utilizing reflection, we combine LLMs and a previous DSL solver into our Reflection System for abstraction and reasoning. Our approach outperforms the previous publicly available ARC systems that consist solely of LLMs or DSL solvers. The proposed Reflection System motivates research to advance previous ARC attempts by combining the advantages of LLMs and program synthesis solvers with reflection. | [
"reflection systems",
"language models with reasoning capabilities",
"abstraction and reasoning corpus"
] | https://openreview.net/pdf?id=GU4CjUNNb5 | 2n54caDVp0 | official_review | 1,735,563,686,645 | GU4CjUNNb5 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission5/Reviewer_8jpk"
] | title: Review
review: **Paper summary**
The paper presents a novel approach to the Abstraction and Reasoning Corpus (ARC) that integrates LLMs with program synthesis solvers based on a DSL in a reflection-based architecture. It introduces AugARC, an enhanced benchmark to boost LLM generalization. This approach achieves a record 166/400 accuracy on ARC tasks, outperforming previous methods.
**Originality**
- **Strengths**:
The Reflection System leverages the strengths of both LLMs and program synthesis solvers based on a DSL, addressing the dataset limitations of the ARC through the AugARC benchmark. This benchmark enhances LLM generalization by incorporating augmented tasks, which broaden the scope of the original ARC. Furthermore, the system employs a self-reflection technique inspired by those used in LLMs, intelligently combining multiple solvers to tackle ARC tasks. The Reflection System demonstrates notable flexibility by supporting various solver types (e.g., LLMs and program synthesis tools) and allowing dynamic adjustment of solver configurations.
- **Weaknesses**:
The model lacks a theoretical analysis explaining why this reflection mechanism architecture improves task performance.
**Quality**
- **Strengths**:
The experimental setup is comprehensive and the performance improvement is significant. Fine-tuning experiments with smaller LLMs (e.g., 7B and 13B parameters) demonstrate a significant improvement in ARC task performance, highlighting the value of data augmentation.
- **Weaknesses**:
The theoretical foundation behind the reflection process is not fully explored. Specifically, how the reflection model determines the correct solution among solvers could benefit from more rigorous justification. The computational complexity of the reflection system, especially with multiple solvers and fine-tuned LLMs, is not deeply discussed. Most of the gains come from combining with Claude 3 Opus, raising questions about generalizability with other solvers.
**Significance**
- **Strengths**:
ARC is a challenging benchmark, and improving its performance meaningfully contributes to advancing AI’s ability for broad generalization and abstract reasoning. The system provides a new perspective on combining solvers, demonstrating the potential of reflection-based architectures for other reasoning benchmarks. The fine-tuning results suggest that even smaller LLMs can perform well on reasoning tasks, making the approach accessible for researchers without access to large-scale models.
- **Weaknesses**:
While the improvement over the ensemble system is clear, the gain of solving five additional tasks may not seem dramatic to practitioners.
**Questions and Suggestions for the Authors**
- Could the authors provide theoretical insights into why the reflection model effectively selects the correct solution?
- How does the computational complexity of the system (e.g., fine-tuning, running multiple solvers) scale with grid size or the number of tasks?
- While the results demonstrate state-of-the-art performance, could the authors include more extensive comparisons with other ensemble approaches, such as those combining different LLMs without program synthesis solvers?
- Since DSL Search contributes most solutions (160/400), is the reflection system’s performance reproducible with alternative program synthesis solvers?
**Limitations**
- The system relies on DSL Search and Claude 3 Opus for most of its performance gains.
- The computational cost of fine-tuning, reflection, and multi-solver integration could make the system infeasible for larger datasets or less computationally capable researchers.
**Ethics**
There are no obvious direct ethical concerns related to the method as it stands. The paper does not deal with sensitive data or produce sensitive content. The approach is a method improvement and not directly involved in human-facing decision-making applications at the evaluation stage. No unethical dataset or methodology usage is apparent. Thus, no ethical issues need to be flagged for special ethics review.
rating: 7
confidence: 4 |
F90YO0MacL | Towards Learning to Reason: Comparing LLMs with Neuro-Symbolic on Arithmetic Relations in Abstract Reasoning | [] | This work compares large language models (LLMs) and neuro-symbolic approaches in solving Raven's progressive matrices (RPM), a visual abstract reasoning test that involves the understanding of mathematical rules such as progression or arithmetic addition. Providing the visual attributes directly as textual prompts, which assumes an oracle visual perception module, allows us to measure
the model's abstract reasoning capability in isolation. Despite providing such compositionally structured representations from the oracle visual perception and advanced prompting techniques, both GPT-4 and Llama-3 70B cannot achieve perfect accuracy on the center constellation of the I-RAVEN dataset. Our analysis reveals that the root cause lies in the LLM's weakness in understanding and executing arithmetic rules. As a potential remedy, we analyze the Abductive Rule Learner with Context-awareness (ARLC), a neuro-symbolic approach that learns to reason with vector-symbolic architectures (VSAs). Here, concepts are represented with distributed vectors s.t. dot products between encoded vectors define a similarity kernel, and simple element-wise operations on the vectors perform addition/subtraction on the encoded values. We find that ARLC achieves almost perfect accuracy on the center constellation of I-RAVEN, demonstrating a high fidelity in arithmetic rules. To stress the length generalization capabilities of the models, we extend the RPM tests to larger matrices (3x10 instead of typical 3x3) and larger dynamic ranges of the attribute values (from 10 up to 1000). We find that the LLM's accuracy of solving arithmetic rules drops to sub-10%, especially as the dynamic range expands, while ARLC can maintain a high accuracy due to emulating symbolic computations on top of properly distributed representations. | [
"Analogical reasoning",
"large language models",
"vector-symbolic architectures",
"reasoning benchmarks"
] | https://openreview.net/pdf?id=F90YO0MacL | HNbIzHHSzP | official_review | 1,734,676,164,593 | F90YO0MacL | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission14/Reviewer_5UzQ"
] | title: The paper compares the performance of large language models (LLM), specifically GPT-4 and Llama 3, with a neuro-symbolic approach that combines vector symbolic architecture and abductive rule learning, focusing on the task of abstract reasoning using the Raven dataset.
review: The paper is interesting as it explores the application of vector symbolic architecture and abductive rule learning in abstract reasoning tasks. It would be better if the following points could be clarified:
(1) Equations 2 and 3 are confusing with respect to the variable names used. For instance, what are c1, c2, ..., c6, ..., c12? It is mentioned that c_i represents v_a at (i,j), so shouldn't it be c_ij? I suppose i and j represent the row and column numbers, respectively.
Equation 3 is also not clear: what are I and j? Are they the numbers of rows and columns again?
(2) Of the three variants of ARLC (ARLC_progr, ARLC_learn, and ARLC_p->1), ARLC_progr has some knowledge about the rules, and ARLC_p->1, which is initialised with programmed rules, shows lower accuracy than ARLC_learn, which learns all rules from scratch (see Table 2). How do you explain this? Why don't Tables 5 and 6 show results on ARLC_p->1? It would help if you could give more details on the manual programming of weights and rule initialisation to better understand the different variants.
(3) While comparing ARLC_progr and ARLC_p->1 with LLM, isn't it fairer if we provide some knowledge about the rules to LLM as well? How would the LLM perform if it has access to the rules?
(4) Can we apply other interpretable rule learning frameworks like ProbFOIL (Inducing Probabilistic Relational Rules from Probabilistic Examples, De Raedt et al.) to the task? What is the significance of VSA? It would be easier to understand if the paper could also show examples of the rules learned by the approach.
rating: 6
confidence: 5 |
F90YO0MacL | Towards Learning to Reason: Comparing LLMs with Neuro-Symbolic on Arithmetic Relations in Abstract Reasoning | [] | This work compares large language models (LLMs) and neuro-symbolic approaches in solving Raven's progressive matrices (RPM), a visual abstract reasoning test that involves the understanding of mathematical rules such as progression or arithmetic addition. Providing the visual attributes directly as textual prompts, which assumes an oracle visual perception module, allows us to measure
the model's abstract reasoning capability in isolation. Despite providing such compositionally structured representations from the oracle visual perception and advanced prompting techniques, both GPT-4 and Llama-3 70B cannot achieve perfect accuracy on the center constellation of the I-RAVEN dataset. Our analysis reveals that the root cause lies in the LLM's weakness in understanding and executing arithmetic rules. As a potential remedy, we analyze the Abductive Rule Learner with Context-awareness (ARLC), a neuro-symbolic approach that learns to reason with vector-symbolic architectures (VSAs). Here, concepts are represented with distributed vectors s.t. dot products between encoded vectors define a similarity kernel, and simple element-wise operations on the vectors perform addition/subtraction on the encoded values. We find that ARLC achieves almost perfect accuracy on the center constellation of I-RAVEN, demonstrating a high fidelity in arithmetic rules. To stress the length generalization capabilities of the models, we extend the RPM tests to larger matrices (3x10 instead of typical 3x3) and larger dynamic ranges of the attribute values (from 10 up to 1000). We find that the LLM's accuracy of solving arithmetic rules drops to sub-10%, especially as the dynamic range expands, while ARLC can maintain a high accuracy due to emulating symbolic computations on top of properly distributed representations. | [
"Analogical reasoning",
"large language models",
"vector-symbolic architectures",
"reasoning benchmarks"
] | https://openreview.net/pdf?id=F90YO0MacL | AYA2wrdziS | official_review | 1,735,176,938,414 | F90YO0MacL | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission14/Reviewer_MjJi"
] | title: Nice comparison
review: This paper describes a comparison between leading LLMs (GPT-4 and Llama-3) and the Abductive Rule Learner with
Context-awareness (ARLC), a neuro-symbolic approach, in solving Raven’s progressive
matrices (RPM). In particular, the authors extend the RPM tests to larger matrices, so that out-of-distribution evaluation can be carried out to better test the models' reasoning ability.
Strength:
- comprehensive empirical study of various models on this interesting reasoning task.
- mostly well-written, not hard to follow.
Weakness:
- not much to take away as LLMs are already known to be bad at arithmetic reasoning
- it would be nice to see a few more words about why the I-RAVEN dataset matters and if the ARLC can be applied beyond this RPM task.
rating: 6
confidence: 2 |
F90YO0MacL | Towards Learning to Reason: Comparing LLMs with Neuro-Symbolic on Arithmetic Relations in Abstract Reasoning | [] | This work compares large language models (LLMs) and neuro-symbolic approaches in solving Raven's progressive matrices (RPM), a visual abstract reasoning test that involves the understanding of mathematical rules such as progression or arithmetic addition. Providing the visual attributes directly as textual prompts, which assumes an oracle visual perception module, allows us to measure
the model's abstract reasoning capability in isolation. Despite providing such compositionally structured representations from the oracle visual perception and advanced prompting techniques, both GPT-4 and Llama-3 70B cannot achieve perfect accuracy on the center constellation of the I-RAVEN dataset. Our analysis reveals that the root cause lies in the LLM's weakness in understanding and executing arithmetic rules. As a potential remedy, we analyze the Abductive Rule Learner with Context-awareness (ARLC), a neuro-symbolic approach that learns to reason with vector-symbolic architectures (VSAs). Here, concepts are represented with distributed vectors s.t. dot products between encoded vectors define a similarity kernel, and simple element-wise operations on the vectors perform addition/subtraction on the encoded values. We find that ARLC achieves almost perfect accuracy on the center constellation of I-RAVEN, demonstrating a high fidelity in arithmetic rules. To stress the length generalization capabilities of the models, we extend the RPM tests to larger matrices (3x10 instead of typical 3x3) and larger dynamic ranges of the attribute values (from 10 up to 1000). We find that the LLM's accuracy of solving arithmetic rules drops to sub-10%, especially as the dynamic range expands, while ARLC can maintain a high accuracy due to emulating symbolic computations on top of properly distributed representations. | [
"Analogical reasoning",
"large language models",
"vector-symbolic architectures",
"reasoning benchmarks"
] | https://openreview.net/pdf?id=F90YO0MacL | 2Urel7GjZf | decision | 1,735,601,657,567 | F90YO0MacL | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Accept
comment: This paper compares LLMs and neuro-symbolic methods for RPM tasks and may pave the way towards novel pure neuro-symbolic unification methods for visual logical reasoning.
title: Paper Decision |
APAaTfU21K | Solving the Rubik’s Cube in a Human-like Manner with Assisted Reinforcement Learning (ARL) | [] | Human-AI collaboration is most key in situations in which AI must approach problems in a human-like manner. In this work, we present a novel approach to Rubik’s cube solving that utilizes human-like solving techniques. We demonstrate assisted reinforcement learning (ARL), in which RL trains to solve the cube in separate steps (CFOP), thereby emulating human behavior.
Secondly, we applied inverse reinforcement learning (IRL) to align AI behavior with human problem-solving. We create a dataset of over 10,000 human Rubik’s cube solves and train to achieve a reward function that accurately reflects the goals and preferences of human solvers. As a result, the system is able to generalize across different cube states while maintaining interpretability.
Our research demonstrates the potential of combining ARL and IRL to close the gap between human and AI behavior. We successfully highlight the interdisciplinary nature of training AI to solve a trivial task while imitating complex human behavior. | [
"ai",
"ml",
"rl",
"inverse rl",
"deep learning"
] | https://openreview.net/pdf?id=APAaTfU21K | zzCeltr3gM | official_review | 1,734,707,089,399 | APAaTfU21K | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission24/Reviewer_4zVV"
] | title: Interesting paper but unclear of its relevance to the workshop
review: The paper considers the problem of human-AI collaboration in solving Rubik's cube. The idea is to enhance assisted RL by first learning the reward functions from trajectories (IRL). The results demonstrate that the resulting system is able to get close to human behavior.
The paper is interesting in that it solves a challenging problem with an intuitive solution. The results look good.
The relevance of the paper to mathematical discovery and neural models is unclear. There are no algorithmic contributions as it is a methodological paper (which I do not hold against the paper). Since there are no fundamental research contributions, it becomes harder to evaluate its relevance to the workshop.
rating: 4
confidence: 5 |
APAaTfU21K | Solving the Rubik’s Cube in a Human-like Manner with Assisted Reinforcement Learning (ARL) | [] | Human-AI collaboration is most key in situations in which AI must approach problems in a human-like manner. In this work, we present a novel approach to Rubik’s cube solving that utilizes human-like solving techniques. We demonstrate assisted reinforcement learning (ARL), in which RL trains to solve the cube in separate steps (CFOP), thereby emulating human behavior.
Secondly, we applied inverse reinforcement learning (IRL) to align AI behavior with human problem-solving. We create a dataset of over 10,000 human Rubik’s cube solves and train to achieve a reward function that accurately reflects the goals and preferences of human solvers. As a result, the system is able to generalize across different cube states while maintaining interpretability.
Our research demonstrates the potential of combining ARL and IRL to close the gap between human and AI behavior. We successfully highlight the interdisciplinary nature of training AI to solve a trivial task while imitating complex human behavior. | [
"ai",
"ml",
"rl",
"inverse rl",
"deep learning"
] | https://openreview.net/pdf?id=APAaTfU21K | Nlcg5tvSxZ | official_review | 1,735,487,159,901 | APAaTfU21K | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission24/Reviewer_DzdP"
] | title: Reviews
review: ## Summary
The key argument of this paper is that a Rubik's cube solver built with ARL may produce solutions that are not intuitive to human players. Thus, this paper proposes to incorporate a human prior, derived from 10,000 collected human solutions, via IRL.
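For context, the simplest feature-matching view of IRL, which adversarial variants such as AIRL generalize, can be sketched as follows (a toy linear-reward simplification of my own, not the paper's AIRL pipeline):

```python
import numpy as np

def irl_weight_update(w, expert_feats, policy_feats, lr=0.1):
    """One gradient step of linear feature-matching IRL.
    The reward is r(s, a) = w . phi(s, a); w is pushed toward features
    the human experts visit more often than the current policy does."""
    grad = expert_feats.mean(axis=0) - policy_feats.mean(axis=0)
    return w + lr * grad
```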
## Pros
- This paper proposes to use IRL to enforce Rubik's cube solver to produce human-like solutions.
## Cons
- This paper does not evaluate the interpretability of the built Rubik's cube solver
- For well-defined symbolic problems, such as the Rubik's cube and Go, a human-like manner does not imply good interpretability. In particular, the reviewer would consider it more meaningful to mine new Rubik's cube solving rules from automatically built solvers.
- It would be nice if the authors could use a pre-trained LLM as an initial policy model
rating: 4
confidence: 3 |
APAaTfU21K | Solving the Rubik’s Cube in a Human-like Manner with Assisted Reinforcement Learning (ARL) | [] | Human-AI collaboration is most key in situations in which AI must approach problems in a human-like manner. In this work, we present a novel approach to Rubik’s cube solving that utilizes human-like solving techniques. We demonstrate assisted reinforcement learning (ARL), in which RL trains to solve the cube in separate steps (CFOP), thereby emulating human behavior.
Secondly, we applied inverse reinforcement learning (IRL) to align AI behavior with human problem-solving. We create a dataset of over 10,000 human Rubik’s cube solves and train to achieve a reward function that accurately reflects the goals and preferences of human solvers. As a result, the system is able to generalize across different cube states while maintaining interpretability.
Our research demonstrates the potential of combining ARL and IRL to close the gap between human and AI behavior. We successfully highlight the interdisciplinary nature of training AI to solve a trivial task while imitating complex human behavior. | [
"ai",
"ml",
"rl",
"inverse rl",
"deep learning"
] | https://openreview.net/pdf?id=APAaTfU21K | CXaXFQtJF6 | decision | 1,735,598,401,368 | APAaTfU21K | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: We agree with the opinions of the reviewers. |
APAaTfU21K | Solving the Rubik’s Cube in a Human-like Manner with Assisted Reinforcement Learning (ARL) | [] | Human-AI collaboration is most key in situations in which AI must approach problems in a human-like manner. In this work, we present a novel approach to Rubik’s cube solving that utilizes human-like solving techniques. We demonstrate assisted reinforcement learning (ARL), in which RL trains to solve the cube in separate steps (CFOP), thereby emulating human behavior.
Secondly, we applied inverse reinforcement learning (IRL) to align AI behavior with human problem-solving. We create a dataset of over 10,000 human Rubik’s cube solves and train to achieve a reward function that accurately reflects the goals and preferences of human solvers. As a result, the system is able to generalize across different cube states while maintaining interpretability.
Our research demonstrates the potential of combining ARL and IRL to close the gap between human and AI behavior. We successfully highlight the interdisciplinary nature of training AI to solve a trivial task while imitating complex human behavior. | [
"ai",
"ml",
"rl",
"inverse rl",
"deep learning"
] | https://openreview.net/pdf?id=APAaTfU21K | 1LtieupWPa | official_review | 1,735,077,918,270 | APAaTfU21K | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission24/Reviewer_yKTB"
] | title: Interesting approach but lacking evaluation
review: Summary:
- This paper implemented a 3D Rubik's cube visualization and trained an agent to solve Rubik's cubes using DQN.
Further, it demonstrates that inverse reinforcement learning can be used to train a policy based on a dataset of 10,000 solving sequences from human solvers.
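For concreteness, the Stable-Baselines3 DQN setup described would look roughly like this (the environment name and hyperparameters are placeholders, not the paper's):

```python
import gymnasium as gym
from stable_baselines3 import DQN

# Hypothetical registered cube environment; any discrete-action
# gymnasium.Env exposing the cube state as the observation fits here.
env = gym.make("RubiksCube-v0")

model = DQN("MlpPolicy", env, learning_rate=1e-4, buffer_size=100_000, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("dqn_rubiks_cube")
```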
Strengths:
- Implementation of an agent solving Rubik's cubes using DQN
- Collection of an extensive Rubik's cube dataset based on many different human solvers
Weaknesses:
- No ablations on design decisions
- No evaluation of the main claims of interpretability/human-alignment/trust or anything similar
- No comparison with baselines optimizing for shortest solving sequence (e.g. DeepCubeA)
- Serious number of missing citations
- The paper lacks a coherent narrative, it is not quite clear how the individual parts contribute to the overall goal
Detailed points for improvement:
- Abstract
* Parts could be rewritten for improved clarity, for example sentences such as "We demonstrate assisted reinforcement learning, ..."
- Related work
* Please add an explanation of what CFOP really is
* Please add a citation to support your statement about the growing research area at the intersection of AI and human problem-solving
* Mention of prior works incorporating domain-specific heuristics into RL frameworks but no citations
* Please add a citation to support your statement about IRL being the most common and useful tool for inferring reward functions
* Please add a citation to support your statement of prior studies showing the usefulness of visualizations for human-centered AI
* Your statement about interpretability in aligning AI systems with human cognitive processes having been underscored in many studies is backed only by a single citation of an online article that does not support it; please add a proper citation
* Please add a citation to support your statement that it has been shown that interpretability is necessary for trust and collaboration
* Statement about breaking strategies down into human-understandable steps providing numerous benefits in AlphaZero, but the citation of an online article does not underscore the previous statement
* The conclusion that follows, namely that using a 3D visualization instead of text further increases interpretability, is not supported by the previous statements.
- ARL Approach
* Please add a citation to support your statement about prior studies having shown the usefulness of visualizations in human-centered AI
* What part of this training setup makes up the "assisted" in assisted RL? How does it learn from traditional human techniques/human intuition?
* You introduce the interactive Rubik's cube model, but then you don't do anything with it. How do you use it? Can you show that it helps with interpretability or trust?
* How long was the history in the observation you have eventually used? How did you model partial observability, and is this really necessary?
* Given your sparse explanation of the moves, I am confused as to how invalid moves could occur. Some more details on this would be beneficial.
* How do these actions relate to CFOP? These actions seem to be the default/obvious actions of a Rubik's cube, also used in previous work (e.g. https://arxiv.org/pdf/1805.07470)
* Please add a citation for Stable Baselines3 (see https://stable-baselines3.readthedocs.io/en/master/ at the bottom for proper citation)
* Figure 2 is missing the axis labels
* Your approach to hyperparameter optimization is commendable; however, given Figure 2 and the lack of reported performance across different random seeds, the performance differences might be dominated by randomness.
* Using a callback and regular checkpointing is useful. I assume that you then select the best checkpoint according to mean episode length? IMO, this does not prevent overfitting, but allows you to select the best model before your policy diverges
* Weird citation for the MLP used. Also, what MLP architecture did you use in the end?
* How many steps of random permuations did you use for the initial state? Did your agent achieve 100% success rate regardless of the number of permutations?
- IRL Approach
* Figure 3, do you mean "over 50 moves" for the right leaves?
* How does the gym environment used here differ from the one for ARL?
* Please cite the paper for AIRL, not only a library (https://arxiv.org/abs/1710.11248)
* How do you initialize the generator, on what kind of synthetic environments do you pretrain?
* What do you mean by "The Adam optimizer was also used to stabilize learning rates"?
* Could you provide some insights and details, ideally a quantitative analysis, of how you determined that the policy solves the cube in a human-like way? One option would be to use your reward model to score the actions of your policies: compare the rewards your model assigns to the AIRL policy, the ARL policy, and existing solvers such as DeepCubeA (Agostinelli, Forest, et al. "Solving the Rubik’s cube with deep reinforcement learning and search." Nature Machine Intelligence 1.8 (2019): 356-363.); a sketch of this comparison is given at the end of this review.
* The dataset could be a nice contribution for future work. Please consider making it public.
- Conclusion
* How was the 3D cube model crucial to the leisure of human users?
* It remains unclear how human intuition was added to the ARL approach
* It would be beneficial to mention that using IRL, you managed to build a reward function modelling human behaviour, based on your collected dataset of human moves.
* Please actually provide some insights on the benefits of the learned reward function
Minor:
- Captions should be below the table for the AAAI style
- intrepretability -> interpretability
- more slow -> slower
- missing words "DQNs utilize an \eps-greedy for exploration"
- Gym has been deprecated in favor of Gymnasium (https://gymnasium.farama.org/)
- Citation at the wrong place in "Stable Baselines’ (Mnih 2013) implementation of DQN uses two neural networks"
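As a concrete illustration of the reward-model comparison suggested under the IRL points (all names here are placeholders I made up, not the authors' code):

```python
import numpy as np

def mean_learned_reward(reward_fn, trajectories):
    """Average learned reward over trajectories given as (state, action) pairs."""
    per_traj = [np.mean([reward_fn(s, a) for s, a in traj]) for traj in trajectories]
    return float(np.mean(per_traj))

# Placeholder usage: score each policy's solves with the learned AIRL reward.
#   mean_learned_reward(airl_reward, airl_policy_trajectories)
#   mean_learned_reward(airl_reward, arl_policy_trajectories)
#   mean_learned_reward(airl_reward, deepcubea_trajectories)
# A higher average suggests behavior closer to the human-derived reward.
```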
rating: 4
confidence: 4 |
9rka7Z2Gss | Reasoner: A Model of Thoughtful Processing Beyond Attention | [] | The Reasoner model introduces a novel approach to language processing that surpasses the limitations of attention-based transformer models (Vaswani et al., 2017). Unlike transformers, which rely on token-level relationships and attention mechanisms, the Reasoner model integrates structured reasoning processes to achieve deeper contextual understanding. Leveraging the Natural Semantic Metalanguage (NSM) framework (Wierzbicka, 1996), it simplifies language into semantic primitives and employs Bayesian inference to iteratively update its understanding based on new information (Cohen, 2021; Sreedharan et al., 2023). This combination of semantic transparency, probabilistic reasoning, and vectorized representations positions the Reasoner as a highly interpretable and adaptable alternative to existing models. Comparative analysis highlights its ad-vantages in interpretability, scalability, and adaptability to complex linguistic tasks. | [
"natural language processing",
"Bayesian inference",
"semantic primes",
"Reasoner model",
"AI reasoning"
] | https://openreview.net/pdf?id=9rka7Z2Gss | yoy3nmlysF | official_review | 1,734,575,506,166 | 9rka7Z2Gss | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission9/Reviewer_xYBG"
] | title: reviews for sub9
review: 1. It would be better to conduct experiments that provide evidence for your methods.
2. It would be better to compare against state-of-the-art methods.
rating: 3
confidence: 4 |
9rka7Z2Gss | Reasoner: A Model of Thoughtful Processing Beyond Attention | [] | The Reasoner model introduces a novel approach to language processing that surpasses the limitations of attention-based transformer models (Vaswani et al., 2017). Unlike transformers, which rely on token-level relationships and attention mechanisms, the Reasoner model integrates structured reasoning processes to achieve deeper contextual understanding. Leveraging the Natural Semantic Metalanguage (NSM) framework (Wierzbicka, 1996), it simplifies language into semantic primitives and employs Bayesian inference to iteratively update its understanding based on new information (Cohen, 2021; Sreedharan et al., 2023). This combination of semantic transparency, probabilistic reasoning, and vectorized representations positions the Reasoner as a highly interpretable and adaptable alternative to existing models. Comparative analysis highlights its ad-vantages in interpretability, scalability, and adaptability to complex linguistic tasks. | [
"natural language processing",
"Bayesian inference",
"semantic primes",
"Reasoner model",
"AI reasoning"
] | https://openreview.net/pdf?id=9rka7Z2Gss | j5KH6eAaQ9 | decision | 1,735,598,400,934 | 9rka7Z2Gss | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: We agree with the opinions of the reviewers. |
9rka7Z2Gss | Reasoner: A Model of Thoughtful Processing Beyond Attention | [] | The Reasoner model introduces a novel approach to language processing that surpasses the limitations of attention-based transformer models (Vaswani et al., 2017). Unlike transformers, which rely on token-level relationships and attention mechanisms, the Reasoner model integrates structured reasoning processes to achieve deeper contextual understanding. Leveraging the Natural Semantic Metalanguage (NSM) framework (Wierzbicka, 1996), it simplifies language into semantic primitives and employs Bayesian inference to iteratively update its understanding based on new information (Cohen, 2021; Sreedharan et al., 2023). This combination of semantic transparency, probabilistic reasoning, and vectorized representations positions the Reasoner as a highly interpretable and adaptable alternative to existing models. Comparative analysis highlights its ad-vantages in interpretability, scalability, and adaptability to complex linguistic tasks. | [
"natural language processing",
"Bayesian inference",
"semantic primes",
"Reasoner model",
"AI reasoning"
] | https://openreview.net/pdf?id=9rka7Z2Gss | buQ5MZeFzl | official_review | 1,734,683,348,123 | 9rka7Z2Gss | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission9/Reviewer_9TEg"
] | title: Insufficient approach description, missing empirical evidence
review: ### Summary
This paper proposes an approach that combines Natural Semantic Metalanguage (NSM) and Bayesian inference to perform structured linguistic reasoning. Input sequences are processed in different steps. First, they are simplified into a predefined set of 65 semantic primes and successively further recombined to represent semantic relationships, as defined in the NSM framework. Then, these basic words and combinations are projected into a vector space, which encodes semantic relations between these vectors. Finally, Bayesian inference is used to generate potential hypotheses about the relationships between concepts and “the nature of the world”. The authors claim that this approach is superior to Transformer-based and neuro-symbolic approaches, showing better contextual understanding, interpretability, and scalability.
### Review
It is impossible to understand from the submitted manuscript how the model really works, as the authors only sketch a vague overview of the different components and do not provide any detail on their practical implementation and integration. No experimental results or empirical evidence are included to support the authors claims. Furthermore, the submission is not anonymous.
rating: 2
confidence: 4 |
8uRoFM7Zi3 | Learning Probabilistic Logic Models over Structured and Unstructured Data | [] | Effective decision-making in high-stakes domains necessitates reconciling information from structured and unstructured data with incomplete and imprecise background knowledge. Relational Dependency Networks are a popular class of probabilistic logic models that support efficient reasoning over structured data and symbolic domain knowledge but struggle to accommodate unstructured data such as images and text. On the other hand, neural networks excel at extracting patterns from unstructured data but are not amenable to reasoning. We propose Deep Relational Dependency Networks which combine Relational Dependency Networks with neural networks to reason effectively about multimodal data and symbolic domain knowledge. Experiments on scene classification tasks with noisy and limited data indicate that this approach yields more accurate yet interpretable models. | [
"probabilistic logic models",
"relational dependency networks",
"neurosymbolic learning",
"knowledge-based learning"
] | https://openreview.net/pdf?id=8uRoFM7Zi3 | oAaScTcYSD | decision | 1,735,598,401,291 | 8uRoFM7Zi3 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: This paper does not fit the scope of this workshop. It has some novelty but is a bit old-fashioned in targeting the learning of probabilistic logic models.
8uRoFM7Zi3 | Learning Probabilistic Logic Models over Structured and Unstructured Data | [] | Effective decision-making in high-stakes domains necessitates reconciling information from structured and unstructured data with incomplete and imprecise background knowledge. Relational Dependency Networks are a popular class of probabilistic logic models that support efficient reasoning over structured data and symbolic domain knowledge but struggle to accommodate unstructured data such as images and text. On the other hand, neural networks excel at extracting patterns from unstructured data but are not amenable to reasoning. We propose Deep Relational Dependency Networks which combine Relational Dependency Networks with neural networks to reason effectively about multimodal data and symbolic domain knowledge. Experiments on scene classification tasks with noisy and limited data indicate that this approach yields more accurate yet interpretable models. | [
"probabilistic logic models",
"relational dependency networks",
"neurosymbolic learning",
"knowledge-based learning"
] | https://openreview.net/pdf?id=8uRoFM7Zi3 | Uqhy12jTrP | official_review | 1,734,575,381,587 | 8uRoFM7Zi3 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission23/Reviewer_T8rg"
] | title: Reviews for Sub23
review: 1. It would be better to compare with state-of-the-art deep decision networks and tree-based models.
2. It would be better to run experiments on large-scale datasets.
rating: 4
confidence: 4 |
8uRoFM7Zi3 | Learning Probabilistic Logic Models over Structured and Unstructured Data | [] | Effective decision-making in high-stakes domains necessitates reconciling information from structured and unstructured data with incomplete and imprecise background knowledge. Relational Dependency Networks are a popular class of probabilistic logic models that support efficient reasoning over structured data and symbolic domain knowledge but struggle to accommodate unstructured data such as images and text. On the other hand, neural networks excel at extracting patterns from unstructured data but are not amenable to reasoning. We propose Deep Relational Dependency Networks which combine Relational Dependency Networks with neural networks to reason effectively about multimodal data and symbolic domain knowledge. Experiments on scene classification tasks with noisy and limited data indicate that this approach yields more accurate yet interpretable models. | [
"probabilistic logic models",
"relational dependency networks",
"neurosymbolic learning",
"knowledge-based learning"
] | https://openreview.net/pdf?id=8uRoFM7Zi3 | KHKKs471Oa | official_review | 1,734,702,729,891 | 8uRoFM7Zi3 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission23/Reviewer_7hHF"
] | title: A new approach towards neuro-symbolic integration
review: This paper proposes a new method to combine relational dependency networks (in the form of relational probabilistic decision trees (RDTs)) with neural networks in order to leverage both structured and unstructured data for decision-making in noisy environments. Overall, the paper is well-motivated, and the methods are clearly described. While this is only a short paper, it might still benefit from more elaborate experiments that also compare to state-of-the-art methods.
Some open questions/limitations that could be addressed
* Why would the logical penalty function (end of page 2) include phi that depends on unstructured data? This way, the unstructured data could invalidate the logical penalty function. Indeed, the penalty function on page 3 doesn’t contain any contribution from the unstructured data anymore.
* It is unclear how the contributions of the neural refinement and the RPT can be balanced; a weighted combination would be one natural formulation (see the sketch after this list).
* It would be interesting to see how the model behaves with different levels of noise in the structured data as well as the labels. I assume that with zero noise, an RPT would be sufficient to achieve high accuracy. When (at which noise level) does it become helpful to consider a combination of RPT and NN?
* Where do we get the labels for the RPT? Could this be provided by another NN?
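To make the balancing question above concrete, one natural formulation (purely illustrative on my part, not the authors' actual objective) is a weighted combination of the two predictors plus a weighted logical penalty:

```python
import torch.nn.functional as F

def combined_loss(rpt_logits, nn_logits, labels, logic_penalty, alpha=0.5, lam=0.1):
    """alpha trades off the neural refinement vs. the RPT; lam weights the penalty."""
    logits = alpha * nn_logits + (1.0 - alpha) * rpt_logits
    return F.cross_entropy(logits, labels) + lam * logic_penalty
```

An ablation over alpha at different noise levels would answer the balancing and the noise questions at once.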
rating: 7
confidence: 4 |
8uRoFM7Zi3 | Learning Probabilistic Logic Models over Structured and Unstructured Data | [] | Effective decision-making in high-stakes domains necessitates reconciling information from structured and unstructured data with incomplete and imprecise background knowledge. Relational Dependency Networks are a popular class of probabilistic logic models that support efficient reasoning over structured data and symbolic domain knowledge but struggle to accommodate unstructured data such as images and text. On the other hand, neural networks excel at extracting patterns from unstructured data but are not amenable to reasoning. We propose Deep Relational Dependency Networks which combine Relational Dependency Networks with neural networks to reason effectively about multimodal data and symbolic domain knowledge. Experiments on scene classification tasks with noisy and limited data indicate that this approach yields more accurate yet interpretable models. | [
"probabilistic logic models",
"relational dependency networks",
"neurosymbolic learning",
"knowledge-based learning"
] | https://openreview.net/pdf?id=8uRoFM7Zi3 | CMoMTC4u5v | official_review | 1,735,108,859,666 | 8uRoFM7Zi3 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission23/Reviewer_Znpw"
] | title: Novel Approach but Choice of Dataset (Highly Unstructured only) not suited for Experiments done
review: ## Summary
---
This paper presents Deep Relational Dependency Networks (Deep-RDN), a framework that combines relational dependency networks and neural networks to integrate structured and unstructured data. It employs a decision tree for structured data and a neural network for unstructured inputs, incorporating domain knowledge through preference rules. Tested on ADE20k and RelKP, the model outperforms baselines in noisy, multimodal tasks.
## Strengths
---
- Novel combination of symbolic reasoning and deep learning with consistent improvements over baselines, especially in noisy scenarios.
- Comprehensive evaluation with clear metrics and integration of domain knowledge for enhanced interpretability.
## Suggestions for Improvement
---
- Dataset Choice: ADE20k is primarily unstructured and doesn’t align well with healthcare or other high-stakes applications. Medical datasets like MIMIC-IV or CheXpert, which combine structured records and unstructured imaging/text, would better validate the model's utility.
- Missed Citations: The paper overlooks related work in neurosymbolic reasoning and multimodal learning frameworks, such as approaches integrating neural networks with probabilistic graphical models (e.g., Learning using Privileged Information by Vapnik, hybrid neurosymbolic frameworks).
While innovative, addressing these gaps and testing on more relevant datasets would enhance the paper's alignment with its stated high-stakes application focus.
rating: 5
confidence: 4 |
84M0Jaiapl | LLM-based SQL Generation with Reinforcement Learning | [] | The text-to-SQL problem remains a challenging task, even with the advancements of Large Language Models (LLMs). Current state-of-the-art models require extensive preprocessing steps and powerful LLMs to achieve accurate SQL query generation, which leads to significant resource utilization. We introduce two models deriving from one another SQL-RL-GEN and SQL-RL-GEN∗, that improve text-to-sql generation while minimizing the resources needed for training and maximizing flexibility. The SQL-RL-GEN generates a reward function to guide the agent’s training process, while SQL-RL-GEN∗ uses this reward function to tune a base LLM in solving the specified task. Our models achieve an accuracy improvement of 2-7% compared to state-of-the-art methods on a limited training dataset composed of only 1000 samples and with a small LLM of 248M parameters. | [
"Large Language Models",
"Generative AI",
"Reinforcement Learning",
"Text-to-SQL",
"SQL Query Generation",
"Resource Efficiency"
] | https://openreview.net/pdf?id=84M0Jaiapl | kUO5V06nKM | official_review | 1,734,275,885,363 | 84M0Jaiapl | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission10/Reviewer_bTRR"
] | title: Good application of EUREAK PPO for text-to-SQL task with good experiments and results
review: This paper proposes an RL-based approach to fine-tune a relatively small LM with limited samples (~1000) to outperform the SOTA on the task of text-to-SQL. The paper is well written, and the experiments and preliminary results seem solid.
Here are some strengths of this paper:
1. Relevance of this paper's topic is high: the paper picks a relatively difficult problem, text-to-SQL, which would have a big impact if it could be solved at human-level accuracy (SQL engineers achieve more than 93% accuracy according to an IBM text-to-SQL generator study). It is of great interest to many industry practitioners as well.
2. Applying a modified EUREKA PPO algorithm for reward function generation: this paper uses EUREKA-style PPO with multiple trials on the same sample to generate the reward function. EUREKA is a gradient-free, evolutionary-search-based algorithm that performs well without any reward templates (a sketch of the overall loop follows this list).
3. Evaluation of the experiments is good: this paper uses one LLM for reward-function generation and another, smaller LLM that is fine-tuned using the reward function generated in the previous step. These two steps iterate until a stopping criterion is reached. The results seem solid; the improvement over previously published results is not dramatic, but it is still reasonably better!
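As a sketch of the loop described in points 2 and 3 above (the function names and decomposition are my own illustration for discussion, not the paper's code):

```python
def reward_search_loop(propose_reward, ppo_finetune, evaluate, n_rounds=5):
    """propose_reward() -> reward_fn(question, gold_sql, pred_sql) -> float
    ppo_finetune(reward_fn) -> fine-tuned model (e.g. flan-t5-base + PPO)
    evaluate(model) -> validation accuracy in [0, 1]"""
    best_fn, best_acc = None, -1.0
    for _ in range(n_rounds):
        reward_fn = propose_reward()     # large LLM writes a candidate reward
        model = ppo_finetune(reward_fn)  # small LLM tuned under that reward
        acc = evaluate(model)
        if acc > best_acc:               # keep the best-performing reward
            best_fn, best_acc = reward_fn, acc
    return best_fn, best_acc
```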
Some weakness of this paper:
1. Better clarity of presentation: The paper refers to EUREKA in the introduction and the SQL environment setup, but does not mention EUREKA again in the experiments section when it describes PPO. It would be better to describe how the EUREKA configuration differs from the PPO reference it cites, and to introduce EUREKA and PPO together in the background/introduction section with a clearer comparison of when one would be used over the other.
2. Choices of LLMs and their impact on the approach: Since this paper uses only one LLM (llama-3-405b-instruct) for reward generation and one LLM (flan-t5-base) for fine-tuning with the generated reward function, it is not clear whether the choice of LLM matters. In particular, the latest OpenAI GPT-4o and other SOTA code-optimized pretrained LLMs are not evaluated, and the baseline model chosen for fine-tuning is not necessarily representative. It is therefore hard to tell whether the RL approach would remain impactful if other LLMs were used.
3. Choice of sample size for fine-tuning is not well justified: While the paper emphasizes that it uses a small number of samples (1,000), it is not clear how this size was chosen. I would like to see scaling experiments that vary the number of fine-tuning samples, for example 100, 500, 1,000, 5,000, and 10,000, to show the impact of sample size on the final performance of the fine-tuned SQL-generation model.
4. DPO vs. PPO choice: It would be interesting to see whether DPO is applicable to this use case, where human preferences can be optimized directly. That would change the approach, but it is worth clarifying, at least in the background section.
Overall, this paper has some good ideas and experiments, with a few weaknesses, but it is good enough for the workshop to accept for presentation and discussion.
rating: 7
confidence: 5 |
84M0Jaiapl | LLM-based SQL Generation with Reinforcement Learning | [] | The text-to-SQL problem remains a challenging task, even with the advancements of Large Language Models (LLMs). Current state-of-the-art models require extensive preprocessing steps and powerful LLMs to achieve accurate SQL query generation, which leads to significant resource utilization. We introduce two models deriving from one another SQL-RL-GEN and SQL-RL-GEN∗, that improve text-to-sql generation while minimizing the resources needed for training and maximizing flexibility. The SQL-RL-GEN generates a reward function to guide the agent’s training process, while SQL-RL-GEN∗ uses this reward function to tune a base LLM in solving the specified task. Our models achieve an accuracy improvement of 2-7% compared to state-of-the-art methods on a limited training dataset composed of only 1000 samples and with a small LLM of 248M parameters. | [
"Large Language Models",
"Generative AI",
"Reinforcement Learning",
"Text-to-SQL",
"SQL Query Generation",
"Resource Efficiency"
] | https://openreview.net/pdf?id=84M0Jaiapl | OGCiuuT0eE | decision | 1,735,598,400,940 | 84M0Jaiapl | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This is a work that fits into the scope of industrial applications. |
84M0Jaiapl | LLM-based SQL Generation with Reinforcement Learning | [] | The text-to-SQL problem remains a challenging task, even with the advancements of Large Language Models (LLMs). Current state-of-the-art models require extensive preprocessing steps and powerful LLMs to achieve accurate SQL query generation, which leads to significant resource utilization. We introduce two models deriving from one another SQL-RL-GEN and SQL-RL-GEN∗, that improve text-to-sql generation while minimizing the resources needed for training and maximizing flexibility. The SQL-RL-GEN generates a reward function to guide the agent’s training process, while SQL-RL-GEN∗ uses this reward function to tune a base LLM in solving the specified task. Our models achieve an accuracy improvement of 2-7% compared to state-of-the-art methods on a limited training dataset composed of only 1000 samples and with a small LLM of 248M parameters. | [
"Large Language Models",
"Generative AI",
"Reinforcement Learning",
"Text-to-SQL",
"SQL Query Generation",
"Resource Efficiency"
] | https://openreview.net/pdf?id=84M0Jaiapl | 7clIT7Hjni | official_review | 1,735,363,755,064 | 84M0Jaiapl | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission10/Reviewer_ks6w"
] | title: Novel Approach on Improved Text 2 SQL with reduced param model, Can Improve paper quality with performance metrics and details on reward generation
review: ## Summary
---
This paper introduces SQL-RL-GEN and SQL-RL-GEN*, two novel approaches for text-to-SQL generation using reinforcement learning and large language models. The work addresses the challenge of generating SQL queries from natural language while minimizing computational resources. The authors propose using a reference reward function generated by SQL-RL-GEN to guide the training process, which is then utilized by SQL-RL-GEN* to fine-tune a base LLM. The paper demonstrates improved performance using only 1,000 training samples and a relatively small 248M parameter model.
## Strengths
---
- Strong empirical results showing improved accuracy (2-7%) over state-of-the-art methods while using only 1,000 training samples
- Resource-efficient approach that achieves good performance with a small base model while demonstrating versatility across different datasets
## Suggestions for Improvements
---
- The paper lacks a comprehensive analysis of model parameter counts and computational requirements compared to baseline methods like SQLNet and Seq2SQL, making it difficult to fully assess efficiency claims
- A more thorough comparison with other reward-based approaches in text-to-SQL generation would strengthen the paper's contribution
- More detailed analysis of failure cases and limitations would help guide future research in this direction
rating: 6
confidence: 4 |
84M0Jaiapl | LLM-based SQL Generation with Reinforcement Learning | [] | The text-to-SQL problem remains a challenging task, even with the advancements of Large Language Models (LLMs). Current state-of-the-art models require extensive preprocessing steps and powerful LLMs to achieve accurate SQL query generation, which leads to significant resource utilization. We introduce two models deriving from one another SQL-RL-GEN and SQL-RL-GEN∗, that improve text-to-sql generation while minimizing the resources needed for training and maximizing flexibility. The SQL-RL-GEN generates a reward function to guide the agent’s training process, while SQL-RL-GEN∗ uses this reward function to tune a base LLM in solving the specified task. Our models achieve an accuracy improvement of 2-7% compared to state-of-the-art methods on a limited training dataset composed of only 1000 samples and with a small LLM of 248M parameters. | [
"Large Language Models",
"Generative AI",
"Reinforcement Learning",
"Text-to-SQL",
"SQL Query Generation",
"Resource Efficiency"
] | https://openreview.net/pdf?id=84M0Jaiapl | 56t7Try1tU | official_review | 1,735,123,091,221 | 84M0Jaiapl | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission10/Reviewer_RFLD"
] | title: a novel method targeting at wide industrial application
review: This paper proposes a novel reinforcement learning method for the task of generating SQL statements from questions posed in natural language. The authors introduce two models: (1) SQL-RL-GEN generates a reward function that improves the training process, and (2) SQL-RL-GEN* uses the generated reward function to tune the LLM. Evaluated with a limited amount of training data, the new method achieves better performance than state-of-the-art methods.
Questions:
1) Is it possible to enumerate "all possible text prompts" in real applications?
2) It seems that the authors used the method of (Schulman et al. 2017) to generate reward functions. Is the method outdated? What are its limitations?
3) In the experiments, the authors repeated each sample 10 times before moving on. Did the LLM ignore the feedback, and if so, how often?
rating: 6
confidence: 3 |
5JiArtBtSG | Enhancing AI Capabilities on the Abstraction and Reasoning Corpus: A Path Toward Broad Generalization in Intelligence | [] | This position paper explores advancing artificial intelligence by improving its ability to generalize beyond training data, a key requirement for tasks in the Abstraction and Reasoning Corpus (ARC). Inspired by historical algorithmic challenges like the Bongard Problems, ARC tasks require pattern recognition and logical reasoning, pushing AI toward more flexible, human-like intelligence. We investigate DreamCoder, a neural-symbolic system, and the role of large language models in ARC. We emphasize the need for diverse data sources, inspired by human trials and synthetic data augmentation, and propose pipelines for logical reasoning using math-inspired neural architectures. This work underlines how ARC can guide AI research, bridging the gap between machine learning and mathematical discovery. | [
"broad generalization"
] | https://openreview.net/pdf?id=5JiArtBtSG | pUXPHFmXLH | official_review | 1,734,709,579,788 | 5JiArtBtSG | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission6/Reviewer_8ScV"
] | title: The paper presents intriguing ideas for advancing AI generalization through ARC but suffers from a lack of empirical evidence, speculative analogies, impractical collaboration details, and insufficient focus on ARC-specific challenges, leaving its proposals largely ungrounded.
review: The paper explores advancing artificial intelligence by addressing its limitations in generalization through the Abstraction and Reasoning Corpus (ARC), a benchmark for logic-based tasks requiring human-like reasoning. It critiques the narrow capabilities of current AI models, highlighting the neurosymbolic DreamCoder system for its structured reasoning and LLMs like GPT-4 for their adaptability while advocating for hybrid approaches that combine their strengths. The authors propose enriching AI capabilities through data augmentation, including synthetic datasets and human trial observations, and developing math-inspired neural architectures that embed logical rigor. They also emphasize human-AI collaboration, suggesting interactive frameworks where humans and machines jointly solve ARC tasks by leveraging complementary strengths.
Key points to address:
1. The paper heavily theorizes without presenting empirical results or concrete benchmarks for its proposed hybrid models or math-inspired architectures. This omission weakens the argument for their efficacy.
2. Drawing analogies with AlphaGo's approach and mathematical discovery seems speculative without real evidence that these strategies would generalize to ARC, which differs fundamentally from Go in structure and problem-solving requirements.
3. While the proposal for human-AI collaboration is appealing, it lacks practical implementation details, such as how interactive interfaces would function or how the collaboration pipeline would be evaluated.
4. The paper overlooks the unique challenges of ARC, such as its focus on abstract transformations that defy straightforward data-driven solutions. This weakens its proposals for data augmentation and math-inspired architectures, which may not align with ARC’s core demands.
5. The paper blends philosophical aspirations of AI generalization with technical proposals without a clear roadmap for achieving its goals.
rating: 3
confidence: 3 |
5JiArtBtSG | Enhancing AI Capabilities on the Abstraction and Reasoning Corpus: A Path Toward Broad Generalization in Intelligence | [] | This position paper explores advancing artificial intelligence by improving its ability to generalize beyond training data, a key requirement for tasks in the Abstraction and Reasoning Corpus (ARC). Inspired by historical algorithmic challenges like the Bongard Problems, ARC tasks require pattern recognition and logical reasoning, pushing AI toward more flexible, human-like intelligence. We investigate DreamCoder, a neural-symbolic system, and the role of large language models in ARC. We emphasize the need for diverse data sources, inspired by human trials and synthetic data augmentation, and propose pipelines for logical reasoning using math-inspired neural architectures. This work underlines how ARC can guide AI research, bridging the gap between machine learning and mathematical discovery. | [
"broad generalization"
] | https://openreview.net/pdf?id=5JiArtBtSG | nDPM3MZUSV | official_review | 1,735,079,106,638 | 5JiArtBtSG | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission6/Reviewer_kmZS"
] | title: Some interesting high-level ideas but limited novelty
review: Summary:
- This position paper investigates the role of language models and neurosymbolic systems in solving ARC. It proposes a hybrid approach, combining the strengths of LLMs and math-inspired neural architectures.
Further, it suggests using data augmentation to advance the abstraction and reasoning abilities of neural models.
Strengths:
- Summary of current approaches at solving ARC
- Advocacy for combining neural networks with the rigor of mathematical logic
Weaknesses:
- While this position paper includes a brief summary of current and potential future approaches of solving ARC, it does not really provide any novel insights (compare (Bober-Irizar and Banerjee 2024)).
- Human collaboration: On one hand this is not the point of ARC (see Chollet 2019). On the other hand, the potential for improvement would be quite small, as humans are already very good at solving ARC (LeGris, Solim, et al. "H-ARC: A Robust Estimate of Human Performance on the Abstraction and Reasoning Corpus Benchmark." arXiv preprint arXiv:2409.01374 (2024)).
- Data augmentation: While data augmentation might certainly help with ARC, it is unclear whether it actually helps neural models achieve the generalization required to solve ARC, or whether it only moves the test tasks in-distribution.
- The idea of combining neural architectures with math-inspired principles is interesting but not novel.
(Wang, Ruocheng, et al. "Hypothesis search: Inductive reasoning with language models." arXiv preprint arXiv:2309.05660 (2023)) and (Barke, Shraddha, et al. "HYSYNTH: Context-Free LLM Approximation for Guiding Program Synthesis." The Thirty-eight Conference on Neural Information Processing Systems (2024)) and (Kalyanpur, Aditya, et al. "Llm-arc: Enhancing llms with an automated reasoning critic." arXiv preprint arXiv:2406.17663 (2024)) have already proposed and evaluated similar ideas.
- Deep-learning guided program synthesis has been used quite extensively in the latest ARC Prize 2024, so the ideas have been known for a while (Chollet, Francois, et al. "ARC Prize 2024: Technical Report." arXiv preprint arXiv:2412.04604 (2024).)
Relevant papers not properly cited, e.g.
- Bongard Problems (M. Bongard. Pattern Recognition. Spartan Books, New York, 1970.)
- AlphaGo (Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." nature 529.7587 (2016): 484-489.)
- DreamCoder (Ellis, Kevin, et al. "Dreamcoder: Bootstrapping inductive program synthesis with wake-sleep library learning." Proceedings of the 42nd acm sigplan international conference on programming language design and implementation. 2021.)
- Early works on using language/LLMs for solving ARC, e.g. (Camposampiero, Giacomo, et al. "Abstract visual reasoning enabled by language." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.) and (Acquaviva, Sam, et al. "Communicating natural programs to humans and machines." Advances in Neural Information Processing Systems 35 (2022): 3731-3743.)
Overall, many of the points brought up in this position paper follow the work by (Bober-Irizar and Banerjee 2024), with limited novelty in its ideas.
rating: 4
confidence: 4 |
5JiArtBtSG | Enhancing AI Capabilities on the Abstraction and Reasoning Corpus: A Path Toward Broad Generalization in Intelligence | [] | This position paper explores advancing artificial intelligence by improving its ability to generalize beyond training data, a key requirement for tasks in the Abstraction and Reasoning Corpus (ARC). Inspired by historical algorithmic challenges like the Bongard Problems, ARC tasks require pattern recognition and logical reasoning, pushing AI toward more flexible, human-like intelligence. We investigate DreamCoder, a neural-symbolic system, and the role of large language models in ARC. We emphasize the need for diverse data sources, inspired by human trials and synthetic data augmentation, and propose pipelines for logical reasoning using math-inspired neural architectures. This work underlines how ARC can guide AI research, bridging the gap between machine learning and mathematical discovery. | [
"broad generalization"
] | https://openreview.net/pdf?id=5JiArtBtSG | QMQarxVbAe | decision | 1,735,598,400,726 | 5JiArtBtSG | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Reject
comment: We agree with the major opinions of the reviewers. |
5JiArtBtSG | Enhancing AI Capabilities on the Abstraction and Reasoning Corpus: A Path Toward Broad Generalization in Intelligence | [] | This position paper explores advancing artificial intelligence by improving its ability to generalize beyond training data, a key requirement for tasks in the Abstraction and Reasoning Corpus (ARC). Inspired by historical algorithmic challenges like the Bongard Problems, ARC tasks require pattern recognition and logical reasoning, pushing AI toward more flexible, human-like intelligence. We investigate DreamCoder, a neural-symbolic system, and the role of large language models in ARC. We emphasize the need for diverse data sources, inspired by human trials and synthetic data augmentation, and propose pipelines for logical reasoning using math-inspired neural architectures. This work underlines how ARC can guide AI research, bridging the gap between machine learning and mathematical discovery. | [
"broad generalization"
] | https://openreview.net/pdf?id=5JiArtBtSG | 3QZuIPCvEY | official_review | 1,734,512,547,027 | 5JiArtBtSG | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission6/Reviewer_4Q3d"
] | title: Review
review: This paper explores a dual approach to aligning AI with human-like reasoning, combining structured neural-symbolic methods with the adaptive capabilities of Large Language Models (LLMs). Focused on the Abstraction and Reasoning Corpus (ARC) as a benchmark, it highlights how these techniques can complement human problem-solving and advance abstraction and broad generalization in AI systems.
1) The paper effectively underscores a critical limitation of current AI systems: their inability to generalize and reason abstractly in human-like ways. The emphasis on ARC as a benchmark for "broad generalization" is well-motivated, presenting a relevant challenge to AI research.
2) The discussion of DreamCoder and Large Language Models (LLMs) highlights their complementary strengths. The hybrid approach proposed by leveraging neural-symbolic techniques alongside foundation models shows thoughtful integration of existing tools. In particular to overcome abstract reasoning, intuition, and contextual understanding where LLMs still struggle.
3) While the paper promotes structured architectures inspired by mathematical logic, it does not convincingly argue why such structures alone would suffice to bridge the gap between current AI capabilities and human reasoning. I would appreciate more clarity and examples on the mathematical structures and tools that the authors claim should be used.
rating: 6
confidence: 3 |
1dIwEDNSvY | Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching | [] | Discovering Ordinary Differential Equations (ODEs) from trajectory data is a critical task in AI-driven scientific discovery. Recent methods for symbolic discovery of ODEs primarily rely on fixed training datasets collected a priori. How- ever, it often results in suboptimal performance, as shown in our observations in Figure 1. Inspired by the active learning strategy, we consider querying informative trajectory data to evaluate predicted ODEs. Chaos theory suggests that small changes in the initial conditions of a dynamical system can lead to vastly different trajectories, which ask for maintaining a large set of initial conditions. To address this challenge, we introduce Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching (APPS). Rather than directly selecting individual initial conditions, APPS first identifies an informative region and then samples a batch of initial conditions within that region. Compared to traditional active learning methods, APPS eliminates the need to maintain large amounts of training data. Extensive experiments demonstrate that APPS consistently discovers more accurate ODE expressions compared to baseline methods. | [
"Equation discovery",
"ordinary differential equation",
"active data query",
"symbolic regression"
] | https://openreview.net/pdf?id=1dIwEDNSvY | v5kVhhptPH | official_review | 1,734,687,192,344 | 1dIwEDNSvY | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission7/Reviewer_vy5o"
] | title: Review
review: Summary:
The paper introduces APPS, a method for data-driven differential model discovery that employs active learning. APPS avoids the precomputation of a large training dataset by actively querying informative data regions, guided by phase portraits of candidate ODEs.
Major concerns:
- The extension of context-free grammars to represent an ODE as a sequence of grammar rules is not entirely new, as it builds on previous work on symbolic model discovery; the authors' contribution in this regard should be highlighted more clearly.
- Exploration of generalization properties appears limited, particularly with respect to long-term prediction and extrapolation beyond the training time interval. In addition, it would be useful to analyze the generalization capabilities of the framework with respect to discretization steps different from those used during the training procedure.
- A more in-depth analysis of the training times related to Algorithm 1 would be appreciated.
- The test cases considered are not "large-scale." I recommend that the authors apply the proposed framework—or at least discuss and comment on its application—to an ODE derived from the semi-discretization of a PDE, enabling it to handle systems with 1,000 to 10,000 or more DoFs.
rating: 7
confidence: 3 |
1dIwEDNSvY | Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching | [] | Discovering Ordinary Differential Equations (ODEs) from trajectory data is a critical task in AI-driven scientific discovery. Recent methods for symbolic discovery of ODEs primarily rely on fixed training datasets collected a priori. How- ever, it often results in suboptimal performance, as shown in our observations in Figure 1. Inspired by the active learning strategy, we consider querying informative trajectory data to evaluate predicted ODEs. Chaos theory suggests that small changes in the initial conditions of a dynamical system can lead to vastly different trajectories, which ask for maintaining a large set of initial conditions. To address this challenge, we introduce Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching (APPS). Rather than directly selecting individual initial conditions, APPS first identifies an informative region and then samples a batch of initial conditions within that region. Compared to traditional active learning methods, APPS eliminates the need to maintain large amounts of training data. Extensive experiments demonstrate that APPS consistently discovers more accurate ODE expressions compared to baseline methods. | [
"Equation discovery",
"ordinary differential equation",
"active data query",
"symbolic regression"
] | https://openreview.net/pdf?id=1dIwEDNSvY | kcZF97jxp4 | decision | 1,735,598,400,767 | 1dIwEDNSvY | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This work introduces an interesting method for scientific discovery. |
1dIwEDNSvY | Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching | [] | Discovering Ordinary Differential Equations (ODEs) from trajectory data is a critical task in AI-driven scientific discovery. Recent methods for symbolic discovery of ODEs primarily rely on fixed training datasets collected a priori. How- ever, it often results in suboptimal performance, as shown in our observations in Figure 1. Inspired by the active learning strategy, we consider querying informative trajectory data to evaluate predicted ODEs. Chaos theory suggests that small changes in the initial conditions of a dynamical system can lead to vastly different trajectories, which ask for maintaining a large set of initial conditions. To address this challenge, we introduce Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching (APPS). Rather than directly selecting individual initial conditions, APPS first identifies an informative region and then samples a batch of initial conditions within that region. Compared to traditional active learning methods, APPS eliminates the need to maintain large amounts of training data. Extensive experiments demonstrate that APPS consistently discovers more accurate ODE expressions compared to baseline methods. | [
"Equation discovery",
"ordinary differential equation",
"active data query",
"symbolic regression"
] | https://openreview.net/pdf?id=1dIwEDNSvY | QmeTqdMhHk | official_review | 1,735,426,381,498 | 1dIwEDNSvY | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission7/Reviewer_SZrp"
] | title: A long full paper with poorly arranged paragraphs
review: Summary
This paper presents a method (APPS) to capture the parameters of differential equations from trajectory data. It proposes a new active learning method integrated into ODE discovery, which incorporates active trajectory discovery based on assessing which data are more informative.
Novelty
The method in this paper is very similar to (d’Ascoli, 2024): both have a similar transformer-based pipeline. (d’Ascoli, 2024) reads point data as transformer input, whereas the APPS method uses mathematical rules as transformer input; APPS employs predefined grammar generation rules, while (d’Ascoli, 2024) employs a pretrained model (a toy illustration of such grammar-rule sequences is given below).
This paper claims that the APPS method is more accurate.
One could say that (d’Ascoli, 2024) performs global learning, whereas the APPS method is based on ranking local informativeness.
Is this informativeness rating method scalable? Consider that for a large curve, the local trajectory pattern may look like a straight line.
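To make the grammar-as-input point concrete, here is a toy illustration (my own example, not the paper's grammar) of how a sequence of production choices deterministically expands into an expression:

```python
# Tiny context-free grammar for 1-D expressions; "C" stands for a fitted constant.
GRAMMAR = {
    "E": [["E", "+", "E"], ["E", "*", "E"], ["sin", "(", "E", ")"], ["x"], ["C"]],
}

def expand(rule_sequence):
    """Replay a sequence of (nonterminal, production-index) choices."""
    expr = ["E"]
    for nt, idx in rule_sequence:
        i = expr.index(nt)               # leftmost occurrence of the nonterminal
        expr[i:i + 1] = GRAMMAR[nt][idx]
    return " ".join(expr)

print(expand([("E", 0), ("E", 3), ("E", 1), ("E", 4), ("E", 3)]))  # x + C * x
```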
Issue:
1. Some very important explanations and paragraphs are in the appendix; in particular, the main algorithm is also placed there. The main paper is not self-contained without the appendix.
2. Although the paper highlights a new solution to the "initial condition" sensitivity issue, experiments demonstrating the improvement on this issue are not reported.
3. The paragraphs need to be significantly reorganized to fit the 8-page limit.
Typos:
page 4, figure 2(a), “Categorial distribution” → “Categorical distribution”
page 12, Om = {+, −, ×} → Om = {+, −, ×, /}
rating: 5
confidence: 4 |
1DoInYlVp6 | Can Better Solvers Find Better Matches? Assessing Math-LLM Models in Similar Problem Identification | [] | Researchers have adapted large language models (LLMs) for mathematical reasoning by fine-tuning them with math-specific datasets to create math-specialized LLMs. This paper evaluates such models not only on solving accuracy but also on their ability to identify similar problems. We introduce an indicator task—retrieving a similar problem given a query word problem—to assess whether the model’s internal representations of the word problems capture mathematical semantics. A model capable of solving a problem should also be adept at identifying problems requiring similar reasoning, as human experts do. Using a dataset of Probability Word Problems with formal symbolic annotations, we show that math-specialized LLMs often prioritize linguistic similarity over mathematical similarity. This underscores the need for symbolic intermediate representation during fine-tuning of a LLM to better capture mathematical essence of a problem aiding improvement in model’s consistency and reliability. | [
"LLM and mathematical reasoning"
] | https://openreview.net/pdf?id=1DoInYlVp6 | kjJ4qxwbjo | decision | 1,735,601,593,394 | 1DoInYlVp6 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Program_Chairs"
] | decision: Accept
comment: We agree with the major opinions of the reviewers, and this paper fits into one of the focuses of this workshop.
title: Paper Decision |
1DoInYlVp6 | Can Better Solvers Find Better Matches? Assessing Math-LLM Models in Similar Problem Identification | [] | Researchers have adapted large language models (LLMs) for mathematical reasoning by fine-tuning them with math-specific datasets to create math-specialized LLMs. This paper evaluates such models not only on solving accuracy but also on their ability to identify similar problems. We introduce an indicator task—retrieving a similar problem given a query word problem—to assess whether the model’s internal representations of the word problems capture mathematical semantics. A model capable of solving a problem should also be adept at identifying problems requiring similar reasoning, as human experts do. Using a dataset of Probability Word Problems with formal symbolic annotations, we show that math-specialized LLMs often prioritize linguistic similarity over mathematical similarity. This underscores the need for symbolic intermediate representation during fine-tuning of a LLM to better capture mathematical essence of a problem aiding improvement in model’s consistency and reliability. | [
"LLM and mathematical reasoning"
] | https://openreview.net/pdf?id=1DoInYlVp6 | Es7pGDIYLd | official_review | 1,734,711,763,711 | 1DoInYlVp6 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission13/Reviewer_penM"
] | title: Do LLMs truly capture mathematical semantics
review: ## Summary
The paper addresses an important field of LLM evaluations. Its focus on semantic understanding over pure accuracy is novel and positions it as a valuable contribution to mathematical reasoning and AI explainability. The authors use NLP4PLP, a dataset of probability word problems, and evaluate three different LLMs: Qwen, DeepSeekMath, and Mathstral.
## Strengths
**Novel Evaluation Approach:** The evaluation approach presented in the paper focuses on semantic understanding and similar-problem identification rather than accuracy alone
**Valuable Results:** The authors show that math-tuned models still rely heavily on linguistic features rather than mathematical semantics in their internal representations
## Weaknesses
**Limited Evaluation:** The authors only evaluate their results on a small dataset comprising probability-related problems. It would be interesting to see if these results are similar to those of other fields of mathematics, such as algebra.
**Lack of reasoning behind the observed results:** While the authors demonstrate models' reliance on linguistic features over mathematical semantics in their representations, they lack theoretical analysis or hypotheses explaining this phenomenon in depth. Adding the same in more detail would strengthen the paper's contributions.
rating: 6
confidence: 3 |
1DoInYlVp6 | Can Better Solvers Find Better Matches? Assessing Math-LLM Models in Similar Problem Identification | [] | Researchers have adapted large language models (LLMs) for mathematical reasoning by fine-tuning them with math-specific datasets to create math-specialized LLMs. This paper evaluates such models not only on solving accuracy but also on their ability to identify similar problems. We introduce an indicator task—retrieving a similar problem given a query word problem—to assess whether the model’s internal representations of the word problems capture mathematical semantics. A model capable of solving a problem should also be adept at identifying problems requiring similar reasoning, as human experts do. Using a dataset of Probability Word Problems with formal symbolic annotations, we show that math-specialized LLMs often prioritize linguistic similarity over mathematical similarity. This underscores the need for symbolic intermediate representation during fine-tuning of a LLM to better capture mathematical essence of a problem aiding improvement in model’s consistency and reliability. | [
"LLM and mathematical reasoning"
] | https://openreview.net/pdf?id=1DoInYlVp6 | 9chfz7HIU8 | official_review | 1,734,688,914,095 | 1DoInYlVp6 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission13/Reviewer_DkBg"
] | title: Good paper, missing some details
review: ### Summary
This paper proposes a new evaluation strategy for models specialized in mathematical reasoning. Rather than simply evaluating the accuracy of these models when solving these tasks, the authors propose to complementarily test their ability to identify similar problems. The rationale behind this is that a true understanding of a mathematical problem implies the construction of a proper abstract model for it, which should come with the ability to evaluate similarities between these constructed models. To test this, the authors use NLP4PLP, a dataset of probability word problems where each problem is annotated with its solution and a formal representation based on a declarative programming language. Three different Math-LLMs are evaluated: Qwen, DeepSeekMath, and Mathstral. In the experiments, the authors report three different metrics: accuracy (% of problems solved), inconsistency (% of pairs of problems that share the same archetype but are not solved in the same way), and recall@10 (% of problems where the correct matching problem is retrieved from the KB). The experimental results show that, while math-LLMs can solve NLP4PLP problems with good accuracy, their internal representations predominantly encode linguistic rather than mathematical similarity.
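As a concrete reading of the retrieval metric, recall@10 could be computed along these lines (a minimal sketch; the data layout, embedding matrices plus a `matches` index array, is my assumption, not the paper's code):

```python
import numpy as np

def recall_at_10(query_emb, kb_emb, matches):
    """query_emb: (n, d) query embeddings; kb_emb: (m, d) KB embeddings;
    matches[i] is the index of the correct KB match for query i."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    k = kb_emb / np.linalg.norm(kb_emb, axis=1, keepdims=True)
    sims = q @ k.T                                # cosine similarities
    top10 = np.argsort(-sims, axis=1)[:, :10]     # 10 nearest KB problems
    hits = [matches[i] in top10[i] for i in range(len(matches))]
    return float(np.mean(hits))
```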
### Review
I liked the idea proposed in this paper: simple, clear, and well-documented. To the best of my knowledge, probing the mathematical semantic similarity between internal representations of different models is a novel idea. The closest work is probably [1], where the authors tested the ability of general (and not math-tuned) models to generate (linguistically and not mathematically) similar problems.
I have, however, a few concerns/observations regarding the experiments:
- For the inconsistency metric, it is not clear how the pairs are defined. Do you group all the problems sharing the same archetype into pairs, or do you create a full Cartesian product of the problem instances? It would be helpful to clarify this in the manuscript.
- Still on inconsistency, I think it would be good to evaluate the impact of pairs where neither example could be solved.
- It is not clear how the problem embeddings are extracted. This could have a great impact on the evaluation: intuitively, earlier layers might contain more linguistic information, while later layers could contain more mathematical information. An ablation over layers, then, could also be interesting (a rough sketch of such an ablation follows this list).
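To make the last point concrete, here is a rough sketch of a layer-wise embedding ablation; the model name, the mean pooling, and the two example problems are assumptions chosen purely for illustration:
```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "Qwen/Qwen2.5-Math-1.5B"  # illustrative choice, not taken from the paper
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

def embed(text, layer):
    # Mean-pool one layer's hidden states into a single problem vector.
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

def cos(a, b):
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

p1 = "A bag holds 3 red and 2 blue balls; two are drawn without replacement."
p2 = "From 5 cards, 2 of which are marked, two are dealt without replacement."
n = model.config.num_hidden_layers
for layer in (1, n // 2, n):  # early, middle, final layer
    print(layer, cos(embed(p1, layer), embed(p2, layer)))
```
If the similarity between such mathematically related but lexically different problems grows with depth, that would support the intuition above.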
On a more general note, I think these results are not surprising. Math-LLMs, despite being fine-tuned for mathematical reasoning, are still language models. Hence, the fact that substantial linguistic information is still contained in their “problem models” is expected. I think that, in future work, it would be interesting to understand to what extent it is possible to separate mathematical and linguistic information in the embeddings.
[1] Zong and Krishnamachari, Solving Math Word Problems concerning Systems of Equations with GPT-3.
rating: 7
confidence: 3 |
1DoInYlVp6 | Can Better Solvers Find Better Matches? Assessing Math-LLM Models in Similar Problem Identification | [] | Researchers have adapted large language models (LLMs) for mathematical reasoning by fine-tuning them with math-specific datasets to create math-specialized LLMs. This paper evaluates such models not only on solving accuracy but also on their ability to identify similar problems. We introduce an indicator task—retrieving a similar problem given a query word problem—to assess whether the model’s internal representations of the word problems capture mathematical semantics. A model capable of solving a problem should also be adept at identifying problems requiring similar reasoning, as human experts do. Using a dataset of Probability Word Problems with formal symbolic annotations, we show that math-specialized LLMs often prioritize linguistic similarity over mathematical similarity. This underscores the need for a symbolic intermediate representation during fine-tuning of an LLM to better capture the mathematical essence of a problem, aiding improvement in the model’s consistency and reliability. | [
"LLM and mathematical reasoning"
] | https://openreview.net/pdf?id=1DoInYlVp6 | 8SwScWQNTm | official_review | 1,735,486,019,276 | 1DoInYlVp6 | [
"everyone"
] | [
"AAAI.org/2025/Workshop/NeurMAD/Submission13/Reviewer_XMK6"
] | title: Good research question, but the paper could be further enhanced
review: ## Summary
This paper examines whether Math-LLMs are able to perform consistently on different math problems with similar solutions. The research is conducted on three well-known Math-LLMs: Qwen-Math, Mathstral, and DeepSeek-Math. The testbed is a set of probability word problems annotated with formal representations. The authors conduct experiments to explore whether Math-LLMs treat similar problems similarly and capture the mathematical semantics of these problems.
## Pros
- Regarding the research question: it is known that LLMs are black-box models. This paper provides a new way to uncover the inner mechanisms of LLMs by questioning whether they behave consistently on semantically similar problems.
## Cons
- The organization of this paper could be further improved.
1. Typo: in subsection{Models}: Qwe2.5-Math --> Qwen2.5-Math
2. The experimental procedure is only depicted in Figure 1. However, the reviewer believes it is important to also elaborate on how the experiments were conducted in the text.
3. The authors should further explain why embedding similarity can be used to measure semantic similarity (a sketch of the practice in question follows this list).
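To illustrate the practice Cons 3 refers to, namely cosine similarity over pooled sentence embeddings as a stand-in for semantic similarity, here is a tiny sketch. The encoder and the example problems are assumptions for illustration only, not taken from the paper:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder
query = "A bag has 3 red and 2 blue balls; draw two without replacement."
lexical = "A bag has 3 red and 2 blue marbles; draw one at random."    # similar wording
structural = "From 5 cards, 2 marked, deal two without replacement."   # similar math

q, l, s = model.encode([query, lexical, structural])
print(util.cos_sim(q, l).item(), util.cos_sim(q, s).item())
# If the lexical match scores higher, wording dominates the embedding,
# which is exactly the effect the authors should justify or rule out.
```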
The reviewer would support acceptance to the workshop if the authors properly address Cons 2 and Cons 3.
rating: 5
confidence: 4 |