paper_id (string, 10-19 chars) | venue (14 classes) | focused_review (string, 128-8.15k chars) | point (string, 47-624 chars)
---|---|---|---|
NIPS_2018_840 | NIPS_2018 | 1. It is confusing to me what the exact goal of this paper is. Are we claiming the multi-prototype model is superior to other binary classification models (such as linear SVM, kNN, etc.) in terms of interpretability? Why do we have two sets of baselines for higher-dimensional and lower-dimensional data? 2. In Figure 3, for the baselines on the left hand side, what if we sparsify the trained models to reduce the number of selected features and compare accuracy to the proposed model? 3. Since the parameter for the sparsity constraint has to be manually picked, can the authors provide any experimental results on the sensitivity of this parameter? A similar issue arises when picking the number of prototypes. Update after Author's Feedback: All my concerns are addressed by the authors' additional results. I'm changing my score based on that. | 2. In Figure 3, for the baselines on the left hand side, what if we sparsify the trained models to reduce the number of selected features and compare accuracy to the proposed model? |
NIPS_2016_313 | NIPS_2016 | Weakness: 1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows a detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim. 2. Lacks detail about the techniques, which makes it hard to reproduce the results. For example, the sparsification process is unclear even though it is important for extracting the landmark features for the following steps. And how are the landmarks on the edges generated? How is the number of landmarks chosen? What kind of image features are used? What is the fixed radius with different scales? How is shape invariance achieved, etc.? 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning-based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods. 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified via landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images? | 2. Lacks detail about the techniques, which makes it hard to reproduce the results. For example, the sparsification process is unclear even though it is important for extracting the landmark features for the following steps. And how are the landmarks on the edges generated? How is the number of landmarks chosen? What kind of image features are used? What is the fixed radius with different scales? How is shape invariance achieved, etc.? |
ARR_2022_186_review | ARR_2022 | I have two main concerns. The first is that how the adversarial training set is generated with code is not clearly discussed, and the details of the adversarial training set are not provided. The second is that the reason for excluding 25% of the word phrases, mentioned in the last paragraph of 4.1, and the reason for using the first 50 examples from the OntoNotes test set are not fully discussed.
1. The Appendix H section should be reorganized; it is difficult to follow. | 1. The Appendix H section should be reorganized; it is difficult to follow. |
NIPS_2019_1207 | NIPS_2019 | - Moderate novelty. This paper combines various components proposed in previous work (some of it, it seems, unbeknownst to the authors - see Comment 1): hierarchical/structured optimal transport distances, Wasserstein-Procrustes methods, sample complexity results for Wasserstein/Sinkhorn objectives. Thus, I see the contributions of this paper being essentially: putting together these pieces and solving them cleverly via ADMM. - Lacking awareness of related work (see Comment 1) - Missing relevant baselines and runtime experimental results (Comments 2, 3 and 4) Major Comments/Questions: 1 Related Work. My main concern with this paper is its apparent lack of awareness of two very related lines of work. On the one hand, the idea of defining hierarchical OT distances has been explored before in various contexts (e.g., [5], [6] and [7]), and so has leveraging cluster information for structured losses, e.g. [9] and [10] (note that latter of these relies on an ADMM approach too). On the other hand, combining OT with Procrustes alignment has a long history too (e.g, [1]), with recent successful application in high-dimensional problems ([2], [3], [4]). All of these papers solve some version of Eq (4) with orthogonality (or more general constraints), leading to algorithms whose core is identical to Algorithm 1. Given that this paper sits at the intersection of two rich lines of work in the OT literature, I would have expected some effort to contrast their approach, both theoretically and empirically, with all these related methods. 2. Baselines. Related to the point above, any method that does not account for rotations across data domains (e.g., classic Wasserstein distance) is inadequate as a baseline. Comparing to any of the methods [1]-[4] would have been much more informative. In addition, none of the baselines models group structure, which again, would have been easy to remedy by including at least one alternative that does (e.g., [10] or the method of Courty et al, which is cited and mentioned in passing, but not compared against). As for the neuron application, I am not familiar with the DAD method, but the same applies about the lack of comparison to OT-based methods with structure/Procrustes invariance. 3. Conflation of geometric invariance and hierarchical components. Given that this approach combines two independent extensions on the classic OT problem (namely, the hierarchical formulation and the aligment over the stiefel manifold), I would like to understand how important these two are for the applications explored in this work. Yet, no ablation results are provided. A starting point would be to solve the same problem but fixing the transformation T to be the identity, which would provide a lower bound that, when compared against the classic WA, would neatly show the advantage of the hierarchical vs a "flat" classic OT versions of the problem. 4. No runtime results. Since computational efficiency is one of the major contributions touted in the abstract and introduction, I was expecting to see at least empirical and/or a formal convergence/runtime complexity analysis, but neither of these was provided. Since the toy example is relatively small, and no details about the neural population task are provided, the reader is left to wonder about the practical applicability of this framework for real applications. Minor Comments/Typos: - L53. *the* data. - L147. It's not clear to me why (1) is referred to as an update step here. Wrong eqref? 
- Please provide details (size, dimensionality, interpretation) about the neural population datasets, at least in the supplement. Many readers will not be familiar with them. References: * OT-based methods to align in the presence of unitary transformations: [1] Rangarajan et al., "The Softassign Procrustes Matching Algorithm", 1997. [2] Zhang et al., "Earth Mover's Distance Minimization for Unsupervised Bilingual Lexicon Induction", 2017. [3] Alvarez-Melis et al., "Towards Optimal Transport with Global Invariances", 2019. [4] Grave et al., "Unsupervised Alignment of Embeddings with Wasserstein Procrustes", 2019. * Hierarchical OT methods: [5] Yurochkin et al., "Hierarchical Optimal Transport for Document Representation". [6] Schmitzer and Schnörr, "A Hierarchical Approach to Optimal Transport", 2013. [7] Dukler et al., "Wasserstein of Wasserstein Loss for Learning Generative Models", 2019. [9] Alvarez-Melis et al., "Structured Optimal Transport", 2018. [10] Das and Lee, "Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching", 2018. | - Lacking awareness of related work (see Comment 1) - Missing relevant baselines and runtime experimental results (Comments 2, 3 and 4) Major Comments/Questions: |
NIPS_2019_494 | NIPS_2019 | of the approach, it may be interesting to do that. Clarity: The paper is well written but clarity could be improved in several cases: - I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what are S and Xt). - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like number of units etc), and many other technical details. - the paper is presented well, e.g., quality of graphs is good (though labels on the graphs in Fig 3 could be slightly bigger). Significance: - from just the paper: the results would be more interesting (and significant) if there was a way to reproduce the work more easily. At present I cannot see this work easily taken up by many other researchers mainly due to lack of detail in the description. The work is interesting, and I like the idea, but with a relatively high-level description of it in the paper it would need a little more than the pseudocode in the materials to convince me to use it (but see next). - In the supplementary material it is stated the source code will be made available, and in combination with the paper and the information in the supplementary material, the level of detail may be just right (but it's hard to say without seeing the code). Given the promising results, I can imagine this approach being useful at least for more research in a similar direction. | - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like number of units etc), and many other technical details. |
NIPS_2018_605 | NIPS_2018 | in the related work and the experiments. If some of these concerns would be addressed in the rebuttal, I would be willing to upgrade my recommended score. Strengths: - The results seem to be correct. - In contrast to Huggins et al. (2016) and Tolochinsky & Feldman (2018), the coreset guarantee applies to the standard loss function of logistic regression and not to variations. - The (theoretical) algorithm (without the sketching algorithm for the QR decomposition) seems simple and practical. If space permits, the authors might consider explicitly specifying the algorithm in pseudo-code (so that practitioners do not have to extract it from the Theorems). - The authors include in the Supplementary Materials an example where uniform sampling fails even if the complexity parameter mu is bounded. - The authors show that the proposed approach obtains a better trade-off between error and absolute running time than uniform sampling and the approach by Huggins et al. (2016). Weaknesses: - Novelty: The sensitivity bound in this paper seems very similar to the one presented in [1] which is not cited in the manuscript. The paper [1] also uses a mix between sampling according to the data point weights and the l2-sampling with regards to the mean of the data to bound the sensitivity and then do importance sampling. Clearly, this paper treats a different problem (logistic regression vs k-means clustering) and has differences. However, this submission would be strengthened if the proposed approach would be compared to the one in [1]. In particular, I wonder if the idea of both additive and multiplicative errors in [1] could be applied in this paper (instead of restricting mu-complexity) to arrive at a coreset construction that does not require any assumptions on the data data set. [1] Scalable k-Means Clustering via Lightweight Coresets Olivier Bachem, Mario Lucic and Andreas Krause To Appear In International Conference on Knowledge Discovery and Data Mining (KDD), 2018. - Practical significance: The paper only contains a limited set of experiments, i.e., few data sets and no comparison to Tolochinsky & Feldman (2018). Furthermore, the paper does not compare against any non-coreset based approaches, e.g., SGD, SDCA, SAGA, and friends. It is not clear whether the proposed approach is useful in practice compared to these approaches. - Figure 1 would be much stronger if there were error bars and/or if there were more random trials that would (potentially) get rid of some of the (most likely) random fluctuations in the results. | - Figure 1 would be much stronger if there were error bars and/or if there were more random trials that would (potentially) get rid of some of the (most likely) random fluctuations in the results. |
2JF8mJRJ7M | ICLR_2024 | 1. Utilizing energy models to explain the fine-tuning of pre-trained models seems not to be essential. As per my understanding, the objective of the method in this paper as well as related methods ([1,2,3], etc.) is to reduce the difference in features extracted by the models before and after fine-tuning.
2. The authors claim that the text used is randomly generated, but it appears from the code in the supplementary material that tokens are sampled from the openai_imagenet_template. According to CAR-FT, using all templates as text input also yields good performance. What then is the significance of random token sampling in this scenario?
3. It is suggested that the authors provide a brief introduction to energy models in the related work section.
In Figure 1, it is not mentioned which points different learning rates in the left graph and different steps in the right graph correspond to.
[1] Context-aware robust fine-tuning.
[2] Fine-tuning can cripple your foundation model; preserving features may be the solution.
[3] Robust fine-tuning of zero-shot models. | 3. It is suggested that the authors provide a brief introduction to energy models in the related work section. In Figure 1, it is not mentioned which points different learning rates in the left graph and different steps in the right graph correspond to. [1] Context-aware robust fine-tuning. [2] Fine-tuning can cripple your foundation model; preserving features may be the solution. [3] Robust fine-tuning of zero-shot models. |
gDDW5zMKFe | ICLR_2024 | 1. At the heart of FIITED is the utility-based approach to determine chunk significance. However, basing eviction decisions purely on utility scores might introduce biases. For instance, recent chunks might gain a temporary high utility, leading to potentially premature evictions of other valuable chunks.
2. This approach does not consider the individual significance of dimensions within a chunk, leading to potential information loss.
3. While the chunk address manager maintains a free address stack, this design assumes that the most recently evicted space is optimal for the next allocation. This might not always be the case, especially when considering the locality of data and frequent access patterns.
4. The system heavily depends on the hash table to fetch and manage embeddings. This approach, while efficient in accessing chunks, might lead to hashing collisions even though the design ensures a low collision rate. Any collision, however rare, can introduce latency in access times or even potential overwrites.
5. The methodology leans heavily on access frequency to decide on embedding significance. However, frequency doesn't always equate to importance. There could be rarely accessed but critically important embeddings, and the method might be prone to undervaluing them. | 1. At the heart of FIITED is the utility-based approach to determine chunk significance. However, basing eviction decisions purely on utility scores might introduce biases. For instance, recent chunks might gain a temporary high utility, leading to potentially premature evictions of other valuable chunks. |
W6fIyuK8Lk | ICLR_2025 | 1. The first paragraph of the Introduction is entirely devoted to a general introduction of DNNs, without any mention of drift. Given that the paper's core focus is on detecting drift types and drift magnitude, I believe the DNN-related introduction is not central to this paper, and this entire paragraph provides little valuable information to readers.
2. The paper is poorly written. A paper should highlight its key points and quickly convey its innovations to readers. In the introduction, three paragraphs are spent on related work, while only one paragraph describes the paper's contribution, and even this paragraph fails to intuitively explain why the proposed method would work.
3. The first paragraph of Preliminaries is entirely about concept drift. So I assume the paper aims to address concept drift issues. If this is the case, there is a serious misuse of terminology. In concept drift, drift types include abrupt drift, gradual drift, recurrent drift, etc. While this paper uses the term "drift type" 86 times, it never explains or strictly defines what drift type means. According to Table 1 in the paper, the authors treat "gaussian noise, poisson noise, salt noise, snow fog rain, etc" as drift types. I find this inappropriate as these are more like different types of concepts. In summary, drift type is a specialized term in concept drift [1].
4. Line 163 states: "Pi,j denote the prediction probability distribution of the images belonging to the class j predicted as class i", but according to equation 1, I believe p_ij is a scalar, not a distribution. This appears to be an expression issue. I believe the paper consistently misuses the term "distribution".
5. Line 183, "per each drift type" should be changed to "for each effect type".
6. Lines 117-119 state: "To understand the impact of data drifts on image classification neural networks, let us consider the impact of Gaussian noise on a classification network trained on the MNIST handwritten digit image dataset, detailed in Section 4, under the effect of Gaussian noise." This sentence is highly redundant.
7. The experimental evaluation is inadequate as it only compares against a single baseline. For a paper proposing a new framework, comparing with multiple state-of-the-art methods is essential to demonstrate the effectiveness and advantages of the proposed approach. The limited comparison significantly weakens the paper's experimental validation.
Overall, I find the paper poorly written, with issues including misuse of terminology, redundant expressions, unclear logical flow, and lack of focus. Moreover, the paper seems to compare against only one baseline, which makes the experimental results unconvincing to me.
[1] Agrahari, S. and Singh, A.K., 2022. Concept drift detection in data stream mining: A literature review. Journal of King Saud University-Computer and Information Sciences, 34(10), pp.9523-9540. | 1. The first paragraph of the Introduction is entirely devoted to a general introduction of DNNs, without any mention of drift. Given that the paper's core focus is on detecting drift types and drift magnitude, I believe the DNN-related introduction is not central to this paper, and this entire paragraph provides little valuable information to readers. |
4ltiMYgJo9 | ICLR_2025 | 1. One of the main claims by the authors is the adaptation of the whole close-loop framework. While the authors claim it can be simply replaced by recording EEG data from human participants, there are actually no more concrete demonstrations on how. For example, what is the "specific neural activity in the brain" in this paper and in a possible real scenario? What's the difference? And how difficult is it and how much effort will it take to apply the framework to the real world? It's always easy to just claim a methodology "generalizable", but without more justification that doesn't actually help strengthen the contribution of the paper.
2. Based on 1, I feel it is not sufficiently demonstrated in the paper what role the EEG plays in the whole framework. As far as I can understand from the current paper, it seems to be related to the reward $R$ in the MDP design, because it should provide signal based on the desired neural activities. However, we know neither how the reward is exactly calculated nor what kinds of the neural signal the authors are caring about (e.g., a specific frequency bank? a specific shape of waveforms? a specific activation from some brain area?).
3. Besides the methodology, it is also not clear from the experimental side how the different parts of this framework perform and contribute to the final result. While the results section shows that the framework can yield promising visual stimuli, it lacks both quantitative experiments comparing algorithm choices and more detailed explanations of the presented ones. (See questions.) Therefore, it is unclear to me how the whole framework and its individual parts perform exactly compared to other solutions.
4. Overall, the presentation of this paper is unsatisfying (and that's probably why I have the concerns in 2 and 3). On the one hand, the authors present mostly well-known details in the main content but do not make their own claims clear. For example, Algorithms 1 and 2 are direct adaptations from previous work. Instead of using the space to present them, I would like to see more on how the MDP is constructed. On the other hand, mixing citations with sentences (please use \citep instead of \cite) and a few typos (in line 222, Algorithm 1, the bracket is not matched) give me the feeling that the paper is not yet ready to be published. | 3. Besides the methodology, it is also not clear from the experimental side how the different parts of this framework perform and contribute to the final result. While the results section shows that the framework can yield promising visual stimuli, it lacks both quantitative experiments comparing algorithm choices and more detailed explanations of the presented ones. (See questions.) Therefore, it is unclear to me how the whole framework and its individual parts perform exactly compared to other solutions. |
ICLR_2023_149 | ICLR_2023 | Weakness: 1. Some recent RNN-based latent models, e.g. LFADS and Oerich 2020, were overlooked in the current manuscript. It would be great to discuss those. 2. It is not clear to me whether such a model could generate novel knowledge or testable hypotheses about neuron data. | 2. It is not clear to me whether such a model could generate novel knowledge or testable hypotheses about neuron data. |
et5l9qPUhm | ICLR_2025 | - Model Assumptions: I am unsure of the modelling of synthetic data and how well it translates into practice. Specifically, the authors model synthetic data using a label shift, assuming that the data (X) marginal remains the same. However, it seems unrealistic for autoregressive training (a key experiment in the paper), where the input tokens for next token generation come from the synthetic distribution.
- Experimental Details: The theoretical results establish a strong dependence on the quality of synthetic data. However, the experiments with real data (MNIST/GPT-2) do not provide quantitative metrics to measure the degradation in the synthetic data source (either accuracy of MNIST classifier or perplexity/goodness scores of the trained GPT-2 generator), which makes it hard to ascertain which paradigm (in fig.1) the experiments align best with or what level of degradation in practice results in the observed trend.
- Minor Issues
- Typos: Line 225 (v \in R^{m}), line 299 (synthetic data P2), line 398 (represented by stars)
- Suggestions for Clarity:
- Akin to theorem 1, is it possible to present a simplified version of theorem 2 for the general audience? As it is, definition 2 and theorem 2 are hard to digest just on their own.
- Line 481, before stating the result, can the authors explain in words the process of iterative mixing proposed in Ferbach et al., 2024? It would make the manuscript more self-contained.
- Missing Citation: Line 424, for the MNIST dataset, please include the citation for "Gradient-Based Learning Applied to Document Recognition", LeCun et al., 1998
- Visualization: For fig.1, please consider changing the y-axes test error to the same scale. Right now, it is hard to compare the error values or the slope in subplot 1 to those in 2 and 3. | - Akin to theorem 1, is it possible to present a simplified version of theorem 2 for the general audience? As it is, definition 2 and theorem 2 are hard to digest just on their own. |
NIPS_2021_2306 | NIPS_2021 | 1. All the experiments are conducted using images under 224*224 resolution; it would be interesting to see how the performance changes with a larger resolution. 2. The accuracy with lower resolutions for some examples is even better than that of the model with full resolution. Is there any underlying reason for this phenomenon? 3. It seems the improvement over the FLOPs does align well with that over the real latency, as shown in Fig. 3 and Tab. 3. It would be good to provide the performance and speed trade-off for real acceleration. 4. For the training process, the base models are first trained and then combined with the resolution selector network for fine-tuning. I'm wondering if it is possible to train the whole model from scratch?
Some minor issues: Line 121: "The first is the large classifier network with both high performance and expensive computational costs is first trained"; is the "is first trained" redundant? | 1. All the experiments are conducted using images under 224*224 resolution; it would be interesting to see how the performance changes with a larger resolution. |
NIPS_2019_962 | NIPS_2019 | * Clarity. There are parts of this paper that are a bit unclear. The diagram and caption for the KeyQN section are very helpful, but the actual text section could be fleshed out more. It would be nice if the text could have a little more detail on how the outputs from the transporter are input to the KeyQN architecture and how the whole thing is trained. The exploration section was well explained for the most part, but it took a bit of time to understand. Maybe it would help to have an algorithm box. Also, the explanation of the training process is a bit confusing. Maybe a diagram of the architecture and how the transporter feeds into this would help. Also, I am confused a bit about whether the transporter is pretrained and frozen or fine-tuned. One quote from the paper in this regard confused me: "Our transporter model and all control policies simultaneously ..." - so the weights of the Transporter network are not frozen during the downstream task like in KeyQN? * Experiments: They only show these results on a few games (and no error bars), so it would have been nice (but not a dealbreaker) to see results from more Atari games. They do partially justify this by saying they couldn't use a random policy on other games, but I'd be curious just to see what happens when they try a couple more games. Would be nice to see comparisons to other exploration methods (they only show results compared to random exploration). Nitpicks/Questions: * Makes sense to just refer the reader to the PointNet paper instead of re-explaining it, but a short explanation of PointNet, if possible (a couple of sentences), might be helpful, so that one doesn't have to skim that paper to understand this paper. * The diagram in figure 5 (h_psi) should show a heat map, not keypoints superimposed on the raw frame, right? * In the appendix: "K is handpicked for each game?" How? Validation loss? * The tracking experiments: the section is a bit unclear. I have a few questions on that front: * Why is there a need to separate precision and recall? * Why not just report overall mean average precision or F1 score? Might be a bit easier for the reader to digest one number. * Why bucket into different sequence lengths? What do the different sequence lengths mean? There is no prediction-in-keypoint-space model, right? So there is no concept of the performance worsening as the trajectory gets longer. Aren't the keypoint guesses just the output of the PointNet at each frame, so why would the results from a 200-frame sequence be much different than 100 or something? Why not just report overall precision and recall on the test set? * In the KeyQN section, what is the keypoint-mask-averaged feature vector? Just multiply each feature map element-wise by H_psi? | * In the KeyQN section, what is the keypoint-mask-averaged feature vector? Just multiply each feature map element-wise by H_psi? |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. It's notable that it is useful in training very deep networks (e.g. 100 layers), but it's not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle on a larger dataset, this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for their claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos; there are no line numbers in the draft so I haven't itemized them. - Tables 1, 2, 3: the legends should be longer and clarify whether the numbers are % errors or % correct (MNIST and CIFAR respectively, presumably). - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3: it is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). | - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. |
NIPS_2019_463 | NIPS_2019 | 1. The central contribution of modeling weight evolution using ODEs hinges on the mentioned problem of neural ODEs exhibiting inaccuracy while recomputing activations. It appears a previous paper first reported this issue. The reviewer is not convinced about this problem. The current paper doesn't provide a convincing analytical argument or empirical evidence about this issue. 2. Leaving aside the claimed weakness of neuralODE, the idea of modeling weight evolution as ODE is itself very intellectually interesting and worthy of pursuit. But the empirical improvement reported in Table 1 over AlexNet, ResNet-4 and ResNet-10 is <= 1.75 % for both configurations. The improvement of decoupling weight evolution is in fact even small and not consistent - the improvement in ResNet for configuration 2 is smaller than keeping the evolution of parameters and activations aligned. The improvement for ablation study over neuralODE is also minimal. So, the empirical case for the proposed approach is not convincing. 3. The derivation of optimality conditions for the coupled formulation is interesting because of connections to a machine learning application (backpropagation) but a pretty standard textbook derivation from dynamical systems / controls point of view. | 1. The central contribution of modeling weight evolution using ODEs hinges on the mentioned problem of neural ODEs exhibiting inaccuracy while recomputing activations. It appears a previous paper first reported this issue. The reviewer is not convinced about this problem. The current paper doesn't provide a convincing analytical argument or empirical evidence about this issue. |
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | * L37: Might want to mention that these algorithms follow the sampled policy for awhile. |
NIPS_2020_1344 | NIPS_2020 | 1. There have been several results on the problems of batched top-k ranking and fully adaptive coarse ranking in recent years. From that point of view, the results in this paper are not particularly surprising. Even the idea that one can reduce the size of the active arm set by a factor of n^{1/R} has appeared in [37] for the problem of collaborative top-1 ranking. However, the main novelty in this paper seems to be the application of this idea to the problem of coarse ranking using a successive-accepts-and-rejects type algorithm. 2. Also, proving lower bounds for round complexity is the major chunk of work involved in proving results for batched ranking problems. However, this paper exploits an easy reduction from the problem of collaborative ranking, and hence, the lower bound results follow as an easy corollary of these collaborative ranking results. | 2. Also, proving lower bounds for round complexity is the major chunk of work involved in proving results for batched ranking problems. However, this paper exploits an easy reduction from the problem of collaborative ranking, and hence, the lower bound results follow as an easy corollary of these collaborative ranking results. |
Nk2vfZa4lX | EMNLP_2023 | 1. In this study, three distinct LLMs named Galactica, BioMedLM, and ChatGPT have been selected by the authors. The differences between these LLMs are outlined in the related work section of the paper. Despite their notable distinctions in training data and size, the evaluation of all these LLMs follows a uniform approach. A more effective approach would involve assessing the outputs of each LLM separately, as this could provide valuable insights into their relative performance in generating medical systematic reviews. It is plausible that certain LLMs might exhibit higher risk factors, while others could excel in generating coherent systematic reviews. In essence, this study would benefit significantly from a comprehensive comparative analysis between the LLMs, allowing for a more nuanced understanding of their respective capabilities, limitations, and potential benefits.
2. The number of samples presented to each domain expert appears to be relatively inadequate to draw definitive conclusions about the abilities and constraints of LLMs in generating systematic reviews. Additionally, during the expert interviews, the inclusion of human-written systematic reviews, without indicating their human origin, could offer valuable insights. This approach would allow observation of how domain experts react to these reviews, shedding light on the deficiencies of LLM-generated systematic reviews and thereby allowing a more comprehensive understanding of what the LLM-generated reviews lack.
3. The prompting technique used in this study is very basic and fails to leverage the full potential of LLMs. Carefully curated prompts could yield better systematic reviews. | 3. The prompting technique used in this study is very basic and fails to leverage the full potential of LLMs. Carefully curated prompts could yield better systematic reviews. |
NIPS_2020_1623 | NIPS_2020 | - If I understand correctly, the method requires maintaining a probability vector for each data point. This is not an issue for small data sets with few classes, but can become a problem at ImageNet scale. I did not find any comment regarding this issue in the main paper or in the supplement. Could the authors please elaborate on this? - From Table 2 b) it seems that for 40% label noise on CIFAR10 the method is reasonably robust to the hyper parameter values. Does this observation transfer to other corruption percentages and data sets? - Additional experiments on larger data sets would be nice (but I understand that compute might be an issue). --- Thanks for the author response. I still think maintaining the probabilities might become an issue, in particular at large batch size, but I don't think this aspect is critical. Generally, the response addressed my concerns well. | - Additional experiments on larger data sets would be nice (but I understand that compute might be an issue). --- Thanks for the author response. I still think maintaining the probabilities might become an issue, in particular at large batch size, but I don't think this aspect is critical. Generally, the response addressed my concerns well. |
a4PBF1YInZ | ICLR_2025 | 1. Although the model is able to perform various tasks, the performance in each domain is questionable. In the quantitative results, this paper seems to be deliberately comparing with weaker models. Examples include but are not limited to:
- Table 2: Only earlier grounded MLLMs are compared. An early baseline, LLaVA-1.5-7B (the "fundamental MLLM" module in the proposed method should be comparable to it) can achieve 78.5 accuracy on VQAv2.
- Table 3: On POPE, "F1 score" should be the main metric to compare with, as one may achieve a very high precision by making the model predict conservatively. Previous models like LLaVA-1.5 also mainly report the F1 score.
- Table 4: The performance on REC and RES is clearly behind more recent models. For example, GLaMM [ref1] achieves 83.2 RES cIoU on RefCOCO TestA, and UNINEXT [ref2] achieves 89.4 REC accuracy (IoU>0.5) on RefCOCOg Test.
2. Although this work claims keypoints as one major decoding modality, there is no quantitative evaluation of keypoint detection.
3. The ablation study does not discuss the most important module designs, e.g., decoding with CoT vs. without CoT, how data mixture in VT-Instruct affects the final results.
4. The presentation is not suitable for an academic research paper. In sections 3 and 4, only a high-level overview of the method is introduced, with a few short paragraphs and 5 large-sized figures. After reading these sections, readers may get a glimpse of the model and data, but too many details are missing or hidden in the appendix.
5. The qualitative results mainly show simple cases with few objects clearly visible. Even so, there seem to be many artifacts in the visualization (e.g., duplicate "person" detections in Figure 8 top right, low-quality masks in Figure 9 top right). It is unclear how the model performs in more challenging natural scene images.
[ref1] Hanoona Rasheed, Muhammad Maaz, Sahal Shaji, Abdelrahman Shaker, Salman Khan, Hisham Cholakkal, Rao M. Anwer, Eric Xing, Ming-Hsuan Yang, Fahad S. Khan. GLaMM: Pixel Grounding Large Multimodal Model. In CVPR, 2024.
[ref2] Bin Yan, Yi Jiang, Jiannan Wu, Dong Wang, Ping Luo, Zehuan Yuan, Huchuan Lu. Universal Instance Perception as Object Discovery and Retrieval. In CVPR, 2023. | - Table 4: The performance on REC and RES is clearly behind more recent models. For example, GLaMM [ref1] achieves 83.2 RES cIoU on RefCOCO TestA, and UNINEXT [ref2] achieves 89.4 REC accuracy (IoU>0.5) on RefCOCOg Test. |
NIPS_2022_331 | NIPS_2022 | I believe that one small (but important) part of the paper could use some clarifications in the writing: Section 3.2 (on Representational Probing). I will elaborate below.
I think that a couple of claims in the paper may be slightly too strong and need a bit more nuance. I will elaborate below.
A lot of the details described in Section 3.3 (Behavioral Tests) seem quite specific to the game of Hex. For the specific case of Hex, we can indeed know how to create such states that (i) contain a concept, (ii) contain that concept in only exactly one place, (iii) make sure that the agent must play according to the concept immediately, because otherwise it would lose. I imagine that setting up such specific situations may be much more difficult in many other games or RL environments, and would certainly require highly game-specific knowledge again for such tasks. This seems like a potential limitation (which doesn't seem to be discussed yet).
On weakness (1):
The first sentence that I personally found confusing was "To form the random control, for each board $(H^{(0)}, y)$
in the probing dataset, we consistently map each cell in that board to a random cell, forming $H_s^{(0)}$." I guess that "map each cell in that board to a random cell" means creating a random mapping from all cells in the original board to all cells in the control board, in a way such that every original cell maps to exactly one randomly-selected control cell, and every control cell is also mapped to by exactly one original cell. And then, the value of each original cell (black/white/empty) is assigned to the control cell that it maps to. I guess this is what is done, and it makes sense, but it's not 100% explicit. I'm afraid that many readers could misunderstand it as simply saying that every cell gets a random value directly.
Then, a bit further down under Implementation Details, it is described how the boards in the probing dataset get constructed. I suspect it would make more sense to actually describe this before describing how the matching controls are created.
On weakness (2):
(a) The behavioral tests involve states created specifically such that they (i) contain the concept, but also (ii) demand that the agent immediately plays according to the concept, because it will lose otherwise. In the game of Hex, this means that all of these board states, for all these different concepts, actually include one more new "concept" that is shared across all the tests; a concept that recognises a long chain of enemy pieces that is about to become a winning connection if not interrupted by playing in what is usually just one or two remaining blank cells in between. So, I do not believe that we can say with 100% certainty that all these behavior tests are actually testing for the concept that you intend them to test for. Some or all of them may simply be testing more generally if the agent can recognise when it needs to interrupt the opponent's soon-to-be-winning chain.
(b) "Fig. 5 shows evidence that some information is learned before the model is able to use the concepts." --> I think "evidence" may be too strong here, and would say something more like "Fig. 5 suggests that some information may be learned [...]". Technically, Fig. 5 just shows that there is generally a long period with no progress on the tests, and after a long time suddenly rapid progress on the tests. To me this indeed suggests that it is likely that it is learning something else first, but it is not hard evidence. It could also be that it's just randomly wandering about the parameter space and suddenly gets lucky and makes quick progress then, having learned nothing at all before.
(c) "Behavioral tests can also expose heuristics the model may be using." --> yes, but only if we actually already know that the heuristics exist, and know how to explicitly encode them and create probes for them. They can't teach us any new heuristics that we didn't already know about. So maybe, better phrasing could be something like "Behavioral tests can also confirm whether or not the model may be using certain heuristics."
It may be useful to discuss the apparent limitation that quite a bit of Hex-specific knowledge is used for setting up the probes (discussed in more detail as a weakness above).
It may be useful to discuss the potential limitation I discussed in more detail above that the behavioral tests may simply all be testing for an agent's ability to recognise when it needs to interrupt an opponent's immediate winning threat. | "Fig. 5 shows evidence that some information is learned before the model is able to use the concepts." --> I think "evidence" may be too strong here, and would say something more like "Fig. 5 suggests that some information may be learned [...]". |
0zRuk3QdiH | ICLR_2025 | 1. **Limited Novelty in Video Storyboarding**: The innovation of the proposed video storyboarding approach is limited. The primary method relies on frame-wise SDSA, which largely mirrors the approach used in ConsiStory. The only notable difference lies in the mask source, utilizing CLIPseg and OTSU segmentation rather than cross-attention.
2. **Poor Writing and Project Organization**: The paper's writing and the project page's layout hinder comprehension, making it difficult for readers to follow the key contributions the authors intend to convey.
3. **Minimal Improvement over Baseline Models**: The generated video storyboarding results appear similar to those produced by existing video generation baselines like Videocrafter2 or TokenFlow encoder, with little noticeable difference in output quality.
4. **Lack of Motion Dynamics**: The method demonstrates limited motion dynamics. In most video segments, the objects remain static, and in every case, the object consistently occupies the center of the frame, resulting in rigid, uninspired visuals.
5. **Overclaiming the Benchmark**: The authors’ claim of establishing a benchmark based on a dataset of only 30 videos, each containing 5 video shots, is unsubstantiated. This dataset is insufficiently sized and lacks diversity, with evaluations limited to character consistency and dynamic degree, providing a narrow view that does not comprehensively assess the model's capabilities. | 1. **Limited Novelty in Video Storyboarding**: The innovation of the proposed video storyboarding approach is limited. The primary method relies on frame-wise SDSA, which largely mirrors the approach used in ConsiStory. The only notable difference lies in the mask source, utilizing CLIPseg and OTSU segmentation rather than cross-attention. |
NIPS_2019_360 | NIPS_2019 | weakness (since unfortunately there are other confounding factors). Further, orthogonally to the accuracy results, it is an interesting finding if standard approaches indeed suffer from this and the proposed method provides a remedy. I would therefore focus on these qualitative results more, and explain in the main text (not just the appendix) exactly how those visualizations are created, and show those results for various models. 2) Somewhat related to the previous point: Pure metric-based models like Prototypical Networks lack an explicit mechanism for adaptation to each task at hand and it therefore seems plausible that they indeed suffer from the identified issue. However, it is less clear whether (or to what extent) models that do perform task-specific adaptation run the same danger. Intuitively, it seems that task adaptation also constitutes a mechanism for modifying the embedding function so that it favours the identification of objects that are targets of the associated classification task. By task adaptation here I'm referring either to gradient-based adaptation (as in MAML and variants) or amortized conditioning-based adaptation (as in TADAM for example). Therefore, it would be very interesting to empirically compare the proposed method to these other ones not only in terms of classification accuracy but also qualitatively via visualizations as in Figure 1 that show the areas of the image that a model focuses on more when making classification decisions. 3) Suggestion for the transductive framework: In Equation 8, it might be useful to incorporate the unlabeled examples in a weighted fashion instead of trusting that every example whose confidence surpasses a manually-set threshold can safely contribute to the prototype of the class that it is predicted to belong to. Specifically, the contribution of an unlabeled example to the updated class prototype can be weighted by the cosine similarity between that unlabeled example and that prototype (normalized across classes) and maybe additionally by the confidence c_b^q. This might slightly relieve the need to find the perfect threshold, since even if it is not conservative enough, a query example will be prohibited from modifying a prototype too much. An example of this is in Ren et al. [1] when computing refined prototypes by including unlabeled examples. 4) It seems that the weakness that this method is addressing would be more prominent in images comprised of multiple objects, or cluttered scenes. It would be very interesting to compare this approach to previous ones on few-shot classification on such a dataset! 5) For more easily assessing the degree of apples-to-applesness of the various comparisons in the tables, it would be useful to note which of the listed methods use data augmentations (as until recently this was not common practice for few-shot classification), what architecture they use, and what objective (most are episodic only but I think TADAM also performs joint training as the proposed method). 6) Another difference between the proposed approach and previous Prototypical Network-like methods is that the distance comparisons that inform the classification decisions are done in a feature-wise manner in this work. Specifically, when comparing embeddings a and b, for each spatial location, the distance between the feature vectors of a and b at that location is computed.
The final estimate of the distance between a and b is obtained by aggregating those feature-wise distance estimates over all spatial locations. In contrast, usually the output of the last embedding layer is reshaped into a single vector (of shape channels x height x width) and distance comparisons of examples are made by directly comparing these vectors. It would therefore be useful to perform another ablation where a standard Prototypical Network is modified to perform the same type of distance comparison as their method. 7) Similarly to how the proposed transductive method was applied to other models, it would be nice to see results where the proposed joint training is also applied to other models, since this is orthogonal to the choice of the meta-learner too. References [1] Meta-Learning for Semi-Supervised Few-shot Classification. Ren et al. ICLR 2018. | 4) It seems that the weakness that this method is addressing would be more prominent in images comprised of multiple objects, or cluttered scenes. It would be very interesting to compare this approach to previous ones on few-shot classification on such a dataset! |
NIPS_2020_547 | NIPS_2020 | - While I understand the space limitations, I think the paper could greatly benefit from more explanation of the meaning of the bounds (perhaps in the appendix). - Line 122: it's not obvious to me that the smoothed bound is more stable, since the \gamma factor in the numerator is also larger. Some calculations here, or a very simple experiment, would greatly help the reader understand when smoothing would be desirable. - The above also applies for the discussion on overestimation starting on line 181, especially in the trade-off of reducing overestimation error and converging to a suboptimal value function. - The above applies for the combined smoothness + regularization algorithm | - While I understand the space limitations, I think the paper could greatly benefit from more explanation of the meaning of the bounds (perhaps in the appendix). |
NIPS_2018_232 | NIPS_2018 | - Strengths: the paper is well-written and well-organized. It clearly positions the main idea and proposed approach related to existing work and experimentally demonstrates the effectiveness of the proposed approach in comparison with the state-of-the-art. - Weaknesses: the research method is not very clearly described in the paper or in the abstract. The paper lacks a clear assessment of the validity of the experimental approach, the analysis, and the conclusions. Quality - Your definition of interpretable (human simulatable) focuses on to what extent a human can perform and describe the model calculations. This definition does not take into account our ability to make inferences or predictions about something as an indicator of our understanding of or our ability to interpret that something. Yet, regarding your approach, you state that you are ânot trying to find causal structure in the data, but in the modelâs responseâ and that âwe can freely manipulate the input and observe how the model response changesâ. Is your chosen definition of interpretability too narrow for the proposed approach? Clarity - Overall, the writing is well-organized, clear, and concise. - The abstract does a good job explaining the proposed idea but lacks description of how the idea was evaluated and what was the outcome. Minor language issues p. 95: âfrom fromâ -> âfromâ p. 110: âto toâ -> âhow toâ p. 126: âas wayâ -> âas a wayâ p. 182 âcan sortedâ -> âcan be sortedâ p. 197: âon directly onâ -> âdirectly onâ p. 222: âwhere wantâ -> âwhere we wantâ p. 245: âas accurateâ -> âas accurate asâ Tab. 1: âsquareâ -> âsquared errorâ p. 323: âthis are featuresâ -> âthis is featuresâ Originality - the paper builds on recent work in IML and combines two separate lines of existing work; the work by Bloniarz et al. (2016) on supervised neighborhood selection for local linear modeling (denoted SILO) and the work by Kazemitabar et al. (2017) on feature selection (denoted DStump). The framing of the problem, combination of existing work, and empirical evaluation and analysis appear to be original contributions. Significance - the proposed method is compared to a suitable state-of-the-art IML approach (LIME) and outperforms it on seven out of eight data sets. - some concrete illustrations on how the proposed method makes explanations, from a user perspective, would likely make the paper more accessible for researchers and practitioners at the intersection between human-computer interaction and IML. You propose a âcausal metricâ and use it to demonstrate that your approach achieves âgood local explanationsâ but from a user or human perspective it might be difficult to get convinced about the interpretability in this way only. - the experiments conducted demonstrate that the proposed method is indeed effective with respect to both accuracy and interpretability, at least for a significant majority of the studied datasets. - the paper points out two interesting directions for future work, which are likely to seed future research. | - the experiments conducted demonstrate that the proposed method is indeed effective with respect to both accuracy and interpretability, at least for a significant majority of the studied datasets. |
SjgfWbamtN | ICLR_2024 | - The prediction process is faster, but final performance significantly decreases.
- Removing IPA is disadvantageous, as the structure module is less costly than Evoformer.
- Kernels are implemented with OpenAI's Triton, not CUDA; a full-page explanation is unnecessary due to well-known engineering improvements.
- The analysis of kernels is wrong. For example, "This reduces the number of reads from 5 to 1, and the number of writes from 4 to 1". The Wx and Vx are matrix multiplication operators, which will call GEMM kernels, thus these read/write cannot be merged. We usually can only save the read/write times for element-wise operators.
- The method relies on a computationally demanding pretrained protein language model; simplification would be beneficial.
- Coordinate recovery omits chirality consideration, potentially negatively impacting performance.
- In-depth analysis of uncertainty estimation technique is needed for better understanding of robustness. | - Kernels are implemented with OpenAI's Triton, not CUDA; a full-page explanation is unnecessary due to well-known engineering improvements. |
NIPS_2021_1072 | NIPS_2021 | I have certain concerns about the paper.
1/ I think the contribution of the paper is a bit limited. V-MAIL combines several existing ideas, e.g., latent imagination, GAIL, variational state space model, etc., and achieves a good performance. However, how these components affect each other and how they contribute to the final performance are not clear. An ablation study is also missing from the paper. In this case, it would be hard to get inspiration from reading it.
2/ Although the paper has provided theoretical analysis about the model-based adversarial imitation learning (sect. 4.1 and sect. 4.2), they are disconnected from the practical implementations (sect. 4.3). In particular, Theorem 1 shows that the divergence of the visitation distributions in a MDP can be upper bounded by a divergence of the visitation distributions in a POMDP. However, in the practical implementation, a variational state space model captures the belief, rather than the visitation distribution of the belief. In addition, it feels difficult to compute the visitation frequency of a belief, whose size is exponential to the size of the history and the state space. I believe the proposed algorithm indeed has its merit, but I don’t think Theorem 1 provides a correct justification of the optimization objective used in this paper.
3/ I feel the author should be careful when making certain claims. For example, from line 39 to line 48, the authors are analyzing the limitations of the existing IRL methods and adversarial imitation learning methods. “These approaches explicitly train a GAN-based classifier [17] to distinguish the visitation distribution of the agent from the expert, and use it as a reward signal for training the agent with RL….” However, not all IRL methods are adversarial imitation learning. In fact, most of them don’t train a GAN-based classifier and do RL afterwards. Instead, a lot of them recover the reward and do planning instead.
4/ The authors claimed that V-MAIL achieves zero-shot transfer to novel environments. However, the policy is fine-tuned with additional expert demonstrations, as shown in Alg. 2. Why is this zero-shot? In addition, I guess the transferability might be limited by the real difficulty of the source task / target task. Walker-run is clearly harder than walker-walk, so the policy transfer here is possible. Also, for the manipulation scenario, 3-prong task with both clockwise / counter clockwise rotations, together with one 4-prong task, actually provides sufficient information about the target task. I guess it might be difficult to transfer a policy from simpler tasks to more complex tasks. This has to be made clear in the paper. Otherwise, it is quite misleading. | 2. Why is this zero-shot? In addition, I guess the transferability might be limited by the real difficulty of the source task / target task. Walker-run is clearly harder than walker-walk, so the policy transfer here is possible. Also, for the manipulation scenario, 3-prong task with both clockwise / counter clockwise rotations, together with one 4-prong task, actually provides sufficient information about the target task. I guess it might be difficult to transfer a policy from simpler tasks to more complex tasks. This has to be made clear in the paper. Otherwise, it is quite misleading. |
ICLR_2023_3203 | ICLR_2023 | 1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar.
2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed methods can only achieve just about 1% gain on a relative small backbone ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it is with a smaller receptive field. I suspect whether the proposed method still works well on large backbone models like Swin-B or Swin-L.
3. Some of the baseline results do not matched with their original paper. I roughly checked the original Mask2former paper but the performance reported in this paper is much lower than the one reported in the original Mask2former paper. For example, for panoptic segmentation, Mask2former reported 51.9 but in this paper it's 50.4, and the AP for instance segmentation reported in the original paper is 43.7 but here what reported is 42.4.
Meanwhile, there are some missing references about panoptic segmentation that should be included in this paper [5, 6]. Reference
[1] Chen, Yunpeng, et al. "A^ 2-nets: Double attention networks." NeurIPS 2018.
[2] Cao, Yue, et al. "Gcnet: Non-local networks meet squeeze-excitation networks and beyond." T-PAMI 2020
[3] Yinpeng Chen, et al. Dynamic convolution: Attention over convolution kernels. CVPR 2020.
[4] Zhang, Hang, et al. "Resnest: Split-attention networks." CVPR workshop 2022.
[5] Zhang, Wenwei, et al. "K-net: Towards unified image segmentation." Advances in Neural Information Processing Systems 34 (2021): 10326-10338.
[6] Wang, Huiyu, et al. "Max-deeplab: End-to-end panoptic segmentation with mask transformers." CVPR 2021 | 2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed methods can only achieve just about 1% gain on a relative small backbone ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it is with a smaller receptive field. I suspect whether the proposed method still works well on large backbone models like Swin-B or Swin-L. |
ICLR_2022_2081 | ICLR_2022 | 1. My feeling is that the conclusion is somewhat overclaimed. In both abstract and conclusion, it is emphasized that this work proves the pessimistic result that reweighting algorithms always overfit. However, the paper only proves that this conclusion might be true for some specific situations. For example, the reweighting algorithms need to satisfy Assumption 1 and Assumption 2, which means not all reweighting algorithms are considered. The overparameterized models need to be linear models, linearized neural networks or wide fully-connected neural networks, which are not commonly used in practice. Besides, the squared loss needs to be used to confirm the update rule is linear. All those assumptions are not quite mild for me. 2. The analysis of neural networks contributes less. With the existing NTK theorem, the extension from linear models to wide fully-connected neural networks is trivial (Section 3.2, 3.3). The work bypasses the core problem of overparametrized neural networks and only considers the easy wide fully-connected neural networks. 3. The theoretical results and experiments do not match. The theoretical proof considers wide fully-connected neural networks, while the experiments utilize a ResNet18 as the model, which is quite different. 4. Some key steps are empirical, although the paper claims that it provides a theoretical backing in the abstract. For example, this paper only proves that reweighting algorithms will converge to the same level as ERM, but the conclusion that ERM has a poor worst-group test performance is summarized through observation in practice. Besides, the paper can only empirically demonstrate that commonly used algorithms satisfy Assumption 2. | 2. The analysis of neural networks contributes less. With the existing NTK theorem, the extension from linear models to wide fully-connected neural networks is trivial (Section 3.2, 3.3). The work bypasses the core problem of overparametrized neural networks and only considers the easy wide fully-connected neural networks. |
NIPS_2020_1854 | NIPS_2020 | - The contribution goes in several directions which makes the paper hard to evaluate; is the main contribution the selection of existing datasets, introducing new datasets or new versions of datasets, the empirical evaluation or the software tooling? - The dataset does not describe existing datasets and benchmarks, and so it is hard to judge the exact differences between the proposed datasets and currently used datasets. A more direct comparison might be useful, and it's not clear why existing, smaller datasets are not included in the collection. - For some of the datasets, it's unclear if or how they have been used or published before. In particular, the datasets from Moleculenet seem to be mostly reproduced, using the splitting strategy that was suggested in their paper, with the modification potentially being addition of new features. - If the selection of the datasets is a main contribution, the selection process should be made more clear. What was the pool of datasets that was drawn from, and how were datasets selected? An example of such a work is the OpenML100 and OpenML CC-18 for classification, see Bischl et. al. "OpenML Benchmarking Suites". or Gijsbers et al "An Open Source AutoML Benchmark" In addition to selection of the datasets, the selection of the splitting procedure and split ratios also seems ad-hoc and is not detailed. - Neither the software package, nor the datasets, nor the code for the experiments has been submitted as supplementary material, and the details in the paper are unlikely to be enough to reproduce the creation of the datasets or the experiments given the datasets. - Given that many methods aim at one of the three tasks, having 5, 6 and 4 datasets for the tasks respectively, might not be enough for a very rigorous evaluation, in particular if some of the datasets are so large that not all algorithms can be used on them. Addendum: Thank you to the authors for their detailed reply. A repository and online platform for reproducing the experiments was provided, and it was clarified that the datasets are substantially novel. Motivations for the number and choice of datasets were given and I updated my assessment to reflect that. | - Given that many methods aim at one of the three tasks, having 5, 6 and 4 datasets for the tasks respectively, might not be enough for a very rigorous evaluation, in particular if some of the datasets are so large that not all algorithms can be used on them. Addendum: Thank you to the authors for their detailed reply. A repository and online platform for reproducing the experiments was provided, and it was clarified that the datasets are substantially novel. Motivations for the number and choice of datasets were given and I updated my assessment to reflect that. |
NIPS_2020_125 | NIPS_2020 | - In section 3.1, the logic of extending HOGA from second order is not consistent with the extension from first order to second order; i.e., second order attention creates one more intermediate state U compared to the first order attention module. However, from the second order to higher order attention module, although intermediate states U0, U1 … are created, they are only part of the intermediate feature (Concatenating them will form U with full channel resolution). In this way, it seems we could regard the higher order attention module as a special form of second order attention module. - The paper does not clearly explain the intuition as to why different channel groups should have different attention mechanisms; i.e., in what specific way the network can benefit from the proposed channel group specific attention module. - Experiments are not solid enough: 1. There are no ablation studies on the effect of parameter numbers, so it is not clear whether the performance gain is due to the proposed approach or additional parameters. 2. Although there is good performance on imageNet classification with ResNet50/34/18, there are no results with larger models like ResNet101/152. 3. There are no results using strong object detection frameworks; the current SSD framework is relatively weak (e.g. Faster RCNN would be a stronger, more standard approach); it is not clear whether the improvements would be retained with a stronger base framework. - The proposed approach requires larger FLOPS compared to baselines; i.e., any performance gain requires large computation overhead (this is particularly pronounced in Table 3). - In Table 3 shows ResNet32/56 but L222 refers to ResNet34/50, which is confusing. | 2. Although there is good performance on imageNet classification with ResNet50/34/18, there are no results with larger models like ResNet101/152. |
NIPS_2018_482 | NIPS_2018 | , and while I recommend acceptance, I think that it deserves to be substantially improved before publication. One of the main concerns is that part of the relevant literature has been ignored, and also importantly that the proposed approach has not really been extensively compared to potential competitors (that might need to be adapted to the multi-source framework; not e also that single-fidelity experiments could be run in order to better understand how the proposed acquisition function compares to others from the literature). Another main concern in connection with the previous one is that the presented examples remain relatively simple, one testcase being an analytical function and the other one a one-dimensional mototonic function. While I am not necessarily requesting a gigantic benchmark or a list of complicated high-dimensional real-world test cases, the paper would significantly benefit from a more informative application section. Ideally, the two aspects of improving the representativity of numerical test cases and of better benchmarking against competitor strategies could be combined. As of missing approaches from the literature, some entry points follow: * Adaptive Designs of Experiments for Accurate Approximation of a Target Region (Picheny et al. 2010) http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=1450081 *Fast kriging-based stepwise uncertainty reduction with application to the identification of an excursion set (Chevalier et al. 2014a) http://amstat.tandfonline.com/doi/full/10.1080/00401706.2013.860918 * NB: approaches from the two articles above and more (some cited in the submitted paper but not benchmarked against) are coded for instance in the R package "KrigInv". The following article gives an overview of some of the package's functionalities: KrigInv: An efficient and user-friendly implementation of batch-sequential inversion strategies based on kriging (Chevalier et al. 2014b) * In the following paper, an entropy-based approach (but not in the same fashion as the one proposed in the submitted paper) is used, in a closely related reliability framework: Gaussian process surrogates for failure detection: A Bayesian experimental design approach (Wang et al. 2016) https://www.sciencedirect.com/science/article/pii/S002199911600125X * For an overall discussion on estimating and quantifying uncertainty on sets under GP priors (with an example in contour line estimation), see Quantifying Uncertainties on Excursion Sets Under a Gaussian Random Field Prior (Azzimonti et al. 2016) https://epubs.siam.org/doi/abs/10.1137/141000749 NB: the presented approaches to quantify uncertainties on sets under GP priors could also be useful here to return a more complete output (than just the contour line of the predictive GP mean) in the CLoVER algorithm. * Coming to the multi-fidelity framework and sequential design for learning quantities (e.g. probabilities of threshold exceedance) of interest, see notably Assessing Fire Safety using Complex Numerical Models with a Bayesian Multi-fidelity Approach (Stroh et al. 2017) https://www.sciencedirect.com/science/article/pii/S0379711217301297?via%3Dihub Some further points * Somehow confusing to read "relatively inexpensive" in the abstract and then "expensive to evaluate" in the first line of the introduction! * L 55 "A contour" should be "A contour line"? * L88: what does "f(l,x) being the normal distribution..." mean? 
* It would be nice to have a point-by-point derivation of equation (10) in the supplementary material (that would among others help readers including referees proofchecking the calculation). * About the integral appearing in the criterion, some more detail on how its computation is dealt with could be worth. ##### added after the rebuttal I updated my overall grade from 7 to 8 as I found the response to the point and it made me confident that the final paper would be improved (by suitably accounting for my remarks and those of the other referees) upon acceptance. Let me add a comment about related work by Rémi Stroh. The authors are right, the Fire Safety paper is of relevance but does actually not address the design of GP-based multi-fidelity acquisition functions (in the BO fashion). However, this point has been further developed by Stroh et al; see "Sequential design of experiments to estimate a probability of exceeding a threshold in a multi-fidelity stochastic simulator" in conference contributions listed in http://www.l2s.centralesupelec.fr/en/perso/remi.stroh/publications | * Coming to the multi-fidelity framework and sequential design for learning quantities (e.g. probabilities of threshold exceedance) of interest, see notably Assessing Fire Safety using Complex Numerical Models with a Bayesian Multi-fidelity Approach (Stroh et al. 2017) https://www.sciencedirect.com/science/article/pii/S0379711217301297?via%3Dihub Some further points * Somehow confusing to read "relatively inexpensive" in the abstract and then "expensive to evaluate" in the first line of the introduction! |
ICLR_2023_3317 | ICLR_2023 | Weakness main comments: • what is the advantage of using a differentiable LP layer (GNN and a LP solver) as a high-level policy, shown in Eq. 10?
– compare it to [1] that considers the LP optimization layer as a meta-environment?
– compare it to an explicit task assignment protocol (e.g. not implicit).
E.g. a high-level policy that directly outputs task weightings instead of the intermediary C matrix?
• How does this method address sparse reward problems in a better way? From the experiments, this does not support well. in practice, the proposed method requires sub-task-specific rewards to be specified, which would be similar to providing a dense reward signal that includes rewards for reaching sub-goals. If given the sum of low-level reward as the global reward, will the other methods (Qmix) solve the sparse-reward tasks as well?
minor comments: • It is hard to determine whether the solution to the matching problem (learned agent-task score matrix C) optimized by LP is achieving global perspective over the learning process.
• When the lower-level policies are also trained online, the learning could be unstable. Details on how to solve the instability in hierarchical learning are missing.
• What is the effect of the use of hand-defined tasks on performance? what is the effect of the algorithm itself? maybe do an ablation study.
• Section 5.2 ”training low-level actor-critic” should be put in the main text.
[1] Carion N, Usunier N, Synnaeve G, et al. A structured prediction approach for generalization in cooperative multi-agent reinforcement learning[J]. Advances in neural information processing systems, 2019, 32: 8130-8140. | • How does this method address sparse reward problems in a better way? From the experiments, this does not support well. in practice, the proposed method requires sub-task-specific rewards to be specified, which would be similar to providing a dense reward signal that includes rewards for reaching sub-goals. If given the sum of low-level reward as the global reward, will the other methods (Qmix) solve the sparse-reward tasks as well? minor comments: |
NIPS_2020_1720 | NIPS_2020 | - In L104-112 several prior arts are listed. I understand that the task authors tackle is predicting full mesh, but why proposed method is better than [21] or [6]? What makes the proposed approach better than previous methods? From the experiments, the performance difference is clear. However, I am missing the core insights/motivations behind the approach. - In L230, it is indicated that "we allow it (3D pose regressor) to output M possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric". Comparison here seems a bit unfair. Instead of using oracle poses, the authors would compute the MPJPE/RE for all of the M or maybe n out of M poses, then report the median error. - It is not clearly indicated whether the curated AH36M dataset is used for training. If so, did other methods eg. HMR, SPIN have access to AH36M data during training for a fair comparison? - There is no promise to release the code and the data. Even though the method is explained clearly, a standard implementation would be quite helpful for the research community. - There is no failure cases/limitations sections. It would be insightful to include such information for researchers who would like to build on this work. | - It is not clearly indicated whether the curated AH36M dataset is used for training. If so, did other methods eg. HMR, SPIN have access to AH36M data during training for a fair comparison? |
NIPS_2022_815 | NIPS_2022 | Weakness: 1. This paper requires a more detailed discussion and comparison with the previous related work. 2. There are some confusing mistake in the proof of the main results.
This paper lacks a detailed discussion and comparison with the previous work.
This paper seemed not to give any new insight on this field. | 2. There are some confusing mistake in the proof of the main results. This paper lacks a detailed discussion and comparison with the previous work. This paper seemed not to give any new insight on this field. |
NIPS_2022_1725 | NIPS_2022 | Weakness:
the novelty and contribution are limited. The key contribution is the class-aware transformer module, a revised transformer, to learn the class aware context. Pyramid structure and adversarial training are common approaches used in medical image analysis. 2.The motivation is unclear. Why the adversarial network is needed in this model?
the comparison of experimental results is unfair. The proposed model is equipped with newly-added CAT and GAN, which is a bigger model than others. Even the pre-trained model is compared with other models. | 2.The motivation is unclear. Why the adversarial network is needed in this model? the comparison of experimental results is unfair. The proposed model is equipped with newly-added CAT and GAN, which is a bigger model than others. Even the pre-trained model is compared with other models. |
NIPS_2016_265 | NIPS_2016 | 1. For the captioning experiment, the paper compares to related work only on some not official test set or dev set, however the final results should be compared on the official COOC leader board on the blind test set: https://competitions.codalab.org/competitions/3221#results e.g. [5,17] have won this challenge and have been evaluated on the blind challenge set. Also, several other approaches have been proposed since then and significantly improved (see leaderboard, the paper should at least compare to the once where an corresponding publication is available). 2. A human evaluation for caption generation would be more convincing as the automatic evaluation metrics can be misleading. 3. It is not clear from Section 4.2 how the supervision is injected for the source code caption experiment. While it is over interesting work, for acceptance at least points 1 and 3 of the weaknesses have to be addressed. ==== post author response === The author promised to include the results from 1. in the final For 3. it would be good to state it explicitly in Section section 4.2. I encourage the authors to include the additional results they provided in the rebuttal, e.g. T_r in the final version, as it provides more insight in the approach. Mine and, as far as I can see, the other reviewers concerns have been largely addressed, I thus recommend to accept the paper. | 1. For the captioning experiment, the paper compares to related work only on some not official test set or dev set, however the final results should be compared on the official COOC leader board on the blind test set: https://competitions.codalab.org/competitions/3221#results e.g. [5,17] have won this challenge and have been evaluated on the blind challenge set. Also, several other approaches have been proposed since then and significantly improved (see leaderboard, the paper should at least compare to the once where an corresponding publication is available). |
USGY5t7fwG | ICLR_2025 | 1. The proposed method lacks novelty, as it simply splits the image into background and foreground before processing them separately.
2. In Table 1, the best MAE for SD→SR is not achieved by the proposed method, yet it is marked in bold. This should be corrected.
3. In Table 1, the best performance for SN→FH is also incorrectly highlighted, and the proposed method performs worse than other methods by a large margin. The authors should explain this discrepancy.
4. The experimental results are unreliable, especially in Table 1, where the MSE is significantly smaller than the MAE, which raises concerns about their validity. | 4. The experimental results are unreliable, especially in Table 1, where the MSE is significantly smaller than the MAE, which raises concerns about their validity. |
NIPS_2017_401 | NIPS_2017 | Weakness:
1. There are no collaborative games in experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings.
2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and the centralized RL where agents share the weights. For instance, Foester et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016.
3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods.
4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players.
Initial Evaluation:
This paper offers an analysis of the effectiveness of the policy learning by existing approaches with little extension in two player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility:
Appears to be reproducible. | 3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods. |
ICLR_2021_343 | ICLR_2021 | 1. This paper is not well-organized and many parts are misleading. For example, above Eq.3, the author assumes P_{G_{0}} = P_{D}. Does the author take the samples generated by the root generator as the authentic dataset? However, in Section 2 above Eq. 4, the author claims that the authentic data does not belong to any generator. 2. In Eq.4, the key-dependent generator is obtained via adding perturbation to the output of the root model. This setting may be troublesome as :1. These generators are not actually trained. This is different from the problem which this paper tempt to solve. 2. No adversarial loss to guarantee the perturbed data being similar to the authentic data. 2. How to distinguish the samples from different generators. 3. Since Eq.4 is closely related to adversarial attack, the authors are supposed to discuss their connections in the related works. 4. The name of ‘decentralized attribution’ is misleading. Decentralized models are something like federated learning, where a ‘center’ model grasps information from ‘decentralized models’. However, the presented work is not related to such decentralization. 5. Typos: regarding the adversarial generative models ->regarding to the adversarial generative models; along the keys->along with the keys. | 2. No adversarial loss to guarantee the perturbed data being similar to the authentic data. |
NIPS_2019_263 | NIPS_2019 | --- Weaknesses of the evaluation in general: * 4th loss (active fooling): The concatenation of 4 images into one and the choice of only one pair of classes makes me doubt whether the motivation aligns well with the implementation, so 1) the presentation should be clearer or 2) it should be more clearly shown that it does generalize to the initial intuition about any two objects in the same image. The 2nd option might be accomplished by filtering an existing dataset to create a new one that only contains images with pairs of classes and trying to swap those classes (in the same non-composite image). * I understand how LRP_T works and why it might be a good idea in general, but it seems new. Is it new? How does it relate to prior work? Does the original LRP would work as the basis or target of adversarial attacks? What can we say about the succeptibility of LRP to these attacks based on the LRP_T results? * How hard is it to find examples that illustrate the loss principles clearly like those presented in the paper and the supplement? Weaknesses of the proposed FSR metric specifically: * L195: Why does the norm need to be changed for the center mass version of FSR? * The metric should measure how different the explanations are before and after adversarial manipulation. It does this indirectly by measuring losses that capture similar but more specific intuitions. It would be better to measure the difference in heatmaps before and after explicitly. This could be done using something like the rank correlation metric used in Grad-CAM. I think this would be a clearly superior metric because it would be more direct. * Which 10k images were used to compute FSR? Will the set be released? Philosohpical and presentation weaknesses: * L248: What does "wrong" mean here? The paper gets into some of the nuance of this position at L255, but it would be helpful to clarify what is meant by a good/bad/wrong explanation before using those concepts. * L255: Even though this is an interesting argument that forwards the discussion, I'm not sure I really buy it. If this was an attention layer that acted as a bottleneck in the CNN architecuture then I think I'd be forced to buy this argument. As it is, I'm not convinced one way or the other. It seems plausible, but how do you know that the final representation fed to the classifier has no information outside the highlighted area. Furthermore, even if there is a very small amount of attention on relevant parts that might be enough. * The random parameterization sanity check from [25] also changes the model parameters to evaluation visualizations. This particular experiment should be emphasized more because it is the only other case I can think of which considers how explanations change as a function of model parameters (other than considering completely different models). To be clear, the experiment in [25] is different from what is proposed here, I just think it provides interesting contrast to these experiments. The claim here is that the explanations change too much while the claim there is that they don't change enough. Final Justification --- Quality - There are a number of minor weaknesses in the evaluation that together make me unsure about how easy it is to perform this kind of attack and how generalizable the attack is. I think the experiments do clearly establish that the attack is possible. Clarity - The presentation is pretty clear. I didn't have to work hard to understand any of it. 
Originality - I haven't seen an attack on interpreters via model manipulation before. Significance - This is interesting because it establishes a new way to evaluate models and/or interpreters. The paper is a bit lacking in scientific quality in a number of minor ways, but the other factors clearly make up for that defect. | * L248: What does "wrong" mean here? The paper gets into some of the nuance of this position at L255, but it would be helpful to clarify what is meant by a good/bad/wrong explanation before using those concepts. |
ZnmofqLWMQ | ICLR_2024 | - The proposed method is computationally very costly. The outer optimization needs to differentiate through several (n = 10+) chained calls of a large score model in each iteration, with an overall N (N=100+) outer loop iterations, per image. Thus, it requires at least 1000 NFEs plus the heavy cost of N backpropagations through a large, chained model.
- There are lots of hyperparameters that need to be tuned (step size, N, $\delta t$) and as the optimization needs to be solved on a sample-by-sample basis it is not clear how much variation in optimal hyperparameters can occur.
- I have serious doubts about the experiments:
1) There is no comparison with DPS which is a well-known diffusion-based solver that has better performance than the included competing techniques and source code is made public. For instance, for 4x SR *with noise* DPS reports far better LPIPS than any techniques in this paper.
2) Distortion metrics such as PSNR, NMSE and SSIM are completely missing, which are crucial in evaluating inverse problem solvers.
3) A very small (100) number of samples is used for evaluation. In other competing methods it is standard to use a 1000-sample validation split. Thus the results are not necessarily reliable and it is very difficult to compare to existing published results (DDRM, DDNM, DPS). Furthermore, since FID heavily depends on the number of samples used in the generated distribution, the reposrted FIDs are not compatible with the ones reported in competing method's original papers.
4) I have doubts about the reported timing results in Table 3. SHRED is reported as approx. 5x slower than DDRM. According to the DDRM paper, they use 20 NFEs. How is the reported timing possible, when SHRED uses 100 outer loop iterations with 10 NFEs in each outer loop (total 1000 NFEs) plus the additional cost of 100 backpropagation?
5) The robustness experiments could be more rigorous. Instead of showing some good looking samples, it would be more meaningful to quantify the variation of image quality metrics for the validation dataset over 5-10 samples.
6) The framework is developed for noisy inverse problems, however there are no experiments for the noisy case. Reconstruction performance under measurement noise is crucial in evaluating the utility of the algorithm. | - There are lots of hyperparameters that need to be tuned (step size, N, $\delta t$) and as the optimization needs to be solved on a sample-by-sample basis it is not clear how much variation in optimal hyperparameters can occur. |
ICLR_2021_1014 | ICLR_2021 | - I am not an expert in the area of pruning. I think this motivation is quite good but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on target device, the memory consumption during the inference time and the actual network size. - The performance is only compared with few methods. And the proposed is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation.
I am willing to change my rating according to the feedback from authors and the comments from other reviewers. | - The performance is only compared with few methods. And the proposed is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation. I am willing to change my rating according to the feedback from authors and the comments from other reviewers. |
NIPS_2019_168 | NIPS_2019 | of the submission. * originality: This is a highly specialized contribution building up novel results on two main fronts: The derivation of the lower bound on the competitive ratio of any online algorithm and the introduction of two variants of an existing algorithm so as to meet this lower bound. Most of the proofs and techniques are natural and not surprising. In my view the main contribution is the introduction of the regularized version which brings a different, and arguably more modern interpretation, about the conditions under which these online algorithms perform well in these adversarial settings. * quality: The technical content of the paper is sound and rigorous * clarity: The paper is in general very well-written, and should be easy to follow for expert readers. * significance: As mentioned above this is a very specialized paper likely to interest some experts in the online convex optimization communities. Although narrow in scope, it contains interesting theoretical results advancing the state of the art in dealing with these specific problems. * minor details/comments: - p.1, line 6-7: I would rewrite the sentence to simply express that the lower bound is $\Omega(m^{-1/2})$ \- p.3, line 141: cost an algorithm => cost of an algorithm \- p.4, Algorithm 1, step 3: mention somewhere that this is the projection operator (not every reader will be familiar with this notation \- p.5, Theorem 2: remind the reader that the $\gamma$ in the statement is the parameter of OBD as defined in Algorithm 1 \- p.8, line 314: why surprisingly? | * quality: The technical content of the paper is sound and rigorous * clarity: The paper is in general very well-written, and should be easy to follow for expert readers. |
WDO5hfLZvN | ICLR_2025 | 1. The paper combines EvoPrompt and Mid Vision Feedback (MVF), but does not explain the principles and detailed processes of the two in the intrduction or related work section. In addition, the method section is a bit casual, without strict mathematical definitions and rigorous process expressions, making the method not specific and clear enough.
2. The paper does not have sufficient experimental demonstration of the contribution points. There is only an experimental comparison between ELF (the author's method) and the baseline without Mid Vision Feedback (MVF), but no comparison with the image classification result of Mid Vision Feedback (MVF). This does not prove that the schema searched by ELF (the author's method) is better than the schema in Mid Vision Feedback (MVF).
3. The description of the experimental section is not rigorous enough (potentially, it may lead to an imprecise experimental setting). For example, in the comparison of Stage1 and ELF in Table 1, the total training generations of the two do not seem to be consistent. Whether Stage1 has reached sufficient convergence may need to be explained. In lines #346-347, the author mentions using a 32x32 input size neural network for CIFAR100 experiments, but in lines #383-384, the experiment continues on ImageNet, switching to a larger ViT-B/16 and ResNet50, and the resolution setting is not explained at this time.
4. The analysis in the experimental part is not sufficient. The authors can show the difference between the schema optimized by EvoPrompt and the original schema (and MVF), and explain clearly and more deeply the growth points brought by using EvoPrompt to optimize the schema. | 2. The paper does not have sufficient experimental demonstration of the contribution points. There is only an experimental comparison between ELF (the author's method) and the baseline without Mid Vision Feedback (MVF), but no comparison with the image classification result of Mid Vision Feedback (MVF). This does not prove that the schema searched by ELF (the author's method) is better than the schema in Mid Vision Feedback (MVF). |
AqXzHRU2cs | ICLR_2024 | 1. The authors have not covered more on the types of activities captured in the datasets, and their importance in smart homes, particularly from the perspective of occupant comfort and energy efficiency.
2. The number of sensors used to collect data seems a lot. In practice, its not practical to have so many sensors in a home collecting information. The authors should try some benchmarking on a subset of sensors if the dataset permits.
3. How will a sensor fusion approach work in this scenario?
4. What are the motivations behind hierarchical approach?
5. For Milan and Cairo, the temporal method might not be effective since the number of days in the experiment is less. | 1. The authors have not covered more on the types of activities captured in the datasets, and their importance in smart homes, particularly from the perspective of occupant comfort and energy efficiency. |
NIPS_2018_76 | NIPS_2018 | - A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixel neighborhood. Also, PointNet uses a variant of STN in their network architecture. In this regard, the technical novelty seems limited in this work. Also, there are no empirical or conceptual comparisons to STN in this work, which is important. - There are no ablation studies on network architectures and also no ablation experiments on how the representative points are selected. - The runtime of the proposed network seems slow compared to several recent techniques. Even for just 1K-2K points, the network seem to be taking 0.2-0.3 seconds. How does the runtime scales with more points (say 100K to 1M points)? It would be good if authors also report relative runtime comparisons with existing techniques. Minor corrections: - Line 88: "lose" -> "loss". - line 135: "where K" -> "and K". Minor suggestions: - "PointCNN" is a very short non-informative title. It would be good to have a more informative title that represents the proposed technique. - In several places: "firstly" -> "first". - "D" is used to represent both dimensionality of points and dilation factor. Better to use different notation to avoid confusion. Review summary: - The proposed technique is sensible and the performance on different benchmarks is impressive. Missing comparisons to established STN technique (with both local and global transformations) makes this short of being a very good paper. After rebuttal and reviewer discussion: - I have the following minor concerns and reviewers only partially addressed them. 1. Explicit comparison with STN: Authors didn't explicitly compare their technique with STN. They compared with PointNet which uses STN. 2. No ablation studies on network architecture. 3. Runtimes are only reported for small point clouds (1024 points) but with bigger batch sizes. How does runtime scale with bigger point clouds? Authors did not provide new experiments to address the above concerns. They promised that a more comprehensive runtime comparison will be provided in the revision. Overall, the author response is not that satisfactory, but the positive aspects of this work make me recommend acceptance assuming that authors would update the paper with the changes promised in the rebuttal. Authors also agreed to change the tile to better reflect this work. | - "D" is used to represent both dimensionality of points and dilation factor. Better to use different notation to avoid confusion. Review summary: |
NIPS_2018_821 | NIPS_2018 | 1. A few parts of this paper are not very clear or not sufficiently provided, such as the model details. Section 4.3 should be addressed more to make it clearer since some concepts/statements are misleading or confusing. 2. The trace to code model is complex, which requires as many LSTMs as the number of input/output pairs, and it may be hard to be applied to other program synthesis scenarios. 3. Apart from the DSL, more experiments on dominant PL (e.g., Python) would be appreciated by people in this research field. Details are described as below: 1. For a given program, how to generate diverse execution traces that can be captured by the I/O -> Trace model? Since execution traces are generated by running the program on N I/O pairs, it is possible that some execution traces have a large overlap. For example, in the extreme case, two execution traces may be the same (or very similar) given different I/O pairs. 2. The authors involve the program interpreter in their approach, which is a good trial and it should help enhance the performance. However, I am curious about is it easy to be integrated with the neural network during training and testing? 3. The concept of state is not very clear, from my understanding, it represents the grid status (e.g., agent position) and it is obtained after applying an action of the trace. Line 186-line 187, is the âelementsâ equivalent to âstatesâ? or âactionsâ? More should be elaborated. 4. Line 183-line 184, is it necessary to use embedding for only four conditionals (of Boolean type)? only 16 possible combinations. 5. As depicted in Figure 3, if more I/O pairs are provided, the Trace->Code should be very complex since each i/o example requires such an LSTM model. How to solve this issue? 6. In essence, the Trace->Code structure is a Sequence to Sequence model with attention, the only differences are the employment of I/O pair embedding and the max pooling on multiple LSTM. How are the I/O pair embeddings integrated into the computation? Some supplementary information should be provided. 7. It is interesting to find that model trained on gold traces perform poorly on inferred traces, the authors do not give a convincing explanation. More exploration should be conducted for this part. 8. It would be better if some synthesized program samples are introduced in an appendix or other supplementary documents. | 3. The concept of state is not very clear, from my understanding, it represents the grid status (e.g., agent position) and it is obtained after applying an action of the trace. Line 186-line 187, is the âelementsâ equivalent to âstatesâ? or âactionsâ? More should be elaborated. |
FAYIlGDBa1 | ICLR_2025 | W1) Some of the technical contents, including the problem formulation, are not properly formalized, with some important aspects completely omitted. Notably, it should be stated that matrix $A$ in (SPCA) is necessarily symmetric positive semidefinite, implying the same for the blocks in the block-diagonal approximation. The absence of this information and the fact that throughout the paper $A$ is simply referred to as an "input matrix" rather than a covariance matrix may mislead the reader into thinking that the problem is more general than it actually is.
W2) The presentation of the simulation results is somewhat superficial, focusing only on presenting and briefly commenting the two quantitative criteria used for comparison, without much discussion or insight into what is going on. Specifically:
- Separate values for the different used $k$ should be reported (see point W2 below).
- It would be interesting to report the threshold value $\varepsilon$ effectively chosen by the algorithm (relatively to the infinity norm of the input matrix), as well as the proportion of zero entries after the thresholding.
- It would also be interesting to compare the support of the solution obtained by the proposed scheme with that obtained by the baseline methods (e.g., using a Jaccard index).
W3) Reporting average results with respect to $k$ is not a reasonable choice in my opinion, as the statistics of the chosen metrics probably behave very differently for different values of $k$.
W4) As it is, I don't see the utility of Section 4.1. First, this model is not applied to any of the datasets used in the experiments. This leads one to suspect that in practice it is quite hard to come up with estimates of the parameters required by Algorithm 4. Second, the authors do not even generate synthetic data following such a model (which is one typical use of a model) in order to illustrate the obtained results. In my view results of this kind should be added, or else the contents of 4.1 should be moved to the appendices as they're not really important (or both).
W5) Though generally well-written, the paper lacks some clarity at times, and the notation is not always consistent/clear. In particular:
- The sentence "This result is non-trivial: while the support of an optimal solution could span multiple blocks, we theoretically show that there must exist a block that contains the support of an optimal solution, which guarantees the efficiency of our framework." seems to be contradicatory. I believe that the authors mean the following: *one could think* that the solution could span multiple blocks, but they show this is not true. The same goes for Remark 1.
- What is the difference between $A^\varepsilon$ and $\tilde{A}$ in Theorem 1? It seems that two different symbols are used to denote the same object.
- The constraint $\|x\|_0 \le k$ in the last problem that appears in the proof of Theorem 2 is inocuous, since the size of each block $\tilde{A}_i'$ is at most $k$ anyway. This should be commented. | - It would also be interesting to compare the support of the solution obtained by the proposed scheme with that obtained by the baseline methods (e.g., using a Jaccard index). |
NIPS_2020_0 | NIPS_2020 | I mainly have the following concerns. 1) In general, this paper is incremental to GIN [1], which limits the contribution of this paper. While GIN is well motivated by WL test with solid theoretical background, this paper lacks deeper analysis and new motivation behind the algorithm design. I suggest the authors to give more insightful analysis and motivation. 2) I noticed that in Sec 5.3, a generator equipped with a standard R-GCN as discriminator tends to collapse after several (around 20), while the proposed module will not. The reason behind this fact can be essential to show the mechanism how the proposed method differs from previous one. However, this part is missing in this version of submission. I would like to see why the proposed module can prevent a generator from collapsing. 3) I understand that stochastic/random projection is with high probability to preserve the metric before mapping . My concern is that when stacking multiple layers of WLS units, the probability of the failure case of stochastic/random projection also increases (since projection is performed at each layer). This may greatly hinder the scenario of the proposed method from forming deeper GNN. In this case, authors should justify the stability of the proposed method. How stable is the proposed method? And what happens when stacking more layers? | 2) I noticed that in Sec 5.3, a generator equipped with a standard R-GCN as discriminator tends to collapse after several (around 20), while the proposed module will not. The reason behind this fact can be essential to show the mechanism how the proposed method differs from previous one. However, this part is missing in this version of submission. I would like to see why the proposed module can prevent a generator from collapsing. |
n7n8McETXw | ICLR_2025 | 1. **Limitations in Model Complexity**: The paper primarily analyzes a single-head attention Transformer, which may not encapsulate the full complexity and performance characteristics of multi-layer and multi-head attention models. The validation was confined to binary classification tasks, thereby restricting the generalizability of the theoretical findings.
2. **Formula Accuracy**: The paper requires a meticulous review of its mathematical expressions. For instance, in Example 1, the function should be denoted as \( f_2(\mu_2') = \mu_1' \) instead of \( f_1(\mu_2') = \mu_1' \), and the second matrix \( A_1^f \) should be corrected to \( A_2^f \). It is essential to verify all formulas throughout the text to ensure accuracy.
3. **Theorem Validity and Clarification**: The theorems presented in the article should be scrutinized for their validity, particularly the sections that substantiate reasoning, as they may induce some ambiguity. Reading the appendix is mandatory for a comprehensive understanding of the article; otherwise, it might lead to misconceptions.
4. **Originality Concerns**: The article's reasoning and writing logic bear similarities to those found in "How Do Nonlinear Transformers Learn and Generalize in In-Context Learning." It raises the question of whether this work is merely an extension of the previous study or if it introduces novel contributions. | 4. **Originality Concerns**: The article's reasoning and writing logic bear similarities to those found in "How Do Nonlinear Transformers Learn and Generalize in In-Context Learning." It raises the question of whether this work is merely an extension of the previous study or if it introduces novel contributions. |
NIPS_2022_1572 | NIPS_2022 | 1.) Theoretical comparisons to adaptive learning of GPRGNN are not clear. 2.) Incremental work, though the contribution is useful. | 1.) Theoretical comparisons to adaptive learning of GPRGNN are not clear.
xozJw0kZXF | EMNLP_2023 | 1. Is object hallucination the most important problem of multimodal LLMs? Others include knowledge, object spatial relationships, fine-grained attributes, etc.
2. Is it sufficient to measure object hallucination through only yes/no responses? A yes response does not necessarily indicate that the model comprehends the presence of the object in the image, as it may still produce incorrect objects when undertaking other tasks. | 2. Is it sufficient to measure object hallucination through only yes/no responses? A yes response does not necessarily indicate that the model comprehends the presence of the object in the image, as it may still produce incorrect objects when undertaking other tasks.
4NhMhElWqP | ICLR_2024 | - The main weakness of this paper is that it overclaims and underdelivers. In its current state, the study is not strong enough to claim the title of a "foundational model".
- The authors mention that they use datasets from diverse domains. However, out of the 12 datasets studied, 6 come from a single domain. The distribution of sampling frequencies of these datasets is also not diverse, with 6 hourly datasets and a limited representation of other frequencies (and some popular frequencies completely missing).
- Another aspect that could have justified the term "foundational" is a diversity of tasks. However, the paper mostly focuses on long-term forecasting tasks, with limited discussion of other tasks. Importantly, the practically relevant task of short-term forecasting (e.g., the Monash time series forecasting archive) receives very little attention.
- The claim _Most existing forecasting models were designed to process regularly sampled data of fixed length. We argue that this restriction is the central reason for poor generalisation in time series forecasting_ has not been justified convincingly.
- The visualizations are poorly done and confusing for a serious academic paper. Please consider using cleaner figures. It is unclear how exactly inference on a new dataset is performed. It would improve the clarity of the paper if a specific paragraph on inference is added. Please see specific questions in the questions section.
- The results on the long-term forecasting benchmarks, while reasonable, are not impressive for a "foundational model" that has been trained on a larger corpus of datasets.
- The very-long-term forecasting task is of limited practical significance. Despite that, the discussion requires improvement, e.g., by conducting experiments on more datasets and training the baseline models with the "correct" forecast horizon to put the results in a proper context.
- The zero-shot analysis (Sec. 4.3) has only been conducted on two datasets. Moreover, since prior works such as PatchTST and NHITS do not claim to be foundational models, a proper comparison would be with baselines trained specifically on these held-out datasets. DAM would most likely be worse in that case but it would be a better gauge for the zero-shot performance. | - The very-long-term forecasting task is of limited practical significance. Despite that, the discussion requires improvement, e.g., by conducting experiments on more datasets and training the baseline models with the "correct" forecast horizon to put the results in a proper context. |
R6sIi9Kbxv | ICLR_2025 | 1. The approach of decomposing the video representation into spatial and temporal representations for efficient and effective spatio-temporal modelling is a general idea in video understanding. I am not going to fault its use in large video language models; however, I think proper credit and a literature review should be included. For example, TimeSFormer [1], Uniformer [2], Dual-AI [3], and other works use transformers for video recognition.
2. Training a specialized and efficient video QFormer has been explored and utilized by UMT [4] and VideoChat2 [5]. Please clarify the difference and include them in the literature review.
2. Confusing attentive pooling module architecture. It seems the temporal representation $v_{t}$ is derived from spatial queries $Q_{s}$ attending to a set of frame features (with T as the batch dimension). This means the spatial queries can only attend to in-frame content, which makes it doubtful why the representation is called a temporal representation (see the sketch after this list).
3. Training data: what specific data are used for training? Please provide details of how many videos come from which dataset and how sampling is performed. This is critical for reproduction and for measuring the method's effectiveness.
4. Experiments - Comparison fairness: More recent methods should be included in the comparison, especially those with similar motivations, e.g., VideoChat2.
5. Experiments - Image benchmark: As an image dataset is used, it would be great to show the performance variance after such ST QFormer tuning, also compared to a normal QFormer.
6. Experiments - Video summarization: there are some good new benchmarks, like Shot2Story, which covers different topics and uses only text and frame modalities. This is not mandatory, but it would be good to include.
7. Experiments - Ablation - missing components: There should be experiments and an explanation regarding the different queries used in the spatio-temporal representation, i.e., spatial, temporal and summary. That is the key difference from VideoChatGPT and other works. What if only the spatial queries are used, or only the temporal and summary ones?
8. Experiments - Ablation - metric: for ablations, I suggest using QA benchmarks for experiments rather than a captioning benchmark. When it comes to LLMs, the current captioning metrics such as B@4 and CIDEr might not be ideal for reflecting model ability.
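A minimal sketch of the attentive-pooling pattern questioned in point 2 above — spatial queries cross-attending to per-frame features with T folded into the batch dimension — to illustrate why the pooled output only aggregates in-frame content. All shapes, names and module choices are illustrative assumptions, not the paper's actual module:

```python
import torch
import torch.nn as nn

class PerFrameAttentivePooling(nn.Module):
    """Learnable queries attend to the tokens of one frame at a time."""
    def __init__(self, num_queries=32, dim=768, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, frame_feats):
        # frame_feats: (B, T, N, D) patch tokens per frame.
        B, T, N, D = frame_feats.shape
        kv = frame_feats.reshape(B * T, N, D)              # T folded into the batch dimension
        q = self.queries.unsqueeze(0).expand(B * T, -1, -1)
        pooled, _ = self.attn(q, kv, kv)                   # each query sees only its own frame
        return pooled.reshape(B, T, -1, D)                 # (B, T, num_queries, D): no cross-frame mixing

pool = PerFrameAttentivePooling()
out = pool(torch.randn(2, 8, 196, 768))
print(out.shape)  # torch.Size([2, 8, 32, 768])
```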
[1] Is Space-Time Attention All You Need for Video Understanding?
[2] UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning
[3] Dual-AI: Dual-Path Actor Interaction Learning for Group Activity Recognition
[4] MVBench | 7. Experiments - Ablation - missing components: There should be experiments and an explanation regarding the different queries used in the spatio-temporal representation, i.e., spatial, temporal and summary. That is the key difference from VideoChatGPT and other works. What if only the spatial queries are used, or only the temporal and summary ones?
ICLR_2022_3068 | ICLR_2022 | 1. The innovation in the paper is limited; it is more like an assembly of existing work, such as DeepLabv3+, attention and spatial attention. 2. The proposed method was not evaluated on other datasets, such as the MS COCO dataset. 3. The main focus of this paper is tiny object detection, but the analysis of small objects is limited in the experimental results.
Questions: 1. What do ‘CEM’ and ‘FPM’ mean in Figure 1? 2. The novelty of CAM is limited; a similar structure has been proposed in DeepLabv3+. 3. The proposed FRM is a simple combination of channel attention and spatial attention. The innovation should be described in detail. | 3. The proposed FRM is a simple combination of channel attention and spatial attention. The innovation should be described in detail.
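For context on point 3, a minimal sketch of what a simple combination of channel attention and spatial attention typically looks like (a CBAM-style block); this is a generic illustration under assumed shapes, not the reviewed paper's FRM:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Sequentially apply channel attention, then spatial attention, to a feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                      # x: (B, C, H, W)
        # Channel attention from a global average-pooled descriptor.
        c = self.channel_mlp(x.mean(dim=(2, 3)))               # (B, C)
        x = x * torch.sigmoid(c)[:, :, None, None]
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)    # (B, 2, H, W)
        return x * torch.sigmoid(self.spatial_conv(s))

block = ChannelSpatialAttention(64)
print(block(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```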
NIPS_2021_780 | NIPS_2021 | 5 Limitations
a. The authors briefly talk about the limitations of the approach in section 5. The main limitation they draw attention to is the challenge of moving closer to the local maxima of the reward function in the latter stages of optimization. To resolve this they discuss combining their method with local optimization techniques; however, I wonder whether the temperature approach they discuss in the earlier part of their paper (combined with some annealing scheme) could also be used here?
b. One limitation the authors do not mention is how the method scales with the size of the state and action space. The loss function requires, for every current state, the sum over all previous states and actions that may have led to the current state (see term 1 of Eq. 9). I assume this may become intractable for very large state-action spaces (and the flows one is trying to model become very small). Can one approximate the sum using a subset? Also, what about continuous state/action spaces?
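A minimal sketch of the subset approximation asked about in point b: estimating the inflow term (the sum over parent states and actions) from a random subset of parents. The interface (`parents_fn`, `flow_fn`) and the assumption that parents can at least be enumerated are illustrative, not taken from the paper:

```python
import random

def approx_inflow(state, parents_fn, flow_fn, num_samples=8):
    """Unbiased estimate of the inflow sum over parents (s', a') -> state of F(s', a')."""
    parents = parents_fn(state)          # all (parent_state, action) pairs leading to `state`
    if len(parents) <= num_samples:
        return sum(flow_fn(s, a) for s, a in parents)
    subset = random.sample(parents, num_samples)
    scale = len(parents) / num_samples   # rescaling keeps the inflow estimate unbiased
    return scale * sum(flow_fn(s, a) for s, a in subset)
```

Note that even with an unbiased inflow estimate, plugging it into a logarithm or a squared loss makes the resulting loss estimate biased, which is part of what makes the question non-trivial.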
6 Societal impact
The authors state that they foresee no negative social impacts of their work (line 379). While I do not believe this work has the potential for significant negative social impact (and I'm not quite sure if/how I'm meant to review this aspect of their work), the authors could always mention the social impact of increased automation, or the risks from the dual use of their method, etc. | 6 Societal impact The authors state that they foresee no negative social impacts of their work (line 379). While I do not believe this work has the potential for significant negative social impact (and I'm not quite sure if/how I'm meant to review this aspect of their work), the authors could always mention the social impact of increased automation, or the risks from the dual use of their method, etc. |
NIPS_2018_606 | NIPS_2018 | , I tend to vote in favor of this paper. * Detailed remarks: - The analysis in Figure 4 is very interesting. What is a possible explanation for the behaviour in Figure 4(d), showing that the number of function evaluations automatically increases with the epochs? Consequently, how is it possible to control the tradeoff between accuracy and computation time if, automatically, the computation time increases along the training? In this direction, I think it would be nice to see how classification accuracy evolves (e.g. on MNIST) with the precision required. - In Figure 6, an interpolation experiment shows that the probability distribution evolves smoothly along time, which is an indirect way to interpret it. Since this is a low (2D) dimensional case, wouldn't it be possible to directly analyze the learnt ODE function, by looking at its fixed points and their stability? - For the continuous-time time-series model, subsec 6.1 clarity should be improved. Regarding the autoencoder formulation, why is an RNN used for the encoder, and not an ODE-like layer? Indeed, the authors argue that RNNs have trouble coping with such time-series, so this might also be the case in the encoding part. - Do the authors plan to share the code of the experiments (not only of the main module)? - I think it would be better if notations in appendix A followed the notations of the main paper. - Section 3 and Section 4 are slightly redundant: maybe putting the first paragraph of sec 4 in sec 3 and putting the remainder of sec 4 before section 3 would help. | - Section 3 and Section 4 are slightly redundant: maybe putting the first paragraph of sec 4 in sec 3 and putting the remainder of sec 4 before section 3 would help. |
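A minimal sketch of the direct analysis suggested above for the low-dimensional case — locating fixed points of the learned ODE function and classifying their stability via the Jacobian's eigenvalues. The dynamics function here is a stand-in, not the paper's trained model:

```python
import numpy as np
from scipy.optimize import fsolve

def f(h):
    """Stand-in for a learned 2D ODE right-hand side dh/dt = f(h)."""
    x, y = h
    return np.array([y, -x - 0.5 * y])

def jacobian(f, h, eps=1e-5):
    """Central-difference Jacobian of f at h."""
    J = np.zeros((2, 2))
    for j in range(2):
        dh = np.zeros(2); dh[j] = eps
        J[:, j] = (f(h + dh) - f(h - dh)) / (2 * eps)
    return J

# Find a fixed point from an initial guess and classify its stability.
h_star = fsolve(f, x0=np.array([0.1, 0.1]))
eigvals = np.linalg.eigvals(jacobian(f, h_star))
print(h_star, eigvals)  # stable if all eigenvalues have negative real part
```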
ICLR_2021_458 | ICLR_2021 | W1. Clarity
The organization of the paper is such that the reader has to refer to the appendix a lot. My biggest concern on clarity is on the “theoretical” results which are not rigorous and at times unsupported. Further, some statements/claims are not precise or clear enough for me to be convinced that the method is well-motivated and is doing what it is claimed to be doing.
W2. Soundness
I have a lot of concerns and questions here as I read through Sect. 3. At a high-level, I don’t see a clear connection between “improved variance control of prediction y^ or the smoothness of loss landscape” and “zero-shot learning effectiveness.” Details below. This is in part due to poor clarity.
W3. Experiments
IMO, if the main claim is really about the effectiveness of the two tricks and the proposed class normalization, then the experiments should go beyond one zero-shot learning starting point --- 3-layer MLP (Table 2).
If baseline methods already adopt some of these tricks, this should be made clear, and it should be checked whether removing these tricks leads to inferior performance.
If baseline methods do not adopt some of these tricks, these tricks, especially class normalization, could be applied to show improved performance. If it is difficult to apply these tricks, further explanation should be given (generally, also mention applicability of these tricks.)
This is done to some degree in the continual setting.
W4. Related work
As I mentioned in W3, it is unclear which methods are linear/deep, and which methods have already benefited from existing/proposed tricks.
Detailed comments (mainly to clarify my points about weaknesses)
Statement 1
The main claim for this part is that this statement provides "a theoretical understanding of the trick" and "allows to speed up the search [of the optimal value of \gamma]."
However, I feel that we need further justifications on the correlation between Statement 1 (variance of y^_c, “better stability” and “the training would not stale”) and the zero-shot learning accuracy for this to be the “why normalization + scaling works.” My understanding is that the Appendix simply validates that Eq. (4) seems to hold in practice.
Moreover, is the usual search region [5,10] actually effective? Do we have stronger supporting empirical evidence than the fact that three groups of practitioners (Li et al. 2019, Zhang et al. 2019, Guo et al. 2020), who may have influenced each other, used it?
Finally, can the authors comment on the validity of the multiple assumptions in Appendix A? To what degree does each of them hold in practice?
Statements 2 and 3
Why wouldn’t the following statement in Sect. 3.3 invalidate Statement 1? “This may create an impression that it does not matter how we initialize the weights — normalization would undo any fluctuations. However it is not true, because it is still important how the signal flows, i.e. for an unnormalized and unscaled logit value”
It is unclear (at least from the beginning) what understanding attribute normalization has to do with initialization of the weights.
Similar to my comments to Statement 1, why should we believe that the explanation in Sect. 3.3 and Sect. 3.4 is the reason for zero-shot learning effectiveness? In particular, the authors again claim that the main bottleneck in improving zero-shot learning is “variance control” (the end of Sect. 3.3).
I also have a hard time understanding some statements in Appendix H, which is needed to motivate the following statement in Sect. 3.3: “And these assumptions are safe to assume only for z but not for a_c, because they do not hold for the standard datasets (see Appendix H).” H1: Would this statement still be true after we transform a_c with an MLP? H2: Why is it not “a sensible thing to do” if we just want zero mean and unit variance? H3: Why is “such an approach far from being scalable”? H4: What if these are things like word embeddings? H5: Fig. 12 and Fig. 13 are not explained. H6: Histograms in Fig. 13 look quite normal.
How useful is Statement 2? Why is the connection with Xavier initialization important?
Why is “preserving the variance between z and y~” in Statement 3 important for zero-shot learning?
Improved smoothness
The claim “improved smoothness” at the end of Sect. 3 and Appendix F is really hard to understand. F1: How do the authors define “irregular loss surface”? F2: “Santurkar et al. (2018) showed that batch-wise standardization procedure decreases the Lipschitz constant of a model, which suggests that our class-wise standardization will provide the same impact.” This is not very precise and seems unsupported. Please make it clear how. If this is a hypothesis, please make it clear.
Similar to my comments on Statements 1-3, how is improved smoothness related to zero-shot learning effectiveness?
Other more minor comments
Abstract: Are the authors the ones to "generalize ZSL to a broader problem"? Please tone down the claim if not.
After Eq. (2): Why does attribute normalization look “inconsiderable” (possibly this is not the right word?) or why is it “surprising” that this is preferred in practice? Don’t most zero-shot learning methods use this (see for example Table 4 in [A])?
Suggestions for references for attribute normalization. This can be improved; I can trace this back to much earlier work such as [A] and [B] (though I think this fact is stated more explicitly in [A]).
Under Table 1 “These two tricks work well and normalize the variance to a unit value when the underlying ZSL model is linear (see Figure 1), but they fail when we use a multi-layer architecture.”: Could the authors provide a reference to evidence to support this? I think it is also important to provide a clear statement of what separates a “linear” or “multi-layer” model.
The first paragraph of Sect. 3: Could you provide references for the motivations for different activation functions? Further, it is unclear that all of them perform normalization.
The second paragraph of Sect. 3: What exactly limits “the tools” for zero-shot learning vs. supervised learning? Further, it would also be nice to separate traditional supervised learning where classes are balanced and imbalanced; see, e.g., [C].
What is the closest existing zero-shot model to the one the authors describe in Sect. 3.1? Why is the described model considered/selected?
[A] Synthesized Classifiers for Zero-Shot Learning
[B] Zero-Shot Learning by Convex Combination of Semantic Embeddings
[C] Class-Balanced Loss Based on Effective Number of Samples | 3. At a high-level, I don’t see a clear connection between “improved variance control of prediction y^ or the smoothness of loss landscape” and “zero-shot learning effectiveness.” Details below. This is in part due to poor clarity. |
NIPS_2016_386 | NIPS_2016 | , however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give. 
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm. | * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. |
NIPS_2021_537 | NIPS_2021 | Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-level idea, though not explicitly highlighted, has been adopted in almost all query-based NAS approaches where the promising architectures are predicted and selected at each iteration and used to update the predictor model for next iteration. As the authors acknowledged in Section 2.3, their approach is exactly a simplified version of BO which has been extensively used for NAS [1,2,3,4]. However, unlike BO, the predictor doesn’t output uncertainty and thus the authors use a heuristic to trade-off exploitation and exploration rather than using more principled acquisition functions.
2. If we look at the specific components of the approach, they are not novel either. The weak predictors used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS [5]. In fact, the results of the proposed WeakNAS are almost the same as those of BRP-NAS, as shown in Table 2 in Appendix C. 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out, through theoretical analyses or extensive experiments, the reasons why a simple greedy selection approach outperforms more principled acquisition functions (if that's true) on NAS and why deterministic MLP predictors, which are often overconfident when extrapolating, outperform more robust probabilistic predictors like GPs, deep ensembles or Bayesian neural networks. However, such rigorous analyses are missing in the paper.
Detailed Comments: 1. The authors conduct some ablation studies in Section 3.2. However, a more important ablation would be to modify the proposed predictor model to get some uncertainty (by deep-ensemble or add a BLR final output layer) and then use BO acquisition functions (e.g. EI) to do the sampling. The proposed greedy sampling strategy works because the search space for NAS-Bench-201 and 101 are relatively small and as demonstrated in [6], local search even gives the SOTA performance on these benchmark search spaces. For a more realistic search space like NAS-Bench-301[7], the greedy sampling strategy which lacks a principled exploitation-exploration trade-off might not work well. 2. Following the above comment, I’ll suggest the authors to evaluate their methods on NAS-Bench-301 and compare with more recent BO methods like BANANAS[2] and NAS-BOWL[4] or predictor-based method like BRP-NAS [5] which is almost the same as the proposed approach. I’m aware that the authors have compared to BONAS and shows better performance. However, BONAS uses a different surrogate which might be worse than the options proposed in this paper. More importantly, BONAS use weight-sharing to evaluate architectures queried which may significantly underestimate the true architecture performance. This trades off its performance for time efficiency. 3. For results on open-domain search, the authors perform search based on a pre-trained super-net. Thus, the good final performance of WeakNAS on MobileNet space and NASNet space might be due to the use of a good/well-trained supernet; as shown in Table 6, OFA with evalutinary algorithm can give near top performance already. More importantly, if a super-net has been well-trained and is good, the cost of finding the good subnetwork from it is rather low as each query via weight-sharing is super cheap. Thus, the cost gain in query efficiency by WeakNAS on these open-domain experiments is rather insignificant. The query efficiency improvement is likely due to the use of a predictor to guide the subnetwork selection in contrast to the naïve model-free selection methods like evolutionary algorithm or random search. A more convincing result would be to perform the proposed method on DARTS space (I acknowledge that doing it on ImageNet would be too expensive) without using the supernet (i.e. evaluate the sampled architectures from scratch) and compare its performance with BANANAS[2] or NAS-BOWL[4]. 4. If the advantage of the proposed method is query-efficiency, I’d love to see Table 2, 3 (at least the BO baselines) in plots like Fig. 4 and 5, which help better visualise the faster convergence of the proposed method. 5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS. 6. I might misunderstand something here but the results in Table 3 seem to contradicts with the results in Table 4. As in Table 4, WeakNAS takes 195 queries on average to find the best architecture on NAS-Bench-101 but in Table 3, WeakNAS cannot reach the best architecture after even 2000 queries.
7. The results in Table 2, which show that linear-/exponential-decay sampling clearly underperforms uniform sampling, confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performances of architectures in the good subregion are rather close.
I understand that given the time constraint, the authors are unlikely to respond to my comments. Hope those comments can help the authors for future improvement of the paper.
References: [1] Kandasamy, Kirthevasan, et al. "Neural architecture search with Bayesian optimisation and optimal transport." NeurIPS. 2018. [2] White, Colin, et al. "BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search." AAAI. 2021. [3] Shi, Han, et al. "Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS." NeurIPS. 2020. [4] Ru, Binxin, et al. "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels." ICLR. 2020. [5] Dudziak, Lukasz, et al. "BRP-NAS: Prediction-based NAS using GCNs." NeurIPS. 2020. [6] White, Colin, et al. "Local search is state of the art for nas benchmarks." arXiv. 2020. [7] Siems, Julien, et al. "NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search." arXiv. 2020.
The limitations and social impacts are briefly discussed in the conclusion. | 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out, through theoretical analyses or extensive experiments, the reasons why a simple greedy selection approach outperforms more principled acquisition functions (if that's true) on NAS and why deterministic MLP predictors, which are often overconfident when extrapolating, outperform more robust probabilistic predictors like GPs, deep ensembles or Bayesian neural networks. However, such rigorous analyses are missing in the paper. Detailed Comments:
NIPS_2016_186 | NIPS_2016 | weakness in the algorithm is the handling of the discretization. It seems that two improvements are somewhat easily achievable: First, there should probably be a way to obtain instance-dependent bounds for the continuous setting. It seems that by taking a confidence bound of size \sqrt{log(st)/T_{i,t}} rather than \sqrt{log(t)/T_{i,t}}, one can get a logarithmic dependence on s, rather than polynomial, which may solve this issue. If that doesn't work, the paper would benefit from an explanation of why that doesn't work. Second, it seems that the discretization should be adaptive to the data. Otherwise, the running time and memory depend on the time horizon in cases where they do not have to. Overall, the paper is well written and motivated. Its results, though having room for improvement, are non-trivial and deserve publication. Minor comments: - Where else was the k-max problem discussed? Please provide a citation for this. | - Where else was the k-max problem discussed? Please provide a citation for this.
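A brief sketch of the union-bound reasoning behind the suggested \sqrt{log(st)/T_{i,t}} radius (this only spells out the reviewer's suggestion under a standard Hoeffding-type assumption; it is not taken from the reviewed paper). By Hoeffding, a single discretization point i with T_{i,t} samples satisfies P(|\hat{\mu}_{i,t} - \mu_i| > r) \le 2 e^{-2 T_{i,t} r^2}, so

```latex
\Pr\!\Big(\exists\, i \le s,\ \exists\, t' \le t :\ |\hat{\mu}_{i,t'} - \mu_i| > r_{i,t'}\Big)
  \;\le\; \sum_{i=1}^{s}\sum_{t'=1}^{t} 2\, e^{-2\, T_{i,t'}\, r_{i,t'}^{2}}
  \;=\; \sum_{i=1}^{s}\sum_{t'=1}^{t} \frac{2}{(st)^{2}}
  \;=\; \frac{2}{st}
  \qquad\text{with}\qquad r_{i,t'} = \sqrt{\frac{\log(st)}{T_{i,t'}}} .
```

The failure probability stays small while the radius only grows like \sqrt{\log s}, i.e., the dependence on the discretization size s is logarithmic rather than polynomial.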
UEx5dZqXvr | EMNLP_2023 | Based on the analyses presented in the paper, the scaling behavior of document-level machine translation models (in terms of number of parameters and number of data points) is not crucially different from the scaling behavior of sentence-level machine translation systems, which has already been studied by [Ghorbani et al. (2021)](https://arxiv.org/pdf/2109.07740.pdf).
In my opinion, the experiments (or their reporting) could be more thorough in many places to back up the claims made by the authors, e.g.
- Figure 1 indicates a causal relationship between the number of parameters and translation quality by affecting sample efficiency and optimal sequence length. While the graphic is confusing to me (e.g. it looks to me as if the number of parameters is affecting the corpus size), I don’t think that the intended claim (as reiterated at the end of the “Introduction” section) is supported by the results. None of the experiments actually show causation but rather association between the variables.
- There is no information on how the function for the optimal sequence length was estimated (Equation 1) and how reliable we expect this model to be.
- In Section 5.2, the authors should define what they mean by “error accumulation” explicitly. Is the problem that the models base the translation of later sentences made on erroneous translation of earlier sentences? If that is what the authors meant by “error accumulation”, it would need more analysis to verify such a phenomenon. Personally, I’d be interested in details like the maximum sequence length used for these experiments and the number of training examples per bin (which are not reported yet), as the observations made by the authors could be related to tendencies of the Transformer to overfit to length statistics of the training data, see [Varis and Bojar (2021)](https://aclanthology.org/2021.emnlp-main.650.pdf).
- The results presented in Section 5.3 look rather noisy to me as the accuracy on ContraPro decreases from context length 60 to 120 and from 120 to 250 while it improves considerably for larger context lengths. Intuitively, a substantial context length increase (e.g. from 60 to 250 tokens) should not hurt the accuracy. The authors do not make an attempt to explain this trend.
- It would be helpful to also provide the confidence intervals to strengthen the conclusions from the experiments with one or multiple factors.
- The authors mention in Section 5.1 that the cross entropy loss “fails to fully depict the translation quality”. I don’t think that this conclusion is valid based on the authors' experiments. The authors are measuring translation quality in terms of (d-)BLEU, i.e. in terms of a metric that has many limitations, especially for document-level MT, see [Kocmi et al. (2021)](https://aclanthology.org/2021.wmt-1.57.pdf); [Post and Junczys-Dowmunt (2023)](https://arxiv.org/pdf/2304.12959.pdf). If general translation quality is not properly reflected by the metric, then it can’t really be determined whether the loss is a good indicator of general translation quality.
Some minor comments are:
- For most of the experiments in Section 4, the factor that is held constant is not reported, e.g. for the experiment of the joint effect of maximum sequence length and data scale (in Section 4.2), I couldn’t find the model size.
- The authors mention that previous work, in particular, Beltagy et al. (2020) and Press et al. (2021) demonstrate that model performance improves with a larger context. Let me note here that their methods are different from the ones presented in this paper and thus conclusions can be different for a number of reasons.
- In my opinion, a lot of the details on training and inference configurations can be moved to the appendix.
- The abbreviation “MAC” is used in line 194 already but only explained later in the paper. | - There is no information on how the function for the optimal sequence length was estimated (Equation 1) and how reliable we expect this model to be. |
NIPS_2017_35 | NIPS_2017 | - The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation.
- The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The comparison with [5] is not completely fair as [5] is designed for a more complex problem, i.e., no knowledge of the camera pose parameters.
- Some explanations are a little vague. For example, the last paragraph of Section 3 (lines 207-210) on the single image case. Questions/comments:
- In the Recurrent Grid Fusion, have you tried ordering the views sequentially with respect to the camera viewing sphere?
- The main weakness to me is the numerical evaluation. I understand that the hypothesis of clean segmentation of the object and known camera pose limit the evaluation to purely synthetic settings. However, it would be interesting to see how the architecture performs when the camera pose is not perfect and/or when the segmentation is noisy. Per category results could also be useful.
- Many typos (e.g., lines 14, 102, 161, 239), please run a spell-check. | - The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation.
ARR_2022_286_review | ARR_2022 | While there exist many papers discussing the softmax bottleneck or the stolen probability problem, similar to what the authors found, I personally have not found enough evidence in my work that the problem is really severe.
After all, there are intrinsic uncertainties in the empirical distributions of the training data, and it is quite natural for us to use a smaller hidden dimension size than the vocabulary size, because after all, we call them "word embeddings" for a reason.
I guess what I mean to say here is that the problem is of limited interest to me (which says nothing about a broader audience) because the results agree very well with my expectations.
This is definitely not against the authors because they did a good job in showing this via their algorithm and the concrete experiments.
I feel like the authors could mention and expand on the implications when beam search is used.
Because in reality, especially that many MT models are considered in the paper, greedy search is seldomly used.
In other words, "even if greedy search is used, SPP is not a big deal, let alone that in reality we use beam search", something like that.
Compared to the main text, I am personally more interested in the point brought up at L595.
What implications are there for the training of our models?
How does the gradient search algorithm decide on where to put the word vectors and hidden state vector?
Is there anything we, i.e. the trainers of the NNs, can do to make it easier for the NNs?
Small issues: - L006, as later written in the main text, "thousands" is not accurate here. Maybe add "on the subword level"?
- L010, be predicted - L034, personally, I think "expressiveness" is more commonly used, this happens elsewhere in the paper as well.
- L082, autoencoder - L104, greater than or equal to | - L006, as later written in the main text, "thousands" is not accurate here. Maybe add "on the subword level"? |
ICLR_2021_634 | ICLR_2021 | + Clarifications: - The question of the latent variable model seems relevant and interesting. It seems that the mixup method is only as good as the model, and also the trained model might add its own biases to the classification task. It would be nice to see some discussion of this in the paper - I am surprised that mixup improves precision on the adult task. It would be good to see some exploration of this - For experiments, are all runs shown? Or just the Pareto fronts. - A number of hyperparameters (e.g. regularization) are not given - For all the latent path figures (eg Fig 3) why is the y value at x= 0 always 0? Is it normalized to this? Be clear in your description (or maybe I missed it) - I would be interested in seeing some further analysis on this model, perhaps using the interpolations themselves | - A number of hyperparameters (e.g. regularization) are not given - For all the latent path figures (eg Fig 3) why is the y value at x= 0 always 0? Is it normalized to this? Be clear in your description (or maybe I missed it) - I would be interested in seeing some further analysis on this model, perhaps using the interpolations themselves |
kNvwWXp6xD | ICLR_2025 | - I found the paper's flow to be quite confusing. It seems the authors had a lot of material to cover, most of which is placed in the appendix. Perhaps because of this, the actual paper lacks clarity and the required detail. It would be helpful if the authors presented the material in the main sections and referred to the appropriate appendix in case the reader wants further detail.
- There seems to be forward referencing in the paper. Material is introduced without proper explanation, and is explained in later sections, e.g., Figure 1.
- The exact contribution(s) need to be written more clearly in the Introduction. Moreover, the material supporting the main contributions seems to be in the appendix and not the main sections e.g. deep-rag algorithm or discussion on the high concurrency.
- The experiments section seems to be defining the evaluation measures rather than focusing on an explanation of the experiments and results
- The authors mention that the superior performance of their approach can be attributed to several factors. However, it is not clear which factor is actually contributing towards the better results
- Some sentences are confusing, e.g., in the first para of the Introduction: HumanEval() first proposed to let LLM generating code based on ....... | - There seems to be forward referencing in the paper. Material is introduced without proper explanation, and is explained in later sections, e.g., Figure 1. - The exact contribution(s) need to be written more clearly in the Introduction. Moreover, the material supporting the main contributions seems to be in the appendix and not the main sections e.g. deep-rag algorithm or discussion on the high concurrency.
ICLR_2023_1195 | ICLR_2023 | - The assumption that a set of analytical derivative functions is available is a very strong hypothesis so the number of cases where this method can be applied seems limited. - The high dimensional tensor can be also compactly represented by the set of derivative functions avoiding the curse of dimensionality, so it is not clear what is the advantage of replacing the original compact representation by the TT representation. Maybe the reason is that in TT-format many operations can be implemented more efficiently. The paper gives not a clear explanation about the necessity of the TT representation in this case. - It is not clear in which cases the minimum rank is achieved by the proposed method. Is there a way to check it? - In the paper it is mentioned that the obtained core tensors can be rounded to smaller ranks with a given accuracy by clustering the values of the domain sets or imposing some error decision epsilon if the values are not discrete. It is not clear what is, in theory, the effect on the approximation in the full tensor error. Is there any error bound in terms of epsilon? - The last two bullets in the list of main contributions and advantages of the proposed approach are not clear to me (Page 2). - The method is introduced by an application example using the P_step function (section 2.2). I found this example difficult to follow and maybe not relevant from the point of view of an application. I think, a better option would be to use some problem easier to understand, for example, one application to game theory as it is done later in the paper. - Very relevant ideas and results are not included in the main paper and referred instead to the Appendix, which makes the paper not well self-contained. - The obtained performance in terms of complexity for the calculation of the permanent of a matrix is not better than standard algorithms as commented by the authors (Hamilton walks obtained the result with half of the complexity). It is not clear what is the advantage of the proposed new method for this application. - The comparison with the TT-cross method is not clear enough. What is the number of samples taken in the TT-cross method? What is the effect to increase the number of samples in the TT-cross method. I wonder if the accuracy of the TT-cross method can be improved by sampling more entries of the tensor.
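Regarding the question above of what is gained by moving from the derivative-function representation to the TT representation: a minimal illustration of the kind of operation that becomes cheap in TT format, namely evaluating a single entry of a d-dimensional tensor as a product of small core slices. This is generic TT machinery with illustrative shapes, not the paper's algorithm:

```python
import numpy as np

def tt_entry(cores, index):
    """Evaluate x[i1, ..., id] from TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    v = np.ones((1, 1))
    for G, i in zip(cores, index):
        v = v @ G[:, i, :]          # multiply the i-th slice of each core
    return v[0, 0]                  # cost: O(d * r^2), never touching all n^d entries

# Random TT tensor with d = 4 modes of size 5 and TT ranks 1-3-3-3-1.
rng = np.random.default_rng(0)
ranks = [1, 3, 3, 3, 1]
cores = [rng.standard_normal((ranks[k], 5, ranks[k + 1])) for k in range(4)]
print(tt_entry(cores, (1, 0, 4, 2)))
```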
Minor issues: - Page 2: "an unified approach" -> "a unified approach" - Page 2: "and in several examples is Appendix" -> "and in several examples in the Appendix" - On page 3, "basic vector e" is not defined. I think the authors refer to different elements of the canonical basis, i.e., vectors containing all zeros except one "1" in a different location. This should be formally introduced somewhere in the paper. - Page 9: "as an contraction" -> "as a contraction"
NIPS_2017_356 | NIPS_2017 | ]
My major concern about this paper is the experiment on the visual dialog dataset. The authors only show the proposed model's performance in the discriminative setting, without any ablation studies. There are not enough experimental results to show how the proposed model works on the real dataset. If possible, please answer my following questions in the rebuttal.
1: The authors claim their model can achieve superior performance while having significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] just tested their model with a standard parameter setting. To back up this claim, are there any improvements when the proposed model uses larger word embedding and LSTM parameters?
2: There are two test settings in visual dialog, while Table 1 only shows the result in the discriminative setting. It is known that the discriminative setting cannot be applied in real applications; what is the result in the generative setting?
3: To further verify that the proposed visual reference resolution model works on a real dataset, please also conduct an ablation study on the VisDial dataset. One experiment I'm really interested in is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retrieval from the attention memory? | 2: There are two test settings in visual dialog, while Table 1 only shows the result in the discriminative setting. It is known that the discriminative setting cannot be applied in real applications; what is the result in the generative setting?
NIPS_2016_374 | NIPS_2016 | weakness is the presentation: - From my understanding of the submission instructions, the main part of the paper should include all that one needs to understand the paper (even if proofs may be in supplementary material). I thus found it awkward to have a huge algorithm listing as in Alg. 2, without any accompanying text explaining it, and to have algorithms in the supplementary material without giving at least a brief idea of the algorithms in the main body of the paper. This makes it hard to read the paper, and I think it is not appropriate for publication. - Perhaps something more should be done to convince the reader that a query of the type SEARCH is feasible in some realistic scenario. One other little thing: - what is meant by "and quantities that do appear" in line 115? | - Perhaps something more should be done to convince the reader that a query of the type SEARCH is feasible in some realistic scenario. One other little thing: |
e4dXIBRQ9u | EMNLP_2023 | - **Flexibility**: One of the limitations of this approach lies in the requirement for a teacher model that is homologous to the student model in terms of paradigm and vocabulary table. This can hinder the method's flexibility and may pose challenges during the preparation of the teacher model.
- **Scope**: The pre-trained teacher model tends to be already a strong model; do we truly need to train a student model from scratch using weighted training? Is it possible to simply fine-tune the teacher model with weighted training? Weighted training would be meaningful if our goal is to obtain a small yet strong student model. However, the authors have not clearly indicated the specific targeted scenarios in this paper.
- **Experiments**:
1) The authors did not explore how different teacher models affect the student's learning effectiveness, which could have provided valuable insights into the impact of varying teacher models on the proposed method's performance. The choice of teacher model may lack flexibility; however, it is worth noting that there are numerous robust GEC systems that share the same PLM architecture.
2) The effectiveness of the proposed approach for other language families remains unknown. | 2) The effectiveness of the proposed approach for other language families remains unknown. |
ICLR_2022_2677 | ICLR_2022 | 1 The authors do not analyze the security (i.e., protection of the privacy) of the proposed framework.
2 The authors do not analyze the communication cost between each client (i.e., domain) and the server. In a typical federated learning system, the communication cost is a very important issue.
3 The way of using an encoder and a decoder, or a domain-specific part and a domain-independent part, is well known in existing cross-domain or transfer learning works. | 1 The authors do not analyze the security (i.e., protection of the privacy) of the proposed framework.
NIPS_2019_663 | NIPS_2019 | of their work?"] The submission is overall reasonably sound, although I have some comments and questions: * Regarding the model itself, I am confused by the GRU-Bayes component. I must be missing something, but why is it not possible to ingest observed data using the GRU itself, as in equation 2? This confusion would perhaps be clarified by an explanation in line 89 of why continuous observations are required. As it is written, I am not sure why it you couldn't just forecast (by solving the ODE defined by equation 3) the hidden state until the next measurement arrives, at which point g(t) and z(t) can be updated to define a new evolution equation for the hidden state. I am guessing the issue here is that this update only changes the derivative of the hidden state and not its value itself, but since the absolute value of the hidden state is not necessarily meaningful, the problem with this approach isn't very clear to me. I imagine the authors have considered such a model, so I would like to understand why it wouldn't be feasible here. * In lines 143-156, it is mentioned that the KL term of the loss can be computed empirically for binomial and Gaussian distributions. I understand that in the case of an Ornstein-Uhlenbeck SDE, the distribution of the observations are known to be (conditionally) Gaussian, but in the case of arbitrary data (e.g. health data), as far as I'm aware, few assumptions can be made of the underlying process. In this case, how is the KL term managed? Is a Gaussian distribution assumption made? Line 291 indicates this is the case, but it should be made clear that this is an assumption imposed on the data. For example, in the case of lab test results as in MIMIC, these values are rarely Gaussian-distributed and may not have Gaussian-distributed observation noise. On a similar note, it's mentioned in line 154 that many real-world cases have very little observation noise relative to the predicted distribution - I assume this is because the predicted distribution has high variance, but this statement could be better qualified (e.g. which real-world cases?). * It is mentioned several times (lines 203, 215) that the GRU (and by extension GRU-ODE-Bayes) excels at long-term forecasting problems, however in both experiments (sections 5.2 and 5.3) only near-term forecasting is explored - in both cases only the next 3 observations are predicted. To support this claim, longer prediction horizons should be considered. * I find it interesting that the experiments on MIMIC do not use any regularly-measured vital signs. I assume this was done to increase the "sporadicity" of the data, but it makes the application setting very unrealistic. It would be very unusual for values such as heart rate, respiratory rate, blood pressure and temperature not to be available in a forecasting problem in the ICU. I also think it's a missed opportunity to potentially highlight the ability of the proposed model to use the relationship between the time series to refine the hidden state. I would like to know why these variables were left out, and ideally how the model would perform in their presence. * I think the experiment in Section 5.5 is quite interesting, but I think a more direct test of the "continuity prior" would be to explicitly test how the model performs (in the low v. high data cases) on data which is explicitly continuous and *not* continuous (or at least, not 2-Lipschitz). 
The hypothesis that this continuity prior is useful *because* it encodes prior information about the data would be more directly tested by such a setup. At present, we can see that the model outperforms the discretised version in the low data regime, but I fear this discretisation process may introduce other factors which could explain this difference. It is slightly hard to evaluate because I'm not entirely sure what the discretised version consists of , however - this should be explained (perhaps in the appendix). Furthermore, at present there is no particular reason to believe that the data in MIMIC *is* Lipschitz-2 - indeed, in the case of inputs and outputs (Table 4, Appendix), many of these values can be quite non-smooth (e.g. a patient receiving aspirin). * It is mentioned (lines 240-242, section H.1.3) that this approach can handle "non-aligned" time series well. As mentioned, this is quite a challenging problem in the healthcare setting, so I read this with some interest. Do these statements imply that this ability is unique to GRU-ODE-Bayes, and is there a way to experimentally test this claim? My intuition is that any latent-variable model could in theory capture the unobserved "stage" of a patient's disease process, but if GRU-ODE-Bayes has some unique advantage in this setting it would be a valuable contribution. At present it is not clearly demonstrated - the superior performance shown in Table 1 could arise from any number of differences between this model and the baselines. 2.c Clarity: ["Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.)"] While I quite like the layout of the paper (specifically placing related work after a description of the methodology, which is somewhat unusual but makes sense here) and think it is overall well written, I have some minor comments: * Section 4 is placed quite far away from the Figure it refers to (Figure 1). I realise this is because Figure 1 is mentioned in the introduction of the paper, but it makes section 4 somewhat hard to follow. A possible solution would be to place section 4 before the related research, since the only related work it draws on is the NeuralODE-VAE, which is already mentioned in the Introduction. * I appreciate the clear description of baseline methods in Section 5.1. * The comprehensive Appendix is appreciated to provide additional detail about parts of the paper. I did not carefully read additional experiments described in the Appendix (e.g. the Brusselator) out of time consideration. * How are negative log-likelihoods computed for non-probabilistic models in this paper? * Typo on line 426 ("me" instead of "we"). * It would help if the form of p was described somewhere near line 135. As per my above comment, I assume it is a Gaussian distribution, but it's not explicitly stated. 2.d Significance: ["Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?"] This paper describes quite an interesting approach to the modelling of sporadically-measured time series. 
I think this will be of interest to the community, and appears to advance state of the art even if it is not explicitly clear where these gains come from. | * It would help if the form of p was described somewhere near line 135. As per my above comment, I assume it is a Gaussian distribution, but it's not explicitly stated. |
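A minimal sketch of the alternative scheme questioned at the start of this review (evolve the hidden state with the ODE between observation times, then ingest each observation with a plain GRU update); `ode_rhs`, `gru_cell` and `odeint` are placeholders for the model's components, not the paper's actual interface:

```python
def forecast_with_gru_updates(h, times, observations, ode_rhs, gru_cell, odeint):
    """Hidden state evolves by the ODE between observations; each observation
    triggers a discrete GRU jump of the hidden state itself."""
    t_prev = times[0]
    for t_obs, x_obs in zip(times, observations):
        if t_obs > t_prev:
            h = odeint(ode_rhs, h, (t_prev, t_obs))  # forecast h up to the observation time
        h = gru_cell(x_obs, h)                       # update h directly with the new observation
        t_prev = t_obs
    return h
```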
NIPS_2019_445 | NIPS_2019 | - Quality: The results of Section 2.1, which builds the main motivation of the paper, are demonstrated on very limited settings and examples. They do not convince the reader that overfitting is the general reason for the potential poor performance of the models under study. - Soundness: While expressiveness is useful, it does not mean that the optimal weights are learnable. The paper seems not to pay attention to this issue. - Clarity: Related work could be improved. Some related works are mainly named but their differences are not described enough. - Organization could be improved. Currently the paper is dependent on the appendix (e.g., the algorithms). Also, the contents of the tables are too small. Overall, I do not think the quality of the paper is high enough and I vote for it to be rejected. | - Clarity: Related work could be improved. Some related works are mainly named but their differences are not described enough.
NIPS_2022_2814 | NIPS_2022 | 1. A solution towards removing the position encoding is not discussed. 2. Importance of quantifying the strength of PPP is not clear to me. 3. Authors state that reliable PPP metrics are important for understanding PPP effects in different tasks. While this point is surely intriguing, such an explanation or understanding is not explicitly given in the article. Can the authors explicitly explain what type of understanding one reaches by looking at the PPP maps? 4. The conclusion of the article remains a bit vague. While the proposed metrics have some more desirable attributes, value of these attributes for applications is unclear to me. How will this actually improve the practice or our understanding? | 3. Authors state that reliable PPP metrics are important for understanding PPP effects in different tasks. While this point is surely intriguing, such an explanation or understanding is not explicitly given in the article. Can the authors explicitly explain what type of understanding one reaches by looking at the PPP maps? |
JMSkoIYFSn | EMNLP_2023 | 1. This paper shows little novelty, as it is only a marginal improvement of the conventional attention mechanism based on some span-level inductive bias.
2. The performance improvement over simple baselines like max-pooling is also marginal for probing tasks.
3. Although 4 patterns are proposed in this paper, it seems hard to find a unified solution or guideline for how to combine them in different tasks. If we have to try all possible combinations every time, the practicality of this method would be significantly reduced.
4. The authors validate the effectiveness of the proposed span-level attention only on the backbone of BERT. The paper lacks experiments on other encoder backbones to further demonstrate the generality of the proposed method.
5. The authors do not compare their methods with other state-of-the-art methods for span-related tasks, such as SpanBERT, thus lacking some credibility.
6. The writing can be improved. There are some typos and unclear descriptions. Please refer to the comments for details. | 5. The authors do not compare their methods with other state-of-the-art methods for span-related tasks, such as SpanBERT, thus lacking some credibility. |
NIPS_2018_428 | NIPS_2018 | weakness which decreased my score. Some line by line comments: - lines 32 - 37: You discuss how the regret cannot be sublinear, but proceed to prove that your method achieves T^{1/2} regret. Do you mean that the prediction error over the entire horizon T cannot be sublinear? - eq after line 145: typo --- i goes from 1 to n and since M,N are W x k x n x m, the index i should go in the third position. Based on the proof, the summation over u should go from tau to t, not from 1 to T. - line 159: typo -- "M" --> Theta_hat - line 188: use Theta_hat for consistency. - line 200: typo -- there should no Pi in the polynomial. - line 212: typo --- "beta^j" --> beta_j - line 219: the vector should be indexed - lines 227 - 231: the predictions in hindsight are denoted once by y_t^* and once by hat{y}_t^* - eq after line 255: in the last two terms hat{y}_t --> y_t Comments on the Appendix: - General comment about the Appendix: the references to Theorems and equations are broken. It is not clear if a reference points to the main text or to the appendix. - line 10: Consider a noiseless LDS... - line 19: typo -- N_i ---> P_i - equation (21): same comment about the summation over u as above. - line 41: what is P? - line 46: typo --- M_omega' ---> M_ell' - eq (31): typo -- no parenthesis before N_ell - line 56: the projection formula is broken - eq (56): why did you use Holder in that fashion? By assumption the Euclidean norm of x is bounded, so Cauchy Schwartz would avoid the extra T^{1/2}. ================== In line 40 of the appendix you defined R_x to be a bound on \|x\|_2 so there is no need for the inequality you used in the rebuttal. Maybe there is a typo in line 40, \|x\|_2 maybe should be \|x\|_\infty | - lines 32 - 37: You discuss how the regret cannot be sublinear, but proceed to prove that your method achieves T^{1/2} regret. Do you mean that the prediction error over the entire horizon T cannot be sublinear? |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone down the intro and not call this language learning. It is rather feedback-driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what is explained in the last paragraph of the paper) so that the policy is not fixed? Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model are not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussion is required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know in which cases such a model fails. | - The details of the forward-prediction model are not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. |
NIPS_2016_93 | NIPS_2016 | / Major concerns: - It is difficult to evaluate whether the MovieQA result should be considered significant given that a +10% gap exists between MemN2N on the dataset with explicit answers (Task 1) and RBI + FP on the dataset with other forms of supervision, especially Task 3. If I understood correctly, the different tasks are coming from the same data, but the authors provide different forms of supervision. Also, Task 3 gives full supervision of the answers. Then I wonder why RBI + FP on task 3 (69%) is doing much worse than MemN2N on task 1 (80%). Is it because the supervision is presented in a more implicit way ("No, the answer is kitchen" instead of "kitchen")? - For RBI, they only train on rewarded actions. Then this means rewardless actions that get useful supervision (such as "No, the answer is Timothy Dalton." in Task 3) are ignored as well. I think this could be one significant factor that makes FP + RBI better than RBI alone. If not, I think the authors should provide a stronger baseline than RBI (that is supervised by such feedback) to prove the usefulness of FP. Questions / Minor concerns: - For bAbI, it seems the model was only tested on the single-supporting-fact dataset (Task 1 of bAbI). How about other tasks? - How is the dialog dataset obtained from QA datasets? Are you using a few simple rules? - Lack of lexical / syntactic diversity of teacher feedback: assuming the teacher feedback was auto-generated, do you intend to turk the teacher feedback and / or generate a few different kinds of feedback (which is a more real-life situation)? - How do models other than MemN2N do on MovieQA? | - For RBI, they only train on rewarded actions. Then this means rewardless actions that get useful supervision (such as "No, the answer is Timothy Dalton." in Task 3) are ignored as well. I think this could be one significant factor that makes FP + RBI better than RBI alone. If not, I think the authors should provide a stronger baseline than RBI (that is supervised by such feedback) to prove the usefulness of FP. Questions / Minor concerns: |
NIPS_2017_567 | NIPS_2017 | Weakness:
1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly.
Here are some examples:
(1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where are the h_{t-1}^{1..5} in Fig2(b)? What is h_{t-1} in Figure2(b)?
(2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that.
2. It seems to me that the multi-scale statement is a bit misleading, because the slow and fast RNNs do not operate on different physical time scales, but rather on the logical time scale when the stacks are sequentialized in the graph. Therefore, the only benefit here seems to be the reduction of the gradient path by the slow RNN.
3. To reduce the gradient path in a stacked RNN, a simpler approach is to use residual units or simply fully connect the stacked cells. However, there is no comparison or mention of this in the paper.
4. The experimental results do not contain standard deviations, and therefore it is hard to judge the significance of the results. | 2. It seems to me that the multi-scale statement is a bit misleading, because the slow and fast RNNs do not operate on different physical time scales, but rather on the logical time scale when the stacks are sequentialized in the graph. Therefore, the only benefit here seems to be the reduction of the gradient path by the slow RNN. |
ICLR_2023_1522 | ICLR_2023 | The application of FERMI is not obviously a large improvement over its introduction in (Lowy+ 2021): we want to optimize a weird fairness-constrained loss, so we instead optimize an upper bound on it, which admits a stochastic convergence analysis, and also handles non-binary classification. The added contribution here is, in the paper's phrasing, "a careful analysis of how the Gaussian noises [necessary for DP-SGD] propagate through the optimization trajectories". I don't have much feel for what constitutes an "interesting" convergence analysis, but the conceptual novelty here is unclear, and the introduction is a bit slippery about what is novel and what is borrowed from the FERMI paper.
The paper also struggles to explain its technical contributions at any level between a very high-level summary and a long, opaque theorem statement. I suggest changing the focus of the paper to 1) reduce, relegate to the appendix, or eliminate the discussion of demographic parity (an extremely coarse notion of fairness that, IMO, the fairness literature needs to move past, and has only been discussed this long because it's very simple), which takes up over a page of the main body without meaningfully adding to the story told by the equalized odds results alone, 2) extending the discussion of how Theorem 3.1 works and what it accomplishes (the current statement is a blizzard of notation with little explanation -- I still don't know what W is doing), along with Theorem 3.2, and 3) extending the equalized odds results to more datasets (why are Parkinsons and Retired Adult results only reported for demographic parity? it seems like equalized odds should also apply here, and an empirical story built on 2 datasets seems thin). I think 2) would help provide a clearer explanation of the paper's improvement over (Lowy+ 2021) and 3) would make a stronger empirical case separate from the convergence analysis.
Other questions/comments:
I'd appreciate a table in the appendix attempting to concisely explain all of the relevant variables -- by my count, Theorem B.1 has well over a dozen.
Why is Tran 2021b a single point where the other methods have curves? More generally, perhaps I missed the explanation of this in the text, but what is varied to generate the curves?
As far as I can tell, the paper does not discuss the tightness of the upper bound offered by ERMI, nor does it explicitly write out the expression for equalized odds. This makes it hard to contextualize the convergence guarantee in terms of the underlying loss we actually want to optimize.
Figure 4 "priavte" | 3) extending the equalized odds results to more datasets (why are Parkinsons and Retired Adult results only reported for demographic parity? it seems like equalized odds should also apply here, and an empirical story built on 2 datasets seems thin). I think |
NIPS_2022_1317 | NIPS_2022 | Weakness: 1. The literature review is not adequate. Even with the content in the appendix, there is no discussion of off-policy evaluation for reinforcement learning or non-stationary multi-armed bandits. 2. The claim and the discussion of “active non-stationarity” are somewhat confusing (more on this later in Questions). 3. The authors seem to overstate their contribution.
4. Baseline methods are weak and do not represent the state of the art.
There is no discussion of limitations. Along with my questions regarding the difference between this work and reinforcement learning, one possible direction in the conclusion is to talk about the similarities and differences, and to what extent the results are generalizable to RL settings. | 4. Baseline methods are weak and do not represent the state of the art. There is no discussion of limitations. Along with my questions regarding the difference between this work and reinforcement learning, one possible direction in the conclusion is to talk about the similarities and differences, and to what extent the results are generalizable to RL settings. |
NIPS_2021_1251 | NIPS_2021 | - Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objective function of interest is the stochastic noisy function. It would be good to make this distinction clearer upfront. - The RF experiment is not super compelling. It is not nearly as interesting as the FEL problem, and the risk aversion does not make a significant difference in average performance. Overall the empirical evaluation is fairly limited. - It is unclear why the mean-variance model is the best metric to use for evaluating performance - Why not also evaluate performance in terms of the VaR or CVaR? - The MV objective is nice for the proposed UCB-style algorithm and theoretical work, but for evaluation VaR and CVaR also are important considerations
Writing: - Very high quality and easy to follow writing - Grammar: - L164: “that that” - Figure 5 caption: “Simple regret fat the reprted”
Questions: - Figure 2: “RAHBO not only leads to strong results in terms of MV, but also in terms of mean objective”? Why is it better than GP-UCB on this metric? Is this an artifact of the specific toy problem?
Limitations are discussed and potential future directions are interesting. “We are not aware of any societal impacts of our work” – this (as with an optimization algorithm) could be used for nefarious endeavors and could be discussed. | - Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objective function of interest is the stochastic noisy function. It would be good to make this distinction clearer upfront. |
NIPS_2019_961 | NIPS_2019 | - It would be good to better justify and understand the Bernoulli-Poisson link. Why is the number of layers used in the link in the Poisson part? The motivation for the original paper [40] seems to be that one can capture communities and the sum in the exponential is over r_k coefficients where each coefficient corresponds to a community. In this case the sum is over layers. How do the intuitions from that work transfer here? In what way do the communities correspond to layers in the encoder? It would be nice to better understand this. Missing Baselines - It would be instructive to vary the number of layers of processing for the representation during inference and analyze how that affects the representations and performance on downstream tasks. - Can we run VGAE with a vamp prior to more accurately match the doubly stochastic construction in this work? That would help inform if the benefits are coming from a better generative model or better inference due to doubly-semi implicit variational inference. Minor Points - Figure 3: It might be nice to keep the generative model fixed and then optimize only the inference part of the model, parameterizing it as either SIG-VAE or VGAE to compare the representations. It's impossible to know / compare representations when the underlying generative models are also potentially different. | - Can we run VGAE with a vamp prior to more accurately match the doubly stochastic construction in this work? That would help inform if the benefits are coming from a better generative model or better inference due to doubly-semi implicit variational inference. Minor Points - Figure 3: It might be nice to keep the generative model fixed and then optimize only the inference part of the model, parameterizing it as either SIG-VAE or VGAE to compare the representations. It's impossible to know / compare representations when the underlying generative models are also potentially different. |
NIPS_2016_23 | NIPS_2016 | in the bibliography, switching-bandit algorithms with dependencies on the number of switches (EXP3.S is however cited [7]) and gaps between actions' mean rewards not being described (Discounted UCB, Sliding Window UCB, EXP3 with reset). The detector of non-stationarity also has some similarities with the SAO (Stochastic and Adversarial Optimal) algorithm that starts with a stationary algorithm and then switches to EXP3 if a non-stationarity is detected. - The proposed algorithm is well constructed and is a step forward from the main reference [1]. While not having a major practical impact (it has high regret in the stationary regime and in the highly non-stationary regime, even if the best action does not change) it resolves an open problem and gives some closure to the analysis using the variation. - After the post-rebuttal discussion, we agreed to rank this paper at poster level (5: 4->3). | - The proposed algorithm is well constructed and is a step forward from the main reference [1]. While not having a major practical impact (it has high regret in the stationary regime and in the highly non-stationary regime, even if the best action does not change) it resolves an open problem and gives some closure to the analysis using the variation. |
NIPS_2020_844 | NIPS_2020 | - It is claimed that the proposed method aims to discriminatively localize the sounding objects from their mixed sound without any manual annotations. However, the method also aims to do class-aware localization. As shown in Figure 4, the object categories are labeled for the localized regions for the proposed method. It is unclear to this reviewer whether the labels there are only for illustrative purposes. - Even though the proposed method doesn't rely on any class labels, it needs the number of categories of potential sound sources in the data to build the object dictionary. - Though the performance of the method is pretty good, especially in Table 2, the novelty/contribution of the method is somewhat incremental. The main contribution of the work is a new network design drawing inspiration from prior work for the sound source localization task. - The method assumes single source videos are available to train in the first stage, which is also a strong assumption even though class labels are not used. Most in-the-wild videos are noisy and multi-source. It would be desirable to have some analysis to show how robust the system is to noise in videos or how the system can learn without clean single source videos to build the object dictionary. | - Though the performance of the method is pretty good, especially in Table 2, the novelty/contribution of the method is somewhat incremental. The main contribution of the work is a new network design drawing inspiration from prior work for the sound source localization task. |
NIPS_2018_849 | NIPS_2018 | - The presented node count for the graphs is quite low. How is performance affected if the count is increased? In the example of semantic segmentation: how does it affect the number of predicted classes? - Ablation study: how much of the learned pixel to node association is responsible for the performance boost. Previous work has also shown in the past that super-pixel based prediction is powerful and fast, I.e. with fixed associations. # Typos - Line 36: and computes *an* adjacency matrix - Line 255: there seems to be *a weak* correlation # Further Questions - Is there an advantage in speed in replacing some of the intermediate layers with this type of convolutional blocks? - Any ideas on how to derive the number of nodes for the graph? Any intuition on how this number regularises the predictor? - As far as I can tell the projection and re-projection is using activations from the previous layer both as feature (the where it will be mapped) and as data (the what will be mapped). Have you thought about deriving different features based on the activations; maybe also changing the dimension of the features through a non-linearity? Also concatenating hand-crafted features (or a learned derived value thereof), e.g., location, might lead to a stronger notion of "regions" as pointed out in the discussion about the result of semantic segmentation. - The paper opens that learning long-range dependencies is important for powerful predictors. In the example of semantic segmentation I can see that this is actually happening, e.g., in the visualisations in table 3; but I am not sure if it is fully required. Probably the truth lies somewhere in between and I miss a discussion about this. If no form of locality with respect to the 2d image space is encoded in the graph structure, I suspect that prediction suddenly depends on the image size. | - The paper opens that learning long-range dependencies is important for powerful predictors. In the example of semantic segmentation I can see that this is actually happening, e.g., in the visualisations in table 3; but I am not sure if it is fully required. Probably the truth lies somewhere in between and I miss a discussion about this. If no form of locality with respect to the 2d image space is encoded in the graph structure, I suspect that prediction suddenly depends on the image size. |
NIPS_2017_585 | NIPS_2017 | weakness of the paper is in the experiments: there should be more complete comparisons of computation time, and comparisons with the QMC-based methods of Yang et al. (ICML2014). Without this, the advantage of the proposed method remains unclear.
- The limitation of the obtained results:
The authors assume that the spectrum of a kernel is sub-Gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels, such as Matern kernels, is not included, since their spectra only decay polynomially. In this sense, the results of the paper could be restrictive.
- Eq. (3):
What is $e_l$?
Corollaries 1, 2 and 3 and Theorem 4:
All of these results have exponential dependence on the diameter $M$ of the domain of data: a required feature size increases exponentially as $M$ grows. While this factor does not increase as a required amount of error $\varepsilon$ decreases, the dependence on $M$ affects the constant factor of the required feature size. In fact, Figure 1 shows that the performance degrades more quickly than that of standard random features. This may exhibit the weakness of the proposed approaches (or at least of the theoretical results).
- The equation in Line 170:
What is $e_i$?
- Subsampled dense grid:
This approach is what the authors used in Section 5 for the experiments. However, it appears that there is no theoretical guarantee for this method. Those having theoretical guarantees seem not to be practically useful.
- Reweighted grid quadrature:
(i) It appears that there is no theoretical guarantee for this method.
(ii) The approach reminds me of Bayesian quadrature, which essentially obtains the weights by minimizing the worst-case error in the unit ball of an RKHS. I would like to see a comparison with this approach.
(iii) Would it be possible to derive a time complexity?
(iv) How do you choose the regularization parameter $\lambda$ in the case of the $\ell_1$ approach?
- Experiments in Section 5:
(i) The authors reported the results on computation time very briefly (320 secs vs. 384 seconds for 28800 features in MNIST and "The quadrature-based features ... are about twice as fast to generate, compared to random Fourier features ..." in TIMIT). I do not think they are enough: the authors should report the results in the form of tables, for example, varying the number of features.
(ii) There should be a comparison with the QMC-based methods of Yang et al. (ICML2014, JMLR2016). It is not clear what the advantage of the proposed method is over the QMC-based methods.
(iii) There should be an explanation of the settings of the MNIST and TIMIT classification tasks: what classifiers did you use, and how did you determine the hyper-parameters of these methods? At least such an explanation should be included in the appendix. | - Eq. (3): What is $e_l$? Corollaries 1, 2 and 3 and Theorem 4: All of these results have exponential dependence on the diameter $M$ of the domain of data: a required feature size increases exponentially as $M$ grows. While this factor does not increase as a required amount of error $\varepsilon$ decreases, the dependence on $M$ affects the constant factor of the required feature size. In fact, Figure 1 shows that the performance degrades more quickly than that of standard random features. This may exhibit the weakness of the proposed approaches (or at least of the theoretical results). |
ICLR_2023_802 | ICLR_2023 | - There are no experiments to support the claim that A-DGN can specifically alleviate/mitigate oversquashing.
- There are no experiments to support the claim that A-DGN can effectively handle long-range dependencies specifically on graph data requiring long-range reasoning.
- The poor long-range modelling ability of DGNs is attributed to oversquashing and vanishing/exploding gradients but the poor performance could also be due to oversmoothing, another phenomenon observed in the context of very deep graph networks [Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning, In AAAI'18]. | - The poor long-range modelling ability of DGNs is attributed to oversquashing and vanishing/exploding gradients but the poor performance could also be due to oversmoothing, another phenomenon observed in the context of very deep graph networks [Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning, In AAAI'18]. |
NIPS_2021_2445 | NIPS_2021 | and strengths in their analysis with sufficient experimental detail, it is admirable, but they could provide more intuition why other methods do better than theirs.
The claims could be better supported. Some examples and questions (if I did not miss anything):
Why is using normalization a problem for a network or a task (it can be thought of as part of the cosine distance)? How would Barlow Twins perform if their invariance term were replaced with a Euclidean distance?
Your method still uses 2048 as the batch size; I would not consider that small. For example, SimCLR uses examples in the same batch and its batch size ranges between 256 and 8192. Most of the methods you mentioned need even lower batch sizes.
You mentioned not sharing weights as an advantage, but you have shared weights in your results, except in Table 4, in which the results degraded as you mentioned. What stops the other methods from using different weights? It should be possible even though they have a covariance term between the embeddings; how much would their performance be affected compared with yours?
My intuition is that a proper design might be sufficient rather than separating variance terms.
- Do you have a demonstration or result related to your model collapsing less than other methods? In line 159, you mentioned gradients become 0 and collapse; that was a good point. Is it commonly encountered, and did you observe it in your experiments?
- I am also not convinced by the idea that the images and their augmentations need to be treated separately; they can be interchangeable.
- Variances of the results could be included to show the stability of the algorithms, since it was another claim in the paper (although "collapsing" shows it partly, it is a biased criterion since the other methods are not designed for var/cov terms).
- How hard is it to balance these 3 terms?
- When someone thinks about gathering two batches from two networks and calculating the global batch covariance in this way, it includes both your terms and Barlow Twins' terms. Can anything be said based on this observation about which one is better and why? Significance:
Currently, the paper needs more solid intuition or analysis or better results to make an impact in my opinion. The changes compared with the prior work are minimal. Most of the ideas and problems in the paper are important, but they are already known.
The comparisons with the previous work are valuable to the field; they could maybe extend their experiments to more of the mentioned methods or other variants.
The authors did a great job in presenting their work's limitations, their results in general not being better than the previous works, and their extensive analysis (tables). If they did a better job in explaining the reasons/intuitions in a more solid way, or included some theory if there is any, I would be inclined to give an accept. | - Do you have a demonstration or result related to your model collapsing less than other methods? In line 159, you mentioned gradients become 0 and collapse; that was a good point. Is it commonly encountered, and did you observe it in your experiments? |
NIPS_2021_1958 | NIPS_2021 | 1. The problem formulation is somewhat unclear in the statement and introduction examples. 2. More baselines or self-variants should be compared to better prove the effectiveness.
Detailed comments:
The problem definition of keyword search on incomplete graphs is ambiguous and confusing. The KS-GNN mostly optimizes on node similarity and the inference stage tends to select the top-k most similar results towards the query keyword set. However, the problem itself seems more like a combinatorial one, or say, set optimization, node-set selection with minimal distance measurement. Are the targets here equivalent?
The baseline approach seems far inferior to KS-GNN. It would be great to include some variants of KS-GNN that remove some of the modules or training objectives to confirm the contribution of each component.
Table 2 with missing edges is supposed to be more challenging than the task in Table 1. However, a lot of models perform even better (or comparably), which seems strange. Also, the claim that "KS-GNN has no significant effect" does not apply to the Toy and Video datasets for correctness.
It is hard to conclude the benefit of keyword frequency regularization from Figure 4. That is, it would be better to show the performance scores along with the visualization.
Figure 3: The notation of the figure (especially functions f and g) is confusing. | 1. The problem formulation is somewhat unclear in the statement and introduction examples. |
5BoXZXTJvL | ICLR_2024 | 1. The novelty of this method appears somewhat constrained. Utilizing the first-order gradient for determining parameter importance is a common approach in pruning techniques applied to CNNs, BERT, and ViT. This technique is well-established within the realm of model pruning. Considering that in some instances this method even falls short of the results achieved by SparseGPT (e.g., 2:4 for LLaMA-1 and LLaMA-2), I cannot say that using the first-order gradient for pruning LLMs is a major contribution.
2. This paper lacks experiments on different LLM families. Conducting trials with models like OPT, BLOOM, or other alternatives could provide valuable insights into the method's applicability and generalizability across various LLM families.
3. The paper doesn't provide details regarding the latency of the pruned model. In a study centered on LLM compression, including latency metrics is crucial, since such information is highly important for readers to understand the efficiency of the pruned model. | 2. This paper lacks experiments on different LLM families. Conducting trials with models like OPT, BLOOM, or other alternatives could provide valuable insights into the method's applicability and generalizability across various LLM families. |
NIPS_2016_69 | NIPS_2016 | - The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: - What is the size of the beach/golf course/train station/hospital datasets? - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset. | - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset. |
F61IzZl5jw | ICLR_2025 | - The paper uses 5,000 images as the training set (am I correct?). I think the training set size is too small, and is easily memorized with sufficiently long training by large models such as SD 2. What I am concerned about is what proportion of data is memorized when training with a huge set.
- This method seems to only work for generative models that can be fine-tuned as an in/outpainting model.
- Since the model has been fine-tuned, is it still capable of reflecting the memorization of the original model?
- Although the approach is novel and interesting, it lacks strong evidence to support its effectiveness as an evaluation method for memorization. | - This method seems to only work for generative models that can be fine-tuned as an in/outpainting model. |
NIPS_2018_809 | NIPS_2018 | Weakness: - The uniqueness of connecting curves between two weights is unclear, and there might be a gap between the curve and FGE. A natural question would be, for example: if we run the curve finding several times, will we see many different curves? Or would those curves be nearly unique? - The evidence is basically empirical, and it would be nice to have some supportive explanation of why this curve happens (and whether it always happens). - The connection between the curve finding (the first part) and FGE (the second part) seems rather weak. When I read the first part and the title, I imagined that we would take random weights, learn curves between weights, and find nice weights to be mixed into the final ensemble, but it was not like that. (this can work, but is also computationally demanding) Comment: - Overall I liked the paper even though the evidence is empirical. It was fun to read. The reported phenomena are quite mysterious, and interesting enough to inspire some subsequent research. - To be honest, I'm not sure the first curve-finding part explains well why FGE works. The cyclical learning rate scheduling would perturb the weight around the initial converged weight, but it cannot guarantee that the weight changes along the curve described in the first part. | - The connection between the curve finding (the first part) and FGE (the second part) seems rather weak. When I read the first part and the title, I imagined that we would take random weights, learn curves between weights, and find nice weights to be mixed into the final ensemble, but it was not like that. (this can work, but is also computationally demanding) Comment: |
ICLR_2022_2618 | ICLR_2022 | Weakness: Method:
1. Problem Formulation:
A key step in the proposed method is to convert the multi-objective optimization problem into different constrained single-objective optimization problems. However, many conversion methods have already been proposed [1-3] which can tackle the non-convex Pareto frontier. None of them are discussed in this work. Is the proposed problem formulation new or adopted from other works?
What are its advantages (and disadvantages) over other methods?
What is J(\pi_k) in (7)?
2. Finding the Whole Pareto Front:
Theorem 1 is a crucial motivation of this work for finding the whole Pareto front. However, the theoretical property of Theorem 1 is relatively weak. It can only guarantee that the optimal solution of the constrained single-objective problem can dominate other solutions that satisfy the same constraint. Therefore, the obtained solution is not guaranteed to be Pareto optimal (it can be dominated by other feasible solutions outside the constrained set). In addition, even assuming all the single-objective problems can be successfully optimized, there is also no guarantee that all Pareto solutions can be found by the proposed method.
It is also confusing why the obtained solutions can exactly lie on the unit vector in Figure 1. In my understanding, the proposed method can find a solution that satisfies the preference-based constraint while minimizing the L2 norm of all objectives. Since the problem has an inequality constraint, the obtained solution should be close to the unit preference vector but not guaranteed to be exactly on the unit vector.
3. The Model Structure:
To incorporate the preference into the TSP-Net model, the proposed method simply adds the preference embedding to each node (city) embedding. According to the ablation study in the Appendix, the preference embedding network can be a simple single-layer FC. The rest of the model is the standard TSP-Net, where the whole transformer-based encoder can be totally replaced by a single 1-D convolution layer. Why can this simple structure work so well for MOTSP? Will it significantly hurt the model performance for each individual preference compared to a single-objective model?
More concerns on the results are listed below in the experiment part.
4. Other Problems and Related Work:
One important advantage of the learning-based method is its flexibility to solve different problems [4]. Although this work focuses on MOTSP, I believe it could have a larger impact by showing its ability to solve other problems.
Many works on deep multi-task learning and multi-objective RL have been cited and discussed multiple times in the related work section, while they are not the most related work to this paper. I think it is better to shorten this part into a single paragraph, and leave more space to discuss the related work on learning-based solvers (e.g., [4]) and other approaches for MOTSP. Experiments:
5. Competitive Baselines:
According to the experimental results, the learning-based solvers are much better than the heuristic-based solvers. However, for the single objective TSP, the SOTA heuristic-solver (e.g., Concorde) usually has the best performance. Since the obtained Pareto front is not highly non-convex (as in Figure 2), the results for linear scalarization + Concorde should be included for a better comparison.
6. Unfair Comparison:
The comparison to other learning-based solvers is questionable. First of all, one contribution of DRL-MOA is the transfer-learning-based training method. The required number of training epochs (and the training time) is far less than that reported in this paper. Hence the results reported in Table 1 are misleading. It seems that all the PA-Nets are trained on the 120-city instances for 2-, 3- and 5-objective MOTSP. However, the original DRL-MOA is only trained on the 40-city MOTSP. As reported in this work, "The trained model of bi-objective TSP for DRL-MOA (Li et al.,2020) is used.", will this lead to an unfair comparison?
The TSP-Net is more powerful than the Ptr-Net used in DRL-MOA, so it is expected that it can have better performance. However, it only moderately outperforms DRL-MOA on the 2- and 5-objective problems, and has worse performance on the 3-objective problems. A more reasonable baseline is to use the TSP-Net in DRL-MOA to check whether the preference-based model could lead to worse performance.
To calculate the hypervolume, this work reports the results with {100,500,500} preferences for the 2-, 3-, and 5-objective MOTSP instances, while it only reports the results of {100, 91, 40} networks for DRL-MOA. Although it is understandable that one advantage of PA-Net is to generate a dense approximation, the results of {91, 40} preferences should also be reported for a clear comparison. Since PA-Net (with 500 preferences) is already outperformed by DRL-MOA (with only 91 models) for the 3-objective MOTSP, will it be significantly outperformed by DRL-MOA with the same number of solutions?
7. Missing Experiment Settings:
How many training samples and epochs are used to train the PA-Net and DRL-MOA? How many test instances are used to measure the performance for different methods? If there is only one instance for each kind of problem, the results are not convincing. What is the run time (inference) for each method to solve the MOTSP instance with different numbers of objectives and cities?
The details for Hypervolume calculation (e.g., the reference points for different instances) are missing.
8. Surprising Results on the Ablation Studies:
It is quite surprising that the transformer-based encoder is not needed for the TSP-Net, and only a simple one-layer FC is needed for preference embedding. With this setting, the whole PA-Net should have a much smaller scale. What are the total numbers of parameters for these ablation settings (1, 2, and 3)? With the 1-D Conv encoder, the ablation 3 model should be even smaller than the Ptr-Net. Why does it still need ~12 hrs to train the model? Why can this model still outperform the Ptr-Net model on the 2-objective MOTSP? Reference:
[1] Das, Indraneel, and John E. Dennis. "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems." SIAM journal on optimization 8, no. 3: 631-657, 1998.
[2] Mavrotas, George. Effective implementation of the ε-constraint method in multi-objective mathematical programming problems. Applied mathematics and computation 213, no. 2: 455-465, 2009.
[3] Miettinen, Kaisa. Nonlinear multiobjective optimization. Vol. 12. Springer Science & Business Media, 2012.
[4] Kool, Wouter, Herke van Hoof, and Max Welling. Attention, Learn to Solve Routing Problems!. ICLR 2019. | 5. Competitive Baselines: According to the experimental results, the learning-based solvers are much better than the heuristic-based solvers. However, for the single objective TSP, the SOTA heuristic-solver (e.g., Concorde) usually has the best performance. Since the obtained Pareto front is not highly non-convex (as in Figure 2), the results for linear scalarization + Concorde should be included for a better comparison. |
kYXZ4FT2b3 | ICLR_2024 | - while it is interesting to see the grounding of the proposed method in neuroscience, some of the general ideas are already present in other methods for exploration, in particular, reasoning topologically is captured by methods that use the generalized Voronoi graph or semantic maps to guide the exploration, and the long-term storage through pose graphs in SLAM, where loop closure is applied (discussed in graph-based slam appendix section), or curiosity-driven exploration. The paper should discuss the proposed method with respect to such methods.
- the paper's comparison is limited in considering only the standard frontier-based exploration, when in fact there are a number of exploration methods showing better performance than the standard one, both in terms of exploration and planning time. Some examples, both classic and learning-based, include:
Cao, C., Zhu, H., Choset, H., & Zhang, J. (2021, July). TARE: A Hierarchical Framework for Efficiently Exploring Complex 3D Environments. In Robotics: Science and Systems (Vol. 5).
Lindqvist, B., Agha-Mohammadi, A. A., & Nikolakopoulos, G. (2021, September). Exploration-RRT: A multi-objective path planning and exploration framework for unknown and unstructured environments. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3429-3435). IEEE.
Shrestha, R., Tian, F. P., Feng, W., Tan, P., & Vaughan, R. (2019, May). Learned map prediction for enhanced mobile robot exploration. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 1197-1204). IEEE.
Caley, J. A., Lawrance, N. R., & Hollinger, G. A. (2019). Deep learning of structured environments for robot search. Autonomous Robots, 43, 1695-1714.
- the gain in memory appears to be a major component of the proposed method; however, overall, the trend seems to be fairly close to the frontier-based approach, which is somewhat surprising given the use of local maps. In fact, for the realistic experiments, in AWS office, memory appears better for Frontier. The size of each local map might depend on the complexity of the environment, but it is worth discussing what affects the determination of the local map in practice.
A couple of minor presentation comments:
- to be more precise in assumptions and corresponding presentation of functions, it is worth mentioning that the robot is non-omnidirectional, as otherwise the indicator function for whether the frontier edge is spatially behind the agent wouldn't apply. In addition, for that function there would be a threshold to determine what "behind" means, with respect to the orientation of the robot.
- usually white pixels are used for free space, instead of black.
- "FRAGMENTAION" -> "FRAGMENTATION"
- "that work did not seriously explore" -> "that work did not explore in-depth"
- instead of calling "wall-clock time" it is better to characterize it with "planning time" | - while it is interesting to see the grounding of the proposed method in neuroscience, some of the general ideas are already present in other methods for exploration, in particular, reasoning topologically is captured by methods that use the generalized Voronoi graph or semantic maps to guide the exploration, and the long-term storage through pose graphs in SLAM, where loop closure is applied (discussed in graph-based slam appendix section), or curiosity-driven exploration. The paper should discuss the proposed method with respect to such methods. |
NIPS_2018_494 | NIPS_2018 | 1. The biggest weakness is that there is little empirical validation provided for the constructed methods. A single table presents some mixed results where in some cases hyperbolic networks perform better and in others their Euclidean counterparts or a mixture of the two work best. It seems that more work is needed to clearly understand how powerful the proposed hyperbolic neural networks are. 2. The experimental setup, tasks, and other details are also moved to the appendix, which makes it hard to interpret this anyway. I would suggest moving some of these details back in and moving some background from Section 2 to the appendix instead. 3. The tasks studied in the experiments section (textual entailment, and a constructed prefix detection task) also fail to provide any insight on when / how the hyperbolic layers might be useful. Perhaps more thought could have been given to constructing a synthetic task which can clearly show the benefits of using such layers. In summary, the theoretical contributions of the paper are significant and would foster more exciting research in this nascent field. However, though it is not the central focus of the paper, the experiments carried out are unconvincing. | 2. The experimental setup, tasks, and other details are also moved to the appendix, which makes it hard to interpret this anyway. I would suggest moving some of these details back in and moving some background from Section 2 to the appendix instead. |