Columns: paper_id (string, 10–19 chars) · venue (string, 14 classes) · focused_review (string, 232–10.2k chars) · point (string, 64–662 chars)
gybvlVXT6z
EMNLP_2023
1. I feel that the paper has insufficient baselines. For example, CoCoOp (https://arxiv.org/abs/2203.05557) is a widely used baseline for prompt tuning research in CLIP. Moreover, it would be nice to include the natural distribution shift setting, as in most other prompt tuning papers for CLIP. 2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method. 3. I think the performance drop seen with respect to the prompt length (Figure 4) is a major limitation of this approach. For example, this phenomenon might mean that using just a general hard prompt of length 4 ('a photo of a') would outperform CBBT with length 4, or even CBBT with length 1 (a zero-shot hard-prompt baseline is sketched after this review).
2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method.
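The hard-prompt baseline requested in point 2 is just zero-shot CLIP with the fixed template 'a photo of a {class}'. Below is a minimal sketch assuming the OpenAI clip package, a ViT-B/32 backbone, and placeholder class names; CBBT's actual backbone and datasets are not specified here.

```python
import torch
import clip  # OpenAI CLIP package (https://github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical class names; replace with the target dataset's label set.
class_names = ["cat", "dog", "car"]
prompts = [f"a photo of a {c}" for c in class_names]  # hard prompt of length 4

with torch.no_grad():
    text_feats = model.encode_text(clip.tokenize(prompts).to(device))
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

def zero_shot_predict(images):
    """images: preprocessed image batch of shape (B, 3, 224, 224)."""
    with torch.no_grad():
        img_feats = model.encode_image(images.to(device))
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        logits = 100.0 * img_feats @ text_feats.T  # CLIP's logit scale is roughly 100
    return logits.argmax(dim=-1)
```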
NIPS_2022_670
NIPS_2022
1. Lack of numerical results. The reviewer is curious about how to apply it to some popular algorithms and how its performance compares with existing DP algorithms. 2. The presentation of this paper is hard for the reviewer to follow.
1. Lack of numerical results. The reviewer is curious about how to apply it to some popular algorithms and how its performance compares with existing DP algorithms.
NIPS_2020_295
NIPS_2020
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also report results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of the proposed InvP with these wider backbones. 2. Some methods use 200 pretraining epochs, while the reported InvP uses 800 epochs. What are the results of InvP with 200 epochs? It would be clearer if these results were added to the tables. 3. The proposed method adopts a memory bank to update v_i, as detailed at the beginning of Sec. 3. What would the results be when adopting a momentum queue or the current batch of features? As the results of SimCLR and MoCo are better than InsDis, it would be nice to have those results.
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also report results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of the proposed InvP with these wider backbones.
NIPS_2017_351
NIPS_2017
- As I said above, I found the writing / presentation a bit jumbled at times. - The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed). - I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational, which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify. - Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly. - Figure 3 is never referenced unless I missed it. Some things I'm curious about: - What values were learned for the linear coefficients for combining the marginalized potentials in equation (1)? It would be interesting if different modalities took advantage of different potential orders. - I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to the swap to the MCB / MCT decision computation modules? - The discussion of using sequential MCB vs a single MCT layer for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed?
- I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational, which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify.
NIPS_2016_9
NIPS_2016
Weakness: The authors do not provide any theoretical understanding of the algorithm. The paper seems to be well written. The proposed algorithm seems to work very well on the experimental setup, using both synthetic and real-world data. The contributions of the paper are enough to be considered for a poster presentation. The following concerns, if addressed properly, could raise it to the level of an oral presentation: 1. The paper does not provide an analysis of what type of data the algorithm works best on and what type of data the algorithm may not work well on. 2. The first claimed contribution of the paper is that, unlike other existing algorithms, the proposed algorithm does not need as many points or a priori knowledge about the dimensions of the subspaces. It would have been better if there were some empirical justification for this. 3. It would be good to show some empirical evidence that the proposed algorithm works better for the Column Subset Selection problem too, as claimed in the third contribution of the paper.
3. It would be good to show some empirical evidence that the proposed algorithm works better for the Column Subset Selection problem too, as claimed in the third contribution of the paper.
NIPS_2020_916
NIPS_2020
My major complaints can be characterized by the following bullet points: - Robustness is argued only indirectly by way of Lipschitz constants. While the authors present a novel formulation (i.e., differing from the standard low-Lipschitz => robustness claims in the ML literature) of how controlling the Lipschitz constant of the classifier controls sensitivity to adversarial perturbations, this setting differs slightly from standard classification settings. In the standard classification setting, where labels are one-hot vectors in R^dim(Y), a classifier typically returns as a label the argmax of the vector-valued function. Robustness is then usually considered as whether the argmax returns the right label, rather than a strictly convex loss applied to the one-hot label: this work incorporates 'confidence' into the robustness evaluation (the standard margin-based argmax argument is sketched after this review for contrast). - The applicability of the robust training scheme seems unlikely to scale to practical datasets, particularly those supported on high-dimensional domains. It seems like the accuracy would scale unfavorably unless the size of V scales exponentially with the dimension. - The example data distribution could be better chosen. As it stands, a classifier with vanishing (true) loss would require a Lipschitz constant that tends to infinity. A more standard/practical dataset would admit a perfect classifier that has a valid Lipschitz constant.
- The applicability of the robust training scheme seems unlikely to scale to practical datasets, particularly those supported on high-dimensional domains. It seems like the accuracy would scale unfavorably unless the size of V scales exponentially with the dimension.
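For reference, the standard argmax-robustness argument the review contrasts with — a generic sketch, not the paper's confidence-based formulation — assuming each logit f_j is L-Lipschitz with respect to the input norm ||·||:

```latex
% Generic margin-based certificate (illustrative only).
\[
  f_y(x+\delta) - f_j(x+\delta) \;\ge\; f_y(x) - f_j(x) - 2L\|\delta\|
  \qquad \text{for every } j \neq y,
\]
\[
  \text{so}\qquad
  f_y(x) - \max_{j\neq y} f_j(x) \;>\; 2L\|\delta\|
  \;\Longrightarrow\;
  \arg\max_k f_k(x+\delta) = y .
\]
```

Under this notion, only the margin at x matters, not the value of a strictly convex loss, which is the distinction the review is pointing at.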
VnMfQuDSgG
EMNLP_2023
- The paper is entirely statistical. For a task like this, it is important to show the linguistic nuance that is captured by the metrics. - The choice of datasets: is 4 years a sufficient period to study style shifts? What kind of style shifts happen in such a time? Without these answers, it is hard to appreciate what the model is capturing.
- The choice of datasets: is 4 years a sufficient period to study style shifts? What kind of style shifts happen in such a time? Without these answers, it is hard to appreciate what the model is capturing.
NIPS_2021_442
NIPS_2021
of the paper: Strengths: 1) To the best of my knowledge, the problem investigated in the paper is original in the sense that top-m identification has not been studied in the misspecified setting. 2) The paper provides some interesting results: i) (Section 3.1) Knowing the level of misspecification ε is a key ingredient, as not knowing it would yield sample complexity bounds which are no better than the bound obtainable from unstructured (ε = ∞) stochastic bandits. ii) A single no-regret learner is used for the sampling strategy instead of assigning a learner for each of the (N choose k) answers, thus exponentially reducing the number of online learners. iii) The proposed decision rules are shown to match the prescribed lower bound asymptotically. 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules. Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: 1) IMO, a better explanation of investigating top-m identification in this setting is required. Specifically, in this setting, we could readily convert the problem to general top-m identification by appending the constant 1 to the features (converting them into (d+1)-dimensional features) and trying to estimate the misspecifications η in the higher-dimensional space. Why is that disadvantageous? Can the authors explain how the lower bound in Theorem 1 explicitly captures the effect of the upper bound on misspecification ε? The relationship could be shown, for instance, by providing an example of a particular bandit environment (say, Gaussian bandits) à la [Kaufmann2016]. Sample complexity: Theorem 2 states the sample complexity in a very abstract way; it provides an equation which needs to be solved in order to get an explicit expression of the sample complexity. In order to make a comparison, the authors then mention that the unstructured confidence interval β^{uns}_{t,δ} is approximately log(1/δ) in the limit δ → 0, which is then used to argue that the sample complexity of MISLID is asymptotically optimal. However, β^{uns}_{t,δ} also depends on t. In fact, my understanding is that as δ goes to 0, the stopping time t goes to infinity, and it is not clear to what value the overall expression β^{uns}_{t,δ} converges (a standard asymptotic calculation for thresholds of this form is sketched after this review). Overall, I feel that the authors need to explicate the sample complexity a bit more. My suggestions are: can the authors find a solution to equation (5) (or at least an upper bound on the solution for different regimes of ε)? Using such an upper bound, even if the authors could give an explicit expression of the (asymptotic) sample complexity and show how it compares to the lower bound, it would be a great contribution. Looking at Figure 1A (the second figure from the left, for the case of ε = 2), it looks like LinGapE outperforms MISLID in terms of average sample complexity. Please correct me if I'm missing something, but if what I understand is correct, then why use MISLID and not LinGapE? Probable typo: Line 211: Should it be θ instead of θ_t for the self-normalized concentration? The authors have explained the limitations of the investigation in Section 6.
2) The paper provides some interesting results: i) (Section 3.1) Knowing the level of misspecification ε is a key ingredient, as not knowing it would yield sample complexity bounds which are no better than the bound obtainable from unstructured (ε = ∞) stochastic bandits. ii) A single no-regret learner is used for the sampling strategy instead of assigning a learner for each of the (N choose k) answers, thus exponentially reducing the number of online learners. iii) The proposed decision rules are shown to match the prescribed lower bound asymptotically.
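The asymptotic step the review asks about is usually handled as follows — a sketch under the common assumption that the stopping threshold has the form β_{t,δ} ≈ log(C·t^k/δ) and that the stopping time grows like τ_δ = O(H·log(1/δ)); whether the paper's β^{uns}_{t,δ} and equation (5) satisfy this is exactly what the review asks the authors to make explicit:

```latex
\[
  \beta_{\tau_\delta,\delta}
  \;=\; \log\frac{1}{\delta} + k\,\log\tau_\delta + \log C
  \;=\; \bigl(1+o(1)\bigr)\,\log\frac{1}{\delta}
  \qquad \text{as } \delta \to 0,
\]
```

since log τ_δ = O(log log(1/δ) + log H) is negligible next to log(1/δ).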
ICLR_2023_977
ICLR_2023
the evaluation section has 2 experiments, but only 2 very insightful detailed examples. The paper could use a few more examples to illustrate more differences in the output sequences. This would allow the reader to internalize the non-monotonicity in a deeper way. Questions: In detail, how does the decoding algorithm actually avoid repetitions? Put another way, how do other models actually degrade validation perplexity using their decoding algorithms? Typos, Grammar, etc.: Page 7, section 4.2, par. 2: the callout to table 5 should go to table 3, instead. Page 7, section 5, last par.: the callout to figure 6 does not point to the right place
2: the callout to table 5 should go to table 3, instead. Page 7, section 5, last par.: the callout to figure 6 does not point to the right place
NIPS_2020_1371
NIPS_2020
When reading the paper, I got the impression that the paper is not finished, with a couple of key experiments missing. Some parts of the paper lack motivation. Terminology is sometimes unclear and ambiguous. 1. Terminology. The paper uses the terms "animation", "generative", "interpolation". See contribution 1 in L40-42. While the paper reports some interpolation experiments, I haven't found any animation or generation experiments. It seems the authors equate interpolation and animation (Section 4.2), which is not correct. I consider animation to be physically plausible motion, e.g., a person opens their mouth or a car moves, while interpolation is just warping one image into another. Fig 7 shows exactly this, with the end states being plausible states of the system. The authors should fix the ambiguity to avoid misunderstanding. The authors also don't report any generation results. Can I sample a random shape from the learnt distribution? If not, then I don't think it's correct to say the model is generative. 2. Motivation. It's not clear why the problem is important from a practical standpoint. Why would one want to interpolate between two icons? The motivation behind animation is clearer, but in my opinion, the paper doesn't do animation. I believe that, from a practical standpoint, letting the user input text and generate an icon would also be important. Again, I have a hard time understanding why shape autoencoding and interpolation are interesting. 3. Experiments. Probably the biggest concern with the paper is with the experiments. The paper reports only self-comparisons. The paper also doesn't explain why this is so, which adds to the poor motivation problem. In a generative setting, comparisons with SketchRNN could be performed.
3. Experiments. Probably the biggest concern with the paper is with the experiments. The paper reports only self-comparisons. The paper also doesn't explain why this is so, which adds to the poor motivation problem. In a generative setting, comparisons with SketchRNN could be performed.
ICLR_2022_1794
ICLR_2022
1 Medical images are often obtained as 3D volumes, not only as 2D images. So the experiments should include 3D volume data as well for the general community, rather than being all on 2D images. Lesion detection is another important task for the medical community, which has not been studied in this work. 2 More analysis and comments are recommended on the performance trend of increasing the number of parameters for ViT (DeiT) in Figure 3. I disagree with the authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In Figure 3, the DeiT-B model does not outperform DeiT-T on APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019, and CheXpert (0.1% won't be significant). However, CNNs give almost consistent improvements as the capacity goes up, except on ISIC2019. 3 On the segmentation masks involving cancer on CSAW-S, the segmentation results of DEEPLAB3-DEIT-S cannot be concluded to be better than DEEPLAB3-RESNET50. The implication that ViTs outperform CNNs in this segmentation task cannot be validly drawn from a 0.2% difference with larger variance. Questions: 1 For the grid search over the learning rate, is it done on the validation set? Minor problems: 1 The n number for the Camelyon dataset in Table 1 is not consistent with the description in the text on Page 4.
2 More analysis and comments are recommended on the performance trend of increasing the number of parameters for ViT (DeiT) in Figure 3. I disagree with the authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In Figure 3, the DeiT-B model does not outperform DeiT-T on APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019, and CheXpert (0.1% won't be significant). However, CNNs give almost consistent improvements as the capacity goes up, except on ISIC2019.
ICLR_2022_3352
ICLR_2022
+ The problem studied in this paper is definitely important in many real-world applications, such as robotics decision-making and autonomous driving. Discovering the underlying causation is important for agents to make reasonable decisions, especially in dynamic environments. + The method proposed in this paper is interesting and technically correct. Intuitively, using a GRU to extract sequential information helps capture the changes of causal graphs. - The main idea of causal discovery by sampling intervention sets and causal graphs for masking is similar to DCDI [1]. This paper is more like using DCDI in dynamic environments, which may limit its novelty. - The paper is not difficult to follow, but there are several places that may cause confusion (listed in point 3). - The contribution of this paper is not fully supported by experiments. Main Questions (1) During the inference stage, why use samples instead of directly taking the argmax of the Bernoulli distribution? (A small sketch of the two options follows this review.) How many samples are required? Will this sampling cause scalability problems? (2) In the experiment part, the authors only compare with one method (V-CDN). Is it possible to compare DYNOTEARS with the proposed method? (3) The authors mention that there is no ground truth to evaluate the causal discovery task. I agree with this opinion since the real world does not provide us with causal graphs. However, the first experiment is conducted on a synthetic dataset, where I believe the causation can be obtained by checking collision conditions. In other words, I am not convinced only by the prediction results. Could the authors provide the learned causal graphs and intervention sets and compare them with ground truth even on a simple synthetic dataset? Clarification questions (1) It seems the citation of NOTEARS [2] is wrongly used for DYNOTEARS [3]. This citation is important since DYNOTEARS is one of the motivations of this paper. (2) The ICM part in Figure 3 is not clear. How is the intervention set I used? If I understand correctly, function f is a prediction model conditioned on history frames. (3) The term "Bern" in equation (3) is not defined. I assume it is the Bernoulli distribution. Then what does the symbol Bern(α_t, β_t) mean? (4) According to equation (7), each node j has its own parameters ϕ_j^t and ψ_j^t. Could the authors explain why the parameters are related to time? (5) In equation (16), the authors mention the term "secondary optimization". I can't find any reference for it. Could the authors provide more information? Minor things: (1) In the caption of Figure 2, the authors say "For nonstationary causal models, (c)…". But figure (c) belongs to stationary methods. [1] Brouillard P, Lachapelle S, Lacoste A, et al. Differentiable causal discovery from interventional data[J]. arXiv preprint arXiv:2007.01754, 2020. [2] Zheng X, Aragam B, Ravikumar P, et al. Dags with no tears: Continuous optimization for structure learning[J]. arXiv preprint arXiv:1803.01422, 2018. [3] Pamfil R, Sriwattanaworachai N, Desai S, et al. Dynotears: Structure learning from time-series data[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2020: 1595-1605.
- The paper is not difficult to follow, but there are several places that may cause confusion (listed in point 3).
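To make Main Question (1) concrete, here is a minimal sketch of the two inference options being contrasted, assuming edge probabilities parameterized by logits; the names and shapes are illustrative, not the paper's.

```python
import torch

num_nodes = 5
edge_logits = torch.randn(num_nodes, num_nodes)   # hypothetical per-edge logits
edge_probs = torch.sigmoid(edge_logits)           # Bernoulli parameters of the causal graph

# Option A: Monte Carlo samples of the graph (what the paper appears to use at inference).
n_samples = 100
sampled_graphs = torch.bernoulli(edge_probs.expand(n_samples, num_nodes, num_nodes))

# Option B: the "argmax" of each independent Bernoulli, i.e. thresholding at 0.5.
map_graph = (edge_probs > 0.5).float()
```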
ICLR_2021_1948
ICLR_2021
a. Anonymisation Failure in References i. A reference uncited in the manuscript body contains a non-anonymised set of author names for a paper with the same title as the system presented in this paper. This was not detected during initial review. "Shuby Deshpande and Jeff Schneider. Vizarel: A System to Help Better Understand RL Agents. arXiv:2007.05577." b. Citations i. Information is egregiously missing from almost all citations. This is quite obvious from all the missing dates in the manuscript text. c. Clarity i. There are a few unclear or misleadingly worded statements, listed below: 1) "However, there is no corresponding set of tools for the reinforcement learning setting." - This is false. See references below (also some in the submitted paper). 2) "stronger feedback loop between the researcher and the agent" - This is at least confusing. In any learning setting, there is a strong interaction loop between experimentation by the researcher and resulting outcomes for the trained model. 3) "To the best of our knowledge, there do not exist visualization systems built for interpretable reinforcement learning that effectively address the broader goals we have identified" - It isn't clear what these broader goals are that have been identified. Therefore it isn't possible to evaluate this claim. 4) "For multi-dimensional action spaces, the viewport could be repurposed to display the variance of the action distribution, plot different projections of the action distribution, or use more sophisticated techniques (Huber)." - It would be clearer to actually state what the sophisticated techniques from Huber are here. ii. The framework could be clearer that, as described, it applies most directly to single-agent RL. The same approach could be used with multi-agent RL, but the observation state and the visualisations around it get more confusing when there are multiple, potentially different sets of observations. If this is not the case, please clarify. d. Experimental rigour i. The paper does include 3 extremely brief examples of how the tool might be used. However, it does not include any experiments to suggest that this tool would actually improve the debugging process for training RL in a real user study. Not every paper requires a user study; however, the contributions proposed by this particular manuscript require validation at some level from actual RL users (even a case study with some feedback from users would address this to some extent). e. Novelty in Related Work i. The manuscript contains references to several relevant publications that can be compared to the current work. However, the paper is also missing references to many related and relevant works in the space of debugging reinforcement learning using visualisations, especially with an eye towards explainable reinforcement learning. See a sampling below. 1) @inproceedings{ Rupprecht2020Finding, title={Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents}, author={Christian Rupprecht and Cyril Ibrahim and Christopher J.
Pal}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=rylvYaNYDH} } 2) @inproceedings{ Atrey2020Exploratory, title={Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning}, author={Akanksha Atrey and Kaleigh Clary and David Jensen}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=rkl3m1BFDB} } 3) @inproceedings{ Puri2020Explain, title={Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution}, author={Nikaash Puri and Sukriti Verma and Piyush Gupta and Dhruv Kayastha and Shripad Deshmukh and Balaji Krishnamurthy and Sameer Singh}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=SJgzLkBKPB} } 4) @article{reddy2019learning, title={Learning human objectives by evaluating hypothetical behavior}, author={Reddy, Siddharth and Dragan, Anca D and Levine, Sergey and Legg, Shane and Leike, Jan}, journal={arXiv preprint arXiv:1912.05652}, year={2019} } 5) @inproceedings{mcgregor2015facilitating, title={Facilitating testing and debugging of Markov Decision Processes with interactive visualization}, author={McGregor, Sean and Buckingham, Hailey and Dietterich, Thomas G and Houtman, Rachel and Montgomery, Claire and Metoyer, Ronald}, booktitle={2015 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)}, pages={53--61}, year={2015}, organization={IEEE} } 6) @article{puiutta2020explainable, title={Explainable Reinforcement Learning: A Survey}, author={Puiutta, Erika and Veith, Eric}, journal={arXiv preprint arXiv:2005.06247}, year={2020} } 7) @book{calvaresi2019explainable, title={Explainable, Transparent Autonomous Agents and Multi-Agent Systems: First International Workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13--14, 2019, Revised Selected Papers}, author={Calvaresi, Davide and Najjar, Amro and Schumacher, Michael and Fr{\"a}mling, Kary}, volume={11763}, year={2019}, publisher={Springer Nature} } 8) @inproceedings{juozapaitis2019explainable, title={Explainable reinforcement learning via reward decomposition}, author={Juozapaitis, Zoe and Koul, Anurag and Fern, Alan and Erwig, Martin and Doshi-Velez, Finale}, booktitle={IJCAI/ECAI Workshop on Explainable Artificial Intelligence}, year={2019} } 9) @misc{sundararajan2020shapley, title={The many Shapley values for model explanation}, author={Mukund Sundararajan and Amir Najmi}, year={2020}, eprint={1908.08474}, archivePrefix={arXiv}, primaryClass={cs.AI} } 10) @misc{madumal2020distal, title={Distal Explanations for Model-free Explainable Reinforcement Learning}, author={Prashan Madumal and Tim Miller and Liz Sonenberg and Frank Vetere}, year={2020}, eprint={2001.10284}, archivePrefix={arXiv}, primaryClass={cs.AI} } 11) @article{Sequeira_2020, title={Interestingness elements for explainable reinforcement learning: Understanding agents’ capabilities and limitations}, volume={288}, ISSN={0004-3702}, url={http://dx.doi.org/10.1016/j.artint.2020.103367}, DOI={10.1016/j.artint.2020.103367}, journal={Artificial Intelligence}, publisher={Elsevier BV}, author={Sequeira, Pedro and Gervasio, Melinda}, year={2020}, month={Nov}, pages={103367} } 12) @article{Fukuchi_2017, title={Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents}, ISBN={9781450351133}, url={http://dx.doi.org/10.1145/3125739.3125746}, 
DOI={10.1145/3125739.3125746}, journal={Proceedings of the 5th International Conference on Human Agent Interaction}, publisher={ACM}, author={Fukuchi, Yosuke and Osawa, Masahiko and Yamakawa, Hiroshi and Imai, Michita}, year={2017}, month={Oct} } 4. Recommendation a. I recommend this paper for rejection as the degree of change needed to validate the contribution through a user study or other validation is likely not feasible in the time and space needed. However, the contribution is potentially valuable to RL, so with the inclusion of this missing evaluation and additions to related work/contextualisation of the contribution, I would consider increasing my score and changing my recommendation. 5. Minor Comments/Suggestions a. It is recommended to use the TensorFlow whitepaper citation for TensorBoard (https://arxiv.org/abs/1603.04467). This is the official response (https://github.com/tensorflow/tensorboard/issues/3437).
1) "However, there is no corresponding set of tools for the reinforcement learning setting." - This is false. See references below (also some in the submitted paper).
NIPS_2016_370
NIPS_2016
, and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. 2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts). 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: 1. The biggest weakness is some issues with the framework itself. In particular: 1a. It is not obvious that "k-bit representation" is the right notion for unsupervised learning. Presumably the idea is that if one can compress to a small number of bits, one will obtain good generalization performance from a small number of labeled samples. But in reality, this will also depend on the chosen model class used to fit this hypothetical supervised data: perhaps there is one representation which admits a linear model, while another requires a quadratic model or a kernel. It seems more desirable to have a linear model on 10,000 bits than a quadratic model on 1,000 bits. This is an issue that I felt was brushed under the rug in an otherwise clear paper. 1b. It also seems a bit clunky to work with bits (in fact, the paper basically immediately passes from bits to real numbers). 1c. Somewhat related to 1a, it wasn't obvious to me if the representations implicit in the main results would actually lead to good performance if the resulting features were then used in supervised learning. I generally felt that it would be better if the framework was (a) more tied to eventual supervised learning performance, and (b) a bit simpler to work with. 2. I thought that the introduction was a bit grandiose in comparing itself to PAC learning. 3. The main point (that improper unsupervised learning can overcome NP hardness barriers) didn't come through until I had read the paper in detail. When deciding what papers to accept into a conference, there are inevitably cases where one must decide between conservatively accepting only papers that are clearly solid, and taking risks to allow more original but higher-variance papers to reach a wide audience. I generally favor the latter approach, and I think this paper is a case in point: it's hard for me to tell whether the ideas in this paper will ultimately lead to a fruitful line of work, or turn out to be flawed in the end. So the variance is high, but the expected value is high as well, and I generally get the sense from reading the paper that the authors know what they are doing. So I think it should be accepted. Some questions for the authors (please answer in rebuttal): -Do the representations implicit in Theorems 3.2 and 4.1 yield features that would be appropriate for subsequent supervised learning of a linear model (i.e., would linear combinations of the features yield a reasonable model family)? -How easy is it to handle e.g. manifolds defined by cubic constraints with the spectral decoding approach?
2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts).
ARR_2022_223_review
ARR_2022
The majority of the weaknesses in the paper seem to stem from confusion and inconsistencies between some of the prose and the results. 1. Figure 2, as it is, isn't totally convincing that there is a gap in convergence times. The x-axis of the graph is time, when it would have been more convincing to use steps. Without an efficient, factored attention for prompting implementation à la [He et al. (2022)](https://arxiv.org/abs/2110.04366), prompt tuning can cause slowdowns from the increased sequence length. With time on the x-axis, it is unclear if prompt tuning requires more steps or if each step just takes more time. Similarly, this work uses $0.001$ for the learning rate. This is a lot smaller than the suggested learning rate of $0.3$ in [Lester et al. (2021)](https://aclanthology.org/2021.emnlp-main.243/); it would have been better to see if a larger learning rate would have closed this gap. Finally, this gap with finetuning is used as a motivating example, but the faster convergence of things like their initialization strategy is never compared to finetuning. 2. Confusion around output space and label extraction. In the prose (and Appendix A.3) it is stated that labels are based on the predictions at `[MASK]` for RoBERTa models and the T5 decoder for generation. Scores in the paper, for example the random vector baseline for T5 in Table 2, suggest that the output space is restricted to only valid labels, as a random vector for T5 generally produces nothing. Using this rank classification approach should be stated plainly, as direct prompt reuse is unlikely to work for actual T5 generation. 3. The `laptop` and `restaurant` datasets don't seem to match their descriptions in the appendix. It is stated that they have 3 labels, but their random vector performance is about 20%, suggesting they actually have 5 labels. 4. Some relative performance numbers in Figure 3 are really surprising: things like $1$ for `MRPC` to `restaurant` transfer seem far too low, and `laptop` source to `laptop` target on T5 doesn't get 100. Are there errors in the figure, or is something going wrong with the datasets or implementation? 5. Prompt similarities are evaluated based on correlation with zero-shot performance for direct prompt transfer. Given that very few direct prompt transfers yield a gain in performance, what is actually important when it comes to prompt transferability is how well the prompt works as an initialization and whether that boosts performance. Prompt similarity tracking zero-shot performance will be a good metric if that is in turn correlated with transfer performance. The numbers from Table 1 generally support this as a good proxy method, as 76% of datasets show small improvements when using the best zero-shot-performing prompt as initialization with T5 (although only 54% of datasets show improvement for RoBERTa). However, Table 2 suggests that this zero-shot performance isn't well correlated with transfer performance. In only 38% of datasets does the best zero-shot prompt match the best prompt to use for transfer (And of these 5 successes, 3 are based on using MNLI, a dataset well known for giving strong transfer results [(Phang et al., 2018)](https://arxiv.org/abs/1811.01088)). Given that zero-shot performance doesn't seem to be correlated with transfer performance (and that zero-shot transfer is relatively easy to compute), it seems like _ON_'s strong correlation would not be very useful in practice. 6.
While recent enough that it is totally fair to call [Vu et al. (2021)](https://arxiv.org/abs/2110.07904) concurrent work, given the similarity of several approaches, there should be a deeper discussion comparing the two works. Both the prompt transfer via initialization and the prompt similarity as a proxy for transferability are present in that work. Given the numerous differences (Vu et al.'s transfer mostly focuses on large mixtures transferring to tasks and on performance, while this work focuses on task-to-task transfer with an eye towards speed; _ON_ is an improvement over the cosine similarities which are also present in Vu et al.), it seems this section should be expanded considering how much overlap there is. 7. The majority of model transfer results seem difficult to leverage. Compared to cross-task transfer, the gains are minimal and the convergence speed-ups are small. Coupled with the extra time it takes to train the projector for _Task Tuning_ (which requires backpropagation through the target model), it seems hard to imagine situations where this method is worth doing (i.e., where that knowledge is useful). Similarly, the claim on line 109 that model transfer can significantly accelerate prompt tuning seems like an over-claim. 8. Line 118 claims `embedding distances of prompts do not well indicate prompt transferability`, but Table 4 shows that C$_{\text{average}}$ is not far behind _ON_. This claim seems over-reaching and should instead be something like "our novel method of measuring prompt similarity via model activations is better correlated with transfer performance than embedding-distance-based measures" (a toy sketch of such an embedding-distance measure follows this review). 1. Line 038: They state that GPT-3 showed that extremely large LMs can give remarkable improvements. I think it would be correct to have one of their later citations on continually developed LMs as the one that showed that. GPT-3 mostly showed promise for few-shot evaluation, not that it gets really good performance on downstream tasks. 2. Line 148: I think it would make sense to make a distinction between hard prompt work that updates the frozen model (Schick and Schütze, etc.) and work that doesn't. 3. Line 153: I think it makes sense to include [_Learning How to Ask: Querying LMs with Mixtures of Soft Prompts_ (Qin and Eisner, 2021)](https://aclanthology.org/2021.naacl-main.410.pdf) in the citation list for work on soft prompts. 4. Figure 3: The coloring of the PI group makes the text very hard to read in black and white. 5. Table 1: Including the fact that the prompt used for initialization is the one that performed best in direct transfer in the caption as well as the prose would make the table more self-contained. 6. Table 2: Mentioning that the prompt used as cross-model initialization is from _Task Tuning_ in the caption would make the table more self-contained. 7. Line 512: It is mentioned that _ON_ has a drop when applied to T5$_{\text{XXL}}$, and it is suggested that this has to do with redundancy as the models grow. I think this section could be improved by highlighting that the cosine-based metrics have a similar drop (suggesting this is a fact of the model rather than the fault of the _ON_ method). Similarly, Figure 4 shows the dropping correlation as the model grows. Pointing out that the _ON_ correlation for RoBERTa$_{\text{large}}$ would fit the trend of correlation vs. model size (being between T5 Base and T5 Large) also strengthens the argument by showing it isn't an artifact of _ON_ working poorly on encoder-decoder models.
I think this section should also be reordered to show that this drop is correlated with model size. Then the section can end with the hypothesizing and limited exploration of model redundancy. 8. Figure 6. It would have been interesting to see how the unified label space worked for T5 rather than RoBERTa, as the generative nature of T5's decoding is probably more vulnerable to issues stemming from different labels. 9. _ON_ could be pushed farther. An advantage of prompt tuning is that the prompt is transformed by the model's attention based on the value of the prompt. Without an input to the model, the prompt's activations are most likely dissimilar to the kind of activations one would expect when actually using the prompt. 10. Line 074: This sentence is confusing. Perhaps something like "Thus" over "Hence only"? 11. Line 165: Remove "remedy,"
2. Line 148: I think it would make sense to make a distinction between hard prompt work that updates the frozen model (Schick and Schütze, etc.) and work that doesn't.
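For concreteness, here is a toy sketch of the kind of embedding-distance measure discussed in weakness 8 — an average per-token cosine similarity in the spirit of C_average; the paper's exact definitions of C_average and _ON_ are not reproduced here, and all shapes and task names are made up.

```python
import torch
import torch.nn.functional as F

def avg_cosine_similarity(prompt_a: torch.Tensor, prompt_b: torch.Tensor) -> float:
    """Average per-token cosine similarity between two soft prompts of shape
    (num_prompt_tokens, embedding_dim)."""
    return F.cosine_similarity(prompt_a, prompt_b, dim=-1).mean().item()

# Hypothetical usage: rank candidate source-task prompts for a target task.
target_prompt = torch.randn(100, 768)
source_prompts = {"MNLI": torch.randn(100, 768), "MRPC": torch.randn(100, 768)}
ranking = sorted(source_prompts,
                 key=lambda name: avg_cosine_similarity(source_prompts[name], target_prompt),
                 reverse=True)
```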
CEPkRTOlut
EMNLP_2023
No ethics section, but there are ethical issues that deserve discussion (see the ethics section). Also a few, mostly minor points: - When the corpus was created, participants were told to speak in such a way as to make the intent of the speech unambiguous. This may lead to over-emphasis compared with natural speech. There was no mention of any evaluation of the data to avoid this. - The corpus was created with only ambiguous sentences, and the non-ambiguous content was taken from another source. There is a chance that different recording qualities between the news (LSCVR) and crowdsourced data could artificially raise the ability of the model to distinguish between ambiguous (tag 1 or 2) and non-ambiguous (tag 0) sentences. - The amount of data used to train the text disambiguation model was significantly lower than the data used for training the end-to-end system. Given that the difference between the two proposed systems is only a few percentage points, this brings into question the conclusion that the direct model is clearly the better of the two (but both are still demonstrably superior to the baseline). - It would be hard to reproduce the fine-tuning of the IndoBART model without a little more information. Was it fine-tuned for a certain number of steps, for example?
- The amount of data used to train the text disambiguation model was significantly lower than the data used for training the end-to-end system. Given that the difference between the two proposed systems is only a few percentage points, this brings into question the conclusion that the direct model is clearly the better of the two (but both are still demonstrably superior to the baseline).
pxclAomHat
ICLR_2025
1. The paper does not explicitly link the quality of the optimization landscape to convergence speed or generalization performance, which undermines its stated goal of theoretically explaining the performance of LoRA and GaLore. Many claims also lack this connection (e.g., lines 60-61, 289-291, and 301-304), making them appear weaker and less convincing. 2. The theoretical results in this paper are derived from analyses of MLPs, whereas LLMs, with their attention mechanisms, layer normalization, etc., are more complex. Since there is a gap between MLPs and LLMs, and the paper does not directly validate its theoretical results on LLMs (relying instead only on convergence and performance outcomes), the theoretical conclusions are less convincing. 3. The paper does not clearly motivate GaRare; it lacks evidence or justification for GaRare's advantages over GaLore based on theoretical analysis. Additionally, a more detailed algorithmic presentation is needed, particularly to clarify the process of recovering updated parameters from projected gradients, which would enhance understanding (a generic sketch of this projection-and-recovery step follows this review).
3. The paper does not clearly motivate GaRare; it lacks evidence or justification for GaRare's advantages over GaLore based on theoretical analysis. Additionally, a more detailed algorithmic presentation is needed, particularly to clarify the process of recovering updated parameters from projected gradients, which would enhance understanding.
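The "recover updated parameters from projected gradients" step the review wants spelled out is, in GaLore-style methods, roughly the following single-matrix procedure — a minimal sketch with plain SGD standing in for Adam; GaRare's actual variant is not reproduced here and all names are illustrative.

```python
import torch

def low_rank_projected_step(weight, grad, proj=None, lr=1e-3, rank=4, refresh=False):
    """One GaLore-style update for a single weight matrix.

    The full gradient is projected into a rank-r subspace, the optimizer update
    is computed there, and the update is projected back to the full parameter
    space before being applied to the weights.
    """
    if proj is None or refresh:
        # Periodically refresh the projector from the top-r left singular vectors of the gradient.
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        proj = U[:, :rank]                      # shape (m, r)
    low_rank_grad = proj.T @ grad               # shape (r, n): what the optimizer state sees
    low_rank_update = -lr * low_rank_grad       # optimizer logic lives entirely in the subspace
    weight += proj @ low_rank_update            # recover a full-size update and apply it
    return proj

# Hypothetical usage for one (m x n) layer:
W = torch.randn(64, 32)
G = torch.randn(64, 32)                          # stand-in for dL/dW
P = low_rank_projected_step(W, G, refresh=True)
```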
NIPS_2017_356
NIPS_2017
My major concern about this paper is the experiment on the visual dialog dataset. The authors only show the proposed model's performance in the discriminative setting, without any ablation studies. There are not enough experimental results to show how the proposed model works on a real dataset. If possible, please answer my following questions in the rebuttal. 1: The authors claim their model can achieve superior performance while having significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] just tested their model with a standard parameter setting. To back up this claim, are there any improvements when the proposed model uses larger word embedding and LSTM parameters? 2: There are two test settings in visual dialog, while Table 1 only shows the result in the discriminative setting. It's known that the discriminative setting cannot be applied in real applications; what is the result in the generative setting? 3: To further back up that the proposed visual reference resolution model works on a real dataset, please also conduct an ablation study on the VisDial dataset. One experiment I'm really interested in is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model doesn't consider the relevant attention retrieval from the attention memory?
3: To further back up that the proposed visual reference resolution model works on a real dataset, please also conduct an ablation study on the VisDial dataset. One experiment I'm really interested in is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model doesn't consider the relevant attention retrieval from the attention memory?
NIPS_2019_220
NIPS_2019
1. Unclear experimental methodology. The paper states that 300W-LP is used to train the model, but later it is claimed that the same procedure is used as for the baselines. Most baselines do not use the 300W-LP dataset in their training. Is 300W-LP used in all experiments or just some? If it is used in all of them, this would provide an unfair advantage to the proposed method. 2. Missing link to similar work on Continuous Conditional Random Fields [Ristovski 2013] and Continuous Conditional Neural Fields [Baltrusaitis 2014], which have a similar CRF structure and the ability to perform exact inference. 3. What is Gaussian NLL? This seems to come out of nowhere and is not mentioned anywhere in the paper besides the ablation study (the standard definition is recalled after this review). Trivia: Consider replacing "difference mean" with "expected difference" between two landmarks (I believe it would be clearer).
2. Missing link to similar work on Continuous Conditional Random Fields [Ristovski 2013] and Continuous Conditional Neural Fields [Baltrusaitis 2014], which have a similar CRF structure and the ability to perform exact inference.
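For reference, "Gaussian NLL" is presumably the standard negative log-likelihood of a Gaussian prediction with mean μ and variance σ² — an assumption on my part, which is exactly why the paper should define it:

```latex
\[
  \mathrm{NLL}(y;\mu,\sigma^{2})
  \;=\; \tfrac{1}{2}\log\!\bigl(2\pi\sigma^{2}\bigr)
      + \frac{(y-\mu)^{2}}{2\sigma^{2}} .
\]
```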
ICLR_2022_2470
ICLR_2022
Weakness: The idea is a bit simple -- which in and of itself is not a true weakness. ResNet as an idea is not complicated at all. I find it disheartening that the paper did not really tell readers how to construct a white paper in section 3 (if I simply missed it, please let me know). However, the code in the supplementary materials helped. The white paper is constructed as follows: white_paper_gen = torch.ones(args.train_batch, 3, 32, 32) The code offers another way of constructing the white paper, which is white_paper_gen = 255 * np.ones((32, 32, 3), dtype=np.uint8) white_paper_gen = Image.fromarray(white_paper_gen) white_paper_gen = transforms.ToTensor()(white_paper_gen) white_paper_gen = transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))(white_paper_gen) The code states that either version works similarly and does not affect the performance. I wonder if there are other white papers as well, for example np.zeros((32, 32, 3)) -- most CNN models add explicit bias terms in their CNN kernels. Would a different white paper reveal different biases in the model? I don't think the paper answers this question or discusses it (a few candidate constant inputs are sketched after this review). 2. Section 4 "Is white paper training harmful to the model?" -- the evidence does not seem to support the claim. The evidence is: 1) only the projection head (CNN layers) is affected but not the classification head (FCN layer); 2) parameter changes are small. Neither of these constitutes direct support that the training is not "harmful" to the model; this point could simply be illustrated by the experimental results. 3. Sections 5.1 and 5.2 mainly build the narrative that WPA improves the test performance (generalization performance), but they are only indirect evidence that WPA does in fact alleviate shortcut learning. Only Section 5.3 and Table 6 directly show whether WPA does what it's designed to do. A suggestion is to discuss the result of Section 5.3 more. 4. It would be interesting to try to explain why WPA works -- with the np.ones input, what is the model predicting? Would any input serve as a white paper? Figure 2 seems to suggest that a Gaussian noise input does not work as well as WPA. Why? The authors spend a lot of time showing that WPA improves the test performance of the original model, but fail to provide useful insights into how WPA works -- this is particularly important because it can spark future research directions.
4. It would be interesting to try to explain why WPA works -- with the np.ones input, what is the model predicting? Would any input serve as a white paper? Figure 2 seems to suggest that a Gaussian noise input does not work as well as WPA. Why? The authors spend a lot of time showing that WPA improves the test performance of the original model, but fail to provide useful insights into how WPA works -- this is particularly important because it can spark future research directions.
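To make the "other white papers" question concrete, here is a minimal sketch (assuming CIFAR-style 32x32 inputs and the normalization constants quoted in the review; the model call at the end is hypothetical) of a few constant inputs one could probe a trained model with:

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))

batch = 8
probes = {
    "white": torch.ones(batch, 3, 32, 32),                # the paper's "white paper"
    "black": torch.zeros(batch, 3, 32, 32),               # would this expose CNN bias terms?
    "gray":  torch.full((batch, 3, 32, 32), 0.5),         # mid-gray constant input
    "noise": torch.randn(batch, 3, 32, 32).clamp(0, 1),   # noise baseline, as in Fig. 2
}
probes = {name: normalize(x) for name, x in probes.items()}

# e.g. inspect model(probes["black"]).softmax(dim=-1) to see which class each
# constant input is pulled towards; differing predictions across probes would
# hint at different biases baked into the trained model.
```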
ICLR_2023_1979
ICLR_2023
Weakness: 1. It seems that the method part is very similar to the related work cited in the paper: Generating Adversarial Disturbances for Controller Verification. Could the authors provide more clarification on this? 2. The experimental comparison to RRT* does not look good: even though the RRT* baseline is an oracle without partial observability, the visible region is still very large, covering more than half of the obstacles. In this case, the naive RRT* (as mentioned in supp C1) can still outperform the proposed method by a large margin on the first task.
1. It seems that the method part is very similar to the related work cited in the paper: Generating Adversarial Disturbances for Controller Verification. Could the authors provide more clarification on this?
NIPS_2022_2513
NIPS_2022
Weakness: The claim in Lines 175~176 is not validated; it would be valuable to see whether the proposed method prevents potential classes from being incorrectly classified into historical classes. In Tab. 1, for VOC 2-2 (10 tasks) and VOC 19-1 (2 tasks), MicroSeg gets inferior performance compared with SSUL; the reason should be explained. The same also appears for ADE 100-50 (2 tasks) in Tab. 2. The proposed method adopts a proposal generator pretrained on MSCOCO, which aggregates more information. Is it fair to compare with other methods? Besides, could the proposed technique promote existing class-incremental semantic segmentation methods? The authors adequately addressed the limitations and potential negative societal impact of their work.
2. The proposed method adopts a proposal generator pretrained on MSCOCO, which aggregates more information. Is it fair to compare with other methods? Besides, could the proposed technique promote existing class-incremental semantic segmentation methods? The authors adequately addressed the limitations and potential negative societal impact of their work.
NIPS_2016_287
NIPS_2016
weakness, however, is the experiment on real data where no comparison against any other method is provided. Please see the detailed comments below. 1. While [5] is a closely related work, it is not cited or discussed at all in Section 1. I think proper credit should be given to [5] in Sec. 1 since the spacey random walk was proposed there. The difference between the random walk model in this paper and that in [5] should also be clearly stated to clarify the contributions. 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well, so it should be discussed and compared against to provide a better understanding of the state-of-the-art. 3. This work combines ideas from [4], [5], and [14], so it is very important to clearly state the relationships and differences with these earlier works. 4. At the end of Sec. 2, there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) does not mention or discuss how these parameters are set or how sensitive the performance is to them. 5. Sec. 3.2 and Sec. 3.3: The real data experiments study only the proposed method and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion on the real data results. Adding some quantitative studies would be more helpful to the readers and researchers in this area. 6. Possible Typo? Line 131: "wants to transition".
4. At the end of Sec. 2, there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) does not mention or discuss how these parameters are set or how sensitive the performance is to them.
NIPS_2017_330
NIPS_2017
- Section 4 is very tersely written (maybe due to limitations in space) and could have benefited from a slower development for an easier read. - Issues of convergence, especially when applying gradient descent over a non-Euclidean space, are not addressed. In all, this is a rather thorough paper that derives an efficient way to compute gradients for optimization on LDSs modeled using extended subspaces and kernel-based similarity. On one hand, this leads to improvements over some competing methods. Yet, at its core, the paper avoids handling the harder topics, including convergence and any analysis of the proposed optimization scheme. Nonetheless, the derivation of the gradient computations is interesting by itself. Hence, my recommendation.
- Section 4 is very tersely written (maybe due to limitations in space) and could have benefited from a slower development for an easier read.
NIPS_2020_257
NIPS_2020
* In terms of novelty, note that both the motivation for the model and the initial parts of it hold similarities to some prior works. See the detailed description in the relation-to-prior-work section. * I would be happy to see results about generalization not only for the CLEVR dataset, but also for natural image datasets, where there is larger variance in both language and visual complexity. There are multiple datasets for generalization in VQA that can be used for that, such as CP-VQA and some splits of GQA. For the CLEVR dataset, the model is basically based on using an object detector to recognize the objects and their properties and build a semantic graph that represents the image. While other approaches that are compared to for this task use object detectors as well, there are many approaches for CLEVR (such as the Neural Module Network, Relation Network, MAC and FiLM) that do not use such strong supervision, and therefore the comparison between these approaches in the experimental section is not completely valid. For better comparability, I would be interested to see generalization results when these models are also fed with at least object-based bounding-box representations instead of the earlier, commonly used spatial features, as has been common in VQA in recent years (see bottom-up attention networks). * This can be open for debate, but I personally believe that the need for reinforcement learning for a static VQA task may be a potential weakness, making the approach less data efficient and harder to train than models that use gradient descent.
* This can be open for debate, but I personally believe that the need for reinforcement learning for a static VQA task may be a potential weakness, making the approach less data efficient and harder to train than models that use gradient descent.
BTr3PSlT0T
ICLR_2025
- I am skeptical about whether the number of videos in the benchmark can support a robust assessment. The CVRR-ES benchmark includes only 214 videos, with the shortest video being just 2 seconds. Upon reviewing several videos from the anonymous link, I noticed a significant proportion of short videos. I question whether such short videos can adequately cover 11 categories. Moreover, current work that focuses solely on designing Video-LLMs, without specifically constructing evaluation benchmarks, provides a much larger number of assessment videos than the 214 included in CVRR-ES, for example, Tarsier [1]. - As mentioned in the previous question, the distribution of videos of different lengths within the benchmark is crucial for the assessment of reasoning ability and robustness, and the paper does not provide relevant explanations. The authors should include a table showing the distribution of video lengths across the dataset, and explain how they ensured a balanced representation of different video lengths across the 11 categories. - In the motivation, it is mentioned that the goal is to build human-centric AI systems. Does the paper's reflection on this point merely consist of providing a human baseline? I think that offering more fine-grained visual examples would be more helpful for human-AI comparisons. - I think that the contribution of the DSCP is somewhat overstated and lacks novelty. Such prompt engineering-based methods have already been applied in many works for data generation, model evaluation, and other stages. The introduction and ablation experiments of this technology in the paper seem redundant. - The discussion on DSCP occupies a significant portion of the experimental analysis. I think that the current analysis provided in the paper lacks insight and does not fully reflect the value of CVRR-ES, especially in terms of human-machine comparison. - The phrase should be "there exist a few limitations" instead of "there exist few limitations" in line 520. - The paper does not provide prompt templates for all the closed-source and open-source Video-LLMs used, which affects reproducibility. The problems discussed in this paper are valuable, but the most crucial aspects of benchmark construction and evaluation are not entirely convincing. Instead, a significant amount of space is dedicated to introducing the DSCP method. I don't think it meets the acceptance standards of ICLR yet. I will consider modifying the score based on the feedback from other reviewers and the authors' responses. *** [1] Wang J, Yuan L, Zhang Y. Tarsier: Recipes for Training and Evaluating Large Video Description Models[J]. arXiv preprint arXiv:2407.00634, 2024.
- As mentioned in the previous question, the distribution of videos of different lengths within the benchmark is crucial for the assessment of reasoning ability and robustness, and the paper does not provide relevant explanations. The authors should include a table showing the distribution of video lengths across the dataset, and explain how they ensured a balanced representation of different video lengths across the 11 categories.
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work, it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B], which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading till the related work section). Bilinear pooling is compared to later. 3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further. 4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C], which also uses a bag of words representation. 5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also, it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable. 6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map? 7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences). 8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case? Minor Points: - L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren’t we multiplying by a nicely-conditioned matrix to make sure everything is dense?). - Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU, but if sparsity is an issue why not do it after the ReLU? Preliminary Evaluation The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*). [A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. “Neural Module Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799. [B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. “Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. “Simple Baseline for Visual Question Answering.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167.
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
OvoRkDRLVr
ICLR_2024
1. The paper proposes a multimodal framework built atop a frozen Large Language Model (LLM) aimed at seamlessly integrating and managing various modalities. However, this approach seems to be merely an extension of the existing InstructBLIP. 2. Additionally, the concept of extending to multiple modalities, such as the integration of audio and 3D modalities, has already been proposed in prior works like PandaGPT. Therefore, the paper appears to lack sufficient novelty in both concept and methodology. 3. In Table 1, there is a noticeable drop in performance for X-InstructBLIP. Could you please clarify the reason behind this? If this drop is due to competition among different modalities, do you propose any solutions to mitigate this issue? 4. The promised dataset has not yet been made publicly available, so a cautious approach should be taken regarding this contribution until the dataset is openly accessible.
4. The promised dataset has not yet been made publicly available, so a cautious approach should be taken regarding this contribution until the dataset is openly accessible.
ICLR_2023_3203
ICLR_2023
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar. 2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed method achieves only about a 1% gain on a relatively small backbone, ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it has a smaller receptive field. I question whether the proposed method still works well on large backbone models like Swin-B or Swin-L. 3. Some of the baseline results do not match their original papers. I roughly checked the original Mask2former paper, but the performance reported in this paper is much lower than the one reported in the original Mask2former paper. For example, for panoptic segmentation, Mask2former reported 51.9 but in this paper it's 50.4, and the AP for instance segmentation reported in the original paper is 43.7 but what is reported here is 42.4. Meanwhile, there are some missing references about panoptic segmentation that should be included in this paper [5, 6]. References: [1] Chen, Yunpeng, et al. "A^2-nets: Double attention networks." NeurIPS 2018. [2] Cao, Yue, et al. "Gcnet: Non-local networks meet squeeze-excitation networks and beyond." T-PAMI 2020. [3] Chen, Yinpeng, et al. "Dynamic convolution: Attention over convolution kernels." CVPR 2020. [4] Zhang, Hang, et al. "Resnest: Split-attention networks." CVPR workshop 2022. [5] Zhang, Wenwei, et al. "K-net: Towards unified image segmentation." Advances in Neural Information Processing Systems 34 (2021): 10326-10338. [6] Wang, Huiyu, et al. "Max-deeplab: End-to-end panoptic segmentation with mask transformers." CVPR 2021.
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar.
ICLR_2023_2283
ICLR_2023
1. The symbols in Section 4.3 are not very clearly explained. 2. This paper only experiments on very small time steps (e.g., 1, 2) and lacks experiments on slightly larger time steps (e.g., 4, 6) to make better comparisons with other methods. I think it is necessary to analyze the impact of the time step on the method proposed in this paper. 3. Lack of experimental results on ImageNet to verify the method. Questions: 1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13? 2. Is there any use of recurrent connections in the experiments in this paper? Apart from appendix A.5, I do not see the recurrent connections.
1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13?
eI6ajU2esa
ICLR_2024
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating model parameters, and this paper primarily focuses on adjusting the input data, how to prove that data processing is superior to model parameter adjustment? I believe a comparison should be made based on experimental results. - Under noisy conditions, many TTA methods can achieve desirable results, while the improvement brought by this paper's method is relatively low. - In appendix A.2.1, under noisy conditions, the average performance improvement brought by this paper's method is very low and can even be counterproductive under certain noise conditions. Does this indicate an issue with the approach of changing input data? - How to verify the reliability of the long-range photometric consistency in section 3.3? Are there any ablation study results reflecting the performance gain brought by each part? - The explanation of the formula content in Algorithm 1 in the main body is not clear enough. [A] Temporal Coherent Test-Time Optimization for Robust Video Classification. ICLR23 [B] Video Test-Time Adaptation for Action Recognition. CVPR23
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating model parameters, and this paper primarily focuses on adjusting the input data, how to prove that data processing is superior to model parameter adjustment? I believe a comparison should be made based on experimental results.
ICLR_2021_863
ICLR_2021
Weakness 1. The presentation of the paper should be improved. Right now, all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques such as Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the illustration of Distributional RL lacks clarity. 3. The details of the state representation are not explained clearly. For an end-to-end method like DRL, the state representation is as crucial as the network architecture for training a good agent. 4. The experiments are not comprehensive enough to validate that this algorithm works well in a wide range of scenarios. The efficiency, especially the time efficiency of the proposed algorithm, is not shown. Moreover, other DRL baselines, e.g., TD3 and DQN, should also be compared with. 5. There are typos and grammar errors. Detailed Comments 1. Section 3.1, first paragraph, quotation mark error for "importance". 2. Appendix A.2 does not illustrate the state space representation of the environment clearly. 3. The authors should state clearly why the complete state history is enough to reduce the POMDP for the no-CSI case. 4. Section 3.2.1: The first expression for J(θ) is incorrect; it should be Q(s_{t_0}, π_θ(s_{t_0})). 5. The paper did not explain Figure 2 clearly. In particular, what does the curve with the label "Expected" in Fig. 2(a) stand for? Not to mention there are multiple misleading curves in Fig. 2(b)&(c). The benefit of introducing distributional RL is not clearly explained. 6. In Table 1, only 4 classes of users are considered in the experiment sections, which might not be in accordance with practical situations, where there can be more classes of users and more users in the real system. 7. In the experiment sections, the paper only showed that the Satisfaction Probability of the proposed method is larger than that of conventional methods. The algorithm complexity, especially the time complexity of the proposed method in an ultra multi-user scenario, is not shown. 8. There is a large literature on wireless scheduling with latency guarantees from the networking community, e.g., Sigcomm, INFOCOM, Sigmetrics. Representative results there should also be discussed and compared with. ====== post rebuttal: My concern regarding the experiments remains. I will keep my score unchanged.
4. Section 3.2.1: The first expression for J(θ) is incorrect; it should be Q(s_{t_0}, π_θ(s_{t_0})).
ICLR_2021_872
ICLR_2021
The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can be a high bar from a resources standpoint). As I noted down below, the experiments currently lack results for the standard variational BNN with mean-field Gaussians. More generally, I think it would be great to include the remaining models from Ovadia et al. (2019). More recent results from ICML could also be useful to include (as referenced in the related works section). Recommendation Overall, I believe this is a good paper, but the current lack of experiments on a dataset larger than CIFAR-10, while also focusing on scalability, makes it somewhat difficult to fully recommend acceptance. Therefore, I am currently recommending marginal acceptance for this paper. Additional comments p. 5-7: Including tables of results for each experiment (containing NLL, ECE, accuracy, etc.) in the main text would be helpful to more easily assess the results. p. 7: For the MNIST experiments, in Ovadia et al. (2019) they found that variational BNNs (SVI) outperformed all other methods (including deep ensembles) on all shifted and OOD experiments. How does your proposed method compare? I think this would be an interesting experiment to include, especially since the consensus in Ovadia et al. (2019) (and other related literature) is that full variational BNNs are quite promising but generally methodologically difficult to scale to large problems, with relative performance degrading even on CIFAR-10. Minor p. 6: In the phrase "for 'in-between' uncertainty", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., ‘in-between′). p. 7: s/out of sitribution/out of distribution/ p. 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/ p. 8: s/estimates 3) is/estimates, and 3) is/ In the references: Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers. Dusenberry et al. (2020) was published in ICML 2020; Osawa et al. (2019) was published in NeurIPS 2019; Swiatkowski et al. (2020) was published in ICML 2020. p. 13, supplement, Fig. 5: error bar regions should be upper and lower bounded by [0, 1] for accuracy. p. 13, Table 2: Splitting this into two tables, one for MNIST and one for CIFAR-10, would be easier to read.
p. 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/ p. 8: s/estimates 3) is/estimates, and 3) is/ In the references: Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers. Dusenberry et al. (2020) was published in ICML 2020; Osawa et al. (2019) was published in NeurIPS 2019; Swiatkowski et al. (2020) was published in ICML 2020. p. 13, supplement, Fig.
NIPS_2016_339
NIPS_2016
weakness of the model. How would the values in table 1 change without this extra assumption? 3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates? 4. An answer to this point may be beyond the scope of this work, but it may be interesting to think about it. It is mentioned (lines 104-106) that "the examples [...] should maximally disambiguate the concept being taught from other possible concepts". How is disambiguation measured? How can disambiguation be maximized? Could there be an information theoretic approach to these questions? Something like: the teacher chooses samples that maximally reduce the entropy of the assumed posterior of the student. Does the proposed model do that? Minor points: • line 88: The optimal policy is deterministic. Hence I'm a bit confused by "the stochastic optimal policy". Is above defined "the Boltzmann policy" meant? • What is d and h in equation 2? • line 108: "to calculate this ..." What is meant by "this"? • Algorithm 1: Require should also include epsilon. Does line 1 initialize the set of policies to an empty set? Are the policies in line 4 added to this set? Does calculateActionValues return the Q* defined in line 75? What is M in line 6? How should p_min be chosen? Why is p_min needed anyway? • Experiment 2: Is the reward 10 points (line 178) or 5 points (line 196)? • Experiment 2: Is 0A the condition where all tiles are dangerous? Why are the likelihoods so much larger for 0A? Is it reasonable to average over likelihoods that differ by more than an order of magnitude (0A vs 2A-C)? • Text and formulas should be carefully checked for typos (e.g. line 10 in Algorithm 1: delta > epsilon; line 217: 1^-6;)
3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates?
NIPS_2016_313
NIPS_2016
Weakness: 1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows a detection-parsing paradigm, it is better to evaluate baseline detection or parsing techniques separately to better support the claim. 2. The paper lacks detail about the techniques, which makes it hard to reproduce the results. For example, the sparsification process is unclear, even though it is important for extracting the landmark features for the following steps. And how to generate the landmarks on the edge? How to decide the number of landmarks used? What kind of image features? What is the fixed radius with different scales? How to achieve shape invariance, etc. 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods. 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images?
3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods.
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-art in VQA (on the VQA dataset which has been out for a while and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper on top of which CBN is applied is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model – favorably the ones which are closer to state-of-art (and whose codes are publicly available) such as MCB (Fukui et al., EMNLP16), HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know if the proposed conditional batch norm can advance the state-of-art in VQA or not. 2. L170: it would be good to know how much of performance difference this (using different image sizes and different variations of ResNets) can lead to? 3. In table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on test-standard split because one can overfit to test-dev split by uploading multiple entries. 4. Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could authors please throw some light on this? Why do they think this might be happening? 5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a). 6. The first two bullets about contributions (at the end of the intro) can be combined together. 7. Other errors/typos: a. L14 and 15: repetition of word “imagine” b. L42: missing reference c. L56: impact -> impacts Post-rebuttal comments: The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly developed VQA models as well (the results have not been reported on state-of-art VQA model). So, I would like to recommend acceptance of the paper. However I still have few comments -- 1. It seems that there is still some confusion about test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf). 2. The reproduced performance reported on MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version if they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with 62.5% of MCB because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (table 4 in MCB paper). 3. 
The citation for the MRN model (in the rebuttal) is incorrect. It should be -- @inproceedings{kim2016multimodal, title={Multimodal residual learning for visual qa}, author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak}, booktitle={Advances in Neural Information Processing Systems}, pages={361--369}, year={2016} } 4. As AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well.
4. Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could authors please throw some light on this? Why do they think this might be happening?
yIv4SLzO3u
ICLR_2024
- Lack of comparison with a highly relevant method. [1] also proposes to utilize the previous knowledge with ‘inter-task ensemble’, while enhancing the current task’s performance with ‘intra-task’ ensemble. Yet, the authors didn’t include the method comparison or performance comparison. - Novelty is limited. From my perspective, the submission simply applies the existing weight averaging and the bounded update to the class incremental learning problems. - No theoretical justification or interpretation. Is there any theoretical guarantee of the extent to which inter-task weight averaging or the bounded update counters catastrophic forgetting? - Though Table 3 shows that the bounded model update mitigates forgetting, the incorporation of the bounded model update doesn’t seem to have enough motivation from the methodological perspective. The inter-task weight average is designed to incorporate both old and new knowledge, which is by design enough to tackle forgetting. ##### References: 1. Miao, Z., Wang, Z., Chen, W., & Qiu, Q. (2021, October). Continual learning with filter atom swapping. In International Conference on Learning Representations.
- Lack of comparison with a highly relevant method. [1] also proposes to utilize the previous knowledge with ‘inter-task ensemble’, while enhancing the current task’s performance with ‘intra-task’ ensemble. Yet, the authors didn’t include the method comparison or performance comparison.
NIPS_2018_914
NIPS_2018
of the paper are (i) the presentation of the proposed methodology to overcome that effect and (ii) the limitations of the proposed methods for large-scale problems, which is precisely when function approximation is required the most. While the intuition behind the two proposed algorithms is clear (to keep track of partitions of the parameter space that are consistent in successive applications of the Bellman operator), I think the authors could have formulated their idea in a more clear way, for example, using tools from Constraint Satisfaction Problems (CSPs) literature. I have the following concerns regarding both algorithms: - the authors leverage the complexity of checking on the Witness oracle, which is "polynomial time" in the tabular case. This feels like not addressing the problem in a direct way. - the required implicit call to the Witness oracle is confusing. - what happens if the policy class is not realizable? I guess the algorithm converges to an \empty partition, but that is not the optimal policy. minor: line 100 : "a2 always moves from s1 to s4 deterministically" is not true line 333 : "A number of important direction" -> "A number of important directions" line 215 : "implict" -> "implicit" - It is hard to understand the figure where all methods are compared. I suggest to move the figure to the appendix and keep a figure with less curves. - I suggest to change the name of partition function to partition value. [I am satisfied with the rebuttal and I have increased my score after the discussion]
- the required implicit call to the Witness oracle is confusing.
NIPS_2022_532
NIPS_2022
1. Imitation Learning: The proposed method needs to be trained by behavioral cloning, which means 1) it requires a carefully well-designed algorithm (e.g., ODA-T/B/K) to generate the supervised data set. 2) More importantly, the data generated by ODA with a time limit L is indeed not a perfect teacher for behavioral cloning. Since all the ODAs are not designed and optimized for bounded time performance, their behavior (the state-action pairs) for the first L time is not the optimal policy under time limit L. Since the generic algorithm procedure of ODA can be encoded as a MDP, is it possible to use reinforcement learning to train the model? Would the RL-based approach find a better policy for bounded time performance that is aware of the time limit T? 2. Performance with Different Time Limits: It seems that the proposed method is both trained and tested with a single fixed time limit T = 1000s for all problems. However, in practice, the applications could have very different run time limits. Will the proposed method generalize well to different time limits (such as 1/10/100/2000/5000s)? 3. Comparison with PMOCO: PMOCO is the only learning-based approach in the comparison, but it has a very small cardinality (feasible points) for most problems. Its approximated Pareto front is also relatively sparse (which means a small set of solutions) in Figure A.3. This result is a bit counter-intuitive. In my understanding, PMOCO is a construction-based neural combinatorial optimization algorithm, of which one important advantage is the very fast run time. The PMOCO paper [1] reports it can generate 101 solutions for 100 MOKP(2-100) instances (hence 10,100 solutions in total) in only 15s, and 10,011 solutions for 100 MOTSP(3-100) instances (hence 1,001,100 solutions in total) in 33 minutes (~2000s). In this work, with a large time limit of 1,000s, I think POMO should be able to generate a dense set of solutions for each instance. In addition, while a dataset with 100 instances could provide enough supervised ODA state-action pairs (with a time limit L = 1000s for each instance) for imitation learning, it is far from enough for PMOCO's RL-based training. Since PMOCO does not require any supervised data and the MOKP instances can be easily generated on the fly, is it more suitable to train PMOCO under the same wall-clock time with the proposed method? [1] Pareto set learning for neural multi-objective combinatorial optimization. ICLR 2022. The limitations of this work are 1) the requirement of ODAs and a well-designed IP solver; 2) it loses the guarantee for finding the whole Pareto front. They have been properly discussed in the paper (see remark at the end of Section 4.1 and Conclusion). I do not see any potential negative societal impact of this work.
2) it loses the guarantee for finding the whole Pareto front. They have been properly discussed in the paper (see remark at the end of Section 4.1 and Conclusion). I do not see any potential negative societal impact of this work.
ICLR_2022_1420
ICLR_2022
Weakness: Lack of novelty. The key idea, i.e., combining foreground masks to remove the artifacts from the background, is not new. Separate handling of foreground from background is a common practice for dynamic scene novel view synthesis, and many recent methods do not even require the foreground masks for modeling dynamic scenes (they jointly model the foreground region prediction module, e.g., Tretschk et al. 2021). Lack of controllability. 1) The proposed method cannot handle the headpose. While this paper defers this problem to a future work, a previous work (e.g., Gafni et al. ICCV 2021) is already able to control both facial expression and headpose. Why is it not possible to condition the headpose parameters in the NeRF beyond the facial expression similar to [Gafni et al. ICCV 2021]? 2) Even for controlling facial expression, it is highly limited to the mouth region only. From overall qualitative results and demo video, it is not clear the method indeed can handle overall facial expression including eyes, nose, and wrinkle details, and the diversity in mouth shape that the model can deliver is significantly limited. Low quality. The results made by the proposed method are of quite low quality. 1) Low resolution: While many previous works introduce high-quality view synthesis results with high-resolution (512x512 or more), this paper shows low resolution results (256x256) for some reasons. Simply saying the problem of resources is not a convincing argument since many existing works already proved the feasibility of high resolution image synthesis using implicit function. Due to this low-resolution nature, many high-frequency details (e.g., facial wrinkles), which are the key to enabling photorealistic face image synthesis, are washed out. 2) In many cases, the conditioning facial expressions do not match that from the synthesized image. From the demo video, while mouth opening or closing are somehow synchronized with conditioning videos, there exists a mismatch in the style of the detailed mouth shape.
1) The proposed method cannot handle the headpose. While this paper defers this problem to a future work, a previous work (e.g., Gafni et al. ICCV 2021) is already able to control both facial expression and headpose. Why is it not possible to condition the headpose parameters in the NeRF beyond the facial expression similar to [Gafni et al. ICCV 2021]?
ICLR_2023_3449
ICLR_2023
1. The spurious features in Sections 3.1 and 3.2 are very similar to backdoor triggers. They are both artificial patterns that appear only a few times in the training set. For example, Chen et al. (2017) use random noise patterns. Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well-known that a few training examples with such triggers (rare spurious examples in this paper) would have a large impact on the trained model. 2. How neural nets learn natural rare spurious correlations is unknown to the community (to the best of my knowledge). However, most of the analysis and ablation studies use artificial patterns instead of natural spurious correlations. Duplicating the same artificial pattern multiple times is different from natural spurious features, which are complex and different in every example. 3. What’s the experiment setup in Section 3.3 (data augmentation methods, learning rate, etc.)? [1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf
1. The spurious features in Sections 3.1 and 3.2 are very similar to backdoor triggers. They are both artificial patterns that appear only a few times in the training set. For example, Chen et al. (2017) use random noise patterns. Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well-known that a few training examples with such triggers (rare spurious examples in this paper) would have a large impact on the trained model.
ICLR_2022_3332
ICLR_2022
Weakness: 1. The writing needs a lot of improvement. Many of the concepts or notations are not explained. For example, what do “g_\alpha” and “vol(\alpha)” mean? What is an “encoding tree” (I believe it is not common terminology)? Why can the encoding tree be used as a tree kernel? Other than complexity, what is the theoretical advantage of using these encoding trees? 2. Structural optimization seems one of the main components and it has been emphasized several times. However, it seems the optimization algorithm is directly from some previous works. That is a little bit confusing and reduces the contribution. 3. From my understanding, the advantage over WL-kernel is mainly lower complexity, but compared to GIN or other GNNs, the complexity is higher. That is also not convincing enough. Of course, the performance is superior, so I do not see it as a major problem, but I would like to see more explanations of the motivation and advantages. 4. If we do not optimize the structural entropy but just use a random encoding tree, how does it perform? I think the authors need this ablation study to demonstrate the necessity of structural optimization.
2. Structural optimization seems one of the main components and it has been emphasized several times. However, it seems the optimization algorithm is directly from some previous works. That is a little bit confusing and reduces the contribution.
4kuLaebvKx
EMNLP_2023
- The chained impacts of the image captioning and multilingual understanding models in the proposed pipeline. If the image captioner gives worse results, the final results could be worse. So the basic performance of the image caption model and the multilingual language model depends on the engineering choice of models when applied zero-shot. - This pipeline-style method, which includes two models, does not give better average results on both XVNLI and MaRVL. The baseline models in the experiments are not well introduced.
- This pipeline-style method, which includes two models, does not give better average results on both XVNLI and MaRVL. The baseline models in the experiments are not well introduced.
GeFFYOCkvS
EMNLP_2023
- The authors have reproduced a well-known result in the literature--left political bias in ChatGPT and in LLMs in general--using the "coarse" (their description) methodology of passing a binary stance classifier over ChatGPT's output. The observation that language models reproduce the biases of the corpora on which they're trained has been made at each step of the evolution of these models, from word2vec to BERT to ChatGPT, and so it's unclear why this observation needs to once again be made using the authors "coarse" methodology. - The authors' decision to filter by divisive topic introduces an unacknowledged prior: for most of the topics given in the appendix (immigration, the death penalty, etc.), the choice to bother writing about the topic itself introduces bias. This choice will be reflected in both the frequency with which such articles appear and in the language used in those articles. The existence of the death penalty, for example, might be considered unproblematic on the right and, for that reason, one will see fewer articles on the subject in right-leaning newspapers. When they do appear, neutral language will be employed. The opposite is likely true for a left-leaning publication, for which the existence of the death penalty is a problem: the articles will occur with more frequency and will employ less neutral language. For these reasons, it's not surprising that the authors' classifier, which was annotated via distant supervision using political slant tags assigned by Wikipedia, will tend to score most content generated by an LLM as left-leaning.
- The authors have reproduced a well-known result in the literature--left political bias in ChatGPT and in LLMs in general--using the "coarse" (their description) methodology of passing a binary stance classifier over ChatGPT's output. The observation that language models reproduce the biases of the corpora on which they're trained has been made at each step of the evolution of these models, from word2vec to BERT to ChatGPT, and so it's unclear why this observation needs to once again be made using the authors "coarse" methodology.
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work, it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B], which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading till the related work section). Bilinear pooling is compared to later. 3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further. 4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C], which also uses a bag of words representation. 5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also, it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable. 6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map? 7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences). 8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case? Minor Points: - L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren’t we multiplying by a nicely-conditioned matrix to make sure everything is dense?). - Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU, but if sparsity is an issue why not do it after the ReLU? Preliminary Evaluation The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*). [A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. “Neural Module Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799. [B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. “Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. “Simple Baseline for Visual Question Answering.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167.
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
NIPS_2017_645
NIPS_2017
- The main paper is dense. This is despite the commendable efforts by the authors to make their contributions as readable as possible. I believe it is due to NIPS page limit restrictions; the same set of ideas presented at their natural length would make for a more easily digestible paper. - The authors do not quite discuss computational aspects in detail (other than a short discussion in the appendix), but it is unclear whether their proposed methods can be made practically useful for high dimensions. As stated, their algorithm requires solving several LPs in high dimensions, each involving a parameter that is not easily calculable. This is reflected in the authors’ experiments, which are all performed on very small-scale datasets. - The authors mainly seem to focus on SSC, and do not contrast their method with several other subsequent methods (thresholded subspace clustering (TSC), greedy subspace clustering by Park, etc.) which are all computationally efficient and come with similar guarantees.
- The authors mainly seem to focus on SSC, and do not contrast their method with several other subsequent methods (thresholded subspace clustering (TSC), greedy subspace clustering by Park, etc.) which are all computationally efficient and come with similar guarantees.
4N97bz1sP6
ICLR_2024
1. The authors should make clear the distinction between when the proposed method is trained using only weak supervision and when it is trained semi-supervised. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Fully supervised’ from ‘Supervised’. Maybe a better idea is to specify the data used to train ALL the parts of each model and have two big columns ‘Mixture training data’ and ‘Single source data’, which will make it much more apparent which is which. 2. Building upon my previous argument, I think that when one is using these large pre-trained networks on single-source data like CLAP, the underlying method becomes supervised in a sense, or to put it more specifically, supervised with unpaired data. The authors should clearly explain these differences throughout the manuscript. 3. I think the authors should include stronger text-based sound separation baselines, like the model and ideally the training method that uses heterogeneous conditions to train the separation model in [A], which has already been shown to outperform LASS-Net (Liu et al. 2022), which is almost always the best-performing baseline in this paper. I would be more than happy to increase my score if all the above weaknesses are addressed by the authors. [A] Tzinis, E., Wichern, G., Smaragdis, P. and Le Roux, J., 2023, June. Optimal Condition Training for Target Source Separation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.
1. The authors should make clear the distinction between when the proposed method is trained using only weak supervision and when it is trained semi-supervised. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Fully supervised’ from ‘Supervised’. Maybe a better idea is to specify the data used to train ALL the parts of each model and have two big columns ‘Mixture training data’ and ‘Single source data’, which will make it much more apparent which is which.
NIPS_2020_1454
NIPS_2020
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. - Claims to be SOTA on three datasets, but this does not seem to be the case. Does not evaluate on what it trains on (see "additional feedback").
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice.
NIPS_2018_25
NIPS_2018
- My understanding is that R,t and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the case and these parameters are learned, what is the loss function? - Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned. - "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper. - During evaluation at test time, how is the 3D alignment between the prediction and the groundtruth found? - Please comment on why the performance of GTSeeNet is lower than that of SeeNetFuse and ThinkNetFuse. The expectation is that groundtruth 2D segmentation should improve the results. - line 180: Why not using the same amount of samples for SUNCG-D and SUNCG-RGBD? - What does NoSeeNet mean? Does it mean D=1 in line 96? - I cannot parse lines 113-114. Please clarify.
- "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper.
NIPS_2021_2168
NIPS_2021
1. The motivation to investigate a graph-structured model is to capture the global dependency structure in the sentence, which is different from existing sequence models that tend to focus on the dependency between each word and its close preceding words. However, the encoder and decoder are based on the Transformer, which can draw global dependencies in the sentence. Therefore, I am a bit confused about the complex approach of this article. 2. Table 3 does not use the C3D feature in the experiment; I think this paper should compare the result of C3D features on MSR-VTT with [1]. On the MSR-VTT dataset, the performance of the method has not improved much on CIDEr (within 0.3%). 3. In the ablation experiment, the performance without reinforcement learning dropped lower than without the dependency tree. The two tables do not list the cases where the dependency tree and RL are not used. 4. Constructing the generation tree, constructing the ground-truth dependency trees, and calculating the loss and reward all consume considerable computing resources. Is the time cost of the training process large? [1] Zhang et al. Object Relational Graph with Teacher-Recommended Learning for Video Captioning. CVPR 2020. Typos: e.g., the last two rows of Table 3 are wrongly bolded on METEOR on the MSR-VTT dataset.
3. In the ablation experiment, the performance without reinforcement learning dropped lower than without the dependency tree. The two tables do not list the cases where the dependency tree and RL are not used.
ICLR_2022_562
ICLR_2022
1. The main weakness of this paper is the experiments section. The results are presented only on CIFAR-10 dataset and do not consider many other datasets from Federated learning benchmarks (e.g., LEAF https://leaf.cmu.edu/). The authors should see relevant works like (FedProx https://arxiv.org/abs/1812.06127) and (FedMAX, https://arxiv.org/abs/2004.03657 (ECML 2020)) for details on different datasets and model types. If the experimental evaluation was comprehensive enough, this would be a very good paper (given the interesting and important problem it addresses). 2. One other thing (although this is not the main focus of this paper): the authors should provide comparisons between strategies that result in fast convergence (without sparsity) vs. sparse methods. For example, do non-sparse, fast convergence methods (like FedProx, FedMAX, and others) result in a small enough number of epochs compared to sparse methods? Can the fast convergence methods be augmented with sparsity ideas successfully without resulting in significant loss of accuracy? Some discussion and possibly performance numbers are needed here.
1. The main weakness of this paper is the experiments section. The results are presented only on CIFAR-10 dataset and do not consider many other datasets from Federated learning benchmarks (e.g., LEAF https://leaf.cmu.edu/). The authors should see relevant works like (FedProx https://arxiv.org/abs/1812.06127) and (FedMAX, https://arxiv.org/abs/2004.03657 (ECML 2020)) for details on different datasets and model types. If the experimental evaluation was comprehensive enough, this would be a very good paper (given the interesting and important problem it addresses).
NIPS_2022_738
NIPS_2022
W1) The paper states that "In order to introduce epipolar constraints into attention-based feature matching while maintaining robustness to camera pose and calibration inaccuracies, we develop a Window-based Epipolar Transformer (WET), which matches reference pixels and source windows near the epipolar lines." It claims that it introduces "a window-based epipolar Transformer (WET) for enhancing patch-to-patch matching between the reference feature and corresponding windows near epipolar lines in source features". To me, taking a window around the epipolar line into account seems like an approximation to estimating the uncertainty region around the epipolar lines caused by inaccuracies in calibration and camera pose and then searching within this region (see [Förstner & Wrobel, Photogrammetric Computer Vision, Springer 2016] for a detailed derivation of how to estimate uncertainties). Is it really valid to claim this part of the proposed approach as novel? W2) I am not sure how significant the results on the DTU dataset are: a) The difference with respect to the best performing methods is less than 0.1 mm (see Tab. 1). Is the ground truth sufficiently accurate enough that such a small difference is actually noticeable / measurable or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab. 4. Does the claim "It can be seen from the table that our proposed modules improve in both accuracy and completeness" really hold? Why not use another dataset for the ablation study, e.g., the training set of Tanks & Temples or ETH3D? W3) I am not sure what is novel about the "novel geometric consistency loss (Geo Loss)". Looking at Eq. 10, it seems to simply combine a standard reprojection error in an image with a loss on the depth difference. I don't see how Eq. 10 provides a combination of both losses. W4) While the paper discusses prior work in Sec. 2, there is mostly no mentioning on how the paper under review is related to these existing works. In my opinion, a related work section should explain the relation of prior work to the proposed approach. This is missing. W5) There are multiple parts in the paper that are unclear to me: a) What is C in line 106? The term does not seem to be introduced. b) How are the hyperparameters in Sec. 4.1 chosen? Is their choice critical? c) Why not include UniMVSNet in Fig. 5, given that UniMVSNet also claims to generate denser point clouds (as does the paper under review)? d) Why use only N=5 images for DTU and not all available ones? e) Why is Eq. 9 a reprojection error? Eq. 9 measures the depth difference as a scalar and no projection into the image is involved. I don't see how any projection is involved in this loss. Overall, I think this is a solid paper that presents a well-engineered pipeline that represents the current state-of-the-art on a challenging benchmark. While I raised multiple concerns, most of them should be easy to address. E.g., I don't think that removing the novelty claim from W1 would make the paper weaker. The main exception is the ablation study, where I believe that the DTU dataset is too easy to provide meaningful comparisons (the relatively small differences might be explained by randomness in the training process. The following minor comments did not affect my recommendation: References are missing for Pytorch and the Adam optimizer. Post-rebuttal comments Thank you for the detailed answers. 
Here are my comments to the last reply: Q: Relationship to prior work. Thank you very much, this addresses my concern. A: Fig. 5 is not used to claim our method achieves the best performance among all the methods in terms of completeness, it actually indicates that our proposed method could help reconstruct complete results while keeping high accuracy (Tab. 1) compared with our baseline network [7] and the most relevant method [3]. In that context, we not only consider the quality of completeness but also the relevance to our method to perform comparison in Fig. 5. As I understand lines 228-236 in the paper, in particular "The quantitative results of DTU evaluation set are summarized in Tab. 1, where Accuracy and Completeness are a pair of official evaluation metrics. Accuracy is the percentage of generated point clouds matched in the ground truth point clouds, while Completeness measures the opposite. Overall is the mean of Accuracy and Completeness. Compared with the other methods, our proposed method shows its capability for generating denser and more complete point clouds on textureless regions, which is visualized in Fig. 5.", the paper seems to claim that the proposed method generates denser point clouds. Maybe this could be clarified? A: As a) nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation, b) the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets), c) the final results are the average across 22 test scans, we think that fewer errors could indicate better performance. However, your point about the accuracy of DTU GT is enlightening, and we think it's valuable future work. This still does not address my concern. My question is whether the ground truth is accurate enough that we can be sure that the small differences between the different components really comes from improvements provided by adding components. In this context, stating that "the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets)" does not answer this question as, even though DTU has the most accurate GT, it might not be accurate enough to measure differences at this level of accuracy (0.05 mm difference). If the GT is not accurate enough to differentiate in the 0.05 mm range, then averaging over different test scans will not really help. That "nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation" does also not address this question. Since the paper claims improvements when using the different components and uses the results to validate the components, I do not think that answering the question whether the ground truth is accurate enough to make these claims in future work is really an option. I think it would be better to run the ablation study on a dataset where improvements can be measured more clearly. Final rating I am inclined to keep my original rating ("6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations."). I still like the good results on the Tanks & Temples dataset and believe that the proposed approach is technically sound. However, I do not find the authors' rebuttals particularly convincing and thus do not want to increase my rating. 
In particular, I still have concerns about the ablation study as I am not sure whether the ground truth of the DTU dataset is accurate enough that it makes sense to claim improvements if the difference is 0.05 mm or smaller. Since this only impacts the ablation study, it is also not a reason to decrease my rating.
4. Does the claim "It can be seen from the table that our proposed modules improve in both accuracy and completeness" really hold? Why not use another dataset for the ablation study, e.g., the training set of Tanks & Temples or ETH3D?
5EHI2FGf1D
EMNLP_2023
- no comparison against baselines. The functionality similarity comparison study reports only accuracy across optimization levels of binaries, but no baselines are considered. This is a widely understood binary analysis application and many papers have developed architecture-agnostic similarity comparison (often reported as code search, which is a similar task). - rebuttal promises to add this evaluation - in addition, the functionality similarity comparison methodology is questionable. The authors use cosine similarity with respect to embeddings, which to me makes the experiment rather circular. In contrast, I might have expected some type of dynamic analysis, testing, or some other reasoning to establish semantic similarity between code snippets. - rebuttal addresses this point. - vulnerability discovery methodology is also questionable. The authors consider a single vulnerability at a time, and while they acknowledge and address the data imbalance issue, I am not sure about the ecological validity of such a study. Previous work has considered multiple CVEs or CWEs at a time, and reports whether or not the code contains any such vulnerability. Are the authors arguing that identifying one vulnerability at a time is an intended use case? In any case, the results are difficult to interpret (or are marginal improvements at best). - addressed in rebuttal - This paper is very similar to another paper accepted at USENIX 2023: Can a Deep Learning Model for One Architecture Be Used for Others? Retargeted-Architecture Binary Code Analysis. In comparison to that paper, I do not quite understand the novelty here, except perhaps for a slightly different evaluation/application domain. I certainly acknowledge that this submission was made slightly before the USENIX 2023 proceedings were made available, but I would still want to understand how this differs given the overall similarity in idea (building embeddings that help a model target a new ISA). - relatedly, only x86 and ARM appear to be considered in the evaluation (the authors discuss building datasets for these ISAs). There are other ISAs to consider (e.g., PPC), and validating the approach against other ISAs would be important if claiming to build models that generalize across architectures. - author rebuttal promises a follow-up evaluation
- vulnerability discovery methodology is also questionable. The authors consider a single vulnerability at a time, and while they acknowledge and address the data imbalance issue, I am not sure about the ecological validity of such a study. Previous work has considered multiple CVEs or CWEs at a time, and reports whether or not the code contains any such vulnerability. Are the authors arguing that identifying one vulnerability at a time is an intended use case? In any case, the results are difficult to interpret (or are marginal improvements at best).
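For readers unfamiliar with the evaluation protocol criticized in the review above: the functionality-similarity study scores pairs of binaries by the cosine similarity of their learned embeddings. Below is a minimal sketch of that protocol; the embedding vectors and the decision threshold are illustrative stand-ins, not the paper's model or settings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical usage: embeddings of the same function compiled at -O0 and -O3.
# In the paper these would come from the learned encoder; here they are random stand-ins.
rng = np.random.default_rng(0)
emb_o0, emb_o3 = rng.normal(size=256), rng.normal(size=256)
is_match = cosine_similarity(emb_o0, emb_o3) > 0.5  # threshold is an assumption
```

The circularity concern is that the quantity being measured (embedding similarity) is also the quantity the model is trained to produce, with no independent notion of semantic equivalence, such as one derived from dynamic analysis or testing.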
ICLR_2021_2506
ICLR_2021
Weakness == Exposition == The exposition of the proposed method can be improved. For example, - it’s unclear how the “Semantic Kernel Generation” is implemented. I can probably guess this step is essentially a 1 x 1 convolution, but it would be better to fill in the details. - In the introduction, the paper mentioned that the translated images often contain artifacts due to the nearest-neighbor upsampling. Yet, in the Feature Spatial Expansion step it also used nearest-neighbor interpolation. This needs some more justification. - For Figure 3, it’s unclear what the learned semantically adaptive kernel visualization means. At each upsampling layer, the kernel is only K x K. - At the end of Section 2, I do not understand what is meant by “the semantics of the upsampled feature map can be stronger than the original one”. - The proposed upsampling layer is called “semantically aware”. However, I do not see anything that’s related to the semantics other than the fact that the input is a semantic map. I would suggest that this should be called “content aware” instead. == Technical novelty == - My main concern about the paper lies in its technical novelty. There have been multiple papers that proposed content-aware filters. As far as I know, the first one is [Jia et al. 2016]. Jia, Xu, et al. "Dynamic filter networks." Advances in neural information processing systems. 2016. The work most relevant to this paper is CARAFE [Wang et al. ICCV 2019], which can be viewed as a special case of the dynamic filter network for feature upsampling. By comparing Figure 2 in this paper and Figure 2 in the CARAFE paper [Wang et al. ICCV 2019], it seems to me that the two methods are *the same*. The only difference is the application to the layout-to-image task. The paper in the related work section suggests, “the settings of these tasks are significantly different from ours, making their methods cannot be used directly.” I respectfully disagree with this statement because both methods take in a feature tensor and produce an upsampled feature tensor. I believe that the application would be straightforward without any modification. Given the similarity to prior work, it would be great for the authors to (1) describe in detail the differences (if any) and (2) compare with prior work that also uses spatially varying upsampling kernels. Minor: - CARAFE: Content-Aware ReAssembly of Features. The paper is in ICCV 2019, not CVPR 2019. In sum, I think the idea of the spatially adaptive upsampling kernel is technically sound. I also like the extensive evaluation in this paper. However, I have concerns about the high degree of similarity with the prior method and the lack of comparison with CARAFE. ** After discussions ** I have read other reviewers' comments. Many of the reviewers share similar concerns regarding the technical novelty of this work. I don't find sufficient ground to recommend acceptance of this paper.
- CARAFE: Content-Aware ReAssembly of Features. The paper is in ICCV 2019, not CVPR 2019. In sum, I think the idea of the spatially adaptive upsampling kernel is technically sound. I also like the extensive evaluation in this paper. However, I have concerns about the high degree of similarity with the prior method and the lack of comparison with CARAFE. ** After discussions ** I have read other reviewers' comments. Many of the reviewers share similar concerns regarding the technical novelty of this work. I don't find sufficient ground to recommend acceptance of this paper.
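For context on the mechanism this review compares against, here is a minimal PyTorch sketch of content-aware feature reassembly in the spirit of CARAFE (not the submission's exact module): per-location K x K kernels are predicted from the features by small convolutions, softmax-normalised, and used to recombine each source pixel's neighbourhood. The channel sizes, K = 5, and the x2 upsampling ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentAwareUpsample(nn.Module):
    """Sketch of a CARAFE-style content-aware x2 upsampler: reassembly kernels
    are predicted from the input features themselves rather than being fixed."""
    def __init__(self, channels, k=5, scale=2, mid=64):
        super().__init__()
        self.k, self.scale = k, scale
        self.kernel_pred = nn.Sequential(
            nn.Conv2d(channels, mid, 1),                          # channel compressor
            nn.Conv2d(mid, scale * scale * k * k, 3, padding=1),  # content encoder
        )

    def forward(self, x):
        b, c, h, w = x.shape
        s, k = self.scale, self.k
        # Predict one K*K kernel per upsampled location and normalise it.
        kernels = self.kernel_pred(x)                 # (B, s*s*K*K, H, W)
        kernels = F.pixel_shuffle(kernels, s)         # (B, K*K, sH, sW)
        kernels = F.softmax(kernels, dim=1)
        # Gather the K x K neighbourhood of every source pixel.
        patches = F.unfold(x, k, padding=k // 2)      # (B, C*K*K, H*W)
        patches = patches.view(b, c, k * k, h, w)
        # Map each output pixel to its source pixel (nearest-neighbour indexing).
        patches = patches.repeat_interleave(s, dim=3).repeat_interleave(s, dim=4)
        # Weighted reassembly of the neighbourhood.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)   # (B, C, sH, sW)
```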
NIPS_2022_655
NIPS_2022
1. How to get a small degree of bias from a clear community structure needs more explanation. Theorems 1 and 2 prove that GCL conforms to a clearer community structure via intra-community concentration and inter-community scatter, but its relationship with degree bias is not intuitive enough. 2. There is some confusion in the theoretical analysis. Why is the supremum in Definition 1 \gamma(\frac{B}{\hat{d}_{\min}^k})^{\frac{1}{2}}? Based on this definition, how can one prove that the proposed GRADE reduces this supremum? 3. There is a lack of a significance test in Table 1. Despite the weaknesses mentioned above, I believe that this paper is worth publishing. The authors consider an important degree-bias problem in the graph domain, given that node degrees of real-world graphs often follow a long-tailed power-law distribution. They also show an exciting finding that GCL is more stable w.r.t. the degree bias, and give a preliminary explanation for the underlying mechanism. Although the improvement does not seem significant in Table 1, they may inspire more future research on this promising solution.
1. How to get a small degree of bias from a clear community structure needs more explanation. Theorems 1 and 2 prove that GCL conforms to a clearer community structure via intra-community concentration and inter-community scatter, but its relationship with degree bias is not intuitive enough.
NIPS_2021_1222
NIPS_2021
Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustness. Many of these, especially Dapello et al., note the relevance with respect to stochasticity in the brain. I do not see how their additional analysis sheds new light on the mechanisms of robust perception or provides a better understanding of the role stochasticity plays in biological computation. To be clear - I think the paper is certainly worthy of publication and makes notable contributions. Just not all of the ones claimed in that sentence. 1.b) The authors note on lines 241-243 that “the two geometric properties show a similar dependence for the auditory (Figure 4A) and visual (Figure 4B) networks when varying the eps-sized perturbations used to construct the class manifolds.” I do not see this from the plots. I would agree that there is a shared general upward trend, but I do not agree that 4A and 4B show “similar dependence” between the variables measured. If nothing else, the authors should be more precise when describing the similarities. Clarifications: 2.a) The authors say on lines 80-82 that the center correlation was not insightful for discriminating model defenses, but then use that metric in figure 4 A&B. I’m wondering why they found it useful here and not elsewhere? Or what they meant by the statement on lines 80-82. 2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks? 2.c) The authors report mean capacity and width in figure 2. I think this is the mean across examples as well as across seeds. Is the STD also computed across examples and seeds? The figure caption says it is only computed across seeds. Is there a lot of variability across examples? 2.d) I am unsure why there would be a gap between the orange and blue/green lines at the minimum strength perturbation for the avgpool subplot in figure 2.c. At the minimum strength perturbation, by definition, the vertical axis should have a value of 1, right? And indeed in earlier layers at this same perturbation strength the capacities are equal. So why does the ResNet50 lose so much capacity for the same perturbation size from conv1 to avgpool? It would also be helpful if the authors commented on the switch in ordering for ATResNet and the stochastic networks between the middle and right subplots. General curiosities (low priority): 3.a) What sort of variability is there in the results with the chosen random projection matrix? I think one could construct pathological projection matrices that skews the MFTMA capacity and width scores. These are probably unlikely with random projections, but it would still be helpful to see resilience of the metric to the choice of random projection. I might have missed this in the appendix, though. 3.b) There appears to be a pretty big difference in the overall trends of the networks when computing the class manifolds vs exemplar manifolds. 
Specifically, I think the claims made on lines 191-192 are much better supported by Figure 1 than Figure 2. I would be interested to hear what the authors think in general (i.e. at a high/discussion level) about how we should interpret the class vs exemplar manifold experiments. Nitpick, typos (lowest priority): 4.a) The authors note on line 208 that “Unlike VOneNets, the architecture maintains the conv-relu-maxpool before the first residual block, on the grounds that the cochleagram models the ear rather than the primary auditory cortex.” I do not understand this justification. Any network transforming input signals (auditory or visual) would have to model an entire sensory pathway, from raw input signal to classification. I understand that VOneNets ignore all of the visual processing that occurs before V1. I do not see how this justifies adding the extra layer to the auditory network. 4.b) It is not clear why the authors chose a line plot in figure 4c. Is the trend as one increases depth actually linear? From the plot it appears as though the capacity was only measured at the ‘waveform’ and ‘avgpool’ depths; were there intermediate points measured as well? It would be helpful if they clarified this, or used a scatter/bar plot if there were indeed only two points measured per network type. 4.c) I am curious why there was a switch to reporting SEM instead of STD for figures 5 & 6. 4.c) I found typos on lines 104, 169, and the fig 5 caption (“10 image and”).
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks?
NIPS_2019_494
NIPS_2019
of the approach, it may be interesting to do that. Clarity: The paper is well written but clarity could be improved in several cases: - I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what are S and Xt). - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like number of units etc), and many other technical details. - the paper is presented well, e.g., the quality of the graphs is good (though labels on the graphs in Fig 3 could be slightly bigger) Significance: - from just the paper: the results would be more interesting (and significant) if there was a way to reproduce the work more easily. At present I cannot see this work easily taken up by many other researchers mainly due to lack of detail in the description. The work is interesting, and I like the idea, but with a relatively high-level description of it in the paper it would need a little more than the pseudocode in the materials to convince me to use it (but see next). - In the supplementary material it is stated the source code will be made available, and in combination with the paper and the information in the supplementary material, the level of detail may be just right (but it's hard to say without seeing the code). Given the promising results, I can imagine this approach being useful at least for more research in a similar direction.
- I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what are S and Xt).
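To make the S / Xt distinction concrete, below is a hedged sketch of one common way such a split is wired up: static covariates S are one vector per sequence, embedded once and concatenated with the temporal features X_t at every timestep. This is not necessarily the architecture the paper actually uses, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class StaticPlusTemporalRNN(nn.Module):
    """Sketch: per-sequence static covariates are broadcast over time and
    concatenated with the per-timestep features before the recurrent layer."""
    def __init__(self, static_dim, temporal_dim, hidden=64, out_dim=1):
        super().__init__()
        self.static_embed = nn.Linear(static_dim, hidden)
        self.rnn = nn.GRU(temporal_dim + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, s, x):  # s: (B, static_dim), x: (B, T, temporal_dim)
        e = torch.relu(self.static_embed(s))           # (B, hidden)
        e = e.unsqueeze(1).expand(-1, x.size(1), -1)   # broadcast over time
        h, _ = self.rnn(torch.cat([x, e], dim=-1))
        return self.head(h[:, -1])                     # prediction from final state
```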
4A5D1nsdtj
ICLR_2024
1. The motivation for the choice of $\theta = \frac{\pi}{2}(1-h)$ from Theorem 3 is not very straightforward or clear. The paper states that this choice is empirical, but very little motivation is given for this exact form. 2. For this method, knowledge of the homophily ratio seems to be important. In many practical scenarios, it may not be possible to estimate this accurately, and even approximations could be difficult. No ablation study is presented showing the sensitivity of this model to accurate knowledge of the homophily ratio. 3. The HetFilter seems to degrade rapidly past h=0.3, whereas OrtFilter is a lot more graceful to the varying homophily ratio. It is unclear whether one should consider the presented fluctuations inferior to the presented UniBasis. For UniBasis, in the region of h >= 0.3, the choice of tau should become extremely important (as is evident from Figure 4, where lower tau values can reduce performance on Cora by about 20 percentage points).
1. The motivation for the choice of $\theta = \frac{\pi}{2}(1-h)$ from Theorem 3 is not very straightforward or clear. The paper states that this choice is empirical, but very little motivation is given for this exact form.
NIPS_2019_629
NIPS_2019
- In my opinion, the setting and the algorithm lack a bit of originality and might seem like an incremental combination of methods for graph labeling prediction and online learning in a switching environment. Yet, the algorithm for graph labelings is efficient, new, and seems different from the existing ones. - Lower bounds and optimality of the results are not discussed. In the conclusion section, it is asked whether the loglog(T) can be removed. Does this mean that up to this term the bounds are tight? I would like more discussion of this. More comparison with existing upper bounds and lower bounds without switches could be made, for instance. In addition, it could be interesting to plot the upper bound in the experiments, to see how tight the analysis is. Other comments: - Only bounds in expectation are provided. Would it be possible to get high-probability bounds? For instance by using ensemble methods as performed in the experiments. Some measure of robustness could be added to the experiments (such as error bars or standard deviation) in addition to the mean error. - When reading the introduction, I thought that the labels were adversarially chosen by an adaptive adversary. It seems that the analysis is only valid when all labels are chosen in advance by an oblivious adversary. Am I right? This should maybe be clarified. - This paper deals with many graph notions and is a bit hard to get into, but the writing is generally good, though more details could sometimes be provided (definition of the resistance distance, more explanation of Alg. 1 with brief sentences defining A_t, Y_t, ...). - How was alpha tuned in the experiments (as 1/(t+1) or optimally)? - Some possible extensions could be discussed (are they straightforward?): directed or weighted graphs, regression problems (e.g., to predict the number of bikes in your experiment)... Typo: l 268: the sum should start at 1
- This paper deals with many graph notions and is a bit hard to get into, but the writing is generally good, though more details could sometimes be provided (definition of the resistance distance, more explanation of Alg. 1 with brief sentences defining A_t, Y_t, ...).
NIPS_2018_476
NIPS_2018
[Weakness] 1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new. 2) Theoretical proofs for an existing algorithm might be regarded as an incremental contribution. 3) Experiments are somewhat weak: 3-1) I was wondering why the authors conducted experiments with lambda=1. According to Corollaries 1 and 2, lambda should be sufficiently large; however, this is completely ignored in the experimental setting. Otherwise the proposed algorithm is no different from [10]. 3-2) In Figure 3, different evaluations are shown on different datasets. It might be regarded as subjectively selected demonstrations. 3-3) I think clustering accuracy is not very significant because there are many other sophisticated algorithms, and initializations are still very important for nice performance. It just shows that the proposed algorithm is OK for some applications. [Minor points] -- the comma should be deleted at line num. 112: "... + \lambda(u-v), = 0 ...". -- "close-form" --> "closed-form" at line num. 195-196. --- after feedback --- I understand that the contribution of this paper is a theoretical justification of the existing algorithms proposed in [10]. In that case, the experimental validation with respect to the sensitivity of "lambda" is more important than the clustering accuracy. So Fig. 1 in the feedback file would be nice to add to the paper if possible. I think dropping symmetry is helpful; however, it is not a new idea and is already being used. So, using it will not change anything in practice. Furthermore, in recent years, almost all application researchers are using application-specific extensions of NMF such as sparse NMF, deep NMF, semi-NMF, and graph-regularized NMF, rather than the standard NMF. Thus, this paper is theoretically interesting as basic research, but weak from an application perspective. Finally, I changed my evaluation to a higher one: "Marginally below the acceptance threshold. I tend to vote for rejecting this submission, but accepting it would not be that bad."
1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new.
NIPS_2016_313
NIPS_2016
Weakness: 1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows a detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim. 2. The paper lacks detail about the techniques, making it hard to reproduce the results. For example, the sparsification process is unclear, even though it is important for extracting the landmark features for the following steps. How are the landmarks on the edges generated? How is the number of landmarks decided? What kind of image features are used? What is the fixed radius with different scales? How is shape invariance achieved, etc.? 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning-based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods. 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images?
5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images?
NIPS_2016_287
NIPS_2016
weakness, however, is the experiment on real data where no comparison against any other method is provided. Please see the detailed comments below. 1. While [5] is a closely related work, it is not cited or discussed at all in Section 1. I think proper credit should be given to [5] in Sec. 1 since the spacey random walk was proposed there. The difference between the random walk model in this paper and that in [5] should also be clearly stated to clarify the contributions. 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well so it should be discussed and compared against to provide a better understanding of the state-of-the-art. 3. This work combines ideas from [4], [5], and [14], so it is very important to clearly state the relationships and differences with these earlier works. 4. At the end of Sec. 2, there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) did not mention or discuss how these parameters are set and how sensitive the performance is with respect to these parameters. 5. Sec. 3.2 and Sec. 3.3: The real data experiments study only the proposed method and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion on the real data results. Adding some quantitative studies would be more helpful to the readers and researchers in this area. 6. Possible Typo? Line 131: "wants to transition".
2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well so it should be discussed and compared against to provide a better understanding of the state-of-the-art.
ICLR_2021_1944
ICLR_2021
I have several concerns regarding this paper. • Novelty. The authors propose to use Ricci flow to compute the distance between nodes so as to sample edges with respect to that distance. Using Ricci flow for distance computation is a well-studied area (as indicated in the related work). The only novel part is that each layer gets a new graph; however, this choice is not motivated (why not train all layers of the GNN on different graphs instead?) and has problems (see next). • Approach. Computing the optimal transport distance is generally an expensive procedure. While the authors indicated that it takes seconds to compute on a 36-core machine, it’s not clear how scalable this method is. I would like to see whether it scales on normal machines with a couple of cores. Moreover, how exactly do you compute the optimal transport, given that the Sinkhorn method gives you a doubly stochastic matrix (how do you go from it to the optimal transport plan?)? • Algorithm. This is the most obscure part of the paper. First, it’s not indicated how many layers are used in the experiments. This is a major part of your algorithm because you claim that if an edge appears in several layers it means that it’s not adversarial (or that it does not harm your algorithm). In most of the baselines, there are at most 2-3 layers. There are theoretical limitations on why GNNs with many layers may not work in practice (see the literature on “GNN oversmoothing”). Considering that you didn’t provide the code (can you provide an anonymized version of the code?) and that your baselines (GCN, GAT, etc.) have similar (or the same) performance as in the original papers (where the number of layers is 2-3), I deduce that your model Ricci-GNN also has this number of layers. With that said, I doubt that it’s possible to draw any conclusive results about whether an edge is adversarial or not from 2-3 graphs. Moreover, I would expect to see an experiment on how your approach varies depending on the number of layers. This is a crucial part of your algorithm, and not seeing a discussion of it in the paper raises concerns about the validity of the experiments. • Design choices. Another potential problem of your algorithm is that the sampled graphs can become dense. There are hyperparameters \sigma and \beta that control the probabilities, and you also limit the sampling to 2-hop neighborhoods (“To keep graph sparsity, we only sample edges between pairs that are within k hops of each other in G (we always take k = 2 in the experiments).”). This is arbitrary, and its effect on the performance is not clear. How did you select the parameters \sigma and \beta? Why k=2? How do you ensure that the sampled graphs are similar to the original one? Does it matter that the sampled graphs should have similar statistics to the original graph? I guess this crucially affects the performance of your algorithm, so I would like to see more experiments on this. • Datasets. Since this paper is mostly experimental, I would like to see a comparison of this model on more datasets (5-7 in total). Verifying on realistic but small datasets such as Cora and Citeseer limits our intuition about performance. For example, Cora is a single graph of 2.7K nodes. As indicated in [1], “Although small datasets are useful as sanity checks for new ideas, they can become a liability in the long run as new GNN models will be designed to overfit the small test sets instead of searching for more generalizable architectures.” There are many sources of real graphs; you can consider OGB [2] or [3]. • Weak baselines. 
Another major concern about the validity of the experiments is the choice of baselines. None of the GNN baselines (GCN, GAT, etc.) was designed to defend against adversarial attacks, so choosing them for comparison is not fair. A comparison with previous works (indicated in “Adversarial attack on graphs.” in the related work section) is necessary. Moreover, an experiment where you randomly sample edges (instead of using the Ricci distance) is desirable to compare the performance against random sampling. • Ablation. Since you use GCN, why is the performance of Ricci-GCN so different from GCN when there are 0 perturbations? For Citeseer the absolute difference is 2%, which is quite high for the same models. Also, an experiment with different choices of GNN is desirable. • Training. Since experiments play an important role in this paper, it’s important to give a fair setup for the models in comparison. You write “For each training procedure, we run 100 epochs and use the model trained at 100-th epoch.”. This can be disadvantageous for many models. A better way would be to run each model setup until convergence on the training set, selecting the epoch using the validation set. Otherwise, your baselines could suffer from either underfitting or overfitting. [1] https://arxiv.org/pdf/2003.00982.pdf [2] https://ogb.stanford.edu/ [3] https://paperswithcode.com/task/node-classification ========== After reading the authors' comments. I applaud the authors for greatly improving their paper via the revision. Now the number of layers is specified and the explanation of having many sampled graphs during training is added, which was missing in the original text and was preventing a full understanding of the reasons why the proposed approach works. Overall, I am leaning toward increasing the score. I still have several concerns about the practicality of Ricci-GNN. In simple words, the proposed approach uses some metric S (Ricci flow) that dictates how to sample graphs for training. The motivation for using Ricci flow is “that Ricci flow is a global process that tries to uncover the underlying metric space supported by the graph topology and thus embraces redundancy”. This claim cites previous papers, which in turn do not discuss what exactly is meant by “a global process that tries to uncover the underlying metric space”. Spectral embeddings can also be considered a global metric, so some analysis of what properties of Ricci flow make it more robust to attacks would be appreciated. Also, including random sampling in the comparison would confirm that the effect comes not from the fact that you use more graphs during training, but from how you sample those graphs. In addition, as the paper is empirical and relies on properties of Ricci flow that were discussed in previous works but not addressed in the context of adversarial attacks, having more datasets (especially larger ones) in the experiments would improve the paper.
• Approach. Computing the optimal transport distance is generally an expensive procedure. While the authors indicated that it takes seconds to compute on a 36-core machine, it’s not clear how scalable this method is. I would like to see whether it scales on normal machines with a couple of cores. Moreover, how exactly do you compute the optimal transport, given that the Sinkhorn method gives you a doubly stochastic matrix (how do you go from it to the optimal transport plan?)?
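As background for the Sinkhorn question above: entropic-regularised Sinkhorn iterations return a dense coupling between the two node distributions, not a hard matching, so an extra step (e.g., taking the induced transport cost, or rounding the plan) is needed if an exact optimal-transport quantity is wanted. A minimal NumPy sketch, assuming uniform marginals and an arbitrary regularisation strength (not the submission's implementation):

```python
import numpy as np

def sinkhorn_plan(C, eps=0.05, iters=500):
    """Entropic OT between two uniform discrete measures with cost matrix C.
    Returns a dense transport plan P whose rows sum to 1/n and columns to 1/m."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # diag(u) K diag(v)

C = np.random.rand(8, 8)
P = sinkhorn_plan(C)
ot_cost = float((P * C).sum())  # entropic approximation of the OT distance
# For n = m with uniform marginals, n * P is (approximately) doubly stochastic.
```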
NIPS_2019_1131
NIPS_2019
1. There is no discussion on the choice of "proximity" and the nature of the task. On the proposed tasks, proximity of the fingertip Cartesian positions is strongly correlated with proximity in the solution space. However, this relationship doesn't hold for certain tasks. For example, in a complicated maze, two nearby positions in the Euclidean metric can be very far apart along the actual path. For robotic tasks with various obstacles and collisions, similar results apply. The paper would be better if it analyzed which tasks have reasonable proximity metrics, and demonstrated failure on those that don't. 2. Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis of it is given. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \tau in equation (7) affect performance. Minor comments: 1. The diversity term, defined as the facility location function, is undirected and history-invariant. Thus it shouldn't be called "curiosity", since curiosity only works on novel experiences. Please use a different name. 2. The curves in Figure 3 (a) are suspiciously cut at Epoch = 50, after which the baseline methods seem to catch up and perhaps surpass CHER. Perhaps this should be explained.
2. Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis of it is given. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \tau in equation (7) affect performance. Minor comments:
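For reference, the "diversity term" the review mentions, a facility-location function over an RBF similarity, can be sketched as below. This is a generic illustration of why \sigma matters (it sets the bandwidth at which points count as covering each other), not the paper's exact formulation.

```python
import numpy as np

def rbf_similarity(X, sigma=1.0):
    # Pairwise RBF similarities between feature vectors (rows of X).
    sq = np.sum(X ** 2, axis=1, keepdims=True)
    d2 = np.clip(sq + sq.T - 2.0 * X @ X.T, 0.0, None)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def facility_location(sim, subset):
    # Each ground-set point is "covered" by its most similar selected point;
    # larger values mean the subset is more representative of the whole set.
    return float(np.max(sim[:, subset], axis=1).sum())

X = np.random.randn(100, 8)
sim = rbf_similarity(X, sigma=1.0)  # sigma controls how quickly coverage saturates
score = facility_location(sim, subset=[0, 17, 42])
```

A very small \sigma makes every point cover only itself (all subsets look equally poor), while a very large \sigma makes any single point cover everything, which is one way to see why the choice of \sigma deserves an ablation.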
5UW6Mivj9M
EMNLP_2023
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted. 2) Relatedly, it was hard to discern what was novel in the paper and what had already been tried by others. 3) Since the improvement in numbers is not large (in most cases, just a couple of points), it is hard to tell if this improvement is statistically significant and if it translates to qualitative improvements in performance.
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted.
FpElWzxzu4
ICLR_2024
1. In the intro section, the claim that Transformers rely on regularly sampled time-series data is wrong. For example, [1] shows that the Transformer model handles irregularly-sampled time series well for imputation. 2. In section 2, "Finally, INRs operate on a per-data-instance basis, meaning that one time-series instance is required to train an INR". This claim is true but I don't think it is an advantage. A model that can only handle a single time-series instance is almost useless. 3. In section 3.1, why is $c_i^t$ a vector rather than a scalar? It just denotes a temporal coordinate and should be a scalar. 4. Why is the frequency of the Fourier features uniformly sampled? 5. "Vectors Bm ∈ R1×DψF , δm ∈ R1×DψF denote the phase shift and the bias respectively". I think what the author wants to claim is that B represents the bias while δm denotes the phase shift. 6. In Figure 5, it is unclear what the vertical axis represents. [1] NRTSI: Non-Recurrent Time Series Imputation, ICASSP 2023.
2. In section 2, "Finally, INRs operate on a per-data-instance basis, meaning that one time-series instance is required to train an INR". This claim is true but I don't think it is an advantage. A model that can only handle a single time-series instance is almost useless.
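To make points 3-5 of the review concrete: a Fourier-feature encoding of a scalar temporal coordinate typically looks like the sketch below, where "uniformly sampled frequencies" would mean the frequencies are drawn from a uniform distribution, and the phase shift sits inside the sinusoid while a bias is added outside it. This is a generic illustration with assumed shapes and ranges, not the paper's exact parameterisation.

```python
import math
import torch

def fourier_time_features(t, freqs, phases, bias=None):
    # t: (N, 1) temporal coordinates; freqs, phases: (1, D)
    z = torch.sin(2 * math.pi * t * freqs + phases)  # phase shift inside the sinusoid
    return z + bias if bias is not None else z        # bias would be added outside

t = torch.linspace(0.0, 1.0, 50).unsqueeze(1)
freqs = torch.rand(1, 16) * 10.0              # "uniformly sampled" frequencies (assumed range)
phases = torch.rand(1, 16) * 2 * math.pi
features = fourier_time_features(t, freqs, phases)   # (50, 16)
```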
NIPS_2017_110
NIPS_2017
of this work include that it is a not-too-distant variation of prior work (see Schiratti et al, NIPS 2015), the search for hyperparameters for the prior distributions and sampling method does not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to the model's application show the greatest relative error, and the experiments are not described with adequate detail. This last issue is particularly important as the rupture time is what clinicians would be using to determine treatment choices. In the experiments with real data, a fully Bayesian approach would have been helpful to assess the uncertainty associated with the rupture times. Particularly, a probabilistic evaluation of the prospective performance is warranted if that is the setting in which the authors imagine it to be most useful. Lastly, the details of the experiment are lacking. In particular, the RECIST score is a categorical score, but the authors evaluate a numerical score, the time scale is not defined in Figure 3a, and no overall statistics are reported in the evaluation, only figures with a select set of examples, and there was no mention of out-of-sample evaluation. Specific comments: - l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters). - l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear. - l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function. - Throughout, the authors use the term constrains and should change to constraints. - l124: What is meant by the (*)? - l134: Do the authors mean m=2? - l148: known, instead of know - l156: please define \gamma_0^{***} - Figure 1: Please specify the meaning of the colors in the caption as well as the text. - l280: "Then we made it explicit" instead of "Then we have explicit it"
- l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters).
zpayaLaUhL
EMNLP_2023
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2. - The input for the analysis is limited to only 100 or 200 samples from wikitext-2. It would be desirable to experiment with a larger number of samples or with datasets from various domains. - The findings are interesting, but there is no statement of what the contribution is or what practical impact or practical use it has for the community (Question A). - Results contradicting those reported in existing studies (Clark+'19) are observed but not discussed (Question B). - I do not really agree with the argument in Section 5 that word embedding contributes to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the input, such as feeding only position embeddings, are quite large at layer 8, so this behavior likely cannot be used to explain the behavior under normal conditions. Word embeddings may be the only prerequisites for the model to work properly rather than playing an important role in certain attention patterns. - The introduction says it analyzes "why attention depends on relative position," but I cannot find content that adequately answers this question. - There is no connection to or discussion of relative position embedding, which is typically employed in recent Transformer models in place of learnable APE (Question C).
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2.
ICLR_2023_2934
ICLR_2023
- Fig. 1 leaves me with some doubts. It would seem that the private task is solved by using only a head operating on the learned layer for the green task (Devanagari). This is at least what I would expect for the claims of the method to still hold, because if the private task head can alter the weights of Transformer layer 1 then information from the private task is flowing into the network. I would appreciate it if the authors could clarify this. - Overall a lot of choices seem to lead towards the necessity for large compute power. The choice of modifying hyperparameters only by a one-hop neighbor is quite restrictive and it implies that we have to evolve/search for quite a while before stumbling on the correct hyperparams. The layer cloning and mutation probability hyperparameter is set at random by the evolutionary process, implying that the level of overall randomness is very high and therefore large training times are needed to get stable results or be able to reproduce the claimed results (considering the authors use DNN architectures). The authors mention that "the score can be defined to optimize a mixture of factors depending on application requirements". It would have been nice to see what the tradeoff between training time and model size vs optimal multi-task performance is, especially considering the high level of randomness present in the proposed approach (and also so that others would be able to reproduce somewhat similar results on less compute). - The parameters in Table 1, the model and the experiments seem to be only good for image data and ViT. Did the authors try to apply the same principles to other research areas such as NLP or simpler models in the image domain (CNNs)? I understand the latter might be due to the focus on state-of-the-art performance, but it would show that the method can generalize to different architectures and tasks, not just transformers in vision.
- The parameters in Table 1, the model and the experiments seem to be only good for image data and ViT. Did the authors try to apply the same principles to other research areas such as NLP or simpler models in the image domain (CNNs)? I understand the latter might be due to the focus on state-of-the-art performance, but it would show that the method can generalize to different architectures and tasks, not just transformers in vision.
X4ATu1huMJ
ICLR_2024
**Overall comment** The paper discusses evaluating TTA methods across multiple settings, and how to choose the correct method during test-time. I would argue most of the methods/model selection strategies that are discussed in the paper are not novel and/or existed before, and the paper does not have a lot of algorithmic innovation. While this discussion unifies various prior methods and can be a valuable guideline for practitioners to choose the appropriate TTA method, there needs to be more experiments to make it a compelling paper (i.e., add MEMO [1] as a method of comparison, add WILDS-Camelyon 17 [9], WILDS-FMoW [9], ImageNet-A [7], CIFAR-10.1 [8] as dataset benchmarks). But I do feel the problem setup is very important, and adding more experiments and a bit of rewriting can make the paper much stronger. **Abstract and Introduction** 1. The paper mentions model restarting to avoid error propagation. There has been important work in TTA, where the model adapts its parameters to only one test example at a time, and reverts back to the initial (pre-trained) weights after it has made the prediction, doing the process all over for the next test example. This is also an important setting to consider, where only one test example is available, and one cannot rely on batches of data from a stream. For example, see MEMO [1]. 2. (nitpicking, not important to include in the paper) “under extremely long scenarios all existing TTA method results in degraded performance”, while this is true, the paper does not mention some recent works that helps alleviate this. E.g., MEMO [1] in the one test example at a time scenario, or MEMO + surgical FT [2] where MEMO is used in the online setting, but parameter-efficient updating helps with feature distortion/performance degradation. So the claim is outdated. 3. It would be good to cite relevant papers such as [4] as prior works that look into model selection strategies (but not for TTA setting) to motivate the problem statement. **Section 3.2, model selection strategies in TTA** 1. While accuracy-on-the-line [3] shows correlation between source (ID) and target (OOD) accuracies, some work [4] also say source accuracy is unreliable in the face of large domain gaps. I think table 3 shows the same result. Better to cite [4] and add their observation. 2. Why not look at agreement-on-the-line [5]? This is known to be a good way of assessing performance on the target domain without having labels. For example, A-Line [6] seems to have good performance on TTA tasks. This should also be considered as a model selection method. **Section 4.1, datasets** 1. Missing some key datasets such as ImageNet-A [7], CIFAR-10.1 [8]. It is important to consider ImageNet-A (to show TTA’s performance on adversarial examples) and CIFAR-10.1, to show TTA’s performance on CIFAR-10 examples where the shift is natural, i.e., not corruptions. Prior work such as MEMO [1] has used some of these datasets. **Section 4.3, experimental setup** 1. The architecture suite that is used is limited in size. Only ResNext-29 and ResNet-50 are used. Since the paper’s goal is to say something rigorous about model selection strategies, it is important to try more architectures to have a comprehensive result. At least some vision-transformer architecture is required to make the results strong. I would suggest trying RVT-small [12] or ViT-B/32 [13]. Why do the authors use SGD as an optimizer for all tasks? It is previously shown that [14] SGD often performs worse for more modern architectures. 
The original TENT [15] paper also claims they use SGD for ImageNet and for everything else they use Adam [16]. **Section 5, results** 1. (Table 1) It might be easier if the texts mention that each row represent one method, and each column represents one model selection strategy. When the authors say “green” represents the best number, they mean “within a row”. 2. (Different methods’ ranking under different selection strategies) The results here are not clear and hard to read. How many times does one method outperform the other, when considering all different surrogate based metrics across all datasets? If the goal is to show consistency of AdaContrast as mentioned in the introduction, a better way of presenting this might be making something similar to table 1 of [17]. 3. What does the **Median** column in Table 2 and 3 represent? There is no explanation given for this in paper. 4. I assume the 4 surrogate strategies are: S-Acc, Cross-Acc, Ent and Con. If so, then the statement **“While EATA is significantly the best under the oracle selection strategy (49.99 on average) it is outperformed for example by Tent (5th method using oracle selection) when using 3 out of 4 surrogate-based metrics”** is clearly False according to the last section of Table 2: Tent > EATA on Cross-Acc and Con, but EATA > Tent when using S-Acc and Ent. 5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
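As a concrete reference for the surrogate-based selection discussed in this review, one unsupervised surrogate of the kind the "Ent" column presumably denotes is the average prediction entropy on unlabeled target data, with the candidate adapted model that attains the lowest entropy being selected. This is a generic sketch of that idea (model and loader names are placeholders), not the paper's implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_prediction_entropy(model, loader, device="cpu"):
    """Average Shannon entropy of softmax predictions on unlabeled target batches."""
    model.eval().to(device)
    total, n = 0.0, 0
    for batch in loader:
        x = batch[0] if isinstance(batch, (list, tuple)) else batch
        p = F.softmax(model(x.to(device)), dim=1)
        total += float(-(p * p.clamp_min(1e-12).log()).sum(dim=1).sum())
        n += x.size(0)
    return total / max(n, 1)

# Select the candidate TTA configuration with the lowest average entropy:
# best_model = min(candidate_models, key=lambda m: mean_prediction_entropy(m, target_loader))
```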
MY8SBpUece
ICLR_2024
Weakness: 1. Based on my understanding, the core advantage of the proposed analysis comes from the Hermite expansion of the activation layer, which can characterize higher-order nonlinearity and explain more non-linear behaviors than the orthogonal decomposition used in Ba et al. 2022. Please clarify this. 2. The required condition on the learning rate (scaling with the number of samples) is not scalable. I have never seen a step size grow with the sample size in practice, which will lead to an unreasonably large learning rate when learning on a large-scale dataset. I understand the authors need a way to precisely characterize the benefit of large learning rates, but this condition itself is not realistic.
2. The required condition on the learning rate (scaling with the number of samples) is not scalable. I have never seen a step size grow with the sample size in practice, which will lead to an unreasonably large learning rate when learning on a large-scale dataset. I understand the authors need a way to precisely characterize the benefit of large learning rates, but this condition itself is not realistic.
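For reference on point 1 of this review: the Hermite expansion in question is the standard orthogonal expansion of the activation under the Gaussian measure, written here with probabilists' Hermite polynomials (the normalisation used in the paper and in Ba et al. 2022 may differ):

```latex
\sigma(z) \;=\; \sum_{k \ge 0} c_k \, \mathrm{He}_k(z),
\qquad
c_k \;=\; \frac{1}{k!}\, \mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\big[\sigma(z)\, \mathrm{He}_k(z)\big],
\qquad
\mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\big[\mathrm{He}_j(z)\, \mathrm{He}_k(z)\big] \;=\; k!\, \delta_{jk}.
```

Keeping only the k = 0, 1 terms recovers a linear-plus-constant approximation of the activation, which is loosely why retaining the higher-order coefficients can capture nonlinear behaviour that a first-order decomposition misses.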
NIPS_2019_134
NIPS_2019
Weakness: 1. Although these tensor networks can be used to represent the PMF of discrete variables, it is not clear how these results are useful to machine learning algorithms or for analyzing such algorithms. Hence, the significance of this paper is poor. 2. The two experiments are both based on very small datasets, either generated or realistic. The evaluation is performed on KL-divergence or NLL, which only show how well the model can fit the data, rather than generalization performance. How are these results useful for machine learning? In addition, MPS, BM, and LPS are quite similar in structure. Many well-known tensor models, such as CP, Tucker, and Hierarchical Tucker, are not compared. There are also more complicated models like MERA and PEPS. I have read the authors' rebuttal; they addressed some of the questions well. But generalization is not considered, so it becomes a standard non-negative tensor factorization problem on the PMF data. Hence, I will keep the original score.
1. Although these tensor networks can be used to represent the PMF of discrete variables, it is not clear how these results are useful to machine learning algorithms or for analyzing such algorithms. Hence, the significance of this paper is poor.
ICLR_2021_1181
ICLR_2021
1. For domain adaptation in the NLP field, powerful pre-trained language models, e.g., BERT and XLNet, can overcome the domain-shift problem to some extent. Thus, these should be used as the base encoder for all methods, and the efficacy of the transfer parts should then be compared, instead of using the simplest n-gram features. 2. The whole procedure is slightly complex. The authors formulate the prototypical distribution as a GMM, which has high algorithmic complexity. However, a formal complexity analysis is absent. The authors should provide an analysis of the time complexity and training time of the proposed SAUM method compared with other baselines. Besides, a statistical significance test is absent for the performance improvements. 3. The motivation of learning a large margin between different classes is exactly discriminative learning, which is not novel when combined with domain adaptation methods and has already been proposed in the existing literature, e.g., Unified Deep Supervised Domain Adaptation and Generalization, Saeid et al., ICCV 2017. Contrastive Adaptation Network for Unsupervised Domain Adaptation, Kang et al., CVPR 2019 Joint Domain Alignment and Discriminative Feature Learning for Unsupervised Deep Domain Adaptation, Chen et al., AAAI 2019. However, this paper lacks detailed discussions and comparisons with existing discriminative feature learning methods for domain adaptation. 4. The unlabeled data (2000) from the preprocessed Amazon review dataset (Blitzer version) is perfectly balanced, which is impractical in real-world applications. Since we cannot control the label distribution of unlabeled data during training, the authors should also use a more convincing setting, as done in Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018, which directly samples unlabeled data from millions of reviews. 5. The paper lacks some related work about cross-domain sentiment analysis, e.g., End-to-end adversarial memory network for cross-domain sentiment classification, Li et al., IJCAI 2017 Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018 Hierarchical attention transfer network for cross-domain sentiment classification, Li et al., AAAI 18 Questions: 1. Have the authors conducted significance tests for the improvements? 2. How fast does this algorithm run or train compared with other baselines?
4. The unlabeled data (2000) from the preprocessed Amazon review dataset (Blitzer version) is perfectly balanced, which is impractical in real-world applications. Since we cannot control the label distribution of unlabeled data during training, the authors should also use a more convincing setting, as done in Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018, which directly samples unlabeled data from millions of reviews.
NIPS_2019_1158
NIPS_2019
1. The proposed method only gets convergence rate in expectation (i.e. only variance bound), not with high probability. Though Chebyshev's inequality gives bound in probability from the variance bound, this is still weaker than that of Bach [3]. 2. The method description lacks necessary details and intuition: - It's not clear how to get/estimate the mean element mu_g for different kernel spaces. - It's not clear how to sample from the DPP if the eigenfunctions e_n's are inaccessible (Eq (10) line 130). This seems to be the same problem with sampling from the leverage score in [3], so I'm not sure how sampling from the DPP is easier than sampling from the leverage score. - There is no intuition why DPP with that particular repulsion kernel is better than other sampling schemes. 3. The empirical results are not presented clearly: - In Figure 1: what is "quadrature error"? Is it the sup of error over all possible integrand f in the RKHS, or for a specific f? If it's the sup over all f, how does one get that quantity for other methods such as Bayesian quadrature (which doesn't have theoretical guarantee). If it's for a specific f, which function is it, and why is the error on that specific f representative of other functions? Other comment: - Eq (18), definition of principal angle: seems to be missing absolute value on the right hand side, as it could be negative. Minor: - Reference for Kernel herding is missing [?] - Line 205: Getting of the product -> Getting rid of the product - Please ensure correct capitalization in the references (e.g., [1] tsp -> TSP, [39] rkhss -> RKHSs) [3] F. Bach. On the equivalence between kernel quadrature rules and random feature expansions. The Journal of Machine Learning Research, 18(1):714–751, 2017. ===== Update after rebuttal: My questions have been adequately addressed. The main comparison in the paper seems to be the results of F. Bach [3]. Compared to [3], I do think the theoretical contribution (better convergence rate) is significant. However, as the other reviews pointed out, the theoretical comparison with Bayesian quadrature is lacking. The authors have agreed to address this. Therefore, I'm increasing my score.
- It's not clear how to sample from the DPP if the eigenfunctions e_n's are inaccessible (Eq (10) line 130). This seems to be the same problem with sampling from the leverage score in [3], so I'm not sure how sampling from the DPP is easier than sampling from the leverage score.
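For context on this comparison with [3]: the leverage scores used there have a common finite-sample analogue, the kernel ridge leverage scores, sketched below; whether sampling from these is any easier or harder than sampling the proposed DPP is exactly the question the review raises. This is a generic sketch under that assumed correspondence, not either paper's code.

```python
import numpy as np

def kernel_ridge_leverage_scores(K, lam):
    """Finite-sample ridge leverage scores l_i = [K (K + n*lam*I)^{-1}]_{ii};
    normalising them gives a sampling distribution over candidate quadrature nodes."""
    n = K.shape[0]
    L = K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))
    return np.diag(L).copy()

X = np.random.randn(200, 3)
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)  # RBF Gram matrix
scores = kernel_ridge_leverage_scores(K, lam=1e-3)
probs = scores / scores.sum()
nodes = np.random.default_rng(0).choice(len(probs), size=20, replace=False, p=probs)
```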
ICLR_2021_1682
ICLR_2021
+ The value of episodic training is increasingly being questioned, and the submission approaches the topic from a new and interesting perspective.
+ The connection between nearest-centroid few-shot learning approaches and NCA has not been made in the literature to my knowledge and has potential applications beyond the scope of this paper.
+ The paper is well-written, easy to follow, and well-connected to the existing literature.
- The extent to which the observations presented generalize to few-shot learners beyond Prototypical Networks is not evaluated, which may limit the scope of the submission’s contributions in terms of understanding the properties of episodic training.
- The Matching Networks / NCA connection makes more sense in my opinion than the Prototypical Networks / NCA connection.
- A single set of hyperparameters was used across learners for a given benchmark, which can bias the conclusions drawn from the experiments.
Recommendation
I’m leaning towards acceptance. I have some issues with the submission that are detailed below, but overall the paper presents an interesting take on a topic that’s currently very relevant to the few-shot learning community, and I feel that the value it brings to the conversation is sufficient to overcome the concerns I have.
Detailed justification
The biggest concern I have with the submission is methodological. On the one hand, the authors went beyond the usual practice of reporting accuracies on a single run and instead trained each method with five different random initializations, and this is a practice that I’m happy to see in a few-shot classification paper. On the other hand, the choice to share a single set of hyperparameters across learners for a given benchmark leaves a blind spot in the evaluation. What if Prototypical Networks are more sensitive to the choice of optimizer, learning rate schedule, and weight decay coefficient than NCA? Is it possible that the set of hyperparameters chosen for the experiments happens to work poorly for Prototypical Networks? Would we observe the same trends if we tuned hyperparameters independently for each experimental setting? In its current form the submission shows that Prototypical Networks are sensitive to the hyperparameters used to sample episodes while keeping other hyperparameters fixed, but showing the same trend while making a reasonable effort at tuning other hyperparameters would make for a more convincing argument. This is why I take the claim made in Section 4.2 that "NCA performs better than all PN configurations, no matter the batch size" with a grain of salt, for instance. I also feel that the submission misses out on an opportunity to support a more general statement about episodic training via observations on approaches such as Matching Networks, MAML, etc. I really like the way Figure 1 explains visually how Prototypical Networks miss out on useful relationships between examples in a batch and are therefore data-inefficient. To me, this is one of the submission’s most important contributions: the suggestion that a leave-one-out strategy could allow episodic approaches to achieve the same kind of data efficiency as non-episodic approaches, alleviating the need for a supervised pre-training / episodic fine-tuning strategy. To be clear, I don’t think the missed opportunity would be a reason to reject the paper, but I think that showing empirically that the leave-one-out strategy applies beyond Prototypical Networks would make me lean more strongly towards acceptance.
The connection drawn between Prototypical Networks and NCA feels forced at times. In the introduction the paper claims to "show that, without episodic learning, Prototypical Networks correspond to the classic Neighbourhood Component Analysis", but Section 3.3 lists the creation of prototypes as a key difference between the two, which is not resolved by training non-episodically. From my perspective, NCA would be more akin to the non-episodic counterpart to Matching Networks without Full Contextual Embeddings – albeit with a Euclidean metric rather than a cosine similarity metric – since both perform comparisons on example pairs. This relationship with Matching Networks could be exploited to improve clarity. For instance, row 6 of Figure 4 can be interpreted as a Matching Networks implementation with a Euclidean distance metric. With this in mind, could the difference in performance between "1-NN with class centroids" and k-NN / Soft Assignment noted in Section 4.1 – as well as the drop in performance observed in Figure 4’s row 6 – be explained by the fact that a (soft) nearest-neighbour approach is more sensitive to outliers?
Finally, I have some issues with how results are reported in Tables 1 and 2. Firstly, we don’t know how competing approaches would perform if we applied the paper’s proposed multi-layer concatenation trick, and the idea itself feels more like a way to give NCA’s performance a small boost and bring it into SOTA-like territory. Comparing NCA without multi-layer against other approaches is therefore more interesting to me. Secondly, 95% confidence intervals are provided, but the absence of identification of the best-performing approach(es) in each setting makes it hard to draw high-level conclusions at a glance. I would suggest bolding the best accuracy in each column along with all other entries for which a 95% confidence interval test on the difference between the means is inconclusive in determining that the difference is significant.
Questions
- In Equation 2, why is the sum normalized by the total number of examples in the episode rather than the number of query examples?
- Can the authors comment on the extent to which Figure 2 supports the hypothesis that NCA is better for training because it learns from a larger number of positives and negatives? Assuming this is true, we should see that Prototypical Networks configurations that increase the number of positives and negatives should perform better for a given batch size. Does Figure 2 support this assertion?
- Can the authors elaborate on the "no S/Q" ablation (Figure 4, row 7)? What is the point of reference when computing distances for support and query examples? Is the loss computed in the same way for support and query examples? The text in Section 4.3 makes it appear like the loss for query examples is the NCA loss, but the loss for support examples is the prototypical loss.
- Wouldn’t it be conceptually cleaner to compute leave-one-out prototypes, i.e. leave each example out of the computation of its own class’ prototype (resulting in slightly different prototypes for examples of the same class)? In my mind, this would be the best way to remove the support/query partition while maintaining prototype computation, thereby showing that the partition is detrimental to Prototypical Networks training.
Additional feedback
This is somewhat inconsequential, but across all implementations of episodic training that I have examined I haven’t encountered an implementation that uses a flag to differentiate between support and query examples. Instead, the implementations I have examined explicitly represent support and query examples as separate tensors. I was therefore surprised to read that "in most implementations [...] each image is characterised by a flag indicating whether it corresponds to the support or the query set [...]"; can the authors point to the implementations they have in mind when making that assertion?
I would be careful with the assertion that "during evaluation the triplet {w, n, m} [...] must stay unchanged across methods". While this is true for the benchmarks considered in this submission, benchmarks like Meta-Dataset evaluate on variable-ways and variable-shots episodes.
I’m not too concerned with the computational efficiency of NCA. The pairwise Euclidean distances can be computed efficiently using the inner- and outer-product of the batch of embeddings with itself.
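For readers unfamiliar with the trick mentioned in the last remark, here is a minimal sketch (plain NumPy, not code from the submission) of computing all pairwise squared Euclidean distances for a batch of embeddings from its Gram matrix:

```python
# Minimal sketch: pairwise squared Euclidean distances from a batch of
# embeddings, using ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>.
# Illustrative only; the batch size and dimensionality below are arbitrary.
import numpy as np

def pairwise_sq_dists(X):
    """X: (batch_size, dim) array of embeddings."""
    sq_norms = np.sum(X ** 2, axis=1)                # (B,) squared norms
    gram = X @ X.T                                   # (B, B) inner products
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * gram
    return np.maximum(d2, 0.0)                       # clip tiny negatives from rounding

# Example usage with a random batch of 8 embeddings of dimension 64.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 64))
D2 = pairwise_sq_dists(X)
print(D2.shape)  # (8, 8); the diagonal is numerically zero
```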
- The extent to which the observations presented generalize to few-shot learners beyond Prototypical Networks is not evaluated, which may limit the scope of the submission’s contributions in terms of understanding the properties of episodic training.
NIPS_2021_2152
NIPS_2021
Weakness:
1. This manuscript reads more like an experimental discovery paper, and the proposed method is similar to the traditional removal method, i.e., traversing all the modality feature subsets, calculating the perceptual score, and removing the last subset. The reviewer believes that the contribution of the manuscript still has room for improvement.
2. The contribution of different modalities may differ across instances: e.g., with modalities A and B, for some instances modality A performs well and is the strong modality, whereas for other instances modality B is the strong one. Equation 3 directly removes the same modality subset for all instances. How do the authors deal with the problem mentioned above?
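To make the per-instance concern concrete, below is a hypothetical sketch of instance-level modality ablation. It is not the paper's Equation 3 or its perceptual score; the `model` object, its `predict_proba` interface, and the dictionary-of-modalities batch format are all assumed placeholders for illustration only.

```python
# Hypothetical sketch (NOT the paper's method): for each instance, mask one
# modality at a time and record which modality's removal hurts the
# prediction confidence most, giving a per-instance "strong" modality.
import numpy as np

def strong_modality_per_instance(model, batch, modalities=("A", "B")):
    """batch: dict mapping modality name -> (N, ...) feature array."""
    base = model.predict_proba(batch)                     # (N, num_classes), assumed API
    labels = base.argmax(axis=1)
    idx = np.arange(len(labels))
    drops = []
    for m in modalities:
        masked = {k: (np.zeros_like(v) if k == m else v)  # zero out one modality
                  for k, v in batch.items()}
        probs = model.predict_proba(masked)
        # Confidence drop on the originally predicted class, per instance.
        drops.append(base[idx, labels] - probs[idx, labels])
    drops = np.stack(drops, axis=1)                       # (N, num_modalities)
    return [modalities[i] for i in drops.argmax(axis=1)]
```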
2. The contribution of different modalities may differ across instances: e.g., with modalities A and B, for some instances modality A performs well and is the strong modality, whereas for other instances modality B is the strong one. Equation 3 directly removes the same modality subset for all instances. How do the authors deal with the problem mentioned above?
NIPS_2018_232
NIPS_2018
- Strengths: the paper is well-written and well-organized. It clearly positions the main idea and proposed approach relative to existing work and experimentally demonstrates the effectiveness of the proposed approach in comparison with the state-of-the-art.
- Weaknesses: the research method is not very clearly described in the paper or in the abstract. The paper lacks a clear assessment of the validity of the experimental approach, the analysis, and the conclusions.
Quality
- Your definition of interpretable (human simulatable) focuses on the extent to which a human can perform and describe the model calculations. This definition does not take into account our ability to make inferences or predictions about something as an indicator of our understanding of, or our ability to interpret, that something. Yet, regarding your approach, you state that you are “not trying to find causal structure in the data, but in the model’s response” and that “we can freely manipulate the input and observe how the model response changes”. Is your chosen definition of interpretability too narrow for the proposed approach?
Clarity
- Overall, the writing is well-organized, clear, and concise.
- The abstract does a good job explaining the proposed idea but lacks a description of how the idea was evaluated and what the outcome was.
Minor language issues
- p. 95: “from from” -> “from”
- p. 110: “to to” -> “how to”
- p. 126: “as way” -> “as a way”
- p. 182: “can sorted” -> “can be sorted”
- p. 197: “on directly on” -> “directly on”
- p. 222: “where want” -> “where we want”
- p. 245: “as accurate” -> “as accurate as”
- Tab. 1: “square” -> “squared error”
- p. 323: “this are features” -> “this is features”
Originality
- The paper builds on recent work in IML and combines two separate lines of existing work: the work by Bloniarz et al. (2016) on supervised neighborhood selection for local linear modeling (denoted SILO) and the work by Kazemitabar et al. (2017) on feature selection (denoted DStump). The framing of the problem, the combination of existing work, and the empirical evaluation and analysis appear to be original contributions.
Significance
- The proposed method is compared to a suitable state-of-the-art IML approach (LIME) and outperforms it on seven out of eight data sets.
- Some concrete illustrations of how the proposed method produces explanations, from a user perspective, would likely make the paper more accessible for researchers and practitioners at the intersection between human-computer interaction and IML. You propose a “causal metric” and use it to demonstrate that your approach achieves “good local explanations”, but from a user or human perspective it might be difficult to be convinced of the interpretability in this way alone.
- The experiments conducted demonstrate that the proposed method is indeed effective with respect to both accuracy and interpretability, at least for a significant majority of the studied datasets.
- The paper points out two interesting directions for future work, which are likely to seed future research.
- The abstract does a good job explaining the proposed idea but lacks a description of how the idea was evaluated and what the outcome was.
NIPS_2018_639
NIPS_2018
Weakness:
- I am not quite convinced by the experimental results of this paper. The paper sets out to solve POMDP problems with non-convex value functions. To motivate the case for their solution, the examples of POMDP problems with non-convex value functions used are: (a) surveillance in museums with thresholded rewards; (b) privacy-preserving data collection. So then the first question is: if the cases we are trying to solve are the two above, why is there not a single experiment on such a setting, not even a simulated one? This basically makes the experiments section not quite useful.
- How does the reader know that the reward definitions of rho for these tasks necessitate a non-convex reward function? Surveillance and data collection have been studied in the POMDP context by many papers. Fortunately/unfortunately, many of these papers show that the increase in reward due to a rho-based PWLC reward, in comparison to a corresponding PWLC state-based reward (R(s,a)), is not that big. (Papers from Mykel Kochenderfer, Matthijs Spaan, and Shimon Whiteson are some I can remember off the top of my head.) The related work section, while missing from the paper, should (if it existed) cover papers from these groups, some on exactly the same topics (surveillance and data collection).
- This basically means that we have devised a new method for solving non-convex value function POMDPs, but do we really need to do all that work? The current version of the paper does not answer this question for me. Also, a follow-up question would be: in exactly what situations would I want to use the methodology proposed by this paper vs the existing methods? In terms of criticism of significance, the above points can be summarized as: why should I care about this method when I do not see results on the problems the method is supposedly designed for?
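For context on the state-based vs. rho-based reward distinction the reviewer raises, a generic sketch in standard POMDP notation (not the paper's notation) is:

```latex
% The reward a standard POMDP induces on beliefs is linear in the belief b,
%   r(b, a) = \sum_{s \in S} b(s)\, R(s, a),
% which is what keeps optimal value functions piecewise-linear and convex (PWLC).
% A rho-POMDP instead allows a belief-dependent reward \rho(b, a)
% (e.g., a thresholded detection or information-gathering reward), which need
% not be linear in b, so the resulting value function need not be PWLC --
% the regime the reviewed paper targets.
r(b, a) = \sum_{s \in S} b(s)\, R(s, a), \qquad
\rho(b, a) \neq \sum_{s \in S} b(s)\, R(s, a) \ \text{in general.}
```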
- I am not quite convinced by the experimental results of this paper. The paper sets out to solve POMDP problems with non-convex value functions. To motivate the case for their solution, the examples of POMDP problems with non-convex value functions used are: (a) surveillance in museums with thresholded rewards; (b) privacy-preserving data collection. So then the first question is: if the cases we are trying to solve are the two above, why is there not a single experiment on such a setting, not even a simulated one? This basically makes the experiments section not quite useful.