paper_id | venue | focused_review | point |
---|---|---|---|
ARR_2022_219_review | ARR_2022 | - The paper hypothesizes that SimCSE suffers from the cue of sentence length and syntax. However, the experiments only target sentence length, not syntax. - The writing of this paper could benefit from some work (see more below). Specifically, I find Section 3 difficult to understand as someone who does not directly work on this task. In particular, a good amount of terminology is introduced without explanation. I suggest a thorough rewrite of this section to make it more easily understandable by general NLP researchers.
- Though a universal issue in related papers and should not be blamed on the authors, why only consider BERT-base? It is known that other models such as BERT-large, RoBERTa, DeBERTa, etc. could produce better embeddings, and that the observations in these works do not necessarily hold in those larger and better models. - The introduction of the SRL-based discrete augmentation approach (line 434 onwards) is unclear and cannot be possibly understood by readers without any experience in SRL. I suggest at least discussing the following: - Intuitively why relying on semantic roles is better than work like CLEAR - What SRL model you use - What the sequence "[ARG0, PRED, ARGM − NEG, ARG1]" mean, and what these PropBank labels mean - What is your reason of using this sequence as opposed to alternatives
- (Line 3-6, and similarly in the Intro) The claim is a finding of the paper, so best prefix the sentence with "We find that". Or, if it has been discussed elsewhere, provide citations. - (7): semantics-aware or semantically-aware - (9-10): explore -> exploit - (42): works on -> works by - Figure 1 caption: embeddings o the : typo - Figure 1 caption: "In a realistic scenario, negative examples have the same length and structure, while positive examples act in the opposite way." I don't think this is true. Positive or negative examples should have similar distribution of length and structure, so that they don't become a cue during inference. - (99): the first mention of "momentum-encoder" in the paper body should immediately come with citation or explanation. - (136): abbreviations like "MoCo" should not appear in the section header, since a reader might not know what it means. - (153): what is a "key"?
- (180): same -> the same - (186-198): I feel that this is a better paragraph describing the existing issue and motivation than (63-76). I would suggest moving it to the Intro and just briefly re-mentioning the issue here in Related Work. - (245-246): could be -> should be - (248): a -> another - (252-55): Isn't it obvious that "positive examples in SimCSE have the same length", since SimCSE encodes the same sentence differently as positive examples? How does this need "Through careful observation"?
- (288): "textual similarity" should be "sentence length and structure"? Because the models are predicting textual similarity, after all.
- (300-301): I don't understand "unchangeable syntax" - (310-314): I don't understand "query", "key and value". What do they mean here? Same for "core", "pseudo syntactic". - (376): It might be worth mentioning that SimCSE is the state-of-the-art method mentioned in the Abstract. - (392): Remove "In this subsection" | - (136): abbreviations like "MoCo" should not appear in the section header, since a reader might not know what it means. |
QQvhOyIldg | ICLR_2025 | 1. This paper is poorly written & presented. A lot of the content can be found in undergraduate textbooks. A substantial part of the results are informal versions, e.g., Lemmas 6.1 - 6.3. Also, there is hardly any interpretation of the main results. The presentation style does not seem to be serious.
2. The technical contribution is unclear. Most of the analysis is quite standard.
3. There are no numerical experiments to verify its application on real-world datasets. | 2. The technical contribution is unclear. Most of the analysis is quite standard. |
xtOydkE1Ku | ICLR_2024 | - The core innovation claimed by the paper is the reduction in computational complexity through a two-stage solution, first estimating marginals and then dependencies. However, this approach isn't novel, as seen in references [1,2]. The paper would benefit from a clearer distinction of how its methodology differs significantly from these existing methods.
- The paper's primary contribution seems to be an incremental advancement in efficiency over the TACTiS approach. More substantial evidence or arguments are needed to establish this as a significant contribution to the field.
- When evaluating the model's efficacy, the improvement in terms of Negative Log-Likelihood (NLL) is notable. However, the Mean Continuous Ranked Probability Score (CRPS) metric indicates that these improvements are only marginal when compared to the TACTiS model.
[1] Andersen, Elisabeth Wreford. "Two-stage estimation in copula models used in family studies." Lifetime Data Analysis 11 (2005).
[2] Joe, Harry. "Asymptotic efficiency of the two-stage estimation method for copula-based models." Journal of Multivariate Analysis 94.2 (2005). | - The paper's primary contribution seems to be an incremental advancement in efficiency over the TACTiS approach. More substantial evidence or arguments are needed to establish this as a significant contribution to the field. |
aRlH9AkiEA | EMNLP_2023 | 1. It's still unclear how topic entities can improve the relationship representations. This claim is less intuitive.
2. The improvements on different datasets are trivial and the novelty of this paper is limited. Lots of previous works focus on this topic. Just adding topic entities seems incremental.
3. Missed related work.
4. Some methods (e.g., KnowBERT, CorefBERT) in related work are not selected as baselines for comparison. | 2. The improvements on different datasets are trivial and the novelty of this paper is limited. Lots of previous works focus on this topic. Just adding topic entities seems incremental. |
yCAigmDGVy | ICLR_2025 | 1. As the paper primarily focuses on applying quantum computing to global Lipschitz constant estimation, it is uncertain whether the ICLR community will find this topic compelling.
2. The paper lacks discussion on the theoretical guarantee about the approximation ratio of the hierarchical strategy to the global optimal of original QUBO.
3. The experimental results are derived entirely from simulations under ideal conditions, without consideration for practical aspects of quantum devices such as finite shots, device noise, and limited coherence time. These non-ignorable imperfections could significantly impact the quality of solutions obtained from quantum algorithms in practice. | 2. The paper lacks discussion on the theoretical guarantee about the approximation ratio of the hierarchical strategy to the global optimal of original QUBO. |
NIPS_2022_246 | NIPS_2022 | Weakness: 1) Generally lacking a quantitative measure to evaluate the generated VCEs. Evaluation is mainly performed with visual inspection. 2) While the integration of the cone projection is shown to be helpful, it is not clear why this particular projection is chosen. Are there other projections that are also helpful? Is there a theoretical proof that this cone projection resolves the noise of the gradients in non-robust classifiers?
Overall, I think the proposed technique yields better VCEs and is interesting to the community. I also think that the strengths outweigh the weaknesses. However, I would be open to hearing other reviewers' opinions here. | 1) Generally lacking a quantitative measure to evaluate the generated VCEs. Evaluation is mainly performed with visual inspection. |
NIPS_2021_275 | NIPS_2021 | weakness Originality
+ Novel setting. As far as I am aware, the paper proposes a novel setting - Few-shot Hypothesis Adaptation (FHA) - a combination of existing problems - Hypothesis Transfer Learning and the Few-Shot Domain Adaptation.
+/- Somewhat novel method. As far as I am aware, the paper also proposes a novel method - TOHAN - which is a minor adaptation of FADA [23] into the setting. The method architecture seems to be heavily inspired by FADA [23]. Unlike FADA, TOHAN generates and uses the generated intermediate domain instead of the original source domain. However, apart from that, it is not entirely clear what the technical differences are between these methods. Which line in Algorithm 1 would be different for FADA / S-FADA / T-FADA / ST-FADA?
- The relation between FHA and FSL is poorly explained, and FSL is poorly cited. I found the sentence in lines 88-89 particularly vague and poorly written. In addition, line 90 that compares TSN [37] and ProtoNets [29] ("which is relatively weaker than the former") brings little value to the paper. In my humble opinion, these methods were designed for two different problems (i.e. video action classification [37] and single image classification [29]), and it is inappropriate to compare them in this way. The authors might find the following works from FSL literature more related to their work: (Antoniou et al., 2018), (Hariharan & Girshick, 2017), (Wang et al., 2018). Generally, these methods also generate/hallucinate samples from limited target-domain data and should be cited. Quality
+ Some good experiments. The paper performs a solid number of comparisons with representative baselines from HTL and FDA literature ([19] and [23], respectively) and basic fine-tuning baseline proposed by the authors.
+/- Work seems grounded in some theoretical work. However, I did not attempt to verify the proof, so I cannot comment on its correctness.
- Only marginal improvements over baselines, mostly within the error bar range. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant.
- There might be a fundamental flaw in the claim of full data privacy. The authors claim that the TOHAN method (and therefore also the FHA problem) "strictly" protects the privacy of the source domain by using the source hypothesis rather than the source data itself (lines 245-247). However, the claim that knowledge is "completely" inaccessible may be false. For instance, Ateniese et al. (2015) have shown that it is possible to extract certain types of information from pretrained classifiers and models. In a more recent privacy analysis of deep learning, Nasr et al. (2019) suggest that even well-trained models might leak a significant amount of information about the data they were trained on. Moreover, the proposed TOHAN relies on the leaked source-domain knowledge to generate appropriate source-domain data. Therefore, it seems to me that the claim that TOHAN is privacy-secure is completely false.
- Lack of sufficient evidence for no source domain features in intermediate data. The authors claim that "high-level, visual and useful features of source domain are rare in the generated intermediate data (Figure 6)" (lines 243-244); however, no empirical value is provided to support this claim (i.e. what do the authors mean by "rare" in this context? how rare is "rare"?). Figure 6a/b does not support this claim, as it shows only 4 intermediate-domain images and no examples of the original source-domain data. This is insufficient evidence to draw the conclusion in lines 243-244, and in fact, it points to the contrary. Clarity
- The paper is poorly written. The paper is not particularly easy to read. It contains many spelling and grammatical errors (see below for proposed corrections). There is a large number of acronyms (e.g. FHA, SHOT, HTL, ST+F etc.) and some confusing mathematical notation (e.g. D, D, X and X refer to different entities), which make this work more confusing to read. The notation in the figure is inconsistent with the main text (it uses X instead of D). Significance
- The paper presents a novel and interesting problem but could be flawed. This novel problem and method could have important consequences in the context of data privacy - however, to me, the idea seems fundamentally flawed (see my comments in the "Quality" subsection, or "Limitations And Societal Impact").
- The TOHAN improvements over baselines are mostly marginal. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant and the improvements are marginal.
Spelling, Grammar, and Other Possible Mistakes.
- Line 32, grammar: "face of generation" --> "face generation"
- Figure 2 caption, grammar: "which two data come from ... " --> "where the two data points come from ... "
- Line 128, wrong word: "dependents" --> "depends"
- Line 167, redundant word: "regarding to the ..." --> "regarding the ... "
- Line 210, redundant words: "which confuses D unable to distinguish between ..." --> "which confuses D between ... "
- Line 213, wrong word: "initial" --> "initialize" References
Ateniese et al., 2015, "Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers", International Journal of Security and Networks, Volume 10, Issue 3
Nasr et al., 2019, "Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning," 2019 IEEE Symposium on Security and Privacy
Antoniou et al., 2018, "Data Augmentation Generative Adversarial Networks", (ICLR 2018 Workshop)
Hariharan & Girshick, 2017, "Low-shot Visual Recognition by Shrinking and Hallucinating Features" (ICCV 2017)
Wang et al., 2018, "Low-shot learning from imaginary data" (CVPR 2018) POST-REBUTTAL
After a detailed discussion with the authors, I decided to increase my original rating from 3 to 7. The initial low rating was due to initially hidden assumptions and a poorly defined scope of data privacy which are central in the paper. These have been discussed and clarified by the authors during the rebuttal. The authors also addressed my concerns regarding the novelty and source-domain leakage into the intermediate domain. The authors have agreed to improve the clarity, literature review, dampen down on the privacy claims, and include additional experiments, and I am happy to increase the rating. Although the privacy claims are now not as strong as originally claimed (e.g. the method does not guard against source-information leakage, but rather shelters individual source data points from a possible data leakage), the paper still opens up an interesting area of research and presents a novel method that will likely attract attention in the community.
The authors claim that their method allows for full privacy of the source domain by relying on a well-trained source domain classifier. However, this seems to be based on the false assumption that the source domain classifier does not leak any private information. Recent studies (for example, Nasr et al. (2019) and Ateniese et al. (2015)) suggest that there might be a significant amount of source-domain information that pre-trained models leak. Moreover, TOHAN relies on leaking information from the source domain to generate realistic/compatible intermediate-domain data. This completely invalidates one of the main claims of the paper that private information is protected. | - Only marginal improvements over baselines, mostly within the error bar range. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant. |
NIPS_2016_69 | NIPS_2016 | - The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: - What is the size of the beach/golf course/train station/hospital datasets? - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset. | - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: |
ICLR_2023_4659 | ICLR_2023 | Weakness: 1. It would make this paper stronger if the authors could show the adversarial robustness of some SOTA defended recognition models on the new set. 2. I would like to see clearer details of how to use DALL-E2 or stable diffusion models to generate hard examples, e.g., how to design prompts and how to filter out some unrealistic images. 3. The new dataset has some great properties. However, how to make it more scalable to real applications besides evaluating the current models trained on a public dataset? What if we have some new classes in our task, but they are not included in this set? 4. It is still unclear how to make the new proposed evaluation set more diverse and representative than the previous method and how to select those representative images. | 4. It is still unclear how to make the new proposed evaluation set more diverse and representative than the previous method and how to select those representative images. |
Y4iaDU4yMi | ICLR_2025 | - The paper's presentation is difficult to follow, with numerous typos and inconsistencies in notation. For example:
- Line 84, "In summery" -> "In summary".
- In Figure 1, "LLaVA as dicision model" -> "LLaVA as decision model."
- Line 215, "donate" should be "denote"; additionally, $\pi_{ref}$ is duplicated.
- The definitions of subscripts and superscripts for action (i.e., $a_t^1$ and $a_t^2$) in line 245 and in Equations (4), (6), (7), (8), and (9) are inconsistent.
- Line 213 references the tuple with $o_t$, but it is unclear where $s_{1 \sim t-1}$ originates.
- The authors should include a background section to introduce the basic RL framework, including elements of the MDP, trajectories, and policy, to clarify the RL context being considered. Without this, it is difficult to follow the subsequent sections. Additionally, a brief overview of the original DPO algorithm should be provided so that modifications proposed in the methods section are clearly distinguishable.
- In Section 3.1, the authors state that the VLM is used as a planner; however, it is unclear how the plan is generated. It appears that the VLM functions directly as a policy, outputting final actions to step into the environment, as illustrated in Figure 1. Thus, it may be misleading to frame the proposed method as a "re-planning framework" (line 197). Can the authors clarify this point?
- What type of action space does the paper consider? Is it continuous or discrete? If it is discrete, how is the MSE calculated in Eq. (7)?
- In line 201, what does "well-fine-tuned model" refer to? Is this the VLM fine-tuned by the proposed method?
- Throughout the paper, what does $\tau_t^{t-1}$ represent? | - The authors should include a background section to introduce the basic RL framework, including elements of the MDP, trajectories, and policy, to clarify the RL context being considered. Without this, it is difficult to follow the subsequent sections. Additionally, a brief overview of the original DPO algorithm should be provided so that modifications proposed in the methods section are clearly distinguishable. |
Gzuzpl4Jje | EMNLP_2023 | 1. The original tasks' performance degrades to some extent and underperforms the Adapter baseline, which indicates the negative influence of removing some parts of the original networks.
2. The proposed method may encounter a limitation if the users continuously add new languages because of the limited model capacity. | 2. The proposed method may encounter a limitation if the users continuously add new languages because of the limited model capacity. |
NIPS_2020_593 | NIPS_2020 | - Line 54: Is 'interpretable' program relevant to the notion described in the work of 'Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608'? - Line 19, 37, 39: A reference for the 'Influence maximization' problem may be provided. The distribution may be more formally given (e.g. which p_{ij} sum to 1). To be able to refer to the joint distributions, there should be a more concrete statement of the p_{ij}. Or maybe a preamble of line 103. - Line 52: Some more details about the polynomial time character of the formulation may clarify your statement about the LP. - Line 103: The strategy space of the adversary implied in the equation is strongly pessimistic (why consider all possible correlations?). This can be used in a follow up work. It seems that it does not reduce the value of the current model. | - Line 54: Is 'interpretable' program relevant to the notion described in the work of 'Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608'? |
ICLR_2021_2892 | ICLR_2021 | - Proposition 2 seems to lack an argument why Eq 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that’s the case, why eq 16? As a concrete example, let \hat{h}^\theta_lm = 1 if l=m=1 and 0 otherwise, so constant in \theta. The only constant associated Legendre polynomial is P^0_0, so this h is not expressible in eq 16. Instead, it seems like there are additional assumptions necessary on the family of spherical functions h to let the decomposition eq 16, and thus proposition 2, work. Hence, it looks like that proposition 2 doesn’t actually characterize all azimuthal correlations. - In its discussion of SO(3) equivariant spherical convolutions, the authors do not mention the lift to SO(3) signals, which allow for more expressive filters than the ones shown in figure 1. - Can the authors clarify figure 2b? I do not understand what is shown. - The architecture used for the experiments is not clearly explained in this paper. Instead the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained. - The authors appear to not use a fast spherical Fourier transform. Why not? This could greatly help performance. Could the authors comment on the runtime cost of the experiments? - The sampling of the Fourier features to a spherical signal and then applying a point-wise non-linearity is not exactly equivariant (as noted by Kondor et al 2018). Still, the authors note at the end of Sec 6 “This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.”. Perhaps the authors can comment on that? - The experiments are limited to MNIST and a single real-world dataset. - Out of the many spherical CNNs currently in existence, the authors compare only to a single one. For example, comparisons to SO(3) equivariant methods would be interesting. Furthermore, it would be interesting to compare to SO(3) equivariant methods in which SO(3) equivariance is broken to SO(2) equivariance by adding to the spherical signal a channel that indicates the theta coordinate. - The experimental results are presented in an unclear way. A table would be much clearer. - An obvious approach to the problem of SO(2) equivariance of spherical signals, is to project the sphere to a cylinder and apply planar 2D convolutions that are periodic in one direction and not in the other. This suffers from distortion of the kernel around the poles, but perhaps this wouldn’t be too harmful. An experimental comparison to this method would benefit the paper.
Recommendation: I recommend rejection of this paper. I am not convinced of the correctness of proposition 2 and proposition 1 is similar to equivariance arguments made in prior work. The experiments are limited in their presentation, the number of datasets and the comparisons to prior work.
Suggestions for improvement: - Clarify the issue around eq 16 and proposition 2 - Improve presentation of experimental results and add experimental details - Evaluate the model of more data sets - Compare the model to other spherical convolutions
Minor points / suggestions: - When talking about the Fourier modes as numbers, perhaps clarify if these are reals or complex. - In Def 1 in the equation it is confusing to have theta twice on the left-hand side. It would be clearer if h did not have a subscript on the left-hand side. | - The experiments are limited to MNIST and a single real-world dataset. |
ICLR_2022_912 | ICLR_2022 | 1. The paper in general does not read well, and more careful proofreading is needed. 2. In S2D structure, it is not clear why the number of parameters does not change. If the kernel height/width stay the same, then its depth will increase, resulting in more parameters. I agree the efficiency could be improved since the FLOP is quadratic on activation side length. But in terms of parameters, more details are expected. | 2. In S2D structure, it is not clear why the number of parameters does not change. If the kernel height/width stay the same, then its depth will increase, resulting in more parameters. I agree the efficiency could be improved since the FLOP is quadratic on activation side length. But in terms of parameters, more details are expected. |
NIPS_2016_499 | NIPS_2016 | - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring causal predictions and the interventional data. If not, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to [10] but this is achieved by reducing the search space to the ancestral graphs. This means that the output of ACI has less information compared to the output of [10] that has a richer search space, i.e., DAGs. This is the price that has been paid to gain a better performance. How much information of a DAG is encoded in its corresponding ancestral graph? - Second rule in Lemma 2, i.e., Eq (7) and the definition of minimal conditional dependence seem to be conflicting. Taking Z' in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise. | - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring causal predictions and the interventional data. If not, why can [10] not use this side information? |
NIPS_2019_390 | NIPS_2019 | 1. The distinction between modeling uncertainty about the Q-values and modeling stochasticity of the reward (lines 119-121) makes some sense philosophically but the text should make clearer the practical distinction between this and distributional reinforcement learning. 2. It is not explained (Section 5) why the modifications made in Definition 5.1 aren't important in practice. 3. The Atari game result (Section 7.2) is limited to a single game and a single baseline. It is very hard to interpret this. Less major weaknesses: 1. The main text should make it more clear that there are additional experiments in the supplement (and preferably summarize their results). Questions: 1. You define a modified TD learning algorithm in Definition 5.1, for the purposes of theoretical analysis. Why should we use the original proposal (Algorithm 1) over this modified learning algorithm in practice? 2. Does this idea of propagating uncertainty not naturally combine with that of distributional RL, in that stochasticity of the reward might contribute to uncertainty about the Q-value? Typos, etc.: * Line 124, "... when experimenting a transition ..." ---- UPDATE: After reading the rebuttal, I have raised my score. I appreciate that the authors have included additional experiments and have explained further the difference between Definition 5.1 and the algorithm used in practice, as well as the distinction between the current work and distributional RL. I hope that all three of these additions will make their way into the final paper. | 3. The Atari game result (Section 7.2) is limited to a single game and a single baseline. It is very hard to interpret this. Less major weaknesses: |
ICLR_2022_3248 | ICLR_2022 | compared to [1], which are advantages of the IBP. 1) Unlike a factorized model with an IBP prior, the proposed method lacks a sparsity constraint in the number of factors used by subsequent tasks. As such, the model will not be incentivized to use less factors, leading to increasing number of factors and increased computation with more tasks. 2) The IBP prior allows the data to dictate the number of factors to add for each task. The proposed method has no such mechanism, requiring setting the growth rate by hand using heuristics or a pre-determined schedule. Either is liable to over- or under-utilization of model capacity. Table 4 in the Experiments show that this does indeed have a significant impact on performance.
Overall, I think this is an example of convergent ideas rather than plagiarism, but a discussion of the connections is warranted.
Task incremental learning: This method requires knowing the task ID at test time to pick which factor selector weights to use. Without it, the proposed method doesn’t know which subnetwork to use, and would likely have to resort to trying all of them, which isn’t guaranteed to produce the right results. Recent continual learning methods are often evaluated in the more challenging class incremental setting, where task ID is not known.
Experiments 1. (+) Experiments are conducted on a good set of datasets 2. (+) Error bars are shown 3. (+) The proposed method mostly outperforms the baselines, especially on the more complex datasets. 4. (-) More baselines should be compared against, particularly dynamic architecture approaches, as that’s the category that this method falls under. Many of the compared methods don’t operate on the same set of continual learning assumptions as this paper; in particular, the replay-based methods are often using replay because they consider class incremental learning. 5. (-) Why are the results of Multitask learning so bad for S-CIFAR-100 and S-miniImageNet? My understanding is that it trains on all the data jointly, which should actually be the upper bound for a single model. 6. It would have been nice to visualize the factor selection matrices S for each task in order to visualize knowledge transfer.
Miscellaneous: 1. \citep should be used for parenthetical citations. 2. Initial double quote “ is backwards (Related Works). 3. “the first task,rk,1” 4. Figure 3 caption: “abd”
Questions: 1. How would you apply the weight factorization to 4D convolutional kernels?
[1] “Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors”, AISTATS 2021 | 1) Unlike a factorized model with an IBP prior, the proposed method lacks a sparsity constraint in the number of factors used by subsequent tasks. As such, the model will not be incentivized to use less factors, leading to increasing number of factors and increased computation with more tasks. |
NIPS_2020_902 | NIPS_2020 | - The paper could benefit from a better practical motivation; in its current form it will be quite hard for someone who is not at home in this field to understand why they should care about this work. What are specific practical examples in which the proposed algorithm would be beneficial? - The presentation of the simulation study is not really doing a favor to the authors. Specifically, the authors do not really comment on why the GPC (benchmark) is performing better than BPC (their method). It would be worth re-iterating that this is because of the bandit feedback and not using information about the form of the cost function. - More generally, the discussion of the simulation study results could be strengthened. It is not really clear what the reader should take away from the results, and some discussion could help a lot with interpreting them properly. | - The presentation of the simulation study is not really doing a favor to the authors. Specifically, the authors do not really comment on why the GPC (benchmark) is performing better than BPC (their method). It would be worth re-iterating that this is because of the bandit feedback and not using information about the form of the cost function. |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. It's notable that it is useful in training very deep networks (e.g. 100 layers) but it's not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle on a larger dataset this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for their claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos; there are no line numbers in the draft so I haven't itemized them. - Table 1, 2, 3 the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably). - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3: it is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). | - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). |
NIPS_2020_839 | NIPS_2020 | - In Table 2, what about the performance of vanilla Transformer with the proposed approach? It's clearer to report the baseline + proposed approach, not only aiming at reporting state-of-the-art performance. - In Figure 1, the reported perplexities are over 30, which looks pretty high. This high perplexity contradicts better BLEU scores in my experience. How did you calculate perplexity? | - In Figure 1, the reported perplexities are over 30, which looks pretty high. This high perplexity contradicts better BLEU scores in my experience. How did you calculate perplexity? |
fL8AKDvELp | EMNLP_2023 | 1. The paper needs a comprehensive analysis of sparse MoE, including the communication overhead (all-to-all). Currently, it's not clear where the performance gain comes from; basically, different numbers of experts incur different communication overheads.
2. The evaluation needs experiments on distributed deployment and a larger model.
3. For the arguments that the existing approach has two key limitations, the authors should present key experiment results for demonstration. | 2. The evaluation needs experiments on distributed deployment and a larger model. |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. In my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing “non-edge contractions” (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, “Graph Homomorphism Convolution”, ICML’20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well.
Additional comments:
I think that something is missing from Proposition 3. In particular, if I understood correctly the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be equally strong to 2-WL (which by the way is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that graphs are of unbounded size?
Moreover, there is a detail in the proof of Proposition 3 that I am not sure that it's that obvious. I understand why the subgraph counts of $C_{m+1}$ are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts.
Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?)
In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs. Could the authors comment on this choice?
In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines.
The method of Bouritsas et al., arxiv’20 is called “Graph Substructure Networks” (instead of “Structure”). I encourage the authors to correct this.
After rebuttal
The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements on the presentation, I hope will increase the impact of the paper.
Limitations: The limitations are clearly stated in section 1, by mainly referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting.
Negative societal impact: A satisfactory discussion is included in the end of the experimental section. | - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. |
ARR_2022_178_review | ARR_2022 | __1. The relation between instance difficulty and training-inference consistency remains vague:__ This paper seems to try to decouple the concept of instance difficulty and training-inference consistency in current early exiting works. However, I don't think these two things are orthogonal and can be directly decoupled. Intuitively, according to the accuracy of the prediction, there are two main situations for training-inference inconsistency: the inconsistent exit makes the prediction during inference better than that during training, and vice versa. The first case is unlikely to occur. For the second case, considering instance difficulty reflects the decline in the prediction accuracy of the instances during training and inference, it may be regarded as the second case of training-inference consistency. Accordingly, I am still a little bit confused about the relation between instance difficulty and training-inference consistency after reading the paper. I would suggest that the authors calculate the token-level difficulty of the model before and after using the hash function, and perform more analysis on this basis. In fact, if the hash function is instance-level, the sentence-level difficulty of all baselines (including static and dynamic models) can be calculated, which will provide a more comprehensive and fair comparison.
__2. Lack of the analysis of the relation between instance-level consistency and token-level consistency:__ The core idea derived from the preliminary experiments is to enhance the instance-level consistency between training and inference, i.e., mapping semantically similar instances into the same exiting layer. However, the practical method introduces the consistency constraint at the token level. The paper doesn't show whether the token-level method can truly address or mitigate the inconsistency problem at the instance level. I would suggest the authors define metrics to reflect the instance-level and token-level consistency, and conduct an experiment to verify whether they are correlated.
1. I didn’t follow the description of the sentence-level hash function from Line 324 to Line 328: If we use the sequence encoder (e.g., Sentence-BERT) as a hash function to directly map the instances to the exiting layer, why do we still need an internal classifier at that layer? And considering all the instances can be hashed by the pre-trained sequence encoder in advance before training (and early exiting), the appearance of label imbalance should not cause any actual harm? Why does it become a problem?
2. The paper addresses many times (Line 95-97, Line 308-310) that the consistency between training and inference can be easily satisfied due to the smoothness of neural models. I would suggest giving more explanations on this. | 2. The paper addresses many times (Line 95-97, Line 308-310) that the consistency between training and inference can be easily satisfied due to the smoothness of neural models. I would suggest giving more explanations on this. |
NIPS_2017_356 | NIPS_2017 |
My major concern about this paper is the experiments on the visual dialog dataset. The authors only show the proposed model's performance in the discriminative setting without any ablation studies. There are not enough experimental results to show how the proposed model works on the real dataset. If possible, please answer my following questions in the rebuttal.
1: The authors claim their model can achieve superior performance while having significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] just tested the model with a standard parameter setting. To back up this claim, are there any improvements when the proposed model uses larger word embedding and LSTM parameters?
2: There are two test settings in visual dialog, while Table 1 only shows the results in the discriminative setting. It's known that the discriminative setting cannot be applied in real applications; what is the result in the generative setting?
3: To further back up that the proposed visual reference resolution model works on a real dataset, please also conduct an ablation study on the VisDial dataset. One experiment I'm really interested in is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retrieval from the attention memory? | 1: The authors claim their model can achieve superior performance while having significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] just tested the model with a standard parameter setting. To back up this claim, are there any improvements when the proposed model uses larger word embedding and LSTM parameters? |
NIPS_2022_1598 | NIPS_2022 | Weakness:
It is unclear whether the gain of BooT comes from 1. Extra data 2. Different architecture (pretrained gpt2 vs not) 3. Some inherent property in the sequence model as opposed to other world models that may only predict the observation and the reward.
It is unclear from the paper whether bootstrapping is novel beyond supervised learning (e.g. in RL)
There are quite a few additional limitations not mentioned in the paper (l349-350): 1. The two extra hyperparameters introduced, k and η, require finetuning, which depends on access to the environment or a good OPE method. 2. As mentioned in l37-39, for other tasks in general, it is unclear whether the dataset available is sufficient to train a BooT, unless we try it, which will incur extra training time and cost, as mentioned in l349-350. | 1. The two extra hyperparameters introduced, k and η, require finetuning, which depends on access to the environment or a good OPE method. |
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting.Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model which is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e. provide a better than state of the art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if indeed modeling accuracy is the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1. The authors mention the GLM model of Pillow et al. which is pretty much state of the art, but a central point in that paper was that coupling filters between neurons are very important for the accuracy of the model. These coupling filters are omitted here which makes the comparison slightly unfair. I would strongly suggest comparing to a GLM with coupling filters. Furthermore, I suggest presenting data (like correlation coefficients) from previous studies to make sure the comparison is fair and in line with previous literature. 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by pillow et al. did not crop the image but used L1 regularization for the filters and a low rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: 1. Please define the dashed lines in fig. 2A-B and 4B. 2. Why is the training correlation increasing with the amount of training data for the cutout LN model (fig. 4A)? 3. I think figure 6C is a bit awkward, it implies negative rates, which is not the case, I would suggest using a second y-axis or another visualization which is more physically accurate. 4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirnakis et al. Nature 1997. | 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by pillow et al. did not crop the image but used L1 regularization for the filters and a low rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: |
NIPS_2016_43 | NIPS_2016 | Weakness: 1. The organization of this paper could be further improved, such as giving more background knowledge of the proposed method and bringing the description of the related literature forward. 2. It will be good to see some failure cases and related discussion. | 2. It will be good to see some failure cases and related discussion. |
Fg04yPK0BH | ICLR_2025 | 1. There is some disconnection between Proposition 2.2 that the adjacency matrix of the line graph has the same support of some unitary matrix and the proposed method which finds the projection of a weighted adjacency matrix to the set of unitary matrices. It is unclear to me if the result in Proposition 2.3 has the same support as the adjacency matrix of the line graph.
2. The computational complexity of the proposed method is very high, as it involves taking the square root of a matrix of size $2e \times 2e$ where $e$ is the number of edges in the graph. Though the block matrix structure can be exploited, there is no guarantee of how many blocks can be found in the matrix. For example, it's likely that the proposed method cannot be run on all of the LRGB datasets.
3. The experiments are not very convincing. They only compare with the one-hop variants of some models that aim to solve the oversquashing problem. Note that the oversquashing problem is intrinsically multi-hop, and I don't see the rationale for weakening the baseline models to one-hop.
4. The preprocessing time that involves the computation of the block matrix is not reported.
5. It's unclear why there is a base layer GNN encoding in the proposed method. An ablation study on the necessity of the base layer GNN encoding would be helpful.
6. On the Peptide dataset, the GCN can easily achieve the accuracy of the proposed method by some proper data preprocessing or normalization. The authors should provide a comparison following [1].
[1] Tönshoff, Jan, et al. "Where did the gap go? Reassessing the long-range graph benchmark." arXiv preprint arXiv:2309.00367 (2023). | 5. It's unclear why there is a base layer GNN encoding in the proposed method. An ablation study on the necessity of the base layer GNN encoding would be helpful. |
ICLR_2021_1189 | ICLR_2021 | weakness of the paper is its experiments section. 1. Lack of large scale experiments: The models trained in the experiments section are quite small (80 hidden neurons for the MNIST experiments and a single convolutional layer with 40 channels for the SVHN experiments). It would be nice if there were at least some experiments that varied the size of the network and showed a trend indicating that the results from the small-scale experiments will (or will not) extend to larger scale experiments. 2. Need for more robustness benchmarks: It is impressive that the Lipschitz constraints achieved by LBEN appear to be tight. Given this, it would be interesting to see how LBEN’s accuracy-robustness tradeoff compare with other architectures designed to have tight Lipschitz constraints, such as [1]. 3. Possibly limited applicability to more structured layers like convolutions: Although it can be counted as a strength that LBEN can be applied to convnets without much modification, the fact that its performance considerably trails that of MON raises questions about whether the methods presented here are ready to be extended to non-fully connected architectures. 4. Lack of description of how the Lipschitz bounds of the networks are computed: This critique is self-explanatory.
Decision: I think this paper is well worthy of acceptance just based on the quality and richness of its theoretical development and analysis of LBEN. I’d encourage the authors to, if possible, strengthen the experimental results in directions including (but certainly not limited to) the ones listed above.
Other questions to authors: 1. I was wondering why you didn’t include experiments involving larger neural networks. What are the limitations (if any) that kept you from trying out larger networks? 2. Could you describe how you computed the Lipschitz constant? Given how notoriously difficult it is to compute bounds on the Lipschitz constants of neural networks, I think this section requires more elaboration.
Possible typos and minor glitches in writing: 1. Section 3.2, first paragraph, first sentence: Should the phrase "equilibrium network" be plural? 2. D^{+} used in Condition 1 is used before it's defined in Condition 2. 3. Just below equation (7): I think there's a typo in "On the other size, […]". 4. In Section 4.1, \epsilon is not used in equation (10), but in equation (11). It might be more clear to introduce \epsilon when (11) is discussed. 5. Section 4.2, in paragraph "Computing an equilibrium", first sentence: Do you think there's a grammar error in this sentence? I might also have mis-parsed the sentence. 6. Section 5, second sentence: There are two "the"s in a row.
[1] Anil, Cem, James Lucas, and Roger Grosse. "Sorting out lipschitz function approximation." International Conference on Machine Learning. 2019. | 4. In Section 4.1, \epsilon is not used in equation (10), but in equation (11). It might be more clear to introduce \epsilon when (11) is discussed. |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. In my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing “non-edge contractions” (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, “Graph Homomorphism Convolution”, ICML’20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well.
Additional comments:
I think that something is missing from Proposition 3. In particular, if I understood correctly, the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be as strong as 2-WL (which, by the way, is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that graphs are of unbounded size?
Moreover, there is a detail in the proof of Proposition 3 that I am not sure is that obvious. I understand why the subgraph counts of $C_{m+1}$ are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts.
Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?)
In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs. Could the authors comment on this choice?
In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines.
The method of Bouritsas et al., arxiv’20 is called “Graph Substructure Networks” (instead of “Structure”). I encourage the authors to correct this.
After rebuttal
The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements on the presentation, I hope will increase the impact of the paper.
Limitations: The limitations are clearly stated in section 1, by mainly referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting.
Negative societal impact: A satisfactory discussion is included in the end of the experimental section. | - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. |
NIPS_2016_238 | NIPS_2016 | weakness of the paper is the fact that it motivates "diversity" extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be re-written. - Diversity should be toned down. - line 108, the first "f" should be "g" in "we fixed the form of .." - extra "." in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks had converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance. | - line 108, the first "f" should be "g" in "we fixed the form of .." - extra "." in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks had converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance. |
NIPS_2018_122 | NIPS_2018 | - Figures 1 and 2 motivate this work well, but in the main body of this paper I cannot see what happens to these figures after applying the proposed adversarial training. It is better to put together the images before and after applying your method in the same place. Figure 2 does not say anything about details (we can understand the very brief overview of the positions of the embeddings), and thus these figures could be smaller for better space usage. - For the LM and NMT models, did you use the technique to share word embedding and output softmax matrices as in [1]? The transformer model would do this, if the transformer implementations are based on the original paper. If so, your method affects not only the input word embeddings, but also the output softmax matrix, which is not a trivial side effect. This important point seems missing and not discussed. If the technique is not used, the strength of the proposed method is not fully realized, because the output word embeddings could still capture simple frequency information. [1] Inan et al., Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling, ICLR 2017. - There are no significance tests or discussions about the significance of the score differences. - It is not clear how the BLEU improvement comes from the proposed method. Did you inspect whether rare words are more actively selected in the translations? Otherwise, it is not clear whether the expectations of the authors actually happened. - Line 131: The authors mention standard word embeddings like word2vec-based and GloVe-based embeddings, but recently subword-based embeddings are also used. For example, fastText embeddings are aware of internal character n-gram information, which is helpful in capturing information about rare words. By inspecting the character n-grams, it is sometimes easy to understand rare words' brief properties. For example, in the case of "Peking", we can see that the word starts with an uppercase character and ends with the suffix "ing", etc. It would make this paper more solid to compare the proposed method with such character n-gram-based methods [2, 3]. [2] Bojanowski et al., Enriching Word Vectors with Subword Information, TACL. [3] Hashimoto et al., A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks, EMNLP 2017. * Minor comments: - Line 21: I think the statement "Different from classic one-hot representation" is not necessary, because anyway word embeddings are still based on such one-hot representations (i.e., the word indices). An embedding matrix is just a weight matrix for the one-hot representations. - Line 29: Word2vec is not a model, but a toolkit which implements Skipgram and CBOW with a few training options. - The results in Table 6 in the supplementary material could be enough if tested on the dev set. Otherwise, there are too many test set results. * Additional comments after reading the author response Thank you for your kind reply to my comments and questions. | * Additional comments after reading the author response Thank you for your kind reply to my comments and questions. 
I believe that the draft will be further improved in the camera-ready version. One additional suggestion is that the title seems to be too general. The term "adversarial training" has a wide range of meanings, so it would be better to include your contribution in the title; for example, "Improving Word Embeddings by Frequency-based Adversarial Training" or something. |
NIPS_2021_2191 | NIPS_2021 | of the paper: [Strengths]
The problem is relevant.
Good ablation study.
[Weaknesses] - The statement in the intro about bottom-up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Several parts of the methodology are not clear. - PPG outputs a complete pose relative to every part’s center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq. 2 of the supplementary material, it seems that O_{up} is trained to output the offset only for the keypoints that are not farther than a distance \textit{r} from the center of the corresponding part. How are the ground truths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose? - Line 179: what did the authors mean by saying that the fully connected layers predict the ground truth in addition to the offsets? - Is \delta P_{j} a single offset for the center of that part, or does it contain distinct offsets for every keypoint? - In Section 3.3, how is G built using the human skeleton? It would be better to describe the size and elements of G. Also, add the dimensions of G, X, and W to better understand what DGCN is doing. - Experiments can be improved: - For instance, the bottom-up method [9] has reported results on the CrowdPose dataset outperforming all methods in Table 4 with a ResNet-50 (including the paper's). It would be nice to include it in the tables. - It would be nice to evaluate the performance of their method on the standard MS COCO dataset to see if there is a drop in performance in easy (non-occluded) settings. - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. - Can we visualize G, the dynamic graph, as it changes through DGCN? It might give insight into what the network used to predict keypoints, especially the invisible ones.
[Minor comments]
In Algorithm 1, line 8 of the supplementary material, did the authors mean Eq. 11 instead of Eq. 4?
Fig. 1 and Fig. 2 in the supplementary material are the same.
Spelling mistake in line 93: “It it requires…”
What does ‘… updated as model parameters’ mean in line 176?
Do the authors mean Equation 7 in line 212?
The authors have talked about limitations in Section 5 and have mentioned that there are no negative societal impacts. | - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. |
ICLR_2023_642 | ICLR_2023 | Unclear notation. The authors use the same notation to write vectors and scalars, which would be challenging for many readers to follow. Please consider updating your notation and refer to the notation section in the Formatting Instructions template for ICLR 23.
The framework's impact is unclear. The authors mentioned that intrinsic but known bias and variance is often the case in computational neuroscience and neuromorphic engineering. This is the main motivation for their approach. However, the framework provided is limited to specific cases, namely, white noise and fixed bias. The authors argue that their assumptions are reasonable for most cases computational scientists and neuromorphic engineers face, but they don't provide evidence for their claims. Clearly, this framework provides an important way of analyzing methods such as perturbed gradient descent with Gaussian noise, but it's unclear how it can help analyze other cases. This suggests that the framework is quite limited.
The authors need to show that their choices and assumptions are still useful for computational neuroscience and neuromorphic engineering. This could be done by referring to important works from these fields that have known bias and variance with Gaussian noise.
In the experiments, the bias used is restricted to having the same magnitude for all weights ($b\vec{1}$). Can we reproduce the results if we use arbitrary biases? It would be better if the authors tried a number of arbitrary biases and averaged the results.
The paper is not well-placed in the literature. The authors didn't describe the related works fully (e.g., stochastic gradient Langevin dynamics). This makes the work's novelty unclear since the authors didn't mention how analyzing the gradient estimator was done in earlier works and how their contribution is discernible from the earlier contributions. Mentioning earlier contributions increases the quality of your work and makes it distinguishable from other work. Please also refer to my comment in the novelty section.
Missing evidence of some claims and missing details. Here, I mention a few:
It’s not clear how increasing the width and/or depth can lower the trace of the Hessian (Section 2.1). If this comes from a known result/theory, please mention it. Otherwise, please show how it lowers the trace.
The authors mentioned that they use an analytical and empirical framework that is agnostic to the actual learning rule. However, the framework is built on top of a specific learning rule. It’s unclear what is meant by agnostic in this context (Section 1).
The authors mentioned in the abstract that the ideal amount of variance depends on the size and sparsity of the network, the norm of the gradient, and the curvature of the loss landscape. However, the authors didn’t mention the sparsity dependence anywhere in the paper.
The authors mentioned in a note after the proof of Theorem A.5 that it is also valid for Tanh but not Sigmoid. However, the proof assumed that the second derivative is zero. It's unclear whether a similar derivation can be developed without this assumption. However, the authors only mention the relationship with the gain of $\phi(\cdot)$.
More information is needed on how the empirical likelihood of descent is computed (Fig. 7).
The use of MSE should be mentioned in Theorem A.3 since it's not proven for any loss function. In addition, the dataset notation is wrong. It should be $D = \{(x_1, y_1), \ldots, (x_M, y_M)\}$, where $M$ is the number of examples, since it's a set containing input-output pairs, not just a single pair.
The argument in Section 2.1 that increasing depth could theoretically make the loss less smooth is not related to the argument being made about variance. It is unclear how this is related to the analyses of how increasing depth affects the impact of the variance. I think it needs to be moved to the discussion on generalization instead.
A misplaced experiment (Fig. 1) that lacks experimental rigor and does not provide convincing evidence to support the theorems and lemmas developed in the paper.
The experiment is misplaced in the introduction section. This hurts the introduction and distracts the reader from the logic used to motivate your work.
It's not clear from the figure what the experiment is. The reader has to read Appendix B.2 to be able to continue reading your introduction, which is unnecessary.
The results are shown with only three seeds. This is not enough and cannot create any statistical significance in your experiment. I suggest increasing the number of runs to 20 or 30.
It’s unclear why batch gradient descent is used instead of gradient descent with varying bias and variance. Using batch gradient descent might undesirably add to the bias and variance.
The experimental results are not consistent with the rest of the paper. We cannot see the relationship when varying the bias or variance, as in the other experiments. Looking at Fig. 1B where bias = 0, for example, we find that adding a small amount of variance reduces performance, but adding more improves performance up to a limit. This is not the case with the other experiments, though. I suggest following the previous two points to make the results aligned with the rest of your results.
Alternative hypotheses can be made with some experiments. The experiment in Fig. 3A needs improvement. The authors mention that excessive amounts of variance and/or bias can hinder learning performance. In Fig. 3A, they only show levels of variance that help decrease loss. An alternative explanation from their figure is that by increasing the variance, the performance improves. This is not the case, of course, so I think the authors need to add more variance curves that hinder performance to avoid alternative interpretations.
Minor issues that didn’t impact the score:
There are nine arXiv references. If they are published, please add this information instead of citing arXiv.
What is a norm $N$ vector? Can you please add the definition to the paper?
You mentioned that the step size has to be very small. However, in Fig. 1, the step size used is large (0.02). Can you please explain why? Can this be an additional reason why there is no smooth relationship between the values of the variance and performance?
No error bars are added in Fig. 4 or Fig. 7. Can you please add them?
In experiments shown in Fig. 3 and Fig. 5, the number of runs used to create the error bars is not mentioned in Appendix B.2.
A missing $2D$ in Eq. 27.
In the proof of Theorem A.3, how does the input $x$ have two indices? The input is a vector, not a matrix. Moreover, shouldn't $\sum_k (W_k^{(2)})^2 = 1/d$, not $d$? | 27. In the proof of Theorem A.3, how does the input $x$ have two indices? The input is a vector, not a matrix. Moreover, shouldn't $\sum_k (W_k^{(2)})^2 = 1/d$, not $d$? |
tqhAA26vXE | ICLR_2024 | - In Sections 4.3 and 4.4, words such as “somewhat” and “good generative ability” appear in the description, yet I am concerned that even with beam search, only 77% of the result lists contain the ground truth logical forms. If the relationships and entities were replaced, how do we ensure that the plugged-in entities/relationships were the right ones? In what percentage of cases were the right entities/relationships plugged in if no ground truth is available?
- In Section 4.5, the authors claim that Graph-Query-of-Thoughts are a way to improve QA's interpretability and avoid LLM hallucinations, which has no supporting evidence in the result/analysis section. This seems to be an exaggerated claim, and I am not convinced.
- The presentation of the paper needs improvement. There are multiple grammatical errors, and the description of the method is confusing. Explanations of methods like QLoRA, etc., can be moved to related work, since they currently interrupt the flow of the writing. | - In Sections 4.3 and 4.4, words such as “somewhat” and “good generative ability” appear in the description, yet I am concerned that even with beam search, only 77% of the result lists contain the ground truth logical forms. If the relationships and entities were replaced, how do we ensure that the plugged-in entities/relationships were the right ones? In what percentage of cases were the right entities/relationships plugged in if no ground truth is available? |
NIPS_2016_395 | NIPS_2016 | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. 
I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work. | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: |
NIPS_2016_283 | NIPS_2016 | weakness of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also, the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100), because the authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval that could spoil the results. Another remark: - In my opinion, the claim that evolutional dropout addresses the internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization, on the other hand, standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor: * | - In my opinion, the claim that evolutional dropout addresses the internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization, on the other hand, standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor: |
NIPS_2022_947 | NIPS_2022 | 1. Apart from the multiple pre-trained models, FedPCL is built on the idea of prototypical learning and contrastive learning, which are not new in federated learning. 2. The performance of FedPCL heavily relies on the selection of different pre-trained models, limiting its application to wider areas. As shown in Table 4, the model accuracy is quite sensitive to the pre-trained models.
This work adequately addressed the limitations. The authors developed a lightweight federated learning framework to reduce the computation and communication costs and integrated pre-trained models to extract prototypes for federated aggregation. This is a new attempt for federated learning. | 2. The performance of FedPCL heavily relies on the selection of different pre-trained models, limiting its application to wider areas. As shown in Table 4, the model accuracy is quite sensitive to the pre-trained models. This work adequately addressed the limitations. The authors developed a lightweight federated learning framework to reduce the computation and communication costs and integrated pre-trained models to extract prototypes for federated aggregation. This is a new attempt for federated learning. |
NIPS_2017_356 | NIPS_2017 | - I would have liked to see some analysis about the distribution of the addressing coefficients (Betas) with and without the bias towards sequential addressing. This difference seems to be very important for the synthetic task (likely because each question is based on the answer set of the previous one). Also I don't think the value of the trade-off parameter (Theta) was ever mentioned. What was it and how was it selected? If instead of a soft attention, the attention from the previous question was simply used, how would that baseline perform?
- Towards the same point, to what degree does the sequential bias affect the VisDial results?
- Minor complaint. There are a lot of footnotes which can be distracting, although I understand they are good for conserving clarity / space while still providing useful details.
- Does the dynamic weight prediction seem to identify a handful of modes depending on the question being asked? Analysis of these weights (perhaps tSNE colored by question type) would be interesting.
- It would have been interesting to see not only the retrieved and final attentions but also the tentative attention maps in the qualitative figures. | - It would have been interesting to see not only the retrieved and final attentions but also the tentative attention maps in the qualitative figures. |
ICLR_2023_3208 | ICLR_2023 | 1. The typesetting in some places is out of order, such as equations (23), (25), and (31). 2. On page 4, "A4 bounds the degree of non-stationarity between consecutive iterations": why does this assumption hold? 3. The authors should add more description of the contributions of this paper. | 3. The authors should add more description of the contributions of this paper. |
ICLR_2022_1012 | ICLR_2022 | a. The paper lacks structure and clarity
b. The paper lacks a more qualitative study of the model:
it would be interesting to see what layers the layer-wise attention mechanism attends to.
it would be great to understand how this model uses the latent variables, for instance by measuring the KL divergence at each layer, as done in the previous work (LVAE, BIVA) (connection to "posterior collapse").
c. Experiments are limited to CIFAR-10; larger-scale experiments (i.e. ImageNet) would be beneficial to the paper. It is not guaranteed that such an architecture would translate into the same gains for larger datasets (i.e. ImageNet).
3. Clarification needed:
a. Table 5: I interpreted the column "non-local layers" as using "attention across layers", I hope I was right. The nomenclature needs to be improved.
b. Is the layer-wise attention mechanism specific to deep VAEs, or can it be more generally applied to ResNet architectures?
c. Section 2.3 (paragraph cited below): I get the idea, but unless demonstrated, this remains a hypothesis.
"... in practice the network may no longer respect the factorization of the prior $p(z) = \prod_l p(z_l \mid z_{<l})$, leading to diminished performance gains as shown in Table 1"
4. Minor comments / suggestions
a. The main contributions are introducing two types of attention for deep VAEs; it might help to describe them in a separate section, and only then describe the generative and inference models. Right now the description of the layer-wise attention mechanism is scattered across sections 2.3 and 2.4.
b. tricks like normalisation or feature scaling could be referenced in a separate section.
c. eq8: you might want to cite ReZero [1] here
d. Fig. 1a: the lack of arrows going from the activations $(h_l, k_l^q)$ to the attention block $A(\ldots)$ was confusing on the first read
e. It would be better practice to report likelihoods for multiple random seeds
f. Typo in section 2.1: "both $q(z \mid x)$ and $p(x)$ are fully factorized gaussian..." -> "both $q(z \mid x)$ and $p(z)$ are fully factorized gaussian..."
[1] Bachlechner, T., Prasad Majumder, B., Mao, H. H., Cottrell, G. W., and McAuley, J., “ReZero is All You Need: Fast Convergence at Large Depth”, arXiv e-prints, 2020. | 4. Minor comments / suggestions a. The main contributions are introducing two types of attention for deep VAEs; it might help to describe them in a separate section, and only then describe the generative and inference models. Right now the description of the layer-wise attention mechanism is scattered across sections 2.3 and 2.4. b. tricks like normalisation or feature scaling could be referenced in a separate section. c. |
ICLR_2022_1393 | ICLR_2022 | I think that:
The comparison to baselines could be improved.
Some of the claims are not carefully backed up.
The explanation of the relationship to the existing literature could be improved.
More details on the above weaknesses:
Comparison to baselines:
"We did not find good benchmarks to compare our unsupervised, iterative inferencing algorithm against" I think this is a slightly unfair comment. The unsupervised and iterative inferencing aspects are only positives if they have the claimed benefits, as compared to other ML methods (more accurate and better generalization). There is a lot of recent work addressing the same ML task (as mentioned in the related work section.) This paper contains some comparisons to previous work, but as I detail below, there seem to be some holes.
FCNN is by far the strongest competitor for the Laplace example in the appendix. Why is this left off of the baseline comparison table in the main paper? Further, is there any reason that FCNN couldn't have been used for the other examples?
Why is FNO not applied to the Chip cooling (Temperature) example?
A major point in this paper is improved generalization across PDE conditions. However, I think that's hard to check when only looking at the test errors for each method. In other words, is CoAE-MLSim's error lower than UNet's error because the approach fit the training data better, or is it because it generalized better? Further, in some cases, it's not obvious to me if the test errors are impressive, so maybe it is having a hard time generalizing. It would be helpful to see train vs. test errors, and ideally I would like to see train vs. val. vs. test.
For the second main example (vortex decay over time), looking at Figures 8 and 33 (four of the fifty test conditions), CoAE-MLSim has much lower error than the baselines in the extrapolation phase but noticeably higher in the interpolation phase. In some cases, it's hard to tell how close the FNO line is to zero - it could be that CoAE-MLSim even has orders of magnitude more error. Since we can see that there's a big difference between interpolation and extrapolation, it would be helpful to see the test error averaged over the 50 test cases but not averaged over the 50 time steps. When averaged over all 50 time steps for the table on page 9, it could be that CoAE-MLSim looks better than FNO just because of the extrapolation regime. In practice, someone might pick FNO over CoAE-MLSim if they aren't interested in extrapolating in time. Do the results in the table for vortex decay back up the claim that CoAE-MLSim is generalizing over initial conditions better than FNO, or is it just better at extrapolation in time?
Backing up claims:
The abstract says that the method is tested for a variety of cases to demonstrate a list of things, including "scalability." The list of "significant contributions" also includes "This enables scaling to arbitrary PDE conditions..." I might have missed/forgotten something, but I think this wasn't tested?
"Hence, the choice of subdomain size depends on the trade-off between speed and accuracy." This isn't clear to me from the results. It seems like 32^3 is the fastest and most accurate?
I noticed some other claims that I think are speculations, not backed up with reported experiments. If I didn't miss something, this could be fixed by adding words like "might."
"Physics constrained optimization at inference time can be used to improve convergence robustness and fidelity with physics."
"The decoupling allows for better modeling of long range time dynamics and results in improved stability and generalizability."
"Each solution variable can be trained using a different autoencoder to improve accuracy."
"Since, the PDE solutions are dependent and unique to PDE conditions, establishing this explicit dependency in the autoencoder improves robustness."
"Additionally, the CoAE-MLSim apprach solves the PDE solution in the latent space, and hence, the idea of conditioning at the bottleneck layer improves solution predictions near geometry and boundaries, especially when the solution latent vector prediction has minor deviations."
"It may be observed that the FCNN performs better than both UNet and FNO and this points to an important aspect about representation of PDE conditions and its impact on accuracy." The representation of the PDE conditions could be why, but it's hard to say without careful ablation studies. There's a lot different about the networks.
Similarly: "Furthermore, compressed representations of sparse, high-dimensional PDE conditions improves generalizability."
Relationship to literature:
The citation in this sentence is abrupt and confusing because it sounds like CoAE-MLSim is a method from that paper instead of the new method: "Figure 4 shows a schematic of the autoencoder setup used in the CoAE-MLSim (Ranade et al., 2021a)." More broadly, Ranade et al., 2021a, Ranade et al., 2021b, and Maleki, et al., 2021 are all cited and all quite related to this paper. It should be more clear how the authors are building on those papers (what exactly they are citing them for), and which parts of CoAE-MLSim are new. (The Maleki part is clearer in the appendix, but the reader shouldn't have to check the appendix to know what is new in a paper.)
I thought that otherwise the related work section was okay but was largely just summarizing some papers without giving context for how they relate to this paper.
Additional feedback (minor details, could fix in a later version, but no need to discuss in the discussion phase):
- The abstract could be clearer about what the machine learning task is that CoAE-MLSim addresses.
- The text in the figures is often too small.
- "using pre-trained decoders (g)" - probably meant g_u?
- Many of the figures would be more clear if they said pre-trained solution encoders & solution decoders, since there are multiple types of autoencoders.
- The notation is inconsistent, especially with nu. For example, the notation in Figures 2 & 3 doesn't seem to match the notation in Alg 1. Then on Page 4 & Figure 4, the notation changes again.
- Why is the error table not ordered 8^3, 16^3, 32^3 like Figure 9? The order makes it harder for the reader to reason about the tradeoff.
- Why is Err(T_max) negative sometimes? Maybe I don't understand the definition, but I would expect to use absolute value?
- I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method.
- Figure 11: I'm guessing that the y-axis is log error, but this isn't labeled as such. I didn't really understand the legend or the figure in general until I got to the appendix, since there's little discussion of it in the main paper.
- "Figure 30 shows comparisons of CoAE-MLSim with Ansys Fluent for 4 unseen objects in addition to the example shown in the main paper." - probably from previous draft. Now this whole example is in the appendix, unless I missed something.
- My understanding is that each type of autoencoder is trained separately and that there's an ordering that makes sense to do this in, so you can use one trained autoencoder for the next one (i.e. train the PDE condition AEs, then the PDE solution AE, then the flux conservation AE, then the time integration AE). This took me a while to understand though, so maybe this could be mentioned in the body of the paper. (Or perhaps I missed that!)
- It seems that the time integration autoencoder isn't actually an autoencoder if it's outputting the solution at the next time step, not reconstructing the input.
- Either I don't understand Figure 5 or the labels are wrong.
- It's implied in the paper (like in Algorithm 1) that the boundary conditions are encoded like the other PDE conditions. In the Appendix (A.1), it's stated that "The training portion of the CoAE-MLSim approach proposed in this work corresponds to training of several autoencoders to learn the representations of PDE solutions, conditions, such as geometry, boundary conditions and PDE source terms as well as flux conservation and time integration." But then later in the appendix (A.1.3), it's stated that boundary conditions could be learned with autoencoders but are actually manually encoded for this paper. That seems misleading. | - Either I don't understand Figure 5 or the labels are wrong. |
NIPS_2021_2257 | NIPS_2021 | - Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn't, it's an informative baseline to show where these self-supervised methods stand compared to a fully supervised pre-trained network. - The discussion in section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on \emph{combined} datasets. Can the SSL model make use of different kinds of data? - The authors compared two-crop and multi-crop augmentation in section 4, and observed that multi-crop augmentation yielded better performance. One important missing factor is the (possible) computation overhead of multi-crop strategies. My estimation is that it would increase the computation complexity (i.e., slowing the speed) of training. Therefore, one could argue that if we could train the two-crop baseline for a longer period of time it would yield better performance as well. To make the comparison fair, the computation overhead must be discussed. It can also be seen from Figure 7, for the KNN-MoCo, that the extra positive samples are fed into the network \emph{that takes the back-propagated gradients}. This will drastically increase training complexity as the network performs not only forward passes but also backward passes. - Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment's policy is obtained by supervised training on ImageNet. Information leakage is likely.
Questions - In L114 the authors concluded that for linear classification the pretraining dataset should match the target dataset in terms of being object- or scene-centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it goes back again to whether, by combining two datasets, the SSL model can learn better representations. - In L157 the authors discussed that for transfer learning potentially only low- and mid-level visual features are useful. My intuition is that low- and mid-level features are rather easy to learn. Then how does that explain the model's transferability increasing when we scale up pre-training datasets? Or the recent success of CLIP? Is it possible that \emph{only} MoCo learns low- and mid-level features?
Minor things that don’t play any role in my ratings. - “i.e.” -> “i.e.,”, “e.g.” -> “e.g.,” - In Eq.1, it’s better to write L_{contrastive}(x) = instead of L_{contrastive}. Also, should the equation be normalized by the number of positives? - L241 setup paragraph is overly complicated for an easy-to-explain procedure. L245/246, the use of x+ and x is very confusing. - It’s better to explain that “nearest neighbor mining” in the intro is to mine nearest neighbor in a moving embedding space in the same dataset.
Overall, I like the objective of the paper a lot and I think the paper is trying to answer some important questions in SSL. But I have some reservations about confidently recommending acceptance due to the concerns as written in the "weakness" section, because this is an analysis paper and analysis needs to be rigorous. I'll be more than happy to increase the score if those concerns are properly addressed in the feedback.
The authors didn't discuss the limitations of the study. I find no potential negative societal impact. | - Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn't, it's an informative baseline to show where these self-supervised methods stand compared to a fully supervised pre-trained network. |
WzUPae4WnA | ICLR_2025 | 1. **The motivation of this paper appears to be questionable.** The authors claim that DoRA increases the risk of overfitting, basing this on two pieces of evidence:
- DoRA introduces additional parameters compared to LoRA.
- The gap between training and test accuracy curves for DoRA is larger than that of BiDoRA.
However, these two points do not convincingly support the claim. First, while additional parameters can sometimes contribute to overfitting, they are not a sufficient condition for it. In fact, DoRA adds only a negligible number of parameters (0.01% of the model size, as reported by the authors) beyond LoRA. Moreover, prior work [1] suggests that LoRA learns less than full fine-tuning and may even act as a form of regularization, implying that the risk of overfitting is generally low across these PEFT methods.
Additionally, the training curves are not necessarily indicative of overfitting, as they can be significantly influenced by factors such as hyperparameters, model architecture, and dataset characteristics. The authors present results from only a single configuration, which limits the generalizability of their findings.
Finally, the authors’ attribution of an *alleged overfitting problem* to DoRA’s concurrent training lacks a strong foundation.
2. **The proposed BiDoRA method is overly complex and difficult to use.** It requires a two-phase training process, with the first phase itself consisting of two sub-steps. It also introduces two additional hyperparameters: the weight of orthogonality regularization and a ratio for splitting training and validation sets. As a result, BiDoRA takes 3.92 times longer to train than LoRA.
3. **Performance differences between methods are minimal across evaluations**. In nearly all results, the performance differences between the methods are less than 1 percentage point, which may be attributable to random variation. Furthermore, the benchmarks selected are outdated and likely saturated.
[1] [LoRA Learns Less and Forgets Less](https://arxiv.org/abs/2405.09673) | 3. **Performance differences between methods are minimal across evaluations**. In nearly all results, the performance differences between the methods are less than 1 percentage point, which may be attributable to random variation. Furthermore, the benchmarks selected are outdated and likely saturated. [1] [LoRA Learns Less and Forgets Less](https://arxiv.org/abs/2405.09673) |
NIPS_2020_106 | NIPS_2020 | I also feel that the paper could have benefited from a discussion of these as compared to just outright saying that existing methods do not give us good results. In particular, the conditions under which existing methods work vs do not work should have been discussed more explicitly than they are right now in the paper. Moreover, I think the experiments on CartPole and Hopper are not indicative of their method's performance since these have deterministic dynamics and the dataset was collected as trajectories (so s' is as frequent as s in the distribution \mu, see my point below) and hence their choice of masking reduces to action-conditioned masking only. Some other questions that I have: - From the analysis perspective, the paper says that prior works such as Kumar et al. 2019 that use action conditional and concentrability do not get the same error rate. Is the main issue behind this limitation that the notion of concentrability used in Kumar et al. and other works is trajectory-centric and not on the state-action marginal? The latter was shown to be better than trajectory-based concentrability in Chen and Jiang (2019). If this notion of concentrability is used, would that be sufficient to get rid of the concentrability assumptions in your work? - Why does the method help on Hopper, which has deterministic dynamics, so given (s, a), there is a unique s', and in this case, it simply reduces to action-conditional masking? Can it be evaluated on some other domains with non-deterministic dynamics to evaluate its empirical efficacy? Otherwise, empirically the method doesn't seem to have much benefit. Why is BEAR missing from the baselines? - It seems like when the batch of data is a set of trajectories, and the state space is continuous, then the density of s_t is the same as s_{t+1} which is 1/N, so in that case does the proposed algorithm which exploits the fact that s_{t+1} may be highly infrequent compared to s_{t} reduce simply to an action conditional? The experiments are done with this setup too, it seems. - Building on the previous point, if the data comes from d^\mu: the state visitation distribution of a behavior policy \mu, then d^\mu(s) and d^\mu(s') for a transition (s, a, r, s') observed in the dataset shouldn't be very different; in that case, would the proposed method not be (much) better than action-conditional penalty methods that have mostly been studied in this domain? - How do you compare algorithms, theoretically, for different values of b, and the hyperparameters for other algorithms, such as \eps in BEAR? It seems like for some value of both of these quantities, the algorithms should perform safely and not have much error accumulation. So, how is the proposed method better theoretically than the best configuration of the prior methods? - How will the proposed method compare to residual gradient algorithms which have better guarantees? | - Why does the method help on Hopper, which has deterministic dynamics, so given (s, a), there is a unique s', and in this case, it simply reduces to action-conditional masking? Can it be evaluated on some other domains with non-deterministic dynamics to evaluate its empirical efficacy? Otherwise, empirically the method doesn't seem to have much benefit. Why is BEAR missing from the baselines? |
NIPS_2020_1706 | NIPS_2020 | 1. The memorization effect is not new to the community. Therefore, the novelty of this paper is not sufficiently demonstrated. The authors need to be clearer about what extra insights this paper gives. 2. It would be better if the authors could provide some theoretical justification for why co-training and weight averaging can improve results, since they are important for the performance. 3. The empirical performance does not seem to be very strong compared to DivideMix. Some explanations are needed. | 2. It would be better if the authors could provide some theoretical justification for why co-training and weight averaging can improve results, since they are important for the performance. |
NIPS_2016_182 | NIPS_2016 | weakness of the technique in my view is that the kernel values will be dependent on the dataset that is being used. Thus, the effectiveness of the kernel will require a rich enough dataset to work well. In this respect, the method should be compared to the basic trick that is used to allow non-PSD similarity metrics to be used in kernel methods, namely defining the kernel as k(x,x') = (s(x,z_1),...,s(x,z_N))^T(s(x',z_1),...,s(x',z_N)), where s(x,z) is a possibly non-PSD similarity metric (e.g. optimal assignment score between x and z) and Z = {z_1,...,z_n} is a database of objects to compare to. The write-up is (understandably) dense and thus not the easiest to follow. However, the authors have done a good job in communicating the methods efficiently. Technical remarks: - it would seem to me that in section 4, "X" should be a multiset (and [\cal X]**n the set of multisets of size n) instead of a set, since in order for the histogram to honestly represent a graph that has repeated vertex or edge labels, you need to include the multiplicities of the labels in the graph as well. - In the histogram intersection kernel, I think, for clarity, it would be good to replace "t" with the size of T; there is no added value to me in allowing "t" to be arbitrary. | - it would seem to me that in section 4, "X" should be a multiset (and [\cal X]**n the set of multisets of size n) instead of a set, since in order for the histogram to honestly represent a graph that has repeated vertex or edge labels, you need to include the multiplicities of the labels in the graph as well. |
ICLR_2022_1998 | ICLR_2022 | Weaknesses: In Section 3, the paper uses a measure $\rho$ that is essentially the fraction of examples at which local monotonicity (in any of the prescribed directions in $M$) is violated, and then shows that this measure decreases when using the paper's method over the baselines. However, I'm not certain that this measure corresponds to the global monotonicity requirement that is often desired in practice: namely, the one that appears in Definition 1. For example, consider a 1-D function over $[1, 99.99]$ whose graph is a piecewise linear curve connecting the points $(0, 100), (0.99, 100.99), (1, 99), (1.99, 99.99), (2, 98), (2.99, 98.99), \ldots, (99, 1), (99.99, 1.99)$. This function has a nonnegative derivative at about 99% of its domain, yet if one chooses two points $x_1, x_2$ uniformly and independently from the domain, then there's at least a 97% chance that $f(\min(x_1, x_2)) > f(\max(x_1, x_2))$. I think, therefore, that it would be good to complement the local $\rho$ with an estimate of the probability that Definition 1 would not hold over the distribution in question (training, test or random).
Section 4.1: The authors introduce the notion of group monotonicity, but it's unclear how the regularizer introduced in equation 3 helps to encourage that property. Specifically, 1) Only the sum of the gradient is taken into account (so it could be that a component $a_{w_{i,j}}$ has a very negative gradient, but still the sum will be positive), and 2) the softmax in equation 3 seems to encourage that the total gradient of $S_y$ is larger than the total gradient of all the other $S_k$'s, not that it's positive. Perhaps I'm missing something?
Section 4.2: The paper claims that good performance of the "total activation classifier" is evidence that the original classifier satisfies group monotonicity. But that claim is not clear to me. The total activation classifier does not depend on the part of the network that computes the output from the intermediate layer, which is critical for the satisfaction of group monotonicity.
Section 4.2.2: The paper doesn't compare its method to other methods for detecting noisy/adversarial test examples. | 1) Only the sum of the gradient is taken into account (so it could be that a component $a_{w_{i,j}}$ has a very negative gradient, but still the sum will be positive), and |
lesQevLmgD | ICLR_2024 | I believe the authors' results merit publication in a specialized journal rather than in ICLR. The main reasons are the following:
1. The authors do not give any compelling numerical evidence that their bound is tight or even "log-tight".
2. The authors' derivation falls into classical learning theory-based bounds, which, to the best of my knowledge, do not yield realistic bounds, unless Bayesian considerations are taken into account (e.g. PAC-Bayes-based bounds).
3. Even if one maintains that VC-dimension-style learning theory is an important part of the theory of deep learning, my hunch would be that the current work does not contain sufficient mathematical interest to be published in ICLR.
My more minor comments are as follows:
1. The introduction is very wordy and contains many repetitions of similar statements.
2. I found what I believe are various math typos, for instance around Lemma 3.5. I think n and m are used interchangeably. Furthermore, calligraphic R with an n subscript and regular R are mixed. Similarly, capital and non-capital l are mixed in Assumption 4.8. Runaway subscripts also appear many times in Appendix A2. | 2. The authors' derivation falls into classical learning theory-based bounds, which, to the best of my knowledge, do not yield realistic bounds, unless Bayesian considerations are taken into account (e.g. PAC-Bayes-based bounds). |
ICLR_2023_3693 | ICLR_2023 | Weakness: 1. Some details of the proposed method are missing, such as the definition of $\mathcal{L}_{kl}$ and the representation of the augmentation samples in the function. 2. In the proposed method, a novel augmentation strategy with masking is proposed. In the ablation study, the effectiveness of the proposed strategy has not been verified by comparing with mixup and so on. The main contributions of the proposed method should be further verified in the experiments. 3. Meanwhile, more details about the proposed method should be presented, such as how the implicit distribution characterizes the uncertainty of each label value and how the model mitigates the uncertainty of the label distribution. | 3. Meanwhile, more details about the proposed method should be presented, such as how the implicit distribution characterizes the uncertainty of each label value and how the model mitigates the uncertainty of the label distribution. |
RwzFNbJ3Ez | EMNLP_2023 | 1. The method presented relies on extracting multiple responses from the LLM. For the variant with optimal performance, LLM prompting, 20 samples are needed to achieve the best reported results. Assuming a response contains 5 sentences, this requires 100 API calls to obtain a passage-level score (if I understand correctly), which is cost heavy and ineffective.
2. It remains unclear whether the proposed approach is suitable for detecting hallucinations in responses from other LLMs and across various application scenarios beyond WikiBio. This uncertainty arises because the experiment dataset exclusively encompasses WikiBio responses drawn from text-davinci-003.
3. The proposed method might struggle to detect hallucinations in open-ended responses, for example, the prompt "introduce a sports celebrity to me". In this case, the sampled responses could pertain to different individuals, making it challenging to identify shared information for consistency checking. | 3. The proposed method might struggle to detect hallucinations in open-ended responses, for example, the prompt "introduce a sports celebrity to me". In this case, the sampled responses could pertain to different individuals, making it challenging to identify shared information for consistency checking. |
VoI4d6uhdr | ICLR_2025 | 1. Although the authors present the exact formulation of the risk in the main text, it is difficult to understand the implications of those formulas. It would be helpful to include more discussion explaining each term so that the results are easier to understand.
2. The paper's main contribution is to examine the bias amplification phenomenon using the formula. However, a formal statement about how different components affect the bias amplification is lacking. I would suggest the authors state these results as formal theorems.
3. It is unclear how these theoretical findings relate to real-world deep learning models; I would suggest the authors verify the conclusion about the label noise and model size on MNIST and CNN as well. | 3. It is unclear how these theoretical findings relate to real-world deep learning models; I would suggest the authors verify the conclusion about the label noise and model size on MNIST and CNN as well. |
ICLR_2023_903 | ICLR_2023 | of different chain-of-thought prompting methods: including zero-shot-CoT, few-shot-CoT, manual-CoT, and Auto-CoT. The paper conducts case study experiments to look into the limitations of existing methods and proposes improvement directions. Finally, the paper proposes an improved method for Auto-CoT that could achieve a competitive advantage over Manual-CoT.
2. The experiment part is very detailed and comprehensive.
3. The paper is well-organized.
The writing is good and most of the content is very clear to me. Weaknesses/Feedback
1. The writing could be improved.
It would be helpful to draw a table to compare different CoT prompting methods across different dimensions.
How and why shall we make an assumption that “questions of all the wrong demonstrations fall into the same frequent-error cluster”?
Is the selection criteria in section 4.2 reasonable? Namely, why do we not choose questions with more than 60 tokens and rationales with more than 5 reasoning steps?
2. Some experimental details are missing.
The experimental setup for Table 1 is not very clear and lacks some details. For example, how are the manual-CoT examples built? What is the number of demonstration examples?
The experimental details for the Codex baseline are missing. I am curious about the instructions you used to prompt the Codex model.
3. It would be better to discuss more recent related work.
I understand some work has been released very recently. Since this work is closely related to the paper, it would be nice to include it in the revised paper. Recent work includes but is not limited to: [1] Calibrate Before Use: Improving Few-Shot Performance of Language Models, 2021 [2] Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning, 2022 [3] Complexity-Based Prompting for Multi-Step Reasoning, 2022 [4] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering, 2022 [5] What Makes Pre-trained Language Models Better Zero/Few-shot Learners?, 2022
4. Is it possible to apply Auto-CoT to sample the examples with manually designed CoT? | 3. The paper is well-organized. The writing is good and most of the content is very clear to me. Weaknesses/Feedback 1. The writing could be improved. It would be helpful to draw a table to compare different CoT prompting methods across different dimensions. How and why shall we make an assumption that “questions of all the wrong demonstrations fall into the same frequent-error cluster”? Is the selection criteria in section 4.2 reasonable? Namely, why do we not choose questions with more than 60 tokens and rationales with more than 5 reasoning steps? |
NIPS_2021_1822 | NIPS_2021 | of the paper. Organization could definitely be improved and I oftentimes had a bit of a hard time following the discussed steps. But in general, I think the included background is informative and well selected. That said, I could see people having trouble understanding the state-space GP-regression when coming from the more machine-learning-like perspective of function/weight-space GP-regression.
Significance: I think this work is definitely significant, as GP-Regression is usually a bit problematic because of the scaling, though it is still used extensively in certain areas, such as Bayesian Optimization or for modelling dynamical systems in robotics.
• Background: $f$ is a random function which describes a map, e.g. $f : \mathbb{R} \times \mathbb{R}^{D_s} \to \mathbb{R}$, and not a function of $X \in \mathbb{R}^{N_t \times N_s \times D_s}$ as described in eq 1. At least, when one infers the inputs from the definitions of the kernel functions.
• In general, the definitions are confusing and should be checked, e.g. check if $f_{n,k} = f(X_{n,k})$ is correct and properly define $X_{n,k}$.
• The operator $L_s$ is not mentioned in the text.
• 2.1: The description of the process $\bar{f}$ is confusing as the relationship to the original process $f$ is established just at the end.
• It would be helpful to add a bit more background on how the state space model is constructed from the kernel $\kappa_t(\cdot,\cdot)$, e.g. why it induces the dimensionality $d_t$, and also describe the limitation that a finite-dimensional SDE can only be established when a suitable spectral decomposition of the kernel exists.
• It should be mentioned that $p(y \mid H\bar{f}(t_n))$ has to be chosen Gaussian, as otherwise Kalman Filtering and Smoothing and CVI is not possible. Later on in the ELBOs this is assumed anyway.
• $p(u) = \mathcal{N}(u \mid 0, K_{zz})$ is a finite-dimensional Gaussian, not a Gaussian process, and p(u) is not a random variable (= not $\sim$).
• The notations for the covariances, e.g. $K_{zz}$, are discussed in the appendix. I am fine with it; however, it should be referenced as I was confused in the beginning.
• 2.2: The $\log$ is missing for the Fisher information matrix.
• The acronym CVI is used in the paragraph headline before the definition.
• Some figures and tables are not referenced in the text, such as figure 1.
• 3.1: In line 173 the integration should be carried over $\bar{f}$ and not $s$, I guess?
• I had a bit of a hard time establishing the connection $\mathbb{E}_{p(f)}[\mathcal{N}(\tilde{Y} \mid f, \tilde{V})] = p(\tilde{Y})$, which is the whole point of why one part of the ELBO can be calculated using the Kalman filter. Adding a sentence about this to the text would have helped me a lot.
• One question I had was that for computing the ELBO the matrix exponential is needed. When backpropagating the gradient for the hyperparameters, is this efficient? I am used to using the adjoint method for computing gradients of the (linear) differential equation.
• Reference to Appendix for the RMSE and NLPD metrics is missing. | • It should be mentioned that $p(y \mid H\bar{f}(t_n))$ has to be chosen Gaussian, as otherwise Kalman Filtering and Smoothing and CVI is not possible. Later on in the ELBOs this is assumed anyway. |
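As a side note on the two bullets above about how the state-space model is built from the kernel and about differentiating through the matrix exponential: below is a minimal, self-contained sketch (my own illustration, not code from the reviewed paper, and the Matérn-3/2 kernel is just an example choice) of the standard construction, where the kernel alone fixes the state dimension $d_t$ and the discrete-time transition is a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def matern32_state_space(lengthscale, variance):
    """SDE form of the Matern-3/2 kernel with state s = [f, df/dt],
    so this kernel induces a state dimension d_t = 2."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])    # drift matrix
    H = np.array([[1.0, 0.0]])                            # measurement: f(t) = H s(t)
    Pinf = np.diag([variance, lam**2 * variance])         # stationary state covariance
    return F, H, Pinf

def discretize(F, Pinf, dt):
    """Discrete transition A = expm(F dt) (the matrix exponential in question)
    and the matching process-noise covariance for a stationary prior."""
    A = expm(F * dt)
    Q = Pinf - A @ Pinf @ A.T
    return A, Q

F, H, Pinf = matern32_state_space(lengthscale=0.5, variance=1.0)
A, Q = discretize(F, Pinf, dt=0.1)
print(A.shape, Q.shape)   # (2, 2) (2, 2): d_t = 2 for Matern-3/2
```

Gradients of expm(F·dt) with respect to kernel hyperparameters can be obtained through the Fréchet derivative of the matrix exponential or, as the reviewer suggests, through an adjoint formulation of the underlying linear ODE; which is more efficient depends mainly on the state dimension.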
JWwvC7As4S | ICLR_2024 | ### Theory
The main theoretical results are Theorem 2.1 and 2.2. They state that if the "average last-layer feature norm and the last-layer weight matrix norm are both bounded, then achieving near-optimal loss implies that most classes have intra-class cosine similarity near one and most pairs of classes have inter-class cosine similarity near -1/(C-1)".
Qualitatively, this result is an immediate consequence of continuity of the loss function together with the fact that bounded average last-layer feature norm and bounded last-layer weight matrices implies NC.
Quantitatively, this work proves asymptotic bounds on the proximity to NC as a function of the loss. This quantitative aspect is novel. I am not convinced of its significance however, as I will outline below.
1. The result is only asymptotic, and thus it cannot be used to estimate proximity to NC from a given loss value.
2. The bound is used as basis to argue that *"under the presence of batch normalization
and weight decay of the final layer, larger values of weight decay provide stronger NC guarantees in the sense that the intra-class cosine similarity of most classes is nearer to 1 and the inter-class cosine similarity of most pairs of classes is nearer to -1/(C-1)."*
This is backed up by the observation that the bounds get tighter if the weight decay parameter $\lambda$ increases. To be more specific, Theorem 2.2 shows that if $L < L_{min}+\epsilon$, then the average intra-class cosine similarity is smaller than $-1/(C-1) + O(f(C,\lambda,\epsilon,\delta))$ and $f$ decreases with $\lambda$.
The problem with this argument is that the loss function itself depends on the regularization parameter $\lambda$, and so it is a priori not clear whether values of $\epsilon$ are comparable for different $\lambda$. For example, apply this argument to the simpler loss function $L(x,\lambda)=\lambda x^2$. As $L$ is convex, it is clear that the value of $\lambda>0$ is irrelevant for the minimum and the near-optimal solutions. Yet, $L(x,\lambda)<\epsilon$ implies $x^2<\epsilon/\lambda$, which decreases with $\lambda$. By the logic given in this work, the latter inequality suggests that minimizing a loss function with a larger value of $\lambda$ provides stronger guarantees for arriving close to the minimum at $0$. Clearly, this is not the case and is an artifact of quantifying closeness to the loss minimum by $\epsilon$, when it should have been adjusted to $\lambda \epsilon$ instead.
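A compact formal restatement of this rescaling concern (my own formalization of the toy example above, using the same $\lambda$ and $\epsilon$):

```latex
% Toy loss L(x,\lambda) = \lambda x^2 is minimized at x = 0 for every \lambda > 0.
% With a fixed tolerance \epsilon the guaranteed neighbourhood of the minimizer
% shrinks as \lambda grows, even though the optimization problem itself does not
% change with \lambda; scaling the tolerance with \lambda removes the artifact.
\[
  L(x,\lambda) = \lambda x^{2}, \qquad
  L(x,\lambda) < \epsilon \iff x^{2} < \epsilon/\lambda, \qquad
  L(x,\lambda) < \lambda\epsilon \iff x^{2} < \epsilon .
\]
```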
I have doubts on how batch normalization is handled. As far as I see, batch normalization enters the proofs only through the condition $\sum_i \| h_i \|^2 =\| h_i \|^2$ (see Prop 2.1). However, this is only an implication and batch normalization induces stronger constraints. The theorems assume that the loss minimizer is a simplex ETF in the presence of batch normalization. This is not obvious, and neither proven nor discussed. It is also not accounted for in the part of the proof of Theorem 2.2, where the loss minimum $m_{reg}$ is derived.
### Experiments
- Theorems 2.1 and 2.2 are not evaluated empirically. It is not tested, whether the average intra / inter class cosine similarities of near optimal solutions follow the exponential dependency in $\lambda$ and the square (or sixth) root dependency on $\epsilon$ as suggested by the theorems.
- Instead, the dependency of cosine similarities at the end of training (200 epochs) on weight decay strength is evaluated. As presumed by the authors, the intra class cosine similarities get closer to the optimum, if the weight decay strength increases. Yet, there are problems with this experiment. It is inconsistent with the setting of the theory part and thus only provides limited insight on if the idealized theoretical results transfer to practice.
1. The theory part depends only on the weight decay strength on the last layer parameters. Yet, in the experiments, weight decay is applied to all layers and its strength varies between experiments (when instead only the strength of the last layer should change).
2. The theorems assume near optimal training loss, but training losses are not reported. Moreover, the reported cosine similarities are far from optimal (e.g. intra class is around 0.2 instead of 1) which suggests that the training loss is also far from optimal. It also suggests that the models are of too small capacity to justify the 'unconstrained-features' assumption.
3. As (suboptimally) weight decay is applied to all layers, we would expect a large training loss and thus suboptimal cosine similarities for large weight decay parameters. Conveniently, cosine similarities for such large weight decay strengths are not reported and the plots end at a weight decay strength where cosine similarities are still close to optimal.
4. On real-world data sets, the inter class cosine similarity increases with weight decay (even for batch norm models VGG11), disagreeing with the theoretical prediction. This observation is insufficiently acknowledged.
### General
The central question that this work wants to answer, **"What is a minimal set of conditions that would guarantee the emergence of NC?"**, is already solved in the sense that it is known that minimal loss plus a norm constraint on the features (explicit via feature normalization or implicit via weight decay) implies neural collapse. The authors argue to add batch normalization to this list, but that contradicts minimality.
The first contribution listed by the authors is not a contribution.
1. *"We propose the intra-class and inter-class cosine similarity measure, a simple and geometrically intuitive quantity that measures the proximity of a set of feature vectors to several core
structural properties of NC. (Section 2.2)"*
Cosine similarity (i.e. the normalized inner product) is a well known and an extensively used distance measure on the sphere. In the context of neural collapse, cosine similarities were already used in the foundational paper by Papyan et al. (2020) to empirically quantify closeness to NC (cf. Figure 3 in this reference) and many others. Minor:
- There is a grammatical error in the second sentence of the second paragraph
- There is no punctuation after formulas; in the appendix, multiple rows start with punctuation
- intra / inter is sometimes written in italics, sometimes upright
- $\beta$ is used multiple times with different meanings
- Proposition 2.1 $N$ = batch size, Theorem 2.2 $N$ = number of samples per class.
- As a consequence, it seems that $\gamma$ needs to be rescaled to account for the number of batches | 3. As (suboptimally) weight decay is applied to all layers, we would expect a large training loss and thus suboptimal cosine similarities for large weight decay parameters. Conveniently, cosine similarities for such large weight decay strengths are not reported and the plots end at a weight decay strength where cosine similarities are still close to optimal. |
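For readers unfamiliar with the intra-/inter-class cosine similarity measure discussed throughout the review above, here is a minimal sketch (my own illustration; the reviewed paper's exact definition may differ, e.g. in how features are centered), with $-1/(C-1)$ as the simplex-ETF reference value for the inter-class case.

```python
import numpy as np

def nc_cosine_stats(features, labels):
    """features: (n, d) last-layer activations; labels: (n,) integer classes.
    Returns average intra-class and inter-class cosine similarities based on
    globally centered class means, a common way to probe neural collapse."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    means = np.stack([features[labels == c].mean(axis=0) - global_mean for c in classes])
    means /= np.linalg.norm(means, axis=1, keepdims=True)   # unit-norm class means
    cos = means @ means.T
    C = len(classes)
    inter = cos[~np.eye(C, dtype=bool)].mean()              # average off-diagonal similarity
    intras = []
    for i, c in enumerate(classes):
        h = features[labels == c] - global_mean
        h /= np.linalg.norm(h, axis=1, keepdims=True)
        intras.append((h @ means[i]).mean())                # samples vs. their class mean
    return float(np.mean(intras)), float(inter)             # NC targets: 1 and -1/(C-1)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 16)), rng.integers(0, 5, size=300)
print(nc_cosine_stats(X, y))   # random features land far from (1, -1/(C-1))
```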
NIPS_2018_276 | NIPS_2018 | . Strengths: * This is the first inconsistency analysis for random forests. (Verified by quick Google scholar search.) * Clearly written to make results (mostly) approachable. This is a major accomplishment for such a technical topic. * The analysis is relevant to published random forest variations; these include papers published at ICDM, AAAI, SIGKDD. Weaknesses: * Relevance to researchers and practitioners is a little on the low side because most people are using supervised random forest algorithms. * The title, abstract, introduction, and discussion do not explain that the results are for unsupervised random forests. This is a fairly serious omission, and casual readers would remember the wrong conclusions. This must be fixed for publication, but I think it would be straightforward to fix. Officially, NIPS reviewers are not required to look at the supplementary material. Because of having only three weeks to review six manuscripts, I was not able to make the time during my reviewing. So I worry that publishing this work would mean publishing results without sufficient peer review. DETAILED COMMENTS * p. 1: I'm not sure it is accurate to say that deep, unsupervised trees grown with no subsampling is a common setup for learning random forests. It appears in Geurts et al. (2006) as a special case, sometimes in mass estimation [1, 2], and sometimes in Wei Fan's random decision tree papers [3-6]. I don't think these are used very much. * You may want to draw a connection between Theorem 3 and isolation forests [7] though. I've heard some buzz around this algorithm, and it uses unsupervised, deep trees with extreme subsampling. * l. 16: "random" => "randomized" * l. 41: Would be clearer with forward pointer to definition of deep. * l. 74: "ambient" seems like wrong word choice * l. 81: Is there a typo here? Exclamation point after \thereexists is confusing. * l. 152; l. 235: I think this mischaracterizes Geurts et al. (2006), and the difference is important for the impact stated in Section 4. Geurts et al. include a completely unsupervised tree learning as a special case, when K = 1. Otherwise, K > 1 potential splits are generated randomly and unsupervised (from K features), and the best one is selected *based on the response variable*. The supervised selection is important for low error on most data sets. See Figures 2 and 3; when K = 1, the error is usually high. * l. 162: Are random projection trees really the same as oblique trees? * Section 2.2: very useful overview! * l. 192: Typo? W^2? * l. 197: No "Eq. (2)" in paper? * l. 240: "parameter setup that is widely used..." This was unclear. Can you add references? For example, Lin and Jeon (2006) study forests with adaptive splitting, which would be supervised, not unsupervised. * Based on the abstract, you might be interested in [8]. REFERENCES [1] Ting et al. (2013). Mass estimation. Machine Learning, 90(1):127-160. [2] Ting et al. (2011). Density estimation based on mass. In ICDM. [3] Fan et al. (2003). Is random model better? On its accuracy and efficiency. In ICDM. [4] Fan (2004). On the optimality of probability estimation by random decision trees. In AAAI. [5] Fan et al. (2005). Effective estimation of posterior probabilities: Explaining the accuracy of randomized decision tree approaches. In ICDM. [6] Fan el al. (2006). A general framework for accurate and fast regression by data summarization in random decision trees. In KDD. [7] Liu, Ting, and Zhou (2012). Isolation-based anomaly detection. 
ACM Transactions on Knowledge Discovery from Data, 6(1). [8] Wager. Asymptotic theory for random forests. https://arxiv.org/abs/1405.0352 | * The title, abstract, introduction, and discussion do not explain that the results are for unsupervised random forests. This is a fairly serious omission, and casual readers would remember the wrong conclusions. This must be fixed for publication, but I think it would be straightforward to fix. Officially, NIPS reviewers are not required to look at the supplementary material. Because of having only three weeks to review six manuscripts, I was not able to make the time during my reviewing. So I worry that publishing this work would mean publishing results without sufficient peer review. DETAILED COMMENTS * p. |
NIPS_2022_1813 | NIPS_2022 | 1. The innovation of the article seems limited to me, mainly since the work shares the same perspective as [2]. Both models build upon the probabilistic formulation and apply the Hilbert-Schmidt Independence Criterion (HSIC). It may be good to clarify a bit more how novel the paper is compared with [2].
2. There is a lack of qualitative experiments to demonstrate the validity of the conditional independence model. a) It would be better to provide some illustrative experimental results to demonstrate that minimising HSICcond-i could indeed perform better than minimising HSIC_HOOD. Possibly, a toy dataset could be used to demonstrate the separability of inlier features and outlier features.
b) The authors propose a new test metric; however, it lacks a correctness test and comparative experiments with other metrics. It may be better to provide some visualization results or a schematic diagram, which would make it easier for readers to understand.
3. Current experimental results seem not very convincing to me. Some critical comparative results are missing. a) Under the setting of unseen OOD training data, the original papers of DIN [34], Mahalanobis distance [31], and Energy [36] did not use fake/augmented OOD training data. These settings need to be clarified in the paper. Moreover, the impact of using different augmentation methods on the results could be explored in the ablation.
b) In CIFAR-100, the experimental setup appears to be consistent with that of HOOD [2]. However, in Table 1 (unseen OOD training data), HOOD's results are missing. In [2], the results of HOOD are superior to those of Conditional-I.
c) In Table 2 (unseen OOD training data), HOOD's results are also missing.
d) Both Conditional-i-generative and HOOD results are missing for the NLP OOD detection tasks. Since the results of the most relevant methods are missing, the present experiments cannot convince me of the validity of the improvements.
4. The memory bank architecture is one contribution, but the authors do not provide quantitative results for introducing the memory bank architecture.
The authors seemed not to discuss the limitations of the proposed model. | 2. There is a lack of qualitative experiments to demonstrate the validity of the conditional independence model. a) It would be better to provide some illustrative experimental results to demonstrate that minimising HSICcond-i could indeed perform better than minimising HSIC_HOOD. Possibly, a toy dataset could be used to demonstrate the separability of inlier features and outlier features. b) The authors propose a new test metric; however, it lacks a correctness test and comparative experiments with other metrics. It may be better to provide some visualization results or a schematic diagram, which would make it easier for readers to understand. |
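Since HSIC is central to the review above, here is a minimal sketch (my own illustration, not code from the reviewed paper or from [2]) of the standard biased HSIC estimator with Gaussian kernels, which could serve the kind of toy-data sanity check suggested in point 2a).

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    """Gram matrix of a Gaussian (RBF) kernel."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2 with H = I - 11^T / n."""
    n = X.shape[0]
    K, L = gaussian_gram(X, sigma), gaussian_gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H) / (n - 1) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
print(hsic_biased(X, rng.normal(size=(200, 2))))             # near 0: independent inputs
print(hsic_biased(X, X + 0.1 * rng.normal(size=(200, 2))))   # clearly larger: dependent inputs
```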
NIPS_2018_172 | NIPS_2018 | 1. The writing is not clear. The descriptions of the technical part cannot be easily followed by the reviewer, which makes it very hard to reimplement the techniques. 2. An incomplete sequence is represented by a finite state automaton. In this paper, only a two-out-of-three finite state automaton is used. Is it possible/computationally feasible to use a more complicated finite state automaton to select more image labels? As there are 12 image labels per image on average, only selecting two labels seems insufficient. 3. The authors describe an online version of the algorithm because it is impractical to train multiple iterations/epochs with large models and datasets. Is it true that the proposed method requires much more computation than other methods? Please compare the computational complexity with other methods. 4. How many estimated complete captions could be obtained for one image? Is it possible to generate over C(12, 3) * C(3, 2) = 660 different captions for one image? | 3. The authors describe an online version of the algorithm because it is impractical to train multiple iterations/epochs with large models and datasets. Is it true that the proposed method requires much more computation than other methods? Please compare the computational complexity with other methods. |
NIPS_2020_1274 | NIPS_2020 | - It would be helpful if the paper’s definition of “decentralized” is more explicitly stated in the paper, instead of in a footnote. Other ways of defining “decentralized” is where agents do not have access to the global state and actions of other agents during both training and execution which LIO seems to do. - Systematically studying the impact of the cost of incentivization on performance would have been a helpful analysis (e.g., for various values of \alpha, what are the reward incentives each agent receives, and what is the collective return?). It seems like roles between “winners” and “cooperators” emerge because the cost to reward the other agent becomes high for the cooperators. If this cost were lower, it seems like roles would be less distinguished, causing the collective return to be much lower. - In Figure 5d, more explanation as to why the Winner receives about the same incentive as the Cooperator to pull the lever would be helpful; it doesn’t match how the plot is described on lines 286-287. | - Systematically studying the impact of the cost of incentivization on performance would have been a helpful analysis (e.g., for various values of \alpha, what are the reward incentives each agent receives, and what is the collective return?). It seems like roles between “winners” and “cooperators” emerge because the cost to reward the other agent becomes high for the cooperators. If this cost were lower, it seems like roles would be less distinguished, causing the collective return to be much lower. |
sXErPfdA7Q | EMNLP_2023 | UPDATE: The authors addressed most of my concerns; however, I believe that the first and second points are still valid and should be discussed as potential limitations (i.e., there are too many confounding variables to claim that one is investigating the impact of different training methods; and the datasets might have been - and probably were - used for RLHF).
UPDATE 2: the concerns were addressed by the authors to the extent possible with the current design.
- The authors claim to investigate the effect of different training methods on the processing of discourse-level information (as one of three main experiments); however, it is questionable whether what we see is the effect of different training methods, different training data, or perhaps the data used for RLHF (rather than RLHF alone; that is, it is possible that the MT datasets used in this study were used to create RLHF examples). Since the authors research black-box models behind an API, I do not think we can make any claims about the effect of the training method (of which we know little) on the model's performance.
- Data contamination might have influenced the evaluation - The authors employ various existing datasets. While two of these datasets do have publication dates past OpenAI's models' training cutoff point (made public in August 2022), this seems not to be the case for the other datasets employed in this study (including the dataset with contextual errors). It is likely that these were included in the training data of the LLMs being evaluated. Furthermore, with the RLHF models, it is also possible (and quite likely) that MT datasets published post-training were employed to create the RLHF data. For instance, the WMT22 dataset was made public in August 2022, which gives companies like OpenAI plenty of time to retrieve it, reformulate it into training examples, and use it for RLHF.
- The authors discuss how certain methods are significantly different from others, yet no significance testing is done to support these claims. For example, in line 486 the authors write "The conversational ability of ChatGPT and GPT-4 significantly boosts translation quality and discourse awareness" -- the difference between zh->en ChatGPT (17.4 d-BLEU; 2.8/2.9 humeval) and GPT-4 (18.8 d-BLEU; 2.6/2.8 humeval) and the scores for FeedME-2 (16.1 d-BLEU; 2.2/2.3 humeval) and PPO (17.2 d-BLEU; 2.6/2.7 humeval) is minimal and it's hard to say whether it is significant without proper testing (including checking the distribution and accounting for multiple comparisons).
- The main automatic method used throughout this paper is d-BLEU, which works best with multiple references, yet I believe only one is given. I understand that there are limited automatic options for document level evaluation where the sentences cannot be aligned. Some researchers used sliding windows for COMET, but that's not ideal (yet worth trying?). That is why the human evaluation is so important, and the one in this paper is lacking.
- Human evaluation - many important details are missing so it is hard to judge the research design (more questions below); however, what bothers me most is that the authors construct an ordinal scale with a clear cutoff point between 2 and 3 (for general quality especially), yet they present only average scores. I do believe that "5: Translation passes quality control; the overall translation is excellent (...)" minus "4: Translation passes quality control; the overall translation is very good" is not the same "one" as "3: Translation passes quality control; the overall translation is ok. (...)" minus "2: Translation does not pass quality control; the overall translation is poor. (...)". It is clear that the difference between 5 and 4 is minimal, while that between 3 and 2 is much bigger. A simple average is not a proper way to analyze this data (a proper analysis would include histograms of scores, possibly a stacked bar with the proportion of choices, and statistical testing).
- Another issue with the human evaluation is that it appears that the evaluators were asked to evaluate an entire document by assigning it one score. Note that this is a cognitively demanding and difficult task for the evaluators. The results are likely to be unreliable (please see Sheila Castilho's work on this topic). There is also no indication that the annotators were at least given some practice items.
- "Discourse-aware prompts" - I am not sure what this experiment was about. It seems that the idea was to evaluated how the availability of discourse information can improve the translation, but if that is so, then all three setups did have discourse level information (hence this evaluation is impossible). The only thing this seems to be doing is checking in which way the information should be presented (one sentence at a time in a chat, all sentences at once but clearly marked, or the entire document at once without sentence boundaries). | - The authors discuss how certain methods are significantly different from others, yet no significance testing is done to support these claims. For example, in line 486 the authors write "The conversational ability of ChatGPT and GPT-4 significantly boosts translation quality and discourse awareness" -- the difference between zh->en ChatGPT (17.4 d-BLEU; 2.8/2.9 humeval) and GPT-4 (18.8 d-BLEU; 2.6/2.8 humeval) and the scores for FeedME-2 (16.1 d-BLEU; 2.2/2.3 humeval) and PPO (17.2 d-BLEU; 2.6/2.7 humeval) is minimal and it's hard to say whether it is significant without proper testing (including checking the distribution and accounting for multiple comparisons). |
ARR_2022_141_review | ARR_2022 | - The approach description (§ 3) is partially difficult to follow and should be revised. The additional page of the camera-ready version should be used to extend the approach description (rather than adding more experiments).
- CSFCube results are not reported with the same metrics as in the original publication, making a comparison harder than needed.
- The standard deviation from the Appendix could be added to Table 1 at least for one metric. There should be enough horizontal space.
- Notation of BERT_\theta and BERT_\epsilon is confusing. Explicitly calling them the co-citation sentence encoder and the paper encoder could make it clearer. - How are negatives sampled for BERT_\epsilon?
Additional relevant literature: - Luu, K., Wu, X., Koncel-Kedziorski, R., Lo, K., Cachola, I., & Smith, N.A. (2021). Explaining Relationships Between Scientific Documents. ACL/IJCNLP.
- Malte Ostendorff, Terry Ruas, Till Blume, Bela Gipp, Georg Rehm. Aspect-based Document Similarity for Research Papers. COLING 2020.
Typos: - Line 259: “cotation” - Line 285: Missing “.” | - The approach description (§ 3) is partially difficult to follow and should be revised. The additional page of the camera-ready version should be used to extend the approach description (rather than adding more experiments). |
ZPwX1FL4yp | ICLR_2025 | 1.The application of gyro-structures on SPD manifolds and correlation matrices is indeed novel, but the paper does not clearly articulate the theoretical significance or unique advantages of using Power-Euclidean (PE) geometry over existing approaches like Affine-Invariant (AI) or Log-Euclidean (LE) methods. The work seems incremental without providing substantial theoretical or empirical evidence that PE geometry offers practical improvements beyond computational convenience. Especially, while gyro-structures are presented as an extension to non-Euclidean spaces, the paper does not establish a strong need or motivation for this approach within the broader context of machine learning or geometry-based learning. It lacks a thorough discussion on why gyro-structures would fundamentally enhance SPD or correlation matrix-based learning in a way that current methods do not.
2.Some key theoretical concepts and mathematical operations, such as those in gyrovector space theory and correlation matrix manifold construction, are highly technical and lack intuitive explanations. Additional clarification or simplified summaries would improve accessibility for readers unfamiliar with advanced Riemannian geometry.
3.On the experiments part, the related discussion lacks interpretive insights that would elucidate why the proposed gyro-structures outperform existing methods. In addition, while the paper compares its methods against SPD-based models and a few gyro-structure-based approaches, it lacks comparison with other state-of-the-art methods that might not rely on gyro-structures. This omission makes it unclear whether the proposed approach actually outperforms simpler or more commonly used techniques in manifold-based learning. | 3.On the experiments part, the related discussion lacks interpretive insights that would elucidate why the proposed gyro-structures outperform existing methods. In addition, while the paper compares its methods against SPD-based models and a few gyro-structure-based approaches, it lacks comparison with other state-of-the-art methods that might not rely on gyro-structures. This omission makes it unclear whether the proposed approach actually outperforms simpler or more commonly used techniques in manifold-based learning. |
4WrqZlEK3K | EMNLP_2023 | 1. It is desired to have more evidence or analysis supporting the training effectiveness property of the dataset or other key properties
that will explain the importance and possible use-cases of _LMGQS_ over other QFS datasets.
2. Several methods are unclear, which affects readability and reproducibility:
* "To use LMGQS in the zero-shot setting, it is necessary to convert the queries of diverse formats into natural questions." (L350) Please explain why.
* "Specifically, we finetune a BART model to generate queries with the document and summary as input." (L354) - How was the fine-tuning done? What is the training set, and what are the relevant hyper-parameters for reproducing the results?
* "we manually create a query template to transform the query into a natural language question." (L493) - what are the templates? what are the query template and several examples. | 1. It is desired to have more evidence or analysis supporting the training effectiveness property of the dataset or other key properties that will explain the importance and possible use-cases of _LMGQS_ over other QFS datasets. |
ICLR_2021_2527 | ICLR_2021 | Duplicate task settings. The proposed new task, cross-supervised object detection, is almost the same as the task defined in (Hoffman et al. 2014, Tang et al. 2016, Uijlings et al. 2018). Both of these previous works study the task of training object detectors on the combination of base class images with instance-level annotations and novel class image with only image-level annotations. The work (Uijlings et al. 2018) also conducts experiments on COCO which contains multi-objects in images. In addition, the work (Khandelwal et al. 2020) unifies the setting of training object detectors on the combination of fully-labeled data and weakly-labeled data, and conducts experiments on multi-object datasets PASCAL VOC and COCO. The task proposed by this paper could be treated as a special case of the task studied in (Khandelwal et al. 2020). We should avoid duplicate task settings.
Limited novelty. The novelty of the proposed method is limited. Combining recognition head and detection head is not new in weakly supervised object detection. The weakly supervised object detection networks (Yang et al. 2019, Zeng et al. 2019) also generate pseudo instance-level annotations from recognition head to train detection head (i.e., head with bounding box classification and regression) for weakly-labeled data.
Review summary: In summary, I would like to give a rejection to this paper due to the duplicate task settings and limited novelty.
Khandelwal et al., Weakly-supervised Any-shot Object Detection, 2020
---------- Post rebuttal ----------
After discussions with authors and reading other reviews, I acknowledge the contribution that this paper advances the performance of cross-supervised object detection.
However, I would like to keep my original reject score. The reasons are as follows.
Extending datasets from PASCAL VOC to COCO is not a significant change compared to previous tasks. General object detection papers also evaluated on PASCAL VOC only about five years ago and now evaluate mainly on COCO. With the development of computer vision techniques, it is natural to try more challenging datasets. So although this paper claims to focus on more challenging datasets, there is no significant difference between the tasks studied in previous works like [a] and this paper.
In addition, apart from ImageNet, the work [b] also evaluates its method on the Open Images dataset, which is even larger and more challenging than COCO. The difference between the tasks studied in [b] and this paper is only that [b] adds a constraint that weakly-labeled classes have semantic correlations with fully-labeled classes and this paper doesn't.
Therefore, the task itself cannot be one of the main contributions of this paper (especially the most important contribution of this paper). I would like to suggest the authors change their title / introduction / main paper by 1) giving lower weights to the task parts and 2) giving higher weights to intuitions of why previous works fail on challenging datasets like COCO and motivations of the proposed method.
[a] YOLO9000: Better, Faster, Stronger, In CVPR, 2017
[b] Detecting 11K Classes: Large Scale Object Detection without Fine-Grained Bounding Boxes, In ICCV, 2019 | 2) giving higher weights to intuitions of why previous works fail on challenging datasets like COCO and motivations of the proposed method. [a] YOLO9000: Better, Faster, Stronger, In CVPR, 2017 [b] Detecting 11K Classes: Large Scale Object Detection without Fine-Grained Bounding Boxes, In ICCV, 2019 |
NIPS_2022_2373 | NIPS_2022 | weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. 4. This work theoretically proves that CATER is resilient to statistical reverse-engineering, which is also verified by their experiments. In addition, they show that CATER can defend against ONION, an effective approach for backdoor removal.
Weakness: 1. The authors assume that all training data are from the API response, but what if the adversary only uses part of the API response? 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5.
The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future. | 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5. The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future. |
viNQSOadLg | ICLR_2024 | * Lack of Training Details: The paper lacks sufficient information regarding the training process of the policy. It should provide more details on the training data used, the methodology for updating parameters, and the specific hyperparameters employed in the process.
* Unclear Literature Review: The literature review in the paper needs improvement. It is not adequately clear what the main contribution of the proposed method is, and how it distinguishes itself from existing work, particularly in relation to the utilization of GFlowNet for sequence generation. The paper should provide a more explicit and comparative analysis of related work.
* Ambiguity in Key Innovation: The claim that GFNSeqEditor can produce novel sequences with improved properties lacks clarity regarding the key innovation driving these contributions. The paper should better articulate what novel techniques or insights lead to the claimed improvements, thereby enhancing the reader's understanding of the method's unique value. | * Unclear Literature Review: The literature review in the paper needs improvement. It is not adequately clear what the main contribution of the proposed method is, and how it distinguishes itself from existing work, particularly in relation to the utilization of GFlowNet for sequence generation. The paper should provide a more explicit and comparative analysis of related work. |
NIPS_2018_66 | NIPS_2018 | of their proposed method for disentangling discrete features in different datasets. I think that the main of the paper lies in the relatively thorough experimentation. I thought the results in Figure 6 were particularly interesting in that they suggest that there is an ordering in features in terms of mutual information between data and latent variable (for which the KL is an upper bound), where higher mutual information features appear first as the capacity is increased. I also appreciate the explicit discussion of the robust of the degree of disentanglement across restarts, as well as the sensitivity to hyperparameters. Given the difficulties observed in Figure 4 in distinguishing between similar digits (such as 5s and 8s), it would be interesting to see results for this method on a dataset like dSprites, where the shapes are very similar in pixel space. The inferred chair rotations in Figure 7 are also a nice illustration of the ability of the method to generalize to the test set. The main thing that this paper lacks is a more quantitative evaluation. A number of recent papers have proposed metrics for evaluating disentangled representations. In addition the metrics proposed by Kim & Mnih (2018) and Chen et al. (2018), the work by Eastwood & Williams (2017) [1] is relevant in this context. All of these metrics presume that we have access to labels for true latent factors, which is not the case for any of the datasets considered in the experimentation. However, it would probably be worth evaluating one or more of these metrics on a dataset such as dSprites. A minor criticism is that details the training procedure and network architectures are somewhat scarce in the main text. It would be helpful to briefly describe the architectures and training setup in a bit more detail, and explicitly call out the relevant sections of the supplementary material. In particular, it would be good to list key parameters such as γ and the schedule for the capacities Cz and Cc, e.g., the figure captions. In Figure 6a, please mark the 25k iterations (e.g. with a vertical dashed line) to indicate that this is where the capacity is no longer increased further. Questions - How robust is the ordering on features Figure 6, given the noted variability across restarts in Section 4.3? I would hypothesize that the discrete variable always emerges first (given that this variable is in some sense given a âgreaterâ capacity than individual dimensions in the continuous variables). Is the ordering on the continuous variables always the same? What happens when you keep increasing the capacity beyond 25k iterations. Does the network eventually use all of the dimensions of the latent variables? - I would also appreciate some discussion of how the hyperparameters in the objective were chosen. In particular, one could imagine that the relative magnitude of Cc and Cz would matter, as well as γ. This means that there are more parameter to tune than in, e.g., a vanilla β-VAE. Can the authors comment on how they chose the reported values, and perhaps discuss the sensitivity to these particular hyperparameters in more detail? - In Figure 2, what is the range of values over which traversal is performed? Related Work In addition to the work by Eastwood & Williams, there are a couple of related references that the authors should probably cite: - Kumar et. al [2] also proposed the total correlation term along with Kim & Mnih (2018) and Chen et al. (2018). - A recent paper by Esmaeli et al. 
[3] employs an objective based on the Total Correlation, related to the one in Kim & Mnih (2018) and Chen et. al (2018) to induce disentangled representations that can incorporate both discrete and continuous variables. Minor Comments - As the authors write in the introduction, one of the purported advantages of VAEs over GANs is stability of training. However, as mentioned by the author, including multiple variables of different types also makes the representation unstable. Given this observation, maybe it is worth qualifying these statements in the introduction. - I would say that section 3.2 can be eliminated - I think that at this point readers can be presumed to know about the Gumbel-Softmax/Concrete distribution. - Figure 1 could be optimized to use less whitespace. - I would recommend to replace instances of (\citet{key}) with \citep{key}. References [1] Eastwood, C. & Williams, C. K. I. A Framework for the Quantitative Evaluation of Disentangled Representations. (2018). [2] Kumar, A., Sattigeri, P. & Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848 (2017). [3] Esmaeili, B. et al. Structured Disentangled Representations. arXiv:1804.02086 [cs, stat] (2018). | - I would say that section 3.2 can be eliminated - I think that at this point readers can be presumed to know about the Gumbel-Softmax/Concrete distribution. |
XX73vFMemG | EMNLP_2023 | 1. The paper studies the case where the student distills knowledge to the teacher, which improves the teacher's performance. However, the improvements could potentially be due to regularization effects rather than distillation as claimed, since all fine-tuning is performed for 10 epochs and without early-stopping. Fine-tuning on GLUE without validation early-stopping usually has very high variance; proper ablation studies are needed to verify this.
2. The performance gains (especially, the Community KD) over standard one-way distillation seem quite marginal based on the experiments when compared to other BERT distillation techniques like BERT-PKD, TinyBERT, MobileBERT, or BERT-of-Theseus. This is not a good signal given the training process is more complex with co-training and co-distillation compared to other distillation techniques.
3. Evaluations are limited to BERT models only. Testing on other PLMs would be more convincing. | 1. The paper studies the case where the student distills knowledge to the teacher, which improves the teacher's performance. However, the improvements could potentially be due to regularization effects rather than distillation as claimed, since all fine-tuning is performed for 10 epochs and without early-stopping. Fine-tuning on GLUE without validation early-stopping usually has very high variance; proper ablation studies are needed to verify this. |
NIPS_2018_688 | NIPS_2018 | weakness of the paper is that experiments are limited to a single task. That said, they compare against two reasonable baselines (CPO, and including the constraint as a negative reward). While the formal definition of the constrained objective in L149 - L155 is appreciated, it might be made a bit more clear by avoiding some notation. For example, instead of defining a new function I(x,y) (L151), you could simply use \delta(x >= y), stating that \delta is the Dirac delta function. A visualization of Eq 1 might be helpful to provide intuition. Minutia * Don't start sentences with citation. For example, (L79): "[32] proposed ..." * Stylistically, it's a bit odd to use a *lower* bound on the *cost* as a constraint. Usually, "costs" are minimized, so we'd want an upper bound. * Second part of Def 4.2 (L148) is redundant. If Pr(X >= \gamma) <= \rho, then it's vacuously true that Pr(X <= \gamma) >= 1 - \rho. Also, one of these two terms should be a strict inequality. * Include the definition of S (L154.5) on its own line (or use \mbox). * Label the X and Y axes of plots. This paper makes an important step towards safe RL. While this paper builds upon much previous work, it clearly documents and discusses comparisons to previous work. While the results are principally theoretical, I believe it will inspire both more theoretical work and practical applications. | * Label the X and Y axes of plots. This paper makes an important step towards safe RL. While this paper builds upon much previous work, it clearly documents and discusses comparisons to previous work. While the results are principally theoretical, I believe it will inspire both more theoretical work and practical applications. |
Kjs0mpGJwb | EMNLP_2023 | 1. Although the structural information has not been explicitly used in the current problem statement, it has been implicitly used in a few previous works on bilingual mapping induction. Please see:
"Multi-Stage Framework with Refinement based Point Set Registration for Unsupervised Bi-Lingual Word Alignment". Oprea et al., COLING 2022.
"Point Set Registration for Unsupervised Bilingual Lexicon Induction". H. Cao and T. Zhao, IJCAI 2018.
As such, a proper discussion and comparison is necessary in the current paper.
2. The GCN that is proposed has no learnable parameters, so it is a static matrix transformation operation. So, why is the term GCN used - isn't it misleading? Further, if it is not learning anything, it is a simple aggregation function, as far as I understood. Then the structural information is not propagating through the entire graph - this is counter-intuitive because the author(s) claim that structural information usage is a key feature of the proposed framework. Am I missing something here?
3. For experiments, I have 2 comments - (i) addition of performance on word similarity and sentence translation tasks as in the MUSE paper (and others) would lend more credibility to the robustness and effectiveness of the framework. (ii) addition of morphologically rich languages like Finnish, Hebrew, etc and low-resource languages in the experiments would be good to have (minor point). | 3. For experiments, I have 2 comments - (i) addition of performance on word similarity and sentence translation tasks as in the MUSE paper (and others) would lend more credibility to the robustness and effectiveness of the framework. (ii) addition of morphologically rich languages like Finnish, Hebrew, etc and low-resource languages in the experiments would be good to have (minor point). |
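To illustrate point 2 of the review above: a graph convolution layer with no learnable weights reduces to a fixed aggregation by the normalized adjacency matrix, as in this minimal sketch (my own illustration, not the reviewed paper's code).

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Without weight matrices and nonlinearities, k "GCN layers" are just k
# multiplications by the same fixed matrix: a static smoothing of the features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))   # node features
A_norm = normalized_adjacency(A)
H = A_norm @ (A_norm @ X)                           # two parameter-free "layers"
print(H.shape)
```

With one such fixed aggregation per layer, information reaches only k-hop neighbours after k layers, which is the propagation limitation raised in the review.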
ICLR_2023_2658 | ICLR_2023 | Weakness:
1. I think the work lacks novelty, as GPN [1] has already proposed to add a node importance score in the calculation of the class prototype, and this paper only gives a theoretical analysis of it.
2. The experimental part is not sufficient. (1) For the few-shot graph node classification problem of predicting nodes with novel labels, there are some methods that the paper does not compare with. For example, G-Meta is mentioned in the related works but not compared in the experiments. A recent work, TENT [2], is not mentioned in the related works. As far as I know, the above two approaches can be applied in the problem setting of the paper. (2) For the approach proposed in the paper, there is no detailed ablation study for the functionality of each designed part. (3) It would be better to add a case study to show the strength of the proposed method with an example. Concerns:
1. The paper considers the node importance among nodes with the same label in the support set. In the 1-shot scenario, how can node importance be used? I also find that the experimental part of the paper does not include the 1-shot setting, but related works such as RALE have a 1-shot setting; why is that?
2. The paper says that the theory of node importance can be applied to other domains. I think there should be an example to verify that conclusion.
3. In section 5.3, ‘we get access to abundant nodes belonging to each class’. I do not think this is always true, as there might be a class in the training set that only has a few samples, given the long-tailed distribution of samples in most graph datasets.
[1] Ding et al. Graph Prototypical Networks for Few-shot Learning on Attributed Networks
[2] Wang et al. Task-Adaptive Few-shot Node Classification | 1. The paper considers the node importance among nodes with the same label in the support set. In the 1-shot scenario, how can node importance be used? I also find that the experimental part of the paper does not include the 1-shot setting, but related works such as RALE have a 1-shot setting; why is that? |
Q2IInBu2kz | EMNLP_2023 | 1. You should compare your model with more recent models [1-5].
2. Contrastive learning has been widely used in Intent Detection [6-9], although the tasks are not identical. I think the novelty of this simple modification is not suitable for EMNLP.
3. You should provide more details about the formula in the text, e.g. $\ell_{BCE}$; even if it is simple, give specific details.
4. You don't provide the values of some hyper-parameters, such as τ.
5. Figure 1 is blurry, which affects readability.
[1] Qin L, Wei F, Xie T, et al. GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021: 178-188.
[2] Xing B, Tsang I. Co-guiding Net: Achieving Mutual Guidances between Multiple Intent Detection and Slot Filling via Heterogeneous Semantics-Label Graphs[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 159-169.
[3] Xing B, Tsang I. Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 3964-3975.
[4] Song M, Yu B, Quangang L, et al. Enhancing Joint Multiple Intent Detection and Slot Filling with Global Intent-Slot Co-occurrence[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 7967-7977.
[5] Cheng L, Yang W, Jia W. A Scope Sensitive and Result Attentive Model for Multi-Intent Spoken Language Understanding[J]. arXiv e-prints, 2022: arXiv: 2211.12220.
[6] Liu H, Zhang F, Zhang X, et al. An Explicit-Joint and Supervised-Contrastive Learning Framework for Few-Shot Intent Classification and Slot Filling[C]//Findings of the Association for Computational Linguistics: EMNLP 2021. 2021: 1945-1955.
[7] Qin L, Chen Q, Xie T, et al. GL-CLeF: A Global–Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 2677-2686.
[8] Liang S, Shou L, Pei J, et al. Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 9903-9918.
[9] Chang Y H, Chen Y N. Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding[J]. arXiv preprint arXiv:2205.00693, 2022. | 3. You should provide more details about the formula in the text, e.g. $\ell_{BCE}$; even if it is simple, give specific details. |
nuPp6jdCgg | EMNLP_2023 | 1.While this paper shows many findings, few of them are new to the community.
2.There should be more discussions about why LLMs struggle at fine-grained hard constraints and how to address these problems.
3.It would be better to include vicuna and falcon in Table-2, Table-3, and Table-5. | 2.There should be more discussions about why LLMs struggle at fine-grained hard constraints and how to address these problems. |
NIPS_2020_396 | NIPS_2020 | 1- While the experimental results suggest that the proposed approach is valuable for self-supervised learning on 360 video data which have spatial audio, little insights are given about why do we need to do self-supervised learning on this kind of data. In particular, 1) There are currently several large audio-video datasets such as HowTo100M and VIOLIN, 2) There is not much 360 video data on YouTube in comparison to normal data.
2- For the experimental comparisons, the authors at least should report the performance with using other self-supervised learning losses. For instance, masking features, predicting next video/audio feature, or reconstructing a feature. This will be very useful for understanding the importance of introduced loss in comparison with previous ones.
3- How the videos are divided into 10s segments?
4- It would be interesting to see how this spatial alignment works. For example, aligning an audio to the video and visualizing the corresponding visual region.
5- What's the impact of batch size on performance? batch size of 28 seems small to cover enough positive and negative samples. In this case, using MoCo loss instead of InfoNCE wouldn't help? | 1- While the experimental results suggest that the proposed approach is valuable for self-supervised learning on 360 video data which have spatial audio, little insights are given about why do we need to do self-supervised learning on this kind of data. In particular, |
NIPS_2016_478 | NIPS_2016 | weakness is in the evaluation. The datasets used are very simple (whether artificial or real). Furthermore, there is no particularly convincing direct demonstration on real data (e.g. MNIST digits) that the network is actually robust to gain variation. Figure 3 shows that performance is worse without IP, but this is not quite the same thing. In addition, while GSM is discussed and stated as "mathematically distinct" (l.232), etc., it is not clear why GSM cannot be used on the same data and results compared to the PPG model's results.
Minor comments (no need for authors to respond):
- The link between IP and the terms/equations could be explained more explicitly and prominently
- Pls include labels for subfigures in Figs 3 and 4, and not just state in the captions.
- Have some of the subfigures in Figs 1 and 2 been swapped by mistake? | - The link between IP and the terms/equations could be explained more explicitly and prominently - Pls include labels for subfigures in Figs 3 and 4, and not just state in the captions. |
NIPS_2018_543 | NIPS_2018 | Weakness: The main idea of the paper is not original. The entire Section 2.1 is classical results in Gaussian process modeling. There are many papers and books described it. I only point out one such source, Chapter 3 and 4 of Santner, Thomas J., Brian J. Williams, and William I. Notz. The design and analysis of computer experiments. Springer Science & Business Media, 2013. The proposed Bayes-Sard framework (Theorem 2.7), which I suspected already exist in the Monte Carlo community, is a trivial application of the Gaussian process model in the numerical integration approximation. The convergence results, Theorem 2.11 and Theorem 2.12, are also some trivial extension of the classic results of RKHS methods. See Theorem 11.11 and 11.13 of Wendland, Holger. Scattered data approximation. Vol. 17. Cambridge university press, 2004. Or Theorem 14.5 of Fasshauer, Gregory E. Meshfree approximation methods with MATLAB. Vol. 6. World Scientific, 2007. Quality of this paper is relatively low, even though the clarity of the technical part is good. This work lacks basic originality, as I pointed out in its weakness. Overall, this paper has little significance. | 17. Cambridge university press, 2004. Or Theorem 14.5 of Fasshauer, Gregory E. Meshfree approximation methods with MATLAB. Vol. |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to representation multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products.
****************************Quality****************************
Strengths:
- The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets.
- The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point.
Weaknesses:
- There are a few baselines that could also be worth comparing to such as "Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019"
- Since the model has connections to convolutional arithmetic units then ConvACs can also be a baseline for comparison. Given that you mention that "resulting in a correspondence of our HPFN to an even deeper ConAC", it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learning "flexible and higher-order local and global intercorrelations"?
- With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting?
- Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation.
- The model does seem to perform well but there seem to be much more parameters in the model especially as the model consists of more layers. Could you comment on these tradeoffs including time and space complexity?
- What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones?
- How can the model be modified to remain useful when there are noisy or missing modalities?
- Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities?
****************************Clarity****************************
Strengths:
- The paper is well written with very informative Figures, especially Figures 1 and 2.
- The paper gives a good introduction to tensors for those who are unfamiliar with the literature.
Weaknesses:
- The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality?
- It is unclear whether the improved results in Table 1 with respect to existing methods is due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful.
- More experimental details such as neural networks and hyperparameters used should be included in the appendix.
- Results should be averaged over multiple runs to determine statistical significance.
- There are a few typos and stylistic issues:
1. line 2: "Despite of being compact" -> "Despite being compact"
2. line 56: "We refer multiway arrays" -> "We refer to multiway arrays"
3. line 158: "HPFN to a even deeper ConAC" -> "HPFN to an even deeper ConAC"
4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct.
5. equations (4) and (5) should use \left( and \right) for parenthesis.
6. and so on…
****************************Significance****************************
Strengths:
- This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising.
Weaknesses:
- Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization" which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data?
- The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models and experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would be this work more significant for the research community.
****************************Post Rebuttal****************************
I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones. | - Results should be averaged over multiple runs to determine statistical significance. |
NIPS_2021_2235 | NIPS_2021 | (and questions):
A) The biggest weakness I think is that the analysis happens on a very restricted scenario, with no transfer: the authors study only the case where we have a single dataset and learn the encoder without using the label that we know exist and use to learn the classifiers - this is suboptimal and would not make sense in practive. I understand that this evaluation is common practice in SSL paper, however this is only a small part of the evaulations these papers have, and transfer learning is the more important and realisting setting. The authors do discuss this in lines 121-124 but justifying their choice by only citing empirical evidence of correlation of this "task" with transfer tasks, but I wouldn't say there are no guarantees there. Calling the second stage of classifier learning on the same dataset as traing as a "downstream supervised task" is an exaggeration (I would suggest to the authors to rephrase). Although this task "correlates" with transfer tasks, it is not clear to me if also this analysis extends. It would be great to discuss this at least a bit further.
B) Even for this task above, there are further simplifications to facilitate the analysis: 1) only the SimCLR case is covered and yet, there is no analysis on a seemingly important (see SimCLR-v2 and other recent papers that show that) part of that approach, ie the projection head. 2) The MoCo approach which is a very popular variant with a memory queue is not discussed. How does the analysis extend to negatives from a memory queue and dual encoders with exponential moving average? 3) There is a further sumplification by the use of a mean classifier, which is not common practice . Why is that simplification there, and is it central for the analysis?
C) The (absolute) numbers in Table 1 are not so intutive, unbounded and hard to understand. It is really hard to understand what is the main message of Table 1 and some of the rows, eg colisions, could perhaps be made more informative by turning them into probabilities. It is unclear what is meant in line 269 by "10 sampled data augmentation per sample" and unclear what reporting the Collision bound without the alpha and beta constants offer (section 4.2 is very unclear to me).
Some more notes/questions:
The discussion on clustering based SSL methods and Sec 4.4 is very restricted to this unrealistic task, that becomes even more unrealistic for clustering based pretraining. It is uncler to me what it offers.
A missing ref (Kalantidis et al "Hard Negative Mixing for Contrastive Learning" NeuriPS 2020) synthesizes hard negatives for contrastive SSL. Same as MoCo, it would be interesting to discuss how this analysis extends to synthetic negatives.
Rating
Although an interesting study, the paper has limitations (see "weaknesses" section above). I would say that the current version of the paper is marginally below the acceptance threshold, but I am looking forward to the authors addressing my concerns above in their rebuttal.
Post-rebutal thoughts
The authors provided extensive responses to my questions, answering many in a satisfactory way. I still think however that a central concern listed in the original review stand: the fact that Arora et al study the same task that first learns without labels and the with labels on the same dataset (and only that task) doesn't mean that this is what should be the only task to study for "Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning".
In their response, the authors claim that
The self-supervised learning setting of our analysis is practical because the setting is quite similar to a semi-supervised learning setting, where we can access massive unlabeled samples and a few labeled samples.
With all due respect, I wouldn't compare this to semi-supervised learning for one key reason: as the authors also say here, in semi-supervised learning you have few labeled examples, a key property of the task. So, I would totally understand this analysis if the proposed bound was evaluated in a semi-supervised setting. This is not the case here, ie more than few labeled examples per class are used for learning the classifiers in this case.
Similarly, wrt the answer on the usage of a mean classifier:
a few-shot learning setting uses a mean classifier, namely, Prototypical Networks [9], which has been cited more than 2700 times, according to Google Scholar.
Again, in the same way, the use of a mean classifier is indeed justified for few-shot learning, but it is well known that in the case of datasets with many labels, a logistic regression classifiers is superior.
Overall, I do see some merit in this paper, yet I think the breadth of the analysis is not enough; I will keep my score to 5.
The authors do discuss some limitations, but not potential societal impacts. Given the nature of the work, the latter is not easy to assess and in my opinion it is fine to skip for a theoretical paper on SSL. | 1) only the SimCLR case is covered and yet, there is no analysis on a seemingly important (see SimCLR-v2 and other recent papers that show that) part of that approach, ie the projection head. |
NIPS_2018_87 | NIPS_2018 | weakness/questions:
1. Description of the framework: It's not very clear what Bs is in the formulation. It's not introduced in the formulation, but later on the paper talks about how to form Bs along with Os and Zs for different supervision signals. And it;s very confusing what is Bs's role in the formulation.
2. computational cost: it would be great to see an analysis about the computation cost.
3. Experiment section: it seems that for the comparison with other methods, the tracklets are also generated using different process. So it's hard to draw conclusions based on the results. Is it possible to apply different algorithms to same set of tracklets? For example, for the comparison of temporal vs temp+BB, the conclusion is not clear as there are three ways of generating tracklets. It seems that the conclusion is -- when using same tracklet set, the temp + BB achieves similar performance as using temporal signal only. However, this is not explicitly stated in the paper.
4. The observation and conclusions are hidden in the experimental section. It would be great if the paper can highlight those observations and conclusions, which is very useful for understanding the trade-offs of annotation effort and corresponding training performance.
5. comparison with fully supervised methods: It would be great if the paper can show comparison with other fully supervised methods.
6. What is the metric used for the video level supervision experiment? It seems it's not using the tracklet based metrics here, but the paper didn't give details on that. | 4. The observation and conclusions are hidden in the experimental section. It would be great if the paper can highlight those observations and conclusions, which is very useful for understanding the trade-offs of annotation effort and corresponding training performance. |
ICLR_2021_243 | ICLR_2021 | Weakness:
1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further.
2. The model involves many hyperparameters. Thus, the selection of the hyperparameters in the paper needs further explanation.
3. A brief conclusion of the article and a summary of this paper's contributions need to be provided.
4. Approaches that leveraging noisy label noise label regularization and multi-label co-regularization were not reviewed or compared in this paper. | 1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further. |
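The rows above are a flattened preview of the underlying table, where each record pairs a full review excerpt with one pipe-delimited point drawn from it. As a minimal sketch of how such rows could be inspected programmatically — assuming the dataset is published on the Hugging Face Hub; the repository id and split name below are placeholders, not values taken from this page — the `datasets` library can load and print a record:

```python
from datasets import load_dataset

# Placeholder repository id and split name -- replace with the actual values
# for this dataset once it is published on the Hub.
ds = load_dataset("your-org/review-points", split="train")

# Inspect the schema and one full record to see how the review text and the
# extracted point are stored, without assuming specific column names.
print(ds.column_names)
print(ds[0])
```

Iterating over `ds` then yields one review/point record per row shown in the preview above.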