paper_id
stringlengths 10
19
| venue
stringclasses 15
values | focused_review
stringlengths 7
9.4k
| point
stringlengths 49
654
|
---|---|---|---|
ARR_2022_92_review | ARR_2022 | - The hyperlink for footnote 3 and 4 do not seem to work. - Line 172: an argument level -> on argument level | - The hyperlink for footnote 3 and 4 do not seem to work. |
ARR_2022_298_review | ARR_2022 | The main weak point of the paper is that at it is not super clear. There are many parts in which I believe the authors should spend some time in providing either more explanations or re-structure a bit the discussion (see the comments section).
- I suggest to revise a bit the discussion, especially in the modeling section, which in its current form is not clear enough. For example, in section 2 it would be nice to see a better formalization of the architecture. If I understood correctly, the Label Embeddings are external parameters; instead, the figure is a bit misleading, as it seems that the Label Embeddings are the output of the encoder.
- Also, when describing the contribution in the Introduction, using the word hypothesis/null hypothesis really made me think about statistical significance. For example, in lines 87-90 the authors introduce the hypothesis patterns (in contrast to the null hypothesis) referring to the way of representing the input labels, and they mention "no significant" difference, which instead is referring to the statistical significance. I would suggest to revise this part.
- It is not clear the role of the dropout, as there is not specific experiment or comment on the impact of such technique. Can you add some details?
- On which data is fine-tuned the model for the "Knowledge Distillation" - Please, add an intro paragraph to section 4.
- The baseline with CharSVM seems disadvantaged. In fact, a SVM model in a few-shot setting with up to 5-grams risks to have huge data-sparsity and overfitting problems. Can the authors explain why they selected this baseline? Is a better (fair) baseline available?
- In line 303 the authors mention "sentence transformers". Why are the authors mentioning this? Is it possible to add a citation?
- There are a couple of footnotes referring to wikipedia. It is fine, but I think the authors can find a better citation for the Frobenius norm and the Welch test.
- I suggest to make a change to the tables. Now the authors are reporting in bold the results whose difference is statistical significant. Would it be possible to highlight in bold the best result in each group (0, 8, 64, 512) and with another symbol (maybe underline) the statistical significant ones? - | - I suggest to revise a bit the discussion, especially in the modeling section, which in its current form is not clear enough. For example, in section 2 it would be nice to see a better formalization of the architecture. If I understood correctly, the Label Embeddings are external parameters; instead, the figure is a bit misleading, as it seems that the Label Embeddings are the output of the encoder. |
ACL_2017_501_review | ACL_2017 | The experiments are missing a key baseline: a state-of-the-art VQA model trained with only a yes/no label vocabulary. I would have liked more details on the human performance experiments. How many of the ~20% of incorrectly-predicted images are because the captions are genuinely ambiguous? Could the data be further cleaned up to yield an even higher human accuracy?
- General Discussion: My concern with this paper is that the data set may prove to be easy or gameable in some way. The authors can address this concern by running a suite of strong baselines on their data set and demonstrating their accuracies. I'm not convinced by the current set of experiments because the chosen neural network architectures appear quite different from the state-of-the-art architectures in similar tasks, which typically rely on attention mechanisms over the image.
Another nice addition to this paper would be an analysis of the data set. How many tokens does the correct caption share with distractors on average? What kind of understanding is necessary to distinguish between the correct and incorrect captions? I think this kind of analysis really helps the reader understand why this task is worthwhile relative to the many other similar tasks. The data generation technique is quite simple and wouldn't really qualify as a significant contribution, unless it worked surprisingly well.
- Notes I couldn't find a description of the FFNN architecture in either the paper or the supplementary material. It looks like some kind of convolutional network over the tokens, but the details are very unclear. I'm also confused about how the Veq2Seq+FFNN model is applied to both classification and caption generation. Is the loglikelihood of the caption combined with the FFNN prediction during classification? Is the FFNN score incorporated during caption generation?
The fact that the caption generation model performs (statistically significantly) *worse* than random chance needs some explanation. How is this possible?
528 - this description of the neural network is hard to understand. The final paragraph of the section makes it clear, however. Consider starting the section with it. | 528 - this description of the neural network is hard to understand. The final paragraph of the section makes it clear, however. Consider starting the section with it. |
NIPS_2020_314 | NIPS_2020 | 1) It seems like the model really works well when there is a mixture of datasets i.e. single, dual and multi speaker. Would be interesting to see dependency on this? 2) It seems like the model is limited to CTC loss, would it be possible to train them towards attention based enc-dec training? | 2) It seems like the model is limited to CTC loss, would it be possible to train them towards attention based enc-dec training? |
ARR_2022_138_review | ARR_2022 | 1. The problem formulation is flawed. The authors argue that one major motivation of this paper is to learn schema in a data-driven way other than laborious manual schema engineering. However, on the proposed four datasets, I feel that the schemas are somehow easy to learn. For example, on E2E, WikiTableText and WikiBio, only two columns are involved. According to my understanding, there is no discrepancy between training and testing w.r.t. the table schema. Things on the fourth dataset, Rotowire, feel more complicated. There are two tables, containing around 5 and 9 columns respectively. It's not clear whether a difference exists between training and testing. And the evaluation metric only considers the cell values while ignoring the schema. That's another reason I feel the schema is fairly easy to learn in the proposed setting. Furthermore, under the current setting, this problem sounds more like another type of machine reading that the model is asked to fill in the table as the schema is easy to predict. So I highly suggest considering schema generalization where the schemas differ between training and testing and additionally evaluating schema prediction.
2. It's not surprising that the major performance contribution comes from the pretrained language model (BART). But compared to that, the gain obtained from the proposed method is marginal (Table 5).
1. In Section 3 (line 247-252), I am wondering tables are divided into three types. For me, one type (the column header) should work.
2. Several typos exist in the paper.
3. Like I suggest in the weaknesses, more exploration of schema learning and generalization will make this paper much stronger and more interesting. Another workaround is to rephrase the motivation to make this paper focus more on the non-header part. | 1. In Section 3 (line 247-252), I am wondering tables are divided into three types. For me, one type (the column header) should work. |
IAFLoDz6H5 | ICLR_2025 | - The experiment setting is problematic.
- - Only toy tasks and language models are used. Instead of evaluating LLMs in a few-shot/zero-shot way, the paper fine-tunes the Pythia family of models on some classification tasks. Pythia models are only pre-trained on general corpus without any RLHF and their performances are well-known to be far lower than LLMs like LlaMa and Gemma. I doubt how largely the results in the paper can be generalized to other LLMs.
- - The attack methods are naive. Two attack methods are considered. One is to randomly add some tokens as suffixes of the inputs, while the other one generates a universal adversarial suffix. Since the paper is only considering the toy setting with classification tasks, I don't see any reason why other classical attack methods in NLP are not used. For example, check the following papers:
- - - [1] Jin, Di et al. “Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment.” AAAI Conference on Artificial Intelligence (2019).
- - - [2] Hou, Bairu et al. “TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization.” ArXiv abs/2212.09254 (2022): n. pag.
- - - [3] Li, Linyang et al. “BERT-ATTACK: Adversarial Attack against BERT Using BERT.” ArXiv abs/2004.09984 (2020): n. pag.
Since this is largely an empirical paper, I would like to encourage the authors to refine the setting and conduct more comprehensive experiments to fulfill the goal. | - - The attack methods are naive. Two attack methods are considered. One is to randomly add some tokens as suffixes of the inputs, while the other one generates a universal adversarial suffix. Since the paper is only considering the toy setting with classification tasks, I don't see any reason why other classical attack methods in NLP are not used. For example, check the following papers: |
Qg0gtNkXIb | ICLR_2025 | 1. Critical Methodological Limitations:
- The paper relies heavily on pre-trained language models (specifically BERT) for prompt sampling but fails to acknowledge this as a fundamental limitation. This is particularly problematic as such models have a training cutoff date (pre-2018 for BERT), making it impossible to find triggers containing newer terms or concepts.
- The evaluation framework potentially misses a significant portion of memorization cases due to this temporal limitation.
- The dependency on pre-trained language models restricts the generalizability of the method.
2. Unclear and Redundant Writing:
- The paper's findings section contains three nearly identical statements that could be consolidated:
Quote: "MemBench reveals several key findings:
1. All image memorization mitigation methods result in a reduction of Text-Image alignment between generated images and prompts...
2. The mitigation methods affect the image generation capabilities of diffusion models, which can lead to lower image quality...
3. The mitigation methods can cause performance degradation in the general prompt scenario..."
These are essentially making the same point about performance degradation.
- The paper makes claims about metric superiority without proper justification:
Quote: "while Ren et al. (2024) measured FID, the Aesthetic Score offers a more straightforward way to evaluate individual image quality and better highlight these issues."
No evidence or explanation is provided for why Aesthetic Score is "more straightforward" or "better."
- Key metrics are used without proper introduction:
The paper extensively uses SSCD scores without ever explaining what they measure or their significance.
3. Poor Communication of Technical Contributions:
- The paper's main technical innovation (efficient prompt sampling) is not clearly highlighted.
- The findings section contains redundant statements about mitigation methods that could be consolidated.
- Claims about metric choices (e.g., aesthetic score vs FID) lack proper justification. | 2. The mitigation methods affect the image generation capabilities of diffusion models, which can lead to lower image quality... |
ExZ5gonvhs | ICLR_2024 | - The employment of prior knowledge, specifically in the form of a pretrained visual model and the target dataset, diverges from the fundamental principles of Self-Supervised Learning (SSL).
- The incorporation of such prior knowledge raises concerns about the fairness of comparisons with existing SSL methods. There is a potential risk that the pretrained visual model and target dataset might leak additional information into the model, thereby skewing results and leading to issues of unfairness.
- The difference between GSP-SSL and NNCLR lies primarily in their respective positive sampling strategies. However, the novelty of the proposed strategy is limited. | - The incorporation of such prior knowledge raises concerns about the fairness of comparisons with existing SSL methods. There is a potential risk that the pretrained visual model and target dataset might leak additional information into the model, thereby skewing results and leading to issues of unfairness. |
ACL_2017_49_review | ACL_2017 | There are some minor points, listed as follows: 1) Figure 1: I am a bit surprised that the function words dominate the content ones in a Japanese sentence. Sorry I may not understand Japanese. 2) In all equations, sequences/vectors (like matrices) should be represented as bold texts to distinguish from scalars, e.g., hi, xi, c, s, ... 3) Equation 12: s_j-1 instead of s_j.
4) Line 244: all encoder states should be referred to bidirectional RNN states.
5) Line 285: a bit confused about the phrase "non-sequential information such as chunks". Is chunk still sequential information???
6) Equation 21: a bit confused, e.g, perhaps insert k into s1(w) like s1(w)(k) to indicate the word in a chunk. 7) Some questions for the experiments: Table 1: source language statistics? For the baselines, why not running a baseline (without using any chunk information) instead of using (Li et al., 2016) baseline (|V_src| is different)? It would be easy to see the effect of chunk-based models. Did (Li et al., 2016) and other baselines use the same pre-processing and post-processing steps? Other baselines are not very comparable. After authors's response, I still think that (Li et al., 2016) baseline can be a reference but the baseline from the existing model should be shown. Figure 5: baseline result will be useful for comparison? chunks in the translated examples are generated *automatically* by the model or manually by the authors? Is it possible to compare the no. of chunks generated by the model and by the bunsetsu-chunking toolkit? In that case, the chunk information for Dev and Test in Table 1 will be required. BTW, the authors's response did not address my point here. 8) I am bit surprised about the beam size 20 used in the decoding process. I suppose large beam size is likely to make the model prefer shorter generated sentences. 9) Past tenses should be used in the experiments, e.g., Line 558: We *use* (used) ... Line 579-584: we *perform* (performed) ... *use* (used) ... ... - General Discussion: Overall, this is a solid work - the first one tackling the chunk-based NMT; and it well deserves a slot at ACL. | 1) Figure 1: I am a bit surprised that the function words dominate the content ones in a Japanese sentence. Sorry I may not understand Japanese. |
ICLR_2022_74 | ICLR_2022 | Weakness
• The theorem applies when the label error is small, less than 1/7. However, it might be non-trivial to obtain a predictor with that quality in the first place. For example, in the experiments, the initial solution are derived from k
-means (Lloyd's) algorithm, which might require many initial seeds to attain a good solution. Are there any guarantees can be made when the initial label error is larger?
• When k
gets larger, the k
-means algorithm (even with k
-means++) solution can be stuck at local minima, with arbitrarily worse objective [1]. How would algo+ k
-means++/predictor behave compare with k
-means++ (with multiple seeds)? Can the algorithm help escape the local minima and attain a much better solution? There is a collection of synthetic datasets [1] for k-means to understand the performance of the algorithm. I suggest the authors take these benchmark datasets into consideration for the experiments for evaluation. In the current experiment, only k = 10 and k = 25
are tested, and it is hard to see the comparison of algorithms when k
gets larger, which is a more challenging case for the k
-means problem.
• Minor suggestion: the average of k
-means objectives with multiple seeds are used as a baseline, I think the minimal k
-means objective over multiple seeds is more reasonable.
[1] Jin, Chi, et al. "Local maxima in the likelihood of gaussian mixture models: Structural results and algorithmic consequences." Advances in neural information processing systems 29 (2016): 4116-4124. [2] Fränti, Pasi, and Sami Sieranoja. "K-means properties on six clustering benchmark datasets." Applied Intelligence 48.12 (2018): 4743-4759. | • Minor suggestion: the average of k -means objectives with multiple seeds are used as a baseline, I think the minimal k -means objective over multiple seeds is more reasonable. [1] Jin, Chi, et al. "Local maxima in the likelihood of gaussian mixture models: Structural results and algorithmic consequences." Advances in neural information processing systems 29 (2016): 4116-4124. [2] Fränti, Pasi, and Sami Sieranoja. "K-means properties on six clustering benchmark datasets." Applied Intelligence 48.12 (2018): 4743-4759. |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. Its notable that it is useful in training very deep networks (e.g. 100 layers) but its not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle on a larger dataset this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for their claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos, there is no line numbers in the draft so I haven't itemized them. - Table 1, 2, 3 the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably). - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3 is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - I might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). | - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. |
pFTBsdZ1UM | EMNLP_2023 | (W1) (Task definition and novelty are not clear.) The notion of indicative summarization is not clear. According to Footnote 2, it is not clear what is different from extreme summarization (e.g., single-document such as XSum and multi-document such as TLDR). The TIFU Reddit dataset [Ref 1] is not cited or mentioned in this paper.
(W2) (Evaluation results do not verify the end-to-end task performance) The overall performance evaluation is missing or has some issues. Now the task consists of multiple components: (1) sentence clustering, (2) label assignment (which can be considered summarization), (3) frame assignment.
In 5.4, the author(s) compared with search-based solutions but summarization methods should also be compared as baselines. It is obvious that any summarization methods are better than search-based methods for exploration. And the results in 5.4 do not really support that the proposed solution is a better indicative summarization method than other summarization methods.
(W3) (Design choice is not well-justified; Technical novelty and significance are not clear) Similar to (W1) and (W2), it is not clear sentence clustering before frame assignments. Can frame assignment be done directly to each sentence before clustering? The lack of design choice discussion and ablation study makes it difficult to judge the justification of the design choice for the problem. The paper used existing LLMs and existing media frames to tackle the problem. Technical novelty and significance are not clear from the paper in its current form.
(W4) (Comparisons of a variety of zero-shot LLMs with no clear implications) Although the paper conducts comprehensive experiments by using a variety of LLMs, the learnings are not clear. Specifically, Table 4 does not tell which LLMs we should use for the purpose (in this case, frame assignment). The paper does not offer training or validation datasets. The reported results can be this task & dataset-only and may not be generalized. Again, it’s difficult to judge only from the results.
### Minor comments
- LL269-272, the author(s) claim that it is novel to apply abstractive summarization models to clutter labeling. However, the problem itself is actually multi-sentence summarization, which the author(s) referred to as cluster labeling and I do not agree with this claim. (Also, extractive-abstractive can be considered clustering labeling models then.) I leave this comment as a minor comment as it is not directly related to the main claim(s).
- To me, the task looks closer to Argument Mining rather than Summarization. In any case, the paper should further clarify the differences against Argument Mining/Discussion Summarization. | - To me, the task looks closer to Argument Mining rather than Summarization. In any case, the paper should further clarify the differences against Argument Mining/Discussion Summarization. |
NIPS_2018_185 | NIPS_2018 | Weakness: ##The clarity of this paper is medium. Some important parts are vague or missing. 1) Temperature calibration: 1.a) It was not clear what is the procedure for temperature calibration. The paper only describes an equation, without mentioning how to apply it. Could the authors list the steps they took? 1.b) I had to read Guo 2017 to understand that T is optimized with respect to NLL on the validation set, and yet I am not sure the authors do the same. Is the temperature calibration is applied on the train set? The validation set (like Guo 2017)? The test set? 1.c) Guo clearly states that temperature calibration does not affect the prediction accuracy. This contradicts the results on Table 2 & 3, where DCN-T is worse than DCN. 1.d) About Eq (5) and Eq (7): Does it mean that we make temperature calibration twice? Once for source class, and another for target classes? 1.e) It is written that temperature calibration is performed after training. Does it mean that we first do a hyper-param grid search for those of the loss function, and afterward we search only for the temperature? If yes, does it means that this method can be applied to other already trained models, without need to retrain? 2) Uncertainty Calibration From one point of view it looks like temperature calibration is independent of uncertainty calibration, with the regularization term H. However in lines 155-160 it appears that they are both are required to do uncertainty calibration. (2.a) This is confusing because the training regularization term (H) requires temperature calibration, yet temperature calibration is applied after training. Could the authors clarify this point? (2.b) Regarding H: Reducing the entropy, makes the predictions more confident. This is against the paper motivation to calibrate the networks since they are already over confident (lines 133-136). 3) Do the authors do uncertainty calibration on the (not-generalized) ZSL experiments (Table 2&3)? If yes, could they share the ablation results for DCN:(T+E), DCN:T, DCN:E ? 4) Do the authors do temperature calibration on the generalized ZSL experiments (Table 4)? If yes, could they share the ablation results for DCN:(T+E), DCN:T, DCN:E ? 5) The network structure: 5.a) Do the authors take the CNN image features as is, or do they incorporate an additional embedding layer? 5.b) What is the MLP architecture for embedding the semantic information? (number of layers / dimension / etc..) ##The paper ignores recent baselines from CVPR 2018 and CVPR 2017 (CVPR 2018 accepted papers were announced on March, and were available online). These baseline methods performance superceed the accuracy introduced in this paper. Some can be considered complementary to this work, but the paper canât simply ignore them. For example: Zhang, 2018: Zero-Shot Kernel Learning Xian, 2018: Feature Generating Networks for Zero-Shot Learning Arora, 2018: Generalized zero-shot learning via synthesized examples CVPR 2017: Zhang, 2017: Learning a Deep Embedding Model for Zero-Shot Learning ## Title/abstract/intro is overselling The authors state that they introduce a new deep calibration network architecture. However, their contributions are a novel regularization term, and a temperature calibration scheme that is applied after training. I wouldnât consider a softmax layer as a novel network architecture. 
Alternatively, I would suggest emphasizing a different perspective: The approach in the paper can be considered as more general, and can be potentially applied to any ZSL framework that outputs a probability distribution. For example: Atzmon 2018: Probabilistic AND-OR Attribute Grouping for Zero-Shot Learning Ba 2015: Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions Other comments: It will make the paper stronger if there was an analysis that provides support for the uncertainty calibration claims in the generalized ZSL case, which is the focus of this paper. Introduction could be improved: The intro only motivates why (G)ZSL is important, which is great for new audience, but there is no interesting information for ZSL community. It can be useful to describe the main ideas in the intro. Also, confidence vs uncertainty, were only defined on section 3, while it was used in the abstract / intro. This was confusing. Related work: It is worth to mention Transductive ZSL approaches, which use unlabeled test data during training, and then discriminate this work from the transductive setting. For example: Tsai, 2017: Learning robust visual-semantic embeddings. Fu 2015: Transductive Multi-view Zero-Shot Learning I couldnât understand the meaning on lines 159, 160. Lines 174-179. Point is not clear. Sounds redundant. Fig 1 is not clear. I understand the motivation, but I couldnât understand Fig 1. | 2) Uncertainty Calibration From one point of view it looks like temperature calibration is independent of uncertainty calibration, with the regularization term H. However in lines 155-160 it appears that they are both are required to do uncertainty calibration. (2.a) This is confusing because the training regularization term (H) requires temperature calibration, yet temperature calibration is applied after training. Could the authors clarify this point? (2.b) Regarding H: Reducing the entropy, makes the predictions more confident. This is against the paper motivation to calibrate the networks since they are already over confident (lines 133-136). |
NIPS_2017_226 | NIPS_2017 | - An important reference is missing
- Other less important references are missing
- Bare-bones evaluation
The paper provides an approach to solve linear inverse problems by reducing training requirements. While there is some prior work in this area (notably the reference below and reference [4] of the paper), the paper has some very interesting improvements over them. In particular, the paper combines the best parts of [4] (decoupling of signal prior from the specific inverse problem being solved) and Lista (fast implementation). That said, evaluation is rather skim â almost anecdotal at times â and this needs fixing. There are other concerns as well on the specific choices made for the matrix inversion that needs clarification and justifications.
1) One of the main parts of the paper is a network learnt to invert (I + A^T A). The paper used the Woodbury identity to change it to a different form and learns the inverse of (I+AA^T) since this is a smaller matrix to invert. At test time, we need to apply not just this network but also "A" and "A^T" operators.
A competitor to this is to learn a deep network that inverts (I+A^T A). A key advantage of this is that we do not need apply A and A^T during test time. It is true that that this network learns to inverts a larger matrix ... But at test time we have increased rewards. Could we see a comparison against this ? both interms of accuracy as well as runtime (at training and testing) ?
2) Important reference missing. The paper is closely related to the idea of unrolling, first proposed in, âListaâ http://yann.lecun.com/exdb/publis/pdf/gregor-icml-10.pdf
While there are important similarities and differences between the proposed work and Lista, it is important that the paper talks about them and places itself in appropriate context.
3) Evaluation is rather limited to a visual comparison to very basic competitors (bicubic and wiener filter). It would be good to have comparisons to
- Specialized DNN: this would provide the loss in performance due to avoiding the specialized network.
- Speed-ups over [4] given the similarity (not having this is ok given [4] is an arXiv paper)
- Quantitative numbers that capture actual improvements over vanilla priors like TV and wavelets and gap to specialized DNNs. Typos
- Figure 2: Syntehtic | 2) Important reference missing. The paper is closely related to the idea of unrolling, first proposed in, âListaâ http://yann.lecun.com/exdb/publis/pdf/gregor-icml-10.pdf While there are important similarities and differences between the proposed work and Lista, it is important that the paper talks about them and places itself in appropriate context. |
NIPS_2020_25 | NIPS_2020 | 1. The proposed method is inapplicable to data from absolutely continuous probability distribution. The number of possible values of a data point in this case will be infinite. However, the paper relies on the vectorization of the probability distribution. For truly real world continuous data, huge matrices will have to be created and computed. 2. The choice of the uncertainty set is somewhat arbitrary. How to guarantee that the true distribution is covered? 3. There has to be a trade off between the data fitting and the generalization error. This seems to be related to how the uncertainty set is defined. In the extreme case that the uncertainty set is chosen to be infinitely large, then the model will underfit the data. Insight about this trade off is lacking. 4. The linear program in Theorem 3 need to be explained intuitively. I understand that this is a main theorem but it would help the reader a lot if the authors can explain what are the objective and the constraints in (3). 5. How is the feature mapping chosen? How sensitive is the model to the feature mapping? 6. I am not very much impressed by the numerical study. Cross-validation or other careful tuning methods should be used for the other SOTA methods to compare with the current method. | 4. The linear program in Theorem 3 need to be explained intuitively. I understand that this is a main theorem but it would help the reader a lot if the authors can explain what are the objective and the constraints in (3). |
yIEKq72cTE | ICLR_2024 | 1. The writing is confusing.
1. Definition 3.1 is not a definition. I can't see any definition from it. It seems to be a proposition or a theorem about the vulnerability of FedAvg for me.
2. Definition 4.1 is not clear.
1. Eq (4) is confusing. According to the definition of $\mathbb{N}$, $\mathbb{R}$ and $\mathbb{X}$, $\mathbb{N}=\mathbb{R}\cup\mathbb{X}$. Then why there are additional $\nabla W_1$ and $\nabla W_n$ except for elements in $\mathbb{R}\cup\mathbb{X}$?
2. How is Definition related to aggregation rule $\mathcal{A}$? $\mathcal{A}$ only appears in Eq.(4). But vector defined in Eq.(4)is not used.
3. The operation \ in lines 241 and 243, is not defined.
4. FLOT cost matrix in Algorithm 1 is not defined.
5. Why the input of FLOT in Eq. (10) is different from previous ones?
2. The proposed method requires a validation dataset on the server, which violates privacy principle in FL.
3. It seems that the convergence result in Eq. (15) cannot guarantee that the global model converge to a good model rather than a bad model? | 4. FLOT cost matrix in Algorithm 1 is not defined. |
NIPS_2021_2307 | NIPS_2021 | 1. It is unclear what the exact setting the paper considers in the continuous domain and how prior work would fail in that setting (please see the Questions). 2. Even when the paper proposes new algorithms (EI2 and UCB2) for the theoretical analysis, but it still benefits if we can see some experimental results about how BO performs with these two new algorithms. I would suggest including at least some BO experiments with the proposed algorithms EI2 and UCB2.
Questions: 1. As mentioned, I feel unclear on the setting of the paper in the continuous domain. For discrete domain, I can understand the setting of the problem is when T (number of iterations) << N (the cardinality of the discrete domain). However, for the continuous domain, I can not find/understand the setting. I guess it has something to do with T and L_k, the Lipschitz constant of the kernel. But what exactly the setting in the continuous domain is? 2. Lines 93-97: Why the regret bound of [Srinivas et al, 2010] is vacuous when in your setting? In other words, why γ T
grows linearly with T
in your setting? I cannot follow the arguments explaining why the simple regret bound in [Srinivas et al, 2010] does not decrease at all. I think this argument needs to be proved rigorously so that you can demonstrate your proposed algorithms and analysis are necessary. I think some dedicated parts of the main paper are even needed to explain why your setting hinders [Srinivas et al, 2010]. 3. Does the bound in Theorem 2, Eq. (30) converge to 0 when T goes to infinity? As the bound in [Grunewalder et al, 2010], Eq. (27) does converge to 0. The first term in Eq. (30) does converge to 0, but it is not trivial to derive that the 2nd term in Eq. (30) also converges to 0. Can the authors prove this?
Note: I'm willing to increase my score if the authors can address my questions properly. | 3. Does the bound in Theorem 2, Eq. (30) converge to 0 when T goes to infinity? As the bound in [Grunewalder et al, 2010], Eq. (27) does converge to 0. The first term in Eq. (30) does converge to 0, but it is not trivial to derive that the 2nd term in Eq. (30) also converges to 0. Can the authors prove this? Note: I'm willing to increase my score if the authors can address my questions properly. |
ICLR_2021_961 | ICLR_2021 | Weakness:
The number of graphs satisfying the property is very limited. It requires an r-regular graph. That is, the number of edges connected to one node is the same for all nodes. This condition is very difficult to satisfy in applications. Therefore, the application would be limited too.
The quantization part is limited comparing to the other two parts. What does the effect of quantization on the convergence rate and the communication cost? What is the benefit of using the quantization method in Davies et al. (2020)?
In the proposed algorithm, each time an edge is activated and the two nodes connected through the edge are updated. Therefore, there is still synchronization in Alg. 1. Whether is it possible to update one node based on the results from multiple connected nodes (i.e., one node is activated)?
Algorithm 2 is unclear. 'avg' is computed but not used. What are j' and 'i''? Update
The authors' response addresses some concerns, and I would like to keep the initial scores. | 1. Whether is it possible to update one node based on the results from multiple connected nodes (i.e., one node is activated)? Algorithm 2 is unclear. 'avg' is computed but not used. What are j' and 'i''? Update The authors' response addresses some concerns, and I would like to keep the initial scores. |
lHtNW6xqCd | ICLR_2024 | 1. The specific definition of the sparsity of the residual term in this paper is unclear. Does it mean that the residual term includes many zero elements? Besides, could the authors provide some evidence to support the sparsity assumption across various noisy cases? I think it's necessary to show the advantages of the assumptions of the proposed method compared with existing methods.
2. Some statements in this paper need further discussion or clarification:
- Handling diverse label noise doesn't make sense in my opinion, since the real-world label noise is usually instance-dependent.
- What is a "valid" transition matrix and residual term in Section 2.2? Could the authors provide some theoretical results that show in certain cases, a clean class-posterior probability can be obtained, regardless of instance-dependent noise or the noisy class-posterior has a large estimation error?
- Why log det(T) can be ignored in the generalization analysis?
3. (Minor) I suggest the authors discuss more recent SOTA works, e.g. [3,4].
4. (Minor) The novelty of techniques seems a little limited. It seems that the proposed method mainly combined the techniques from [1] and [2].
5. (Minor) The presentation should be improved largely. For example, this paper only uses a single number to refer to one equation.
[1] Provably end-to-end label-noise learning without anchor points. ICML 2021
[2] Robust training under label noise by over-parameterization. ICML 2022
[3] Selective-supervised contrastive learning with noisy labels. CVPR 2022
[4] Instance-dependent noisy label learning via graphical modelling. WACV 2023 | 1. The specific definition of the sparsity of the residual term in this paper is unclear. Does it mean that the residual term includes many zero elements? Besides, could the authors provide some evidence to support the sparsity assumption across various noisy cases? I think it's necessary to show the advantages of the assumptions of the proposed method compared with existing methods. |
ICLR_2023_1914 | ICLR_2023 | (1) The framing of synergies and their neuroscientific context is somewhat lacking. The premise of the paper is that muscle synergies can be predicted from the cortical inputs, e.g. We applied our method to the corticomuscular system, which is made up of corticospinal pathways between the primary motor cortex and muscles in the body and creates muscle synergies that enable efficient connections between the brain and muscles.”, however synergies are thought to be generated in the spinal cord and some of their first characterization was in frogs, a species without a motor cortex. One reason this is problematic is that the paper is framed as revealing new synergies, e.g. “However, the conventional approach uses only muscle activities (observed phenomena) to capture the muscle synergies, and there may still be unexplored muscle synergies that remain hidden” However, based on the model design it seems like it should be detecting a subset of the muscle-only synergies. Moreover, synergies is largely defined in a muscle-centric way. It is certainly the case that discovering cortico-muscular shared synergies is interesting, but the framing is very different. (2) There are no details provided on the TCN training, and importantly how data was split up for train and test splits. This is especially important for the SCI experiments. (3) There are multiple alternative models that could be considered, for instance performing NNMF on a linear or nonlinear prediction of EMG from ecog activity. The specific motivation for the increased number of parameters and model structure of the TCN is not provided. One of the appeals of NNMF is its simplicity, allowing it to be used across paradigms and contexts, and this TCN introduces a lot of added complexity, with only minimal gains in VAF at high numbers of syneries. (4) For the SCI experiments – there is no ground truth present and so it is impossible to evaluate which technique is ‘correct’. As noted above, without knowledge of how much data is required for model training, it is hard to know if the increase in number of synergies observed is a result of
Nits: - ‘connectivity’ is misleading, as it isn’t using the structural connections between the brain and body. - Figure 6: would help to have an estimate of variance for the number of synergies, e.g. from using different subsets of data to train/test. | - ‘connectivity’ is misleading, as it isn’t using the structural connections between the brain and body. |
ICLR_2023_1584 | ICLR_2023 | Weakness:
1. The proposed method relies on a pretrained object detection network that contains the sufficient semantic information for the in-distribution data. When the semantic of in-distribution data like medical images is not covered by the object detection network (pre-trained on natural image), the built semantic graph can be incomplete or even erroneous. If we use additional annotations to train a sufficiently strong object detection network, the effort will be extremely expensive comparing the existing methods.
2. The paper is not polished and not ready to publish, with missing details in related work / experiment / writing. See more in "Clarity, Quality, Novelty And Reproducibility". | 2. The paper is not polished and not ready to publish, with missing details in related work / experiment / writing. See more in "Clarity, Quality, Novelty And Reproducibility". |
NIPS_2022_2617 | NIPS_2022 | The essentialness of using orthogonal matrix is not studied. The whole OSA process 1. connects tokens within local windows with local window token orthogonalization, is serves as MLP layer within local windows, except the weight matrix of MLP is naturally orthogonal. 2. Connects tokens beyond local windows by forming new groups across previous local window. 3. Token reverse as the inverse of orthogonal matrix is easy to get, just the transpose of the matrix. Step 2 can be done regardless of the weight matrix of this local window MLP is orthogonal or not. Step 3 is the vital part that only orthogonal matrix weight can perform, I believe this should be studied, which is not presented, for validating the essentialness of using orthogonal matrix rather than just following the form that connects local and connects beyond local windows. | 3. Token reverse as the inverse of orthogonal matrix is easy to get, just the transpose of the matrix. Step 2 can be done regardless of the weight matrix of this local window MLP is orthogonal or not. Step 3 is the vital part that only orthogonal matrix weight can perform, I believe this should be studied, which is not presented, for validating the essentialness of using orthogonal matrix rather than just following the form that connects local and connects beyond local windows. |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to representation multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to such as âStrong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019â - Since the model has connections to convolutional arithmetic units then ConvACs can also be a baseline for comparison. Given that you mention that âresulting in a correspondence of our HPFN to an even deeper ConACâ, it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learning âflexible and higher-order local and global intercorrelationsâ? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well but there seem to be much more parameters in the model especially as the model consists of more layers. Could you comment on these tradeoffs including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative Figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods is due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful. 
- More experimental details such as neural networks and hyperparameters used should be included in the appendix. - Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compactâ -> âDespite being compactâ 2. line 56: âWe refer multiway arraysâ -> âWe refer to multiway arraysâ 3. line 158: âHPFN to a even deeper ConACâ -> âHPFN to an even deeper ConACâ 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parenthesis. 6. and so on⦠****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularizationâ which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models and experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would be this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones. | - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? |
tGEBnSQ0uE | ICLR_2024 | 1. The experimental comparison of methods seems not complete. What about methods in Table 1 and non-DP counterparts (including yours, i.e. sigma=0, which is usually a very insightful upper bound of any DP method)? Currently only 3 methods are compared, which seems strange given that the paragraph "Decentralized learning methods with privacy guarantee" discussed so many.
2. C=0 in Figure 2? In Figure 2(bcef), it empirically looks like smaller C is better. This leads to an interesting but missing comparison to infinitely small C, i.e. gradient normalization (clipping every gradient to the same value). See "Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger" for the centralized setting and "DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving Federated Learning" for the decentralized one. Would it outperform AdaD2P?
3. The models and datasets are too toy-like. I would at least expect to see CIFAR100, which is of the same size as CIFAR10 but more difficult. It would be desirable to see also ResNet 34 or 50 (more compute, but managable by your machines), and ViT-tiny or small (similar compute as ResNet 18). Is there a foreseeable challenge to experiment on language tasks?
I would raise my score if my questions are properly addressed. | 3. The models and datasets are too toy-like. I would at least expect to see CIFAR100, which is of the same size as CIFAR10 but more difficult. It would be desirable to see also ResNet 34 or 50 (more compute, but managable by your machines), and ViT-tiny or small (similar compute as ResNet 18). Is there a foreseeable challenge to experiment on language tasks? I would raise my score if my questions are properly addressed. |
NIPS_2019_387 | NIPS_2019 | - The main weakness is empirical---scratchGAN appreciably underperforms an MLE model in terms of LM score and reverse LM score. Further, samples from Table 7 are ungrammatical and incoherent, especially when compared to the (relatively) coherent MLE samples. - I find this statement in the supplemental section D.4 questionable: "Interestingly, we found that smaller architectures are necessary for LM compared to the GAN model, in order to avoid overfitting". This is not at all the case in my experience (e.g. Zaremba et al. 2014 train 1500-dimensional LSTMs on PTB!), which suggests that the baseline models are not properly regularized. D.4 mentions that dropout is applied to the embeddings. Are they also applied to the hidden states? - There is no comparison against existing text GANs , many of which have open source implentations. While SeqGAN is mentioned, they do not test it with the pretrained version. - Some natural ablation studies are missing: e.g. how does scratchGAN do if you *do* pretrain? This seems like a crucial baseline to have, especially the central argument against pretraining is that MLE-pretraining ultimately results in models that are not too far from the original model. Minor comments and questions : - Note that since ScratchGAN still uses pretrained embeddings, it is not truly trained from "scratch". (Though Figure 3 makes it clear that pretrained embeddings have little impact). - I think the authors risk overclaiming when they write "Existing language GANs... have shown little to no performance improvements over traditional language models", when it is clear that ScratchGAN underperforms a language model across various metrics (e.g. reverse LM). | - Some natural ablation studies are missing: e.g. how does scratchGAN do if you *do* pretrain? This seems like a crucial baseline to have, especially the central argument against pretraining is that MLE-pretraining ultimately results in models that are not too far from the original model. Minor comments and questions : |
NIPS_2018_464 | NIPS_2018 | of the approach is the definition of the behavior characterization, which is domain-dependent, and may be difficult to set in some environments; however, the authors make this point clear in the paper and I understand that finding methods to define good behavior characterization functions is out of the scope of the submission. The article is properly motivated, review of related work is thorough and extensive experiments are conducted. The methods are novel and their limitations are appropriately stated and justified. The manuscript is carefully written and easy to follow. Source code is provided in order to replicate the results, which is very valuable to the community. Overall, I believe this is a high quality submission and should be accepted to NIPS. Please read more detailed comments below: - The idea of training M policies in parallel is somewhat related to [Liu et al., Stein Variational Policy Gradient], a method that optimizes for a distribution of M diverse and high performing policies. Please add this reference to the related work section. - The update of w in NSRA-ES somehow resembles the adaptation of parameter noise in [Plappert et al., “Parameter Space Noise for Exploration”, ICLR 2018]. The main difference is that the adaptation in Plappert et al. is multiplicative, thus yielding more aggressive changes. Although this is not directly compatible with the proposed method, where w is initialized to 0, I wonder whether the authors tried different policies for the adaptation of w. Given its similarity to ES (where parameter noise is used for structured exploration instead of policy optimization), I believe this is a relevant reference that should be included in the paper as well. - It seems that the authors report the best policy in plots and tables (i.e. if f(theta_t) > f(theta_{t+k}), the final policy weights are theta_t). This is different from the setup by Salimans et al. (cf. Figure 3 in their paper). I understand that this is required for methods that rely on novelty only (i.e. NS-ES), but not for all of them. Please make this difference clear in the text. - Related to the previous point, I believe that section 3.1 lacks a sentence describing how the final policy is selected (I understand that the best performing one, in terms of episodic reward, is kept). - In the equation between lines 282 and 283, the authors should state how they handle comparisons between episodes with different lengths. I checked the provided code and it seems that the authors pad the shorter sequence by replicating its last state in order to compare both trajectories. Also, the lack of a normalization factor of 1/T makes this distance increase with T and favors longer trajectories (which can be a good proxy for many Atari games, but not necessarily for other domains). These decisions should be understood by readers without needing to check the code. - There is no caption for Figure 3 (right). Although it is mentioned in the text, all figures should have a caption. - The blue and purple colors in the plots are very similar. Given the small size of some of these plots, it is hard to distinguish them -- especially in the legend, where the line is quite thin. Please use different colors (e.g. use some orangish color for one of those lines). - Something similar happens with the ES plot in Figure 2. The trajectory is quite hard to distinguish on a computer screen. - In SI 6.5, the authors should mention that although the preprocessing is identical to that in Mnih et al. [7], the evaluation is slightly different as no human starts are used. - In SI 6.6, the network description in the second paragraph is highly overlapping with that in the first paragraph. ---------------------------------------------------------------------------------------------------------------------------------------------------------- Most of my comments had to do with minor modifications that the authors will address. As stated in my initial review, I vote for accepting this submission. | - In the equation between lines 282 and 283, the authors should state how they handle comparisons between episodes with different lengths. I checked the provided code and it seems that the authors pad the shorter sequence by replicating its last state in order to compare both trajectories. Also, the lack of a normalization factor of 1/T makes this distance increase with T and favors longer trajectories (which can be a good proxy for many Atari games, but not necessarily for other domains). These decisions should be understood by readers without needing to check the code.
ICLR_2022_1905 | ICLR_2022 | Weakness: 1. The authors claim that ‘The observation consistently shows that only parts of subdivision splines are useful for decision boundary; and the goal of pruning is to remove those (redundant) subdivision splines and find winning tickets.’; however, the theoretical part does not explain in detail how the proposed algorithm removes the subdivision splines. Will the algorithm need extra computation cost for building such a space partition? 2. When introducing the proposed algorithm, the authors do not analyze whether the method has the same convergence guarantee as the Lottery Ticket Hypothesis. If so, what is the bound on the error probability? 3. In the experiments, the authors do not consider Vision Transformers, which are important SOTA models in image classification, and it is unclear whether the technique still works for larger image datasets such as ImageNet. Will the pruning strategy be different in self-attention layers? | 3. In the experiments, the authors do not consider Vision Transformers, which are important SOTA models in image classification, and it is unclear whether the technique still works for larger image datasets such as ImageNet. Will the pruning strategy be different in self-attention layers?
5EHI2FGf1D | EMNLP_2023 | - no comparison against baselines. The functionality similarity comparison study reports only accuracy across optimization levels of binaries, but no baselines are considered. This is a widely-understood binary analysis application and many papers have developed architecture-agnostic similarity comparison (or often reported as codesearch, which is a similar task).
- rebuttal promises to add this evaluation
- in addition, the functionality similarity comparison methodology is questionable. The authors use cosine similarity with respect to embeddings, which to me makes the experiment rather circular. In contrast, I might have expected some type of dynamic analysis, testing, or some other reasoning to establish semantic similarity between code snippets.
- rebuttal addresses this point.
- vulnerability discovery methodology is also questionable. The authors consider a single vulnerability at a time, and while they acknowledge and address the data imbalance issue, I am not sure about the ecological validity of such a study. Previous work has considered multiple CVEs or CWEs at a time, and report whether or not the code contains any such vulnerability. Are the authors arguing that identifying one vulnerability at a time is an intended use case? In any case, the results are difficult to interpret (or are marginal improvements at best).
- addressed in rebuttal
- This paper is very similar to another accepted at Usenix 2023: Can a Deep Learning Model for One Architecture Be Used for Others?
Retargeted-Architecture Binary Code Analysis. In comparison to that paper, I do not quite understand the novelty here, except perhaps for a slightly different evaluation/application domain. I certainly acknowledge that this submission was made slightly before the Usenix 2023 proceedings were made available, but I would still want to understand how this differs given the overall similarity in idea (building embeddings that help a model target a new ISA).
- addressed in rebuttal
- relatedly, only x86 and ARM appear to be considered in the evaluation (the authors discuss building datasets for these ISAs). There are other ISAs to consider (e.g., PPC), and validating the approach against other ISAs would be important if claiming to build models that generalize to across architectures.
- author rebuttal promises a followup evaluation | - no comparison against baselines. The functionality similarity comparison study reports only accuracy across optimization levels of binaries, but no baselines are considered. This is a widely-understood binary analysis application and many papers have developed architecture-agnostic similarity comparison (or often reported as codesearch, which is a similar task). |
NIPS_2018_464 | NIPS_2018 | of the approach is the definition of the behavior characterization, which is domain-dependent, and may be difficult to set in some environments; however, the authors make this point clear in the paper and I understand that finding methods to define good behavior characterization functions is out of the scope of the submission. The article is properly motivated, review of related work is thorough and extensive experiments are conducted. The methods are novel and their limitations are appropriately stated and justified. The manuscript is carefully written and easy to follow. Source code is provided in order to replicate the results, which is very valuable to the community. Overall, I believe this is a high quality submission and should be accepted to NIPS. Please read more detailed comments below: - The idea of training M policies in parallel is somewhat related to [Liu et al., Stein Variational Policy Gradient], a method that optimizes for a distribution of M diverse and high performing policies. Please add this reference to the related work section. - The update of w in NSRA-ES somehow resembles the adaptation of parameter noise in [Plappert et al., “Parameter Space Noise for Exploration”, ICLR 2018]. The main difference is that the adaptation in Plappert et al. is multiplicative, thus yielding more aggressive changes. Although this is not directly compatible with the proposed method, where w is initialized to 0, I wonder whether the authors tried different policies for the adaptation of w. Given its similarity to ES (where parameter noise is used for structured exploration instead of policy optimization), I believe this is a relevant reference that should be included in the paper as well. - It seems that the authors report the best policy in plots and tables (i.e. if f(theta_t) > f(theta_{t+k}), the final policy weights are theta_t). This is different from the setup by Salimans et al. (cf. Figure 3 in their paper). I understand that this is required for methods that rely on novelty only (i.e. NS-ES), but not for all of them. Please make this difference clear in the text. - Related to the previous point, I believe that section 3.1 lacks a sentence describing how the final policy is selected (I understand that the best performing one, in terms of episodic reward, is kept). - In the equation between lines 282 and 283, the authors should state how they handle comparisons between episodes with different lengths. I checked the provided code and it seems that the authors pad the shorter sequence by replicating its last state in order to compare both trajectories. Also, the lack of a normalization factor of 1/T makes this distance increase with T and favors longer trajectories (which can be a good proxy for many Atari games, but not necessarily for other domains). These decisions should be understood by readers without needing to check the code. - There is no caption for Figure 3 (right). Although it is mentioned in the text, all figures should have a caption. - The blue and purple colors in the plots are very similar. Given the small size of some of these plots, it is hard to distinguish them -- especially in the legend, where the line is quite thin. Please use different colors (e.g. use some orangish color for one of those lines). - Something similar happens with the ES plot in Figure 2. The trajectory is quite hard to distinguish on a computer screen. - In SI 6.5, the authors should mention that although the preprocessing is identical to that in Mnih et al. [7], the evaluation is slightly different as no human starts are used. - In SI 6.6, the network description in the second paragraph is highly overlapping with that in the first paragraph. ---------------------------------------------------------------------------------------------------------------------------------------------------------- Most of my comments had to do with minor modifications that the authors will address. As stated in my initial review, I vote for accepting this submission. | - In SI 6.5, the authors should mention that although the preprocessing is identical to that in Mnih et al. [7], the evaluation is slightly different as no human starts are used.
wyHCt1P7SR | ICLR_2024 | - The biggest issue of the paper is the writing quality, which makes the paper very hard to follow. Details are listed below.
1. The introduction is extremely long and poorly organized. Many points are made, but I cannot find a precise sentence that emphasizes the essential contribution of the paper. The two applications (Ref-inpainting and NVS) seem to stem from the unified ARCI approach, but the introduction always tries to separate them when discussing their challenges and claiming improvements.
2. While the introduction makes separate claims for the two tasks, the descriptions of the two tasks in Sec.3 are heavily entangled, making it hard to clearly understand how the model works for each task.
3. Fig.1 to Fig.3 are very difficult to parse. The texts in the figures are too small. The inputs and outputs for each task are not clearly explained. The captions are not self-contained, and it is also very hard to link them to certain parts of the main text.
- There seems to be no *quantitative evaluation* for multiview image generation -- Table 2 of the main paper only provides results with a single target view. Since the paper claims improvements in multiview image generation, it is important to formally evaluate the consistency of the generated multiview images.
- The proposed ARCI is limited by the autoregressive generation design and cannot produce many multiview images. While a potential tradeoff is to constrain the length of the condition, it would definitely sacrifice the quality/consistency of the generated images. Note that an important goal of multiview image generation is to extract the 3D object/geometry. Both the number of views and the multiview consistency are important when exporting a 3D model from the generated images. The proposed designs are suboptimal compared to recent works such as [SyncDreamer](https://arxiv.org/abs/2309.03453) and [MVDiffusion](https://arxiv.org/abs/2307.01097), which can generate 16 or more views in parallel.
- Although the authors claim that ARCI outperforms Zero123 in novel view synthesis (NVS), Zero123 itself is not merely an NVS model. Zero123 can serve as a general diffusion model backbone with 3D shape/global priors, which facilitates many other approaches via fine-tuning or score distillation (e.g., [Magic123](https://arxiv.org/abs/2306.17843), [One-2-3-45](https://arxiv.org/abs/2306.16928), [SyncDreamer](https://arxiv.org/abs/2309.03453), etc). It is true that ARCI outperforms Zero123 in NVS, especially when the training budget is limited, but the scope of ARCI is much narrower than Zero123 given the more complicated designs. | 3. Fig.1 to Fig.3 are very difficult to parse. The texts in the figures are too small. The inputs and outputs for each task are not clearly explained. The captions are not self-contained, and it is also very hard to link them to certain parts of the main text. |
NIPS_2022_2286 | NIPS_2022 | Weakness 1. It is hard to understand what the axes are for Figure 1. 2. It is unclear what the major contributions of the paper are. Analyzing previous work does not constitute as a contribution. 3. It is unclear how the proposed method enables better results. For instance, Table 1 reports similar accuracies for this work compared to the previous ones. 4. The authors talk about advantages over the previous work in terms of efficiency however the paper does not report any metric that shows it is more efficient to train with this proposed method. 5. Does the proposed method converge faster compared to previous algorithms? 6. How does the proposed methods compare against surrogate gradient techniques? 7. The paper does not discuss how the datasets are converted to spike domain.
There are no potential negative societal impacts. One major limitation of this work is applicability to neuromorphic hardware and how will the work shown on GPU translate to neuromorphic cores. | 4. The authors talk about advantages over the previous work in terms of efficiency however the paper does not report any metric that shows it is more efficient to train with this proposed method. |
NIPS_2020_633 | NIPS_2020 | 1. The contribution is not enough. This paper addresses the overfitting problem of training GANs with limited data and proposes differentiable augmentation. I think this is an important factor, but still limited. 2. Using the accuracy (of both training and validation data) is not a convincing metric, since conditional GANs suffer from a limited-diversity problem [1]. The generated images with limited diversity are also leveraged to train the discriminator (classifier). In this case, the learned discriminator is not suitable for evaluating the validation data. One interesting method [1] is to train a classifier using the generated images and test it on real data. 3. I think more combinations of data augmentation should be considered to support the proposed method: a new data augmentation. This paper only explores three simple terms: 'translation', 'cutout' and 'color', which is not enough to claim that current data augmentation fails. Some papers have investigated a series of data augmentation techniques (such as rotations, noise, etc.) for downstream tasks; I recommend combining more data augmentations to support the proposed method. 4. As shown in Figure 4, is the data augmentation T (for G and D) the same when updating D? For example, at the same iteration, is the data augmentation T of D also 'cutout' when G's is 'cutout'? Should the data augmentation for updating D be the same or not? 5. The face datasets used (Obama, grumpy cat and panda) have limited diversity when I check the provided data, which makes them easy to learn. I wonder why the baseline result (Figure 3) is worse. 6. Table 4 reports the results of baselines. The result of MineGAN is odd. I guess the authors performed it themselves since the released code of MineGAN (on StyleGAN) came out after the NeurIPS deadline. I repeated the released code based on StyleGAN on the cat and dog datasets, and got interesting results. I believe designing the miner network is somewhat challenging, which is why the reported result is worse. 7. In Figure 6, why do the authors consider CIFAR-10 with 100% training data? To support the proposed method, the limited-data setting should be considered. It would be more convincing to leverage CIFAR-10 with limited data. [1] How good is my GAN? ECCV 2018 | 1. The contribution is not enough. This paper addresses the overfitting problem of training GANs with limited data and proposes differentiable augmentation. I think this is an important factor, but still limited.
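The 'translation', 'cutout', and 'color' operations mentioned in points 3 and 4 of the review above are all expressible as differentiable tensor ops, which is what allows the same transform T to be applied to real and generated batches. A minimal PyTorch-style sketch follows; the magnitudes, the per-batch (rather than per-image) randomness, and the function name are illustrative assumptions, not the paper's actual implementation:

```python
import torch

def diff_augment(x, policies=("color", "translation", "cutout")):
    # x: a batch of real or generated images, shape (N, C, H, W).
    # Every op below is differentiable in x, so gradients flow through T.
    if "color" in policies:
        # random brightness shift in [-0.5, 0.5), one scalar per image
        x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    if "translation" in policies:
        # shift the whole batch by up to 1/8 of the image size (wrap-around for brevity)
        shift = max(1, x.size(2) // 8)
        dy = int(torch.randint(-shift, shift + 1, (1,)))
        dx = int(torch.randint(-shift, shift + 1, (1,)))
        x = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
    if "cutout" in policies:
        # zero out one random half-size square, shared across the batch
        h, w = x.size(2), x.size(3)
        ch, cw = h // 2, w // 2
        top = int(torch.randint(0, h - ch + 1, (1,)))
        left = int(torch.randint(0, w - cw + 1, (1,)))
        mask = torch.ones_like(x)
        mask[:, :, top:top + ch, left:left + cw] = 0
        x = x * mask
    return x
```

Because every operation is differentiable in x, discriminator gradients still reach the generator through the augmentation, which is the property the review's question 4 about matching T across the G and D updates hinges on.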
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? 
I can guess the answers to all these questions, but why not be precise? |
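The bandit-level reduction described in point 2 of the review above — partition each arm's data into K possibly overlapping sub-samples and act greedily with respect to a randomly chosen head at each step — could be sketched as follows. This is a reading of the reviewer's description, not the paper's actual algorithm; the per-head inclusion probability is an assumed parameter:

```python
import random

class BootstrappedBandit:
    def __init__(self, n_arms, n_heads=10, p_include=0.5):
        self.n_heads = n_heads
        self.p_include = p_include
        # sums[h][a], counts[h][a]: empirical statistics of head h for arm a
        self.sums = [[0.0] * n_arms for _ in range(n_heads)]
        self.counts = [[0] * n_arms for _ in range(n_heads)]

    def select_arm(self):
        h = random.randrange(self.n_heads)          # sample one head per step
        means = [s / c if c > 0 else float("inf")   # unpulled arms are treated optimistically
                 for s, c in zip(self.sums[h], self.counts[h])]
        return max(range(len(means)), key=means.__getitem__)

    def update(self, arm, reward):
        # each head sees the new sample with probability p_include (data partitioning)
        for h in range(self.n_heads):
            if random.random() < self.p_include:
                self.sums[h][arm] += reward
                self.counts[h][arm] += 1
```

The reviewer's data-efficiency worry corresponds to the fact that each head's empirical mean is built from only a fraction p_include of the pulls.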
s4xIeYimGQ | EMNLP_2023 | 1) The improvement over CoT baselines, especially Self-Consistency Decoding CoTs, is not very significant. With Self-Consistency Decoding, the average improvement is usually around 0.5%, which may limit the real application of the proposed method considering the higher computing usage of backward verification.
2) The method does not work very effectively on general reasoning tasks compared with mathematical reasoning.
3) Lack of deep analysis on when backward verification would work and when it would not. From the current results, we can only conclude that the proposed method may work better on mathematical reasoning tasks. One possible reason is that the masked conditions might be more effective than True-False Item Verification. Some deeper qualitative analysis is needed here, which may also help clarify how to generalize similar verification approaches to reasoning tasks beyond arithmetic reasoning. | 2) The method does not work very effectively on general reasoning tasks compared with mathematical reasoning.
aw2Jc5DFZC | ICLR_2025 | - **W1. SoftMoE Formulation**: The SoftMoE formulation presented in Section 2 of this paper differs from that of the original SoftMoE paper [1]. In the original formulation, each expert processes $p$ slots, making $ \Phi \in \mathbb{R}^{d \times (n \cdot p)}$ and $C(X) \in \mathbb{R}^{m \times (n \cdot p)}$. However, in this paper, the parameter $p$ is not included. Notably, setting $p=1$ in the original formulation results in an incomparability with other papers using SoftMoE, where $p$ is recognized as a significant hyperparameter. Furthermore, in line 243, the authors cite [2], claiming that this work demonstrates that even a single-expert SoftMoE can outperform traditional architectures. This comparison is inaccurate, as $p$ is not set to 1. Specifically, after reviewing the code, I observed that $p \approx m/n$; see lines 55-64 in this GitHub file: [Link](https://github.com/google/dopamine/blob/master/dopamine/labs/moes/architectures/softmoe.py).
- **W2. Theoretical Contribution**: I don't find the results presented in Theorem 1 insightful.
- **W2.1. Trivial Result**: Ignoring $p$ (the number of slots) in the SoftMoE formulation leads to a trivial outcome for $n=1$. Here, matrix $C(X)$ becomes an **all-one vector of size $m$** instead of a matrix of size $m \times p$, resulting in **identical output tokens** and thereby ignoring any temporal information in the tokens, effectively replacing them with a fixed token. This trivializes the ineffectiveness of SoftMoE with $n=1$, which is not the case when $p > 1$.
- **W2.2. Notion of Approximation**: The notion of approximation presented by the authors is non-standard. I encourage the authors to provide references or pointers to similar approaches in the literature to clarify this aspect.
- **W2.3. Proof Technique**: After reviewing Appendix A, I noticed that the proof relies on a special case where a contradiction arises as matrix norms approach infinity. This is acknowledged by the authors in Section 3, where they mention that normalizing the input makes the results from Theorem 1 inapplicable.
- **W3. Lack of comparison for Algorithm 1:** The paper lacks a comparison with the sparse routing mechanism considered in the original SoftMoE paper [1] (cf. Section 3.2). (Correct me if I am wrong) --- References:
[1] Puigcerver, Joan, et al. "From sparse to soft mixtures of experts." arXiv preprint arXiv:2308.00951 (2023).
[2] Obando-Ceron, Johan, et al. "Mixtures of experts unlock parameter scaling for deep RL." arXiv preprint arXiv:2402.08609 (2024). | - **W2.3. Proof Technique**: After reviewing Appendix A, I noticed that the proof relies on a special case where a contradiction arises as matrix norms approach infinity. This is acknowledged by the authors in Section 3, where they mention that normalizing the input makes the results from Theorem 1 inapplicable. |
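For reference, the original SoftMoE dispatch/combine that W1 above compares against — n experts, p slots per expert, and Φ ∈ R^{d×(n·p)} — can be sketched at the shape level as follows (NumPy, with placeholder expert functions; this follows Puigcerver et al.'s formulation rather than the submission's):

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe(X, Phi, experts, p):
    """X: (m, d) tokens, Phi: (d, n*p) slot parameters, experts: list of n callables."""
    n = len(experts)
    logits = X @ Phi                          # (m, n*p)
    D = softmax(logits, axis=0)               # dispatch: softmax over tokens, per slot
    C = softmax(logits, axis=1)               # combine: softmax over all n*p slots, per token
    slots = D.T @ X                            # (n*p, d): each slot is a convex mix of tokens
    slot_out = np.concatenate(
        [experts[i](slots[i * p:(i + 1) * p]) for i in range(n)], axis=0)  # expert i gets its p slots
    return C @ slot_out                        # (m, d) token outputs
```

With n = 1 and p = 1 the combine weights collapse to an all-one vector over the single slot, so every token receives the same output — the degenerate case that the discussion in W2.1 revolves around.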
RSincg5RBe | ICLR_2024 | - Although this work states that the (hierarchical) latent approach for graph generation provides a scalable solution for molecule generation, the provided experiments are limited to datasets (e.g., GuacaMol) in which previous diffusion models (e.g., GDSS and DiGress) are applicable. In order to justify the scalability of the proposed method, it should be evaluated in a larger dataset.
- The experimental setting for evaluating the computational efficiency is not clear. Is the training and sampling time measured in the same condition, e.g., training conducted via DDP and using the same number of V100 GPUs?
- Generation performance on unconditional molecule generation tasks should be evaluated with more descriptive metrics, for example, FCD, Scaffold similarity [1], and Fragment similarity [1]. Reported metrics, i.e., validity, uniqueness, and novelty fail to measure how similar (e.g., chemical aspects) are the generated molecules to the molecules from the test set. In particular, under the current setting, GDSS seems to be showing comparable results in large datasets (ZINC250K and GuacaMol) with significantly fewer parameters.
- The quantitative results of Tables 2 and 3 show that the performances of GLDM and HGLDM on unconditional generation tasks are almost the same, whereas there is a significant improvement using the hierarchical approach for conditional generation tasks. What is the reason for the hierarchical approach only effective in conditional tasks?
- As the continuous diffusion model (e.g., GDSS) outperforms the discrete diffusion model (e.g., DiGress) in Table 2, the continuous diffusion model should be compared as a baseline in Table 3 (i.e., conditional generation task). Although GDSS does not explicitly present a conditional framework, recent work [2] proposes a conditional molecule generation framework using classifier guidance based on GDSS, which could be used as a baseline.
- The performance of GLDM (and HGLDM) comes from the effectiveness of using a latent representation of graphs compared to previous graph diffusion models, not from the diffusion processes. Therefore, an analysis of the latent representation, e.g., interpolation in the latent space or clustering of the latent points with respect to certain conditions, would greatly strengthen this work.
- Missing references on related works:
- Qiang et al., Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D, ICML 2023
- Xu et al., Geometric Latent Diffusion Models for 3D Molecule Generation, ICML 2023
- I would like to raise my score if the above concerns are sufficiently addressed. ---
[1] Polykovskiy et al., Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models, arXiv 2018
[2] Lee et al., Exploring Chemical Space with Score-based Out-of-distribution Generation, ICML 2023 | - As the continuous diffusion model (e.g., GDSS) outperforms the discrete diffusion model (e.g., DiGress) in Table 2, the continuous diffusion model should be compared as a baseline in Table 3 (i.e., conditional generation task). Although GDSS does not explicitly present a conditional framework, recent work [2] proposes a conditional molecule generation framework using classifier guidance based on GDSS, which could be used as a baseline. |
fB1iiH9xo7 | ICLR_2024 | - Authors use object detection as the downstream task, but I personally believe LiDAR-based segmentation is the best choice. Colorization-based pre-training mainly learns the semantics in my opinion, but object detection needs accurate locations and poses especially in the benchmark using IoU-based metrics such as KITTI and Waymo.
- The ultimate performance is not that good. The results on the Waymo Open Dataset are too weak. Other results on KITTI are also not convincing because PointRCNN and IA-SSD are not SOTA detectors nowadays.
- Although the writing is clear, there are some unnecessary parts like the decorative math on page 4, which do not make the paper better. | - Authors use object detection as the downstream task, but I personally believe LiDAR-based segmentation is the best choice. Colorization-based pre-training mainly learns the semantics in my opinion, but object detection needs accurate locations and poses especially in the benchmark using IoU-based metrics such as KITTI and Waymo. |
CbfsKHiWEn | ICLR_2025 | 1. The evaluation could be compared with more baselines; please refer to https://arxiv.org/pdf/2409.02795.
2. These evaluation tasks are too simple; some methods may be effective on benchmarks like IMDB but may not generalize well to more complex tasks.
3. Although the authors provide an elegant theoretical proof, the objective of Eq. (12) seems to be in contradiction with IPO. | 3. Although the authors provide an elegant theoretical proof, the objective of Eq. (12) seems to be in contradiction with IPO.
NIPS_2016_182 | NIPS_2016 | weakness of the technique in my view is that the kernel values will be dependent on the dataset that is being used. Thus, the effectiveness of the kernel will require a rich enough dataset to work well. In this respect, the method should be compared to the basic trick that is used to allow non-PSD similarity metrics to be used in kernel methods, namely defining the kernel as k(x,x') = (s(x,z_1),...,s(x,z_N))^T(s(x',z_1),...,s(x',z_N)), where s(x,z) is a possibly non-PSD similarity metric (e.g. optimal assignment score between x and z) and Z = {z_1,...,z_N} is a database of objects to compare to. The write-up is (understandably) dense and thus not the easiest to follow. However, the authors have done a good job in communicating the methods efficiently. Technical remarks: - it would seem to me that in section 4, "X" should be a multiset (and [\cal X]**n the set of multisets of size n) instead of a set, since in order for the histogram to honestly represent a graph that has repeated vertex or edge labels, you need to include the multiplicities of the labels in the graph as well. - In the histogram intersection kernel, I think, for clarity, it would be good to replace "t" with the size of T; there is no added value to me in allowing "t" to be arbitrary. | - In the histogram intersection kernel, I think, for clarity, it would be good to replace "t" with the size of T; there is no added value to me in allowing "t" to be arbitrary.
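The "basic trick" referred to in that review — making an arbitrary, possibly non-PSD similarity s usable in kernel methods by turning it into an explicit feature map over a reference database Z — is only a few lines of NumPy (s and Z are caller-supplied placeholders, purely illustrative):

```python
import numpy as np

def empirical_kernel(xs, xs_prime, s, Z):
    """k(x, x') = phi(x)^T phi(x') with phi(x) = (s(x, z_1), ..., s(x, z_N))."""
    phi = np.array([[s(x, z) for z in Z] for x in xs])              # (len(xs), N)
    phi_prime = np.array([[s(x, z) for z in Z] for x in xs_prime])  # (len(xs_prime), N)
    # PSD as a kernel by construction: it is an explicit inner product of feature vectors.
    return phi @ phi_prime.T
```

The dependence on the richness of Z is exactly the dataset-dependence concern the reviewer raises for the proposed kernel.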
HoyKFRhwMS | ICLR_2025 | 1. One limitation of retrieval and search engines is that the cost of storage and inference increases and the accuracy of recognition decreases as the memory database size grows. Although computing budget and accuracy can be traded off, this remains a limitation. Additionally, regarding the computation cost, I wonder if there is a rough estimate of the inference time when the database size increases to 1 billion images.
2. The adaptation capacity of the proposed visual memory to accommodate ever-growing concepts assumes that the image encoder can produce meaningful embeddings for new concepts. While for geometrically distinctive concepts, I believe this is less of a concern for DINO representations, as they are observed to contain rich geometric information, for concepts where class label correlates more with semantics rather than geometry, I wonder if the adaptation capacity still holds. | 2. The adaptation capacity of the proposed visual memory to accommodate ever-growing concepts assumes that the image encoder can produce meaningful embeddings for new concepts. While for geometrically distinctive concepts, I believe this is less of a concern for DINO representations, as they are observed to contain rich geometric information, for concepts where class label correlates more with semantics rather than geometry, I wonder if the adaptation capacity still holds. |
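The storage/latency trade-off raised in point 1 of the review above comes straight from the retrieval step itself — a nearest-neighbour lookup over the embedding database — which a brute-force sketch makes explicit (NumPy; a production system would presumably swap in an approximate-nearest-neighbour index, and the function name is illustrative):

```python
import numpy as np

def knn_classify(query_emb, memory_embs, memory_labels, k=10):
    """Brute-force cosine kNN over a visual memory; cost grows linearly with |memory|."""
    memory_labels = np.asarray(memory_labels)
    q = query_emb / np.linalg.norm(query_emb)
    m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    sims = m @ q                               # (|memory|,) cosine similarities
    top = np.argpartition(-sims, k)[:k]        # indices of the k most similar entries (k < |memory|)
    labels, counts = np.unique(memory_labels[top], return_counts=True)
    return labels[np.argmax(counts)]           # majority vote over retrieved neighbours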
ICLR_2022_3218 | ICLR_2022 | Weakness: 1) Since this paper focuses on biometric verification learning, a comparison against the state-of-the-art loss functions widely used in face/iris verification should be added (e.g., Center-Loss, A-Softmax, AM-Softmax, ArcFace). 2) The cosine similarity score is more often used in biometric verification, so I wonder if it would work better than the Euclidean distance when computing the Decidability. 3) A large batch size may be significant for the proposed loss. The authors experimented with three settings to select the best batch size. However, it may be better to examine the performance with more settings; for example, what would happen if a small batch size were used? 4) Why does the triplet loss not converge on CASIA-V4? I guess many previous iris verification works have employed such a loss. 5) Figure 5 shows the impact of the D-Loss before and after training the model. It is suggested to compare with other losses on it. | 1) Since this paper focuses on biometric verification learning, a comparison against the state-of-the-art loss functions widely used in face/iris verification should be added (e.g., Center-Loss, A-Softmax, AM-Softmax, ArcFace).
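For point 2 above, the decidability index is typically computed from the genuine and impostor score distributions, so swapping Euclidean distance for cosine similarity only changes how those scores are produced. A sketch under Daugman's usual definition (which may differ in detail from the paper's) is:

```python
import numpy as np

def cosine_scores(a, b):
    """Row-wise cosine similarity between matched pairs of embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def decidability(genuine_scores, impostor_scores):
    """d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2): larger means better separated."""
    mg, mi = genuine_scores.mean(), impostor_scores.mean()
    vg, vi = genuine_scores.var(), impostor_scores.var()
    return abs(mg - mi) / np.sqrt((vg + vi) / 2.0)
```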
ACL_2017_130_review | ACL_2017 | The paper starts with a detailed introduction and review of relevant work. Some of the cited references are more or less NLP background so they can be omitted e.g. (Salton 1989) in section 4.2.3. Other references are not directly related to the topic e.g. “sentiment classification” and “pedestrian detection in images”, lines 652-654, and they can be omitted too. In general lines 608-621, section 4.2.3 can be shortened as well etc. etc. The suggestion is to compress the first 5 pages, focusing the review strictly on the paper topic, and consider the technological innovation in more detail, incl. samples of English translations of the ABCD and/or Cindarela narratives.
The relatively short narratives in Portuguese esp. in ABCD dataset open the question how the similarities between words have been found, in order to construct word embeddings. In lines 272-289 the authors explain that they generate word-level networks from continuous word representations. What is the source for learning the continuous word representations; are these the datasets ABCD+Cinderella only, or external corpora were used? In lines 513-525 it is written that sub-word level (n-grams) networks were used to generate word embeddings. Again, what is the source for the training? Are we sure that the two kinds of networks together provide better accuracy? And what are the “out-of-vocabulary words” (line 516), from where they come?
- General Discussion: It is important to study how NLP can help to discover cognitive impairments; from this perspective the paper is interesting. Another interesting aspect is that it deals with NLP for Portuguese, and it is important to explain how one computes embeddings for a language with relatively fewer resources (compared to English). The text needs revision: shortening sections 1-3, compressing 4.1 and adding more explanations about the experiments. Some clarification about the NURC/SP N. 338 EF and 331 D2 transcription norms can be given.
Technical comments: Line 029: ‘… as it a lightweight …’ -> shouldn’t this be ‘… as in a lightweight …’ Line 188: PLN -> NLP Line 264: ‘out of cookie out of the cookie’ – some words are repeated twice Table 3, row 2, column 3: 72,0 -> 72.0 Lines 995-996: the DOI number is the same as the one at lines 1001-1002; the link behind the title at lines 992-993 points to the next paper in the list | 338 EF and 331 D2 transcription norms can be given. Technical comments: Line 029: ‘… as it a lightweight …’ -> shouldn’t this be ‘… as in a lightweight …’ Line 188: PLN -> NLP Line 264: ‘out of cookie out of the cookie’ – some words are repeated twice Table 3, row 2, column 3: 72,0 -> 72.0 Lines 995-996: the DOI number is the same as the one at lines 1001-1002; the link behind the title at lines 992-993 points to the next paper in the list |
ICLR_2021_2330 | ICLR_2021 | Weakness
- The method of Fourier-domain supervision lacks analysis and intuition. It's unclear how the size of the grid used to perform the FFT is defined; from my understanding, the size is critical, as the local frequency content changes with different grid sizes. Is it fixed throughout training? What is the effect of having different sizes?
- The generator has a recurrent structure that supports 10 frame generation, but the discriminator looks at three frames (from figure 1) at a time, which seems to limit the power of temporal consistency.
- In the figure 7 result and the supplemental video result, SurfGAN produces smoother results (the MSE seems closer to the red ground truth in figure 7). This seems to contradict the use of Fourier components for supervision -- what causes this discrepancy?
- Figure 4 is confusing. It's not clear what the columns mean -- it is not explained in the text or caption.
- Notation is confusing. M and N are used without definition. Suggestion
- Spell out F.L.T.R in figure 4
- Figure 1 text is too small to see
- It is recommended to have notation and figure cross-referenced (e.g. M and N are not shown in the figure) | - Notation is confusing. M and N are used without definition. Suggestion - Spell out F.L.T.R in figure 4 - Figure 1 text is too small to see - It is recommended to have notation and figure cross-referenced (e.g. M and N are not shown in the figure) |
j80yTpU7ni | ICLR_2024 | * The proposed method involves multiple gradient updates and connection strength calculations (either one at a given iteration), which seems very computationally demanding compared to existing gradient based MTL methods like MGDA (which are already computationally intensive).
* Although the authors mention that the proposed method of priority allocation “prevent[s] a specific task from exerting dominant influence over the entire network”, it is not easy to see why this is the case. The definition of the task priority does not seem to exclude the case where a specific task exerts dominant influence over the entire network.
* The y axis of Figure 4, “percentage of top-priority tasks” is ambiguous. What is the relationship between this quantity and the connection strength defined in Eq. 7?
Minor comments:
* In Algorithm 1, using $p$ to denote both the phase mixing probability and the dummy variable in the inner loop in Phase 2 might be confusing. | * In Algorithm 1, using $p$ to denote both the phase mixing probability and the dummy variable in the inner loop in Phase 2 might be confusing.
ARR_2022_285_review | ARR_2022 | The main weakness of the paper is that it's very terse, partly due to the 4-page limit of a short submission. This leads to potentially important information being omitted, see detailed comments below.
Some of the mentioned points could be easily alleviated by utilising the additional page that is provided upon acceptance in a venue.
The following are points where i would like to see further clarifications: (NB: I do not have access to the forum/comments of the previous submission so some of my comments might have been addressed earlier.)
- Why are SST-2 and SICK-E chosen as representative tasks to show that the proposed method generalises to other few-shot settings? Other papers seem to go for the full SuperGLUE suite. Similarly, for those tasks, why is Autoprompt chosen as the only reference approach to compare against? Admittedly, the difference in performance to other approaches (at least for the SST-2 task) appears to not differ too much. - It is great to see that confidence scores were reported for the obtained results, but how exactly are they calculated?
- The high-level description helps to understand the approach intuitively, but a more detailed (e.g. mathematical) formulation, for example in the appendix, would be helpful as well. Similarly, the figure is supposed to help to understand the problem better, but I find it confusing in two ways: First, the figure is too abstract for me. Maybe having more text labels would help. Second, depicting sentiment analysis, it does not align well with the main contribution of the paper, improvements on the WiC task. Maybe reworking the figure to depict the WiC task would help with both problems.
- Qualitative error analysis in the Appendix is great, but the manuscript so far lacks a more detailed analysis of the results. One could wonder, for the WiC task, are the errors always due to models predicting "matched" for "not matched" GT? Is this similar to other approaches? For example, a by-class accuracy breakdown could answer some of these questions. | - The high-level description helps to understand the approach intuitively, but a more detailed (e.g. mathematical) formulation, for example in the appendix, would be helpful as well. Similarly, the figure is supposed to help to understand the problem better, but I find it confusing in two ways: First, the figure is too abstract for me. Maybe having more text labels would help. Second, depicting sentiment analysis, it does not align well with the main contribution of the paper, improvements on the WiC task. Maybe reworking the figure to depict the WiC task would help with both problems. |
jR6YMxVG9i | ICLR_2025 | -"a around" should be "an around"
-The grammar here could be improved as the phrasing is awkward: "Apart from agent framework"
-"VLM - generated" should be "VLM-generated" in Line 173
- If a trajectory fails to execute the desired command and the system tries to start over, the initial state, $s_1$, may not be the same if the system is trialing the trajectories in the real world. The paper does not seem to address the problem of undoing or resetting the system back to its initial state during the reflection and retry component. If the system used a world model, then this would not be a problem, but it doesn't seem that a world model is discussed or used.
- It would be helpful for the x-axis of Figure 2 to not have increments of 1/2 as the actual step size is 1.
- The paper could be improved by including an analysis to show how often failures in each iteration lead to more problems at subsequent iterations as opposed to getting to reset to the initial state.
- The results could be improved if there were confidence intervals on the results in the tables and some notion of temperature and how that hyperparameter might effect results.
- Table 3 is helpful to show the performance of the system improves with a 2x-3x increase in training data size; however, it would be even better if a dimension was included for the quality of the data.
- It would have been helpful to include additional benchmarking tasks outside of AitW.
- An additional analysis showing the performance of the system as a function of task complexity (e.g., number of steps required in the optimal plan) would have been useful.
- The approach is a relatively straight-forward combination of
- It would be helpful to add details about whether the reward model and the VLM policy are the same system. If so, how is the VLM trained to output the reward? Is this just intrinsic or through prompt engineering or is the VLM literally modified to add a scalar output head and trained from scratch or fine-tuned?
- The approach seems to combine some relatively simple elements to create a new system; the paper could be improved by being more clear about the novelty and intellectual merit of the approach more than just combining simple elements of prior work. | - It would have been helpful to include additional benchmarking tasks outside of AitW. |
ICLR_2021_1292 | ICLR_2021 | Weakness:
1. Comparison of Complexity: [1] presents the complexity of different efficient transformers. For linformer [2], the time and memory complexity is O(nk). Is there any justification for LSH-sampling-equipped YOSO having a complexity of O(nm\tau log(d)+nmd), which is more than O(nk)? 2. Experiments: YOSO takes linformer as a baseline. However, the pre-training experiment part does not provide steps vs. ppl of linformer alongside YOSO in Figure 4. What is the comparison result of YOSO with linformer on iteration-wise convergence? Also, linformer demonstrates better accuracy in downstream tasks such as SST-2. Is there any comparison or explanation that can account for this difference in performance? 3. Efficiency: YOSO demonstrates an advantage over linformer and longformer in memory and runtime. However, is there any analysis on why YOSO achieves this superiority despite its higher complexities? Are there any system-level advantages that YOSO can show?
Some discussions: Reformer [3] designs an attention mechanism in which computation is restricted to neighboring tokens inside the same hash buckets. YOSO also uses hash-based sampling to compute attention via neighboring tokens that have high collision probability. On the other hand, linformer introduces a more global view of attention via low-rank projection. Is there any analysis of this local vs. global intuition?
[1] Efficient Transformers: A Survey https://arxiv.org/pdf/2009.06732.pdf
[2] Linformer: Self-Attention with Linear Complexity https://arxiv.org/abs/2006.04768
[3] Reformer: The Efficient Transformer https://arxiv.org/abs/2001.04451 | 2. Experiments: YOSO takes linformer as a baseline. However, the pre-training experiment part does not provide steps vs. ppl of linformer alongside YOSO in Figure 4. What is the comparison result of YOSO with linformer on iteration-wise convergence? Also, linformer demonstrates better accuracy in downstream tasks such as SST-2. Is there any comparison or explanation that can account for this difference in performance?
ACL_2017_276_review | ACL_2017 | The novelty is fairly limited (essentially, another permutation of tasks in multitask learning), and only one way of combining the tasks is explored. E.g., it would have been interesting to see if pre-training is significantly worse than joint training; one could initialize the weights from an existing RNN LM trained on unlabeled data; etc.
- General Discussion: I was hesitating between a 3 and a 4. While the experiments are quite reasonable and the combinations of tasks sometimes new, there's quite a bit of work on multitask learning in RNNs (much of it already cited), so it's hard to get excited about this work. I nevertheless recommend acceptance because the experimental results may be useful to others.
- Post-rebuttal: I've read the rebuttal and it didn't change my opinion of the paper. | -General Discussion: I was hesitating between a 3 and a 4. While the experiments are quite reasonable and the combinations of tasks sometimes new, there's quite a bit of work on multitask learning in RNNs (much of it already cited), so it's hard to get excited about this work. I nevertheless recommend acceptance because the experimental results may be useful to others. |
ICLR_2022_1678 | ICLR_2022 | 1 - The authors propose a relaxation of rejection sampling which uses an arbitrary parameter β instead of the true upper bound of the ratio p/q when the latter cannot be computed. The reviewer fails to understand why the authors did not directly use importance sampling in the first place.
2 - In algorithm 1, the reviewer fails to see a difference between QRS and RS, and will change their opinion if the authors can point out a value of u for which QRS and RS will behave differently.
3 - Uninteresting Section 2.2: - Equation 1 needs a parenthesis to avoid confusion - Equation 3 is pretty much obvious from the definition of TVD (Lemma 2.19, Aldous and Fill https://www.stat.berkeley.edu/users/aldous/RWG/book.html) - Equation 4 is obvious since using the appropriate upper bound gives you rejection sampling, which is a perfect sampling algorithm.
4 - In the abstract the authors require the proposal distribution to upper bound the target everywhere, which is not true, as the authors themselves clarify in the text.
5 - While Equations 9 and 10 are great in that they can be used to compute the TVD and KL between the true and QRS distributions, there are multiple issues which are neither stated as assumptions nor addressed appropriately, namely: - They're not unbiased estimators since Z is not known and needs to be estimated; this point is not explicitly stated. - It is assumed that the normalizing constant of q is known, which is not always the case. - They rely on importance sampling, which begs question 1. | 4 - In the abstract the authors require the proposal distribution to upper bound the target everywhere, which is not true, as the authors themselves clarify in the text.
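The contrast drawn in point 1 of that review — accept/reject with an arbitrary β versus simply importance-weighting by p/q — can be made concrete with a small sketch (NumPy; `propose`, `q_pdf`, and `p_unnorm` are caller-supplied placeholders, and β is not assumed to upper-bound p/q):

```python
import numpy as np

def quasi_rejection_sample(propose, q_pdf, p_unnorm, beta, n):
    """Accept x ~ q with probability min(1, p(x) / (beta * q(x))); beta need not dominate p/q."""
    xs = propose(n)                              # n samples from the proposal q
    u = np.random.rand(n)
    accept = u < np.minimum(1.0, p_unnorm(xs) / (beta * q_pdf(xs)))
    return xs[accept]

def importance_estimate(propose, q_pdf, p_unnorm, f, n):
    """Self-normalized importance sampling estimate of E_p[f(X)] when the normalizer of p is unknown."""
    xs = propose(n)
    w = p_unnorm(xs) / q_pdf(xs)
    return np.sum(w * f(xs)) / np.sum(w)
```

When β does dominate sup p/q, the first routine reduces to exact rejection sampling; the reviewer's question 1 is essentially why the second, self-normalized estimator is not used directly.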
NIPS_2019_962 | NIPS_2019 | for exceptions. + Experiments are convincing. + To the best of my knowledge, the idea of using unsupervised keypoints for reinforcement learning is novel and promising. One can expect a variety of follow-up work. + Using keypoints as input state of a Q function is reasonable and reduces the dimensionality of the problem. + Reducing the search space to the most controllable keypoints instead of raw actions is a promising idea. Weaknesses: 1. Overstated claim on generalization In the introduction (L17-L22), the authors motivate their work by explaining that reinforcement learning approaches are limited because it is difficult to re-purpose task-specific representations, but that this is precisely what humans do. From this, one could have expected this paper to address this issue by training and applying the detector network across multiple games, re-purposing their keypoint detector. This would have be useful to verify that the learnt representations generalize to new contexts. But unfortunately, it hasn't been done, so it is a bit of an over-statement. Could this be a limitation of the method because the number of keypoints is fixed? 2. Deep RL that matters Experiments should be run multiple times. A longstanding issue with deep RL is their reproducibility and the significance of their improvements. It has been recently suggested that we need a community effort towards reproducibility [a], which should also be taken into account in this paper. Among the considerations, one critical thing is running multiple experiments and reporting the statistics. [a] Henderson, Peter, et al. "Deep reinforcement learning that matters." Thirty-Second AAAI Conference on Artificial Intelligence. 2018. 3. The choice of testing environment is not well motivated. Levels are selected without a clear rationale, with only a vague motivation in L167. This makes me suspect that they might be cherry picks. Authors should provide a more clear justification. This could be related to the next weakness that I will discuss, which is understandable. Even if this is the case, this should then be explicit with experimental evidence. 4. Keypoints are limited to moving objects A practical limitation comes from the fact that the keypoints are learnt from the moving parts of the image. As identified by the authors, the first resulting limitation is that the method assumes a fixed background, so that only meaningful objects move and can be detected as keypoints. Learning to detect keypoints based on what objects are moving has some limitations when these keypoints are supposed to be used as the input state of a Q function. One can imagine a game where some obstacles are immobile. The locations of these obstacles are important in order to make decisions but in this work, they would be ignored. It is therefore important that these limitations are also explicitly demonstrated. 5. Dealing with multiple instances. Because "PNet" generates one heatmap per keypoint, each keypoint detector "specializes" into a certain type of keypoint. This is fine for some applications (e.g. face keypoints) where only one instance of each kind of keypoint exists in each image. But there are games (e.g. Frostbite) where a lot of keypoints look exactly the same. And still, the detector is able to track them with consistency (as shown in the supplementary video). This is intriguing, as one could expect the detector to detect several keypoints at the same location, instead of distributing them almost perfectly. 
Is it because the receptive field is large? 6. Other issues - In section 3, the authors could improve the explanation of why the loss promotes the detection of meaningful keypoints. It is not obvious at first why the detector needs to detect keypoints to help with the reconstruction. - Figure 1: Referring to [15] as "PointNet" is confusing when this name doesn't appear anywhere in this paper ([15]) and there exists another paper with this name. See "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas. - Figure 1: The figure describes two "stop grad" blocks, but there is no mention or explanation of them in the text or caption. This is not theoretically motivated either, because most of the transported feature map comes from the source image (all the pixels that are not close to source or target keypoints). Blocking these gradients would block most of the gradients that can be used to train the "Feature CNN" and "PNet". - L93: "by marginalising the keypoint-detector feature-maps along the image dimensions (as proposed in [15])". This would be better explained and self-contained by saying that a soft-argmax is used. - L189: "Distances above a threshold (ε) are excluded as potential matches". What threshold value is used? - What specific augmentation techniques are used during the training of the detector? - Figure 4: it is not clear what the meaning of "1-200 frames" is and how the values are computed. Why are the precision and recall changing with the trajectory length? Also, what is an "action repeat"? - Figure 6: the scores should be normalized (and maybe displayed as a plot) for easier comparison. ==== POST REBUTTAL ==== The rebuttal is quite convincing, and has addressed my concerns. I would like to raise the rating of the paper to 8 :-) I'm happy that my worries were just worries. | - Figure 1: Referring to [15] as "PointNet" is confusing when this name doesn't appear anywhere in this paper ([15]) and there exists another paper with this name. See "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas.
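The operation flagged at L93 of that review — marginalising a keypoint heatmap along the image dimensions, i.e. a soft-argmax — is only a few lines, which supports the reviewer's suggestion to simply name it (NumPy sketch, one heatmap per keypoint; the absence of a temperature term is an assumption):

```python
import numpy as np

def soft_argmax(heatmap):
    """heatmap: (H, W) raw detector scores -> expected (x, y) keypoint location."""
    h, w = heatmap.shape
    probs = np.exp(heatmap - heatmap.max())
    probs /= probs.sum()                           # softmax over all pixels
    ys, xs = np.mgrid[0:h, 0:w]                    # row (y) and column (x) index grids
    return float((probs * xs).sum()), float((probs * ys).sum())  # expectation of coordinates
```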
NIPS_2021_1721 | NIPS_2021 | I was very confused at the beginning about the difference between this paper and Bayesian RL. Assuming that my understanding of this paper is correct, I think the writing could be improved. Here are my comments:
It might be helpful to have a clear paragraph about the problem definition of this paper. I have found it confusing between Bayesian RL and the problem that this paper is focusing on.
Line 29-45: the zoo example conveys the drawback of empirical risk minimization. I understand that the map represents empirical risk minimization. However, I am not sure what peeking through the window represents? The anti-empirical risk minimization approach?
Line 29-45: the zoo example is good. However, it is an analogy for supervised learning rather than an analogy for RL. Considering that this paper is doing RL, it might be better to use an analogy for RL.
Sec. 4 and Fig. 1: the authors try to explain why standard RL methods could fail to generalize. However, the example problem of shoe classification is a contextual bandit problem rather than an MDP problem. Since the authors mentioned MDP multiple times in Sec. 4, it might be better to give an example of an MDP rather than a bandit.
Sec. 4 and Fig. 1: When I first read Sec. 4 and Fig. 1, I was very confused.
Given that methods like UCB can solve bandit problems, when I first read it, I didn't understand why the author said more sophisticated MDPs that use uncertainty estimates in their construction. Later on, I understood that the authors are solving a different problem from the problem that UCB is trying to solve.
Therefore, I think this section could be clarified a bit to avoid this confusion.
Line 228-237: again, I think this is a bandit problem, not an MDP problem.
Line 228-237: in this paragraph, the authors use terms like Bayes-optimal, which is very confusing to me at first.
Bayes-optimal refers to optimally solving the trade-off between exploration and exploitation. However, in the problem that this paper is focusing on, there is no exploration.
It is very confusing to see bayes-optimal and a = argmax p(y | x, D); together. Because in Bayesian RL and bandits, a = argmax p(...) usually refers to a greedy heuristic to approximate the bayes-optimal policy, e.g., UCB. So a = argmax p(...) is not bayes-optimal.
I think maybe the author could define a new concept of optimality for the generalization problem formulated in this paper, rather than using terms in Bayesian RL.
Line 228-237: again, the term POMDP was confusing to me at first, too.
In a POMDP, the belief (or posterior) is updated at every iteration when a new observation comes. However, in the problem formulated in this paper, no exploration (active information gathering) is allowed, and the belief is not updated at all during testing. Thus, I am not sure whether POMDP is the correct term. Maybe the authors could use POMDP and clarify this point.
Line 236: the authors use the word memoryless without defining it. I figured it out by reading the appendix. It would be better to define it in advance.
Eq. 6: is the policy gradient in Eq. 6 solving the optimal problem? So after convergence, will we get the optimal solution to Eq. 5? It might be better to clarify. Minor
Line 78: but also on learning - on is unnecessary.
Line 132: dπ(s) = (1−γ)... - I don't understand why (1−γ) is there (I could be wrong).
Line 212: extra space in the beginning.
Line 216: extra space in the beginning.
Line 233: It might be better to use a larger symbol for the indicator function.
I didn't check the proofs in Appendix. Post-rebuttal
Thank you for your clarification! Now I think the paper is clearer in my mind and I appreciate it more! I have raised my score by 1 point. | 6: is the policy gradient in Eq. 6 solving the optimal problem? So after convergence, will we get the optimal solution to Eq. 5? It might be better to clarify. Minor Line 78: but also on learning - on is unnecessary. Line 132: dπ(s) = (1−γ)... |
NIPS_2022_1048 | NIPS_2022 | and comments: 1. This paper mainly focused on group sufficiency as the fairness metric. Is it possible to derive similar results under criteria of demographic parity or equalized odds? What are the potential challenges for other fair metrics? Under these settings, is it still possible to achieve both fairness and accuracy for many subgroups? 2. The regularization coefficient λ
seems to have a joint optimal value in 0.1-2. Could you elaborate more on why both fairness and accuracy drop when λ
is large? 3. Is it possible to assume the general gaussian distribution rather than isotropic gaussian in the proposed algorithm? What is the difference? 4. Can the proposed theoretical analysis be extended for a regression or segmentation task? For example, could we obtain the same results as the classification task? 5. Could you explain a bit more on the intuition of group sufficiency? Is there any relation to the well-known sufficient statistics? Other comments: 1. Could we extend the protected feature A
to a vector form? For instance, A
represents multiple attributes. 2. In the Introduction part, the authors introduced a medical therapy instance to present the importance of group sufficiency. Could you explain a bit more about the difference between sufficiency and DP/EO metrics in the real-world application scenarios? 3. In line 225 and line 227, the mathematical expression of gaussian distribution is ambiguous. 4. In section 4.4, it mentions the utilization of Monte-Carlo sampling method. I am curious about the influence of different sampling numbers.
================================================ Thanks the effort from the authors, and I am satisfied with the rebuttal. I would like to raise my score to 8. | 3. Is it possible to assume the general gaussian distribution rather than isotropic gaussian in the proposed algorithm? What is the difference? |
NIPS_2017_480 | NIPS_2017 | and limitations.
Other comments:
* Section 2.1: maybe itâs not necessary to introduce discounts and rewards at all, given that neither are used in the paper?
* Section 3.1: the method for finding the factors seems very brittle, and to rely on disentangled feature representations that are not noisy. Please discuss these limitations, and maybe hint at how factors could be found if the observations were a noisy sensory stream like vision.
* Line 192: freezing the partitioning in the first iteration seems like a risky choice that makes strong assumptions about the coverage of the initial data. At least discuss the limitations of this.
* Section 4: there is a mismatch between these options and the desired properties discussed in section 2.2: in particular, the proposed options are not âsubgoal optionsâ because their distribution over termination states strongly depends on the start states? Same for the Treasure Game.
* Line 218: explicitly define what the âgreedyâ baseline is.
* Figure 4: Comparing the greedy results between (b) and (c), it appears that whenever a key is obtained, the treasure is almost always found too, contrasting with the MCTS version that explores a lot of key-but-no-treasure states. Can you explain this? | * Line 192: freezing the partitioning in the first iteration seems like a risky choice that makes strong assumptions about the coverage of the initial data. At least discuss the limitations of this. |
ACL_2017_371_review | ACL_2017 | - The description is hard to follow. Proof-reading by an English native speaker would benefit the understanding - The evaluation of the approach has several weaknesses - General discussion - In Equation 1 and 2 the authors mention a phrase representation give a fix-length word embedding vector. But this is not used in the model. The representation is generated based on an RNN. What the propose of this description?
- Why are you using GRU for the Pyramid and LSTM for the sequential part? Is the combination of two architectures a reason for your improvements?
- What is the simplified version of the GRU? Why is it performing better? How is it performing on the large data set?
- What is the difference between RNNsearch (groundhog) and RNNsearch(baseline) in Table 4?
- What is the motivation for only using the ending phrases and e.g. not using the starting phrases?
- Did you use only the pyramid encoder? How is it performing? That would be a more fair comparison since it normally helps to make the model more complex.
- Why did you run RNNsearch several times, but PBNMT only once?
- Section 5.2: What is the intent of this section | - Section 5.2: What is the intent of this section |
ACL_2017_818_review | ACL_2017 | 1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea.
2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline. - General Discussion: The paper needs quite a bit of work before it is ready for publication. - Detailed comments: 026 five dimensions, not six Figure 1, caption: "implies physical relations": how do you know which physical relations it implies?
Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please explicit link to that literature?
Dowty, David. " Thematic proto-roles and argument selection." Language (1991): 547-619.
135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias?
141 "values" ==> "value"?
143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012.
Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. " Models of Semantic Representation with Visual Attributes." ACL (1). 2013.
146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is long-standing task.
152 it is not clear what "grounded" means at this point Section 2.1: why these dimensions, and how did you choose them?
177 explain terms "pre-condition" and "post-condition", and how they are relevant here 197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help.
Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level).
I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract.
248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them?
326 How do you know whether the frame is under- or over-generating?
Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions? Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which?
336 "with... PMI": something missing (threshold?)
371 did you do this partitions randomly?
376 "rate *the* general relationship" 378 "knowledge dimension we choose": ? ( how do you choose which dimensions you will annotate for each frame?)
Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph?
More generally, at the beginning of section 4 you should give a higher level description of how your model works and why it is a good idea.
420 "both classes of knowledge": antecedent missing.
421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce earlier? In any case, I didn't understand their role.
461 "also"?
471 where do you get verb-level similarities from?
Figure 3: I find the figure totally unintelligible. Maybe if the text was clearer it would be interpretable, but maybe you can think whether you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions.
598 define term "message" and its role in the factor graph.
621 why do you need a "soft 1" instead of a hard 1?
647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline.
654 "more skimp seed knowledge": ?
659 here and in 681, problem with table reference (should be Table 2). 664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger".
681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here?
781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details. | 1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea. |
EWP9BVRRbA | ICLR_2025 | - The method does not address adversarial attacks aimed at generating benign yet contextually altered text, which may not contain harmful content but still alters the original intent. Could the authors discuss how their approach might be extended or modified to handle such cases, where adversarial examples produce benign text that misrepresents the original context?
- The threat model needs further clarification. Could the authors define the assumed threat model more explicitly, specifying the attacker’s level of access, capabilities, and the defender's available resources? Including this in a dedicated section would enhance clarity, particularly around the assumed white-box access to the victim model.
- The method currently relies on having both adversarial and benign samples to calculate the direction and threshold. Could the authors discuss how their approach might be adapted for scenarios where only single images are available for evaluation, or clarify any limitations in these cases?
- Why does the method focus on the last layer of the LLM? Could the authors justify this choice, ideally through an ablation study comparing performance across different layers?
- While the method appears efficient based on empirical results, some visualization of the "attacking direction" would be helpful, particularly in the context of cross-model transferability.
- Only one baseline is provided for comparison, which limits the evaluation of the method's effectiveness. Could the authors explain why other relevant baselines, such as the approach in Xu et al. [1] that leverages the semantic relationship between malicious queries and adversarial images, were not included? Expanding the baseline comparisons would strengthen the evaluation in a revised version of the paper.
[1] Xu, Yue, et al. 'Defending jailbreak attack in vlms via cross-modality information detector.'"
- What if the training and testing images come from different datasets? Could the authors evaluate the robustness of their method across diverse image distributions by conducting additional experiments using separate datasets for training and testing? This would help assess the method's generalizability.
- **Line 478:** Summarize the results rather than simply directing readers to the appendix. | - The threat model needs further clarification. Could the authors define the assumed threat model more explicitly, specifying the attacker’s level of access, capabilities, and the defender's available resources? Including this in a dedicated section would enhance clarity, particularly around the assumed white-box access to the victim model. |
ACL_2017_483_review | ACL_2017 | - 071: This formulation of argumentation mining is just one of several proposed subtask divisions, and this should be mentioned. For example, in [1], claims are detected and classified before any supporting evidence is detected.
Furthermore, [2] applied neural networks to this task, so it is inaccurate to say (as is claimed in the abstract of this paper) that this work is the first NN-based approach to argumentation mining.
- Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature representation), and need to be defined. Furthermore, it seems like the LHS of equation (7) should be a conditional probability.
- There are several unclear things about Table 2: first, why are the three first baselines evaluated only by macro f1 and the individual f1 scores are missing?
This is not explained in the text. Second, why is only the "PN" model presented? Is this the same PN as in Table 1, or actually the Joint Model? What about the other three?
- It is not mentioned which dataset the experiment described in Table 4 was performed on.
General Discussion: - 132: There has to be a lengthier introduction to pointer networks, mentioning recurrent neural networks in general, for the benefit of readers unfamiliar with "sequence-to-sequence models". Also, the citation of Sutskever et al. (2014) in line 145 should be at the first mention of the term, and the difference with respect to recursive neural networks should be explained before the paragraph starting in line 233 (tree structure etc.).
- 348: The elu activation requires an explanation and citation (still not enough well-known).
- 501: "MC", "Cl" and "Pr" should be explained in the label.
- 577: A sentence about how these hyperparameters were obtained would be appropriate.
- 590: The decision to do early stopping only by link prediction accuracy should be explained (i.e. why not average with type accuracy, for example?).
- 594: Inference at test time is briefly explained, but would benefit from more details.
- 617: Specify what the length of an AC is measured in (words?).
- 644: The referent of "these" in "Neither of these" is unclear.
- 684: "Minimum" should be "Maximum".
- 694: The performance w.r.t. the amount of training data is indeed surprising, but other models have also achieved almost the same results - this is especially surprising because NNs usually need more data. It would be good to say this.
- 745: This could alternatively show that structural cues are less important for this task.
- Some minor typos should be corrected (e.g. "which is show", line 161).
[1] Rinott, Ruty, et al. "Show Me Your Evidence-an Automatic Method for Context Dependent Evidence Detection." EMNLP. 2015.
[2] Laha, Anirban, and Vikas Raykar. " An Empirical Evaluation of various Deep Learning Architectures for Bi-Sequence Classification Tasks." COLING. 2016. | - 590: The decision to do early stopping only by link prediction accuracy should be explained (i.e. why not average with type accuracy, for example?). |
O3Mej5jlda | ICLR_2024 | Below are several concerns related to weaknesses.
- In Introduction, the paper explains few-shot learning is "facilitating downstream scenarios where labeled data can be expensive or difficult to obtain". While few-shot learning is somewhat established in the community, can authors motivate the study of few-shot learning with real-world applications? When factually does an application need few-shot learning?
- The paper writes "sampling class-imbalanced tasks". As the few-shot learning setting has only a few examples for each class, how to set a reasonable class-imbalanced task? Can authors explain with concrete details?
- The paper writes "Thus to make the evaluation protocol realistic while being fair for comparison, we have two choices: (1) determine hyperparameters of transfer algorithms in advance on a held-out dataset that is both different from the pretraining dataset and target downstream dataset". It is not clear why must it be different from both pretraining and target datasets. When embracing a pretrained model (e.g., CLIP's visual encoder), how to define such a validation set different from the pretraining data? Hypothetically, it is reasonably to think the pretraining data reflects data in the real world already. Can authors clarify and explain?
- While the paper makes a good point "a good few-shot transfer method should not only have high performance at its optimal hyperparameters, but should also have resistance to the change of hyperparameters," why is being resistant to different hyperparameters a reasonable argument in designing a good few-shot setup? Can we leave hyperparameter search as a standalone method/problem orthogonal to a few-shot protocol?
- The paper writes "No variation of the number of classes." But it is not clear why "the number of classes" matters. Can authors explain and clarify?
- In Section 5.1, the paper suggests "hyperparameter ensemble". But why don't we leave hyperparameter search as a algorithmic problem in few-shot learners? What if a few-shot method has good hyperparameters outside the predefined range of hyperparameters?
- The paper writes "every evaluated task can provably approach its optimal performance" but doesn't provably justify this claim. Authors should clarify.
- The paper writes "the hyperparameters won’t usually change too much from dataset to dataset". This is not clear. Can authors justify this?
- The paper writs "We thus restrict the maximum number of training samples in each class to 10, constructing “true” few-shot tasks." However, this conflicts previous suggestions of setting "class-imbalanced tasks". Can authors explain? Moreover, why 10 examples per class construct "true few-shot tasks"? Can authors discuss what numbers of per-class examples can be grounded as "true few-shot tasks"?
- While the paper tries to embrace pretrained models such as CLIP, it is not clear what few-shot tasks can be reasonable given that pretrained models have seen examples of few-shot tasks (e.g., images belonging to ImageNet classes). Can authors clarify?
- The paper writs "Fungi and Plant Disease, two fine-grained datasets whose category names are mostly rare words." How to tell the categories in these datasets are "mostly rare words"? Can authors clarify?
- The paper writes "This is something like a “text domain shift"". Can authors explain what it means? Shift from where to where?
- The paper has a strong claim that "This indicates we are not making progress in this field and we should rethink what’s the thing that leads to real improvement of few-shot multimodal transfer performance." This is not an insightful claim and I'd encourage authors to write more insightful comments, e.g., how to move forward with the proposed benchmark, what directions are reasonable to move forward, etc. | - The paper writes "sampling class-imbalanced tasks". As the few-shot learning setting has only a few examples for each class, how to set a reasonable class-imbalanced task? Can authors explain with concrete details? |
xbnNgqGefc | EMNLP_2023 | 1) All of the empirical evaluation is performed on one dataset. This makes it hard to judge the generalizability of the approach. But, it is understandable given the difficulty in annotating data for such a task.
2) The chat-gpt baseline is very rudimentary. Few-shot approach isn’t tested. Also, including the discourse relation information in the prompts (probably in a Chain-of-Thought style approach) might yield good results. This will only add to the paper’s evaluation. But, it is extraneous to their line of evaluation as presented.
3) In addition to sentence level and token level performance, it would have been interesting to see document level evaluation of propaganda as well. It seems like a natural setting for the task which is missing in their evaluation. | 2) The chat-gpt baseline is very rudimentary. Few-shot approach isn’t tested. Also, including the discourse relation information in the prompts (probably in a Chain-of-Thought style approach) might yield good results. This will only add to the paper’s evaluation. But, it is extraneous to their line of evaluation as presented. |
NIPS_2019_573 | NIPS_2019 | of the paper: - no theoretical guarantees for convergence/pruning - though experiments on the small networks (LeNet300 and LeNet5) are very promising: similar to DNS [16] on LeNet300, significantly better than DNS [16] on LeNet5, the ultimate goal of pruning is to reduce the compute needed for large networks. - on the large models authors only compare GSM to L-OBS. No motivation given for the choice of the competing algorithm. Based on the smaller experiments it should be DNS [16], the closest competitor, rather than L-OBS, showed quite poor performance compared to others. - Authors state that GSM can be used for automated pruning sensitivity estimation. 1) While graphs (Fig 2) show that GSM indeed correlates with layer sensitivity, it was not shown how to actually predict sensitivity, i.e. no algorithm that inputs model, runs GSM, processes GSM result and output sensitivity for each layer. 2) Authors don't explain the detail on how the ground truth of sensitivity is achieved, lines 238-239 just say "we first estimate a layer's sensitivity by pruning ...", but no details on how actual pruning was done. comments: 1) Table 1, Table 2, Table 3 - "origin/remain params|compression ratio| non-zero ratio" --- all these columns duplicate the information, only one of the is enough. 2) Figure 1 - plot 3, 4 - two lines are indistinguishable (not even sure if there are two, just a guess), would be better to plot relative error of approximation, rather than actual values; why plot 3, 4 are only for one value of beta while plot 1 and 2 are for three values? 3) All figures - unreadable in black and white 4) Pruning majorly works with large networks, which are usually trained in distributed settings, authors do not mention anything about potential necessity to find global top Q values of the metric over the average of gradients. This will potentially break big portion of acceleration techniques, such as quantization and sparsification. | 2) Authors don't explain the detail on how the ground truth of sensitivity is achieved, lines 238-239 just say "we first estimate a layer's sensitivity by pruning ...", but no details on how actual pruning was done. comments: |
NIPS_2020_409 | NIPS_2020 | - As also the authors explain, the proposed solution for the intractable normalizing constant is inelegant and also rather unclear. It is not clear how the function is approximated, given that the more complex version is not possible to know. Is some variational approximation used to make sure that the obtained function minimizes some distributional metric on the true normalizing constant? Also, is there a way to get an estimate of the crudeness of the approximation, perhaps on a simpler toy setting where one can solve the problem computationally? - Importantly, given that the approximation of the normalizing constant is it still fair to claim that the output predictions lie on the SO(3) manifold? Given the high dimensionality (9 dimensions are still quite high geometrically), is it fair to say that the approximation may lie quite far from the original SO(3) manifold? In that case, what exactly is the manifold learned and how does it relate to the SO(3) manifold? - Given the unimodality of the distribution, what is the convergence behavior of the algorithm? Is it observed that it may get stuck to bad local optima, especially for classes for which there is either symmetry or the visual similarity across different access is not very salient for the model to capture it well, especially in the early stages of the training? - Some parts in the text could be written more clearly. For instance, -- could the authors explicitly explain what is a proper rotation matrix in line 97? -- what exactly is meant in l. 105-106 regarding solving the problem of the matrix being non positive semidefinite? | - Some parts in the text could be written more clearly. For instance, -- could the authors explicitly explain what is a proper rotation matrix in line 97? -- what exactly is meant in l. 105-106 regarding solving the problem of the matrix being non positive semidefinite? |
NIPS_2018_134 | NIPS_2018 | - Some parts of the work are harder to follow and it helps to have checked [Cohen and Shashua, 2016] for background information. # Typos and Presentation - The citation of Kraehenbuehl and Koltun: it seems that the first and last name of the first author, i.e. Philipp, are swapped. - The paper seems to be using a different citation style than the rest of the NIPS submission. Is this intended? - line 111: it might make sense to not call g activation function, but rather a binary operator; similar to Cohen and Shashua, 2016. They do introduce the activation-pooling operator though that fulfils the required conditions. - line 111: I believe that the weight w_i is missing in the sum. - line 114: Why not mention that the operator has to be associative and commutative? - eq 6 and related equations: I believe that the operator after w_i should be the multiplication of the underlying vector space and not \cross_g: It is an operator between a scalar and a tensor, and not just between two scalars. - line 126: by the black *line* in the input # Further Questions - Would it make sense to include and learn AccNet as part of a larger predictor, e.g., for semantic segmentation, that make use of similar operators? - Do you plan to publish their implementation of the proposed AccNet? # Conclusion The work shows that the proposed method is expressive enough to approximate high-dimensional filtering operations while being fast. I think the paper makes an interesting contribution and I would like to see this work being published. | - line 111: it might make sense to not call g activation function, but rather a binary operator; similar to Cohen and Shashua, 2016. They do introduce the activation-pooling operator though that fulfils the required conditions. |
NIPS_2022_431 | NIPS_2022 | Lack information of comparison with related work.
Experiments on SQuAD have no results of the competitor A3.
Improvement is limited.
UPDATE: The authors' responses address my main concerns, which should be included in the revised version.
Minor suggestions: 1. The writing can be further improved. There are typos in: a) Line 121, “significant \delta” should be “significance \delta”. b) Line 162, “is a symmetric matrix contains input” should be “is a symmetric matrix containing input”. c) Line 176-177, “for encourage the attended items becomes” should be “for encouraging the attended items to become”. 2. The captions of Fig. 1 and Fig. 2 have large overlaps with your content. You can consider shrinking the captions to leave more space to your methods or related work. | 2. The captions of Fig. 1 and Fig. 2 have large overlaps with your content. You can consider shrinking the captions to leave more space to your methods or related work. |
ARR_2022_342_review | ARR_2022 | 1. The importance of context is well known and well-established in several prior work related to hate speech. While the paper cites works such as Gao and Huang, 2017 and Vidgen, et al., it just mentions that they don’t identify the role of context in annotation or modeling. The former definitely considers its role for modeling and the latter incorporates it in the annotation phase. Though this work performs analysis of corpus and model to study the role of the context, the claim of being the first work to establish the importance of context may be a little stretched.
2. Table 2 includes several work but drops out Vidgen et al, 2021, which might be really similar to the dataset presented in this work though the size varies significantly here. So, why is this dataset not used as a potential benchmark for evaluation (for investigating the role of context in detection of hate) as well?
3. Though MACE can be used to assess the competent annotators and eliminate redundant annotators, it could be challenging to use when it involves most ambiguous content.
4. Some of the analysis and results discussed (for eg. section 6) might be specific to the tested Roberta model. More experiments using different architectures are needed to determine if the findings and errors that arise are consistent across different models and different settings.
Claim of being the first one to recognize the importance of context might be too stretched.
More experiments with multiple runs with different random seeds for the dataset split will help report the mean score and the standard deviation. This will help us understand the sensitivity of the model to the data split or order of training etc. | 2. Table 2 includes several work but drops out Vidgen et al, 2021, which might be really similar to the dataset presented in this work though the size varies significantly here. So, why is this dataset not used as a potential benchmark for evaluation (for investigating the role of context in detection of hate) as well? |
NIPS_2016_283 | NIPS_2016 | weakness of the paper are the empirical evaluation which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axis are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error-bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final used learning rates for the deep models (particularly CIFAR-10 and CIFAR-100). Because the authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval that could spoil the results. Another remark: - In my opinion the claim about evolutional dropout addresses the internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization on the other hand standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor: * | - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error-bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also the models are small, so there really is no excuse for not providing statistics. |
wwJJUamHVp | ICLR_2024 | 1) The stated contributions seem to "oversell" the method:
- As discussed later, all physics-informed neural operators do not require any training data.
- Most neural operators can deal with any form of PDE data (forcing, coefficients, boundary conditions, initial conditions).
- In contrast, the proposed method appears not to work on *any* form of PDE data but only on data given in a parametrized form, i.e., by a finite-dimensional parameter vector. In particular, one cannot use (discretizations of) arbitrary input functions, which is possible for, e.g., FNO.
2) Further numerical results are needed:
- The hyperparameters of the baselines seem not to have been optimized for the considered problems.
- It would be good to present comparisons on problems that have been considered in the PIDeepONet or PINO papers since these methods have not been optimized for complex geometries. In this context, other approaches have been suggested; see, e.g., https://arxiv.org/pdf/2207.05209.pdf.
- For the current problems, it would be more suitable to see comparisons against graph-based neural PDE solvers.
- It is essential to compare runtimes of the considered methods.
3) Motivation of the work and comparisons with classical FEM methods:
- It seems that the proposed approach is merely learning a surrogate model for solving the linear/linearized system of equations arising in FEM. It still requires carefully choosing basis functions and meshes and assembling stiffness matrices (i.e., in the specific case of the present work, it is heavily relying on FEniCS). While current operator learning methods can not yet achieve the same accuracies as specialized numerical solvers, they are more universal and do not need to be adapted to specific PDEs.
- Considering training cost, what is the advantage of the proposed approach to just solving the linear system in (6) with a suitable solver? There is a single comparison of the runtime of FEONet and FEM in the appendix, but this seems to be a very critical point. Especially given that the achieved accuracy of FEONet seems to be orders of magnitude worse than the FEM solver. In the case of varying forcing functions, it seems that one could reuse the inverse of the stiffness matrix and just compute a single matrix-vector product to arrive at the solution (which could also be batched).
- There should be more information in the main text on how the bilinear form $B$ is computed for varying PDE data. | - It seems that the proposed approach is merely learning a surrogate model for solving the linear/linearized system of equations arising in FEM. It still requires carefully choosing basis functions and meshes and assembling stiffness matrices (i.e., in the specific case of the present work, it is heavily relying on FEniCS). While current operator learning methods can not yet achieve the same accuracies as specialized numerical solvers, they are more universal and do not need to be adapted to specific PDEs. |
NIPS_2021_57 | NIPS_2021 | I have some suggestions that may help the author to further improve the quality of the article. 1.The author may consider adding some verifications for tasks that rely more heavily on throughput, such as real-time target detection, real-time depth estimation. 2.The authors need to describe the experimental environment in more detail, such as the CUDA version and the PyTorch version. Because different versions of the experimental environment will have a certain impact on training speed and inference speed. 3. The description of "W_D has the following property" could change to"the property of W_D is shown in Fig.3" which may be more clear. | 2.The authors need to describe the experimental environment in more detail, such as the CUDA version and the PyTorch version. Because different versions of the experimental environment will have a certain impact on training speed and inference speed. |
NIPS_2021_616 | NIPS_2021 | The authors discuss two limitations: first, this paper focuses only on methods with explicit negatives. This is not a problem for me since it is okay for an analysis paper to focus on one type of methods. The second limitation is that the datasets used in the experiments are not fully realistic. This again is not an issue for me, since 1) the datasets used in this paper are variants of ImageNet and MNIST which are both realistic, and 2) fully realistic datasets will make it hard to control multiple aspects of variation with precision.
I agree with the authors' judgement that there is no immediate societal impact. | 2) fully realistic datasets will make it hard to control multiple aspects of variation with precision. I agree with the authors' judgement that there is no immediate societal impact. |
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | * L156-166: I can barely understand this paragraph, although I think I know what you want to say. 
First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. |
ARR_2022_125_review | ARR_2022 | 1. A more explicit definition / description / explanation of what is meant by “systematic” performance gains or improvements seems to be necessary since this is one of the proposed advantages of the proposed method, and it would also be better if you could explicitly describe the connection between "systematic performance gain" and the robustness of model performance. The aforementioned two aspects could make the contribution of the proposed method more obvious and salient. 2. A brief discussion of the motivation and selection of the auxiliary tasks seem needed in Section 3. Some of the issues were described in the first paragraph of Section 3, which could be viewed as implicitly motivating the use of the auxiliary tasks, but it still seems not very sufficient: the jump from problem description to “Hence the idea of using auxiliary tasks taking into account all the heads'' seems abrupt. I think a brief elaboration on why these 4 tasks are designed and chosen would make the picture clearer and would also echo the argument mentioned at the beginning, which is to capture some form of interdependence between arcs.
Content: 1. Line 027: what did you mean by “plain dependency parsing”? Syntactic dependency? It would be clearer if a reference is added here to provide an example. 2. Line 047: it would be better and more explicit if the task and/or dataset is mentioned as part of the “achieve state-of-the-art results” statement. 3. Lines 078-079 / Line 08: For clarity, it would be better if the evaluation metric is mentioned here to better understand the scale of the improvement; this would also be helpful to understand the results reported in this paper for comparability: the expression “labelled F-measure scores (LF1) (including ROOT arcs)” was used in Fernández-González and Gómez-Rodríguez (2020). 4. Line 079: Since ID and OOD are not widely used acronyms and this is the first time of them being used, it would be better to define them first and use ID or OOD thereafter: e.g. in-domain (ID) and out-of-domain (OOD) 5. Lines 130-135: The example wrt the mwe label can probably benefit from a visualization of the description, if the space is allowed. Also, the clause “which never happens in the training set” may benefit from a little rephrasing - if I’m not mistaken, this example was mentioned here to support the claim that “impossible sets of labels for the set of heads of a given dependent”. Does this mean that such a combination is incorrect and thus not possible? If so, saying that such scenarios never happen in the training set could mean that it is likely to conceptually have such combinations exist, but it’s just that it’s not seen in the training data. 6. Is there a specific reason for choosing 9 as the number of runs? ( Conventionally, 3, 5, or 10 are common). Typos: - Line 018: near-soa > did you mean “near-state-of-the-art (sota)”? Define before using an acronym - Line 036: decision > decisions - Line 062: propose > proposes - Line 097: biafine > biaffine - footnote2: a dot (.) is missing after “g” in “e.g” - only pointed this out as a typo as you used “e.g.” throughout the paper as in Lines 157 / 225 / 230 / 231 - Line 202: experimented > experiment - only pointed this out as a typo as you seem to use present tense to describe other people’s work as in Lines 047 / 078 / 081 - Line 250: provide > provides/provided; significative > significant (also caption in Table 1) Miscellaneous: 1. Inconsistent use of uppercase / lowercase in the title: tasks > Task; boost > Boost 2. Inconsistent intext citation styles: - Lines 077-078 & 277-278: (Fernández-González and Gómez-Rodríguez, 2020) > Fernández-González and Gómez-Rodríguez (2020) - Lines 276-277: (He and Choi, 2020) > He and Choi (2020) 3. Placement of footnotes: the footnotes follow the ending punctuation, as shown in the *ACL Template Section 4.1: Lines 104 / 166 / 170 / 231 / 239 / 249 / 250 / 266. 4. FYI - the link to the code in footnote 5 didn’t seem to work; it said “Transfer expired: Sorry, this transfer has expired and is not available any more”. Not sure if I need to register an account to get to the code. 5. Add the abbreviation “LF” after “labeled Fscore” on Line 245 so that the use of “LF” later can be attributed to. | 3. Lines 078-079 / Line 08: For clarity, it would be better if the evaluation metric is mentioned here to better understand the scale of the improvement; this would also be helpful to understand the results reported in this paper for comparability: the expression “labelled F-measure scores (LF1) (including ROOT arcs)” was used in Fernández-González and Gómez-Rodríguez (2020). |
NIPS_2017_110 | NIPS_2017 | weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either), is that it makes no attempt to answer this question, either theoretically, or by comparing the model with a classical longitudinal approach.
If we take the advantage of the manifold approach on faith, then this paper certainly presents a highly useful extension to the method presented in Schiratti et al. (2015). The added flexibility is very welcome, and allows for modelling a wider variety of trajectories. It does seem that only a single breakpoint was tried in the application to renal cancer data; this seems appropriate given this dataset, but it would have been nice to have an application to a case where more than one breakpoint is advantageous (even if it is in the simulated data). Similarly, the authors point out that the model is general and can deal with trajectories in more than one dimensions, but do not demonstrate this on an applied example.
(As a side note, it would be interesting to see this approach applied to drug response data, such as the Sanger Genomics of Drug Sensitivity in Cancer project).
Overall, the paper is well-written, although some parts clearly require a background in working on manifolds. The work presented extends Schiratti et al. (2015) in a useful way, making it applicable to a wider variety of datasets.
Minor comments:
- In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth).
- The paper has a number of typos, here are some that caught my eyes: p.1 l.36 "our model amounts to estimate an average trajectory", p.4 l.142 "asymptotic constrains", p.7 l. 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
- Section 2.2., it is stated that n=2, but n is the number of patients; I believe the authors meant m=2.
- p.4, l.154 describes a particular choice of shift and scaling, and the authors state that "this [choice] is the more appropriate.", but neglect to explain why.
- p.5, l.164, "must be null" - should this be "must be zero"?
- On parameter estimation, the authors are no doubt aware that in classical mixed models, a popular estimation technique is maximum likelihood via REML. While my intuition is that either the existence of breakpoints or the restriction to a manifold makes REML impossible, I was wondering if the authors could comment on this.
- In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise.
- For Figure 2, I think the x axis needs to show the scale of the trajectories, as well as a label for the unit.
- For Figure 3, labels for the y axes are missing.
- It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. | - In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise. |
NIPS_2020_664 | NIPS_2020 | 1) The algorithms (although rigorously analyzed) are somewhat obvious modifications of the best known ones from the online literature. 2) The bounds have o(1) terms and start improving over the previously known results for arbitrarily long inputs. I am not sure how large these inputs needs to be, but it seems that this would seriously limit the applications of this approach. 3) I am a bit skeptical about how the competitive ratio results are presented. Ideally, I would have liked a result like the meta-result (or similar tradeoff) using only \lambda (or only c) and the prediction error. Instead the actual statements contain both the \lambda and c confidence parameters. This could be ok, but I don't think that a good tradeoff is obtained for all combination of values \lambda and c. So, is \lambda chosen first and then c optimised accordingly, for the worst case value of the prediction error? Is it the other way around? Since OPT is unknown, just knowing \lambda seems to provide very little information regarding the accuracy of the prediction and I don't think it's obvious at all what the actual tradeoff is, given the current statements. From a theoretical standpoint there are advantages to this presentation, but in this case I would prefer something different. | 2) The bounds have o(1) terms and start improving over the previously known results for arbitrarily long inputs. I am not sure how large these inputs needs to be, but it seems that this would seriously limit the applications of this approach. |
NIPS_2020_91 | NIPS_2020 | - DVP needs to perform training at test time (25-50 epochs) per testing sequence. - The reviewer understands that the Figure 8 provides some insights "when to stop", however, it is unclear how it will change or is it sensitive to the length of videos (longer videos). - It is interesting to see how DVP perform on video with different length? | - It is interesting to see how DVP perform on video with different length? |
ARR_2022_82_review | ARR_2022 | - In the “Updating Facts” section, although the results seem to show that modifying the neurons using the word embeddings is effective, the paper lacks a discussion on this. It is not intuitive to me that there is a connection between a neuron at a middle layer and the word embeddings (which are used at the input layer). - Using integrated gradients to measure the attribution has been studied in existing papers. The paper also proposes post-processing steps to filter out the “false-positive” neurons, however, the paper doesn’t show how important these post-processing steps are. I think an ablation study may be needed.
- The paper lacks details of experimental settings. For example, how are those hyperparameters ($t$, $p$, $\lambda_1$, etc.) tuned? In table 5, why do “other relations” have a very different scale of perplexity compared to “erased relation” before erasing? Are “other relations” randomly selected?
- The baseline method (i.e., using activation values as the attribution score) is widely used in previous studies. Although the paper empirically shows that the baseline is not as effective as the proposed method, - I expect more discussion on why using activation values is not a good idea.
- One limitation of this study is that the paper only focuses on single-word cloze queries (as discussed in the paper).
- Figure 3: The illustration is not clear to me. Why are there two “40%” in the figure?
- I was confused that the paper targets single-token cloze queries or multi-token ones. I did not see a clear clarification until reading the conclusion. | - I was confused that the paper targets single-token cloze queries or multi-token ones. I did not see a clear clarification until reading the conclusion. |
ICLR_2023_2086 | ICLR_2023 | . Section 2: The authors mentioned that the absorbing diffusion is the most promising generation method. Can you add some explanation on that?
. Section 3.1: The diffusion ordering network produces the probability of the node at time t via equation (1). Can you explain why such an ordering would reflect the topology/regularities of the graph?
. Section 3.3: The proposed training objective has ignored the KL-divergence term in equation (3). Can you evaluate such approximation error, ie. calculate the actual KL-divergence and check whether it indeed approaches zero? Experiments:
. Table 1: The performance of the proposed GRAPHARM on Cora is not competitive. Can you explain that?
. Effect of Diffusion Ordering: Can you illustrate the proposed ordering visually to give a sense that it does reflect the topology/regularities of the graph when compared to the random ordering? | . Section 3.3: The proposed training objective has ignored the KL-divergence term in equation (3). Can you evaluate such approximation error, ie. calculate the actual KL-divergence and check whether it indeed approaches zero? Experiments: |
JJH7m9v4tv | ICLR_2025 | 1. This paper has a strong connection with [1], as both employ additional discriminator guidance to improve sampling by rectifying sampling bias. The only significant difference is the generator utilized by the GAN in this paper. Therefore, the main contribution may be considered limited.
2. Is it really useful to use more sampling steps to improve GAN modeling? If the goal is solely better performance, why not directly employ diffusion models for sampling, given their proven modeling capability?
3. In my opinion, Section 2 shows limited connection with the methodology section. Furthermore, the theoretical analysis is somewhat simplistic and closely related to [1].
4. The experimental results are marginal improvement on various datasets.
5. It is suggested that the format of references should be uniform.
6. The writing should follow the policy because the pseudocode in the appendix is beyond the boundaries.
[1] Dongjun Kim, Yeongmin Kim, Se Jung Kwon, Wanmo Kang, and Il-Chul Moon. Refining generative process with discriminator guidance in score-based diffusion models. arXiv preprint arXiv:2211.17091, 2022. | 3. In my opinion, Section 2 shows limited connection with the methodology section. Furthermore, the theoretical analysis is somewhat simplistic and closely related to [1]. |
NIPS_2022_2074 | NIPS_2022 | 1.) BlendedMVS is not evaluated quantitatively, and I couldn't find an argument for this.
2.) It is not clear to me why the explicit SDF supervision (Sec. 3.2) is done with occlusion handling (L. 165) and in a view-aware manner (L. 173). It is only stated that "the introduced SDF loss is consistent with the process of color rendering" (L. 177 - 178). Instead, in every iteration, a subset of the sparse point cloud could be sampled and the loss could be applied at the 3D locations without any occlusion reasoning etc., which seems simpler and more straightforward. I believe a good reason / ablation study for this complicated setup is missing.
3.) The described process of finding the surface intersection (L. 197ff) is very similar to root-finding methods proposed in ray marching-based approaches for neural implicit surfaces like [23], and a short note plus citation on this would be helpful for the reader.
4.) The fact that the photometric consistency loss is only applied to grey-scale images (L. 211ff) is interesting, and an ablation study on this would be helpful.
5.) NCC is used as the photometric consistency metric. Have the authors investigated other measures as well? This could be an interesting ablation study (but not a must in this manuscript).
6.) It is not clear how the patch size of 11x11 (L. 227) was determined.
7.) The fact that Colmap's best trim parameter is 7 (L. 253) should be cited, e.g. [23].
8.) The visual ablation study (Fig 5) could be bigger with zoom-in windows to better see the differences, similar to Fig 1 of Supp Mat.
9.) Table 3 / "Geometry bias of volumetric integration": very interesting, but details are missing. Are the expected depth maps used here obtained via volume rendering? I think at least the supp mat should contain the relevant formulas showing how the quantities are obtained.
10.) Appendix C: Why is NeRF's rendering quality worse than NeuS / Geo NeuS?
11.) It would be interesting to further discuss in which situations the losses help in particular, e.g., mostly for specular areas.
12.) Fig 4 caption typo: NueS -> NeuS
The authors' discussion of limitations and negative societal impact (L. 331 - 334) is quite limited. The authors could tackle more complex datasets / more complex scenes if the proposed system does not fail on DTU and the selected BlendedMVS scenes. What happens if photoconsistency is not given, e.g., because of strong specular highlights? How does the model perform in the presence of transparent surfaces? How could the model handle a sparse set of inputs instead of the considered dense coverage of scenes? These could be starting points for an interesting limitation discussion. | 11.) It would be interesting to further discuss in which situations the losses help in particular, e.g., mostly for specular areas.
79FVDdfoSR | ICLR_2024 | **Contribution.** I doubt that the paper in its present state is strong enough for ICLR.
1. The main result looks like a relatively simple add-on to the study of equivariant networks by Wood & Shawe-Taylor (1996). Most results found in the appendix are either simple facts from the group and representation theory or are borrowed from Wood & Shawe-Taylor (1996) (e.g., the key equivariance condition (3)). Theorem 1 seems to be a relatively simple consequence of these results.
2. Though the paper includes a discussion of applications of Theorem 1 in Sections 5 and 6, I found these sections not easy to follow. They essentially consist of a long and not so well structured list of comments surrounding a few corollaries. The context of these comments is explained relatively vaguely, partly through references to other papers. It is not really clear to me why any of these corollaries and comments is actually important. The paper does not include specific examples, illustrations or experiments showing how Theorem 1 can, say, help improve designs of equivariant models.
**Writing.** The writing is sloppy in several places, with typos and unclear grammar. For example, only in Lemma 5:
- the index $i$ seems to be missing in the definition of $\mathcal{T}(\mathcal M)$;
- the indices and coefficients are wrong in the formula for $f(\sum_{j\in S}M_{rj}x)$;
- a word seems to be missing in "It is a simple to check".
I could understand the proof of Lemma 5 only with some effort (especially the sentence " And obviously each.."). The sentence "Hence, we can restrict our classification.." after the proof of Lemma 7 seems to consist of two different statements, but it is unclear where the first one ends and the second begins. In Lemma 7: "dived into three different type". | **Contribution.** I doubt that the paper in its present state is strong enough for ICLR. |
NIPS_2021_812 | NIPS_2021 | I see two primary weaknesses in this paper: 1) numerous sweeping claims are made regarding their superiority over previously published results without sufficient support. 2) The empirical demonstration of the model compares against control models that are not well fit to the tasks. Both of these weaknesses are easily addressable, by 1) either supporting their claims more carefully or dialing them back and 2) choosing a more reasonable control or experimental task.
The authors make a large number of bold claims, for example in lines 9-10; 44-47; 52-53; 67; 96-99; 110; 123-125; 139; 141-142; 166; 300-301; 339-341. While I do not know the literature well enough to directly refute all of them, I will note a few examples where they appear to be wrong. I believe that this weakness can be resolved by either softening the claims themselves (which, in my opinion, would not reduce the novelty or significance of the submission) or providing a more detailed review of how previous works compare.
a) On lines 44-47 the authors note that “all models cited above, with the exception of Khemakhem et al. (2020a), assume that the data are fully observed and noise-free”. While that might be true, I immediately thought of the work of Locatello et al. {1}, which was not cited by the authors, but is an identifiable disentanglement method and comments on their ability to handle noise: “The generative model does not model additive noise; we assume that the noise is explicitly modeled as a latent variable and its effect is manifested through [the ground truth generator], as is done by [several citations].” (last paragraph of section 3) I admit that this might not necessarily be a contradictory claim, but I nonetheless would appreciate it if the authors clarified how their method of explicitly modeling additive noise is beneficial over the type of setup in {1} and the citations in Locatello et al.’s quoted sentence.
1.b) On lines 96-97 the authors note that “the mixing function f is assumed bijective and thus dimension reduction is not possible in most of the above models. The only exception is Khemakhem et al. (2020a) who…” I find the second part of this quote (“dimension reduction is not possible”) surprising in general, considering that any publication attempting to perform disentanglement on the DisentanglementLib dataset necessarily has to assume dimensionality reduction. I thought maybe the authors intended to say that the theory does not allow dimensionality reduction, even if the practical implementations do. However, Klindt et al., which is cited earlier, presents an identifiable disentanglement framework that only assumes an injective mixing function in their theory and therefore allows for dimensionality reduction.
1.c) The authors claim that “under the conditions given in the next section, we can now guarantee identifiability for a very broad and rich class of models. First, notice that all previous Nonlinear ICA time-series models can be recast and often improved upon when viewed through this new unifying framework.” I do not understand how this could possibly be true given that not all previous Nonlinear ICA work abides by the conditions given in sections 3 and 4. Just as two examples, the requirement for unconditional independence on lines 114-115, and tail behavior in assumption A1 are not ubiquitous in the literature. I could definitely be misunderstanding what the authors intended with their statement, in which case I would be happy with a clarification.
I understand that it is not reasonable for the authors to precisely state how every previous contribution fails to meet their claimed advantages. However, given the examples above, I found myself unconvinced in general. I further found myself wondering if it was necessary to make so many sweeping claims in the first place.
The authors set up a simulated example problem in section 5.1 to understand how their model works with a restricted experiment. I agree that this is an important step for understanding how the model behaves. They choose to compare against what they claim to be the state of the art, IIA-HMM. However, they state themselves that “IIA-HMM has a much simpler model of dynamics and no noise model, and likely lost information due to PCA pre-processing.” This leaves me wondering why they felt that this was a fair comparison, given that the problem setup is not at all matched to what IIA-HMM was designed to solve. I would be curious to see how they do for a non-dimensionality-reduced setup, since as far as I know their framework does not require the number of generating latents to be fewer than the data dimensionality. Or, a better solution would be for them to compare their model against an alternative that is appropriately matched to the dimensionality reduction task. Furthermore, the authors do not provide a baseline comparison for their denoising task. This might be because the authors wished to focus on identifiable models, which restricts the otherwise large set of denoising methods. However, again given the work cited in {1}, I am unconvinced that there was not a suitable comparison to be made. Without appropriate comparisons, we are left to rely solely on the scalar MCC metric in an unfamiliar simulated example, which I find insufficient.
Additional minor complaints:
The citations need to be revised and edited. Many are listed as arXiv preprints that are now published in peer-reviewed venues. A few are missing the venue entirely. Morioka 2020b appears to be listed twice.
I noticed typos on line 243 (“a complete statistics”), fig 2 caption (“ground true independent”)
I think it would be helpful for practitioners if the authors included some justification for their decision on line 318, or at least a discussion on the tradeoffs for the number of independent components chosen.
I would appreciate it if the authors spent more time discussing trade-offs made in their framework. As of now it is limited to the first two sentences of the “Limitations” section starting on line 367. I found this to be rather terse given the space allocated for stating the failures of prior work. For example, if I understand the unconditional independence assumption on lines 114-115 correctly, then it seems highly unlikely to be met in real-world data, including their MEG experiment. The constraint on the distribution tails and 2nd order nature of the generator also seem restricting with respect to real data. Perhaps the authors could note alternative work that does not require such assumptions for identifiability (in exchange for other restrictions), which would assist future researchers seeking a more general solution.
{1} http://proceedings.mlr.press/v119/locatello20a.html | 2) The empirical demonstration of the model compares against control models that are not well fit to the tasks. Both of these weaknesses are easily addressable, by |
NIPS_2022_2286 | NIPS_2022 | Weakness 1. It is hard to understand what the axes are for Figure 1. 2. It is unclear what the major contributions of the paper are. Analyzing previous work does not constitute a contribution. 3. It is unclear how the proposed method enables better results. For instance, Table 1 reports similar accuracies for this work compared to the previous ones. 4. The authors talk about advantages over the previous work in terms of efficiency; however, the paper does not report any metric that shows it is more efficient to train with the proposed method. 5. Does the proposed method converge faster compared to previous algorithms? 6. How does the proposed method compare against surrogate gradient techniques? 7. The paper does not discuss how the datasets are converted to the spike domain.
There are no potential negative societal impacts. One major limitation of this work is its applicability to neuromorphic hardware and how the work shown on GPU will translate to neuromorphic cores. | 2. It is unclear what the major contributions of the paper are. Analyzing previous work does not constitute a contribution.
NIPS_2018_103 | NIPS_2018 | Weakness (or more like suggestions for future improvements): Although the paper tries to address the practical concerns (handling the noisy cases and only needing to sample two parameters at a time), the method still has some practical issues. For example, QBC-based methods are vulnerable to noise. If we could quantify the level of noise at the beginning (e.g., know the beta in eq. (1)), we could modify QBC to have nice guarantees. However, we usually do not know this information before collecting the data. In fact, most of the time, we do not even know which class of models with what kinds of hyper-parameters will perform very well before collecting enough data. Furthermore, clustering tasks are usually more ambiguous. There might also be many equally good local minima in complicated clustering problems and models. Different annotators often have their own solutions from different local minima. This means the noise level is usually high and hard to predict. Without knowing beta, I do not know how robust structural QBC would be. Even if we know the noise level, it is possible that we spend lots of effort searching for g* by eliminating lots of equally good local minima, but find that g* is eventually not good enough because the model assumption has some flaws or the original model becomes too simple as more samples are collected. The experiments are only done using toy data and do not compare with other strategies such as uncertainty sampling. Is the algorithm scalable? Is it possible to know the time complexity of the proposed structural QBC when choosing the next example with respect to all the hyper-parameters? Can this study really motivate more people to use QBC in practice? I understand that this is a theoretical paper, and my educated guess is that the theoretical part of the paper is very solid even though the practicality of the method has not been fully demonstrated. That's the main reason why I vote a clear accept. Minor suggestions: 1. I can only find the explanation of the notation \nu in Algorithm 1. I suggest mentioning the meaning of \nu in the text as well. 2. In Algorithm 2, it does not say how to determine n_t. What does the "appropriate number" mean in line 225? It is hard to find the answer in [30]. 3. Line 201, it should be Gilad-Bachrach et al. [16] investigated ... 4. In the related work section, I recommend that the authors cite some other efforts, especially the ones which try to make QBC-based methods more practical and test their performance on multiple real-world datasets (e.g., Huang et al.) Huang, Tzu-Kuo, et al. "Efficient and parsimonious agnostic active learning." Advances in Neural Information Processing Systems. 2015. After rebuttal I think all reviewers agree that this is a solid theoretical work, so my judgment remains the same. The authors do not address the practicality issues in the rebuttal, which prevents me from increasing the overall score to an even higher level. | 2. In Algorithm 2, it does not say how to determine n_t. What does the "appropriate number" mean in line 225? It is hard to find the answer in [30].
NIPS_2020_1160 | NIPS_2020 | 1. The authors proposed to use an RL-based method, but the motivation is not very clear. 2. It's hard to reproduce the results. Will the code be publicly available? | 2. It's hard to reproduce the results. Will the code be publicly available?
NIPS_2019_1364 | NIPS_2019 | weakness of the paper. Questions: * In which cases do the assumptions of Theorems 3 and 4 hold? In addition to SLC, they have some matroid-related assumptions. Since these results intend to demonstrate the power of the SLC class, these should be discussed in more detail. * How does the diversity-related \alpha enter the mixing bounds? It seems that the bound depends very weakly on \alpha, only through \nu(S_0). Edit following author's response: I'm inclined to keep my score of 6. This is due to the following reasons: 1) I still find the theoretical contribution ok but not particularly strong, given existing results. As mentioned in the review, it is a weak, impractical bound, and the proof, in itself, does not provide particular mathematical novelty. 2) The claims that "in practice the mixing time is even better" are not nearly sufficiently supported by the experiments, and therefore the evidence provided to practitioners is very limited. 3) My question regarding dependence on $\alpha$ was not answered in a satisfactory manner. I would expect a more explicit dependence on $\alpha$, since with higher diversity the problem should be more complicated. If this is not reflected in the bounds, it means the bounds are very loose. | 2) The claims that "in practice the mixing time is even better" are not nearly sufficiently supported by the experiments, and therefore the evidence provided to practitioners is very limited.
NIPS_2022_1048 | NIPS_2022 | and comments: 1. This paper mainly focused on group sufficiency as the fairness metric. Is it possible to derive similar results under the criteria of demographic parity or equalized odds? What are the potential challenges for other fairness metrics? Under these settings, is it still possible to achieve both fairness and accuracy for many subgroups? 2. The regularization coefficient λ seems to have a joint optimal value in 0.1-2. Could you elaborate more on why both fairness and accuracy drop when λ is large? 3. Is it possible to assume a general Gaussian distribution rather than an isotropic Gaussian in the proposed algorithm? What is the difference? 4. Can the proposed theoretical analysis be extended to a regression or segmentation task? For example, could we obtain the same results as for the classification task? 5. Could you explain a bit more about the intuition of group sufficiency? Is there any relation to the well-known sufficient statistics? Other comments: 1. Could we extend the protected feature A to a vector form? For instance, A represents multiple attributes. 2. In the Introduction, the authors introduced a medical therapy instance to present the importance of group sufficiency. Could you explain a bit more about the difference between sufficiency and DP/EO metrics in real-world application scenarios? 3. In line 225 and line 227, the mathematical expression of the Gaussian distribution is ambiguous. 4. In section 4.4, it mentions the utilization of the Monte-Carlo sampling method. I am curious about the influence of different sampling numbers.
================================================ Thanks for the effort from the authors, and I am satisfied with the rebuttal. I would like to raise my score to 8. | 1. Could we extend the protected feature A to a vector form? For instance, A represents multiple attributes.
ACL_2017_494_review | ACL_2017 | - I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors. - The authors introduce a synthetic word similarity evaluation dataset, Morph-SimLex. They create it by applying their presumably semantic-meaning-preserving morphological rules to SimLex999 to generate many more pairs with morphological variability. They do not manually annotate these new pairs, but rather use the original similarity judgements from SimLex999.
The obvious caveat with this dataset is that the similarity scores are presumed and therefore less reliable. Furthermore, the fact that this dataset was generated by the very same rules that are used in this work to morph-fit word embeddings, means that the results reported on this dataset in this work should be taken with a grain of salt. The authors should clearly state this in their paper.
- (Soricut and Och, 2015) is mentioned as a future source for morphological knowledge, but in fact it is also an alternative approach to the one proposed in this paper for generating morphologically-aware word representations. The authors should present it as such and differentiate their work.
- The evaluation does not include strong morphologically-informed embedding baselines. General Discussion: With the few exceptions noted, I like this work and I think it represents a nice contribution to the community. The authors presented a simple approach and showed that it can yield nice improvements using various common embeddings on several evaluations and four different languages. I’d be happy to see it in the conference.
Minor comments: - Line 200: I found this phrasing unclear: “We then query … of linguistic constraints”.
- Section 2.1: I suggest to elaborate a little more on what the delta is between the model used in this paper and the one it is based on in Wieting 2015. It seemed to me that this was mostly the addition of the REPEL part.
- Line 217: “The method’s cost function consists of three terms” - I suggest to spell this out in an equation.
- Line 223: x and t in this equation (and following ones) are the vector representations of the words. I suggest to denote that somehow. Also, are the vectors L2-normalized before this process? Also, when computing ‘nearest neighbor’ examples do you use cosine or dot-product? Please share these details.
- Line 297-299: I suggest to move this text to Section 3, and make the note that you did not fine-tune the params in the main text and not in a footnote.
- Line 327: (create, creates) seems like a wrong example for that rule.
- I have read the author response | - Line 223: x and t in this equation (and following ones) are the vector representations of the words. I suggest to denote that somehow. Also, are the vectors L2-normalized before this process? Also, when computing ‘nearest neighbor’ examples do you use cosine or dot-product? Please share these details. |
NIPS_2019_962 | NIPS_2019 | for exceptions. + Experiments are convincing. + To the best of my knowledge, the idea of using unsupervised keypoints for reinforcement learning is novel and promising. One can expect a variety of follow-up work. + Using keypoints as input state of a Q function is reasonable and reduces the dimensionality of the problem. + Reducing the search space to the most controllable keypoints instead of raw actions is a promising idea. Weaknesses: 1. Overstated claim on generalization In the introduction (L17-L22), the authors motivate their work by explaining that reinforcement learning approaches are limited because it is difficult to re-purpose task-specific representations, but that this is precisely what humans do. From this, one could have expected this paper to address this issue by training and applying the detector network across multiple games, re-purposing their keypoint detector. This would have be useful to verify that the learnt representations generalize to new contexts. But unfortunately, it hasn't been done, so it is a bit of an over-statement. Could this be a limitation of the method because the number of keypoints is fixed? 2. Deep RL that matters Experiments should be run multiple times. A longstanding issue with deep RL is their reproducibility and the significance of their improvements. It has been recently suggested that we need a community effort towards reproducibility [a], which should also be taken into account in this paper. Among the considerations, one critical thing is running multiple experiments and reporting the statistics. [a] Henderson, Peter, et al. "Deep reinforcement learning that matters." Thirty-Second AAAI Conference on Artificial Intelligence. 2018. 3. The choice of testing environment is not well motivated. Levels are selected without a clear rationale, with only a vague motivation in L167. This makes me suspect that they might be cherry picks. Authors should provide a more clear justification. This could be related to the next weakness that I will discuss, which is understandable. Even if this is the case, this should then be explicit with experimental evidence. 4. Keypoints are limited to moving objects A practical limitation comes from the fact that the keypoints are learnt from the moving parts of the image. As identified by the authors, the first resulting limitation is that the method assumes a fixed background, so that only meaningful objects move and can be detected as keypoints. Learning to detect keypoints based on what objects are moving has some limitations when these keypoints are supposed to be used as the input state of a Q function. One can imagine a game where some obstacles are immobile. The locations of these obstacles are important in order to make decisions but in this work, they would be ignored. It is therefore important that these limitations are also explicitly demonstrated. 5. Dealing with multiple instances. Because "PNet" generates one heatmap per keypoint, each keypoint detector "specializes" into a certain type of keypoint. This is fine for some applications (e.g. face keypoints) where only one instance of each kind of keypoint exists in each image. But there are games (e.g. Frostbite) where a lot of keypoints look exactly the same. And still, the detector is able to track them with consistency (as shown in the supplementary video). This is intriguing, as one could expect the detector to detect several keypoints at the same location, instead of distributing them almost perfectly. 
Is it because the receptive field is large? 6. Other issues - In section 3, the authors could improve the explanation of why the loss promotes the detection of meaningful keypoints. It is not obvious at first why the detector needs to detect keypoints to help with the reconstruction. - Figure 1: Referring to [15] as "PointNet" is confusing when this name doesn't appear anywhere in this paper ([15]) and there exists another paper with this name. See "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas. - Figure 1: The figure describes two "stop grad", but there is no mention or explanation of it in the text or caption. This is not theoretically motivated either, because most of the transported feature map comes from the source image (all the pixels that are not close to source or target keypoints). Blocking these gradients would block most of the gradients that can be used to train the "Feature CNN" and "PNet". - L93: "by marginalising the keypoint-detetor feature-maps along the image dimensions (as proposed in [15])". This would be better explained and self-contained by saying that a soft-argmax is used. - L189: "Distances above a threshold (ε) are excluded as potential matches". What threshold value is used? - What specific augmentation techniques are used during the training of the detector? - Figure 4: it is not clear what the meaning of "1-200 frames" is and how the values are computed. Why are the precision and recall changing with the trajectory length? Also, what is an "action repeat"? - Figure 6: the scores should be normalized (and maybe displayed as a plot) for easier comparison. ==== POST REBUTTAL ==== The rebuttal is quite convincing, and has addressed my concerns. I would like to raise the rating of the paper to 8 :-) I'm happy that my worries were just worries. | 2. Deep RL that matters Experiments should be run multiple times. A longstanding issue with deep RL is their reproducibility and the significance of their improvements. It has been recently suggested that we need a community effort towards reproducibility [a], which should also be taken into account in this paper. Among the considerations, one critical thing is running multiple experiments and reporting the statistics. [a] Henderson, Peter, et al. "Deep reinforcement learning that matters." Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
R22JPTQYWV | ICLR_2025 | 1. Since the authors use “the highest resolution feature map of the FPN”, it is not clear whether the resolution while learning the CPM affects the performance.
2. There are some minor issues: Please check Figure 2, Line 433, and Line 468. Some equations end with a period, while others end with a comma. Please ensure they are consistent. | 2. There are some minor issues: Please check Figure 2, Line 433, and Line 468. Some equations end with a period, while others end with a comma. Please ensure they are consistent. |
NIPS_2017_104 | NIPS_2017 | ---
There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit.
* More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher level language like the one used throughout the paper (e.g., at line 129)?
* How does this setting relate to question answering or visual question answering?
* How does the model perform on the same train data it's seen already? How much does it overfit?
* How hard is it to find intuitive attention examples as in figure 4?
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
* The related works section would be better understood knowing how the model works, so it should be presented later. | * How does this setting relate to question answering or visual question answering? |
fVxIEHGnVT | ICLR_2024 | 1. $kNN-ECD$ is very similar to $kNN-MT$. Therefore, the technical contribution of the paper is limited.
2. The motivation for applying $kNN-MT$ is not very clear. Although $kNN-MT$ is useful for natural language translation, are there particular reasons why it would be more effective for programming languages?
3. The presentation of the experimental results is hard to read, especially for Table 3 and Table 4. I would suggest that the authors use figures to present these results and put the detailed numbers in the appendix.
4. The paper does not show that the proposed method can perform error correction for OOD errors. The paper uses model $A$ to build the pairs of incorrect and correct programs. Therefore, the errors are specifically related to model $A$ itself. A new model $B$ may make different kinds of errors; can the proposed method, with a datastore learned for model $A$, fix the errors of model $B$? If not, the method requires building a datastore for every new model, which largely limits the applicability of the proposed method. Minor:
"Uncorrect" should be changed into "Incorrect" | 1. $kNN-ECD$ is very similar to $kNN-MT$. Therefore, the technical contribution of the paper is limited. |
NIPS_2020_243 | NIPS_2020 | My main concerns are listed as follows: 1. There are some typos in the manuscript, e.g., in the Abstract, "betwenn". 2. It is a pity that the authors only perform experiments on positive-unlabeled learning; optimal transport techniques have been used in many applications. More results on other applications such as transfer learning, few-shot learning, or zero-shot learning may be better, with more baseline methods being compared. 3. In recent years, both optimal transport and deep learning are hot research issues. The authors are encouraged to explain how to expand the proposed method to integrate with deep learning models. 4. For Figure 1, are the figures generated by real experiments or artificially? If they are artificially generated, can the authors conduct some real-world experiments to support the phenomenon observed in these figures? This would be an important evaluation of the proposed method. 5. When citing literature, the tense of sentences is inconsistent, e.g., "Peyré et al. (2016) proposed" and "Chizat et al. (2018) propose". | 4. For Figure 1, are the figures generated by real experiments or artificially? If they are artificially generated, can the authors conduct some real-world experiments to support the phenomenon observed in these figures? This would be an important evaluation of the proposed method.
ICLR_2021_973 | ICLR_2021 | .
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well.
Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. - Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates? - Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates?
Provide additional feedback with the aim to improve the paper. - Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I'm not sure it's obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S. - Pg. 5: "The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion)." This seems not to have been revisited in the Discussion (which is fine, just delete "Discussion"). - Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)? - Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT? - Fig. 4: On the color bar, presumably one of the labels should say "worse". - Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers? - Section B.3: I wasn't clear on the numbers of parameters used in each approach. - D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. If not, maybe this could be highlighted in Section 4. | - Section B.3: I wasn't clear on the numbers of parameters used in each approach.
NIPS_2016_537 | NIPS_2016 | weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63 refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated" 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers. | 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering.
NIPS_2019_360 | NIPS_2019 | 1. The proposed model involves a lot of complicated moving parts - it is not clear whether it'll be easily reproducible given that it is so complicated. 2. I don't believe the proposed transductive method is very novel, as I believe it's related to a common way to incorporate unlabeled data in semi-supervised methods (see self-training methods in semi-supervised learning). | 2. I don't believe the proposed transductive method is very novel, as I believe it's related to a common way to incorporate unlabeled data in semi-supervised methods (see self-training methods in semi-supervised learning).
NIPS_2022_1392 | NIPS_2022 | 1. The proposed formulation lacks sufficient clarification of its uniqueness, including its essential difference in principle from existing DRO formulations. 2. The problem under study involves distributional robustness and metric learning; thus the authors should NOT overlook existing works in these two aspects, which should at least be mentioned to highlight the differences, especially those that appeared in 2021 and 2022. 3. The assumption among classes is NOT practical.
Though the formulation or definition in this manuscript is somewhat trivial, its highlight lies in optimization and theoretical property analysis, from which some conclusions or insights can be gained. | 3. The assumption among classes is NOT practical. Though the formulation or definition in this manuscript is somewhat trivial, its highlight lies in optimization and theoretical property analysis, from which some conclusions or insights can be gained.
ICLR_2023_3918 | ICLR_2023 | - The evaluation results reported in Table 1 are based on only three trials for each case. While this is fine, statistically this is not significant, and thus it does not make sense to report the deviations. That is why in many cases the deviation is 0. For this reason, statements such as "our performance is at least two standard deviation better than the next best baseline" do not make sense. - In the reported ablation studies in Table 2, for the CUB and SOP datasets, the complete loss function performed even worse than those with some terms missing. That does not appear to make sense. Why? | - The evaluation results reported in Table 1 are based on only three trials for each case. While this is fine, statistically this is not significant, and thus it does not make sense to report the deviations. That is why in many cases the deviation is 0. For this reason, statements such as "our performance is at least two standard deviation better than the next best baseline" do not make sense.
ACL_2017_31_review | ACL_2017 | ] See below for details of the following weaknesses: - Novelties of the paper are relatively unclear.
- No detailed error analysis is provided.
- A feature comparison with prior work is shallow, missing two relevant papers.
- The paper has several obscure descriptions, including typos.
[General Discussion:] The paper would be more impactful if it states novelties more explicitly. Is the paper presenting the first neural network based approach for event factuality identification? If this is the case, please state that.
The paper would crystallize remaining challenges in event factuality identification and facilitate future research better if it provides detailed error analysis regarding the results of Table 3 and 4. What are dominant sources of errors made by the best system BiLSTM+CNN(Att)? What impacts do errors in basic factor extraction (Table 3) have on the overall performance of factuality identification (Table 4)? The analysis presented in Section 5.4 is more like a feature ablation study to show how useful some additional features are.
The paper would be stronger if it compares with prior work in terms of features. Does the paper use any new features which have not been explored before? In other words, it is unclear whether main advantages of the proposed system come purely from deep learning, or from a combination of neural networks and some new unexplored features. As for feature comparison, the paper is missing two relevant papers: - Kenton Lee, Yoav Artzi, Yejin Choi and Luke Zettlemoyer. 2015 Event Detection and Factuality Assessment with Non-Expert Supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643-1648.
- Sandeep Soni, Tanushree Mitra, Eric Gilbert and Jacob Eisenstein. 2014.
Modeling Factuality Judgments in Social Media Text. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 415-420.
The paper would be more understandable if more examples are given to illustrate the underspecified modality (U) and the underspecified polarity (u). There are two reasons for that. First, the definition of 'underspecified' is relatively unintuitive as compared to other classes such as 'probable' or 'positive'.
Second, the examples would be more helpful to understand the difficulties of Uu detection reported in line 690-697. Among the seven examples (S1-S7), only S7 corresponds to Uu, and its explanation is quite limited to illustrate the difficulties.
A minor comment is that the paper has several obscure descriptions, including typos, as shown below: - The explanations for features in Section 3.2 are somewhat intertwined and thus confusing. The section would be more coherently organized with more separate paragraphs dedicated to each of lexical features and sentence-level features, by: - (1) stating that the SIP feature comprises two features (i.e., lexical-level and sentence-level) and introduce their corresponding variables (l and c) *at the beginning*; - (2) moving the description of embeddings of the lexical feature in line 280-283 to the first paragraph; and - (3) presenting the last paragraph about relevant source identification in a separate subsection because it is not about SIP detection.
- The title of Section 3 ('Baseline') is misleading. A more understandable title would be 'Basic Factor Extraction' or 'Basic Feature Extraction', because the section is about how to extract basic factors (features), not about a baseline end-to-end system for event factuality identification.
- The presented neural network architectures would be more convincing if it describes how beneficial the attention mechanism is to the task.
- Table 2 seems to show factuality statistics only for all sources. The table would be more informative along with Table 4 if it also shows factuality statistics for 'Author' and 'Embed'.
- Table 4 would be more effective if the highest system performance with respect to each combination of the source and the factuality value is shown in boldface.
- Section 4.1 says, "Aux_Words can describe the *syntactic* structures of sentences," whereas section 5.4 says, "they (auxiliary words) can reflect the *pragmatic* structures of sentences." These two claims do not consort with each other well, and neither of them seems adequate to summarize how useful the dependency relations 'aux' and 'mark' are for the task.
- S7 seems to be another example to support the effectiveness of auxiliary words, but the explanation for S7 is thin, as compared to the one for S6. What is the auxiliary word for 'ensure' in S7?
- Line 162: 'event go in S1' should be 'event go in S2'.
- Line 315: 'in details' should be 'in detail'.
- Line 719: 'in Section 4' should be 'in Section 4.1' to make it more specific.
- Line 771: 'recent researches' should be 'recent research' or 'recent studies'. 'Research' is an uncountable noun.
- Line 903: 'Factbank' should be 'FactBank'. | - A feature comparison with prior work is shallow, missing two relevant papers. |
NIPS_2019_286 | NIPS_2019 | 1.) Lines 8, 56, 70, 93: Usage of the word equivalent. I would suggest a more cautious usage of this word, especially if the equivalence is not verified. 2.) A differentiation between the ultrametric d and the ultrametric u would make their different usages clearer. 3.) Line 186 ff.: A usage of the subdominant ultrametric for the cluster-size regularization would make the algorithms part more consistent with the following considerations in this paper. 4) The paper is not sufficiently clear in some aspects (see below for a list of questions). Overall, I have the impression that the weaknesses could be fixed by the final paper submission. | 1.) Lines 8, 56, 70, 93: Usage of the word equivalent. I would suggest a more cautious usage of this word, especially if the equivalence is not verified.
NIPS_2018_874 | NIPS_2018 | --- None of these weaknesses stand out as major and they are not ordered by importance. * Role of and relation to human judgement: Visual explanations are useless if humans do not interpret them correctly (see framework in [1]). This point is largely ignored by other saliency papers, but I would like to see it addressed (at least in brief) more often. What conclusions are humans supposed to make using these explanations? How can we be confident that users will draw correct conclusions and not incorrect ones? Do the proposed sanity checks help identify explanation methods which are more human friendly? Even if the answer to the last question is no, it would be useful to discuss. * Role of architectures: Section 5.3 addresses the concern that architectural priors could lead to meaningful explanations. I suggest toning down some of the bolder claims in the rest of the paper to allude to this section (e.g. "properties of the model" -> "model parameters"; l103). Hint at the nature of the independence when it is first introduced. Incomplete or incorrect claims: * l84: The explanation of GBP seems incorrect. Gradients are set to 0, not activations. Was the implementation correct? * l86-87: GradCAM uses the gradient of classification output w.r.t. feature map, not gradient of feature map w.r.t. input. Furthermore, the Guided GradCAM maps in figure 1 and throughout the paper appear incorrect. They look exactly (pixel for pixel) equivalent to the GBP maps directly to their left. This should not be the case (e.g., in the first column of figure 2 the GradCAM map assigns 0 weight to the top left corner, but somehow that corner is still non-0 for Guided GradCAM). The GradCAM maps look like they're correct. l194-196: These methods are only equivalent to gradient * input in the case of piecewise linear activations. l125: Which rank correlation is used? Theoretical analysis and similarity to edge detector: * l33-34: The explanations are only somewhat similar to an edge detector, and differences could reflect model differences. Even if the same, they might result from a model which is more complex than an edge detector. This presentation should be a bit more careful. * The analysis of a conv layer is rather hand-wavy. It is not clear to me that edges should appear in the produced saliency mask as claimed at l241. The evidence in figure 6 helps, but it is not completely convincing and the visualizations do not (strictly speaking) imitate an edge detector (e.g., look at the vegetation in front of the lighthouse). It would be useful to include a conv layer initialized with a Sobel filter and a Canny edge detector in figure 6. Also, quantitative experimental results comparing an edge detector to the other visual explanations would help. Figure 14 makes me doubt this analysis more because many non-edge parts of the bird are included in the explanations. Although this work already provides a fairly large set of experiments, there are some highly relevant experiments which weren't considered: * How much does this result rely on the particular (re)initialization method? Which initialization method was used? If it was different than the one used to train the model then what justifies the choice? * How do these explanations change with hyperparameters like the choice of activation function (e.g., for non-piecewise-linear choices)? How do LRP/DeepLIFT (for non-piecewise-linear activations) perform? * What if the layers are randomized in the other direction (from input to output)?
Is it still the classifier layer that matters most? * The difference between gradient * input in Fig3C/Fig2 and Fig3A/E is striking. Point that out. * A figure and/or quantitative results for section 3.2 would be helpful. Just how similar are the results? Quality --- There are a lot of weaknesses above and some of them apply to the scientific quality of the work but I do not think any of them fundamentally undercut the main result. Clarity --- The paper was clear enough, though I point out some minor problems below. Minor presentation details: * l17: Incomplete citation: "[cite several saliency methods]" * l122/126: At first it says only the weights of a specific layer are randomized, next it says that weights from input to specific layer are randomized, and finally (from the figures and their captions) it says reinitialization occurs between logits and the indicated layer. * Are GBP and IG hiding under the input * gradient curve in Fig3A/E? * The presentation would be better if it presented the proposed approach as one metric (e.g., with a name), something other papers could cite and optimize for. * GradCAM is removed from some figures in the supplement and Gradient-VG is added without explanation. Originality --- A number of papers evaluate visual explanations but none have used this approach to my knowledge. Significance --- This paper could lead to better visual explanations. It's a good metric, but it only provides sanity checks and can't identify really good explanations, only bad ones. Optimizing for this metric would not get the community a lot farther than it is today, though it would probably help. In summary, this paper is a 7 because of novelty and potential impact. I wouldn't argue too strongly against rejection because of the experimental and presentation flaws pointed out above. If those were fixed I would argue strongly against rejection. [1]: Doshi-Velez, Finale and Been Kim. "A Roadmap for a Rigorous Science of Interpretability." CoRR abs/1702.08608 (2017): n. pag. | --- None of these weaknesses stand out as major and they are not ordered by importance.
ACL_2017_614_review | ACL_2017 | - I don't understand the effectiveness of the multi-view clustering approach.
Almost all across the board, the paraphrase similarity view does significantly better than other views and their combination. What, then, do we learn about the usefulness of the other views? There is one empirical example of how the different views help in clustering paraphrases of the word 'slip', but there is no further analysis about how the different clustering techniques differ, except on the task directly. Without a more detailed analysis of differences and similarities between these views, it is hard to draw solid conclusions about the different views. - The paper is not fully clear on a first read. Specifically, it is not immediately clear how the sections connect to each other, reading more like disjoint pieces of work. For instance, I did not understand the connections between section 2.1 and section 4.3, so adding forward/backward pointer references to sections should be useful in clearing up things. Relatedly, the multi-view clustering section (3.1) needs editing, since the subsections seem to be out of order, and citations seem to be missing (lines 392 and 393).
- The relatively poor performance on nouns makes me uneasy. While I can expect TWSI to do really well due to its nature, the fact that the oracle GAP for PPDBClus is higher than most clustering approaches is disconcerting, and I would like to understand the gap better. This also directly contradicts the claim that the clustering approach is generalizable to all parts of speech (124-126), since the performance clearly isn't uniform.
- General Discussion: The paper is mostly straightforward in terms of techniques used and experiments. Even then, the authors show clear gains on the lexsub task by their two-pronged approach, with potentially more to be gained by using stronger WSD algorithms.
Some additional questions for the authors : - Lines 221-222 : Why do you add hypernyms/hyponyms?
- Lines 367-368 : Why does X^{P} need to be symmetric?
- Lines 387-389 : The weighting scheme seems kind of arbitrary. Was this indeed arbitrary or is this a principled choice?
- Is the high performance of SubstClus^{P} ascribable to the fact that the number of clusters was tuned based on this view? Would tuning the number of clusters based on other matrices affect the results and the conclusions?
- What other related tasks could this approach possibly generalize to? Or is it only specific to lexsub? | - I don't understand the effectiveness of the multi-view clustering approach. Almost all across the board, the paraphrase similarity view does significantly better than other views and their combination. What, then, do we learn about the usefulness of the other views? There is one empirical example of how the different views help in clustering paraphrases of the word 'slip', but there is no further analysis about how the different clustering techniques differ, except on the task directly. Without a more detailed analysis of differences and similarities between these views, it is hard to draw solid conclusions about the different views.
ICLR_2021_2892 | ICLR_2021 | - Proposition 2 seems to lack an argument why Eq 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that's the case, why eq 16? As a concrete example, let \hat{h}^\theta_lm = 1 if l=m=1 and 0 otherwise, so constant in \theta. The only constant associated Legendre polynomial is P^0_0, so this h is not expressible in eq 16. Instead, it seems like there are additional assumptions necessary on the family of spherical functions h to let the decomposition eq 16, and thus proposition 2, work. Hence, it looks like proposition 2 doesn't actually characterize all azimuthal correlations. - In its discussion of SO(3) equivariant spherical convolutions, the authors do not mention the lift to SO(3) signals, which allow for more expressive filters than the ones shown in figure 1. - Can the authors clarify figure 2b? I do not understand what is shown. - The architecture used for the experiments is not clearly explained in this paper. Instead the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained. - The authors appear to not use a fast spherical Fourier transform. Why not? This could greatly help performance. Could the authors comment on the runtime cost of the experiments? - The sampling of the Fourier features to a spherical signal and then applying a point-wise non-linearity is not exactly equivariant (as noted by Kondor et al 2018). Still, the authors note at the end of Sec 6 "This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.". Perhaps the authors can comment on that? - The experiments are limited to MNIST and a single real-world dataset. - Out of the many spherical CNNs currently in existence, the authors compare only to a single one. For example, comparisons to SO(3) equivariant methods would be interesting. Furthermore, it would be interesting to compare to SO(3) equivariant methods in which SO(3) equivariance is broken to SO(2) equivariance by adding to the spherical signal a channel that indicates the theta coordinate. - The experimental results are presented in an unclear way. A table would be much clearer. - An obvious approach to the problem of SO(2) equivariance of spherical signals is to project the sphere to a cylinder and apply planar 2D convolutions that are periodic in one direction and not in the other. This suffers from distortion of the kernel around the poles, but perhaps this wouldn't be too harmful. An experimental comparison to this method would benefit the paper.
Recommendation: I recommend rejection of this paper. I am not convinced of the correctness of proposition 2, and proposition 1 is similar to equivariance arguments made in prior work. The experiments are limited in their presentation, the number of datasets, and the comparisons to prior work.
Suggestions for improvement: - Clarify the issue around eq 16 and proposition 2 - Improve presentation of experimental results and add experimental details - Evaluate the model on more data sets - Compare the model to other spherical convolutions
Minor points / suggestions: - When talking about the Fourier modes as numbers, perhaps clarify if these are reals or complex. - In Def 1 in the equation it is confusing to have theta twice on the left-hand side. It would be clearer if h did not have a subscript on the left-hand side. | - The architecture used for the experiments is not clearly explained in this paper. Instead the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained. |