Dataset schema (field: dtype, observed range):
forum_id: stringlengths, 8 to 20
forum_title: stringlengths, 4 to 171
forum_authors: sequencelengths, 0 to 25
forum_abstract: stringlengths, 4 to 4.27k
forum_keywords: sequencelengths, 1 to 10
forum_pdf_url: stringlengths, 38 to 50
note_id: stringlengths, 8 to 13
note_type: stringclasses, 6 values
note_created: int64, 1,360B to 1,736B
note_replyto: stringlengths, 8 to 20
note_readers: sequencelengths, 1 to 5
note_signatures: sequencelengths, 1 to 1
note_text: stringlengths, 10 to 16.6k
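The records below follow this schema: each row pairs a note (an official review or a decision) with the forum, i.e. the paper, it belongs to. As a minimal sketch of how such a dump is typically accessed with the Hugging Face datasets library; the repository id here is a hypothetical placeholder, not the actual path of this dataset:

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual path of this dump.
ds = load_dataset("some-org/webconf2024-openreview-notes", split="train")

row = ds[0]
print(row["forum_title"])      # paper title
print(row["note_type"])        # e.g. "official_review" or "decision"
print(row["note_text"][:200])  # first characters of the review/decision text
```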
2IwSOTWvXu
Convergence-Aware Online Model Selection with Time-Increasing Bandits
[ "Yu Xia", "Fang Kong", "Tong Yu", "Liya Guo", "Ryan A. Rossi", "Sungchul Kim", "Shuai Li" ]
Web-based applications such as chatbots, search engines and news recommendations continue to grow in scale and complexity with the recent surge in the adoption of large language models (LLMs). Online model selection has thus garnered increasing attention due to the need to choose the best model among a diverse set while balancing task reward and exploration cost. Organizations face decisions like whether to employ a costly API-based LLM or a locally finetuned small LLM, weighing cost against performance. Traditional selection methods often evaluate every candidate model before choosing one, which is becoming impractical given the rising costs of training and finetuning LLMs. Moreover, it is undesirable to allocate excessive resources towards exploring poor-performing models. While some recent works leverage online bandit algorithms to manage the exploration-exploitation trade-off in model selection, they tend to overlook the increasing-then-converging trend in model performance as the model is iteratively finetuned, leading to less accurate predictions and suboptimal model selections. In this paper, we propose a time-increasing bandit algorithm, TI-UCB, which effectively predicts the increase of model performance due to training or finetuning and efficiently balances exploration and exploitation in model selection. To further capture the converging points of models, we develop a change detection mechanism by comparing consecutive increase predictions. We theoretically prove that our algorithm achieves a lower regret upper bound, improving from prior works' polynomial regret to logarithmic regret in a similar setting. The advantage of our method is also empirically validated through extensive experiments on classification model selection and online selection of LLMs. Our results highlight the importance of utilizing the increasing-then-converging pattern for more efficient and economical model selection in the deployment of LLMs.
[ "Online Model Selection", "Increasing Bandits" ]
https://openreview.net/pdf?id=2IwSOTWvXu
X4lPQgP3cT
official_review
1,700,820,089,159
2IwSOTWvXu
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission600/Reviewer_vDTo" ]
review: This paper studies the model selection problem tailored to the LLM setting. In particular, it casts the problem as a variant of the rested bandit problem with increasing rewards. The increasing reward captures the fact that the performance of an LLM increases with fine-tuning. The TI-UCB algorithm is developed for this problem. Regret bounds are proved and extensive experiments are conducted to evaluate TI-UCB. Overall, this paper is well written and has a fluent logic flow. It studies a timely and important problem. The algorithm analysis looks sound and the experiments look sufficient. I appreciate this work, but I have several concerns about this paper. The finetuning cost is frequently mentioned in the motivation and I am convinced that the cost is an important factor. However, it seems that the model does not capture the finetuning cost explicitly. The proposed model only captures the reward. Without capturing the finetuning cost, the proposed model is not well tailored to the problem. The reward model that captures the convergence of LLMs needs more justification. What is the formal definition of the convergence of an LLM? Does an LLM converge after several rounds of finetuning? Under what conditions does it converge? Intuitively, the convergence should depend on the finetuning data. This raises the question of how to select the finetuning data. The regret definition is unclear. In particular, n*_i(T) is not well defined. What do you mean by the optimal action sequence in the definition of n*_i(T)? Intuitively, in different rounds, one may have different test documents for finetuning, and n*_i(T) should depend on the finetuning data. But it seems that it does not depend on the finetuning data. questions: Please refer to my concerns. ethics_review_flag: No ethics_review_description: NO. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
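To make the increasing-then-converging bandit idea discussed in this abstract and review concrete, here is a minimal, hypothetical sketch. It is not the authors' TI-UCB algorithm: it simply fits a linear trend to each model's recent rewards, treats a near-zero slope as convergence, and adds a standard UCB exploration bonus; all names, thresholds and the toy reward histories are illustrative assumptions.

```python
import numpy as np

def select_model(reward_history, t, window=10, eps=1e-3):
    """Pick a candidate-model index at round t under an increasing-then-converging reward model.

    Minimal illustrative sketch (not the paper's TI-UCB): fit a linear trend to each
    model's recent rewards, extrapolate one step ahead, treat a near-zero slope as
    convergence, and add a standard UCB exploration bonus.
    reward_history: list of reward lists, one per candidate model.
    """
    for i, hist in enumerate(reward_history):
        if len(hist) == 0:                       # play every model at least once
            return i
    scores = []
    for hist in reward_history:
        n = len(hist)
        recent = hist[-window:]
        if len(recent) >= 2:
            slope, intercept = np.polyfit(np.arange(len(recent)), recent, 1)
            if abs(slope) < eps:                 # crude convergence / change check
                predicted = float(np.mean(recent))
            else:                                # still improving: extrapolate the trend
                predicted = slope * len(recent) + intercept
        else:
            predicted = recent[-1]
        bonus = np.sqrt(2.0 * np.log(max(t, 2)) / n)   # UCB-style exploration term
        scores.append(predicted + bonus)
    return int(np.argmax(scores))

# Example: model 0 has plateaued, model 1 is still improving.
history = [[0.60, 0.61, 0.60, 0.61, 0.60], [0.40, 0.48, 0.55, 0.62, 0.68]]
print(select_model(history, t=10))   # -> 1
```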
2IwSOTWvXu
1UOy4Vf5Ga
decision
1,705,909,252,255
2IwSOTWvXu
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: Authors present a work for online model selection using bandits augmented with LLMs. The work is novel, and highly relevant to the community. The paper is also technically sound, and the authors demonstrated the practical value on real-world datasets. Authors have satisfactory responses to reviewer questions, and all reviewers have acknowledged the author rebuttals.
26QVTwbqyN
Query2GMM: Learning Representation with Gaussian Mixture Model for Reasoning over Knowledge Graphs
[ "Yuhan Wu", "Yuanyuan Xu", "Wenjie Zhang", "Xiwei Xu", "Ying Zhang" ]
Logical query answering over Knowledge Graphs (KGs) is a fundamental yet complex task. A promising approach to achieve this is to embed queries and entities jointly into the same embedding space. Research along this line suggests that using multi-modal distribution to represent answer entities is more suitable than uni-modal distribution, as a single query may contain multiple disjoint answer subsets due to the compositional nature of multi-hop queries and the varying latent semantics of relations. However, existing methods based on multi-modal distribution roughly represent each subset without capturing its accurate cardinality, or even degenerate into uni-modal distribution learning during the reasoning process due to the lack of an effective similarity measure. To better model queries with diversified answers, we propose Query2GMM for answering logical queries over knowledge graphs. In Query2GMM, we present the GMM embedding to represent each query using a univariate Gaussian Mixture Model (GMM). Each subset of a query is encoded by its cardinality, semantic center and dispersion degree, allowing for precise representation of multiple subsets. Then we design specific neural networks for each operator to handle the inherent complexity that comes with multi-modal distribution while alleviating the cascading errors. Last, we define a new similarity measure to assess the relationships between an entity and a query's multi-answer subsets, enabling effective multi-modal distribution learning for reasoning. Comprehensive experimental results show that Query2GMM outperforms the best competitor by an absolute average of 6.35%.
[ "Knowledge Graph", "Probabilistic Reasoning", "Logical Query", "Multi-modal Distribution", "Neural Reasoning" ]
https://openreview.net/pdf?id=26QVTwbqyN
kDPSXEFFmx
official_review
1,700,879,038,597
26QVTwbqyN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1527/Reviewer_9nC5" ]
review: This work introduces Query2GMM, a Gaussian Mixture Model-based approach for answering logical queries over knowledge graphs. Distinct from existing multi-modal works, Query2GMM is capable of quantifying different answer subsets for each query, as it encodes the cardinality in the representation of a set of entities. Additionally, Query2GMM incorporates a novel distribution similarity measure. Both an experiment and an ablation study confirm its effectiveness. Pros: 1. Great presentation, novelty 2. Thorough experimentation Cons: 1. The problem statement lacks clarity, particularly in the definition of training and test data. 2. There are errors in the presentation: 2.1 $rFF(\cdot)$, mentioned on page 4, right column, line 453, section 3.4, should be introduced earlier in section 3.3. Furthermore, the representation of $rFF(\cdot)$ in Formulas 8-9 differs from that in Formula 13. The same inconsistency is observed with $MLP(\cdot)$ in Formula 6. 2.2 In Formula 17, the variable $t$ is not defined. Although it is ostensibly the index of the sample, the definition of the sample itself is missing. questions: 1. How is the union of two sets of entities obtained in Query2GMM? 2. A limitation of the neural models of First-Order Logic (FOL) operations in this paper is their lack of interpretability and the uncertainty about whether they preserve the inherent properties of these FOL operations. Specifically, it remains unclear whether, as the loss function nears zero, the trained parameters can maintain key properties of FOL operators, such as De Morgan's Laws, Distributive Laws, Associative Laws, etc. For two GMM embeddings $G_{q_1}$, $G_{q_2}$ trained on FB237/NELL, what is the mathematical expectation of the distribution distance between $Negation(Intersection(G_{q_1},G_{q_2}))$ and $Union(Negation(G_{q_1}), Negation(G_{q_2}))$? 3. Why were experiments not conducted on the WN18RR dataset? ethics_review_flag: No ethics_review_description: n/a scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
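The GMM query embedding described in the abstract (each answer subset summarized by a cardinality weight, a semantic center, and a dispersion) can be illustrated with a small, self-contained sketch. This is a generic one-dimensional Gaussian mixture scored by density, not the paper's actual parameterization or its mixed Wasserstein similarity; field names and the scoring rule are illustrative.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GMMEmbedding:
    """Toy 1-D GMM query embedding: each component carries a cardinality-like weight,
    a semantic center, and a dispersion, loosely mirroring the paper's triples."""
    weights: np.ndarray  # shape (K,), non-negative, sums to 1
    centers: np.ndarray  # shape (K,)
    scales: np.ndarray   # shape (K,), strictly positive

def entity_query_score(entity_emb: float, q: GMMEmbedding) -> float:
    """Score an entity embedding against a query as the mixture density at that point.
    The paper instead uses a mixed Wasserstein-style measure; this is just a generic
    stand-in to show how a multi-modal answer set can be scored."""
    z = (entity_emb - q.centers) / q.scales
    densities = np.exp(-0.5 * z ** 2) / (q.scales * np.sqrt(2.0 * np.pi))
    return float(np.dot(q.weights, densities))

# Two answer subsets: a dominant one around 0.0 and a smaller one around 3.0.
q = GMMEmbedding(weights=np.array([0.7, 0.3]),
                 centers=np.array([0.0, 3.0]),
                 scales=np.array([0.5, 1.0]))
print(entity_query_score(0.1, q))  # high: close to the dominant answer subset
print(entity_query_score(5.0, q))  # low: far from both subsets
```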
26QVTwbqyN
dxwxTsu8RQ
decision
1,705,909,231,708
26QVTwbqyN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: This article introduces an approach to enable query answering over knowledge graphs through embedding techniques. Results show that this approach mostly outperforms existing approaches. All reviewers agree that this work is novel, produces valuable insights and research contributions, and deserves to be accepted. We do recommend that the authors include the suggested changes from the reviewers, such as ideas on future work and fixes to various minor errors.
26QVTwbqyN
asRaW2xi5J
official_review
1,700,388,881,871
26QVTwbqyN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1527/Reviewer_oq1Q" ]
review: This paper improves the multi-modal reasoning process for the logical query answering problem over knowledge graphs, by modeling the multi-modal reasoning process with a more accurate Gaussian Mixture model and by designing a new multi-modal distance representation. At the same time, a more effective neural model for logical operators is proposed as a simple improvement over previous work, and the effectiveness of its components is demonstrated through extensive experiments. The overall writing is clear and logical, the experimental results obtained are optimal in most cases, and the quality of the work is good. However, there are some problems which should be addressed before it is considered for acceptance. If the following problems are well addressed, I believe that the essential contributions of this paper are important for logical query answering over knowledge graphs. The role of cardinality is not adequately demonstrated in the paper: even though the ablation experiments demonstrate its effectiveness, the paper does not explain the principle of its action well enough, and there should be more experiments beyond the modeling effect to justify this "elegant" Gaussian Mixture representation. Some of the sentences discussing the role and problems of components are not concise enough, and the content is repetitive and redundant. The work improves on previous work on multi-modal Gaussian Mixture query representation, and is therefore not as innovative as it could be. questions: Please explain the mechanism of cardinality's role in elegantly modeling multi-modal Gaussian mixture distribution learning for reasoning, and the role of the new mixed Wasserstein distance designed to accurately model the reasoning process, thus demonstrating that the article's improvements over the previous work on multivariate Gaussian distributions are important and essential. Please explain exactly how the baseline for the ablation experiments on neural models for logical operators was designed. More importantly, why not use a connection similar to a standard residual network as a comparison baseline for the ablation experiments? ethics_review_flag: No ethics_review_description: None scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
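For readers unfamiliar with the distance this review asks about: the paper defines its own mixed Wasserstein-style similarity for Gaussian mixtures, but the usual building block is the closed-form 2-Wasserstein distance between a single pair of one-dimensional Gaussians, sketched below for reference. This is standard textbook material, not the paper's exact measure.

```python
def gaussian_w2_squared(mu1: float, sigma1: float, mu2: float, sigma2: float) -> float:
    """Squared 2-Wasserstein distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    For one-dimensional Gaussians this has the closed form
    (mu1 - mu2)^2 + (sigma1 - sigma2)^2; mixture-level measures are typically
    built by matching or weighting such component-wise terms.
    """
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# Example: identical spreads, centers one unit apart -> squared distance of 1.0
print(gaussian_w2_squared(0.0, 0.5, 1.0, 0.5))
```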
26QVTwbqyN
YR7v4ETWs7
official_review
1,701,158,027,844
26QVTwbqyN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1527/Reviewer_k9Sh" ]
review: **Summary:** This paper presents Query2GMM, a method using Gaussian mixture models for logical query answering over knowledge graphs. The paper is inspired by the recent finding that ideal query embeddings might follow a multi-modal distribution, and addresses this finding by presenting a method based on GMMs. The method is evaluated on two standard benchmark datasets for the task against a variety of state-of-the-art baselines. On NELL all baselines are clearly outperformed; on FB15K-237 the improvements are smaller, particularly on queries with negation. **Review:** The paper presents a novel and technically valuable new method for answering FOL queries on knowledge graphs. The method is well-motivated and well-grounded in recent literature in the field. The presentation of the work is good. The final experimental results are informative, especially because of the extensive ablation studies demonstrating the relevance of the different operators. Overall, this is a very good paper. **Strengths:** - Novel and very interesting new method for logical query answering - Very good and recent related work - Strong experimental results - Well-written paper questions: - Since the paper does not contain a paragraph on future work, I would be interested in the next steps and shortcomings of the current method. ethics_review_flag: No ethics_review_description: No scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
26QVTwbqyN
ChE94devSK
official_review
1,700,728,397,429
26QVTwbqyN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1527/Reviewer_MSrd" ]
review: *By mistake, I submitted a wrong review for this paper; I have now replaced it with the correct one. Apologies for the confusion.* This paper presents a method for query answering over knowledge graphs by embedding both queries and KG entities in the same embedding space. The main innovation of the method presented in the paper is the use of multi-modal distributions in the form of a Gaussian Mixture model. The method takes as input a KG and First-Order Logic formulae that express queries. For each of the logical operations (negation, conjunction, etc.), an embedding submethod is defined. The combination of these methods leads to a GMM distribution in the embedding space. A Wasserstein distance metric is then used in the learning procedure. The paper presents the method and an evaluation on two datasets, comparing it to several baseline methods. The method mostly outperforms the state of the art. Furthermore, ablation studies are presented that show the influence of the different elements of the method. The paper describes the method in a mostly mathematical manner but is also able to convey the main intuitions well. I have a few remaining questions and remarks regarding the paper. - The related work section is now presented at the end. I would suggest moving it to the front of the paper; several questions I had while reading the paper are addressed only in the related work section. - One of these questions concerns why to approach this with statistical methods at all: why not do (logical) pattern matching? The related work section states that this is not adequate for incomplete knowledge, but this claim is not empirically backed up. How do said methods perform on the queries in the experimental setup? Also, such methods are claimed not to provide answers in real time, but again, that claim is not backed up by empirical results in the paper. It would be good to compare the learning/query response time of Query2GMM with such logical pattern matching methods. - Regarding the empirical evaluation, shown in Table 1: where do these queries come from (1p, 2p, etc.)? What characteristics do they have? To what extent do they use unknown information? This is not made clear in the paper and makes it more difficult to assess the usefulness of the solution. questions: Please see the three remarks above (related work placement, comparison with logical pattern matching, and the provenance and characteristics of the benchmark queries).
ethics_review_flag: No ethics_review_description: - scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
26QVTwbqyN
1rNcF8ouXZ
official_review
1,700,844,956,818
26QVTwbqyN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1527/Reviewer_6CnQ" ]
review: The paper addresses the general idea of using machine learning to provide approximate answers to graph queries. The general idea is to learn vector space embeddings of both the entities in the knowledge graph and the queries, and then to determine which entities are correct answers using an operation on the two embeddings. One basic idea is to use embeddings that represent geometric regions in the embedding space, where the answers are the entities whose embeddings fall into that region. This does not work very well when the shapes of the regions are too simple. This paper suggests embedding queries into a Gaussian mixture distribution, i.e. a vector containing the parameters of such a distribution. The multi-modal nature of this embedding makes it better suited for representing the answer sets of complex queries, e.g. those containing unions. The work seems sound to me, but I am not an expert. I have some questions. questions: 1. The paper mentions "reasoning" a lot, but as far as I can see, the approach addresses first-order queries, i.e. queries that amount to model checking of first-order formulae. This is the same as SQL queries over relational data, for instance. This is not usually referred to as reasoning in this community; it is simply query evaluation. We talk about reasoning when there is additionally a TBox (e.g. an OWL ontology) and the answers to a query under the combination of the instance data in the graph and the axioms in the TBox are required. I think this is not happening in this work? 1. The most common approach to answering queries is usually fast, gives 100% exact answers and does not involve machine learning at all. I’m missing that baseline… the approach certainly does not outperform usual query processing in terms of accuracy. Is it faster? Does it use less of another resource? When taking into account the learning phase? 1. The usual approach to query processing, after deciding a query plan, is to evaluate basic graph patterns (joins of triples), then combine them using union, projection, optional, etc. How does your approach compare to using ML techniques for the basic graph patterns, and combining the results using conventional methods? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
1LEQBHanqf
Social Media Discourses on Interracial Intimacy: Tracking Racism and Sexism through Chinese Geo-located Social Media Data
[ "Zheng WEI", "Yixuan Xie", "Danyun XIAO", "Simin Zhang", "Pan Hui", "Muzhi ZHOU" ]
By analyzing the regional differences in the sentiment of comments on short video posts related to interracial intimate relationships on Douyin, a Chinese social media platform, we depict the Chinese social media discourses on four interracial relationship types (Black men and Chinese women, Black women and Chinese men, White men and Chinese women, White women and Chinese men) and explore potential regional differences in these discourses. The region information is derived from the IP geolocation, which has been publicly available since April 2022, when the Chinese government mandated social media companies to display the IP geolocation of all platform users. Our content analysis revealed that the Black men and Chinese women attracted the most negative comments and the White women and Chinese men received the least negative comments. We also observed substantial regional differences in the discourses towards these interracial relationships. We investigated several provincial socioeconomic development indicators and noted that GDP, population size, and openness to Western cultures all contribute to the negative sentiment levels. This work advances our understanding of the interplay of race, gender, and immigration in constructing public discourses on social media and offers important insights into how these discourses evolve along with socioeconomic development.
[ "Interracial Intimate Relationships", "Social Media", "Sentiment Analysis", "IP Geolocation" ]
https://openreview.net/pdf?id=1LEQBHanqf
ifydntwece
decision
1,705,909,233,357
1LEQBHanqf
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: Quoting fAja's adequate paper summary: "The paper examines discussions of four interracial relationship types (BM-CW, BW-CM, WM-CW, WW-CM) focused on Douyin. It capitalizes on access to IP geolocation data to shed light on regional differences in users' attitudes and their relevant correlates, such as GDP, population, and openness to Western cultures, at an aggregate level." Overall, there was strong interest, if not excitement, concerning the topic studied, as xenophobia is usually studied as a purely Western phenomenon. Also on the technical side, the methods chosen seem adequate and reviewers largely viewed the analysis as solid, in particular given that Reviewer fAja, who had the most specific technical concerns, acknowledged many of the author comments as helpful but then seemed to fail to update their review and scores. It would be good to slightly extend the discussion of ethical concerns. At the moment, the authors mention that the data used (of which a sample will be shared) is anonymized. But there is ample literature showing that removing user IDs does not necessarily ensure anonymity. E.g., depending on the platform, it could be possible to search for the text in a comment, or to use other contextual clues to de-anonymize a subset of users. This might or might not be problematic, and it does not necessarily preclude the publication of the paper. But discussing how/if such de-anonymization might be possible, and what the implications might be, would strengthen the paper and make it further comply with community norms. The final version should also include details such as the IRB number. For a future/camera-ready version it would be good to explicitly mention that the exclusive scope is heterosexual couples.
1LEQBHanqf
Yr15eRVm71
official_review
1,701,401,951,374
1LEQBHanqf
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission124/Reviewer_xDm7" ]
review: This paper is fascinating and I enjoyed reading it—what an exciting topic and seemingly strong execution. The writing was easy to understand, and the analysis was straightforward. I appreciated that the author didn't unnecessarily increase the complexity of their analysis when simplicity would tell the story just as well. I also appreciate that the author added limitations and a subsection on ethics. With research like this using publicly available data, researchers often neglect this. I have notes meant to improve readability. 1. Add commas to numbers in tables such as the Table 1 "Number of Comments" data to increase legibility. 2. Normalize title casing on subsection headers like subsection 3.1 (which also has a typo in the title). 3. Titles for sub-sections 5.5 and 5.6 should be plural. questions: Could you make the subset of the data you used available on some dataverse? ethics_review_flag: No ethics_review_description: I selected No. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 7 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1LEQBHanqf
T8En7GARRl
official_review
1,700,829,215,296
1LEQBHanqf
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission124/Reviewer_fAja" ]
review: The paper examines discussions of four interracial relationship types (BM-CW, BW-CM, WM-CW, WW-CM) focused on Douyin. It capitalizes on access to IP geolocation data to shed light on regional differences in users' attitudes and their relevant correlates, such as GDP, population, and openness to Western cultures, at an aggregate level. Pros * Immigration and xenophobia in Asia, relative to Western countries, are understudied * Use of geolocation data (owing to the government mandates) alleviates the problems related to sample selection Cons * The process through which the authors narrow the topic categories down to the 16 big clusters and match them with the three-fold theoretical framework is somewhat subjective and arbitrary * The authors need more discussion of (the importance of) Douyin (non-Chinese audiences would have little information) * While manual checking of the videos minimizes false positives, the authors should consider some form of keyword expansion tool to expand their search query (like https://gking.harvard.edu/files/gking/files/keywordalgorithm.pdf) * The authors should provide more information about the instructions for manual annotation, classifier performance, etc. questions: See the cons section above ethics_review_flag: No ethics_review_description: - scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 2 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
1LEQBHanqf
Cg6ghgqDvh
official_review
1,700,799,593,470
1LEQBHanqf
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission124/Reviewer_kRMS" ]
review: As a non-subject-matter expert, the evaluation of this study is limited to the technical aspects of the paper. Topic modeling: - How were the hyperparameters chosen? The results of running BERTopic can be sensitive to the choice of hyperparameters. How can the authors justify their choice of hyperparameters? - Also, in a similar vein, the results of BERTopic are often sensitive to random initialization. Have the authors attempted to run BERTopic with different random seeds? - It would be informative to provide the top-k words that appeared in each topic so that the reviewers/readers can see how the authors came up with the topic names. Sentiment analysis: - Similar comments from the topic modeling also apply here. Could the authors provide information on how sensitive the model is to hyperparameters and random seeds? - Regarding the annotation process: could the authors provide more information on the annotation process? How were the annotators trained to perform the task? - Has the model been validated on a held-out validation set? How do you know whether the model produces accurate results or not? As Sections 5.1 and 5.2 are based on the topic modeling and the sentiment analysis, it is hard to assess the value of those sections. 5.3. What is the multivariate linear probability model? And can the authors elaborate on how to interpret the table (Table 3)? Although the paper studies a topic that could be considered important and has academic value, the current version of the paper does not seem to have the academic rigor needed in executing the experiments, relying on rather anecdotal evidence. questions: Please see the questions above. ethics_review_flag: No ethics_review_description: It is not certain if this study requires ethics review, but would like to let the PCs know that the study involves some information that can potentially be private. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
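On the reviewer's reproducibility questions: BERTopic runs are typically made repeatable by fixing the random_state of the underlying UMAP model and pinning the clustering hyperparameters. The sketch below shows one common way to do this; the specific parameter values are illustrative assumptions, not the settings used in the paper.

```python
from bertopic import BERTopic
from umap import UMAP
from hdbscan import HDBSCAN

# Fixing UMAP's random_state makes repeated runs comparable; BERTopic's output
# is otherwise sensitive to the stochastic dimensionality-reduction step.
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0,
                  metric="cosine", random_state=42)
hdbscan_model = HDBSCAN(min_cluster_size=30, metric="euclidean",
                        cluster_selection_method="eom", prediction_data=True)

topic_model = BERTopic(umap_model=umap_model,
                       hdbscan_model=hdbscan_model,
                       min_topic_size=30)

# Typical usage on a real corpus of comment strings (not run here):
# topics, _ = topic_model.fit_transform(docs)
# print(topic_model.get_topic(0))   # top (word, weight) pairs for topic 0
```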
1LEQBHanqf
A02u1fo1b2
official_review
1,700,669,110,256
1LEQBHanqf
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission124/Reviewer_Tvy4" ]
review: This study explores the correlations between geographic socioeconomic factors and sentiment toward interracial relationships through an analysis of posts on Chinese social media platforms. The study is original and explores an important area of the Chinese social media landscape, made possible by the April 2022 mandate requiring IP geolocation data to be displayed for all social media posts. It offers a novel contribution in pairing sentiment analysis with IP-address-derived geolocation data. questions: How reliable is the geolocation data? In other contexts this data has been shown to be rather unreliable. The bulk of this study assumes that the geolocation data is accurate, but this should be substantiated more. What about IP spoofing? VPN use? Bots? Even the reviewers themselves use a service in order to make their IP address appear to be in the particular regions they are studying (line 394). Etc. Consider citing any available resources that demonstrate the reliability of this geolocation / IP information. ethics_review_flag: No ethics_review_description: n/a scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1IqCKEGGgw
Consistency Guided Knowledge Retrieval and Denoising in LLMs for Zero-shot Document-level Relation Triplet Extraction
[ "Qi Sun", "Kun Huang", "Xiaocui Yang", "Rong Tong", "Kun Zhang", "Soujanya Poria" ]
Document-level Relation Triplet Extraction (DocRTE) is a fundamental task in information systems that aims to simultaneously extract entities with semantic relations from a document. Existing methods heavily rely on a substantial amount of fully labeled data. However, collecting and annotating data for newly emerging relations is time-consuming and labor-intensive. Recent advanced Large Language Models (LLMs), such as ChatGPT and LLaMA, exhibit impressive long-text generation capabilities, inspiring us to explore an alternative approach for obtaining auto-labeled documents with new relations. In this paper, we propose a Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE) framework, which Generates labeled data by Retrieval and Denoising Knowledge from LLMs, called GenRDK. Specifically, we propose a chain-of-retrieval prompt to guide ChatGPT to generate labeled long-text data step by step. To improve the quality of synthetic data, we propose a denoising strategy based on the consistency of cross-document knowledge. Leveraging our denoised synthetic data, we proceed to fine-tune the LLaMA2-13B-Chat for extracting document-level relation triplets. We perform experiments for both zero-shot document-level relation and triplet extraction on two public datasets. The experimental results illustrate that our GenRDK framework outperforms strong baselines. The code and synthetic dataset will be released on GitHub.
[ "Document-level Relation Triplet Extraction", "Zero-shot Learning", "Knowledge Denoising", "Large Language Models", "Synthetic Data" ]
https://openreview.net/pdf?id=1IqCKEGGgw
izscb62dMK
official_review
1,700,495,158,473
1IqCKEGGgw
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2194/Reviewer_bUsJ" ]
review: This paper addresses the challenge of Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE), a fundamental task in information systems that typically demands substantial amounts of fully labeled data. The proposed framework, named Consistency Guided Knowledge Retrieval and Denoising (GenRDK), introduces an innovative approach leveraging advanced Large Language Models (LLMs), including ChatGPT and LLaMA. The key contributions can be summarized as follows: 1. **ZeroDocRTE Framework - GenRDK:** The paper introduces a novel framework for Zero-shot Document-level Relation Triplet Extraction, moving away from the reliance on human-annotated data. GenRDK utilizes advanced LLMs, specifically ChatGPT, guided by a Chain-of-Retrieval prompt to generate labeled long-text data systematically. 2. **Experimental Validation:** Extensive experiments are conducted on two public datasets, evaluating the effectiveness of GenRDK in both DocRTE and DocRE tasks under zero-shot conditions. The results demonstrate the superiority of GenRDK over competitive baselines, highlighting its capability to distill latent relational facts from LLMs. 3. **Knowledge Consistency Denoising Strategy:** The paper contributes a consistency-guided knowledge denoising strategy, leveraging cross-document knowledge graphs to remove unreliable relational facts and improve the overall quality of the synthetic data. 4. **Fine-tuning with LLaMA2-13B-Chat:** Leveraging the denoised synthetic data, the authors perform fine-tuning on the powerful LLaMA2-13B-Chat model, showcasing its effectiveness in extracting document-level relation triplets. **Pros:** 1. The innovative framework, titled Consistency Guided Knowledge Retrieval and Denoising (GenRDK), specifically designed for Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE), has been introduced. This framework exhibits commendable performance on the DocRTE and DocRE datasets, indicating its efficacy in handling complex relation extraction tasks. 2. The methodology employed for generating supervised data through ChatGPT is noteworthy. It incorporates a consistency-guided knowledge denoising strategy, which not only assures the quality of the data but also significantly reduces the resource-intensive nature of manual data annotation. 3. The dataset and related code have been made open-source, ensuring reproducibility. **Cons:** 1. Concerns arise regarding the consistency-guided knowledge denoising strategy. This strategy relies on refining two Knowledge Graphs (KGs), created from data generated by ChatGPT and pseudo-labels from a Pre-denoising Model. Given that both KGs are inherently prone to noise and uncertainty, the effectiveness of this denoising process remains questionable. A detailed analysis, including the proportions of different error types encountered during the denoising phase, particularly the errors in removal and addition, would be beneficial for a comprehensive understanding of this strategy. 2. The sensitivity of different models to varying prompts is a critical aspect that appears to be underexplored in the study. The reliance on a singular prompt template raises concerns about the generalizability of the results. Expanding the experimental design to include a diverse range of prompt templates and presenting an average of the outcomes would provide a more robust evaluation of the framework's performance across different scenarios. questions: See cons. 
ethics_review_flag: No ethics_review_description: NA scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
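The consistency-guided denoising that this review questions can be caricatured as a set intersection over relational triples extracted from two independent sources. The sketch below is a deliberate over-simplification (no cross-document knowledge graph, just exact triple matching), intended only to convey the intuition; the example triples are toy illustrations, not data from the paper.

```python
def denoise_by_consistency(llm_triples, pseudo_triples):
    """Keep only relational facts that two independent sources agree on.

    Simplified stand-in for consistency-guided denoising: llm_triples come from
    the LLM-generated document, pseudo_triples from a pre-denoising extraction
    model run on the same text. Triples are (head, relation, tail) strings.
    """
    return set(llm_triples) & set(pseudo_triples)

llm_triples = [("Marie Curie", "award_received", "Nobel Prize in Physics"),
               ("Marie Curie", "country_of_citizenship", "Germany")]   # noisy fact
pseudo_triples = [("Marie Curie", "award_received", "Nobel Prize in Physics"),
                  ("Marie Curie", "country_of_citizenship", "Poland")]
print(denoise_by_consistency(llm_triples, pseudo_triples))
# {('Marie Curie', 'award_received', 'Nobel Prize in Physics')}
```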
1IqCKEGGgw
M8bi3VLS6B
official_review
1,701,238,562,994
1IqCKEGGgw
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2194/Reviewer_RLck" ]
review: This paper presents a zero-shot document-level relation extraction framework. The new framework aims to address the labeled-data scarcity issue by leveraging LLMs. Specifically, it first uses an LLM to auto-generate labeled long-text data, then applies a unique denoising strategy to clean/filter the generated synthetic data, and finally uses the refined data to fine-tune a final docRE model (mainly based on LLaMA2-13B-Chat). The authors demonstrate the effectiveness of this framework through experiments on two public datasets and show it can outperform many strong baselines. Overall this paper is clearly written, easy to digest, and the presented method is reasonable. The pipeline of using an LLM to first generate synthetic data, apply some domain-knowledge-guided filtering, and finally train a model is somewhat standard, but the authors do inject task-specific knowledge into the design of the chain-of-retrieval prompt. Therefore I think the high-level idea of this paper is worth learning from. My main concerns with this paper are: 1) how well it can generalize, and 2) the limited baseline methods. The two evaluated benchmarks originate from the same source, and it seems no other dedicated zero-shot docRE models are tested (cf. Table 1). BTW, I feel the experiments in Table 4 are more important and are not emphasized enough. Finally, although the authors claim they will release the code and synthetic dataset on GitHub, they have not done so during the review period. To summarize, I feel the paper is worth reading, but there are a few places that can be further improved. questions: 1. Have you tried generating a non-docRE dataset and testing your framework's effectiveness on it? 2. Is your framework sensitive to the (likely hand-picked) chain-of-retrieval prompt and query LLM? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1IqCKEGGgw
Consistency Guided Knowledge Retrieval and Denoising in LLMs for Zero-shot Document-level Relation Triplet Extraction
[ "Qi Sun", "Kun Huang", "Xiaocui Yang", "Rong Tong", "Kun Zhang", "Soujanya Poria" ]
Document-level Relation Triplet Extraction (DocRTE) is a fundamental task in information systems that aims to simultaneously extract entities with semantic relations from a document. Existing methods heavily rely on a substantial amount of fully labeled data. However, collecting and annotating data for newly emerging relations is time-consuming and labor-intensive. Recent advanced Large Language Models (LLMs), such as ChatGPT and LLaMA, exhibit impressive long-text generation capabilities, inspiring us to explore an alternative approach for obtaining auto-labeled documents with new relations. In this paper, we propose a Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE) framework, which Generates labeled data by Retrieval and Denoising Knowledge from LLMs, called GenRDK. Specifically, we propose a chain-of-retrieval prompt to guide ChatGPT to generate labeled long-text data step by step. To improve the quality of synthetic data, we propose a denoising strategy based on the consistency of cross-document knowledge. Leveraging our denoised synthetic data, we proceed to fine-tune the LLaMA2-13B-Chat for extracting document-level relation triplets. We perform experiments for both zero-shot document-level relation and triplet extraction on two public datasets. The experimental results illustrate that our GenRDK framework outperforms strong baselines. The code and synthetic dataset will be released on GitHub.
[ "Document-level Relation Triplet Extraction", "Zero-shot Learning", "Knowledge Denoising", "Large Language Models", "Synthetic Data" ]
https://openreview.net/pdf?id=1IqCKEGGgw
M2okwKISuI
official_review
1,699,355,690,891
1IqCKEGGgw
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2194/Reviewer_cKHu" ]
review: This work focuses on zero-shot document-level relation (triplet) extraction tasks. The authors argue that conventional methods mainly focus on sentence-level zero-shot RE/RTE tasks and propose this more challenging setting. To solve this task, the authors design a chain-of-retrieval prompt to guide GPT models to generate labeled data for training. To reduce noise, the consistency of cross-document knowledge is adopted to purify the generated instances. The proposed framework achieves good results compared to LLM-based models on both RE and RTE. In conclusion, this work is a good example of adopting LLMs for more challenging NLP tasks. The proposed framework is clear and sound. The experiments and analyses are solid. questions: There are some questions as follows: 1. Are there any existing zero-shot document-level RE/RTE methods? 2. It is suggested that the authors give more discussion of existing work on sentence-level zero-shot RE/RTE, some of which could be compared as strong baselines in the experiments. The current baselines are all LLM-based models. 3. The authors could give more insight into how to design the chain-of-retrieval prompt. How is the indispensability of each prompt evaluated in the model design? 4. Comparing Table 2 with Table 1, I find that the proposed framework achieves much larger improvements on RE than on RTE (though RTE is the harder task). The authors could give more explanation of this finding. ethics_review_flag: No ethics_review_description: NA scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 3 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1IqCKEGGgw
Consistency Guided Knowledge Retrieval and Denoising in LLMs for Zero-shot Document-level Relation Triplet Extraction
[ "Qi Sun", "Kun Huang", "Xiaocui Yang", "Rong Tong", "Kun Zhang", "Soujanya Poria" ]
Document-level Relation Triplet Extraction (DocRTE) is a fundamental task in information systems that aims to simultaneously extract entities with semantic relations from a document. Existing methods heavily rely on a substantial amount of fully labeled data. However, collecting and annotating data for newly emerging relations is time-consuming and labor-intensive. Recent advanced Large Language Models (LLMs), such as ChatGPT and LLaMA, exhibit impressive long-text generation capabilities, inspiring us to explore an alternative approach for obtaining auto-labeled documents with new relations. In this paper, we propose a Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE) framework, which Generates labeled data by Retrieval and Denoising Knowledge from LLMs, called GenRDK. Specifically, we propose a chain-of-retrieval prompt to guide ChatGPT to generate labeled long-text data step by step. To improve the quality of synthetic data, we propose a denoising strategy based on the consistency of cross-document knowledge. Leveraging our denoised synthetic data, we proceed to fine-tune the LLaMA2-13B-Chat for extracting document-level relation triplets. We perform experiments for both zero-shot document-level relation and triplet extraction on two public datasets. The experimental results illustrate that our GenRDK framework outperforms strong baselines. The code and synthetic dataset will be released on GitHub.
[ "Document-level Relation Triplet Extraction", "Zero-shot Learning", "Knowledge Denoising", "Large Language Models", "Synthetic Data" ]
https://openreview.net/pdf?id=1IqCKEGGgw
GTfBWLhX1U
official_review
1,699,236,887,736
1IqCKEGGgw
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2194/Reviewer_UHVe" ]
review: The paper proposes GenRDK, which is a new method for zero-shot document-level relation triplet extraction (ZeroDocRTE). Unlike existing methods that address sentence-level zero-shot relation extraction (ZeroRE) and relation triplet extraction (ZeroRTE), GenRDK addresses a more challenging task where relational facts can be expressed across multiple sentences or a document in general. GenRDK generates document-level synthetic data for the unseen relations using new chain-of-retrieval (CoR) prompts that guide ChatGPT to generate documents with the intended relations step by step. Then, the quality of the generated synthetic data is improved by a new denoising strategy based on scores computed from synthetic and pseudo triplets. Pros S1. The authors proposed to generate synthetic data for unseen relations using novel CoR prompts that guide ChatGPT to generate better-quality synthetic data than both vanilla prompts and chain-of-thought (CoT) prompts. S2. The authors showed that the generated synthetic data can be noisy, with either incorrect triplets as a result of ChatGPT hallucinations, or missing triplets. To improve the quality of the synthetic data, the authors proposed to denoise it with the help of pseudo triplets that are obtained by feeding the generated documents to a LLaMA model trained on the seen triplets. S3. The experimental results show the effectiveness of GenRDK for both the ZeroRE and ZeroRTE tasks. The authors reported important ablation experiments, such as the effectiveness of CoR and the effectiveness of knowledge denoising. Cons W1. The proposed model generates additional synthetic data for the unseen relations, which means that the training of the model contains data for the unseen relations, so the unseen relations are not completely unseen. This scenario is not the usual zero-shot learning setting, where there is no knowledge about what is unseen. W2. There should be more discussion about the reasons that make adapting existing sentence-level methods inadequate for ZeroDocRTE. W3. From the description of the baseline methods, it is unclear whether the reported baselines are fine-tuned on the seen data or not. If not, then the comparison can be unfair, since the proposed method has the advantage of being fine-tuned before it is tested. In addition, the proposed method should be compared to other methods that also use additional generated data (like RelationPrompt [1], even if it is originally proposed for sentence-level ZeroRE and ZeroRTE) for a fair comparison. [1] Chia et al. RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction I acknowledge that I have read the rebuttal(s). questions: In this paper, the authors proposed a new method, called GenRDK, for ZeroDocRTE. To improve the zero-shot relation extraction results, the proposed method generates document-level synthetic data using CoR prompts that guide ChatGPT to generate documents with the intended relations step by step. The generated data is denoised using a consistency-guided cross-document knowledge denoising strategy. There are some points that should be taken into consideration: 1. Knowing what is considered unseen and using that in the training strategy is an additional piece of information that should not be used in the typical zero-shot learning setting. It is as if we actually know the expected relations in the test set. Do we expect the same results to hold for truly unseen relations?
Please comment on this aspect. In addition, what is the size of the synthetic data? How does it affect the relation extraction results? Is the quality of the graph obtained from the synthetic data (KGs) better or worse than that of the pseudo-triplet graph (KGp)? 2. The paper mentions that existing methods concentrate on sentence-level ZeroRE and ZeroRTE tasks, where entities and relations are confined within a single sentence. I actually don’t understand why the existing methods cannot be adapted to the document-level ZeroRE and ZeroRTE tasks. The motivation of the paper would be stronger if there were either more explanation of what makes the existing approaches inadequate for document-level relation extraction tasks, or some evaluation metrics that show the weak performance of existing sentence-level models applied to documents. Please comment on this aspect. 3. It is not clear whether the reported baselines are fine-tuned on the seen data or not. If not, then the proposed method can clearly benefit from fine-tuning on the seen data compared to the baselines. At least, the results of fine-tuning LLaMA models on the seen data should be reported (which corresponds to the pseudo triplets). In addition, the proposed method can clearly benefit from the generated data for the unseen relations compared to the baselines, even without synthetic data denoising. Therefore, the proposed method should be compared to other methods that also use additional generated data (like RelationPrompt [1], even if it is originally proposed for sentence-level ZeroRE and ZeroRTE) to clearly understand what makes GenRDK better than the baselines (ideally it should be CoR and denoising, and not fine-tuning and synthetic data). Please comment on this aspect. ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1IqCKEGGgw
Consistency Guided Knowledge Retrieval and Denoising in LLMs for Zero-shot Document-level Relation Triplet Extraction
[ "Qi Sun", "Kun Huang", "Xiaocui Yang", "Rong Tong", "Kun Zhang", "Soujanya Poria" ]
Document-level Relation Triplet Extraction (DocRTE) is a fundamental task in information systems that aims to simultaneously extract entities with semantic relations from a document. Existing methods heavily rely on a substantial amount of fully labeled data. However, collecting and annotating data for newly emerging relations is time-consuming and labor-intensive. Recent advanced Large Language Models (LLMs), such as ChatGPT and LLaMA, exhibit impressive long-text generation capabilities, inspiring us to explore an alternative approach for obtaining auto-labeled documents with new relations. In this paper, we propose a Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE) framework, which Generates labeled data by Retrieval and Denoising Knowledge from LLMs, called GenRDK. Specifically, we propose a chain-of-retrieval prompt to guide ChatGPT to generate labeled long-text data step by step. To improve the quality of synthetic data, we propose a denoising strategy based on the consistency of cross-document knowledge. Leveraging our denoised synthetic data, we proceed to fine-tune the LLaMA2-13B-Chat for extracting document-level relation triplets. We perform experiments for both zero-shot document-level relation and triplet extraction on two public datasets. The experimental results illustrate that our GenRDK framework outperforms strong baselines. The code and synthetic dataset will be released on GitHub.
[ "Document-level Relation Triplet Extraction", "Zero-shot Learning", "Knowledge Denoising", "Large Language Models", "Synthetic Data" ]
https://openreview.net/pdf?id=1IqCKEGGgw
FGH4KbLRRE
decision
1,705,909,257,337
1IqCKEGGgw
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: The paper describes a zero-shot method to extract entities and semantic relationships from documents. The paper is clear and understandable, and the topic is a good fit with the conference. Overall, I think the approach is interesting and the experiments support the claims of the paper. The authors promise a code and data release after publication. The reviewing process has proceeded in an ideal fashion. All questions raised by the reviewers are fully addressed by the authors, and in some cases the reviewers have signed off on these responses, expressing their support for acceptance. The topic has relatively broad interest, and I recommend oral presentation. However, I note that I am not well calibrated on the breakpoint between oral and poster presentation.
1GVyE9J021
Matching Feature Separation Network for Domain Adaptation in Entity Matching
[ "Chenchen Sun", "Yang Xu", "Derong shen", "Tiezheng Nie" ]
Entity matching (EM) determines whether two records from different data sources refer to the same real-world entity. Currently, deep learning (DL) based EM methods have achieved state-of-the-art (SOTA) results. However, applying DL-based EM methods often costs a lot of human efforts to label the data. To address this challenge, we propose a new domain adaptation (DA) framework for EM called Matching Feature Separation Network (MFSN). We implement DA by separating private and common matching features. Briefly, MFSN first uses three encoders to explicitly model the private and common matching features in both the source and target domains. Then, it transfers the knowledge learned from the source common matching features to the target domain. We also propose an enhanced variant called Feature Representation and Separation Enhanced MFSN (MFSN-FRSE). Compared with MFSN, it has superior feature representation and separation capabilities. We evaluate the effectiveness of MFSN and MFSN-FRSE on twelve transferring EM tasks. The results show that our framework is approximately 7% higher in F1 score on average than the previous SOTA methods. Then, we verify the effectiveness of each module in MFSN and MFSN-FRSE by ablation study. Finally, we explore the optimal strategy of each module in MFSN and MFSN-FRSE through detailed tests.
[ "Entity matching", "Deep neural network", "Domain adaptation", "Matching feature separation network", "Data integration" ]
https://openreview.net/pdf?id=1GVyE9J021
zTEOgL7FPr
official_review
1,700,878,384,039
1GVyE9J021
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission471/Reviewer_ePsi" ]
review: This paper proposes a new domain adaptation (DA) framework named MFSN for entity matching (EM). The framework employs three encoders to obtain matching features. One shared encoder is used to obtain common features between the source domain and the target domain; two individual encoders are used to obtain private features for the two domains, respectively. The authors design a similarity loss and a difference loss to distinguish the common and private features. Furthermore, a decoder is introduced to reconstruct candidate pairs based on their features, which ensures the features are relevant to the EM task. Finally, candidate pairs are classified by an MLP. Extensive experiments are conducted to demonstrate the effectiveness of the proposed framework, including 12 transferring EM tasks, a visualization analysis, and an ablation study. Strengths: S1. This paper focuses on an important issue: how to perform entity matching with unlabeled data, which is overlooked by existing supervised works. S2. The authors conduct extensive experiments on 12 tasks, and the results are sufficient to demonstrate the effectiveness and generalization of the proposed method. S3. This paper is well-written and has good readability. Weaknesses: W1. The method lacks novelty. The framework for separating private and common matching features is proposed in the related work [19]. The authors just replace the encoders with pre-trained BERT and replace the decoder with a Transformer decoder. Figure 2 in this paper is very similar to Fig. 2 in [19]. W2. The analyses are insufficient. Besides the main results, only visualization and ablation studies are provided. questions: Q1. Typos: In the last sentence of the Introduction, Section 6 => Section 5 Q2. I recommend specifying that the classifier in Equation (17) is an MLP, although this can be taken from Figure 2. Q3. In Equation (14), how is the cross-entropy loss between two tokens computed? Q4. How are the best hyper-parameters determined? There are three hyper-parameters in Equation (25). In my opinion, it is very hard to find the best setting. Q5. How is visualization performed? For each candidate pair, MFSN-FRSE obtains a common feature and a private feature; which one is selected for visualization, or are they summed up? Q6. It can be seen that extensive experiments are conducted in the appendix. However, the analyses are insufficient in Section 4. For example, this paper emphasizes separating common features and private features; will the private feature distributions of the source domain and the target domain be obviously different while the distributions of common features are similar? Would you mind providing some analyses about this? ethics_review_flag: No ethics_review_description: No scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 2 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1GVyE9J021
Matching Feature Separation Network for Domain Adaptation in Entity Matching
[ "Chenchen Sun", "Yang Xu", "Derong shen", "Tiezheng Nie" ]
Entity matching (EM) determines whether two records from different data sources refer to the same real-world entity. Currently, deep learning (DL) based EM methods have achieved state-of-the-art (SOTA) results. However, applying DL-based EM methods often costs a lot of human efforts to label the data. To address this challenge, we propose a new domain adaptation (DA) framework for EM called Matching Feature Separation Network (MFSN). We implement DA by separating private and common matching features. Briefly, MFSN first uses three encoders to explicitly model the private and common matching features in both the source and target domains. Then, it transfers the knowledge learned from the source common matching features to the target domain. We also propose an enhanced variant called Feature Representation and Separation Enhanced MFSN (MFSN-FRSE). Compared with MFSN, it has superior feature representation and separation capabilities. We evaluate the effectiveness of MFSN and MFSN-FRSE on twelve transferring EM tasks. The results show that our framework is approximately 7% higher in F1 score on average than the previous SOTA methods. Then, we verify the effectiveness of each module in MFSN and MFSN-FRSE by ablation study. Finally, we explore the optimal strategy of each module in MFSN and MFSN-FRSE through detailed tests.
[ "Entity matching", "Deep neural network", "Domain adaptation", "Matching feature separation network", "Data integration" ]
https://openreview.net/pdf?id=1GVyE9J021
ZgcqYbuqBh
official_review
1,700,751,635,365
1GVyE9J021
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission471/Reviewer_Vjy9" ]
review: In this paper, the authors present an approach to domain adaptation in entity matching, addressing the challenges posed by data scarcity and the requirement for labeled data in specific domains. The central concept revolves around explicitly distinguishing common features shared by two domains from the domain-specific features, thereby discarding the latter to mitigate model confusion. The paper introduces two primary architectures. The first, MFSN, comprises encoder and decoder modules based on BERT to simultaneously learn both private and common features. The second architecture, MFSN-FRSE, is positioned as an improved version of MFSN, incorporating hidden states of tokens that constitute an entity (including arguments and their labels). According to the reported results, the proposed method demonstrates a noteworthy average improvement of 7% in F1 score compared to previous state-of-the-art approaches. Pros: * The inclusion of comprehensive technical details about the used architectures (MFSN/MFSN-FRSE) enhances the clarity and understanding of the proposed methodologies for training. * The clarity of the training phase allows readers to gain a deeper insight into the architecture, contributing to the transparency and reproducibility of the research. Cons: * Reference for Claims in Section 3.3: The assertion that the difference loss improves the insights of both shared and private encoders lacks proper reference support. * Inference for Target Dataset Entity Matching: The paper lacks an adequate explanation of how the proposed model is applied during the inference phase for entity matching on a target dataset. To address this gap, the authors should include a detailed section or subsection explaining the steps involved in applying the model to new datasets for entity matching. This would provide readers with a more comprehensive understanding of the practical aspects of the proposed technique. * Lack of Reference for DomAtt in Section 3.6: The section lacks proper references for DomAtt, creating ambiguity regarding the background and context of the discussed content. Moreover, sharing the code would improve the replicability of this work. questions: * The training process is elucidated well; nevertheless, the methodology for obtaining results on the target domain database remains unclear. How is the inference carried out? Was there a section of the architecture dedicated to learning source domain matching features that was omitted? * If the aim is to differentiate between common shared matching features and private matching features, the purpose of the difference loss becomes a point of inquiry. Is the intention to maximize similarity between shared features and common features within the same domain? If so, this poses a potential contradiction, as the primary goal is to distinguish these features rather than emphasize their similarity. ethics_review_flag: No ethics_review_description: n.a. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1GVyE9J021
Matching Feature Separation Network for Domain Adaptation in Entity Matching
[ "Chenchen Sun", "Yang Xu", "Derong shen", "Tiezheng Nie" ]
Entity matching (EM) determines whether two records from different data sources refer to the same real-world entity. Currently, deep learning (DL) based EM methods have achieved state-of-the-art (SOTA) results. However, applying DL-based EM methods often costs a lot of human efforts to label the data. To address this challenge, we propose a new domain adaptation (DA) framework for EM called Matching Feature Separation Network (MFSN). We implement DA by separating private and common matching features. Briefly, MFSN first uses three encoders to explicitly model the private and common matching features in both the source and target domains. Then, it transfers the knowledge learned from the source common matching features to the target domain. We also propose an enhanced variant called Feature Representation and Separation Enhanced MFSN (MFSN-FRSE). Compared with MFSN, it has superior feature representation and separation capabilities. We evaluate the effectiveness of MFSN and MFSN-FRSE on twelve transferring EM tasks. The results show that our framework is approximately 7% higher in F1 score on average than the previous SOTA methods. Then, we verify the effectiveness of each module in MFSN and MFSN-FRSE by ablation study. Finally, we explore the optimal strategy of each module in MFSN and MFSN-FRSE through detailed tests.
[ "Entity matching", "Deep neural network", "Domain adaptation", "Matching feature separation network", "Data integration" ]
https://openreview.net/pdf?id=1GVyE9J021
HCjWfCDTnd
official_review
1,700,646,993,044
1GVyE9J021
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission471/Reviewer_gpPv" ]
review: This paper looks at the problem of domain adaptation for entity matching. In particular, it argues for the use of language models to support domain adaptation in the process and proposes a new model architecture that tries to distinguish between features that are common across domains and those that are particular to the domains. The approach shows good performance compared to published baselines, but I was curious about its performance against directly fine-tuned language models or even very large generative language models used in a few-shot way, which have been shown to perform well on these entity matching tasks [1]. ## Strong points * S1) Domain adaptation is an important task * S2) The experimental setup for showing transfer performance is well done * S3) The ablation study tests the contributions of each component architecture ## Weak points * W1) While entity matching is interesting to web applications, the paper is not well situated with respect to the web. * W2) The paper uses "older" language models (e.g. BERT) and does not motivate why these models are used * W3) It was unclear what additional training data was used for the adapted models. This was one of the arguments used to motivate the need for domain adaptation. [1] Vos, David, Till Döhmen, and Sebastian Schelter. "Towards Parameter-Efficient Automation of Data Wrangling Tasks with Prefix-Tuning." NeurIPS 2022 First Table Representation Workshop. 2022. # After rebuttal - I appreciate the authors' response but my view is still the same. questions: Can you comment on the amount of extra training data used during adaptation, or are the models just trained and tested directly? It was unclear in the paper. ethics_review_flag: No ethics_review_description: There are no ethics issues with this paper scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1GVyE9J021
Matching Feature Separation Network for Domain Adaptation in Entity Matching
[ "Chenchen Sun", "Yang Xu", "Derong shen", "Tiezheng Nie" ]
Entity matching (EM) determines whether two records from different data sources refer to the same real-world entity. Currently, deep learning (DL) based EM methods have achieved state-of-the-art (SOTA) results. However, applying DL-based EM methods often costs a lot of human efforts to label the data. To address this challenge, we propose a new domain adaptation (DA) framework for EM called Matching Feature Separation Network (MFSN). We implement DA by separating private and common matching features. Briefly, MFSN first uses three encoders to explicitly model the private and common matching features in both the source and target domains. Then, it transfers the knowledge learned from the source common matching features to the target domain. We also propose an enhanced variant called Feature Representation and Separation Enhanced MFSN (MFSN-FRSE). Compared with MFSN, it has superior feature representation and separation capabilities. We evaluate the effectiveness of MFSN and MFSN-FRSE on twelve transferring EM tasks. The results show that our framework is approximately 7% higher in F1 score on average than the previous SOTA methods. Then, we verify the effectiveness of each module in MFSN and MFSN-FRSE by ablation study. Finally, we explore the optimal strategy of each module in MFSN and MFSN-FRSE through detailed tests.
[ "Entity matching", "Deep neural network", "Domain adaptation", "Matching feature separation network", "Data integration" ]
https://openreview.net/pdf?id=1GVyE9J021
GuiXePh28c
decision
1,705,909,229,805
1GVyE9J021
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: This paper addresses the important task (for KG construction) of entity matching when there is data scarcity, specifically exploring domain adaptation using pre-trained language models like BERT. The advantages lie in its focus on entity matching with unlabeled data, extensive experiments on 12 tasks to demonstrate the effectiveness (compared to previous state-of-the-art approaches) and generalization, and a well-structured presentation. The experimental setup and ablation studies contribute to the paper's strength. The authors addressed some of the reviewers' concerns and questions, including a perceived lack of novelty, as the method builds on an existing framework, and the fact that, while the system shows good performance against baselines, it is not compared to more recent LLMs. The use of older language models like BERT raised concerns about performance against directly fine-tuned language models or even very large generative language models. Nonetheless, the authors see the pre-trained LM as just one component used to implement the encoder, with the research goal being to explore how to effectively combine pre-trained LMs with domain separation networks to solve domain adaptation in entity matching. In the camera-ready version, the authors should include their answers to the reviewers regarding the novelty concerns, provide clearer analyses, expand the discussion and clarifications, and provide the source code.
1GVyE9J021
Matching Feature Separation Network for Domain Adaptation in Entity Matching
[ "Chenchen Sun", "Yang Xu", "Derong shen", "Tiezheng Nie" ]
Entity matching (EM) determines whether two records from different data sources refer to the same real-world entity. Currently, deep learning (DL) based EM methods have achieved state-of-the-art (SOTA) results. However, applying DL-based EM methods often costs a lot of human efforts to label the data. To address this challenge, we propose a new domain adaptation (DA) framework for EM called Matching Feature Separation Network (MFSN). We implement DA by separating private and common matching features. Briefly, MFSN first uses three encoders to explicitly model the private and common matching features in both the source and target domains. Then, it transfers the knowledge learned from the source common matching features to the target domain. We also propose an enhanced variant called Feature Representation and Separation Enhanced MFSN (MFSN-FRSE). Compared with MFSN, it has superior feature representation and separation capabilities. We evaluate the effectiveness of MFSN and MFSN-FRSE on twelve transferring EM tasks. The results show that our framework is approximately 7% higher in F1 score on average than the previous SOTA methods. Then, we verify the effectiveness of each module in MFSN and MFSN-FRSE by ablation study. Finally, we explore the optimal strategy of each module in MFSN and MFSN-FRSE through detailed tests.
[ "Entity matching", "Deep neural network", "Domain adaptation", "Matching feature separation network", "Data integration" ]
https://openreview.net/pdf?id=1GVyE9J021
9Iuxmr6FFz
official_review
1,700,821,756,587
1GVyE9J021
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission471/Reviewer_fDPu" ]
review: The paper describes an approach to the Entity Matching problem, i.e. the problem of identifying when two different records in different data sets describe the same entity in the real world. Machine learning techniques for entity matching are well explored, but suffer from the necessity of labelling prohibitive amounts of training data. To address this, an elaborate framework for this task is presented that attempts to use transfer learning techniques to achieve good performance on domain-specific data sets by leveraging pre-trained general-purpose language models. The framework is presented in detail and an experimental evaluation is given. Although entity matching is a problem for some applications of semantic technologies, I find this paper to match the topics of the Semantics and Knowledge track only marginally. questions: The framework seems to rely heavily on pre-trained *language* models. Is it implicitly assumed that the datasets in question consist (largely) of natural language? Or would the approach work for traditional relational databases? How about knowledge graphs? ethics_review_flag: No ethics_review_description: no issue scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 5 technical_quality: 6 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
1GIGp2MgFH
Lower-Left Partial AUC: An Effective and Efficient Optimization Metric for Recommendation
[ "Wentao Shi", "Chenxu Wang", "Fuli Feng", "Yang Zhang", "Wenjie Wang", "Junkang Wu", "Xiangnan He" ]
Optimization metrics are crucial for building recommendation systems at scale. However, an effective and efficient metric for practical use remains elusive. While Top-K ranking metrics are the gold standard for optimization, they suffer from significant computational overhead. Alternatively, the more efficient accuracy and AUC metrics often fall short of capturing the true targets of recommendation tasks, leading to suboptimal performance. To overcome this dilemma, we propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), which is computationally efficient like AUC but strongly correlates with Top-K ranking metrics. Compared to AUC, LLPAUC considers only the partial area under the ROC curve in the Lower-Left corner to push the optimization focus on Top-K. We provide theoretical validation of the correlation between LLPAUC and Top-K ranking metrics and demonstrate its robustness to noisy user feedback. We further design an efficient point-wise recommendation loss to maximize LLPAUC and evaluate it on three datasets, validating its effectiveness and robustness.
[ "Partial AUC; Recommendation system; Optimization Metric" ]
https://openreview.net/pdf?id=1GIGp2MgFH
kslp9d11vl
official_review
1,700,646,272,975
1GIGp2MgFH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission350/Reviewer_oikj" ]
review: summary: The authors propose LLPAUC, which has a higher correlation with top-k ranking metrics (such as NDCG or Recall) than AUC and is as computationally efficient as AUC. Then the authors propose a relaxation of LLPAUC for optimization under the deep learning framework. By optimizing the surrogate loss of LLPAUC, the model achieves better top-k ranking performance than with AUC, while the training efficiency is the same as with point-wise loss functions. The authors also claim that another advantage of LLPAUC is its robustness to noisy data. The authors conduct experiments in both clean training and noisy training settings. strength: 1. Effective and efficient optimization for recommendation systems is an important and valuable research topic. 2. The proposed LLPAUC is a novel metric that is closer to top-k ranking metrics than AUC. weakness: 1. Although the computational efficiency of pairwise methods is slightly lower than that of pointwise methods, the computational overhead that pairwise methods bring is not a real-world problem. Advanced pairwise or listwise learning-to-rank methods are often used to optimize models in real-world recommendation and advertising systems. However, the experimental part lacks a comparison with these methods. questions: 1. I am curious about the results of mainstream LTR methods, such as the Lambda framework: Xuanhui Wang, Cheng Li, Nadav Golbandi, Mike Bendersky, Marc Najork. Proceedings of The 27th ACM International Conference on Information and Knowledge Management (CIKM '18), ACM (2018), pp. 1313-1322. ethics_review_flag: No ethics_review_description: none scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
1GIGp2MgFH
Lower-Left Partial AUC: An Effective and Efficient Optimization Metric for Recommendation
[ "Wentao Shi", "Chenxu Wang", "Fuli Feng", "Yang Zhang", "Wenjie Wang", "Junkang Wu", "Xiangnan He" ]
Optimization metrics are crucial for building recommendation systems at scale. However, an effective and efficient metric for practical use remains elusive. While Top-K ranking metrics are the gold standard for optimization, they suffer from significant computational overhead. Alternatively, the more efficient accuracy and AUC metrics often fall short of capturing the true targets of recommendation tasks, leading to suboptimal performance. To overcome this dilemma, we propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), which is computationally efficient like AUC but strongly correlates with Top-K ranking metrics. Compared to AUC, LLPAUC considers only the partial area under the ROC curve in the Lower-Left corner to push the optimization focus on Top-K. We provide theoretical validation of the correlation between LLPAUC and Top-K ranking metrics and demonstrate its robustness to noisy user feedback. We further design an efficient point-wise recommendation loss to maximize LLPAUC and evaluate it on three datasets, validating its effectiveness and robustness.
[ "Partial AUC; Recommendation system; Optimization Metric" ]
https://openreview.net/pdf?id=1GIGp2MgFH
LGuoJgeKKf
official_review
1,701,283,352,127
1GIGp2MgFH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission350/Reviewer_Gs2H" ]
review: The paper introduces a new optimization metric called Lower-Left Partial AUC (LLPAUC). This metric is shown to have a better correlation with Top-K ranking metrics than previous metrics. The authors provide both theoretical and empirical evidence to support their claims. They conducted extensive experiments on several datasets and baselines, which confirmed the effectiveness of their proposed method. Pros: - The paper is well-written and technically sound. All the theoretical analyses, including proofs, are presented either in the main text or in the Appendix; - The method is built on top of solid research; - An optimization procedure is presented for the newly proposed metric; - The results confirm the research claims on three different datasets and two recommendation models, considering six different baselines. Cons: - Although I am convinced, adding more datasets and methods to optimize the proposed metric could provide stronger evidence of the generality of the approach, in particular other commonly used benchmarks such as MovieLens; - The idea has limited novelty since most of the theoretical building blocks are present in previous works. questions: Can you specify which aspect of your approach you consider to be the most unique or innovative? ethics_review_flag: No ethics_review_description: None scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 7 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
1GIGp2MgFH
Lower-Left Partial AUC: An Effective and Efficient Optimization Metric for Recommendation
[ "Wentao Shi", "Chenxu Wang", "Fuli Feng", "Yang Zhang", "Wenjie Wang", "Junkang Wu", "Xiangnan He" ]
Optimization metrics are crucial for building recommendation systems at scale. However, an effective and efficient metric for practical use remains elusive. While Top-K ranking metrics are the gold standard for optimization, they suffer from significant computational overhead. Alternatively, the more efficient accuracy and AUC metrics often fall short of capturing the true targets of recommendation tasks, leading to suboptimal performance. To overcome this dilemma, we propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), which is computationally efficient like AUC but strongly correlates with Top-K ranking metrics. Compared to AUC, LLPAUC considers only the partial area under the ROC curve in the Lower-Left corner to push the optimization focus on Top-K. We provide theoretical validation of the correlation between LLPAUC and Top-K ranking metrics and demonstrate its robustness to noisy user feedback. We further design an efficient point-wise recommendation loss to maximize LLPAUC and evaluate it on three datasets, validating its effectiveness and robustness.
[ "Partial AUC; Recommendation system; Optimization Metric" ]
https://openreview.net/pdf?id=1GIGp2MgFH
9fZ6ilSU31
decision
1,705,909,243,693
1GIGp2MgFH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: This work presents a novel optimization metric, Lower-Left Partial AUC (LLPAUC), demonstrating its superior correlation with Top-K ranking metrics compared to existing metrics. The reviewers mostly concur that the paper represents a substantial contribution to the field. They recommend that the authors incorporate the material discussed into the final version of the paper.
1GIGp2MgFH
Lower-Left Partial AUC: An Effective and Efficient Optimization Metric for Recommendation
[ "Wentao Shi", "Chenxu Wang", "Fuli Feng", "Yang Zhang", "Wenjie Wang", "Junkang Wu", "Xiangnan He" ]
Optimization metrics are crucial for building recommendation systems at scale. However, an effective and efficient metric for practical use remains elusive. While Top-K ranking metrics are the gold standard for optimization, they suffer from significant computational overhead. Alternatively, the more efficient accuracy and AUC metrics often fall short of capturing the true targets of recommendation tasks, leading to suboptimal performance. To overcome this dilemma, we propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), which is computationally efficient like AUC but strongly correlates with Top-K ranking metrics. Compared to AUC, LLPAUC considers only the partial area under the ROC curve in the Lower-Left corner to push the optimization focus on Top-K. We provide theoretical validation of the correlation between LLPAUC and Top-K ranking metrics and demonstrate its robustness to noisy user feedback. We further design an efficient point-wise recommendation loss to maximize LLPAUC and evaluate it on three datasets, validating its effectiveness and robustness.
[ "Partial AUC; Recommendation system; Optimization Metric" ]
https://openreview.net/pdf?id=1GIGp2MgFH
3BYaB0jXyX
official_review
1,700,789,146,847
1GIGp2MgFH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission350/Reviewer_dGPW" ]
review: This paper investigates Lower-Left Partial AUC (LLPAUC) as an adaptation of the AUC metric which introduces constraints on the upper bounds for true and false positive rates (TPR and FPR). The aim of this metric is to be computationally as efficient as AUC, but to correlate better with Top-K ranking metrics while being robust to noisy user feedback. The paper presents exhaustive experiments across three different datasets for two models. # Strengths - The work effectively extends beyond preliminary approaches in that direction, e.g., OPAUC. - The code was actually shared upfront and made available, which facilitates reproducing the work. - Results are reported with standard deviation and turn out to be significant and substantial. # Weaknesses - L299: should be AUC = LLPAUC(1,1) ## Empirical Analysis - Missing information on how many runs the reported results are based on. - Table 3: Highlighting for Amazon, NDCG@20 for LightGCN is not correct # Formal Comments Wording, writing improvements, etc. - referring to the respective lines below: - L114: noisy - L277: missing whitespaces - L834: resulting in questions: - Can you please elaborate on the number of evaluation runs to put the standard deviation into context? ethics_review_flag: No ethics_review_description: . scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0x1bm3XsuC
E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series
[ "Zhichen Lai", "Huan Li", "Dalin Zhang", "Yan Zhao", "Weizhu Qian", "Christian S. Jensen" ]
Cyber-physical system sensors emit multivariate time series (MTS) that monitor physical system processes. Such time series generally capture unknown numbers of states, each with a different duration, that correspond to specific conditions, e.g., “walking” or “running” in human-activity monitoring. Unsupervised identification of such states facilitates storage and processing in subsequent data analyses, as well as enhances result interpretability. Existing state-detection proposals face three challenges. First, they introduce substantial computational overhead, rendering them impractical in resource-constrained or streaming settings. Second, although state-of-the-art (SOTA) proposals employ contrastive learning for representation, insufficient attention to false negatives hampers model convergence and accuracy. Third, SOTA proposals predominantly only emphasize offline non-streaming deployment, we highlight an urgent need to optimize online streaming scenarios. We propose E2USD that enables efficient-yet-accurate unsupervised MTS state detection. E2USD exploits a Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) that together encode input MTSs at low computational overhead. Additionally, we propose a False Negative Cancellation Contrastive Learning method (fnccLearning) to counteract the effects of false negatives and to achieve more cluster-friendly embedding spaces. To reduce computational overhead further in streaming settings, we introduce Adaptive Threshold Detection (adaTD). Comprehensive experiments with six baselines and six datasets offer evidence that E2USD is capable of SOTA accuracy at significantly reduced computational overhead. Our code is available at http://bit.ly/3rMFJVv.
[ "Unsupervised State Detection", "Time Series Representation Learning", "Contrastive Learning" ]
https://openreview.net/pdf?id=0x1bm3XsuC
vtC7zjvPFu
official_review
1,701,151,481,214
0x1bm3XsuC
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1678/Reviewer_L7P1" ]
review: The paper addresses the challenge of unsupervised identification of states in multivariate time series (MTS) emitted by cyber-physical system sensors. The authors claim that their model is suitable for resource-constrained devices and streaming scenarios. The proposed solution, E2Usd, utilizes a Fast Fourier Transform-based Time Series Compressor (fftCompress), a Decomposed Dual-view (Trend and Seasonality features) Embedding Module (ddEM), and a False Negative Cancellation Contrastive Learning method (fnccLearning). To reduce computational overhead in streaming scenarios, Adaptive Threshold Detection (adaTD) is introduced. They have investigated the effectiveness of their method across 6 datasets against 6 baselines. The paper is very well written and easy to follow. The technical material and the rationale behind each decision are explained clearly. The integration of fftCompress and ddEM for compact MTS embedding, along with the novel fnccLearning method, showcases originality in addressing the well-known challenges in unsupervised state detection. The adaptive threshold mechanism for streaming data further adds to the originality of the work. questions: 1. The Related work section and the baseline comparison do not cover the most related SOTA in the field of multivariate time series contrastive learning and false negative cancellation techniques. - CPC on datasets related to this work: - Harish Haresamudram, Irfan Essa, and Thomas Plötz. 2021. Contrastive predictive coding for human activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 2 (2021), 1–26. - Shohreh Deldari, Daniel V. Smith, Hao Xue, and Flora D. Salim. 2021. Time Series Change Point Detection with Self-Supervised Contrastive Predictive Coding. In Proceedings of The Web Conference 2021 (WWW’21). - Contrastive learning on multivariate TS: - Shohreh Deldari, Hao Xue, Aaqib Saeed, Daniel V. Smith, and Flora D. Salim. 2022. COCOA: Cross Modality Contrastive Learning for Sensor Data. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. - False negatives: - Huynh, T., Kornblith, S., Walter, M.R., Maire, M. and Khademi, M., 2022. Boosting contrastive self-supervised learning with false negative cancellation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 2785-2795). - Robinson, J., Chuang, C.Y., Sra, S. and Jegelka, S., 2020. Contrastive learning with hard negative samples. arXiv preprint arXiv:2010.04592. - Jain, Y., Tang, C.I., Min, C., Kawsar, F. and Mathur, A., 2022. ColloSSL: Collaborative self-supervised learning for human activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(1), pp. 1-28. >>> In this work, proposed for multi-device TS data, the authors proposed a novel technique based on Maximum Mean Discrepancy (MMD) to evaluate negative pairs. - etc. 2. The authors have considered the impact of false negatives and proposed an adaptive method to discard or reduce them. I was wondering what the impact of false positives could be. For example, in the streaming scenario, there might be a change in the state of the data. However, given the random selection of positive pairs (i.e., consecutive frames), the two frames of a positive pair may not necessarily belong to the same distribution/state. 3. It would be great if the authors could do more experiments evaluating their false negative cancellation method, as it is one of their claimed contributions.
Minor changes and typos:
 - The format of the text in the first and second paragraphs of Section 2.1 should not be italic. ethics_review_flag: No ethics_review_description: NA scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0x1bm3XsuC
E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series
[ "Zhichen Lai", "Huan Li", "Dalin Zhang", "Yan Zhao", "Weizhu Qian", "Christian S. Jensen" ]
Cyber-physical system sensors emit multivariate time series (MTS) that monitor physical system processes. Such time series generally capture unknown numbers of states, each with a different duration, that correspond to specific conditions, e.g., “walking” or “running” in human-activity monitoring. Unsupervised identification of such states facilitates storage and processing in subsequent data analyses, as well as enhances result interpretability. Existing state-detection proposals face three challenges. First, they introduce substantial computational overhead, rendering them impractical in resource-constrained or streaming settings. Second, although state-of-the-art (SOTA) proposals employ contrastive learning for representation, insufficient attention to false negatives hampers model convergence and accuracy. Third, SOTA proposals predominantly only emphasize offline non-streaming deployment, we highlight an urgent need to optimize online streaming scenarios. We propose E2USD that enables efficient-yet-accurate unsupervised MTS state detection. E2USD exploits a Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) that together encode input MTSs at low computational overhead. Additionally, we propose a False Negative Cancellation Contrastive Learning method (fnccLearning) to counteract the effects of false negatives and to achieve more cluster-friendly embedding spaces. To reduce computational overhead further in streaming settings, we introduce Adaptive Threshold Detection (adaTD). Comprehensive experiments with six baselines and six datasets offer evidence that E2USD is capable of SOTA accuracy at significantly reduced computational overhead. Our code is available at http://bit.ly/3rMFJVv.
[ "Unsupervised State Detection", "Time Series Representation Learning", "Contrastive Learning" ]
https://openreview.net/pdf?id=0x1bm3XsuC
iXe7s4nmLb
official_review
1,700,850,302,040
0x1bm3XsuC
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1678/Reviewer_v6kn" ]
review: The paper addresses the task of unsupervised identification of states in multivariate time series (MTS) within the scope of cyber-physical system sensors. Pros: - Novel Approach: The idea to combine the Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) is intriguing. - Addressing Challenges: The paper addresses the challenges faced by existing state-detection methods, including computational overhead, false negatives, and a lack of emphasis on online streaming scenarios. The proposed Adaptive Threshold Detection (adaTD) for reducing computational overhead in streaming settings is a noteworthy contribution. - Real-world Applications: The inclusion of comprehensive experiments with multiple baselines/datasets, along with testing on real devices and hardware, adds practical relevance to the proposed method. This application-driven approach enhances the paper's significance. - Code Availability: The authors provide a link to the code, promoting reproducibility in the research community. Cons: - Negative Pair Mining: While the paper effectively tackles the challenges associated with negative pair mining, it could benefit from a discussion on alternative approaches that do without negative pairs, such as BYOL. This addition would contribute to the completeness of the paper's exploration of methodologies. - Conceptual Differentiation: The paper references TCN but could provide a more detailed discussion on how the pair selection in E2Usd is conceptually different from methods like TCN. This clarification would help readers understand the unique contributions of E2Usd in relation to existing approaches. - Ablation Analysis: Given the multiple proposed modules, an ablation analysis by removing components, such as FFTCOMPRESS, would be beneficial. This analysis could shed light on the individual importance of each module and provide insights into the system's robustness. questions: In summary, the paper demonstrates a novel solution to the problem of unsupervised MTS state detection. It effectively addresses challenges and provides practical applications. To enhance the paper, I recommend further discussions on negative pair mining alternatives, conceptual differentiation from TCN, and an ablation analysis to assess the importance of individual components. ethics_review_flag: No ethics_review_description: n/a scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0x1bm3XsuC
E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series
[ "Zhichen Lai", "Huan Li", "Dalin Zhang", "Yan Zhao", "Weizhu Qian", "Christian S. Jensen" ]
Cyber-physical system sensors emit multivariate time series (MTS) that monitor physical system processes. Such time series generally capture unknown numbers of states, each with a different duration, that correspond to specific conditions, e.g., “walking” or “running” in human-activity monitoring. Unsupervised identification of such states facilitates storage and processing in subsequent data analyses, as well as enhances result interpretability. Existing state-detection proposals face three challenges. First, they introduce substantial computational overhead, rendering them impractical in resource-constrained or streaming settings. Second, although state-of-the-art (SOTA) proposals employ contrastive learning for representation, insufficient attention to false negatives hampers model convergence and accuracy. Third, SOTA proposals predominantly emphasize offline, non-streaming deployment; we highlight an urgent need to optimize for online streaming scenarios. We propose E2USD, which enables efficient-yet-accurate unsupervised MTS state detection. E2USD exploits a Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) that together encode input MTSs at low computational overhead. Additionally, we propose a False Negative Cancellation Contrastive Learning method (fnccLearning) to counteract the effects of false negatives and to achieve more cluster-friendly embedding spaces. To reduce computational overhead further in streaming settings, we introduce Adaptive Threshold Detection (adaTD). Comprehensive experiments with six baselines and six datasets offer evidence that E2USD is capable of SOTA accuracy at significantly reduced computational overhead. Our code is available at http://bit.ly/3rMFJVv.
[ "Unsupervised State Detection", "Time Series Representation Learning", "Contrastive Learning" ]
https://openreview.net/pdf?id=0x1bm3XsuC
SXoZaU927C
decision
1,705,909,240,755
0x1bm3XsuC
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: The paper tackles the problem of detecting state from multivariate time series (MTS) data emitted by sensors. Reviewers appreciate that the paper was well written with all choices justified and motivated. There is also a commitment to open science including open source code repository being made available. The authors also engaged well with the reviewers' comments, including implementing a new baseline that applies BYOL to MTS data, which was pointed out by one of the reviewers. In any subsequent revision, the paper should include all the promised clarifications and changes that emerged during the rebuttal period, as it will improve the presentation significantly.
0x1bm3XsuC
E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series
[ "Zhichen Lai", "Huan Li", "Dalin Zhang", "Yan Zhao", "Weizhu Qian", "Christian S. Jensen" ]
Cyber-physical system sensors emit multivariate time series (MTS) that monitor physical system processes. Such time series generally capture unknown numbers of states, each with a different duration, that correspond to specific conditions, e.g., “walking” or “running” in human-activity monitoring. Unsupervised identification of such states facilitates storage and processing in subsequent data analyses, as well as enhances result interpretability. Existing state-detection proposals face three challenges. First, they introduce substantial computational overhead, rendering them impractical in resource-constrained or streaming settings. Second, although state-of-the-art (SOTA) proposals employ contrastive learning for representation, insufficient attention to false negatives hampers model convergence and accuracy. Third, SOTA proposals predominantly emphasize offline, non-streaming deployment; we highlight an urgent need to optimize for online streaming scenarios. We propose E2USD, which enables efficient-yet-accurate unsupervised MTS state detection. E2USD exploits a Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) that together encode input MTSs at low computational overhead. Additionally, we propose a False Negative Cancellation Contrastive Learning method (fnccLearning) to counteract the effects of false negatives and to achieve more cluster-friendly embedding spaces. To reduce computational overhead further in streaming settings, we introduce Adaptive Threshold Detection (adaTD). Comprehensive experiments with six baselines and six datasets offer evidence that E2USD is capable of SOTA accuracy at significantly reduced computational overhead. Our code is available at http://bit.ly/3rMFJVv.
[ "Unsupervised State Detection", "Time Series Representation Learning", "Contrastive Learning" ]
https://openreview.net/pdf?id=0x1bm3XsuC
76fRWRbs1J
official_review
1,700,610,863,312
0x1bm3XsuC
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1678/Reviewer_4nTr" ]
review: The paper aims to improve unsupervised state detection performance for multivariate time series (MTS) data. Existing methods suffer from high computational overhead, false negative sampling, and lack of optimization for online streaming scenarios. This paper proposes a method called E2USD to attempt to address these problems. E2USD leverages fftCompress and ddEM to reduce the computational cost, fnccLearning to improve the negative sampling, and adaTD to optimize the whole proposed solution for online streaming scenarios. The experimental analysis compares E2USD with several existing methods on six datasets, Synthetic, MoCap, ActRecTut, PAMAP2, UscHad, and UcrSeg, which shows the effectiveness of the proposed method. **Pros** 1. This paper is well-organized with a clear structure and detailed explanation for core components, such as Section 2~3. 2. The component study and parameter sensitivity study are helpful to illustrate the effects of each component and parameter. 3. Intuitive figures, such as Figures 1~3, are provided to illustrate the key concepts. **Cons** 1. The proposed method aims to improve the efficiency of unsupervised state detection for MTS. However, the experimental study lacks an in-depth analysis and explanation of why the proposed method can improve runtime efficiency. 2. The proposed method leverages contrastive learning, which lacks discussions in the related work. 3. Some important details are missing from this paper, such as how to set up the static threshold detection. questions: Please check the cons listed above. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0x1bm3XsuC
E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series
[ "Zhichen Lai", "Huan Li", "Dalin Zhang", "Yan Zhao", "Weizhu Qian", "Christian S. Jensen" ]
Cyber-physical system sensors emit multivariate time series (MTS) that monitor physical system processes. Such time series generally capture unknown numbers of states, each with a different duration, that correspond to specific conditions, e.g., “walking” or “running” in human-activity monitoring. Unsupervised identification of such states facilitates storage and processing in subsequent data analyses, as well as enhances result interpretability. Existing state-detection proposals face three challenges. First, they introduce substantial computational overhead, rendering them impractical in resource-constrained or streaming settings. Second, although state-of-the-art (SOTA) proposals employ contrastive learning for representation, insufficient attention to false negatives hampers model convergence and accuracy. Third, SOTA proposals predominantly emphasize offline, non-streaming deployment; we highlight an urgent need to optimize for online streaming scenarios. We propose E2USD, which enables efficient-yet-accurate unsupervised MTS state detection. E2USD exploits a Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) that together encode input MTSs at low computational overhead. Additionally, we propose a False Negative Cancellation Contrastive Learning method (fnccLearning) to counteract the effects of false negatives and to achieve more cluster-friendly embedding spaces. To reduce computational overhead further in streaming settings, we introduce Adaptive Threshold Detection (adaTD). Comprehensive experiments with six baselines and six datasets offer evidence that E2USD is capable of SOTA accuracy at significantly reduced computational overhead. Our code is available at http://bit.ly/3rMFJVv.
[ "Unsupervised State Detection", "Time Series Representation Learning", "Contrastive Learning" ]
https://openreview.net/pdf?id=0x1bm3XsuC
0l5LEHlgK2
official_review
1,700,669,245,789
0x1bm3XsuC
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1678/Reviewer_jrT4" ]
review: ## Review The paper presents an innovative unsupervised approach to state detection designed for environments with constrained computational resources. This method not only broadens the application spectrum but also addresses certain limitations inherent to current state-of-the-art (SOTA) techniques, particularly in practical deployments. The experimental outcomes support the method's efficacy, providing commendable results. The article's narrative structure is logically coherent, with each section seamlessly contributing to the overarching argument. Pros: - Effectively addresses false positives in contrastive learning with pairwise sample similarity. - Reduced model usage with minimal accuracy loss via dynamic thresholding. - Achieve performance precision nearing state-of-the-art methods. Cons: - The FFTCompress module functions like a low-pass filter in most scenarios. The necessity of calculating spectral intervals based on continuous spectral energy density seems questionable. This method's utility might be clarified by a more nuanced justification. - The paper lacks detailed data presentation during practical validation on an MCU. Providing comprehensive experimental details would bolster the credibility of the results and allow for a clearer understanding of the method's performance in real-world settings. questions: 1. In real-world scenarios, what would be an efficient method to determine appropriate FFT bandwidth for the data? 2. Section 4.5 mentions an average sample latency of 44.95ms on the MCU. Could you specify what proportion of this delay is attributed to triggering cluster detection, and what are the actual latencies for both processes individually? 3. Typically in FFT analyses, the zero index contributes to a higher energy component, as shown in Figure 2(1) and Section 7.3. Given the method of selecting frequency bands described in the paper, wouldn’t a low-pass filter suffice in most cases instead of the FFTCompress module? 4. In Section 3.2, a lambda coefficient is introduced (below Equation 12) for selecting the least similar sample pairs. Does this coefficient significantly affect algorithm performance? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
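The low-pass question in this review is easy to make concrete. The sketch below contrasts a plain low-pass cut with an energy-based band search over rFFT coefficients; the single-channel input and the fixed number of retained bins are simplifying assumptions, not the paper's fftCompress implementation.

```python
import numpy as np

def band_energy_compress(x, keep_bins=16):
    """Keep the contiguous block of `keep_bins` rFFT coefficients with the
    highest energy (illustrative; assumes the window has >= keep_bins bins)."""
    spec = np.fft.rfft(x)
    energy = np.abs(spec) ** 2
    # Sliding-window sum of energy over all contiguous bands of width keep_bins.
    band_energy = np.convolve(energy, np.ones(keep_bins), mode="valid")
    start = int(np.argmax(band_energy))
    return start, spec[start:start + keep_bins]      # band offset + coefficients

def low_pass_compress(x, keep_bins=16):
    """Baseline the reviewer alludes to: always keep the lowest-frequency bins."""
    return 0, np.fft.rfft(x)[:keep_bins]
```

When the spectrum is dominated by the zero-frequency component, as the reviewer observes for Figure 2(1), the two routines return the same band; the band search only differs when the informative energy sits away from DC.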
0utESEzD6E
ARTEMIS: Detecting Airdrop Hunters in NFT Markets with a Graph Learning System
[ "Chenyu Zhou", "Hongzhou Chen", "Hao Wu", "Junyu Zhang", "Wei Cai" ]
As Web3 projects leverage airdrops to incentivize participation, airdrop hunters tactically amass wallet addresses to capitalize on token giveaways. This poses challenges to the decentralization goal. Current detection approaches tailored for cryptocurrencies overlook the nuances of non-fungible tokens (NFTs). We introduce ARTEMIS, an optimized graph neural network system for identifying airdrop hunters in NFT transactions. ARTEMIS captures NFT airdrop hunters through: (1) a multimodal module extracting visual and textual insights from NFT metadata using Transformer models; (2) a tailored node aggregation function chaining NFT transaction sequences, retaining behavioral insights; (3) engineered features based on market manipulation theories for detecting anomalous trading. Evaluated on decentralized exchange Blur's data, ARTEMIS significantly outperforms baselines in pinpointing hunters. This pioneering computational solution for an emergent Web3 phenomenon has broad applicability for blockchain anomaly detection.
[ "Airdrop Hunters", "Web3", "NFTs", "Graph Neural Network", "Multimodal Deep Learning" ]
https://openreview.net/pdf?id=0utESEzD6E
wa0xdO1cKz
official_review
1,700,800,548,845
0utESEzD6E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1716/Reviewer_2P8m" ]
review: This paper introduces ARTEMIS, an optimized graph neural network system for identifying air-drop hunters in NFT transactions. Pros: employs multi-modal attributes to enhance the detection performance of air-drop hunters. Cons: The proposed method is a combination of existing methods. questions: 1. Why use ViT as the pre-trained transformer? Are there any potential alternatives? 2. The authors combine ViT and Bert in their method. This contribution has limited novelty. 3. The benchmarks used in the experiments are outdated. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 3 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
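For reference on the ViT-plus-BERT question raised above, a minimal fusion head over frozen pre-trained encoders could look like the following; the 768-dimensional inputs and the gated combination are assumptions for illustration rather than ARTEMIS's actual architecture.

```python
import torch
import torch.nn as nn

class MetadataFusion(nn.Module):
    """Fuse a ViT image embedding and a BERT text embedding into one node
    feature vector (illustrative sketch, not the ARTEMIS code)."""
    def __init__(self, img_dim=768, txt_dim=768, out_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, out_dim)
        self.txt_proj = nn.Linear(txt_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.Sigmoid())

    def forward(self, img_emb, txt_emb):
        i, t = self.img_proj(img_emb), self.txt_proj(txt_emb)
        g = self.gate(torch.cat([i, t], dim=-1))   # per-dimension modality weight
        return g * i + (1 - g) * t
```

The hash-ID alternative raised by another reviewer would replace both encoders with a learned `nn.Embedding` lookup per NFT, which is the natural ablation for judging whether the multimodal content genuinely helps.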
0utESEzD6E
ARTEMIS: Detecting Airdrop Hunters in NFT Markets with a Graph Learning System
[ "Chenyu Zhou", "Hongzhou Chen", "Hao Wu", "Junyu Zhang", "Wei Cai" ]
As Web3 projects leverage airdrops to incentivize participation, airdrop hunters tactically amass wallet addresses to capitalize on token giveaways. This poses challenges to the decentralization goal. Current detection approaches tailored for cryptocurrencies overlook the nuances of non-fungible tokens (NFTs). We introduce ARTEMIS, an optimized graph neural network system for identifying airdrop hunters in NFT transactions. ARTEMIS captures NFT airdrop hunters through: (1) a multimodal module extracting visual and textual insights from NFT metadata using Transformer models; (2) a tailored node aggregation function chaining NFT transaction sequences, retaining behavioral insights; (3) engineered features based on market manipulation theories for detecting anomalous trading. Evaluated on decentralized exchange Blur's data, ARTEMIS significantly outperforms baselines in pinpointing hunters. This pioneering computational solution for an emergent Web3 phenomenon has broad applicability for blockchain anomaly detection.
[ "Airdrop Hunters", "Web3", "NFTs", "Graph Neural Network", "Multimodal Deep Learning" ]
https://openreview.net/pdf?id=0utESEzD6E
lATbXXza6i
official_review
1,700,870,481,048
0utESEzD6E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1716/Reviewer_VPCy" ]
review: The authors built a system to detect airdrop hunters who create multiple accounts to profit from DApp token giveaways. This paper tackles an important and interesting research problem, but it needs several improvements. The authors started by collecting transaction and airdrop data related to Blur. Clustering and labeling this data, they found 4,808 airdrop hunter wallet addresses. Unfortunately, no details were shared on how the clustering and labeling were performed and how accurate they were. Then, they developed a complex architecture leveraging visual, textual, graph, and transaction-related features to train a model composed of VIT, BERT, GNN, and other NN components. The model trained by the authors outperforms baseline models. While the authors performed some hyperparameter tuning for ARTEMIS, they don’t discuss such efforts for the baseline models. The lack of tuning of baseline models suggests that the comparison was potentially unfair. While the model outperformed the best baselines, its precision is only 0.71, meaning the classifier will make 3 FP decisions for every 7 TP classifications. Based on how active airdrop hunters are, these false positives would mean that some of the most active benign users could be affected if ARTEMIS is used in real life, potentially negatively impacting the DApp users. It would have been great if the authors included a study about the FPs. It would make the paper better if the authors included an adversarial discussion on how hard/easy it would be for airdrop hunters to evade ARTEMIS. questions: How did you cluster and label airdrop hunters? How did you validate that an address is an airdrop hunter? Could a DApp set up a token giveaway that doesn’t incentivize airdrop hunting? How much effort did you make to improve the baseline models (e.g., hyperparameter tuning)? Do you think this comparison was fair? How easy would it be for airdrop hunters to adapt to your model and avoid detection? How could Blur utilize Artemis without hurting benign users, given that ARTEMIS only achieved 0.71 precision? ethics_review_flag: No ethics_review_description: None scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0utESEzD6E
ARTEMIS: Detecting Airdrop Hunters in NFT Markets with a Graph Learning System
[ "Chenyu Zhou", "Hongzhou Chen", "Hao Wu", "Junyu Zhang", "Wei Cai" ]
As Web3 projects leverage airdrops to incentivize participation, airdrop hunters tactically amass wallet addresses to capitalize on token giveaways. This poses challenges to the decentralization goal. Current detection approaches tailored for cryptocurrencies overlook the nuances of non-fungible tokens (NFTs). We introduce ARTEMIS, an optimized graph neural network system for identifying airdrop hunters in NFT transactions. ARTEMIS captures NFT airdrop hunters through: (1) a multimodal module extracting visual and textual insights from NFT metadata using Transformer models; (2) a tailored node aggregation function chaining NFT transaction sequences, retaining behavioral insights; (3) engineered features based on market manipulation theories for detecting anomalous trading. Evaluated on decentralized exchange Blur's data, ARTEMIS significantly outperforms baselines in pinpointing hunters. This pioneering computational solution for an emergent Web3 phenomenon has broad applicability for blockchain anomaly detection.
[ "Airdrop Hunters", "Web3", "NFTs", "Graph Neural Network", "Multimodal Deep Learning" ]
https://openreview.net/pdf?id=0utESEzD6E
jKM6jgDf2G
official_review
1,699,102,131,197
0utESEzD6E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1716/Reviewer_LPuG" ]
review: This paper presents a study on the emerging issue of NFT Airdrop Hunters and proposes a novel graph representation learning technique integrated with multimodal features to detect potential hunters. The authors have utilized the data from NFT marketplace Blur for their evaluation, and their system, ARTEMIS, exhibits significant improvement over existing models in identifying hunters. Generally, I like this paper; however, there are two major concerns that should be addressed before it can be accepted. Pros: - The paper addresses an interesting and relevant issue in the realm of NFTs - Airdrop Hunting. The topic is fresh and has not been extensively researched, making this study a significant contribution to the field. - The proposed method is a creative combination of graph learning and multimodal features, designed to capture the nuanced behaviors of hunters. This technique is technically sound and innovative. - The authors have conducted a comprehensive evaluation using data from the Blur market. The results demonstrate a substantial improvement over previous models, underscoring the effectiveness of the proposed system. Cons: - Concern one: The paper could benefit from a more detailed explanation of certain design decisions. The authors' rationale behind specific choices remains unclear (see more details in the questions) - Concern Two: The paper does not clarify whether NFT Airdrop Hunting is a widespread issue across all marketplaces or is specific to certain ones. This information is crucial to understanding the scope and applicability of the proposed system. questions: - The authors have utilized both the textual description and image attributes of NFTs to generate features. The ablation study indicates the usefulness of these multimodal features. However, it is unclear why these two features are significant in the context of Airdrop Hunters. In the threat model, these two attributes are defined by the vendor (victim) and should remain consistent across legitimate users and attackers (hunters). This suggests that these attributes may not be very useful in detecting hunters. I suspect that the multimodal feature only serves as a unique identifier for **differentiating NFT assets**. If that is the case, using a multimodal model instead of a simple hash ID seems to be overkill here. Please provide justification for why these advanced features are necessary in **differentiating attackers**. - The evaluation of the system is based solely on data from one marketplace - Blur. There are numerous popular NFT marketplaces, such as OpenSea, Rarible, Mintable, Foundation, and Nifty Gateway. Is Airdrop Hunting a common problem across these marketplaces, or is it specific to Blur? The authors should clarify this to provide a better understanding of the system's applicability and effectiveness across different platforms. ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0utESEzD6E
ARTEMIS: Detecting Airdrop Hunters in NFT Markets with a Graph Learning System
[ "Chenyu Zhou", "Hongzhou Chen", "Hao Wu", "Junyu Zhang", "Wei Cai" ]
As Web3 projects leverage airdrops to incentivize participation, airdrop hunters tactically amass wallet addresses to capitalize on token giveaways. This poses challenges to the decentralization goal. Current detection approaches tailored for cryptocurrencies overlook the nuances of non-fungible tokens (NFTs). We introduce ARTEMIS, an optimized graph neural network system for identifying airdrop hunters in NFT transactions. ARTEMIS captures NFT airdrop hunters through: (1) a multimodal module extracting visual and textual insights from NFT metadata using Transformer models; (2) a tailored node aggregation function chaining NFT transaction sequences, retaining behavioral insights; (3) engineered features based on market manipulation theories for detecting anomalous trading. Evaluated on decentralized exchange Blur's data, ARTEMIS significantly outperforms baselines in pinpointing hunters. This pioneering computational solution for an emergent Web3 phenomenon has broad applicability for blockchain anomaly detection.
[ "Airdrop Hunters", "Web3", "NFTs", "Graph Neural Network", "Multimodal Deep Learning" ]
https://openreview.net/pdf?id=0utESEzD6E
WUbYBQo2bV
decision
1,705,909,227,767
0utESEzD6E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: While the proposal might suffer from some over-simplification, such as little information provided about ground-truth labeling and a need for further refinement of the proposed methodology (the detection performance is not particularly good), the addressed problem is novel and there is convergence on the fact that the quality of the technical analysis is very high (combining solutions that, per se, already exist). There has been an active and useful exchange between the authors and reviewers.
0utESEzD6E
ARTEMIS: Detecting Airdrop Hunters in NFT Markets with a Graph Learning System
[ "Chenyu Zhou", "Hongzhou Chen", "Hao Wu", "Junyu Zhang", "Wei Cai" ]
As Web3 projects leverage airdrops to incentivize participation, airdrop hunters tactically amass wallet addresses to capitalize on token giveaways. This poses challenges to the decentralization goal. Current detection approaches tailored for cryptocurrencies overlook the nuances of non-fungible tokens (NFTs). We introduce ARTEMIS, an optimized graph neural network system for identifying airdrop hunters in NFT transactions. ARTEMIS captures NFT airdrop hunters through: (1) a multimodal module extracting visual and textual insights from NFT metadata using Transformer models; (2) a tailored node aggregation function chaining NFT transaction sequences, retaining behavioral insights; (3) engineered features based on market manipulation theories for detecting anomalous trading. Evaluated on decentralized exchange Blur's data, ARTEMIS significantly outperforms baselines in pinpointing hunters. This pioneering computational solution for an emergent Web3 phenomenon has broad applicability for blockchain anomaly detection.
[ "Airdrop Hunters", "Web3", "NFTs", "Graph Neural Network", "Multimodal Deep Learning" ]
https://openreview.net/pdf?id=0utESEzD6E
FasxN9iUFN
official_review
1,700,675,642,042
0utESEzD6E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1716/Reviewer_2hfh" ]
review: This paper introduces ARTEMIS, a graph neural network (GNN) system to detect airdrop hunters in NFT transactions. The main idea is to improve GNN sampling to consider NFT transaction paths and augment the model with metadata (text/image) embedding and price features. The system is evaluated on Blur’s data. Strengths: - The proposed system is well explained, and the designs are reasonable. - The behavior analysis of airdrop hunters is informative. Weaknesses: - Little information is provided on ground-truth labeling. - The detection performance is not good. - It is unclear what the text/image embedding is trying to achieve - Unclear if the detection method can be generalized to other airdrop applications The system is largely reasonable by considering NFT metadata, NFT transactions, and anomalous trading prices in the modeling process. I have the following concerns. **Data collection** I don’t understand how the ground-truth labels are obtained. The paper stated that “we compared airdrop records to identify airdrop hunters meticulously. Subsequently, we sampled varying hunter scales and visualized microscopic transaction paths to validate data reliability” This is extremely vague and I cannot figure it out what is done here. Further details are needed on the ground-truth labeling procedure and the evidence of airdrop hunting. **Feature Analysis** For the hold time analysis, the paper argues the hold time of airdrop hunters is shorter (36 days) than regulars (53 days). I don’t think this is a super strong indicator given the large variance of the two distributions. Also, their median hold time seems to be quite similar based on the provided figure. What are the image and text features supposed to catch? It would be helpful to explain the intuitions behind the embedding. Do you expect the airdrop hunters are go after those with similar text/words and images? Are these features trying to capture some form of “similarity”? Please clarify. **Generalizability and Accuracy** The system is designed based on a single dataset from Blur. It is unclear if the methodology can be generalized to airdrops at other platforms. Unfortunately, after extensive feature engineering, the system performance is not satisfying. 0.71 precision and 0.729 recall is too low to deploy as a detector. The paper also lacks error analyses to understand what caused the misclassification. Broken references: - “Fig. 6 shows a detailed illustration of the process, which can be extended to multi-hop neighbors” – I don’t think fig6 is the right figure for this. - “We tested the address distribution in the blur market and found it also follows a power-law distribution, with the results illustrated in fig. ??” questions: Please see the above review. ethics_review_flag: No ethics_review_description: NA scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 2 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0utESEzD6E
ARTEMIS: Detecting Airdrop Hunters in NFT Markets with a Graph Learning System
[ "Chenyu Zhou", "Hongzhou Chen", "Hao Wu", "Junyu Zhang", "Wei Cai" ]
As Web3 projects leverage airdrops to incentivize participation, airdrop hunters tactically amass wallet addresses to capitalize on token giveaways. This poses challenges to the decentralization goal. Current detection approaches tailored for cryptocurrencies overlook the nuances of non-fungible tokens (NFTs). We introduce ARTEMIS, an optimized graph neural network system for identifying airdrop hunters in NFT transactions. ARTEMIS captures NFT airdrop hunters through: (1) a multimodal module extracting visual and textual insights from NFT metadata using Transformer models; (2) a tailored node aggregation function chaining NFT transaction sequences, retaining behavioral insights; (3) engineered features based on market manipulation theories for detecting anomalous trading. Evaluated on decentralized exchange Blur's data, ARTEMIS significantly outperforms baselines in pinpointing hunters. This pioneering computational solution for an emergent Web3 phenomenon has broad applicability for blockchain anomaly detection.
[ "Airdrop Hunters", "Web3", "NFTs", "Graph Neural Network", "Multimodal Deep Learning" ]
https://openreview.net/pdf?id=0utESEzD6E
9i0w6S9eyg
official_review
1,701,056,698,113
0utESEzD6E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1716/Reviewer_MYPg" ]
review: The study focuses on detecting Airdrop Hunters within the NFT trading domain in Web3. It introduces ARTEMIS as a solution with three key components. Firstly, the system employs a tailored neighbor sampling method and aggregator to connect multi-hop NFT transaction sequences. Secondly, ARTEMIS incorporates modules for multimodal feature extraction - images and descriptions. These modules leverage Transformer-based pre-trained models to extract both visual and textual insights from NFTs. Thirdly, the system engineers the representation of common NFT prices and advanced features focused on hunters, drawing from market manipulation theories and domain knowledge. Strength: 1. The authors gather a real-world dataset which can be used in future related research. 2. The idea of using transaction-based paths for node sampling is well designed, and based on their claim it is the first work to use ML for airdrop hunting. 3. The evaluation shows the effectiveness of the model for the problem compared to other graph-based and non-graph methods. Weakness: 1. Except for the novelty of the problem, the core of the methodology doesn't look novel regarding graph learning and NFT feature extraction. 2. The presentation of the paper can be further improved regarding the attack model of the airdrop hunter and the necessity of using an ML model to detect hunters. 3. Experiments are oversimplified. The characteristics of the patterns of airdrop hunters are not demonstrated. 4. There are some minor problems in the text. For example, on page 7, fig ?? should be corrected. Also, line 49. questions: 1. The code and the link to the dataset are missing in the paper. Are the authors planning to open source them? 2. The model uses attention for fusion. Can the authors interpret the attention distribution for the text and image to clarify which part has made the most contribution? 3. Are there other datasets that can be used to further investigate the results? 4. What is the attack model of the airdrop hunter? 5. Based on Figure 2, it looks like the airdrop hunters usually open multiple accounts and make fake transactions among them, leading to the creation of cycles in user graphs. Can this problem be solved by cycle detection in the graph? Why is an ML method a good way to address this problem? ethics_review_flag: No ethics_review_description: None scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 3 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
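Question 5 above, on whether cycle detection alone could catch hunters, corresponds to a simple graph baseline such as the sketch below; the edge-list input format is an assumption made for illustration.

```python
import networkx as nx

def wallets_in_short_cycles(transfers, max_len=4):
    """transfers: iterable of (from_wallet, to_wallet) NFT transfer edges.
    Returns wallets that sit on a directed cycle of length <= max_len."""
    g = nx.DiGraph()
    g.add_edges_from(transfers)
    flagged = set()
    # Note: exhaustive cycle enumeration is worst-case exponential in graph size.
    for cycle in nx.simple_cycles(g):
        if len(cycle) <= max_len:
            flagged.update(cycle)
    return flagged
```

Such enumeration is expensive on dense transaction graphs and also flags benign wallets that legitimately trade back and forth, which is the usual argument for a learned detector, but it remains a cheap baseline worth reporting.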
0mNYLhS1pN
Generative News Recommendation
[ "Shen Gao", "Jiabao Fang", "Quan Tu", "Zhitao Yao", "Zhumin Chen", "Pengjie Ren", "Zhaochun Ren" ]
Most existing news recommendation methods tackle this task by conducting semantic matching between candidate news and user representation produced by past clicked news. However, they ignore the higher-level associative relationships between news articles, and building these relationships typically requires common-sense knowledge and reasoning ability. Moreover, the definition of these methods dictates that they can only deliver news articles as-is. On the contrary, integrating several relevant news articles into a coherent narrative would assist users in gaining a quicker and more comprehensive understanding of events. In this paper, we propose a novel generative news recommendation paradigm that includes two steps: (1) Leveraging the internal knowledge and reasoning capabilities of the Large Language Model (LLM) to perform high-level matching between candidate news and user representation; (2) Generating a coherent and logically structured narrative based on the associations between related news and user interests, thus engaging users in further reading of the news. Specifically, we propose Generative News Recommendation (GNR). First, we compose the multi-level representation of news and users by leveraging the LLM to generate theme-level representations and combine them with semantic-level representations. Next, in order to generate a coherent narrative, we explore the news relationships and filter the related news according to user preferences. Finally, we propose a novel training method named UIFT to train the LLM to fuse multiple related news articles into a coherent narrative. Extensive experiments show that GNR can improve recommendation accuracy and ultimately generate more personalized and factually consistent narratives.
[ "News Recommendation; Generative Recommendation" ]
https://openreview.net/pdf?id=0mNYLhS1pN
zYZzTizu75
official_review
1,699,159,196,532
0mNYLhS1pN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission733/Reviewer_TUCw" ]
review: This paper introduces the concept of "Generative News Recommendation", which aims to generate coherent narratives for users. It consists of a three-stage pipeline: 1. news and user summarization; 2. news exploration; and 3. narrative fusion. The entire article's writing is passable at best, with some parts requiring multiple readings to comprehend. Pros: - Probably a new field to explore, enhancing the user reading experience. - A nice incorporation of LLMs, using their powerful comprehension and summarization abilities. - Good performance displayed in the experiments. Cons: - I doubt the effectiveness of the news exploration and narrative fusion. In reality, users tend to read news of multiple categories in their browsing sessions and care about multiple events at once. Therefore, it would be hard to select the **focal news** and thus hard to perform news exploration. - Missing an important baseline [1] you mentioned in the related work, which also produces item-level (content summarizer) and user-level (user profiler) representations in the news recommendation domain. You can also include other research that combines LLMs with news recommendation [2][3]. - If I understand correctly, the first row of the experimental results in Table 3 (Sem News and Sem User) did not use data generated by ChatGPT. I am doubtful of the efficacy of the experiment involving the PLM4NR variant that includes the abstract, as it may not outperform the title-only version. - Clarity: Line 300. PLM used in [4] is only used as the backbone of the news encoder. [1] ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models [2] LKPNR: LLM and KG for Personalized News Recommendation Framework [3] A Preliminary Study of ChatGPT on News Recommendation: Personalization, Provider Fairness, Fake News [4] Empowering News Recommendation with Pre-trained Language Models questions: Q1: The authors only select a small-scale subset (i.e., politics) of the MIND dataset. Is it reasonable to provide recommendations for a single category of news? Please first explain. If the answer is no, consider using a larger dataset and providing users with solutions that cater to multiple interests. Otherwise, please conduct more experiments on multiple subsets (e.g., travel, lifestyle, crime). Q2: Can you compare GNR with the GENRE framework proposed by ONCE[1], which also uses ChatGPT to generate news summaries and user profiles? And what are the differences? Q3: Please refer to the third Con and analyse the experiments on the PLM4NR variants. ethics_review_flag: No ethics_review_description: null scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 4 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0mNYLhS1pN
Generative News Recommendation
[ "Shen Gao", "Jiabao Fang", "Quan Tu", "Zhitao Yao", "Zhumin Chen", "Pengjie Ren", "Zhaochun Ren" ]
Most existing news recommendation methods tackle this task by conducting semantic matching between candidate news and user representation produced by past clicked news. However, they ignore the higher-level associative relationships between news articles, and building these relationships typically requires common-sense knowledge and reasoning ability. Moreover, the definition of these methods dictates that they can only deliver news articles as-is. On the contrary, integrating several relevant news articles into a coherent narrative would assist users in gaining a quicker and more comprehensive understanding of events. In this paper, we propose a novel generative news recommendation paradigm that includes two steps: (1) Leveraging the internal knowledge and reasoning capabilities of the Large Language Model (LLM) to perform high-level matching between candidate news and user representation; (2) Generating a coherent and logically structured narrative based on the associations between related news and user interests, thus engaging users in further reading of the news. Specifically, we propose Generative News Recommendation (GNR). First, we compose the multi-level representation of news and users by leveraging the LLM to generate theme-level representations and combine them with semantic-level representations. Next, in order to generate a coherent narrative, we explore the news relationships and filter the related news according to user preferences. Finally, we propose a novel training method named UIFT to train the LLM to fuse multiple related news articles into a coherent narrative. Extensive experiments show that GNR can improve recommendation accuracy and ultimately generate more personalized and factually consistent narratives.
[ "News Recommendation; Generative Recommendation" ]
https://openreview.net/pdf?id=0mNYLhS1pN
keu3Z2Gk6g
decision
1,705,909,245,496
0mNYLhS1pN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: This work presents a novel paradigm to make news recommendations with generative models. The paradigm was systematically designed and clearly presented. Experimental results demonstrate the effectiveness of the proposed paradigm. Reviewers offered valuable suggestions, and the authors' responses provide new data that should be incorporated into this paper.
0mNYLhS1pN
Generative News Recommendation
[ "Shen Gao", "Jiabao Fang", "Quan Tu", "Zhitao Yao", "Zhumin Chen", "Pengjie Ren", "Zhaochun Ren" ]
Most existing news recommendation methods tackle this task by conducting semantic matching between candidate news and user representation produced by past clicked news. However, they ignore the higher-level associative relationships between news articles, and building these relationships typically requires common-sense knowledge and reasoning ability. Moreover, the definition of these methods dictates that they can only deliver news articles as-is. On the contrary, integrating several relevant news articles into a coherent narrative would assist users in gaining a quicker and more comprehensive understanding of events. In this paper, we propose a novel generative news recommendation paradigm that includes two steps: (1) Leveraging the internal knowledge and reasoning capabilities of the Large Language Model (LLM) to perform high-level matching between candidate news and user representation; (2) Generating a coherent and logically structured narrative based on the associations between related news and user interests, thus engaging users in further reading of the news. Specifically, we propose Generative News Recommendation (GNR). First, we compose the multi-level representation of news and users by leveraging the LLM to generate theme-level representations and combine them with semantic-level representations. Next, in order to generate a coherent narrative, we explore the news relationships and filter the related news according to user preferences. Finally, we propose a novel training method named UIFT to train the LLM to fuse multiple related news articles into a coherent narrative. Extensive experiments show that GNR can improve recommendation accuracy and ultimately generate more personalized and factually consistent narratives.
[ "News Recommendation; Generative Recommendation" ]
https://openreview.net/pdf?id=0mNYLhS1pN
im2nAV1Hfa
official_review
1,701,059,554,499
0mNYLhS1pN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission733/Reviewer_cMUN" ]
review: This paper introduces a novel generative news recommendation paradigm GNR, aimed at enhancing news recommendation and fulfilling user needs more precisely using LLM. Extensive experiments conducted on MIND datasets demonstrate that the GNR enhances recommendation performance and generates personalized and factually consistent narratives. Strengths: 1. Overall, the proposed approach is well designed and the results quite convincing. 2. This paper is well written and organized. 3. The experiments are very solid. Weakness: 1. The dimensions of all vectors and matrices should be explicitly provided. Additionally, for clarity, vectors should be represented using bold lowercase letters, and matrices should be denoted by bold uppercase letters. 2. The description of 'Multi-level Representation Combination' might be imprecise as the paper only mentions two levels. Typically, 'multi-level' implies three or more levels. Therefore, the terminology used in discussing the representation combination should accurately reflect the number of levels addressed in this context. 3. Some operations are unclear and inaccurate, such as, a) In Eq. (1), there is no clarification for 'Linear,' and it is unclear whether \alpha is a vector or a scalar weight; b) In Eq. (2), it is imperative to ensure that the computation of $p_i$ does not encounter a division by zero scenario. For instance, incorporating a regularization term in the denominator could address this concern; c) In Eq. (3), the base of the logarithmic function should be explicitly stated, specifying whether it is a logarithm with base 2 or the natural logarithm with base e. This clarification should also be applied to Eq. (6); d) In Eq. (4), the term $e_j^{\text{news}}$ denotes the positive new embedding rather than the positive news itself, and the interpretation of $\varepsilon$ requires further elaboration; questions: NA ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
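One plausible reading of the ambiguity flagged for Eq. (1) in the review above, with 'Linear' taken as a single linear layer over the concatenated semantic- and theme-level vectors and alpha as a scalar gate, is sketched below. This is an interpretation for illustration only and may not match the authors' actual definition.

```python
import torch
import torch.nn as nn

class MultiLevelCombine(nn.Module):
    """Combine semantic-level and theme-level news/user embeddings with a
    learned scalar gate alpha (one plausible reading of Eq. (1), not the
    confirmed GNR implementation)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(2 * dim, 1)

    def forward(self, sem, theme):                 # both of shape (B, dim)
        alpha = torch.sigmoid(self.linear(torch.cat([sem, theme], dim=-1)))
        return alpha * sem + (1 - alpha) * theme
```

Whether alpha is a scalar or a per-dimension vector is exactly the point the reviewer asks the authors to state explicitly, together with the dimensions of all vectors and matrices.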
0mNYLhS1pN
Generative News Recommendation
[ "Shen Gao", "Jiabao Fang", "Quan Tu", "Zhitao Yao", "Zhumin Chen", "Pengjie Ren", "Zhaochun Ren" ]
Most existing news recommendation methods tackle this task by conducting semantic matching between candidate news and user representation produced by past clicked news. However, they ignore the higher-level associative relationships between news articles, and building these relationships typically requires common-sense knowledge and reasoning ability. Moreover, the definition of these methods dictates that they can only deliver news articles as-is. On the contrary, integrating several relevant news articles into a coherent narrative would assist users in gaining a quicker and more comprehensive understanding of events. In this paper, we propose a novel generative news recommendation paradigm that includes two steps: (1) Leveraging the internal knowledge and reasoning capabilities of the Large Language Model (LLM) to perform high-level matching between candidate news and user representation; (2) Generating a coherent and logically structured narrative based on the associations between related news and user interests, thus engaging users in further reading of the news. Specifically, we propose Generative News Recommendation (GNR). First, we compose the multi-level representation of news and users by leveraging the LLM to generate theme-level representations and combine them with semantic-level representations. Next, in order to generate a coherent narrative, we explore the news relationships and filter the related news according to user preferences. Finally, we propose a novel training method named UIFT to train the LLM to fuse multiple related news articles into a coherent narrative. Extensive experiments show that GNR can improve recommendation accuracy and ultimately generate more personalized and factually consistent narratives.
[ "News Recommendation; Generative Recommendation" ]
https://openreview.net/pdf?id=0mNYLhS1pN
dsWa09toqZ
official_review
1,701,153,957,911
0mNYLhS1pN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission733/Reviewer_7iL9" ]
review: This paper presents Generative News Recommendation. First, it utilizes a Large Language Model (LLM) for high-level matching between candidate news and user representation. This involves generating multi-level representations of news and users. Second, it generates a coherent narrative by exploring news relationships, filtering related news based on user preferences, and using a novel training method called User Interest-guided Fusion Training (UIFT) to train the LLM to merge multiple related news articles into a coherent story. Pros: 1. It is interesting to incorporate LLMs into news recommendation. 2. The experimental results show some improvement. Cons: 1. The paper writing quality should be improved. For example, it is difficult to understand Figure 2. 2. Only one dataset is used for experiments, which is not sufficient. 3. Only two baseline methods are compared. More baselines should be compared. 4. The motivation of this work, illustrated in the introduction, seems to be well handled by knowledge-graph-incorporated news recommendation methods. Why not compare with KG-based news recommendation methods? questions: Can knowledge-graph-incorporated news recommendation methods handle the problem illustrated in the introduction section? Can you make some comparison with them? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
0mNYLhS1pN
Generative News Recommendation
[ "Shen Gao", "Jiabao Fang", "Quan Tu", "Zhitao Yao", "Zhumin Chen", "Pengjie Ren", "Zhaochun Ren" ]
Most existing news recommendation methods tackle this task by conducting semantic matching between candidate news and user representation produced by past clicked news. However, they ignore the higher-level associative relationships between news articles, and building these relationships typically requires common-sense knowledge and reasoning ability. Moreover, the definition of these methods dictates that they can only deliver news articles as-is. On the contrary, integrating several relevant news articles into a coherent narrative would assist users in gaining a quicker and more comprehensive understanding of events. In this paper, we propose a novel generative news recommendation paradigm that includes two steps: (1) Leveraging the internal knowledge and reasoning capabilities of the Large Language Model (LLM) to perform high-level matching between candidate news and user representation; (2) Generating a coherent and logically structured narrative based on the associations between related news and user interests, thus engaging users in further reading of the news. Specifically, we propose Generative News Recommendation (GNR). First, we compose the multi-level representation of news and users by leveraging the LLM to generate theme-level representations and combine them with semantic-level representations. Next, in order to generate a coherent narrative, we explore the news relationships and filter the related news according to user preferences. Finally, we propose a novel training method named UIFT to train the LLM to fuse multiple related news articles into a coherent narrative. Extensive experiments show that GNR can improve recommendation accuracy and ultimately generate more personalized and factually consistent narratives.
[ "News Recommendation; Generative Recommendation" ]
https://openreview.net/pdf?id=0mNYLhS1pN
LSlAzrPOaD
official_review
1,700,727,484,155
0mNYLhS1pN
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission733/Reviewer_Fi73" ]
review: The paper proposes a new approach to news recommendation called Generative News Recommendation (GNR), which uses internal knowledge and reasoning capabilities of a Large Language Model to match candidate news with user representation, and then generates a coherent narrative based on the associations between related news and user interests. The approach, called UIFT, is shown to improve recommendation accuracy and result in more personalized and factually consistent narratives. The idea is novel: it not only performs personalized matching based on the LLM but also generates narratives based on the related news set. It opens new research opportunities for combining recommendation and generation with the power of LLMs. However, it would be better to conduct more comprehensive experiments. For example, I am interested in the comparison between personalized narrative generation and non-personalized news summarization/generation based on LLMs. I recommend that the authors provide more generated examples for readers. questions: How much difference does the personalization make in narrative generation? ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 5 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0iwNrRRIiZ
Masked Graph Autoencoder with Non-discrete Bandwidths
[ "Ziwen Zhao", "Yuhua Li", "Yixiong Zou", "Jiliang Tang", "Ruixuan Li" ]
Masked graph autoencoders have emerged as a powerful graph self-supervised learning method that has yet to be fully explored. In this paper, we unveil that the existing discrete edge masking and binary link reconstruction strategies are insufficient to learn topologically informative representations, from the perspective of message propagation on graph neural networks. These limitations include blocking message flows, vulnerability to over-smoothness, and suboptimal neighborhood discriminability. Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution. These masks restrict the amount of output messages for each edge, referred to as "bandwidths". We propose a novel, informative, and effective topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. We demonstrate its powerful graph topological learning ability both theoretically and empirically. Our proposed framework outperforms representative baselines in both self-supervised link prediction (improving the discrete edge reconstructors by at most 20%) and node classification on numerous datasets, solely with a structure-learning pretext. Our implementation is available at https://anonymous.4open.science/r/anadnaB.
[ "Graph neural networks", "graph self-supervised learning", "masked graph autoencoders" ]
https://openreview.net/pdf?id=0iwNrRRIiZ
b7gvlzyNeO
decision
1,705,909,210,390
0iwNrRRIiZ
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: The paper develops a new self-supervised learning method that uses non-discrete edge masking. Pros: * Self-supervised learning is an important problem in graph learning and the model trained with the self-supervised method has good performance in both node classification and link prediction. * The method is quite novel. * The paper is well-written. * The paper has extensive evaluation of the method. Cons: * The datasets used in the paper are quite small. Although the authors show performance numbers on relatively larger datasets, such as ogbn-mag and ogbl-ppa, the authors should use larger datasets in the final version. * The evaluation method for link prediction raised concerns from a reviewer. I agree with the reviewer that the evaluation method used in the paper is relatively less common. The authors should consider having more explanations why the evaluation method is used in the paper. If the space allows, maybe the results of both evaluation methods can be provided in the paper. * The paper should have analysis of the computation and memory complexity to indicate that the method is scalable to large datasets.
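To contrast the discrete and non-discrete masking strategies discussed in this record, the sketch below throttles each edge's message with a continuous bandwidth instead of dropping it with a Bernoulli mask. The uniform sampling and the dense adjacency are simplifications for illustration; the paper's actual sampling distribution and sparse implementation differ.

```python
import torch

def bernoulli_edge_mask(adj, mask_rate=0.5):
    """Discrete baseline: each edge is either kept or fully blocked."""
    keep = (torch.rand_like(adj) > mask_rate).float()
    return adj * keep

def bandwidth_edge_mask(adj, low=0.0, high=1.0):
    """Non-discrete masking: every edge still carries a message, but its
    amount is throttled by a continuous bandwidth in [low, high]."""
    bandwidth = torch.empty_like(adj).uniform_(low, high) * (adj > 0).float()
    return adj * bandwidth, bandwidth      # masked adjacency + prediction targets

def propagate(adj_masked, x, weight):
    """One GCN-style step over the throttled adjacency (sketch)."""
    deg = adj_masked.sum(dim=-1, keepdim=True).clamp(min=1e-6)
    return (adj_masked / deg) @ x @ weight
```

Because every edge still passes some message, gradients flow along all graph paths during pretraining, which is the intuition behind the claimed robustness to blocked message flows and over-smoothing.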
0iwNrRRIiZ
Masked Graph Autoencoder with Non-discrete Bandwidths
[ "Ziwen Zhao", "Yuhua Li", "Yixiong Zou", "Jiliang Tang", "Ruixuan Li" ]
Masked graph autoencoders have emerged as a powerful graph self-supervised learning method that has yet to be fully explored. In this paper, we unveil that the existing discrete edge masking and binary link reconstruction strategies are insufficient to learn topologically informative representations, from the perspective of message propagation on graph neural networks. These limitations include blocking message flows, vulnerability to over-smoothness, and suboptimal neighborhood discriminability. Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution. These masks restrict the amount of output messages for each edge, referred to as "bandwidths". We propose a novel, informative, and effective topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. We demonstrate its powerful graph topological learning ability both theoretically and empirically. Our proposed framework outperforms representative baselines in both self-supervised link prediction (improving the discrete edge reconstructors by at most 20%) and node classification on numerous datasets, solely with a structure-learning pretext. Our implementation is available at https://anonymous.4open.science/r/anadnaB.
[ "Graph neural networks", "graph self-supervised learning", "masked graph autoencoders" ]
https://openreview.net/pdf?id=0iwNrRRIiZ
YfsvczLljt
official_review
1,701,394,936,062
0iwNrRRIiZ
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission338/Reviewer_pid2" ]
review: The paper proposes a new masked graph autoencoder model with structure-aware non-discrete bandwidths. The main idea is to replace discrete masks with non-discrete edge masks drawn from a continuous and dispersive probability distribution. The problem studied in this work is popular and important. Experiments on several datasets (from small to large) demonstrate that the proposed model outperforms many baseline methods. Additional analyses such as ablation and parameter analysis are provided. Overall, this is good work. Strengths + Motivation: The problem studied in this work is popular and important. The motivation is clear. + Method: The proposed method is novel; exploring non-discrete edge masks is interesting and new. Theoretical justifications are also provided. + Experiments: Experiments are extensive and the proposed method outperforms baseline methods. + Presentation: The writing is good and well organized. Weaknesses/questions - I checked the results of the baseline methods in their original papers. Their reported results (e.g., MaskMAE) are better than the results reported in this paper. If different experimental settings were used, please explain why and clearly describe the differences. More analyses such as embedding visualization and case studies are suggested. questions: See the questions mentioned in the review above. ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0iwNrRRIiZ
Masked Graph Autoencoder with Non-discrete Bandwidths
[ "Ziwen Zhao", "Yuhua Li", "Yixiong Zou", "Jiliang Tang", "Ruixuan Li" ]
Masked graph autoencoders have emerged as a powerful graph self-supervised learning method that has yet to be fully explored. In this paper, we unveil that the existing discrete edge masking and binary link reconstruction strategies are insufficient to learn topologically informative representations, from the perspective of message propagation on graph neural networks. These limitations include blocking message flows, vulnerability to over-smoothness, and suboptimal neighborhood discriminability. Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution. These masks restrict the amount of output messages for each edge, referred to as "bandwidths". We propose a novel, informative, and effective topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. We demonstrate its powerful graph topological learning ability both theoretically and empirically. Our proposed framework outperforms representative baselines in both self-supervised link prediction (improving the discrete edge reconstructors by at most 20%) and node classification on numerous datasets, solely with a structure-learning pretext. Our implementation is available at https://anonymous.4open.science/r/anadnaB.
[ "Graph neural networks", "graph self-supervised learning", "masked graph autoencoders" ]
https://openreview.net/pdf?id=0iwNrRRIiZ
QUP7eE3Anl
official_review
1,701,131,426,319
0iwNrRRIiZ
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission338/Reviewer_itKW" ]
review: Summary: This paper revisits the problem of graph autoencoding via masking. It first shows theoretically that removal of random edges to perform masking results in oversmoothed representations. It goes further, showing *why* this happens. To overcome this and other shortcomings, the authors propose _Bandana_, a novel masking strategy that samples masks from a continuous distribution. In the encoder, message passing through edges is _partially_ impeded by the selected bandwidths, and the goal of the decoder is to estimate how much each edge was masked. Overall, I lean toward acceptance of this paper. Pros: 1.) The theory and experiments complement one another well. 2.) The performance of Bandana on a variety of tasks is a substantial empirical improvement on state of the art methods. 3.) The writing is clear. questions: N/A ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
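To make the bandwidth-masking mechanism described in this review concrete, here is a minimal sketch of one masked message-passing step followed by a bandwidth-regression objective. The uniform bandwidth distribution, mean aggregation, and dot-product edge scorer are illustrative assumptions rather than the paper's actual choices, and all variable names are hypothetical.

```python
import numpy as np

# Minimal sketch of bandwidth-style edge masking in one message-passing layer.
# The uniform bandwidth distribution and mean aggregation are illustrative
# assumptions, not necessarily the distribution or GNN used by the paper.

rng = np.random.default_rng(0)

num_nodes, feat_dim = 5, 8
x = rng.normal(size=(num_nodes, feat_dim))                  # node features
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])  # directed src -> dst

# Discrete masking would drop edges entirely (bandwidth 0 or 1);
# non-discrete masking instead samples a continuous "bandwidth" per edge.
bandwidths = rng.uniform(0.0, 1.0, size=len(edges))

# Message passing: each edge forwards only a fraction of its message.
out = np.zeros_like(x)
deg = np.zeros(num_nodes)
for (src, dst), b in zip(edges, bandwidths):
    out[dst] += b * x[src]
    deg[dst] += 1.0
out /= np.maximum(deg, 1.0)[:, None]

# Decoder objective (sketch): score each edge from the encoder output and
# regress against the sampled bandwidths, e.g. with a simple MSE loss.
def edge_scores(h, edges):
    return np.array([h[s] @ h[d] for s, d in edges])

pred = edge_scores(out, edges)
mse = np.mean((pred - bandwidths) ** 2)
print(f"bandwidth-regression loss: {mse:.4f}")
```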
0iwNrRRIiZ
Masked Graph Autoencoder with Non-discrete Bandwidths
[ "Ziwen Zhao", "Yuhua Li", "Yixiong Zou", "Jiliang Tang", "Ruixuan Li" ]
Masked graph autoencoders have emerged as a powerful graph self-supervised learning method that has yet to be fully explored. In this paper, we unveil that the existing discrete edge masking and binary link reconstruction strategies are insufficient to learn topologically informative representations, from the perspective of message propagation on graph neural networks. These limitations include blocking message flows, vulnerability to over-smoothness, and suboptimal neighborhood discriminability. Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution. These masks restrict the amount of output messages for each edge, referred to as "bandwidths". We propose a novel, informative, and effective topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. We demonstrate its powerful graph topological learning ability both theoretically and empirically. Our proposed framework outperforms representative baselines in both self-supervised link prediction (improving the discrete edge reconstructors by at most 20%) and node classification on numerous datasets, solely with a structure-learning pretext. Our implementation is available at https://anonymous.4open.science/r/anadnaB.
[ "Graph neural networks", "graph self-supervised learning", "masked graph autoencoders" ]
https://openreview.net/pdf?id=0iwNrRRIiZ
PBcL4doVYv
official_review
1,700,817,477,044
0iwNrRRIiZ
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission338/Reviewer_UNcB" ]
review: ## Summary: ## The paper explores an approach in graph self-supervised learning by addressing the limitations of existing discrete edge masking and binary link reconstruction strategies. The authors propose a topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. They also demonstrate good graph topological learning ability in the experiments. ## Strengths: ## - The research topic of masked graph autoencoders is very interesting and important for the graph machine learning community. - The authors provide solid theoretical analyses of the method. - The experiments show the effectiveness of the method. ## Weaknesses: ## - The authors should conduct fair comparisons under the same hyperparameter-tuning strategy, rather than only searching hyperparameters for the proposed method (Table 6). - More analyses should be presented for the experimental results to support the main points in the paper. - The connection between the presented theory and graphs themselves seems weak. The authors should pay more attention to the properties of graphs rather than general machine learning. - The experiments should be conducted on more large-scale benchmarks, such as additional datasets in the Open Graph Benchmark besides ogbn-arxiv. questions: What is the meaning of NODATA (the model cannot perform due to the specific data format) in the tables? Why can these results not be reported? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 2 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0iwNrRRIiZ
Masked Graph Autoencoder with Non-discrete Bandwidths
[ "Ziwen Zhao", "Yuhua Li", "Yixiong Zou", "Jiliang Tang", "Ruixuan Li" ]
Masked graph autoencoders have emerged as a powerful graph self-supervised learning method that has yet to be fully explored. In this paper, we unveil that the existing discrete edge masking and binary link reconstruction strategies are insufficient to learn topologically informative representations, from the perspective of message propagation on graph neural networks. These limitations include blocking message flows, vulnerability to over-smoothness, and suboptimal neighborhood discriminability. Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution. These masks restrict the amount of output messages for each edge, referred to as "bandwidths". We propose a novel, informative, and effective topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. We demonstrate its powerful graph topological learning ability both theoretically and empirically. Our proposed framework outperforms representative baselines in both self-supervised link prediction (improving the discrete edge reconstructors by at most 20%) and node classification on numerous datasets, solely with a structure-learning pretext. Our implementation is available at https://anonymous.4open.science/r/anadnaB.
[ "Graph neural networks", "graph self-supervised learning", "masked graph autoencoders" ]
https://openreview.net/pdf?id=0iwNrRRIiZ
ESzSe1wgzO
official_review
1,700,625,782,038
0iwNrRRIiZ
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission338/Reviewer_sz9j" ]
review: The paper introduces Bandana, a novel masked graph autoencoder framework. Bandana diverges from the traditional discrete mask-then-reconstruct approach, employing a bandwidth masking and reconstruction scheme. It uses non-discrete edge masks sampled from a continuous and dispersive probability distribution, aiming to overcome limitations like blocking message flows and suboptimal neighborhood discriminability inherent in binary link reconstruction strategies. The framework's effectiveness is demonstrated both theoretically and empirically, outperforming representative baselines in link prediction and node classification tasks. [+] The introduction of non-discrete masking and bandwidth prediction is a significant departure from traditional methods, potentially offering more nuanced and efficient graph representation learning. [+] The paper provides both theoretical insights and empirical evidence to support the superiority of the Bandana framework over existing methods. [+] Extensive experiments across multiple datasets validate the model's efficacy and robustness. [-] The paper would benefit from more analysis of how bandwidths affect the results; more case studies would help. [-] Evaluation on larger datasets would strengthen the work. questions: 1. How does the Bandana framework manage the computational complexity introduced by non-discrete bandwidths? 2. Is there a risk of overfitting with the Bandana model, especially in scenarios with limited training data? 3. Can the authors provide more insights into the interpretability of the model, especially regarding how bandwidths affect message propagation? ethics_review_flag: No ethics_review_description: n/a scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
0iwNrRRIiZ
Masked Graph Autoencoder with Non-discrete Bandwidths
[ "Ziwen Zhao", "Yuhua Li", "Yixiong Zou", "Jiliang Tang", "Ruixuan Li" ]
Masked graph autoencoders have emerged as a powerful graph self-supervised learning method that has yet to be fully explored. In this paper, we unveil that the existing discrete edge masking and binary link reconstruction strategies are insufficient to learn topologically informative representations, from the perspective of message propagation on graph neural networks. These limitations include blocking message flows, vulnerability to over-smoothness, and suboptimal neighborhood discriminability. Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution. These masks restrict the amount of output messages for each edge, referred to as "bandwidths". We propose a novel, informative, and effective topological masked graph autoencoder using bandwidth masking and a layer-wise bandwidth prediction objective. We demonstrate its powerful graph topological learning ability both theoretically and empirically. Our proposed framework outperforms representative baselines in both self-supervised link prediction (improving the discrete edge reconstructors by at most 20%) and node classification on numerous datasets, solely with a structure-learning pretext. Our implementation is available at https://anonymous.4open.science/r/anadnaB.
[ "Graph neural networks", "graph self-supervised learning", "masked graph autoencoders" ]
https://openreview.net/pdf?id=0iwNrRRIiZ
61r8h8PsSa
official_review
1,700,636,113,872
0iwNrRRIiZ
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission338/Reviewer_h9UR" ]
review: In this paper, the authors propose a new masked graph autoencoder method for graph self-supervised learning. Specifically, the authors find that the existing discrete edge masking and binary link reconstruction methods suffer from blocked message flows, over-smoothing, and suboptimal neighborhood discriminability. To tackle these issues, the authors propose Bandana, a non-discrete edge masking method using bandwidth masking and layer-wise bandwidth prediction. Experimental results on node classification and link prediction demonstrate the effectiveness of the proposed method. Pros: (1) Self-supervised learning on graphs is a trending direction and masked graph autoencoders are a promising solution. (2) The paper establishes a theoretical relationship between the bandwidth mechanism and regularized denoising autoencoders, providing a solid theoretical basis for its approach. (3) The authors have provided the source code as well as experimental details for reproducibility. (4) The paper is well-written in general. Cons and questions: (1) The improvement of the proposed method over GraphMAE for node classification is somewhat marginal. (2) There is no discussion of the time complexity or efficiency of the proposed method; this could be added. (3) I also wonder whether the proposed approach can be generalized to types of graphs beyond those tested in the paper. questions: See above ethics_review_flag: No ethics_review_description: N.A. scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0hq1SCYftX
Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience
[ "Yanjun Zhang", "Ruoxi Sun", "Liyue Shen", "Guangdong Bai", "Jason Xue", "Mark Huasong Meng", "Xue Li", "Ryan Ko", "Surya Nepal" ]
The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training data leakage. In this work, we propose Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start with a set of discrete points and converges to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model), and collaboratively learn it towards the optimum. The updates to their own models are worked out in a secure collaborative way during the training process. In such a manner, CGD retains the ability of learning from distributed data but greatly diminishes information sharing. Such a strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent benefits of fairness. We theoretically and empirically demonstrate that decentralized CGD (i) provides a stronger differential privacy (DP) protection; (ii) is robust against the state-of-the-art poisoning privacy attacks; (iii) results in bounded fairness guarantee among participants; and (iv) provides high test accuracy (comparable with centralized learning) with a bounded convergence rate over four real-world datasets.
[ "privacy preservation", "decentralized federated learning", "fairness", "AI security" ]
https://openreview.net/pdf?id=0hq1SCYftX
TMnRWX0hoM
official_review
1,700,220,792,171
0hq1SCYftX
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1348/Reviewer_GJ5E" ]
review: 1. The process of updating the global model after local pruning is unclear, particularly in terms of how heterogeneous models are aggregated. 2. Each client is required to select a certain proportion of updates for aggregation, which significantly increases communication overhead. 3. The claimed enhancement of privacy protection does not actually stem from pruning, but rather from the addition of noise defined by differential privacy. Reducing dimensions lowers sensitivity, but this is essentially no different from existing methods. 4. Furthermore, the facts that the authors' idea relies on (lines 19-21) and Figure 2 are not convincing, lacking solid evidence or references to existing work. The summary in Table 1 overly downplays existing work and is not accurate. 5. The experiments lack comparison with the latest heterogeneous DFL aggregation mechanisms. 6. The background provided does not have a strong connection with the work described, and the scheme does not take into account the characteristics of industrial control equipment, despite the authors' claim that the FL scheme is designed for industrial equipment. questions: Please see the review comments. --- Rebuttals read. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 2 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
0hq1SCYftX
Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience
[ "Yanjun Zhang", "Ruoxi Sun", "Liyue Shen", "Guangdong Bai", "Jason Xue", "Mark Huasong Meng", "Xue Li", "Ryan Ko", "Surya Nepal" ]
The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training data leakage. In this work, we propose Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start with a set of discrete points and converges to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model), and collaboratively learn it towards the optimum. The updates to their own models are worked out in a secure collaborative way during the training process. In such a manner, CGD retains the ability of learning from distributed data but greatly diminishes information sharing. Such a strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent benefits of fairness. We theoretically and empirically demonstrate that decentralized CGD (i) provides a stronger differential privacy (DP) protection; (ii) is robust against the state-of-the-art poisoning privacy attacks; (iii) results in bounded fairness guarantee among participants; and (iv) provides high test accuracy (comparable with centralized learning) with a bounded convergence rate over four real-world datasets.
[ "privacy preservation", "decentralized federated learning", "fairness", "AI security" ]
https://openreview.net/pdf?id=0hq1SCYftX
LUiBfqvICF
official_review
1,700,643,599,759
0hq1SCYftX
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1348/Reviewer_WJ5t" ]
review: This paper presents a novel serverless approach to federated learning, focusing on enhancing privacy and fairness. This design achieves better fairness and stronger privacy protection under the same level of noise. Overall, I believe the proposed framework is innovative and feasible, though I have two concerns regarding the design choices and the experiments. # Choice of using both MPC and DP The design incorporates both Multi-Party Computation (MPC) and Differential Privacy (DP) for gradient privacy. While MPC secures gradient aggregation, DP introduces noise to local gradients. Specifically, each participant generates local gradients in each training round and then adds noise to these local gradients (step 2 on page 5). Subsequently, a randomly selected subset of participants securely aggregates their gradients using MPC (step 3 on page 5). These aggregated gradients are then used to update the participants' local models, with the update of the local model being controlled by the loss threshold LRR. However, the concurrent use of both MPC and DP is confusing to me. Each method, independently, appears to focus on preventing the direct transmission of participants' gradients, thereby avoiding privacy leakage. MPC alone seems capable of ensuring secure gradient aggregation, which should sufficiently protect the participants' real inputs. Conversely, employing DP alone might also be feasible, where participants receive only noised gradients from others. The overlapping roles of MPC and DP in the framework necessitate a clearer explanation of their combined utility in enhancing the overall privacy of the federated learning process. # Efficiency of MPC Concerning the computational cost of MPC, the authors note on page 3 that their approach "ensures the secrecy of individual updates while devolving the computational complexity to the local gradients." Given that cost is a common evaluation criterion in MPC-based solutions, I believe an analysis of this approach's computational cost should be included. Furthermore, comparing this cost with that of traditional Federated Learning plus MPC approaches would enhance the evaluation. While I recognize that this paper introduces a decentralized solution, thereby distributing the computational cost across each participant's local server, a comparative analysis would nonetheless provide a clearer understanding of the approach's efficiency. This suggestion is somewhat nitpicking, and I would not press for these comparative evaluations, but I expect their inclusion would enrich the paper's technical assessment. questions: - Could you please elaborate on the specific reasons for using both MPC and DP in the proposed approach? How do these two methods complement each other, and what unique benefits does each method offer to the overall privacy of the federated learning process? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
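To make the reviewer's reading of the protocol concrete, the following toy sketch shows how per-participant DP noise and MPC-style secure aggregation can compose: each party perturbs its local gradient, the noisy gradients are additively secret-shared, and only the aggregate is reconstructed. The Gaussian noise scale, the number of parties, and all names here are illustrative assumptions rather than the paper's actual CGD protocol (which additionally involves confined models and the LRR threshold).

```python
import numpy as np

# Toy sketch: local DP noise plus additive secret sharing for aggregation.
# Noise scale, number of parties, and gradient values are illustrative only.

rng = np.random.default_rng(1)
dim, sigma = 4, 0.1

local_grads = [rng.normal(size=dim) for _ in range(3)]   # one gradient per participant

# Step 1: each participant perturbs its own gradient (DP-style noise).
noisy = [g + rng.normal(scale=sigma, size=dim) for g in local_grads]

# Step 2: additive secret sharing -- each noisy gradient is split into
# random shares that sum back to the original; no single share reveals it.
def share(vec, n_parties):
    shares = [rng.normal(size=vec.shape) for _ in range(n_parties - 1)]
    shares.append(vec - sum(shares))
    return shares

n = len(noisy)
all_shares = [share(g, n) for g in noisy]

# Step 3: party j sums the j-th share of every participant; combining the
# partial sums reconstructs only the aggregate gradient, never an individual one.
partials = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]
aggregate = sum(partials)

assert np.allclose(aggregate, sum(noisy))
print("aggregate gradient:", np.round(aggregate, 3))
```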
0hq1SCYftX
Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience
[ "Yanjun Zhang", "Ruoxi Sun", "Liyue Shen", "Guangdong Bai", "Jason Xue", "Mark Huasong Meng", "Xue Li", "Ryan Ko", "Surya Nepal" ]
The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training data leakage. In this work, we propose Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start with a set of discrete points and converges to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model), and collaboratively learn it towards the optimum. The updates to their own models are worked out in a secure collaborative way during the training process. In such a manner, CGD retains the ability of learning from distributed data but greatly diminishes information sharing. Such a strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent benefits of fairness. We theoretically and empirically demonstrate that decentralized CGD (i) provides a stronger differential privacy (DP) protection; (ii) is robust against the state-of-the-art poisoning privacy attacks; (iii) results in bounded fairness guarantee among participants; and (iv) provides high test accuracy (comparable with centralized learning) with a bounded convergence rate over four real-world datasets.
[ "privacy preservation", "decentralized federated learning", "fairness", "AI security" ]
https://openreview.net/pdf?id=0hq1SCYftX
CCZHubFwld
official_review
1,700,852,183,942
0hq1SCYftX
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1348/Reviewer_ZJHe" ]
review: Federated learning (FL) enables multiple clients to cooperatively train a central machine learning model with the help of a central server. Nevertheless, in current FL frameworks, the global model parameters that need to be shared are still vulnerable to potential data leakage. In this paper, the authors introduce a new approach called Confined Gradient Descent (CGD) to bolster the privacy of FL. The authors support their proposal with both theoretical and empirical evidence to demonstrate its effectiveness. Strengths: 1) The paper is well crafted and straightforward to follow. 2) Thorough experiments show the effectiveness of the proposed method. Weaknesses: 1) The specific value of the loss threshold LRR is unspecified. 2) It's not evident whether the datasets adhere to an iid or non-iid distribution. questions: 1) The loss threshold LRR plays a pivotal role in the proposed method. However, the authors have not provided the exact LRR values for all datasets, and it is crucial to investigate the impact of varying LRR values. 2) An inherent characteristic of federated learning is the non-iid distribution of training data among clients. The authors should provide a clear explanation of how they simulate this non-iid setting for all datasets. 3) In Section 4.2, it's unclear whether the experiments are conducted in a synchronous or asynchronous setting. It would be advantageous to present the performance of the proposed method under both settings, given that Section 3.2 mentions CGD's asynchronous capability. If asynchronous setups are used, it is important to clearly outline how the authors simulate this asynchronous environment. ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
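The reviewer's question about simulating a non-iid setting can be made concrete with a common label-skew recipe: partitioning samples across clients with Dirichlet-distributed class proportions. The sketch below is a generic illustration of that recipe, not necessarily the split used by the authors; the label array and all parameters are placeholders.

```python
import numpy as np

# Generic Dirichlet label-skew partition often used to simulate non-iid
# clients in FL experiments; not necessarily the split used in this paper.

rng = np.random.default_rng(2)
num_classes, num_clients, alpha = 10, 4, 0.5
labels = rng.integers(0, num_classes, size=1000)     # placeholder labels

client_indices = [[] for _ in range(num_clients)]
for c in range(num_classes):
    idx = np.where(labels == c)[0]
    rng.shuffle(idx)
    # Dirichlet(alpha) proportions: small alpha -> highly skewed clients.
    props = rng.dirichlet(alpha * np.ones(num_clients))
    splits = np.split(idx, (np.cumsum(props)[:-1] * len(idx)).astype(int))
    for i, s in enumerate(splits):
        client_indices[i].extend(s.tolist())

for i, idx in enumerate(client_indices):
    counts = np.bincount(labels[idx], minlength=num_classes)
    print(f"client {i} class counts: {counts}")
```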
0hq1SCYftX
Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience
[ "Yanjun Zhang", "Ruoxi Sun", "Liyue Shen", "Guangdong Bai", "Jason Xue", "Mark Huasong Meng", "Xue Li", "Ryan Ko", "Surya Nepal" ]
The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training data leakage. In this work, we propose Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start with a set of discrete points and converges to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model), and collaboratively learn it towards the optimum. The updates to their own models are worked out in a secure collaborative way during the training process. In such a manner, CGD retains the ability of learning from distributed data but greatly diminishes information sharing. Such a strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent benefits of fairness. We theoretically and empirically demonstrate that decentralized CGD (i) provides a stronger differential privacy (DP) protection; (ii) is robust against the state-of-the-art poisoning privacy attacks; (iii) results in bounded fairness guarantee among participants; and (iv) provides high test accuracy (comparable with centralized learning) with a bounded convergence rate over four real-world datasets.
[ "privacy preservation", "decentralized federated learning", "fairness", "AI security" ]
https://openreview.net/pdf?id=0hq1SCYftX
C9QQsguXRX
official_review
1,701,653,843,168
0hq1SCYftX
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1348/Reviewer_DUsh" ]
review: Thank you for submitting your work to WWW. The proposed CGD aims to improve fairness and privacy for federated learning by moving to a fully serverless design. With a logical structure, the paper provides detailed discussions of privacy-preserving FL, fairness considerations in FL, and threat models. The motivation is somewhat unclear: it is not evident why such a fully decentralized approach is necessary and which specific problems from WWW-related applications/systems call for this design. Regarding relevance to the WWW community, it is unclear how CGD can contribute to Web services and systems. The evaluation of CGD is largely theoretical, covering general ML models and datasets such as MNIST and CIFAR-10. The results in Section 4 and Section 5 are difficult to follow, especially as they refer to figures and tables in the appendix. Overall, the fairness and privacy angles are valuable to explore, but the proposed CGD design lacks convincing evidence for WWW services and systems. questions: Given the concerns, there is one core question for the authors to consider: - From the design angle, what is the core motivation for CGD to take such a fully serverless path, especially for Web users and developers? ethics_review_flag: No ethics_review_description: N/A scope: 1: The work is irrelevant to the Web novelty: 3 technical_quality: 3 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0hq1SCYftX
Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience
[ "Yanjun Zhang", "Ruoxi Sun", "Liyue Shen", "Guangdong Bai", "Jason Xue", "Mark Huasong Meng", "Xue Li", "Ryan Ko", "Surya Nepal" ]
The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training data leakage. In this work, we propose Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start with a set of discrete points and converges to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model), and collaboratively learn it towards the optimum. The updates to their own models are worked out in a secure collaborative way during the training process. In such a manner, CGD retains the ability of learning from distributed data but greatly diminishes information sharing. Such a strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent benefits of fairness. We theoretically and empirically demonstrate that decentralized CGD (i) provides a stronger differential privacy (DP) protection; (ii) is robust against the state-of-the-art poisoning privacy attacks; (iii) results in bounded fairness guarantee among participants; and (iv) provides high test accuracy (comparable with centralized learning) with a bounded convergence rate over four real-world datasets.
[ "privacy preservation", "decentralized federated learning", "fairness", "AI security" ]
https://openreview.net/pdf?id=0hq1SCYftX
C8ErPsJtWI
decision
1,705,909,240,009
0hq1SCYftX
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: This paper presents a new serverless approach to federated learning, Confined Gradient Descent (CGD), which is focused on enhancing privacy and fairness. The reviewers are split on the quality of the paper: several reviewers believe the work is innovative and practical, while others give more critical reviews. Overall, I feel the work is solid and the approach is interesting in the sense that it achieves better fairness and stronger privacy protection under the same level of noise. However, I do have one important concern: this paper proposes a privacy-preserving approach for FL that is pretty general in my opinion. I don't see why the proposed method is specific to critical infrastructure protection, especially for the energy sector. So the first sentence in the abstract, "The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources.", seems pretty confusing to me. I suggest the authors change the paper title and modify the abstract if the paper is accepted.
0hq1SCYftX
Privacy-Preserving and Fairness-Aware Federated Learning for Critical Infrastructure Protection and Resilience
[ "Yanjun Zhang", "Ruoxi Sun", "Liyue Shen", "Guangdong Bai", "Jason Xue", "Mark Huasong Meng", "Xue Li", "Ryan Ko", "Surya Nepal" ]
The energy industry is undergoing significant transformations as it strives to achieve net-zero emissions and future-proof its infrastructure, where every participant in the power grid has the potential to both consume and produce energy resources. Federated learning – which enables multiple participants to collaboratively train a model without aggregating the training data – becomes a viable technology. However, the global model parameters that have to be shared for optimization are still susceptible to training data leakage. In this work, we propose Confined Gradient Descent (CGD) that enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that a gradient descent optimization can start with a set of discrete points and converges to another set in the neighborhood of the global minimum of the objective function. As such, each participant can independently initiate its own private global model (referred to as the confined model), and collaboratively learn it towards the optimum. The updates to their own models are worked out in a secure collaborative way during the training process. In such a manner, CGD retains the ability of learning from distributed data but greatly diminishes information sharing. Such a strategy also allows the proprietary confined models to adapt to the heterogeneity in federated learning, providing inherent benefits of fairness. We theoretically and empirically demonstrate that decentralized CGD (i) provides a stronger differential privacy (DP) protection; (ii) is robust against the state-of-the-art poisoning privacy attacks; (iii) results in bounded fairness guarantee among participants; and (iv) provides high test accuracy (comparable with centralized learning) with a bounded convergence rate over four real-world datasets.
[ "privacy preservation", "decentralized federated learning", "fairness", "AI security" ]
https://openreview.net/pdf?id=0hq1SCYftX
9wt9gNCUBs
official_review
1,700,770,125,140
0hq1SCYftX
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1348/Reviewer_bB5a" ]
review: - Overall, I like the proposed idea. - The motivation is a bit unclear: in the introduction, the authors describe that their approach aims to preserve the privacy of the data, but it is unclear what the threat from a potential privacy leak is in the given example (i.e., critical infrastructure). I understand the example regarding fairness (lines 98-102), but I am missing a similar example with regard to privacy. - The authors use a P2P network as a motivating example for their study, assuming mobile users (i.e., electric vehicles), but they neither test nor provide any details on the behaviour of their approach under high churn (users frequently going offline throughout the duration of the protocol). What is the impact of churn on the provided fairness and privacy? - Apart from the accuracy, privacy, and fairness evaluation, the system performance evaluation is quite brief (only Section 6.1, actually). It would be great if the authors could also measure the performance of their prototype in terms of latency. - The paper would benefit from a few more text passes so it is easier to read, e.g., lines 106-111 constitute a single very long sentence. questions: - Page 5: “A subset of participants 𝑆 (randomly selected with the fraction parameter 𝐶)” → I am not sure I understand exactly how the participants are selected and how the system protects itself from malicious attempts (e.g., to poison or disrupt the secure addition). ethics_review_flag: No ethics_review_description: There are no ethical issues scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
0BN7CbAQNS
A Study of GDPR Compliance under the Transparency and Consent Framework
[ "Mike Smith", "Antonio Torres", "Riley Grossman", "Pritam Sen", "Yi Chen", "Cristian Borcea" ]
This paper presents a study of GDPR compliance under the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF). This framework provides digital advertising market participants a standard for sharing users’ privacy consent choices. TCF is widely used across the Internet, and this paper presents its first thorough evaluation, investigating both the compliance of websites with TCF and its impact on user privacy. We reviewed 2,230 websites that use TCF and accepted the automatic decline of user consent by our data collection system. Unlike previous work on GDPR compliance, we found that most websites using TCF properly record the user’s consent choice. However, we found that 72.8% of the websites that were TCF compliant claimed legitimate interest as a rationale for overriding the consent choice. While legitimate interest is legal under GDPR, previous studies have shown that most users disagreed with how it is being used to collect data. Additionally, analysis of cookies set to the browsers indicates that TCF may not fully protect user privacy even when websites are compliant. Our research provides regulators and publishers with a data collection and analysis system to monitor compliance, detect non-compliance, and examine questionable practices of circumventing user consent choices using legitimate interest.
[ "Privacy Regulation", "GDPR Compliance", "Consent Management Platforms", "Transparency and Consent Framework (TCF)", "Ad Tech" ]
https://openreview.net/pdf?id=0BN7CbAQNS
ucN5yLm2r6
official_review
1,700,481,136,389
0BN7CbAQNS
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1815/Reviewer_qpZP" ]
review: “A Study of GDPR Compliance” presents a study of TCF compliance, including an assessment of its uptake across a significant sample and of its actual implementation in terms of cookies. The goal of the study is to provide a picture of the current state of the standard, with a particular focus on the override of user preferences by claiming legitimate interest under the corresponding GDPR clause. The paper presents a well-structured method for a systematic study of TCF implementation (correctness and compliance). The study looks into how the user's choice about cookies is stored in TC strings and provides findings about the justifications for bypassing explicit user rejection based on a range of legitimate-interest clauses. The topic fits the conference and the track well. My only comments concern its limitations and novelty. On the one hand, the study follows the path of previous works and confirms their findings. On the other hand, the study is limited to how declarations are handled (at least in my understanding), without looking at other related practices such as the transmission of user data back to servers. In a nutshell, my main question is about what is actually being assessed by this work: what is the relation between TCF and actual practice (beyond cookies)? Are there ways to circumvent the system while keeping compliance with TC strings? I am left with these doubts, and it is hard to gauge the actual scope of the contribution (which is there, at least on the technical side, but hard to quantify with respect to the broader discussion about privacy). Overall, the technical quality of the paper is appropriate for the conference. The method used to collect data is very well described and documented. The writing, figures, and tables are good, and the structure is well thought out. questions: - What is the novelty of the paper? The study confirms previous findings about publishers' bad practice of ignoring explicit rejections. In general, it is hard to identify the novelty of the work, as the paper (very well) reports on similar studies. The technical work on TC strings adds little to the findings. - Are there any recommendations for browsers? It seems to me that a clear finding of this and other works is that self-regulation is not fit for purpose. Are there any reasons why this issue cannot be addressed by browser companies with one consistent and verified approach? Should this not be the role of a regulator such as the EP? - What is the actual scope of this work in terms of uncovering practices? It seems from the introduction that TC strings collect declarations. However, did the authors look into actual traffic back to servers? Please clarify whether this point is relevant or not, and why. ethics_review_flag: No ethics_review_description: No issues; the data were generated by the researchers' own activities scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 7 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
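To illustrate what "checking how declarations are handled" could look like once a consent record has been decoded, the sketch below flags purposes that are processed under legitimate interest even though consent was declined. The purpose IDs, the dictionaries standing in for a decoded TC string, and the decline flag are all hypothetical; decoding a real TC string would require the IAB TCF specification or a dedicated parser.

```python
# Toy post-decoding check of a consent record: given per-purpose consent and
# legitimate-interest flags (assumed to be already extracted from a TC string),
# flag purposes that are processed via legitimate interest despite a decline.
# The purpose IDs below loosely follow TCF purpose numbering but are
# illustrative; real decoding requires the IAB TCF specification or a library.

declined_all = True  # the crawler auto-declined consent

record = {
    "purpose_consents": {1: False, 2: False, 3: False, 4: False, 7: False},
    "purpose_legitimate_interests": {2: True, 7: True, 9: True},
}

def legit_interest_overrides(record, declined):
    if not declined:
        return []
    return sorted(
        p for p, flag in record["purpose_legitimate_interests"].items()
        if flag and not record["purpose_consents"].get(p, False)
    )

overrides = legit_interest_overrides(record, declined_all)
print("purposes claimed under legitimate interest despite decline:", overrides)
```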
0BN7CbAQNS
A Study of GDPR Compliance under the Transparency and Consent Framework
[ "Mike Smith", "Antonio Torres", "Riley Grossman", "Pritam Sen", "Yi Chen", "Cristian Borcea" ]
This paper presents a study of GDPR compliance under the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF). This framework provides digital advertising market participants a standard for sharing users’ privacy consent choices. TCF is widely used across the Internet, and this paper presents its first thorough evaluation, investigating both the compliance of websites with TCF and its impact on user privacy. We reviewed 2,230 websites that use TCF and accepted the automatic decline of user consent by our data collection system. Unlike previous work on GDPR compliance, we found that most websites using TCF properly record the user’s consent choice. However, we found that 72.8% of the websites that were TCF compliant claimed legitimate interest as a rationale for overriding the consent choice. While legitimate interest is legal under GDPR, previous studies have shown that most users disagreed with how it is being used to collect data. Additionally, analysis of cookies set to the browsers indicates that TCF may not fully protect user privacy even when websites are compliant. Our research provides regulators and publishers with a data collection and analysis system to monitor compliance, detect non-compliance, and examine questionable practices of circumventing user consent choices using legitimate interest.
[ "Privacy Regulation", "GDPR Compliance", "Consent Management Platforms", "Transparency and Consent Framework (TCF)", "Ad Tech" ]
https://openreview.net/pdf?id=0BN7CbAQNS
uVfLMpejvR
official_review
1,700,756,834,400
0BN7CbAQNS
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1815/Reviewer_aDoZ" ]
review: ***Paper summary:*** This paper conducts an evaluation of website compliance with the Transparency and Consent Framework (TCF) and its impact on user privacy, covering 2,230 TCF-utilizing websites. Key findings indicate that 72.8% of TCF-compliant websites justify overriding user consent through the legitimate-interest claim. This study is the first to explore TCF compliance post-versions 2.0 and 2.1, offering insights crucial for regulators and market participants navigating GDPR compliance and the IAB Tech Lab's Global Privacy Platform. ***Detailed comments for authors*** Thank you for submitting your work. The paper undertakes a study to assess TCF compliance, revealing that only 2.2% of websites failed to adhere to users' consent choices, an improvement compared to prior work [40] that analyzed TCF compliance. The paper provides a comprehensive overview of the evolving TCF compliance landscape, demonstrating a positive trend of fewer websites ignoring users' choices compared to previous studies. The authors also delve into the use of legitimate interest, a concern raised in previous works. Notably, they explore its combination with specific purposes, highlighting potential violations on 16 websites out of the 2,230 studied (approximately 0.7% of the visited websites). While the paper effectively investigates the evolution of TCF compliance, I regret to note that the scientific contribution appears insufficient for me to recommend acceptance. questions: - How does your work compare to previous studies, and what are the novel contributions? - You analyzed both domains and pages within these domains, providing separate results for each. When a violation is detected in a given domain, can there still be compliant pages within the same domain? - In Line 635, where it states, "we determined that 2,315 crawls to 48 distinct domains qualified for this analysis," does this imply an average of 48 pages per domain? Do all visited pages belong to the same domain? - The paper's writing needs improvement; some sections are challenging to follow. **Minor:** - Could you clarify how you derived the 1.3% (2.2%) value in Table 2? ethics_review_flag: No ethics_review_description: I did not identify any ethical issues in the paper. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 2 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0BN7CbAQNS
A Study of GDPR Compliance under the Transparency and Consent Framework
[ "Mike Smith", "Antonio Torres", "Riley Grossman", "Pritam Sen", "Yi Chen", "Cristian Borcea" ]
This paper presents a study of GDPR compliance under the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF). This framework provides digital advertising market participants a standard for sharing users’ privacy consent choices. TCF is widely used across the Internet, and this paper presents its first thorough evaluation, investigating both the compliance of websites with TCF and its impact on user privacy. We reviewed 2,230 websites that use TCF and accepted the automatic decline of user consent by our data collection system. Unlike previous work on GDPR compliance, we found that most websites using TCF properly record the user’s consent choice. However, we found that 72.8% of the websites that were TCF compliant claimed legitimate interest as a rationale for overriding the consent choice. While legitimate interest is legal under GDPR, previous studies have shown that most users disagreed with how it is being used to collect data. Additionally, analysis of cookies set to the browsers indicates that TCF may not fully protect user privacy even when websites are compliant. Our research provides regulators and publishers with a data collection and analysis system to monitor compliance, detect non-compliance, and examine questionable practices of circumventing user consent choices using legitimate interest.
[ "Privacy Regulation", "GDPR Compliance", "Consent Management Platforms", "Transparency and Consent Framework (TCF)", "Ad Tech" ]
https://openreview.net/pdf?id=0BN7CbAQNS
kXdLcAVMTp
official_review
1,700,745,959,128
0BN7CbAQNS
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1815/Reviewer_CQbu" ]
review: The paper studies the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF), and in particular how websites implement the current version (2.1). The authors conclude that most websites comply with the framework rules, but many rely on the legitimate-interests legal basis and therefore may not actually be GDPR compliant. I think the technical discussion is good, but some of the legal discussion feels confused. For example, the paper talks about the 12 purposes (which are defined by the TCF) as if they are of equal standing to the legal bases in the GDPR, whereas the TCF is not legally enforceable and is disputed, but the GDPR is law. It would be better to distinguish what the GDPR says from how the IAB interprets it. Similarly, the paper states that opt-in consent is required, but that is not the case when one of the other legal bases is used. The experiments appear to be well executed and the results are interesting, particularly the finding that TCF v2.2 seems to forbid common behaviour. The work should have significant impact on policy. questions: - Of the claims made, which are clearly required by the GDPR, as compared to the IAB interpretation? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
0BN7CbAQNS
A Study of GDPR Compliance under the Transparency and Consent Framework
[ "Mike Smith", "Antonio Torres", "Riley Grossman", "Pritam Sen", "Yi Chen", "Cristian Borcea" ]
This paper presents a study of GDPR compliance under the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF). This framework provides digital advertising market participants a standard for sharing users’ privacy consent choices. TCF is widely used across the Internet, and this paper presents its first thorough evaluation, investigating both the compliance of websites with TCF and its impact on user privacy. We reviewed 2,230 websites that use TCF and accepted the automatic decline of user consent by our data collection system. Unlike previous work on GDPR compliance, we found that most websites using TCF properly record the user’s consent choice. However, we found that 72.8% of the websites that were TCF compliant claimed legitimate interest as a rationale for overriding the consent choice. While legitimate interest is legal under GDPR, previous studies have shown that most users disagreed with how it is being used to collect data. Additionally, analysis of cookies set to the browsers indicates that TCF may not fully protect user privacy even when websites are compliant. Our research provides regulators and publishers with a data collection and analysis system to monitor compliance, detect non-compliance, and examine questionable practices of circumventing user consent choices using legitimate interest.
[ "Privacy Regulation", "GDPR Compliance", "Consent Management Platforms", "Transparency and Consent Framework (TCF)", "Ad Tech" ]
https://openreview.net/pdf?id=0BN7CbAQNS
izjDjDudfE
decision
1,705,909,221,033
0BN7CbAQNS
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: Our decision is to accept. Please see the AC's review below and improve the work in light of that and the reviewers' feedback for the camera-ready submission. "This work analyzes compliance with TCF. The authors uncover interesting findings relative to the legitimate-interest rationale. The reviewers maintain that the work's novelty is unclear but view the work as sound. After reading through the conversations, I agree with the assessments. The authors are encouraged to clarify the novelty of their work and address the methodological concerns raised. I recommend accept."
0BN7CbAQNS
A Study of GDPR Compliance under the Transparency and Consent Framework
[ "Mike Smith", "Antonio Torres", "Riley Grossman", "Pritam Sen", "Yi Chen", "Cristian Borcea" ]
This paper presents a study of GDPR compliance under the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF). This framework provides digital advertising market participants a standard for sharing users’ privacy consent choices. TCF is widely used across the Internet, and this paper presents its first thorough evaluation, investigating both the compliance of websites with TCF and its impact on user privacy. We reviewed 2,230 websites that use TCF and accepted the automatic decline of user consent by our data collection system. Unlike previous work on GDPR compliance, we found that most websites using TCF properly record the user’s consent choice. However, we found that 72.8% of the websites that were TCF compliant claimed legitimate interest as a rationale for overriding the consent choice. While legitimate interest is legal under GDPR, previous studies have shown that most users disagreed with how it is being used to collect data. Additionally, analysis of cookies set to the browsers indicates that TCF may not fully protect user privacy even when websites are compliant. Our research provides regulators and publishers with a data collection and analysis system to monitor compliance, detect non-compliance, and examine questionable practices of circumventing user consent choices using legitimate interest.
[ "Privacy Regulation", "GDPR Compliance", "Consent Management Platforms", "Transparency and Consent Framework (TCF)", "Ad Tech" ]
https://openreview.net/pdf?id=0BN7CbAQNS
d0u91k3HtV
official_review
1,700,799,891,525
0BN7CbAQNS
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1815/Reviewer_Lrgr" ]
review: In this paper, the authors use automated crawls to opt out of tracking by websites in the EU to assess whether these websites are storing compliant TCF strings. The authors conclude that most sites are correctly recording consent choices, but that websites abuse "legitimate interest" to ignore users' choices. In general, it is important to study compliance with privacy laws, and this study fits within a recent body of work studying GDPR compliance in Europe. This work updates previous studies that found, arguably, more widespread non-compliance. Unfortunately, the authors undercut the relevance of their own paper. The authors' primary finding is that "legitimate interest" is being abused to facilitate ad tracking and targeting. However, as the authors note in 4.4, this is a known problem that has already been litigated, and TCF 2.2 will forbid this practice. In this light, the paper's findings seem to be too late. The authors claim that their crawler stores HTTP requests and the cause that generated each request (e.g., the script that triggered it). However, unless I'm missing something, the authors don't utilize this data at all. This is a huge missed opportunity. Just because consent is stored in a cookie by the CMP does not mean third parties are reading or receiving this information. Analyzing the parameters sent to third parties, and why they were sent, would add significant depth to the paper. Specific Comments: 1, "The high-level insight from our results is that most TCF sites are legally compliant." --- This is a bold proclamation, given that legal compliance stretches beyond asking for and recording consent. It also encompasses actually honoring consent information, e.g., not recording data or not serving targeted ads when consent is declined. It may be safer for the authors to stick to the facts rather than make broad proclamations, i.e., 97% of the sites in the sample asked for and recorded consent correctly. 2.2, "Given that each version of the TCF builds on the previous versions, our findings are still relevant." --- This is a true statement, but it also means prior work on TCF is still relevant as well. The authors claim that their study is the first to examine TCF 2.1 and 2.2, but it's not clear what this contributes given that the authors are not actually investigating the specific changes that were implemented in TCF 2.1 and 2.2. Rather, this work seems like a straightforward replication of prior work that has already examined compliance with consent choices in the EU. 2.3: This section is really unconvincing. I'm very familiar with prior studies of GDPR compliance via TCF, and I'm not convinced this study is treading any new ground. Again, this study seems strongest when positioned as a replication: works have studied TCF consent signals in ~2020 and ~2021, but compliance practices may have changed since then. 4.1: DeepSee.io does not appear to be a public data source. How did the authors get access to it? Table 3: A more informative way to perform this analysis is to plot the compliance rate vs. Tranco rank, rather than arbitrarily analyzing domains with rank > 5000. questions: 4.1: The description of the crawl process is somewhat ambiguous. Did the authors' crawler visit the homepage for each website? How were subpage links selected? 4.1: It is good practice to report the version number and date of the Tranco list used in studies. 4.1: When was the crawl conducted? 4.2: This section is a bit confusing. 
Is this section only examining consent strings in cookies, or is this also examining consent strings in HTTP parameters? Also, were there websites that stored the consent string in multiple cookies, and if so, were the values consistent? 4.2: Is it possible that any of the cases where the TC string encoded "consent given" were the result of errors in the crawler, i.e., it clicked the consent button instead of the decline button? In other words, are the authors 100% sure that these are instances of non-compliance, or could crawler error be the cause? I am especially worried about this in the case of Ringier, since the non-compliance rate was 100%; did the crawler uniformly malfunction in some way when it encountered these banners? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 2 technical_quality: 5 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
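Since the questions above hinge on what is actually encoded in a stored TC string, a minimal sketch of how a crawler-side check might decode one is given below. It assumes the string has already been pulled from a consent cookie such as euconsent-v2, and it only reads the 6-bit version field at the start of the core segment; parsing purpose and vendor consents should follow the IAB TCF specification or a maintained decoder library rather than this illustration. The variable names are hypothetical.

```python
import base64

def _decode_segment(segment: str) -> bytes:
    """Base64url-decode one dot-separated segment of a TC string."""
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

def _read_bits(data: bytes, offset: int, length: int) -> int:
    """Read `length` bits starting at bit `offset` (big-endian bit order)."""
    value = 0
    for i in range(length):
        byte_index, bit_index = divmod(offset + i, 8)
        bit = (data[byte_index] >> (7 - bit_index)) & 1
        value = (value << 1) | bit
    return value

def tc_string_version(tc_string: str) -> int:
    """Return the TCF version stored in the first 6 bits of the core segment."""
    core = _decode_segment(tc_string.split(".")[0])
    return _read_bits(core, 0, 6)

# Illustrative usage with a hypothetical cookie value:
# version = tc_string_version(cookie_jar["euconsent-v2"])
```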
0BN7CbAQNS
A Study of GDPR Compliance under the Transparency and Consent Framework
[ "Mike Smith", "Antonio Torres", "Riley Grossman", "Pritam Sen", "Yi Chen", "Cristian Borcea" ]
This paper presents a study of GDPR compliance under the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF). This framework provides digital advertising market participants a standard for sharing users’ privacy consent choices. TCF is widely used across the Internet, and this paper presents its first thorough evaluation, investigating both the compliance of websites with TCF and its impact on user privacy. We reviewed 2,230 websites that use TCF and accepted the automatic decline of user consent by our data collection system. Unlike previous work on GDPR compliance, we found that most websites using TCF properly record the user’s consent choice. However, we found that 72.8% of the websites that were TCF compliant claimed legitimate interest as a rationale for overriding the consent choice. While legitimate interest is legal under GDPR, previous studies have shown that most users disagreed with how it is being used to collect data. Additionally, analysis of cookies set to the browsers indicates that TCF may not fully protect user privacy even when websites are compliant. Our research provides regulators and publishers with a data collection and analysis system to monitor compliance, detect non-compliance, and examine questionable practices of circumventing user consent choices using legitimate interest.
[ "Privacy Regulation", "GDPR Compliance", "Consent Management Platforms", "Transparency and Consent Framework (TCF)", "Ad Tech" ]
https://openreview.net/pdf?id=0BN7CbAQNS
a4ottxUu6z
official_review
1,700,655,387,456
0BN7CbAQNS
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1815/Reviewer_zKaR" ]
review: The paper presents an evaluation of GDPR compliance and its impact on user privacy, with a focus on the Interactive Advertising Bureau Europe's TCF. This framework aims to provide a standard for sharing users' privacy consent choices among digital advertising market participants and is widely used across the internet. The study involves a thorough examination of 2,230 websites using TCF and their compliance with GDPR regulations, particularly regarding the recording of user consent choices. The study employs robust methodologies to investigate a critical aspect of online privacy and data protection. It offers an analysis of GDPR compliance in the context of TCF, a relevant topic given the widespread use of TCF in digital advertising. The paper is well-structured and clearly presents its findings, methodology, and the implications of its results. The data is meticulously analyzed and presented in an accessible format, making the complex subject matter understandable. This study is likely unique in its focus on the TCF's compliance with GDPR. It fills a gap in existing research by not only assessing whether websites record user consent correctly but also exploring the nuances of 'legitimate interest' as a legal basis for data processing. The findings are significant, especially for regulators, publishers, and privacy advocates. The study reveals that while most websites using TCF record user consent properly, a substantial number (72.8%) claim 'legitimate interest' to override user consent choices. This raises concerns about the effectiveness of TCF in protecting user privacy. Pros Examines a large sample of websites (2,230) for a thorough analysis. Addresses a critical and timely issue in digital privacy and data protection. Highlights the widespread use of 'legitimate interest' and its implications for user consent. Offers valuable insights for regulators and publishers to monitor compliance and detect non-compliance. Cons While the use of 'legitimate interest' is extensively discussed, the paper might benefit from a deeper exploration of its varied applications and user perspectives. The study identifies issues with the IAB decoder, which could impact the accuracy of compliance assessments. The focus is predominantly on European websites, which may limit its applicability in other jurisdictions with different privacy regulations. questions: 1. Could you elaborate on how 'legitimate interest' is often interpreted and applied by websites? Understanding the nuances of this legal basis could clarify whether its frequent use is a loophole in GDPR compliance or a legitimate practice. 2. The study mentions potential issues with the IAB TC String Decoder. How might these issues have affected your findings, and could resolving these issues change the study's conclusions regarding GDPR compliance? 3. Your study focuses on European websites. Do you believe your findings are representative of global trends in GDPR compliance, or are they specific to the European context? 4. Your research indicates that most users disagree with the application of 'legitimate interest' for data collection. Could you provide more details on how user perceptions were gauged and how these perceptions might influence the interpretation of your findings? 
ethics_review_flag: No ethics_review_description: I've selected "No" scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
04DbUMZGMD
ESCNet: Entity-enhanced and Stance Checking Network for Multi-modal Fact-Checking
[ "Fanrui Zhang", "Jiawei Liu", "Jingyi Xie", "Qiang Zhang", "Yongchao Xu", "Zheng-Jun Zha" ]
Recently, misinformation incorporating both texts and images has been disseminated more effectively than those containing text alone on social media, raising significant concerns for multi-modal fact-checking. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully enhance the valuable semantic representations or excavate the intricate entity information. Besides, existing multi-modal fact-checking datasets are primarily focused on English and merely concentrate on a single type of misinformation, thereby neglecting a comprehensive summary and coverage of various types of misinformation. Taking these factors into account, we construct the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset which encompasses 46,000 claims. The CMFC covers all types of misinformation for fact-checking and is divided into two sub-datasets, Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). To establish baseline performance, we propose a novel Entity-enhanced and Stance Checking Network (ESCNet), which includes Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. The ESCNet jointly models stance semantic reasoning features and knowledge-enhanced entity pair features, in order to simultaneously learn effective semantic-level and knowledge-level claim representations. Our work offers the first step and establishes a benchmark for evidence-based, multi-type, multi-modal fact-checking, and significantly outperforms previous baseline models.
[ "Multi-modal fact-checking; Datasets; Knowledge graph" ]
https://openreview.net/pdf?id=04DbUMZGMD
xkzzYdFjKV
official_review
1,701,403,114,768
04DbUMZGMD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission779/Reviewer_gHHR" ]
review: The authors propose ESCNet (Entity-enhanced and Stance Checking Network) for multi-modal fact-checking. ESCNet consists of three parts: a Multi-modal Feature Extraction Module, a Stance Transformer and an Entity-enhanced Encoder. The authors also build a large-scale multi-modal fact-checking dataset with ground truth. Strengths: 1. A very large-scale dataset is generated, which, being publicly available, could benefit this line of research. 2. Experimental results show that the proposed method is better than the competing methods by a healthy margin. 3. An ablation study is provided to show the contribution of the various components of the model. Weaknesses: 1. The writing of the paper is poor. The authors discuss the different components of the model without providing much justification of their roles, which makes the paper hard to read and follow. 2. It is not clear how the knowledge-enhanced distance measurement works and how it helps. 3. Only one public dataset is used for validation. questions: Please respond to the weakness comments. ethics_review_flag: No ethics_review_description: No issue scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 4 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
04DbUMZGMD
ESCNet: Entity-enhanced and Stance Checking Network for Multi-modal Fact-Checking
[ "Fanrui Zhang", "Jiawei Liu", "Jingyi Xie", "Qiang Zhang", "Yongchao Xu", "Zheng-Jun Zha" ]
Recently, misinformation incorporating both texts and images has been disseminated more effectively than those containing text alone on social media, raising significant concerns for multi-modal fact-checking. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully enhance the valuable semantic representations or excavate the intricate entity information. Besides, existing multi-modal fact-checking datasets are primarily focused on English and merely concentrate on a single type of misinformation, thereby neglecting a comprehensive summary and coverage of various types of misinformation. Taking these factors into account, we construct the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset which encompasses 46,000 claims. The CMFC covers all types of misinformation for fact-checking and is divided into two sub-datasets, Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). To establish baseline performance, we propose a novel Entity-enhanced and Stance Checking Network (ESCNet), which includes Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. The ESCNet jointly models stance semantic reasoning features and knowledge-enhanced entity pair features, in order to simultaneously learn effective semantic-level and knowledge-level claim representations. Our work offers the first step and establishes a benchmark for evidence-based, multi-type, multi-modal fact-checking, and significantly outperforms previous baseline models.
[ "Multi-modal fact-checking; Datasets; Knowledge graph" ]
https://openreview.net/pdf?id=04DbUMZGMD
rb5KhYlm5c
official_review
1,700,629,424,855
04DbUMZGMD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission779/Reviewer_TrUH" ]
review: This research tackles the pressing issue of misinformation, highlighting the potency of multi-modal content (text and images) in social media dissemination. While previous research makes strides in feature extraction, it falls short in fully harnessing semantic and entity information. Additionally, existing datasets primarily cater to English and single types of misinformation, resulting in incomplete coverage. In response, the authors construct the first large-scale Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF) benchmarks for evidence-based, multi-type, multi-modal fact-checking, encompassing a wide array of misinformation types. They introduce ESCNet, a novel model amalgamating semantic reasoning and knowledge-enhanced features, which includes a Multi-modal Feature Extraction Module, a Stance Transformer, and an Entity-enhanced Encoder. ESCNet not only surpasses previous models but also sets a new standard for evidence-based, multi-modal fact-checking. a.Strengths: S1: The authors construct the first large-scale Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF) benchmarks. S2: The experimental results illustrate the effectiveness and robustness of the proposed ESCNet framework, incorporating a wide range of SOTA baselines for comparative analysis. S3: Clear and well-motivated reasoning in the paper. Well-written and structured. b.Weaknesses: W1: It would be best to provide the code for verification. W2: I suggest adding the percentage improvement over the other baselines in the figures and tables to aid readability. W3: The results of the model are extremely dependent on the Multi-modal Feature Extraction Module. If the extracted features are poor, errors can easily accumulate. How is this problem handled? I suggest combining the overall textual and visual features and performing an ablation study to verify the Multi-modal Feature Extraction Module. questions: Q1: How do you deal with the problem stated in W3? I suggest combining the overall textual and visual features and performing an ablation study to verify the Multi-modal Feature Extraction Module. Q2: Will the CCMF and SCMF datasets and the ESCNet model be open-sourced? Q3: Previous studies mainly used graph pooling to generate the overall representation with Graph Neural Networks (GNNs); what is the difference between the Entity-enhanced Encoder and GNN encoders such as GCN and GAT? (I would be glad to raise my score if the code and data are open-sourced) ethics_review_flag: No ethics_review_description: No scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
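For context on Q3, a minimal GCN-style message-passing layer over an entity graph is sketched below. It is a generic illustration of the GNN encoders the question refers to, not the paper's Entity-enhanced Encoder (which, per the abstract, additionally models knowledge-enhanced entity-pair features); the node-feature and adjacency tensors are assumed inputs.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_entities, in_dim), adj: (num_entities, num_entities) with 0/1 edges
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(norm_adj @ self.linear(h))
```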
04DbUMZGMD
ESCNet: Entity-enhanced and Stance Checking Network for Multi-modal Fact-Checking
[ "Fanrui Zhang", "Jiawei Liu", "Jingyi Xie", "Qiang Zhang", "Yongchao Xu", "Zheng-Jun Zha" ]
Recently, misinformation incorporating both texts and images has been disseminated more effectively than those containing text alone on social media, raising significant concerns for multi-modal fact-checking. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully enhance the valuable semantic representations or excavate the intricate entity information. Besides, existing multi-modal fact-checking datasets are primarily focused on English and merely concentrate on a single type of misinformation, thereby neglecting a comprehensive summary and coverage of various types of misinformation. Taking these factors into account, we construct the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset which encompasses 46,000 claims. The CMFC covers all types of misinformation for fact-checking and is divided into two sub-datasets, Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). To establish baseline performance, we propose a novel Entity-enhanced and Stance Checking Network (ESCNet), which includes Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. The ESCNet jointly models stance semantic reasoning features and knowledge-enhanced entity pair features, in order to simultaneously learn effective semantic-level and knowledge-level claim representations. Our work offers the first step and establishes a benchmark for evidence-based, multi-type, multi-modal fact-checking, and significantly outperforms previous baseline models.
[ "Multi-modal fact-checking; Datasets; Knowledge graph" ]
https://openreview.net/pdf?id=04DbUMZGMD
b4eYpZ2aMn
official_review
1,701,101,327,215
04DbUMZGMD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission779/Reviewer_Jhzw" ]
review: The paper presents a good contribution in multi-modal fact-checking with the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset. It comprises 46,000 claims, covering diverse types of misinformation. This dataset includes both Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF) sub-datasets. To benchmark performance, the paper introduces the Entity-enhanced and Stance Checking Network (ESCNet), a novel model incorporating a Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. ESCNet can jointly model semantic reasoning features and knowledge-enhanced entity pair features. The combination of data building and model innovation makes this work valuable and impactful in advancing the field of evidence-based, multi-type, and multi-modal fact-checking. Pros: Great effort on a new dataset. Cons: - The description of the ESCNet model is overall lacking a lot of details (perhaps due to space limits). - The improvements of ESCNet in Tables 3, 4, and 5 seem convincing, but they might be due to the larger parameter count and complexity of the model. questions: Are you going to make the dataset publicly available? Have you considered creating a parallel English dataset, considering that Chinese-to-English translation is relatively straightforward? This would enhance the accessibility and utility of your research for other researchers in the field. ethics_review_flag: No ethics_review_description: No scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
04DbUMZGMD
ESCNet: Entity-enhanced and Stance Checking Network for Multi-modal Fact-Checking
[ "Fanrui Zhang", "Jiawei Liu", "Jingyi Xie", "Qiang Zhang", "Yongchao Xu", "Zheng-Jun Zha" ]
Recently, misinformation incorporating both texts and images has been disseminated more effectively than those containing text alone on social media, raising significant concerns for multi-modal fact-checking. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully enhance the valuable semantic representations or excavate the intricate entity information. Besides, existing multi-modal fact-checking datasets are primarily focused on English and merely concentrate on a single type of misinformation, thereby neglecting a comprehensive summary and coverage of various types of misinformation. Taking these factors into account, we construct the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset which encompasses 46,000 claims. The CMFC covers all types of misinformation for fact-checking and is divided into two sub-datasets, Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). To establish baseline performance, we propose a novel Entity-enhanced and Stance Checking Network (ESCNet), which includes Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. The ESCNet jointly models stance semantic reasoning features and knowledge-enhanced entity pair features, in order to simultaneously learn effective semantic-level and knowledge-level claim representations. Our work offers the first step and establishes a benchmark for evidence-based, multi-type, multi-modal fact-checking, and significantly outperforms previous baseline models.
[ "Multi-modal fact-checking; Datasets; Knowledge graph" ]
https://openreview.net/pdf?id=04DbUMZGMD
YNRwUPcJNu
official_review
1,700,580,442,604
04DbUMZGMD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission779/Reviewer_Xcxi" ]
review: In this paper the authors present the first large-scale, multi-domain Chinese multimodal fact-checking dataset. The authors also describe an Entity-enhanced Encoder with a knowledge-enhanced distance measurement strategy and a signed attention mechanism to capture high-level entity information. The paper is interesting, although a little long (with appendices the paper is over 15 pages). In general, the paper is well written, although the presentation could be improved. Some parts could perhaps be omitted (e.g., I think the Floyd-Warshall algorithm is quite well known and does not need to be described in the paper). Overall, I think it is a good paper that deserves to be published. The main contribution is the presentation of a new dataset that will be of interest to researchers interested in natural language processing and the web. I would like to suggest that the authors expand the section on the construction of the dataset. Indeed, they make some choices in the pre-processing phase that have a crucial impact on the final quality of the dataset, and these choices should be thoroughly motivated so that readers can understand the limitations of using the dataset. It would be interesting to understand whether the procedure used by the authors can be reused to produce datasets in languages other than Chinese and, if so, what effort is required to produce such an adaptation. Overall, a good paper of interest to the community, well written (although, I repeat, the presentation can be improved). questions: Please describe the procedure for constructing the dataset in more detail ethics_review_flag: No ethics_review_description: None scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 7 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
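Since the review mentions the Floyd-Warshall algorithm (reportedly used for knowledge-enhanced entity-pair distances), a compact reference implementation is included below for completeness; the node list and weighted edge dictionary are assumed input formats, not the paper's actual data structures.

```python
from itertools import product

def floyd_warshall(nodes, edges):
    """All-pairs shortest paths; `edges` maps (u, v) -> non-negative weight."""
    INF = float("inf")
    dist = {(u, v): (0 if u == v else edges.get((u, v), INF))
            for u, v in product(nodes, repeat=2)}
    for k, i, j in product(nodes, repeat=3):   # k is the outermost index
        if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
            dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
    return dist

# e.g., entity-pair distances over a small knowledge-graph fragment:
# d = floyd_warshall(["claim", "person", "place"],
#                    {("claim", "person"): 1.0, ("person", "place"): 2.0})
```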
04DbUMZGMD
ESCNet: Entity-enhanced and Stance Checking Network for Multi-modal Fact-Checking
[ "Fanrui Zhang", "Jiawei Liu", "Jingyi Xie", "Qiang Zhang", "Yongchao Xu", "Zheng-Jun Zha" ]
Recently, misinformation incorporating both texts and images has been disseminated more effectively than those containing text alone on social media, raising significant concerns for multi-modal fact-checking. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully enhance the valuable semantic representations or excavate the intricate entity information. Besides, existing multi-modal fact-checking datasets are primarily focused on English and merely concentrate on a single type of misinformation, thereby neglecting a comprehensive summary and coverage of various types of misinformation. Taking these factors into account, we construct the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset which encompasses 46,000 claims. The CMFC covers all types of misinformation for fact-checking and is divided into two sub-datasets, Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). To establish baseline performance, we propose a novel Entity-enhanced and Stance Checking Network (ESCNet), which includes Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. The ESCNet jointly models stance semantic reasoning features and knowledge-enhanced entity pair features, in order to simultaneously learn effective semantic-level and knowledge-level claim representations. Our work offers the first step and establishes a benchmark for evidence-based, multi-type, multi-modal fact-checking, and significantly outperforms previous baseline models.
[ "Multi-modal fact-checking; Datasets; Knowledge graph" ]
https://openreview.net/pdf?id=04DbUMZGMD
TAvbece5dl
decision
1,705,909,234,039
04DbUMZGMD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: All reviewers gave positive scores to this paper. They point out the technical contributions, while some reviewers also criticize certain aspects of writing that should be improved. Please take their detailed comments into account when preparing the next version.
04DbUMZGMD
ESCNet: Entity-enhanced and Stance Checking Network for Multi-modal Fact-Checking
[ "Fanrui Zhang", "Jiawei Liu", "Jingyi Xie", "Qiang Zhang", "Yongchao Xu", "Zheng-Jun Zha" ]
Recently, misinformation incorporating both texts and images has been disseminated more effectively than those containing text alone on social media, raising significant concerns for multi-modal fact-checking. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully enhance the valuable semantic representations or excavate the intricate entity information. Besides, existing multi-modal fact-checking datasets are primarily focused on English and merely concentrate on a single type of misinformation, thereby neglecting a comprehensive summary and coverage of various types of misinformation. Taking these factors into account, we construct the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset which encompasses 46,000 claims. The CMFC covers all types of misinformation for fact-checking and is divided into two sub-datasets, Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). To establish baseline performance, we propose a novel Entity-enhanced and Stance Checking Network (ESCNet), which includes Multi-modal Feature Extraction Module, Stance Transformer, and Entity-enhanced Encoder. The ESCNet jointly models stance semantic reasoning features and knowledge-enhanced entity pair features, in order to simultaneously learn effective semantic-level and knowledge-level claim representations. Our work offers the first step and establishes a benchmark for evidence-based, multi-type, multi-modal fact-checking, and significantly outperforms previous baseline models.
[ "Multi-modal fact-checking; Datasets; Knowledge graph" ]
https://openreview.net/pdf?id=04DbUMZGMD
8PHYgekCCK
official_review
1,700,659,093,904
04DbUMZGMD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission779/Reviewer_SKAi" ]
review: The article introduces the Entity-enhanced and Stance Checking Network (ESCNet), a novel approach to multimodal fact-checking. This model is significant for its integration of multi-modal feature extraction, stance transformation, and an entity-enhanced encoder. The authors also present the first large-scale Chinese Multi-modal Fact-Checking (CMFC) dataset, encompassing 46,000 claims. This dataset is noteworthy for covering all types of misinformation and being divided into two sub-datasets: Collected Chinese Multi-modal Fact-Checking (CCMF) and Synthetic Chinese Multi-modal Fact-Checking (SCMF). In addition, the authors compare many novel fact-checking methods in the experimental section and achieve SOTA results on the NewsCLIPpings dataset, which has certain application value. However, there are some weaknesses that should be addressed. 1. Dataset Specificity: The focus on Chinese-language data, while valuable, might limit the generalizability of the findings and the applicability of the model in other linguistic contexts. 2. Technical Details: The paper could benefit from more detailed explanations of the technical aspects of the ESCNet, particularly how it integrates and processes multimodal data. questions: 1. How applicable is the ESCNet model to languages and datasets other than Chinese? Are there specific modifications needed for such adaptations? 2. Can you elaborate on the technical workings of the entity-enhanced encoder and how it integrates with the stance transformer to enhance the fact-checking process? 3. What are the potential future directions for this research, particularly in terms of expanding the dataset to other languages and improving the model's capabilities? ethics_review_flag: No ethics_review_description: None scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
yhGKPtRoOx
PC-X: Profound Clustering via Slow Exemplars
[ "Yuangang Pan", "Yinghua Yao", "Ivor Tsang" ]
Deep clustering aims at learning clustering and data representation jointly to deliver clustering-friendly representations. In spite of their significant improvements in clustering accuracy, existing approaches are far from meeting the requirements from other perspectives, such as universality, interpretability and efficiency, which become increasingly important with the emerging demand for diverse applications. We introduce a new framework named Profound Clustering via slow eXemplars (PC-X), which fulfils the above four basic requirements simultaneously. In particular, PC-X encodes data within the auto-encoder (AE) network to reduce its dependence on data modality (universality). Further, inspired by exemplar-based clustering, we design a Centroid-Integration Unit (CI-Unit), which not only facilitates the suppression of sample-specific details for better representation learning (accuracy), but also prompts clustering centroids to become legible exemplars (interpretability). Further, these exemplars are calibrated stably with mini-batch data following our tailor-designed optimization scheme, which converges linearly (efficiency). Empirical results on benchmark datasets demonstrate the superiority of PC-X in terms of universality, interpretability and efficiency, in addition to clustering accuracy. The code of this work is available at https://github.com/Yuangang-Pan/PC-X/.
[ "Deep clustering", "interpretable machine learning", "Optimization" ]
https://openreview.net/pdf?id=yhGKPtRoOx
t3ce6JTupV
official_review
1,697,405,586,110
yhGKPtRoOx
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission38/Reviewer_iJVS" ]
title: Review: the paper introduces a well-structured new deep clustering framework, complemented by a rich theoretical analysis and extensive experiments, but lacks a clear discussion of its limitations, and would benefit from better definitions of terminology and a more nuanced exploration of interpretability and efficiency. review: **Quality:** The submission is of good quality, showcasing a well-thought-out framework (PC-X) aimed at addressing challenges inherent in deep clustering. The theoretical underpinnings are strong, and the paper does a commendable job of situating PC-X within the existing landscape of clustering paradigms. **Clarity:** The paper is well-structured and well-written, making it easy to follow. The comparative analysis using Table 1 provides a clear picture of how PC-X stands relative to existing methods, although some definitions could have been articulated better for enhanced clarity. **Originality:** (a) PC-X, with its skip-connection module and unique optimization algorithm, presents a novel approach to deep clustering; (b) The idea of making clustering centroids into legible exemplars through decoding is innovative and adds a fresh perspective to the domain. **Significance:** The paper has the potential to further the discourse in deep clustering, especially around interpretability and efficiency. **Pros:** - Well-structured and well-articulated paper making it easy to follow. - Novel framework (PC-X) with innovative components like the skip-connection module and the unique optimization algorithm. - Rich theoretical analysis backing the proposed framework. Extensive experimentation against multiple baseline methods on several datasets. **Cons:** - Datasets used for experimentation are not very large or diverse, which may not fully demonstrate the universality of PC-X. - Lack of a clear discussion on the limitations of PC-X, which could have provided a more balanced view of the framework. - Some definitions, especially that of "Universality", are not clear enough, which might lead to confusion. The discussion on interpretability and efficiency could be more nuanced, and the claims could be better substantiated. rating: 7: Good paper, accept confidence: 3: The reviewer is fairly confident that the evaluation is correct
yhGKPtRoOx
PC-X: Profound Clustering via Slow Exemplars
[ "Yuangang Pan", "Yinghua Yao", "Ivor Tsang" ]
Deep clustering aims at learning clustering and data representation jointly to deliver clustering-friendly representations. In spite of their significant improvements in clustering accuracy, existing approaches are far from meeting the requirements from other perspectives, such as universality, interpretability and efficiency, which become increasingly important with the emerging demand for diverse applications. We introduce a new framework named Profound Clustering via slow eXemplars (PC-X), which fulfils the above four basic requirements simultaneously. In particular, PC-X encodes data within the auto-encoder (AE) network to reduce its dependence on data modality (universality). Further, inspired by exemplar-based clustering, we design a Centroid-Integration Unit (CI-Unit), which not only facilitates the suppression of sample-specific details for better representation learning (accuracy), but also prompts clustering centroids to become legible exemplars (interpretability). Further, these exemplars are calibrated stably with mini-batch data following our tailor-designed optimization scheme, which converges linearly (efficiency). Empirical results on benchmark datasets demonstrate the superiority of PC-X in terms of universality, interpretability and efficiency, in addition to clustering accuracy. The code of this work is available at https://github.com/Yuangang-Pan/PC-X/.
[ "Deep clustering", "interpretable machine learning", "Optimization" ]
https://openreview.net/pdf?id=yhGKPtRoOx
Ye6NMF4Mxu
decision
1,700,361,165,974
yhGKPtRoOx
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Program_Chairs" ]
decision: Accept (Oral) comment: All reviewers and AC agreed that the paper is of high quality, presenting a well-structured framework for deep clustering. It introduces novel components, like the skip-connection module and a unique optimization algorithm, and offers strong theoretical foundations. The potential to advance the field of deep clustering, particularly in terms of interpretability and efficiency, makes it a strong acceptance case. The action PC chair for this paper is Atlas Wang, who made the decision after carefully reading the paper as well as the comments by all reviewers and AC. The decision is agreed by all PC chairs. title: Paper Decision
yhGKPtRoOx
PC-X: Profound Clustering via Slow Exemplars
[ "Yuangang Pan", "Yinghua Yao", "Ivor Tsang" ]
Deep clustering aims at learning clustering and data representation jointly to deliver clustering-friendly representations. In spite of their significant improvements in clustering accuracy, existing approaches are far from meeting the requirements from other perspectives, such as universality, interpretability and efficiency, which become increasingly important with the emerging demand for diverse applications. We introduce a new framework named Profound Clustering via slow eXemplars (PC-X), which fulfils the above four basic requirements simultaneously. In particular, PC-X encodes data within the auto-encoder (AE) network to reduce its dependence on data modality (universality). Further, inspired by exemplar-based clustering, we design a Centroid-Integration Unit (CI-Unit), which not only facilitates the suppression of sample-specific details for better representation learning (accuracy), but also prompts clustering centroids to become legible exemplars (interpretability). Further, these exemplars are calibrated stably with mini-batch data following our tailor-designed optimization scheme, which converges linearly (efficiency). Empirical results on benchmark datasets demonstrate the superiority of PC-X in terms of universality, interpretability and efficiency, in addition to clustering accuracy. The code of this work is available at https://github.com/Yuangang-Pan/PC-X/.
[ "Deep clustering", "interpretable machine learning", "Optimization" ]
https://openreview.net/pdf?id=yhGKPtRoOx
TptbrtAjYn
meta_review
1,700,015,406,568
yhGKPtRoOx
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission38/Area_Chair_aDsg" ]
metareview: This paper proposes a new deep clustering algorithm trying to maintain 4 nice properties: accuracy, universality, interpretability and efficiency. All reviewers agree the paper has novel contributions and is well-written. The convergence analysis is a nice result of the paper. The authors have tried to address the concerns of reviewers by adding more results on large-scale datasets. Based on these, I will recommend acceptance. recommendation: Accept (Poster) confidence: 5: The area chair is absolutely certain
yhGKPtRoOx
PC-X: Profound Clustering via Slow Exemplars
[ "Yuangang Pan", "Yinghua Yao", "Ivor Tsang" ]
Deep clustering aims at learning clustering and data representation jointly to deliver clustering-friendly representations. In spite of their significant improvements in clustering accuracy, existing approaches are far from meeting the requirements from other perspectives, such as universality, interpretability and efficiency, which become increasingly important with the emerging demand for diverse applications. We introduce a new framework named Profound Clustering via slow eXemplars (PC-X), which fulfils the above four basic requirements simultaneously. In particular, PC-X encodes data within the auto-encoder (AE) network to reduce its dependence on data modality (universality). Further, inspired by exemplar-based clustering, we design a Centroid-Integration Unit (CI-Unit), which not only facilitates the suppression of sample-specific details for better representation learning (accuracy), but also prompts clustering centroids to become legible exemplars (interpretability). Further, these exemplars are calibrated stably with mini-batch data following our tailor-designed optimization scheme, which converges linearly (efficiency). Empirical results on benchmark datasets demonstrate the superiority of PC-X in terms of universality, interpretability and efficiency, in addition to clustering accuracy. The code of this work is available at https://github.com/Yuangang-Pan/PC-X/.
[ "Deep clustering", "interpretable machine learning", "Optimization" ]
https://openreview.net/pdf?id=yhGKPtRoOx
K0GE2SgFuu
official_review
1,696,625,238,667
yhGKPtRoOx
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission38/Reviewer_p97g" ]
title: Clear paper with some questions on experimental results review: The authors present an intuitive clustering method that appears to be efficient and generalizable. The authors describe existing shortcomings and the motivation clearly. Although there are several grammatical errors, they are insignificant enough to not affect the clarity. Pros + Thorough comparisons with various baselines across different data modalities + Nontrivial theoretical analysis Cons - Although the number of datasets tested is sufficient, the scale of the data is quite limited - Includes theoretical algorithm complexities but lacks empirical verification of speed-ups/memory requirements compared to other methods Questions 1) Based on Figure 2, it is interesting that your method preserves diversity within each cluster in contrast to IDEC's embedding space where points in a cluster collapse onto each other. Why is this happening, and what are the implications of this property? 2) Is there an interpretation of what the fully-connected layer does in the fusion step? Does it learn a specific way to mix information? rating: 9 confidence: 3
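As a rough illustration of the exemplar-centroid idea discussed above (soft assignment of embeddings to centroids plus a slowly updated, EMA-style centroid refresh from mini-batches), a small sketch is given below. It is not the paper's CI-Unit or its tailor-designed optimization scheme, and the temperature and momentum values are assumptions.

```python
import torch

def soft_assign(z: torch.Tensor, centroids: torch.Tensor, temperature: float = 1.0):
    """Soft cluster assignments from negative squared distances.
    z: (batch, dim) embeddings, centroids: (num_clusters, dim)."""
    d2 = torch.cdist(z, centroids) ** 2
    return torch.softmax(-d2 / temperature, dim=1)            # (batch, num_clusters)

def slow_centroid_update(centroids, z, assign, momentum: float = 0.99):
    """EMA-style ('slow') centroid refresh from one mini-batch."""
    weights = assign / assign.sum(dim=0, keepdim=True).clamp(min=1e-8)   # per-cluster weights
    batch_centroids = weights.t() @ z                                     # (num_clusters, dim)
    return momentum * centroids + (1.0 - momentum) * batch_centroids
```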
yhGKPtRoOx
PC-X: Profound Clustering via Slow Exemplars
[ "Yuangang Pan", "Yinghua Yao", "Ivor Tsang" ]
Deep clustering aims at learning clustering and data representation jointly to deliver clustering-friendly representations. In spite of their significant improvements in clustering accuracy, existing approaches are far from meeting the requirements from other perspectives, such as universality, interpretability and efficiency, which become increasingly important with the emerging demand for diverse applications. We introduce a new framework named Profound Clustering via slow eXemplars (PC-X), which fulfils the above four basic requirements simultaneously. In particular, PC-X encodes data within the auto-encoder (AE) network to reduce its dependence on data modality (universality). Further, inspired by exemplar-based clustering, we design a Centroid-Integration Unit (CI-Unit), which not only facilitates the suppression of sample-specific details for better representation learning (accuracy), but also prompts clustering centroids to become legible exemplars (interpretability). Further, these exemplars are calibrated stably with mini-batch data following our tailor-designed optimization scheme, which converges linearly (efficiency). Empirical results on benchmark datasets demonstrate the superiority of PC-X in terms of universality, interpretability and efficiency, in addition to clustering accuracy. The code of this work is available at https://github.com/Yuangang-Pan/PC-X/.
[ "Deep clustering", "interpretable machine learning", "Optimization" ]
https://openreview.net/pdf?id=yhGKPtRoOx
B8pjzrc2Bq
official_review
1,696,757,043,382
yhGKPtRoOx
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission38/Reviewer_RQJF" ]
title: Method for clustering data review: **Summary** The authors propose a clustering method which does not use any domain knowledge (i.e., no augmentation) and is interpretable (i.e., centroids can be visualized and are semantically meaningful). They propose a skip connection that encourages the interpretability of centroids and use an auxiliary variable to optimize. Results support their claims, consistently performing better than the baselines mentioned. **Strengths** 1. Presents a simple way to encourage interpretability using the skip connection. 2. Convergence analysis, ablation study and effectiveness of different loss components are presented in the paper. **Weaknesses** 1. The claim that the method is universal to any modality is incorrect: the auto-encoder architecture needs to change for different modalities. 2. The optimization section (3.1.1) could be improved. I'm unsure whether the optimization decomposition steps are performed per mini-batch. **Minor** Avoid using the term 'skip connection' as it holds a particular meaning in the literature. rating: 7: Good paper, accept confidence: 3: The reviewer is fairly confident that the evaluation is correct
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of 2.8% and 20.5% in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
fyFpcEaGSv
decision
1,700,497,909,482
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Program_Chairs" ]
decision: Accept (Oral) comment: The paper proposes a weakly supervised approach for incremental few-shot object detection. The paper contributes a novel class augmentation technique using Grad-CAM, and a performance preservation strategy by freezing the backbone and detection head during meta-training. The paper is well-written and the experiments are convincing. The reviewers raised some concerns about the novelty of the paper and the potential challenges of weak localization. The authors addressed these concerns in their rebuttal. The paper is ready to be accepted. In the camera ready, the authors should consider: (1) Providing a more detailed comparison with state-of-the-art methods; (2) Conducting more experiments to evaluate the robustness of the proposed approach to weak localization. The action PC chair for this paper is Gintare Karolina Dziugaite, who made the decision after carefully reading the paper as well as the comments by all reviewers and AC. The decision is agreed by all PC chairs. title: Paper Decision
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of 2.8% and 20.5% in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
SuUIIPtr2j
meta_review
1,699,834,868,131
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission36/Area_Chair_QT55" ]
metareview: The authors have properly addressed the concerns raised by reviewers and validated the advantages of combining existing techniques for few-shot object detection. Specifically, in the revision they have reported the computational complexity of the method. recommendation: Accept (Poster) confidence: 4: The area chair is confident but not absolutely certain
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of 2.8% and 20.5% in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
QlqU3nSn6l
official_review
1,697,472,680,080
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission36/Reviewer_oZQJ" ]
title: WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting review: The paper introduces an incremental few-shot object detection framework based on meta-learning networks. Specifically, the authors leverage the recent ONCE meta-learning network to learn class codes for new categories, enabling online addition of new classes for object detection without retraining. The paper highlights that training the meta-learning network alongside the detection network led to conflicts, resulting in limited generalization to new classes. To address this, the authors propose two strategies to enhance training effectiveness: 1. Utilizing Grad-CAM for coarse object localization in ImageNet, significantly expanding the number of object categories and images during training to improve the meta-network's performance. 2. Freezing the detection network during the meta-training phase to retain the ability to detect base classes. **Strengths**: 1. **Easy to follow**: The paper introduces an innovative approach to incremental few-shot object detection, leveraging meta-learning networks and class codes for new categories. This approach can potentially have significant practical implications. 2. **Clear Presentation**: The paper is well-structured and clearly presents the proposed framework, making it accessible to readers. **Weaknesses**: 1. **Overlap in Training Data**: The potential overlap between ImageNet and COCO categories raises concerns about the effectiveness of the model in recognizing new classes. This issue needs to be addressed and clarified. 2. **Over-reliance on Class Codes**: The heavy dependence on class codes generated by the meta-network for detection poses potential challenges when the detection network struggles to extract features from new classes. The authors should explore alternative methods to mitigate this risk. rating: 6: Marginally above acceptance threshold confidence: 3: The reviewer is fairly confident that the evaluation is correct
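To make the "class code" notion in this review concrete, below is a simplified, prototype-style sketch: per-class codes are averaged from K-shot support features and used as cosine classifiers over region features, while the backbone stays frozen. This is an illustration of the general idea only, not the paper's actual hypernetwork; tensor shapes and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def class_codes_from_support(support_feats: torch.Tensor) -> torch.Tensor:
    """Average K-shot support features into one L2-normalised code per class.
    support_feats: (num_classes, k_shot, feat_dim)."""
    codes = support_feats.mean(dim=1)
    return F.normalize(codes, dim=-1)

def classify_proposals(proposal_feats: torch.Tensor, codes: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity logits between region features (N, d) and class codes (C, d)."""
    return F.normalize(proposal_feats, dim=-1) @ codes.t()

# Novel classes can be added incrementally by concatenating their codes to `codes`,
# without touching the (frozen) feature extractor.
```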
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of 2.8% and 20.5% in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
QJ7xg2sl6H
official_review
1,697,457,132,528
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission36/Reviewer_bxCd" ]
title: Review for Submission36 WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting review: Summary: This paper proposes a new method for weakly supervised incremental few-shot detection. The authors use a meta-learning approach and introduce a weakly supervised class augmentation. They outperform traditional methods by a clear margin on MS COCO and PASCAL VOC. Strengths: 1. They have achieved very good performance improvements in terms of both the number of new classes and mAP. 2. It's interesting to study the problem of using images of a new category as the support set for meta-learning. In other words, I like the idea of applying a meta-learning framework to incremental few-shot learning. 3. I like the idea of using a pre-trained classification model with Grad-CAM for data augmentation. Weaknesses: 1. In this paper, the authors mainly compare their method with ONCE, Feature-Reweight and MAML, which were proposed before 2020. I advise the authors to compare their method with some more advanced methods. 2. The authors propose a new data augmentation method, and the Mis-Classification Filtering for removing the bad augmentations. However, no experimental results are provided to validate their effectiveness. An ablation on this is necessary. 3. In this paper, the authors use models pretrained on ImageNet to introduce the knowledge from the new categories. Another popular method is to introduce new knowledge by using large-scale pretrained multi-modal models such as CLIP. To the best of my knowledge, this second approach has also achieved good performance. Does your method work better than such methods? Please discuss this. rating: 7: Good paper, accept confidence: 3: The reviewer is fairly confident that the evaluation is correct
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our \hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the \hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of $2.8\%$ and $20.5\%$ in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
LHSzvBHS0P
official_review
1,696,909,723,352
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission36/Reviewer_fvmq" ]
title: Review of the Manuscript on WS-iFSD review: Strengths: 1. Innovation in Weakly Supervised Approach: WS-iFSD integrates weakly localized objects using Grad-CAM, magnifying class and image data, offering a fresh, data-efficient approach to iFSD. 2. Performance Preservation Strategy: Freezing the backbone and detection head during meta-training to preserve base-category detection performance is strategically sound. 3. Strong Empirical Evaluation: Substantial improvements on benchmarks (MS COCO, PASCAL VOC) validate the method’s efficacy over current approaches like ONCE. Weaknesses: 1. The novelty of this paper is questionable. The entire framework seems to be a combination of existing techniques, e.g., Grad-CAM for weak object localization, backbone freezing, etc. Despite the effectiveness, the contribution of this paper is incremental. 2. Weak Localization Concerns: The reliance on coarsely inferred bounding boxes could be a source of training noise and might impact detection reliability in complex environments. More analysis of the impact of this noise would be beneficial. 3. Computational Complexity: Absence of a detailed discussion regarding computational costs with increased class and image data might overlook practical deployment challenges. 4. Generalization Discussion: Further exploration of the model’s generalization and performance across diverse detection scenarios and datasets is needed to establish robustness. Overall: The manuscript presents an interesting, data-augmented weakly supervised approach for iFSD, demonstrating marked improvements over established benchmarks. However, the novelty and contribution of this work are questionable, and delving deeper into potential challenges with weak localization and computational complexities, along with a broader analysis of applicability, would fortify the research. The work is promising but warrants further exploration in the specified areas for comprehensive insights and applicability. rating: 5: Marginally below acceptance threshold confidence: 3: The reviewer is fairly confident that the evaluation is correct
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our \hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the \hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of $2.8\%$ and $20.5\%$ in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
C5rlpEdUTz
official_review
1,697,166,844,864
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission36/Reviewer_fGfk" ]
title: Review comments review: This paper proposes a framework for incremental few-shot object detection, which leverages a meta-learning approach, with the backbone and detection heads frozen, to generate class-specific codes; it also introduces a weakly supervised class augmentation technique with minimal requirements on image-level localization labels. The novelty of this paper seems limited, as the meta-learning method used, the freezing scheme and the class augmentation technique share many similarities with existing ones. The abstract states that "it outperforms the state-of-the-art ONCE approach on the MS COCO dataset". However, ONCE [26] was published at CVPR 2020 and may not be viewed as a SOTA method. rating: 4: Ok but not good enough - rejection confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
y2ozeixGaU
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
[ "Xinyu Gong", "Li Yin", "Juan-Manuel Perez-Rua", "Zhangyang Wang", "Zhicheng Yan" ]
Traditional object detection algorithms rely on extensive annotations from a pre-defined set of base categories, leaving them ill-equipped to identify objects from novel classes. We address this limitation by introducing a novel framework for Incremental Few-Shot Object Detection (iFSD). Leveraging a meta-learning approach, our \hypernetwork is designed to generate class-specific codes, enabling object recognition from both base and novel categories. To enhance the \hypernetwork's generalization performance, we propose a Weakly Supervised Class Augmentation technique that significantly amplifies the training data by merely requiring image-level labels for object localization. Additionally, we stabilize detection performance on base categories by freezing the backbone and detection heads during meta-training. Our model demonstrates significant performance gains on two major benchmarks. Specifically, it outperforms the state-of-the-art ONCE approach on the MS COCO dataset by margins of $2.8\%$ and $20.5\%$ in box AP for novel and base categories, respectively. When trained on MS COCO and cross-evaluated on PASCAL VOC, our model achieves a four-fold improvement in box AP compared to ONCE.
[ "few-shot object detection" ]
https://openreview.net/pdf?id=y2ozeixGaU
3QGaZtZ41L
official_review
1,696,605,879,901
y2ozeixGaU
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission36/Reviewer_69Nq" ]
title: Review of Submission 36 review: **Overall Evaluation** The primary innovation of this paper lies in bridging the gap left by traditional object detection algorithms, which often lack adaptability to novel categories, by introducing a weakly supervised class augmentation that leverages image-level labels. This approach underscores the paper's originality, and the empirical results demonstrate its efficacy. *Pros* 1. The bottleneck of iFSD is effectively addressed through weak supervision, and the use of coarse-grained supervision by the authors is reasonable. 2. The WS-iFSD pipeline is thoughtfully designed, aligning seamlessly with the motivation. Additionally, the potential uncertainty in pseudo-annotation is well mitigated by MCF. 3. Overall, the paper is well-organized and easy to follow. *Cons* 1. The analysis of computational complexity is missing. Considering that employing Pseudo Bounding Boxes from ImageNet through Grad-CAM facilitates better class augmentation, it might also increase time complexity. The authors should delve into this aspect, comparing the added complexity against performance enhancements. 2. The details for misclassification filtering are not comprehensive. One arising concern is whether the comparison of predicted labels against ground-truth labels was performed manually, which could be resource-intensive. 3. Given that class augmentation is derived from ImageNet, the extent of class overlap could be pivotal. While the authors touch upon the impact of novel overlapping classes, merely adding more classes, whether overlapping or not, to improve the novel-class AP seems somewhat straightforward and naive. A more thorough examination of the balance between the introduction of new classes and its effect on the base-class AP would be persuasive. *Minor comments* (1) Line 5: hypernetworkis -> hypernetwork is (2) Line 156: hypernetworkand -> hypernetwork and rating: 7: Good paper, accept confidence: 3: The reviewer is fairly confident that the evaluation is correct
uE1C3im4wF
Sparse Fréchet sufficient dimension reduction via nonconvex optimization
[ "Jiaying Weng", "Chenlu Ke", "Pei Wang" ]
In the evolving landscape of statistical learning, exploiting low-dimensional structures, particularly for non-Euclidean objects, is an essential and ubiquitous task with wide applications ranging from image analysis to biomedical research. Among the momentous developments in the non-Euclidean domain, Fréchet regression extends beyond Riemannian manifolds to study complex random response objects in a metric space with Euclidean features. Our work focuses on sparse Fréchet dimension reduction where the number of features far exceeds the sample size. The goal is to achieve parsimonious models by identifying a low-dimensional and sparse representation of features through sufficient dimension reduction. To this end, we construct a multitask regression model with synthetic responses and achieve sparse estimation by leveraging the minimax concave penalty. Our approach not only sidesteps inverting a large covariance matrix but also mitigates estimation bias in feature selection. To tackle the nonconvex optimization challenge, we develop a double approximation shrinkage-thresholding algorithm that combines a linear approximation to the penalty term and a quadratic approximation to the loss function. The proposed algorithm is efficient as each iteration has a clear and explicit solution. Experimental results for both simulated and real-world data demonstrate the superior performance of the proposed method compared to existing alternatives.
[ "Fréchet regression; minimax concave penalty; multitask regression; sufficient dimension reduction; sufficient variable selection." ]
https://openreview.net/pdf?id=uE1C3im4wF
vjPQ8sPdoF
decision
1,700,432,854,764
uE1C3im4wF
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Program_Chairs" ]
decision: Accept (Oral) comment: After a thorough review of the paper titled "Sparse Fréchet sufficient dimension reduction via nonconvex optimization," it is clear that the authors have addressed an interesting problem with meaningful contributions. The paper presents a nonconvex optimization algorithm involving a minimax concave penalty in the context of sparse Fréchet dimension reduction when the feature dimensionality exceeds the sample size. While the reviewers recognized the significance of the problem and the value of the contributions, they had some concerns regarding the paper's organization and requested clarifications. Additionally, they noted that the empirical results, while somewhat limited, still effectively demonstrated the merits of the proposed approach. During the rebuttal, the authors responded to all reviewers' comments in a convincing manner. Therefore, we recommend the acceptance of the paper. We strongly encourage the authors to carefully revise their final submission in line with the feedback provided by the reviewers. The action PC chair for this paper is Qing Qu, who made the decision after carefully reading the paper as well as the comments by all reviewers and AC. The decision is agreed upon by all PC chairs. title: Paper Decision
uE1C3im4wF
Sparse Fréchet sufficient dimension reduction via nonconvex optimization
[ "Jiaying Weng", "Chenlu Ke", "Pei Wang" ]
In the evolving landscape of statistical learning, exploiting low-dimensional structures, particularly for non-Euclidean objects, is an essential and ubiquitous task with wide applications ranging from image analysis to biomedical research. Among the momentous developments in the non-Euclidean domain, Fréchet regression extends beyond Riemannian manifolds to study complex random response objects in a metric space with Euclidean features. Our work focuses on sparse Fréchet dimension reduction where the number of features far exceeds the sample size. The goal is to achieve parsimonious models by identifying a low-dimensional and sparse representation of features through sufficient dimension reduction. To this end, we construct a multitask regression model with synthetic responses and achieve sparse estimation by leveraging the minimax concave penalty. Our approach not only sidesteps inverting a large covariance matrix but also mitigates estimation bias in feature selection. To tackle the nonconvex optimization challenge, we develop a double approximation shrinkage-thresholding algorithm that combines a linear approximation to the penalty term and a quadratic approximation to the loss function. The proposed algorithm is efficient as each iteration has a clear and explicit solution. Experimental results for both simulated and real-world data demonstrate the superior performance of the proposed method compared to existing alternatives.
[ "Fréchet regression; minimax concave penalty; multitask regression; sufficient dimension reduction; sufficient variable selection." ]
https://openreview.net/pdf?id=uE1C3im4wF
h9CMCdSMVx
official_review
1,696,715,644,058
uE1C3im4wF
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission10/Reviewer_jzrn" ]
title: Not qualified to review this review: Thank you for considering me as a reviewer for this manuscript. After careful consideration, I feel that I may not be the most suitable reviewer for this particular work. My expertise does not sufficiently cover the areas related to the theoretical perspective of sufficient dimension reduction, which seems to be a foundational concept in the paper. The work also builds on several other Fréchet SDR works that I have never heard of. As such, I am concerned that I might not be able to fully grasp the motivation and nuances of the method presented. It would be in the best interest of the authors and AC not to take my comments on this work into account. rating: 6: Marginally above acceptance threshold confidence: 1: The reviewer's evaluation is an educated guess
uE1C3im4wF
Sparse Fréchet sufficient dimension reduction via nonconvex optimization
[ "Jiaying Weng", "Chenlu Ke", "Pei Wang" ]
In the evolving landscape of statistical learning, exploiting low-dimensional structures, particularly for non-Euclidean objects, is an essential and ubiquitous task with wide applications ranging from image analysis to biomedical research. Among the momentous developments in the non-Euclidean domain, Fréchet regression extends beyond Riemannian manifolds to study complex random response objects in a metric space with Euclidean features. Our work focuses on sparse Fréchet dimension reduction where the number of features far exceeds the sample size. The goal is to achieve parsimonious models by identifying a low-dimensional and sparse representation of features through sufficient dimension reduction. To this end, we construct a multitask regression model with synthetic responses and achieve sparse estimation by leveraging the minimax concave penalty. Our approach not only sidesteps inverting a large covariance matrix but also mitigates estimation bias in feature selection. To tackle the nonconvex optimization challenge, we develop a double approximation shrinkage-thresholding algorithm that combines a linear approximation to the penalty term and a quadratic approximation to the loss function. The proposed algorithm is efficient as each iteration has a clear and explicit solution. Experimental results for both simulated and real-world data demonstrate the superior performance of the proposed method compared to existing alternatives.
[ "Fréchet regression; minimax concave penalty; multitask regression; sufficient dimension reduction; sufficient variable selection." ]
https://openreview.net/pdf?id=uE1C3im4wF
ZFH9xxAf02
official_review
1,696,815,812,717
uE1C3im4wF
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission10/Reviewer_LJBD" ]
title: The theory is interesting. The problem setup is poor. review: Paper summary: This paper focuses on the problem of sufficient dimension reduction (SDR). The model consists of a random object response matrix X in a metric space and a predictor vector Y in a Euclidean space; and SDR assumes that they are statistically independent of each other given an unknown linear function of the response matrix X, namely, \beta^T X. The number of features in X exceeds the sample size. The goal is to identify a low-dimensional and sparse representation of the dimension reduction subspace (DSR) --- which is related to \beta. They construct a multitask regression model with synthetic responses (computed from X and Y) and achieve sparse estimation by leveraging the minimax concave penalty. Their algorithms avoid inverting the ill-conditioned covariance matrix. The optimization problems are nonconvex. So they develop and implement a double approximation shrinkage-thresholding algorithm that combines a linear approximation to the penalty term and a quadratic approximation to the loss function. They show the utility of their algorithms on manifold-valued synthetic data and one real-world dataset.
My Evaluation: The problem statement is interesting. However, the authors should reorganize subsection 2.1 to more clearly explain it; perhaps they can dedicate a LaTeX environment under "Problem 1" and formalize the problem. What are the measurements, what is the goal, and what are the assumptions? One of the main ideas to arrive at the current solution is "conditional mean independence," which is not explained. Furthermore, it is not clear how one arrives at equation (4) from equation (3). I left a comment below about this. Every step of the derivation should be clearly explained (as well as the required assumptions). Overall, I think this is a nice problem statement. The proposed solutions are neat and, for the most part, clearly explained (section 3). The experiments are limited but overall they showcase the performance of their approach compared to the LASSO-based methods on synthetic data (using small sample sizes). I'm willing to increase my score if the authors address all the comments in this review. Questions, Comments, and Suggestions: (1) In the introduction, the problem of "Fréchet SDR" is neither motivated nor introduced.
(2) Equation (1): What is the orthogonality with respect to? Is there a probability distribution assumption? If so, please make it clear. That way, you can better discuss terms like “regression information”. (3) (page 1, line 38) “The matrix \beta satisfying (1) is not identifiable.” Up to what? I assume it is up to a post-multiplication by an orthogonal (or an invertible?) matrix. Please make it clear.
(4) (page 1, line 39) “However, DRS is not unique.” Is this generally true? If so, please dedicate a remark and explain why this is the case. My understanding is that it should depend on the specific model, no? For example, consider a simple linear model Y = \beta^{T} X + n where n is an independent AWGN. Is DRS unique in this example?
(5) (page 2, line 81) “The double approximation technique relies less on the initial values and provides explicit expressions at each iteration, … ” What does it mean that this technique relies less on the initial values? Do you mean it has a global solution? Or maybe it is not sensitive to the initial values? If so, why, or in which cases?
(6) (page 3, line 91) "To detect the conditional mean independence, ... " Could you please explain what this means? This seems very important because it is the main reason for using the WIRE kernel matrix to identify DSR. (7) (Proposition 1). (a) What is "metric space of negative type"? (b) This proposition puts a constraint on both the distribution of X and the underlying metric space. Am I correct? If so, could you please give a couple of examples in which the linearity assumption is true. (c) Please reference the proposition inside its environment. (8) (Section 2.2) The derived conclusion of \eta \propto 1/n X^T X \beta is true but not immediately obvious. I spent some time deriving it for myself. I suggest that you do the following: (a) Dedicate a definition for the synthetic response variables -- computed from Y and X. (b) Start with the fact that \Sigma Span( \beta ) = Span( \Lambda ) (under the assumptions) and claim that \mathbb{Y} (defined earlier) can be written as \mathbb{Y} = X \beta where \beta is a vector in DSR. Then this can be generalized for when \beta is a matrix. Then, motivate the use of equation (4). The current derivation/explanation is very confusing. (9) (Line 2 in Algorithm 1). Could you please explain how X^*_j = \lambda / w_j \mathbb{X}_j is derived? This is not obvious to me. A remark on the main idea in adapting LLA to the sparse setting would benefit the readers. Also, please make sure that you clearly define terms like \mathbb{X}_j, X_j, w_j, e_j, \beta_i, b_{i, k-1}, .... This notation is very confusing. I suggest that for rows and columns you use e_i^T X, X e_j (not X_j, ...) and for iterations use superscripts with parentheses like X^{(k)}. Also, indices should vary from lowercase letters to uppercase ones, like \sum_{n=1}^{N} not \sum_{j=1}^{p} ... (10) (Algorithm 2) What is O_{i,k+1}? (11) (Page 6, line 210) "[24] discovered a disparity a ... " (a) Please do not use numbered references as nouns (everywhere it applies). (b) Please explain why the eigenvalues of \Lambda (instead of \Sigma^{-1} \Lambda) are important. Also, why do we need the adjustment terms? (12) The one-step LLA_G algorithm seems to provide very good numerical results (Table 2 and example 3 in Figures 1 and 2) compared to DASTA methods. Any comments on the computational complexity of LLA vs DASTA? (13) (page 1, line 62) "Most of the existing sparse SDR methods employed Lasso or group-Lasso penalty, both of which are convex and lead to biased estimation." What is the source of this biased estimation? And how does the nonconvex MCP penalty remedy that? rating: 5: Marginally below acceptance threshold confidence: 3: The reviewer is fairly confident that the evaluation is correct
uE1C3im4wF
Sparse Fréchet sufficient dimension reduction via nonconvex optimization
[ "Jiaying Weng", "Chenlu Ke", "Pei Wang" ]
In the evolving landscape of statistical learning, exploiting low-dimensional structures, particularly for non-Euclidean objects, is an essential and ubiquitous task with wide applications ranging from image analysis to biomedical research. Among the momentous developments in the non-Euclidean domain, Fréchet regression extends beyond Riemannian manifolds to study complex random response objects in a metric space with Euclidean features. Our work focuses on sparse Fréchet dimension reduction where the number of features far exceeds the sample size. The goal is to achieve parsimonious models by identifying a low-dimensional and sparse representation of features through sufficient dimension reduction. To this end, we construct a multitask regression model with synthetic responses and achieve sparse estimation by leveraging the minimax concave penalty. Our approach not only sidesteps inverting a large covariance matrix but also mitigates estimation bias in feature selection. To tackle the nonconvex optimization challenge, we develop a double approximation shrinkage-thresholding algorithm that combines a linear approximation to the penalty term and a quadratic approximation to the loss function. The proposed algorithm is efficient as each iteration has a clear and explicit solution. Experimental results for both simulated and real-world data demonstrate the superior performance of the proposed method compared to existing alternatives.
[ "Fréchet regression; minimax concave penalty; multitask regression; sufficient dimension reduction; sufficient variable selection." ]
https://openreview.net/pdf?id=uE1C3im4wF
VIroaRR4V9
official_review
1,697,136,054,928
uE1C3im4wF
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission10/Reviewer_PKWv" ]
title: Main Review review: This manuscript proposes a new method for high-dimensional Frechet SDR by augmenting a weighted inverse regression ensemble with a nonconvex penalty. Overall the reviewer finds this paper easy to follow, and the contributions are clear, hence would like to recommend a “clear accept”. Several minor comments: 1. In line 61, perhaps the authors can consider using a single \cite{} to contain all bibs, so that the citations will appear as [16-27]. 2. The reviewer could be wrong, but the formulation in equation (4) looks very similar to a sparse dictionary learning problem. More precisely, for each input data vector $(x_i,y_i)$, we want to use at most $d$ elements $x_i$ to represent $y_i$. The reviewer thinks it would be great if the authors could elaborate more on the connection between the proposed objective function with the sparse dictionary learning problem (especially in the non-convex optimization setting). Note that there is a vast number of papers in the sparse dictionary literature if the authors would like to add some related papers, perhaps the authors could use [1] as a potential starting point. But feel free to ignore this suggestion, if the authors find this suggestion not relevant. [1] Zhang, Yuqian, Qing Qu, and John Wright. "From symmetry to geometry: Tractable nonconvex problems." arXiv preprint arXiv:2007.06753 (2020). rating: 7: Good paper, accept confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
uE1C3im4wF
Sparse Fréchet sufficient dimension reduction via nonconvex optimization
[ "Jiaying Weng", "Chenlu Ke", "Pei Wang" ]
In the evolving landscape of statistical learning, exploiting low-dimensional structures, particularly for non-Euclidean objects, is an essential and ubiquitous task with wide applications ranging from image analysis to biomedical research. Among the momentous developments in the non-Euclidean domain, Fréchet regression extends beyond Riemannian manifolds to study complex random response objects in a metric space with Euclidean features. Our work focuses on sparse Fréchet dimension reduction where the number of features far exceeds the sample size. The goal is to achieve parsimonious models by identifying a low-dimensional and sparse representation of features through sufficient dimension reduction. To this end, we construct a multitask regression model with synthetic responses and achieve sparse estimation by leveraging the minimax concave penalty. Our approach not only sidesteps inverting a large covariance matrix but also mitigates estimation bias in feature selection. To tackle the nonconvex optimization challenge, we develop a double approximation shrinkage-thresholding algorithm that combines a linear approximation to the penalty term and a quadratic approximation to the loss function. The proposed algorithm is efficient as each iteration has a clear and explicit solution. Experimental results for both simulated and real-world data demonstrate the superior performance of the proposed method compared to existing alternatives.
[ "Fréchet regression; minimax concave penalty; multitask regression; sufficient dimension reduction; sufficient variable selection." ]
https://openreview.net/pdf?id=uE1C3im4wF
NOc3SX9Y7K
meta_review
1,699,711,593,415
uE1C3im4wF
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission10/Area_Chair_zUhj" ]
metareview: In this paper, the authors deal with the problem of sparse Fréchet dimension reduction in the setting where the number of features exceeds the sample size. They propose a nonconvex optimization algorithm that involves a minimax concave penalty. The reviewers recognized that the problem under study is interesting and that the contributions of the current paper are meaningful. The reviewers raised some concerns regarding the organization of the paper and requested clarifications. Moreover, they found the empirical results somewhat limited, but at the same time admitted that they sufficiently showcase the merits of the approach. The authors responded to all reviewers' comments point-by-point, and their responses were quite convincing. Therefore, I recommend the acceptance of the paper as a poster and strongly encourage the authors to accordingly revise their final submission as requested by the reviewers. recommendation: Accept (Poster) confidence: 4: The area chair is confident but not absolutely certain
p8WpFhcKsK
Efficiently Disentangle Causal Representations
[ "Yuanpeng Li", "Joel Hestness", "Mohamed Elhoseiny", "Liang Zhao", "Kenneth Church" ]
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks.
[ "causal representation learning" ]
https://openreview.net/pdf?id=p8WpFhcKsK
xpOSWuqW1M
official_review
1,697,173,276,343
p8WpFhcKsK
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission9/Reviewer_fXyy" ]
title: Review review: **Summary:**. The paper explores a new approach to concurrently disentangling causal variables and determining the causal relationship between them. It draws inspiration from the work of Bengio et al. (2020), but diverges by focusing on the generalization gap instead of employing a meta-objective and adaptation speed. **Pros:** • The proposed method addresses both learning a disentangled representation of causal variables and discovering their causal direction. **Cons:** • The evaluation is somewhat limited. Specifically, it remains uncertain how the method can be extended to more than two variables and larger benchmarking datasets. Additionally, it would be beneficial to see ablation studies on edge cases, particularly the impact of choosing conditional and marginal distributions on the condition of a small entropy gap in Proposition 2 (which lies at the core of the theoretical justification of the method) **Detailed:** The proposed method presents an interesting solution for acquiring both disentangled (causal) representations and the causal relationships between them. It builds upon the concept introduced by Bengio et al. (2020), but simplifies the computations to a comparison of the generalization gap. Consequently, it eliminates the necessity for defining connection-wise structural parameters and employing REINFORCE-based gradient estimators. However, I do have some concerns about this work. Firstly, I find the evaluation (and hence the impact) of the paper limited. The method is designed solely for setups involving two observational variables, and it does not appear readily extensible to studies involving datasets with more variables. As a result, its usefulness in recovering causal directionality in large-scale practical applications is unclear. Furthermore, the only baseline considered is the work of Bengio et al. (2020). While this is a logical baseline given the similarities between both approaches, it's worth noting that the notion of meta-learning the structural parameters of a causal graph has also been explored in more expansive and efficient studies (e.g., [2] and [3]). Secondly, if I comprehend correctly, according to Proposition 1 (and consequently 2), the method is capable of working only with interventions on the cause variable (i.e. the causal graph cannot change between the transfer and train distributions). Consequently, there are no assurances regarding its behavior when the transfer dataset is, in fact, an intervention on the effect, thereby breaking the dependence between observed variables. Simultaneously, according to Proposition 2, the difference in generalization gap is posited as a valid predictor if the delta entropy gap between variables B and A is "reasonably small." However, the authors only provide an intuition that such a statement should not be violated in real-world applications. The paper does not examine any real-world datasets or distributions (or their "approximations," as encountered, for instance, in BnLearn, since it is difficult to request a "real" dataset), nor does it conduct ablation studies on potential edge cases that could lead to the violation of the aforementioned argument. I believe the paper could benefit from a more comprehensive analysis from this perspective. **Questions:** How is σ(γ) defined in this approach? **References:**. [1] Bengio, Yoshua, et al. "A meta-transfer objective for learning to disentangle causal mechanisms." arXiv preprint arXiv:1901.10912 (2019). [2] Ke, Nan Rosemary, et al. 
"Learning neural causal models from unknown interventions." arXiv preprint arXiv:1910.01075 (2019). [3] Lippe, Phillip, Taco Cohen, and Efstratios Gavves. "Efficient neural causal discovery without acyclicity constraints." arXiv preprint arXiv:2107.10483 (2021). rating: 5: Marginally below acceptance threshold confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
p8WpFhcKsK
Efficiently Disentangle Causal Representations
[ "Yuanpeng Li", "Joel Hestness", "Mohamed Elhoseiny", "Liang Zhao", "Kenneth Church" ]
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks.
[ "causal representation learning" ]
https://openreview.net/pdf?id=p8WpFhcKsK
wVYljf1y7t
meta_review
1,699,835,687,954
p8WpFhcKsK
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission9/Area_Chair_RnYz" ]
metareview: The reviewers are generally positive about the paper and find it technically strong, original, and well-written. Several concerns were raised, including the limitation to two observational variables and limited experimental evaluation. Overall, the paper has sufficient novel contributions and is worth acceptance. recommendation: Accept (Poster) confidence: 4: The area chair is confident but not absolutely certain
p8WpFhcKsK
Efficiently Disentangle Causal Representations
[ "Yuanpeng Li", "Joel Hestness", "Mohamed Elhoseiny", "Liang Zhao", "Kenneth Church" ]
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks.
[ "causal representation learning" ]
https://openreview.net/pdf?id=p8WpFhcKsK
q0H26scDi2
official_review
1,696,473,591,235
p8WpFhcKsK
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission9/Reviewer_oMj8" ]
title: A novel approach that enhances an existing method, supported by both solid theoretical justification and empirical evaluation. review: **Overview** This paper proposes an efficient and theoretically motivated approach that simplifies the technique of disentangling causal representations. **Strengths** * The key idea of approximating the difference of conditional probabilities with models' generalization abilities is intuitively reasonable and is further carefully justified, with corner cases well discussed. * The empirical evaluation is solid, as the experiments follow the most standard prior work, and the efficiency improvement of the proposed method is very pronounced. **Questions** * Does the paper offer insights into the approximation error associated with using generalization abilities as a surrogate for the actual difference in conditional probabilities? Is there any provided intuition, or are there established bounds? rating: 7: Good paper, accept confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
p8WpFhcKsK
Efficiently Disentangle Causal Representations
[ "Yuanpeng Li", "Joel Hestness", "Mohamed Elhoseiny", "Liang Zhao", "Kenneth Church" ]
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks.
[ "causal representation learning" ]
https://openreview.net/pdf?id=p8WpFhcKsK
azTd178Zw8
decision
1,700,422,031,728
p8WpFhcKsK
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Program_Chairs" ]
decision: Accept (Oral) comment: This paper proposes a new approach to disentangling causal representations based on the difference of conditional probabilities in the original and new distributions, which is more sample-efficient and faster than prior art. The paper is worthy of acceptance. The action PC chair for this paper is Yuejie Chi, who made the decision after carefully reading the paper as well as the comments by all reviewers and AC. The decision is agreed upon by all PC chairs. title: Paper Decision
p8WpFhcKsK
Efficiently Disentangle Causal Representations
[ "Yuanpeng Li", "Joel Hestness", "Mohamed Elhoseiny", "Liang Zhao", "Kenneth Church" ]
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks.
[ "causal representation learning" ]
https://openreview.net/pdf?id=p8WpFhcKsK
Fw4H4UY2xj
official_review
1,696,901,548,314
p8WpFhcKsK
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission9/Reviewer_sut8" ]
title: Solid theoretical analysis review: This paper studies efficient learning of disentangled causal representations by approximating conditional probability differences with generalization loss. **Quality** The paper is technically strong with solid theoretical analysis and extensive experiments. **Clarity** The writing is clear and easy to follow. The problem is well motivated, and the proposed approach is intuitively explained. Theoretical results clearly convey the mechanisms and efficiency benefits. **Originality** Approximating conditional divergence with generalization loss to identify causality is novel. This simplification enables direct leverage of standard ML workflows. **Significance** The significant efficiency improvements enable wider application of causal representation learning. ## Pros 1. Intuitive approximation using generalization loss avoids the adaptation process. 2. Theoretical analysis clearly explains mechanisms and advantages. 3. Empirical results strongly demonstrate efficiency benefits. 4. Enables straightforward integration with standard ML workflows. ## Cons 1. More intuition behind the generalization loss approximation could be useful. 2. Experiments on more complex real datasets could better showcase benefits. 3. More discussion on sensitivity and failure cases would be helpful. 4. Comparison with more baselines besides the one previous method is needed. 5. A broader impact discussion could be added. rating: 8: Top 50% of accepted papers, clear accept confidence: 3: The reviewer is fairly confident that the evaluation is correct
kmzH8kT9TE
Emergence of Segmentation with Minimalistic White-Box Transformers
[ "Yaodong Yu", "Tianzhe Chu", "Shengbang Tong", "Ziyang Wu", "Druv Pai", "Sam Buchanan", "Yi Ma" ]
Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models solely as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as CRATE, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable.
[ "white-box transformer", "emergence of segmentation properties" ]
https://openreview.net/pdf?id=kmzH8kT9TE
j1gQgfeKER
official_review
1,695,911,643,969
kmzH8kT9TE
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission12/Reviewer_xxKA" ]
title: An Official Review about Acceptance review: # Summary By conducting thorough quantitative and qualitative analyses, the paper effectively highlights the prowess of CRATE models in semantic segmentation, achieved through simple supervised training of classification. The research strongly underscores the merits of White-Box Transformers. # Strengths 1. **Well-Written**: The paper is structured lucidly, with detailed experimental settings, making it accessible and easy to follow. 2. **Robust Experiments**: This paper undertakes comprehensive quantitative and qualitative evaluations, attesting to CRATE's superiority. 3. **Interesting Findings**: The study underscores the importance of a more interpretable architecture design for the academic community. # Questions 1. In Figure 6 (left), VIT-B displays optimal performance in the last block, whereas CRATE-B peaks in the penultimate layer. Paper [1] suggests that the penultimate-layer features in ViTs trained with DINO strongly correlate with visual input saliency. Considering both models in Figure 6 are supervised-trained, why do they peak in different layers? Is there a more cogent explanation? 2. CRATE advances semantic segmentation via its white-box design. How does it stack up against other black-box architectural designs, such as PVT [2]? Can black-box designs also enhance the emergence? # References [1] Emerging properties in self-supervised vision transformers [2] Pyramid vision transformer: A versatile backbone for dense prediction without convolutions rating: 7: Good paper, accept confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
kmzH8kT9TE
Emergence of Segmentation with Minimalistic White-Box Transformers
[ "Yaodong Yu", "Tianzhe Chu", "Shengbang Tong", "Ziyang Wu", "Druv Pai", "Sam Buchanan", "Yi Ma" ]
Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models solely as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as CRATE, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable.
[ "white-box transformer", "emergence of segmentation properties" ]
https://openreview.net/pdf?id=kmzH8kT9TE
dWEgmglAZz
decision
1,700,421,224,203
kmzH8kT9TE
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Program_Chairs" ]
decision: Accept (Oral) comment: This paper demonstrated the emergent segmentation capability of a white-box transformer-like architecture known as CRATE, with minimalistic supervised training. The paper has been well received by the reviewers, and may inspire future work on the design of white-box foundation models that are strongly performant. The action PC chair for this paper is Yuejie Chi, who made the decision after carefully reading the paper as well as the comments by all reviewers and AC. The decision is agreed upon by all PC chairs. title: Paper Decision
kmzH8kT9TE
Emergence of Segmentation with Minimalistic White-Box Transformers
[ "Yaodong Yu", "Tianzhe Chu", "Shengbang Tong", "Ziyang Wu", "Druv Pai", "Sam Buchanan", "Yi Ma" ]
Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models solely as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as CRATE, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable.
[ "white-box transformer", "emergence of segmentation properties" ]
https://openreview.net/pdf?id=kmzH8kT9TE
ac4pk8fL5e
official_review
1,696,632,143,505
kmzH8kT9TE
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission12/Reviewer_WH3x" ]
title: An interesting work on a novel and explainable Vision Transformer architecture review: This paper proposes a white-box Transformer architecture for visual segmentation tasks. The model is designed to optimize the sparse rate reduction objective, which results in the property that each layer first compresses the distribution of tokens and then sparsely encodes the next representation. The visualization shows a good segmentation performance compared with ViT. Meanwhile, each layer and attention head is explainable. This paper is very well-written. However, I still have several questions or concerns. 1. What is the major contribution compared with [51]? From my understanding, the architecture seems similar to [51]. Is the contribution mainly about the segmentation tasks? 2. It would be better if there were more theoretical explanations of the relationship between the CRATE model architecture and segmentation tasks in Section 2. I like the proposed mechanism of each layer, but it is unclear how it helps segmentation tasks in theory. 3. I will treat such work as an important work on the theoretical understanding of Vision Transformers. The proposed mechanism of first compressing and then sparsely encoding is very interesting to me. Some recent theoretical works [a], [b], [c] on (Vision) Transformers provide another explanation of the learning process, which could be summarized as feature matching and selection. It would be great if this paper could include a discussion of these works. 4. A minor point. How do you compare the training efficiency of your proposed method with existing works on segmentation tasks? [a] S. Jelassi et al., NeurIPS 2022. "Vision transformers provably learn spatial structure." [b] H. Li et al., ICLR 2023. "A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity." [c] Y. Li et al., ICML 2023. "How do transformers learn topic structure: Towards a mechanistic understanding." rating: 6: Marginally above acceptance threshold confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
kmzH8kT9TE
Emergence of Segmentation with Minimalistic White-Box Transformers
[ "Yaodong Yu", "Tianzhe Chu", "Shengbang Tong", "Ziyang Wu", "Druv Pai", "Sam Buchanan", "Yi Ma" ]
Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models solely as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as CRATE, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable.
[ "white-box transformer", "emergence of segmentation properties" ]
https://openreview.net/pdf?id=kmzH8kT9TE
MSlHZMxntP
official_review
1,696,680,638,223
kmzH8kT9TE
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission12/Reviewer_KcjJ" ]
title: Captivating observation about the emergence of segmentation properties review: This paper investigates the emergence of segmentation properties within Vision Transformer models (ViTs). Contrary to the prevailing belief that segmentation properties predominantly result from intricate self-supervised techniques like DINO, this paper illustrates that these properties can also manifest through architectural choices within the conventional supervised training paradigm. The authors provide a comprehensive overview of the background literature necessary for grasping the paper's content, rendering their work self-contained and accessible. Moreover, the paper carefully examines both qualitative and quantitative metrics to support its claims. Specifically, the authors demonstrate that their findings hold across diverse datasets, varying model sizes, and a spectrum of evaluation metrics. The inclusion of insightful ablation studies, notably the architectural modification of ViT (specifically, the replacement of MHSA with MSSA), adds depth to the analysis. A notable feature of this paper lies in its detailed description of the experimental methodologies, thoughtfully provided in the appendix (huge plus!). This greatly improves the ability to replicate the results and encourages their use as a basis for subsequent research endeavors. In conclusion, the findings presented in this work hold substantial value for the research community. They introduce and validate novel and intriguing insights previously unexplored, effectively challenging established beliefs regarding the emergence of segmentation properties in ViTs. A few minor comments and questions for the authors: 1. Could you elaborate more on the thought process that led you to investigate this specific architectural choice? Usually, papers introducing novel architectures lack insight into this aspect, and such findings may appear as if they came out of nowhere. However, I believe that architectural improvements are typically the result of an iterative process, often involving failed attempts. Including a paragraph describing other architectural options you explored (if any) and explaining how and where they fell short in producing segmentation properties would be highly beneficial. This information could prove invaluable to fellow researchers seeking to build upon your work, potentially saving them time and effort by avoiding similar pitfalls. 2. Your paper demonstrates the emergence of segmentation in the attention maps of CRATE trained in a supervised manner on ImageNet-21k. Have you observed these same segmentation properties persisting after fine-tuning the model for other downstream tasks, or do these features tend to diminish during the transfer-learning process? It would be valuable to include an analysis of some of the transfer-learning datasets mentioned in Appendix C.2 to shed light on this aspect. 3. Figure 11 has a typo in the x-axis label: “Epocs” -> “Epochs” 4. Looking at Figure 11 (left), it’s evident that the AP score saturates after the 9th epoch. Could you provide the same analysis as in Figure 11 (left and right) for at least one more model and one more dataset? This additional data would provide insight into when these segmentation properties typically manifest during training. Are they consistently present early in training, or does their appearance depend on factors like model size and dataset selection? 5. Could you also provide the same analysis as in Figure 11 for classic ViT models? This would offer further insights into whether classic ViTs could have benefited from extended training or if they also saturate early in training, albeit with notably lower scores. rating: 7: Good paper, accept confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
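For readers who want to see concretely what "segmentation emerging in the attention maps" means in practice, the sketch below shows one common way to read a coarse foreground mask off a transformer's attention, assuming a ViT- or CRATE-style model that exposes per-head attention for a block with a [CLS] token at position 0. The thresholding follows the DINO-style visualization recipe and is only illustrative; the function name, parameters, and defaults are hypothetical and not taken from the paper under review.

```python
import torch

def cls_attention_mask(attn, patch_grid=(14, 14), keep=0.6):
    """Coarse foreground mask from the [CLS] token's attention to patch tokens.

    attn: tensor of shape (heads, tokens, tokens) from one transformer block,
          where token 0 is [CLS] and the remaining tokens are image patches.
    keep: fraction of total attention mass retained as "foreground",
          mirroring DINO-style attention visualizations.
    """
    cls_to_patches = attn[:, 0, 1:].mean(dim=0)              # average heads -> (num_patches,)
    cls_to_patches = cls_to_patches / cls_to_patches.sum()   # normalize to a distribution
    vals, idx = torch.sort(cls_to_patches, descending=True)
    cum = torch.cumsum(vals, dim=0)
    mask_flat = torch.zeros_like(cls_to_patches, dtype=torch.bool)
    mask_flat[idx[cum <= keep]] = True                       # smallest patch set covering `keep` mass
    return mask_flat.reshape(patch_grid)                     # (H_patches, W_patches) boolean mask
```

Masks produced this way can then be compared against ground-truth segmentations (for example via mIoU or AP-style scores) to quantify how much segmentation has emerged, which is the kind of measurement Figure 11 of the reviewed paper appears to track over training epochs.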
kmzH8kT9TE
Emergence of Segmentation with Minimalistic White-Box Transformers
[ "Yaodong Yu", "Tianzhe Chu", "Shengbang Tong", "Ziyang Wu", "Druv Pai", "Sam Buchanan", "Yi Ma" ]
Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models solely as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as CRATE, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable.
[ "white-box transformer", "emergence of segmentation properties" ]
https://openreview.net/pdf?id=kmzH8kT9TE
JkY9BFNnvm
meta_review
1,699,820,537,458
kmzH8kT9TE
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission12/Area_Chair_JySd" ]
metareview: This paper provides insightful investigations into whether an effective segmentation transformer-based model can come from supervised training/pre-training. Detailed model and optimization designs are offered, together with extensive analysis and informative visualizations. The authors also made great efforts during the rebuttal period. All reviewers voted for the acceptance of this submission, and I agree that it can benefit our community. recommendation: Accept (Poster) confidence: 4: The area chair is confident but not absolutely certain
kkz4BbquBy
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
[ "Murat Onur Yildirim", "Elif Ceren Gok Yildirim", "Ghada Sokar", "Decebal Constantin Mocanu", "Joaquin Vanschoren" ]
Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end; regularization, replay, architecture, and parameter isolation approaches were introduced to the literature. Parameter isolation using a sparse network which enables to allocate distinct parts of the neural network to different tasks and also allows to share of parameters between tasks if they are similar. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study investigating the effect of different DST components under the CL paradigm to fill a critical research gap and shed light on the optimal configuration of DST for CL if it exists. Therefore, we perform a comprehensive study in which we investigate various DST components to find the best topology per task on well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup since our primary focus is to evaluate the performance of various DST criteria, rather than the process of mask selection. We found that, at a low sparsity level, Erdos-Renyi Kernel (ERK) initialization utilizes the backbone more efficiently and allows to effectively learn increments of tasks. At a high sparsity level, however, uniform initialization demonstrates more reliable and robust performance. In terms of growth strategy; performance is dependent on the defined initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising way for better continual learners.
[ "continual learning", "sparse neural networks", "dynamic sparse training" ]
https://openreview.net/pdf?id=kkz4BbquBy
y2DhNzztu6
official_review
1,696,653,605,950
kkz4BbquBy
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission27/Reviewer_YxBF" ]
title: please see the review. review: The paper investigates the use of Dynamic Sparse Training (DST) strategies in Continual Learning (CL) scenarios. The authors explore different initialization and growth strategies and their impact on performance. They conduct experiments on CIFAR100 and miniImageNet datasets with varying sparsity levels and task numbers. The choice of initialization and growth strategies depends on the sparsity level and number of tasks. The adaptive approach proposed by the authors shows promising results in enhancing performance. Advantages: 1. The paper explores the performance of different DST strategies in the context of continual learning (CL) tasks. 2. It proposes an adaptive approach for selecting DST criteria per task, which improves performance compared to fixed strategies. Disadvantages: 1. It's unclear how the DST strategies discussed in the paper perform when applied to other neural networks like VGG and MobileNet. Additional experimentation or information on their applicability to different architectures could be valuable. 2. The paper focuses on sparsity levels of 80%, 90%, and 95% to draw the conclusion that at a low to moderate sparsity level, ERK initialization is more efficient, and at a high sparsity level, uniform initialization is more robust. It would be beneficial to know how these strategies perform at other sparsity levels, like higher than 95%, lower than 80%, or more fine-grained sparsity levels between 80% and 95%. Also, it would be better to know how these strategies perform in other works; would other works demonstrate the same conclusion? 3. I wonder whether the proposed adaptive method can be applied to other works; would the performance of the adaptive method be better than random or gradient growth in other works? 4. It would be better to show how the frequency of topology updates affects performance. rating: 6: Marginally above acceptance threshold confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
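As background for the initialization strategies compared in this review, the sketch below shows how uniform and Erdos-Renyi-Kernel (ERK) per-layer densities are typically derived for a target global density in DST implementations (e.g. RigL-style code). It is a simplified illustration: densities are clipped at 1.0 without redistributing the surplus, which full implementations handle iteratively, and the function name and example shapes are hypothetical rather than taken from the paper.

```python
import numpy as np

def sparsity_distribution(layer_shapes, global_density, method="erk"):
    """Per-layer densities for a target global density.

    layer_shapes: list of weight shapes, e.g. [(64, 3, 3, 3), (128, 64, 3, 3), (100, 512)]
    method: "uniform" assigns every layer the same density; "erk" scales each
            layer's density by (sum of its dimensions) / (product of its
            dimensions), so small layers stay denser than large ones.
    """
    n_params = np.array([np.prod(s) for s in layer_shapes], dtype=float)
    if method == "uniform":
        return np.full(len(layer_shapes), global_density)
    scores = np.array([np.sum(s) / np.prod(s) for s in layer_shapes])
    # Scale so the expected number of active weights matches the global budget.
    eps = global_density * n_params.sum() / (scores * n_params).sum()
    return np.clip(eps * scores, 0.0, 1.0)
```

At high sparsity (e.g. 95%), the ERK score keeps small early layers comparatively dense and concentrates the pruning in wide layers, which is one plausible reason the two schemes behave differently across sparsity levels.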
kkz4BbquBy
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
[ "Murat Onur Yildirim", "Elif Ceren Gok Yildirim", "Ghada Sokar", "Decebal Constantin Mocanu", "Joaquin Vanschoren" ]
Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end; regularization, replay, architecture, and parameter isolation approaches were introduced to the literature. Parameter isolation using a sparse network which enables to allocate distinct parts of the neural network to different tasks and also allows to share of parameters between tasks if they are similar. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study investigating the effect of different DST components under the CL paradigm to fill a critical research gap and shed light on the optimal configuration of DST for CL if it exists. Therefore, we perform a comprehensive study in which we investigate various DST components to find the best topology per task on well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup since our primary focus is to evaluate the performance of various DST criteria, rather than the process of mask selection. We found that, at a low sparsity level, Erdos-Renyi Kernel (ERK) initialization utilizes the backbone more efficiently and allows to effectively learn increments of tasks. At a high sparsity level, however, uniform initialization demonstrates more reliable and robust performance. In terms of growth strategy; performance is dependent on the defined initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising way for better continual learners.
[ "continual learning", "sparse neural networks", "dynamic sparse training" ]
https://openreview.net/pdf?id=kkz4BbquBy
kBPQvHcbXx
official_review
1,696,700,683,599
kkz4BbquBy
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission27/Reviewer_dCsQ" ]
title: Review review: Overview This paper investigates the effect of different DST components under the CL paradigm through a comprehensive study that examines various DST components to find the best topology per task on the well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup. Strengths 1. It is a very interesting result that ERK and uniform sparsity show different performance at different sparsity levels under the CL setting. The results have practical significance for further research on CL. 2. Adaptivity within DST is also evaluated under the CL setting, and this design is very important in the practical use of DST, not just in CL. Weaknesses 1. The observations are sufficient; however, there is no detailed discussion of why DST behaves so differently in CL depending on its sparsity settings. 2. The main body of the paper should include more discussion of the rationale behind DST's different settings. rating: 5: Marginally below acceptance threshold confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
kkz4BbquBy
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
[ "Murat Onur Yildirim", "Elif Ceren Gok Yildirim", "Ghada Sokar", "Decebal Constantin Mocanu", "Joaquin Vanschoren" ]
Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end; regularization, replay, architecture, and parameter isolation approaches were introduced to the literature. Parameter isolation using a sparse network which enables to allocate distinct parts of the neural network to different tasks and also allows to share of parameters between tasks if they are similar. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study investigating the effect of different DST components under the CL paradigm to fill a critical research gap and shed light on the optimal configuration of DST for CL if it exists. Therefore, we perform a comprehensive study in which we investigate various DST components to find the best topology per task on well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup since our primary focus is to evaluate the performance of various DST criteria, rather than the process of mask selection. We found that, at a low sparsity level, Erdos-Renyi Kernel (ERK) initialization utilizes the backbone more efficiently and allows to effectively learn increments of tasks. At a high sparsity level, however, uniform initialization demonstrates more reliable and robust performance. In terms of growth strategy; performance is dependent on the defined initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising way for better continual learners.
[ "continual learning", "sparse neural networks", "dynamic sparse training" ]
https://openreview.net/pdf?id=kkz4BbquBy
MmVLmOFJ0J
meta_review
1,699,931,495,166
kkz4BbquBy
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission27/Area_Chair_CW2g" ]
metareview: This submission provides a valuable empirical analysis of Dynamic Sparse Training (DST) in Continual Learning (CL), contributing notably to the field. The study's methodical examination of DST components using CIFAR100 and miniImageNet benchmarks is commendable. The findings regarding the performance of Erdos-Renyi Kernel (ERK) and uniform initialization across different sparsity levels are insightful. While further exploration of various neural architectures and a deeper theoretical discussion could enhance the paper, its solid methodology and relevance stand out. Overall, the paper is a worthy addition to CPAL 2024. recommendation: Accept (Poster) confidence: 4: The area chair is confident but not absolutely certain
kkz4BbquBy
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
[ "Murat Onur Yildirim", "Elif Ceren Gok Yildirim", "Ghada Sokar", "Decebal Constantin Mocanu", "Joaquin Vanschoren" ]
Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end; regularization, replay, architecture, and parameter isolation approaches were introduced to the literature. Parameter isolation using a sparse network which enables to allocate distinct parts of the neural network to different tasks and also allows to share of parameters between tasks if they are similar. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study investigating the effect of different DST components under the CL paradigm to fill a critical research gap and shed light on the optimal configuration of DST for CL if it exists. Therefore, we perform a comprehensive study in which we investigate various DST components to find the best topology per task on well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup since our primary focus is to evaluate the performance of various DST criteria, rather than the process of mask selection. We found that, at a low sparsity level, Erdos-Renyi Kernel (ERK) initialization utilizes the backbone more efficiently and allows to effectively learn increments of tasks. At a high sparsity level, however, uniform initialization demonstrates more reliable and robust performance. In terms of growth strategy; performance is dependent on the defined initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising way for better continual learners.
[ "continual learning", "sparse neural networks", "dynamic sparse training" ]
https://openreview.net/pdf?id=kkz4BbquBy
DbqDMSKsaf
official_review
1,696,653,138,442
kkz4BbquBy
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission27/Reviewer_v5pG" ]
title: Please see the review review: This paper proposes a comprehensive analysis, aiming to evaluate different components of Dynamic Sparse Training (DST) in Continual Learning (CL). Extensive experiments are conducted to support the findings. Pros: 1. The work of evaluating different DST settings in CL scenarios can serve as a guide for later research. 2. The experiments on the different initializations of DST and the prune-and-grow approach are comprehensive. Cons & Questions: 1. Do the findings proposed in this paper help increase the performance of previous work like SparCL, NISPA, and WSN? It would be great to have some comparisons with other methods. 2. The findings are based on the ResNet structure. Can similar results be found in VGG and MobileNet? 3. It is interesting that adaptively choosing DST methods can improve performance, as shown in Section 5.3. Does this finding hold at all sparsity levels (low and high sparsity)? 4. Most of the findings apply to DST in general (e.g. ERK distribution can reach higher accuracy, and gradient- and momentum-based growth provide better performance), and I would like to know what is uniquely specific to CL. 5. As ITOP systematically evaluates the impact of topology update frequency, I am curious about the effect of update frequency on CL. rating: 6: Marginally above acceptance threshold confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
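For readers unfamiliar with the prune-and-grow approach this review refers to, the sketch below shows one RigL-style topology update for a single layer: the lowest-magnitude active weights are dropped, and the same number of connections are regrown at the inactive positions with the largest gradient magnitude, so the layer's sparsity stays constant. This is an illustrative approximation rather than the implementation evaluated in the paper, and the function name and arguments are hypothetical.

```python
import torch

def prune_and_grow(weight, grad, mask, update_fraction=0.3):
    """One magnitude-prune / gradient-grow update for a single layer.

    mask: float tensor of 0s and 1s with the same shape as `weight`.
    """
    n_active = int(mask.sum().item())
    n_update = int(update_fraction * n_active)
    if n_update == 0:
        return mask

    # Prune: active weights with the smallest magnitude.
    active_scores = torch.where(mask.bool(), weight.abs(),
                                torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices

    # Grow: inactive positions with the largest gradient magnitude.
    inactive_scores = torch.where(mask.bool(),
                                  torch.full_like(grad, -float("inf")), grad.abs())
    grow_idx = torch.topk(inactive_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    return new_mask.reshape(mask.shape)
```

Swapping the growth score (random values, gradient magnitude, or a momentum estimate) is what distinguishes the growth strategies compared in the paper, while the drop criterion is usually kept as weight magnitude.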
kkz4BbquBy
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
[ "Murat Onur Yildirim", "Elif Ceren Gok Yildirim", "Ghada Sokar", "Decebal Constantin Mocanu", "Joaquin Vanschoren" ]
Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end; regularization, replay, architecture, and parameter isolation approaches were introduced to the literature. Parameter isolation using a sparse network which enables to allocate distinct parts of the neural network to different tasks and also allows to share of parameters between tasks if they are similar. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study investigating the effect of different DST components under the CL paradigm to fill a critical research gap and shed light on the optimal configuration of DST for CL if it exists. Therefore, we perform a comprehensive study in which we investigate various DST components to find the best topology per task on well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup since our primary focus is to evaluate the performance of various DST criteria, rather than the process of mask selection. We found that, at a low sparsity level, Erdos-Renyi Kernel (ERK) initialization utilizes the backbone more efficiently and allows to effectively learn increments of tasks. At a high sparsity level, however, uniform initialization demonstrates more reliable and robust performance. In terms of growth strategy; performance is dependent on the defined initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising way for better continual learners.
[ "continual learning", "sparse neural networks", "dynamic sparse training" ]
https://openreview.net/pdf?id=kkz4BbquBy
5WfjnhWIMw
decision
1,700,497,881,769
kkz4BbquBy
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Program_Chairs" ]
decision: Accept (Oral) comment: This paper investigates the effect of different dynamic sparse training components under the continual learning paradigm. The reviewers and AC agree that overall, the analysis in the paper is a valuable contribution to this research area. The authors perform a comprehensive study in which they investigate various dynamic sparse training components to find the best topology per task in a task-incremental continual learning setup. The reviewers generally found the experiments in the paper convincing in support of the claimed findings. However, some reviewers raised concerns about the applicability of the proposed approach to other neural networks and the performance of the proposed approach at different sparsity levels. The authors should consider addressing these concerns in the camera ready version of the paper. The action PC chair for this paper is Gintare Karolina Dziugaite, who made the decision after carefully reading the paper as well as the comments by all reviewers and AC. The decision is agreed by all PC chairs. title: Paper Decision
jRVS6C3Wia
Decoding Micromotion in Low-dimensional Latent Spaces from StyleGAN
[ "Qiucheng Wu", "Yifan Jiang", "Junru Wu", "Kai Wang", "Eric Zhang", "Humphrey Shi", "Zhangyang Wang", "Shiyu Chang" ]
The disentanglement of StyleGAN latent space has paved the way for realistic and controllable image editing, but does StyleGAN know anything about temporal motion, as it was only trained on static images? To study the motion features in the latent space of StyleGAN, in this paper, we hypothesize and demonstrate that a series of meaningful, natural, and versatile small, local movements (referred to as "micromotion", such as expression, head movement, and aging effect) can be represented in low-rank spaces extracted from the latent space of a conventionally pre-trained StyleGAN-v2 model for face generation, with the guidance of proper "anchors" in the form of either short text or video clips. Starting from one target face image, with the editing direction decoded from the low-rank space, its micromotion features can be represented as simple as an affine transformation over its latent feature. Perhaps more surprisingly, such micromotion subspace, even learned from just single target face, can be painlessly transferred to other unseen face images, even those from vastly different domains (such as oil painting, cartoon, and sculpture faces). It demonstrates that the local feature geometry corresponding to one type of micromotion is aligned across different face subjects, and hence that StyleGAN-v2 is indeed ``secretly'' aware of the subject-disentangled feature variations caused by that micromotion. As an application, we present various successful examples of applying our low-dimensional micromotion subspace technique to directly and effortlessly manipulate faces. Compared with previous editing methods, our framework shows high robustness, low computational overhead, and impressive domain transferability. Our code is publicly available at https://github.com/wuqiuche/micromotion-StyleGAN.
[ "generative model", "low-rank decomposition" ]
https://openreview.net/pdf?id=jRVS6C3Wia
thL6qIBLU3
official_review
1,696,719,440,018
jRVS6C3Wia
[ "everyone" ]
[ "CPAL.cc/2024/Conference/Submission31/Reviewer_i4G4" ]
title: Valuable problem, good idea with interesting results review: This paper studies the latent space of StyleGAN, a family of GAN-based generative models. Attribute editing by modifying the latent code of StyleGAN is an important task, yet existing works can hardly edit an attribute without other undesired changes. This paper tries to understand whether this is because of the intrinsic limits of the entangled latent space, or just because the existing works are not good enough at disentangling. They hypothesize that a low-rank feature space can be extracted from the StyleGAN high-dimensional feature space, where universal editing directions can be reconstructed from *micromotions*. Empirical results verify this hypothesis and show that the low-rank subspace can be used for high-quality editing. Pros: 1. GAN-based generative models have shown impressive results in generation and editing, yet the understanding of them is relatively under-explored. This paper adds new insights in this direction. 2. The proposed low-rank subspace analysis is technically sound and interesting to me. 3. Empirically, the results support the proposed hypothesis and show that using the low-rank subspace for high-fidelity editing yields better results than other approaches. 4. The paper is well written, with clear and neat logic. Cons: 1. One concern is that only StyleGAN-v2 is evaluated, while the paper actually attempts to answer a question for the general StyleGAN family. I wonder if the conclusions found on StyleGAN-v2 can generalize to other StyleGAN models. rating: 7: Good paper, accept confidence: 3: The reviewer is fairly confident that the evaluation is correct
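To make the low-rank editing idea summarized in this review concrete, the sketch below extracts a dominant direction of variation from a sequence of latent codes (for example, W/W+ codes inverted from the frames of a short anchor clip showing one micromotion) and applies it as a simple affine edit. The SVD step is a generic stand-in for the paper's low-rank decomposition, and the function names and arguments are illustrative assumptions, not the authors' API.

```python
import numpy as np

def micromotion_direction(latents, n_components=1):
    """Top direction(s) of variation in a (T, d) array of latent codes."""
    centered = latents - latents.mean(axis=0, keepdims=True)
    # Right-singular vectors of the centered sequence span its low-rank subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]            # shape (n_components, d)

def apply_edit(latent, direction, alpha):
    """Affine edit of one latent code along the extracted direction."""
    return latent + alpha * direction[0]
```

Because the edit is just a shift of the latent code, the same direction can in principle be added to the latent of an unseen face, which is the transferability property the review highlights as a strength of the paper.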