forum_id stringlengths 8–20 | forum_title stringlengths 4–171 | forum_authors sequencelengths 0–25 | forum_abstract stringlengths 4–4.27k | forum_keywords sequencelengths 1–10 | forum_pdf_url stringlengths 38–50 | note_id stringlengths 8–13 | note_type stringclasses 6 values | note_created int64 1,360B–1,736B | note_replyto stringlengths 8–20 | note_readers sequencelengths 1–5 | note_signatures sequencelengths 1–1 | note_text stringlengths 10–16.6k |
---|---|---|---|---|---|---|---|---|---|---|---|---|
FKeY9J4Pnf | MCFEND: A Multi-source Benchmark Dataset for Chinese Fake News Detection | [
"Yupeng Li",
"Haorui He",
"Jin Bai",
"Dacheng Wen"
] | The prevalence of fake news across various online sources can have a significant influence on the public.
Existing Chinese fake news detection datasets are limited to news sourced solely from Weibo.
However, fake news that originates from multiple sources exhibits diversity across various aspects, including its content and social context. Methods trained on data from such a single news source are hardly applicable to real-world scenarios.
Our pilot experiment demonstrates that the macro F1 score of the state-of-the-art method trained on Weibo-21, the largest Chinese fake news detection dataset to date, drops from 0.98 to 0.47 when the test data is changed from Weibo-21 to multi-source data, failing to identify 35.34% of the multi-source fake news.
To address this limitation, we construct the first multi-source benchmark dataset for Chinese fake news detection, termed MCFEND, which contains news collected from diverse sources, such as social platforms, messaging apps, and traditional online news outlets, and fact-checked
by 14 authoritative fact-checking agencies.
In addition, various established Chinese fake news detection methods, including the state-of-the-art approaches, are thoroughly evaluated on our proposed dataset in both the cross-source and multi-source scenarios. MCFEND contributes to the field of fake news detection by providing a benchmark to evaluate and advance Chinese fake news detection approaches in real-world scenarios. | [
"Multi-source Benchmark Dataset",
"Chinese Fake News Detection",
"Cross-source Evaluation",
"Multi-source Evaluation"
] | https://openreview.net/pdf?id=FKeY9J4Pnf | m3HUPSHNIH | official_review | 1,701,359,627,670 | FKeY9J4Pnf | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission422/Reviewer_NBj5"
] | review: **Summary**
The authors introduce a Multi-source benchmark dataset for Chinese FakE News Detection (MCFEND). While Chinese fake news detection has been relatively well-studied, most of the prevailing datasets originate solely from Weibo. Hence, the introduction of a multi-source benchmark dataset offers a diverse perspective, enhancing the assessment of trained fake news detection models. This dataset comprises 23,974 pieces of Chinese news, constructed using different methods from 14 fact-checking agencies.
**Strengths**
1. **Multi-Source Dataset**: The MCFEND dataset addresses an existing research gap where Chinese fake news models are exclusively trained and evaluated using data collected solely from Weibo. These single-source datasets pose a high risk to the efficacy and generalizability of the models.
2. **Innovative Dataset Construction Approach**. The authors employed an innovative technique to obtain the second group within the MCFEND dataset through cross-lingual identical news retrieval. This method proves effective in gathering credible misinformation instances, thereby expanding the coverage of fake news beyond what may be encompassed on Chinese platforms (an illustrative sketch of such a retrieval step is given after this list).
3. **Comprehensive Experiments**. The authors systematically conducted experiments on an extensive array of baseline models, encompassing content-based methods, propagation-based approaches, and others. The detailed experimental settings enhance comprehension of the various models using Chinese fake news datasets. Additionally, the authors conducted a thorough evaluation on the distinct groups within the MCFEND dataset, further underscoring the significance of a multi-domain dataset.
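Regarding point 2, the cross-lingual identical-news retrieval idea can be pictured with a small sketch: embed claims from different languages with a multilingual sentence encoder and match them by cosine similarity. This is only an illustration of the general technique; the model name, cutoff, and example texts are assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch of cross-lingual identical-news retrieval (not the authors' pipeline).
# Assumes the sentence-transformers package; the model name and cutoff are example choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

english_claims = ["The new vaccine alters human DNA."]            # fact-checked non-Chinese claims
chinese_posts = ["新疫苗会改变人类DNA。", "今天天气很好。"]          # candidate Chinese news items

emb_en = model.encode(english_claims, convert_to_tensor=True, normalize_embeddings=True)
emb_zh = model.encode(chinese_posts, convert_to_tensor=True, normalize_embeddings=True)

# Cosine similarity between every (claim, post) pair; keep pairs above an illustrative cutoff.
scores = util.cos_sim(emb_en, emb_zh)
for i, claim in enumerate(english_claims):
    for j, post in enumerate(chinese_posts):
        if scores[i, j] > 0.7:
            print(f"Matched: '{claim}' <-> '{post}' (score={scores[i, j].item():.2f})")
```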
**Weaknesses**
1. **Writing contradictions.** In the introduction, the authors stated that the construction of the first group involves fabricated news data gathered from nine fact-checking agencies. However, a contradiction arises in the MCFEND dataset construction section, where the authors specify that the first group comprises five Chinese fact-checking agencies identified as active by Duke Reporters, in addition to nine other Chinese fact-checking agencies.
2. **Selection of posts from search queries**. While the authors mentioned the process of retrieving posts from Weibo (i.e., via headlines or extracted keywords) in Section 3.2.4 Social Context Collection, it is not clear how the authors determine the relevant posts and the point of stoppage.
3. **Unusual Experiment Results**. In Table 4, although the authors emphasized that the BERT model achieved the highest Macro F1 score for Group 1, it is observed that CLIP actually has a higher Macro F1 score (51.83 versus 49.70). Furthermore, there is confusion regarding the authors' computation of the overall Macro F1 score. While the table indicates that the CLIP model consistently outperforms the RoBERTa model, the CLIP model surprisingly has a lower overall Macro F1 score. Similar inconsistencies can be noted in Table 6 (CAFE versus BERT-EMO).
I have read the authors' rebuttal, and their comments have clarified many of the weaknesses surrounding the experiments.
questions: 1. How many fact-checking agencies are used in constructing the first group of the MCFEND dataset? (related to Weakness #1)
2. What are the selection criteria for the posts obtained from the search queries? What is the point of stoppage? (related to Weakness #2)
3. How are the (overall) Macro F1 scores computed? If there is a misunderstanding, can you kindly guide me through the evaluation steps? (A small illustration of two common conventions is sketched after this list.)
4. What is the length breakdown of the various posts? This is related to the experimental setup where the authors standardize the post to a maximum length of 256 tokens.
5. Is the original post content preserved? Since language models (i.e., BERT and RoBERTa) are pre-trained on a large corpus of English data in a self-supervised fashion, the text post-processing might hurt their performance instead.
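Regarding question 3 above, a common source of confusion is that the macro F1 computed over the pooled test set generally differs from the average of per-group macro F1 scores, which could explain an apparent inconsistency between per-group and overall numbers. A minimal sketch with made-up labels, purely for illustration:

```python
# Illustrative only: overall (pooled) macro F1 need not equal the mean of per-group macro F1.
from sklearn.metrics import f1_score

# Made-up gold labels and predictions for two groups of test examples.
groups = {
    "group1": ([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]),
    "group2": ([0, 0, 1, 0],    [1, 0, 1, 0]),
}

per_group = {name: f1_score(y, p, average="macro") for name, (y, p) in groups.items()}

y_all = sum((y for y, _ in groups.values()), [])
p_all = sum((p for _, p in groups.values()), [])
pooled_macro_f1 = f1_score(y_all, p_all, average="macro")    # macro F1 over the pooled test set
mean_of_groups = sum(per_group.values()) / len(per_group)    # mean of per-group macro F1

print(per_group, pooled_macro_f1, mean_of_groups)            # the last two values generally differ
```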
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
FKeY9J4Pnf | MCFEND: A Multi-source Benchmark Dataset for Chinese Fake News Detection | [
"Yupeng Li",
"Haorui He",
"Jin Bai",
"Dacheng Wen"
] | The prevalence of fake news across various online sources can have a significant influence on the public.
Existing Chinese fake news detection datasets are limited to news sourced solely from Weibo.
However, fake news that originates from multiple sources exhibits diversity across various aspects, including its content and social context. Methods trained on data from such a single news source are hardly applicable to real-world scenarios.
Our pilot experiment demonstrates that the macro F1 score of the state-of-the-art method trained on Weibo-21, the largest Chinese fake news detection dataset to date, drops from 0.98 to 0.47 when the test data is changed from Weibo-21 to multi-source data, failing to identify 35.34% of the multi-source fake news.
To address this limitation, we construct the first multi-source benchmark dataset for Chinese fake news detection, termed MCFEND, which contains news collected from diverse sources, such as social platforms, messaging apps, and traditional online news outlets, and fact-checked
by 14 authoritative fact-checking agencies.
In addition, various established Chinese fake news detection methods, including the state-of-the-art approaches, are thoroughly evaluated on our proposed dataset in both the cross-source and multi-source scenarios. MCFEND contributes to the field of fake news detection by providing a benchmark to evaluate and advance Chinese fake news detection approaches in real-world scenarios. | [
"Multi-source Benchmark Dataset",
"Chinese Fake News Detection",
"Cross-source Evaluation",
"Multi-source Evaluation"
] | https://openreview.net/pdf?id=FKeY9J4Pnf | ks6Z9kxXjk | decision | 1,705,909,251,796 | FKeY9J4Pnf | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This is the meta-review by the SPC responsible for your paper, and takes into account the opinions expressed by the referees, the subsequent decision thread, and my own opinions about your work.
- This paper presents a large multi-source Chinese fake news detection dataset called MCFEND. Unlike existing datasets collected solely from Weibo, the authors employed an effective and innovative technique to collect fake news from multiple sources.
- Overall, reviewers recognized the value of this work in advancing Chinese fake news detection, and the experiments are comprehensive and robust.
- However, reviewers expressed some concerns, which the authors somewhat addressed during the rebuttal stage and committed to resolving in the final version. |
FKeY9J4Pnf | MCFEND: A Multi-source Benchmark Dataset for Chinese Fake News Detection | [
"Yupeng Li",
"Haorui He",
"Jin Bai",
"Dacheng Wen"
] | The prevalence of fake news across various online sources can have a significant influence on the public.
Existing Chinese fake news detection datasets are limited to news sourced solely from Weibo.
However, fake news that originates from multiple sources exhibits diversity across various aspects, including its content and social context. Methods trained on data from such a single news source are hardly applicable to real-world scenarios.
Our pilot experiment demonstrates that the macro F1 score of the state-of-the-art method trained on Weibo-21, the largest Chinese fake news detection dataset to date, drops from 0.98 to 0.47 when the test data is changed from Weibo-21 to multi-source data, failing to identify 35.34% of the multi-source fake news.
To address this limitation, we construct the first multi-source benchmark dataset for Chinese fake news detection, termed MCFEND, which contains news collected from diverse sources, such as social platforms, messaging apps, and traditional online news outlets, and fact-checked
by 14 authoritative fact-checking agencies.
In addition, various established Chinese fake news detection methods, including the state-of-the-art approaches, are thoroughly evaluated on our proposed dataset in both the cross-source and multi-source scenarios. MCFEND contributes to the field of fake news detection by providing a benchmark to evaluate and advance Chinese fake news detection approaches in real-world scenarios. | [
"Multi-source Benchmark Dataset",
"Chinese Fake News Detection",
"Cross-source Evaluation",
"Multi-source Evaluation"
] | https://openreview.net/pdf?id=FKeY9J4Pnf | Uythh9bk8H | official_review | 1,701,252,036,131 | FKeY9J4Pnf | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission422/Reviewer_DFyZ"
] | review: Pros:
The paper is well-written and easy to understand.
I think fake news detection across multiple platforms is an important issue, so the dataset provides important support for this line of work.
Cons:
I think the concept of “source” is not clearly defined: does it refer to the sources the news pieces come from, or to the fact-checking agencies?
There are some typos in the paper.
questions: 1. In Table 3, you mention that the data of Group 3 ranges from Dec. 2014 to Mar. 2021, but I found that Weibo21 ranges from Dec. 2014 to Mar. 2021. Can you confirm this again?
2. How do you think about the difference between multi-source fake news detection and multi-domain fake news detection? Do methods for multi-domain fake news detection work for multi-source scenarios?
3. Again regarding the concept of “source”: I think this concept refers to different platforms, such as Messaging App, News Outlets, etc. (Figure 2). But it seems like you refer to the three groups as different sources in the experiments; I think it is necessary to clarify this more explicitly.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
FKeY9J4Pnf | MCFEND: A Multi-source Benchmark Dataset for Chinese Fake News Detection | [
"Yupeng Li",
"Haorui He",
"Jin Bai",
"Dacheng Wen"
] | The prevalence of fake news across various online sources can have a significant influence on the public.
Existing Chinese fake news detection datasets are limited to news sourced solely from Weibo.
However, fake news that originates from multiple sources exhibits diversity across various aspects, including its content and social context. Methods trained on data from such a single news source are hardly applicable to real-world scenarios.
Our pilot experiment demonstrates that the macro F1 score of the state-of-the-art method trained on Weibo-21, the largest Chinese fake news detection dataset to date, drops from 0.98 to 0.47 when the test data is changed from Weibo-21 to multi-source data, failing to identify 35.34% of the multi-source fake news.
To address this limitation, we construct the first multi-source benchmark dataset for Chinese fake news detection, termed MCFEND, which contains news collected from diverse sources, such as social platforms, messaging apps, and traditional online news outlets, and fact-checked
by 14 authoritative fact-checking agencies.
In addition, various established Chinese fake news detection methods, including the state-of-the-art approaches, are thoroughly evaluated on our proposed dataset in both the cross-source and multi-source scenarios. MCFEND contributes to the field of fake news detection by providing a benchmark to evaluate and advance Chinese fake news detection approaches in real-world scenarios. | [
"Multi-source Benchmark Dataset",
"Chinese Fake News Detection",
"Cross-source Evaluation",
"Multi-source Evaluation"
] | https://openreview.net/pdf?id=FKeY9J4Pnf | OYgIbyVm94 | official_review | 1,701,424,829,236 | FKeY9J4Pnf | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission422/Reviewer_CThV"
] | review: This paper presents a new dataset for fake news detection in Chinese. The dataset is compiled from many different sources and contains 23k examples of multimedia content that was fact-checked by 14 fact-checking agencies. The authors then evaluate 8 fake news detection approaches on this dataset and show that models trained on commonly used Weibo-only content do not generalize to other sources and incorporating training data from different sources improves the accuracy of these models.
This is an interesting paper that makes a useful contribution to the detection of fake news in Chinese by creating a large, multi-source, multi-modal dataset.
The collection methodology is well-documented and the dataset greatly improves the ability of future researchers to study this phenomenon across different platforms.
The baselines chosen are fairly robust and the difference seen in their original performance at the task vs. that after being trained on this new dataset shows that it greatly improves their ability to detect fake news in the wild.
However, some things in the approach are unclear. In the fake news detection pipeline, the authors standardize the text to 256 tokens, which is much shorter than the average news article length and would remove much of the context needed to judge whether it is fake news or not.
It is also not clear whether any deduplication is performed since it could be that multiple instances of the same fake news story have been obtained from different sources, which would inflate the accuracy of the models if they have already seen similar samples in the training set.
It would be helpful to see some descriptive analysis of the differences between fake news obtained from different sources, in addition to the embedding projections presented. Factors like average length, topic distribution, linguistic style, and the number of images included would be helpful in understanding the nature of fake news across the different platforms.
The authors may wish to try using UMAP instead of t-SNE for dimensionality reduction for the plots in Figures 3 and 4, as it can preserve global structure better. It would also be good to see some annotation of different topics within the plot.
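A minimal sketch of the suggested UMAP projection, assuming the news embeddings are available as a NumPy array; the package is umap-learn, and all parameter values and placeholder data are illustrative:

```python
# Illustrative UMAP projection of news embeddings, colored by source (assumes the umap-learn package).
import numpy as np
import umap
import matplotlib.pyplot as plt

embeddings = np.random.rand(500, 768)          # placeholder for real news embeddings
sources = np.random.randint(0, 3, size=500)    # placeholder source labels for coloring

reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1, random_state=42)
coords = reducer.fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], c=sources, s=5, cmap="tab10")
plt.title("News embeddings projected with UMAP (illustrative)")
plt.show()
```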
Overall, this is a well-written paper and the figures and tables are of good quality as well.
---
I have read and replied to the authors' response and made the necessary changes to my review.
questions: What was the accuracy of the OCR used to extract fact checking labels?
How was it verified that the Chinese articles found via translation do indeed contain the same misleading statements as the original, and not just neutral coverage of an English fake news article?
Sentence-BERT has a limited context length; how were longer pieces of text truncated or chunked in order to generate their vector representations in Section 3.3?
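One common workaround for the limited context length, sketched below with illustrative parameters and not necessarily what the authors did, is to split a long article into chunks that fit the encoder, embed each chunk, and mean-pool the chunk embeddings:

```python
# Illustrative chunk-and-pool strategy for embedding long texts with Sentence-BERT.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model with a short effective context window

def embed_long_text(text, chunk_words=180):
    # Word-level chunking is a rough proxy for token-level limits.
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)] or [""]
    chunk_embeddings = model.encode(chunks)      # one vector per chunk
    return np.mean(chunk_embeddings, axis=0)     # mean-pool into a single document vector

doc_vector = embed_long_text("A very long news article " * 500)
print(doc_vector.shape)
```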
Typos and presentation improvements:
In Section 1, the footnote number for footnote 9 in the text bleeds outside the column margins.
In Section 3.1, the footnote with the fact checking website URLs could be provided as a list or table in the appendix for better readability.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Et9rHdWGAZ | ModelGo: A Tool for Machine Learning License Analysis | [
"Moming Duan",
"Qinbin Li",
"Bingsheng He"
] | Productionizing machine learning projects is inherently complex, involving a multitude of interconnected components that are assembled like LEGO blocks and evolve throughout the development lifecycle.
These components encompass software, databases, and models, each subject to various licenses governing their reuse and redistribution.
However, existing license analysis approaches for Open Source Software (OSS) are not well-suited for this context.
For instance, some projects are licensed without explicitly granting sublicensing rights, or the granted rights can be revoked, potentially exposing their derivatives to legal risks.
Indeed, the analysis of licenses in machine learning projects grows significantly more intricate as it involves interactions among diverse types of licenses and licensed materials.
To the best of our knowledge, no prior research has delved into the exploration of license conflicts within this domain.
In this paper, we introduce ModelGo, a practical tool for auditing potential legal risks in machine learning projects to enhance compliance and fairness.
With ModelGo, we present license assessment reports based on 5 use cases with diverse model-reusing scenarios, rendered by real-world machine learning components.
Finally, we summarize the reasons behind license conflicts and provide guidelines for minimizing them. | [
"License analysis",
"AI licensing",
"model mining"
] | https://openreview.net/pdf?id=Et9rHdWGAZ | to39uy4KAt | official_review | 1,700,733,337,748 | Et9rHdWGAZ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1243/Reviewer_yuGm"
] | review: The paper proposes ModelGo, a tool for auditing potential legal risks in machine learning (ML) projects. The authors argue that traditional OSS license analysis cannot be directly extended to ML projects, as these further include datasets and models which may be under different types of licenses. The authors also propose a taxonomy bridging AI activities and license language keywords. Utilizing their tool, the authors analyze five diverse ML projects and generate assessment reports. Additionally, the authors provide guidelines for minimizing license violation links.
The paper is well-written and easy to follow. I also feel the topic is timely and relevant given the current growth and proliferation of AI systems.
Often, ML projects developed in academia are not properly organized in terms of licensing. I would like to know how difficult it is to deploy the developed ModelGo tool on an ML project. I would imagine that it would require manual effort to incorporate the missing information.
Also, how do the authors ensure that the proposed taxonomy is comprehensive and encompasses all AI activities? More importantly, when a new activity is proposed, how easy or difficult is it to extend the tool to cover it?
I thank the authors for their response. I will stick with my original ratings.
questions: Please check the Review section.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Et9rHdWGAZ | ModelGo: A Tool for Machine Learning License Analysis | [
"Moming Duan",
"Qinbin Li",
"Bingsheng He"
] | Productionizing machine learning projects is inherently complex, involving a multitude of interconnected components that are assembled like LEGO blocks and evolve throughout the development lifecycle.
These components encompass software, databases, and models, each subject to various licenses governing their reuse and redistribution.
However, existing license analysis approaches for Open Source Software (OSS) are not well-suited for this context.
For instance, some projects are licensed without explicitly granting sublicensing rights, or the granted rights can be revoked, potentially exposing their derivatives to legal risks.
Indeed, the analysis of licenses in machine learning projects grows significantly more intricate as it involves interactions among diverse types of licenses and licensed materials.
To the best of our knowledge, no prior research has delved into the exploration of license conflicts within this domain.
In this paper, we introduce ModelGo, a practical tool for auditing potential legal risks in machine learning projects to enhance compliance and fairness.
With ModelGo, we present license assessment reports based on 5 use cases with diverse model-reusing scenarios, rendered by real-world machine learning components.
Finally, we summarize the reasons behind license conflicts and provide guidelines for minimizing them. | [
"License analysis",
"AI licensing",
"model mining"
] | https://openreview.net/pdf?id=Et9rHdWGAZ | kU4JgFJR2f | decision | 1,705,909,220,293 | Et9rHdWGAZ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: Our decision is to accept. Please see the AC's review below and improve the work considering that and the reviewers' feedback for the camera-ready submission.
"Reviewers unanimously found this to be a high quality paper tackling an important and timely problem. It would make a great addition to TheWebConf program." |
Et9rHdWGAZ | ModelGo: A Tool for Machine Learning License Analysis | [
"Moming Duan",
"Qinbin Li",
"Bingsheng He"
] | Productionizing machine learning projects is inherently complex, involving a multitude of interconnected components that are assembled like LEGO blocks and evolve throughout the development lifecycle.
These components encompass software, databases, and models, each subject to various licenses governing their reuse and redistribution.
However, existing license analysis approaches for Open Source Software (OSS) are not well-suited for this context.
For instance, some projects are licensed without explicitly granting sublicensing rights, or the granted rights can be revoked, potentially exposing their derivatives to legal risks.
Indeed, the analysis of licenses in machine learning projects grows significantly more intricate as it involves interactions among diverse types of licenses and licensed materials.
To the best of our knowledge, no prior research has delved into the exploration of license conflicts within this domain.
In this paper, we introduce ModelGo, a practical tool for auditing potential legal risks in machine learning projects to enhance compliance and fairness.
With ModelGo, we present license assessment reports based on 5 use cases with diverse model-reusing scenarios, rendered by real-world machine learning components.
Finally, we summarize the reasons behind license conflicts and provide guidelines for minimizing them. | [
"License analysis",
"AI licensing",
"model mining"
] | https://openreview.net/pdf?id=Et9rHdWGAZ | cIurHfBUqm | official_review | 1,700,825,234,641 | Et9rHdWGAZ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1243/Reviewer_QMu7"
] | review: Summary: The paper studies the problem of understanding ML licensing in many different settings, such as when using OSS, pre-trained models, or various datasets. The authors explain the problem of verifying software licenses, especially when a model might have one license but the data it is trained on might have a different license. They describe the ModelGo system, which checks for licensing conflicts, and evaluate it on a series of case studies. The paper, as it's written, takes the perspective of a software developer who wants to know whether a set of tools with different licenses can be used together risk-free. On a technical level, this seems to be evaluating a network of dependencies for conflicts between licenses.
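To make the "network of dependencies" framing concrete, here is a toy sketch of a pairwise license-compatibility check over a small component graph. The component names and compatibility table are invented for illustration (and are not legal advice); a real tool such as ModelGo encodes far richer license semantics.

```python
# Toy illustration of checking a dependency graph for pairwise license conflicts.
# The compatibility rules below are invented examples, not ModelGo's actual rules.
components = {
    "my_model":           {"license": "MIT",          "deps": ["pretrained_encoder", "train_data"]},
    "pretrained_encoder": {"license": "Apache-2.0",   "deps": []},
    "train_data":         {"license": "CC-BY-NC-4.0", "deps": []},
}

# Pairs assumed combinable when redistributing a derivative (illustrative only).
compatible = {
    ("MIT", "MIT"), ("Apache-2.0", "Apache-2.0"),
    ("MIT", "Apache-2.0"), ("Apache-2.0", "MIT"),
}

def find_conflicts(name):
    """Walk the dependency graph and collect license pairs not marked as compatible."""
    own = components[name]["license"]
    conflicts = []
    for dep in components[name]["deps"]:
        dep_license = components[dep]["license"]
        if (own, dep_license) not in compatible:
            conflicts.append((name, own, dep, dep_license))
        conflicts.extend(find_conflicts(dep))
    return conflicts

for parent, parent_lic, dep, dep_lic in find_conflicts("my_model"):
    print(f"Potential conflict: {parent} ({parent_lic}) depends on {dep} ({dep_lic})")
```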
Strengths:
- This is a very important problem, especially with the influx of LLMs and the legal questions around their use of copyrighted data. I'm excited to see a paper working to tackle this issue.
- I appreciate that the paper takes a close look at different popular datasets and models used and provides concrete, urgent examples of this problem (e.g., the Getty Images case). These examples make it all the more clear that this is an important area of research.
- Overall I found the paper well-organized and clear to read.
Weaknesses:
- I think the paper could be improved by incorporating more information about what legal scholars think about this area. For example, I would be interested to know how legal scholars are taxonomizing different components of ML projects. Adding a bit more related work or context in this area might help. I think this is particularly true in light of developments such as the guarantee that ChatGPT will handle copyright issues for any of its clients.
questions: How do legal scholars taxonomize different components of ML projects?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Et9rHdWGAZ | ModelGo: A Tool for Machine Learning License Analysis | [
"Moming Duan",
"Qinbin Li",
"Bingsheng He"
] | Productionizing machine learning projects is inherently complex, involving a multitude of interconnected components that are assembled like LEGO blocks and evolve throughout the development lifecycle.
These components encompass software, databases, and models, each subject to various licenses governing their reuse and redistribution.
However, existing license analysis approaches for Open Source Software (OSS) are not well-suited for this context.
For instance, some projects are licensed without explicitly granting sublicensing rights, or the granted rights can be revoked, potentially exposing their derivatives to legal risks.
Indeed, the analysis of licenses in machine learning projects grows significantly more intricate as it involves interactions among diverse types of licenses and licensed materials.
To the best of our knowledge, no prior research has delved into the exploration of license conflicts within this domain.
In this paper, we introduce ModelGo, a practical tool for auditing potential legal risks in machine learning projects to enhance compliance and fairness.
With ModelGo, we present license assessment reports based on 5 use cases with diverse model-reusing scenarios, rendered by real-world machine learning components.
Finally, we summarize the reasons behind license conflicts and provide guidelines for minimizing them. | [
"License analysis",
"AI licensing",
"model mining"
] | https://openreview.net/pdf?id=Et9rHdWGAZ | agu5G73uUv | official_review | 1,700,808,620,593 | Et9rHdWGAZ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1243/Reviewer_B3Bx"
] | review: This paper provides a potentially helpful tool to identify licensing contradictions or errors in machine learning projects. I am not a licensing or legal expert, but I very much believe this is an important topic, and one that machine learning practitioners should take much more seriously. I appreciate the push toward developing tools to assist those building these projects in making informed choices about what they can and should not do or use, and the ethics and appropriateness of the license attached to the resulting ML product.
The main drawbacks of this paper for me are the immense amount of legal/licensing jargon to wade through. I don't think there's a way around this, and the authors do provide helpful summary blocks. But as someone not steeped in legal jargon, I struggled to read this paper.
This doesn't mean it won't still be helpful to those building machine learning projects, it just makes it a challenging read.
EDIT: I think the authors addressed my concern in the comments, and, though I'm (still) not an expert in licensing details, I think it's really important to make this discussion central to ML work.
questions: I am curious (as I am not a legal expert) about other legal considerations. For example, does transforming a text into vectors negate the original license? That is, when you scramble the text so much, does that mean something? (I'm really showing my legal naivete here, but I have some memory that that form of transformation means something legally.) That is, are there other legal/license considerations here beyond just the actual combinations of licenses themselves?
ethics_review_flag: No
ethics_review_description: No human subjects involved - no ethical issues
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 7
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
Et9rHdWGAZ | ModelGo: A Tool for Machine Learning License Analysis | [
"Moming Duan",
"Qinbin Li",
"Bingsheng He"
] | Productionizing machine learning projects is inherently complex, involving a multitude of interconnected components that are assembled like LEGO blocks and evolve throughout the development lifecycle.
These components encompass software, databases, and models, each subject to various licenses governing their reuse and redistribution.
However, existing license analysis approaches for Open Source Software (OSS) are not well-suited for this context.
For instance, some projects are licensed without explicitly granting sublicensing rights, or the granted rights can be revoked, potentially exposing their derivatives to legal risks.
Indeed, the analysis of licenses in machine learning projects grows significantly more intricate as it involves interactions among diverse types of licenses and licensed materials.
To the best of our knowledge, no prior research has delved into the exploration of license conflicts within this domain.
In this paper, we introduce ModelGo, a practical tool for auditing potential legal risks in machine learning projects to enhance compliance and fairness.
With ModelGo, we present license assessment reports based on 5 use cases with diverse model-reusing scenarios, rendered by real-world machine learning components.
Finally, we summarize the reasons behind license conflicts and provide guidelines for minimizing them. | [
"License analysis",
"AI licensing",
"model mining"
] | https://openreview.net/pdf?id=Et9rHdWGAZ | JX39l3kvyt | official_review | 1,700,646,442,355 | Et9rHdWGAZ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1243/Reviewer_wX85"
] | review: The paper introduces and discusses ModelGo, a tool designed to analyze potential license conflicts, improper license choices, use restrictions, and obligations in machine learning (ML) projects. It provides insights into the challenges of licensing in ML, such as the intricate interaction of various license types and the need for specialized tools to handle these complexities.
The paper is well-structured, clearly presenting the problem, the methodology, and the application of ModelGo. It successfully articulates the need for such a tool in the ML domain and presents case studies to demonstrate its utility.
The work addresses a relatively unexplored problem in ML license analysis. The authors claim this to be the first attempt to tackle license analysis challenges specifically in the ML context, highlighting its novelty. The significance of this work lies in its practical application to a growing field. As ML projects become more prevalent, the complexity of license management increases. ModelGo offers a solution to navigate this complexity, potentially impacting how ML projects are managed and mitigating legal risks.
Pros
1. Addresses a Critical Gap: The tool addresses the significant issue of license management in ML, which is increasingly important as the field grows
2. Practical Application: ModelGo is designed for practical application, as demonstrated through real-world case studies
3. Methodology: The methodology for license analysis, including the concept of 'activity proliferation', is innovative and tailored to the unique challenges in ML projects.
Cons
1. Complexity: The complexity of the tool might be a barrier for some users, particularly those without a background in legal or license management.
2. Limited Scope: While ModelGo covers a broad range of licenses, it may not encompass all possible licensing scenarios in the rapidly evolving field of ML
3. Dependency on Accurate Data: The effectiveness of ModelGo is contingent on the accurate and comprehensive input of licensing data, which might be a limiting factor.
questions: 1. Scope of Licensing Types: How does ModelGo handle emerging or non-standard licensing types that are increasingly seen in the ML field? Are there plans to regularly update the tool to include new types of licenses?
2. Complexity for Users: Given the complexity of license management in ML, how user-friendly is ModelGo for individuals without a legal or technical background? What measures have been taken to make the tool accessible to a broader audience?
3. Accuracy of License Data: The effectiveness of ModelGo depends on the accurate input of licensing data. How does the tool ensure or verify the accuracy of the licensing information fed into it?
4. Response to Rapid Changes in ML Field: The ML field is rapidly evolving. How adaptable is ModelGo to the fast-paced changes in technology, especially in terms of accommodating new models, datasets, and associated licenses?
5. Global Applicability and Legal Jurisdictions: Does ModelGo take into account the variations in legal frameworks and copyright laws across different countries? How does it handle licensing issues that span multiple legal jurisdictions?
ethics_review_flag: Yes
ethics_review_description: If ModelGo handles sensitive or proprietary data in its analysis of machine learning projects, it must ensure data privacy and security. This is particularly relevant if the tool accesses datasets or models that contain personal or confidential information.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Eprj62feR6 | Multimodal Relation Extraction via a Mixture of Hierarchical Visual Context Learners | [
"Xiyang Liu",
"Chunming Hu",
"Richong Zhang",
"Kai Sun",
"Samuel Mensah",
"Yongyi Mao"
] | Multimodal relation extraction is a fundamental task of multimodal information extraction. Recent studies have shown promising results by integrating hierarchical visual features from local regions, like image patches, to the broader global regions that form the entire image. However, research to date has largely ignored the understanding of how hierarchical visual semantics are represented and the characteristics that can benefit relation extraction. To bridge this gap, we propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. In addition, we introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models. We thoroughly investigate the implications of hierarchical visual contexts through four dimensions: performance evaluation, the nature of auxiliary visual information, the patterns observed in the image encoding hierarchy, and the significance of various visual encoding levels. Empirical studies show that our approach achieves new state-of-the-art performance on the MNRE dataset. | [
"Multimodal Relation Extraction",
"Multimodal Fusion"
] | https://openreview.net/pdf?id=Eprj62feR6 | h5vGc8Jigi | official_review | 1,700,497,698,777 | Eprj62feR6 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1741/Reviewer_T59C"
] | review: This paper introduces a new method for integrating hierarchical visual features into textual semantic representations, enhancing relation extraction effectiveness. It utilizes a two-stage hierarchical visual context fusion transformer along with a mixture of multimodal experts framework, aiming to boost performance and deepen the understanding of image information processing. The paper is well-structured and has clear motivations.
Reasons to accept:
1. The paper proposes a two-stage multimodal fusion model that initially captures text-relevant visual information, followed by employing multimodal experts to integrate visual features at various levels.
2. It presents hierarchical tracking maps to discern visual semantics from local to global scales within the multimodal model layers, using multi-level visual features to decode relationships.
3. The paper includes extensive experimental analysis, demonstrating the effectiveness of the proposed approach.
Reasons to reject:
1. Although Figure 2 intuitively explains the methodology, Section 3 describes it in an overly complex manner, making it difficult for readers to grasp. Additionally, some symbols are introduced without proper definition, complicating the understanding of the proposed approach.
2. Certain experimental results lack clarity and motivation. The rationale behind conducting specific analyses is not straightforward, and a more detailed case study with error analysis would enhance the paper's comprehensibility.
questions: See reasons to reject.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Eprj62feR6 | Multimodal Relation Extraction via a Mixture of Hierarchical Visual Context Learners | [
"Xiyang Liu",
"Chunming Hu",
"Richong Zhang",
"Kai Sun",
"Samuel Mensah",
"Yongyi Mao"
] | Multimodal relation extraction is a fundamental task of multimodal information extraction. Recent studies have shown promising results by integrating hierarchical visual features from local regions, like image patches, to the broader global regions that form the entire image. However, research to date has largely ignored the understanding of how hierarchical visual semantics are represented and the characteristics that can benefit relation extraction. To bridge this gap, we propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. In addition, we introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models. We thoroughly investigate the implications of hierarchical visual contexts through four dimensions: performance evaluation, the nature of auxiliary visual information, the patterns observed in the image encoding hierarchy, and the significance of various visual encoding levels. Empirical studies show that our approach achieves new state-of-the-art performance on the MNRE dataset. | [
"Multimodal Relation Extraction",
"Multimodal Fusion"
] | https://openreview.net/pdf?id=Eprj62feR6 | fJmx7Vbjiv | official_review | 1,700,192,608,674 | Eprj62feR6 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1741/Reviewer_ix3f"
] | review: This paper addresses a challenging web-related issue: identifying relationships between entities using multimodal input. The paper is well written. The proposed method deeply fuses features from the image and text encoders by adding a cross-attention layer to each ViT layer so that it attends to the input text features. Finally, a mixture-of-experts model is applied to fuse the deeply fused multimodal features, which are fed to a classification head to identify relationships. It is innovative to leverage cross-attention for each and every layer of ViT and to leverage MoE to fuse the outputs of all layers. The experiments indicate this approach improves relationship extraction performance on the public MNRE dataset.
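To make the fusion pattern described above concrete, here is a minimal, illustrative PyTorch sketch of (a) cross-attention from image patch tokens to text token features inside each visual layer and (b) a mixture-of-experts style gate that weighs the pooled outputs of all layers before classification. Module names, dimensions, and the gating form are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: per-layer cross-attention to text features + MoE-style fusion over layer outputs.
import torch
import torch.nn as nn

class CrossAttnBlock(nn.Module):
    """One visual layer: self-attention over patches, then cross-attention to text tokens."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, patches, text):
        h, _ = self.self_attn(patches, patches, patches)
        patches = self.norm1(patches + h)
        h, _ = self.cross_attn(patches, text, text)     # queries: patches; keys/values: text tokens
        return self.norm2(patches + h)

class MoEFusion(nn.Module):
    """Softmax gate over the per-layer fused representations (one 'expert' per visual layer)."""
    def __init__(self, dim=768, num_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList([CrossAttnBlock(dim) for _ in range(num_layers)])
        self.gate = nn.Linear(dim, num_layers)

    def forward(self, patches, text):
        layer_outputs = []
        for block in self.blocks:
            patches = block(patches, text)
            layer_outputs.append(patches.mean(dim=1))                 # pool patch tokens per layer
        stacked = torch.stack(layer_outputs, dim=1)                   # (batch, num_layers, dim)
        weights = torch.softmax(self.gate(text.mean(dim=1)), dim=-1)  # gate conditioned on text
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)           # input to the classification head

fused = MoEFusion()(torch.randn(2, 197, 768), torch.randn(2, 32, 768))
print(fused.shape)  # torch.Size([2, 768])
```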
questions: The main concern is the limited model performance evaluation. The proposed method is only evaluated on one dataset. The authors may consider other datasets, like the one in 'FL-MSRE: A Few-Shot Learning based Approach to Multimodal Social Relation Extraction'. Additionally, incorporating more ablation studies would make the paper more convincing. For example, the proposed architecture leverages BERT as the text encoder and ViT as the visual encoder, and it would be convincing to provide model performance using other text encoders. The model provides the MoE weights of the text feature and of the features from the different fused ViT layers; it would be more convincing to show model performance on MNRE without the text feature or with only a few of the fused ViT layers.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Eprj62feR6 | Multimodal Relation Extraction via a Mixture of Hierarchical Visual Context Learners | [
"Xiyang Liu",
"Chunming Hu",
"Richong Zhang",
"Kai Sun",
"Samuel Mensah",
"Yongyi Mao"
] | Multimodal relation extraction is a fundamental task of multimodal information extraction. Recent studies have shown promising results by integrating hierarchical visual features from local regions, like image patches, to the broader global regions that form the entire image. However, research to date has largely ignored the understanding of how hierarchical visual semantics are represented and the characteristics that can benefit relation extraction. To bridge this gap, we propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. In addition, we introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models. We thoroughly investigate the implications of hierarchical visual contexts through four dimensions: performance evaluation, the nature of auxiliary visual information, the patterns observed in the image encoding hierarchy, and the significance of various visual encoding levels. Empirical studies show that our approach achieves new state-of-the-art performance on the MNRE dataset. | [
"Multimodal Relation Extraction",
"Multimodal Fusion"
] | https://openreview.net/pdf?id=Eprj62feR6 | a0X2rWIT67 | decision | 1,705,909,254,746 | Eprj62feR6 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper tackles a challenging issue in the realm of web-related research, focusing on identifying relationships between entities using multimodal input. The paper is well-crafted, presenting a method that intricately fuses features from image and text encoders. This fusion involves incorporating a cross-attention layer for each ViT layer, enabling cross-attention to the input text feature. Ultimately, a mixture-of-experts (MOE) model is employed to amalgamate the deeply fused multimodal features, which are then fed into a classification head for relationship identification. The innovation lies in the utilization of cross-attention for every layer of ViT and leveraging MOE to refine the output of all layers. Experimental results indicate that this approach significantly enhances relationship extraction performance on the MNRE public dataset.
All of the reviewers agree that this work is interesting with solid results. However, there are still a few weaknesses pointed out by the reviewers. The authors have addressed most of the concerns in the rebuttal. I would encourage the authors to carefully incorporate the discussions with the new results into their revision. |
Eprj62feR6 | Multimodal Relation Extraction via a Mixture of Hierarchical Visual Context Learners | [
"Xiyang Liu",
"Chunming Hu",
"Richong Zhang",
"Kai Sun",
"Samuel Mensah",
"Yongyi Mao"
] | Multimodal relation extraction is a fundamental task of multimodal information extraction. Recent studies have shown promising results by integrating hierarchical visual features from local regions, like image patches, to the broader global regions that form the entire image. However, research to date has largely ignored the understanding of how hierarchical visual semantics are represented and the characteristics that can benefit relation extraction. To bridge this gap, we propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. In addition, we introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models. We thoroughly investigate the implications of hierarchical visual contexts through four dimensions: performance evaluation, the nature of auxiliary visual information, the patterns observed in the image encoding hierarchy, and the significance of various visual encoding levels. Empirical studies show that our approach achieves new state-of-the-art performance on the MNRE dataset. | [
"Multimodal Relation Extraction",
"Multimodal Fusion"
] | https://openreview.net/pdf?id=Eprj62feR6 | EFqHWGNh6B | official_review | 1,699,775,118,449 | Eprj62feR6 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1741/Reviewer_P4kp"
] | review: The paper proposes a novel approach to multimodal relation extraction by integrating hierarchical visual features into textual semantic representations. This is achieved through a two-stage hierarchical visual context fusion transformer paired with a mixture of multimodal experts framework. The concept of hierarchical tracking maps is introduced, aiming to enhance the understanding of image information processing within multimodal models. The paper claims state-of-the-art performance on the MNRE dataset.
**Strengths:**
1. The idea of the mixture of hierarchical visual context is interesting.
2. The experimental results consistently demonstrate performance improvements across many baselines on MNRE.
3. The paper is well-written and easy to follow.
**Weaknesses:**
1. The generalization of the method deserves more testing. In this paper, the method is only tested on the multimodal relation extraction (MRE) task, while many papers published at top conferences, such as [1][2][3], are evaluated on both MRE and multimodal NER.
2. It is unclear what additional costs (e.g., computational overhead) are associated with running the proposed model.
3. More information on the nature and diversity of the auxiliary visual information used would be beneficial.
4. The paper could benefit from a discussion on the model's limitations and potential areas for improvement.
[1] Prompt Me Up: Unleashing the Power of Alignments for Multimodal Entity and Relation Extraction. In MM, 2023.
[2] Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion. In SIGIR, 2022.
[3] Grounded Multimodal Named Entity Recognition on Social Media. In NAACL, 2022.
questions: Please see above.
ethics_review_flag: No
ethics_review_description: no
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Eprj62feR6 | Multimodal Relation Extraction via a Mixture of Hierarchical Visual Context Learners | [
"Xiyang Liu",
"Chunming Hu",
"Richong Zhang",
"Kai Sun",
"Samuel Mensah",
"Yongyi Mao"
] | Multimodal relation extraction is a fundamental task of multimodal information extraction. Recent studies have shown promising results by integrating hierarchical visual features from local regions, like image patches, to the broader global regions that form the entire image. However, research to date has largely ignored the understanding of how hierarchical visual semantics are represented and the characteristics that can benefit relation extraction. To bridge this gap, we propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. In addition, we introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models. We thoroughly investigate the implications of hierarchical visual contexts through four dimensions: performance evaluation, the nature of auxiliary visual information, the patterns observed in the image encoding hierarchy, and the significance of various visual encoding levels. Empirical studies show that our approach achieves new state-of-the-art performance on the MNRE dataset. | [
"Multimodal Relation Extraction",
"Multimodal Fusion"
] | https://openreview.net/pdf?id=Eprj62feR6 | 9fcxhkabUZ | official_review | 1,700,578,969,521 | Eprj62feR6 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1741/Reviewer_H68d"
] | review: The paper deals with the problem of multimodal relation extraction and proposes a novel two-stage hierarchical visual context fusion transformer. The model incorporates a mixture of multimodal experts framework to represent and integrate hierarchical visual features into textual semantic representations.
The paper is well-written and all in all easy to follow, even for a non-expert (which is my case).
The related work is concise but well structured and the positioning of the paper wrt to previous approaches is clear within each of the three axes of work.
The method section would benefit from a running example (for instance one taken from the introduction figure 1).
The experiments are well-designed and the results are convincing. However, given the complexity of the architecture, I'd appreciate some detail regarding the efficiency of the approach compared to the baselines, in particular given the only minor improvements over them.
Minor:
- The experiments are not a contribution; I'd remove them from the list in the introduction. They are a means to demonstrate the quality/relevance of the contribution (which is the novel model).
questions: What is the computational efficiency of the new model wrt the baselines?
ethics_review_flag: No
ethics_review_description: no issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Eprj62feR6 | Multimodal Relation Extraction via a Mixture of Hierarchical Visual Context Learners | [
"Xiyang Liu",
"Chunming Hu",
"Richong Zhang",
"Kai Sun",
"Samuel Mensah",
"Yongyi Mao"
] | Multimodal relation extraction is a fundamental task of multimodal information extraction. Recent studies have shown promising results by integrating hierarchical visual features from local regions, like image patches, to the broader global regions that form the entire image. However, research to date has largely ignored the understanding of how hierarchical visual semantics are represented and the characteristics that can benefit relation extraction. To bridge this gap, we propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. In addition, we introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models. We thoroughly investigate the implications of hierarchical visual contexts through four dimensions: performance evaluation, the nature of auxiliary visual information, the patterns observed in the image encoding hierarchy, and the significance of various visual encoding levels. Empirical studies show that our approach achieves new state-of-the-art performance on the MNRE dataset. | [
"Multimodal Relation Extraction",
"Multimodal Fusion"
] | https://openreview.net/pdf?id=Eprj62feR6 | 7ifJGcrfoG | official_review | 1,699,963,793,092 | Eprj62feR6 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1741/Reviewer_C4br"
] | review: The authors propose a novel two-stage hierarchical visual context fusion transformer incorporating the mixture of multimodal experts framework to effectively represent and integrate hierarchical visual features into textual semantic representations. The authors introduce the concept of hierarchical tracking maps to facilitate the understanding of the intrinsic mechanisms of image information processing involved in multimodal models.
Strong Points:
1. The research problem is very important. Multimodal relation extraction is important for web information mining.
2. The description of the model is very detailed. The article is well written.
3. The authors' analysis and discussion in the experiment section are adequate.
questions: Weak Points:
1. The authors propose to utilize hierarchical visual information. But why not use hierarchical textual information?
2. The authors experimented on only one dataset. It is recommended that the authors experiment on multiple multimodal relation extraction datasets to verify the validity of the model.
ethics_review_flag: No
ethics_review_description: No.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Ejjh59dA05 | Game-theoretic Counterfactual Explanation for Graph Neural Networks | [
"Chirag P Chhablani",
"Sarthak Jain",
"Akshay Channesh",
"Ian A. Kash",
"Sourav Medya"
] | Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black box to users, making it challenging to understand the reasoning behind their predictions. Counterfactual explanations (CFE) have shown promise in enhancing the interpretability of machine learning models. Prior approaches to computing CFE for GNNs are often learning-based approaches that require training additional graphs. In this paper, we propose a semivalue-based, non-learning approach to generate CFE for node classification tasks, eliminating the need for any additional training. Our results reveal that computing Banzhaf values requires lower sample complexity in identifying the counterfactual explanations compared to other popular methods such as computing Shapley values. Our empirical evidence indicates computing Banzhaf values can achieve up to a fourfold speed-up compared to Shapley values. We also design a thresholding method for computing Banzhaf values and show theoretical and empirical results on its robustness in noisy environments, making it superior to Shapley values. Furthermore, the thresholded Banzhaf values are shown to enhance efficiency without compromising the quality (i.e., fidelity) of the explanations on three popular graph datasets. | [
"Counterfactual Explanation",
"Graph Neural Networks",
"Game Theory"
] | https://openreview.net/pdf?id=Ejjh59dA05 | qwTbcvryWI | official_review | 1,700,547,609,603 | Ejjh59dA05 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission574/Reviewer_iPhy"
review: This paper addresses counterfactual explanation of node classification with GNN models. Instead of using training-based methods, it uses semivalues and is hence training-free and efficient. The proposed method, "Thresholded Banzhaf Values", is shown to be sample-efficient theoretically and demonstrated to be so experimentally, in particular against the Shapley value, which prior attempts used to solve the problem of counterfactual explainability on GNNs. In addition, the paper attempts to show the superiority of the Banzhaf value over the Shapley value through experiments on explainability benchmarks and then in the presence of random noise. While the Banzhaf value seems to outperform the Shapley value under some budgets, the performance gain does not appear significant. While the theoretical part of this paper is rather solid, some questions remain about the specifics of the experimental design and the interpretation of the experimental results.
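For reference, the standard game-theoretic definitions being compared (textbook notation; the paper's own notation may differ) are the Shapley value $\phi_i(U) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,[U(S \cup \{i\}) - U(S)]$ and the Banzhaf value $\beta_i(U) = \frac{1}{2^{|N|-1}} \sum_{S \subseteq N \setminus \{i\}} [U(S \cup \{i\}) - U(S)]$, where $U$ is the utility (characteristic) function over subsets of players (here, candidate edges). The Banzhaf value weights every coalition equally, which is why uniform subset sampling gives a natural unbiased estimator.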
questions: 1. How many repeated experiments are performed for each reported result, and how stable is the performance (i.e., what is its variance)? The estimation of the values can be stochastic, so it is important to report performance averaged over multiple runs.
2. It seems that the main contribution should be the efficiency of using the Banzhaf value over the Shapley value, since the performance gain is not significant. Is that the case?
3. Methods such as Random, Top-K, and Greedy, while poor in terms of fidelity, are much faster to run. So if the concern is efficiency, it is worthwhile to design a new metric that considers both fidelity and time taken together. It is important to show the model's superiority over others in this holistic way.
ethics_review_flag: No
ethics_review_description: No Ethics Concerns
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Ejjh59dA05 | Game-theoretic Counterfactual Explanation for Graph Neural Networks | [
"Chirag P Chhablani",
"Sarthak Jain",
"Akshay Channesh",
"Ian A. Kash",
"Sourav Medya"
] | Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black-box to users, making it challenging to understand the reasoning behind their predictions. Counterfactual explanations (CFE) have shown promise in enhancing the interpretability of machine learning models. Prior approaches to compute CFE for GNNS often are learning-based approaches that require training additional graphs. In this paper, we propose a semivalue-based, non-learning approach to generate CFE for node classification tasks, eliminating the need for any additional training. Our results reveals that computing Banzhaf values requires lower sample complexity in identifying the counterfactual explanations compared to other popular methods such as computing Shapley values. Our empirical evidence indicates computing Banzhaf values can achieve up to a fourfold speed up compared to Shapley values. We also design a thresholding method for computing Banzhaf values and show theoretical and empirical results on its robustness in noisy environments, making it superior to Shapley values. Furthermore, the thresholded Banzhaf values are shown to enhance efficiency without compromising the quality (i.e., fidelity) in the explanations in three popular graph datasets. | [
"Counterfactual Explanation",
"Graph Neural Networks",
"Game Theory"
] | https://openreview.net/pdf?id=Ejjh59dA05 | XgZwmWGpfa | official_review | 1,700,385,070,104 | Ejjh59dA05 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission574/Reviewer_Pn9d"
review: In this paper, the authors focus on the interpretability of GNNs, which is a very important research topic. Unlike previous research works, the authors propose a novel approach that builds a counterfactual explainer using thresholded Banzhaf values for the node classification task. They support the proposed approach with rigorous theoretical proofs. In addition, the proposed method significantly outperforms other baselines in terms of time efficiency.
questions: 1. The authors compare against relatively few baselines; why not compare with more of the latest baselines? Is this field less popular?
2. The authors performed experiments on only 3 artificial datasets. Why don't you conduct experiments on real-world datasets? The ultimate goal of research on the interpretability of graph neural networks is to enhance human trust in the model in real-world applications. The lack of experiments on real-world datasets makes it difficult for me to believe that the authors' method can work in the real world.
3. Can the authors' proposed method be generalized to graph-level tasks?
4. Why is b set to 0.05 in Tables 1 and 2, but 0.1 in Tables 3 and 4?
5. Line 679 has a citation error.
ethics_review_flag: No
ethics_review_description: no
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Ejjh59dA05 | Game-theoretic Counterfactual Explanation for Graph Neural Networks | [
"Chirag P Chhablani",
"Sarthak Jain",
"Akshay Channesh",
"Ian A. Kash",
"Sourav Medya"
] | Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black-box to users, making it challenging to understand the reasoning behind their predictions. Counterfactual explanations (CFE) have shown promise in enhancing the interpretability of machine learning models. Prior approaches to compute CFE for GNNS often are learning-based approaches that require training additional graphs. In this paper, we propose a semivalue-based, non-learning approach to generate CFE for node classification tasks, eliminating the need for any additional training. Our results reveals that computing Banzhaf values requires lower sample complexity in identifying the counterfactual explanations compared to other popular methods such as computing Shapley values. Our empirical evidence indicates computing Banzhaf values can achieve up to a fourfold speed up compared to Shapley values. We also design a thresholding method for computing Banzhaf values and show theoretical and empirical results on its robustness in noisy environments, making it superior to Shapley values. Furthermore, the thresholded Banzhaf values are shown to enhance efficiency without compromising the quality (i.e., fidelity) in the explanations in three popular graph datasets. | [
"Counterfactual Explanation",
"Graph Neural Networks",
"Game Theory"
] | https://openreview.net/pdf?id=Ejjh59dA05 | TCPulqv1Am | official_review | 1,701,036,557,096 | Ejjh59dA05 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission574/Reviewer_ZbQS"
review: This paper proposes to use the Banzhaf value to find the top-k important edges in GNN predictions. The proposed method approximates the Banzhaf value using the MSR estimator and incorporates a threshold into the utility function. Experiments show the proposed method has a lower running time than the MC estimator for the Shapley value and better CFE performance than methods that do not need training.
Strength
- This work proposes a method that enables us to obtain counterfactual examples for graph data without extra training.
- This work shows theoretical results on the number of calls of the utility function needed to obtain the top-k important edges.
Weakness
- This work only considers counterfactual examples obtained by deleting edges, ignoring other possible counterfactual examples that can only be obtained by modifying features or adding edges.
- The problem is defined as finding the top-k important edges that contribute the most to the prediction of class c. However, this does not really align with the definition of CFE in the literature: removing the top-k edges does not necessarily flip the predicted label (a toy validity check is sketched after this list). I would suggest the authors replace the term counterfactual explanation with something else.
- In the experiments, even if the method does not need training, it would be better to compare with some representative methods that need training, as an upper bound on performance.
- The error of the MSR estimator in approximating the true Banzhaf value is not discussed.
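To make the point about counterfactual validity concrete, a check of whether the selected edges actually flip the prediction could look like the following sketch; `model`, `graph`, and `remove_edges` are hypothetical placeholders, not APIs from the paper's code or any specific library.

```python
# Toy counterfactual-validity check: does deleting the selected top-k edges
# actually flip the node's predicted class? All names are placeholders.
import torch

def flips_prediction(model, graph, node_id, topk_edges, remove_edges):
    model.eval()
    with torch.no_grad():
        original = model(graph)[node_id].argmax().item()                        # prediction on the full graph
        perturbed = model(remove_edges(graph, topk_edges))[node_id].argmax().item()
    return perturbed != original                                                 # True -> a valid counterfactual
```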
questions: - The notation is quite confusing. In L195-196 v is a function, but v is a node in L254 etc.
- L322-L331: It would be better to elaborate on the difference between the Banzhaf value and the Shapley value formally, with solid notation. The text description is very hard to comprehend.
- What is the full name of MSR estimator?
- Notation mismatch in L391 and L393.
- L404, I am confused. Is \phi_{MC} for Shapley value or Banzhaf value? As L371 mentioned it is for Banzhaf value.
- L425, what is \beta? It is not defined.
- L679 there is a question mark.
- In practice, what is the best way to select a reasonable threshold?
- The time taken by Banzhaf increases with the budget while Shapley's does not in Tables 1-4. Does this imply that, when the budget is large enough, Banzhaf can be more expensive than Shapley?
- In the MSR estimator, I guess the expectation is taken over S, where S is the set of subgraphs that have at least one node as the neighbor of i. It would be great for the authors to make this clear.
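As a reference point for what is being approximated, a plain Monte Carlo Banzhaf estimator (not the paper's MSR estimator) is sketched below; the `utility` callable and the toy example at the end are assumptions for illustration only.

```python
# Plain Monte Carlo estimate of Banzhaf values: each other player is included
# in the sampled coalition independently with probability 1/2.
import random

def mc_banzhaf(players, utility, num_samples=1000, seed=0):
    rng = random.Random(seed)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for _ in range(num_samples):
            coalition = [p for p in others if rng.random() < 0.5]
            total += utility(coalition + [i]) - utility(coalition)  # marginal contribution of i
        values[i] = total / num_samples
    return values

# Toy usage: players are edge ids, utility is the squared size of the coalition.
print(mc_banzhaf(players=list(range(5)), utility=lambda s: len(s) ** 2))
```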
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
Ejjh59dA05 | Game-theoretic Counterfactual Explanation for Graph Neural Networks | [
"Chirag P Chhablani",
"Sarthak Jain",
"Akshay Channesh",
"Ian A. Kash",
"Sourav Medya"
] | Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black-box to users, making it challenging to understand the reasoning behind their predictions. Counterfactual explanations (CFE) have shown promise in enhancing the interpretability of machine learning models. Prior approaches to compute CFE for GNNS often are learning-based approaches that require training additional graphs. In this paper, we propose a semivalue-based, non-learning approach to generate CFE for node classification tasks, eliminating the need for any additional training. Our results reveals that computing Banzhaf values requires lower sample complexity in identifying the counterfactual explanations compared to other popular methods such as computing Shapley values. Our empirical evidence indicates computing Banzhaf values can achieve up to a fourfold speed up compared to Shapley values. We also design a thresholding method for computing Banzhaf values and show theoretical and empirical results on its robustness in noisy environments, making it superior to Shapley values. Furthermore, the thresholded Banzhaf values are shown to enhance efficiency without compromising the quality (i.e., fidelity) in the explanations in three popular graph datasets. | [
"Counterfactual Explanation",
"Graph Neural Networks",
"Game Theory"
] | https://openreview.net/pdf?id=Ejjh59dA05 | Bw3tPdUonH | decision | 1,705,909,211,492 | Ejjh59dA05 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper presents a new approach to counterfactual analysis for GNNs via the use of Banzhaf values. The authors show the superiority of this approach as compared to the standard approach of computing Shapley values. All the reviewers agreed that this is a novel technical contribution. The authors are recommended to take into account the readability-related suggestions from the reviewers.
Ejjh59dA05 | Game-theoretic Counterfactual Explanation for Graph Neural Networks | [
"Chirag P Chhablani",
"Sarthak Jain",
"Akshay Channesh",
"Ian A. Kash",
"Sourav Medya"
] | Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black-box to users, making it challenging to understand the reasoning behind their predictions. Counterfactual explanations (CFE) have shown promise in enhancing the interpretability of machine learning models. Prior approaches to compute CFE for GNNS often are learning-based approaches that require training additional graphs. In this paper, we propose a semivalue-based, non-learning approach to generate CFE for node classification tasks, eliminating the need for any additional training. Our results reveals that computing Banzhaf values requires lower sample complexity in identifying the counterfactual explanations compared to other popular methods such as computing Shapley values. Our empirical evidence indicates computing Banzhaf values can achieve up to a fourfold speed up compared to Shapley values. We also design a thresholding method for computing Banzhaf values and show theoretical and empirical results on its robustness in noisy environments, making it superior to Shapley values. Furthermore, the thresholded Banzhaf values are shown to enhance efficiency without compromising the quality (i.e., fidelity) in the explanations in three popular graph datasets. | [
"Counterfactual Explanation",
"Graph Neural Networks",
"Game Theory"
] | https://openreview.net/pdf?id=Ejjh59dA05 | 8MpJ6FBPZF | official_review | 1,700,678,764,759 | Ejjh59dA05 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission574/Reviewer_5nmi"
] | review: Contributions:
The authors propose a semivalue-based method for generating counterfactual explanations (CFE) for node classification tasks that does not require additional learning. They design a thresholding method for computing Banzhaf values, and they find that computing Banzhaf values (rather than Shapley values) requires a lower sample complexity and may be more efficient (up to 4 times faster) and robust.
My recommendation is based on S1, S2, W1, W2, W3. I am happy to raise my scores based on the authors' responses to my questions and clarification/justification of W1, W2, W3.
Quality:
Pros:
- (S2) The theoretical analysis of computational efficiency generally looks correct (Section 4.1).
- The authors theoretically show that using thresholding to compute Banzhaf values does not change their safety margin.
- (S1) Banzhaf values are empirically more efficient.
Cons:
- Lines 198-205: The authors state that a semivalue maps subsets of players to a real number, but a semivalue should take a characteristic function/game as input.
- (W1) Experiments: The authors should also include other edge explanation methods that are not based on Shapley values (even if they are not counterfactual explanations) as baselines (e.g., GStarX). The authors could also run more experiments to see how sensitive Banzhaf values are to the budget, as it appears from Tables 1 and 2 that Banzhaf values vary significantly between a budget of 3 and 5. Why do the authors not report the standard error over the three samples (lines 693-694)? The robustness results for Banzhaf values in Tables 3 and 4 are not convincingly lower than Shapley values without reporting the standard error over the samples.
Clarity:
Cons:
- The properties of semivalues (Section 2.2.1) are poorly stated by the authors. (1) should be $\Phi_i$ rather than $\Phi$. The symbol for the characteristic function changes from $v_1$ to $U$ to $v$. Please provide the domain and co-domain of $U$ and a high-level description of what it represents.
- Line 399: Given that the superiority of Banzhaf values in terms of efficiency hinges on MSR being numerically unstable for Shapley values, the authors should elaborate further on this numerical instability and why it is not a problem for Banzhaf values.
- The authors leverage numerous results from [1] in their theoretical analysis and should thus clearly restate these results in the paper before using them.
Originality:
Pros:
- The authors contribute a thresholding method for efficiently and robustly computing Banzhaf values.
Cons:
- (W2) The authors adopt the MSR estimator and much theoretical analysis for Banzhaf values from [1].
Significance:
Pros:
- Current learning-based approaches for CFE often require additional training and may not be interpretable themselves.
- Banzhaf values are more intuitive than Shapley values for edge attribution because node classification should be invariant to the ordering of edges.
Cons:
- (W3) The authors should discuss the limitations and societal implications of their work.
[1] Jiachen T Wang and Ruoxi Jia. 2023. Data banzhaf: A robust data valuation framework for machine learning. In International Conference on Artificial Intelligence and Statistics. PMLR, 6388–6421.
[2] Zhang, Shichang, et al. "Gstarx: Explaining graph neural networks with structure-aware cooperative games." Advances in Neural Information Processing Systems 35 (2022): 19810-19823.
EDIT: I have read the authors' rebuttal.
questions: Please see Review (above).
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Ejjh59dA05 | Game-theoretic Counterfactual Explanation for Graph Neural Networks | [
"Chirag P Chhablani",
"Sarthak Jain",
"Akshay Channesh",
"Ian A. Kash",
"Sourav Medya"
] | Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black-box to users, making it challenging to understand the reasoning behind their predictions. Counterfactual explanations (CFE) have shown promise in enhancing the interpretability of machine learning models. Prior approaches to compute CFE for GNNS often are learning-based approaches that require training additional graphs. In this paper, we propose a semivalue-based, non-learning approach to generate CFE for node classification tasks, eliminating the need for any additional training. Our results reveals that computing Banzhaf values requires lower sample complexity in identifying the counterfactual explanations compared to other popular methods such as computing Shapley values. Our empirical evidence indicates computing Banzhaf values can achieve up to a fourfold speed up compared to Shapley values. We also design a thresholding method for computing Banzhaf values and show theoretical and empirical results on its robustness in noisy environments, making it superior to Shapley values. Furthermore, the thresholded Banzhaf values are shown to enhance efficiency without compromising the quality (i.e., fidelity) in the explanations in three popular graph datasets. | [
"Counterfactual Explanation",
"Graph Neural Networks",
"Game Theory"
] | https://openreview.net/pdf?id=Ejjh59dA05 | 109S9OpHDt | official_review | 1,701,184,361,559 | Ejjh59dA05 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission574/Reviewer_NBys"
review: Strengths:
1. It provides a comprehensive comparison of the proposed method with existing baselines like Shapley values, demonstrating its efficiency and effectiveness.
2. The application of Banzhaf values for generating counterfactual explanations in GNNs is a novel approach.
3. The paper introduces new insights into the efficiency and robustness of Banzhaf values compared to Shapley values in the context of GNNs.
4. The study addresses an important aspect of AI interpretability, contributing to the understanding of GNN predictions, which is crucial for their adoption in critical applications.
5. The proposed method could be particularly useful in domains where understanding the rationale behind AI decisions is as important as the decisions themselves.
Weakness:
1. The technical nature and the complexity of the game-theoretic concepts might make the paper challenging for readers not familiar with these areas.
2. The foundational concepts of GNNs and counterfactual explanations are established areas, so the paper's contribution lies in the novel application of these concepts rather than in the creation of new theories.
questions: 1. How well does your method perform on real-world datasets, considering their complexity and variability compared to synthetic datasets?
2. How scalable is your method in terms of graph size and complexity? Are there any limitations when it comes to larger or more complex networks?
3. Beyond the Shapley value, did you compare the Banzhaf value approach with other explainability methods? How does it fare against these methods?
4. How does the structure of the graph (e.g., density, connectivity) affect the counterfactual explanations generated by your method?
5. How robust is your method in the presence of noisy or incomplete data, which is common in real-world scenarios?
6. How interpretable are the explanations generated by your method for stakeholders or decision-makers who may not be experts in graph theory or machine learning?
7. Have you investigated the potential for biases in the explanations generated by your method? How can such biases be identified and mitigated?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EeyaKZtYFX | Weakly Supervised Anomaly Detection via Knowledge-Data Alignment | [
"Haihong Zhao",
"Chenyi Zi",
"Yang Liu",
"Chen Zhang",
"Yan Zhou",
"Jia Li"
] | Anomaly detection (AD) plays a pivotal role in numerous web-based applications, including malware detection, anti-money laundering, device failure detection, and network fault analysis. Most methods, which rely on unsupervised learning, are hard to reach satisfactory detection accuracy due to the lack of labels. Weakly Supervised Anomaly Detection (WSAD) has been introduced with a limited number of labeled anomaly samples to enhance model performance. Nevertheless, it is still challenging for models, trained on an inadequate amount of labeled data, to generalize to unseen anomalies.
In this paper, we introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data. Specifically, we transpose these rules into the knowledge space and subsequently recast the incorporation of knowledge as the alignment of knowledge and data. To facilitate this alignment, we employ the Optimal Transport (OT) technique. We then incorporate the OT distance as an additional loss term to the original objective function of WSAD methodologies. Comprehensive experimental results on five real-world datasets demonstrate that our proposed KDAlign framework markedly surpasses its state-of-the-art counterparts, achieving superior performance across various anomaly types. | [
"Anomaly Detection; Knowledge-Data Alignment; Weakly Supervised Learning"
] | https://openreview.net/pdf?id=EeyaKZtYFX | wjJ6VoHn2n | decision | 1,705,909,252,424 | EeyaKZtYFX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This paper proposes an approach for weakly supervised anomaly detection using knowledge-data alignment. The proposal allows incorporating rule knowledge from human experts to supplement limited labeled anomalies.
Overall, reviewers rated this paper generally favorably in both technical quality and novelty (especially AQ1S and AMY2). The main technical concerns raised pre-rebuttal were largely addressed by the authors in detailed responses. In particular, reviewers raised several evaluation gaps, including comparisons with pseudo-labeling, noise ratios, and hyperparameter sensitivity, which were satisfactorily resolved in the discussion.
I encourage the authors to carefully incorporate the requested updates into the final version.
EeyaKZtYFX | Weakly Supervised Anomaly Detection via Knowledge-Data Alignment | [
"Haihong Zhao",
"Chenyi Zi",
"Yang Liu",
"Chen Zhang",
"Yan Zhou",
"Jia Li"
] | Anomaly detection (AD) plays a pivotal role in numerous web-based applications, including malware detection, anti-money laundering, device failure detection, and network fault analysis. Most methods, which rely on unsupervised learning, are hard to reach satisfactory detection accuracy due to the lack of labels. Weakly Supervised Anomaly Detection (WSAD) has been introduced with a limited number of labeled anomaly samples to enhance model performance. Nevertheless, it is still challenging for models, trained on an inadequate amount of labeled data, to generalize to unseen anomalies.
In this paper, we introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data. Specifically, we transpose these rules into the knowledge space and subsequently recast the incorporation of knowledge as the alignment of knowledge and data. To facilitate this alignment, we employ the Optimal Transport (OT) technique. We then incorporate the OT distance as an additional loss term to the original objective function of WSAD methodologies. Comprehensive experimental results on five real-world datasets demonstrate that our proposed KDAlign framework markedly surpasses its state-of-the-art counterparts, achieving superior performance across various anomaly types. | [
"Anomaly Detection; Knowledge-Data Alignment; Weakly Supervised Learning"
] | https://openreview.net/pdf?id=EeyaKZtYFX | vFut6YujzR | official_review | 1,700,790,465,841 | EeyaKZtYFX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission651/Reviewer_AMY2"
] | review: This work proposes a novel framework for weakly supervised anomaly detection via **knowledge-data alignment**. The authors claim that their framework can incorporate rule knowledge, derived from human experts, to supplement the limited labeled anomaly samples and improve the performance of deep learning models. The authors use **optimal transport** techniques to align knowledge and data in a high-dimensional embedding space and introduce an additional loss term to the original objective function of weakly supervised anomaly detection methods. The authors conduct experiments on **five** real-world datasets and demonstrate that their framework outperforms several baselines and achieves state-of-the-art results.
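To make the alignment step concrete, a generic entropic-OT (Sinkhorn) distance between data embeddings and rule embeddings, added as an extra loss term, could be sketched as follows; the cosine-distance cost, uniform marginals, and the $\lambda$-weighted combination are illustrative assumptions, not necessarily the authors' exact implementation.

```python
# Generic Sinkhorn OT distance between data embeddings E_X (n x d) and rule
# embeddings E_F (m x d); a sketch of the usual recipe, not the paper's code.
import torch
import torch.nn.functional as F

def sinkhorn_ot_distance(E_X, E_F, eps=0.1, iters=50):
    C = 1.0 - F.normalize(E_X, dim=-1) @ F.normalize(E_F, dim=-1).T  # cosine-distance cost matrix
    n, m = C.shape
    u = torch.full((n,), 1.0 / n)          # uniform marginal over data samples
    v = torch.full((m,), 1.0 / m)          # uniform marginal over rules
    K = torch.exp(-C / eps)                # Gibbs kernel
    a, b = torch.ones(n) / n, torch.ones(m) / m
    for _ in range(iters):                 # Sinkhorn-Knopp scaling iterations
        a = u / (K @ b)
        b = v / (K.T @ a)
    P = torch.diag(a) @ K @ torch.diag(b)  # transport plan
    return (P * C).sum()                   # OT distance, usable as an extra loss term

# total_loss = detection_loss + lam * sinkhorn_ot_distance(data_emb, rule_emb)
```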
questions: 1. Could you provide more details about the **knowledge encoder** used in the KDAlign framework inside the main text? How were they implemented and trained?
2. Could you provide some **qualitative examples** or **visualizations** of the knowledge-data alignment? What does the alignment look like in the high-dimensional embedding space, and how does it change during the training process?
3. Could you discuss the **limitations** of the KDAlign framework and the potential **future work**? For example, how to handle noisy or incomplete rule knowledge, how to scale up to large datasets or complex models, or how to incorporate other types of weak supervision such as partial labels or pairwise constraints?
ethics_review_flag: No
ethics_review_description: no
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
EeyaKZtYFX | Weakly Supervised Anomaly Detection via Knowledge-Data Alignment | [
"Haihong Zhao",
"Chenyi Zi",
"Yang Liu",
"Chen Zhang",
"Yan Zhou",
"Jia Li"
] | Anomaly detection (AD) plays a pivotal role in numerous web-based applications, including malware detection, anti-money laundering, device failure detection, and network fault analysis. Most methods, which rely on unsupervised learning, are hard to reach satisfactory detection accuracy due to the lack of labels. Weakly Supervised Anomaly Detection (WSAD) has been introduced with a limited number of labeled anomaly samples to enhance model performance. Nevertheless, it is still challenging for models, trained on an inadequate amount of labeled data, to generalize to unseen anomalies.
In this paper, we introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data. Specifically, we transpose these rules into the knowledge space and subsequently recast the incorporation of knowledge as the alignment of knowledge and data. To facilitate this alignment, we employ the Optimal Transport (OT) technique. We then incorporate the OT distance as an additional loss term to the original objective function of WSAD methodologies. Comprehensive experimental results on five real-world datasets demonstrate that our proposed KDAlign framework markedly surpasses its state-of-the-art counterparts, achieving superior performance across various anomaly types. | [
"Anomaly Detection; Knowledge-Data Alignment; Weakly Supervised Learning"
] | https://openreview.net/pdf?id=EeyaKZtYFX | j2cfpjjJ4F | official_review | 1,701,226,000,514 | EeyaKZtYFX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission651/Reviewer_s7w9"
review: This paper studies the problem of weakly supervised anomaly detection by integrating rule knowledge via knowledge-data alignment. In general, the studied problem is interesting and the proposed solution sounds reasonable. My major concern is the clarity of the methodology, especially the calculation of the optimal transport. Detailed comments are listed as follows.
Strengths
1. The proposed intuition of using rule knowledge to aid weakly supervised anomaly detection is well justified;
2. The proposed solution provides some technical contribution to solve this problem;
3. The empirical results support the claims.
Weaknesses
1. The clarity of the methodology section needs to be improved. I am not particularly familiar with this topic, but as a general reader, I cannot judge the validity of several key techniques in their current form. For instance, in lines 354-355, it is unclear how the propositional formulas in F are transformed into graphs, and the authors should provide more illustration of this step. Section 3.3 is also presented in an abstract manner without relating to X and F, which makes it hard to follow: what is the form of $S(f_i, x_j)$? How is $C$ calculated? What are $u$ and $v$ in the context of $E_X$ and $E_F$?
2. Some claims are not well justified. In the discussion of noisy rules, in lines 148-150, the authors mention that "when a sample matches a noisy rule, the distance of that sample to some other closely related rules will be farther". Is this claim supported by any evidence?
questions: 1. Are the rules generated on training, validation or test data? What does line 573 "delete the anomaly samples that match rules from the training set" mean?
2. What does it mean that KD-ResNet introduces knowledge without the knowledge-data alignment? Then how is the knowledge introduced?
3. How are baselines influenced by noisy knowledge?
4. How sensitive is the method to the hyperparameter $\lambda$?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
EeyaKZtYFX | Weakly Supervised Anomaly Detection via Knowledge-Data Alignment | [
"Haihong Zhao",
"Chenyi Zi",
"Yang Liu",
"Chen Zhang",
"Yan Zhou",
"Jia Li"
] | Anomaly detection (AD) plays a pivotal role in numerous web-based applications, including malware detection, anti-money laundering, device failure detection, and network fault analysis. Most methods, which rely on unsupervised learning, are hard to reach satisfactory detection accuracy due to the lack of labels. Weakly Supervised Anomaly Detection (WSAD) has been introduced with a limited number of labeled anomaly samples to enhance model performance. Nevertheless, it is still challenging for models, trained on an inadequate amount of labeled data, to generalize to unseen anomalies.
In this paper, we introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data. Specifically, we transpose these rules into the knowledge space and subsequently recast the incorporation of knowledge as the alignment of knowledge and data. To facilitate this alignment, we employ the Optimal Transport (OT) technique. We then incorporate the OT distance as an additional loss term to the original objective function of WSAD methodologies. Comprehensive experimental results on five real-world datasets demonstrate that our proposed KDAlign framework markedly surpasses its state-of-the-art counterparts, achieving superior performance across various anomaly types. | [
"Anomaly Detection; Knowledge-Data Alignment; Weakly Supervised Learning"
] | https://openreview.net/pdf?id=EeyaKZtYFX | ROhOdw9Oe2 | official_review | 1,700,819,479,944 | EeyaKZtYFX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission651/Reviewer_AQ1S"
] | review: This paper proposes KDAlign to integrate rule knowledge, typically summarized by human experts, to supplement limited labeled data in a weakly supervised anomaly detection problem.
Strengths
1. This paper is easy to understand.
2. The proposed idea of utilizing knowledge for anomaly detection makes sense and is expected to work well in other scenarios.
3. Extensive experiments were conducted to demonstrate the effectiveness of the proposed approach.
Weaknesses
1. There are unclear details on the proposed approach. Some parts of the approach, such as building a knowledge graph, should be better explained in the main paper, not only in the appendix. Please refer to the questions below.
2. The main experimental setting is too favorable to the proposed approach, as the authors delete anomaly samples that match rules from the training set. It is uncertain if the proposed approach can work in more challenging settings.
questions: Questions
1. Why do we need to "align" the data and knowledge in the first place? Is alignment the only way to incorporate knowledge into predictions?
2. If the sizes of the data and knowledge sets are different, how can we define the knowledge-data alignment M? What if multiple rules can be applied to a single data point?
3. What is $E_T$? I can't find its definition in the text.
4. How is weakly-supervised anomaly detection (AD) different from semi-supervised AD?
5. Is the proposed approach the first attempt to use such knowledge in anomaly detection?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EeyaKZtYFX | Weakly Supervised Anomaly Detection via Knowledge-Data Alignment | [
"Haihong Zhao",
"Chenyi Zi",
"Yang Liu",
"Chen Zhang",
"Yan Zhou",
"Jia Li"
] | Anomaly detection (AD) plays a pivotal role in numerous web-based applications, including malware detection, anti-money laundering, device failure detection, and network fault analysis. Most methods, which rely on unsupervised learning, are hard to reach satisfactory detection accuracy due to the lack of labels. Weakly Supervised Anomaly Detection (WSAD) has been introduced with a limited number of labeled anomaly samples to enhance model performance. Nevertheless, it is still challenging for models, trained on an inadequate amount of labeled data, to generalize to unseen anomalies.
In this paper, we introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data. Specifically, we transpose these rules into the knowledge space and subsequently recast the incorporation of knowledge as the alignment of knowledge and data. To facilitate this alignment, we employ the Optimal Transport (OT) technique. We then incorporate the OT distance as an additional loss term to the original objective function of WSAD methodologies. Comprehensive experimental results on five real-world datasets demonstrate that our proposed KDAlign framework markedly surpasses its state-of-the-art counterparts, achieving superior performance across various anomaly types. | [
"Anomaly Detection; Knowledge-Data Alignment; Weakly Supervised Learning"
] | https://openreview.net/pdf?id=EeyaKZtYFX | 94hTHvdyXS | official_review | 1,701,129,292,683 | EeyaKZtYFX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission651/Reviewer_NGBz"
review: This paper focuses on the Weakly Supervised Anomaly Detection (WSAD) problem with limited labeled data. Specifically, the authors propose Knowledge-Data Alignment (KDAlign) to integrate rule knowledge for anomaly detection by transforming knowledge and then aligning it with data in the representation space. As a result, an alignment loss term based on optimal transport is proposed as an additional loss. Experiments demonstrate the effectiveness of KDAlign on real-world datasets and in the noisy-knowledge setting.
Strong points
1. This paper is well-written and easy to follow.
2. The research problem to integrate knowledge rule is well-motivated and the knowledge-data alignment solution is simple and natural.
Weak points
1. The technical novelty is limited. Even though the knowledge-data alignment idea is interesting, knowledge rules, as auxiliary information, have been integrated in many papers, so the proposed method is not new in my view. Additionally, if we have knowledge rules, can we use these rules to generate pseudo labels and then train the model? Is there any specific reason to encode knowledge instead of generating labels? What are the performance comparison results?
2. The rationale for the superiority of KDAlign under noisy knowledge is still unclear to me. Why can KDAlign tackle noisy knowledge during training? I understand it can tackle noisy knowledge at inference with a well-trained model, but what about noisy knowledge during training? Can you elaborate on the reasons it can handle noisy knowledge in training? Additionally, there is no clear relation between the performance improvement and the noise ratio. Is the performance improvement larger for higher noise? What is the maximum noise ratio that KDAlign can tackle?
3. Why optimal transport? In lines 439-440, the authors attribute the robustness to noisy knowledge to OT's global perspective. What is the rationale behind that? From my understanding, there are multiple distribution distance metrics that measure such a global distance, such as mutual information, MSE, KL divergence, etc. Are there any insights behind choosing OT? If so, more ablation studies on distance metrics should be conducted.
questions: Please refer to the three weak points listed above.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EbJdgSNIW7 | Unlocking the Non-deterministic Computing Power with Memory-Elastic Multi-Exit Neural Networks | [
"Jiaming Huang",
"Yi Gao",
"Wei Dong"
] | With the increasing demand for Web of Things (WoT) and edge computing, the efficient utilization of limited computing power on edge devices is becoming a crucial challenge. Traditional neural networks (NNs) as web services rely on deterministic computational resources. However, they may fail to output the results on non-deterministic computing power which could be preempted at any time, degrading the task performance significantly. Multi-exit NNs with multiple branches have been proposed as a solution, but the accuracy of intermediate results may be unsatisfactory. In this paper, we propose MEEdge, a system that automatically transforms classic single-exit models into heterogeneous and dynamic multi-exit models which enables Memory-Elastic inference at the Edge with non-deterministic computing power. To build heterogeneous multi-exit models, MEEdge uses efficient convolutions to form a branch zoo and High Priority First (HPF)-based branch placement method for branch growth. To adapt models to dynamically varying computational resources, we employ a novel on-device scheduler for collaboration. Further, to reduce the memory overhead caused by dynamic branches, we propose neuron-level weight sharing and few-shot knowledge distillation(KD) retraining. Our experimental results show that models generated by MEEdge can achieve up to 27.31% better performance than existing multi-exit NNs. | [
"Non-deterministic computing power",
"multi-exit neural networks",
"memory-elastic",
"edge computing"
] | https://openreview.net/pdf?id=EbJdgSNIW7 | xzIc10UCja | decision | 1,705,909,237,876 | EbJdgSNIW7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper proposes a systematic methodology to convert single-exit NNs towards multi-exit branch NNs to suit non-deterministic computing power, with solutions to address heterogenous NN training and non-deterministic computing power scheduling. All reviewers acknowledge the motivations and technical contributions in this paper and provide positive ratings and feedback. The authors are encouraged to incorporate the review comments into the camera-ready version of the manuscript, including the application in more complex scenarios, overhead analysis, etc. I recommend accepting this version to the Web Conference. |
EbJdgSNIW7 | Unlocking the Non-deterministic Computing Power with Memory-Elastic Multi-Exit Neural Networks | [
"Jiaming Huang",
"Yi Gao",
"Wei Dong"
] | With the increasing demand for Web of Things (WoT) and edge computing, the efficient utilization of limited computing power on edge devices is becoming a crucial challenge. Traditional neural networks (NNs) as web services rely on deterministic computational resources. However, they may fail to output the results on non-deterministic computing power which could be preempted at any time, degrading the task performance significantly. Multi-exit NNs with multiple branches have been proposed as a solution, but the accuracy of intermediate results may be unsatisfactory. In this paper, we propose MEEdge, a system that automatically transforms classic single-exit models into heterogeneous and dynamic multi-exit models which enables Memory-Elastic inference at the Edge with non-deterministic computing power. To build heterogeneous multi-exit models, MEEdge uses efficient convolutions to form a branch zoo and High Priority First (HPF)-based branch placement method for branch growth. To adapt models to dynamically varying computational resources, we employ a novel on-device scheduler for collaboration. Further, to reduce the memory overhead caused by dynamic branches, we propose neuron-level weight sharing and few-shot knowledge distillation(KD) retraining. Our experimental results show that models generated by MEEdge can achieve up to 27.31% better performance than existing multi-exit NNs. | [
"Non-deterministic computing power",
"multi-exit neural networks",
"memory-elastic",
"edge computing"
] | https://openreview.net/pdf?id=EbJdgSNIW7 | w2HVHQgVhP | official_review | 1,699,624,657,883 | EbJdgSNIW7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission154/Reviewer_1V43"
] | review: The paper proposes an approach to create multi-exit models from standard single-exit networks to handle varying inference loads on edge devices. It further employs weight sharing and retraining of certain layers with knowledge distillation (KD). The use of KD is not new and has been explored earlier in [1]. The branch zoo concept is not explained clearly; I initially thought it referred to a pretrained model (or layers), but I think this is not the case. Given such a complicated procedure for designing a multi-exit model, I wonder about its feasibility in comparison with a simpler approach, such as methods that use the entropy of predictions to exit early.
[1] Phuong, Mary, and Christoph H. Lampert. "Distillation-based training for multi-exit architectures." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
questions: Does High Priority First always try to force the method to exit from the earliest layer possible? What if the early exit produces a highly uncertain prediction?
How does the performance compare with exiting based on entropy?
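For reference, the entropy-based early-exit baseline mentioned above is commonly implemented roughly as follows; the module names and the threshold value are illustrative assumptions, not part of MEEdge.

```python
# Toy entropy-threshold early exit for a multi-exit classifier: return the
# prediction of the first intermediate head whose entropy is low enough.
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

def early_exit_forward(x, backbone_blocks, exit_heads, threshold=0.5):
    h = x
    logits = None
    for block, head in zip(backbone_blocks, exit_heads):
        h = block(h)
        logits = head(h)
        if prediction_entropy(logits).mean() < threshold:  # confident enough -> stop early
            return logits
    return logits                                          # otherwise fall back to the final exit
```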
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EbJdgSNIW7 | Unlocking the Non-deterministic Computing Power with Memory-Elastic Multi-Exit Neural Networks | [
"Jiaming Huang",
"Yi Gao",
"Wei Dong"
] | With the increasing demand for Web of Things (WoT) and edge computing, the efficient utilization of limited computing power on edge devices is becoming a crucial challenge. Traditional neural networks (NNs) as web services rely on deterministic computational resources. However, they may fail to output the results on non-deterministic computing power which could be preempted at any time, degrading the task performance significantly. Multi-exit NNs with multiple branches have been proposed as a solution, but the accuracy of intermediate results may be unsatisfactory. In this paper, we propose MEEdge, a system that automatically transforms classic single-exit models into heterogeneous and dynamic multi-exit models which enables Memory-Elastic inference at the Edge with non-deterministic computing power. To build heterogeneous multi-exit models, MEEdge uses efficient convolutions to form a branch zoo and High Priority First (HPF)-based branch placement method for branch growth. To adapt models to dynamically varying computational resources, we employ a novel on-device scheduler for collaboration. Further, to reduce the memory overhead caused by dynamic branches, we propose neuron-level weight sharing and few-shot knowledge distillation(KD) retraining. Our experimental results show that models generated by MEEdge can achieve up to 27.31% better performance than existing multi-exit NNs. | [
"Non-deterministic computing power",
"multi-exit neural networks",
"memory-elastic",
"edge computing"
] | https://openreview.net/pdf?id=EbJdgSNIW7 | qtrqk8iRRx | official_review | 1,700,764,110,666 | EbJdgSNIW7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission154/Reviewer_pTzV"
] | review: Summary:
This paper proposes MEEdge, a system that transforms single-exit neural network models into heterogeneous and dynamic multi-exit models for resource-constrained edge devices. The key ideas are: (1) Automatically generate multiple branch candidates with different structures using efficient convolutions. Select high-quality branches through survival analysis to build a branch zoo. (2) Propose an HPF-based branch placement method to select and place heterogeneous branches online according to device resource budgets. This facilitates server-device collaboration for memory-elastic inference. (3) Reduce the memory overhead of branches through neuron-level weight sharing and few-shot knowledge distillation retraining on the server. (4) Design an on-device scheduler for collaborating with the server to dynamically update branches based on available memory to achieve anytime inference.
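As background for key idea (3), few-shot KD retraining presumably follows the standard distillation recipe; the sketch below is a generic version (the temperature, weighting, and loss form are assumptions, not MEEdge's exact procedure).

```python
# Standard knowledge-distillation loss for retraining a lightweight exit branch
# against the original single-exit model's soft predictions.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                       # soft-target distillation term
    hard = F.cross_entropy(student_logits, labels)    # supervision from the few labeled samples
    return alpha * soft + (1 - alpha) * hard
```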
Strength:
1. Comprehensively tackles the problem of enabling neural network inference on non-deterministic edge resources through both server-side and device-side techniques.
2. Extensive experiments on image classification and gesture recognition datasets demonstrate the ability to dynamically adapt to resource changes.
3. The proposed techniques such as neuron-level weight sharing, and HPF-based search can be applied more broadly.
Weakness:
1. The end-to-end performance could be analyzed in more complex edge computing scenarios with additional metrics.
2. More analysis could be provided on how the techniques generalize to other model architectures and applications.
3. The security and privacy implications of the server-device collaboration are not discussed.
questions: please refer to weakness
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EbJdgSNIW7 | Unlocking the Non-deterministic Computing Power with Memory-Elastic Multi-Exit Neural Networks | [
"Jiaming Huang",
"Yi Gao",
"Wei Dong"
] | With the increasing demand for Web of Things (WoT) and edge computing, the efficient utilization of limited computing power on edge devices is becoming a crucial challenge. Traditional neural networks (NNs) as web services rely on deterministic computational resources. However, they may fail to output the results on non-deterministic computing power which could be preempted at any time, degrading the task performance significantly. Multi-exit NNs with multiple branches have been proposed as a solution, but the accuracy of intermediate results may be unsatisfactory. In this paper, we propose MEEdge, a system that automatically transforms classic single-exit models into heterogeneous and dynamic multi-exit models which enables Memory-Elastic inference at the Edge with non-deterministic computing power. To build heterogeneous multi-exit models, MEEdge uses efficient convolutions to form a branch zoo and High Priority First (HPF)-based branch placement method for branch growth. To adapt models to dynamically varying computational resources, we employ a novel on-device scheduler for collaboration. Further, to reduce the memory overhead caused by dynamic branches, we propose neuron-level weight sharing and few-shot knowledge distillation(KD) retraining. Our experimental results show that models generated by MEEdge can achieve up to 27.31% better performance than existing multi-exit NNs. | [
"Non-deterministic computing power",
"multi-exit neural networks",
"memory-elastic",
"edge computing"
] | https://openreview.net/pdf?id=EbJdgSNIW7 | WqTMuYHhwz | official_review | 1,700,693,504,295 | EbJdgSNIW7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission154/Reviewer_GYiH"
] | review: This paper proposes memory-elastic multi-exit neural networks. The paper is well written and structured. It is very easy to follow.
Here are the pros and cons of this paper:
Pros:
+ DFS selection in the online stage can work very well for the scenario
+ Multiple existing methods have been compared with the proposed method.
+ The proposed method clearly outperforms the existing methods.
Cons:
- More technical details about the branch zoo and Branch Survival could be added. It is not clear whether NAS is used and how it is used. Section 5 discusses the disadvantages of NAS, but the technical details of the alternative solutions, or of how NAS is used in the paper to address those issues, are not clear. To resolve this, more technical details could be added to Section 3.2.
- In Section 3.2.2, it is stated that "To minimize unnecessary storage and search overhead, we perform Branch Survival to eliminate underperforming branches." As this step is offline and the search can be done on a server, the motivation for minimizing storage and search overhead could be discussed in more detail.
- As this paper focuses on memory constraints, it would be good to discuss when these arise, i.e., in which application scenarios memory is the constraint and in which scenarios the constraints come from other factors, such as computational capability.
questions: Please see the cons in the review.
ethics_review_flag: No
ethics_review_description: na
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EbJdgSNIW7 | Unlocking the Non-deterministic Computing Power with Memory-Elastic Multi-Exit Neural Networks | [
"Jiaming Huang",
"Yi Gao",
"Wei Dong"
] | With the increasing demand for Web of Things (WoT) and edge computing, the efficient utilization of limited computing power on edge devices is becoming a crucial challenge. Traditional neural networks (NNs) as web services rely on deterministic computational resources. However, they may fail to output the results on non-deterministic computing power which could be preempted at any time, degrading the task performance significantly. Multi-exit NNs with multiple branches have been proposed as a solution, but the accuracy of intermediate results may be unsatisfactory. In this paper, we propose MEEdge, a system that automatically transforms classic single-exit models into heterogeneous and dynamic multi-exit models which enables Memory-Elastic inference at the Edge with non-deterministic computing power. To build heterogeneous multi-exit models, MEEdge uses efficient convolutions to form a branch zoo and High Priority First (HPF)-based branch placement method for branch growth. To adapt models to dynamically varying computational resources, we employ a novel on-device scheduler for collaboration. Further, to reduce the memory overhead caused by dynamic branches, we propose neuron-level weight sharing and few-shot knowledge distillation(KD) retraining. Our experimental results show that models generated by MEEdge can achieve up to 27.31% better performance than existing multi-exit NNs. | [
"Non-deterministic computing power",
"multi-exit neural networks",
"memory-elastic",
"edge computing"
] | https://openreview.net/pdf?id=EbJdgSNIW7 | BCGwd0ezqR | official_review | 1,700,552,420,646 | EbJdgSNIW7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission154/Reviewer_hEzM"
] | review: **Quality.** This paper proposes MEEdge that transforms single-exit models (simple DNNs) into dynamic multi-exit DNNs that are as accurate as single-exit models and enable memory-elastic inference at edge. To achieve its design goals, the paper leverages three novel techniques including dynamic branch construction, neuron-level weight sharing and few-shot knowledge distillation. Therefore, this paper is of good quality in terms of its design objectives and technical contributions.
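For readers less familiar with the setting, the core early-exit idea behind such multi-exit models can be sketched in a few lines. The toy architecture, exit placement, and confidence threshold below are illustrative choices of mine and are not taken from MEEdge; they only show how an intermediate branch can return a usable prediction when computation may be preempted.

```python
import torch
import torch.nn as nn

class TinyMultiExitNet(nn.Module):
    """Toy two-exit CNN: an early branch can answer before the full backbone runs."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)   # cheap early branch
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)   # final, more accurate exit

    def forward(self, x, confidence_threshold: float = 0.9):
        h1 = self.block1(x)
        logits1 = self.exit1(h1.flatten(1))
        # If the early exit is confident enough (or compute is about to be preempted),
        # stop here; otherwise continue to the deeper, more accurate exit.
        if torch.softmax(logits1, dim=1).max() >= confidence_threshold:  # assumes batch size 1
            return logits1
        return self.exit2(self.block2(h1).flatten(1))

model = TinyMultiExitNet().eval()
with torch.no_grad():
    print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```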
**Clarity.** This paper is well written and easy to follow. The figures are well formatted and clearly annotated. Some typos exist but the readability is good.
**Significance.** The paper tackles an important limitation of multi-exit models on edge by its novel application of memory overhead minimization technique and knowledge distillation techniques.
In sum, the paper enjoys the following pros and cons:
Pros:
1. The paper proposes a novel multi-exit architecture and applies several techniques to achieve the design goals.
2. The paper is well written.
Cons:
1. Experiments are limited to traditional architectures like ResNet and VGG; the authors should study the scalability to larger and more recent vision models (e.g., Transformer-based ones).
2. Lack of discussion of potential adaptive threats against this novel multi-exit architecture. Previous works (e.g., [1-3]) show that existing multi-exit models are vulnerable to adversarial examples, backdoor attacks and privacy attacks. Please discuss whether these can be applied here.
3. Please consider a comparison with other model transformations for edge deployment, such as quantization, and discuss the advantages of the multi-exit model.
[1] Hong et al., A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference. ICLR 2021.
[2] Li et al., Auditing membership leakages of multi-exit networks. CCS 2022.
[3] Dong et al., Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing. INFOCOM 2023.
questions: See the comments in cons.
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EbJdgSNIW7 | Unlocking the Non-deterministic Computing Power with Memory-Elastic Multi-Exit Neural Networks | [
"Jiaming Huang",
"Yi Gao",
"Wei Dong"
] | With the increasing demand for Web of Things (WoT) and edge computing, the efficient utilization of limited computing power on edge devices is becoming a crucial challenge. Traditional neural networks (NNs) as web services rely on deterministic computational resources. However, they may fail to output the results on non-deterministic computing power which could be preempted at any time, degrading the task performance significantly. Multi-exit NNs with multiple branches have been proposed as a solution, but the accuracy of intermediate results may be unsatisfactory. In this paper, we propose MEEdge, a system that automatically transforms classic single-exit models into heterogeneous and dynamic multi-exit models which enables Memory-Elastic inference at the Edge with non-deterministic computing power. To build heterogeneous multi-exit models, MEEdge uses efficient convolutions to form a branch zoo and High Priority First (HPF)-based branch placement method for branch growth. To adapt models to dynamically varying computational resources, we employ a novel on-device scheduler for collaboration. Further, to reduce the memory overhead caused by dynamic branches, we propose neuron-level weight sharing and few-shot knowledge distillation(KD) retraining. Our experimental results show that models generated by MEEdge can achieve up to 27.31% better performance than existing multi-exit NNs. | [
"Non-deterministic computing power",
"multi-exit neural networks",
"memory-elastic",
"edge computing"
] | https://openreview.net/pdf?id=EbJdgSNIW7 | BAjJN6MQhw | official_review | 1,700,716,272,341 | EbJdgSNIW7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission154/Reviewer_22e2"
] | review: The paper addresses the challenge of efficiently utilizing limited computing power on edge devices. It introduces MEEdge, a system that transforms classic single-exit models into heterogeneous and dynamic multi-exit models, enabling Memory-Elastic inference at the edge with non-deterministic computing power. The authors propose new approaches such as branch cultivation and HPF-based branch placement to achieve this goal.
### Pros
* Well-motivated
* I appreciate the illustrations used in the paper; they are helpful for readers unfamiliar with this field to navigate such a dense paper
* Automated branch selection and update with dynamic resource constraints
### Cons
* Some errors
* Line 390, page 4: Eqn 1 is confusing: L appears on both sides, and line 388 refers to Eqn 2
* Line 110 p1 “for for” → “for”
questions: * What is the overhead (in terms of runtime and storage) of the offline branch cultivation and HPF? Seems like a costly process given the search space
* The back-and-forth communication between device and server can easily take a few hundred ms, which is on par with the single-exit inference time (Fig. 1). I wonder whether the multi-exit approach is truly beneficial once the server-device communication time is taken into account.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
EVeORls6Oc | Uplift Modeling for Target User Attacks on Recommender Systems | [
"Wenjie Wang",
"Changsheng Wang",
"Fuli Feng",
"Wentao Shi",
"Daizong Ding",
"Tat-Seng Chua"
] | Recommender systems are vulnerable to injective attacks, which inject limited fake users into the platforms to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its unique potential audience, and meanwhile, the attack difficulty across different users varies. Blindly attacking all users will result in a waste of fake user budgets and inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, aiming at promoting target items to a particular user group. In addition, we formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize the attack performance.
Theoretical and empirical analysis demonstrates the rationality of treatment effect estimation methods of UBA. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA. | [
"Recommender Attack",
"Target User Attack",
"Uplift Modeling",
"Treatment Effect Estimation"
] | https://openreview.net/pdf?id=EVeORls6Oc | tZ4eIHYOnb | official_review | 1,700,806,708,661 | EVeORls6Oc | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission507/Reviewer_N8KH"
] | review: Paper summary:
This paper proposes an approach to addressing the vulnerability of recommender systems to injective attacks. By focusing on target user attacks and formulating varying attack difficulty as heterogeneous treatment effects, the Uplift-guided Budget Allocation (UBA) framework optimizes the allocation of fake user budgets to maximize attack performance. The paper presents theoretical and empirical analysis to demonstrate the rationality and effectiveness of UBA and validates its robustness against defense models through extensive experiments on three datasets under various settings. The paper also highlights the significance of target user attacks and introduces related literature on uplift modeling and injective attacks.
===
Pros:
P1. The paper investigates injective attacks on recommender systems, which can inspire existing industries to resist attacks.
P2. The paper validates the effectiveness of the proposed approach through extensive experiments on three datasets under various settings.
P3. The paper provides a comprehensive review of related literature on uplift modeling and injective attacks, which helps readers understand the context and significance of the proposed approach.
===
Cons:
C1. The datasets for evaluation seem to be limited.
C2. The efficiency of the proposed approach needs to be investigated since the budget allocation problem is NP-hard
C3. The writing and experiment settings can be improved.
questions: Q1. The paper uses only three small datasets to evaluate the proposed approach, which may not be representative of all possible scenarios. It would be useful to test the approach on larger datasets to validate its generalizability.
Q2. The efficiency of the proposed framework is not well investigated. It would be better to report the time cost under different budgets and numbers of target users, since the budget allocation problem is NP-hard in these quantities.
Q3. The comparisons with baselines seem to be unfair. The baselines consider all users for promotion, while this paper only focuses on ~100 target users. However, the hit ratio is only computed based on these target users, so it is better to simply extend baselines in the same scenarios.
Q4. Some parts are unclear. For example, in Table 8, why is the time of the whole attack process longer than that of the one-time estimations?
ethics_review_flag: No
ethics_review_description: NA
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
EVeORls6Oc | Uplift Modeling for Target User Attacks on Recommender Systems | [
"Wenjie Wang",
"Changsheng Wang",
"Fuli Feng",
"Wentao Shi",
"Daizong Ding",
"Tat-Seng Chua"
] | Recommender systems are vulnerable to injective attacks, which inject limited fake users into the platforms to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its unique potential audience, and meanwhile, the attack difficulty across different users varies. Blindly attacking all users will result in a waste of fake user budgets and inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, aiming at promoting target items to a particular user group. In addition, we formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize the attack performance.
Theoretical and empirical analysis demonstrates the rationality of treatment effect estimation methods of UBA. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA. | [
"Recommender Attack",
"Target User Attack",
"Uplift Modeling",
"Treatment Effect Estimation"
] | https://openreview.net/pdf?id=EVeORls6Oc | oNYXYJCuKK | official_review | 1,700,708,126,942 | EVeORls6Oc | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission507/Reviewer_NQFn"
] | review: The paper focuses on target user attacks in recommender systems. These attacks aim to manipulate the exposure of specific items to a particular user group. The key novelty is the introduction of the Uplift-guided Budget Allocation (UBA) framework, which optimizes the allocation of fake user budgets based on the estimated treatment effect on each target user. This approach aims to maximize attack performance while addressing the varying difficulty of attacking different users. The framework was empirically tested on three datasets under various scenarios, including different target items, user groups, budget constraints, victim models, and defense models, to validate its effectiveness and robustness.
Pros:
1. Similar to some existing work, the proposed approach (UBA) optimizes the fake user interaction matrix. In addition, UBA considers varying attack difficulty of each user in optimization. By optimizing the allocation of fake user budgets through the UBA framework, the paper presents a more resource-efficient approach to carrying out attacks, maximizing the impact with minimal resources.
2. The framework is tested across multiple scenarios and datasets, which provides a validation of its effectiveness and robustness.
Cons:
1. The main contribution is the target user attack with varying user attack difficulty. As mentioned in the paper, to keep the optimization simple, the paper adopts two disjoint steps: the estimation of the treatment effect Y(u,i)Df and the selection of the treatment. The treatment effect can be estimated either through a surrogate model and simulations or through high-order interaction paths (A3). In addition, the proposed method computes the optimal budget with a dynamic programming algorithm (a generic sketch of this kind of budget allocation is given after this list). These two steps are not jointly optimized, and some of them are computationally expensive.
2. Among the two estimation methods, UBA with a surrogate model seems more computationally expensive. Is there any quantitative analysis of the accuracy-efficiency trade-off for UBA with the surrogate and UBA with A3?
3. Only simple unsupervised defense methods (PCA, FAP) are discussed in the last section. Adding more recent defense methods such as "Denoise unreliable interactions for graph collaborative filtering" would have added more value to the paper.
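To make the efficiency concern concrete, the budget allocation described in point 1 can be viewed as a grouped knapsack, which admits a simple dynamic program whose cost grows with the total budget and the number of target users. The sketch below is a generic illustration with made-up uplift values; it is not the paper's Algorithm 1.

```python
from functools import lru_cache

# Illustrative estimated uplifts: uplift[u][b] = estimated gain for target user u
# when b fake users are allocated to them (made-up numbers, not from the paper).
uplift = [
    [0.0, 0.10, 0.15, 0.17],   # user 0
    [0.0, 0.05, 0.12, 0.20],   # user 1
    [0.0, 0.08, 0.09, 0.10],   # user 2
]
TOTAL_BUDGET = 5
MAX_PER_USER = len(uplift[0]) - 1

@lru_cache(maxsize=None)
def best(u: int, remaining: int):
    """Max total uplift (and allocation) for users u.. given the remaining budget."""
    if u == len(uplift):
        return 0.0, ()
    best_val, best_alloc = float("-inf"), ()
    for b in range(min(MAX_PER_USER, remaining) + 1):
        val, alloc = best(u + 1, remaining - b)
        if uplift[u][b] + val > best_val:
            best_val, best_alloc = uplift[u][b] + val, (b,) + alloc
    return best_val, best_alloc

print(best(0, TOTAL_BUDGET))  # (total uplift, per-user fake-user budgets)
```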
questions: 1. Neural collaborative filtering (neural CF) models are widely used and perform better than matrix factorization (MF). For A3, is it easy to extend the proof from matrix factorization models to neural models?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EVeORls6Oc | Uplift Modeling for Target User Attacks on Recommender Systems | [
"Wenjie Wang",
"Changsheng Wang",
"Fuli Feng",
"Wentao Shi",
"Daizong Ding",
"Tat-Seng Chua"
] | Recommender systems are vulnerable to injective attacks, which inject limited fake users into the platforms to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its unique potential audience, and meanwhile, the attack difficulty across different users varies. Blindly attacking all users will result in a waste of fake user budgets and inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, aiming at promoting target items to a particular user group. In addition, we formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize the attack performance.
Theoretical and empirical analysis demonstrates the rationality of treatment effect estimation methods of UBA. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA. | [
"Recommender Attack",
"Target User Attack",
"Uplift Modeling",
"Treatment Effect Estimation"
] | https://openreview.net/pdf?id=EVeORls6Oc | O1qZLrn816 | official_review | 1,700,655,311,460 | EVeORls6Oc | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission507/Reviewer_sZHj"
] | review: Summary
This paper investigates target user attacks against recommender systems. The authors propose to migrate the methodology of traditional injective attacks to target user attacks by assigning a budget to each target user. They propose a model-agnostic approach, UBA, to obtain the optimal budget allocation plan. To estimate the effect of different budget allocation plans, they propose two estimation methods, namely w/ and w/o (with and without a surrogate model). The w/ variant uses the change in the output of the surrogate model after the attack to estimate the effect of the budget allocation plan, while the w/o variant uses the number of high-order interaction paths between the user and the item. Finally, they experimentally validate the effectiveness of the method.
Pros
1. The authors propose a model-agnostic approach to migrate the methodology of traditional injective attacks to target user attacks, which is interesting and effective.
2. The authors find a correlation between recommendation prediction scores and the number of high-order interaction paths, which is simple but interesting.
3. The paper is well-structured.
Cons
1. The authors estimate the treatment effect for each target user individually, but in practice the attack on one user is likely to result in a change in the recommendation for other users. Therefore, there are limitations in assigning budgets to each target user and decomposing multi-targeted target user attacks into multiple single-targeted attacks.
2. The authors used existing attack methods directly for experiments. It would be better to propose new attack algorithms based on the UBA architecture.
3. In the hyper-parameter tuning section, the choices of α and β are too few.
questions: 1. Proposition 1 is derived on which dataset? Has it been validated on other datasets?
2. In Section 4.1, the authors present two hyperparameters, α and β, and assume that they are invariant after the attack. I would like to know whether there is any justification for this assumption.
3. Since Section 4.1 establishes a strong correlation between recommendation prediction scores and the number of high-order interaction paths, why not generate attacks based on it directly? For example, according to Proposition 2, we can copy the target user's interactions as the fake user's interactions and add an interaction with the target item, which maximizes the number of three-order paths (see the sketch after these questions).
4. Why are the "Target" and "UBA" groups only evaluated on AIA, AUSH, and Legup?
5. In Algorithm 1, B[k] is always equal to k. What is the significance of B?
6. Why does the "Target" group in Table 6 outperform the original attack algorithm despite having an optimization objective different from the attack objective?
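To illustrate question 3, here is a minimal sketch of how three-order user-item path counts can be computed from a binary interaction matrix and how a copied fake user changes them. The matrix, target item, and user index are made up for illustration and are not taken from the paper.

```python
import numpy as np

# Toy binary user-item interaction matrix R (5 users x 4 items); made-up data.
R = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
], dtype=float)
target_item, target_user = 3, 0   # hypothetical attack target

# (R @ R.T @ R)[u, i] counts three-order paths u -> item j -> user v -> item i.
before = (R @ R.T @ R)[target_user, target_item]

# Naive fake user: copy the target user's row and add an interaction with the target item.
fake = R[target_user].copy()
fake[target_item] = 1.0
R_attacked = np.vstack([R, fake])
after = (R_attacked @ R_attacked.T @ R_attacked)[target_user, target_item]

print(before, after)   # the injected copy adds new paths towards the target item
```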
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
EVeORls6Oc | Uplift Modeling for Target User Attacks on Recommender Systems | [
"Wenjie Wang",
"Changsheng Wang",
"Fuli Feng",
"Wentao Shi",
"Daizong Ding",
"Tat-Seng Chua"
] | Recommender systems are vulnerable to injective attacks, which inject limited fake users into the platforms to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its unique potential audience, and meanwhile, the attack difficulty across different users varies. Blindly attacking all users will result in a waste of fake user budgets and inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, aiming at promoting target items to a particular user group. In addition, we formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize the attack performance.
Theoretical and empirical analysis demonstrates the rationality of treatment effect estimation methods of UBA. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA. | [
"Recommender Attack",
"Target User Attack",
"Uplift Modeling",
"Treatment Effect Estimation"
] | https://openreview.net/pdf?id=EVeORls6Oc | LNhjw8UKX6 | official_review | 1,701,042,769,284 | EVeORls6Oc | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission507/Reviewer_JrHP"
] | review: ### Summary
This paper presents a framework for a new type of injective attack on recommender systems, namely target user attacks. The authors propose a new framework called Uplift-guided Budget Allocation (UBA) and show its attack effectiveness on three datasets under various settings.
### Strength
1. It is an interesting and meaningful setting to attack target user groups, i.e., to expose a target item to a specific user group instead of all users.
2. This model is technically sound.
3. The effectiveness of this proposed method has been justified with extensive experiments.
### Weakness
1. Related work: recently, a group of new injective recommender attack papers based on model extraction has appeared [1-3], but these are missing from the introduction's discussion of existing attacks.
2. The authors provided an expired repo link, so the reproducibility cannot be verified.
### Reference
1. Yue, Zhenrui, et al. "Black-box attacks on sequential recommenders via data-free model extraction." *Proceedings of the 15th ACM Conference on Recommender Systems*. 2021.
2. Chen, Jingfan, et al. "Knowledge-enhanced black-box attacks for recommendations." *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*. 2022.
3. Nguyen, Thanh Toan, et al. "Poisoning GNN-based recommender systems with generative surrogate-based attacks." *ACM Transactions on Information Systems* (2022).
questions: 1. Is there any discussion or consideration of ethical aspects related to the attacking methods?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
EVeORls6Oc | Uplift Modeling for Target User Attacks on Recommender Systems | [
"Wenjie Wang",
"Changsheng Wang",
"Fuli Feng",
"Wentao Shi",
"Daizong Ding",
"Tat-Seng Chua"
] | Recommender systems are vulnerable to injective attacks, which inject limited fake users into the platforms to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its unique potential audience, and meanwhile, the attack difficulty across different users varies. Blindly attacking all users will result in a waste of fake user budgets and inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, aiming at promoting target items to a particular user group. In addition, we formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize the attack performance.
Theoretical and empirical analysis demonstrates the rationality of treatment effect estimation methods of UBA. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA. | [
"Recommender Attack",
"Target User Attack",
"Uplift Modeling",
"Treatment Effect Estimation"
] | https://openreview.net/pdf?id=EVeORls6Oc | 81Kf0doh28 | decision | 1,705,909,244,154 | EVeORls6Oc | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The reviewers were broadly very positive about this work, both in terms of novelty and technical quality. The authors provided in-depth explanations and more details in the discussion phase. Code and data are shared by the authors for reproducibility. Overall, this could be a solid contribution to the conference. |
EVeORls6Oc | Uplift Modeling for Target User Attacks on Recommender Systems | [
"Wenjie Wang",
"Changsheng Wang",
"Fuli Feng",
"Wentao Shi",
"Daizong Ding",
"Tat-Seng Chua"
] | Recommender systems are vulnerable to injective attacks, which inject limited fake users into the platforms to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its unique potential audience, and meanwhile, the attack difficulty across different users varies. Blindly attacking all users will result in a waste of fake user budgets and inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, aiming at promoting target items to a particular user group. In addition, we formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize the attack performance.
Theoretical and empirical analysis demonstrates the rationality of treatment effect estimation methods of UBA. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA. | [
"Recommender Attack",
"Target User Attack",
"Uplift Modeling",
"Treatment Effect Estimation"
] | https://openreview.net/pdf?id=EVeORls6Oc | 3WfDApQ2jR | official_review | 1,700,938,246,838 | EVeORls6Oc | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission507/Reviewer_ufRF"
] | review: The paper proposes a novel attack approach against recommender systems called Uplift-guided Budget Allocation (UBA), which calculates the optimal allocation of budgets to generate and inject fake users into the system. The authors' initial assumption is that not all users might be interested in a specific target item for the attack (as users are usually clustered according to their expressed preferences). Thus, to avoid wasting budget on users who are unwilling to interact with the target attack item, the authors produce attacks that are specifically tailored to each user. As the UBA framework is model-agnostic and it is assumed that the actual recommendation system is not known in advance (i.e., a black-box scenario), the authors propose two variants, in which the attack either leverages a surrogate recommendation model for the real one (e.g., MF) or simulates the user's paths within the user-item graph in a random-walk-like manner with three-hop walks. The UBA framework is tested against a suite of 10 other state-of-the-art attack strategies, with three possible recommendation systems as surrogate models, on three recommendation datasets. The experiments, which are further conducted along an extensive set of evaluation dimensions, demonstrate the efficacy of the proposed approach in all its components and design strategies.
*Pros*:
- The authors extensively state the current issues and how their framework may address them
- The paper is well-structured and easy to follow
- The proposed methodology is sound thanks to the theoretical and empirical demonstrations/intuitions provided by the authors in the main and appendix parts of the paper
- The experimental setting is extensive with several baselines, recommendation datasets, and evaluation dimensions
*Cons*:
- Despite the code being shared at review time, the provided URL seems to be broken
*Detailed comments*
Overall, the paper proposes a very nice approach well-placed in the existing literature, outlining the critical aspects of the current solutions and how the introduced framework may address them. From a structural viewpoint, the paper is well-written, and its narrative is easy to follow even for those who are not very familiar with the main topics of the work. In terms of methodology, the UBA framework seems to be adequately sound in all its formulations and theoretical foundations, which are extensively investigated and demonstrated (especially in the appendix). Finally, the experimental setting is extensive as several baselines are tested, on a sufficient group of recommendation datasets, and numerous evaluation dimensions are considered to encompass all possible facets of the approach. The only negative aspect I can see here is that the code, despite being shared, is not accessible at review time (maybe the URL is broken or expired).
questions: - Can the authors provide intuitions or mathematical proofs of how the UBA framework may work better in the setting without the surrogate model?
- To the best of my knowledge, the intuition (theoretically and empirically demonstrated in the appendix) that the three-order path $A^3$ is positively correlated with the prediction score of any CF model is not completely new to the field. Indeed, the authors of [*] also used the three-hop adjacency matrix to simulate the paths of users exploring distant items. In light of this, can the authors explain what the connections are between their intuitions/demonstrations and the cited paper?
[*] Bibek Paudel, Fabian Christoffel, Chris Newell, Abraham Bernstein: Updatable, Accurate, Diverse, and Scalable Recommendations for Interactive Applications. ACM Trans. Interact. Intell. Syst. 7(1): 1:1-1:34 (2017)
ethics_review_flag: No
ethics_review_description: No issue
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E7zJ6uzdVd | BlockDFL: A Blockchain-based Fully Decentralized Peer-to-Peer Federated Learning Framework | [
"Zhen Qin",
"Xueqiang YAN",
"Mengchu Zhou",
"Shuiguang Deng"
] | Federated learning (FL) enables collaborative training of machine learning models without sharing training data. Traditional FL heavily relies on a trusted centralized server. Although decentralized FL eliminates the central dependence, it may worsen the other inherent problems faced by FL, such as poisoning attacks and data representation leakage due to insufficient restrictions on the behavior of participants, and heavy communication cost, especially in fully decentralized scenarios, i.e., peer-to-peer (P2P) settings. In this paper, we propose a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust, while effectively defending against poisoning attacks. Gradient compression is introduced to lower communication cost and prevent data from being reconstructed from transmitted model updates. Extensive experiments conducted on two real-world datasets show that BlockDFL obtains competitive accuracy compared to centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. Even when the proportion of malicious participants is as high as 40%, BlockDFL can still preserve the accuracy of FL, outperforming existing fully decentralized P2P FL frameworks based on blockchain. | [
"Decentralized Federated Learning",
"Peer-to-Peer",
"Blockchain",
"Trustworthy Federated Learning"
] | https://openreview.net/pdf?id=E7zJ6uzdVd | vYWI0K7YAy | official_review | 1,699,273,285,414 | E7zJ6uzdVd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission640/Reviewer_eALM"
] | review: ### Summary:
The paper proposes BlockDFL, an approach to address issues prevalent in other federated learning (FL) approaches, such as the risk of data poisoning and data reconstruction attacks. It relies on a PBFT-based voting mechanism as well as a two-layered scoring concept. Further, it periodically assigns participants into one of three roles based on the hash of the most recent block. The paper compares BlockDFL to traditional (not fully decentralized) FL in terms of accuracy and achieves comparable results. The paper further stresses the superiority of BlockDFL over other approaches when a large number of malicious users are involved (BlockDFL can tolerate up to 40% of malicious users and still provides reasonable accuracy).
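For context, the role-rotation step can be pictured roughly as follows: every peer re-derives the next round's roles from the hash of the latest block, so no coordinator is needed. The stake-weighted sampling, role counts, and variable names below are illustrative assumptions of mine, not necessarily the exact selection rule used in BlockDFL.

```python
import hashlib
import random

def assign_roles(participants, stakes, latest_block_hash, n_aggregators=1, n_verifiers=3):
    """Derive next-round roles deterministically from the latest block hash.

    Every peer can recompute the same assignment locally. Stake-weighted sampling
    without replacement is used here as one plausible choice; the paper's exact
    selection rule may differ.
    """
    seed = int(hashlib.sha256(latest_block_hash.encode()).hexdigest(), 16)
    rng = random.Random(seed)

    remaining = list(participants)
    picked = []
    while remaining and len(picked) < n_aggregators + n_verifiers:
        choice = rng.choices(remaining, weights=[stakes[p] for p in remaining], k=1)[0]
        picked.append(choice)
        remaining.remove(choice)

    aggregators = picked[:n_aggregators]
    verifiers = picked[n_aggregators:]
    trainers = remaining  # everyone else provides local updates this round
    return aggregators, verifiers, trainers

peers = ["p1", "p2", "p3", "p4", "p5", "p6"]
stake = {p: 1.0 + i for i, p in enumerate(peers)}
print(assign_roles(peers, stake, latest_block_hash="0xabc123"))
```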
### Pros:
+1: Well-written paper with easy-to-understand illustrations and figures
+2: Extensive evaluation, which includes comparisons to related work
### Cons:
-1: Non-blockchain-based decentralized FL approaches are not really considered
-2: Details on the deployment and associated overhead/costs are not discussed
The paper is well-written, and I was able to follow the presentation despite not being an expert in federated learning. The presentation, illustration, and comparison of related work create a nice reading experience, which motivates to read on. Thanks for doing a great job in this regard. Additionally, I like that the performance of the approach is also compared to a "traditional", i.e., not fully decentralized, FL approach.
### Detailed Comments:
#### -1: Other Related Work
Apart from comparing the accuracy to a not fully decentralized FL approach, the paper only considers blockchain-based FL approaches when discussing related work. In my opinion, this focus weakens the papers since non-blockchain-based approaches, which do not make use of a centralized aggregator/trusted party, have been proposed in the past. As a result, the paper is currently not presenting the full picture. This limitation also applies to the nicely curated comparison of related work (Table 2). To name a few, the authors could take a look at DOIs 10.1109/TSIPN.2022.3151242 and 10.1038/s41467-023-38569-4, but I am confident that there are more relevant approaches out there. Hence, moving forward, I would expect the authors to also consider this angle.
#### -2: Deployment Discussion
The design of BlockDFL builds on blockchain technology for its operation. However, from my point of view, the paper omits certain aspects related to this design choice, making it challenging to holistically assess the paper. For example, the paper does not state what the properties of the underlying blockchain are, e.g., permissioned or permissionless. Moreover, it fails to explicitly state who operates the blockchain. I believe that the entities participating in the FL are also the operators of the blockchain. However, this information is never presented, as far as I noticed.
On a related note, the evaluation section never looks into the overhead that is being introduced by building the design on a blockchain. While it provides some numbers on the performance in Figure 5, an analysis of the storage overhead is missing. I would like to know how quickly the storage grows over time and how realistic such numbers are/would be for real-world applications. Without these details, the system's impact cannot be fully judged.
#### Other:
- Introduction: I would like to see references to the statements in Lines 69 and 103.
- Figure 1: The authors could consider updating the figure to better stress that Steps 2-3 and 3-4 occur for each node with the selected role. At the moment, the visualization could also imply that a single update is being sent.
- Related Work: The content (and arguments) presented in this section largely overlaps with the introduction. Thus, it is somewhat repetitive. I believe that this space could better be used to present additional information (see above).
- Krum vs. Multi-Krum: The paper mixes the references [3] and [26] throughout and refers to both approaches as Krum. In my opinion, this is confusing. Therefore, I suggest the authors use multi-Krum when talking about Biscotti.
- The relevance to the Web is only briefly outlined at the beginning of the introduction. The authors could better stress this link throughout the paper, or at least in the conclusion, to put the contributions into perspective.
### Post-Rebuttal
I kindly thank the authors for responding to the reviews, providing a lot of additional information, and outlining their proposed changes.
They cleared up several aspects that have previously been missing from the manuscript to truly comprehend it.
Hopefully, the authors will be able to convincingly incorporate the proposed changes when revising their paper.
questions: What is the reason for only focusing on blockchain-based FL approaches? Is there a convincing argument to exclude other decentralized approaches from the comparison?
What is the overhead of utilizing BlockDFL storage-wise? How does it compare to related work?
Who is a participant/node of the underlying blockchain, only entities that also participate in the federated learning?
What kind of underlying blockchain is being used? What are its properties (e.g., permissioned vs. permissionless)?
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
E7zJ6uzdVd | BlockDFL: A Blockchain-based Fully Decentralized Peer-to-Peer Federated Learning Framework | [
"Zhen Qin",
"Xueqiang YAN",
"Mengchu Zhou",
"Shuiguang Deng"
] | Federated learning (FL) enables collaborative training of machine learning models without sharing training data. Traditional FL heavily relies on a trusted centralized server. Although decentralized FL eliminates the central dependence, it may worsen the other inherent problems faced by FL, such as poisoning attacks and data representation leakage due to insufficient restrictions on the behavior of participants, and heavy communication cost, especially in fully decentralized scenarios, i.e., peer-to-peer (P2P) settings. In this paper, we propose a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust, while effectively defending against poisoning attacks. Gradient compression is introduced to lower communication cost and prevent data from being reconstructed from transmitted model updates. Extensive experiments conducted on two real-world datasets show that BlockDFL obtains competitive accuracy compared to centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. Even when the proportion of malicious participants is as high as 40%, BlockDFL can still preserve the accuracy of FL, outperforming existing fully decentralized P2P FL frameworks based on blockchain. | [
"Decentralized Federated Learning",
"Peer-to-Peer",
"Blockchain",
"Trustworthy Federated Learning"
] | https://openreview.net/pdf?id=E7zJ6uzdVd | qJ9BzV4UtW | decision | 1,705,909,238,923 | E7zJ6uzdVd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The paper received 5 reviews. Generally, the reviews were positive, with some minor concerns. The authors engaged with the reviewers effectively to address their concerns during the rebuttal. After the discussion phase, the final recommendations were 2 accept, 2 weak accept and 1 borderline. After reviewing the reviews and discussions, I am recommending acceptance. The paper makes some solid contributions and the issues raised can be addressed during the camera-ready phase. |
E7zJ6uzdVd | BlockDFL: A Blockchain-based Fully Decentralized Peer-to-Peer Federated Learning Framework | [
"Zhen Qin",
"Xueqiang YAN",
"Mengchu Zhou",
"Shuiguang Deng"
] | Federated learning (FL) enables collaborative training of machine learning models without sharing training data. Traditional FL heavily relies on a trusted centralized server. Although decentralized FL eliminates the central dependence, it may worsen the other inherent problems faced by FL, such as poisoning attacks and data representation leakage due to insufficient restrictions on the behavior of participants, and heavy communication cost, especially in fully decentralized scenarios, i.e., peer-to-peer (P2P) settings. In this paper, we propose a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust, while effectively defending against poisoning attacks. Gradient compression is introduced to lower communication cost and prevent data from being reconstructed from transmitted model updates. Extensive experiments conducted on two real-world datasets show that BlockDFL obtains competitive accuracy compared to centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. Even when the proportion of malicious participants is as high as 40%, BlockDFL can still preserve the accuracy of FL, outperforming existing fully decentralized P2P FL frameworks based on blockchain. | [
"Decentralized Federated Learning",
"Peer-to-Peer",
"Blockchain",
"Trustworthy Federated Learning"
] | https://openreview.net/pdf?id=E7zJ6uzdVd | Wxf0svuUSY | official_review | 1,700,184,607,380 | E7zJ6uzdVd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission640/Reviewer_Md1b"
] | review: **Paper summary**
The paper employs blockchain to address the untrustworthiness of federated learning (FL) participants, including both the aggregator and the clients. It introduces BlockDFL, a decentralized peer-to-peer (P2P) framework that further decentralizes current aggregator-based FL methods. It includes existing scoring and defence mechanisms such as Krum and gradient compression to mitigate issues such as poisoning attacks and model inversion attacks.
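For reference, the Krum rule mentioned above can be sketched in a few lines (a simplified single-selection version; multi-Krum instead averages the several lowest-scoring updates). The toy data and the Byzantine bound below are illustrative only.

```python
import numpy as np

def krum_select(updates: np.ndarray, num_byzantine: int) -> int:
    """Return the index of the update Krum would pick.

    updates: (n, d) array of flattened model updates.
    num_byzantine: assumed upper bound f on malicious clients.
    Krum scores each update by the sum of squared distances to its
    n - f - 2 closest neighbours and selects the lowest-scoring one.
    """
    n = len(updates)
    closest = n - num_byzantine - 2
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1) ** 2
    scores = []
    for i in range(n):
        d = np.delete(dists[i], i)          # distances to all other updates
        scores.append(np.sort(d)[:closest].sum())
    return int(np.argmin(scores))

# Toy example: 5 honest-ish updates plus one obvious outlier.
rng = np.random.default_rng(0)
updates = np.vstack([rng.normal(0, 0.1, size=(5, 4)), np.full((1, 4), 10.0)])
print(krum_select(updates, num_byzantine=1))  # picks one of the clustered updates
```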
**Strengths**
+ A framework that leverages the transparency of blockchain to address the trust in FL methods.
+ The paper is well-structured and well-written
**Weaknesses**
- Lack of technical contribution. Leveraging blockchain for decentralized FL seems an intuitive idea. The defence against poisoning and model inversion attacks is also a straightforward application of existing mechanisms.
- Experimental settings are questionable (see detailed comments below)
**Detailed comments**
This paper presents an intriguing approach of leveraging blockchain for decentralized FL. The soundness of the approach is reasonable, given that all employed mechanisms are mature. The paper is well-written. Below I elaborate on the weaknesses listed above.
***Significance***
The paper seems to fall short in demonstrating a significant technical contribution. Leveraging blockchain for decentralized FL appears intuitive, and the defense against poisoning and model inversion attacks seems to be a straightforward application of existing mechanisms. A deeper exploration of novel contributions, algorithms, or methodologies would enhance the paper's scientific impact.
***Evaluation***
The choice of small-scale datasets, i.e., MNIST and CIFAR-10, raises concerns about the generalization of BlockDFL to complex real-world scenarios.
No FHE- or MPC-based solutions are selected as the baseline (Section 5.3). Given that these solutions are considered among the most costly, it is imperative to benchmark BlockDFL against them. A comprehensive evaluation against them would provide a clearer understanding of the relative trade-offs in terms of computational overhead and security.
questions: See my comments above.
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
E7zJ6uzdVd | BlockDFL: A Blockchain-based Fully Decentralized Peer-to-Peer Federated Learning Framework | [
"Zhen Qin",
"Xueqiang YAN",
"Mengchu Zhou",
"Shuiguang Deng"
] | Federated learning (FL) enables collaborative training of machine learning models without sharing training data. Traditional FL heavily relies on a trusted centralized server. Although decentralized FL eliminates the central dependence, it may worsen the other inherent problems faced by FL, such as poisoning attacks and data representation leakage due to insufficient restrictions on the behavior of participants, and heavy communication cost, especially in fully decentralized scenarios, i.e., peer-to-peer (P2P) settings. In this paper, we propose a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust, while effectively defending against poisoning attacks. Gradient compression is introduced to lower communication cost and prevent data from being reconstructed from transmitted model updates. Extensive experiments conducted on two real-world datasets show that BlockDFL obtains competitive accuracy compared to centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. Even when the proportion of malicious participants is as high as 40%, BlockDFL can still preserve the accuracy of FL, outperforming existing fully decentralized P2P FL frameworks based on blockchain. | [
"Decentralized Federated Learning",
"Peer-to-Peer",
"Blockchain",
"Trustworthy Federated Learning"
] | https://openreview.net/pdf?id=E7zJ6uzdVd | 8tmMG8eB1W | official_review | 1,700,459,198,933 | E7zJ6uzdVd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission640/Reviewer_2pGz"
] | review: This paper proposes a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust.
pros:
1. This paper evaluates the proposed framework on two real-world datasets.
2. This paper has a good structure.
cons:
1. How can it be theoretically proven that BlockDFL prevents the global model from being jeopardized by poisoning attacks and prevents the private training data from being revealed?
2. It is not reasonable to claim that BlockDFL can resist poisoning attacks when there are up to 40% malicious participants for both IID and non-IID data, as the result is not theoretically proven.
3. Two works used for comparison lack representativeness. For example, why not compare the proposed solution with the original FL?
4. In Fig. 2(b) and (d), when facing 60% malicious participants, the average accuracy of vanilla FL is higher than that of the proposed framework. It would be better to explain this result.
5. When evaluating time consumption and scalability, why do the authors only use one dataset? Moreover, the authors do not compare the proposed framework with vanilla FL.
questions: see cons.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E7zJ6uzdVd | BlockDFL: A Blockchain-based Fully Decentralized Peer-to-Peer Federated Learning Framework | [
"Zhen Qin",
"Xueqiang YAN",
"Mengchu Zhou",
"Shuiguang Deng"
] | Federated learning (FL) enables collaborative training of machine learning models without sharing training data. Traditional FL heavily relies on a trusted centralized server. Although decentralized FL eliminates the central dependence, it may worsen the other inherent problems faced by FL, such as poisoning attacks and data representation leakage due to insufficient restrictions on the behavior of participants, and heavy communication cost, especially in fully decentralized scenarios, i.e., peer-to-peer (P2P) settings. In this paper, we propose a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust, while effectively defending against poisoning attacks. Gradient compression is introduced to lower communication cost and prevent data from being reconstructed from transmitted model updates. Extensive experiments conducted on two real-world datasets show that BlockDFL obtains competitive accuracy compared to centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. Even when the proportion of malicious participants is as high as 40%, BlockDFL can still preserve the accuracy of FL, outperforming existing fully decentralized P2P FL frameworks based on blockchain. | [
"Decentralized Federated Learning",
"Peer-to-Peer",
"Blockchain",
"Trustworthy Federated Learning"
] | https://openreview.net/pdf?id=E7zJ6uzdVd | 6rvnxvVXgt | official_review | 1,699,162,876,325 | E7zJ6uzdVd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission640/Reviewer_qDqw"
] | review: ### Summary
In this paper, the authors propose and implement a blockchain-based federated learning framework named BlockDFL. Specifically, it addresses not only the centralization issue in traditional federated learning, but also the privacy leakage and poisoning attack issues in untrusted decentralized settings. The experimental results show that, compared to traditional federated learning, BlockDFL also achieves good efficiency and scalability. Moreover, compared to other decentralized solutions for federated learning, BlockDFL can resist a higher proportion of malicious users while remaining sufficiently efficient on non-trivial datasets.
### Strength
- This paper proposes BlockDFL, a decentralized framework for federated learning. It considers the inherent issues, i.e., malicious nodes, privacy leakage, and the efficiency issue, and proposes the corresponding solutions for these issues.
- The experiments are comprehensive, quantitatively evaluating the tolerance to malicious nodes as well as the efficiency and scalability of BlockDFL. Moreover, the authors also qualitatively compare BlockDFL with other decentralized federated learning solutions.
- Writing quality is good, and the paper is quite readable.
### Weakness
- Some key parts require more explicit discussion. For example, there should be a section discussing the `threat model` to explicitly state the capabilities of malicious users in federated learning, and a `threats to validity` section to discuss the limitations of this work.
- The experiments can be further improved. For example, among the three issues addressed, the protection against privacy leakage is only discussed in text, rather than validated by a quantitative experiment like the other two.
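As a concrete reference point for such a quantitative check: the privacy-related mechanism in question is gradient compression, which is typically of the top-k sparsification kind sketched below. The compression ratio and shapes are illustrative; the paper's exact compressor may differ.

```python
import numpy as np

def topk_compress(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries.

    Returns (indices, values); everything else is implicitly zero. Transmitting
    only this sparse payload both cuts communication and limits how much of the
    raw update (and hence training data) a receiver could try to reconstruct.
    """
    k = max(1, int(grad.size * ratio))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, values, shape):
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

g = np.random.randn(4, 5)
idx, vals = topk_compress(g, ratio=0.2)
print(topk_decompress(idx, vals, g.shape))
```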
### Comments
Except for the main concerns I raised above, here are some minor ones.
At L286, the authors say "The participant whose portion corresponds to $h_{-1}$ is selected as the first aggregator". It is not clear what `corresponds to` means here. This should be clarified.
In Sections 5.2 and 5.3, the authors focus more on describing the experimental results. However, I think the authors should pay some attention to analyzing the results, e.g., why such a distinction exists between BlockDFL and the other solutions. This would make the conclusions more convincing.
In Fig. 3, 4, and 5, the authors should offer different legends and line types for different candidates to make sure readers can distinguish them even in monochromatic (black and white) print.
Some typos and formatting issues exist:
- L177: “Such kind of risk” -> “Such a kind of risk”;
- L424: “denote the” -> “denotes the”;
- L537: remove the subsubsection title.
questions: Please refer to the `Review` part.
ethics_review_flag: No
ethics_review_description: -
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E7zJ6uzdVd | BlockDFL: A Blockchain-based Fully Decentralized Peer-to-Peer Federated Learning Framework | [
"Zhen Qin",
"Xueqiang YAN",
"Mengchu Zhou",
"Shuiguang Deng"
] | Federated learning (FL) enables collaborative training of machine learning models without sharing training data. Traditional FL heavily relies on a trusted centralized server. Although decentralized FL eliminates the central dependence, it may worsen the other inherent problems faced by FL, such as poisoning attacks and data representation leakage due to insufficient restrictions on the behavior of participants, and heavy communication cost, especially in fully decentralized scenarios, i.e., peer-to-peer (P2P) settings. In this paper, we propose a blockchain-based fully decentralized P2P framework for FL, called BlockDFL. It takes blockchain as the foundation, leveraging the proposed PBFT-based voting mechanism and two-layer scoring mechanism to coordinate FL among peer participants without mutual trust, while effectively defending against poisoning attacks. Gradient compression is introduced to lower communication cost and prevent data from being reconstructed from transmitted model updates. Extensive experiments conducted on two real-world datasets show that BlockDFL obtains competitive accuracy compared to centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. Even when the proportion of malicious participants is as high as 40%, BlockDFL can still preserve the accuracy of FL, outperforming existing fully decentralized P2P FL frameworks based on blockchain. | [
"Decentralized Federated Learning",
"Peer-to-Peer",
"Blockchain",
"Trustworthy Federated Learning"
] | https://openreview.net/pdf?id=E7zJ6uzdVd | 1NgWbX4AF0 | official_review | 1,700,646,264,475 | E7zJ6uzdVd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission640/Reviewer_KebL"
] | review: This paper proposes BlockDFL as a fully decentralized, i.e. peer-to-peer-based, framework for federated learning that has a claimed resilience against poisoning attacks when up to 40% of the participants providing updates to the federated machine-learning model are malicious.
The authors make use of a blockchain that is updated via PBFT by a committee that is newly selected in each communication round.
BlockDFL uses the blockchain to draw randomness for reassigning the roles of update providers, aggregators, and the verifiers constituting the PBFT committee, to manage participants' stakes for contributing honestly to the learned model, and to disseminate global model updates.
The authors provide an overall convincing approach for a P2P-based federated-learning architecture and sensibly evaluate their proposed design, yielding competitive results compared to related approaches.
However, there are some points worthwhile addressing:
- ~~Relevance of the paper to TheWebConf's Systems track seems like an afterthought that is briefly motivated in the introduction but never picked up again.~~ (The authors have addressed this concern during the rebuttal and I expect that they will update their manuscript in the interest of pinpointing the relevance of their contributions). The paper generally reads as if it was addressed to a different audience since preliminary knowledge of concepts from federated learning is mandatory and Section 2 is kept at a minimum. For instance, Krum and its application play a central role in distinguishing BlockDFL from other approaches; hence, the authors should provide some technical background to aid the reader going forward (a minimal sketch of Krum is given after this list for orientation).
- In this regard, the authors should consider condensing the introduction to what is necessary to motivate the approach, and move technical discussions to Section 2 as much as possible.
- Section 2 (Preliminaries) partly antedates design choices of BlockDFL, which should be restricted to Sections 3 and 4, respectively.
- The paper should better reflect the authors' actual contributions, especially in contrast to the seemingly very similar approach of Biscotti [26]. To this end, I suggest:
- Move Section 6 (Related Work) to the front, before the current Section 3, so that the following sections can better communicate the changes made over [26].
- Currently, Sections 5.4 and Section 6 have a very similar theme (albeit with different scope) anyway. Maybe the authors could use the current Section 6, when moved, to better motivate the current research gap and the design goals later on.
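For readers who, like me, would appreciate the missing background on Krum, here is a minimal sketch of the Krum rule (Blanchard et al., NeurIPS 2017): the aggregator keeps the single update whose summed squared distance to its closest neighbours is smallest. The data and parameters below are synthetic and only illustrate the selection rule, not how BlockDFL integrates it.

```python
import numpy as np

def krum(updates: np.ndarray, n_byzantine: int) -> np.ndarray:
    """Minimal Krum rule: return the update whose summed squared distance to
    its n - f - 2 closest other updates is smallest (f = assumed Byzantine count)."""
    n = len(updates)
    n_closest = n - n_byzantine - 2
    assert n_closest >= 1, "Krum needs n > f + 2"
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)          # pairwise squared distances
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))      # distances to the other updates
        scores.append(d[:n_closest].sum())       # keep only the closest neighbours
    return updates[int(np.argmin(scores))]

# 8 roughly honest updates plus 2 crude poisoning attempts far from the rest.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5))
poisoned = np.full((2, 5), 10.0)
print(krum(np.vstack([honest, poisoned]), n_byzantine=2))
```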
**Minor comments:**
- Section 1: "It can also record stake for monetary reward ..."; the relation between both parts of that sentence is unclear.
- Section 1: "For example, protecting the privacy ..." is not a full sentence.
- Section 2.1: "For example, Krum [3] regards the model model updates [that?] significantly differ..."; the word "that" seems to be missing here.
- Section 5.2: That the portion of malicious participants is increased by 10% at a time (between 0% and 60%) is only implicit when taking Figures 2 and 3 into account.
Update: I acknowledge that I have read the authors' rebuttal comments.
questions: - This is likely relevant to all proof-of-stake-based approaches, but what effectively happens if the verifiers disagree and do not reach consensus on the round's block? What happens to the provided stakes then, and how are the stakes locked in the first place?
- Regarding Section 5.1: How would non-uniform stakes, i.e. different weights for potentially malicious participants, affect the accuracy of BlockDFL, and how sustainable are attacks based on the attackers losing stake in each round?
- Section 5.2: What does the "lowest power of DP" mean? Weak protection (large $\varepsilon$) or small $\varepsilon$ (better protection)?
- Section 5.2: Is there any intuition why BlockDFL achieves slightly better accuracy than centralized FL (99.29% vs. 99.28%) even when attackers are absent?
- Section 5.2: Is there an explanation for the dips of successful attacks for a 40%-attacker in the non-IID datasets (Figures 3b and 3d)?
ethics_review_flag: No
ethics_review_description: -
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 4
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
E5hHCiZcat | Differentially Private Selection from Secure Distributed Computing | [
"Ivan Damgård",
"Hannah Keller",
"Boel Nelson",
"Claudio Orlandi",
"Rasmus Pagh"
] | Given a collection of vectors $\boldsymbol{x}^{(1)},\dots,\boldsymbol{x}^{(n)} \in \{0,1\}^d$, the *selection* problem asks to report the index of an "approximately largest" entry in $\boldsymbol{x}=\sum_{j=1}^n \boldsymbol{x}^{(j)}$.
Selection abstracts a host of problems, for example: Recommendation of a popular item based on user feedback; releasing statistics on the most popular web sites; hyperparameter tuning and feature selection in machine learning.
We study selection under differential privacy, where a released index guarantees privacy for individual vectors.
Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions.
Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings.
For example, in the popular *shuffle model* of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.
In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model---with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy.
Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that:
(i) has utility on par with the best mechanisms in the central model,
(ii) scales to large, distributed collections of high-dimensional vectors, and
(iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority.
Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of *integer secret sharing*, and evaluate the utility and efficiency of our protocol both theoretically and empirically.
Our protocol improves on previous work by Champion, shelat and Ullman (CCS '19) by significantly reducing the communication costs, demonstrating that large-scale differentially private selection with information-theoretical guarantees is feasible in a distributed setting. | [
"differential privacy",
"selection",
"cryptography",
"multi-party computation"
] | https://openreview.net/pdf?id=E5hHCiZcat | hypboY6dgj | official_review | 1,701,413,652,406 | E5hHCiZcat | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission675/Reviewer_WbAW"
] | review: The paper focuses on the development and analysis of algorithms for differentially private selection in the context of secure distributed computing. The authors propose a novel method for selecting the maximum value from a set of data while preserving differential privacy and utilizing multi-party computation (MPC). The methodology involves adapting existing differentially private selection algorithms to work within an MPC framework, ensuring privacy and security throughout the process.
## Strengths:
- Unique approach to differentially private selection by integrating it with secure multi-party computation, addressing both privacy and security concerns effectively.
- Thoroughly evaluated algorithms on various datasets, providing a robust assessment of their performance and utility.
## Areas for Improvement:
- The complexity of implementing these algorithms in practical, real-world systems may be high, potentially limiting their accessibility and usability.
questions: - How does the performance of your algorithms compare to non-MPC-based differentially private selection methods in terms of accuracy and computational efficiency?
- How might the implementation complexity of these algorithms affect their practical adoption in real-world applications?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
E5hHCiZcat | Differentially Private Selection from Secure Distributed Computing | [
"Ivan Damgård",
"Hannah Keller",
"Boel Nelson",
"Claudio Orlandi",
"Rasmus Pagh"
] | Given a collection of vectors $\boldsymbol{x}^{(1)},\dots,\boldsymbol{x}^{(n)} \in \{0,1\}^d$, the *selection* problem asks to report the index of an "approximately largest" entry in $\boldsymbol{x}=\sum_{j=1}^n \boldsymbol{x}^{(j)}$.
Selection abstracts a host of problems, for example: Recommendation of a popular item based on user feedback; releasing statistics on the most popular web sites; hyperparameter tuning and feature selection in machine learning.
We study selection under differential privacy, where a released index guarantees privacy for individual vectors.
Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions.
Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings.
For example, in the popular *shuffle model* of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.
In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model---with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy.
Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that:
(i) has utility on par with the best mechanisms in the central model,
(ii) scales to large, distributed collections of high-dimensional vectors, and
(iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority.
Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of *integer secret sharing*, and evaluate the utility and efficiency of our protocol both theoretically and empirically.
Our protocol improves on previous work by Champion, shelat and Ullman (CCS '19) by significantly reducing the communication costs, demonstrating that large-scale differentially private selection with information-theoretical guarantees is feasible in a distributed setting. | [
"differential privacy",
"selection",
"cryptography",
"multi-party computation"
] | https://openreview.net/pdf?id=E5hHCiZcat | ZJMllUDClj | decision | 1,705,909,219,803 | E5hHCiZcat | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: Our decision is to accept. Please see the AC's review below and improve the work considering that and the reviewers' feedback for the camera-ready submission.
"This paper introduces a new approach to differentially private selection in secure distributed computing environments, focusing on selecting the maximum value from a dataset in a way that maintains differential privacy through multi-party computation (MPC). The method adapts existing differentially private selection algorithms to function within an MPC framework.
Most of the referees find the approach innovative, integrating differentially private selection with MPC, which addresses both privacy and security concerns in a unique and commendable way. Moreover, the algorithms have been thoroughly evaluated on various datasets, establishing a robust assessment of their performance. The experimental results are then complemented by theoretical analysis that proves the method's competitiveness with centralized DP algorithms. Also very importantly, the problem addressed in the paper has clear practical implications, and the authors have written the narrative in a clear way, with a structure that makes the content accessible to a broader audience.
Despite the many strengths of the paper, it has a few drawbacks. Both R1 and R5 note that the complexity of implementing these algorithms in practical systems may create a barrier to usage, limiting their accessibility and usability in real-world applications. R4 also comments that some DP techniques used in the paper seem "standard," and the MPC analysis at times lacks sufficient clarity. R5 also remarks that the empirical evaluation, though robust, is somewhat limited in that it primarily focuses on a 3-server case, so expanding this to more diverse server configurations and datasets would provide a more comprehensive understanding of the protocol's performance. I would also like to see the authors discuss how the proposed algorithms compare to non-MPC-based differentially private selection methods in terms of accuracy and computational efficiency.
E5hHCiZcat | Differentially Private Selection from Secure Distributed Computing | [
"Ivan Damgård",
"Hannah Keller",
"Boel Nelson",
"Claudio Orlandi",
"Rasmus Pagh"
] | Given a collection of vectors $\boldsymbol{x}^{(1)},\dots,\boldsymbol{x}^{(n)} \in \{0,1\}^d$, the *selection* problem asks to report the index of an "approximately largest" entry in $\boldsymbol{x}=\sum_{j=1}^n \boldsymbol{x}^{(j)}$.
Selection abstracts a host of problems, for example: Recommendation of a popular item based on user feedback; releasing statistics on the most popular web sites; hyperparameter tuning and feature selection in machine learning.
We study selection under differential privacy, where a released index guarantees privacy for individual vectors.
Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions.
Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings.
For example, in the popular *shuffle model* of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.
In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model---with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy.
Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that:
(i) has utility on par with the best mechanisms in the central model,
(ii) scales to large, distributed collections of high-dimensional vectors, and
(iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority.
Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of *integer secret sharing*, and evaluate the utility and efficiency of our protocol both theoretically and empirically.
Our protocol improves on previous work by Champion, shelat and Ullman (CCS '19) by significantly reducing the communication costs, demonstrating that large-scale differentially private selection with information-theoretical guarantees is feasible in a distributed setting. | [
"differential privacy",
"selection",
"cryptography",
"multi-party computation"
] | https://openreview.net/pdf?id=E5hHCiZcat | VG9BV0Av33 | official_review | 1,700,825,368,555 | E5hHCiZcat | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission675/Reviewer_x73t"
] | review: Summary: The paper studies the selection problem. There is a set of vectors $x^{(1)}, \dots, x^{(n)}$ and the goal is to report an index $i$ such that the $i$-th entry of $\sum_{j=1}^n x^{(j)}$ is (approximately) largest. Specifically, the goal is to select the index $i$ under differential privacy as well as without a central trusted party. Instead, they use secret sharing to develop an algorithm implemented by $k$ servers, where up to $t$ servers may be corrupted or malicious. Additionally, they evaluate the algorithm on DPBench.
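To make the $k$-server secret-sharing step in the summary above concrete, here is a toy additive-sharing sketch: each client splits its binary vector into shares, each server sums the shares it receives coordinate-wise, and only the aggregate is ever reconstructed. The paper actually uses integer secret sharing (shares over the integers rather than modulo a prime); the modular version below only conveys the general idea and is not the paper's scheme.

```python
import secrets

P = 2**61 - 1  # large prime modulus used only for this toy sketch

def share(value: int, k: int = 3) -> list:
    """Split `value` into k additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(k - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % P

# Each client secret-shares its 0/1 vector; servers add shares component-wise
# and only ever see sums of shares, never an individual client's vector.
clients = [[1, 0, 1, 0], [0, 0, 1, 1], [1, 1, 1, 0]]
k = 3
server_shares = [[0] * 4 for _ in range(k)]
for vec in clients:
    for j, v in enumerate(vec):
        for s_idx, s in enumerate(share(v, k)):
            server_shares[s_idx][j] = (server_shares[s_idx][j] + s) % P

aggregate = [reconstruct([server_shares[s][j] for s in range(k)]) for j in range(4)]
print(aggregate)  # [2, 1, 3, 1] — the coordinate-wise sum of the client vectors
```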
Strengths:
- This is a problem with clear practical implications, with clear theoretical analysis as well as empirical experiments.
- I appreciate that the authors describe the algorithm first in the central model and then in the decentralized model. This makes the paper much more accessible.
- The paper overall is very well-written and clear.
Weaknesses:
- No weaknesses to report.
questions: NA
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 7
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E5hHCiZcat | Differentially Private Selection from Secure Distributed Computing | [
"Ivan Damgård",
"Hannah Keller",
"Boel Nelson",
"Claudio Orlandi",
"Rasmus Pagh"
] | Given a collection of vectors $\boldsymbol{x}^{(1)},\dots,\boldsymbol{x}^{(n)} \in \{0,1\}^d$, the *selection* problem asks to report the index of an "approximately largest" entry in $\boldsymbol{x}=\sum_{j=1}^n \boldsymbol{x}^{(j)}$.
Selection abstracts a host of problems, for example: Recommendation of a popular item based on user feedback; releasing statistics on the most popular web sites; hyperparameter tuning and feature selection in machine learning.
We study selection under differential privacy, where a released index guarantees privacy for individual vectors.
Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions.
Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings.
For example, in the popular *shuffle model* of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.
In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model---with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy.
Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that:
(i) has utility on par with the best mechanisms in the central model,
(ii) scales to large, distributed collections of high-dimensional vectors, and
(iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority.
Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of *integer secret sharing*, and evaluate the utility and efficiency of our protocol both theoretically and empirically.
Our protocol improves on previous work by Champion, shelat and Ullman (CCS '19) by significantly reducing the communication costs, demonstrating that large-scale differentially private selection with information-theoretical guarantees is feasible in a distributed setting. | [
"differential privacy",
"selection",
"cryptography",
"multi-party computation"
] | https://openreview.net/pdf?id=E5hHCiZcat | LMG001tbRb | official_review | 1,700,771,715,753 | E5hHCiZcat | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission675/Reviewer_FqUy"
] | review: Overview: This work considers the problem of differentially private (DP) selection, wherein the goal is to report an approximate argmax of a sum of Boolean vectors, each held by a separate party. While this problem is fairly well-understood in the centralized DP setting, as well as the far more restrictive local DP model, this work introduces an intermediate trust regime where data is sent to $k$ servers where a majority are honest. By leveraging noising techniques from the DP literature with MPC techniques, this work obtains a DP algorithm for this setting. In the experimental evaluation, for $k=3$ servers, it is shown that the new algorithm is competitive with the centralized DP algorithms. These accuracy guarantees are supplemented with runtime investigations.
Strengths: This work introduces an interesting intermediate notion of privacy where one can obtain improved computational efficiency in practice. The theoretical analysis is complemented by extensive experimentation that suggests these algorithms are indeed competitive with centralized algorithms. In general, the paper is fairly well-written.
Weaknesses: The DP techniques themselves appear somewhat standard. Some of the discussion on the MPC analysis and implementation could be made clearer.
questions: Comments:
---Line 98: I didn't quite understand the reference to quantum computing. This may merit more elaboration.
---Line 3 of Alg 1: maybe change $d$ to $n$?
--Part of the discussion in Sections 4.2 and 4.3 is somewhat hard to follow for readers (like myself) without much background in MPC. In particular, a more formal version of the Proof of Corollary 4.1 would be useful, as would be a formal, self-contained statement of the relevant facts that are used from the existing work of [EGK+20], [ACD+19], and [DEF+19].
--The plots in Figure 1 are a little difficult to read.
--How does the runtime or experimental accuracy compare to existing locally private algorithms? Or are these not even feasible to implement?
--The $r$ parameter in the experimental evaluation appears to denote ``remaining bits,'' but it was not clear to me where this is a parameter in the pseudocode for any of the provided algorithms. But perhaps I missed where this is explained.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
E5hHCiZcat | Differentially Private Selection from Secure Distributed Computing | [
"Ivan Damgård",
"Hannah Keller",
"Boel Nelson",
"Claudio Orlandi",
"Rasmus Pagh"
] | Given a collection of vectors $\boldsymbol{x}^{(1)},\dots,\boldsymbol{x}^{(n)} \in \{0,1\}^d$, the *selection* problem asks to report the index of an "approximately largest" entry in $\boldsymbol{x}=\sum_{j=1}^n \boldsymbol{x}^{(j)}$.
Selection abstracts a host of problems, for example: Recommendation of a popular item based on user feedback; releasing statistics on the most popular web sites; hyperparameter tuning and feature selection in machine learning.
We study selection under differential privacy, where a released index guarantees privacy for individual vectors.
Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions.
Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings.
For example, in the popular *shuffle model* of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.
In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model---with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy.
Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that:
(i) has utility on par with the best mechanisms in the central model,
(ii) scales to large, distributed collections of high-dimensional vectors, and
(iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority.
Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of *integer secret sharing*, and evaluate the utility and efficiency of our protocol both theoretically and empirically.
Our protocol improves on previous work by Champion, shelat and Ullman (CCS '19) by significantly reducing the communication costs, demonstrating that large-scale differentially private selection with information-theoretical guarantees is feasible in a distributed setting. | [
"differential privacy",
"selection",
"cryptography",
"multi-party computation"
] | https://openreview.net/pdf?id=E5hHCiZcat | FPHKM4xZ2M | official_review | 1,700,773,956,423 | E5hHCiZcat | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission675/Reviewer_9cvi"
] | review: **Summary:**
The paper studies differentially private approaches for the selection problem (finding the index of the approximately largest entry in a sum of D-dimensional binary vectors). The paper focuses on a distributed setting with k servers, of which a minority (at most t < k/2) are corrupted servers that may pool information. The main contribution is a provably private approach based on secure multi-party computation and an empirical evaluation of this approach.
The approach transforms an “idealized” algorithm into a private algorithm using secure multiparty computation. In the “idealized” algorithm, which is based on the standard approach of ReportNoisyArgmax, every party adds negative binomial noise and the output is the argmax. The secure MPC implementation splits into “computation servers” and “supporting servers” and leverages integer secret sharing to transmit the sampled noise between servers.
The paper proves that the approach is private, and empirically evaluates the approach on DPBench in comparison with several baseline approaches.
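As a complement to the summary, a minimal central-model sketch of the "noise-then-argmax" idea may help: every party adds discrete noise to its own vector and only the index of the largest entry of the noisy sum is released. The negative-binomial parameters below are purely illustrative, not the paper's calibration, and the paper performs this computation inside MPC rather than in the clear as done here.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_argmax(client_vectors: np.ndarray, nb_r: float, nb_p: float) -> int:
    """Idealised sketch of per-party noise followed by argmax release.
    nb_r and nb_p are illustrative negative-binomial parameters only."""
    n, d = client_vectors.shape
    # Each party samples its own non-negative discrete noise per coordinate.
    noise = rng.negative_binomial(nb_r, nb_p, size=(n, d))
    noisy_sum = (client_vectors + noise).sum(axis=0)
    return int(np.argmax(noisy_sum))

# 1000 parties, 16 candidate items; item 3 is made genuinely most popular.
x = (rng.random((1000, 16)) < 0.1).astype(int)
x[:, 3] |= (rng.random(1000) < 0.2).astype(int)
print(noisy_argmax(x, nb_r=2.0, nb_p=0.5))
```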
**Strengths:**
- The paper tackles an important problem of designing a practical and provably private distributed algorithm for the selection problem.
- The paper is very well-written and honestly presents its ideas in relation with related work.
- The paper explains why the approach is provably private and empirically evaluates the approach. This is a nice mix of theoretical results and empirical analysis.
**Weaknesses:**
- As the paper notes, the idea of using MPC as a distributed approach to differential privacy was already suggested by Steinke (2020). That being said, the paper concretizes this high-level idea and empirically evaluates it.
- The proof techniques are relatively straightforward and are relatively simple extensions of typical approaches in differential privacy (e.g., report argmax) and typical approaches in MPC. That being said, this is a relatively minor weakness.
**Minor comments:**
- Differentialy -> “differentially” on p.1
questions: None
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E5hHCiZcat | Differentially Private Selection from Secure Distributed Computing | [
"Ivan Damgård",
"Hannah Keller",
"Boel Nelson",
"Claudio Orlandi",
"Rasmus Pagh"
] | Given a collection of vectors $\boldsymbol{x}^{(1)},\dots,\boldsymbol{x}^{(n)} \in \{0,1\}^d$, the *selection* problem asks to report the index of an "approximately largest" entry in $\boldsymbol{x}=\sum_{j=1}^n \boldsymbol{x}^{(j)}$.
Selection abstracts a host of problems, for example: Recommendation of a popular item based on user feedback; releasing statistics on the most popular web sites; hyperparameter tuning and feature selection in machine learning.
We study selection under differential privacy, where a released index guarantees privacy for individual vectors.
Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions.
Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings.
For example, in the popular *shuffle model* of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.
In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model---with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy.
Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that:
(i) has utility on par with the best mechanisms in the central model,
(ii) scales to large, distributed collections of high-dimensional vectors, and
(iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority.
Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of *integer secret sharing*, and evaluate the utility and efficiency of our protocol both theoretically and empirically.
Our protocol improves on previous work by Champion, shelat and Ullman (CCS '19) by significantly reducing the communication costs, demonstrating that large-scale differentially private selection with information-theoretical guarantees is feasible in a distributed setting. | [
"differential privacy",
"selection",
"cryptography",
"multi-party computation"
] | https://openreview.net/pdf?id=E5hHCiZcat | 7drVPJsG4z | official_review | 1,700,727,497,784 | E5hHCiZcat | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission675/Reviewer_4dCi"
] | review: The authors propose a novel protocol that utilizes secure multi-party computation (MPC) techniques to perform differentially private selection in distributed settings. A key feature of their approach is the ability to work with corrupted servers while maintaining privacy, addressing a significant gap in the current landscape where strong privacy guarantees often require high-trust settings. However, I have some major comments, listed below.
1. The approach of combining integer secret sharing with MPC for differentially private selection in a distributed setting is innovative. It addresses a significant challenge in distributed computing, making the study highly relevant for contemporary data privacy concerns.
2. The Noise-and-round mechanism's ability to achieve near-central model utility in a distributed environment is noteworthy.
3. The use of integer secret sharing in combination with existing MPC techniques seems to be well thought out and methodologically sound.
4. The numerical experiments provide a practical perspective on the protocol's utility and scalability.
questions: 1. The empirical evaluation, though robust, is limited to a 3-server case using synthetic and real-world data. Expanding the evaluation to include a wider range of server configurations and more diverse datasets could provide a more comprehensive understanding of the protocol's performance and scalability.
2. The use of MPC and secret sharing might be resource-intensive in terms of computation and communication overhead. This aspect could be a disadvantage in resource-constrained environments or where efficiency is a paramount concern.
3. The format of this paper does not follow the standard, especially the format of references and citations of references.
4. The abstract employs a narrative style that is somewhat unconventional in the context of technical and scientific publications.
5. The paper appears to lack a dedicated conclusion section, which is a critical component of academic articles.
ethics_review_flag: No
ethics_review_description: nil
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E5Gqn6WlAK | Causally Debiased Time-aware Recommendation | [
"Lei Wang",
"Chen Ma",
"Xian Wu",
"Zhaopeng Qiu",
"Yefeng Zheng",
"Xu Chen"
] | Time-aware recommendation has been widely studied for modeling the user dynamic preference and a lot of models have
been proposed. However, these models often overlook the fact that users may not behave evenly on the timeline, and observed datasets
can be biased by user intrinsic preferences or previous recommender systems, leading to degraded model performance. We propose
a causally debiased time-aware recommender framework to accurately learn user preference. We formulate the task of time-aware
recommendation by a causal graph, identifying two types of biases on the item and time levels. To optimize the ideal unbiased learning
objective, we propose a debiased framework based on the inverse propensity score (IPS) and extend it to the doubly robust method.
Considering that the user preference can be diverse and complex, which may result in unmeasured confounders, we develop a sensitivity
analysis method to obtain more accurate IPS. We theoretically draw a connection between the proposed method and the ideal learning
objective, which to the best of our knowledge, is the first time in the research community. We conduct extensive experiments on three
real-world datasets to demonstrate the effectiveness of our model. To promote this research direction, we have released our project at https://www-cdtr.github.io/. | [
"Time-aware recommendation",
"Collaborative filtering",
"Counterfactual",
"Causal inference"
] | https://openreview.net/pdf?id=E5Gqn6WlAK | rmaVj9CedU | official_review | 1,700,131,662,458 | E5Gqn6WlAK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission495/Reviewer_NPL9"
] | review: 1. Causal Graph
- I cannot fully buy Figure 2a. Why is T affected by U and V?
- Instead, I think the time T changes the user and item (i.e., the user and item representation is affected by the time).
- we get the affected U' and I' with the graph U -> U' <- T -> I' <- I (U, I are the general user and item variables, and U' and I' are time-aware variables.)
- and the rating R is affected by U' and I', with U' -> R <- I'
- Also, I cannot understand why Figure 2b is an 'ideal' learning objective. To accurately estimate the user feedback on all items at all times, the model needs to utilize the item information and the time information.
- You cannot call Figure 2 a causal graph, because it's your desired data generation process, not the real world.
- After all, the authors do not utilize the causal graph. Why do we need this? If you really want to utilize the causal graph, you need to adopt causality-aware techniques such as total effects or reference values.
2. Method
- On the other hand, the proposed method itself is sound.
- If someone wants to deal with the bias of the observation and the time, she can adopt IPS on both.
- However, I personally recommend using the observation variable for the item and the time. p(v|u), p(t|u,v) do not make any probabilistic sense.
- Instead, how about $p(o_{u,v}=1|u)$ and $p(o_{u,v,t}=1|u,v)$? (A toy sketch of IPS weighting with such propensities is given after these numbered comments.)
- How do you estimate the propensity scores? In section 4.4, I can see the modeling for the propensity score, but cannot find any loss functions for that.
3. Contribution
- The contribution of this work is somewhat limited.
- They adopt the existing DR estimator both for the item observation and interaction time.
- They adopt the existing Sensitivity Analysis for the hidden confounders.
4. Experiment
- Why not compare with 'Learning heterogeneous temporal patterns of user preference for timely recommendation, WWW 2021'?
- If this method is adopted in the real-world system, do we have to make inferences for all users and all items in every time slot?
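For reference (as flagged in the Method comments above), here is a toy sketch of what an IPS-corrected objective with separate item- and time-level propensities could look like. The propensity values are hand-picked for illustration, and this is not the authors' estimation procedure.

```python
import numpy as np

def ips_loss(errors, p_obs_item, p_obs_time, clip=0.05):
    """Toy IPS-weighted objective: each observed (user, item, time) sample is
    re-weighted by 1 / (item-level propensity * time-level propensity), with
    clipping so the weights stay bounded. Propensities here are given, not
    estimated as in the paper."""
    errors = np.asarray(errors, dtype=float)
    prop = np.clip(np.asarray(p_obs_item) * np.asarray(p_obs_time), clip, 1.0)
    return float(np.mean(errors / prop))

# Three observed samples: squared rating errors and their two propensities.
sq_err = [0.4, 0.1, 0.9]
p_item = [0.8, 0.2, 0.5]   # e.g. P(item observed | user)
p_time = [0.6, 0.3, 0.5]   # e.g. P(interaction time | user, item observed)
print(ips_loss(sq_err, p_item, p_time))
```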
questions: Please refer to Review.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
E5Gqn6WlAK | Causally Debiased Time-aware Recommendation | [
"Lei Wang",
"Chen Ma",
"Xian Wu",
"Zhaopeng Qiu",
"Yefeng Zheng",
"Xu Chen"
] | Time-aware recommendation has been widely studied for modeling the user dynamic preference and a lot of models have
been proposed. However, these models often overlook the fact that users may not behave evenly on the timeline, and observed datasets
can be biased by user intrinsic preferences or previous recommender systems, leading to degraded model performance. We propose
a causally debiased time-aware recommender framework to accurately learn user preference. We formulate the task of time-aware
recommendation by a causal graph, identifying two types of biases on the item and time levels. To optimize the ideal unbiased learning
objective, we propose a debiased framework based on the inverse propensity score (IPS) and extend it to the doubly robust method.
Considering that the user preference can be diverse and complex, which may result in unmeasured confounders, we develop a sensitivity
analysis method to obtain more accurate IPS. We theoretically draw a connection between the proposed method and the ideal learning
objective, which to the best of our knowledge, is the first time in the research community. We conduct extensive experiments on three
real-world datasets to demonstrate the effectiveness of our model. To promote this research direction, we have released our project at https://www-cdtr.github.io/. | [
"Time-aware recommendation",
"Collaborative filtering",
"Counterfactual",
"Causal inference"
] | https://openreview.net/pdf?id=E5Gqn6WlAK | qsEN0mt8T6 | official_review | 1,700,550,411,324 | E5Gqn6WlAK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission495/Reviewer_eeDQ"
] | review: ### Summary
This work studies dynamic debiased recommendation and proposes a new method based on a doubly robust strategy. This work has the following strengths and weaknesses:
### Strengths
1. This work studies an important problem.
2. This work proposes a novel doubly robust method to tackle both popularity bias and temporal bias.
3. Both theoretical analyses and empirical experiments are conducted to validate the effectiveness of the proposal.
### Weaknesses
1. Although this work introduces a novel debiasing recommendation method utilizing a doubly robust approach, the majority of the techniques appear to be direct extensions of existing methods such as temporal Inverse Propensity Scoring (IPS) [16] and sensitivity analysis [9]. Consequently, the novelty is incremental.
2. The experiments have some limitations:
a) The baselines are weak. Some SOTA debiasing strategies should be considered, such as [a1][a2]. Notably, [a1] has demonstrated effective mitigation of temporal bias and warrants consideration.
b) Ranking metrics like NDCG, Precision, and Recall should be included, which align more closely with the recommendation objective.
c) It would be better to include more advanced backbone models, especially sequential recommendation models, which draw much attention from the RS community.
3. This work misses some important related work on recommendation debiasing including:
a) Some work on addressing popularity bias:
[a1] SIGIR’23: Invariant Collaborative Filtering to Popularity Distribution Shift
[a2] CIKM’22: Countering Popularity Bias by Regularizing Score Difference
[a3] TKDE’22:Popularity bias is not always evil: Disentangling benign and harmful bias for recommendation
b) Some work on DR-based recommendation debiasing:
[b1] SIGIR’21: AutoDebias: Learning to debias for recommendation
[b2] ICML’23: StableDR: Stabilized Doubly Robust Learning for Recommendation on Data Missing Not at Random
[b3] CIKM’23: CDR: Conservative Doubly Robust Learning for Debiased Recommendation
### Overall evaluation
In summary, while this work has some limitations, I appreciate that it studies an important problem and delivers a theoretically sound method. I think this work is borderline, but I lean toward weak accept if the authors could address my concerns.
questions: Please refer to weaknesses.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
E5Gqn6WlAK | Causally Debiased Time-aware Recommendation | [
"Lei Wang",
"Chen Ma",
"Xian Wu",
"Zhaopeng Qiu",
"Yefeng Zheng",
"Xu Chen"
] | Time-aware recommendation has been widely studied for modeling the user dynamic preference and a lot of models have
been proposed. However, these models often overlook the fact that users may not behave evenly on the timeline, and observed datasets
can be biased by user intrinsic preferences or previous recommender systems, leading to degraded model performance. We propose
a causally debiased time-aware recommender framework to accurately learn user preference. We formulate the task of time-aware
recommendation by a causal graph, identifying two types of biases on the item and time levels. To optimize the ideal unbiased learning
objective, we propose a debiased framework based on the inverse propensity score (IPS) and extend it to the doubly robust method.
Considering that the user preference can be diverse and complex, which may result in unmeasured confounders, we develop a sensitivity
analysis method to obtain more accurate IPS. We theoretically draw a connection between the proposed method and the ideal learning
objective, which to the best of our knowledge, is the first time in the research community. We conduct extensive experiments on three
real-world datasets to demonstrate the effectiveness of our model. To promote this research direction, we have released our project at https://www-cdtr.github.io/. | [
"Time-aware recommendation",
"Collaborative filtering",
"Counterfactual",
"Causal inference"
] | https://openreview.net/pdf?id=E5Gqn6WlAK | dM5InM0Y38 | official_review | 1,700,808,395,915 | E5Gqn6WlAK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission495/Reviewer_rdLC"
] | review: The paper addresses the issue of bias in time-aware recommender systems. The proposed solution uses a causal graph to identify biases at both item and time levels and employs the inverse propensity score (IPS) method, extended to a doubly robust method, to optimize for an unbiased learning objective. Additionally, the framework includes a sensitivity analysis method to better handle unmeasured confounders.
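To illustrate the sensitivity-analysis idea mentioned above, the sketch below shows one simple way to turn an estimated propensity plus an assumed confounding budget gamma into bounds on the IPS weights. This is a rough simplification (formal sensitivity models usually bound the odds of the true propensity, not the propensity itself) and is not the paper's specific method.

```python
import numpy as np

def ips_weight_bounds(p_hat, gamma):
    """Rough sketch: if an unmeasured confounder can shift the true propensity
    away from the estimate p_hat by at most a factor gamma, the IPS weight 1/p
    lies between 1/(gamma * p_hat) and gamma / p_hat (propensities clipped to
    (0, 1]). Only a simplified illustration of sensitivity analysis."""
    p_hat = np.asarray(p_hat, dtype=float)
    p_hi = np.clip(p_hat * gamma, 1e-6, 1.0)
    p_lo = np.clip(p_hat / gamma, 1e-6, 1.0)
    return 1.0 / p_hi, 1.0 / p_lo  # lower and upper bounds on the weight

low, high = ips_weight_bounds([0.5, 0.2, 0.05], gamma=1.5)
print(low)   # smallest weights consistent with the confounding budget
print(high)  # largest weights consistent with the confounding budget
```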
Strength:
- The authors provide a robust theoretical analysis connecting the proposed method with the ideal learning objective.
- The paper introduces a framework that addresses both item and time-level biases.
- Open-sources the implementation code to support reproducibility.
Weakness:
- The concept of the methods introduced appears to be a reiteration of ideas previously explored in other recommendation scenarios, lacking significant novelty.
- While the authors suggest that their framework is applicable to both explicit and implicit feedback, the evaluation is limited to explicit feedback. Notably, the CoNCARS base model, originally designed for implicit feedback, is not tested in this context. The authors should include implicit-feedback metrics, such as NDCG or Hit Rate, to substantiate these claims.
- It would be great if the authors could include more recent models for debiasing or unobserved confounders.
questions: - The authors mention that the symbols are presented in Table 1 (in Section 3), but Table 1 is the performance table.
- Is there any particular reason not to include Amazon in the ablation studies? It would also be good to include some ablation studies with S2 time processing to show the consistency.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
E5Gqn6WlAK | Causally Debiased Time-aware Recommendation | [
"Lei Wang",
"Chen Ma",
"Xian Wu",
"Zhaopeng Qiu",
"Yefeng Zheng",
"Xu Chen"
] | Time-aware recommendation has been widely studied for modeling the user dynamic preference and a lot of models have
been proposed. However, these models often overlook the fact that users may not behave evenly on the timeline, and observed datasets
can be biased by user intrinsic preferences or previous recommender systems, leading to degraded model performance. We propose
a causally debiased time-aware recommender framework to accurately learn user preference. We formulate the task of time-aware
recommendation by a causal graph, identifying two types of biases on the item and time levels. To optimize the ideal unbiased learning
objective, we propose a debiased framework based on the inverse propensity score (IPS) and extend it to the doubly robust method.
Considering that the user preference can be diverse and complex, which may result in unmeasured confounders, we develop a sensitivity
analysis method to obtain more accurate IPS. We theoretically draw a connection between the proposed method and the ideal learning
objective, which to the best of our knowledge, is the first time in the research community. We conduct extensive experiments on three
real-world datasets to demonstrate the effectiveness of our model. To promote this research direction, we have released our project at https://www-cdtr.github.io/. | [
"Time-aware recommendation",
"Collaborative filtering",
"Counterfactual",
"Causal inference"
] | https://openreview.net/pdf?id=E5Gqn6WlAK | PnYJtCnWeJ | official_review | 1,700,728,299,463 | E5Gqn6WlAK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission495/Reviewer_moET"
] | review: Summary
In this paper, the authors aim to solve the problem of dynamic debiased recommendation. To achieve this goal, the authors propose an IPS-based model, a causally debiased time-aware recommender framework, from the causal perspective. An IPS method is designed to correct the item- and time-level biases and is then extended to the doubly robust model. Besides, a sensitivity analysis is applied to capture the unmeasured confounders. In the experiments, the authors evaluate the performance on real-world datasets compared to several baselines.
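Since the summary mentions extending IPS to a doubly robust (DR) model, a generic textbook sketch of a DR objective may help readers unfamiliar with it: imputed errors are used on every cell, and the residual on observed cells is corrected with inverse propensities. The numbers below are synthetic, and this is not necessarily the paper's exact estimator.

```python
import numpy as np

def doubly_robust(err_hat, err_obs, observed, propensity):
    """Generic doubly-robust objective over a user-item(-time) grid: start from
    imputed errors everywhere, then add the IPS-weighted residual on observed
    cells. Textbook DR form, not necessarily the paper's exact formulation."""
    err_hat = np.asarray(err_hat, float)        # imputed error for every cell
    err_obs = np.asarray(err_obs, float)        # true error (valid where observed)
    o = np.asarray(observed, float)             # 1 if the cell was observed
    p = np.clip(np.asarray(propensity, float), 0.05, 1.0)
    return float(np.mean(err_hat + o * (err_obs - err_hat) / p))

# 2 x 3 toy grid of (user, item) cells.
err_hat  = [[0.2, 0.5, 0.3], [0.4, 0.1, 0.6]]
err_obs  = [[0.3, 0.0, 0.0], [0.0, 0.2, 0.0]]   # only meaningful where observed
observed = [[1,   0,   0  ], [0,   1,   0  ]]
prop     = [[0.5, 0.3, 0.2], [0.4, 0.6, 0.1]]
print(doubly_robust(err_hat, err_obs, observed, prop))
```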
Pros
Originality
It is novel and interesting that the authors proposed a debiased time-aware recommender framework from the causal perspective, which considers the user preference. And it is the first time that the authors draw a connection between the proposed method and the ideal learning objective.
Quality
The quality is high. The authors proposed a causally debiased time-aware recommender framework to learn user preference and considered both the item-level and the time-level biases. And the authors theoretically prove that the method is unbiased with respect to the ideal objective. Additionally, the data analysis is thorough, well-executed, and adequately supports the conclusions drawn.
Clarity
In this paper, the introduction provides a clear overview of the research topic and objectives, and the body sections are logically organized. And the language used in this paper is clear and easy to understand. Besides, some key concepts are well explained.
Significance
The work in this paper is of great significance. Firstly, CDTR outperforms the baseline models. And then, the unmeasured confounders can be captured by a sensitivity analysis method.
Cons
An overview figure of the CDTR framework is needed.
Although the article provides detailed theoretical analysis, there are still certain parts that are difficult to understand and require multiple readings to grasp the main points.
In the experiment, I noticed that in some cases, the performance of DR-UM is not the best. More explanation is needed.
Please pay attention to the layout of the paper, for example, avoid having only one word per line.
questions: Using different strategies, S1 or S2, the metrics are quite different. The metrics under S1 are lower than those under S2, so why still use both S1 and S2?
There are differences between experimental results and theoretical demonstrations. Have you considered the reason for the difference?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E5Gqn6WlAK | Causally Debiased Time-aware Recommendation | [
"Lei Wang",
"Chen Ma",
"Xian Wu",
"Zhaopeng Qiu",
"Yefeng Zheng",
"Xu Chen"
] | Time-aware recommendation has been widely studied for modeling the user dynamic preference and a lot of models have
been proposed. However, these models often overlook the fact that users may not behave evenly on the timeline, and observed datasets
can be biased by user intrinsic preferences or previous recommender systems, leading to degraded model performance. We propose
a causally debiased time-aware recommender framework to accurately learn user preference. We formulate the task of time-aware
recommendation by a causal graph, identifying two types of biases on the item and time levels. To optimize the ideal unbiased learning
objective, we propose a debiased framework based on the inverse propensity score (IPS) and extend it to the doubly robust method.
Considering that the user preference can be diverse and complex, which may result in unmeasured confounders, we develop a sensitivity
analysis method to obtain more accurate IPS. We theoretically draw a connection between the proposed method and the ideal learning
objective, which to the best of our knowledge, is the first time in the research community. We conduct extensive experiments on three
real-world datasets to demonstrate the effectiveness of our model. To promote this research direction, we have released our project at https://www-cdtr.github.io/. | [
"Time-aware recommendation",
"Collaborative filtering",
"Counterfactual",
"Causal inference"
] | https://openreview.net/pdf?id=E5Gqn6WlAK | C3YLhKtIWY | decision | 1,705,909,244,134 | E5Gqn6WlAK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper proposed a debiased time-aware recommender framework by incorporating a crafted causal graph. While the studied problem is interesting, the paper indeed needs improvement in evaluation and paper presentation.
E5Gqn6WlAK | Causally Debiased Time-aware Recommendation | [
"Lei Wang",
"Chen Ma",
"Xian Wu",
"Zhaopeng Qiu",
"Yefeng Zheng",
"Xu Chen"
] | Time-aware recommendation has been widely studied for modeling the user dynamic preference and a lot of models have
been proposed. However, these models often overlook the fact that users may not behave evenly on the timeline, and observed datasets
can be biased by user intrinsic preferences or previous recommender systems, leading to degraded model performance. We propose
a causally debiased time-aware recommender framework to accurately learn user preference. We formulate the task of time-aware
recommendation by a causal graph, identifying two types of biases on the item and time levels. To optimize the ideal unbiased learning
objective, we propose a debiased framework based on the inverse propensity score (IPS) and extend it to the doubly robust method.
Considering that the user preference can be diverse and complex, which may result in unmeasured confounders, we develop a sensitivity
analysis method to obtain more accurate IPS. We theoretically draw a connection between the proposed method and the ideal learning
objective, which to the best of our knowledge, is the first time in the research community. We conduct extensive experiments on three
real-world datasets to demonstrate the effectiveness of our model. To promote this research direction, we have released our project at https://www-cdtr.github.io/. | [
"Time-aware recommendation",
"Collaborative filtering",
"Counterfactual",
"Causal inference"
] | https://openreview.net/pdf?id=E5Gqn6WlAK | Bzxq7x0YWL | official_review | 1,700,537,206,081 | E5Gqn6WlAK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission495/Reviewer_cQaE"
] | review: In this submission, the authors formulate the task of time-aware recommendation by a causal graph, analyze the causes of the item- and time-level biases, and adjust the training samples based on the inverse propensity score (IPS) to correct these biases.
Pros:
1) The paper provides a detailed theoretical analysis of the proposed models.
2) Extensive experiments are conducted, demonstrating the effectiveness of the framework.
3) The paper explores an intriguing and significant area in recommender systems.
Cons:
1) The improvement over the strongest existing baseline is relatively modest.
2) The evaluation focuses solely on RMSE and MAE. Incorporating additional metrics like Recall, F1, and NDCG could provide a more comprehensive assessment.
3) In Section 5.4, the scenarios T1, T2, ..., T5 seem impractical for real-world recommendation contexts. Typically, only the top 20 (or fewer) items are presented to users, which might undermine the effectiveness of the debiasing method.
questions: Are there any other observations related to the dataset segmentation methods, S1 and S2? For example, changing the number of bins or using the days of a year.
What do the distributions of the time information of user-item interactions for all three datasets look like based on your segmentation?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E4Qtxcq9bU | Making Cloud Spot Instance Interruption Events Visible | [
"Kyunghwan Kim",
"Kyungyong Lee"
] | Public cloud computing vendors offer a surplus of computing resources at a cheaper price with a service of spot instance. Despite the possible great cost savings from using spot instances, sudden resource interruption can happen as demand changes. To help users estimate cost savings and the possibility of interruption when using spot instances, vendors provide diverse datasets. However, the effectiveness of the datasets is not quantitatively evaluated yet, and many users still rely on hunch when choosing spot instances. To help users lower the chance of interruption of the spot instance for reliable usage, in this paper we thoroughly analyze various datasets of the spot instance and present the feasibility for value prediction. Then, to measure how the public datasets reflect real-world spot instance interruption events, we conduct extensive tests for spot instances of AWS, Azure, and Google Cloud while proposing a novel heuristic to control experimental cost minimizing information loss. Combining the dataset analysis, prediction, and real-world experiment result, we show the feasibility for lowering the possibility of interruption events significantly. | [
"cloud computing",
"spot instance",
"interruption modeling",
"enhancing reliability",
"spot instance datasets"
] | https://openreview.net/pdf?id=E4Qtxcq9bU | x75rbSfLCV | official_review | 1,700,742,572,576 | E4Qtxcq9bU | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1368/Reviewer_GyoK"
] | review: This paper analyzes various datasets of the spot instance and presents the feasibility of value prediction. Then, to measure how the public datasets reflect real-world spot instance interruption events, the authors conduct real-world experiments for spot instances of AWS, Azure, and Google Cloud.
The problem is relevant. The paper is well written. The paper provides an interesting view of the current landscape of cloud computing. It can ignite future research, although the specific contribution of this paper is relatively limited.
questions: - How can future researchers benefit from this analysis?
- If the current auction mechanism changes, would the results/methodology of the paper still be valid?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E4Qtxcq9bU | Making Cloud Spot Instance Interruption Events Visible | [
"Kyunghwan Kim",
"Kyungyong Lee"
] | Public cloud computing vendors offer a surplus of computing resources at a cheaper price with a service of spot instance. Despite the possible great cost savings from using spot instances, sudden resource interruption can happen as demand changes. To help users estimate cost savings and the possibility of interruption when using spot instances, vendors provide diverse datasets. However, the effectiveness of the datasets is not quantitatively evaluated yet, and many users still rely on hunch when choosing spot instances. To help users lower the chance of interruption of the spot instance for reliable usage, in this paper we thoroughly analyze various datasets of the spot instance and present the feasibility for value prediction. Then, to measure how the public datasets reflect real-world spot instance interruption events, we conduct extensive tests for spot instances of AWS, Azure, and Google Cloud while proposing a novel heuristic to control experimental cost minimizing information loss. Combining the dataset analysis, prediction, and real-world experiment result, we show the feasibility for lowering the possibility of interruption events significantly. | [
"cloud computing",
"spot instance",
"interruption modeling",
"enhancing reliability",
"spot instance datasets"
] | https://openreview.net/pdf?id=E4Qtxcq9bU | hD0XHqaxUq | official_review | 1,700,811,319,142 | E4Qtxcq9bU | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1368/Reviewer_RFsa"
] | review: This paper explores the reliability of spot instances by statistically analyzing availability datasets of preemptible instances of major public cloud computing vendors. The authors evaluated several existing predictors on the AWS Spot Placement Score (SPS) dataset. This paper also uses the Kaplan-Meier estimator to model the spot instance survival probability. Finally, the authors conclude the paper with some research insights that guide readers toward utilizing cloud resources more efficiently.
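For readers unfamiliar with the technique, a minimal sketch of how a Kaplan-Meier survival curve can be estimated from spot-instance lifetimes is given below; the toy data, variable names, and censoring convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kaplan_meier(durations, interrupted):
    """Product-limit estimate of S(t) from spot-instance lifetimes.

    durations   : hours each instance ran before its observation ended
    interrupted : 1 if the run ended with an interruption event,
                  0 if it was still running when observation stopped (censored)
    """
    durations = np.asarray(durations, dtype=float)
    interrupted = np.asarray(interrupted, dtype=int)
    order = np.argsort(durations)
    durations, interrupted = durations[order], interrupted[order]

    at_risk = len(durations)
    times, survival, s = [], [], 1.0
    for t in np.unique(durations):
        at_t = durations == t
        d = int(interrupted[at_t].sum())     # interruptions observed at time t
        if d > 0:
            s *= 1.0 - d / at_risk           # Kaplan-Meier product-limit step
        at_risk -= int(at_t.sum())           # events and censored runs leave the risk set
        times.append(t)
        survival.append(s)
    return np.array(times), np.array(survival)

# Toy usage: five spot-instance lifetimes (hours); the last two runs were never interrupted.
t, s = kaplan_meier([1.8, 2.5, 4.7, 6.0, 6.0], [1, 1, 1, 0, 0])
print(dict(zip(t.tolist(), np.round(s, 3).tolist())))
```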
Pros:
(1) This paper studies the reliability of spot instances, which might be an important and well-studied problem for cloud-computing-powered companies.
(2) This paper provides rich line charts to explain the statistical and distributional features of studied datasets.
(3) This paper includes data from major cloud computing vendors such as AWS, Azure, and Google Cloud Platform (GCP), which are very representative of the current cloud computing market.
Cons:
(1) The novelty of this paper is not enough. Section 3 "SPOT INSTANCE DATASET CHARACTERIZATION" contains just basic data analysis. Section 4 "PREDICTING SPOT INSTANCE DATASET CHANGE" evaluates existing prediction methods. Subsection 5.1 "Spot Instance Interrupt Analysis" uses a well-established statistical model.
(2) This paper is not presented in review mode, which does not meet the recommended formatting requirements of the conference.
(3) The settings of the SPS prediction experiments in this paper are not reasonable, and the evaluated results of these experiments are almost meaningless. In section 4 "PREDICTING SPOT INSTANCE DATASET CHANGE" and subsection 5.2 "Effectiveness of Instant Availability Dataset", the SPS data are categorized with a bin size of 1 and 0.5, respectively. However, as shown in Figure 2(c), the SPS values nearly stay at 2.66 and would almost always be binned to 3. In this way, the important fluctuations in the original data are neglected, and the prediction task reduces to predicting a nearly constant discrete series, which is very easy (a toy illustration of this effect follows after this list). As a consequence, as shown in Table 1 and Figure 5, the prediction accuracy is always very high for every model and in every setting.
(4) The meaning of the SPS metric is not clearly introduced in the main paper. SPS is used throughout the whole paper but is only further studied in Appendix A.
(5) There are certain writing issues in this paper. For example, in section 6 "RELATED WORK", there are two titles "Characterizing Spot Instance Dataset:" and "Modeling and Using Spot Instance Dataset:" for the first paragraph.
(6) In Figure 2, the scales of the Y-axes of the three sub-figures are inconsistent and unreasonable. For example, in sub-figure "(b) Interruption-Free Score Average", the trend of AWS over time is very hard to discern.
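Following up on point (3): a toy sketch of why coarse binning of a near-constant SPS series makes accuracy uninformative. The numbers are synthetic and only assume the roughly 2.66 average noted above; this is not the actual SPS dataset.

```python
import numpy as np

# Synthetic SPS-like series hovering near 2.66 (illustrative only, not real AWS data).
rng = np.random.default_rng(0)
sps = 2.66 + 0.05 * rng.standard_normal(10_000)

binned = np.rint(sps)                          # bin size 1: almost every value maps to 3
constant_pred = np.full_like(binned, 3.0)      # "always predict 3", i.e., no model at all
print(f"accuracy of the trivial predictor: {(constant_pred == binned).mean():.3f}")
```

Any learned model evaluated on such a binned series will look near-perfect for the same reason, so high accuracy alone does not demonstrate predictive value.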
questions: (1) Concerning the prediction experiments in section 4 "PREDICTING SPOT INSTANCE DATASET CHANGE", why are the accuracy results so close across every setting?
(2) In Figure 6, what does the X-axis "Feature (Time Recency)" mean? And why are the most important features in the negative value range (roughly around minus 20)?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E4Qtxcq9bU | Making Cloud Spot Instance Interruption Events Visible | [
"Kyunghwan Kim",
"Kyungyong Lee"
] | Public cloud computing vendors offer a surplus of computing resources at a cheaper price with a service of spot instance. Despite the possible great cost savings from using spot instances, sudden resource interruption can happen as demand changes. To help users estimate cost savings and the possibility of interruption when using spot instances, vendors provide diverse datasets. However, the effectiveness of the datasets is not quantitatively evaluated yet, and many users still rely on hunch when choosing spot instances. To help users lower the chance of interruption of the spot instance for reliable usage, in this paper we thoroughly analyze various datasets of the spot instance and present the feasibility for value prediction. Then, to measure how the public datasets reflect real-world spot instance interruption events, we conduct extensive tests for spot instances of AWS, Azure, and Google Cloud while proposing a novel heuristic to control experimental cost minimizing information loss. Combining the dataset analysis, prediction, and real-world experiment result, we show the feasibility for lowering the possibility of interruption events significantly. | [
"cloud computing",
"spot instance",
"interruption modeling",
"enhancing reliability",
"spot instance datasets"
] | https://openreview.net/pdf?id=E4Qtxcq9bU | YyT8f6Aj7y | official_review | 1,700,762,659,847 | E4Qtxcq9bU | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1368/Reviewer_eQiU"
] | review: Significance/Pros:
This topic is very interesting given the significance and broad interest of spot instance usage. It is crucial for users employing such services to have a clear understanding. The structure is clear and the experimental details are clearly described.
The paper utilises data obtained from public cloud providers, so the results are not based on simulated data/information.
Offers a diverse range of relevant literature.
Quality: In general, the paper is well-written and clearly articulated, yet certain aspects require additional clarification.
Cons:
It's unclear whether the authors utilised datasets from SpotLake – which I assume they did?
Adding explicit details about the datasets used, clearly indicating their sources, would be beneficial.
Some relevant technical details and insights are missing; these are listed as questions below.
There are some repetitive statements.
Questions which need further clarification:
1. Were there instances where the ratio of savings to interruptions was higher compared to other instance types? This could provide meaningful insights. How did you compare instances of different types across various CPs?
2. Figure 2a - it is apparent that AWS has the least cost savings. Is this the same across all instance types?
3. Could you provide more information about the dataset's size and specifics? How many instance categories are there, along with their regions and entries?
4. Figure 2b presents the temporal change pattern - Is it consistent across all regions that offer the same instance type? Have you noticed any regional impact on the pattern?
5. SPS score/ time when cloud usage is low - Is this specific to a particular region or consistent across all regions?
6. How do you compare your work/RF-based findings to those mentioned in Reference 29?
7. On AWS, the SPS score ranges from 1 to 10. How does this mapping align with equation 1?
8. What is the rationale behind mapping the dataset's five categorical values to numeric values between 1.0 and 3.0 in increments of 0.5?
9. What's the reasoning behind choosing these algorithms, given that some are tailored to time series while others are generic classifiers?
10. Their accuracies are nearly identical — is there an underlying aspect that would lead you to prefer one over another? Also, it is not convincing that XGBoost performs the best of all.
11. The authors state a 63.2% and a 168% increase in the spot instance running time; could this be clarified further when compared to the base model or findings?
questions: Questions which need further clarification:
1. Were there instances where the ratio of savings to interruptions was higher compared to other instance types? This could provide meaningful insights. How did you compare instances of different types across various CPs?
2. Figure 2a - it is apparent that AWS has the least cost savings. Is this the same across all instance types?
3. Could you provide more information about the dataset's size and specifics? How many instance categories are there, along with their regions and entries?
4. Figure 2b presents the temporal change pattern - Is it consistent across all regions that offer the same instance type? Have you noticed any regional impact on the pattern?
5. SPS score/ time when cloud usage is low - Is this specific to a particular region or consistent across all regions?
6. How do you compare your work/RF-based findings to those mentioned in Reference 29?
7. On AWS, the SPS score ranges from 1 to 10. How does this mapping align with equation 1?
8. What is the rationale behind mapping the dataset's five categorical values to numeric values between 1.0 and 3.0 in increments of 0.5?
9. What's the reasoning behind choosing these algorithms, given that some are tailored to time series while others are generic classifiers?
10. Their accuracies are nearly identical — is there an underlying aspect that would lead you to prefer one over another? Also, it is not convincing that XGBoost performs the best of all.
11. The authors state a 63.2% and a 168% increase in the spot instance running time; could this be clarified further when compared to the base model or findings?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
E4Qtxcq9bU | Making Cloud Spot Instance Interruption Events Visible | [
"Kyunghwan Kim",
"Kyungyong Lee"
] | Public cloud computing vendors offer a surplus of computing resources at a cheaper price with a service of spot instance. Despite the possible great cost savings from using spot instances, sudden resource interruption can happen as demand changes. To help users estimate cost savings and the possibility of interruption when using spot instances, vendors provide diverse datasets. However, the effectiveness of the datasets is not quantitatively evaluated yet, and many users still rely on hunch when choosing spot instances. To help users lower the chance of interruption of the spot instance for reliable usage, in this paper we thoroughly analyze various datasets of the spot instance and present the feasibility for value prediction. Then, to measure how the public datasets reflect real-world spot instance interruption events, we conduct extensive tests for spot instances of AWS, Azure, and Google Cloud while proposing a novel heuristic to control experimental cost minimizing information loss. Combining the dataset analysis, prediction, and real-world experiment result, we show the feasibility for lowering the possibility of interruption events significantly. | [
"cloud computing",
"spot instance",
"interruption modeling",
"enhancing reliability",
"spot instance datasets"
] | https://openreview.net/pdf?id=E4Qtxcq9bU | VLQRh0WXSU | official_review | 1,700,597,783,110 | E4Qtxcq9bU | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1368/Reviewer_6iDH"
review: In this paper, the authors take an investigative look into the spot instance availability and reliability of the AWS, Azure and GCP cloud providers. The authors conduct this analysis in two steps. First, they perform passive analysis on the datasets released by the cloud providers and attempt to see correlations between availability scores and other potentially affecting patterns (e.g. time of day). The authors further conduct real-world experiments by reserving spot instances in all three providers' global networks until they observe interruption events. Based on the findings from both analyses, the paper also investigates if predicting future scores released by the providers can improve the spot instance selection, resulting in more longevity. Overall, the paper provides several interesting insights for developers/researchers aiming to utilize spot instances and is highly relevant to the Web community.
## Strengths
- The paper tackles a super interesting problem which is more relevant after the policy changes of the cloud providers
- The paper is well written and easy to follow
- Some of the takeaways are super relevant and well-motivated
## Weakness
- This reviewer found section 3 of the paper particularly weak and hand-wavy in reaching the final conclusion. Specifically, the “time frame” of the interruption datasets of AWS and Azure is not mentioned, and it is not clear whether the two time frames correlate with each other. Further, it is possible that spot instance availability and interruptions are more correlated with instance type (more capable instances are of fairly limited availability in the first place and therefore may get more interruptions as their demand increases) or dependent on geographical regions (US and EU may see more user traffic and usage compared to South America or Africa, and therefore instances there might not get interrupted so often). However, for most of the analysis, the authors combine all instances and geographical values together, and it is hard to understand the nuanced differences. The core analysis in the appendix gives some insight, but it also combines different cloud providers together.
- Section 3 also over-estimates the findings and takeaways, which may not encompass the reality. For instance, in Fig 3, the authors draw the conclusion on the daily and weekly patterns from a 1000th order difference in the SPS values. The SPS values in section 3 remain relatively stable, and therefore the need for prediction is not well motivated here. Also, the SPS conclusions shown in Fig 2 do not correlate with the authors' own experiments, as “On average, spot instances with the score of 3 runs for 4.7 hours, while that of score 1 runs only for 1.8 hours.” (in the appendix), but the SPS values mostly hover around 2.6 in section 3. The authors must clarify the takeaways by also showing the median and standard deviation of the SPS values.
- The paper has limited exploration of GCP in sec 3 due to dataset availability, which limits the correlations throughout the paper. The authors are advised to adjust the contributions of the paper to reflect this accurately.
- While the reviewer appreciated the real-world experiments and takeaways paper draws from correlating with the provider datasets, the details on the measurement methodology are lacking. This makes it difficult for future researchers to improve upon or replicate the work conducted by the authors in the paper.
- “Even after gathering the interruption event experiment results, without knowing the internal mechanism of spot instance operations, it can be challenging to select features that really impact the interruption events” → The paper also makes no attempt to shed light on the “internal mechanism” of spot operations. Please tone down this motivation in Intro.
questions: In addition to the weakness section above, the authors are requested to answer the following questions.
- How will the approach overcome future policy/dataset changes by AWS? What aspects from previous research in this domain can be leveraged as it is unlikely that cloud providers will change the spot availability schemes even if they change the scores/information in the dataset?
- Are the authors planning to release their experiment scripts to the public?
ethics_review_flag: No
ethics_review_description: Not Applicable
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
E4Qtxcq9bU | Making Cloud Spot Instance Interruption Events Visible | [
"Kyunghwan Kim",
"Kyungyong Lee"
] | Public cloud computing vendors offer a surplus of computing resources at a cheaper price with a service of spot instance. Despite the possible great cost savings from using spot instances, sudden resource interruption can happen as demand changes. To help users estimate cost savings and the possibility of interruption when using spot instances, vendors provide diverse datasets. However, the effectiveness of the datasets is not quantitatively evaluated yet, and many users still rely on hunch when choosing spot instances. To help users lower the chance of interruption of the spot instance for reliable usage, in this paper we thoroughly analyze various datasets of the spot instance and present the feasibility for value prediction. Then, to measure how the public datasets reflect real-world spot instance interruption events, we conduct extensive tests for spot instances of AWS, Azure, and Google Cloud while proposing a novel heuristic to control experimental cost minimizing information loss. Combining the dataset analysis, prediction, and real-world experiment result, we show the feasibility for lowering the possibility of interruption events significantly. | [
"cloud computing",
"spot instance",
"interruption modeling",
"enhancing reliability",
"spot instance datasets"
] | https://openreview.net/pdf?id=E4Qtxcq9bU | 0j87U7QQs1 | decision | 1,705,909,240,083 | E4Qtxcq9bU | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This paper focuses on cloud spot instances and their availability and reliability. The topic is of relevance to this community. Reviewers have praised that the authors investigate data from the three major cloud providers. However, reviewers have raised the following as areas for improvement: clarity of the contributions, granularity of the SPS bins, and the level of detail of some data and graphs. The authors' rebuttal is detailed, but they did not answer all concerns (e.g. the time frame issue from reviewer 6iDH). Overall, I recommend that this paper be accepted, but I will not argue strongly in favor.
E3n0tbH5rq | General Debiasing for Graph-based Collaborative Filtering via Adversarial Graph Dropout | [
"An Zhang",
"Wenchang Ma",
"Pengbo Wei",
"Leheng Sheng",
"Xiang Wang"
] | Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback – it amplifies biases present in the interaction graph.
For instance, a user's interactions with items can be driven by both unbiased true interest and various biased factors like item popularity or exposure. But the current aggregation approach combines all information, both biased and unbiased, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and generalization.
To address this issue, we introduce a novel framework called \underline{Adv}ersarial Graph \underline{Drop}out (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-augmented interactions. After view-specific aggregation, AdvDrop ensures that the bias-mitigated and bias-augmented representations remain invariant, shielding them from the influence of bias.
We validate AdvDrop's effectiveness on six public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, our method exhibits meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models, as revealed by in-depth analysis. | [
"Collaborative filtering",
"Debiasing",
"General Debias"
] | https://openreview.net/pdf?id=E3n0tbH5rq | yimtsxeZBw | decision | 1,705,909,249,817 | E3n0tbH5rq | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Summary: this paper introduces AdvDrop, a novel framework designed to mitigate bias in graph neural networks (GNNs) used for collaborative filtering in recommender systems. It employs adversarial learning to separate user-item interactions into bias-mitigated and bias-augmented views, leading to unbiased representation learning. The effectiveness of AdvDrop is demonstrated through extensive experiments across various datasets.
#### Strengths
1. **Innovative Approach**: Utilizes adversarial methods to effectively address bias issues in GNNs.
2. **Comprehensive Structure**: Well-organized presentation and provision of open-source code for replication.
#### Weaknesses
1. **Limited Novelty**: Some concepts are not novel and are confusingly defined.
2. **Unconvincing Experimental Results**: The comparative experiments lack compelling evidence of the method's superiority.
3. **Lack of Theoretical Depth**: Insufficient discussion on the mathematical principles and long-term impacts of the AdvDrop framework.
4. **User Experience Considerations**: The impact of debiasing on user satisfaction and engagement is not fully addressed.
#### Suggestions for Improvement
1. **Clarify Adversarial Learning Details**: Provide explicit explanations of training hyperparameters and their impact on performance.
2. **Dataset Suitability**: Consider experimenting with more suitable datasets, especially where current results are not satisfactory.
3. **Enhance Theoretical Discussion**: Expand on the theoretical basis of AdvDrop, including its mathematical principles and assumptions.
4. **Long-Term Impact Analysis**: Assess the potential long-term effects of AdvDrop on recommendation system performance and user behavior.
5. **User Experience Balance**: Investigate how AdvDrop balances algorithmic fairness with personalized user experience, and its effect on user satisfaction.
6. **Compare with Simple Baselines**: Justify the improvement of AdvDrop over simpler baselines like dropping interactions proportional to bias.
7. **Improve Clarity and Consistency**: Address issues with clarity in figures and consistency in explanations and representations.
The authors made good efforts in responding to and summarizing the reviewers' questions, which ultimately weighs positively on the AC's side and impacts my final recommendation.
E3n0tbH5rq | General Debiasing for Graph-based Collaborative Filtering via Adversarial Graph Dropout | [
"An Zhang",
"Wenchang Ma",
"Pengbo Wei",
"Leheng Sheng",
"Xiang Wang"
] | Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback – it amplifies biases present in the interaction graph.
For instance, a user's interactions with items can be driven by both unbiased true interest and various biased factors like item popularity or exposure. But the current aggregation approach combines all information, both biased and unbiased, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and generalization.
To address this issue, we introduce a novel framework called \underline{Adv}ersarial Graph \underline{Drop}out (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-augmented interactions. After view-specific aggregation, AdvDrop ensures that the bias-mitigated and bias-augmented representations remain invariant, shielding them from the influence of bias.
We validate AdvDrop's effectiveness on six public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, our method exhibits meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models, as revealed by in-depth analysis. | [
"Collaborative filtering",
"Debiasing",
"General Debias"
] | https://openreview.net/pdf?id=E3n0tbH5rq | cW2E59l6bq | official_review | 1,700,474,041,391 | E3n0tbH5rq | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2121/Reviewer_v2Fm"
] | review: This paper proposed a novel debiasing framework named Adversarial Graph Dropout (AdvDrop), aimed at addressing inherent biases in graph-based Collaborative Filtering (CF) models. The authors initially identify and highlight the issue of biases in graph-based CF models, particularly the biases amplified during the aggregation of user-item interaction information using Graph Neural Networks (GNNs). They then propose the AdvDrop framework, which employs adversarial learning to distinguish between biased and unbiased interactions and ensures that the representations aggregated from these interactions remain invariant. Through a series of extensive experiments, including tests across different bias scenarios and visualization of representations, the authors validate the effectiveness of AdvDrop in mitigating inherent biases, enhancing model generalization, and promoting fairness in recommendations.
Strengths:
1. The study addresses a significant and non-negligible problem in recommender systems - inherent biases. This is crucial for enhancing the fairness and accuracy of recommendation systems, directly impacting the user experience and satisfaction.
2. The paper not only proposes a new solution but also delves deeply into the root causes of the problem and the different types of biases. This thorough exploration of the issue contributes to further research in the field.
3. The paper is well-written, with a clear logical structure and expression, making it easy for readers to understand complex concepts and methodologies.
Weaknesses:
1. While the paper proposes a novel solution, the discussion on the theoretical underpinnings and in-depth mathematical principles behind this solution might not be sufficiently thorough. Adding more theoretical analysis could help in better understanding how the framework operates and its limitations.
2. The paper might not delve deeply into the long-term impacts of the AdvDrop framework on the performance of recommendation systems and user behavior. Understanding these long-term effects is crucial for assessing the sustainability of the framework in practical applications.
3. While the focus of the research is on reducing algorithmic bias, it may not fully consider the impact of this debiasing approach on the end-user experience. User satisfaction and engagement are key factors in measuring the success of recommendation systems and should be considered in future research.
questions: 1. The theoretical basis of the AdvDrop framework seems to be insufficiently discussed in the paper. Could you elaborate on the mathematical principles and assumptions behind this framework? Specifically, how is the role and mechanism of adversarial learning in debiasing concretely implemented?
2. There seems to be a lack of assessment in the paper on the long-term impacts of the AdvDrop framework. Have you considered the potential effects this framework might have on the long-term performance of recommendation systems and user behavior?
3. How do you balance algorithmic fairness with personalized user experience in the process of debiasing? Does the AdvDrop framework negatively impact user satisfaction and engagement while reducing biases?
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E3n0tbH5rq | General Debiasing for Graph-based Collaborative Filtering via Adversarial Graph Dropout | [
"An Zhang",
"Wenchang Ma",
"Pengbo Wei",
"Leheng Sheng",
"Xiang Wang"
] | Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback – it amplifies biases present in the interaction graph.
For instance, a user's interactions with items can be driven by both unbiased true interest and various biased factors like item popularity or exposure. But the current aggregation approach combines all information, both biased and unbiased, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and generalization.
To address this issue, we introduce a novel framework called \underline{Adv}ersarial Graph \underline{Drop}out (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-augmented interactions. After view-specific aggregation, AdvDrop ensures that the bias-mitigated and bias-augmented representations remain invariant, shielding them from the influence of bias.
We validate AdvDrop's effectiveness on six public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, our method exhibits meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models, as revealed by in-depth analysis. | [
"Collaborative filtering",
"Debiasing",
"General Debias"
] | https://openreview.net/pdf?id=E3n0tbH5rq | Yl9jgijZO3 | official_review | 1,700,562,970,545 | E3n0tbH5rq | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2121/Reviewer_zGW2"
] | review: This paper proposes an approach to optimize subgraph sampling through adversarial learning, thereby partitioning the original user-item interaction graph into two graphs: one that is bias-aware and another that mitigates bias. This enables the learning of debiased representations for recommendation purposes.
Strength:
1. This article addresses a crucial issue: how to mitigate biases present in recommendation.
2. The experiments seem to show the effectiveness of the proposed method.
Weakness:
Some questions regarding the methods and motivation in this paper need to be addressed. Please refer to the "Questions" section for further clarification.
questions: Q1: In the visual results presented in Figure 1, the effectiveness of AdvDrop is not clearly demonstrated. On the one hand, in Figures 1(a)-1(d), the features of Male and Female remain clearly separable; on the other hand, in Figures 1(e)-2(g), embeddings for the Tail category are also clustered together.
Q2: The authors mention the presence of various bias factors in real-world situations. A question that arises is whether the simple partitioning into only two sub-graphs (i.e., bias-aware and bias-mitigated) is feasible considering there are many bias factors. Perhaps in cases where only one bias factor is considered, this approach is feasible because it allows for a clear separation of whether the edge is influenced by that specific bias factor. However, the authors consider multiple bias factors, raising the question of whether these factors may influence and entangle with each other. In such a scenario, can we still be sure that partitioning into only two sub-graphs for all the bias factors is enough?
Q3: Is minimizing $\mathcal{L}_{inv}$ necessary when optimizing the recommenders? Since the method is going to encode two different representations—one for bias-aware and another for bias-mitigated—why should we use contrastive learning to bring these two different representations close for the same node in the graph? What exactly is the 'invariant' part for, given that the two graphs are different? Why is there an invariant part?
Q4: Is there something like a case study to further convince that the bias is alleviated?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E3n0tbH5rq | General Debiasing for Graph-based Collaborative Filtering via Adversarial Graph Dropout | [
"An Zhang",
"Wenchang Ma",
"Pengbo Wei",
"Leheng Sheng",
"Xiang Wang"
] | Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback – it amplifies biases present in the interaction graph.
For instance, a user's interactions with items can be driven by both unbiased true interest and various biased factors like item popularity or exposure. But the current aggregation approach combines all information, both biased and unbiased, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and generalization.
To address this issue, we introduce a novel framework called \underline{Adv}ersarial Graph \underline{Drop}out (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-augmented interactions. After view-specific aggregation, AdvDrop ensures that the bias-mitigated and bias-augmented representations remain invariant, shielding them from the influence of bias.
We validate AdvDrop's effectiveness on six public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, our method exhibits meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models, as revealed by in-depth analysis. | [
"Collaborative filtering",
"Debiasing",
"General Debias"
] | https://openreview.net/pdf?id=E3n0tbH5rq | Vdj04sbDzV | official_review | 1,700,579,683,374 | E3n0tbH5rq | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2121/Reviewer_QSnU"
] | review: The paper presents a novel embedding framework for recommender systems that identifies bias present in user-item interactions and also removes this bias by randomly sampling subgraphs. The authors discuss their approach theoretically and then illustrate and evaluate their approach on a collection of datasets. The experimental results show increased performance as compared to a set of baseline and sota methods. Finally, the authors analyze their approach by performing an ablation study and by analyzing the representations obtained by their approach. The paper is generally very well written and has promising results. Potentially, the paper may be impactful on future research and development of debiasing methods in recommender systems.
questions: 1) Figure 4 shows the probabilities of dropping interactions for various popularity groups. Essentially, this probability is proportional to popularity. I would expect similar probabilities for dropping interactions w.r.t. sensitive attributes. Hence, we can think of a simple baseline for removing popularity or minority/majority biases: we simply drop interactions with probability proportional to the bias that we want to remove (without any training, just using relative frequencies), as in the sketch below. What is the improvement of the presented approach over such a baseline?
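A minimal sketch of that training-free baseline (item-popularity dropping); the normalization and the cap on the drop probability here are assumptions for illustration, not something taken from the paper.

```python
import numpy as np

def popularity_dropout(interactions, max_drop=0.5, seed=0):
    """interactions: array-like of (user_id, item_id) pairs; returns the kept pairs."""
    rng = np.random.default_rng(seed)
    interactions = np.asarray(interactions)
    items, counts = np.unique(interactions[:, 1], return_counts=True)
    popularity = dict(zip(items, counts / counts.max()))    # relative frequency in (0, 1]
    drop_p = np.array([max_drop * popularity[i] for i in interactions[:, 1]])
    keep = rng.random(len(interactions)) >= drop_p          # popular items are dropped more often
    return interactions[keep]

# Toy usage: item 10 is the popular one, so its interactions are the likeliest to be dropped.
pairs = [(0, 10), (1, 10), (2, 10), (3, 11), (4, 12)]
print(popularity_dropout(pairs))
```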
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E3n0tbH5rq | General Debiasing for Graph-based Collaborative Filtering via Adversarial Graph Dropout | [
"An Zhang",
"Wenchang Ma",
"Pengbo Wei",
"Leheng Sheng",
"Xiang Wang"
] | Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback – it amplifies biases present in the interaction graph.
For instance, a user's interactions with items can be driven by both unbiased true interest and various biased factors like item popularity or exposure. But the current aggregation approach combines all information, both biased and unbiased, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and generalization.
To address this issue, we introduce a novel framework called \underline{Adv}ersarial Graph \underline{Drop}out (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-augmented interactions. After view-specific aggregation, AdvDrop ensures that the bias-mitigated and bias-augmented representations remain invariant, shielding them from the influence of bias.
We validate AdvDrop's effectiveness on six public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, our method exhibits meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models, as revealed by in-depth analysis. | [
"Collaborative filtering",
"Debiasing",
"General Debias"
] | https://openreview.net/pdf?id=E3n0tbH5rq | 9g6eFyuiRY | official_review | 1,701,324,423,569 | E3n0tbH5rq | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2121/Reviewer_2uHt"
] | review: The paper discusses the issue of biased representation learning in graph-based collaborative filtering (CF) models used in recommender systems. It highlights that the aggregation mechanism in these models amplifies biases present in the user-item interaction graph, leading to distorted views of users and items. To address this problem, the authors propose a novel framework called Adversarial Graph Dropout (AdvDrop), which differentiates between biased and unbiased interactions and enables unbiased representation learning.
AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-aware interactions. After view-specific aggregation, the framework ensures that the bias-mitigated and bias-aware representations remain invariant, mitigating the influence of bias. Experimental results on various datasets demonstrate the effectiveness of AdvDrop in reducing biases and improving recommendation accuracy.
A potential negative point of this work is the lack of detailed descriptions of parameter tuning for the compared baseline methods. While the paper compares AdvDrop with existing debiasing baselines, it does not provide comprehensive information on how the parameters of these baselines were tuned. Without a clear description of the parameter tuning process, it becomes difficult to assess the fairness of the comparison and understand whether the baselines were optimized for their best performance.
The paper lacks a comprehensive analysis of the computational efficiency of the proposed AdvDrop framework. Considering the potential scalability challenges of graph-based models, it would have been valuable to investigate and report the computational overhead introduced by AdvDrop, particularly in terms of training time and memory requirements. Such analysis would help readers assess the practical feasibility and scalability of AdvDrop in real-world scenarios with large-scale graphs and datasets.
questions: This work proposes the Adversarial Graph Dropout (AdvDrop) framework to address the issue of biased representation learning in graph-based collaborative filtering models. The paper highlights the amplification of biases in the aggregation mechanism of these models and introduces AdvDrop as a solution. AdvDrop differentiates between biased and unbiased interactions and employs adversarial learning to split the neighborhood into bias-mitigated and bias-aware views. The paper demonstrates the effectiveness of AdvDrop through experiments on public datasets, showcasing reduced biases and improved recommendation accuracy. However, the work lacks in-depth analysis of the proposed framework, comparison with state-of-the-art methods, evaluation on real-world datasets, and detailed descriptions of parameter tuning for baseline methods. Additionally, the computational efficiency and potential unintended consequences of AdvDrop are not thoroughly discussed.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E3n0tbH5rq | General Debiasing for Graph-based Collaborative Filtering via Adversarial Graph Dropout | [
"An Zhang",
"Wenchang Ma",
"Pengbo Wei",
"Leheng Sheng",
"Xiang Wang"
] | Graph neural networks (GNNs) have shown impressive performance in recommender systems, particularly in collaborative filtering (CF). The key lies in aggregating neighborhood information on a user-item interaction graph to enhance user/item representations. However, we have discovered that this aggregation mechanism comes with a drawback – it amplifies biases present in the interaction graph.
For instance, a user's interactions with items can be driven by both unbiased true interest and various biased factors like item popularity or exposure. But the current aggregation approach combines all information, both biased and unbiased, leading to biased representation learning. Consequently, graph-based recommenders can learn distorted views of users/items, hindering the modeling of their true preferences and generalization.
To address this issue, we introduce a novel framework called \underline{Adv}ersarial Graph \underline{Drop}out (AdvDrop). It differentiates between unbiased and biased interactions, enabling unbiased representation learning. For each user/item, AdvDrop employs adversarial learning to split the neighborhood into two views: one with bias-mitigated interactions and the other with bias-augmented interactions. After view-specific aggregation, AdvDrop ensures that the bias-mitigated and bias-augmented representations remain invariant, shielding them from the influence of bias.
We validate AdvDrop's effectiveness on six public datasets that cover both general and specific biases, demonstrating significant improvements. Furthermore, our method exhibits meaningful separation of subgraphs and achieves unbiased representations for graph-based CF models, as revealed by in-depth analysis. | [
"Collaborative filtering",
"Debiasing",
"General Debias"
] | https://openreview.net/pdf?id=E3n0tbH5rq | 0oYblGHrEe | official_review | 1,700,644,271,359 | E3n0tbH5rq | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2121/Reviewer_X8fj"
] | review: # summary
This paper proposes AdvDrop, a framework that addresses biased representation learning in graph neural networks (GNNs) used for collaborative filtering (CF) in recommender systems. AdvDrop differentiates between unbiased and biased interactions and employs adversarial learning to split the neighborhood into bias-mitigated and bias-augmented views. By aggregating information separately and ensuring representation invariance, AdvDrop mitigates bias and achieves unbiased representations. Experimental results on various datasets demonstrate its effectiveness in improving CF models and separating subgraphs.
# strengths
- This paper is well-organized and provides open-source code.
- Using adversarial methods to mitigate bias issues in GNNs is a very promising approach.
# weaknesses
- The novelty of the proposed method is limited, and some concepts and definitions are confusing.
- The comparative experiments yielded unsatisfactory results and lacked convincing evidence.
questions: 1、While the theoretical aspects of this paper are well-described, I recommend further clarification on the specific details related to adversarial learning. This includes providing a more explicit explanation of the training hyperparameters and their significant impact on the training performance. Enhancing the clarity of these details is essential for readers to better understand and follow, ultimately improving the overall effectiveness of the proposed approach.
2. The experiments were conducted on three public datasets, but the performance of both the baselines and the proposed method on the largest dataset, KuaiRec, was not particularly satisfactory. Would it be necessary to conduct experiments on more suitable datasets? The experimental results fail to convincingly demonstrate the superiority of this method.
ethics_review_flag: No
ethics_review_description: \
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E17Hx61n0G | Graph Contrastive Learning with Cohesive Subgraph Awareness | [
"Yucheng Wu",
"Leye Wang",
"Xiao Han",
"Han-Jia Ye"
] | Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs including social and biomedical networks. GCL widely uses stochastic graph topology augmentation, such as uniform node removal, to generate augmented graphs. However, such stochastic augmentations may severely damage the intrinsic properties of a graph and deteriorate the following representation learning process. Specifically, cohesive topological properties (e.g., $k$-core and $k$-truss) indicate strong and critical connections among multiple nodes; randomly removing nodes from a cohesive subgraph may remarkably alter the graph properties. In contrast, we argue that incorporating an awareness of cohesive subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance. To this end, we propose a novel unified framework called \textit{CTAug}, to seamlessly integrate cohesion awareness into various existing GCL mechanisms. In particular, \textit{CTAug} comprises two specialized modules: \textit{topology augmentation enhancement} and \textit{graph learning enhancement}. The former module generates augmented graphs that carefully preserve cohesion properties, while the latter module bolsters the graph encoder's ability to discern subgraph patterns. Theoretical analysis shows that \textit{CTAug} can strictly improve existing GCL mechanisms. Empirical experiments verify that \textit{CTAug} can achieve state-of-the-art performance for both graph and node representation learning, especially for graphs with high degrees. | [
"social networks",
"graph contrastive learning",
"self-supervised learning",
"cohesive subgraph"
] | https://openreview.net/pdf?id=E17Hx61n0G | qaSVNBtnxw | official_review | 1,700,816,954,216 | E17Hx61n0G | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission950/Reviewer_mfRv"
] | review: Summary:
This paper presents a new graph contrastive learning model with cohesive subgraph awareness. Specifically, the proposed CTAug improves over the existing works in the following ways: (1) it presents a topology augmentation module tailored for cohesive subgraphs, (2) it leverages both deterministic and probabilistic augmentations, and (3) it presents an enhanced graph encoders to consider subgraph features.
Strengths
+ The idea is clearly presented and is easy to understand.
+ The overall framework appears to be technically sound.
Weaknesses:
- The technical contribution is limited. Apart from the cohesive subgraph extraction, the other two modules are not clearly motivated. For example, it is not clear to me what "Unified Framework" and "Expressive Network" refer to in the introduction. Also, the whole model heavily relies on the assumption that "cohesive properties are closely tied to the graph label y", which might not be the case for all kinds of graphs (say, graphs with high heterophily). In that case, the authors need to clearly demonstrate the scope in which their model is applicable.
- The theoretical justification appears to be weak as well. The proof mostly follows from [40], which has nothing to do with graphs. Also, the authors **do** need to show the proof for the contrastive schema between two augmented graphs, as this is what they claimed in the "Unified Framework" part, which involves deterministic and probabilistic augmentations separately.
- Experimental evaluation is also problematic, with no recent baselines included (for example, Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning). The authors also need to clarify whether their framework is stronger than graph contrastive learning models without explicit augmentations (for example, SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation).
questions: Please see the weaknesses part in the above section.
=========================================
Post rebuttal feedback:
I believe most of concerns have been cleared and I am willing to increase my score.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
E17Hx61n0G | Graph Contrastive Learning with Cohesive Subgraph Awareness | [
"Yucheng Wu",
"Leye Wang",
"Xiao Han",
"Han-Jia Ye"
] | Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs including social and biomedical networks. GCL widely uses stochastic graph topology augmentation, such as uniform node removal, to generate augmented graphs. However, such stochastic augmentations may severely damage the intrinsic properties of a graph and deteriorate the following representation learning process. Specifically, cohesive topological properties (e.g., $k$-core and $k$-truss) indicate strong and critical connections among multiple nodes; randomly removing nodes from a cohesive subgraph may remarkably alter the graph properties. In contrast, we argue that incorporating an awareness of cohesive subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance. To this end, we propose a novel unified framework called \textit{CTAug}, to seamlessly integrate cohesion awareness into various existing GCL mechanisms. In particular, \textit{CTAug} comprises two specialized modules: \textit{topology augmentation enhancement} and \textit{graph learning enhancement}. The former module generates augmented graphs that carefully preserve cohesion properties, while the latter module bolsters the graph encoder's ability to discern subgraph patterns. Theoretical analysis shows that \textit{CTAug} can strictly improve existing GCL mechanisms. Empirical experiments verify that \textit{CTAug} can achieve state-of-the-art performance for both graph and node representation learning, especially for graphs with high degrees. | [
"social networks",
"graph contrastive learning",
"self-supervised learning",
"cohesive subgraph"
] | https://openreview.net/pdf?id=E17Hx61n0G | QB2mtMjmey | official_review | 1,701,352,185,269 | E17Hx61n0G | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission950/Reviewer_nEpW"
] | review: This paper utilizes cohesive topological properties to generate graph augmentation and proposes a Subgraph-aware GNN Encoder to encode subgraphs, which is a novel approach.
The paper provides a detailed description of the model architecture and follows a logical structure which is easy to understand.
The experiments in this paper are comprehensive. Both the classification experiments and ablation studies demonstrate the effectiveness of the proposed method.
questions: 1. The purpose of using Cohesive Subgraph to generate augmentation is not clear, utilizing cohesive topological properties seems to be not a general approach for all graphs, rather a method for dealing with specific graphs.
2. Equations 13 and 14 lack information; it's not evident how to perform the aggregation. I hope these two equations can offer a clearer description.
3. Some SOTA baselines for both node and graph classification are missing [1].
[1] Spectral Augmentation for Self-Supervised Learning on Graphs
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E17Hx61n0G | Graph Contrastive Learning with Cohesive Subgraph Awareness | [
"Yucheng Wu",
"Leye Wang",
"Xiao Han",
"Han-Jia Ye"
] | Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs including social and biomedical networks. GCL widely uses stochastic graph topology augmentation, such as uniform node removal, to generate augmented graphs. However, such stochastic augmentations may severely damage the intrinsic properties of a graph and deteriorate the following representation learning process. Specifically, cohesive topological properties (e.g., $k$-core and $k$-truss) indicate strong and critical connections among multiple nodes; randomly removing nodes from a cohesive subgraph may remarkably alter the graph properties. In contrast, we argue that incorporating an awareness of cohesive subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance. To this end, we propose a novel unified framework called \textit{CTAug}, to seamlessly integrate cohesion awareness into various existing GCL mechanisms. In particular, \textit{CTAug} comprises two specialized modules: \textit{topology augmentation enhancement} and \textit{graph learning enhancement}. The former module generates augmented graphs that carefully preserve cohesion properties, while the latter module bolsters the graph encoder's ability to discern subgraph patterns. Theoretical analysis shows that \textit{CTAug} can strictly improve existing GCL mechanisms. Empirical experiments verify that \textit{CTAug} can achieve state-of-the-art performance for both graph and node representation learning, especially for graphs with high degrees. | [
"social networks",
"graph contrastive learning",
"self-supervised learning",
"cohesive subgraph"
] | https://openreview.net/pdf?id=E17Hx61n0G | OStggohkWl | official_review | 1,700,592,042,851 | E17Hx61n0G | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission950/Reviewer_vsR3"
] | review: Contributions:
Motivated by the desire to protect cohesive graph properties against graph augmentation during graph contrastive learning (GCL), the authors propose a framework called CTAug that is cohesion-aware. CTAug retains the most
cohesive subgraphs from the original graph (e.g., k-clique, k-core) when generating augmented views. CTAug comprises topology augmentation (to preserve cohesion) and graph learning (to discern subgraphs) enhancement modules. The authors aim to show that CTAug theoretically and empirically improves SOTA performance on graph and node representation learning.
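To make the cohesion-preserving augmentation concrete, a small sketch of one possible reading of the idea follows; the fixed k, the uniform drop ratio, and the hard protection of the k-core are illustrative assumptions rather than CTAug's actual algorithm.

```python
import random
import networkx as nx

def core_preserving_node_drop(G, k=3, drop_ratio=0.2, seed=0):
    """Drop nodes uniformly at random, but never from the k-core of the original graph."""
    rng = random.Random(seed)
    protected = set(nx.k_core(G, k).nodes())        # cohesive nodes are kept intact
    droppable = [v for v in G.nodes() if v not in protected]
    n_drop = min(int(drop_ratio * G.number_of_nodes()), len(droppable))
    H = G.copy()
    H.remove_nodes_from(rng.sample(droppable, n_drop))
    return H

# Toy usage on a classic social graph.
G = nx.karate_club_graph()
augmented = core_preserving_node_drop(G, k=3)
print(G.number_of_nodes(), "->", augmented.number_of_nodes())
```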
My recommendation is based on S1, S2, S3, W1, W2, W3. I am happy to raise my scores based on the authors' responses to my questions and clarification/justification of W1, W2, W3.
Quality:
Pros:
- The authors provide thorough justification of why cohesive substructures are relevant to downstream tasks (Section 2.1).
- CTAUG is simple and intuitive.
- (S1) Experiments: The authors run experiments on a variety of datasets and relevant baselines. The usage of CTAUG significantly increases graph classification accuracy on all the datasets. The ablation experiment results (that both augmentation and Graph Substructure Network are needed) are convincing. CTAUG achieves comparable accuracy to baselines on node classification datasets.
Cons:
- The authors should analyze the probability and extent to which cohesive subgraphs are actually preserved with probabilistic topology augmentation. They can do this theoretically, and empirically using the datasets with which they experiment.
- The authors should comment (if possible) on how cohesive subgraphs are relevant in the context of each graph learning task/dataset they consider.
- Line 594: If $f_1$ is sufficient, then is it ever possible $I(f_1(G), G) = I(G, G) < I(f(G), G) = I(G, G)$ for another sufficient encoder $f$? Per definition 4.1, it appears that all sufficient encoders are minimally sufficient.
- (W3) Theorem 4.3: If $f$ is not a minimal sufficient encoder, will $I(f(G), y)$ still increase?
- Theorem 4.4: While GSN is more expressive than GIN, its ability to distinguish relevant substructures is contingent on learning appropriate parameters [1].
Clarity:
Pros:
- (S3) The writing is generally clear and well-organized.
Originality:
Pros:
- (S2) CTAUG can be flexibly applied to existing stochastic and deterministic GCL mechanisms to consider cohesive subgraphs.
Cons:
- (W1) CTAUG is a relatively simple combination of link re-weighting heuristics and Graph Substructure Network.
Significance:
Pros:
- The authors address the important issue that augmented views of graphs may not preserve important components of the graph (e.g., k-core).
Cons:
- (W2) The authors should elaborate further on the limitations of their approach, e.g., in which situations may cohesive subgraphs *not* be relevant (beyond graphs with low-degree nodes)?
Miscellaneous suggestions:
- The authors could investigate how CTAUG impacts graph/node classification precision/recall.
[1] Xu, Keyulu, et al. "How powerful are graph neural networks?." arXiv preprint arXiv:1810.00826 (2018).
EDIT: I have read the authors' rebuttal.
questions: Please see Review (above).
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
E17Hx61n0G | Graph Contrastive Learning with Cohesive Subgraph Awareness | [
"Yucheng Wu",
"Leye Wang",
"Xiao Han",
"Han-Jia Ye"
] | Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs including social and biomedical networks. GCL widely uses stochastic graph topology augmentation, such as uniform node removal, to generate augmented graphs. However, such stochastic augmentations may severely damage the intrinsic properties of a graph and deteriorate the following representation learning process. Specifically, cohesive topological properties (e.g., $k$-core and $k$-truss) indicate strong and critical connections among multiple nodes; randomly removing nodes from a cohesive subgraph may remarkably alter the graph properties. In contrast, we argue that incorporating an awareness of cohesive subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance. To this end, we propose a novel unified framework called \textit{CTAug}, to seamlessly integrate cohesion awareness into various existing GCL mechanisms. In particular, \textit{CTAug} comprises two specialized modules: \textit{topology augmentation enhancement} and \textit{graph learning enhancement}. The former module generates augmented graphs that carefully preserve cohesion properties, while the latter module bolsters the graph encoder's ability to discern subgraph patterns. Theoretical analysis shows that \textit{CTAug} can strictly improve existing GCL mechanisms. Empirical experiments verify that \textit{CTAug} can achieve state-of-the-art performance for both graph and node representation learning, especially for graphs with high degrees. | [
"social networks",
"graph contrastive learning",
"self-supervised learning",
"cohesive subgraph"
] | https://openreview.net/pdf?id=E17Hx61n0G | O6WEgrGokV | decision | 1,705,909,212,852 | E17Hx61n0G | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper addresses issues in graph contrastive learning that arise in the process of graph augmentation. At the heart of their approach is a simple and intuitive augmentation procedure that "preserves" cohesive topological properties of the initial graph. This may be seen as a limited contribution, but I found the idea interesting and appreciated the theoretical analysis and good experimental results. The authors also made a significant effort to address the reviewer's questions and comments in detail, and I would strongly advise them to incorporate the suggested modifications into their revised manuscript. |
E17Hx61n0G | Graph Contrastive Learning with Cohesive Subgraph Awareness | [
"Yucheng Wu",
"Leye Wang",
"Xiao Han",
"Han-Jia Ye"
] | Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs including social and biomedical networks. GCL widely uses stochastic graph topology augmentation, such as uniform node removal, to generate augmented graphs. However, such stochastic augmentations may severely damage the intrinsic properties of a graph and deteriorate the following representation learning process. Specifically, cohesive topological properties (e.g., $k$-core and $k$-truss) indicate strong and critical connections among multiple nodes; randomly removing nodes from a cohesive subgraph may remarkably alter the graph properties. In contrast, we argue that incorporating an awareness of cohesive subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance. To this end, we propose a novel unified framework called \textit{CTAug}, to seamlessly integrate cohesion awareness into various existing GCL mechanisms. In particular, \textit{CTAug} comprises two specialized modules: \textit{topology augmentation enhancement} and \textit{graph learning enhancement}. The former module generates augmented graphs that carefully preserve cohesion properties, while the latter module bolsters the graph encoder's ability to discern subgraph patterns. Theoretical analysis shows that \textit{CTAug} can strictly improve existing GCL mechanisms. Empirical experiments verify that \textit{CTAug} can achieve state-of-the-art performance for both graph and node representation learning, especially for graphs with high degrees. | [
"social networks",
"graph contrastive learning",
"self-supervised learning",
"cohesive subgraph"
] | https://openreview.net/pdf?id=E17Hx61n0G | 7UG5CDgrca | official_review | 1,700,819,528,282 | E17Hx61n0G | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission950/Reviewer_gFhZ"
] | review: This paper proposes a novel unified framework called CTAug, to seamlessly integrate cohesion awareness into various existing GCL mechanisms. In particular, CTAug comprises two specialized modules: topology augmentation enhancement and graph learning enhancement. The former module generates augmented graphs that carefully preserve cohesion properties, while the latter module bolsters the graph encoder’s ability to discern subgraph patterns.
Strengths:
Theoretical analysis from the perspective of mutual information shows that CTAug can improve the existing GCL mechanism, especially for graphs with high degrees
Weaknesses:
--The authors appear to have merely transposed an existing method into the contrastive learning setting, so the novelty is limited. The overall structure closely resembles MVGRL; the main point of departure is the use of k-core and k-truss in the data augmentation stage to achieve subgraph-aware augmentation.
--The effectiveness of the augmentation across datasets of different types, specifically how well k-truss and k-core structures persist on each of them, is not elucidated in the paper. This gap is noteworthy given that the title suggests a universal approach to data augmentation.
--For diverse datasets, the modules of the cohesive-subgraph-aware model need sufficient flexibility during learning, and the method does not adequately take other types of attributes into account.
--The authors do not provide a comparison with some of the most recent contrastive-learning data augmentation methods.
questions: --While the primary focus of data augmentation is on topology, one might contemplate integrating structural data augmentation to enhance overall effectiveness. (It is noteworthy that the experimental outcomes for the novel unsupervised method do not appear highly promising, as indicated by the results presented in https://paperswithcode.com/sota/graph-classification-on-collab.)
-- As shown in the ablation section, removing the cohesion property raises questions about the efficiency of the current model design. In the module ablation, does using only the second module have a diminished influence on accuracy?
--Random data augmentation may indeed undermine the semantic information of the graph. However, the k-core and k-truss notions used for augmentation in the paper attracted attention well before their application to social networks. I am uncertain whether these are universally effective data augmentation techniques across all types of graphs, especially graphs containing noise. How do these methods perform on graphs with noise?
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DfCh1nn1nd | Navigating the Post-API Dilemma: Search Engine Results Pages Present a Biased View of Social Media Data | [
"Amrit Poudel",
"Tim Weninger"
] | Recent decisions to discontinue access to social media APIs are beginning to have detrimental effects on Internet research and the field of computational social science as a whole. This lack of access to data has been dubbed the Post-API era of Internet research. Fortunately, popular search engines have the means to crawl, capture, and surface social media data on their Search Engine Results Pages (SERP) if provided the proper search query, and may provide a solution to this dilemma. In the present work we ask: does SERP provide a complete and unbiased sample of social media data? Is SERP a viable alternative to direct API-access? To answer this question, we perform a comparative analysis between (Google) SERP results and nonsampled data from Reddit and Twitter/X. We find that SERP results are highly biased in favor of popular posts; against political, pornographic, and vulgar posts; are more positive in their sentiment; and have large topical gaps. Overall, we conclude that SERP is not a viable alternative to social media API access. | [
"Social Media",
"Data Access",
"Bias",
"Search",
"Post-API era"
] | https://openreview.net/pdf?id=DfCh1nn1nd | pBGO3AC7ye | official_review | 1,700,518,324,392 | DfCh1nn1nd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1166/Reviewer_S9Af"
] | review: ## Summary
This study explores using results from search engines (SERP) as a replacement for the (now) virtually inaccessible Twitter and Reddit APIs. It assesses the differences between the data obtained from SERP and from the Twitter and Reddit APIs. The summary of the findings is well presented in the discussion section.
Pros:
- This is important work and relatively untouched territory.
- Well written (though some clarifications would improve the paper; see the Questions section).
- Formally introduces and presents a reasonable analysis of an alternative (SERP) to collecting data via the Twitter/Reddit APIs. The analysis centers on how similar or different the SERP data is.
Cons:
- Completeness of the dataset against which the search engine results were compared.
- A strong justification for the metric used to define popularity in Reddit and Twitter is missing. Given the final results are dependent on this metric, it seems essential to test the robustness of the findings against other metrics of popularity that can be defined and justified.
- It is unclear if a different sample of keywords used to query SERP would have changed the conclusions of this work.
Overall, I find this to be important and meaningful work.
I thank the authors for choosing to work on this problem.
However, I suggest and request some clarifications summarised in the "Questions" box below.
questions: - Elaborate on the Tokenisation scheme.
(a) Did you just split on spaces and remove non-alphanumeric characters? Or did you use SentencePiece or something like that?
(b) Did you remove stop words? Why or why is this not important to do?
(c) Comment on how this step may better the stratified sampling of keywords.
- Elaborate on stratified sampling: Assuming you grouped tokens (made the strata) based on document frequency, what are the widths of these bins? In total, were there just 1000 bins and you sampled one word from each bin? (A minimal sketch of this kind of document-frequency-stratified sampling follows this list.)
- Equation 3 has an extra 1 in the denominator (?).
- I would rename the section from ``token-based comparison`` to ``keyword-based comparison`` because what it is essentially doing (according to line 502) is comparing the ranks of the 1000 keywords used to query SERP with their rank in the reddit/twitter data.
- I think R1 and R2 are used differently in line 580 compared to their usage before (e.g., in equation 3 or line 443).
- How do you know “This dataset is considered to be a nearly-complete set of tweets” Does the paper say so? Perhaps hashtag search does that. Please clarify this.
- Given that the sample you are comparing with plays such a major role, it will be useful to offer a short summary of how this comparison sample was curated (especially in the case of X)
- The Twitter query to SERP was of the form: site:twitter.com {hashtag} {keyword} for each keyword. Clarify if this is an AND search or an OR search.
- It is unclear if the sample of keywords and hashtags used in the case of Twitter were based on stratified sampling like in the case of Reddit. It seems like hashtags were sampled randomly. Why not stratified sampling? Please clarify.
- Is the total number of comments or upvotes available in the SERP data? Maybe better to use that as a popularity metric than a score (which is the number of upvotes minus downvotes…a highly popular post may have a 0 score). Similarly, the number of retweets instead of the number of followers might be a better judge of the popularity of the post. The latter judges individual popularity. In other words, how robust are your findings to different metrics of popularity?
- "The results gathered from SERP were surprisingly small. In total SERP gathered 1,296,958 results from Reddit 318 and 70,018 tweets from Twitter/X."
Was this because of some default parameters in the SERP API since the previous paragraph mentions that the default parameters were used...
- In line 461, it is mentioned that “alpha = ⅓” is used “empirically”. What do you mean by “empirically”? Elaborate on how this value was empirically obtained.
- In line 458 you say you take care to preserve the sign, but equations 1, 2, and 3 indicate that the absolute value (shown by the modulus operator) of the element-wise RTD is taken. This is confusing. Please clarify.
- While an RTD of 0.3 on random comparisons was found, are the numbers shown in Table 1 "significantly" different from this? Please clarify the usage of significance here. Were any statistical tests performed to judge this?
- Is there a benefit to using RTD for term-level analysis to produce (say) figure 3 over something like log-odds with a Dirichlet prior?
- In sec 4.1, I suggest renaming ``term`` to ``token`` to be consistent with the previous sections.
- How is a distribution of term-level divergences obtained? From my understanding, for the word "support," you calculate its rank in the SERP data and that in Twitter and evaluate the RTD for this based on equation 2. This will result in a single number. Where then is a distribution for the RTDs of "support" coming from? Perhaps I’m missing something/further clarification is required.
- In line 632, how do you know that the terms with medium document frequency are the "more-informative" words? What does "more informative" mean?
- What is the value of N1,2;alpha in equation 3?
- What statistical test was used in line 735? Please clarify. I also see a decrease in positive sentiment tweets in Fig 6 Twitter (not sure if this is significant or not). How then can you say in line 736 that there is a shift to more positive sentiment on Twitter simply because there is an increase in neutral and a decrease in negative?
- Since sentiment analysis is done before topic, I suggest the figure for it should come before the one for topic.
- Regarding the topical analysis, given that the SERP sample was created from a list of 1000 keywords stratified-sampled based on the document frequency of tokens in the non-sampled dataset, it is expected that this misses certain topics. Of course, one can think of sampling 1000 keywords such that they cover the topical distribution of the non-sampled data. So concluding that SERP does not offer the expected coverage of topics is not, in my view, the right conclusion unless the authors convince us (the readers) otherwise.
- A major concern of this work is how the keyword sample was created to query SERP. A different sample of keywords would have changed the conclusions of this work. This should perhaps be mentioned in section 7.2. I would also soften the language in the conclusion. For example, in line 904, change “is not a viable alternative” to “may not be a viable alternative”.
- I need help understanding line 918. What do you mean by “they are actually an unbiased sample”?
Also, perhaps it is useful to discuss the possibility that results from other search engine APIs might have other kinds of biases, if not the ones found in this study; this would be an interesting direction for future work.
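To illustrate the kind of document-frequency-stratified keyword sampling referred to above, here is a minimal sketch. The binning scheme (equal-width strata over log document frequency) and the one-keyword-per-stratum rule are assumptions for illustration, not the paper's actual procedure.

```python
# Minimal sketch of document-frequency-stratified keyword sampling.
# The equal-width log-df strata and the keyword budget are assumptions
# for illustration only.
import math
import random
from collections import Counter

def sample_keywords(documents, n_keywords=1000, seed=0):
    rng = random.Random(seed)
    # Document frequency: number of documents each token appears in.
    df = Counter(tok for doc in documents for tok in set(doc.lower().split()))
    log_df = {tok: math.log(f) for tok, f in df.items()}
    lo, hi = min(log_df.values()), max(log_df.values())
    width = (hi - lo) / n_keywords or 1.0
    bins = [[] for _ in range(n_keywords)]
    for tok, v in log_df.items():
        bins[min(int((v - lo) / width), n_keywords - 1)].append(tok)
    # One keyword sampled per non-empty stratum.
    return [rng.choice(b) for b in bins if b]

docs = ["the cat sat", "the dog ran", "a cat and a dog", "stocks fell today"]
print(sample_keywords(docs, n_keywords=3))
```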
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
DfCh1nn1nd | Navigating the Post-API Dilemma: Search Engine Results Pages Present a Biased View of Social Media Data | [
"Amrit Poudel",
"Tim Weninger"
] | Recent decisions to discontinue access to social media APIs are beginning to have detrimental effects on Internet research and the field of computational social science as a whole. This lack of access to data has been dubbed the Post-API era of Internet research. Fortunately, popular search engines have the means to crawl, capture, and surface social media data on their Search Engine Results Pages (SERP) if provided the proper search query, and may provide a solution to this dilemma. In the present work we ask: does SERP provide a complete and unbiased sample of social media data? Is SERP a viable alternative to direct API-access? To answer this question, we perform a comparative analysis between (Google) SERP results and nonsampled data from Reddit and Twitter/X. We find that SERP results are highly biased in favor of popular posts; against political, pornographic, and vulgar posts; are more positive in their sentiment; and have large topical gaps. Overall, we conclude that SERP is not a viable alternative to social media API access. | [
"Social Media",
"Data Access",
"Bias",
"Search",
"Post-API era"
] | https://openreview.net/pdf?id=DfCh1nn1nd | UTtDaKv4z6 | official_review | 1,701,122,135,232 | DfCh1nn1nd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1166/Reviewer_uMAQ"
] | review: The paper discusses the possibility of using data provided by SERP as a potential replacement for data from platform APIs which are no longer available. It concludes that there are substantial distinctions between the samples provided by SERP and by APIs.
Pros: 1) The topic is highly relevant for web research and addresses the problem in a novel way; 2) The findings generated through the research design are interesting and can inform future studies' design regarding the potential of using SERP data; 3) The methodology is sound and offers a comprehensive comparison of samples provided by SERP and APIs; 4) The discussion offers a well-thought-out summary of the study, including some of its limitations (albeit not all).
Cons: 1) My major concern relates to the basic premise of the study - i.e. the comparison between SERP and API data. As far as I see, the study does not report when the SERP data was acquired, but it compares it with API data (for Twitter) coming from 2020; for the Reddit data, it is not clear when exactly the API-based dataset was established. Under these circumstances, my question is: to what degree are the differences in the SERP data attributable to the different time points when the data were collected - could some data from Twitter not have been deleted (and thus disappear from the SERP data)? That could be a rational explanation of the differences in thematic composition between the Twitter API data and the Twitter SERP data; 2) My second concern relates to the initial premise of the article: i.e. that SERP data can be viewed as a replacement for API data (unless the SERP data is biased). However, I am not fully convinced this premise is valid, given the many limitations of search engines' indexing approaches in the context of social media data. I.e., how realistic is it at all to rely on SERP data for studying social media platforms? Would not web archives (like the Internet Wayback Machine) be a more feasible alternative in the first place? 3) My third concern relates to the lack of a related research section. I think it would be useful to add a bit more discussion of the existing work on sampling issues / limitations of different approaches to social media data acquisition (including digital archiving).
questions: How much of the observed differences between the API and SERP datasets can be attributed to the time-based differences between the datasets?
Is it realistic at all to use SERP data for sampling social media data? Are there any studies attempting it (also considering the non-deterministic nature of search engine outputs noted by the authors)?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
DfCh1nn1nd | Navigating the Post-API Dilemma: Search Engine Results Pages Present a Biased View of Social Media Data | [
"Amrit Poudel",
"Tim Weninger"
] | Recent decisions to discontinue access to social media APIs are beginning to have detrimental effects on Internet research and the field of computational social science as a whole. This lack of access to data has been dubbed the Post-API era of Internet research. Fortunately, popular search engines have the means to crawl, capture, and surface social media data on their Search Engine Results Pages (SERP) if provided the proper search query, and may provide a solution to this dilemma. In the present work we ask: does SERP provide a complete and unbiased sample of social media data? Is SERP a viable alternative to direct API-access? To answer this question, we perform a comparative analysis between (Google) SERP results and nonsampled data from Reddit and Twitter/X. We find that SERP results are highly biased in favor of popular posts; against political, pornographic, and vulgar posts; are more positive in their sentiment; and have large topical gaps. Overall, we conclude that SERP is not a viable alternative to social media API access. | [
"Social Media",
"Data Access",
"Bias",
"Search",
"Post-API era"
] | https://openreview.net/pdf?id=DfCh1nn1nd | D7EEGAL3i6 | official_review | 1,698,425,989,101 | DfCh1nn1nd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1166/Reviewer_7BHH"
] | review: The paper investigates the use of search engines to obtain datasets from social media platforms like Twitter and Reddit. This paper is extremely relevant and useful nowadays, especially after Twitter and Reddit made changes to their APIs that make getting large-scale datasets a difficult task. To the best of my knowledge, this paper is novel, and I have not seen any paper that attempts to study this phenomenon. Despite the fact that the paper has a negative outcome (i.e., it concludes that the use of search engines for obtaining datasets from social media platforms is biased and incomplete) indicating that search engines can not replace the platforms’ APIs, I still believe that the paper has merit and can inform other researchers that seek alternative methods in collecting datasets.
Overall the paper has the following notable strengths:
1. Important and timely work that can have a big impact on the Computational Social Science community
2. Use of large-scale datasets across multiple platforms (Twitter and Reddit).
3. Analysis across multiple aspects, including popularity, topics, and sentiment.
4. Employed methods are well-suited for the intended analysis.
At the same time, I believe that the paper, naturally, has some weaknesses (I elaborate on these weaknesses below):
1. The sentiment analysis method is not validated across the two platforms
2. Unclear if the differences are due to pagination on Google or due to the differences in content moderation policies
3. The Twitter dataset is only related to COVID-19, which is limiting, and it is unclear how this might affect the results (i.e., likely Google moderated COVID-19-related information)
4. The analysis of the coverage is shallow and does not provide a complete picture. For instance, there is no subreddit-level analysis that will be extremely useful to researchers to figure out if specific subreddits are not available on search engines at all or if there are substantial coverage differences across subreddits.
One of my concerns with the paper is that it uses a sentiment analysis tool trained on Twitter data; however, the paper applies it to Twitter and Reddit datasets that are substantially different. So it is unclear whether the reported differences reflect genuine differences in sentiment or differences in the classifier's performance across the two platforms. I suggest the authors consider validating the sentiment analysis by performing a small-scale validation on a random sample of posts from Reddit and Twitter to quantify its performance across the two platforms.
Second, the paper finds that the data from SERP are incomplete, however it is unclear whether this lack of coverage is due to the pagination and ranking of the results on Google or due to the fact that Google has a substantially different moderation policy compared to the other platforms. I would have liked to see a deeper analysis on these issues.
Third, an important limitation of this paper is that the Twitter dataset is only related to COVID-19. Overall, it’s unclear how this factor affects the presented results, especially because it’s likely that Google moderated content related to COVID-19 because of the severity and impact of the issue in society. I suggest to the authors to discuss this limitation and how it can affect the results and probably consider expanding their analysis with another dataset from Twitter that is not related to COVID-19.
Finally, I believe that the presented analysis of Reddit is very shallow and lacks a subreddit-specific analysis. After reading this paper, I am wondering whether there are substantial coverage differences across subreddits (i.e., are some subreddits completely absent while others are complete). Also, it’s unclear if researchers can use search engines to obtain data from specific subreddits rather than Reddit as a whole. Overall, I believe that the paper missed a great opportunity to perform a deeper analysis of Reddit that would provide important information to the research community.
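To make the suggested subreddit-level analysis concrete, here is a minimal sketch of a per-subreddit coverage check; the data layout (iterables of (subreddit, post_id) pairs) is an assumption for illustration.

```python
# Illustrative per-subreddit coverage check: what fraction of each
# subreddit's posts (from the full dump) also appears in the SERP sample.
# The (subreddit, post_id) input format is assumed for illustration.
from collections import defaultdict

def coverage_by_subreddit(full_posts, serp_posts):
    full, serp = defaultdict(set), defaultdict(set)
    for sub, pid in full_posts:
        full[sub].add(pid)
    for sub, pid in serp_posts:
        serp[sub].add(pid)
    return {sub: len(serp[sub] & ids) / len(ids) for sub, ids in full.items()}

full = [("r/science", 1), ("r/science", 2), ("r/politics", 3), ("r/politics", 4)]
serp = [("r/science", 1), ("r/politics", 3), ("r/politics", 4)]
print(coverage_by_subreddit(full, serp))  # {'r/science': 0.5, 'r/politics': 1.0}
```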
Taken all together, I believe that this paper makes a nice contribution, however, there are also some important limitations and weaknesses with the presented analysis. Due to this, I would classify this submission as “accept if room.”
questions: 1. In the case of Reddit, have you observed specific subreddits not included at all in the results returned from SERP?
2. What is the performance of the sentiment analysis tool in your datasets?
3. How are the Twitter results affected because all tweets are related to COVID-19, content that might get moderated by Google?
ethics_review_flag: No
ethics_review_description: No ethical issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DfCh1nn1nd | Navigating the Post-API Dilemma: Search Engine Results Pages Present a Biased View of Social Media Data | [
"Amrit Poudel",
"Tim Weninger"
] | Recent decisions to discontinue access to social media APIs are beginning to have detrimental effects on Internet research and the field of computational social science as a whole. This lack of access to data has been dubbed the Post-API era of Internet research. Fortunately, popular search engines have the means to crawl, capture, and surface social media data on their Search Engine Results Pages (SERP) if provided the proper search query, and may provide a solution to this dilemma. In the present work we ask: does SERP provide a complete and unbiased sample of social media data? Is SERP a viable alternative to direct API-access? To answer this question, we perform a comparative analysis between (Google) SERP results and nonsampled data from Reddit and Twitter/X. We find that SERP results are highly biased in favor of popular posts; against political, pornographic, and vulgar posts; are more positive in their sentiment; and have large topical gaps. Overall, we conclude that SERP is not a viable alternative to social media API access. | [
"Social Media",
"Data Access",
"Bias",
"Search",
"Post-API era"
] | https://openreview.net/pdf?id=DfCh1nn1nd | 31cIzlPdtK | decision | 1,705,909,234,616 | DfCh1nn1nd | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: All reviewers agree about the merits of this paper. I also appreciate the efforts put forth by the authors to clarify many of the points raised in the reviews. I recommend acceptance of this paper, contingent on addressing the asks mentioned in the reviews. Here is a non-exhaustive list of the asks:
- Reviewer uMAQ was not convinced about the following point made in the paper: "Data may have been deleted from X/Twitter, but the Google/SERP indexer...." and they ask for additional clarity in the revised version of the paper.
- Reviewer S9Af asks for multiple clarifications points to be included in the final revised version of the paper and I hope the authors will be able to make those updates
- Reviewer 7BHH asks authors to be upfront about the limitations of this work. Please expand the limitation section in the revised version.
Finally, I suggest that the authors pay close attention to each point mentioned by the reviewer, reflect on their plan to address those and then update/revise the paper accordingly. I think this will make the work stronger. |
DU4Qp5oH1A | Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach | [
"Keke Huang",
"Wencai Cao",
"Hoang Ta",
"Xiaokui Xiao",
"Pietro Lio"
] | Graph Neural Networks (GNNs) as spectral graph filters, enhancing specific frequencies of graph signals while suppressing the rest, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization.
In this paper, we first unify polynomial graph filters, as well as the optimal filters of *identical* degrees into the Krylov subspace of the *same* order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs comprising numerous components, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis. | [
"Spectral Graph Neural Networks",
"Supervised Classification",
"Krylov Subspace"
] | https://openreview.net/pdf?id=DU4Qp5oH1A | vfs1UHLFFy | decision | 1,705,909,218,902 | DU4Qp5oH1A | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The reviewers are in agreement about the result's high relevance, technical quality, and novelty. There are, however, sufficient concerns about technical details, as well as experimental comparisons/scope, that make this result feel like an intermediate step to me.
DU4Qp5oH1A | Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach | [
"Keke Huang",
"Wencai Cao",
"Hoang Ta",
"Xiaokui Xiao",
"Pietro Lio"
] | Graph Neural Networks (GNNs) as spectral graph filters, enhancing specific frequencies of graph signals while suppressing the rest, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization.
In this paper, we first unify polynomial graph filters, as well as the optimal filters of *identical* degrees into the Krylov subspace of the *same* order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs comprising numerous components, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis. | [
"Spectral Graph Neural Networks",
"Supervised Classification",
"Krylov Subspace"
] | https://openreview.net/pdf?id=DU4Qp5oH1A | kM5qaz87lM | official_review | 1,700,828,516,473 | DU4Qp5oH1A | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2413/Reviewer_SQaZ"
] | review: This paper unifies polynomial filters through the Krylov subspace perspective. The authors demonstrate that polynomial filters included in the Krylov subspace of the same order have equivalent expressive power theoretically. The authors propose AdaptKry to learn graph spectral filters, and utilize the graph heat equation to design the propagation matrix $\mathbf{P}$ used to construct the Krylov subspace.
In the experiments, AdaptKry achieves the best scores on most real-world datasets for the node classification task. The ablation study demonstrates the influence of the parameter $\tau$ on different datasets.
**Strengths:**
1. Unifying poly-based spectral GNNs via Krylov subspace perspective is inspiring.
2. The introduced model can achieve superior performance on most datasets.
3. Writing and expression are clear.
**Weaknesses:**
1. Could you provide further insights regarding AdaptKry's potential to learn more complex filters, such as comb or band-pass filters? I am uncertain whether Figure 1 represents the results obtained from AdaptKry.
2. Avoiding eigendecomposition is a significant advantage when utilizing polynomials as filter bases. However, it seems that the experimental section could be further strengthened by evaluating on large-scale graphs.
3. The selection of the propagation matrix $\mathbf{P}_\tau$ looks like a parameterized normalized Laplacian matrix. Can I say that the performance improvement mostly depends on the specific propagation matrix? There should be more discussion of the propagation matrix.
questions: see weakness.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
DU4Qp5oH1A | Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach | [
"Keke Huang",
"Wencai Cao",
"Hoang Ta",
"Xiaokui Xiao",
"Pietro Lio"
] | Graph Neural Networks (GNNs) as spectral graph filters, enhancing specific frequencies of graph signals while suppressing the rest, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization.
In this paper, we first unify polynomial graph filters, as well as the optimal filters of *identical* degrees into the Krylov subspace of the *same* order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs comprising numerous components, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis. | [
"Spectral Graph Neural Networks",
"Supervised Classification",
"Krylov Subspace"
] | https://openreview.net/pdf?id=DU4Qp5oH1A | PY8ikwIXR5 | official_review | 1,700,544,423,597 | DU4Qp5oH1A | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2413/Reviewer_qLZn"
] | review: **Summary**
The paper introduces AdaptKry, a novel approach for optimizing polynomial graph filters. It unifies different filters into a Krylov subspace, improves adaptability to varying graph structures, and extends to handle complex graphs efficiently. Experimental results show AdaptKry's superior filtering capabilities and its ability to capture intricate graph characteristics.
**Strength**
1. Unifying polynomial filters into the Krylov subspace is an interesting idea.
2. AdaptKry performs well on some datasets.
3. The paper is well-written and easy to read.
**Weakness**
1. Proposition 2 defines the optimal polynomial filters, and it seems that AdaptKry does not satisfy this Proposition. I would like to understand how much AdaptKry differs from the optimal polynomial filters.
2. The main differences between AdaptKry and methods based on polynomial filters lie in the use of a propagation matrix with $\tau$ and the final concatenation operation. Because the concatenation operation adds more parameters, I believe it is necessary to conduct ablation experiments to demonstrate that the benefits of AdaptKry do not solely come from the concatenation operation.
3. The improvement of AdaptKry on heterophilic graphs is marginal.
4. There are not enough datasets for heterophilic graphs. Please note that previous work has highlighted some issues with the Chameleon and Squirrel datasets [1]. Therefore, I recommend conducting experiments using more extensive heterophilic graph datasets, such as the latest benchmarks available [1] and [2].
[1] Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?." _The Eleventh International Conference on Learning Representations_. 2022.
[2] Lim, Derek, et al. "Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods." _Advances in Neural Information Processing Systems_ 34 (2021): 20887-20902.
questions: Please refer to the Weaknesses as described above for details.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
DU4Qp5oH1A | Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach | [
"Keke Huang",
"Wencai Cao",
"Hoang Ta",
"Xiaokui Xiao",
"Pietro Lio"
] | Graph Neural Networks (GNNs) as spectral graph filters, enhancing specific frequencies of graph signals while suppressing the rest, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization.
In this paper, we first unify polynomial graph filters, as well as the optimal filters of *identical* degrees into the Krylov subspace of the *same* order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs comprising numerous components, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis. | [
"Spectral Graph Neural Networks",
"Supervised Classification",
"Krylov Subspace"
] | https://openreview.net/pdf?id=DU4Qp5oH1A | G5k9JqwGoh | official_review | 1,701,134,356,372 | DU4Qp5oH1A | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2413/Reviewer_Wxf6"
] | review: Summary:
This paper considers the design of graph spectral filters via consideration of Krylov subspaces generated by input feature vectors $x$ and propagation matrices $P$. It notes that for a fixed signal $x$ and propagation matrix $P$, the vector space generated by $x, Px, ..., P^k x, ...$ is finite-dimensional (say, dimension $t$), and so the spectral filter given by diagonalization of the graph Laplacian is equivalent to some polynomial filter of appropriate degree. It then connects the homophily ratio of a signal on a graph $G$ to its frequency responses. Motivated by this, it proposes an adaptive spectral filter with a trainable parameter. The paper then proves that this parameter can be varied to accommodate signals with a wide range of homophily ratios.
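The equivalence described in the summary can be checked numerically with a small sketch: any degree-$K$ polynomial filter applied to $x$ lies in the span of the Krylov basis $\{x, Px, \dots, P^K x\}$. The particular choice of $P$ below (symmetrically normalized adjacency with self-loops) is an assumption made purely for illustration.

```python
# Numerical sketch: a degree-K polynomial filter of x lies in the span of the
# Krylov basis {x, Px, ..., P^K x}. The choice of P here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, K = 12, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(n)                               # undirected, self-loops
d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
P = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]     # D^{-1/2} A D^{-1/2}
x = rng.standard_normal(n)

# Krylov basis: columns are x, Px, ..., P^K x.
V = np.column_stack([np.linalg.matrix_power(P, k) @ x for k in range(K + 1)])

theta = rng.standard_normal(K + 1)                    # arbitrary filter coefficients
y = sum(t * np.linalg.matrix_power(P, k) @ x for k, t in enumerate(theta))

resid = y - V @ np.linalg.lstsq(V, y, rcond=None)[0]
print("residual norm:", np.linalg.norm(resid))        # ~1e-14: y is in span(V)
```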
Pros:
1.) The paper gives solid theoretical motivation for its approach.
2.) The problem of the design of spectral filters for heterophilous graph signals is an important one.
3.) The empirical results seem promising.
Cons:
1.) The writing is confusing at times. Throughout, there is some awkward wording.
2.) I am not sure that Theorem 1 is particularly novel. It seems to be some variant of a theorem giving a convergence guarantee for the power method (for approximating the largest eigenpair of a matrix).
questions: 1.) Can the authors respond to my second con point?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DU4Qp5oH1A | Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach | [
"Keke Huang",
"Wencai Cao",
"Hoang Ta",
"Xiaokui Xiao",
"Pietro Lio"
] | Graph Neural Networks (GNNs) as spectral graph filters, enhancing specific frequencies of graph signals while suppressing the rest, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization.
In this paper, we first unify polynomial graph filters, as well as the optimal filters of *identical* degrees into the Krylov subspace of the *same* order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs comprising numerous components, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis. | [
"Spectral Graph Neural Networks",
"Supervised Classification",
"Krylov Subspace"
] | https://openreview.net/pdf?id=DU4Qp5oH1A | Dk3cbvDR5V | official_review | 1,701,055,632,488 | DU4Qp5oH1A | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2413/Reviewer_w9rP"
] | review: This paper theoretically unifies polynomial graph filters and optimal filters of identical degrees into the Krylov subspace, offering equivalent expressive power. Further exploration of the asymptotic convergence properties of polynomials from the unified Krylov subspace perspective reveals their limited adaptability in heterogeneous graphs. Inspired by these findings, an adaptive Krylov subspace approach AdaptKry is proposed to optimize polynomial bases and can be extended by multiple adaptive Krylov bases for complex graphs. The proposed AdaptKry demonstrates superior filtering capabilities and optimized efficacy of the adaptive Krylov basis on real-world datasets.
questions: 1. Based on Equation 6, polynomial filters utilize various propagation matrices P. Here, I have some confusion: (a) Why is the propagation matrix P equal to I-L/2 for BernNet and I-L for JacobiConv at Lines 353-354? (b) Are the propagation matrices in Equations 4 and 5 linearly related to the normalized Laplacian matrix L? (A standard background identity relevant to this point is sketched after this list.)
2. Can you theoretically or experimentally demonstrate the relationship between tau and the homophily ratio? Providing some evidence for selecting tau based on the homophily ratio would be valuable, and you may consider incorporating an ablation study to further investigate this.
3. At line 3 of Algorithm 1, is there a computational cost or numerical stability issue associated with the matrix inverse operation of the normalized adjacency matrix? Have any optimizations from PyTorch Geometric been utilized? Could you compare the runtime with baselines, especially on large graphs?
4. The performance on larger datasets, such as the arXiv dataset, should be verified, as was done for BernNet and ChebNetII. In the section regarding parameter settings at line 1190, only 6 datasets are mentioned, not 10.
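As background for question 1(b) above (not taken from the paper, and assuming $L$ denotes the symmetric normalized Laplacian), the propagation matrices mentioned are affine functions of $L$:

```latex
% Background identity only; L is the symmetric normalized Laplacian.
\[
  L = I - D^{-1/2} A D^{-1/2}
  \;\Longrightarrow\;
  I - L = D^{-1/2} A D^{-1/2},
  \qquad
  I - \tfrac{L}{2} = \tfrac{1}{2}\bigl(I + D^{-1/2} A D^{-1/2}\bigr).
\]
% Any P = aI + bL with b != 0 is affine in L, so degree-K polynomials in P
% and in L applied to a fixed signal x span the same Krylov subspace.
```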
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DU4Qp5oH1A | Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach | [
"Keke Huang",
"Wencai Cao",
"Hoang Ta",
"Xiaokui Xiao",
"Pietro Lio"
] | Graph Neural Networks (GNNs) as spectral graph filters, enhancing specific frequencies of graph signals while suppressing the rest, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization.
In this paper, we first unify polynomial graph filters, as well as the optimal filters of *identical* degrees into the Krylov subspace of the *same* order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs comprising numerous components, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis. | [
"Spectral Graph Neural Networks",
"Supervised Classification",
"Krylov Subspace"
] | https://openreview.net/pdf?id=DU4Qp5oH1A | 4Eeh32qN2N | official_review | 1,700,596,746,622 | DU4Qp5oH1A | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2413/Reviewer_LgWU"
] | review: **Summary**
This work proposes a differentiable framework for (undirected and unweighted)
graph filters that generalizes several well-studied polynomial approximations:
ChebNet, GPR-GNN, BernNet, and JacobiConv. In particular, it proposes using
Krylov subspaces (based on the normalized Laplacian matrix with a parameterized
amount of self-loops) and trainable per-term scale factors. The `AdaptKry`
algorithm then feeds this projection into an MLP as input for the downstream
learning task. The paper also provides extensive experiments for node
classification on academic benchmarks.
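A rough sketch of the kind of filter described in the summary may help orient readers: a polynomial in a propagation matrix with a tunable self-loop weight, one trainable scale factor per Krylov term, followed by an MLP. The exact form of the propagation matrix below (self-loop-reweighted normalized adjacency with a fixed tau) is my assumption, not necessarily the paper's definition.

```python
# Rough sketch only: polynomial filter over a self-loop-reweighted propagation
# matrix with one trainable coefficient per Krylov term, then an MLP.
# The form of p and the fixed tau are assumptions for illustration.
import torch
import torch.nn as nn

class PolyKrylovFilter(nn.Module):
    def __init__(self, in_dim, out_dim, K=4, tau=1.0):
        super().__init__()
        self.K, self.tau = K, tau
        self.theta = nn.Parameter(torch.ones(K + 1))      # per-term scale factors
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, adj, x):                 # adj: dense (n, n), x: (n, d)
        a = adj + self.tau * torch.eye(adj.size(0))       # weighted self-loops
        d = a.sum(1)
        p = a / torch.sqrt(d[:, None] * d[None, :])       # D^-1/2 (A + tau*I) D^-1/2
        out, h = self.theta[0] * x, x
        for k in range(1, self.K + 1):
            h = p @ h                                      # next Krylov term
            out = out + self.theta[k] * h
        return self.mlp(out)

model = PolyKrylovFilter(in_dim=16, out_dim=3)
logits = model(torch.rand(8, 8).round(), torch.randn(8, 16))
print(logits.shape)                            # torch.Size([8, 3])
```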
**Strengths**
- The Krylov subspace approach with $K$ trainable parameters generalizes several
previous works on (spectral) graph filters with polynomial approximations:
  * ChebNet (Defferrard et al., NIPS 2016)
* GPR-GNN (Chien et al., ICLR 2021)
* BernNet (He et al., NeurIPS 2021)
* JacobiConv (Wang-Zhang, ICML 2022)
- The paper is well organized and mostly well written.
- The node classification experiments are comprehensive and compare to many
previous works.
**Weaknesses**
- Theorem 1 is fairly well known. Please cite this result, e.g., from the
Markov chain mixing time literature.
- Section 3.3 could benefit from being made more rigorous, e.g., quantifying how much information we could lose if $K$ is too small.
**Typos and suggestions**
- [line 67] nit: "a graph $G$" --> "an undirected graph $G$" since you're using
its eigendecomposition.
- [line 201] suggestion: "For simplicity purposes," --> "For simplicity,"
- [line 206] typo: "Normalized Laplacian matrix" --> "The normalized Laplacian
matrix"
- [line 215] suggestion: Remove "Without loss of generality," if this is always
true (i.e., without needing to assume anything).
- [line 240] typo: "ration" --> "ratio"
- [line 242] suggestion: consider using $(u,v)$ or $\{u,v\}$ for edges since
$\langle u, v \rangle$ looks like an inner product without context.
questions: - [line 329] Isn't the grade of $(x, P)$ just the rank of the Krylov subspace?
If so, why introduce new notation?
- [line 356] Up until this point in the paper, there was no mention of graphs
not being bipartite. If this assumption is needed for your results, it makes
sense to make the assumption more explicit early on in Section 2.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DNejOeivdM | Can One Embedding Fit All? A Multi-Interest Learning Paradigm Towards Improving User Interest Diversity Fairness | [
"Yuying Zhao",
"Minghua Xu",
"Huiyuan Chen",
"Yuzhong Chen",
"Yiwei Cai",
"Rashidul Islam",
"Yu Wang",
"Tyler Derr"
] | Recommender systems have gained widespread applications across various domains owing to their superior ability to understand and capture users' interests. However, the complexity and nuanced nature of users' interests, which can span a wide range of diversity, pose a significant challenge in delivering fair recommendations. In real-world scenarios, user preferences vary significantly; some users show a clear preference toward certain item categories, while others have a broad interest in diverse ones. Even though it is expected that all users should receive high-quality recommendations, the effectiveness of recommender systems in catering to this disparate interest diversity remains under-explored.
In this work, we investigate whether users in different groups with varied levels of interest diversity are treated fairly. Our empirical experiments reveal an inherent disparity: users who have a wider range of interests often receive lower-quality recommendations. To achieve fairer recommendations, we propose a multi-interest framework that uses multiple (virtual) interest embeddings, rather than the utilization of single embedding to represent individual users. Specifically, the framework consists of stacked multi-interest representation layers. Each layer includes an interest embedding generator that derives virtual interests from globally shared interest parameters, and a center embedding aggregator that facilitates multi-hop aggregation. The experiments have demonstrated the effectiveness of the proposed method in achieving better trade-off between fairness and utility across various datasets and backbones. Our code and datasets are available at: https://anonymous.4open.science/r/User-Interest-Diversity-Fairness-4BBE/. | [
"Fairness",
"Diversity",
"Multi-Interest Recommendations"
] | https://openreview.net/pdf?id=DNejOeivdM | uwFm7y3dxX | official_review | 1,700,824,909,334 | DNejOeivdM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2089/Reviewer_8Mqa"
] | review: Summary: The paper studies whether users with diverse interests are treated fairly in recommendation systems. First, the paper shows that users with more diverse interests receive poorer recommendations in recommender systems. Second, the paper proposes an alternative method for representing user interests, in which each user is represented by multiple embedding vectors. Third, the paper evaluates this multiple-embedding system on a series of datasets and reports higher fairness and utility.
Strengths:
- Considering the impacts of recommender systems on different groups of users is an important topic and this problem seems well-motivated.
Suggestion:
- I think Figure 5 could be improved to more clearly communicate the set-up. For example, I found Figure 4 very clear, but Figure 5 had me lost.
Weaknesses:
- I found the experimental results in Table 3 a little hard to contextualize. A caption for Table 3 to orient the reader would also be helpful. I've added more questions for table 3 below.
questions: My understanding is that Table 3 compares RS backbones with and without the multi-interest framework. I have a few questions about the interpretation of these results.
- What is the number of parameters for each model with and without the multi-interest framework? I am wondering whether improvements in performance are due to the specific architecture of the multi-interest framework or to differences in the sizes of the models.
- What does the "-" bar in Table 3 refer to?
- Are the results in Table 3 statistically significant? I am wondering how to contextualize the differences in performance between different models.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
DNejOeivdM | Can One Embedding Fit All? A Multi-Interest Learning Paradigm Towards Improving User Interest Diversity Fairness | [
"Yuying Zhao",
"Minghua Xu",
"Huiyuan Chen",
"Yuzhong Chen",
"Yiwei Cai",
"Rashidul Islam",
"Yu Wang",
"Tyler Derr"
] | Recommender systems have gained widespread applications across various domains owing to their superior ability to understand and capture users' interests. However, the complexity and nuanced nature of users' interests, which can span a wide range of diversity, pose a significant challenge in delivering fair recommendations. In real-world scenarios, user preferences vary significantly; some users show a clear preference toward certain item categories, while others have a broad interest in diverse ones. Even though it is expected that all users should receive high-quality recommendations, the effectiveness of recommender systems in catering to this disparate interest diversity remains under-explored.
In this work, we investigate whether users in different groups with varied levels of interest diversity are treated fairly. Our empirical experiments reveal an inherent disparity: users who have a wider range of interests often receive lower-quality recommendations. To achieve fairer recommendations, we propose a multi-interest framework that uses multiple (virtual) interest embeddings, rather than the utilization of single embedding to represent individual users. Specifically, the framework consists of stacked multi-interest representation layers. Each layer includes an interest embedding generator that derives virtual interests from globally shared interest parameters, and a center embedding aggregator that facilitates multi-hop aggregation. The experiments have demonstrated the effectiveness of the proposed method in achieving better trade-off between fairness and utility across various datasets and backbones. Our code and datasets are available at: https://anonymous.4open.science/r/User-Interest-Diversity-Fairness-4BBE/. | [
"Fairness",
"Diversity",
"Multi-Interest Recommendations"
] | https://openreview.net/pdf?id=DNejOeivdM | YDg30DFvgv | official_review | 1,701,398,981,174 | DNejOeivdM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2089/Reviewer_qDwd"
] | review: My overall understanding of this work:
Starts by identifying and defining a source of disparity (user "interest diversity"/broadness and how users with broad interests overall receive poorer quality recommendations), shows how this disparity exists within current graph-based models for multiple datasets, and proposes and tests a new multi-interest framework that can be used with current graph-based model outlines (using center and virtual embeddings together instead of only a single embedding for items+users) that is designed to address this disparity.
Pros:
- Originality/Significance: This is a neat problem and I'm glad to see an approach suggested!
- Clarity: Nice notation table and general clarity in defining functions.
- Clarity: Figure 4 is a clear summary of the "one embedding is not good enough" motivation. (However, it also caused some confusion - on first read I missed the fact that your framework is designed to allow center and virtual embeddings for both users and items, because Figure 4 seemed to imply that you would work with center/virtual embeddings for users only.)
- Quality: The research questions you have outlined seem like a very neat breakdown of the important points, especially with RQ3 through RQ5 exploring additional benefits on top of simple alignment and performance tests. (I'm curious how the future work for these last three RQs will go.)
Cons:
- Clarity: I'm confused what the (A), (B), (C) label texts in Figure 2 are supposed to mean (outside of them being used as LaTeX reference anchors). It looks like all the graphs are showing model utility/performance across a range of models, datasets, and across multiple user subgroups on the X-axis split different ways for each subfigure, so the texts are just creating extra confusion.
- Clarity: This may be GCN-specific context - I'm still lost on what a useful high-level descriptor of the global interest embeddings would be. They aren't described in detail anywhere in this paper.
- Clarity: Figure 5, the "center embedding aggregator" focus area is very confusing (see questions)
- Quality: Not sure why the "multi-interest framework works better with large dataset" performance comparison focuses first on dataset size vs. category count or domain-specific usage details. I'm curious to see more about how diversity of items and categories in this dataset affects results, instead of only the overall dataset size.
Specific suggestions:
- Clarity: Section 3.1 (source of unfairness) may tie better into section 2 (exploring preliminary results + building a motivation) as a way of wrapping it up and providing transition motivation. Currently section 3 seems like it's supposed to introduce your new framework but 3.1 only goes back to talking about previous preliminary performance and motivations.
Context for my review:
I have generally surveyed (focusing on motivations and application challenges, not technical implementation) various works in fairness in recommender and classification systems, largely those focused on fairness for users across distinct identity groups and some work with ranking fairness. I am not familiar with the methods of LightGCN or CAGCN, and I am still unsure how exactly your framework ties into these backbones after reading the paper or what backbone-specific details I am lacking the context for (e.g. I'm not sure what "number of hops" as discussed in RQ5 means exactly and am assuming this is part of the GCN functionality outside of your new framework). This makes judging originality and significance a bit trickier too, so most of my opinion is based on quality and clarity.
questions: If I try to summarize the framework based on Figure 5 right now, I'm understanding it as:
1. Input: a user-item interaction graph, with some prebuilt / seed embeddings for each user and each item (the original center embeddings)
2. Some part of the representation layers (the interest embedding generator): for each user's item interaction history, apply some number of global interest embeddings (attention? grouping similar items together? rough binning of recognized interest types?) to generate a set of user virtual embeddings, which essentially describe a set of interests for each user.
3. Some part of the representation layers (the center embedding aggregator): for each user's item interaction history, identify some set of item virtual embeddings with one for each item (where do the item virtual embeddings come from? are these also generated with the interest embedding generator, just with the user/item positions flipped? how is this exact set selected (I did see the process described in text, but the figure is confusing)?) and do something with them. Based on the figure and reading from top down, it looks like you are generating virtual embeddings based on this selection, but based on the heading and text description it looks like the point is to generate an updated user center embedding.
4. After some number of layers, end up outputting a set of updated center embeddings for each user+item and multiple sets of virtual embeddings for each user and item (possibly of varying count, the figure doesn't make this clear but the text mentions this).
Is this broadly accurate? (if yes, most of my clarity complaints in the review are still there because Figure 5 did take a while to parse.)
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 7
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
DNejOeivdM | Can One Embedding Fit All? A Multi-Interest Learning Paradigm Towards Improving User Interest Diversity Fairness | [
"Yuying Zhao",
"Minghua Xu",
"Huiyuan Chen",
"Yuzhong Chen",
"Yiwei Cai",
"Rashidul Islam",
"Yu Wang",
"Tyler Derr"
] | Recommender systems have gained widespread applications across various domains owing to their superior ability to understand and capture users' interests. However, the complexity and nuanced nature of users' interests, which can span a wide range of diversity, pose a significant challenge in delivering fair recommendations. In real-world scenarios, user preferences vary significantly; some users show a clear preference toward certain item categories, while others have a broad interest in diverse ones. Even though it is expected that all users should receive high-quality recommendations, the effectiveness of recommender systems in catering to this disparate interest diversity remains under-explored.
In this work, we investigate whether users in different groups with varied levels of interest diversity are treated fairly. Our empirical experiments reveal an inherent disparity: users who have a wider range of interests often receive lower-quality recommendations. To achieve fairer recommendations, we propose a multi-interest framework that uses multiple (virtual) interest embeddings, rather than the utilization of single embedding to represent individual users. Specifically, the framework consists of stacked multi-interest representation layers. Each layer includes an interest embedding generator that derives virtual interests from globally shared interest parameters, and a center embedding aggregator that facilitates multi-hop aggregation. The experiments have demonstrated the effectiveness of the proposed method in achieving better trade-off between fairness and utility across various datasets and backbones. Our code and datasets are available at: https://anonymous.4open.science/r/User-Interest-Diversity-Fairness-4BBE/. | [
"Fairness",
"Diversity",
"Multi-Interest Recommendations"
] | https://openreview.net/pdf?id=DNejOeivdM | Ky6VZ0KVuZ | official_review | 1,700,716,060,170 | DNejOeivdM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2089/Reviewer_9Rwp"
] | review: **Summary:**
The paper studies the performance of recommender systems on users with multiple interests, focusing on fairness for these users. The paper is motivated by ensuring fairness for users with multiple interests, which differs from classical group fairness objectives because the group membership is implicit. The main contributions of the paper are (1) empirically showing that existing recommendation algorithms perform poorly on users with multiple interests, (2) designing a new framework to improve performance on these users, and (3) empirically evaluating the framework.
The paper measures interest diversity based on two metrics. The first metric is the diversity of the item categories shown in user’s historical interactions (to be used when category information is available). The second metric is diversity of interests in terms of the item embeddings (as measured by inner product) shown in user’s historical interactions (to be used when category information is unavailable).
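For concreteness, the two diversity notions could be computed roughly as below; this is my own sketch with assumed inputs, not the paper's exact definitions (the paper measures embedding similarity by inner product, while cosine similarity is used here just for illustration).

```python
import numpy as np
from collections import Counter

def category_diversity(item_categories):
    """Shannon entropy of the category distribution in a user's history."""
    counts = np.array(list(Counter(item_categories).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def embedding_diversity(item_embs):
    """1 - mean pairwise cosine similarity of interacted item embeddings
    (assumes at least two items in the history)."""
    E = item_embs / np.linalg.norm(item_embs, axis=1, keepdims=True)
    sim = E @ E.T
    n = len(E)
    mean_off_diag = (sim.sum() - np.trace(sim)) / (n * (n - 1))
    return float(1.0 - mean_off_diag)

print(category_diversity(["drama", "drama", "comedy", "sci-fi"]))
print(embedding_diversity(np.random.default_rng(0).normal(size=(5, 8))))
```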
The framework to improve performance for these users is based on the following key idea: assign a user multiple embeddings reflecting the multiplicity of interests, in addition to a center embedding reflecting the user's key characteristics. The score function is adjusted to reflect the maximum score with respect to any of the interests. The paper provides an empirical evaluation of LightGCN and CAGCN* modified according to the multi-interest framework on the ml-1m, epinion, cosmetics, and anime datasets, and compares with DRO and ARL as baselines. The paper shows cases where the proposed framework achieves better fairness-utility tradeoffs than these baselines.
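The max-over-interests scoring idea (score an item by its best-matching virtual interest, rather than by a single user embedding) might look roughly like this; it is illustrative only, and names such as `user_interests` are mine.

```python
import numpy as np

def single_embedding_score(user_emb, item_emb):
    """Conventional scoring with one embedding per user."""
    return float(user_emb @ item_emb)

def multi_interest_score(user_interests, item_emb):
    """user_interests: (k, d) virtual interest embeddings for one user.
    The item is scored by its best-matching interest."""
    return float(np.max(user_interests @ item_emb))

rng = np.random.default_rng(0)
interests = rng.normal(size=(3, 16))   # k = 3 virtual interests
item = rng.normal(size=16)
print(multi_interest_score(interests, item))
```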
**Strengths:**
- The problem of users having multiple interests is well-motivated, and the finding that typical recommendation algorithms perform poorly on these users is practically relevant and interesting.
- The idea of introducing multiple embeddings for users to reflect different interests is natural and well-motivated.
- For baselines, in addition to the unmodified LightGCN and CAGCN*, the paper considers modified versions of LightGCN and CAGCN* with other fair baselines (DRO and ARL) that are group agnostic. This provides a nuanced comparison and helps clearly evaluate the proposed framework.
**Weaknesses:**
- In the framework, both the users and the items have multiple embeddings. It is not clear why the items also need to have multiple virtual embeddings (and not just the users)—see question below.
- While the paper frames the goal as ensuring that users with many interests are treated as a *fairness* problem, it is not clear fairness for the group of users with varied interests is a first-order goal in practice (see question below). In particular:
- The authors do provide the following example on p.1: “While homosexual and heterosexual users have more specific preferences related to gender interests, bisexual users might exhibit a broader range of interests.” However in this example, on a dating app, the platform might request this axis of the users’ sexual orientation and explicitly ensure fairness for this protected group using standard fairness approaches for “explicit sensitive attributes”.
- That being said, performing well on users with varied interests seems well-motivated as an approach to improve utility across the whole population (as the paper examines in Section 4.1). However, the empirical evaluation of the paper in Table 3 suggests that the proposed method does not outperform existing approaches on the basis of accuracy (recall) alone.
questions: Why do the items also need to have multiple virtual embeddings?
Please discuss the fairness motivation in greater detail. Could the authors provide more real-world examples where a worse recommendation quality for users with varied interests might lead to *fairness* concerns from a practical perspective?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DNejOeivdM | Can One Embedding Fit All? A Multi-Interest Learning Paradigm Towards Improving User Interest Diversity Fairness | [
"Yuying Zhao",
"Minghua Xu",
"Huiyuan Chen",
"Yuzhong Chen",
"Yiwei Cai",
"Rashidul Islam",
"Yu Wang",
"Tyler Derr"
] | Recommender systems have gained widespread applications across various domains owing to their superior ability to understand and capture users' interests. However, the complexity and nuanced nature of users' interests, which can span a wide range of diversity, pose a significant challenge in delivering fair recommendations. In real-world scenarios, user preferences vary significantly; some users show a clear preference toward certain item categories, while others have a broad interest in diverse ones. Even though it is expected that all users should receive high-quality recommendations, the effectiveness of recommender systems in catering to this disparate interest diversity remains under-explored.
In this work, we investigate whether users in different groups with varied levels of interest diversity are treated fairly. Our empirical experiments reveal an inherent disparity: users who have a wider range of interests often receive lower-quality recommendations. To achieve fairer recommendations, we propose a multi-interest framework that uses multiple (virtual) interest embeddings, rather than the utilization of single embedding to represent individual users. Specifically, the framework consists of stacked multi-interest representation layers. Each layer includes an interest embedding generator that derives virtual interests from globally shared interest parameters, and a center embedding aggregator that facilitates multi-hop aggregation. The experiments have demonstrated the effectiveness of the proposed method in achieving better trade-off between fairness and utility across various datasets and backbones. Our code and datasets are available at: https://anonymous.4open.science/r/User-Interest-Diversity-Fairness-4BBE/. | [
"Fairness",
"Diversity",
"Multi-Interest Recommendations"
] | https://openreview.net/pdf?id=DNejOeivdM | JikGMHpsnp | official_review | 1,700,803,086,691 | DNejOeivdM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2089/Reviewer_Xizy"
] | review: ### Summary
This paper tackles the issue that in recommender systems some users are harder to model than others. Specifically, some users have a single area of interest while others are interested ina wide, diverse array of areas. Many current recommender systems give recommendations of poor quality for users of diverse interests. To mitigate this issue, the authors present a model that learns multiple user embeddings for each user where each embedding corresponds to a different area of interest. The paper presents a motivating example, introduces measures of interest diversity as well as their model architecture, and provides thorough experimental results. Overall I find this to be a well-written and thorough paper targeting an important issue.
### Pros
* The problem is well-defined and Figure 2 provides good motivation for the rest of the paper.
* I found that the model architecture, though complex, was clearly presented.
* The experiments consider not only the accuracy vs fairness tradeoff but also analyze other properties specific to the algorithm (alignment, interest matching, recommendation diversity).
### Cons
* It is left as future work, but I think the paper would benefit from a deeper analysis of the Interest Matching limitations. Given that each user has k virtual embeddings regardless of interest diversity, do the superfluous virtual embeddings harm recommendation quality for users with narrow (less diverse) interests?
* The majority of the introduction to the paper focuses on multi-interest from the user perspective. However, the model also learns virtual embeddings for each item. This appears detached from the initial motivation and it is unclear why the item virtual embeddings are needed.
### Miscellaneous
* In Equation 5, the $E_C^l[v]$ should be $E_C^l[v_n]$
questions: * Building on the first con, do the authors believe that the multi-interest framework hurts users who have singular interests?
* Regarding the second con, are the item virtual embeddings needed for the current results?
* For the motivating example in Figure 2, is it possible to verify whether the users who have a high-interest diversity in the training set also have a high-interest diversity in the test set? This would alleviate the confounding explanation that for user $i$ the set of interests expressed in the training set is mutually exclusive from the set reflected in the test set i.e. the cause of performance decrease is more directly connected to interest diversity instead of interest drift.
* For the sum in the BPR loss in 3.4, is the sum over all pairs of positive and negative items? If so, does the number of terms in the sum become prohibitively large for sparse datasets, and would negative sampling be needed in practice? (A minimal sampled-BPR sketch is given below for reference.)
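For reference, here is what a sampled BPR objective (one random negative per observed positive) typically looks like, which is how the sum is usually kept tractable in practice; this is purely an illustration and not the paper's implementation.

```python
import numpy as np

def sampled_bpr_loss(user_emb, item_embs, pos_items, n_items, rng):
    """BPR with one sampled negative per positive interaction
    (assumes the positive set is much smaller than the catalog)."""
    loss = 0.0
    for i in pos_items:
        j = int(rng.integers(n_items))
        while j in pos_items:                       # resample until a true negative
            j = int(rng.integers(n_items))
        x_ui = user_emb @ item_embs[i]
        x_uj = user_emb @ item_embs[j]
        loss += -np.log(1.0 / (1.0 + np.exp(-(x_ui - x_uj))))   # -log sigmoid
    return loss / len(pos_items)

rng = np.random.default_rng(0)
item_embs = rng.normal(size=(100, 16))
user_emb = rng.normal(size=16)
print(sampled_bpr_loss(user_emb, item_embs, pos_items={3, 7, 42}, n_items=100, rng=rng))
```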
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DNejOeivdM | Can One Embedding Fit All? A Multi-Interest Learning Paradigm Towards Improving User Interest Diversity Fairness | [
"Yuying Zhao",
"Minghua Xu",
"Huiyuan Chen",
"Yuzhong Chen",
"Yiwei Cai",
"Rashidul Islam",
"Yu Wang",
"Tyler Derr"
] | Recommender systems have gained widespread applications across various domains owing to their superior ability to understand and capture users' interests. However, the complexity and nuanced nature of users' interests, which can span a wide range of diversity, pose a significant challenge in delivering fair recommendations. In real-world scenarios, user preferences vary significantly; some users show a clear preference toward certain item categories, while others have a broad interest in diverse ones. Even though it is expected that all users should receive high-quality recommendations, the effectiveness of recommender systems in catering to this disparate interest diversity remains under-explored.
In this work, we investigate whether users in different groups with varied levels of interest diversity are treated fairly. Our empirical experiments reveal an inherent disparity: users who have a wider range of interests often receive lower-quality recommendations. To achieve fairer recommendations, we propose a multi-interest framework that uses multiple (virtual) interest embeddings, rather than the utilization of single embedding to represent individual users. Specifically, the framework consists of stacked multi-interest representation layers. Each layer includes an interest embedding generator that derives virtual interests from globally shared interest parameters, and a center embedding aggregator that facilitates multi-hop aggregation. The experiments have demonstrated the effectiveness of the proposed method in achieving better trade-off between fairness and utility across various datasets and backbones. Our code and datasets are available at: https://anonymous.4open.science/r/User-Interest-Diversity-Fairness-4BBE/. | [
"Fairness",
"Diversity",
"Multi-Interest Recommendations"
] | https://openreview.net/pdf?id=DNejOeivdM | GThhyPzMPB | decision | 1,705,909,221,650 | DNejOeivdM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Our decision is to accept. Please see the AC's review below and improve the work, taking it and the reviewers' feedback into account, for the camera-ready submission.
"This paper studies the question of fairness in item recommendation - specifically, the authors consider "fairness" as giving similar levels of utility to users who have a) more specific, narrow preferences and b) users who have more variable/wide types of preferences. The authors achieve this goal by allowing for multiple embeddings representing different types of interest.
Overall the reviewers appreciated that the paper was well-written in general (although note a few points where clarity could be improved, such as part of Figures 2 and 5, and certain key terms), and appreciated the importance of the problem. Some reviewers raised concerns about how this new approach compares with benchmarks: for example, Table 3 shows that their proposed method only sometimes outperforms benchmarks on recall. In response, during the rebuttal phase the authors reran the experiments with more simulations and showed that their approach has the highest rank (tradeoff between fairness and accuracy). Also during the rebuttal phase, the authors explored datasets with larger numbers of items and categories, exploring distribution shift between the training and test dataset.
In one more minor point, I think that the dating example used by the authors isn't ideal - at least two reviewers mentioned this as suboptimal (one informally in an offline conversation). I've added a note to the authors about this example in a separate comment, but I think this doesn't substantively affect my opinion of the main contribution of the paper." |
DJttojBfnX | Adversarial Mask Explainer for Graph Neural Networks | [
"Wei Zhang",
"XIAOFAN LI",
"Wolfgang Nejdl"
] | The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter $K$ to control the explanation size during the training process and keep only the top-$K$ weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and $K$. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity. | [
"Explainability",
"Graph Neural Networks",
"Graph Analysis"
] | https://openreview.net/pdf?id=DJttojBfnX | xqQaM8eeDz | official_review | 1,701,057,042,825 | DJttojBfnX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1775/Reviewer_8PCX"
] | review: This paper proposes a simple dual-optimization target function that leverages the concept of adversarial networks to generate a sparse set of masks with high fidelity in order to explain the predictions made by GNNs.
questions: 1. When explaining predictions made by GNNs, Explanation Accuracy is commonly used as an evaluation metric. However, this paper deviates from this standard and employs an alternative metric, Absolute Fidelity. What are the advantages of using this metric? Additionally, the paper should incorporate Inference Time to assess model efficiency.
2. It is advisable to consider GRAD and ATT as additional baselines.
3. Could the authors please provide specifics on how the mutual information between label and graph, as described in Eq 1, is calculated?
4. The temperature parameter $\beta$ is crucial, and the authors should investigate how varying its values across a broader range affects the experimental results.
5. What is the hyperparameter K mentioned in the abstract?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
DJttojBfnX | Adversarial Mask Explainer for Graph Neural Networks | [
"Wei Zhang",
"XIAOFAN LI",
"Wolfgang Nejdl"
] | The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter $K$ to control the explanation size during the training process and keep only the top-$K$ weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and $K$. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity. | [
"Explainability",
"Graph Neural Networks",
"Graph Analysis"
] | https://openreview.net/pdf?id=DJttojBfnX | eT2LsVwD7W | official_review | 1,700,545,790,160 | DJttojBfnX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1775/Reviewer_6y4Z"
] | review: This paper studies the problem of instance-level explanations for GNNs. The authors claim the previous methods rely on the predefined explanation size, and propose the AMExplainer, which leverages the scaling function to automatically select the desired subgraphs.
Pros:
1. The problem of finding instance-level explanations for GNNs is interesting and worth exploring.
2. The proposed method can automatically choose the size of the explanation subgraph while maintaining sparsity.
3. Experimental results demonstrate that the proposed method can reduce fidelity in both node classification and graph classification tasks.
Cons:
1. The motivation that complementary subgraph has no prediction ability on any class is unclear.
2. The graphs used in the experiments are quite small and specific. Can the proposed method provide explanations for the general node classification task?
3. The authors leverage fidelity as the metric for both node classification and graph classification. However, methods such as GSAT and OrphiX can achieve near-zero fidelity, although their sparsity might be 0. Is fidelity a good metric for this task? (One common formulation of fidelity is sketched below for reference.)
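For reference, one common way fidelity is computed for an explanation subgraph is the drop in the predicted probability of the original class once the explanation is removed; the sketch below uses assumed names and may differ from the paper's "absolute fidelity".

```python
import numpy as np

def fidelity(predict_proba, adj, feats, expl_mask, target_class):
    """Probability drop for the target class when the explanation edges
    are removed; predict_proba is the trained GNN's forward pass."""
    p_full = predict_proba(adj, feats)[target_class]
    p_without = predict_proba(adj * (1.0 - expl_mask), feats)[target_class]
    return float(p_full - p_without)

# toy stand-in model whose prediction depends only on total edge weight
def toy_model(adj, feats):
    s = 1.0 / (1.0 + np.exp(-(adj.sum() - 2.0)))
    return np.array([1.0 - s, s])

adj = np.array([[0.0, 1.0], [1.0, 0.0]])
expl_mask = np.array([[0.0, 1.0], [1.0, 0.0]])
print(fidelity(toy_model, adj, feats=None, expl_mask=expl_mask, target_class=1))
```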
questions: 1. Where does observation 1 come from?
2. For the node classification tasks, how do you select the number of layers of GNNs? For example, does the node beyond the k-hop of the target node belong to the explanation subgraph if the GNN only has k layers?
3. How do you calculate the mutual information between the complementary subgraph and the uniform distribution for the node classification task? If the target node's first-hop neighbors all belong to the explanation subgraph, the complementary graph would isolate the target node.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
DJttojBfnX | Adversarial Mask Explainer for Graph Neural Networks | [
"Wei Zhang",
"XIAOFAN LI",
"Wolfgang Nejdl"
] | The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter $K$ to control the explanation size during the training process and keep only the top-$K$ weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and $K$. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity. | [
"Explainability",
"Graph Neural Networks",
"Graph Analysis"
] | https://openreview.net/pdf?id=DJttojBfnX | ablJeQswTQ | official_review | 1,700,828,325,501 | DJttojBfnX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1775/Reviewer_q6Re"
] | review: This paper focuses on the issue of graph neural network interpretability. Specifically, the authors utilize the concept of adversarial networks to implement a dual optimization objective in the objective function, ensuring accurate prediction of the mask and sparsity of the explanation set. In addition, a scaling function is designed to automatically sense and amplify the weights of the informative part of the graph, filtering out unimportant edges/nodes/node features to speed up the convergence of the solution during training.
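To give a feel for the mask-based setup (a generic sketch of perturbation-style edge masking rather than the authors' AMExplainer), a learned logit per edge is squashed to [0, 1] and applied to the adjacency; the sharpened sigmoid below merely stands in for the paper's scaling function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_adjacency(A, edge_logits, beta=5.0):
    """Soft edge mask in [0, 1]; beta sharpens the sigmoid so that
    informative edges are pushed toward 1 and the rest toward 0
    (a stand-in for the paper's scaling function)."""
    M = sigmoid(beta * edge_logits) * (A > 0)        # mask only existing edges
    return A * M, A * (1.0 - M)                      # explanation / complement graphs

A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
edge_logits = np.array([[0.0, 2.0, -2.0], [2.0, 0.0, 0.0], [-2.0, 0.0, 0.0]])
G_expl, G_comp = masked_adjacency(A, edge_logits)
```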
Strengths:
1. This paper focuses on an important issue of explaining graph neural networks.
2. The proposed method is reasonable and technically sound.
3. The results are promising when compared to representative perturbation-based interpreters.
Weaknesses:
1. Some of the authors' statements are confusing. For example, in line 160, "the complement set of the explanation edge set (indicated by solid lines, while the complement set is marked by dotted lines)": do the solid lines indicate the explanation edge set or the complement set?
2. As far as I know, there are some decomposition methods for explaining graph neural networks that have also achieved good results. However, in the experimental section, the authors do not compare the decomposition methods with the proposed framework. It is suggested to add a decomposition-based interpreter as a baseline.
3. Did the authors attempt node classification experiments on a wider range of real datasets, such as Cora, PubMed, ogbn-arxiv, and others?
4. It is suggested to add some visualizations to show more clearly the interpretability of the proposed method.
5. Given the relative complexity of the proposed methodology, a temporal and spatial complexity analysis or an empirical speed assessment would be useful in assessing its significance.
questions: Please refer to the weaknesses.
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
DJttojBfnX | Adversarial Mask Explainer for Graph Neural Networks | [
"Wei Zhang",
"XIAOFAN LI",
"Wolfgang Nejdl"
] | The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter $K$ to control the explanation size during the training process and keep only the top-$K$ weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and $K$. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity. | [
"Explainability",
"Graph Neural Networks",
"Graph Analysis"
] | https://openreview.net/pdf?id=DJttojBfnX | YQqStxPi7D | decision | 1,705,909,216,438 | DJttojBfnX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The reviewers appreciate the submission's contribution in terms of novel methods and new angles to tackle the tradeoff between sparsity and explanation faithfulness. The approach is also sound and justified.
The reviewers have raised a number of concerns, including the choice of baselines, hyperparameters, confusing writing, and code availability. Most of the concerns have been addressed.
DJttojBfnX | Adversarial Mask Explainer for Graph Neural Networks | [
"Wei Zhang",
"XIAOFAN LI",
"Wolfgang Nejdl"
] | The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter $K$ to control the explanation size during the training process and keep only the top-$K$ weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and $K$. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity. | [
"Explainability",
"Graph Neural Networks",
"Graph Analysis"
] | https://openreview.net/pdf?id=DJttojBfnX | SMbWfucUBC | official_review | 1,701,387,327,845 | DJttojBfnX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1775/Reviewer_bHiH"
] | review: This paper introduces Adversarial Mask Explainer (AMExplainer), a framework for achieving instance-level explainability in Graph Neural Networks (GNNs). The authors elucidate the significance of each node in the graph by generating a set of masks, noting that the complement of these masks serves as an indicator of a selected node's importance. AMExplainer employs adversarial networks to optimize both accurate mask prediction and the sparsity of the explanation set. Additionally, it introduces a scaling function to amplify the weights of the informative part of the graph, enhancing convergence. The method has been experimentally proven to be accurate and interpretable, applicable to various downstream tasks. Experimental results demonstrate that AMExplainer generates a sparse set of masks, significantly improving prediction effectiveness compared to previous methods. In general, this paper presents a novel approach to explainability in GNNs that addresses a key challenge in the field. The proposed method is well-motivated and well-evaluated. From the description of the paper, it seems that the authors utilize a set of masks to conduct node classification prediction with the help of an adversarial network and a scaling function. The topic discussed still seems far from the explainability of GNNs.
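To illustrate my reading of the dual objective (sketched with assumed names; this is not the authors' actual loss), the explanation graph is trained to reproduce the original prediction while the complement graph is pushed toward an uninformative uniform prediction.

```python
import numpy as np

def dual_objective(p_full, p_expl, p_comp):
    """p_full: model prediction on the original graph;
    p_expl: prediction on the explanation (masked) graph;
    p_comp: prediction on the complement graph.
    Term 1 keeps the explanation faithful to the original prediction;
    term 2 pushes the complement toward the uniform distribution,
    i.e. toward carrying no predictive signal."""
    eps = 1e-12
    faithful = -np.sum(p_full * np.log(p_expl + eps))                    # cross-entropy
    uniform = np.full_like(p_comp, 1.0 / len(p_comp))
    uninformative = np.sum(uniform * np.log(uniform / (p_comp + eps)))   # KL(U || p_comp)
    return faithful + uninformative

print(dual_objective(np.array([0.05, 0.90, 0.05]),
                     np.array([0.10, 0.85, 0.05]),
                     np.array([0.30, 0.40, 0.30])))
```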
questions: - Can the authors provide more theoretical results about the explainability, rather than expressing it only in words?
- Could the authors provide details about the selection of the hyperparameters? Is the model robust to different hyperparameter settings?
- Could the authors briefly describe the mask initialization process?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 2
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
DJttojBfnX | Adversarial Mask Explainer for Graph Neural Networks | [
"Wei Zhang",
"XIAOFAN LI",
"Wolfgang Nejdl"
] | The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter $K$ to control the explanation size during the training process and keep only the top-$K$ weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and $K$. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity. | [
"Explainability",
"Graph Neural Networks",
"Graph Analysis"
] | https://openreview.net/pdf?id=DJttojBfnX | IQjPrvz6B5 | official_review | 1,700,775,981,867 | DJttojBfnX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1775/Reviewer_tpH9"
] | review: The paper studies the problem of explainability of graph neural networks (GNNs). The paper proposes a novel approach for explaining graph neural networks by identifying an explanation set whose complement set consists of uninformative edges, and then employing a scaling function to further remove additional uninformative edges.
The paper provides a nice motivation and a clear introduction to the topic of explainability for GNNs and the challenges of using standard approaches. However, the paper is not clear in several parts (see below), and the originality is somewhat limited, since in the end the approach is very similar to a regularized version of the problem of selecting a small subgraph with performance similar to the full model, which has been proposed before. Moreover, the experimental evaluation considers only 1 real dataset and includes synthetic datasets where very simple explanations are planted in the graph, making it difficult to assess the significance of the approach in practice.
PROS
- The abstract and introduction provide a nice motivation and clear introduction to the topic
- Overall, the idea of using as a second objective the mutual information of the complementary subgraph and the uniform distribution is interesting
- The experimental results show that the proposed method works better than previous approaches on the datasets considered
CONS
- Several parts of the paper are not clear, here is a (partial) list.
Observation 1 is not clearly written; please rewrite.
Line 308: “with our intrinsic since” what is an “intrinsic”?
Lines 381-382 (“Another reason is that … to comprehending graphs”) is entirely unclear; what is the “cognitive logic and methodology” of a GNN?
Idea 2: there is no proof that the solution of equation 3 is equivalent to the solution in idea 1.
How is the beta parameter set?
In section 3.4: the selection of threshold 0.1 for including an edge in the explanatory set is in contrast with observation 2 (such edges are shown to converge to values close to 1).
- While the method is described in terms of mutual information, at the end the approach uses mean square error. At this point, it is unclear why the description is not made in terms of MSE, or why the two notions should be equivalent.
- From equation 3, in the end the approach is very similar to a regularization approach, where beta governs the sparsity of the solution (which does not derive from pursuing the interpretability of solutions but is enforced by the algorithm).
- The code is not available, and there is no mention of whether it will be made available if the paper is accepted
questions: - Can you address the comments in the first point of CONS (with at most 3 sentences each)?
- What is the value of describing the method in terms of mutual information, if at the end MSE is used?
- How is the beta parameter set in your experiments? Is this different from fixing a regularization parameter with (cross-)validation?
- Can you provide an anonymous repository with the code for the method and to reproduce the results?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
D8pnmfpS74 | Unified Uncertainty Estimation for Cognitive Diagnosis Models | [
"Fei Wang",
"Qi Liu",
"Enhong Chen",
"Charles Liu",
"Zhenya Huang",
"Jinze Wu",
"Shijin Wang"
] | Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, from which users can get reasonable instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, the research on the uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited inefficiency and leave an academic blank for sophisticated models which have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into data aspect and model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis. | [
"intelligent education",
"cognitive diagnosis",
"uncertainty"
] | https://openreview.net/pdf?id=D8pnmfpS74 | Og3SU5sUQF | decision | 1,705,909,246,596 | D8pnmfpS74 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: While I think the paper is highly relevant to the UMAP track and I personally enjoy a hierarchical Bayesian (deep) model for psychometric modeling, the reviews (both explicitly stated and implied in terms of what many reviewers commented on) suggest that the work's impact on TheWebConf may be somewhat narrowly focused on a small audience.
Since most reviewer concerns are not deeply technical, I felt obligated to read the paper myself, given that its final review scores leave it borderline.
Having read through the paper, I note that it could use a pass for grammatical (use of articles, prepositions, noun-verb agreement) and technical terminology improvement (e.g., the algorithm stands for Expectation Maximization, not Expectation Maximum). These are minor and infrequent and thus did not disrupt my reading, but they should be fixed on revision.
At first glance, as one reviewer notes, the paper does seem to reflect a variational Bayesian approach to posterior estimation through minimizing KL divergence. But as the authors note in their response, their approach is much more customized for Cognitive Diagnosis through its decomposition of uncertainty parameters. So I do believe that the overall novelty of this paper is quite high. Technical clarifications raised by reviewers can be easily fixed on revision.
My primary concern is that for this tool to be useful, we must be able to run it easily and reliably without manual tuning on test data. No details of the training process are provided beyond Algorithm 1, which is the core optimization routine for the training data. What concerns me is that there does not appear to be any specific discussion of how the various hyperparameters (learning rate, M_c), optimizer choice (SGD, Adam), and the stopping criteria are determined, nor whether held-out validation data was used to determine any of these choices or hyperparameters. The authors provide code, which presumably would answer these questions for anyone willing to read the code, but it is critical for the methodology to be explained in the paper (or an Appendix) in order for readers to understand how this approach is applied in practice to reproduce the experimental results.
Due to the above reasons, I think the paper remains on the borderline going into the final decision. It has high novelty though a somewhat narrow scope of impact. Further, the paper would significantly benefit from an Appendix describing details on the training methodology to understand how key training and hyperparameter choices should be made by a practitioner using this fairly complex methodology. |
D8pnmfpS74 | Unified Uncertainty Estimation for Cognitive Diagnosis Models | [
"Fei Wang",
"Qi Liu",
"Enhong Chen",
"Charles Liu",
"Zhenya Huang",
"Jinze Wu",
"Shijin Wang"
] | Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, from which users can get reasonable instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, the research on the uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited inefficiency and leave an academic blank for sophisticated models which have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into data aspect and model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis. | [
"intelligent education",
"cognitive diagnosis",
"uncertainty"
] | https://openreview.net/pdf?id=D8pnmfpS74 | LRmuO7F9jH | official_review | 1,701,045,939,756 | D8pnmfpS74 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1082/Reviewer_HApC"
] | review: Summary:
The paper presents a unified approach to uncertainty estimation for cognitive diagnosis models (CDMs), commonly used in intelligent education to assess user proficiency levels. This approach addresses the challenge of unreliable measurements in CDMs by introducing a batch-based optimization method applicable to various models and large datasets. It modifies the reparameterization approach for better adaptation to parameters defined in different domains and decomposes the uncertainty of diagnostic parameters into data and model aspects, enabling a more accurate and reliable assessment of user proficiency levels.
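To illustrate the reparameterization point concretely (a generic sketch of the technique, not the paper's exact formulation; the function and variable names are my own): a diagnostic parameter constrained to (0, 1), such as a proficiency level, can be drawn differentiably by sampling Gaussian noise in an unconstrained space and squashing it into the domain, so gradients still reach the variational parameters.

```python
import torch

def sample_proficiency(mu, log_sigma, n_samples=5):
    # Reparameterized draw of a parameter defined on (0, 1):
    # sample in an unconstrained space, then map into the domain.
    eps = torch.randn(n_samples, *mu.shape)       # noise; carries no parameters
    z = mu + torch.exp(log_sigma) * eps           # Gaussian reparameterization
    return torch.sigmoid(z)                       # squash into (0, 1)

# Hypothetical usage: variational mean/log-std for one learner on 4 concepts.
mu = torch.zeros(4, requires_grad=True)
log_sigma = torch.full((4,), -1.0, requires_grad=True)
theta_samples = sample_proficiency(mu, log_sigma)  # shape (5, 4), differentiable
```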
Strengths:
1. The paper introduces a novel method for estimating uncertainty in CDMs, a critical aspect often overlooked in traditional approaches.
2. The proposed method's compatibility with a wide range of models and its efficiency in handling large datasets make it highly versatile and applicable in diverse educational settings.
3. By decomposing uncertainty into data and model aspects, the paper offers a more nuanced understanding of the sources of uncertainty, leading to more reliable proficiency assessments.
4. The modification of the reparameterization approach to suit parameters across different domains enhances the model's adaptability and accuracy.
Weaknesses:
1. The proposed method's complexity, especially in terms of uncertainty decomposition and reparameterization, might make it challenging to implement and understand for practitioners new to the field.
2. The approach might be prone to overfitting, especially when applied to highly specific or limited datasets.
3. The paper does not explicitly address the method's generalizability across different educational settings or subject matters.
questions: Could you provide insights into the scalability of your approach in various educational settings, particularly those with limited resources?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
D8pnmfpS74 | Unified Uncertainty Estimation for Cognitive Diagnosis Models | [
"Fei Wang",
"Qi Liu",
"Enhong Chen",
"Charles Liu",
"Zhenya Huang",
"Jinze Wu",
"Shijin Wang"
] | Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, from which users can get reasonable instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, the research on the uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited inefficiency and leave an academic blank for sophisticated models which have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into data aspect and model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis. | [
"intelligent education",
"cognitive diagnosis",
"uncertainty"
] | https://openreview.net/pdf?id=D8pnmfpS74 | JQSwbKZCl5 | official_review | 1,700,414,014,470 | D8pnmfpS74 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1082/Reviewer_4Aad"
] | review: This paper addresses the important issue of uncertainty estimation in deep learning-based Cognitive Diagnosis Models.
The paper proposes to use mini-batch based optimization and the reparameterization trick.
The proposed approach is compared with existing uncertainty estimation methods for Cognitive Diagnosis Models.
Pros
The novelty of the paper is in the application of the approach to Cognitive Diagnosis Models.
Cons
The proposed uncertainty estimation approach has been used previously in variational inference.
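For context (a standard textbook formulation, not quoted from the paper under review), the variational-inference objective alluded to here is the evidence lower bound, which balances data fit against a KL penalty toward the prior:

$$\mathcal{L}(\phi) \;=\; \mathbb{E}_{q_\phi(\theta)}\!\left[\log p(\mathcal{D}\mid\theta)\right] \;-\; \mathrm{KL}\!\left(q_\phi(\theta)\,\|\,p(\theta)\right)$$

A mini-batch version typically rescales the log-likelihood term by the ratio of dataset size to batch size; the paper's unified objective appears to be an adaptation of this general form to cognitive diagnosis parameters, which is why its relation to standard variational inference is worth clarifying.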
questions: 1. Could you describe the novelty of the uncertainty estimation method itself versus the novelty of applying an existing method to the new area of Cognitive Diagnosis Models?
2. Could you assess the accuracy of the uncertainty estimated with the proposed method and compare it to existing methods?
ethics_review_flag: No
ethics_review_description: no ethics concerns
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
D8pnmfpS74 | Unified Uncertainty Estimation for Cognitive Diagnosis Models | [
"Fei Wang",
"Qi Liu",
"Enhong Chen",
"Charles Liu",
"Zhenya Huang",
"Jinze Wu",
"Shijin Wang"
] | Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, from which users can get reasonable instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, the research on the uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited inefficiency and leave an academic blank for sophisticated models which have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into data aspect and model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis. | [
"intelligent education",
"cognitive diagnosis",
"uncertainty"
] | https://openreview.net/pdf?id=D8pnmfpS74 | 8eW7RmwTSB | official_review | 1,700,727,960,418 | D8pnmfpS74 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1082/Reviewer_anbH"
] | review: This work presents an approach called Unified Uncertainty Estimation for Cognitive Diagnosis Models (UCD) to estimate the uncertainty of measurement in cognitive diagnosis models. UCD proposes a unified objective function for mini-batch based optimization and modifies the reparameterization approach. The uncertainty of diagnostic parameters is divided into two aspects, data aspect and model aspect, for better explainability. The authors verify their method with comprehensive experiments.
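For readers unfamiliar with this kind of split, a generic way to separate the two aspects (an illustrative sketch based on the law of total variance; the paper's exact decomposition may differ) is to draw Monte Carlo samples of the predicted response probabilities and split the total variance into an average within-draw term (data aspect) and a between-draw term (model aspect):

```python
import numpy as np

def split_uncertainty(prob_samples):
    # prob_samples: array of shape (S, N) -- S Monte Carlo draws of the
    # predicted correct-response probability for N learner-question pairs.
    data_unc = np.mean(prob_samples * (1.0 - prob_samples), axis=0)  # mean Bernoulli variance
    model_unc = np.var(prob_samples, axis=0)                         # spread across draws
    return data_unc, model_unc
```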
Pros:
1. Better efficiency provided by UCD with good generalizability.
2. Detailed descriptions of the proposed method, with formulas and pseudocode.
questions: -
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
D8pnmfpS74 | Unified Uncertainty Estimation for Cognitive Diagnosis Models | [
"Fei Wang",
"Qi Liu",
"Enhong Chen",
"Charles Liu",
"Zhenya Huang",
"Jinze Wu",
"Shijin Wang"
] | Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, from which users can get reasonable instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, the research on the uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited inefficiency and leave an academic blank for sophisticated models which have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into data aspect and model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis. | [
"intelligent education",
"cognitive diagnosis",
"uncertainty"
] | https://openreview.net/pdf?id=D8pnmfpS74 | 7gkoT7JWzN | official_review | 1,700,319,642,423 | D8pnmfpS74 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1082/Reviewer_LNAr"
] | review: Quality:
In this paper, the authors propose a Unified Uncertainty Estimation (UCD) approach for cognitive diagnosis models. This approach is applicable to both traditional latent trait models and deep learning models, filling a gap in the latter. The UCD method encompasses several key components:
1.Unified Objective Function: Based on the concept of learning posterior distributions of parameters, a unified objective function is developed for mini-batch based optimization, suitable for both deep and non-deep learning models.
2.Derivative Reparameterization Approach: This method facilitates efficient gradient descent-based training and is adaptable to parameters with different domains of definition.
3.Differentiation between Diagnostic and Function Parameters: By considering the differences between diagnostic and function parameters, the uncertainty of diagnostic parameters is decomposed into data uncertainty and model uncertainty.
The quality of the paper is high, providing detailed explanations of the methodology, mathematical derivations, and experimental setups. However, some technical details might be challenging for readers unfamiliar with Bayesian methods and cognitive diagnostic models. In terms of quality, UCD shows its versatility and applicability through comprehensive experiments across different cognitive diagnostic models and datasets.
Clarity:
The overall structure of the paper is clear. The research motivation is well-defined, and each section of the paper has a distinct theme. Symbols in formulas are clearly explained, and tables/figures are clear. However, there are some minor errors that could be addressed.
1. In line 15, the suggestion is to change "limited inefficiency" to "limited efficiency";
2. The interpretation of the experimental results on the model aspect in RQ2 is unclear;
3. Title 5.6's “R4” should be changed to “RQ4”;
4. Providing results across different datasets and models would enhance the persuasiveness of the findings in RQ4.
Originality:
The innovation of the paper lies in its proposal of a unified solution for uncertainty estimation in cognitive diagnosis models (UCD). This approach is noteworthy for several reasons:
Bayesian Strategy: UCD adopts a Bayesian strategy, aligning with contemporary statistical approaches but enhancing them in specific ways.
Efficiency and Effectiveness: Compared to traditional methods, UCD offers better efficiency. This improvement is crucial in computational models where efficiency can significantly impact practical usability.
Modeling Differences in Uncertainty: The approach more effectively captures the differences in uncertainty stemming from both data and model aspects. This dual focus is essential in accurately reflecting the real-world complexities of cognitive diagnosis.
Applicability: UCD is versatile, applicable not only to traditional non-deep learning latent trait models but also capable of addressing gaps in deep learning-based models.
Overall, the paper's innovation lies in its comprehensive and efficient approach to uncertainty estimation, its broad applicability, and its enhancement of existing Bayesian strategies. However, in certain aspects, such as the Bayesian strategy and the reparameterization method, research has already been conducted in related settings, so their novelty requires clearer elucidation.
Significance:
This paper introduces a novel uncertainty estimation method for cognitive diagnostic models, referred to as the unified uncertainty estimation approach for cognitive diagnosis (UCD). UCD adopts a Bayesian strategy and efficiently estimates uncertainty in both traditional and deep learning-based cognitive diagnostic models through the introduction of a unified objective function and a reparameterization method. The method is extensively validated across various cognitive diagnostic models and multiple datasets, demonstrating its robustness and reliability.
The significance of the paper lies in addressing a crucial challenge in the field of cognitive diagnosis by proposing a universally applicable method that effectively estimates uncertainty in different models and datasets.
In conclusion, this paper provides a high-quality research contribution, offering a novel and comprehensive solution to the uncertainty estimation problem in cognitive diagnostic models.
Pros:
1. The overall structure of the paper is clear, providing detailed explanations of the methodology, mathematical derivations, and experimental setups.
2. This paper proposes a unified solution to the uncertainty estimation of cognitive diagnosis models and provides better efficiency.
3. The authors conducted a series of comprehensive experiments to validate the effectiveness of the proposed method.
4. UCD is not only applicable to traditional non-deep learning latent trait models but also suitable for deep learning models, filling a gap in this field.
Cons:
1. There may be some aspects of the explanation of certain charts and formulas that require clearer exposition to ensure that readers can accurately comprehend the authors' points.
2. Research on Bayesian strategies and reparameterization methods has already been conducted in related settings, so this paper may lack a certain degree of innovation.
3. The author's explanation of the experimental results regarding the model's uncertainty is not sufficiently clear.
4. The experiments regarding RQ4 could be further enriched.
questions: 1. Could you provide a clearer explanation of the model uncertainty?
2. In RQ2, I do not understand the experimental results of the model aspect. What kind of correlation exists between the model and the parameters?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
D8pnmfpS74 | Unified Uncertainty Estimation for Cognitive Diagnosis Models | [
"Fei Wang",
"Qi Liu",
"Enhong Chen",
"Charles Liu",
"Zhenya Huang",
"Jinze Wu",
"Shijin Wang"
] | Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, from which users can get reasonable instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, the research on the uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited inefficiency and leave an academic blank for sophisticated models which have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into data aspect and model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis. | [
"intelligent education",
"cognitive diagnosis",
"uncertainty"
] | https://openreview.net/pdf?id=D8pnmfpS74 | 45lapGnYfe | official_review | 1,701,334,504,282 | D8pnmfpS74 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1082/Reviewer_4DEH"
] | review: Cognitive diagnostic models have found widespread application across different domains. However, the research on estimating model uncertainty still faces several limitations, including 1) a limited range of applicable algorithms, 2) inadequate parameter analysis, and 3) low efficiency in uncertainty estimation. To overcome these challenges, this study introduces a unified uncertainty estimation method called the Uncertainty Estimation Approach for Cognitive Diagnosis models (UCD), which can be applied to diverse cognitive diagnostic models. In contrast to traditional approaches, UCD adopts a Bayesian strategy that offers improved efficiency and effectively captures parameter differences as uncertainty from both data and model perspectives. Consequently, UCD is not only suitable for traditional non-deep learning latent trait models but also fills the gap in uncertainty estimation for deep learning-based models.
Pros:
1. The paper exhibits good organization and clarity in its presentation, resulting in excellent readability.
2. The examples and case studies used in the paper clearly illustrate the problem and its practical significance (e.g., Figure 1 and Figure 6).
3. The topic addressed is intriguing and relevant to current industry needs. The proposed method is applicable to multiple models and different problems.
Cons:
1. There is a need to adjust the font sizes of certain figures in the paper to align them with the font size used in the main text. Specifically, the fonts in Fig. 2 and Fig. 3 are oversized, while the font in Fig. 5 appears to be undersized.
2. The authors primarily provided the mean results in the experiments, and it is requested that they also provide information about the variance of the results.
3. In some tables, it would be beneficial to highlight the better results by bolding them for easier comparison.
questions: 1. What is the variance information of the experimental results?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
D8Mb4c7V0t | DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning | [
"Seungyoon Choi",
"Wonjoong Kim",
"Sungwon Kim",
"Yeonjun In",
"Sein Kim",
"Chanyoung Park"
] | We investigate the replay buffer in rehearsal-based approaches for graph continual learning (GCL) methods. Existing rehearsal-based GCL methods select the most representative nodes for each class and store them in a replay buffer for later use in training subsequent tasks. However, we discovered that considering only the class representativeness of each replayed node makes the replayed nodes to be concentrated around the center of each class, incurring a potential risk of overfitting to nodes residing in those regions, which aggravates catastrophic forgetting. Moreover, as the rehearsal-based approach heavily relies on a few replayed nodes to retain knowledge obtained from previous tasks, involving the replayed nodes that have irrelevant neighbors in the model training may have a significant detrimental impact on model performance. In this paper, we propose a GCL model named Diversity enhancement and Structure Learning for Rehearsal-based graph continual learning (DSLR). Specifically, we devise a coverage-based diversity (CD) approach to consider both the class representativeness and the diversity within each class of the replayed nodes. Moreover, we adopt graph structure learning (GSL) to ensure that the replayed nodes are connected to truly informative neighbors. Extensive experimental results demonstrate the effectiveness and efficiency of DSLR. Our source code is available at https://anonymous.4open.science/r/DSLR-F525. | [
"continual learning",
"graph neural networks",
"rehearsal approach",
"structure learning"
] | https://openreview.net/pdf?id=D8Mb4c7V0t | oGWbXBNGam | official_review | 1,700,634,522,200 | D8Mb4c7V0t | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1450/Reviewer_rydM"
] | review: This paper introduces a continual learning method for graph networks. The authors start from the problem of concentrated replayed nodes and propose a new method to sample diverse nodes. Additionally, the diverse nodes are connected to informative neighbors through graph structure learning. The experiments and ablation study are solid and verify the hypothesis and the effectiveness of the method.
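To make the notion of diversity-aware replay selection concrete (an illustrative sketch of the general idea only — this is plain greedy farthest-point selection, not the paper's coverage-based diversity algorithm): instead of keeping only the nodes closest to a class prototype, one can repeatedly pick the node farthest from everything already stored, so the buffer covers the whole class region in embedding space.

```python
import numpy as np

def diverse_replay_selection(embeddings, buffer_size):
    # embeddings: (N, d) node embeddings of one class.
    # Start from the node nearest the class mean, then repeatedly add the
    # node farthest from the current buffer (greedy max-min selection).
    center = embeddings.mean(axis=0)
    selected = [int(np.argmin(np.linalg.norm(embeddings - center, axis=1)))]
    for _ in range(buffer_size - 1):
        dist_to_buffer = np.min(
            np.linalg.norm(embeddings[:, None, :] - embeddings[selected][None, :, :], axis=-1),
            axis=1,
        )
        dist_to_buffer[selected] = -np.inf   # never re-pick a stored node
        selected.append(int(np.argmax(dist_to_buffer)))
    return selected
```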
Strength:
1) Paper is well-written. The motivation, methodology, and experiment are clear to read.
2) The experiments and analysis are exhaustive; they are solid and verify the hypothesis and the effectiveness of the method.
Weakness:
1) The datasets and experimental setting are very different from those of the baseline ER-GNN, resulting in a lack of validation.
questions: 1) If the primary comparison is between the proposed method and ER-GNN, why not employ the same datasets?
2) For the only shared dataset Amazon Computer, the experimental setting is different. ER-GNN uses 10 classes as 5 tasks, while the proposed method only uses 8 classes as 4 tasks. The authors add a reason "excluding the two classes with the fewest number of nodes", but it is not convincing.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |