forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_pdf_url | note_id | note_type | note_created | note_replyto | note_readers | note_signatures | note_text
---|---|---|---|---|---|---|---|---|---|---|---|---
8HTwfqUYRz | Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms | [
"Shengwei Xu",
"Yichi Zhang",
"Paul Resnick",
"Grant Schoenebeck"
] | Because high-quality data is like oxygen for AI systems, effectively eliciting information from crowdsourcing workers has become a first-order problem for developing high-performance machine learning algorithms. Two prevalent paradigms, spot-checking and peer prediction, enable the design of mechanisms to evaluate and incentivize high-quality data from human labelers. So far, at least three metrics have been proposed to compare the performances of these techniques [Zhang and Schoenebeck 2023, Gao et al. 2016, Burrell and Schoenebeck 2023]. However, different metrics lead to divergent and even contradictory results in various contexts. In this paper, we harmonize these divergent stories, showing that two of these metrics are actually the same within certain contexts and explain the divergence of the third. Moreover, we unify these different contexts by introducing Spot Check Equivalence, which offers an interpretable metric for the effectiveness of a peer prediction mechanism. Finally, we present two approaches to compute spot check equivalence in various contexts, where simulation results prove the effectiveness of our proposed metric. | [
"Algorithmic Game Theory",
"Information Elicitation",
"Incentive for Effort",
"Peer Prediction"
] | https://openreview.net/pdf?id=8HTwfqUYRz | HTu6MtahIy | official_review | 1,700,462,164,269 | 8HTwfqUYRz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2200/Reviewer_U4Z5"
] | review: ### Summary
This work presents a methodology for understanding the effectiveness, particularly the motivational proficiency, of information elicitation mechanisms across diverse contexts, encapsulated in the concept of Spot Check Equivalence. Consequently, these findings contribute valuable insights for crafting incentive mechanisms that are not only effective but also efficient in fostering the acquisition of high-quality information.
### Strengths
1. **Introduction of Spot Check Equivalence:** The incorporation of the Spot Check Equivalence concept provides a robust framework for evaluating the comparability of different information elicitation mechanisms. This theoretical contribution offers a fresh perspective on our understanding of information incentive mechanisms.
2. **Insights into Effective and Efficient Incentive Mechanisms:** The findings of your study contribute valuable insights for the design of incentive mechanisms that are both effective and efficient. This holds practical significance for promoting the acquisition of high-quality information, with far-reaching implications for both academic and practical applications in related fields.
3. **In-Depth Exploration of the Relationship between Incentives and Performance:** Through a thorough analysis of motivational efficacy in incentive mechanisms, your study provides profound insights into our understanding of the relationship between incentives and performance.
### Weaknesses
1. The authors mention several metrics in the abstract, such as [2], [7], and [9], but I do not find comparative results between the proposed algorithm and these existing methods in the experimental section.
2. The scope of agent-based model (ABM) seems relatively narrow, potentially impacting the generalizability of the research findings. Consider whether it is feasible to broaden the sample size or include a more diverse range of contexts to enhance the external validity of the study.
3. In the results section, the interpretation of some findings appears somewhat succinct. I suggest delving deeper into the discussion of each observed trend or relationship to ensure readers gain a more comprehensive understanding of the research outcomes.
### Mentioned references:
[2] Noah Burrell and Grant Schoenebeck. 2021. Measurement Integrity in Peer Prediction: A Peer Assessment Case Study. *arXiv preprint arXiv:2108.05521* (2021).
[7] Alice Gao, James R Wright, and Kevin Leyton-Brown. 2016. Incentivizing evaluation via limited access to ground truth: Peer-prediction makes things worse. *arXiv preprint arXiv:1606.07042* (2016).
[9] Naman Goel and Boi Faltings. 2019. Deep bayesian trust: A dominant and fair incentive mechanism for crowd. In *Proceedings of the AAAI Conference on Artificial Intelligence*, Vol. 33. 1996–2003.
questions: Please respond to the points raised in Weaknesses 1-3.
ethics_review_flag: No
ethics_review_description: N/A.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
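The spot check equivalence idea in the abstract above can be illustrated with a toy model. All modeling choices below (binary labels, two accuracy levels, output-agreement peer payment) are illustrative assumptions, not the paper's actual mechanisms: the idea is simply that the SCE of a peer prediction mechanism is the spot-checking probability that yields the same effort incentive.

```python
# Toy illustration of "spot check equivalence" (SCE). The binary-label setup,
# the fixed accuracies, and the output-agreement peer payment are simplifying
# assumptions for illustration only.

def spot_check_incentive(p_check, acc_high, acc_low):
    """Payment gap between high and low effort under spot-checking:
    with probability p_check the report is compared to ground truth."""
    return p_check * (acc_high - acc_low)

def peer_prediction_incentive(acc_high, acc_low):
    """Payment gap under output agreement with a high-effort peer
    (true labels are uniform over {0, 1})."""
    def agree(acc):  # P(report matches the peer's report)
        return acc * acc_high + (1 - acc) * (1 - acc_high)
    return agree(acc_high) - agree(acc_low)

def spot_check_equivalence(acc_high=0.9, acc_low=0.5):
    """The checking probability p* at which spot-checking gives the same
    effort incentive as the peer prediction mechanism."""
    gap_pp = peer_prediction_incentive(acc_high, acc_low)
    return gap_pp / (acc_high - acc_low)

print(round(spot_check_equivalence(), 6))  # 0.8
```

Under these toy accuracies, the peer mechanism incentivizes effort as strongly as checking 80% of reports against ground truth, so its SCE would be 0.8.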
8HTwfqUYRz | Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms | [same authors as above] | [same abstract as above] | [same keywords as above
] | https://openreview.net/pdf?id=8HTwfqUYRz | ATNivOrW4E | decision | 1,705,909,206,628 | 8HTwfqUYRz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: Summary: Clarifies the relationships between different metrics for evaluating crowdsourcing data quality control techniques & offers a new interpretable metric.
Strengths:
+ Addresses an important challenge of evaluating and incentivizing high-quality data from crowdsourced workers
+ A new metric (SCE) to interpret the effectiveness of peer prediction mechanisms and approaches to compute them
+ Unifies two existing metrics
+ Well-written
Weaknesses:
- Some aspects of writing could be improved including interpretation/discussion of results & stating assumptions/limitations clearly
- Could demonstrate how the proposed approach varies in performance across different contexts
- No justification why other metrics were not considered
- No heterogeneous cost functions
- Limited evaluations
Recommendation: An interesting work with sound theoretical backing. Please address the issues raised by the reviewers and fulfill the promises made during the rebuttal. |
8CFcC8nU0D | Online Billion-Scale Recommender Systems with Macro Graph Neural Networks | [
"Hao Chen",
"Yuanchen Bei",
"Qijie Shen",
"Yue Xu",
"Sheng Zhou",
"Wenbing Huang",
"Feiran Huang",
"Senzhang Wang",
"Xiao Huang"
] | Predicting Click-Through Rate (CTR) in billion-scale recommender systems poses a long-standing challenge for Graph Neural Networks (GNNs) due to the overwhelming computational complexity involved in aggregating billions of neighbors. To tackle this, GNN-based CTR models usually sample hundreds of neighbors out of the billions to facilitate efficient online recommendations. However, sampling only a small portion of neighbors results in a severe sampling bias and the failure to encompass the full spectrum of user or item behavioral patterns. To address this challenge, we name the conventional user-item recommendation graph as "micro recommendation graph" and introduce a more suitable MAcro Recommendation Graph (MAG) for billion-scale recommendations. MAG resolves the computational complexity problems in the infrastructure by reducing the node count from billions to hundreds. Specifically, MAG groups micro nodes (users and items) with similar behavior patterns to form macro nodes. Subsequently, we introduce tailored Macro Graph Neural Networks (MacGNN) to aggregate information on a macro level and revise the embeddings of macro nodes. MacGNN has already served one of the biggest shopping platforms for two months, providing recommendations for over one billion users. Extensive offline experiments on three public benchmark datasets and an industrial dataset present that MacGNN significantly outperforms twelve CTR baselines while remaining computationally efficient. Besides, online A/B tests confirm MacGNN's superiority in billion-scale recommender systems. | [
"graph-based CTR prediction",
"large-scale recommendation"
] | https://openreview.net/pdf?id=8CFcC8nU0D | snD4gXyUyU | official_review | 1,700,831,660,213 | 8CFcC8nU0D | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1227/Reviewer_XXZn"
] | review: The paper introduces an approach to billion-scale recommender systems by proposing a Macro Recommendation Graph (MAG) and a corresponding Macro Graph Neural Network (MacGNN). The key contributions of the paper can be summarized as follows:
MAG involves the construction of macro nodes, macro edges, and macro subgraphs. This customization reduces the number of neighbors from billions to hundreds, eliminating the need for sampling strategies and facilitating Graph Neural Network (GNN) operations in billion-scale recommender systems.
MacGNN is introduced as a novel paradigm for efficient CTR prediction in billion-scale recommender systems. It aggregates macro-graph information and updates macro-node embeddings, providing a solution to the challenges faced by traditional GNNs in handling large-scale neighbor complexities.
Extensive offline experiments conducted on three public benchmark datasets and a billion-scale industrial dataset show the performance of MacGNN compared to state-of-the-art CTR baselines.
Online A/B tests further confirm the performance of MacGNN in real-world billion-scale recommender systems.
While the paper introduces a promising approach to address challenges in billion-scale recommender systems through MAG and MacGNN, it is essential to acknowledge certain limitations and concerns, including the insufficiency of code details for result reproduction.
The paper may fall short in providing sufficient code details and implementation specifics for reproducing the results. The absence of a comprehensive codebase or clear guidelines might hinder researchers or practitioners from replicating the experiments, potentially impacting the credibility and transparency of the proposed approach.
questions: Why have you chosen that clustering strategy that resembles k-means? Have you tried other options?
Could you better detail how macro-edges are computed?
ethics_review_flag: No
ethics_review_description: No issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
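The macro-node construction summarized in the review above can be sketched roughly as follows. The k-means grouping and the count-based macro-edge weights are assumptions for illustration (the reviewer's question about the clustering strategy suggests the paper's exact construction may differ); the point is only that aggregation fan-out drops from the full neighbor set to a few macro nodes.

```python
# Sketch: group micro nodes (items) into macro nodes by behavior embeddings,
# so a user aggregates over n_macro macro neighbors instead of every item.
import numpy as np

def build_macro_nodes(item_emb, n_macro, n_iter=20, seed=0):
    """Group item embeddings into n_macro macro nodes (plain k-means)."""
    rng = np.random.default_rng(seed)
    centers = item_emb[rng.choice(len(item_emb), n_macro, replace=False)]
    for _ in range(n_iter):
        # assign each item to its nearest macro center
        d = ((item_emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(n_macro):
            members = item_emb[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers, assign

def user_macro_edges(clicked_items, assign, n_macro):
    """Macro-edge weights: how often the user interacted with each macro node."""
    w = np.zeros(n_macro)
    for item in clicked_items:
        w[assign[item]] += 1.0
    return w

# 10k "items", 8 macro nodes: aggregation fan-out drops from 10k to 8.
items = np.random.default_rng(1).normal(size=(10_000, 16))
centers, assign = build_macro_nodes(items, n_macro=8)
edges = user_macro_edges([0, 1, 2, 3], assign, n_macro=8)
print(edges.sum())  # 4.0: one unit of weight per interaction
```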
8CFcC8nU0D | Online Billion-Scale Recommender Systems with Macro Graph Neural Networks | [same authors as above] | [same abstract as above] | [same keywords as above
] | https://openreview.net/pdf?id=8CFcC8nU0D | WpNs6L7h2w | official_review | 1,701,389,802,684 | 8CFcC8nU0D | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1227/Reviewer_Jd2x"
] | review: The paper titled "Online Billion-Scale Recommender Systems with Macro Graph Neural Networks" addresses the challenge of predicting Click-Through Rate (CTR) in billion-scale recommender systems, a pressing issue in platforms with vast numbers of users, items, and interactions. Conventional Graph Neural Networks (GNNs) struggle with the computational complexity involved in aggregating information from billions of neighbors, often resorting to sampling a small portion of neighbors which leads to severe sampling bias and fails to encompass the full spectrum of user or item behavioral patterns. Authors introduce MacGNN by aggregating nodes into Macro nodes, which effectively reduces the neighbor number and efficiency of graph-based CTR prediction. Especially, the algorithm is further tested in online systems, which is another good point for this paper.
Strengths:
1. The macro-node idea is practical and interesting, and it greatly reduces the computational cost.
2. This paper accounts for the neighborhood distribution of users/items, which previous research has overlooked.
3. The online testing makes this paper unique among the other candidates.
questions: This paper is novel, solid, and interesting. I do not have further questions.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
8CFcC8nU0D | Online Billion-Scale Recommender Systems with Macro Graph Neural Networks | [same authors as above] | [same abstract as above] | [same keywords as above
] | https://openreview.net/pdf?id=8CFcC8nU0D | PUgEGkxlZl | official_review | 1,700,904,163,752 | 8CFcC8nU0D | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1227/Reviewer_srJW"
] | review: In this paper, the authors propose a macro recommendation graph for online billion-scale recommendations. Reducing the complexity of graph neural networks is definitively an essential problem.
One unclear part is the design choices for constructing nodes and edges in the graph. I consider the main idea of the paper is to group users and items into several groups to reduce the size of the graph. Therefore, I hope the authors clearly compare the design choices of the grouping.
questions: Please correct me if I have misunderstood anything in my review.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
8CFcC8nU0D | Online Billion-Scale Recommender Systems with Macro Graph Neural Networks | [same authors as above] | [same abstract as above] | [same keywords as above
] | https://openreview.net/pdf?id=8CFcC8nU0D | Aq8sLoi3Pu | official_review | 1,700,615,951,124 | 8CFcC8nU0D | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1227/Reviewer_2fdi"
] | review: To take more useful neighbor information from user-item interactions, this paper propose a Macro Graph Neural Network-based approach. Different from sequential user-behavior modeling, such as DIN and SIM, this work is able to utilize high-order relationship between users and items. Compared to traditional GNN-based user behavior modeling, this approach alleviates the sampling bias problem and captures the full spectrum of user or item behavioral pattern. Both online and offline experiments are conducted.
questions: In the Methodology section, are the macro nodes generated by grouping the micro nodes' embeddings? And how is the number of macro nodes defined?
Based on the generated macro nodes, what kinds of methods can construct the relationships between these nodes? Please give a detailed discussion; this part is interesting.
In Figure 2, only 2-hop neighbors are utilized; what is the highest order of neighbors that helps the final performance?
How are macro nodes treated and assigned for new items and users?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
8CFcC8nU0D | Online Billion-Scale Recommender Systems with Macro Graph Neural Networks | [same authors as above] | [same abstract as above] | [same keywords as above
] | https://openreview.net/pdf?id=8CFcC8nU0D | 8aBQrvNel2 | official_review | 1,700,468,507,176 | 8CFcC8nU0D | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1227/Reviewer_jSpn"
] | review: ### Paper Summary
This work proposes novel macro nodes (clusters) of user/item nodes over the 1-hop and 2-hop neighbors of the user-item graph, with the benefits of lower computational complexity and better CTR performance. The experiments are conducted on both open datasets and one industrial-scale dataset, including an online A/B test.
### Clarity
The overall workflow can be easily understood from the figure. However, the equations are chaotic. In Equation 5, what is $r_{ab}$ and how is it calculated? Is it a rating score? What is the dimension of $\textbf{b}_v$? The L2-norm output is a scalar rather than an embedding vector.
Is the workflow from Equations 2 to 4 just k-means clustering? Is any popularity bias introduced in the clustering stage if the calculation is performed on the raw interaction vector?
### Originality
I believe the idea of this work sounds novel and interesting. It is pretty interesting to see a work utilizing the cluster structure for both better efficiency and accuracy.
### Significance
This work can benefit both the research and industrial communities for better efficiency and accuracy for ctr prediction. This can also inspire the research community to realize the importance of using cluster or inherent structures of users and items in personalized recommendation.
**Pros**
The idea of this work sounds novel and interesting, and will have beneficial impact to the community.
**Cons**
The equations are chaotic and sometimes difficult to understand.
questions: * Please help clarify the unclear points in the clarity section above.
* As both modeling modules in Equations 8-11 are just self-attention mechanisms, why don't the authors also compare against transformer approaches such as pinformer, AutoInt, or a regular Transformer?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
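The self-attention aggregation the reviewer refers to (Equations 8-11) can be sketched generically as scaled dot-product attention over a node's macro neighbors. Treating the target node's embedding as the query and the macro-node embeddings as keys/values is an assumption here, not necessarily the paper's exact formulation.

```python
# Generic scaled dot-product attention over macro-neighbor embeddings.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def attend_macro(target, macro_embs):
    """Aggregate a node's macro-neighbor embeddings with attention weights
    derived from dot-product similarity to the target node's embedding."""
    d = target.shape[0]
    scores = macro_embs @ target / np.sqrt(d)  # one score per macro node
    weights = softmax(scores)                  # normalized attention weights
    return weights @ macro_embs, weights       # weighted sum + the weights

rng = np.random.default_rng(0)
target = rng.normal(size=8)
macros = rng.normal(size=(5, 8))               # 5 macro neighbors, dim 8
agg, w = attend_macro(target, macros)
print(round(float(w.sum()), 6))  # 1.0
```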
8CFcC8nU0D | Online Billion-Scale Recommender Systems with Macro Graph Neural Networks | [same authors as above] | [same abstract as above] | [same keywords as above
] | https://openreview.net/pdf?id=8CFcC8nU0D | 5lPgV4xVCS | decision | 1,705,909,247,265 | 8CFcC8nU0D | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Summarizing the review comments and responses: the ideas of this paper are applicable and interesting, and the authors conducted extensive experiments to demonstrate the effectiveness of their proposed method. However, one reviewer has concerns about how the graphs are constructed, and some details of the method need to be clarified. I recommend that the authors fix all of these issues in the camera-ready version. |
84szxJZS1w | Graph Anomaly Detection with Bi-level Optimization | [
"Yuan Gao",
"Junfeng Fang",
"Yongduo Sui",
"Yangyang Li",
"Xiang Wang",
"HuaMin Feng",
"Yongdong Zhang"
] | Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is applied regardless of normal and anomalous classes, which does not fully solve this issue.
This study discovers that nodes with different classes yet similar neighbor label distributions (NLD) tend to have opposing loss curves, a phenomenon we term ''loss rivalry''. By introducing the Contextual Stochastic Block Model (CSBM) and defining an NLD distance, we explain this phenomenon theoretically and propose a **B**i-level **o**ptimization **G**raph **N**eural **N**etwork (BioGNN) based on these observations. In a nutshell, the lower level of BioGNN segregates nodes based on their classes and NLD, while the upper level trains the anomaly detector using the separation outcomes. Our experiments demonstrate that BioGNN outperforms state-of-the-art methods on four benchmarks and effectively mitigates ''loss rivalry''. | [
"Graph Anomaly Detection",
"Bi-level Optimization",
"Neighbor Label Distribution"
] | https://openreview.net/pdf?id=84szxJZS1w | e3UHzviFjm | official_review | 1,700,154,459,816 | 84szxJZS1w | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2141/Reviewer_wLyQ"
] | review: The paper addresses challenges in Graph Anomaly Detection (GAD) using Graph Neural Networks (GNNs). Traditional GNNs often aggregate embeddings from all neighbors without considering their labels, hindering the detection of anomalies. Previous methods selectively aggregate neighbors, but this selection strategy is consistent for both normal and anomalous classes, limiting effectiveness. The study introduces the concept of "loss rivalry," observing that nodes with different classes yet similar neighbor label distributions tend to have opposing loss curves. The proposed solution, BioGNN, utilizes a Contextual Stochastic Block Model (CSBM) and NLD distance to segregate nodes based on classes and NLD at a lower level, while the upper level trains the anomaly detector using separation outcomes. Experimental results show that BioGNN outperforms existing methods and effectively mitigates the "loss rivalry" phenomenon.
**Strong Points:**
- Clarity and Accuracy in Descriptions: The paper is commended for its mostly clear descriptions with no typos. The methodology section could be improved with more intuition.
- Significance of Research Area: The paper addresses an important research area, Graph Anomaly Detection (GAD), which has applications in finance, healthcare, and security. This significance contributes to the relevance and potential impact of the research.
- Good Results: The paper reports very good results, surpassing state-of-the-art methods. This is a strong indicator of the effectiveness of the proposed solution, BioGNN, and underscores its potential practical utility.
- Analysis of NLD: The authors bring a nice perspective to the analysis of Neighbor Label Distributions (NLD) by incorporating the Contextual Stochastic Block Model (CSBM) from a graph generation standpoint.
**Weak Points:**
- Formula Intuition: Some of the formulas, notably eq6, lack accompanying intuition in their descriptions. Providing more context or explanation for key formulas can enhance the understanding of readers, especially those not deeply familiar with the specific mathematical formulations.
- Lemma 4.1 Clarification: The paper has a weak point related to Lemma 4.1, where the lack of a clear definition or proof may cause confusion. If it is presented more as an observation rather than a formal lemma, it should be explicitly stated as such. Clearing up this ambiguity will improve the overall credibility of the theoretical foundations.
- Time Complexity Analysis: The time complexity analysis is mentioned but lacks a formal proof. The statement about the constant C in the O() notation, along with its resemblance to a statement from another source ([40]), raises concerns about the originality of the paper.
**I read the rebuttal and all my concerns were sufficiently addressed.**
questions: 1. Would it make sense to consider the NLD using a k-step neighborhood?
2. Why is the input graph complete?
3. Please elaborate on the running time complexity. What is C? Why is the constant mentioned in the O() notation?
ethics_review_flag: No
ethics_review_description: na
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
84szxJZS1w | Graph Anomaly Detection with Bi-level Optimization | [
"Yuan Gao",
"Junfeng Fang",
"Yongduo Sui",
"Yangyang Li",
"Xiang Wang",
"HuaMin Feng",
"Yongdong Zhang"
] | Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is applied regardless of normal and anomalous classes, which does not fully solve this issue.
This study discovers that nodes with different classes yet similar neighbor label distributions (NLD) tend to have opposing loss curves, a phenomenon we term ''loss rivalry''. By introducing the Contextual Stochastic Block Model (CSBM) and defining an NLD distance, we explain this phenomenon theoretically and, based on these observations, propose a **B**i-level **o**ptimization **G**raph **N**eural **N**etwork (BioGNN). In a nutshell, the lower level of BioGNN segregates nodes based on their classes and NLD, while the upper level trains the anomaly detector using the separation outcomes. Our experiments demonstrate that BioGNN outperforms state-of-the-art methods on four benchmarks and effectively mitigates ''loss rivalry''. | [
"Graph Anomaly Detection",
"Bi-level Optimization",
"Neighbor Label Distribution"
] | https://openreview.net/pdf?id=84szxJZS1w | a0v1fvQ2m1 | official_review | 1,700,760,600,336 | 84szxJZS1w | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2141/Reviewer_qZ8H"
] | review: This paper presents an innovative approach in Graph Neural Networks (GNNs) for Graph Anomaly Detection (GAD). To address the shortcomings of traditional GNNs in GAD, which often inaccurately detect anomalies due to improper aggregation of neighbor embeddings, the paper introduces the Bi-level optimization Graph Neural Network (BioGNN). This proposed method incorporates the Contextual Stochastic Block Model (CSBM) and introduces the Neighbor Label Distribution (NLD) distance metric. BioGNN uniquely tackles the "loss rivalry" issue, where nodes with different classes but similar NLDs exhibit opposing loss curves, affecting model convergence. The model operates on two levels: segregating nodes based on classes and NLD at the lower level, and training the anomaly detector at the upper level. Through extensive experiments, BioGNN demonstrates superior performance over existing methods, effectively enhancing anomaly detection accuracy in various scenarios, including financial fraud detection and identifying misinformation in social networks, marking a significant advancement in the field of GAD.
Strengths:
1. The motivation of the paper is well justified. The paper successfully identifies and mitigates the "loss rivalry" phenomenon, where nodes of different classes but with similar Neighbor Label Distributions (NLD) have opposing loss curves. This is a significant advancement in improving model convergence and accuracy.
2. The use of the Contextual Stochastic Block Model (CSBM) and the introduction of NLD distance provide a theoretical foundation for the proposed method.
3. The background is presented nicely and the paper is organized and easy to follow.
4. Comprehensive experimental results under different settings are provided to demonstrate the effectiveness of the model.
Weaknesses:
1. The proposed method is strongly dependent on the assumptions of the Contextual Stochastic Block Model (CSBM).
2. The model is highly dependent on the accurate prediction of the Neighborhood Label Distribution (NLD). When the proportion of anomalous nodes in the processed dataset is low or the labeling information is limited, the model performance may be slightly degraded due to the lack of accurate prediction of node NLDs.
3. In real-world cases, the percentage of anomalies is pretty small (e.g., less than 5% or even 1%). In the experiment, the percentage of labeled nodes is more than 40% and the proportion of anomalies for most datasets is more than 10% (e.g., 10% for Amazon and 205 for YelpChi). Can the proposed method still achieve good performance with limited label information, given that it only aggregates messages from labeled neighbors to avoid getting information from anomalies? The proposed method seems to rely heavily on the good performance of the masking strategy, and the quality of the mask seems to rely on a large amount of label information. My major concern is whether the proposed method still performs well on these highly imbalanced datasets or with limited labeled nodes. It is highly recommended to conduct experiments on datasets with a smaller percentage of anomalies.
questions: Q1: The theoretical basis of the model relies strongly on the assumptions of the CSBM. What are some ways to reduce the reliance on these assumptions and thus enhance the applicability and robustness of the model under different graph generation processes?
Q2. The model performance degrades slightly when the percentage of anomalous nodes in the dataset is low. Are there ways to improve the prediction accuracy of NLD, especially in the case of unbalanced data or limited labeling information?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
84szxJZS1w | Graph Anomaly Detection with Bi-level Optimization | [
"Yuan Gao",
"Junfeng Fang",
"Yongduo Sui",
"Yangyang Li",
"Xiang Wang",
"HuaMin Feng",
"Yongdong Zhang"
] | Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is applied regardless of normal and anomalous classes, which does not fully solve this issue.
This study discovers that nodes with different classes yet similar neighbor label distributions (NLD) tend to have opposing loss curves, a phenomenon we term ''loss rivalry''. By introducing the Contextual Stochastic Block Model (CSBM) and defining an NLD distance, we explain this phenomenon theoretically and, based on these observations, propose a **B**i-level **o**ptimization **G**raph **N**eural **N**etwork (BioGNN). In a nutshell, the lower level of BioGNN segregates nodes based on their classes and NLD, while the upper level trains the anomaly detector using the separation outcomes. Our experiments demonstrate that BioGNN outperforms state-of-the-art methods on four benchmarks and effectively mitigates ''loss rivalry''. | [
"Graph Anomaly Detection",
"Bi-level Optimization",
"Neighbor Label Distribution"
] | https://openreview.net/pdf?id=84szxJZS1w | OHpBF76vkv | decision | 1,705,909,257,111 | 84szxJZS1w | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper introduces the Bi-level optimization Graph Neural Network (BioGNN) for graph-based anomaly detection, addressing the limitations of traditional Graph Neural Networks (GNNs) in this area. By focusing on the "loss rivalry" phenomenon and employing a bi-level optimization approach, BioGNN demonstrates potential advancements in graph anomaly detection (GAD), a field with wide applications in finance, healthcare, and security.
## Strengths:
1. Novel Approach and Theoretical Foundation: The introduction of BioGNN, which uses a bi-level optimization approach and incorporates the Contextual Stochastic Block Model (CSBM) and Neighbor Label Distribution (NLD) distance, is innovative and theoretically grounded. This approach addresses the critical issue of "loss rivalry," as highlighted by several reviewers.
2. Experimental Validation: The experimental results, conducted on various real-world datasets, demonstrate the superior performance of BioGNN over existing methods, reinforcing the paper's contribution to the field.
3. Clarity and Organization: The paper is generally well-organized and clearly written, with a detailed methodology section and comprehensive experiments.
## Weaknesses:
1. Methodological Clarifications and Justifications: Reviewers expressed concerns regarding the clarity of certain methodological aspects, including the design of the low-level optimization and the connection between theoretical intentions and the model design. Reviewers 1 and 4 pointed out the need for more justification of the model design and clearer definitions of notations.
2. Dependence on CSBM Assumptions and NLD Predictions: Reviewer 2 noted that the model's effectiveness is strongly dependent on the assumptions of the CSBM and the accuracy of NLD predictions, which may limit its applicability in certain real-world scenarios.
3. Lack of Comparison with Relevant Baselines: Several reviewers, including Reviewers 3 and 4, mentioned the absence of comparisons with relevant baselines, such as GBK-GNN and other related works, which are critical for establishing the method's novelty and effectiveness.
4. Need for More Detailed Analysis: Reviewers suggested conducting in-depth analyses, such as ablation tests, to assess the effectiveness of individual components of BioGNN. Reviewers 4 and 5 emphasized the need for clearer explanations of key formulas and of the time complexity analysis.
The authors’ response indeed does a good job of addressing some of the above concerns, which led several reviewers to increase their evaluation after reading it. However, some reviewers also mentioned that fully addressing the concerns may require too many changes and another round of review, a sentiment I share to some extent. Overall, I give a borderline recommendation. The technical idea of this paper is good. Even if the paper gets rejected this time, after proper revision, it will become a strong paper for a future round of review.
84szxJZS1w | Graph Anomaly Detection with Bi-level Optimization | [
"Yuan Gao",
"Junfeng Fang",
"Yongduo Sui",
"Yangyang Li",
"Xiang Wang",
"HuaMin Feng",
"Yongdong Zhang"
] | Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is applied regardless of normal and anomalous classes, which does not fully solve this issue.
This study discovers that nodes with different classes yet similar neighbor label distributions (NLD) tend to have opposing loss curves, a phenomenon we term ''loss rivalry''. By introducing the Contextual Stochastic Block Model (CSBM) and defining an NLD distance, we explain this phenomenon theoretically and, based on these observations, propose a **B**i-level **o**ptimization **G**raph **N**eural **N**etwork (BioGNN). In a nutshell, the lower level of BioGNN segregates nodes based on their classes and NLD, while the upper level trains the anomaly detector using the separation outcomes. Our experiments demonstrate that BioGNN outperforms state-of-the-art methods on four benchmarks and effectively mitigates ''loss rivalry''. | [
"Graph Anomaly Detection",
"Bi-level Optimization",
"Neighbor Label Distribution"
] | https://openreview.net/pdf?id=84szxJZS1w | NEFzPHbntH | official_review | 1,700,695,551,694 | 84szxJZS1w | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2141/Reviewer_hHiV"
] | review: ## Updates after Authors' Response
I appreciate the detailed responses from the authors, which have addressed the questions I raised in my original review. I am happy to adjust the score to take into account the additional discussions and efforts the authors put in their response.
With that being said, I feel that there are many substantial updates (e.g., adding additional baselines, discussions to other related works, important clarification/contextual discussions to the theorems, etc.) being pledged in this response (and the responses to other reviews). **While I appreciate the significant efforts the authors made to include these additional results/discussions in the rebuttal, I feel these changes are too much to be reviewed only as patches scattered in the responses; they should be reviewed comprehensively in the form of a complete paper revision. For this reason, I would not champion for the acceptance of the current version of the work before seeing a completed revision.**
---
## Summary
This paper focuses on improving Graph Neural Network (GNN) performance on graph anomaly detection framed as semi-supervised binary node classification. Specifically, the authors propose to (1) separate the nodes into two sets using a learnable separation function, and (2) employ a separate graph encoder for each set of nodes (one with a low-pass filter and one with a high-pass filter). Experiment results show that the proposed approach demonstrates mostly marginal improvement over existing approaches. The authors also provide a theoretical analysis from the perspective of Neighborhood Label Distribution (NLD) that they claim supports the proposed approach.
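For context, the two-filter design described in the summary can be sketched in a few lines. This is only an illustration, not the authors' implementation: the toy path graph, the node signal, and the scalar mask value are all assumptions made for the example.

```python
import math

# Toy path graph 0-1-2-3, given as an adjacency list (illustrative only).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = len(adj)
deg = [len(adj[i]) + 1 for i in range(n)]  # degree with self-loop

def low_pass(x):
    """One step of D^{-1/2}(A + I)D^{-1/2} x: smooths the node signal."""
    out = []
    for i in range(n):
        s = x[i] / deg[i]  # self-loop term
        for j in adj[i]:
            s += x[j] / math.sqrt(deg[i] * deg[j])
        out.append(s)
    return out

x = [1.0, -1.0, 1.0, -1.0]                  # a high-frequency node signal
low = low_pass(x)                           # low-pass encoder view
high = [xi - li for xi, li in zip(x, low)]  # (I - A_hat) x: high-pass view

m = 0.7                                     # soft mask from the lower level (a scalar here)
mixed = [m * l + (1 - m) * h for l, h in zip(low, high)]

def var(v):
    mu = sum(v) / len(v)
    return sum((vi - mu) ** 2 for vi in v) / len(v)

print(var(low) < var(x) < var(high))  # → True: low-pass smooths, high-pass sharpens
```

Here the mask decides how much each node relies on the smoothed versus the sharpened view, mirroring the separate-then-encode structure the summary describes.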
## Strengths
1. Fig. 4 provides a nice high-level illustration of the proposed approach.
2. The experiments are conducted on an extensive set of baselines (even though a baseline more relevant to the proposed approach is missing).
## Weakness
1. **For the theoretical analysis, Proposition 3.1 seems to be similar to the results of Equation (4) in the prior work [33].** Specifically, [33] gives a detailed derivation of the connection between the distance of Neighborhood Label Distributions (NLD) and the distance of the expected hidden representations for a pair of nodes from two different classes under the Contextual Stochastic Block Model (CSBM). The authors should cite this prior work in their analysis and address the similarities between their analysis and the analysis in [33].
2. **It is hard to understand how the proposed approach is motivated in the theoretical analysis.** There are some confusing discussions that aim to justify the proposed approach based on theoretical analysis in Line 344-347:
> Proposition 3.2 indicates that capturing the difference in spectral label distribution is equivalent to measuring the similarity between NLDs. Furthermore, the proposition elucidates that different nodes with similar NLD retain rather different frequency components.
>
These two claims seem to be in conflict with each other: if the first sentence is true, how can different nodes with similar NLD retain rather different frequency components? Also, the overall reasoning seems to be based on the second sentence: because anomalous and normal nodes cannot be distinguished by their NLDs, they need to be distinguished from the spectral perspective. If that is the case, then what is the purpose of Proposition 3.1?
3. **For the proposed approach, another relevant work that the authors didn't address or compare against is GBK-GNN** [A1]: GBK-GNN also employs a gated selection mechanism that adapts to homophilous or heterophilous connections, although [A1] applies the gating at the edge level for message passing, while this work applies it at the node level for selecting the graph encoder. Given that both works employ this bi-kernel design, I think the authors should address their differences in the related work and add GBK-GNN as an additional baseline in their experiments.
4. **Some technical details of the proposed approach are not clearly described:**
a. For the choice of $g_1$ and $g_2$ in Equation (13), Appendix B.1 only shows the choice of $\alpha$ and $\beta$, which does not specify how they map to $g_1$ or $g_2$.
b. Section 4.3 (Initialization of BioGNN) feels unclear in its current writing. I would suggest that the authors add an illustration showing the end-to-end process of parameter initialization.
c. Figures 5 and 6 and their discussions are hard to understand.
## References
[33] Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. 2022. Is homophily a necessity for graph neural networks?. In ICLR.
[A1] Lun Du, Xiaozhou Shi, Qiang Fu, Xiaojun Ma, Hengyu Liu, Shi Han, and Dongmei Zhang. "Gbk-gnn: Gated bi-kernel graph neural networks for modeling both homophily and heterophily." In *Proceedings of the ACM Web Conference 2022*, pp. 1550-1558. 2022.
questions: I would like to see the authors address the questions raised in the weakness section of the review, especially for points (1)-(3).
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
84szxJZS1w | Graph Anomaly Detection with Bi-level Optimization | [
"Yuan Gao",
"Junfeng Fang",
"Yongduo Sui",
"Yangyang Li",
"Xiang Wang",
"HuaMin Feng",
"Yongdong Zhang"
] | Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is applied regardless of normal and anomalous classes, which does not fully solve this issue.
This study discovers that nodes with different classes yet similar neighbor label distributions (NLD) tend to have opposing loss curves, a phenomenon we term ''loss rivalry''. By introducing the Contextual Stochastic Block Model (CSBM) and defining an NLD distance, we explain this phenomenon theoretically and, based on these observations, propose a **B**i-level **o**ptimization **G**raph **N**eural **N**etwork (BioGNN). In a nutshell, the lower level of BioGNN segregates nodes based on their classes and NLD, while the upper level trains the anomaly detector using the separation outcomes. Our experiments demonstrate that BioGNN outperforms state-of-the-art methods on four benchmarks and effectively mitigates ''loss rivalry''. | [
"Graph Anomaly Detection",
"Bi-level Optimization",
"Neighbor Label Distribution"
] | https://openreview.net/pdf?id=84szxJZS1w | G5MgQ8GgKF | official_review | 1,701,231,785,309 | 84szxJZS1w | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2141/Reviewer_vFv3"
] | review: The paper proposes a method named BioGNN for graph-based anomaly detection. BioGNN adopts a bi-level optimization approach, where the first level optimizes an encoder for masking the graph to segregate nodes based on neighbor label distribution (NLD) distance, and the second level optimizes the classifier, as the problem is cast as a semi-supervised node classification problem. The problem studied by the paper is important and may draw wide interest in the community. The approach proposed by the paper is novel and interesting. Overall, the writing is well polished, despite a few issues, such as references to undefined notations in Sections 3.1 and 3.2 and a few broken sentences/grammar mistakes (e.g., the Figure 2 caption and some sentences in Section 6). Another strength of the paper is that it makes the code available, which is great for fast reproducibility. One suggestion here is to provide a readily runnable script or demo, so that interested readers could easily verify certain experiments.
One of my major concerns is the design of the low-level optimization, since it is not quite intuitive that the encoder theta would serve its purpose of assigning different masks to nodes with different classes but similar NLDs. In other words, the paper does not sufficiently justify how the actual model design is connected with the theoretical intention. Actually, from another perspective besides the NLDs, the model design proposed by the paper looks quite related to the recent line of work on graph cleaning/de-noising (some also call it sanitation). The low-level optimization serves the purpose of filtering out nodes of undesired distribution in this sense. One related work is titled Graph Sanitation with Application to Node Classification, where a bi-level optimization technique is also applied.
Another concern is about the experiments. It would be great if the authors could share some insight into why the high- and low-frequency learning curves shown in Figure 5 are so unsmooth. Sudden and huge jumps can be seen around the 50th epoch.
For the experiments, it would be great if the main results shown in Table 3 also included standard deviations/statistical significance over several runs, since the metrics reported for certain models are quite close.
One question about Section 4.3: why does it mention that the input data is a complete graph rather than an ego net? Is it because the datasets are complete graphs, or for other reasons? Please clarify.
questions: See above
ethics_review_flag: No
ethics_review_description: no ethic issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
84szxJZS1w | Graph Anomaly Detection with Bi-level Optimization | [
"Yuan Gao",
"Junfeng Fang",
"Yongduo Sui",
"Yangyang Li",
"Xiang Wang",
"HuaMin Feng",
"Yongdong Zhang"
] | Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is applied regardless of normal and anomalous classes, which does not fully solve this issue.
This study discovers that nodes with different classes yet similar neighbor label distributions (NLD) tend to have opposing loss curves, a phenomenon we term ''loss rivalry''. By introducing the Contextual Stochastic Block Model (CSBM) and defining an NLD distance, we explain this phenomenon theoretically and, based on these observations, propose a **B**i-level **o**ptimization **G**raph **N**eural **N**etwork (BioGNN). In a nutshell, the lower level of BioGNN segregates nodes based on their classes and NLD, while the upper level trains the anomaly detector using the separation outcomes. Our experiments demonstrate that BioGNN outperforms state-of-the-art methods on four benchmarks and effectively mitigates ''loss rivalry''. | [
"Graph Anomaly Detection",
"Bi-level Optimization",
"Neighbor Label Distribution"
] | https://openreview.net/pdf?id=84szxJZS1w | A3NYn1X0dq | official_review | 1,700,185,846,804 | 84szxJZS1w | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2141/Reviewer_9KcF"
] | review: In this paper, the authors address the heterophily issue in a graph anomaly detection task. They first observe a phenomenon (i.e., loss rivalry) that the nodes with different class labels yet similar neighbor label distributions tend to have opposing loss curves and explain this phenomenon theoretically. Then, the authors propose BioGNN that has two key components: (1) a mask generator and (2) two well-designed GNN encoders. Through the experiments using real-world datasets, they demonstrate the effectiveness of BioGNN. The strengths and weaknesses of this paper are as follows.
- Strengths.
S1. The authors observed the loss rivalry phenomenon
S2. The authors proposed BioGNN, which addresses heterophily and thus improves the performance of the graph anomaly detection task
S3. The authors demonstrated the effectiveness of BioGNN through experiments with real-world datasets
- Weakness.
W1. One key paper is missing, which is critical. The authors need to show the key difference between their work and this work and compare the performances.
- [1] Y. Gao et al., “Addressing Heterophily in Graph Anomaly Detection: A Perspective of Graph Spectrum,” In Proc. ACM WWW, 2023.
W2. The notations used in the paper need to be defined more precisely and explained with clarity.
- For example, on page 3, 'alpha' is described as the spectrum of a one-hot vector, while on page 5, it is defined as a hyperparameter.
- The explanations of the meaning behind each encoder (theta, phi, large phi) are missing.
W3. It is necessary to conduct an in-depth analysis of each component.
- The authors need to conduct an ablation test to demonstrate the effectiveness of each component and verify whether each component effectively addresses its intended purpose.
questions: Q1. Compared to the paper [1], what are the strengths and weaknesses of the proposed method?
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7tKdDT0bDs | Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces | [
"Song Liao",
"Mohammed Aldeen",
"Jingwen Yan",
"Long Cheng",
"Xiapu Luo",
"Haipeng Cai",
"Hongxin Hu"
] | Amazon Alexa is one of the largest Voice Personal Assistant (VPA) platforms and it allows third-party developers to publish their voice apps, named skills, to the Alexa skill store. To satisfy the needs of European users, Amazon Alexa established multiple skill marketplaces in Europe and allows developers to publish skills in their native languages, such as German, French, Italian, and Spanish. Skills targeting users in European countries are required to comply with GDPR (General Data Protection Regulation), which imposes strict obligations on data collection and processing. Skills that involve data collection should provide a privacy policy to disclose the data practice to users and meet GDPR requirements.
In this work, we analyze privacy policies of skills in European marketplaces, focusing on whether skills’ privacy policies and data collection behaviors comply with GDPR. We collect a large-scale European skill dataset that includes skills in all European marketplaces with privacy policies. To classify whether a sentence in a privacy policy provides GDPR information, we gather a labeled dataset consisting of skills’ privacy policy sentences and train a BERT model for classification. Then we analyze the GDPR compliance of European skills. Using a dynamic testing tool based on ChatGPT, we check whether skills’ privacy policies comply with GDPR and are consistent with the actual data collection behaviors. Surprisingly, we find that 67% of privacy policies fail to comply with GDPR and don’t provide necessary GDPR-related information. For 1,187 skills with data collection behaviors, we find that 603 skills (50.8%) don’t provide a complete privacy policy and 1,128 skills (95%) have GDPR non-compliance issues in their privacy policies. Meanwhile, we find that the GDPR has a positive influence on European privacy policies when compared to non-European marketplaces, such as the United States, Mexico and Brazil. | [
"Amazon alexa",
"GDPR",
"privacy policy"
] | https://openreview.net/pdf?id=7tKdDT0bDs | pYOwgkgLKD | official_review | 1,700,662,067,282 | 7tKdDT0bDs | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission538/Reviewer_juzK"
] | review: In this research, the authors analyze the privacy policies of European voice personal assistant services with regard to the GDPR. The study presents interesting conclusions about the services provided, indicating that most of them do not follow the GDPR. The automatic process described to reach these conclusions is useful for monitoring such services in the future and opens space for improvements to the automatic methodology. Regarding the results, it would be good to see more effort to assess the quality of the automatic method presented. The authors could, for instance, take samples of the classifications produced and ask specialists to classify the services in those samples, reporting at the end the accuracy of their method.
questions: Could you give more information about the quality of the method presented, including more validation with specialists to check, for instance, whether a sample of services would receive the same classification from the specialists as the one assigned by your method?
ethics_review_flag: No
ethics_review_description: none
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7tKdDT0bDs | Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces | [
"Song Liao",
"Mohammed Aldeen",
"Jingwen Yan",
"Long Cheng",
"Xiapu Luo",
"Haipeng Cai",
"Hongxin Hu"
] | Amazon Alexa is one of the largest Voice Personal Assistant (VPA) platforms and it allows third-party developers to publish their voice
apps, named skills, to the Alexa skill store. To satisfy the needs of European users, Amazon Alexa established multiple skill marketplaces
in Europe and allows developers to publish skills in their native languages, such as German, French, Italian, and Spanish. Skills targeting users in European countries are required to comply with GDPR (General Data Protection Regulation), which imposes strict obligations on data collection and processing. Skills that involve data collection should provide a privacy policy to disclose the data practice to users and meet GDPR requirements.
In this work, we analyze privacy policies of skills in European marketplaces, focusing on whether skills’ privacy policies and data collection behaviors comply with GDPR. We collect a large-scale European skill dataset that includes skills in all European marketplaces with privacy policies. To classify whether a sentence in a privacy policy provides GDPR information, we gather a labeled dataset consisting of skills’ privacy policy sentences and train a BERT model for classification. Then we analyze the GDPR compliance of European skills. Using a dynamic testing tool based on ChatGPT, we check whether skills’ privacy policies comply with GDPR and are consistent with the actual data collection behaviors. Surprisingly, we find that 67% of privacy policies fail to comply with GDPR and don’t provide necessary GDPR-related information. For 1,187 skills with data collection behaviors, we find that 603 skills (50.8%) don’t provide a complete privacy policy and 1,128 skills (95%) have GDPR non-compliance issues in their privacy policies. Meanwhile, we find that the GDPR has a positive influence on European privacy policies when compared to non-European marketplaces, such as the United States, Mexico and Brazil. | [
"Amazon alexa",
"GDPR",
"privacy policy"
] | https://openreview.net/pdf?id=7tKdDT0bDs | eN8HIWE2pi | official_review | 1,700,655,917,021 | 7tKdDT0bDs | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission538/Reviewer_CJNj"
] | review: The paper presents an analysis of the General Data Protection Regulation (GDPR) compliance in the context of Alexa skills' privacy policies within European marketplaces. The methodology includes data collection, classification model development using BERT, and dynamic testing using ChatGPT. The paper is well-organized with clear sections, including background, methodology, analysis, and discussion. Concepts are explained in detail, aiding comprehension for readers who may not be familiar with GDPR or Alexa’s skill ecosystem.
The focus on Alexa skills in the European marketplaces regarding GDPR compliance is a relatively unexplored area, making this work original in its scope. With increasing concerns about data privacy and the widespread use of voice assistants, this research is pertinent. The findings may provide insights for developers, policymakers, and researchers in understanding and improving GDPR compliance in VPA platforms.
**Pros**
- Covers a wide range of marketplaces and skills, providing a holistic view of GDPR compliance in this context.
- Offers actionable insights for improving privacy policy compliance in voice assistant platforms.
- Employs robust data collection and analysis methods, enhancing the study's reliability.
**Cons**
- The technical details of the methodology might be challenging for readers without a background in AI or data analysis.
- The study is focused on European marketplaces, which might limit its applicability to other regions.
- Given the rapidly evolving nature of technology and regulations, some findings might become outdated quickly.
questions: How adaptable is your methodology to account for changes in GDPR regulations or updates in Alexa's skills' functionalities?
Could your findings and insights be generalized to other voice assistant platforms beyond Alexa, such as Google Assistant or Apple's Siri?
How do cultural and legal differences across European countries impact GDPR compliance in Alexa skills' privacy policies?
Could you elaborate on any limitations you encountered using BERT and ChatGPT in your analysis, and how these might have impacted your results?
Did your research uncover any notable variations in GDPR compliance across different European marketplaces, and if so, what might be driving these differences?
ethics_review_flag: Yes
ethics_review_description: The research involves analyzing privacy policies and potentially user data or interactions with Alexa skills. It is crucial to ensure that the data used in the study was obtained and processed in a manner that respects user privacy and adheres to consent requirements.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7tKdDT0bDs | Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces | [
"Song Liao",
"Mohammed Aldeen",
"Jingwen Yan",
"Long Cheng",
"Xiapu Luo",
"Haipeng Cai",
"Hongxin Hu"
] | Amazon Alexa is one of the largest Voice Personal Assistant (VPA) platforms and it allows third-party developers to publish their voice
apps, named skills, to the Alexa skill store. To satisfy the needs of European users, Amazon Alexa established multiple skill marketplaces
in Europe and allows developers to publish skills in their native languages, such as German, French, Italian, and Spanish. Skills targeting users in European countries are required to comply with GDPR (General Data Protection Regulation), which imposes strict obligations on data collection and processing. Skills that involve data collection should provide a privacy policy to disclose the data practice to users and meet GDPR requirements.
In this work, we analyze privacy policies of skills in European marketplaces, focusing on whether skills’ privacy policies and data collection behaviors comply with GDPR. We collect a large-scale European skill dataset that includes skills in all European marketplaces with privacy policies. To classify whether a sentence in a privacy policy provides GDPR information, we gather a labeled dataset consisting of skills’ privacy policy sentences and train a BERT model for classification. Then we analyze the GDPR compliance of European skills. Using a dynamic testing tool based on ChatGPT, we check whether skills’ privacy policies comply with GDPR and are consistent with the actual data collection behaviors. Surprisingly, we find that 67% of privacy policies fail to comply with GDPR and don’t provide necessary GDPR-related information. For 1,187 skills with data collection behaviors, we find that 603 skills (50.8%) don’t provide a complete privacy policy and 1,128 skills (95%) have GDPR non-compliance issues in their privacy policies. Meanwhile, we find that the GDPR has a positive influence on European privacy policies when compared to non-European marketplaces, such as the United States, Mexico and Brazil. | [
"Amazon alexa",
"GDPR",
"privacy policy"
] | https://openreview.net/pdf?id=7tKdDT0bDs | Ocs1Z5K7K4 | decision | 1,705,909,219,607 | 7tKdDT0bDs | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Our decision is to accept. Please see the AC's review below and improve the work considering it and the reviewers' feedback for the camera-ready submission.
"The paper analyzes the privacy policies of Alexa skills and assesses whether they comply with the GDPR requirements. The reviewers appreciated the comprehensiveness of the analysis and the fact that the datasets have been open-sourced. Additionally, the reviewers found the adapted methodology to use ChatGPT to simulate the data collection behaviors of skills to be interesting." |
7tKdDT0bDs | Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces | [
"Song Liao",
"Mohammed Aldeen",
"Jingwen Yan",
"Long Cheng",
"Xiapu Luo",
"Haipeng Cai",
"Hongxin Hu"
] | Amazon Alexa is one of the largest Voice Personal Assistant (VPA) platforms and it allows third-party developers to publish their voice
apps, named skills, to the Alexa skill store. To satisfy the needs of European users, Amazon Alexa established multiple skill marketplaces
in Europe and allows developers to publish skills in their native languages, such as German, French, Italian, and Spanish. Skills targeting users in European countries are required to comply with GDPR (General Data Protection Regulation), which imposes strict obligations on data collection and processing. Skills that involve data collection should provide a privacy policy to disclose the data practice to users and meet GDPR requirements.
In this work, we analyze privacy policies of skills in European marketplaces, focusing on whether skills’ privacy policies and data collection behaviors comply with GDPR. We collect a large-scale European skill dataset that includes skills in all European marketplaces with privacy policies. To classify whether a sentence in a privacy policy provides GDPR information, we gather a labeled dataset consisting of skills’ privacy policy sentences and train a BERT model for classification. Then we analyze the GDPR compliance of European skills. Using a dynamic testing tool based on ChatGPT, we check whether skills’ privacy policies comply with GDPR and are consistent with the actual data collection behaviors. Surprisingly, we find that 67% of privacy policies fail to comply with GDPR and don’t provide necessary GDPR-related information. For 1,187 skills with data collection behaviors, we find that 603 skills (50.8%) don’t provide a complete privacy policy and 1,128 skills (95%) have GDPR non-compliance issues in their privacy policies. Meanwhile, we find that the GDPR has a positive influence on European privacy policies when compared to non-European marketplaces, such as the United States, Mexico and Brazil. | [
"Amazon alexa",
"GDPR",
"privacy policy"
] | https://openreview.net/pdf?id=7tKdDT0bDs | KUqm9wcTNO | official_review | 1,700,817,378,726 | 7tKdDT0bDs | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission538/Reviewer_Zy19"
] | review: ***Paper summary:***
The study examines the GDPR compliance of voice apps, known as skills, in European marketplaces. The research focuses on privacy policies and data collection behaviors of these skills. Using a large dataset and a BERT model for classification, the analysis reveals that a significant portion (67%) of privacy policies fail to comply with GDPR. Among skills with data collection, half lack complete privacy policies, and 95% exhibit GDPR non-compliance issues. The study notes a positive impact of GDPR on European privacy policies compared to non-European marketplaces.
***Detailed comments for authors ***
Thank you for submitting your paper on GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces. I find the topic quite interesting, and I appreciate the effort you have put into your research. Below, I provide a detailed review:
**Reasons to Accept the Paper:**
- Well-Written and Organized:
The paper is nicely written and well-organized, making it easy to follow.
- Robust Data Analysis:
The use of a large dataset consisting of 23,927 privacy policies is commendable. Moreover, training a BERT model for predicting GDPR-related sentences and conducting a large-scale analysis on GDPR non-compliance demonstrates thorough research.
- Open Source Dataset:
The decision to make the dataset open source adds value to the research community.
- In-Depth Analysis in Section 6:
Section 6, where you analyze the inconsistency in privacy policies by comparing them against actual data collection behaviors, is particularly insightful.
**Reason Not to Accept the Paper:**
While analyzing voice apps is relatively new, the examination of privacy policies and GDPR non-compliance has been extensively studied.
Also, the conclusion that GDPR has a positive influence on European privacy policies has already been validated by previous works. I suggest extending the related work section to discuss in more detail how your work compares to previous studies analyzing privacy policies.
questions: - Can you provide more context on the existing studies that have analyzed privacy policies and GDPR compliance, and how your work builds upon or differs from these previous contributions?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7tKdDT0bDs | Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces | [
"Song Liao",
"Mohammed Aldeen",
"Jingwen Yan",
"Long Cheng",
"Xiapu Luo",
"Haipeng Cai",
"Hongxin Hu"
] | Amazon Alexa is one of the largest Voice Personal Assistant (VPA) platforms and it allows third-party developers to publish their voice
apps, named skills, to the Alexa skill store. To satisfy the needs of European users, Amazon Alexa established multiple skill marketplaces
in Europe and allows developers to publish skills in their native languages, such as German, French, Italian, and Spanish. Skills targeting users in European countries are required to comply with GDPR (General Data Protection Regulation), which imposes strict obligations on data collection and processing. Skills that involve data collection should provide a privacy policy to disclose the data practice to users and meet GDPR requirements.
In this work, we analyze privacy policies of skills in European marketplaces, focusing on whether skills’ privacy policies and data collection behaviors comply with GDPR. We collect a large-scale European skill dataset that includes skills in all European marketplaces with privacy policies. To classify whether a sentence in a privacy policy provides GDPR information, we gather a labeled dataset consisting of skills’ privacy policy sentences and train a BERT model for classification. Then we analyze the GDPR compliance of European skills. Using a dynamic testing tool based on ChatGPT, we check whether skills’ privacy policies comply with GDPR and are consistent with the actual data collection behaviors. Surprisingly, we find that 67% of privacy policies fail to comply with GDPR and don’t provide necessary GDPR-related information. For 1,187 skills with data collection behaviors, we find that 603 skills (50.8%) don’t provide a complete privacy policy and 1,128 skills (95%) have GDPR non-compliance issues in their privacy policies. Meanwhile, we find that the GDPR has a positive influence on European privacy policies when compared to non-European marketplaces, such as the United States, Mexico and Brazil. | [
"Amazon alexa",
"GDPR",
"privacy policy"
] | https://openreview.net/pdf?id=7tKdDT0bDs | 7LPABGc5Qz | official_review | 1,700,607,047,254 | 7tKdDT0bDs | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission538/Reviewer_K1S4"
] | review: This paper analyzes GDPR violations in the Alexa Skill marketplace. To study GDPR violations, they first collected the privacy policies of all Alexa Skills. Then, they translated them into English and predicted which sentences were relevant to GDPR compliance. Then they manually coded violations of the policy by searching for whether every category required by the GDPR is represented in the privacy policy. They provide descriptive statistics about the violations found (like relative numbers of violations across different countries’ markets). They also used ChatGPT to try to determine if Skills are asking for personal information without having given appropriate notice in the privacy policy.
The overall finding that a large majority of Alexa skills have GDPR violations is very interesting and would have significant policy implications if validated.
A few questions and comments on the methodological approach:
- Why is the sentence-level classification task necessary if you eventually do a keyword search for each GDPR category anyway?
- I would have liked to see more description of the sentence classification task. What are the labels in the datasets used to train the model? Does each dataset have the same set of labels? If not, how did you reconcile labels across datasets?
- I would have liked to see a quantitative comparison between the performance of your sentence classifier and others you trained or those proposed in the literature. Did you try fine-tuning models? Fine-tuning can yield better results than training from scratch, especially given limited computation.
- I would have also liked to have seen more analysis of the performance of the model and what biases it may introduce into the subsequent GDPR violation analysis. How might the predicted GDPR-relevant sentences differ from the ground truth? Will this introduce bias? Without more details on these questions, it is difficult to evaluate the subsequent analysis of GDPR violations.
- I also have concerns about the dynamic skill testing. The paper says: “If any personal data semantically follows the word “your”, e.g., “your name”, we consider it a data collection.” But I imagine that many sentences unrelated to data collection might also use “your”. How do we validate that these are actual instances of data collection versus false positives like “What is your answer?” in an Alexa game of Jeopardy?
questions: See the bullets above.
ethics_review_flag: No
ethics_review_description: N/A
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 3
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7tKdDT0bDs | Understanding GDPR Non-Compliance in Privacy Policies of Alexa Skills in European Marketplaces | [
"Song Liao",
"Mohammed Aldeen",
"Jingwen Yan",
"Long Cheng",
"Xiapu Luo",
"Haipeng Cai",
"Hongxin Hu"
] | Amazon Alexa is one of the largest Voice Personal Assistant (VPA) platforms and it allows third-party developers to publish their voice
apps, named skills, to the Alexa skill store. To satisfy the needs of European users, Amazon Alexa established multiple skill marketplaces
in Europe and allows developers to publish skills in their native languages, such as German, French, Italian, and Spanish. Skills targeting users in European countries are required to comply with GDPR (General Data Protection Regulation), which imposes strict obligations on data collection and processing. Skills that involve data collection should provide a privacy policy to disclose the data practice to users and meet GDPR requirements.
In this work, we analyze privacy policies of skills in European marketplaces, focusing on whether skills’ privacy policies and data collection behaviors comply with GDPR. We collect a large-scale European skill dataset that includes skills in all European marketplaces with privacy policies. To classify whether a sentence in a privacy policy provides GDPR information, we gather a labeled dataset consisting of skills’ privacy policy sentences and train a BERT model for classification. Then we analyze the GDPR compliance of European skills. Using a dynamic testing tool based on ChatGPT, we check whether skills’ privacy policies comply with GDPR and are consistent with the actual data collection behaviors. Surprisingly, we find that 67% of privacy policies fail to comply with GDPR and don’t provide necessary GDPR-related information. For 1,187 skills with data collection behaviors, we find that 603 skills (50.8%) don’t provide a complete privacy policy and 1,128 skills (95%) have GDPR non-compliance issues in their privacy policies. Meanwhile, we find that the GDPR has a positive influence on European privacy policies when compared to non-European marketplaces, such as the United States, Mexico and Brazil. | [
"Amazon alexa",
"GDPR",
"privacy policy"
] | https://openreview.net/pdf?id=7tKdDT0bDs | 3d7iGVb9bc | official_review | 1,700,905,057,659 | 7tKdDT0bDs | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission538/Reviewer_7nNZ"
] | review: **Summary**
- This paper delves into privacy concerns surrounding third-party developed applications, or ‘skills’, on Amazon’s Alexa platform.
- The focus is on European skills compliance with GDPR, where stringent data collection and processing rules apply.
- A large-scale, European skill dataset is analysed, and BERT and ChatGPT based testing tools are used to check the conformity of skill privacy policies with GDPR and their consistency with actual data collection behaviours.
**Strong points**
- The topic is relevant and timely in the current age of data privacy concerns and GDPR implications upon data collection practices.
- The use of GDPR as a reference point ensures a clear and standardised benchmark for analysis.
- The methodology, involving a combination of policy analysis and testing tool application, is interesting.
- The paper contributes to the community by sharing the dataset, model, and results.
**Weak points**
- The authors have invested significant effort to understand the GDPR compliance practices for skills in Alexa Skills Store. Yet, the document fails to clarify why this issue bears importance and how the authors’ findings may apply to other websites or applications.
- The description of the training and validation dataset for the BERT and translation methodologies could be improved. It is unclear if the authors have used the 2586 sentences to train the BERT model, or whether these are only for validation purposes. Besides, it would help to know if the translation evaluation dataset is balanced. The Macro F1 score might also prove crucial, specifically if the dataset is imbalanced.
- The authors’ ChatGPT based method to simulate the data collection process followed by skills is indeed interesting. Although the paper claims that any personal data following the word “your” is considered data collection, the accuracy of this method is unclear. For instance, is “your questions” also categorized as data collection?
questions: see weak points above
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7hfssuKQ8P | Multi-Label Zero-Shot Product Attribute-Value Extraction | [
"Jiaying Gong",
"Hoda Eldardiry"
] | E-commerce platforms must provide detailed product descriptions (attribute values) for effective product search and recommendation. However, attribute value information is typically not available for new products. To predict unseen attribute values, large quantities of labeled training data are needed to train a traditional supervised learning model. Typically, it is difficult, time-consuming, and costly to manually label large quantities of new product profiles. In this paper, we propose a novel method to efficiently and effectively extract unseen attribute values from new products in the absence of labeled data (zero-shot setting). We propose HyperPAVE, a multi-label zero-shot attribute value extraction model that leverages inductive inference in heterogeneous hypergraphs. In particular, our proposed technique constructs heterogeneous hypergraphs to capture complex higher-order relations (i.e. user behavior information) to learn more accurate feature representations for graph nodes. Furthermore, our proposed HyperPAVE model uses an inductive link prediction mechanism to infer future connections between unseen nodes. This enables HyperPAVE to identify new attribute values without the need for labeled training data. We conduct extensive experiments with ablation studies on different categories of the MAVE dataset. The results demonstrate that our proposed HyperPAVE model significantly outperforms existing classification-based, generation-based and graph-based models for attribute value extraction in the zero-shot setting. | [
"attribute value extraction",
"zero-shot learning",
"heterogeneous hypergraph"
] | https://openreview.net/pdf?id=7hfssuKQ8P | ylEY0AAnxp | official_review | 1,700,361,668,203 | 7hfssuKQ8P | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1984/Reviewer_xmfQ"
] | review: This paper leverages inductive inference in heterogeneous hypergraphs to solve the problem of zero-shot multi-label attribute-value extraction. Zero-shot multi-label attribute-value extraction plays an important role in different areas. Inductive link prediction is well suited to unseen nodes, and heterogeneous hypergraphs can capture complex high-order relations, so the proposed solution is reasonable and feasible.
Strength:
1. The zero-shot multi-label attribute-value extraction is important, this paper proposes inductive inference in heterogeneous hypergraphs to solve the problem.
2. The method section is detailed and well-organized.
3. The experiments are conducted on different datasets and are solid, rich, and diverse.
Weakness:
1. What are the special challenges of zero-shot multi-label attribute-value extraction?
2. The related work section lacks highly relevant references, such as prior work that uses hypergraphs for zero-shot learning.
3. The multi-label aspect of the problem does not seem to have been specifically addressed.
4. Why can the proposed method achieve the minimum time and space complexity, and which components play an important role in this?
questions: 1. What are the special challenges of zero-shot multi-label attribute-value extraction?
2. The related work section lacks highly relevant references, such as prior work that uses hypergraphs for zero-shot learning.
3. The multi-label aspect of the problem does not seem to have been specifically addressed.
4. Why can the proposed method achieve the minimum time and space complexity, and which components play an important role in this?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7hfssuKQ8P | Multi-Label Zero-Shot Product Attribute-Value Extraction | [
"Jiaying Gong",
"Hoda Eldardiry"
] | E-commerce platforms must provide detailed product descriptions (attribute values) for effective product search and recommendation. However, attribute value information is typically not available for new products. To predict unseen attribute values, large quantities of labeled training data are needed to train a traditional supervised learning model. Typically, it is difficult, time-consuming, and costly to manually label large quantities of new product profiles. In this paper, we propose a novel method to efficiently and effectively extract unseen attribute values from new products in the absence of labeled data (zero-shot setting). We propose HyperPAVE, a multi-label zero-shot attribute value extraction model that leverages inductive inference in heterogeneous hypergraphs. In particular, our proposed technique constructs heterogeneous hypergraphs to capture complex higher-order relations (i.e. user behavior information) to learn more accurate feature representations for graph nodes. Furthermore, our proposed HyperPAVE model uses an inductive link prediction mechanism to infer future connections between unseen nodes. This enables HyperPAVE to identify new attribute values without the need for labeled training data. We conduct extensive experiments with ablation studies on different categories of the MAVE dataset. The results demonstrate that our proposed HyperPAVE model significantly outperforms existing classification-based, generation-based and graph-based models for attribute value extraction in the zero-shot setting. | [
"attribute value extraction",
"zero-shot learning",
"heterogeneous hypergraph"
] | https://openreview.net/pdf?id=7hfssuKQ8P | vfroW6jtKS | official_review | 1,700,824,684,630 | 7hfssuKQ8P | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1984/Reviewer_cffN"
] | review: The approach proposes a novel method to automatically extract product attributes in a zero-shot setting by constructing a hypergraph from the product data and using inductive link prediction. The approach shows state-of-the-art results and could help e-commerce platforms.
questions: - The approach relies on constructing hypergraphs for the products by using the 'also buy' and 'also view' relations. However, I think this information won't be available for new products; how does the approach deal with this?
- Likewise, how does the approach work with unconnected products or with products of a new category?
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
7hfssuKQ8P | Multi-Label Zero-Shot Product Attribute-Value Extraction | [
"Jiaying Gong",
"Hoda Eldardiry"
] | E-commerce platforms must provide detailed product descriptions (attribute values) for effective product search and recommendation. However, attribute value information is typically not available for new products. To predict unseen attribute values, large quantities of labeled training data are needed to train a traditional supervised learning model. Typically, it is difficult, time-consuming, and costly to manually label large quantities of new product profiles. In this paper, we propose a novel method to efficiently and effectively extract unseen attribute values from new products in the absence of labeled data (zero-shot setting). We propose HyperPAVE, a multi-label zero-shot attribute value extraction model that leverages inductive inference in heterogeneous hypergraphs. In particular, our proposed technique constructs heterogeneous hypergraphs to capture complex higher-order relations (i.e. user behavior information) to learn more accurate feature representations for graph nodes. Furthermore, our proposed HyperPAVE model uses an inductive link prediction mechanism to infer future connections between unseen nodes. This enables HyperPAVE to identify new attribute values without the need for labeled training data. We conduct extensive experiments with ablation studies on different categories of the MAVE dataset. The results demonstrate that our proposed HyperPAVE model significantly outperforms existing classification-based, generation-based and graph-based models for attribute value extraction in the zero-shot setting. | [
"attribute value extraction",
"zero-shot learning",
"heterogeneous hypergraph"
] | https://openreview.net/pdf?id=7hfssuKQ8P | dHaNV1dJYD | decision | 1,705,909,232,527 | 7hfssuKQ8P | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper introduces HyperPAVE, a model tackling the challenge of extracting unseen attributes from new products without labeled data. The innovation is based on utilizing heterogeneous hypergraphs, incorporating fine-tuned BERT embeddings and hyperedge weighting for inductive link prediction.
Reviewers appreciate the paper's relevance, clarity, and grounding in inductive GNN and embedding advances, presenting good results validated on different datasets, though with limited performance gains (ompared to efficient baselines like HGNN+ and HyperGCN). The authors commit to open soruce the method/code, which addresses reproducibility concerns.
The authors are encouraged to revise the paper to include the responses to the reviewers and to proofread it further |
7hfssuKQ8P | Multi-Label Zero-Shot Product Attribute-Value Extraction | [
"Jiaying Gong",
"Hoda Eldardiry"
] | E-commerce platforms must provide detailed product descriptions (attribute values) for effective product search and recommendation. However, attribute value information is typically not available for new products. To predict unseen attribute values, large quantities of labeled training data are needed to train a traditional supervised learning model. Typically, it is difficult, time-consuming, and costly to manually label large quantities of new product profiles. In this paper, we propose a novel method to efficiently and effectively extract unseen attribute values from new products in the absence of labeled data (zero-shot setting). We propose HyperPAVE, a multi-label zero-shot attribute value extraction model that leverages inductive inference in heterogeneous hypergraphs. In particular, our proposed technique constructs heterogeneous hypergraphs to capture complex higher-order relations (i.e. user behavior information) to learn more accurate feature representations for graph nodes. Furthermore, our proposed HyperPAVE model uses an inductive link prediction mechanism to infer future connections between unseen nodes. This enables HyperPAVE to identify new attribute values without the need for labeled training data. We conduct extensive experiments with ablation studies on different categories of the MAVE dataset. The results demonstrate that our proposed HyperPAVE model significantly outperforms existing classification-based, generation-based and graph-based models for attribute value extraction in the zero-shot setting. | [
"attribute value extraction",
"zero-shot learning",
"heterogeneous hypergraph"
] | https://openreview.net/pdf?id=7hfssuKQ8P | QqpC0Yz0QW | official_review | 1,700,809,173,473 | 7hfssuKQ8P | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1984/Reviewer_pgmM"
] | review: # Review of "Multi-Label Zero-Shot Product Attribute Value Extraction":
- This paper introduces the concept of representing product information as a heterogeneous (typed) hypergraph that allows complex relations, enhancing traditional knowledge graphs and allowing the incorporation of relations like 'also bought' and 'also viewed', which are key to the ability to generalize over what is known about a specific product. By taking a zero-shot approach to product attribute value extraction utilizing multiple types of information, the paper's method reduces the amount of manual labeling required by current supervised learning approaches. The paper clearly explains the method used, credibly benchmarks this method against previous work, and discusses efficiency considerations. This reviewer believes that the paper is appropriate for the conference audience.
## Pros
- The use of typed hypergraphs for representing product information is new to this reviewer, offering a more complex and detailed structure than traditional knowledge graphs, effectively incorporating consumer behavior using relations such as 'also bought' and 'also viewed'.
- The paper's zero-shot approach to utilizing multiple types of information for this approach is clearly explained, especially with reference to the use of embeddings and data sampling.
- There is a thorough comparison with existing methods, providing a clear understanding of the paper's advancements, as well as a practical understanding of the application’s implications.
## Cons
- The complexity of the hypergraph knowledge representation in this task could pose challenges in terms of understanding and implementation by others reproducing this work.
questions: - Can you provide a description of the mapping from your hypergraph representation to a labeled typed property graph? This reviewer believes describing such a mapping, even an informal one, would help motivate the benefits of the hypergraph knowledge representation, as well as make it easier to approach from the perspective of conference attendees who may not be familiar with it.
- Understanding that this is called out as a topic for future work, is it possible to elaborate on plausible approaches to integrating multimodality into your approach?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7hfssuKQ8P | Multi-Label Zero-Shot Product Attribute-Value Extraction | [
"Jiaying Gong",
"Hoda Eldardiry"
] | E-commerce platforms must provide detailed product descriptions (attribute values) for effective product search and recommendation. However, attribute value information is typically not available for new products. To predict unseen attribute values, large quantities of labeled training data are needed to train a traditional supervised learning model. Typically, it is difficult, time-consuming, and costly to manually label large quantities of new product profiles. In this paper, we propose a novel method to efficiently and effectively extract unseen attribute values from new products in the absence of labeled data (zero-shot setting). We propose HyperPAVE, a multi-label zero-shot attribute value extraction model that leverages inductive inference in heterogeneous hypergraphs. In particular, our proposed technique constructs heterogeneous hypergraphs to capture complex higher-order relations (i.e. user behavior information) to learn more accurate feature representations for graph nodes. Furthermore, our proposed HyperPAVE model uses an inductive link prediction mechanism to infer future connections between unseen nodes. This enables HyperPAVE to identify new attribute values without the need for labeled training data. We conduct extensive experiments with ablation studies on different categories of the MAVE dataset. The results demonstrate that our proposed HyperPAVE model significantly outperforms existing classification-based, generation-based and graph-based models for attribute value extraction in the zero-shot setting. | [
"attribute value extraction",
"zero-shot learning",
"heterogeneous hypergraph"
] | https://openreview.net/pdf?id=7hfssuKQ8P | BGcpb6f3pB | official_review | 1,700,912,910,205 | 7hfssuKQ8P | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1984/Reviewer_XjpU"
] | review: The paper tackles the problem of efficiently and effectively extracting unseen attribute from new products in the absence of labelled data (i.e., in a zero-shot setting). Specifically, the technique put forward (HyperPAVE) constructs heterogeneous hyper-graphs to capture higher-order relations between products to infer links inductively. The model is enhanced with fine-tuned BERT embeddings to provide additional context on the labels of the nodes, and with hyperedge weighting to discriminate the importance of the various hyperedges types in the final node representations.
Pros:
- Overall the paper is interesting and well-written (although it contains too many typos and some broken references). The method is clear and technically sound, and is directly based on recent advances in GNNs and embeddings.
- The experimental setup is clear, and so are the experimental results. Overall, the technique introduced in the paper yields good results (although performance gains are rather limited compared to efficient baselines such as HGNN+ and HyperGCN).
Cons:
- The motivation behind the zero-shot setting is not entirely clear. The example given in the paper (i.e., sneaker example with a new brand) is not exactly helping in this context. A number of recent works consider a few-shot setting for AVE, and I wonder how important the zero-shot scenario that is put forward in the paper is in practice.
- Many inductive techniques have been proposed recently, both on graphs and hypergraphs, and it is unclear how novel the proposed technique is, compared to the baselines or to related work (e.g., "Knowledge-Enhanced Multi-Label Few-Shot Product Attribute-Value Extraction" CIKM2023, among other efforts). I wonder for instance how difficult it would be to take an efficient baseline (such as HGNN+ or HyperGCN) and extend the baseline with an Embedding Module similar to HyperPAVE.
questions: 1. How important is the zero-shot setting you consider in practice? Can you analyse the dataset (e.g., throughout time) to give some statistics on how prevalent this case is, over let's say a few-shot setting?
2. Can you describe the main innovative feature of the technique you put forward?
3. How difficult would it be to take an efficient baseline (such as HGNN+ or HyperGCN) and extend the baseline with an Embedding Module similar to HyperPAVE? Can you comment on the potential performance of the model? (since those baselines are competitive w.r.t. HyperPAVE, I wonder if modifying them slightly for the task you consider would lead some substantial gains or not).
4. Will you open-source your method and experimental setup if the paper is accepted?
ethics_review_flag: No
ethics_review_description: -
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7hfssuKQ8P | Multi-Label Zero-Shot Product Attribute-Value Extraction | [
"Jiaying Gong",
"Hoda Eldardiry"
] | E-commerce platforms must provide detailed product descriptions (attribute values) for effective product search and recommendation. However, attribute value information is typically not available for new products. To predict unseen attribute values, large quantities of labeled training data are needed to train a traditional supervised learning model. Typically, it is difficult, time-consuming, and costly to manually label large quantities of new product profiles. In this paper, we propose a novel method to efficiently and effectively extract unseen attribute values from new products in the absence of labeled data (zero-shot setting). We propose HyperPAVE, a multi-label zero-shot attribute value extraction model that leverages inductive inference in heterogeneous hypergraphs. In particular, our proposed technique constructs heterogeneous hypergraphs to capture complex higher-order relations (i.e. user behavior information) to learn more accurate feature representations for graph nodes. Furthermore, our proposed HyperPAVE model uses an inductive link prediction mechanism to infer future connections between unseen nodes. This enables HyperPAVE to identify new attribute values without the need for labeled training data. We conduct extensive experiments with ablation studies on different categories of the MAVE dataset. The results demonstrate that our proposed HyperPAVE model significantly outperforms existing classification-based, generation-based and graph-based models for attribute value extraction in the zero-shot setting. | [
"attribute value extraction",
"zero-shot learning",
"heterogeneous hypergraph"
] | https://openreview.net/pdf?id=7hfssuKQ8P | 7oR7Vywgnb | official_review | 1,700,861,457,930 | 7hfssuKQ8P | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1984/Reviewer_rfrD"
] | review: This work addresses the problem of creating attribute-value pairs for products in e-commerce.
The solution HyperPAVE uses an interesting combination of a language model with a variant of graph neural networks for hypergraphs (describing, among other things, user behaviour).
The approach is evaluated experimentally. HyperPAVE performs better than the baselines on most tasks and according to most metrics.
I don’t think this submission fits the Semantic Web and Knowledge track.
The writing quality is also unsatisfactory with many grammatical errors, of which a few are listed below.
*Details*
l 211: NEVER, the caps are not needed.
l 212: "Different with … that building ": I don’t understand this? The hypergraphs build hyperedges?
l 213: Missing reference [?]
l 224: Let D = … denotes → should be "denote"
l 276 "could be defined" → "is defined"
l 335: "Details … is introduced" → "are introduced"
l 342: Capital letter after comma
etc., there are too many mistakes like this.
questions: 1. What is the relevance of this work to the Semantic Web?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
7fw6EAxUI7 | Scalable and Effective Generative Information Retrieval | [
"Hansi Zeng",
"Chen Luo",
"Bowen Jin",
"Sheikh Muhammad Sarwar",
"Tianxin Wei",
"Hamed Zamani"
] | Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequence of document ID tokens. These generative retrieval models cast the retrieval problem as a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community about their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. To do so, we propose RIPOR, an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and the TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., a 30.5% MRR improvement on the MS MARCO Dev Set), and performs on par with popular dense retrieval models | [
"Generative Retrieval",
"Learning-to-rank"
] | https://openreview.net/pdf?id=7fw6EAxUI7 | yjASsBeOZ2 | official_review | 1,699,282,594,405 | 7fw6EAxUI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1010/Reviewer_yxq9"
] | review: The paper proposes a pipeline of model training characteristics to make generative IR scale to MSMARCO size (~8M passages). In particular, the paper addresses a ranking-oriented optimisation. Experiments are conducted using three querysets on MSMARCO.
The paper makes 4 key contributions, each of which is ablated and shown to benefit effectiveness. I like this paper and would be happy to see it in WWW. It moves the generative IR field forward to be on par with some dense retrieval models.
Significant points:
I would however note that there are other single-rep dense retrieval models that can be as effective e.g TCT-ColBERT (see https://dl.acm.org/doi/10.1145/3477495.3531721). You might argue that TCT-ColBERT is a teacher/student model with multiple training steps, but the proposed RIPOR also has multiple training stages.
I also would like some treatment of training time in this paper. We know one aspect was trained for 250k steps, but how does training time compare to other models?
Clarifications:
- In Section 4.3.3/Figure 4(right), be clear on WHAT is being replaced, e.g. WHAT is being replaced with PQ?
- the argumentation around lines 408-412 about distortion error and MAP is difficult to follow. How was k-means used? Is distortion error different from reconstruction error?
Negative points:
- I would like the baseline models to have been better characterised - e.g. the definition of TAS-B is incremental over MarginMSE, and a casual reader would not understand if it's single-representation dense retrieval, etc. Why not group the baselines into families?
- ANCE is described as a state-of-the-art dense retrieval model. I would disagree.
- I found Figure 3 difficult to follow - I think the caption could be extended.
Minor points:
- abstract: "perform better on par" -- cannot parse.
- In Table 1, why aren't 0.629 0.633 bold?
- Reformulate: "Literature suggests"
- line 534: use full stop; line 133 ditto.
- line 402: use small "where"
questions: - Please discuss the applicability of the ANCE as a state-of-the-art dense retrieval model, and discuss comparison to other effective single-rep dense retrieval models
- Please discuss training time
ethics_review_flag: No
ethics_review_description: N.A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
7fw6EAxUI7 | Scalable and Effective Generative Information Retrieval | [
"Hansi Zeng",
"Chen Luo",
"Bowen Jin",
"Sheikh Muhammad Sarwar",
"Tianxin Wei",
"Hamed Zamani"
] | Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequence of document ID tokens. These generative retrieval models cast the retrieval problem as a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community about their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. To do so, we propose RIPOR, an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and the TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., a 30.5% MRR improvement on the MS MARCO Dev Set), and performs on par with popular dense retrieval models | [
"Generative Retrieval",
"Learning-to-rank"
] | https://openreview.net/pdf?id=7fw6EAxUI7 | xNAY0PamGt | official_review | 1,700,992,661,172 | 7fw6EAxUI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1010/Reviewer_egUH"
] | review: The paper introduces RIPOR, a novel generation retrieval framework with a prefix-oriented ranking optimization algorithm and a relevance-based document ID construction. The paper claims that RIPOR significantly outperforms existing generative retrieval models on benchmarks like MSMARCO and TREC Deep Learning Track.
### Strengths:
1. The topic is important as generative retrieval has been shown to not work well on big corpora.
2. The writing is clear and easy to follow.
3. RIPOR achieves significant performance gains against existing generative retrieval methods on big corpora.
### Weakness:
1. The reported results on MSMARCO Dev seem to be incorrect. The reported metrics for the baseline models are not consistent with existing papers. For example, ANCE should achieve 33.0 in MRR@10 [1], instead of the 30.1 reported in the paper; TAS-B should achieve 34.0 in MRR@10 [2]. The proposed RIPOR cannot significantly outperform these baselines.
2. Efficiency concern. There are multiple iterations of optimization, including 2 iterations in DocID Initialization, 1 in Seq2seq Pre-training, and 3 in Rank-oriented Fine-tuning. Can you provide an amortized time cost for each training iteration, including the time for mining negatives and training with these negatives?
3. Compared with a traditional dense retriever, all iterations after DocID Initialization are specific to RIPOR. A direct comparison of $M^0$ (as a dense retriever) and RIPOR is desired to show whether all the effort spent optimizing the model is worth it.
4. Reproducibility is another concern of the paper. There is no source code provided, and the training recipe looks complicated.
questions: Questions:
1. What's the teacher $T(q,d^+,d^-)$ in your implementation? Is it a reranker based on docid prefixes?
2. Do you use pseudo-queries in Initial Fine-tuning, Prefix-Oriented Rank-oriented Fine-tuning, and Self-Negative Fine-tuning? If so, what is the time cost for mining negatives for these queries in each stage? If not, since there are only 532K training queries but 8.8M passages, how do you guarantee that $M^4$ can produce accurate relevance estimates for unseen docids?
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7fw6EAxUI7 | Scalable and Effective Generative Information Retrieval | [
"Hansi Zeng",
"Chen Luo",
"Bowen Jin",
"Sheikh Muhammad Sarwar",
"Tianxin Wei",
"Hamed Zamani"
] | Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequence of document ID tokens. These generative retrieval models cast the retrieval problem as a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community about their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. To do so, we propose RIPOR, an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and the TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., a 30.5% MRR improvement on the MS MARCO Dev Set), and performs on par with popular dense retrieval models | [
"Generative Retrieval",
"Learning-to-rank"
] | https://openreview.net/pdf?id=7fw6EAxUI7 | u8QRcYrNyj | official_review | 1,700,640,926,922 | 7fw6EAxUI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1010/Reviewer_Gndm"
] | review: The paper proposes a training pipeline that can achieve good performance on the MS MARCO dataset compared to previous generative models. The proposed prefix-oriented optimization, multi-objective loss, and residual quantization for docIDs are fairly effective in boosting the final performance of the trained model. The author conducts experiments on various aspects and the write-up is easy to follow.
Pros:
1. The authors proposed to use the conditional logit loss instead of the traditional log-conditioned probability that better suits the ranking use case.
2. The authors compare with a good amount of generative retrieval baselines including the latest ones.
3. The ablation study demonstrates the effectiveness of each proposed component which is informative.
Cons:
1. Lacking necessary aspects in the related work: as the major contributions of the pipeline are the training loss and the way document IDs are constructed, I would expect a description of the history of these lines of work in the Related Work section to clarify the scope of contribution.
2. The training pipeline is quite complicated, with pre-training for docIDs, seq2seq pre-training, and three rounds of fine-tuning with hard-negative mining. I would expect the authors to provide more experimental results clarifying the contribution of each stage. Please refer to questions 1 and 2 below for details.
questions: 1. As the proposed training procedure has three rounds of fine-tuning, it would be necessary to report the model performance after each round to clearly demonstrate the improvements from round to round. Training takes time, and readers may be restricted to, or prefer, training for only one or two rounds. Reporting performance after each round would help with reproducing and understanding the pipeline.
2. Knowledge distillation can greatly boost a ranking model's performance. When comparing performance in the tables, it would be fair to compare the distilled model with baseline models also trained with knowledge distillation, such as LTRGR. For baselines trained without knowledge distillation, such as MINDER, DPR, etc., it would be great if the authors could replace MarginMSE with a contrastive loss in the fine-tuning stages and compare the un-distilled model with these baselines. Based on the description of the pipeline, training a model without knowledge distillation seems to be a feasible option.
3. More evidence on Scalability: It seems that the author markets the proposed training pipeline as scalable because it outperforms other GR baselines on the relatively large MS MARCO dataset. However, to demonstrate the scalability, MS MARCO with 8 million documents doesn’t seem to be big enough. There are other datasets such as ClueWeb that is much larger. If scalability is a contribution, then the readers would expect more experimental support from the author.
4. Besides, in the ablation for docID length, vocab size, it would be very helpful if the author could also report the time latency with respect to different setup. Time efficiency would be crucial when we scale up the corpus size.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7fw6EAxUI7 | Scalable and Effective Generative Information Retrieval | [
"Hansi Zeng",
"Chen Luo",
"Bowen Jin",
"Sheikh Muhammad Sarwar",
"Tianxin Wei",
"Hamed Zamani"
] | Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequence of document ID tokens. These generative retrieval models cast the retrieval problem as a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community about their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. To do so, we propose RIPOR, an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MSMARCO and the TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., a 30.5% MRR improvement on the MS MARCO Dev Set), and performs on par with popular dense retrieval models | [
"Generative Retrieval",
"Learning-to-rank"
] | https://openreview.net/pdf?id=7fw6EAxUI7 | mUlETGHVZ5 | official_review | 1,700,655,392,876 | 7fw6EAxUI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1010/Reviewer_Pdfo"
] | review: ## Pros:
- The paper tackles the important problem of scaling up generative retrieval models to large datasets, which has been a major limitation of existing methods. The paper provides useful analysis of the challenges of generative retrieval, identifying two key issues: (1) the sequential nature of document ID generation and (2) using relevance signals for document ID construction. The proposed techniques of prefix-oriented ranking and residual quantization effectively address these issues.
- Thorough experiments are conducted on standard IR benchmarks like MS MARCO and TREC DL, demonstrating the efficacy of RIPOR in scaling up generative retrieval.
- This paper is well organized and presented. Most parts of the paper are easy to understand.
## Cons:
- A major advantage of generative retrieval is that it can be trained end-to-end, as the authors state in Line 75. However, the proposed RIPOR requires a multi-step, complex training process and also needs the gold margin predicted by an additional teacher model (i.e., the dense retrieval model **MarginMSE**), which may conflict with this motivation.
- The consideration of baselines is not comprehensive enough. There are other ways of constructing semantically structured identifiers, such as Ultron-PQ[1] and GenRet[2]. The authors should compare these strongly related methods.
- When $L$ is set to 32 and $V$ is set to 256, what is the repetition rate of docIDs? The authors should explain how to solve the problem of multiple documents sharing the same ID.
- If two documents share similar topics, it makes sense for them to have similar prefixes. However, in general, a document may contain different topics. The authors need to explain how to deal with this case when generating prefixes for DocIDs. Some works have proposed designing multiple DocIDs to represent a document [3].
- I find the related work section less convincing. Only two works on generative retrieval are introduced in Line 914 (Section Related Work). Several noteworthy works have been published in reputable conferences such as SIGIR [4,5], CIKM [6,7], NeurIPS [2] and ACL [3,8,9,10]. Many of these works could be considered as baselines for comparison.
- I believe the statement in the abstract, "This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks" (Lines 13-17), is overclaimed. The experiments are conducted on MS MARCO and TREC DL, which have been widely used in previous generative retrieval works. I would agree with the claim if the experiments were conducted on larger datasets, not limited to millions of documents.
- Regarding Table 2 in the experiments, it would be beneficial to consider other advanced Product Quantization (PQ) techniques for comparison. There are numerous works building upon PQ, within the IR community as well, that are not discussed in the contribution [11,12].
## Summary:
This paper addresses the key challenges of scaling up generative retrieval models. The proposed techniques and thorough experiments demonstrate that generative retrievers can achieve high effectiveness at scale. However, the existing experimental results may not sufficiently support the conclusion. Besides, some important discussions are missing.
[1] Zhou, Yujia, Jing Yao, Zhicheng Dou, Ledell Wu, Peitian Zhang, and Ji-Rong Wen. "Ultron: An ultimate retriever on corpus with a model-based indexer." arXiv preprint arXiv:2208.09257.
[2] Sun, Weiwei, Lingyong Yan, Zheng Chen, Shuaiqiang Wang, Haichao Zhu, Pengjie Ren, Zhumin Chen, Dawei Yin, Maarten de Rijke, and Zhaochun Ren. "Learning to Tokenize for Generative Retrieval." NeurIPS 2023.
[3] Li, Yongqi, Nan Yang, Liang Wang, Furu Wei, and Wenjie Li. "Multiview Identifiers Enhanced Generative Retrieval." ACL 2023.
[4] Chen, Jiangui, Ruqing Zhang, Jiafeng Guo, Yixing Fan, and Xueqi Cheng. "GERE: Generative evidence retrieval for fact verification." SIGIR 2022.
[5] Chen, Jiangui, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yiqun Liu, Yixing Fan, and Xueqi Cheng. "A Unified Generative Retriever for Knowledge-Intensive Language Tasks via Prompt Learning." SIGIR 2023.
[6] Wang, Zihan, Yujia Zhou, Yiteng Tu, and Zhicheng Dou. "NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR." CIKM 2023.
[7] Chen, Jiangui, Ruqing Zhang, Jiafeng Guo, Yiqun Liu, Yixing Fan, and Xueqi Cheng. "CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks." CIKM 2022.
[8] Ren, Ruiyang, Wayne Xin Zhao, Jing Liu, Hua Wu, Ji-Rong Wen, and Haifeng Wang. "TOME: A Two-stage Approach for Model-based Retrieval." ACL 2023.
[9] Chen, Xiaoyang, Yanjiang Liu, Ben He, Le Sun, and Yingfei Sun. "Understanding Differential Search Index for Text Retrieval." ACL 2023.
[10] Ziems, Noah, Wenhao Yu, Zhihan Zhang, and Meng Jiang. "Large Language Models are Built-in Autoregressive Search Engines." ACL 2023.
[11] Zhang, Han, Hongwei Shen, Yiming Qiu, Yunjiang Jiang, Songlin Wang, Sulong Xu, Yun Xiao, Bo Long, and Wen-Yun Yang. "Joint learning of deep retrieval model and product quantization based embedding index." SIGIR 2021.
[12] Zhan, Jingtao, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. "Learning discrete representations via constrained clustering for effective and efficient dense retrieval." WSDM 2022.
questions: 1. Could you provide some details on how to solve the problem of DocID repetition, i.e., multiple documents sharing the same DocID?
2. You demonstrated that residual quantization (RQ) is more effective for document ID construction than product quantization (PQ). Can you provide some analysis on the causes behind this - does the hierarchical embedding structure explain the difference?
3. You focused on ranking metrics but ultimately usability, latency and throughput are vital. Can generative retrieval achieve competitive speed and resource utilization compared to established sparse/dense retrievers?
4. If two documents share similar topics, it makes sense for them to have similar prefixes. However, in general, a document may contain different topics, could you explain how to deal with this case for the generated prefix of DocIDs?
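Regarding question 2 above, residual quantization can be sketched minimally as follows (an illustrative toy, not RIPOR's actual implementation; the sizes here are made up and much smaller than the paper's L=32, V=256):

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Greedy residual quantization: each level quantizes what the
    previous levels left unexplained, yielding a coarse-to-fine code."""
    residual = x.copy()
    code = []
    for C in codebooks:                       # C: (V, d) codebook per level
        idx = np.argmin(((residual - C) ** 2).sum(axis=1))
        code.append(int(idx))
        residual = residual - C[idx]          # pass the residual down
    return code, residual

rng = np.random.default_rng(0)
d, V, L = 8, 4, 3                             # toy sizes, not the paper's
codebooks = [rng.normal(size=(V, d)) for _ in range(L)]
x = rng.normal(size=d)
code, residual = residual_quantize(x, codebooks)
# by construction, x equals the sum of the chosen codewords plus the
# final residual; earlier code tokens capture the coarsest structure
```

Because each level quantizes only what the previous levels left unexplained, earlier code tokens carry coarse information, which is one plausible reason prefixes of RQ-based DocIDs are more informative than PQ codes, whose sub-codes are independent of each other.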
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
7fw6EAxUI7 | Scalable and Effective Generative Information Retrieval | [
"Hansi Zeng",
"Chen Luo",
"Bowen Jin",
"Sheikh Muhammad Sarwar",
"Tianxin Wei",
"Hamed Zamani"
] | Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequence of document ID tokens. These generative retrieval models cast the retrieval problem to a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community on their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. To do so, we propose RIPOR: an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MS MARCO and the TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., a 30.5% MRR improvement on the MS MARCO Dev Set), and performs on par with popular dense retrieval models | [
"Generative Retrieval",
"Learning-to-rank"
] | https://openreview.net/pdf?id=7fw6EAxUI7 | eZlzEn20Yp | decision | 1,705,909,223,105 | 7fw6EAxUI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This is a metareview based on the reviews, author responses, and my own opinion. This paper proposes RIPOR, a generative ranker model for large-scale web search. RIPOR's prefix-oriented ranking optimisation and relevance-based document ID construction are novel and address key limitations of existing generative retrieval models. All of the referees agree that the work is timely, interesting, and relevant to the community. One key weakness that the authors should address is the oversight of important relevant work in the related work section. While this is very important, I believe that the referees and discussion have given the authors plenty of valuable feedback on how to fix this in the camera-ready version of the paper, and I urge the authors to take this seriously. The claim of being "first" to effectively use a generative LLM for ranking is not true, but this does not diminish the novelty of what they have proposed. So, perhaps tone it down a little in the camera-ready version and focus on the practical impact your approach offers, which is just as valuable as being "first". This can be addressed with a careful revision of the related work. In general, I like this work and believe it should be accepted. We hope the detailed reviews and discussion will help you produce a fantastic camera-ready version for the conference.
7fw6EAxUI7 | Scalable and Effective Generative Information Retrieval | [
"Hansi Zeng",
"Chen Luo",
"Bowen Jin",
"Sheikh Muhammad Sarwar",
"Tianxin Wei",
"Hamed Zamani"
] | Recent research has shown that transformer networks can be used as differentiable search indexes by representing each document as a sequence of document ID tokens. These generative retrieval models cast the retrieval problem to a document ID generation problem for each given query. Despite their elegant design, existing generative retrieval models only perform well on artificially-constructed and small-scale collections. This has led to serious skepticism in the research community on their real-world impact. This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks. To do so, we propose RIPOR: an optimization framework for generative retrieval that can be adopted by any encoder-decoder architecture. RIPOR is designed based on two often-overlooked fundamental design considerations in generative retrieval. First, given the sequential decoding nature of document ID generation, assigning accurate relevance scores to documents based on the whole document ID sequence is not sufficient. To address this issue, RIPOR introduces a novel prefix-oriented ranking optimization algorithm. Second, initial document IDs should be constructed based on relevance associations between queries and documents, instead of the syntactic and semantic information in the documents. RIPOR addresses this issue using a relevance-based document ID construction approach that quantizes relevance-based representations learned for documents. Evaluation on MS MARCO and the TREC Deep Learning Track reveals that RIPOR surpasses state-of-the-art generative retrieval models by a large margin (e.g., a 30.5% MRR improvement on the MS MARCO Dev Set), and performs on par with popular dense retrieval models | [
"Generative Retrieval",
"Learning-to-rank"
] | https://openreview.net/pdf?id=7fw6EAxUI7 | JoztgvJ76r | official_review | 1,700,801,232,696 | 7fw6EAxUI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1010/Reviewer_21dC"
] | review: Summary :
The paper introduces RIPOR (Relevance-based Identifiers for Prefix-Oriented Ranking), a framework designed to enhance generative retrieval models by addressing two primary issues inherent in existing models. These models, which use transformer networks as differentiable search indexes, have previously struggled with large-scale, real-world data, limiting their practical application. However, the authors used a prefix-oriented ranking optimization algorithm to overcome some of the current methods' drawbacks. By focusing on prefix-oriented ranking, the proposed strategy aims to reduce noise in the beam search decoding. Additionally, the authors propose quantizing relevance-based representations learned for documents, aligning document IDs more closely with query-document relevance associations. The authors demonstrate that their method shows superior performance compared to other generative IR models and even competitive performance with state-of-the-art dense retrievers.
---------
Strengths:
1. RIPOR's prefix-oriented ranking optimization and relevance-based document ID construction are novel and address key limitations of existing generative retrieval models. In general, the approach is highly interesting and incorporates several technical innovations.
2. RIPOR significantly outperforms state-of-the-art generative retrieval models on standard retrieval benchmarks.
3. The focus on prefix-oriented optimization specifically targets the challenges posed by beam search in generative retrieval. This idea could potentially be applied to other applications where noise in the initial stages of beam search affects performance.
4. The proposed methodology's components are thoroughly analyzed through detailed ablation studies.
5. The inclusion of a comprehensive set of baselines.
--------------
Weaknesses
1. While the approach is novel and shows promising performance, the complexity and computational cost of the model are not discussed in comparison to other baselines.
2. One concern relates to prefix optimization in scenarios where labels are not sparse. For instance, when there are multiple relevant documents per query, generating training triples with query, positive document, and negative documents may introduce challenges if these documents share a significant portion of prefixes. This could result in labeling paradoxes within the prefixes.
3. The related work section is relatively short. It would be beneficial for the authors to consolidate and provide a more comprehensive review of related work in a dedicated section.
--------------
Comments
- For future venues, I suggest that the authors consider sharing their code for review through https://anonymous.4open.science/. This would facilitate a better understanding of the paper's reproducibility by reviewers.
questions: 1. In section 3.2, why did the authors choose to normalize \alpha_i in that specific manner? Additionally, the rationale behind selecting \beta as 2 is not clear.
2. Please address my concern about potential conflicts in the prefix labels when dealing with comprehensive labels (i.e., more than one relevant document per query). How does RIPOR handle paradoxes in the labels within the prefixes?
3. I wonder if there is any way to analyze the extent to which errors arise from stacking error in beam search or from the last equation of section 3.1 involving previous prefixes.
4. Could you provide information about the training time of RIPOR?
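On question 3 (errors stacking up in beam search), the decoding mechanism can be illustrated with a toy beam search (a generic sketch with made-up, prefix-independent scores; a real decoder conditions each step on the prefix, which is where early pruning can discard the globally best DocID):

```python
def beam_search(scores, beam_size):
    """Toy beam search: scores[t][v] is the log-probability of token v at
    step t (prefix-independent here for simplicity). Returns the top
    sequences ranked by summed log-probability."""
    beams = [([], 0.0)]
    for step in scores:
        candidates = []
        for seq, lp in beams:
            for v, s in enumerate(step):
                candidates.append((seq + [v], lp + s))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]      # prune everything else
    return beams

# two steps, three tokens each; with beam_size=2 any candidate starting
# with token 2 is pruned at step 1 and can never be recovered later
scores = [[-0.1, -0.5, -3.0],
          [-2.0, -2.0, -0.1]]
best = beam_search(scores, beam_size=2)     # best sequence: [0, 2]
```

This is the intuition behind prefix-oriented optimization: if early (prefix) scores are noisy, pruning at the first steps is irreversible, so the prefixes themselves must be trained to rank well.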
ethics_review_flag: Yes
ethics_review_description: The authors posted the paper on arXiv a few days ago. I am not sure whether this is against the Web Conference's code of conduct, but I assumed posting the paper on arXiv was only allowed before the submission.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7024czziih | VPNSniffer: Identifying VPN Servers Through Graph-Represented Behaviors | [
"chenxu wang",
"Jiangyi Yin",
"Zhao Li",
"Hongbo Xu",
"Zhongyi Zhang",
"Qingyun Liu"
] | Identifying VPN servers is a crucial task in various situations, such as geo-fraud detection, bot traffic analysis and network attack identification. Although numerous studies that focus on network traffic detection have achieved excellent performance in closed-world scenarios, particularly those methods based on deep learning, they may exhibit significant performance degradation due to changes in the network environment. To mitigate this issue, a few studies have attempted to use methods based on active probing to detect VPN servers. However, these methods still have some limitations. They cannot handle situations where probing responses are absent, and lack generalization due to their focus on specific VPNs. In this work, we propose VPNSniffer, which utilizes the graph-represented behaviors to detect VPN servers in real-world scenarios. VPNSniffer outperforms existing methods in four offline datasets. The results based on our datasets, which contain multiple different VPNs, also indicate that VPNSniffer has better generalization. Furthermore, we deploy VPNSniffer in an Internet Service Provider's (ISP) environment to evaluate its effectiveness. The results show that VPNSniffer can improve the coverage of sophisticated detection engines and serve as a complement to existing methods. | [
"VPN Detection",
"Active Probing",
"Node Classification"
] | https://openreview.net/pdf?id=7024czziih | owrm2ClGQZ | official_review | 1,700,613,373,478 | 7024czziih | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1393/Reviewer_pCWb"
] | review: I thank the authors for submitting their work to WWW 24. This paper improves VPN detection rates through constructing communication graphs. The authors primarily leverage two insights: (1) the connection relationship for a normal server is more intricate, and (2) a client might access multiple servers simultaneously. They compare their system with state-of-the-art academic detection systems (in an offline setting) as well as industrial detection engines and demonstrate that their system achieves good performance.
Overall, I really enjoyed reading this paper. It is well-written (modulo various typos), with a solid methodology and systematic evaluation. That being said, below, I list some questions that I have after reading the paper:
* The online evaluation is limited to one dataset (512k server IPs). I wonder if the performance stays stable if the authors sample a subset of IPs from their current dataset. Additionally, I am curious about whether doubling the number of IPs changes the performance (this one might be harder).
* Section 4.3 is currently a bit unsatisfying. I wish the authors could comment on (a) why 16.51% of the servers are not labeled by any industry engine (one way to do this is manually examining a few such servers); (b) how many extra servers are identified by the proposed system; and (c) compared to other industry engines, why IpInfo performs so poorly (e.g., is it because it only labels a specific kind of server?).
Nits:
* Table 1: UCP -> UDP?
* We show their performance in Table 2. It should be Table 3.
* Why 65636? Maybe 65536?
**Post-rebuttal**: I thank the authors for their comments and have no further questions.
questions: See my reviews above.
ethics_review_flag: No
ethics_review_description: Ethical concerns are properly addressed.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
7024czziih | VPNSniffer: Identifying VPN Servers Through Graph-Represented Behaviors | [
"chenxu wang",
"Jiangyi Yin",
"Zhao Li",
"Hongbo Xu",
"Zhongyi Zhang",
"Qingyun Liu"
] | Identifying VPN servers is a crucial task in various situations, such as geo-fraud detection, bot traffic analysis and network attack identification. Although numerous studies that focus on network traffic detection have achieved excellent performance in closed-world scenarios, particularly those methods based on deep learning, they may exhibit significant performance degradation due to changes in the network environment. To mitigate this issue, a few studies have attempted to use methods based on active probing to detect VPN servers. However, these methods still have some limitations. They cannot handle situations where probing responses are absent, and lack generalization due to their focus on specific VPNs. In this work, we propose VPNSniffer, which utilizes the graph-represented behaviors to detect VPN servers in real-world scenarios. VPNSniffer outperforms existing methods in four offline datasets. The results based on our datasets, which contain multiple different VPNs, also indicate that VPNSniffer has better generalization. Furthermore, we deploy VPNSniffer in an Internet Service Provider's (ISP) environment to evaluate its effectiveness. The results show that VPNSniffer can improve the coverage of sophisticated detection engines and serve as a complement to existing methods. | [
"VPN Detection",
"Active Probing",
"Node Classification"
] | https://openreview.net/pdf?id=7024czziih | olgNdfZ7Er | official_review | 1,700,611,796,968 | 7024czziih | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1393/Reviewer_adPJ"
review: The paper presents a system, VPNSniffer, which uses graph-represented behaviors to detect VPN servers in real-world scenarios, and presents a comparison outperforming the methods used in 4 offline datasets. Compared to prior works, the authors identify 16.44% more VPN servers through the approach in VPNSniffer, servers which could not be detected through prior approaches used in similar measurements. A large part of the authors' effort focuses on generalizability and on performing these detections without using information from packet payloads. Compared to prior efforts, the VPN datasets in this work cover at least 43 VPNs, and the deployment at an ISP is a valuable contribution.
This work contributes new observations about stealth ports, presents a new technique with VPNSniffer, and combines features of active probing to node communication relationships. A system like VPNSniffer is intended to be deployed in middleboxes or at ISPs to observe flows and make classifications about the server being contacted.
While the authors include a small ethics note in their paper, it falls short of a detailed discussion of the challenges and potential misuses of the system. Deployment at ISPs to classify VPN gateways/servers with higher accuracy opens up tremendous privacy risks, allowing ISPs to learn users' VPN usage behaviors, and could amplify censorship attempts, which are rampant on the web today, especially by authoritarian governments with strict control over the ISPs and cellular network services providing Internet access. While no technology is immune to this, the work in this paper is important and worthy of investigation, as it indicates to the anti-censorship community the ability to overcome detection through this system. While presenting mitigations or advice to VPN software developers is out of scope for the paper, I'd like the authors to include a more thoughtful ethics section detailing the potential harms a system such as VPNSniffer could cause, which has currently been brushed aside.
The effort in this paper is commendable, but some of the formalism presented in the graph construction could be simplified in its writing; it is sometimes hard to follow, especially because the reader encounters new terms ("probing port combination", "observed port") and additional factors $PL, PC, DT, DP$ that come later. It might be valuable to tabulate the terms used and present the construction as an algorithm, keeping it shorter and more focused rather than scattered across two pages. This would also leave space for the authors to have a detailed discussion of attacks against the model, risks of the model, ethical implications, and limitations of the work.
questions: 1. Lines 105-110 talk about the existing approaches used to determine whether the server is a VPN server based on the response information obtained during the scan. Are these HTTP CONNECT messages or something different? A relatively minor clarification or reference for the statement in Lines 110-111 would be useful to provide.
2. The VPNSniffer system makes a strong assumption that, compared to VPN servers, normal servers are accessed by numerous clients and exhibit a more sophisticated connection relationship (Lines 122-124). However, with the emergence of public proxies such as the Apple iCloud Private Relay and more architectures which rely on third-party proxies, it is hard to argue that this behavior continues to hold. How does VPNSniffer adapt to these architectural changes in the network affecting the packet flows?
3. What was the anonymization mechanism used to anonymize the IP addresses from the ISP partner? Is it the last octet of an IPv4 address and the last X=80 bits of an IPv6 address? Or are they hashes of the IP addresses?
4. Lines 350-353 indicate that VPN servers mimic TLS traffic and use port 443 but do not offer TLS services, with response behavior differing from normal servers. How do the authors define what normal behavior is? Is this through active probing? Also, could TLS certificates present on the server reveal information about the host in their SAN fields? How does this work against CDN networks, which receive millions of requests, possibly to the same set of ports? The approach presented would have a high similarity $ζ$ for both a large CDN and a VPN service using an HTTP CONNECT-based mechanism.
5. The classification of DNS response behavior based on connections to port 53 may indicate the existence of an open DNS resolver if $RT_d=1$, but other reasons for errors, such as DNS response code 2 (SERVFAIL) or 5 (REFUSED), which return the errors described, could mean correctly configured DNS name servers that simply are not authoritative for the `google.com` query. Did the authors perform reverse PTR queries for the IP addresses to find any associated domains before concluding that the server might be a fake DNS resolver (Line 413)? The conclusion seems rather abrupt and could benefit from clarification.
6. Line 600 lists ASNs 9009 (M247) and 60068 (CDN77). M247 openly advertises VPN capabilities and services from all its global datacenters, but CDN77 is a CDN network like many others (Akamai, Cloudflare, Fastly, all of which offer VPN-like services). This choice for the seed list might affect the correctness of the classification of the servers.
Minor issues:
1. Line 156, Line 157: should `piracy` instead be `privacy`?
2. Consider Figure 6 to span full width of two columns below Table 4.
3. Line 317, Typo `Poring` --> `Probing`
ethics_review_flag: No
ethics_review_description: Authors have an ethics statement but needs more work detailing the immediate harms of such models and tools.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
7024czziih | VPNSniffer: Identifying VPN Servers Through Graph-Represented Behaviors | [
"chenxu wang",
"Jiangyi Yin",
"Zhao Li",
"Hongbo Xu",
"Zhongyi Zhang",
"Qingyun Liu"
] | Identifying VPN servers is a crucial task in various situations, such as geo-fraud detection, bot traffic analysis and network attack identification. Although numerous studies that focus on network traffic detection have achieved excellent performance in closed-world scenarios, particularly those methods based on deep learning, they may exhibit significant performance degradation due to changes in the network environment. To mitigate this issue, a few studies have attempted to use methods based on active probing to detect VPN servers. However, these methods still have some limitations. They cannot handle situations where probing responses are absent, and lack generalization due to their focus on specific VPNs. In this work, we propose VPNSniffer, which utilizes the graph-represented behaviors to detect VPN servers in real-world scenarios. VPNSniffer outperforms existing methods in four offline datasets. The results based on our datasets, which contain multiple different VPNs, also indicate that VPNSniffer has better generalization. Furthermore, we deploy VPNSniffer in an Internet Service Provider's (ISP) environment to evaluate its effectiveness. The results show that VPNSniffer can improve the coverage of sophisticated detection engines and serve as a complement to existing methods. | [
"VPN Detection",
"Active Probing",
"Node Classification"
] | https://openreview.net/pdf?id=7024czziih | UqoHXn3C5Z | decision | 1,705,909,227,135 | 7024czziih | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The major issues with the paper relate to the data used for testing the solution (lack of generality: a single source, one ISP, was used) as well as the intrusiveness of the method (network probes are needed).
However, the methodology is sound, the problem interesting, and the approach shows some elements of novelty.
The interactions between the authors and the reviewers have led to some interesting discussions.
7024czziih | VPNSniffer: Identifying VPN Servers Through Graph-Represented Behaviors | [
"chenxu wang",
"Jiangyi Yin",
"Zhao Li",
"Hongbo Xu",
"Zhongyi Zhang",
"Qingyun Liu"
] | Identifying VPN servers is a crucial task in various situations, such as geo-fraud detection, bot traffic analysis and network attack identification. Although numerous studies that focus on network traffic detection have achieved excellent performance in closed-world scenarios, particularly those methods based on deep learning, they may exhibit significant performance degradation due to changes in the network environment. To mitigate this issue, a few studies have attempted to use methods based on active probing to detect VPN servers. However, these methods still have some limitations. They cannot handle situations where probing responses are absent, and lack generalization due to their focus on specific VPNs. In this work, we propose VPNSniffer, which utilizes the graph-represented behaviors to detect VPN servers in real-world scenarios. VPNSniffer outperforms existing methods in four offline datasets. The results based on our datasets, which contain multiple different VPNs, also indicate that VPNSniffer has better generalization. Furthermore, we deploy VPNSniffer in an Internet Service Provider's (ISP) environment to evaluate its effectiveness. The results show that VPNSniffer can improve the coverage of sophisticated detection engines and serve as a complement to existing methods. | [
"VPN Detection",
"Active Probing",
"Node Classification"
] | https://openreview.net/pdf?id=7024czziih | QDY6jx5enr | official_review | 1,700,687,996,571 | 7024czziih | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1393/Reviewer_K8Lu"
review: This paper proposes a graph-based approach to identify VPN servers. It uses active probe packets for nine application-layer protocols (e.g., SSTP, DNS, and HTTP) to elicit responses from suspicious server machines. From the response packets, the proposed method extracts various features and also constructs a graph characterizing communications among different servers. The standard GraphSAGE technique is used to learn the embeddings of the nodes and thus detect VPN servers. Two types of experiments are performed. For offline experiments, the method is compared against five other existing methods on a labeled dataset contributed by an industry partner. For online experiments, a prototype system is deployed in the partner ISP's network, showing that the VPN servers detected can be confirmed by existing industry engines.
Strengths:
+ The works considered practical deployment of the proposed system in a real ISP network environment for VPN server detection.
+ The work leveraged various subtle differences in server responses as features to identify VPN servers.
Weaknesses:
- The proposed method is intrusive, as active probe packets are needed. Such packets can also be exploited by a VPN server to hide its existence (e.g., blocking such probing packets).
- The evaluation work is performed on a single dataset contributed by the partnering ISP. It's hard to know whether the techniques can be generalized to other datasets.
- The detection performances do not show consistent improvement over the existing methods.
questions: * The results presented in Table 3 do not show that VPNSniffer's detection accuracy outperforms the alternative approaches. Particularly, its precision results indicated by precision scores are worse than some other methods. This shows that GraphSage used by VPNSniffer may not be a perfect choice for the detection task.
* Without ground truth data about the VPN servers detected, it's hard to interpret the detection accuracy results shown in Table 5. Is it possible that the VPN servers discovered by VPNSniffer are false alarms?
* The work could be improved by applying VPNSniffer to other existing VPN datasets. This would help demonstrate the generalization capability of the proposed method.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 3
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
7024czziih | VPNSniffer: Identifying VPN Servers Through Graph-Represented Behaviors | [
"chenxu wang",
"Jiangyi Yin",
"Zhao Li",
"Hongbo Xu",
"Zhongyi Zhang",
"Qingyun Liu"
] | Identifying VPN servers is a crucial task in various situations, such as geo-fraud detection, bot traffic analysis and network attack identification. Although numerous studies that focus on network traffic detection have achieved excellent performance in closed-world scenarios, particularly those methods based on deep learning, they may exhibit significant performance degradation due to changes in the network environment. To mitigate this issue, a few studies have attempted to use methods based on active probing to detect VPN servers. However, these methods still have some limitations. They cannot handle situations where probing responses are absent, and lack generalization due to their focus on specific VPNs. In this work, we propose VPNSniffer, which utilizes the graph-represented behaviors to detect VPN servers in real-world scenarios. VPNSniffer outperforms existing methods in four offline datasets. The results based on our datasets, which contain multiple different VPNs, also indicate that VPNSniffer has better generalization. Furthermore, we deploy VPNSniffer in an Internet Service Provider's (ISP) environment to evaluate its effectiveness. The results show that VPNSniffer can improve the coverage of sophisticated detection engines and serve as a complement to existing methods. | [
"VPN Detection",
"Active Probing",
"Node Classification"
] | https://openreview.net/pdf?id=7024czziih | 3RLxwrJDQC | official_review | 1,699,436,668,188 | 7024czziih | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1393/Reviewer_aBYp"
] | review: ### Summary
This paper proposes VPNSniffer, a VPN server detection system that utilizes graph-represented probing and communication information to enhance detection performance. VPNSniffer constructs the probing graph and the communication graph, extracts node features from active probing and network traffic, and applies graph neural networks to classify VPN servers. The authors collect and construct new VPN server detection datasets from their ISP partner. In experiments, VPNSniffer can outperform active probing-based methods on offline datasets by a noticeable margin. For online experiments, VPNSniffer can also serve as a complement to existing industry detection engines.
### Pros
1. The idea of constructing and utilizing the probing graph seems interesting and novel.
2. The datasets provided by the authors seem comprehensive and useful.
3. The topic of VPN server detection is significant with a broad range of interests in the web domain.
### Cons
1. The GNN-based VPN server detection methodology is not novel.
2. Missing network traffic-based baselines.
3. Generalization ability claim not justified.
4. No efficiency analysis.
5. Poor clarity. There are many typos, grammatical errors, or inconsistencies throughout the paper.
questions: 1. The authors claim that VPNSniffer has better generalization ability compared to previous methods, but there are no experiments explicitly supporting that claim. For example, the authors may consider an experiment that trains the model on $\mathcal{D}_2$ and evaluates it on $\mathcal{D}_3$ to verify the claim.
2. There are currently no experimental results of network traffic-based methods. The authors might need to provide some to justify VPNSniffer's superiority to them.
3. VPNSniffer needs to collect information from the ISP network and construct the probing graph and the communication graph, which could cost a considerable amount of computing resources and probing network traffic, especially if the ISP network is large. How is the efficiency of VPNSniffer compared to existing detection methods and industry engines?
4. In online experiments, are all 512,170 servers fed to the industry engines for checking, or are only the 6,143 VPNSniffer-identified servers fed to those engines? It is hard to believe that the VPN servers identified by these industry engines are all within the 6,143 identified by VPNSniffer.
5. In the ablation study, there is a variant replacing the classifier with a Linear layer, which is a bit confusing. Isn't VPNSniffer already using a linear classifier, as described in Equation (14)?
6. According to the sensitivity experiments, the performance of VPNSniffer is better when the access sequence length is shorter. Then why do the authors set the length to be 50 for their main experiments, according to line 663?
7. It would be better for the paper to have a dedicated section to formally define the special terms used in this paper, such as "observed port", "probing port", "probing port combination", etc.
8. It would be better for the authors to provide a complete spreadsheet describing all the node features utilized, for both the probing graph and the communication graph.
9. There are many typos, grammatical errors, or inconsistencies throughout the paper. The authors should proofread the paper carefully. Some instances are shown below:
- Equation (1): why do the authors use summation here? It seems that the summation operation should be changed to the union operation as $s_{(i,*)}^t$ are all sets.
- Line 316-318: the sentence should be checked again and corrected accordingly. There are some typos and grammatical errors. E.g., "Give" should be "Given", "Poring" should be "Probing".
- Line 435-436: there should be a "we" before "refer to"
- Line 555: "show" -> "shown"
- Inconsistency of notations. The notation of the probing graph is in the calligraphic font ($\mathcal{PG}$) in line 325, but suddenly changed to the normal font ($PG$) in line 737-738. There is a similar issue for the notation of the communication graph.
10. In Section 5, who are the "attackers"? This term first appears in this section and does not make sense.
11. Will the authors release the code implementation of VPNSniffer?
ethics_review_flag: Yes
ethics_review_description: The dataset collected by the authors contains plaintext server IPs, which may be sensitive information with security concerns.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6rs3ES8wgX | Predictive Relevance Uncertainty for Recommendation Systems | [
"Charul Paliwal",
"Anirban Majumder",
"Sivaramakrishnan R Kaveri"
] | The Click-through Rate (CTR) module is the foundational block of recommendation systems and is used for search, content selection, advertising, video streaming, etc. CTR is modelled as a classification problem and extensive research has been done to improve CTR models. However, uncertainty methods for these models are still an unexplored area. In this work we analyse popular uncertainty methods in the context of recommendation systems. We found that popular uncertainty models fail to capture the predictive uncertainty of CTR models that is unique to recommendation models and is not prevalent in traditional classification models. We empirically show why a different uncertainty measure is required for recommendation system CTR prediction models.
We propose PRU (Predictive Relevance Uncertainty), a single-forward-pass approach that quantifies the uncertainty of a sample as its distance from the predictive-relevance samples of the training data. We show the efficacy of the proposed predictive relevance uncertainty (PRU) on selective prediction. Further, we demonstrate the utility of the proposed framework on the downstream tasks of OOD detection and active learning while maintaining the latency of a single-pass deterministic model. | [
"Recommendation Systems",
"Uncertainty Quantification",
"CTR Prediction"
] | https://openreview.net/pdf?id=6rs3ES8wgX | fXoT7ERLXm | decision | 1,705,909,250,084 | 6rs3ES8wgX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This paper addresses a crucial yet often overlooked issue in click-through rate (CTR) prediction: uncertainty quantification. The authors effectively highlight the unique challenges of uncertainty quantification in CTR prediction, as evidenced by their experimental findings. This aspect is particularly intriguing as it reveals the inadequacy of traditional methods from general classification when applied to CTR prediction. These insights are not only fresh but also highly relevant to the field.
Furthermore, the authors introduce an innovative approach for quantifying uncertainty in CTR prediction. The extensive evaluation presented in the paper convincingly demonstrates the efficacy of this approach in enhancing CTR predictions. This novel contribution is commendable and adds significant value to the paper.
I appreciate the paper for pinpointing a key issue in recommender systems that has been underrepresented in current literature. The reviewers have brought up several points for improvement, which should be incorporated in the final version of this paper. In addition, I would like to see the authors share their code for the proposed method and its evaluations. |
6rs3ES8wgX | Predictive Relevance Uncertainty for Recommendation Systems | [
"Charul Paliwal",
"Anirban Majumder",
"Sivaramakrishnan R Kaveri"
] | The Click-through Rate (CTR) module is the foundational block of recommendation systems and is used for search, content selection, advertising, video streaming, etc. CTR is modelled as a classification problem and extensive research has been done to improve CTR models. However, uncertainty methods for these models are still an unexplored area. In this work we analyse popular uncertainty methods in the context of recommendation systems. We found that popular uncertainty models fail to capture the predictive uncertainty of CTR models that is unique to recommendation models and is not prevalent in traditional classification models. We empirically show why a different uncertainty measure is required for recommendation system CTR prediction models.
We propose PRU (Predictive Relevance Uncertainty), a single-forward-pass approach that quantifies the uncertainty of a sample as its distance from the predictive-relevance samples of the training data. We show the efficacy of the proposed predictive relevance uncertainty (PRU) on selective prediction. Further, we demonstrate the utility of the proposed framework on the downstream tasks of OOD detection and active learning while maintaining the latency of a single-pass deterministic model. | [
"Recommendation Systems",
"Uncertainty Quantification",
"CTR Prediction"
] | https://openreview.net/pdf?id=6rs3ES8wgX | cxBClf6bSe | official_review | 1,701,009,572,659 | 6rs3ES8wgX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2284/Reviewer_E8bv"
review: The paper proposes Predictive Relevance Uncertainty (PRU) for improving the accuracy of click-through rate (CTR) predictions in recommendation systems. PRU measures how far new data is from known and reliable data points. Its effectiveness was tested in various scenarios using two datasets. The results showed its effectiveness in making recommendation systems more accurate and reliable.
questions: 1. The paper points out the challenges in CTR prediction arising from the infrequent occurrence of positive samples, leading to inadequate training data for models with a large number of parameters. This situation results in variability in predictions and difficulties in managing dynamic user behaviors, new customers, and external events. Although the paper proposes Predictive Relevance Uncertainty (PRU) as a solution, I do not fully understand how PRU effectively addresses the challenges of class imbalance and overlap, particularly in highly dynamic recommendation scenarios.
2. PRU involves identifying training samples with significant predictive relevance, fitting a density estimator on these samples' regularized feature space, and then estimating the uncertainty of a test sample accordingly. It would be better if the authors could give a more comprehensive discussion of PRU's computational complexity and feasibility in real-world, large-scale recommendation systems.
3. The experiments involve a limited range of backbones (e.g., DeepFM and Wide&Deep). It would be better to test the proposed method with more state-of-the-art models as backbones.
4. The authors mention a number of applications in the introduction section. However, these claims are not tested in the experiments.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6rs3ES8wgX | Predictive Relevance Uncertainty for Recommendation Systems | [
"Charul Paliwal",
"Anirban Majumder",
"Sivaramakrishnan R Kaveri"
] | The Click-through Rate (CTR) module is the foundational block of recommendation systems and is used for search, content selection, advertising, video streaming, etc. CTR is modelled as a classification problem and extensive research has been done to improve CTR models. However, uncertainty methods for these models are still an unexplored area. In this work we analyse popular uncertainty methods in the context of recommendation systems. We found that popular uncertainty models fail to capture the predictive uncertainty of CTR models that is unique to recommendation models and is not prevalent in traditional classification models. We empirically show why a different uncertainty measure is required for recommendation system CTR prediction models.
We propose PRU (Predictive Relevance Uncertainty), a single-forward-pass approach that quantifies the uncertainty of a sample as its distance from the predictive-relevance samples of the training data. We show the efficacy of the proposed predictive relevance uncertainty (PRU) on selective prediction. Further, we demonstrate the utility of the proposed framework on the downstream tasks of OOD detection and active learning while maintaining the latency of a single-pass deterministic model. | [
"Recommendation Systems",
"Uncertainty Quantification",
"CTR Prediction"
] | https://openreview.net/pdf?id=6rs3ES8wgX | SZPbLyjOY4 | official_review | 1,700,213,945,901 | 6rs3ES8wgX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2284/Reviewer_qhRW"
review: The study focuses on uncertainty estimation techniques for recommendation. The authors find that SOTA uncertainty estimation techniques do not work for recommendation problems, and define uncertainty as a distance from the predictive relevance samples of the training data. Based on the above analysis, the study proposes predictive relevance uncertainty (PRU) to quantify uncertainty for CTR prediction models, then evaluates PRU on three tasks. The experimental results on three datasets prove the effectiveness of PRU.
Advantages:
1. Uncertainty estimation for recommendation is an important research question.
2. The authors evaluate the SOTA methods for recommendation tasks, and analyze the special issues in recommendation data sets: overlap and class imbalance.
3. The authors propose the PRU approach to correctly define uncertainty for the recommendation task, and evaluate its effectiveness on 3 tasks.
Improvement:
I suggest the authors consider the differences between the recommendation task and other machine learning tasks such as image recognition. Where does the uncertainty come from? In my opinion, people are complex, and their requirements are easily affected by context, so a deeper analysis is needed.
questions: I suggest the authors consider the differences between the recommendation task and other machine learning tasks such as image recognition. Where does the uncertainty come from? In my opinion, people are complex, and their requirements are easily affected by context, so a deeper analysis is needed.
ethics_review_flag: No
ethics_review_description: NO
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6rs3ES8wgX | Predictive Relevance Uncertainty for Recommendation Systems | [
"Charul Paliwal",
"Anirban Majumder",
"Sivaramakrishnan R Kaveri"
] | The Click-through Rate (CTR) module is the foundational block of recommendation systems and is used for search, content selection, advertising, video streaming, etc. CTR is modelled as a classification problem and extensive research has been done to improve CTR models. However, uncertainty methods for these models are still an unexplored area. In this work we analyse popular uncertainty methods in the context of recommendation systems. We found that popular uncertainty models fail to capture the predictive uncertainty of CTR models that is unique to recommendation models and is not prevalent in traditional classification models. We empirically show why a different uncertainty measure is required for recommendation system CTR prediction models.
We propose PRU (Predictive Relevance Uncertainty), a single-forward-pass approach that quantifies the uncertainty of a sample as its distance from the predictive-relevance samples of the training data. We show the efficacy of the proposed predictive relevance uncertainty (PRU) on selective prediction. Further, we demonstrate the utility of the proposed framework on the downstream tasks of OOD detection and active learning while maintaining the latency of a single-pass deterministic model. | [
"Recommendation Systems",
"Uncertainty Quantification",
"CTR Prediction"
] | https://openreview.net/pdf?id=6rs3ES8wgX | PYf93f9f2h | official_review | 1,701,189,222,085 | 6rs3ES8wgX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2284/Reviewer_gNB8"
review: The issue of relevance estimation is an interesting and useful one, with many possible applications, so the paper tackles an interesting problem.
However, I have identified several weaknesses:
First, I found the paper difficult to follow.
Figure 2 shows the "feature distribution", but it is not explained which features those are or how the plot is generated; hence it is not clear what it represents.
Figure 3, with the discussion on the simulation of behaviour in recommender systems, is not very clear to me; the example seems contrived and its parameters seem chosen arbitrarily. I would ask the authors to clarify this since it is used to describe the specific scenario of recommendation.
In the experimental evaluation, the preprocessing and data split are not described; a reference is provided instead. I would recommend that the experimental protocol be clearly described, especially considering that the referenced paper does not in fact indicate how the data is split and preprocessed; one has to follow a further reference to "AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks.", yet it is stated that the split data comes from yet another paper, "Adaptive factorization network: Learning adaptive-order feature interactions."
- The datasets are rather small.
- It is stated that the embedding dimension is set to 16, that is an anomalously low number in my experience. The hyperparameters listed do not appear to have been optimized, which means that the reported methods are not optimal. While the goal of this paper is not to show that any particular recommender is better than another, without proper hyperparameter optimization the risk is that the underlying recommender will exhibit poor quality and have very few highly reliable predictions, biasing the evaluation.
- There is no information on how the baseline uncertainty quantification methods have been optimized. This means we cannot judge the reliability of the experiment.
- It is not clear why the two parts of the analysis use two different datasets. I understand the lack of category information may have prevented the use of Avazu, but Taobao could have been added to the first experiment. Furthermore, item features are generally bad at representing user interactions, so I would suggest first checking whether they can be used successfully or not. A sample that is OOD with respect to the features may not be OOD at all when considering the collaborative information.
questions: - How was the data processed and split? Why was Taobao not used for the first experiment?
- Was the statistical significance test corrected for the multiple comparisons you are performing?
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
6rs3ES8wgX | Predictive Relevance Uncertainty for Recommendation Systems | [
"Charul Paliwal",
"Anirban Majumder",
"Sivaramakrishnan R Kaveri"
] | The Click-through Rate (CTR) module is the foundational block of recommendation systems and is used for search, content selection, advertising, video streaming, etc. CTR is modelled as a classification problem and extensive research has been done to improve CTR models. However, uncertainty methods for these models are still an unexplored area. In this work we analyse popular uncertainty methods in the context of recommendation systems. We found that popular uncertainty models fail to capture the predictive uncertainty of CTR models that is unique to recommendation models and is not prevalent in traditional classification models. We empirically show why a different uncertainty measure is required for recommendation system CTR prediction models.
We propose PRU (Predictive Relevance Uncertainty), a single-forward-pass approach that quantifies the uncertainty of a sample as its distance from the predictive-relevance samples of the training data. We show the efficacy of the proposed predictive relevance uncertainty (PRU) on selective prediction. Further, we demonstrate the utility of the proposed framework on the downstream tasks of OOD detection and active learning while maintaining the latency of a single-pass deterministic model. | [
"Recommendation Systems",
"Uncertainty Quantification",
"CTR Prediction"
] | https://openreview.net/pdf?id=6rs3ES8wgX | 3szHpEkCey | official_review | 1,700,225,833,700 | 6rs3ES8wgX | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2284/Reviewer_egca"
] | review: This paper presents an innovative approach to addressing the uncertainty in Click-through Rate (CTR) models, which are pivotal in recommendation systems used across various digital platforms. The authors introduce a novel concept, Predictive Relevance Uncertainty (PRU), designed to better capture the unique predictive uncertainties inherent in recommendation systems, which are not adequately addressed by traditional uncertainty models in classification problems.
The paper demonstrates the practical application of PRU in selective prediction and its utility in tasks such as Out-Of-Distribution (OOD) detection and active learning, while maintaining the efficiency of a single pass deterministic model. The approach appears promising, particularly in enhancing the robustness and reliability of recommendation systems.
However, there are several areas where the manuscript could be strengthened:
1. The introduction lacks adequate literature support for the role of uncertainty estimates in recommendation systems (Lines 90-105). It would be beneficial to provide references to existing work in this area to contextualize the study and establish its relevance.
2. In Figure 2, where the authors visualize data from Avazu and MovieLens, the methods used for this visualization are not clearly described. Elaborating on these methods would enhance the clarity and reproducibility of the results.
3. The choice to use simulated datasets in Section 3, instead of real recommendation data, raises questions about the applicability of the findings to real-world scenarios. It would be useful to provide justification for this choice or consider incorporating real data to validate the findings.
4. The experimental section relies on a limited range of backbone models (DeepFM and Wide&Deep), which are relatively old. Expanding the range of models tested could provide a more comprehensive validation of the PRU approach and its effectiveness across different model architectures.
Overall, while the paper addresses a significant gap in the field of recommendation systems, enhancing its empirical support and broadening the scope of its experimental validation would substantially improve its contribution to the field.
questions: 1. The introduction lacks adequate literature support for the role of uncertainty estimates in recommendation systems (Lines 90-105). It would be beneficial to provide references to existing work in this area to contextualize the study and establish its relevance.
2. In Figure 2, where the authors visualize data from Avazu and MovieLens, the methods used for this visualization are not clearly described. Elaborating on these methods would enhance the clarity and reproducibility of the results.
3. The choice to use simulated datasets in Section 3, instead of real recommendation data, raises questions about the applicability of the findings to real-world scenarios. It would be useful to provide justification for this choice or consider incorporating real data to validate the findings.
4. The experimental section relies on a limited range of backbone models (DeepFM and Wide&Deep), which are relatively old. Expanding the range of models tested could provide a more comprehensive validation of the PRU approach and its effectiveness across different model architectures.
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
6qncjuadJW | Poisoning Attack on Federated Knowledge Graph Embedding | [
"Enyuan Zhou",
"Song Guo",
"Zhixiu Ma",
"Zicong Hong",
"Tao GUO",
"Peiran Dong"
] | Federated Knowledge Graph Embedding (FKGE) is an emerging collaborative learning technique for deriving expressive representations (i.e., embeddings) from client-maintained distributed knowledge graphs (KGs). However, poisoning attacks in FKGE, which lead to biased decisions by downstream applications, remain unexplored. This paper is the first work to systematise the risks of FKGE poisoning attacks, from which we develop a novel framework for poisoning attacks that force the victim client to predict specific false facts. The challenge is that FKGE maintains KGs for training locally on clients, preventing attackers in centralized KGEs from injecting poisoned data directly into the victim's training data. Thus, an attacker needs to create poisoned data without the victim's local KG, and inject the poisoned data indirectly into the victim's embeddings via FKGE aggregation. Specifically, to create poisoned data, the attacker first infers the targeted relations in the victim's local KG via a new KG component inference attack. Then, to accurately mislead the victim's embeddings via aggregation, the attacker locally trains a shadow model using the poisoned data and uses an optimised dynamic poisoning scheme to adjust the model and generate progressive poisoned updates. Our experimental results demonstrate the attack's effectiveness, achieving a remarkable success rate on various KGE models (e.g. 100\% on TransE with WNRR), while keeping the original task's performance nearly unchanged. | [
"Knowledge Graph Embedding",
"Federated Learning",
"Poisoning Attack"
] | https://openreview.net/pdf?id=6qncjuadJW | osMosfVldf | official_review | 1,700,774,167,528 | 6qncjuadJW | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission620/Reviewer_6vj7"
] | review: The paper describes a poisoning attack on Federated Knowledge Graph Embedding as well as a defence against it. It considers a federation composed of participants with their own private KGs and a central server. The n participants locally compute their graph embeddings and send entity embeddings to the central server. The central server aggregates entity embeddings and redistributes them to participants. Shared entity embeddings help each participant obtain a more accurate predictive model while keeping the KGs private.
The paper demonstrates how a malicious participant or a malicious central server can add a fake relation to a participant's model. For example, a malicious participant adds a relation (tom, is-allergic-to, aspirin) into another participant's KG model, biasing all downstream applications of the victim model.
The attack is challenging as the malicious server or participant doesn't know the relation embeddings of the victim and cannot modify them directly. [19] already tackles the problem of discovering relation embeddings; the originality of the paper is to induce the victim model to predict a fake relationship between 2 entities without modifying any relation embedding.
The paper describes precisely the attack and a possible defence. It evaluates the attack and the defence on a simulated setup. The experiment is conducted on several well-known datasets with different graph embedding techniques.
Strong points:
* I consider that the problem is timely. Federated Learning with knowledge graphs is a very appealing context. Describing precisely how such an approach can be poisoned is very valuable for the community.
* The paper is very well-written, with strong motivations and convincing examples.
* Positioning vs the state of the art is clear: poisoning attacks exist on centralised Knowledge Graph Embeddings, not on Federated Knowledge Graph Embeddings. On Federated Knowledge Graph Embedding, [19] described how it is possible to infer some relations from entity embeddings, but not to poison a participant model.
* The experimental evaluations of the attack and the defence are convincing
Weak points:
* It seems that the heart of the proposal is how to force a target model to predict a fake relation without directly modifying relation embeddings. This is done by training a shadow model on the central server or on a participant. This part is mainly explained by giving the function used to train the shadow model (L419-455). I would like to know how, by just changing target entity embeddings, the target model is forced to predict a fake relation. Can you elaborate more on this?
* From the experiments, it seems that the defence is more complex than the attack, especially for the server-initiated attack. Is it possible for a participant to simply detect that the server is attacking, maybe using the same blockchain as for the client-initiated attack (Q2)?
I did not see any reference to a public implementation of the attack and defence prototypes (maybe I missed it?). Are the attack and defence experiments reproducible via a public repository (Q3)?
Overall, I found the paper very interesting to read with a significant scientific contribution.
questions: * I would like to know how, by just changing target entity embeddings, the target model is forced to predict a fake relation. Can you elaborate more on this?
* Is it possible for a participant just to detect that the server is attacking, maybe using the same blockchain as for the client-initiated attack?
* Are the attack and defence experiments reproducible thanks to a public repository?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6qncjuadJW | Poisoning Attack on Federated Knowledge Graph Embedding | [
"Enyuan Zhou",
"Song Guo",
"Zhixiu Ma",
"Zicong Hong",
"Tao GUO",
"Peiran Dong"
] | Federated Knowledge Graph Embedding (FKGE) is an emerging collaborative learning technique for deriving expressive representations (i.e., embeddings) from client-maintained distributed knowledge graphs (KGs). However, poisoning attacks in FKGE, which lead to biased decisions by downstream applications, remain unexplored. This paper is the first work to systematise the risks of FKGE poisoning attacks, from which we develop a novel framework for poisoning attacks that force the victim client to predict specific false facts. The challenge is that FKGE maintains KGs for training locally on clients, preventing attackers in centralized KGEs from injecting poisoned data directly into the victim's training data. Thus, an attacker needs to create poisoned data without the victim's local KG, and inject the poisoned data indirectly into the victim's embeddings via FKGE aggregation. Specifically, to create poisoned data, the attacker first infers the targeted relations in the victim's local KG via a new KG component inference attack. Then, to accurately mislead the victim's embeddings via aggregation, the attacker locally trains a shadow model using the poisoned data and uses an optimised dynamic poisoning scheme to adjust the model and generate progressive poisoned updates. Our experimental results demonstrate the attack's effectiveness, achieving a remarkable success rate on various KGE models (e.g. 100\% on TransE with WNRR), while keeping the original task's performance nearly unchanged. | [
"Knowledge Graph Embedding",
"Federated Learning",
"Poisoning Attack"
] | https://openreview.net/pdf?id=6qncjuadJW | VRiGSSRFFv | decision | 1,705,909,230,011 | 6qncjuadJW | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This article introduces a poisoning attack framework for Federated Knowledge Graph Embedding, covering both server- and client-initiated attacks.
Using this framework, various defense techniques were investigated.
All reviewers agree that this is a valuable contribution for the Web Conference, and deserves to be accepted.
We recommend the authors to incorporate the comments and clarifications that arose during the discussions. |
6qncjuadJW | Poisoning Attack on Federated Knowledge Graph Embedding | [
"Enyuan Zhou",
"Song Guo",
"Zhixiu Ma",
"Zicong Hong",
"Tao GUO",
"Peiran Dong"
] | Federated Knowledge Graph Embedding (FKGE) is an emerging collaborative learning technique for deriving expressive representations (i.e., embeddings) from client-maintained distributed knowledge graphs (KGs). However, poisoning attacks in FKGE, which lead to biased decisions by downstream applications, remain unexplored. This paper is the first work to systematise the risks of FKGE poisoning attacks, from which we develop a novel framework for poisoning attacks that force the victim client to predict specific false facts. The challenge is that FKGE maintains KGs for training locally on clients, preventing attackers in centralized KGEs from injecting poisoned data directly into the victim's training data. Thus, an attacker needs to create poisoned data without the victim's local KG, and inject the poisoned data indirectly into the victim's embeddings via FKGE aggregation. Specifically, to create poisoned data, the attacker first infers the targeted relations in the victim's local KG via a new KG component inference attack. Then, to accurately mislead the victim's embeddings via aggregation, the attacker locally trains a shadow model using the poisoned data and uses an optimised dynamic poisoning scheme to adjust the model and generate progressive poisoned updates. Our experimental results demonstrate the attack's effectiveness, achieving a remarkable success rate on various KGE models (e.g. 100\% on TransE with WNRR), while keeping the original task's performance nearly unchanged. | [
"Knowledge Graph Embedding",
"Federated Learning",
"Poisoning Attack"
] | https://openreview.net/pdf?id=6qncjuadJW | NZpeUwIiZW | official_review | 1,700,736,855,096 | 6qncjuadJW | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission620/Reviewer_H4Wx"
] | review: Quality: the quality of the descriptions and experiments is mostly good, with some comments on the clarity below. Research questions are clearly formulated and give a good picture of the approach.
Clarity: readability and clarity are mostly good, but with a number of typos that could easily be avoided. I recommend fixing these to increase the overall quality of the text
Originality: the approach is original and interesting - of course the individual approaches have been used before in other contexts, but the result is in my opinion quite original
Significance: the evaluation gives a clear picture of the significance, in all of the four research questions posed
Pros:
* Interesting, current topic
* Clear research questions and answers
Cons:
* Some care for the clarity (typos, ...) would improve overall appreciation
questions: I have in most parts a clear picture of the paper, and only minor questions:
1. You talk about "holistic" study - what in your opinion constitutes such a study (and what would not)?
2. In terms of the designed methodology, I understand very clearly the steps proposed. Could you give some additional comments on why you exactly chose this methodology compared to nearby alternatives?
ethics_review_flag: No
ethics_review_description: /
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6qncjuadJW | Poisoning Attack on Federated Knowledge Graph Embedding | [
"Enyuan Zhou",
"Song Guo",
"Zhixiu Ma",
"Zicong Hong",
"Tao GUO",
"Peiran Dong"
] | Federated Knowledge Graph Embedding (FKGE) is an emerging collaborative learning technique for deriving expressive representations (i.e., embeddings) from client-maintained distributed knowledge graphs (KGs). However, poisoning attacks in FKGE, which lead to biased decisions by downstream applications, remain unexplored. This paper is the first work to systematise the risks of FKGE poisoning attacks, from which we develop a novel framework for poisoning attacks that force the victim client to predict specific false facts. The challenge is that FKGE maintains KGs for training locally on clients, preventing attackers in centralized KGEs from injecting poisoned data directly into the victim's training data. Thus, an attacker needs to create poisoned data without the victim's local KG, and inject the poisoned data indirectly into the victim's embeddings via FKGE aggregation. Specifically, to create poisoned data, the attacker first infers the targeted relations in the victim's local KG via a new KG component inference attack. Then, to accurately mislead the victim's embeddings via aggregation, the attacker locally trains a shadow model using the poisoned data and uses an optimised dynamic poisoning scheme to adjust the model and generate progressive poisoned updates. Our experimental results demonstrate the attack's effectiveness, achieving a remarkable success rate on various KGE models (e.g. 100\% on TransE with WNRR), while keeping the original task's performance nearly unchanged. | [
"Knowledge Graph Embedding",
"Federated Learning",
"Poisoning Attack"
] | https://openreview.net/pdf?id=6qncjuadJW | MS6OYetMUQ | official_review | 1,700,820,515,141 | 6qncjuadJW | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission620/Reviewer_u75f"
] | review: **Summary**:
This paper proposes a poisoning attack framework for Federated Knowledge Graph Embedding. The framework supports server-initiated and client-initiated attacks. The authors also study the corresponding defense mechanism. The experimental results verify the superiority of the proposed attacks.
**Strengths**:
S1. The presentation of the paper is clear.
S2. The proposed attack framework is effective at injecting poisoned triples into the victim model.
S3. The analysis of the framework is comprehensive, and two attack mechanisms and a defense method are well discussed.
**Weaknesses**:
W1. Based on Figure 5 and Figure 6, the attack performance does not steadily increase as the budget grows, reflecting the instability of the framework.
W2. Based on Figure 6, the attack seems to be effective only when injecting a certain amount of poisoned triplets on some datasets, which could lead to unnoticeability issues (which are not well discussed in the paper).
W3. The attack setting might be impractical, as the adversary requires access to the relation embeddings.
questions: 1. What are the computational costs of the framework?
2. How do we ensure the auxiliary dataset originates from the same domain?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
6qncjuadJW | Poisoning Attack on Federated Knowledge Graph Embedding | [
"Enyuan Zhou",
"Song Guo",
"Zhixiu Ma",
"Zicong Hong",
"Tao GUO",
"Peiran Dong"
] | Federated Knowledge Graph Embedding (FKGE) is an emerging collaborative learning technique for deriving expressive representations (i.e., embeddings) from client-maintained distributed knowledge graphs (KGs). However, poisoning attacks in FKGE, which lead to biased decisions by downstream applications, remain unexplored. This paper is the first work to systematise the risks of FKGE poisoning attacks, from which we develop a novel framework for poisoning attacks that force the victim client to predict specific false facts. The challenge is that FKGE maintains KGs for training locally on clients, preventing attackers in centralized KGEs from injecting poisoned data directly into the victim's training data. Thus, an attacker needs to create poisoned data without the victim's local KG, and inject the poisoned data indirectly into the victim's embeddings via FKGE aggregation. Specifically, to create poisoned data, the attacker first infers the targeted relations in the victim's local KG via a new KG component inference attack. Then, to accurately mislead the victim's embeddings via aggregation, the attacker locally trains a shadow model using the poisoned data and uses an optimised dynamic poisoning scheme to adjust the model and generate progressive poisoned updates. Our experimental results demonstrate the attack's effectiveness, achieving a remarkable success rate on various KGE models (e.g. 100\% on TransE with WNRR), while keeping the original task's performance nearly unchanged. | [
"Knowledge Graph Embedding",
"Federated Learning",
"Poisoning Attack"
] | https://openreview.net/pdf?id=6qncjuadJW | 2khgMittqN | official_review | 1,700,651,583,888 | 6qncjuadJW | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission620/Reviewer_GEha"
] | review: This paper introduces a poisoning attack for knowledge graph embeddings that are shared in a federated environment. Essentially, it shows that in an environment where embeddings are shared between network nodes via a server both corrupted clients and servers can improve the performance on link prediction on corrupted relations on other clients. Evaluation is done using standard link prediction benchmarks with modifications to test the specific attack.
_Strengths_
1. An interesting idea for the attack especially the idea of using auxiliary data.
2. Clear description of the problem setting and the related work.
3. Interesting battery of experiments
_Weaknesses_
1. The case for the importance of this kind of attack is a bit unclear. Is federated training of embeddings, where corrupting link prediction performance would matter, that common?
2. In places, it's not clear whether the malicious client and server are working together or if this is tested or not.
__After Rebuttal__
The comments addressed my questions in particular the additional experiments were helpful for understanding.
questions: - In Table 2 can you clarify how many clients are used?
- There's a suggestion that the increase in the number of clients decreases the effectiveness of the attack. Do you have a deeper rationale why that is? Can that mitigation be done without a distributed environment, or does one have to have a distributed environment?
- Can you clarify if you test the case where a malicious client and server are colluding?
- Can you provide a deeper justification of why this attack might occur, and in particular in what use cases?
ethics_review_flag: No
ethics_review_description: no
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6nnkyxQayj | On the Feasibility of Simple Transformer for Dynamic Graph Modeling | [
"Yuxia Wu",
"Yuan Fang",
"Lizi Liao"
] | Dynamic graph modeling is crucial for understanding complex structures in web graphs, spanning applications in social networks, recommender systems, and more. Most existing methods primarily emphasize structural dependencies and their temporal changes. However, these approaches often overlook detailed temporal aspects or struggle with long-term dependencies. Furthermore, many solutions overly complicate the process by emphasizing intricate module designs to capture dynamic evolutions.
In this work, we harness the strength of the Transformer's self-attention mechanism, known for adeptly handling long-range dependencies in sequence modeling. Our approach offers a simple Transformer model tailored for dynamic graph modeling without complex modifications. We re-conceptualize dynamic graphs as a sequence modeling challenge and introduce an innovative temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution. As a result, our method becomes versatile, catering to an array of applications. Our model's effectiveness is underscored through rigorous experiments on four real-world datasets from various sectors, solidifying its potential in dynamic graph modeling. The datasets and codes are available. | [
"Dynamic graphs",
"Transformer",
"graph representation learning"
] | https://openreview.net/pdf?id=6nnkyxQayj | qJucqAfOh8 | official_review | 1,700,839,941,378 | 6nnkyxQayj | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1847/Reviewer_WECH"
] | review: The paper focuses on utilizing the Transformer model for dynamic graph modeling. The authors challenge the existing complex methodologies in dynamic graph modeling, which often neglect detailed temporal aspects and struggle with long-term dependencies. Their proposed method, SimpleDyG, leverages the self-attention mechanism of Transformers to handle these long-range dependencies without intricate modifications. This approach reconceptualizes dynamic graphs as sequence modeling challenges and introduces a temporal alignment technique to capture temporal evolution patterns. The effectiveness of this model is demonstrated through experiments on various real-world datasets.
Pros:
The paper proposes a novel method of applying the Transformer architecture, primarily used in NLP and CV, to dynamic graph modeling.
SimpleDyG's utilization of the inherent self-attention mechanism in Transformers without complex modifications stands out as a strength.
Cons:
1. The experimental evaluation is not sufficient. a) Better to adopt more datasets, like datasets in [1-2]. b) Missing important dynamic GNN baselines.
2. The limited novelty is a major concern. The authors claim that 'all these previous Transformer-based approaches only focus on static graphs, leaving unanswered questions about the feasibility for dynamic graphs', but there already exist several works on dynamic graph transformers [2-6]. This is a misclaim, and the authors are expected to explain the difference between this paper and these works.
[1] Huang, Shenyang, et al. "Temporal graph benchmark for machine learning on temporal graphs." arXiv preprint arXiv:2307.01026 (2023).
[2] Yu, Le, et al. "Towards Better Dynamic Graph Learning: New Architecture and Unified Library." arXiv preprint arXiv:2303.13047 (2023).
[3] Wang, Lu, et al. "Tcl: Transformer-based dynamic graph modelling via contrastive learning." arXiv preprint arXiv:2105.07944 (2021).
[4] Liu, Yixin, et al. "Anomaly detection in dynamic graphs via transformer." IEEE Transactions on Knowledge and Data Engineering (2021).
[5] Cong, Weilin, et al. "Dynamic graph representation learning via graph transformer networks." (2021).
[6] Wang, Zehong, et al. "Temporal graph transformer for dynamic network." International Conference on Artificial Neural Networks. Cham: Springer Nature Switzerland, 2022.
questions: see weaknesses
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 2
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
6nnkyxQayj | On the Feasibility of Simple Transformer for Dynamic Graph Modeling | [
"Yuxia Wu",
"Yuan Fang",
"Lizi Liao"
] | Dynamic graph modeling is crucial for understanding complex structures in web graphs, spanning applications in social networks, recommender systems, and more. Most existing methods primarily emphasize structural dependencies and their temporal changes. However, these approaches often overlook detailed temporal aspects or struggle with long-term dependencies. Furthermore, many solutions overly complicate the process by emphasizing intricate module designs to capture dynamic evolutions.
In this work, we harness the strength of the Transformer's self-attention mechanism, known for adeptly handling long-range dependencies in sequence modeling. Our approach offers a simple Transformer model tailored for dynamic graph modeling without complex modifications. We re-conceptualize dynamic graphs as a sequence modeling challenge and introduce an innovative temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution. As a result, our method becomes versatile, catering to an array of applications. Our model's effectiveness is underscored through rigorous experiments on four real-world datasets from various sectors, solidifying its potential in dynamic graph modeling. The datasets and codes are available. | [
"Dynamic graphs",
"Transformer",
"graph representation learning"
] | https://openreview.net/pdf?id=6nnkyxQayj | lPYbEc1Bnm | official_review | 1,700,832,855,208 | 6nnkyxQayj | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1847/Reviewer_CFgg"
] | review: This paper re-conceptualizes dynamic graphs as a sequence modeling challenge and introduces an innovative temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution.
Strengths:
Propose a simple yet surprisingly effective Transformer-based approach for dynamic graphs, called SimpleDyG, without complex modifications.
Introduce a novel strategy to map a dynamic graph into a set of sequences, by considering the history of each node as a temporal ego-graph.
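For intuition, the "temporal ego-graph as a sequence" idea highlighted above could be sketched roughly as follows (the token names here are my own hypothetical choices, not necessarily the paper's exact vocabulary):

```python
# Hypothetical sketch: flatten one node's timestamped interaction history
# (its temporal ego-graph) into a token sequence, inserting special
# time-alignment tokens so a plain Transformer can consume it.
def ego_graph_to_sequence(center, events):
    """events: time-ordered (time_step, neighbor) pairs for `center`."""
    tokens = ["<hist>", center]
    current = None
    for step, neighbor in sorted(events):
        if step != current:              # entering a new time step
            tokens.append(f"<time_{step}>")
            current = step
        tokens.append(neighbor)
    tokens.append("<endofhist>")
    return tokens

seq = ego_graph_to_sequence("u1", [(0, "v3"), (0, "v7"), (1, "v2")])
# -> ['<hist>', 'u1', '<time_0>', 'v3', 'v7', '<time_1>', 'v2', '<endofhist>']
```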
Weaknesses:
--Lack of a time complexity analysis and corresponding time efficiency experiments.
--Lack of baselines proposed in recent years.
-- Explanation is needed for the superscript in Equation (8).
--It is better to add some ablation studies.
--Some typos, such as ‘we and divide’
questions: Some datasets are split into different time steps following [36]. Is there a real dynamic network dataset?
ethics_review_flag: No
ethics_review_description: no
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6nnkyxQayj | On the Feasibility of Simple Transformer for Dynamic Graph Modeling | [
"Yuxia Wu",
"Yuan Fang",
"Lizi Liao"
] | Dynamic graph modeling is crucial for understanding complex structures in web graphs, spanning applications in social networks, recommender systems, and more. Most existing methods primarily emphasize structural dependencies and their temporal changes. However, these approaches often overlook detailed temporal aspects or struggle with long-term dependencies. Furthermore, many solutions overly complicate the process by emphasizing intricate module designs to capture dynamic evolutions.
In this work, we harness the strength of the Transformer's self-attention mechanism, known for adeptly handling long-range dependencies in sequence modeling. Our approach offers a simple Transformer model tailored for dynamic graph modeling without complex modifications. We re-conceptualize dynamic graphs as a sequence modeling challenge and introduce an innovative temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution. As a result, our method becomes versatile, catering to an array of applications. Our model's effectiveness is underscored through rigorous experiments on four real-world datasets from various sectors, solidifying its potential in dynamic graph modeling. The datasets and codes are available. | [
"Dynamic graphs",
"Transformer",
"graph representation learning"
] | https://openreview.net/pdf?id=6nnkyxQayj | gbw2M5CK9U | official_review | 1,701,406,548,648 | 6nnkyxQayj | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1847/Reviewer_bUWR"
] | review: **Summary**
The paper introduces a straightforward yet effective transformer-based model tailored for dynamic graph modeling. This method excels in capturing the inherent temporal evolution patterns across an entire timeline, effectively addressing the challenges posed by dynamic graphs. The key innovation lies in transforming a dynamic graph into a sequence of data points, which are then processed using a transformer model. The authors support their methodology with extensive experiments across various datasets, demonstrating the proposed model's superior performance in capturing the evolving patterns in dynamic graphs.
**Pros**
1. The paper is notably well-written, offering clear and detailed explanations complemented by illustrative figures that effectively outline the algorithm's pipeline. These visuals significantly enhance comprehension, making the complex methodologies and concepts more accessible to readers.
2. The approach presented in the paper is both simple and highly effective. The authors introduce an intelligent method to incorporate graph information into a sequence of tokens, making it particularly suitable for dynamic graphs. This technique, specifically using transformers, excels in capturing granular temporal information.
**Cons**
In the experimental section, providing additional statistics about the temporal patterns in each dataset would greatly enhance the paper. For example, indicating whether the changes in the datasets are gradual or dramatic would be particularly useful. Insight into the specific temporal characteristics of each dataset can offer a clearer view of how the proposed method performs under varying temporal dynamics.
questions: In Section 5.2, it's noted that the Hepth dataset is considered a more inductive scenario. Could the authors elaborate on the rationale behind this characterization? Providing a deeper understanding of why the Hepth dataset fits this scenario would aid in comprehending how different types of algorithms perform across various datasets.
ethics_review_flag: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6nnkyxQayj | On the Feasibility of Simple Transformer for Dynamic Graph Modeling | [
"Yuxia Wu",
"Yuan Fang",
"Lizi Liao"
] | Dynamic graph modeling is crucial for understanding complex structures in web graphs, spanning applications in social networks, recommender systems, and more. Most existing methods primarily emphasize structural dependencies and their temporal changes. However, these approaches often overlook detailed temporal aspects or struggle with long-term dependencies. Furthermore, many solutions overly complicate the process by emphasizing intricate module designs to capture dynamic evolutions.
In this work, we harness the strength of the Transformer's self-attention mechanism, known for adeptly handling long-range dependencies in sequence modeling. Our approach offers a simple Transformer model tailored for dynamic graph modeling without complex modifications. We re-conceptualize dynamic graphs as a sequence modeling challenge and introduce an innovative temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution. As a result, our method becomes versatile, catering to an array of applications. Our model's effectiveness is underscored through rigorous experiments on four real-world datasets from various sectors, solidifying its potential in dynamic graph modeling. The datasets and codes are available. | [
"Dynamic graphs",
"Transformer",
"graph representation learning"
] | https://openreview.net/pdf?id=6nnkyxQayj | YGS8UlyZnf | official_review | 1,700,737,298,837 | 6nnkyxQayj | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1847/Reviewer_6j6U"
] | review: It introduces a novel strategy for tokenizing dynamic graphs for Transformer-based architectures. Based on the strategy, the proposed method outperforms the existing representation learning methods for dynamic graphs while it does not require complicated architecture or a heavy amount of computation. In addition, The paper is well-written and easy to read. However, the proposed method only supports incremental settings and cannot handle fully dynamic graphs with the deletion of existing links.
questions: - There is a typo: Figure 3.1 -> Figure 2?
- It would be better if the proposed model supported a link deletion operation. It seems that the current version only supports link insertions.
- The authors need to provide the reason why DySAT is not suitable for inductive scenarios on the Hepth dataset.
- The considered datasets are too small and all the reported metric scores are too low (<0.25). Could you provide the reasons?
- As shown in Table 4, using the current design related to temporal tokens is not so effective, compared to the other designs provided in the table.
ethics_review_flag: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6nnkyxQayj | On the Feasibility of Simple Transformer for Dynamic Graph Modeling | [
"Yuxia Wu",
"Yuan Fang",
"Lizi Liao"
] | Dynamic graph modeling is crucial for understanding complex structures in web graphs, spanning applications in social networks, recommender systems, and more. Most existing methods primarily emphasize structural dependencies and their temporal changes. However, these approaches often overlook detailed temporal aspects or struggle with long-term dependencies. Furthermore, many solutions overly complicate the process by emphasizing intricate module designs to capture dynamic evolutions.
In this work, we harness the strength of the Transformer's self-attention mechanism, known for adeptly handling long-range dependencies in sequence modeling. Our approach offers a simple Transformer model tailored for dynamic graph modeling without complex modifications. We re-conceptualize dynamic graphs as a sequence modeling challenge and introduce an innovative temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution. As a result, our method becomes versatile, catering to an array of applications. Our model's effectiveness is underscored through rigorous experiments on four real-world datasets from various sectors, solidifying its potential in dynamic graph modeling. The datasets and codes are available. | [
"Dynamic graphs",
"Transformer",
"graph representation learning"
] | https://openreview.net/pdf?id=6nnkyxQayj | Mfo6mvrKmn | decision | 1,705,909,216,622 | 6nnkyxQayj | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: There was some discussion on this paper and I thought quite a bit about it.
The paper proposes a simple architecture and demonstrates that transformers work for temporal graph modeling. I think the simplicity is a plus, especially if previous work was more complicated (which is what I gather from the reviews). The experiments are on fairly small datasets, so I'm not sure how scalable the method is. But the experiments seem quite comprehensive.
I'll rate as a weak accept paper, though I could also see it as borderline. |
6VRZQdN9zz | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems.
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | [
"Federated Recommender Systems",
"Poisoning Attacks",
"Fake Users"
] | https://openreview.net/pdf?id=6VRZQdN9zz | wRZ6RjltGi | official_review | 1,700,723,749,833 | 6VRZQdN9zz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1102/Reviewer_SNHk"
] | review: To develop an attack that requires no extra information apart from item embeddings obtained from the server, this paper introduces a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system.
Pros:
1. The authors introduce a novel poisoning attack on FedRecs that uses fake users, requiring no prior knowledge of genuine user information or access to local training data.
2. The authors systematically evaluate the performance of the proposed attack under various settings and find that PoisonFRS significantly outperforms baseline attacks.
3. Extensive experiments demonstrate that the proposed PoisonFRS can promote the targeted item to a large fraction of genuine users with only a small proportion of fake users, and the attack cannot be detected by the server.
Cons:
1. Although this article considers many scenarios to verify the effectiveness of PoisonFRS, it does not consider the issue of new users. How do the authors distinguish between new users and fake users? Please provide more detailed explanations. Can the PoisonFRS algorithm still operate normally when encountering new users?
2. In Section 4, I suggest that the authors introduce a specific or intuitive example to make the motivation behind the proposed algorithm and its specific steps easier to understand.
questions: 1. As is well known, there is very little information about new users. How do the authors define fake users so that new users are not mistakenly flagged as fake?
2. Why is the number of filler items set to 59 in the proposed PoisonFRS and all baseline attacks?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
6VRZQdN9zz | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems.
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | [
"Federated Recommender Systems",
"Poisoning Attacks",
"Fake Users"
] | https://openreview.net/pdf?id=6VRZQdN9zz | oTl3n8aSas | official_review | 1,699,929,691,514 | 6VRZQdN9zz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1102/Reviewer_U4F3"
] | review: The authors develop an attack in federated recommendation systems that requires no extra information apart from item embeddings obtained from the server. Their fake user based poisoning attack named PoisonFRS promotes the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server.
They argue that requiring information from genuine users or access to local training data can pose significant challenges, especially for recently registered fake users.
In general the paper is well written and procedures, baselines and aggregation strategies well motivated. I like their ablation study on amplification factor, number of popular items, attack initiation times and filler items.
I think the work has some evaluation issues with respect to the chosen baselines. In Table 3, most of the baselines perform with an HR@5 of 0. This gives the reader no information about the baselines except that none of them seems to work. The authors should consider finding baselines that provide an actual comparison and show where the performance of the proposed model lies. Alternatively, different evaluation metrics in addition to HR@5 could be shown to check at which point the baselines start or stop working. The reader has no way to tell whether there is an issue with the implementation, the assumptions made, or other things that prevent the baselines from working at all. In addition, there's little information about the actual fake users that should be added. From Figure 4, it seems like there are only 10 fake users in the whole approach.
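For reference, the HR@5 metric discussed above can be computed with a short sketch (the data here is synthetic and the masking of already-rated items is assumed to happen beforehand; this is not the paper's evaluation code):

```python
import numpy as np

def hit_ratio_at_k(scores, target_item, k=5):
    """Fraction of users whose top-k recommendation list contains the target.

    scores: (n_users, n_items) predicted ratings; items a user has already
    interacted with are assumed to be masked out beforehand.
    """
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean(np.any(topk == target_item, axis=1)))

rng = np.random.default_rng(1)
scores = rng.normal(size=(1000, 50))
scores[:, 7] += 10.0                       # a heavily promoted item
print(hit_ratio_at_k(scores, 7))           # promoted item lands in ~every top-5
print(hit_ratio_at_k(scores, 0))           # an ordinary item mostly does not
```

A baseline scoring exactly 0 under this metric means the target item never enters any user's top-5 list, which is why a table full of zeros conveys so little about relative baseline strength.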
questions: Will their fake user data and code be made available?
The authors mention:
''We can observe from Table 3 and Table 6 that centralized recommender system-based attacks almost show no attacking effect, which means attacks tailored to FedRecs are quite needed.''
and
''From Table 3 and Table 6, we conclude that for most baseline attacks, their effect will be weakened to some extent when facing defensive aggregation rules''
However, I do not think the reader can easily understand these conclusions. Maybe the authors can expand on how they arrive at them.
Can figure captions be edited to contain more information, such that they can be read in isolation from the text? I think this would be beneficial for the reader.
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
6VRZQdN9zz | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems.
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | [
"Federated Recommender Systems",
"Poisoning Attacks",
"Fake Users"
] | https://openreview.net/pdf?id=6VRZQdN9zz | ap0O3M9TnP | official_review | 1,700,761,264,675 | 6VRZQdN9zz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1102/Reviewer_Ggd6"
] | review: Summary: This paper proposes a novel attack for federated recommender systems that promotes items of interest via fake users. Specifically, the proposed method creates user-side model updates that drag the recommender model toward promoting the items of interest. Experiments on Steam, Yelp, and ML-10/20M demonstrate that the proposed method is superior to various baselines and can break current defenses for recommender systems.
Strength:
- [S1] The setting of the attack model (i.e., creating only user-side updates) is realistic and interesting.
- [S2] The paper is paired with a wide selection of baselines on representative datasets to back the claim well.
Weakness:
- [W1] The claim that the proposed method can remain undetected, based on a t-SNE visualization, seems a bit weak.
questions: See W1 - is there any good justification for using t-SNE as the detection baseline?
ethics_review_flag: No
ethics_review_description: no concerns
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
6VRZQdN9zz | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems.
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | [
"Federated Recommender Systems",
"Poisoning Attacks",
"Fake Users"
] | https://openreview.net/pdf?id=6VRZQdN9zz | MI9LEDqcEl | official_review | 1,700,554,354,504 | 6VRZQdN9zz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1102/Reviewer_kxR6"
] | review: This paper focuses on the security of federated recommender systems and proposes a new poisoning attack method named PoisonFRS. Instead of requiring additional information such as genuine users' training data or the distribution of items, as in recent studies, the attacker only inserts fake users into the recommender system and fetches the item embeddings from the server. To remove the need for fake users to have local data, PoisonFRS generates a target item embedding and computes model updates directly aimed at aligning the embedding of the target item with that target embedding. To evade detection, PoisonFRS also selects filler items and crafts their embedding updates so as to maintain the original embeddings of the other items.
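As a rough illustration of the mechanism just summarized, a fake user's update for the target item might be crafted as follows. Everything here — the popularity proxy, the amplification factor `lam`, the array shapes — is a hypothetical sketch based on this summary, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item-embedding table fetched from the server (100 items, 8 dims).
item_emb = rng.normal(size=(100, 8))

# Proxy for popularity without any user data: treat the items whose embeddings
# deviate most from the average item feature as the "popular" ones.
avg = item_emb.mean(axis=0)
dist = np.linalg.norm(item_emb - avg, axis=1)
popular = np.argsort(dist)[-10:]

# Craft a target embedding by amplifying the popular items' mean direction.
lam = 5.0                                  # amplification factor (hyperparameter)
target = lam * item_emb[popular].mean(axis=0)

# The fake user's model update for the target item simply pulls the item's
# current embedding toward the crafted target.
target_item = 42
update = target - item_emb[target_item]
```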
pros:
1. This paper conducts adequate experiments on four real-world datasets to evaluate the effectiveness of the proposed PoisonFRS. In addition, the authors compare the proposed PoisonFRS with five traditional poisoning attacks and three state-of-the-art poisoning attacks on FedRecs using different robust aggregation rules.
2. The proposed PoisonFRS requires only the item embeddings, instead of requiring any additional information related to genuine users, which is more realistic compared with recent studies.
cons:
1. The attack results demonstrate that the proposed PoisonFRS is an effective poisoning attack method, yet adaptive defenses against this attack are not discussed in the paper.
2. I still wonder why this method works. The core of the method lies in the assumption that the average of all item features represents an unpopular item and that popular items must exhibit significant dissimilarity from the average item feature. However, this assumption is not verified anywhere in the paper. Moreover, the parameter $\lambda$ is also a bit confusing.
questions: 1. Adaptive defense against the proposed PoisonFRS requires more discussion or experimentation.
2. More experiments are needed to verify the difference between the embeddings of popular items and the average embedding in the real world. A visualization of item embeddings with varying popularity would be sufficient.
3. The attack performance improves as the parameter $\lambda$ increases. I wonder whether a large $\lambda$ would put the values in the target embedding on a different scale from the other embeddings. More explanation would be helpful.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6VRZQdN9zz | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems.
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | [
"Federated Recommender Systems",
"Poisoning Attacks",
"Fake Users"
] | https://openreview.net/pdf?id=6VRZQdN9zz | 7EJIBnEHDe | official_review | 1,700,742,127,751 | 6VRZQdN9zz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1102/Reviewer_sUXq"
] | review: The paper proposes a poisoning attack on federated recommender systems that allows an attacker to promote a chosen item to genuine users. The attack uses a small number of fake users and carefully crafted updates to the local models to influence the recommendations issued by the global model.
The paper is well written and structured but could benefit from improved explanations of some important concepts.
The contributions are clearly stated and the authors introduce preliminary technical details on the concepts they use. However, I believe that some theoretical aspects should be explained further or better: for instance, subsection 2.1 presents federated recommender systems but lacks depth and explanation. I believe this subsection is important, as it lays the foundation for a better understanding of the paper. In 4.2, key terminology should be described further: for example, when the authors talk about "item features", what do the features include? In Section 2, the authors denote $v_i$ to represent the embeddings of item $i$, but in 4.2.1 and 3.2.3, $v^l_i$ and $v_{t}^{'}$ are the "global model at the $l$-th global round" and the "target model" respectively.
The presented methodology looks robust and the authors compare their algorithm to multiple existing works in the literature along with multiple aggregation methods. In all the cases presented, the authors' technique outperforms existing works by a significant margin. The authors' choices are well warranted and shown to work through experimentation on real-world datasets. Furthermore, the authors validate the effect of multiple hyperparameters, showing the robustness of their work.
In my opinion, the major drawback here is that the choice of baselines is not justified well enough. I also found the "filler items" paragraph complicated to understand, despite the concept being straightforward. Finally, in 4.1, the authors assume that most popular items share roughly similar features: I believe that this claim should be verified or proven in the results section.
I also think the paper would benefit from a small subsection that addresses the limitations of the authors' technique and potentially presents some suggested defenses.
questions: a. In my opinion, 4.2.2 and 4.2.3 need reworking. In 4.2.2, I'm not sure I understand what is minimized. What exactly does $v_i$ represent? I feel like a clear definition of what $v$ stands for would clarify multiple aspects of this work.
b. What are the reasons for choosing the mentioned baseline attacks? How prevalent are they in state-of-the-art benchmarks? Are they commonly used as a basis for comparison?
c. Table 4 shows that the baselines have significantly different top-5 hit ratios on different datasets, while the authors' technique remains relatively stable. What is the reason for this? More precisely, FedRecAttack reaches a score of 0.91 on the Steam dataset but stays near 0, reaching at most 0.32, on the other datasets, whereas PoisonFRS remains stable between 0.72 and 1.00. What were the evaluation conditions of, for example, FedRecAttack? Are they similar to the paper's threat model?
d. Why are the "Median" and "FedAvg" aggregation techniques chosen for the results in Section 5.2? Why not the other mentioned aggregation algorithms?
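For readers unfamiliar with the two aggregation rules named in the question above, here is a minimal sketch of how FedAvg and the coordinate-wise Median combine client updates, and why the latter is considered robust (toy numbers, not the paper's setup):

```python
import numpy as np

def fed_avg(updates):
    """Plain FedAvg: coordinate-wise mean of the client updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Robust rule: coordinate-wise median, less sensitive to outliers."""
    return np.median(updates, axis=0)

genuine = np.ones((9, 4)) * 0.1            # nine well-behaved clients
malicious = np.full((1, 4), 100.0)         # one fake client with a huge update
updates = np.vstack([genuine, malicious])

print(fed_avg(updates))            # pulled far off by the single outlier
print(coordinate_median(updates))  # stays at the genuine value 0.1
```

A successful attack against the Median rule therefore has to make the fake updates look statistically unremarkable, which is part of what the paper claims PoisonFRS achieves.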
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
6VRZQdN9zz | Poisoning Federated Recommender Systems with Fake Users | [
"Ming Yin",
"Yichang Xu",
"Minghong Fang",
"Neil Zhenqiang Gong"
] | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems.
Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | [
"Federated Recommender Systems",
"Poisoning Attacks",
"Fake Users"
] | https://openreview.net/pdf?id=6VRZQdN9zz | 4Q9f6GGa2y | decision | 1,705,909,246,705 | 6VRZQdN9zz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper delves into the issue of targeted poisoning attacks on federated recommendation systems, a topic that has garnered significant attention in recent times. Notably, the paper introduces an innovative attack strategy termed PoisonFRS, designed to manipulate federated recommender systems by leveraging item embeddings obtained solely from the server, thereby eliminating the need for additional information. Through extensive experiments conducted on multiple real-world datasets, the paper compares PoisonFRS with existing targeted poisoning attack methods, demonstrating its promising results and showcasing its advantages.
However, certain aspects of the paper could benefit from further refinement. Firstly, the absence of a discussion or proposal for a new countermeasure is notable, especially given that the experimental results indicate the ineffectiveness of existing defense methods against PoisonFRS. Addressing this gap would contribute to the completeness of the paper by considering potential defenses to mitigate the impact of the introduced attack. Furthermore, the motivation and rationale behind the main algorithm in Section 4 should be more explicitly illustrated and clarified to enhance clarity and comprehension. A more detailed explanation of the underlying principles would aid readers in grasping the intricacies of the proposed approach. Additionally, it would be beneficial for the paper to discuss the limitations of the PoisonFRS attack method. For instance, highlighting scenarios where the method may not be applicable, such as those involving new users, would provide a more comprehensive understanding of its scope and applicability. |
6EIkXlVPLM | Diagrammatic Reasoning for ALC visualizations with Logic Graphs | [
"Ildar Baimuratov"
] | User studies show the demand for diagrammatic reasoning techniques
for knowledge representation formats. OWL ontologies are highly relevant for Web 3.0, however, existing ontology visualization tools do not support diagrammatic reasoning, while existing diagrammatic reasoning systems utilize suboptimal visual languages. The purpose of this research is to facilitate the usage of OWL ontologies by providing a diagrammatic reasoning system over their visual representations. We focus on the ALC description logic, which covers most of the expressivity of the ontologies. As a visual language to reason about, we utilize Logic Graphs, which provide simplest visualizations regarding graph- and information-theoretic properties. We adapt the tableau algorithm to LGs to reason about concept satisfiability, prove the correctness of the proposed system and illustrate it with examples. The proposed diagrammatic reasoning system allows reasoning over ontologies, reducing complex concepts step by step, and identifying elements that produce a contradiction. | [
"Ontology visualization",
"Diagrammatic reasoning",
"Existential graphs",
"Description logic",
"Tableau algorithm"
] | https://openreview.net/pdf?id=6EIkXlVPLM | wlRX4aD134 | decision | 1,705,909,254,834 | 6EIkXlVPLM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper proposes a new diagrammatic reasoning system for ALC description logic, which is a core part of OWL ontologies. The system is based on Logic Graphs, a visual language that aims to improve the human interpretability and accessibility of ontologies. The paper provides a formal proof of the soundness and completeness of the system, as well as some examples and application scenarios.
The paper is clearly written, of high technical quality, and makes a nice contribution to the field. Minor improvements are suggested, such as providing some draft visualizations or discussing the target users of the system. I believe these additions will improve the paper, and I strongly recommend that the authors include them in the final version.
Based on this summary, I would recommend accepting this paper. |
6EIkXlVPLM | Diagrammatic Reasoning for ALC visualizations with Logic Graphs | [
"Ildar Baimuratov"
] | User studies show the demand for diagrammatic reasoning techniques
for knowledge representation formats. OWL ontologies are highly relevant for Web 3.0, however, existing ontology visualization tools do not support diagrammatic reasoning, while existing diagrammatic reasoning systems utilize suboptimal visual languages. The purpose of this research is to facilitate the usage of OWL ontologies by providing a diagrammatic reasoning system over their visual representations. We focus on the ALC description logic, which covers most of the expressivity of the ontologies. As a visual language to reason about, we utilize Logic Graphs, which provide simplest visualizations regarding graph- and information-theoretic properties. We adapt the tableau algorithm to LGs to reason about concept satisfiability, prove the correctness of the proposed system and illustrate it with examples. The proposed diagrammatic reasoning system allows reasoning over ontologies, reducing complex concepts step by step, and identifying elements that produce a contradiction. | [
"Ontology visualization",
"Diagrammatic reasoning",
"Existential graphs",
"Description logic",
"Tableau algorithm"
] | https://openreview.net/pdf?id=6EIkXlVPLM | aqMh3wnJiS | official_review | 1,700,737,191,034 | 6EIkXlVPLM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1759/Reviewer_rhae"
] | review: The paper proposes a diagrammatic reasoning method for ALC, an expressive description logic, inspired by the tableau algorithm, and prove that it is sound and complete. The method is based on the Logic Graphs, a visual language for ontologies that derives from Peirce's existential graphs, with some additions.
The paper is very well written, easy to follow, and self-contained. The contribution is clearly positioned with respect to the relevant literature. The paper is technically sound: I have checked all the proofs and they are correct.
Having said that, I find that the diagrammatic explanation of proofs, as proposed in the paper, is intuitive for a user provided that the user is familiar with the tableau algorithms of description logics, or, better, with the four rules of the Logic-Graphs-based diagrammatic reasoning system. It remains debatable to what degree one can expect this from a casual user, while it is not obvious that a description logic expert would really benefit from a visual explanation of inferences...
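To make the reasoning rules concrete for readers who are not description-logic experts, below is a minimal sketch of the plain ALC tableau procedure for concept satisfiability (empty TBox, concepts already in negation normal form). The tuple encoding and rule scheduling are my own illustrative choices and have nothing to do with the paper's Logic Graphs visuals:

```python
from copy import deepcopy

# Concepts in negation normal form, encoded as tuples:
#   ("atom", A), ("not", A), ("and", C, D), ("or", C, D),
#   ("exists", r, C), ("forall", r, C)

def satisfiable(concept):
    """Tableau check for ALC concept satisfiability with an empty TBox."""
    return _expand({0: {concept}}, {}, 1)

def _expand(labels, edges, fresh):
    changed = True
    while changed:
        changed = False
        for x in list(labels):
            for c in list(labels[x]):
                op = c[0]
                if op == "and":                      # add both conjuncts
                    for part in c[1:]:
                        if part not in labels[x]:
                            labels[x].add(part)
                            changed = True
                elif op == "or":                     # branch on the disjuncts
                    if not any(p in labels[x] for p in c[1:]):
                        for part in c[1:]:
                            l2, e2 = deepcopy(labels), deepcopy(edges)
                            l2[x].add(part)
                            if _expand(l2, e2, fresh):
                                return True
                        return False
                elif op == "exists":                 # create a fresh successor
                    _, role, filler = c
                    succ = edges.get((x, role), set())
                    if not any(filler in labels[y] for y in succ):
                        labels[fresh] = {filler}
                        edges.setdefault((x, role), set()).add(fresh)
                        fresh += 1
                        changed = True
                elif op == "forall":                 # propagate to successors
                    _, role, filler = c
                    for y in edges.get((x, role), set()):
                        if filler not in labels[y]:
                            labels[y].add(filler)
                            changed = True
    # No rule applies any more: satisfiable iff no individual has a clash.
    return not any(c[0] == "atom" and ("not", c[1]) in labels[x]
                   for x in labels for c in labels[x])

# The point above: a user needs rules like these in mind before the
# corresponding diagrammatic proof steps become readable.
print(satisfiable(("and", ("atom", "Vegan"), ("not", "Vegan"))))   # False: clash
print(satisfiable(("and", ("exists", "eats", ("atom", "Plant")),
                   ("forall", "eats", ("not", "Plant")))))         # False
```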
questions: What would be the target user of the proposed reasoning system?
ethics_review_flag: No
ethics_review_description: Not applicable
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 7
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
6EIkXlVPLM | Diagrammatic Reasoning for ALC visualizations with Logic Graphs | [
"Ildar Baimuratov"
] | User studies show the demand for diagrammatic reasoning techniques
for knowledge representation formats. OWL ontologies are highly relevant for Web 3.0, however, existing ontology visualization tools do not support diagrammatic reasoning, while existing diagrammatic reasoning systems utilize suboptimal visual languages. The purpose of this research is to facilitate the usage of OWL ontologies by providing a diagrammatic reasoning system over their visual representations. We focus on the ALC description logic, which covers most of the expressivity of the ontologies. As a visual language to reason about, we utilize Logic Graphs, which provide simplest visualizations regarding graph- and information-theoretic properties. We adapt the tableau algorithm to LGs to reason about concept satisfiability, prove the correctness of the proposed system and illustrate it with examples. The proposed diagrammatic reasoning system allows reasoning over ontologies, reducing complex concepts step by step, and identifying elements that produce a contradiction. | [
"Ontology visualization",
"Diagrammatic reasoning",
"Existential graphs",
"Description logic",
"Tableau algorithm"
] | https://openreview.net/pdf?id=6EIkXlVPLM | Y0QzeY91uB | official_review | 1,701,049,101,186 | 6EIkXlVPLM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1759/Reviewer_6QzH"
] | review: The paper proposes a new annotation language for the ALC description logic.
The paper provides some details about the proposed new annotation language. However, it does not provide any sound reason why this new language might be needed or useful. It describes some of the other visual languages, but it does not provide any evidence as to why they are suboptimal.
questions: How do the authors know that the proposed new language increases human understandability of the underlying logic and its expressivity?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6EIkXlVPLM | Diagrammatic Reasoning for ALC visualizations with Logic Graphs | [
"Ildar Baimuratov"
] | User studies show the demand for diagrammatic reasoning techniques
for knowledge representation formats. OWL ontologies are highly relevant for Web 3.0, however, existing ontology visualization tools do not support diagrammatic reasoning, while existing diagrammatic reasoning systems utilize suboptimal visual languages. The purpose of this research is to facilitate the usage of OWL ontologies by providing a diagrammatic reasoning system over their visual representations. We focus on the ALC description logic, which covers most of the expressivity of the ontologies. As a visual language to reason about, we utilize Logic Graphs, which provide simplest visualizations regarding graph- and information-theoretic properties. We adapt the tableau algorithm to LGs to reason about concept satisfiability, prove the correctness of the proposed system and illustrate it with examples. The proposed diagrammatic reasoning system allows reasoning over ontologies, reducing complex concepts step by step, and identifying elements that produce a contradiction. | [
"Ontology visualization",
"Diagrammatic reasoning",
"Existential graphs",
"Description logic",
"Tableau algorithm"
] | https://openreview.net/pdf?id=6EIkXlVPLM | QS350pCpL3 | official_review | 1,700,642,557,366 | 6EIkXlVPLM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1759/Reviewer_pLdk"
] | review: The paper proposes a theoretical method to facilitate the usage of OWL ontologies by the help of a diagrammatic reasoning system
over the ontologies’ visual representations. The focus falls on the ALC description logic, as it covers the expressivity of most available OWL ontologies according to a survey.
The paper is overall well-written and not difficult to follow.
There’s an issue with the very first example in the introduction, because the concepts are ill-defined. A vegan is a specific kind of vegetarian (meaning that a vegan IS indeed a vegetarian - you can actually check that), and if seen so, your reasoning will not yield the empty set. Hence, I’m struggling to grasp the motivation and application of the proposed algorithm.
The related work section contains formal parts which are not suited for this section — consider moving them to the Background section and focus here only on related approaches within the scope of the paper and the position of the latter wrt the former.
The paper only includes a formal evaluation of the proposed approach; no experiments are given, nor at least a set of clearly defined use cases that could help understand the practical value of the approach. Indeed, as the authors mention, the algorithm is not yet implemented, which makes its applicability less convincing.
questions: - What is the practical value of the proposed method?
- How does this system compare to NLP-based approaches (pretrained large language models) which could show similar functionalities at an arguably lower cost?
- What is the envisioned experimental evaluation setting?
ethics_review_flag: No
ethics_review_description: no issue
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6EIkXlVPLM | Diagrammatic Reasoning for ALC visualizations with Logic Graphs | [
"Ildar Baimuratov"
] | User studies show the demand for diagrammatic reasoning techniques
for knowledge representation formats. OWL ontologies are highly relevant for Web 3.0, however, existing ontology visualization tools do not support diagrammatic reasoning, while existing diagrammatic reasoning systems utilize suboptimal visual languages. The purpose of this research is to facilitate the usage of OWL ontologies by providing a diagrammatic reasoning system over their visual representations. We focus on the ALC description logic, which covers most of the expressivity of the ontologies. As a visual language to reason about, we utilize Logic Graphs, which provide simplest visualizations regarding graph- and information-theoretic properties. We adapt the tableau algorithm to LGs to reason about concept satisfiability, prove the correctness of the proposed system and illustrate it with examples. The proposed diagrammatic reasoning system allows reasoning over ontologies, reducing complex concepts step by step, and identifying elements that produce a contradiction. | [
"Ontology visualization",
"Diagrammatic reasoning",
"Existential graphs",
"Description logic",
"Tableau algorithm"
] | https://openreview.net/pdf?id=6EIkXlVPLM | GRyN03K7s1 | official_review | 1,700,842,719,158 | 6EIkXlVPLM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1759/Reviewer_ozdX"
] | review: Pros: very clear paper, very well written, provides examples and application scenarios
Cons: no obvious shortcomings. Authors have openly stated about the status of their work, and outlined scenarios for the future work
In summary I see that this paper is publishable as it is. The topic should be of great interest for the Web Conference participants, especially since it provides clear and well communicated application scenarios and examples in addition to (outstanding) presentation about their diagrammatic reasoning algorithm.
questions: Some draft visualizations would be interesting to see already in this paper, or at least in the very near future in your next papers.
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 7
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
6EIkXlVPLM | Diagrammatic Reasoning for ALC visualizations with Logic Graphs | [
"Ildar Baimuratov"
] | User studies show the demand for diagrammatic reasoning techniques
for knowledge representation formats. OWL ontologies are highly relevant for Web 3.0, however, existing ontology visualization tools do not support diagrammatic reasoning, while existing diagrammatic reasoning systems utilize suboptimal visual languages. The purpose of this research is to facilitate the usage of OWL ontologies by providing a diagrammatic reasoning system over their visual representations. We focus on the ALC description logic, which covers most of the expressivity of the ontologies. As a visual language to reason about, we utilize Logic Graphs, which provide simplest visualizations regarding graph- and information-theoretic properties. We adapt the tableau algorithm to LGs to reason about concept satisfiability, prove the correctness of the proposed system and illustrate it with examples. The proposed diagrammatic reasoning system allows reasoning over ontologies, reducing complex concepts step by step, and identifying elements that produce a contradiction. | [
"Ontology visualization",
"Diagrammatic reasoning",
"Existential graphs",
"Description logic",
"Tableau algorithm"
] | https://openreview.net/pdf?id=6EIkXlVPLM | 0KjjFuBbB7 | official_review | 1,700,747,322,149 | 6EIkXlVPLM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1759/Reviewer_2Pjn"
] | review: Summary:
This paper focuses on enhancing the usage of OWL ontologies through a diagrammatic reasoning system. This system is designed to make the ontologies more human-interpretable, which is crucial for knowledge-based systems, especially in the context of Web 3.0. The research specifically addresses the ALC (Attributive Language with Complements) description logic, which is a fundamental part of OWL ontologies.
The main contribution of the research includes:
1. Developing a diagrammatic reasoning system for visualizing ontologies using Logic Graphs (LGs).
2. Adapting the tableau algorithm for ALC to these Logic Graphs.
3. Proving the correctness of this system
Strengths:
1. The diagrammatic reasoning system improves the human interpretability of OWL ontologies, which is important for trustworthiness and explainability in Web 3.0 contexts.
2. The use of LGs provides simpler visualizations for reasoning about ontologies, making complex concepts more accessible.
3. The adaptation of the tableau algorithm for ALC to LGs offers a practical tool for reasoning about concept satisfiability in ontologies.
Weaknesses:
1. As I am not well-versed in logical proofs, ALC, and OWL domains, I am unable to fully assess the correctness of the algorithmic proof process proposed by the authors.
2. I think that the logic tree-based reasoning algorithm proposed by the authors is significantly innovative. It offers powerful interpretability for knowledge reasoning, especially fitting the needs of the current learning and artificial intelligence fields. This methodology might provide a new perspective for many logic reasoning tasks.
3. The discussion section mentions issues related to the implementation of the algorithm. I would like the authors to elaborate further on the specific application plans of this algorithm in practice, as well as how it integrates with existing technologies and tools.
4. I am particularly interested in how this work could offer new insights or potential solutions to the reasoning challenges of current Large Language Models (LLM). Could the authors discuss this aspect, envisioning the potential contributions and scope of this algorithm in future LLM research?
questions: see weakness.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6DPGbrM3Dh | Question Difficulty Consistent Knowledge Tracing | [
"Guimei Liu",
"Huijing Zhan",
"Jung-jae Kim"
] | Knowledge tracing aims to estimate knowledge states of students based on their historical learning activities. Many deep learning models have been developed for knowledge tracing with impressive performance. Early works like DKT use skill IDs and student responses only. Recent works also incorporate questions IDs into their models and achieve much improved performance. However, predictions made by these models are thus on specific questions, and it is not straightforward to translate them to estimation of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models, which transforms the knowledge tracing problem to ``predicting whether a student can answer any question of a given skill at a given difficulty level correctly". The predictions made by our model can be more readily translated to students' knowledge states over skills. Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using a Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulty levels by imposing a penalty if they are not consistent. We conduct extensive experiments to study the performance of the proposed model. Our experiment results show that our model outperforms latest knowledge tracing models in terms of both AUC/RMSE and consistency with question difficulty levels. | [
"knowledge tracing; learning activities"
] | https://openreview.net/pdf?id=6DPGbrM3Dh | mXrhByhGQt | official_review | 1,701,361,470,790 | 6DPGbrM3Dh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1593/Reviewer_pJyo"
] | review: In this paper, the authors propose a novel approach to knowledge tracing, replacing question IDs with difficulty levels in deep learning models. This shift allows predictions to focus on a student's ability to answer any question of a specific skill at a given difficulty level, alleviating the cold-start problem. The authors introduce two techniques to enhance predictions, involving difficulty-level embeddings and a consistency constraint on predicted scores. Experiments show that the approach, combining an LSTM for learning representations and a feed-forward neural network for predictions, outperforms more complex models in terms of consistency with question difficulty levels, efficiency, and accuracy.
Strengths
+ Very well-written and well-structured, the authors conveyed their idea in a clear and concise way.
+ Well-summarized contributions and connection to prior work, with well-motivated choices.
+ The paper covers four datasets from two different platforms, which allows for a good level of generalizability.
Limitations
- Knowledge tracing is a very important topic for the community working on educational data mining. As the paper stands, however, I found it hard to grasp the core connection to the topics of the conference. The connection seems limited, as the authors mainly touch on the Web for the data collection. Without a proper contextualization of how the proposed method affects the Web, the paper can be only of marginal interest for this community and, in any case, such interest might be narrowed to a limited sub-community working on educational data mining.
- The novel contribution of this paper appears to be an incremental addition to existing knowledge about the task. In this sense, I appreciated the extensive results the authors provided to counterbalance the limited technical novelty. Nevertheless, I believe that the gains shown with respect to the best baseline, e.g., in Table 2, suggest that the proposed contribution requires more elaboration to have a concrete impact on students' learning.
- The experimental results highlight that the framework outperforms several baselines. However, the reported gains, often at the second or third decimal, leave room for ambiguity regarding the real impact on students' learning. Moreover, the authors merely focus on machine-learning-oriented metrics, like AUC and RMSE, without any connection to how this can enable better learning on the Web. To show the significance of the improvement, it is suggested that the authors consider an online evaluation with A/B testing to provide a more practical context for their findings.
- The results (see Section 4) cover a very large set of experiments and metrics, touching both on accuracy and on efficiency in terms of memory and running time. However, from the way they are presented, it is hard to grasp the concrete trade-off of each method with respect to this set of metrics, and thus to decide which method should be preferred over the others in a real-world implementation. Radar plots might be a more effective way of showing the results and would let the reader grasp the trade-offs easily.
- Although the paper covers various datasets and baselines with detailed implementation information, the absence of shared source code may pose challenges for other researchers attempting to reproduce the work, given also that some baselines were re-created from scratch. Sharing the source code would contribute to the reproducibility.
In conclusion, while the paper makes strides in knowledge tracing, its limited connection to broader conference themes, incremental contribution, and the need for further elaboration on practical impact suggest areas for improvement. Addressing the contextualization of the proposed method's impact on the web, enhancing technical novelty, providing clearer real-world implications through A/B testing, employing more effective result visualization methods, and sharing source code for reproducibility would significantly strengthen the overall contribution of this work.
questions: - To what extent might the gains in the offline experiments concretely translate into benefits in online experiments with learners?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
6DPGbrM3Dh | Question Difficulty Consistent Knowledge Tracing | [
"Guimei Liu",
"Huijing Zhan",
"Jung-jae Kim"
] | Knowledge tracing aims to estimate knowledge states of students based on their historical learning activities. Many deep learning models have been developed for knowledge tracing with impressive performance. Early works like DKT use skill IDs and student responses only. Recent works also incorporate questions IDs into their models and achieve much improved performance. However, predictions made by these models are thus on specific questions, and it is not straightforward to translate them to estimation of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models, which transforms the knowledge tracing problem to ``predicting whether a student can answer any question of a given skill at a given difficulty level correctly". The predictions made by our model can be more readily translated to students' knowledge states over skills. Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using a Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulty levels by imposing a penalty if they are not consistent. We conduct extensive experiments to study the performance of the proposed model. Our experiment results show that our model outperforms latest knowledge tracing models in terms of both AUC/RMSE and consistency with question difficulty levels. | [
"knowledge tracing; learning activities"
] | https://openreview.net/pdf?id=6DPGbrM3Dh | dwYcDWVVH2 | official_review | 1,701,267,281,505 | 6DPGbrM3Dh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1593/Reviewer_Npp2"
] | review: This paper proposes a new knowledge tracing model where question difficulty levels instead of question IDs (in the state-of-the-art methods) are incorporated to predict a redefined problem from whether a student can answer a specific question correctly to whether a student can answer any question of a given skill at a given difficulty level. This proposed solution not only provides a more direct translation to students' knowledge states over skills but also helps mitigate the cold-start problem in knowledge tracing. The framework also integrates two smoothing methods to further facilitate the prediction.
The question of this paper is well defined and highly relevant to the data mining field, especially given the background that online education has seen increasing demand after the pandemic. In general, the paper is well written, with extensive experiments on baselines and ablation studies. I specifically have two questions, the answers to which will hopefully help the paper's development.
* Potential data leakage
Based on the sampling strategy described in the paper, for each data split the student and the progress state were randomly drawn, and one student could appear multiple times in different data splits. Therefore, it's likely that in training split 1, student A's status at time T2 is collected, while the testing split contains student A's status at time T1 (T1<T2). However, in this case, the model may already "see" student A's question-answer result at T1 in the training phase. More discussion on this would be expected.
* Performance
It seems like the proposed model only moderately outperforms the best baseline (such as 0.7893 vs 0.7852). There are two questions related to performance given that: 1. How did you set up the experiments for the baselines? Did you use the same grid search? 2. How can you justify that an outperformance on such a small scale is important?
questions: Clarifications on 1. potential data leakage; 2. model performance and baseline experiments
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6DPGbrM3Dh | Question Difficulty Consistent Knowledge Tracing | [
"Guimei Liu",
"Huijing Zhan",
"Jung-jae Kim"
] | Knowledge tracing aims to estimate knowledge states of students based on their historical learning activities. Many deep learning models have been developed for knowledge tracing with impressive performance. Early works like DKT use skill IDs and student responses only. Recent works also incorporate questions IDs into their models and achieve much improved performance. However, predictions made by these models are thus on specific questions, and it is not straightforward to translate them to estimation of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models, which transforms the knowledge tracing problem to ``predicting whether a student can answer any question of a given skill at a given difficulty level correctly". The predictions made by our model can be more readily translated to students' knowledge states over skills. Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using a Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulty levels by imposing a penalty if they are not consistent. We conduct extensive experiments to study the performance of the proposed model. Our experiment results show that our model outperforms latest knowledge tracing models in terms of both AUC/RMSE and consistency with question difficulty levels. | [
"knowledge tracing; learning activities"
] | https://openreview.net/pdf?id=6DPGbrM3Dh | XKNXftBlJ7 | decision | 1,705,909,254,286 | 6DPGbrM3Dh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper advocates a new principled problem formulation for knowledge tracing. Instead of predicting at the question-ID level, the authors argue for the coarser granularity of the question-difficulty level. Such coarse granularity can be achieved through smoothing and regularization, and the resulting model slightly outperforms SoTA models while ensuring consistency with question difficulty levels.
Reviewers noted that although the presented methodology is not the most novel, the problem and the solution are both well motivated and important to the area. The reviewers also indicated that their questions were mostly addressed during the rebuttal. I encourage the authors to carefully revise their paper according to the reviews and rebuttal comments.
6DPGbrM3Dh | Question Difficulty Consistent Knowledge Tracing | [
"Guimei Liu",
"Huijing Zhan",
"Jung-jae Kim"
] | Knowledge tracing aims to estimate knowledge states of students based on their historical learning activities. Many deep learning models have been developed for knowledge tracing with impressive performance. Early works like DKT use skill IDs and student responses only. Recent works also incorporate questions IDs into their models and achieve much improved performance. However, predictions made by these models are thus on specific questions, and it is not straightforward to translate them to estimation of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models, which transforms the knowledge tracing problem to ``predicting whether a student can answer any question of a given skill at a given difficulty level correctly". The predictions made by our model can be more readily translated to students' knowledge states over skills. Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using a Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulty levels by imposing a penalty if they are not consistent. We conduct extensive experiments to study the performance of the proposed model. Our experiment results show that our model outperforms latest knowledge tracing models in terms of both AUC/RMSE and consistency with question difficulty levels. | [
"knowledge tracing; learning activities"
] | https://openreview.net/pdf?id=6DPGbrM3Dh | Wgc1FuvaEn | official_review | 1,700,628,788,996 | 6DPGbrM3Dh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1593/Reviewer_XXmg"
] | review: This paper proposes a novel method for knowledge tracing, called Question Difficulty Consistent Knowledge Tracing. It aims to calculate difficulty levels for the questions answered in students' historical records and to utilize an LSTM-based network to predict the outcome for any arbitrary difficulty level. This method differs from existing question-based work that predicts whether the student can correctly answer a new question. Experiments on the model's performance against baselines are conducted, as well as ablation studies and analyses of score consistency and running efficiency. However, the author did not provide a convincing explanation for why the new method is beneficial and conceptually superior to the existing work, since some of the claims in the introduction section are not grounded by experiments. Thus, although the model has achieved the best performance against the baselines, the paper either requires revision of the work's objective, or more experiments are needed.
questions: 1. The authors state that the goal of this model is to "predict whether a student can answer ANY question of a given skill at a given difficulty", which is different from existing methods that aim to "predict whether a student can answer the next specific question of a given skill correctly". It does not seem very reasonable to replace the "old" goal with the proposed new goal, since they are not mutually exclusive, and this new model could just add question ID as a feature and become capable of directly predicting the label for a given question ID. The goal could be changed to "predict whether a student can answer a specific question or any question of a specific skill and difficulty level", which gives the model predictive capability with two forms of input: question ID or difficulty.
2. Continuing from the previous question: Can the author provide more explanation about the reason for excluding question ID from the model? From my understanding, the existing work directly uses question ID as a feature, while the proposed work utilizes difficult level of q (calculated by Eq. 4 and 5). Why not utilize them both in the model?
3. From the difficulty equation, which is Eq. 4, we can see that for a given question q_i, diff(q) is calculated by a deterministic function. Then, for a given dataset D, all q_i are predetermined, so f_q works like an extra calculated feature column. Intuitively, using both difficulty and question ID as input features should be included as one of the experiment settings, but in Tables 3 and 4, there is no experiment on QDCKT+ID. Thus, the experiments conducted are somewhat incomplete.
4. Can the author include more statistics of the datasets in Table 1, such as the average number of questions per student, the number of skills, etc.?
5. For section 4.4, can the author add more explanation for the plot? For example, what would the predicted line look like if the model had a perfect fit?
6. In the introduction, some of the statements are not grounded by experiments. For example, in the third paragraph, the author stated: “Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions”. But there are no experiments that directly demonstrate how this model alleviates the cold-start problem. Also, in the Experiment Settings, “students with less than 10 activities are removed”. It might be very helpful to use the removed data to demonstrate the potential benefit of the proposed model on the cold-start problem.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
6DPGbrM3Dh | Question Difficulty Consistent Knowledge Tracing | [
"Guimei Liu",
"Huijing Zhan",
"Jung-jae Kim"
] | Knowledge tracing aims to estimate knowledge states of students based on their historical learning activities. Many deep learning models have been developed for knowledge tracing with impressive performance. Early works like DKT use skill IDs and student responses only. Recent works also incorporate questions IDs into their models and achieve much improved performance. However, predictions made by these models are thus on specific questions, and it is not straightforward to translate them to estimation of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models, which transforms the knowledge tracing problem to ``predicting whether a student can answer any question of a given skill at a given difficulty level correctly". The predictions made by our model can be more readily translated to students' knowledge states over skills. Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using a Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulty levels by imposing a penalty if they are not consistent. We conduct extensive experiments to study the performance of the proposed model. Our experiment results show that our model outperforms latest knowledge tracing models in terms of both AUC/RMSE and consistency with question difficulty levels. | [
"knowledge tracing; learning activities"
] | https://openreview.net/pdf?id=6DPGbrM3Dh | Ot3H7HM4z7 | official_review | 1,701,221,167,898 | 6DPGbrM3Dh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1593/Reviewer_Hoox"
] | review: This work proposes a novel approach to knowledge tracing, a problem that tracks student knowledge by utilizing the students' responses to questions. This paper takes a step back and, instead of predicting whether a student will successfully complete the next question, it predicts whether the student will answer any question correctly at a specific difficulty level. The method is evaluated against other competing approaches in terms of AUC scores, RMSE, consistency, runtime, and memory consumption. An ablation study is also performed to evaluate the contribution of each component of the model.
The paper is easy to follow and read. The notions are explained properly, as well as the novel contributions of this paper. The experimental evaluation is extensive. The datasets used are publicly available. Most importantly, the strongest contribution of this paper is its ability to perform well related to the cold start problem, i.e., when a new question becomes available. The previous approaches would not be able to generate good predictions for these cases, as the new questions do not appear in many (or any) previous sequences. The proposed approach is properly motivated and its components are intuitive.
While most of the paper is clear, the description of the proposed model is not as well explained. Are all the elements s_q, e_q^h, h_q simple numbers (like f_q)? The paper does not adequately discuss (or at least offer some references for) what s_0 is, how exactly the embedding layer works, or the matrices S, D, and C. Why do the elements [1, L-1] appear in both the history and query sequences? Additionally, the hyperparameter alpha from Eq. 2 is not discussed (e.g., the values used in the experiments).
Suggestions:
- since the paper tackles the cold-start problem, there could be some evaluation of how it performs on questions with varying popularity.
- The statistics in Tables 5 and 6 could be averaged over all the datasets to save some space, since there is not much other dataset-specific information that would help relate the running time and memory consumption to individual datasets.
- Figures 2 and 3 are too complicated to be easily understood; only the best 3-4 methods could be used.
questions: - Why do the elements [1, L-1] appear both in history and query sequences?
- Why is there a claim about the interpretability of the model in the conclusion?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
6DPGbrM3Dh | Question Difficulty Consistent Knowledge Tracing | [
"Guimei Liu",
"Huijing Zhan",
"Jung-jae Kim"
] | Knowledge tracing aims to estimate knowledge states of students based on their historical learning activities. Many deep learning models have been developed for knowledge tracing with impressive performance. Early works like DKT use skill IDs and student responses only. Recent works also incorporate questions IDs into their models and achieve much improved performance. However, predictions made by these models are thus on specific questions, and it is not straightforward to translate them to estimation of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models, which transforms the knowledge tracing problem to ``predicting whether a student can answer any question of a given skill at a given difficulty level correctly". The predictions made by our model can be more readily translated to students' knowledge states over skills. Furthermore, by using question difficulty levels to replace question IDs, we can also alleviate the cold-start problem in knowledge tracing as online learning platforms are updated frequently with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using a Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulty levels by imposing a penalty if they are not consistent. We conduct extensive experiments to study the performance of the proposed model. Our experiment results show that our model outperforms latest knowledge tracing models in terms of both AUC/RMSE and consistency with question difficulty levels. | [
"knowledge tracing; learning activities"
] | https://openreview.net/pdf?id=6DPGbrM3Dh | CqD8XLd7PJ | official_review | 1,701,419,812,731 | 6DPGbrM3Dh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1593/Reviewer_3cSy"
] | review: This paper studies the knowledge tracing problem which estimates knowledge states of students based on their historical learning activities. It is an interesting and practical problem in the education area. My major concerns are listed as follows:
Cons:
1. The technique novelty is limited. Using question difficulty levels to replace question IDs is intuitive. The improvement is small and incremental.
2. The simplest way to evaluate the knowledge states of students is to compute statistics over the different difficulty levels of questions. Such statistical methods should be compared as baselines.
3. It is not clear how the difficulty levels of questions are obtained.
questions: 1. How are the difficulty levels of questions obtained?
2. How do the smoothing operations on the embeddings influence the results?
ethics_review_flag: No
ethics_review_description: NO
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
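For context on the abstract's statement that the model "combines embeddings of nearby difficulty levels using a Hann function", here is a minimal hypothetical sketch of such smoothing. The window size and exact weighting scheme are assumptions for illustration, not details given in the paper or reviews:

```python
import numpy as np

def hann_smooth(emb, window=5):
    """Blend each difficulty level's embedding with those of nearby
    levels, weighted by a Hann window centered on that level.

    emb: array of shape (L, d), one embedding per difficulty level.
    """
    L, _ = emb.shape
    half = window // 2
    w = np.hanning(window)  # e.g., [0, 0.5, 1, 0.5, 0] for window=5
    out = np.empty_like(emb, dtype=float)
    for i in range(L):
        lo, hi = max(0, i - half), min(L, i + half + 1)
        ww = w[half - (i - lo): half + (hi - i)]
        ww = ww / ww.sum()  # renormalize weights at the boundaries
        out[i] = ww @ emb[lo:hi]
    return out
```

Because the weights at each position sum to one, smoothing a constant set of embeddings leaves it unchanged; in general, neighboring difficulty levels end up with more similar embeddings, which is the stated goal of the technique.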
5v9s4isvlF | Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation | [
"Jing Long",
"Tong Chen",
"Guanhua Ye",
"Kai Zheng",
"Quoc Viet Hung Nguyen",
"Hongzhi Yin"
] | As an indispensable personalized service within Location-Based Social Networks (LBSNs), the Point-of-Interest (POI) recommendation aims to assist individuals in discovering attractive and engaging places. However, the accurate recommendation capability relies on the powerful server collecting a vast amount of users' historical check-in data, posing significant risks of privacy breaches. Although several collaborative learning (CL) frameworks for POI recommendation enhance recommendation resilience and allow users to keep personal data on-device, they still share personal knowledge to improve recommendation performance, thus leaving vulnerabilities for potential attackers. Given this, we design a new Physical Trajectory Inference Attack (PITA) to expose users' historical trajectories. Specifically, for each user, we identify the set of interacted POIs by analyzing the aggregated information from the target POIs and their correlated POIs. We evaluate the effectiveness of PITA on two real-world datasets across two types of decentralized CL frameworks for POI recommendation. Empirical results demonstrate that PITA poses a significant threat to users' historical trajectories. Furthermore, Local Differential Privacy (LDP), the traditional privacy-preserving method for CL frameworks, has also been proven ineffective against PITA. In light of this, we propose a novel defense mechanism (AGD) against PITA based on an adversarial game to eliminate sensitive POIs and their information in correlated POIs. After conducting intensive experiments, AGD has been proven precise and practical, with minimal impact on recommendation performance. | [
"Point-of-Interest Recommendation; Decentralized collaborative Learning; Trajectory Inference Attack and Defense"
] | https://openreview.net/pdf?id=5v9s4isvlF | RlBceVKxw4 | official_review | 1,700,717,757,667 | 5v9s4isvlF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission539/Reviewer_EkrW"
] | review: This study introduces the Physical Trajectory Inference Attack (PITA) to highlight privacy risks in POI recommendation systems and proposes a novel defense mechanism (AGD) for enhanced security.
Pros:
Focused on Privacy: Addresses critical user privacy issues in LBSNs.
Innovative Methods: PITA exposes vulnerabilities; AGD counters these effectively.
Empirically Tested: Validated on real-world datasets, ensuring practical relevance.
Cons:
Implementation Complexity: AGD may be complex to integrate.
Specific Attack Focus: Solutions may be limited to countering PITA.
Privacy-Functionality Balance: Balancing enhanced privacy with recommendation efficiency could be challenging.
Overall, the study contributes significantly to privacy in LBSNs, offering a novel attack model and an effective defense mechanism, with a focus on maintaining recommendation performance.
questions: 1. I recommend adding a diagram to illustrate the distinctions between model-sharing Collaborative Learning (CL) and Knowledge-Distillation CL for enhanced explanation.
2. Based on Figure 2, the public POI model, presumably STAN, can learn the features of visited POIs. How do you obtain the features of unvisited POIs?
3. Your task should be a click problem, but STAN is a sequence POI recommendation model. I think they are different tasks. Why do you choose STAN?
4. The experimental part should also report the performance of STAN on the two datasets without any attack or defense measures.
5. Why can the F1 metric evaluate the effectiveness of an attack?
6. While HR@10 might be an appropriate metric for click problems, POI recommendation tasks typically employ NDCG and Recall for a more holistic evaluation. Do you believe that the current metrics deployed in your study sufficiently capture the overall effectiveness of the model in this context?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
5v9s4isvlF | Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation | [
"Jing Long",
"Tong Chen",
"Guanhua Ye",
"Kai Zheng",
"Quoc Viet Hung Nguyen",
"Hongzhi Yin"
] | As an indispensable personalized service within Location-Based Social Networks (LBSNs), the Point-of-Interest (POI) recommendation aims to assist individuals in discovering attractive and engaging places. However, the accurate recommendation capability relies on the powerful server collecting a vast amount of users' historical check-in data, posing significant risks of privacy breaches. Although several collaborative learning (CL) frameworks for POI recommendation enhance recommendation resilience and allow users to keep personal data on-device, they still share personal knowledge to improve recommendation performance, thus leaving vulnerabilities for potential attackers. Given this, we design a new Physical Trajectory Inference Attack (PITA) to expose users' historical trajectories. Specifically, for each user, we identify the set of interacted POIs by analyzing the aggregated information from the target POIs and their correlated POIs. We evaluate the effectiveness of PITA on two real-world datasets across two types of decentralized CL frameworks for POI recommendation. Empirical results demonstrate that PITA poses a significant threat to users' historical trajectories. Furthermore, Local Differential Privacy (LDP), the traditional privacy-preserving method for CL frameworks, has also been proven ineffective against PITA. In light of this, we propose a novel defense mechanism (AGD) against PITA based on an adversarial game to eliminate sensitive POIs and their information in correlated POIs. After conducting intensive experiments, AGD has been proven precise and practical, with minimal impact on recommendation performance. | [
"Point-of-Interest Recommendation; Decentralized collaborative Learning; Trajectory Inference Attack and Defense"
] | https://openreview.net/pdf?id=5v9s4isvlF | LpuKbj0xhI | official_review | 1,700,557,319,108 | 5v9s4isvlF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission539/Reviewer_oXk7"
review: To address the privacy concerns related to users' real mobility trajectories in decentralized collaborative learning POI recommendations, this paper introduces a new attack called Physical Trajectory Inference Attack (PITA) that aims to expose users' historical trajectories by analyzing aggregated information from target POIs and their correlated POIs. The effectiveness of PITA is evaluated on two real-world datasets, and it is shown to pose a significant threat to users' historical trajectories. Additionally, a defense mechanism called Adversarial Game Defense (AGD) is also proposed to eliminate sensitive POIs and their information in correlated POIs. Overall, this paper offers a comprehensive analysis of the privacy risks in decentralized collaborative filtering recommender systems and proposes effective attack and defense mechanisms.
Pros:
1. The motivation of this paper, i.e., addressing the privacy concerns related to users’ real mobility trajectories in decentralized collaborative learning-based POI recommendations, is interesting and less explored.
2. The proposed Physical trajectory inference attack (PTIA) and Adversarial game-based defense mechanism (AGD) make sense to me, and the finding that Local Differential Privacy (LDP) is proven ineffective against PITA is intriguing and meaningful.
3. The paper conducts comprehensive experiments to evaluate the performance of the physical trajectory inference attack (PTIA) and the effectiveness of the corresponding defense mechanism based on an adversarial game (AGD).
Cons:
1. At the end of Section 3.3, given the differences between POI embeddings and the initial distribution, it is not clear how the elbow method is utilized to identify the visited regions.
2. The quality of Figure 4 could be improved. Normally, the y-axis should be labelled with the metric it represents, such as F1-score or hit ratio. However, the second and third subgraphs label their y-axes with “DCLR” and “MAC”, which are identifiers for models rather than metrics, potentially leading to confusion. To clarify, the y-axis labels should state the name of the metric, and the legend should differentiate between the "DCLR" and "MAC" series to avoid any ambiguity for the reader.
questions: 1. How is the elbow method utilized to identify visited regions?
2. Why has the author not included the hit ratio for varying numbers of shadow sequences, denoted as 'V', in the Figure 4?
ethics_review_flag: No
ethics_review_description: NULL
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
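Regarding the reviewers' question about the elbow method: a common generic implementation (not necessarily the one used in the paper) sorts the per-region scores in descending order and picks the point farthest from the chord joining the first and last points of the curve. A sketch under that assumption:

```python
import numpy as np

def elbow_index(values):
    """Return the index of the elbow of a descending 1-D score curve,
    e.g., per-region differences between POI embeddings and their
    initial distribution. The elbow is taken as the point with the
    largest perpendicular distance to the chord joining the first
    and last points of the curve."""
    y = np.asarray(values, dtype=float)
    x = np.arange(len(y), dtype=float)
    # unit vector along the chord from the first to the last point
    chord = np.array([x[-1] - x[0], y[-1] - y[0]])
    chord = chord / np.linalg.norm(chord)
    # perpendicular distance of every point to that chord
    vec = np.stack([x - x[0], y - y[0]], axis=1)
    proj = vec @ chord
    dist = np.linalg.norm(vec - np.outer(proj, chord), axis=1)
    return int(np.argmax(dist))
```

Scores before the returned index would then be treated as the user's visited regions. For a curve like [10, 9.5, 9, 3, 2.9, 2.8], the sharp drop places the elbow at index 3.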
5v9s4isvlF | Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation | [
"Jing Long",
"Tong Chen",
"Guanhua Ye",
"Kai Zheng",
"Quoc Viet Hung Nguyen",
"Hongzhi Yin"
] | As an indispensable personalized service within Location-Based Social Networks (LBSNs), the Point-of-Interest (POI) recommendation aims to assist individuals in discovering attractive and engaging places. However, the accurate recommendation capability relies on the powerful server collecting a vast amount of users' historical check-in data, posing significant risks of privacy breaches. Although several collaborative learning (CL) frameworks for POI recommendation enhance recommendation resilience and allow users to keep personal data on-device, they still share personal knowledge to improve recommendation performance, thus leaving vulnerabilities for potential attackers. Given this, we design a new Physical Trajectory Inference Attack (PITA) to expose users' historical trajectories. Specifically, for each user, we identify the set of interacted POIs by analyzing the aggregated information from the target POIs and their correlated POIs. We evaluate the effectiveness of PITA on two real-world datasets across two types of decentralized CL frameworks for POI recommendation. Empirical results demonstrate that PITA poses a significant threat to users' historical trajectories. Furthermore, Local Differential Privacy (LDP), the traditional privacy-preserving method for CL frameworks, has also been proven ineffective against PITA. In light of this, we propose a novel defense mechanism (AGD) against PITA based on an adversarial game to eliminate sensitive POIs and their information in correlated POIs. After conducting intensive experiments, AGD has been proven precise and practical, with minimal impact on recommendation performance. | [
"Point-of-Interest Recommendation; Decentralized collaborative Learning; Trajectory Inference Attack and Defense"
] | https://openreview.net/pdf?id=5v9s4isvlF | 6jdlHBbYcY | official_review | 1,700,571,414,425 | 5v9s4isvlF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission539/Reviewer_gF8J"
] | review: This paper presents a study on inference attacks to the physical trajectory in POI recommendation tasks, which then motivates the design of the corresponding defensive mechanisms. The attack approach is able to tackle two different recommendation paradigms, i.e., model-sharing and knowledge distillation paradigms. The proposed defensive mechanism uses adversarial training to ensure that the sharable knowledge from either POI recommendation paradigms does not reveal users’ historical POI interactions. Overall, this work presents an interesting idea, and the developed methods are relatively easy to follow. My detailed comments on the pros and cons of this paper can be found below.
Pros:
+ Due to the location-sensitive nature of POI recommendation, it is of interest to discuss the level of privacy in state-of-the-art, decentralized POI recommender systems.
+ This paper considers two practical settings of the collaborative learning POI recommendation schemes, namely the model-sharing and the knowledge distillation schemes.
+ The proposed approaches are well-motivated and interesting, and their efficacy in attack and defense are respectively supported by experimental results.
Cons:
- Some more detailed background about the two collaborative learning paradigms is expected in the early parts of the paper (e.g., introduction and task definition), given that this notion is fairly new in POI recommendation.
- Some parts of the paper’s presentation can be enhanced. This applies to both textual and visual presentations. For example, in Section 4.1, “the effectiveness of PTIA” (should be PITA). Also in this section, the sentence “The situation varies when evaluating the performance of the defense mechanism” is not easily understandable and should be further clarified. In Figure 1, it is unclear how user’s POI model generates adversary’s knowledge logits.
- The experimental results can use a more extensive analysis. For example, the relationship between the number of interactions and the attack F1 after applying AGD can also be visualized (similar to Figure 3). Also, the changes in HR@10 when varying V can be plotted as well, since the performance of the attacker PITA will likely have an impact on the recommender during adversarial training.
questions: Q1. What is the connection between the POI inference attack in POI recommendation tasks and the membership inference attack in general recommendation tasks? Can those defensive mechanisms against membership inference attack be used for preventing POI inference attacks?
Q2. Are PITA and AGD generalizable to other collaborative learning (e.g., federated learning) POI recommender systems?
Q3. What is the efficiency gain after applying the region-based filtering strategy during the POI inference attack?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
5v9s4isvlF | Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation | [
"Jing Long",
"Tong Chen",
"Guanhua Ye",
"Kai Zheng",
"Quoc Viet Hung Nguyen",
"Hongzhi Yin"
] | As an indispensable personalized service within Location-Based Social Networks (LBSNs), the Point-of-Interest (POI) recommendation aims to assist individuals in discovering attractive and engaging places. However, the accurate recommendation capability relies on the powerful server collecting a vast amount of users' historical check-in data, posing significant risks of privacy breaches. Although several collaborative learning (CL) frameworks for POI recommendation enhance recommendation resilience and allow users to keep personal data on-device, they still share personal knowledge to improve recommendation performance, thus leaving vulnerabilities for potential attackers. Given this, we design a new Physical Trajectory Inference Attack (PITA) to expose users' historical trajectories. Specifically, for each user, we identify the set of interacted POIs by analyzing the aggregated information from the target POIs and their correlated POIs. We evaluate the effectiveness of PITA on two real-world datasets across two types of decentralized CL frameworks for POI recommendation. Empirical results demonstrate that PITA poses a significant threat to users' historical trajectories. Furthermore, Local Differential Privacy (LDP), the traditional privacy-preserving method for CL frameworks, has also been proven ineffective against PITA. In light of this, we propose a novel defense mechanism (AGD) against PITA based on an adversarial game to eliminate sensitive POIs and their information in correlated POIs. After conducting intensive experiments, AGD has been proven precise and practical, with minimal impact on recommendation performance. | [
"Point-of-Interest Recommendation; Decentralized collaborative Learning; Trajectory Inference Attack and Defense"
] | https://openreview.net/pdf?id=5v9s4isvlF | 6UJ7rW4qI8 | decision | 1,705,909,244,256 | 5v9s4isvlF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: By summarizing the review comments and responses, this paper proposed a novel solution to solve the POI recommendation problem, and the experiment is solid and gets the sota results. However, the reviewers still have some concerns about the details of this paper, such as more background information, adding a diagram, etc. I recommend that the authors should fix these problems in their camera-ready version. |
5uQFXyFJPM | TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds | [
"Karan Vombatkere",
"Sepehr Mousavi",
"Savvas Zannettou",
"Franziska Roesner",
"Krishna P. Gummadi"
] | Recommendation algorithms for social media feeds often function as black boxes from the perspective of users. We aim to detect whether social media feed recommendations are personalized to users, and to characterize the factors contributing to personalization in these feeds.
We introduce a general framework to examine a set of social media feed recommendations for a user as a timeline.
We label items in the timeline as the result of exploration vs. exploitation of the user's interests on the part of the recommendation algorithm and introduce a set of metrics to capture the extent of personalization across user timelines.
We apply our framework to a real TikTok dataset and validate our results using a baseline generated from automated TikTok bots, as well as a randomized baseline. We also investigate the extent to which factors such as video viewing duration, liking, and following drive the personalization of content on TikTok.
Our results demonstrate that our framework produces intuitive and explainable results, and can be used to audit and understand personalization in social media feeds. | [
"Personalization",
"Social Media Feed",
"Algorithmic Recommendations",
"TikTok"
] | https://openreview.net/pdf?id=5uQFXyFJPM | tcL17gMzpJ | official_review | 1,700,296,967,108 | 5uQFXyFJPM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1732/Reviewer_Bqkm"
] | review: The manuscript "TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds" presents an innovative approach to understanding personalization in social media feeds, with a focus on TikTok. The study introduces a novel framework for analyzing user timelines and applies this to real data, demonstrating the ability to audit and understand the recommendation algorithms used by TikTok.
However, the study has certain limitations that could be addressed in future work. Firstly, it primarily focuses on TikTok, and extending the analysis to other platforms like Instagram and YouTube Shorts could provide a more comprehensive view and enable valuable cross-platform comparisons. Secondly, the complexity of recommendation systems, often involving intricate neural networks and human-designed rules, suggests that observational analysis might not fully reveal their internal mechanisms. A more direct approach, possibly involving legal mandates for model transparency, could offer deeper insights.
Overall, the paper contributes significantly to the field of social media recommendation systems, offering a framework for auditing and understanding personalization. Enhancing the scope of the study and addressing its inherent complexities could further solidify its impact and relevance.
questions: 1. The paper introduces an interesting research topic. However, from the perspective of a professional working in recommendation systems, it's important to note that these systems are composed of complex neural networks and various modules, and they also incorporate numerous human-designed rules. Therefore, it is quite challenging to understand the internal mechanisms of these systems solely through observational statistical data. A more direct approach might be to open-source the models in accordance with legal regulations for a thorough audit.
2. The analysis in the paper is limited to TikTok. It would be more convincing if the study, as mentioned in the introduction, also included an analysis of Instagram and YouTube Shorts, and provided a horizontal comparison across these platforms.
3. The section on Limitations & Future Work in the paper addresses very important issues. The authors are encouraged to further develop and elaborate on these points for a more comprehensive understanding.
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 2
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
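To illustrate the kind of exploitation/exploration labeling and personalization score that the abstract describes, here is a deliberately simplified hashtag-overlap sketch. The overlap criterion, window size, and score definition are assumptions for illustration; the paper's actual labeling procedure and metrics are more involved:

```python
def personalization_score(timeline, window=10):
    """timeline: list of hashtag sets, one per recommended video,
    in viewing order. A video is labeled "exploit" if it shares at
    least one hashtag with the user's recent interest profile (the
    union of hashtags from the previous `window` videos), and
    "explore" otherwise. The score is the fraction of exploitation
    items in the timeline."""
    labels = []
    for i, tags in enumerate(timeline):
        profile = set().union(*timeline[max(0, i - window):i])
        labels.append("exploit" if tags & profile else "explore")
    return labels.count("exploit") / len(labels), labels
```

On a toy timeline such as [{"a"}, {"a", "b"}, {"c"}, {"b"}], the first video is always exploration (empty profile), and each later video is labeled by overlap with what came before, yielding a score of 0.5 here; the paper's reported finding of up to 50% exploitation is an empirical result on real data, not a property of this sketch.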
5uQFXyFJPM | TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds | [
"Karan Vombatkere",
"Sepehr Mousavi",
"Savvas Zannettou",
"Franziska Roesner",
"Krishna P. Gummadi"
] | Recommendation algorithms for social media feeds often function as black boxes from the perspective of users. We aim to detect whether social media feed recommendations are personalized to users, and to characterize the factors contributing to personalization in these feeds.
We introduce a general framework to examine a set of social media feed recommendations for a user as a timeline.
We label items in the timeline as the result of exploration vs. exploitation of the user's interests on the part of the recommendation algorithm and introduce a set of metrics to capture the extent of personalization across user timelines.
We apply our framework to a real TikTok dataset and validate our results using a baseline generated from automated TikTok bots, as well as a randomized baseline. We also investigate the extent to which factors such as video viewing duration, liking, and following drive the personalization of content on TikTok.
Our results demonstrate that our framework produces intuitive and explainable results, and can be used to audit and understand personalization in social media feeds. | [
"Personalization",
"Social Media Feed",
"Algorithmic Recommendations",
"TikTok"
] | https://openreview.net/pdf?id=5uQFXyFJPM | iByP6QLPGp | official_review | 1,701,245,591,235 | 5uQFXyFJPM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1732/Reviewer_Wkc9"
review: The paper explores personalization in social media feeds, with a focus on the TikTok platform. Concretely, the study presents a framework that helps distinguish between exploration and exploitation in users' timelines: an exploitation recommendation is one the algorithm creates based on the user's preferences and previous actions, whereas an exploration recommendation is not the result of user personalization but of the algorithm probing whether a user might like an item.
Pros
- The evaluation setup is interesting and based on user trails and bot-generated trails.
- Ethics are clearly described and addressed.
- Clear and comprehensive methodology
- Findings are interesting and make a significant contribution to the field (in particular, it is interesting to see that the algorithm of TikTok seems to exploit user interests in up to 50% of the recommended videos).
- The introduced personalization score is a good tool to quantify the extent of personalization.
- Findings are embedded in a broader discussion about transparency and privacy, and implications of the study for various stakeholders are addressed.
- The paper explicitly mentions limitations, including that the sample used for the study is not representative.
Suggestions for improvement:
- The paper leverages a non-representative sample of TikTok users (which is acknowledged in the paper), which raises questions related to generalizability of the findings.
- The distinction between exploration and exploitation is not well backed up with theory and lacks intuition. More clarity is advisable here.
- Integrating qualitative aspects through user interviews or surveys could help understand user perceptions and experience of the impact of personalization and "exploration recommendations".
- It is not well described how recommendations are labeled as exploration or exploitation; also, the local and global features that are used are introduced only briefly and are not well motivated. They seem to be very specific to the TikTok platform, and the question is what features would be needed on other, similar platforms.
- The study is conducted only on TikTok, and it is not clear if findings translate also to other platforms.
- Some design choices and parameter settings lack a motivation (filtering out users, window size, amount of hashtags,..).
- A deeper discussion of ethical implications on having up to 50% of recommendations coming not directly from user behavior, would be valuable (e.g., related to impacts on user behavior, etc.)
- Figures are in too low quality and need to be revised
- No publicly available implementation is described, which limits the reproducibility of the work.
Assessment after the rebuttal:
- The authors clarified several issues and I updated my scores based on these clarifications.
questions: - How did you come to the definitions of exploration and exploitation?
- Can you provide more details on how the recommendations were labeled into exploration and exploitation?
- Do the findings translate also to other, similar platforms?
- What ethical implications do you see with having up to 50% of recommendations not directly resulting from user behavior?
- Can you provide justifications for parameter settings and design choices (see above)?
ethics_review_flag: No
ethics_review_description: Relevant ethics are discussed in the paper.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 7
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5uQFXyFJPM | TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds | [
"Karan Vombatkere",
"Sepehr Mousavi",
"Savvas Zannettou",
"Franziska Roesner",
"Krishna P. Gummadi"
] | Recommendation algorithms for social media feeds often function as black boxes from the perspective of users. We aim to detect whether social media feed recommendations are personalized to users, and to characterize the factors contributing to personalization in these feeds.
We introduce a general framework to examine a set of social media feed recommendations for a user as a timeline.
We label items in the timeline as the result of exploration vs. exploitation of the user's interests on the part of the recommendation algorithm and introduce a set of metrics to capture the extent of personalization across user timelines.
We apply our framework to a real TikTok dataset and validate our results using a baseline generated from automated TikTok bots, as well as a randomized baseline. We also investigate the extent to which factors such as video viewing duration, liking, and following drive the personalization of content on TikTok.
Our results demonstrate that our framework produces intuitive and explainable results, and can be used to audit and understand personalization in social media feeds. | [
"Personalization",
"Social Media Feed",
"Algorithmic Recommendations",
"TikTok"
] | https://openreview.net/pdf?id=5uQFXyFJPM | gdNNOTGUwN | official_review | 1,700,669,829,098 | 5uQFXyFJPM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1732/Reviewer_cQ32"
] | review: Pros
1. The researchers delve into the degree of personalization on TikTok, which is currently the most popular short video recommendation platform. This topic is both intriguing and relatively underexplored.
2. With a real-world TikTok dataset, the authors propose a framework for measuring the level of personalization on short video recommendation platforms. This framework can potentially be applied to other platforms as well. The conclusions drawn from the study hold significance in the field.
Cons
1. The author's experiment suffers from a lack of user samples, potentially leading to biased results.
2. The features selected by the authors may not align with the features used by the actual TikTok recommendation system. As a result, it becomes challenging to directly determine whether a video is considered "explore" or "exploit."
questions: 1. Most recommendation systems rely on neural networks, which lack interpretability. How can we assess whether a video falls under the "explore" or "exploit" category?
2. In comparison to BQ, TQ shows a slight increase in the early skip rate. This finding contradicts intuition, as one would expect users to skip videos more frequently when personalization is high. Could you provide a reasonable explanation for this?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5uQFXyFJPM | TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds | [
"Karan Vombatkere",
"Sepehr Mousavi",
"Savvas Zannettou",
"Franziska Roesner",
"Krishna P. Gummadi"
] | Recommendation algorithms for social media feeds often function as black boxes from the perspective of users. We aim to detect whether social media feed recommendations are personalized to users, and to characterize the factors contributing to personalization in these feeds.
We introduce a general framework to examine a set of social media feed recommendations for a user as a timeline.
We label items in the timeline as the result of exploration vs. exploitation of the user's interests on the part of the recommendation algorithm and introduce a set of metrics to capture the extent of personalization across user timelines.
We apply our framework to a real TikTok dataset and validate our results using a baseline generated from automated TikTok bots, as well as a randomized baseline. We also investigate the extent to which factors such as video viewing duration, liking, and following drive the personalization of content on TikTok.
Our results demonstrate that our framework produces intuitive and explainable results, and can be used to audit and understand personalization in social media feeds. | [
"Personalization",
"Social Media Feed",
"Algorithmic Recommendations",
"TikTok"
] | https://openreview.net/pdf?id=5uQFXyFJPM | YawKnfP4iH | official_review | 1,701,033,120,817 | 5uQFXyFJPM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1732/Reviewer_cLes"
review: The problem addressed is very interesting: figuring out which recommendations are personalized and which are not. However,
RQ2 is not well phrased. You cannot use "certain factors", it's too vague. Be specific!
The technical part is not easy to follow. From what I understood, the authors created the ground truth by determining whether the recommended item is related to the previous user behaviour. However, this relation has been arbitrarily selected by the authors. Hence a different "activation" condition would yield different labels.
I still can't understand how the ground truth labels have been collected, if at all.
Other than the missing info about data labeling, I think the work has important implications, as the authors elaborated in the discussion.
Update: the authors have answered but could not provide improvement over the issues I raised. I am keeping my score.
questions: NA
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5uQFXyFJPM | TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds | [
"Karan Vombatkere",
"Sepehr Mousavi",
"Savvas Zannettou",
"Franziska Roesner",
"Krishna P. Gummadi"
] | Recommendation algorithms for social media feeds often function as black boxes from the perspective of users. We aim to detect whether social media feed recommendations are personalized to users, and to characterize the factors contributing to personalization in these feeds.
We introduce a general framework to examine a set of social media feed recommendations for a user as a timeline.
We label items in the timeline as the result of exploration vs. exploitation of the user's interests on the part of the recommendation algorithm and introduce a set of metrics to capture the extent of personalization across user timelines.
We apply our framework to a real TikTok dataset and validate our results using a baseline generated from automated TikTok bots, as well as a randomized baseline. We also investigate the extent to which factors such as video viewing duration, liking, and following drive the personalization of content on TikTok.
Our results demonstrate that our framework produces intuitive and explainable results, and can be used to audit and understand personalization in social media feeds. | [
"Personalization",
"Social Media Feed",
"Algorithmic Recommendations",
"TikTok"
] | https://openreview.net/pdf?id=5uQFXyFJPM | XJ81welL8M | official_review | 1,700,397,803,785 | 5uQFXyFJPM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1732/Reviewer_VhNK"
review: The paper presents a framework to analyse and measure the degree of personalisation of recommendations in social networks and applies it to a dataset of TikTok users. To measure the degree of personalisation, they propose a set of metrics to compare items engaged with by users in a window of time to the items recommended in their timeline. The authors propose the framework to be used as an auditing tool, which will become more important after the recent European Digital Services Act legislation. The authors also mention this framework could be used for explainability, although it's not the focus of the paper.
PROS:
* The topic of algorithm auditing is very relevant for practitioners in the area of personalization and recommender systems, especially after the upcoming DSA regulation.
* The presented framework is applied to a TikTok dataset with users that have given consent on donating their data.
* The paper is well written and clear.
* The proposed framework is simple but technically sound and can help in assessing the degree of exploitation vs exploration of recommendations in social networks.
* Findings match those of related work that uses automated accounts instead of real accounts.
CONS:
* The paper is focused on a specific social network, TikTok, and uses a small dataset of users (347 users filtered down to 220). This is a reasonable limitation when dealing with donated data but also makes the findings less strong.
* Related work is focused on analysing and understanding the TikTok algorithm but misses on citing work related to algorithm auditing frameworks which is the main contribution of the paper.
* The framework only measures the degree of personalization that users receive in social networks but misses important aspects such as gender or race biases which are important in ethical algorithmic auditing.
* No code is submitted which makes the framework harder to apply.
Overall, the topic of algorithm auditing is very relevant and work such as this is needed. This paper presents an auditing framework that aims to detect the degree of personalisation of algorithms (i.e., exploitation vs. exploration). However, personalisation in itself is not a problem that concerns regulators, but rather its possible adverse effects, such as promoting negative or biased content that could lead to people's mental health issues like depression or reinforcing discrimination towards other people and beliefs. Although this framework would be a starting point, it does not directly address these issues. In this sense, this work covers just one side of algorithmic auditing but misses important parts.
questions: Issues that would make the paper stronger:
* Include related work about algorithm auditing frameworks and compare their solution to theirs.
* Submit code of their framework.
* Extend the framework by incorporating other important aspects for algorithmic auditing such as gender or race bias or detection of negative topic reinforcement.
ethics_review_flag: No
ethics_review_description: No issues.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5uQFXyFJPM | TikTok and the Art of Personalization: Investigating Exploration and Exploitation on Social Media Feeds | [
"Karan Vombatkere",
"Sepehr Mousavi",
"Savvas Zannettou",
"Franziska Roesner",
"Krishna P. Gummadi"
] | Recommendation algorithms for social media feeds often function as black boxes from the perspective of users. We aim to detect whether social media feed recommendations are personalized to users, and to characterize the factors contributing to personalization in these feeds.
We introduce a general framework to examine a set of social media feed recommendations for a user as a timeline.
We label items in the timeline as the result of exploration vs. exploitation of the user's interests on the part of the recommendation algorithm and introduce a set of metrics to capture the extent of personalization across user timelines.
We apply our framework to a real TikTok dataset and validate our results using a baseline generated from automated TikTok bots, as well as a randomized baseline. We also investigate the extent to which factors such as video viewing duration, liking, and following drive the personalization of content on TikTok.
Our results demonstrate that our framework produces intuitive and explainable results, and can be used to audit and understand personalization in social media feeds. | [
"Personalization",
"Social Media Feed",
"Algorithmic Recommendations",
"TikTok"
] | https://openreview.net/pdf?id=5uQFXyFJPM | UsFEy7EuNY | decision | 1,705,909,248,766 | 5uQFXyFJPM | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper studies explore-exploit trade-offs when personalizing the TikTok feed.
Reviewers appreciated many aspects in this paper. It explores an important topic that touches upon everyday life routines of many people.
Methodology is mostly sound; many reviewers commented on the code, and the authors have responded. I appreciate that some data and parameters cannot be disclosed, and it seems the authors do a good job providing transparency by other means, given this limitation.
Many reviewers agreed the paper presents intriguing findings, which are of interest to the field, as well as interesting concepts, such as the personalization score.
The limitations section also received positive feedback.
While the paper has weaknesses, such as a lack of information about sensitive parameters and design choices, and the fact that it is conducted only on TikTok, I think it offers enough merit to warrant publication.
The authors are asked to make sure their code is public and, if the paper is accepted, to address the minor-change comments from the reviewers.
5nINTZKe4d | Investigations of Top-Level Domain Name Collisions in Blockchain Naming Services | [
"Daiki Ito",
"Yuta Takata",
"Hiroshi Kumagai",
"Masaki Kamizono"
] | Traditionally, top-level domains (TLDs) are managed by the Internet corporation for assigned names and numbers (ICANN), and the domain names under them are managed by registrars. Against such centralized management, a blockchain naming service (BNS) has been proposed to manage TLDs on blockchains without authority intervention. BNS users can register TLD strings as non-fungible tokens and manage the TLD root zone. However, such decentralized management results in the introduction of a new security issue, BNS TLD name collision, wherein the same TLD is registered in several different BNSs.
In this study, we investigated BNS TLD name collisions by analyzing TLDs registered on two BNSs: Handshake and Decentraweb. Specifically, we collected TLDs registered in Handshake and Decentraweb and the associated data, and analyzed the data registration status of BNS TLDs and BNS TLD name collisions. The analysis of 11,595,406 Handshake and 11,889 Decentraweb TLDs revealed 6,973 BNS TLD name collisions. In particular, lastname TLDs, which are intended for use as person names, yielded a large number of registered domain names. In addition, the analysis identified 10 name collisions between the BNS and operational ICANN TLDs. Further, the ICANN TLD candidates under review also had name collisions against the BNS TLDs. Consequently, based on the characteristics of these name collisions and discussions in BNS communities, we considered countermeasures against BNS TLD name collisions. For the further development of BNSs, we believe that it is essential to discuss with the existing Internet communities and coexist with the existing Internet. | [
"Blockchain Naming Service",
"Top-Level Domain",
"Name Collision"
] | https://openreview.net/pdf?id=5nINTZKe4d | zg1QKkF0w3 | official_review | 1,700,773,573,947 | 5nINTZKe4d | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission823/Reviewer_nY5H"
] | review: This paper provides a quantitative analysis of TLD name collisions between the canonical ICANN DNS system and alternative blockchain-based systems, namely Handshake and Decentraweb.
The paper provides a meaningful overview of a problem that can be expected to grow in importance in the future with tendencies to "re-decentralize" the web.
However, the paper also has potential for improvement:
* While the writing style is generally well-readable, some parts tend to be slightly "stiff" or mildly repetitive. For instance, "Handshake's blockchain uses Bcoin fork" in Section 3.2.1. Furthermore, the "Restrictions for TLD Registration" paragraphs in Section 3.2 are very similar in phrasing.
* On multiple occasions, the paper claims or at least implies that the bidding structure for acquiring TLDs in Handshake can be free in only a few situations. However, Section 5 reveals that 80% of all TLDs have been acquired for free.
This aspect also highlights where the paper's untapped potential lies: Section 5 mostly chains results and gives some explanation, but more insights and assessments of the consequences would have been desirable. For instance, do these results imply that Handshake does not have a sustainable business model, or do the few expensive bids make up for the bulk of free TLDs?
* Similarly, the phrasing of observations actively contributes to making the results seem more underwhelming at times. The authors could consider swapping their observations and the numbers underpinning those observations, i.e., focus on giving Section 5 more of a storyline instead of "mechanically" working off one aspect after the other.
* The proposed countermeasures are limited and hidden in the "Discussion" section, which is odd in face of my previous point.
How could the countermeasures prevent deliberate collisions, e.g., to prepare attacks outlined by the authors?
Most notably, the countermeasures do not consider a likely unavoidable tension between ICANN and any alternative DNS: Realistically, ICANN will have precedence over other systems in case of conflicts for practicability reasons. Contrarily, this cannot align with the philosophy behind other systems emphasizing their decentralized approach. Wouldn't a countermeasure, at least to prevent collisions with ICANN, be for ICANN to name a suffix for TLDs they will never consider registering?
* Finally, the paper would profit from a more detailed technical background on how BNSs would be used by the end-users and so, by extension, who would be affected by, e.g., attacks stemming from deliberate collisions?
**Minor Remarks:**
* Section 1: It could be argued that DNS is rather part of the Web's backbone, as the Internet's backbone could be more closely associated with the physical (network) infrastructure.
* Section 1: "non-falsifiable tokens" vs. "non-fungible tokens" (in the abstract).
* Section 5.1: Broken sentence: "(1.0We ..."
* Footnote 1 is used twice
Update: I acknowledge that I have read the authors' rebuttal comments.
questions: See review text
ethics_review_flag: No
ethics_review_description: -
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
5nINTZKe4d | Investigations of Top-Level Domain Name Collisions in Blockchain Naming Services | [
"Daiki Ito",
"Yuta Takata",
"Hiroshi Kumagai",
"Masaki Kamizono"
] | Traditionally, top-level domains (TLDs) are managed by the Internet corporation for assigned names and numbers (ICANN), and the domain names under them are managed by registrars. Against such centralized management, a blockchain naming service (BNS) has been proposed to manage TLDs on blockchains without authority intervention. BNS users can register TLD strings as non-fungible tokens and manage the TLD root zone. However, such decentralized management results in the introduction of a new security issue, BNS TLD name collision, wherein the same TLD is registered in several different BNSs.
In this study, we investigated BNS TLD name collisions by analyzing TLDs registered on two BNSs: Handshake and Decentraweb. Specifically, we collected TLDs registered in Handshake and Decentraweb and the associated data, and analyzed the data registration status of BNS TLDs and BNS TLD name collisions. The analysis of 11,595,406 Handshake and 11,889 Decentraweb TLDs revealed 6,973 BNS TLD name collisions. In particular, lastname TLDs, which are intended for use as person names, yielded a large number of registered domain names. In addition, the analysis identified 10 name collisions between the BNS and operational ICANN TLDs. Further, the ICANN TLD candidates under review also had name collisions against the BNS TLDs. Consequently, based on the characteristics of these name collisions and discussions in BNS communities, we considered countermeasures against BNS TLD name collisions. For the further development of BNSs, we believe that it is essential to discuss with the existing Internet communities and coexist with the existing Internet. | [
"Blockchain Naming Service",
"Top-Level Domain",
"Name Collision"
] | https://openreview.net/pdf?id=5nINTZKe4d | jCAWIkGO15 | decision | 1,705,909,239,256 | 5nINTZKe4d | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The paper received 5 reviews and the reviews were generally positive leaning. The authors engaged extensively with the reviewers during the discussion phase. However, some reviewers felt their responses were not addressing the core issues but rather the minor points from the reviews. Following the discussions, the reviewer recommendations were as follows: 3 recommending borderline and 2 recommending accept. I have thus settled in the middle and am recommending a weak acceptance. I believe this captures the reviews and ensuing discussions with the authors. The paper has many positives but also a few critical shortcomings. |
5nINTZKe4d | Investigations of Top-Level Domain Name Collisions in Blockchain Naming Services | [
"Daiki Ito",
"Yuta Takata",
"Hiroshi Kumagai",
"Masaki Kamizono"
] | Traditionally, top-level domains (TLDs) are managed by the Internet corporation for assigned names and numbers (ICANN), and the domain names under them are managed by registrars. Against such centralized management, a blockchain naming service (BNS) has been proposed to manage TLDs on blockchains without authority intervention. BNS users can register TLD strings as non-fungible tokens and manage the TLD root zone. However, such decentralized management results in the introduction of a new security issue, BNS TLD name collision, wherein the same TLD is registered in several different BNSs.
In this study, we investigated BNS TLD name collisions by analyzing TLDs registered on two BNSs: Handshake and Decentraweb. Specifically, we collected TLDs registered in Handshake and Decentraweb and the associated data, and analyzed the data registration status of BNS TLDs and BNS TLD name collisions. The analysis of 11,595,406 Handshake and 11,889 Decentraweb TLDs revealed 6,973 BNS TLD name collisions. In particular, lastname TLDs, which are intended for use as person names, yielded a large number of registered domain names. In addition, the analysis identified 10 name collisions between the BNS and operational ICANN TLDs. Further, the ICANN TLD candidates under review also had name collisions against the BNS TLDs. Consequently, based on the characteristics of these name collisions and discussions in BNS communities, we considered countermeasures against BNS TLD name collisions. For the further development of BNSs, we believe that it is essential to discuss with the existing Internet communities and coexist with the existing Internet. | [
"Blockchain Naming Service",
"Top-Level Domain",
"Name Collision"
] | https://openreview.net/pdf?id=5nINTZKe4d | gfWFwUWA60 | official_review | 1,699,273,427,544 | 5nINTZKe4d | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission823/Reviewer_9LjN"
] | review: ### Summary:
The paper conducts a study to look into domain name collisions both between blockchain naming services (BNSs) and traditional ICANN TLDs and among BNS services. Since BNSs build on blockchain technology, analyzing TLD-related data is a quite straightforward approach and promises to provide a complete overview. The issue of domain name collisions will only become more relevant in the future since (i) additional generic TLDs will be assigned by ICANN and (ii) the popularity of BNSs could potentially increase. Given that domain name collisions can have negative consequences, especially in light of Web3 and the transfer of funds, looking into this topic is an important matter.
### Pros:
+1: Very interesting, timely, and novel research topic
+2: Detailed analysis with lots of background information and explanations
### Cons:
-1: No mention of research artifacts or longitudinal updates
-2: Butterfly protocol is not part of the paper/study
The paper is a really interesting read, presents the information in a concise and easy-to-understand manner, and covers a previously uncovered topic. While I am also listing a few issues here, the overall quality of the paper is already quite good. Thus, please take these comments as a means to further improve your paper/work.
### Detailed Comments:
#### -1: Research Artifacts
Even though the paper presents a lot of diverse information to give a good overview of the topic at hand, the collected data probably contains even more gems. Unfortunately, the authors do not state whether they are planning to open-source their research artifacts, both in terms of software and data artifacts. In addition to allowing other researchers to go through the raw data, having access to the software/code could enable and ease follow-up research.
On a slightly different note, I would like to know whether the authors are planning to continuously update their analysis and data over time, for example, by automatically publishing new "measurements" and/or data on a dedicated website.
Both of these means could enable interesting longitudinal studies. Thus, I look forward to hearing whether this matter is on the authors' plate.
#### -2: Butterfly Protocol
In the current version, the paper considers two services for its in-depth analysis, namely, Handshake and Decentraweb. The paper also conveys why other services, such as Namecoin, Emercoin, ENS, or Unstoppable Domains, are not being considered as part of this research. However, I am surprised to discover that the Butterfly protocol (https://www.butterflyprotocol.io/) is not even mentioned in the paper. To the best of my understanding, this service is quite related to the selected ones. Consequently, at the very least, I would like to see a mention of this BNS in the main body of the paper as well as an extension of Table 11. Please educate me if I am missing something here.
That being said, I believe that the paper has significant contribution in its current form. Hence, I do not believe that the authors need to extend their analysis to include the Butterfly protocol (even if appropriate).
#### Other:
- The paper states in Section 3.2.1 and Table 11 that Handshake "uses Bcoin fork". This information appears to be incorrect since Bcoin is only a client and not a blockchain. Looking at the technical outline (https://hsd-dev.org/files/handshake.txt) seems to confirm this aspect. I believe that this aspect must be corrected.
- Would it be possible to prepare a list of TLDs that you would expect as new gTLDs for the appendix of the paper (cf. end of Section 3.3)?
- I believe that the following sentence in Section 5.4 is not correct "[...] top five IPv4 addresses set in the DNS RRs in the same manner as the DNS RRs". In any case, I am not able to grasp what the authors are trying to convey.
- The authors speculate that 191 NS records for Handshake TLDs possibly follow from the previously set default values. I am wondering whether the authors tried to look through the services' GitHub history or tried to contact the developers to confirm this hunch.
- The last sentence of the main body of the paper is quite political, in my view. Why would the authors like to see the spread of BNSs in the future? Without specifying this matter, i.e., giving a concrete reason, I would recommend the authors to omit such wording from the paper. The first part of the sentence is not affected.
- Table 11 is a nice addition to the paper and compares the different BNSs in a compact way. However, I am wondering whether the authors could extend this overview with the following properties: (a) Who operates the (underlying) blockchain of each service, (b) how many nodes are involved in the operation of the blockchain (at the time of writing), and (c) how decentralized are the nodes (in terms of operators).
#### Nits:
- Introduction: NFTs is written as "non-falsifiable tokens" in the paper; shouldn't it be "non-fungible tokens" instead?
- Related Work: Placing a comma after "best of our knowledge" would improve the readability.
- Background: Placing commas around "compliant with the ERC-721 standard" and after "SNS account IDs" would improve the readability.
- Investigation Results: Section 5.1 contains a broken sentence, which starts with "However, only 78 TLDs (1.0". Moreover, is "By contrast" in Section 5.2 correct? I would have rather used "In contrast".
### Post-Rebuttal
I kindly thank the authors for responding to the reviews and outlining their proposed changes.
Moreover, I am happy that we were able to resolve a misconception during the discussion period.
The path forward for the paper looks promising and I still would like to see the results published, despite the lack of discussing the implications of the reported findings in more detail.
questions: Is there a reason for not considering the Butterfly Protocol (https://www.butterflyprotocol.io/) both in the paper and the conducted study?
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5nINTZKe4d | Investigations of Top-Level Domain Name Collisions in Blockchain Naming Services | [
"Daiki Ito",
"Yuta Takata",
"Hiroshi Kumagai",
"Masaki Kamizono"
] | Traditionally, top-level domains (TLDs) are managed by the Internet corporation for assigned names and numbers (ICANN), and the domain names under them are managed by registrars. Against such centralized management, a blockchain naming service (BNS) has been proposed to manage TLDs on blockchains without authority intervention. BNS users can register TLD strings as non-fungible tokens and manage the TLD root zone. However, such decentralized management results in the introduction of a new security issue, BNS TLD name collision, wherein the same TLD is registered in several different BNSs.
In this study, we investigated BNS TLD name collisions by analyzing TLDs registered on two BNSs: Handshake and Decentraweb. Specifically, we collected TLDs registered in Handshake and Decentraweb and the associated data, and analyzed the data registration status of BNS TLDs and BNS TLD name collisions. The analysis of 11,595,406 Handshake and 11,889 Decentraweb TLDs revealed 6,973 BNS TLD name collisions. In particular, lastname TLDs, which are intended for use as person names, yielded a large number of registered domain names. In addition, the analysis identified 10 name collisions between the BNS and operational ICANN TLDs. Further, the ICANN TLD candidates under review also had name collisions against the BNS TLDs. Consequently, based on the characteristics of these name collisions and discussions in BNS communities, we considered countermeasures against BNS TLD name collisions. For the further development of BNSs, we believe that it is essential to discuss with the existing Internet communities and coexist with the existing Internet. | [
"Blockchain Naming Service",
"Top-Level Domain",
"Name Collision"
] | https://openreview.net/pdf?id=5nINTZKe4d | cRs236TWvf | official_review | 1,700,374,143,966 | 5nINTZKe4d | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission823/Reviewer_Y26W"
] | review: This paper investigates the top-level domain name collisions in blockchain naming services. The authors collected a large-scale dataset from two BNSs and identified existing BNS TLD name collisions.
pros:
1. This paper studies a practical and important issue.
2. The authors conducted detailed experiments.
cons:
1. How to verify the correctness of the results? Did the authors report the results to operators?
2. In addition to displaying results, it is better to conduct in-depth research on the causes of name collisions.
3. This paper has insufficient technical contributions. For example, in Section 6.1, technical solutions for resolving name collisions should be provided instead of simple discussions.
questions: all cons.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5nINTZKe4d | Investigations of Top-Level Domain Name Collisions in Blockchain Naming Services | [
"Daiki Ito",
"Yuta Takata",
"Hiroshi Kumagai",
"Masaki Kamizono"
] | Traditionally, top-level domains (TLDs) are managed by the Internet Corporation for Assigned Names and Numbers (ICANN), and the domain names under them are managed by registrars. Against such centralized management, a blockchain naming service (BNS) has been proposed to manage TLDs on blockchains without authority intervention. BNS users can register TLD strings as non-fungible tokens and manage the TLD root zone. However, such decentralized management results in the introduction of a new security issue, BNS TLD name collision, wherein the same TLD is registered in several different BNSs.
In this study, we investigated BNS TLD name collisions by analyzing TLDs registered on two BNSs: Handshake and Decentraweb. Specifically, we collected TLDs registered in Handshake and Decentraweb and the associated data, and analyzed the data registration status of BNS TLDs and BNS TLD name collisions. The analysis of 11,595,406 Handshake and 11,889 Decentraweb TLDs revealed 6,973 BNS TLD name collisions. In particular, lastname TLDs, which are intended for use as person names, yielded a large number of registered domain names. In addition, the analysis identified 10 name collisions between the BNS and operational ICANN TLDs. Further, the ICANN TLD candidates under review also had name collisions against the BNS TLDs. Consequently, based on the characteristics of these name collisions and discussions in BNS communities, we considered countermeasures against BNS TLD name collisions. For the further development of BNSs, we believe that it is essential to discuss with the existing Internet communities and coexist with the existing Internet. | [
"Blockchain Naming Service",
"Top-Level Domain",
"Name Collision"
] | https://openreview.net/pdf?id=5nINTZKe4d | F0RdOqBYmC | official_review | 1,699,241,393,985 | 5nINTZKe4d | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission823/Reviewer_TKk2"
] | review: ### Summary
In this paper, the authors conduct an empirical study about the domain name collisions in blockchain naming services. After collecting necessary information from BNS and ICANN, the authors get some interesting findings, like the characteristics of collided TLD names, the corresponding resource records, and the distribution of the owner addresses.
### Strength
- This paper investigates an interesting topic, i.e., name collisions in blockchain naming service, which is underestimated by the community currently.
- This paper delivers some interesting preliminary insights, like the existence of name collisions, the centralized management of such a decentralized domain name service, and the limited application of such a blockchain naming service.
### Weakness
- The revealed insights are not deep enough, and the adopted methods are too simple. Basically, the current insights could all be obtained through some basic data processing.
- The paper is somewhat hard to read, and some notions are not introduced properly, so it is not self-contained. For example, when introducing DNS, the authors should also give a brief explanation of what a resource record is.
### Comments
Except for the two main concerns I raised above, here are some minor concerns.
The authors conduct empirical studies on Handshake and Decentraweb. However, there exist other blockchain naming service providers; why these two were chosen should be clarified.
At L252, the authors claim Handshake uses a fork of Bcoin. However, Bcoin is neither a widely adopted name nor a well-known blockchain platform, so the authors should clarify this.
In Section 4.2.1, the authors claim that “we identified TLDs that exactly matched the names of famous organizations, corporations, brands, and web services”. However, in Section 3.2.1 and Section 3.2.2, the authors say that the registration should avoid famous companies and brands that are listed in Alexa. So, how is this possible?
At L462, the authors collected data over a two-week period. Why choose these 14 days? Moreover, the description here is vague and unclear. The BNS data should, in my opinion, already include Handshake-related data, so why collect another batch of Handshake data in the following? This data collection part should be revised and clarified.
At L503, the authors claim that “However, Decentraweb users … was low”. I don’t think this is a reasonable explanation, because at L295 the authors say registering a TLD requires at least 50 USD; compared to that cost, I don’t think the owners would refuse to spend a little more on transaction fees to set up the resource records. I think this part should be clarified. Moreover, in Table 2, we can see that there are only 3 and 74 TLDs in Decentraweb that have DNS RRs and other records, respectively. I am curious about the reasons behind such small numbers, and I doubt the representativeness of Decentraweb.
In Table 3, none of the notations are well defined, which makes the table hard to understand for readers with little domain knowledge.
Some descriptions are not clear. For example, the statements in Section 4.2.3. The sentence at L565, “Since 2023, … collisions occurred continuously”. And the sentence at L620, “We identified name collisions … 10 unique TLDs”.
Some typos also exist:
- L14: such decentralized management -> such a decentralized management;
- L81: NFT should be defined as non-fungible token;
- L310: metadata include the owner -> metadata includes the owner.
questions: Please refer to the above `Review` part.
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5nINTZKe4d | Investigations of Top-Level Domain Name Collisions in Blockchain Naming Services | [
"Daiki Ito",
"Yuta Takata",
"Hiroshi Kumagai",
"Masaki Kamizono"
] | Traditionally, top-level domains (TLDs) are managed by the Internet Corporation for Assigned Names and Numbers (ICANN), and the domain names under them are managed by registrars. Against such centralized management, a blockchain naming service (BNS) has been proposed to manage TLDs on blockchains without authority intervention. BNS users can register TLD strings as non-fungible tokens and manage the TLD root zone. However, such decentralized management results in the introduction of a new security issue, BNS TLD name collision, wherein the same TLD is registered in several different BNSs.
In this study, we investigated BNS TLD name collisions by analyzing TLDs registered on two BNSs: Handshake and Decentraweb. Specifically, we collected TLDs registered in Handshake and Decentraweb and the associated data, and analyzed the data registration status of BNS TLDs and BNS TLD name collisions. The analysis of 11,595,406 Handshake and 11,889 Decentraweb TLDs revealed 6,973 BNS TLD name collisions. In particular, lastname TLDs, which are intended for use as person names, yielded a large number of registered domain names. In addition, the analysis identified 10 name collisions between the BNS and operational ICANN TLDs. Further, the ICANN TLD candidates under review also had name collisions against the BNS TLDs. Consequently, based on the characteristics of these name collisions and discussions in BNS communities, we considered countermeasures against BNS TLD name collisions. For the further development of BNSs, we believe that it is essential to discuss with the existing Internet communities and coexist with the existing Internet. | [
"Blockchain Naming Service",
"Top-Level Domain",
"Name Collision"
] | https://openreview.net/pdf?id=5nINTZKe4d | 5KhP641Zfs | official_review | 1,700,191,707,688 | 5nINTZKe4d | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission823/Reviewer_eUic"
] | review: **Paper summary**
The paper investigates the security implications of decentralized management through Blockchain Naming Services (BNS) for Top-Level Domains (TLDs). In particular, it focuses on TLD name collisions. It collects and analyzes TLD registration data from two BNS platforms, i.e., Handshake and Decentraweb, and reveals significant challenges associated with BNS TLD name collisions.
**Strengths**
+ A new exploration on the TLD name collisions in BNS, and a necessary complementary of the studies on the collision problem in traditional ICANN
+ A large-scale analysis. More than 11 million registrations are analyzed.
+ Meaningful findings. The paper detects around 7,000 BNS TLD name collisions.
+ The paper is well-structured and well-written
**Weaknesses**
- The selection of the two BNS platforms needs to be justified
- The completeness of the data collection needs to be elaborated on
- The domain name squatting portion is a bit confusing
- Impact of the findings could be explored further
- Ethical consideration needs to be made clearer
**Detailed comments**
This paper conducts an investigation on the security implications of decentralized management through BNS for TLDs. It is a meaningful and practical study. The paper is well-written. Below I elaborate on the weaknesses listed above.
There should be a justification for selecting Handshake and Decentraweb as the BNS platforms for analysis. For a reader who is unfamiliar with this domain, the rationale behind this selection is not clearly articulated, raising questions about the representativeness of these platforms in the broader landscape of BNS solutions.
From the description on the data collection (Section 4.1.2), it is hard to assess the completeness of the data collection process. It would be good to provide an elaboration or an analysis on the completeness of the collected data to enhance the reliability of the study.
I am a bit confused by the domain name squatting analysis in Section 4.2.1. Why is it a critical component in the TLD collision problem?
The paper falls short in exploring the potential impact of the findings on the broader landscape of Internet governance and domain management. Even though domain names have collision on the TLD, the whole domain name may still differ. A more comprehensive discussion or data on how the identified collisions could affect the domain management is needed. It would be even better if real-world cases can be provided.
The paper should include a clear articulation of responsible disclosure. As the name collision can have far-reaching consequences, it is essential to responsibly disclose the findings to the stakeholders.
The sentence in line 512-514 is broken.
questions: Please refer to the comments above
ethics_review_flag: Yes
ethics_review_description: I would request the authors to responsibly disclose the identified TLD collision to the stakeholders.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5ezQQKTjUN | A Matrix Calibration Method for Similarity Matrix with Incomplete Query | [
"Changyi Ma",
"Runsheng Yu",
"Youzhi Zhang"
] | The similarity matrix is at the core of similarity search problems. However, incomplete observations are ubiquitous in real scenarios, making the similarity matrix less accurate. To estimate a high-quality similarity matrix, one popular trend is to impute the missing values into the vectors directly, which provides a simple and highly efficient way to recover the similarity matrix. However, these methods lack a theoretical guarantee because they ignore the properties of the entire similarity matrix. In this paper, based on the key insight that the similarity matrix is symmetric and enjoys the positive semi-definiteness (PSD) property, we propose a novel similarity matrix calibration method, which is scalable, adaptive, and sound. Specifically, we first show that the similarity matrix provably holds the PSD property as a constraint. Then, we propose a parallel matrix calibration method to estimate the similarity matrix so as to approximate the unknown fully observed ground-truth similarity matrix. Further, we discover its factored form, which bypasses the computation of singular values and allows fast optimization by general optimization algorithms. Stable recovery and convergence are guaranteed. Extensive similarity matrix calibration experiments on real-world datasets demonstrate that the proposed method obtains superior performance while being the fastest in comparison to baseline methods. | [
"Similarity Search",
"Incomplete Query",
"Positive Semi-definiteness",
"Similarity Matrix",
"Matrix Calibration"
] | https://openreview.net/pdf?id=5ezQQKTjUN | ch5gkvophE | decision | 1,705,909,222,891 | 5ezQQKTjUN | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This is the meta-review. The paper proposes a novel similarity matrix calibration method to estimate and approximate the unknown fully observed similarity matrix with incomplete query.
During the discussion phase, the authors provided thorough information. Additional experimental results have been added. Some major and common concerns from the original reviews have been addressed, such as adding various similarity metrics, adding one additional text dataset, and better explaining the data-missing scenarios.
Pros:
+ The idea is interesting and it studies an important and common problem in real applications.
+ The proposed approach has technical quality and novelty: it uses the positive semi-definiteness (PSD) property to estimate the similarity matrix from incomplete data.
+ Extensive experiments have been performed on 5+1 datasets, and the comparative results verify that the model is effective (in most cases) and efficient.
Cons:
- The writing can be improved, which the authors have promised to do in the revised version.
- I'm somewhat worried about whether the additional experimental results and major explanations can be fully incorporated in the revised paper. But such supplementary information is necessary and helpful. Careful re-organization and improved presentation of the content is needed.
Given the above evaluation, and since no paper is perfect, I think the paper is ready for publication, if the authors do add these improvements during the discussions into the final version. |
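The PSD constraint at the center of the calibration idea discussed above can be illustrated with a simple eigenvalue-clipping projection onto the PSD cone. This is only a hedged sketch of the general concept, not the paper's actual parallel or factored algorithm; the function name is hypothetical:

```python
import numpy as np

def project_to_psd(S):
    """Project a (possibly indefinite) similarity matrix onto the PSD cone.

    Symmetrize, eigendecompose, clip negative eigenvalues to zero, and
    reassemble. The result is the nearest PSD matrix in Frobenius norm.
    """
    S = (S + S.T) / 2.0          # enforce symmetry first
    w, V = np.linalg.eigh(S)     # eigendecomposition of a symmetric matrix
    w = np.clip(w, 0.0, None)    # drop negative eigenvalues
    return (V * w) @ V.T         # equivalent to V @ diag(w) @ V.T
```

Note that the paper's factored form is motivated precisely by avoiding this cubic-cost eigendecomposition; the sketch only shows the constraint being enforced.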
5ezQQKTjUN | A Matrix Calibration Method for Similarity Matrix with Incomplete Query | [
"Changyi Ma",
"Runsheng Yu",
"Youzhi Zhang"
] | The similarity matrix is at the core of similarity search problems. However, incomplete observations are ubiquitous in real scenarios, making the similarity matrix less accurate. To estimate a high-quality similarity matrix, one popular trend is to impute the missing values into the vectors directly, which provides a simple and highly efficient way to recover the similarity matrix. However, these methods lack a theoretical guarantee because they ignore the properties of the entire similarity matrix. In this paper, based on the key insight that the similarity matrix is symmetric and enjoys the positive semi-definiteness (PSD) property, we propose a novel similarity matrix calibration method, which is scalable, adaptive, and sound. Specifically, we first show that the similarity matrix provably holds the PSD property as a constraint. Then, we propose a parallel matrix calibration method to estimate the similarity matrix so as to approximate the unknown fully observed ground-truth similarity matrix. Further, we discover its factored form, which bypasses the computation of singular values and allows fast optimization by general optimization algorithms. Stable recovery and convergence are guaranteed. Extensive similarity matrix calibration experiments on real-world datasets demonstrate that the proposed method obtains superior performance while being the fastest in comparison to baseline methods. | [
"Similarity Search",
"Incomplete Query",
"Positive Semi-definiteness",
"Similarity Matrix",
"Matrix Calibration"
] | https://openreview.net/pdf?id=5ezQQKTjUN | aPThqWwxve | official_review | 1,701,299,539,359 | 5ezQQKTjUN | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission791/Reviewer_Y2Tp"
] | review: The paper describes a fast matrix calibration method when a full similarity matrix is not available due to storage/transmission issues. The paper details the theoretical foundation of the proposed approach, describes algorithm and overheads and demonstrates the improved performance in relation to 2 baselines.
Pros:
- (as far as I can tell) technically interesting, novel approach
Cons:
- The graphs and figures in Section 6 are so small that they're basically unreadable in the printed form of the paper.
- There are a number of spelling/grammar issues in the paper and the English is often difficult to follow...
- The two motivating scenarios regarding missing entries in the similarity matrix outlined in the first paragraph of the introduction are both not fully compelling; what do the authors mean exactly by "being measures by the incomplete data samples"? What is the cause of features being unknown? A more concrete example would be very beneficial here. Also, with error-correcting codes and redundant storage systems, it is not clear to me why storage/transmission errors should be a common occurrence. This is especially true as Section 2.2 suggests that a somewhat large fraction of the data should be missing for the proposed method to be applicable.
- The use of the calibrated similarity metric in the context of similarity search isn't fully compelling either; to the best of my knowledge, nearly all ANN techniques use relatively simple similarity measures over vectors of values (or a model to approximate the similarity function), but not explicitly materialized matrices.
- While similarity estimation is relevant to several different types of applications that are relevant to the Web Conference, I am less sure that the technical contributions of this paper would be a good match with the conference audience.
questions: - Are there any other application scenarios (other than data being lost in storage/transit) for which the proposed techniques are relevant?
- It would be important to understand what other similarity measures the proposed method can reconstruct (Section 3.2. only gives pointers to [35,48]) and I would like the authors to describe this in more detail (and also, if and how this would affect accuracy).
- What do the authors mean by "the assumption of data samples" (Section 1)?
ethics_review_flag: No
ethics_review_description: None
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 5
technical_quality: 6
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
5ezQQKTjUN | A Matrix Calibration Method for Similarity Matrix with Incomplete Query | [
"Changyi Ma",
"Runsheng Yu",
"Youzhi Zhang"
] | The similarity matrix is at the core of similarity search problems. However, incomplete observations are ubiquitous in real scenarios, making the similarity matrix less accurate. To estimate a high-quality similarity matrix, one popular trend is to impute the missing values into the vectors directly, which provides a simple and highly efficient way to recover the similarity matrix. However, these methods lack a theoretical guarantee because they ignore the properties of the entire similarity matrix. In this paper, based on the key insight that the similarity matrix is symmetric and enjoys the positive semi-definiteness (PSD) property, we propose a novel similarity matrix calibration method, which is scalable, adaptive, and sound. Specifically, we first show that the similarity matrix provably holds the PSD property as a constraint. Then, we propose a parallel matrix calibration method to estimate the similarity matrix so as to approximate the unknown fully observed ground-truth similarity matrix. Further, we discover its factored form, which bypasses the computation of singular values and allows fast optimization by general optimization algorithms. Stable recovery and convergence are guaranteed. Extensive similarity matrix calibration experiments on real-world datasets demonstrate that the proposed method obtains superior performance while being the fastest in comparison to baseline methods. | [
"Similarity Search",
"Incomplete Query",
"Positive Semi-definiteness",
"Similarity Matrix",
"Matrix Calibration"
] | https://openreview.net/pdf?id=5ezQQKTjUN | YL6X3v4B6H | official_review | 1,700,699,479,163 | 5ezQQKTjUN | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission791/Reviewer_Bfra"
] | review: Advantage:
1.Quality: This paper proposes a new similarity matrix calibration method and efficient algorithm to address incomplete observations in similarity search problems. The quality of the paper is high.
2.Clarity: The discourse of the paper is clear and coherent, and the content of each part is naturally connected and easy to understand. For the proposed algorithm, the paper provides pseudo code with detailed steps and explanations, making it easy for readers to understand and implement.
3.Originality: This paper proposes a novel matrix calibration method that uses the positive semi-definiteness (PSD) property to estimate the similarity matrix from incomplete data. This method has not been reported in the existing literature, and therefore it has high originality.
4.Significance: The method proposed in this paper is of great significance for solving incomplete observations of similarity search problems. In real life, incomplete observations are a common phenomenon, so this method has broad application prospects.
Disadvantage:
1.For certain specific situations, this method may not achieve the best results. The experimental results in the paper also indicate that the method did not achieve optimal results in all cases.
2.The paper conducted experiments on five datasets, but only one text dataset was included. This cannot fully demonstrate the effectiveness of this method in different fields and datasets with higher dimensions.
questions: 1.The article mentions the use of cosine similarity as a measure. However, in other contexts, other similarity measures such as Euclidean distance, Jaccard index, etc. may be more appropriate. Can the methods in the article be extended to cover these other similarity measures?
2.The article mentions various application scenarios for matrix calibration problems, but does not delve into the specific implementation details and performance evaluation in these application scenarios. If more information can be provided on the specific implementation and performance evaluation in these application scenarios, it will help readers better understand how to use matrix calibration methods in practical problems.
3.Providing the experimental results of the article's methods on more text datasets would greatly improve the persuasiveness of the article.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
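On the reviewer's question about extending the approach to other similarity measures: for squared Euclidean distances, the classical-MDS double-centering trick recovers a PSD Gram matrix, which suggests the PSD constraint is not specific to cosine similarity. This is a hedged illustration of a standard linear-algebra fact, not code from the paper, and the function name is illustrative:

```python
import numpy as np

def gram_from_sq_dists(D):
    """Classical-MDS double centering.

    For a matrix D of squared Euclidean distances between n points,
    B = -1/2 * J @ D @ J (with J the centering matrix) equals the Gram
    matrix of the mean-centered points, and is therefore PSD.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return -0.5 * J @ D @ J
```

Whether the paper's specific calibration guarantees carry over to such derived Gram matrices would still need to be checked case by case.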
5ezQQKTjUN | A Matrix Calibration Method for Similarity Matrix with Incomplete Query | [
"Changyi Ma",
"Runsheng Yu",
"Youzhi Zhang"
] | The similarity matrix is at the core of similarity search problems. However, incomplete observations are ubiquitous in real scenarios, making the similarity matrix less accurate. To estimate a high-quality similarity matrix, one popular trend is to impute the missing values into the vectors directly, which provides a simple and highly efficient way to recover the similarity matrix. However, these methods lack a theoretical guarantee because they ignore the properties of the entire similarity matrix. In this paper, based on the key insight that the similarity matrix is symmetric and enjoys the positive semi-definiteness (PSD) property, we propose a novel similarity matrix calibration method, which is scalable, adaptive, and sound. Specifically, we first show that the similarity matrix provably holds the PSD property as a constraint. Then, we propose a parallel matrix calibration method to estimate the similarity matrix so as to approximate the unknown fully observed ground-truth similarity matrix. Further, we discover its factored form, which bypasses the computation of singular values and allows fast optimization by general optimization algorithms. Stable recovery and convergence are guaranteed. Extensive similarity matrix calibration experiments on real-world datasets demonstrate that the proposed method obtains superior performance while being the fastest in comparison to baseline methods. | [
"Similarity Search",
"Incomplete Query",
"Positive Semi-definiteness",
"Similarity Matrix",
"Matrix Calibration"
] | https://openreview.net/pdf?id=5ezQQKTjUN | J3JGAtnbzO | official_review | 1,701,231,995,219 | 5ezQQKTjUN | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission791/Reviewer_jhKk"
review: To solve the problem of incomplete data when calculating the similarity matrix, this paper introduces a new algorithm based on the positive semi-definiteness (PSD) property of the similarity matrix, where the similarity matrix is estimated by iteratively approximating the unknown fully observed ground-truth similarity matrix. Overall, the motivation and formula derivations are clear. However, the results and their analysis are sometimes confusing or incorrect.
Cons:
(1)The caption of Table 1 is wrong. “n=1000 query items” should be “n=1000 complete search samples”, as described in the earlier sections.
(2)In section 6.5, the conclusion that “Overall, the RMSE decreased with the increase of missing ratio rho” is wrong, as can be seen by comparing Figures 1–3. For example, taking SIFT as an example, we cannot draw this conclusion. Besides, to check how the RMSE changes across different rho, it would be better to merge Figures 1–3; currently, it is difficult to draw a correct conclusion from these three separate figures.
(3)The conclusion that “RMSE was not much changed with a fixed missing ratio on a specific rho in most cases.” is also wrong in section 6.5. It is apparent that the RMSE changes a lot for specific tol1 for CG and tol2 for QN on the MNIST, CIFAR, PROTEIN and RCV1 datasets. Such a situation is apparent for different rho.
(4)To verify the goodness of the matrix calibration method, it would be better to conduct experiments on a specific retrieval task using the similarity matrix. In this way, the superiority of the proposed method could be better validated.
questions: (1)In the similarity matrix initialization phase, what is the meaning of “common features that are observed in both x_i and x_j.” Can you give an example?
(2)Can you explain in more detail the adopted evaluation metric? Has this metric been used by other baseline methods? Is this metric suitable for the Missing Value Imputation Methods? As the authors report, the RMSE of the Missing Value Imputation Methods is much larger than that of the proposed method, so I am curious whether the adopted metric is suitable for these baseline methods.
(3)The authors should double-check the conclusion statements drawn from the tables and figures.
ethics_review_flag: No
ethics_review_description: No ethical issue.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
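One plausible reading of the "common features that are observed in both x_i and x_j" initialization the reviewer asks about in question (1) is to compute cosine similarity only over the entries observed in both vectors, with missing entries marked as NaN. The following is a hedged sketch of that interpretation, not the paper's actual initialization; the function name and the fallback value are illustrative choices:

```python
import numpy as np

def cosine_on_common(x, y):
    """Cosine similarity restricted to features observed in both x and y.

    Missing values are encoded as NaN; if no feature is observed in both
    vectors, fall back to 0 (no evidence of similarity).
    """
    mask = ~np.isnan(x) & ~np.isnan(y)   # features present in both vectors
    if not mask.any():
        return 0.0
    xc, yc = x[mask], y[mask]
    denom = np.linalg.norm(xc) * np.linalg.norm(yc)
    return float(xc @ yc / denom) if denom > 0 else 0.0
```

For example, for x = [1, NaN, 2] and y = [1, 1, 2], only the first and third entries are shared; the restricted vectors are proportional, giving similarity 1.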
5ezQQKTjUN | A Matrix Calibration Method for Similarity Matrix with Incomplete Query | [
"Changyi Ma",
"Runsheng Yu",
"Youzhi Zhang"
] | The similarity matrix is at the core of similarity search problems. However, incomplete observations are ubiquitous in real scenarios, making the similarity matrix less accurate. To estimate a high-quality similarity matrix, one popular trend is to impute the missing values into the vectors directly, which provides a simple and highly efficient way to recover the similarity matrix. However, these methods lack a theoretical guarantee because they ignore the properties of the entire similarity matrix. In this paper, based on the key insight that the similarity matrix is symmetric and enjoys the positive semi-definiteness (PSD) property, we propose a novel similarity matrix calibration method, which is scalable, adaptive, and sound. Specifically, we first show that the similarity matrix provably holds the PSD property as a constraint. Then, we propose a parallel matrix calibration method to estimate the similarity matrix so as to approximate the unknown fully observed ground-truth similarity matrix. Further, we discover its factored form, which bypasses the computation of singular values and allows fast optimization by general optimization algorithms. Stable recovery and convergence are guaranteed. Extensive similarity matrix calibration experiments on real-world datasets demonstrate that the proposed method obtains superior performance while being the fastest in comparison to baseline methods. | [
"Similarity Search",
"Incomplete Query",
"Positive Semi-definiteness",
"Similarity Matrix",
"Matrix Calibration"
] | https://openreview.net/pdf?id=5ezQQKTjUN | 1Gui1T6Hxz | official_review | 1,700,674,864,332 | 5ezQQKTjUN | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission791/Reviewer_Rasf"
] | review: The paper proposes a novel similarity matrix calibration method. The authors propose a parallel matrix calibration method to estimate the similarity matrix to approximate the unknown fully observed ground-truth similarity matrix. They also discover its factored form, which bypasses the computation of singular values and allows fast optimization by a general optimization algorithm. Stable recovery and convergence are guaranteed. Extensive similarity matrix calibration experiments on the real-world dataset demonstrated that the proposed method obtains superior performance while being the fastest compared to baseline methods.
The idea presented in the paper is interesting. The paper falls within the WWW 2024 call for papers, and the similarity search problem justifies the need for such an approach. The paper is an interesting piece of work to read. The novelty is multifold. Two different algorithms are proposed: first, the authors introduce the Basic Similarity Vector Calibration (BSVC) method to solve the similarity search problem with incomplete queries. Then they further reduce the computational complexity with Conjugate Gradient and Quasi-Newton-based Approximated BSVC (CQABSVC) methods to find the approximated solutions. The novelty of the paper is represented by the two algorithms and their theoretical justification. Moreover, the authors propose an experimental evaluation of the performance of the two new algorithms on five public datasets. The new proposed approaches significantly advance the state of the art. From an experimental point of view, the implementation of the algorithms (MATLAB) is not available.
From an experimental point of view, while Table 1 looks sound, some of the details in Table 2 are unclear. First, it is not clear which results correspond to n = 1,000 and which to n = 5,000. Assuming n = 1,000 is the table on the left, why does computing for n = 1,000 cost more than computing for n = 5,000 for some datasets (CIFAR / SIFT / PROTEIN) and methods (SMC, QASMC)? Please explain this behavior.
Overall, although the paper significantly falls outside my main area of expertise, I found it an interesting piece of work to read, and I think it reaches the bar for WWW 2024.
questions: 1) Could you please explain the behavior shown in Table 2 for some datasets? Why do some methods present lower time costs for n = 5,000? IMHO it is counterintuitive.
The authors answer the question above during rebuttal.
ethics_review_flag: No
ethics_review_description: does not apply
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
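The PSD property at the heart of the calibration method discussed in the record above can be illustrated with a minimal, self-contained sketch (this is a generic Gram-matrix construction, not the paper's BSVC/CQABSVC algorithms): a similarity matrix built from pairwise inner products is symmetric, and its quadratic form is never negative.

```python
import random

def gram(vectors):
    """Similarity matrix of pairwise inner products: S = X X^T (symmetric, PSD)."""
    n = len(vectors)
    return [[sum(a * b for a, b in zip(vectors[i], vectors[j])) for j in range(n)]
            for i in range(n)]

def quad_form(S, v):
    """v^T S v; this is non-negative for every v exactly when S is PSD."""
    n = len(S)
    return sum(v[i] * S[i][j] * v[j] for i in range(n) for j in range(n))

random.seed(0)
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(5)]
S = gram(X)

# Symmetry: S[i][j] == S[j][i].
assert all(abs(S[i][j] - S[j][i]) < 1e-12 for i in range(5) for j in range(5))
# PSD: v^T S v = ||X^T v||^2 >= 0 for any direction v (tolerance for float error).
for _ in range(100):
    v = [random.gauss(0, 1) for _ in range(5)]
    assert quad_form(S, v) >= -1e-9
```

This is also the intuition behind using PSD as a calibration constraint: any similarity matrix of inner products must lie in the PSD cone, so an estimate violating PSD cannot be a valid fully observed similarity matrix.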
5OClaaZpBL | MSynFD: Multi-hop Syntax aware Fake News Detection | [
"Liang Xiao",
"Qi Zhang",
"Chongyang Shi",
"Shoujin Wang",
"Usman Naseem",
"Liang Hu"
] | The proliferation of social media platforms has fueled the rapid dissemination of fake news, posing threats to our real-life society. Existing methods use multimodal data or contextual information to enhance the detection of fake news by analyzing news content and/or its social context. However, these methods often overlook essential textual news content (articles) and heavily rely on sequential modeling and global attention to extract semantic information. These existing methods fail to handle the complex, subtle twists in news articles, such as syntax-semantics mismatches and prior biases, leading to lower performance and potential failure when modalities or social context are missing. To bridge these significant gaps, we propose a novel multi-hop syntax aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news. Specifically, we introduce a syntactical dependency graph and design a multi-hop subgraph aggregation mechanism to capture multi-hop syntax. It extends the effect of word perception, leading to effective noise filtering and adjacent relation enhancement. Subsequently, a sequential relative position-aware Transformer is designed to capture the sequential information, together with an elaborate keyword debiasing module to mitigate the prior bias. Extensive experimental results on two public benchmark datasets verify the effectiveness and superior performance of our proposed MSynFD over state-of-the-art detection models. | [
"Fake News Detection",
"Graph Neural Network",
"Debias"
] | https://openreview.net/pdf?id=5OClaaZpBL | vF3SHThmWw | decision | 1,705,909,253,131 | 5OClaaZpBL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: # Strengths:
* Innovative Approach: The use of a graph neural network to leverage syntax information for detecting subtleties in fake news is innovative.
* Effective Integration: The paper effectively integrates a syntactical dependency tree with a transformer model, showcasing improvements in fake news detection.
* Detailed Documentation: The approach is well-documented, aiding replicability.
* Empirical Validation: The effectiveness of the MSynFD method is supported by empirical validation on two different datasets.
* Quality Presentation: The paper is well-structured, with clear illustrations and coherent writing.
# Weaknesses:
* Marginal Performance Gains: The performance improvement over prior methods is marginal (about 1%).
* Lack of Genre Analysis: The paper could benefit from a breakdown of results by the genre of fake news.
* Limited Discussion on Related Work: Some reviewers noted a lack of discussion on graph-based semantic enhancement methods, specifically in the context of fake news detection.
* Perceived Disconnect in Modules: The relevance and integration of the Keywords Debiasing module with other parts of the MSynFD were questioned.
* Relevance to Fake News Detection: The relevance of the proposed method to the specific task of fake news detection, as opposed to general NLU tasks, was not clear.
# Overall:
The paper presents a novel and useful contribution to the field of fake news detection, particularly in handling subtle linguistic nuances in news articles. While the approach is well-motivated and empirically validated, the paper could be strengthened by addressing the weaknesses noted by the reviewers, such as marginal performance gains, lack of genre-specific analysis, and clearer integration and relevance of different model components. The paper would also benefit from addressing the reviewers' questions in future revisions to enhance clarity and understanding. |
5OClaaZpBL | MSynFD: Multi-hop Syntax aware Fake News Detection | [
"Liang Xiao",
"Qi Zhang",
"Chongyang Shi",
"Shoujin Wang",
"Usman Naseem",
"Liang Hu"
] | The proliferation of social media platforms has fueled the rapid dissemination of fake news, posing threats to our real-life society. Existing methods use multimodal data or contextual information to enhance the detection of fake news by analyzing news content and/or its social context. However, these methods often overlook essential textual news content (articles) and heavily rely on sequential modeling and global attention to extract semantic information. These existing methods fail to handle the complex, subtle twists in news articles, such as syntax-semantics mismatches and prior biases, leading to lower performance and potential failure when modalities or social context are missing. To bridge these significant gaps, we propose a novel multi-hop syntax aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news. Specifically, we introduce a syntactical dependency graph and design a multi-hop subgraph aggregation mechanism to capture multi-hop syntax. It extends the effect of word perception, leading to effective noise filtering and adjacent relation enhancement. Subsequently, a sequential relative position-aware Transformer is designed to capture the sequential information, together with an elaborate keyword debiasing module to mitigate the prior bias. Extensive experimental results on two public benchmark datasets verify the effectiveness and superior performance of our proposed MSynFD over state-of-the-art detection models. | [
"Fake News Detection",
"Graph Neural Network",
"Debias"
] | https://openreview.net/pdf?id=5OClaaZpBL | Wvde8Te6y7 | official_review | 1,700,762,078,371 | 5OClaaZpBL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission938/Reviewer_s76Y"
] | review: This paper proposes a multi-hop syntax aware fake news detection method MSynFD, which incorporates complementary syntax information to deal with fake news with subtle twists.
Pros: This paper proposes a useful method, MSynFD, to handle the complex, subtle twists in news articles. The effectiveness of this method is substantiated through empirical validation.
Cons:
1. This paper lacks a discussion on graph-based semantic enhancement methods. In fact, even when narrowing the scope to the fake news detection task, similar methods still exist:
Xu, Weizhi, et al. "Evidence-aware fake news detection with graph neural networks." Proceedings of the ACM Web Conference 2022. 2022.
2. The Keywords Debiasing module exhibits a certain degree of disconnection with other modules of MSynFD. The relevance between this module and the core motivation of the present study appears to be weak.
3. The motivation of this paper and the proposed MSynFD method seem applicable to all NLU tasks requiring fine-grained semantic comprehension, and the relevance to the fake news detection task seems to be weak.
questions: Apart from the paper mentioned in Cons 1, there is another parallel work closely related to this paper. What differences do you perceive between this paper and MSynFD?
Chen, Zhendong, et al. "A syntactic multi-level interaction network for rumor detection." *Neural Computing and Applications* (2023): 1-14.
ethics_review_flag: No
ethics_review_description: none
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5OClaaZpBL | MSynFD: Multi-hop Syntax aware Fake News Detection | [
"Liang Xiao",
"Qi Zhang",
"Chongyang Shi",
"Shoujin Wang",
"Usman Naseem",
"Liang Hu"
] | The proliferation of social media platforms has fueled the rapid dissemination of fake news, posing threats to our real-life society. Existing methods use multimodal data or contextual information to enhance the detection of fake news by analyzing news content and/or its social context. However, these methods often overlook essential textual news content (articles) and heavily rely on sequential modeling and global attention to extract semantic information. These existing methods fail to handle the complex, subtle twists in news articles, such as syntax-semantics mismatches and prior biases, leading to lower performance and potential failure when modalities or social context are missing. To bridge these significant gaps, we propose a novel multi-hop syntax aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news. Specifically, we introduce a syntactical dependency graph and design a multi-hop subgraph aggregation mechanism to capture multi-hop syntax. It extends the effect of word perception, leading to effective noise filtering and adjacent relation enhancement. Subsequently, a sequential relative position-aware Transformer is designed to capture the sequential information, together with an elaborate keyword debiasing module to mitigate the prior bias. Extensive experimental results on two public benchmark datasets verify the effectiveness and superior performance of our proposed MSynFD over state-of-the-art detection models. | [
"Fake News Detection",
"Graph Neural Network",
"Debias"
] | https://openreview.net/pdf?id=5OClaaZpBL | N1ttXkjbnS | official_review | 1,701,427,872,679 | 5OClaaZpBL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission938/Reviewer_168y"
] | review: This paper presents a new approach for fake news detection that uses graph neural networks to make use of syntax information in order to catch twists in the framing of the argument. The authors implement this technique and connect it with a transformer model and show that it improves upon the prior state-of-the-art on two different fake news detection datasets, one in Chinese (Weibo) and one in English (GossipCop). They also present ablation studies and representative examples from the datasets to help analyze the model's performance.
This is an interesting paper that makes a useful contribution to the detection of fake news in situations where the exact stance of the article is difficult to understand due to subtle changes in wording.
The approach is well-motivated and documented in detail, which aids its replicability.
The ablation analyses and case studies presented are useful in understanding which factors affect the system’s performance.
However, the performance gains achieved are marginal (~1%) over prior approaches.
It would be helpful to break these results down by the genre of the fake news articles to better understand if there are specific types that this model is better suited for.
The paper presentation is of high quality and the figures are very well made.
---
I have read and replied to the authors' response and made the necessary changes to my review.
questions: Do the authors have any intuition as to why the optimal number of hops for the two datasets is 4 and 3? Does it relate only to the average length of their examples or also the language- and domain-specific phrase structure?
Typos and presentation improvements:
Lines 620 - 224: future tense is used to describe the experimental setup but it should be past tense instead.
Line 714: “... mitigate the [prior] bias from …”
Line 800 - 870: the style of quotation marks should be standardized.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
5OClaaZpBL | MSynFD: Multi-hop Syntax aware Fake News Detection | [
"Liang Xiao",
"Qi Zhang",
"Chongyang Shi",
"Shoujin Wang",
"Usman Naseem",
"Liang Hu"
] | The proliferation of social media platforms has fueled the rapid dissemination of fake news, posing threats to our real-life society. Existing methods use multimodal data or contextual information to enhance the detection of fake news by analyzing news content and/or its social context. However, these methods often overlook essential textual news content (articles) and heavily rely on sequential modeling and global attention to extract semantic information. These existing methods fail to handle the complex, subtle twists in news articles, such as syntax-semantics mismatches and prior biases, leading to lower performance and potential failure when modalities or social context are missing. To bridge these significant gaps, we propose a novel multi-hop syntax aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news. Specifically, we introduce a syntactical dependency graph and design a multi-hop subgraph aggregation mechanism to capture multi-hop syntax. It extends the effect of word perception, leading to effective noise filtering and adjacent relation enhancement. Subsequently, a sequential relative position-aware Transformer is designed to capture the sequential information, together with an elaborate keyword debiasing module to mitigate the prior bias. Extensive experimental results on two public benchmark datasets verify the effectiveness and superior performance of our proposed MSynFD over state-of-the-art detection models. | [
"Fake News Detection",
"Graph Neural Network",
"Debias"
] | https://openreview.net/pdf?id=5OClaaZpBL | DDwBQQRbFf | official_review | 1,700,702,769,474 | 5OClaaZpBL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission938/Reviewer_c63p"
] | review: This research utilizes a graph attention model and a dependency tree for text embedding. This embedding is then merged with another obtained from a transformer-based model, aiming to identify fake news. The effectiveness of this approach is demonstrated through its performance across six evaluation metrics and two datasets, highlighting the framework's superiority.
Key Strengths:
- Incorporating a dependency tree into the graph attention models enhances the focus on significant words and reduces noise, as compared to relying on the text's original sequential order. This approach is innovative.
- The paper's illustrations effectively convey the authors' concepts, aiding in the clarity and comprehension of their work.
- The writing is coherent and well-structured, particularly in the sections introducing concepts, which facilitates easy understanding of the paper.
questions: - A news article may contain multiple sentences; how could your framework use the dependency tree to represent such an article?
- What criteria are used to select a central word in a sentence? Additionally, how many central words are required for a news article? Does this depend on the number of sentences or the overall length of the news?
- In Figure 2, the three (+) symbols are used differently: two in the Multi-hop Syntax Aware Module (Equation 4) and the Semantic Aware Module (Equation 6) signify addition, while the one in the Fake News Detector indicates embedding concatenation (line 500). It's recommended to use distinct symbols for these different operations to prevent confusion.
- Please recheck the notation in Figure 2 (top right corner) to confirm if the 'm_R' in the Syntax Aware Module should be 'm_G' as shown in Equation 4. If not, an explanation of what 'm_G' represents would be helpful.
- There's a discrepancy in terminology between Figure 2, which mentions a Sequence Aware Module, and the Methods section, which refers to a Semantic Aware Module (4.3). Consistency in terminology is advised.
- The use of only two datasets (Weibo and GossipCop) in this study may limit its generalizability.
- In the ablation study, could an additional experiment be conducted: an ensemble of the graph attention model and the transformer-based model?
ethics_review_flag: No
ethics_review_description: There are no ethics issues for this work.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
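The "multi-hop" idea discussed in the MSynFD reviews above — letting a word perceive words within k dependency-graph hops rather than only its direct neighbors — can be sketched generically. This is plain k-hop reachability on an undirected graph, not the paper's attention-weighted aggregation, and the toy sentence and edges are made up for illustration:

```python
from collections import deque

def k_hop_neighbors(n, edges, k):
    """For each node, the set of other nodes within k hops on an undirected graph."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    hops = []
    for s in range(n):
        dist = {s: 0}          # BFS from s, truncated at depth k
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == k:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        hops.append({w for w, d in dist.items() if 0 < d <= k})
    return hops

# Toy dependency graph for "fake news spreads fast" (edges illustrative, not a real parse).
edges = [(0, 1), (1, 2), (2, 3)]   # fake-news, news-spreads, spreads-fast
one_hop = k_hop_neighbors(4, edges, 1)
two_hop = k_hop_neighbors(4, edges, 2)
assert one_hop[0] == {1}           # "fake" directly touches only "news"
assert two_hop[0] == {1, 2}        # with 2 hops, "fake" also perceives "spreads"
```

Increasing k enlarges each word's receptive field over the dependency graph, which is the mechanism the reviewers' question about the optimal number of hops (4 vs. 3 on the two datasets) is probing.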
4zmXFCXFI7 | Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints | [
"Christopher Liaw",
"Aranyak Mehta",
"Wennan Zhu"
] | We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), which is the worst case ratio between the liquid welfare of any equilibrium and the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal, among deterministic mechanisms, in this setting. Without any assumptions, the PoA of FPA is $n$ which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental for efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with the settings without budget constraints. | [
"auto-bidding",
"auction design",
"price of anarchy",
"mechanism design"
] | https://openreview.net/pdf?id=4zmXFCXFI7 | nC37N2Mfdk | official_review | 1,700,219,168,607 | 4zmXFCXFI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1925/Reviewer_cZGF"
] | review: This paper studies the efficiency of First Price Auction (FPA) in auto-bidding with return on spend (ROS) and budget constraints. The efficiency is measured by the price of anarchy (PoA), which is the worst ratio between the liquid welfare of optimal fractional allocation and any equilibrium. This paper first shows that under both ROS and budget constraints, the PoA of FPA is n, which is tight. With a natural assumption that each bidder’s value for any query does not exceed her budget, the tight PoA is 2, which also matches the tight result without budget constraints. This paper also introduces the Integral Price of Anarchy (I-PoA) which is the worst ratio between the liquid welfare of optimal integral allocation and any equilibrium. They further show that the I-PoA of FPA is 2, which is also tight. Besides FPA, this paper proposes two randomized mechanisms called randomized FPA (rFPA) and quasi-proportional FPA and shows that the PoA of rFPA is 1.8, and the PoA of quasi-proportional FPA is 2.
Strengths:
1) The auto-bidding problem is a central problem in the Web Conference since it has rich applications related to the web internet and economics, e.g., online advertising.
2) This paper studies a natural extension of auto-bidding, by considering both ROS and budget constraints, which fills the gaps in previous research. At the same time, this paper also draws a near-complete picture for the efficiency of FPA, under these two constraints.
Weaknesses:
1) The technical contribution of the article is not very strong. While the results are almost complete, most of them are based on similar analyses. They can be regarded as a natural extension of each other, e.g., Theorem 3.3, Theorem 3.6, and Theorem 3.8.
2) The structure of the paper can be improved. For example, Section 3 covers most of the results of this paper, including the upper and lower bounds of the PoA of FPA, the upper and lower bounds of the I-PoA of FPA, and the upper and lower bounds of the PoA under the small value assumption. However, all these results are placed in the same subsection. It would be better if they were separated into 2 to 3 subsections.
questions: Regarding the quasi-proportional FPA, in my understanding, when n approaches infinity, this mechanism essentially does the same thing as the (original) FPA (allocate as much as possible to the largest bidder)? But their PoAs are very different: one is 2 and the other is n. Can you explain why?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
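The reviewer's question above contrasts deterministic FPA (winner takes all) with quasi-proportional allocation. A minimal numeric sketch, assuming the standard quasi-proportional rule x_i = w(b_i) / sum_j w(b_j) with the simplest weight w(b) = b (the paper may use a different weight function):

```python
def quasi_prop_shares(bids):
    """Quasi-proportional allocation: bidder i wins with probability b_i / sum(bids)."""
    total = sum(bids)
    return [b / total for b in bids]

# Unlike deterministic FPA, the top bidder's share grows smoothly with its bid
# and only approaches 1 in the limit of an infinite bid ratio.
for ratio in [1, 10, 100, 1000]:
    top_share = quasi_prop_shares([ratio, 1.0])[0]
    assert top_share == ratio / (ratio + 1)

assert quasi_prop_shares([3.0, 1.0]) == [0.75, 0.25]
```

Even when one bid dominates, every bidder retains a positive share, so equilibrium bidding incentives — and hence the PoA — can differ from deterministic FPA even though the allocations look similar in the limit.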
4zmXFCXFI7 | Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints | [
"Christopher Liaw",
"Aranyak Mehta",
"Wennan Zhu"
] | We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), which is the worst case ratio between the liquid welfare of any equilibrium and the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal, among deterministic mechanisms, in this setting. Without any assumptions, the PoA of FPA is $n$ which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental for efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with the settings without budget constraints. | [
"auto-bidding",
"auction design",
"price of anarchy",
"mechanism design"
] | https://openreview.net/pdf?id=4zmXFCXFI7 | la45kUUKQq | official_review | 1,700,425,931,643 | 4zmXFCXFI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1925/Reviewer_txby"
] | review: ### Summary
The paper studies the performance of first-price auctions (FPA) and variants in the autobidding world, with both ROS constraints and budget constraints. The authors present fairly comprehensive results, including matching upper and lower bounds in natural several settings, as well as upper bounds for randomized and "smoothed" versions of FPA.
### Strengths
The problem studied is of theoretical and practical importance. The focus on budget constraints is, to my knowledge, a novel perspective. To this end, the paper presents a strong conceptual message: budget constraints do make things significantly different for FPA. The proofs contain ideas that might be useful for future research in autobidding with budget constraints.
### Weaknesses
The paper is a bit specific in that it talks almost exclusively about FPA and variants. Another gap that I'd like to see filled is the (non)existence of equilibria, which the authors also mention as a future direction.
questions: Around line 103, definition of PoA: defined this way the PoA should be at most 1?
Line 157, "in a couple setting": "a couple of settings"?
Lines 390 and 392: extra "("
Line 427, "OPT": I know what you mean but technically OPT is a real number...
Line 449, "when a Nash equilibrium exists": is there anything you can say about its existence?
FPA + uniform bidding: the authors say the source of inefficiency is the "all or nothing" phenomenon, which makes sense. So I'm curious what would happen in a smoothed world. For example, if we take the worst-case instance and add a tiny noise to every value, is it still possible to find an instance (before adding noise) that forces a PoA of almost n? Relatedly, is it possible that rFPA gets a much better PoA than n with n > 2 bidders?
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
4zmXFCXFI7 | Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints | [
"Christopher Liaw",
"Aranyak Mehta",
"Wennan Zhu"
] | We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), which is the worst case ratio between the liquid welfare of any equilibrium and the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal, among deterministic mechanisms, in this setting. Without any assumptions, the PoA of FPA is $n$ which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental for efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with the settings without budget constraints. | [
"auto-bidding",
"auction design",
"price of anarchy",
"mechanism design"
] | https://openreview.net/pdf?id=4zmXFCXFI7 | k6xLfP0UJ6 | official_review | 1,701,447,183,422 | 4zmXFCXFI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1925/Reviewer_CnoD"
] | review: This is a very well-written paper with very strong results. Its contributions are as follows:
1. It shows that non-truthfulness in bidding has no extra utility in the deterministic auction setting. This is shown by achieving a price of anarchy that matches that achieved by truthful bidders (in the deterministic setting) in the case of two bidders.
2. In the randomized setting, it shows that nontruthfulness does provide a benefit by showing a PoA of 1.8 (in contrast with that of 1.9 as achieved by the truthful mechanism) in the case of two bidders.
3. Finally, the paper shows that no auction (even randomized, non-truthful) can improve upon a PoA bound of 2 as the number of advertisers grows to infinity.
In all these settings, the paper considers the bidder to have a value-maximizing objective and budget and ROS constraints. This is in contrast to much of prior work that assumes only the ROS constraint.
The proofs are all simple but very clever, and I think the results are strong.
Even though this is an "auctions" paper, I think it would be good to cite some of the related "algorithms" papers dealing with ROS + budget constraints, e.g.:
1. Online Bidding in Repeated Non-Truthful Auctions under Budget and ROI constraints by Castiglioni, Celli, Kroer
2. Online Bidding Algorithms for Return-on-Spend Constrained Advertisers by Feng, Padmanabhan, Wang
I know there are a few more published in last year's NeurIPS and ICML, I think those could be included too.
questions: N/A
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
4zmXFCXFI7 | Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints | [
"Christopher Liaw",
"Aranyak Mehta",
"Wennan Zhu"
] | We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), which is the worst case ratio between the liquid welfare of any equilibrium and the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal, among deterministic mechanisms, in this setting. Without any assumptions, the PoA of FPA is $n$ which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental for efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with the settings without budget constraints. | [
"auto-bidding",
"auction design",
"price of anarchy",
"mechanism design"
] | https://openreview.net/pdf?id=4zmXFCXFI7 | Qo14tvv2dr | decision | 1,705,909,208,982 | 4zmXFCXFI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: I will spare the authors another summary of their work, given the detailed reviews prepared by several of the PC members -- I thank them for their efforts in evaluating this work, both in the original reviews and in the subsequent rebuttal period.
Given that, across the 4 reviewers, there is ample support for this paper (and no real opposition), I recommend acceptance. |
4zmXFCXFI7 | Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints | [
"Christopher Liaw",
"Aranyak Mehta",
"Wennan Zhu"
] | We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), which is the worst case ratio between the liquid welfare of any equilibrium and the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal, among deterministic mechanisms, in this setting. Without any assumptions, the PoA of FPA is $n$ which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental for efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with the settings without budget constraints. | [
"auto-bidding",
"auction design",
"price of anarchy",
"mechanism design"
] | https://openreview.net/pdf?id=4zmXFCXFI7 | FBAJdIvZDK | official_review | 1,700,605,670,362 | 4zmXFCXFI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1925/Reviewer_eT6T"
] | review: The authors consider the problem of designing auctions for value-maximizing bidders with both return on spend and budget constraints (i.e., an auto-bidding context). An auctioneer aims to maximize social welfare, whereas the bidders attempt to maximize their own individual value subject to their spend not exceeding a budget constraint or a return on spend constraint. In particular, the authors examine the price of anarchy (i.e., the ratio of the optimal achievable liquid welfare to the liquid welfare at any equilibrium) of the (deterministic) first-price auction, the randomized first-price auction, and the quasi-proportional first-price auction.
First, the authors show that the first-price auction obtains a price of anarchy of $n$ (which is optimal among deterministic mechanisms) and that this bound improves to $2$ under the additional assumption that bidders do not have any value for any query exceeding their budget. Second, they demonstrate how the use of randomization can lead to improved guarantees by showing that the randomized first-price auction obtains a price of anarchy no greater than $1.8$ when there are only two bidders, and that the quasi-proportional first-price auction achieves a price of anarchy of $2$ without any additional assumptions.
On the whole, this paper makes a substantial contribution to the literature on the price of anarchy of different auction formats in the auto-bidding context. The first-price auction and its variants are well-known and practically-favored mechanisms for advertising auctions and, thus, better understanding their welfare guarantees in settings in which bidders have both return-on-spend and budget constraints is natural. Finally, the results are neatly presented and the paper is largely well-written. While it is not completely surprising that many of the price of anarchy results previously shown in the presence of only return-on-spend constraints when social welfare is the objective carry over to the setting with added budget constraints when liquid welfare is the objective, the proofs are non-trivial and the authors provide a nice separation between the two settings in the case of uniform bidding. As such, while the results are somewhat narrow, I believe they are likely to be of interest to the portion of the community interested in auto-bidding models.
While I outline some smaller presentational issues below, there is one larger issue I noticed with the presentation. In particular, I am slightly confused by the proof in Section 5. You say that the price of anarchy converges to $1/\eta + 1$ and then conclude that the price of anarchy is $2$. However, the assumption in the statement of Lemma 5.3 is only that $\eta \in (0,1)$. The price of anarchy then tends to $2$ if you are allowed to choose $\eta$, but to $\infty$ if you are not allowed to choose $\eta$. When you state that “$\eta$ is arbitrary”, it reads like an adversary would get to set it. This proof and the surrounding argument should be made much clearer to show that the price of anarchy is indeed $2$.
Line 367: “total expected is no more” -> “total expected spend is no more”
Line 390: There is an extra parenthesis in the definition of OPT
Line 487: “an Equilibrium” -> “an equilibrium”
Line 723: I believe you want the inequality to use SPEND$(j’)$.
Line 694 and 742: In the statement of the lemma (or in the proof itself at line 742) you probably want to point out that the inequality regarding the $\pi^*$ summation being less than $1$ is by assumption.
[After rebuttal] The authors have, in my view, adequately addressed my question and I appreciate that they will clarify the proof in future revisions.
questions: Can you clarify the proof of Section 5, particularly regarding the point about $\eta$ being arbitrary referenced in the review?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |