forum_id (stringlengths 8-20) | forum_title (stringlengths 4-171) | forum_authors (sequencelengths 0-25) | forum_abstract (stringlengths 4-4.27k) | forum_keywords (sequencelengths 1-10) | forum_pdf_url (stringlengths 38-50) | note_id (stringlengths 8-13) | note_type (stringclasses, 6 values) | note_created (int64 1,360B-1,736B) | note_replyto (stringlengths 8-20) | note_readers (sequencelengths 1-5) | note_signatures (sequencelengths 1-1) | note_text (stringlengths 10-16.6k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
D8Mb4c7V0t | DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning | [
"Seungyoon Choi",
"Wonjoong Kim",
"Sungwon Kim",
"Yeonjun In",
"Sein Kim",
"Chanyoung Park"
] | We investigate the replay buffer in rehearsal-based approaches for graph continual learning (GCL) methods. Existing rehearsal-based
GCL methods select the most representative nodes for each class and store them in a replay buffer for later use in training subsequent
tasks. However, we discovered that considering only the class representativeness of each replayed node causes the replayed nodes to be
concentrated around the center of each class, incurring a potential risk of overfitting to nodes residing in those regions, which aggravates catastrophic forgetting. Moreover, as the rehearsal-based approach heavily relies on a few replayed nodes to retain knowledge obtained from previous tasks, involving the replayed nodes that have irrelevant neighbors in the model training may have a significant detrimental impact on model performance. In this paper, we propose a GCL model named Diversity enhancement and Structure Learning for Rehearsal-based graph continual learning (DSLR). Specifically, we devise a coverage-based diversity (CD) approach to consider both the class representativeness and the diversity within each class of the replayed nodes. Moreover, we adopt graph structure learning (GSL) to ensure that the replayed nodes are connected to truly informative neighbors. Extensive experimental results demonstrate the effectiveness and efficiency of DSLR. Our source code is available at https://anonymous.4open.science/r/DSLR-F525. | [
"continual learning",
"graph neural networks",
"rehearsal approach",
"structure learning"
] | https://openreview.net/pdf?id=D8Mb4c7V0t | bCzcp18iue | official_review | 1,700,578,619,888 | D8Mb4c7V0t | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1450/Reviewer_qW6q"
] | review: This paper investigates replay buffers in rehearsal-based approaches for graph continual learning (GCL). Existing methods tend to concentrate replayed nodes around class centers, risking overfitting and exacerbating catastrophic forgetting. To address this, the authors propose DSLR, a GCL model with two components: CD for selecting replayed nodes, considering class representativeness and diversity; and GSL for enhancing the graph structure, ensuring replayed nodes connect to truly informative neighbors. Extensive experiments demonstrate DSLR's superiority over state-of-the-art GCL methods, even with a small replay buffer size. Hence, if the authors revise their paper, I am pleased to give a positive judgment:
Overall, this paper exhibits sufficient innovativeness, but there are still some issues that need attention.
1. The segmentation of the two bars in Figure 8 may lead to misunderstandings. Consider revising the visualization to enhance clarity.
2. There is an abundance of descriptive text in the experiments, but specific data explanations are lacking. Please provide more detailed explanations of the data presented in the experiments.
3. It is recommended to include references for the models mentioned in the tables.
4. The last line in the appendix, "Received 20 February 2007; revised 12 March 2009; accepted 5 June 2009," seems misplaced or irrelevant. Please clarify or remove this line.
5. The number of references is relatively low, and most of them are from before 2022. It is recommended to consider adding more recent research papers to enhance the relevance and currency of the references.
questions: This paper investigates replay buffers in rehearsal-based approaches for graph continual learning (GCL). Existing methods tend to concentrate replayed nodes around class centers, risking overfitting and exacerbating catastrophic forgetting. To address this, the authors propose DSLR, a GCL model with two components: CD for selecting replayed nodes, considering class representativeness and diversity; and GSL for enhancing the graph structure, ensuring replayed nodes connect to truly informative neighbors. Extensive experiments demonstrate DSLR's superiority over state-of-the-art GCL methods, even with a small replay buffer size. Hence, if the authors revise their paper, I am pleased to give a positive judgment:
Overall, this paper exhibits sufficient innovativeness, but there are still some issues that need attention.
1. The segmentation of the two bars in Figure 8 may lead to misunderstandings. Consider revising the visualization to enhance clarity.
2. There is an abundance of descriptive text in the experiments, but specific data explanations are lacking. Please provide more detailed explanations of the data presented in the experiments.
3. It is recommended to include references for the models mentioned in the tables.
4. The last line in the appendix, "Received 20 February 2007; revised 12 March 2009; accepted 5 June 2009," seems misplaced or irrelevant. Please clarify or remove this line.
5. The number of references is relatively low, and most of them are from before 2022. It is recommended to consider adding more recent research papers to enhance the relevance and currency of the references.
ethics_review_flag: No
ethics_review_description: -
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
D8Mb4c7V0t | DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning | [
"Seungyoon Choi",
"Wonjoong Kim",
"Sungwon Kim",
"Yeonjun In",
"Sein Kim",
"Chanyoung Park"
] | We investigate the replay buffer in rehearsal-based approaches for graph continual learning (GCL) methods. Existing rehearsal-based
GCL methods select the most representative nodes for each class and store them in a replay buffer for later use in training subsequent
tasks. However, we discovered that considering only the class representativeness of each replayed node causes the replayed nodes to be
concentrated around the center of each class, incurring a potential risk of overfitting to nodes residing in those regions, which aggravates catastrophic forgetting. Moreover, as the rehearsal-based approach heavily relies on a few replayed nodes to retain knowledge obtained from previous tasks, involving the replayed nodes that have irrelevant neighbors in the model training may have a significant detrimental impact on model performance. In this paper, we propose a GCL model named Diversity enhancement and Structure Learning for Rehearsal-based graph continual learning (DSLR). Specifically, we devise a coverage-based diversity (CD) approach to consider both the class representativeness and the diversity within each class of the replayed nodes. Moreover, we adopt graph structure learning (GSL) to ensure that the replayed nodes are connected to truly informative neighbors. Extensive experimental results demonstrate the effectiveness and efficiency of DSLR. Our source code is available at https://anonymous.4open.science/r/DSLR-F525. | [
"continual learning",
"graph neural networks",
"rehearsal approach",
"structure learning"
] | https://openreview.net/pdf?id=D8Mb4c7V0t | WCdSUzkj2v | official_review | 1,700,978,386,537 | D8Mb4c7V0t | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1450/Reviewer_j92f"
] | review: The authors propose a new rehearsal-based method for graph continual learning, which considers both the class representativeness and the diversity within each class of the replayed nodes. Moreover, they adopt graph structure learning to reformulate the graph structure through a trained link prediction module. Experimental results demonstrate the effectiveness of the proposed method in improving performance.
The proposed method is well motivated and easy to follow. The idea of selecting representative and diverse nodes is intuitive and reasonable. Furthermore, the authors introduce a new structure learning method that effectively utilizes the replayed nodes. However, there are some concerns that need to be addressed.
questions: 1. In Figure 4, it is observed that the improvement of PM for DSLR is limited as the buffer size increases, while ER-GNN and ContinualGNN show significant improvements. Does this mean that the proposed DSLR method only works well for relatively small buffer sizes? It is recommended to analyze this situation and provide an explanation.
2. According to the definition of Forgetting Mean, it should be a negative value since the performance on previous tasks typically decreases after training on a new task. However, the results shown in Table 2 do not align with this definition. Is this a mistake? Please clarify.
3. The hyperparameter N in Equation (9) is important for the proposed method. Although the authors provide some experimental results, it is still not clear how it affects the performance. For instance, the authors use a threshold \tau in Equation (10) to define the operation of edge deletion, but it could also be similarly defined by argmin_{v_j} S. Can the authors explain why they chose to use a threshold \tau in Equation (10)?
4. It is suggested to provide more details on how D_link is constructed in Eq(6).
5. Since the proposed method involves buffer selection and training a link prediction module, it is suggested to discuss its efficiency compared to other methods.
6. The authors claim that a large \beta in Eq(12) makes the model focus on the current task, while a small \beta directs the model's attention toward the replay buffer to minimize catastrophic forgetting. Can the authors provide the corresponding experimental evidence to support this claim? Furthermore, can you investigate the performance when cross-entropy is directly computed on both D_i^tr and B without introducing \beta in Equation (12)?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
D8Mb4c7V0t | DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning | [
"Seungyoon Choi",
"Wonjoong Kim",
"Sungwon Kim",
"Yeonjun In",
"Sein Kim",
"Chanyoung Park"
] | We investigate the replay buffer in rehearsal-based approaches for graph continual learning (GCL) methods. Existing rehearsal-based
GCL methods select the most representative nodes for each class and store them in a replay buffer for later use in training subsequent
tasks. However, we discovered that considering only the class representativeness of each replayed node causes the replayed nodes to be
concentrated around the center of each class, incurring a potential risk of overfitting to nodes residing in those regions, which aggravates catastrophic forgetting. Moreover, as the rehearsal-based approach heavily relies on a few replayed nodes to retain knowledge obtained from previous tasks, involving the replayed nodes that have irrelevant neighbors in the model training may have a significant detrimental impact on model performance. In this paper, we propose a GCL model named Diversity enhancement and Structure Learning for Rehearsal-based graph continual learning (DSLR). Specifically, we devise a coverage-based diversity (CD) approach to consider both the class representativeness and the diversity within each class of the replayed nodes. Moreover, we adopt graph structure learning (GSL) to ensure that the replayed nodes are connected to truly informative neighbors. Extensive experimental results demonstrate the effectiveness and efficiency of DSLR. Our source code is available at https://anonymous.4open.science/r/DSLR-F525. | [
"continual learning",
"graph neural networks",
"rehearsal approach",
"structure learning"
] | https://openreview.net/pdf?id=D8Mb4c7V0t | F2pelYHI1x | official_review | 1,699,345,421,110 | D8Mb4c7V0t | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1450/Reviewer_TY8g"
] | review: This paper studies rehearsal-based GCL, proposing a CD selection strategy and improving node quality via message passing.
1. The selection strategy is interesting. Why does CD guarantee both representativeness and diversity? I think the two objectives are in conflict; how are they balanced to achieve a trade-off?
2. To address the drawback of MF, the paper should consider many other rehearsal methods, such as simple random selection.
3. Typo error L369: Given the a graph
questions: see the review.
ethics_review_flag: No
ethics_review_description: No needed.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
D8Mb4c7V0t | DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning | [
"Seungyoon Choi",
"Wonjoong Kim",
"Sungwon Kim",
"Yeonjun In",
"Sein Kim",
"Chanyoung Park"
] | We investigate the replay buffer in rehearsal-based approaches for graph continual learning (GCL) methods. Existing rehearsal-based
GCL methods select the most representative nodes for each class and store them in a replay buffer for later use in training subsequent
tasks. However, we discovered that considering only the class representativeness of each replayed node causes the replayed nodes to be
concentrated around the center of each class, incurring a potential risk of overfitting to nodes residing in those regions, which aggravates catastrophic forgetting. Moreover, as the rehearsal-based approach heavily relies on a few replayed nodes to retain knowledge obtained from previous tasks, involving the replayed nodes that have irrelevant neighbors in the model training may have a significant detrimental impact on model performance. In this paper, we propose a GCL model named Diversity enhancement and Structure Learning for Rehearsal-based graph continual learning (DSLR). Specifically, we devise a coverage-based diversity (CD) approach to consider both the class representativeness and the diversity within each class of the replayed nodes. Moreover, we adopt graph structure learning (GSL) to ensure that the replayed nodes are connected to truly informative neighbors. Extensive experimental results demonstrate the effectiveness and efficiency of DSLR. Our source code is available at https://anonymous.4open.science/r/DSLR-F525. | [
"continual learning",
"graph neural networks",
"rehearsal approach",
"structure learning"
] | https://openreview.net/pdf?id=D8Mb4c7V0t | 6a2RqoKRIc | decision | 1,705,909,214,499 | D8Mb4c7V0t | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The paper proposes a new technique for graph continual learning, focusing on better ways of selecting the replay buffer (i.e. a small amount of data used for future training tasks to avoid catastrophic forgetting). It uses both the representativeness and the diversity within each class of replayed nodes, as well as graph structure learning to ensure replayed nodes are well-connected to informative neighbors. Experiments demonstrate the effectiveness and efficiency of the approach.
The paper is clear, with interesting techniques and conclusive experiments. Although reviewer consensus is that the paper has broad appeal for TheWebConf, the area of graph continual learning is somewhat more niche.
Strengths
* Well-motivated and easy to follow.
* The coverage-based diversity approach (using both representativeness and diversity) is intuitively well-justified.
* Experiments are (after rebuttals) thorough, showing conclusive improvements.
Weaknesses:
* Recommendation to add more recent related work. |
D8Mb4c7V0t | DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning | [
"Seungyoon Choi",
"Wonjoong Kim",
"Sungwon Kim",
"Yeonjun In",
"Sein Kim",
"Chanyoung Park"
] | We investigate the replay buffer in rehearsal-based approaches for graph continual learning (GCL) methods. Existing rehearsal-based
GCL methods select the most representative nodes for each class and store them in a replay buffer for later use in training subsequent
tasks. However, we discovered that considering only the class representativeness of each replayed node causes the replayed nodes to be
concentrated around the center of each class, incurring a potential risk of overfitting to nodes residing in those regions, which aggravates catastrophic forgetting. Moreover, as the rehearsal-based approach heavily relies on a few replayed nodes to retain knowledge obtained from previous tasks, involving the replayed nodes that have irrelevant neighbors in the model training may have a significant detrimental impact on model performance. In this paper, we propose a GCL model named Diversity enhancement and Structure Learning for Rehearsal-based graph continual learning (DSLR). Specifically, we devise a coverage-based diversity (CD) approach to consider both the class representativeness and the diversity within each class of the replayed nodes. Moreover, we adopt graph structure learning (GSL) to ensure that the replayed nodes are connected to truly informative neighbors. Extensive experimental results demonstrate the effectiveness and efficiency of DSLR. Our source code is available at https://anonymous.4open.science/r/DSLR-F525. | [
"continual learning",
"graph neural networks",
"rehearsal approach",
"structure learning"
] | https://openreview.net/pdf?id=D8Mb4c7V0t | 4ANe1uoeZB | official_review | 1,700,774,116,158 | D8Mb4c7V0t | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1450/Reviewer_7zbE"
] | review: The paper presents DSLR, a method for graph continual learning. In continual learning, one wants to learn from new data without retraining on the entire dataset. The main challenge is catastrophic forgetting, where older tasks see a decline in performance after the model is trained with newer data. The paper proposes a novel method based on the use of a replay buffer that stores a small amount of data from previous tasks for later use when training subsequent tasks. The method builds on two main contributions: building the replay buffer so as to capture both class representativeness and the diversity in each class, while previous approaches ignored diversity; and changing the structure (edges) of the graph so that nodes in the replay buffer are connected to more informative nodes. The proposed method is assessed with a wide experimental evaluation on 3 datasets, covering the overall performance compared to previous approaches, the contribution of the diversity component and of the structure-altering component of the approach, as well as other aspects, including an ablation study.
The paper is well written overall, and the main ideas and contributions are clearly presented. While the ideas of using a replay buffer and of changing the structure of the network have been proposed and explored before, the use of a diverse set of replayed nodes and the specific approach for altering the graph structure are original. The experimental evaluation shows that the proposed approach consistently improves over previous approaches, even if the improvement is often limited and within the variance in the estimates of the performance measures (in fact, the main gain is to provide a smaller variance in performance).
PROS
- The paper is well written overall, with nice introduction to the problem, and nice motivation and intuition for each step.
- The idea of using diversity to choose nodes for the replay buffer is really nice and intuitive
- The experimental evaluation covers various aspects/components of the proposed approach
CONS
- The abstract is not geared for a general audience, and it is not clear for nonexperts in continual learning and rehearsal-based approaches.
- The overall gain provided by the proposed approach is somehow limited, with the main positive aspect being a reduction in the variance of the performance
- The paper does not provide a link to the code and there is no mention of whether the code will be made available
questions: - Can you provide an alternative abstract that is more clear for a general audience?
- Can you provide a link to an anonymous repository with the code for the method and to reproduce the experiments?
- In section 4.2.3, what do you mean with the sentence “… requires the computation of O(|V^t|\cdot |B|)”?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CpBArDFCQ3 | Prior-Free Mechanisms with Welfare Estimates | [
"Guru Guruganesh",
"Jon Schneider",
"Joshua Ruizhi Wang"
] | We consider the problem of designing prior-free revenue-maximizing mechanisms for allocating items to $n$ buyers when the mechanism is additionally provided with an estimate for the optimal welfare (which is guaranteed to be correct to within a multiplicative factor of $1/\alpha$). In the digital goods setting (where we can allocate items to an arbitrary subset of the buyers), we demonstrate a mechanism which achieves revenue that is $O(\log n/\alpha)$-competitive with the optimal welfare. In the public goods setting (where we either must allocate the item to all buyers or to no buyers), we demonstrate a mechanism which is $O(n\log 1/\alpha)$ competitive. In both settings, we show the dependence on $\alpha$ and $n$ is tight. Finally, we discuss generalizations to broader classes of allocation constraints. | [
"prior-free distributions",
"mechanism design",
"digital goods"
] | https://openreview.net/pdf?id=CpBArDFCQ3 | wRQbR2aS51 | decision | 1,705,909,208,248 | CpBArDFCQ3 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper studies the problem of designing prior-free revenue-maximizing mechanisms for allocating items to n buyers, where the mechanism has access to an estimate of the optimal welfare (guaranteed to be correct within some factor). They then develop mechanisms for three different settings.
The reviewers are in agreement that the mechanisms are natural, the bounds are tight, and the proofs are simple (a feature, not a bug, in my opinion). There is some disagreement among the reviewers about whether the results are incremental; this, to me, is a fair question, but given the conciseness with which the results are presented, I lean towards evaluating the paper positively. |
CpBArDFCQ3 | Prior-Free Mechanisms with Welfare Estimates | [
"Guru Guruganesh",
"Jon Schneider",
"Joshua Ruizhi Wang"
] | We consider the problem of designing prior-free revenue-maximizing mechanisms for allocating items to $n$ buyers when the mechanism is additionally provided with an estimate for the optimal welfare (which is guaranteed to be correct to within a multiplicative factor of $1/\alpha$). In the digital goods setting (where we can allocate items to an arbitrary subset of the buyers), we demonstrate a mechanism which achieves revenue that is $O(\log n/\alpha)$-competitive with the optimal welfare. In the public goods setting (where we either must allocate the item to all buyers or to no buyers), we demonstrate a mechanism which is $O(n\log 1/\alpha)$ competitive. In both settings, we show the dependence on $\alpha$ and $n$ is tight. Finally, we discuss generalizations to broader classes of allocation constraints. | [
"prior-free distributions",
"mechanism design",
"digital goods"
] | https://openreview.net/pdf?id=CpBArDFCQ3 | gGJf4NzUTL | official_review | 1,700,825,135,594 | CpBArDFCQ3 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1143/Reviewer_yNkC"
] | review: This paper studies the auction setting where a set of identical items is allocated to unit-demand agents and the mechanism does not have information on agents’ valuations (prior-free) other than that the welfare is within a range [alpha, 1]. The authors consider three different settings regarding feasible allocations:
- digital goods: the quantity of items is unbounded and can be allocated to any subset of agents.
- matroid setting: the agents are elements of a matroid and items can only be allocated to agents forming an independent set.
- public good setting: either everyone receives an item or no one receives an item.
The authors design a mechanism in each of the above three settings. For each mechanism, the authors analyze its competitive ratio – in terms of the number of agents and the parameter alpha – with respect to the optimal welfare. The authors show that the competitive ratio achieved by their mechanism is tight in both parameters in each of the three settings. All these results hold for alpha <= 0.5, and the regime for alpha > 0.5 is left for future work.
This paper is well-written and easy to follow. The setting studied in the paper is reasonable to me. The mechanisms are simple and natural. The competitive ratios are also tight.
However, I think the overall contribution of this paper is incremental. Technically, it seems to me that most of the analysis uses existing techniques. Conceptually, the main contribution to me is the new feature that the welfare is known to be within a certain range, which makes the model in between the completely prior-free model and the Bayesian model. This kind of feature already exists in other problems (e.g., online algorithms with predictions). Therefore, I also think the conceptual contribution of this paper is incremental.
questions: No questions.
ethics_review_flag: No
ethics_review_description: N.A.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
CpBArDFCQ3 | Prior-Free Mechanisms with Welfare Estimates | [
"Guru Guruganesh",
"Jon Schneider",
"Joshua Ruizhi Wang"
] | We consider the problem of designing prior-free revenue-maximizing mechanisms for allocating items to $n$ buyers when the mechanism is additionally provided with an estimate for the optimal welfare (which is guaranteed to be correct to within a multiplicative factor of $1/\alpha$). In the digital goods setting (where we can allocate items to an arbitrary subset of the buyers), we demonstrate a mechanism which achieves revenue that is $O(\log n/\alpha)$-competitive with the optimal welfare. In the public goods setting (where we either must allocate the item to all buyers or to no buyers), we demonstrate a mechanism which is $O(n\log 1/\alpha)$ competitive. In both settings, we show the dependence on $\alpha$ and $n$ is tight. Finally, we discuss generalizations to broader classes of allocation constraints. | [
"prior-free distributions",
"mechanism design",
"digital goods"
] | https://openreview.net/pdf?id=CpBArDFCQ3 | dESaoXpN0N | official_review | 1,700,546,049,374 | CpBArDFCQ3 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1143/Reviewer_Sx4J"
] | review: This paper considers the question of designing auctions which obtain high revenue in a prior-free setting when the auctioneer has some (reasonably accurate) estimate of the optimal social welfare. More concretely, the authors consider a setting in which an auctioneer faces a subset of bidders with private values who are looking to procure some good/service and the auctioneer knows that the optimal social welfare lies between a given “hint” $\alpha$ and $1$ (precisely, the authors consider a special case where $\alpha \in (0, 1/2]$). The goal of the auctioneer is to maximize her expected revenue and she compares her revenue to the optimal social welfare, i.e., the maximum possible revenue collectable by any IC/IR mechanism ex-post. In particular, the authors examine auctions with matroid settings, wherein subsets of bidders which can be feasibly served correspond to independent sets in an underlying matroid (e.g., digital goods settings), and a public goods setting, wherein the feasible subsets of bidders are the set of all bidders or the empty set. In both settings, the authors give (asymptotically) tight approximation guarantees in terms of $n$ (the number of bidders) and $\alpha$ by providing mechanisms with upper bound performance guarantees and instances showing matching lower bounds against any mechanism.
On the positive side, the mechanisms proposed by the authors are simple and natural, yet optimal. Moreover, the analysis is clearly presented and straightforward. Finally, I appreciate that the authors examine two well-motivated and well-studied settings – the natural setting of matroid constraints over bidders (which is well studied economically) and the digital goods setting, which largely inspired the line of literature on prior-free auctions for revenue. On the other hand, from a practical perspective it isn’t clear that the model of having an estimate of the optimal welfare but not other bidder level statistics (or aggregate estimates of distributional information) is natural, so, in my view, this paper is more theoretical than practical. Secondly, the paper only answers the question for $\alpha \leq 1/2$. It would be interesting to give some discussion regarding challenges in handling the cases where $\alpha$ is large. Finally, the techniques to prove the bounds and the mechanisms themselves are not too novel or surprising and it does not seem likely, in my view, that the results would be of significantly broad interest nor would the techniques lead to many new results elsewhere. In summary, while there are positive aspects of this paper, there are also negative aspects which detract from the overall picture. I outline some smaller comments below.
Line 132: $L/R$ is a fraction less than $1$, you probably want to say $\alpha \leq 1$ here.
Line 276: I wouldn’t call equal-revenue distributions “a technique”. Instead, they are just a class of distributions.
Line 515: “all all” -> “all”
Line 584: Typically one refers to a set system as being downward-closed rather than just a set.
Line 639: You refer to Algorithm 1 as Mechanism 1 in the text. I would suggest following a consistent naming convention.
Line 730: “that exists” -> “that there exists”
[After rebuttal] I thank the authors for their responses to my questions and the questions of the other reviewers.
questions: Can you comment on what settings we may expect an estimate of welfare to exist, but not other statistics common in the literature?
Can you comment on the case of $\alpha > 1/2$?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CpBArDFCQ3 | Prior-Free Mechanisms with Welfare Estimates | [
"Guru Guruganesh",
"Jon Schneider",
"Joshua Ruizhi Wang"
] | We consider the problem of designing prior-free revenue-maximizing mechanisms for allocating items to $n$ buyers when the mechanism is additionally provided with an estimate for the optimal welfare (which is guaranteed to be correct to within a multiplicative factor of $1/\alpha$). In the digital goods setting (where we can allocate items to an arbitrary subset of the buyers), we demonstrate a mechanism which achieves revenue that is $O(\log n/\alpha)$-competitive with the optimal welfare. In the public goods setting (where we either must allocate the item to all buyers or to no buyers), we demonstrate a mechanism which is $O(n\log 1/\alpha)$ competitive. In both settings, we show the dependence on $\alpha$ and $n$ is tight. Finally, we discuss generalizations to broader classes of allocation constraints. | [
"prior-free distributions",
"mechanism design",
"digital goods"
] | https://openreview.net/pdf?id=CpBArDFCQ3 | D4G4j3Xc8e | official_review | 1,700,539,665,165 | CpBArDFCQ3 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1143/Reviewer_W93Z"
] | review: This paper studies the problem of designing prior-free revenue-maximizing mechanisms for allocating identical items to buyers. The authors focus on the scenario where the mechanism is provided with an estimate for the optimal welfare, which is guaranteed to be correct within a certain ratio. This paper presents mechanisms for the digital goods setting (with no constraint on the set of allocated buyers), the matroid setting (with the constraint that the set of allocated buyers form an independent set), and the public goods setting (with the constraint of sell-to-all or sell-to-no-one) and analyses their competitive ratios. The mechanisms for the digital goods setting and the matroid setting are designed based on the single-buyer mechanism. For the public goods setting, this paper gives a threshold mechanism. For all three settings, they show that the mechanisms achieve (asymptotically) optimal approximation ratios.
The model of prior-free revenue-maximizing mechanisms is interesting and the tight-ratio results are strong. However, the techniques used to derive these results seem quite simple. The writing of the paper is in general good but not very easy to read for non-experts. Also, this paper is very short. I think the authors should use more words to explain the basic concepts, notions, and definitions (to non-experts). For example, how does a “mechanism” work, what are the actions of buyers and sellers, what does truthful mean, etc. The use of terminology is sometimes arbitrary: please formally define what is meant by “posted price”, “revenue”, “welfare”, “buyer”, “bidder”, “value”, etc., preferably using mathematical notation.
Minor comments:
1. Abstract, “a mechanism which achieves” should be “a mechanism that achieves”
2. Line 515, “all all” should be “all”.
3. Line 568, “Lema” should be “Lemma”.
questions: In Algorithm 1, what does e mean?
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
CpBArDFCQ3 | Prior-Free Mechanisms with Welfare Estimates | [
"Guru Guruganesh",
"Jon Schneider",
"Joshua Ruizhi Wang"
] | We consider the problem of designing prior-free revenue-maximizing mechanisms for allocating items to $n$ buyers when the mechanism is additionally provided with an estimate for the optimal welfare (which is guaranteed to be correct to within a multiplicative factor of $1/\alpha$). In the digital goods setting (where we can allocate items to an arbitrary subset of the buyers), we demonstrate a mechanism which achieves revenue that is $O(\log n/\alpha)$-competitive with the optimal welfare. In the public goods setting (where we either must allocate the item to all buyers or to no buyers), we demonstrate a mechanism which is $O(n\log 1/\alpha)$ competitive. In both settings, we show the dependence on $\alpha$ and $n$ is tight. Finally, we discuss generalizations to broader classes of allocation constraints. | [
"prior-free distributions",
"mechanism design",
"digital goods"
] | https://openreview.net/pdf?id=CpBArDFCQ3 | 6Y5cShV3YD | official_review | 1,700,765,737,115 | CpBArDFCQ3 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1143/Reviewer_r88P"
] | review: This paper studies a problem in which there are $n$ unit-demand buyers, and the goal is to design a dominant-strategy truthful mechanism to allocate identical items to the buyers with good revenue guarantees. The paper studies three different settings: i) digital goods: we can allocate to any subset of $n$ buyers, ii) matroid: the chosen subset of buyers should be an independent set of a matroid, and iii) public goods: the set of chosen buyers is either the null set or all the buyers. For each setting, the authors first provide a mechanism along with its competitive ratio guarantees, and then, they provide matching lower bounds to prove the optimality of their mechanisms.
The paper contributes to prior-free single-dimensional mechanism design and its contributions are of potential interest to a wide audience. The proposed mechanisms are all simple yet effective, and the proofs are easy to follow. In particular, the construction of the lower-bound results is quite interesting. On the other hand, the paper lacks any numerical examples/experiments to verify the effectiveness of the mechanisms in practice.
questions: - The mechanism design with hints framework studied in this paper is closely related to the ``Algorithms with predictions'' setup (see https://algorithms-with-predictions.github.io/ for a list of all papers on this topic). However, you haven't mentioned this in your related work. How do your results and contributions compare to existing work in mechanism design with predictions? Are there any papers that are similar to yours?
- In this paper, you have focused on modular utilities, i.e., the utility of a subset of chosen buyers is simply the sum of their individual utilities. Have you thought about submodular utility functions? To what extent do your results hold if we move beyond modular utilities? What are the challenges with non-modular utilities?
- If the hints were potentially inaccurate, is it possible to design a mechanism that is robust (performs well even if the hints are wrong) and consistent (performs well when the hints are correct)? You have shown the consistency of your proposed mechanisms, are they robust as well?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CpBArDFCQ3 | Prior-Free Mechanisms with Welfare Estimates | [
"Guru Guruganesh",
"Jon Schneider",
"Joshua Ruizhi Wang"
] | We consider the problem of designing prior-free revenue-maximizing mechanisms for allocating items to $n$ buyers when the mechanism is additionally provided with an estimate for the optimal welfare (which is guaranteed to be correct to within a multiplicative factor of $1/\alpha$). In the digital goods setting (where we can allocate items to an arbitrary subset of the buyers), we demonstrate a mechanism which achieves revenue that is $O(\log n/\alpha)$-competitive with the optimal welfare. In the public goods setting (where we either must allocate the item to all buyers or to no buyers), we demonstrate a mechanism which is $O(n\log 1/\alpha)$ competitive. In both settings, we show the dependence on $\alpha$ and $n$ is tight. Finally, we discuss generalizations to broader classes of allocation constraints. | [
"prior-free distributions",
"mechanism design",
"digital goods"
] | https://openreview.net/pdf?id=CpBArDFCQ3 | 08vQwKgUcX | official_review | 1,700,425,754,940 | CpBArDFCQ3 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1143/Reviewer_N8JH"
] | review: ### Summary
The paper studies prior-free mechanism design where the goal is to maximize revenue as a fraction of the optimal welfare in the worst case. The authors consider single-parameter buyers and binary allocations. The main results are tight approximation bounds for (1) the matroid setting, (2) by implication, the digital good setting, and (3) the public good setting where either all buyers are allocated or none.
### Strengths
The paper studies a clean and meaningful problem and provides complete and tight results. The proofs are not super lengthy but exhibit some interesting ideas. A minor point is that the proofs are very well written and easy to follow, and I appreciate the authors' candidness in not trying to make things look more complicated than they are.
### Weaknesses
I'd be more excited to also see bounds for "revenue vs revenue", which would capture other, equally important aspects of the hardness of these settings.
questions: Lines 137-144: technically, isn't the first bullet point a special case of the second?
Lower bounds: it seems these bounds are for "revenue against welfare" (which is reasonable and matches the upper bounds in the paper). Still I wonder if it's possible to get better bounds if one uses the optimal BIC revenue as the benchmark.
Properties of equal revenue distributions: the authors already say this, but I'd cite something to make it clear these were known before (it doesn't hurt anyways).
Remark 4.2, point (i): without thinking too much, it seems one can just replace phase (1) of the matroid mechanism by picking the feasible set of bidders with the largest total value? This is monotone in each v_i and doesn't seem to lose efficiency? Then the approximation ratio would depend on the rank of the feasibility constraint, which is the size of the largest feasible set (this is also a fairly common parameter, especially when downward-closed feasibility is under consideration)?
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoaB9M2BcF | Causal Question Answering with Reinforcement Learning | [
"Lukas Blübaum",
"Stefan Heindorf"
] | Causal questions inquire about causal relationships between different events or phenomena. Specifically, they often aim to determine whether there is a relationship between two phenomena, or to identify all causes/effects of a phenomenon. Causal questions are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with CauseNet, a large-scale dataset of causal relations and their provenance data. Inspired by recent, successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on CauseNet for causal question answering. We introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting less than 30 nodes per question compared to over 3,000 nodes by a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, CauseNet stores its original source on the web allowing for easy verification of paths. | [
"question answering",
"causality graphs",
"reinforcement learning"
] | https://openreview.net/pdf?id=CoaB9M2BcF | lsSpMgNkWX | official_review | 1,700,492,971,863 | CoaB9M2BcF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1786/Reviewer_h2Lx"
] | review: This paper takes an interesting approach to the problem of binary causal question answering: it uses a reinforcement learning approach to answer questions over a causal knowledge graph (CauseNet). The approach is definitely useful, it has the advantage of generating answers that lead to explainable results, and as far as I know, the approach is novel. Brute-force BFS ends up searching far more nodes than the RL approach, so the approach seems quite useful. Having said that, I think there are many different aspects of the current version that could be strengthened. I do think the paper can provide an interesting contribution to the field, but I think it is not yet ready for publication in a top-tier venue in its current form. All my concerns are described as questions below.
questions: 1. Why was the scope for QA so severely constrained to just binary causal question answering? Surely the technique can be broadened to any binary QA or causal QA or QA in general?
2. The number of datasets used for evaluation is a bit underwhelming. Why wasn't BoolQ (https://arxiv.org/abs/1905.10044), for instance, considered? Also, for MSMarco, could one not add more questions to make it more balanced and see if the performance changes for large language models?
3. In the age of chatGPT and LLMs, it seems a bit limiting to test only with a single LLM. Here's an example response from a version of mpt for one of the questions in the paper:
Does Xanax cause hiccups? Answer yes or no. If yes, explain. If no, explain why.
Xanax is a medication used to treat anxiety disorders. It is a benzodiazepine, which is a class of drugs that also includes Valium and Ativan. Benzodiazepines are known to cause hiccups. The exact mechanism is not known, but it is thought to be due to the way they affect the brain.
I realize this is a one-off example and may not even be correct, but surely this points to a need for more comprehensive analysis here to make the case that the proposed RL approach consistently provides better, more explainable results.
4. The results were a bit underwhelming as well - even on SemEval, the difference between the LLM and the RL approach was 4% for the 4-hop case (assuming more hops should be better). The point about MSMarco is well taken, but again this needs to be offset by more datasets.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoaB9M2BcF | Causal Question Answering with Reinforcement Learning | [
"Lukas Blübaum",
"Stefan Heindorf"
] | Causal questions inquire about causal relationships between different events or phenomena. Specifically, they often aim to determine whether there is a relationship between two phenomena, or to identify all causes/effects of a phenomenon. Causal questions are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with CauseNet, a large-scale dataset of causal relations and their provenance data. Inspired by recent, successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on CauseNet for causal question answering. We introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting less than 30 nodes per question compared to over 3,000 nodes by a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, CauseNet stores its original source on the web allowing for easy verification of paths. | [
"question answering",
"causality graphs",
"reinforcement learning"
] | https://openreview.net/pdf?id=CoaB9M2BcF | j4Q9Fo3ZxD | official_review | 1,700,381,769,434 | CoaB9M2BcF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1786/Reviewer_fEbv"
] | review: Summary:
The paper applies reinforcement learning on CauseNet for causal question answering and provides explanations or evidence for answers. The paper introduces a new causal QA dataset. The technical details are described clearly in the paper. It would be better for the authors to assign a name to their new dataset and appropriately cite it in the paper. The main drawback of the paper lies in the experimental part. From the table of main results, it is rather difficult to check how good the overall performance of the proposed method is and to draw a conclusion. I highly appreciate that the authors provide the performance of GPT4 during the rebuttal. The experiments show that GPT4 performs very well (better than the baselines and the proposed method on most metrics). Based on the experimental results and the overall quality of the paper, I reconsidered and decided to maintain the scores.
Pros:
+ The paper represents the pioneering effort in incorporating a reinforcement learning approach for addressing causal question answering on knowledge graphs.
+ The paper introduces a novel dataset specifically designed for causal questions.
Cons:
- The paper does not include a comparison with GPT models, but it is suggested that such a comparison could be conducted, especially given that the answers in the QA dataset are binary (Yes or No), making the comparison potentially less challenging.
- Additionally, more information about the dataset, such as the average question length and the average number of entities in each question, would be beneficial for understanding the complexity of the questions.
questions: * In Table 1, it would be better for there to be a total number of both datasets in each column.
* The description of the QA dataset construction differs between Sec. 4.1 and Sec. 7.2. In Sec. 4.1, the authors employ subsets of causal questions from two datasets, but in Sec. 7.2, the authors extract the questions from Webis-CausalQA-22, which does not contain SemEval. Or did I miss something? I hope the authors can explain.
* Can the authors use GPT (3.5 or 4) as a baseline to perform on the test set and provide the performance comparison?
* Why do the authors use GloVe as the pre-trained embeddings, which is quite old? Is there any better choice? Since there is a lot of medical terminology in the dataset, why not use domain-specific embeddings or models, e.g., BioBERT, ClinicalBERT, etc., as the pre-trained model?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoaB9M2BcF | Causal Question Answering with Reinforcement Learning | [
"Lukas Blübaum",
"Stefan Heindorf"
] | Causal questions inquire about causal relationships between different events or phenomena. Specifically, they often aim to determine whether there is a relationship between two phenomena, or to identify all causes/effects of a phenomenon. Causal questions are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with CauseNet, a large-scale dataset of causal relations and their provenance data. Inspired by recent, successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on CauseNet for causal question answering. We introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting less than 30 nodes per question compared to over 3,000 nodes by a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, CauseNet stores its original source on the web allowing for easy verification of paths. | [
"question answering",
"causality graphs",
"reinforcement learning"
] | https://openreview.net/pdf?id=CoaB9M2BcF | hvZ8JrTywB | official_review | 1,700,914,399,566 | CoaB9M2BcF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1786/Reviewer_YH8s"
] | review: The work introduces a reinforcement learning-based method for causal learning/Q&A. The authors introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. Empirical results seem good. The key focus is whether the authors can model the causal question-answering task as a sequential decision problem over CauseNet.
The idea is fresh and novel. I cannot recall off the top of my head having read a similar idea recently that attempts to handle causal Q&A using reinforcement learning. I am also okay with the proposed evaluation, and the results are satisfactory.
questions: Q1. Regarding the statement “As CauseNet [13] does not contain negative information (see Section 7.1 in appendix), we only train the agent on positive causal questions, i.e., questions whose answer is ‘yes’”: is it a limitation of the proposed work that it is too dataset-specific?
Q2. I see a lot of mentions of CauseNet, making the work too specific to a single causal KG. My suggestion is to trim down the references to CauseNet and make the framing more generic, e.g., an arbitrary causal KG. Currently the paper looks like it is aiming to “crack” one KG. I know this domain is new and will grow over time. Hence, making the work too specific to one KG, especially in the writing, does not send the right message.
Q3: For explainability, a human evaluation is missing. An empirical evaluation is normally a good first step, but a human evaluation makes the work solid and convincing. Can the authors explain why a human evaluation was omitted from the empirical study?
Q4: What are the limitations of this work? What concrete items can the research community learn from this work, i.e., clear assumptions made here that can be relaxed in follow-up work on this topic?
Q5. A system like CONQUER [18] also does reinforcement learning, but in a conversational setting. I miss strong baselines in this paper. Would it make sense to include CONQUER [18] as one of the baselines, turning off its dialog history? BFS with a heuristic will perform badly.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoaB9M2BcF | Causal Question Answering with Reinforcement Learning | [
"Lukas Blübaum",
"Stefan Heindorf"
] | Causal questions inquire about causal relationships between different events or phenomena. Specifically, they often aim to determine whether there is a relationship between two phenomena, or to identify all causes/effects of a phenomenon. Causal questions are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with CauseNet, a large-scale dataset of causal relations and their provenance data. Inspired by recent, successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on CauseNet for causal question answering. We introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting less than 30 nodes per question compared to over 3,000 nodes by a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, CauseNet stores its original source on the web allowing for easy verification of paths. | [
"question answering",
"causality graphs",
"reinforcement learning"
] | https://openreview.net/pdf?id=CoaB9M2BcF | XufcuuM3dT | official_review | 1,700,753,368,331 | CoaB9M2BcF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1786/Reviewer_Mg2e"
] | review: The authors present an approach to causal question answering using an RL
framework applied to a large-scale dataset of causal relations,
CauseNet. The authors aim to address the limitations of current causal
question-answering methods that often lack explanations or evidence
supporting their answers. They propose an Actor-Critic based RL agent,
which is bootstrapped with supervised learning to manage large action
spaces and sparse rewards.
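For readers less familiar with the setup, a generic advantage actor-critic objective of the kind summarized above (my own shorthand, not an equation taken from the paper) is

$$\nabla_\theta J(\theta) \;\approx\; \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\big(R_t - V_\phi(s_t)\big)\big], \qquad \phi \;\leftarrow\; \arg\min_\phi \mathbb{E}\big[\big(R_t - V_\phi(s_t)\big)^2\big],$$

where $R_t$ is the (here sparse, terminal) return; the supervised bootstrapping phase can then presumably be read as maximizing $\log \pi_\theta$ along known good paths before switching to this objective.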
Pros
======
+ This paper introduces CauseNet, a dataset of causal relations with
provenance, that provides a solid foundation for causal reasoning
research.
+ The authors propose an Actor-Critic based RL agent, which is
bootstrapped with supervised learning to manage large action spaces
and sparse rewards. The approach is designed to prune the search
space efficiently, demonstrating significant improvements over
traditional methods like breadth-first search in terms of nodes
visited per question.
+ The authors address a gap in the field of causal question answering
by leveraging reinforcement learning and a large-scale causal
relation dataset.
+ The authors present a clear and detailed methodology, effectively
combining reinforcement learning with supervised learning for
initial bootstrapping. The sequential decision-making framework for
navigating the causal knowledge graph is a notable strength and a
new application.
+ The paper is well-written and nicely formatted.
+ The inclusion of an ablation study supports the hypothesis and adds
depth to the experimentation section.
+ The running example of pneumonia diagnosis in the introduction and
throughout the paper solidifies the significance of the paper.
+ The contributions of the paper are sound: RL for causal QA on KG, a
supervised learning algorithm, and a new dataset.
+ The experiments and the results support the case for developing an
RL approach for binary causal reasoning on KGs. I think this will
be of interest to the web community.
+ The reduction in the number of nodes visited (99%!) is
impressive. I think the authors can highlight this in the
discussion/conclusion as well.
Weaknesses
============
- The current dataset only considers a single relation (the
cause-and-effect relation); validating more complex
question-answer pairs would be insightful. (See question 1)
- There is only one final reward. I'm not sure if this accurately
reflects the medical diagnosis setup (where small diagnoses /
treatments may indicate incremental rewards).
- There are limited comparisons with existing QA models. I think
including comparisons with more recent and advanced
question-answering models might provide a better understanding of
the RL agent's performance.
Rebuttal Update
============
After reading the author's comments, I have revised my technical quality score.
questions: 1. I wonder how the current approach would scale on other types of KG
relations?
2. What would training on negative and positive information look like?
3. I appreciate footnote 4 about the episodes of different lengths
(and not using STOP). I'm wondering how the findings of this paper
would look with paths of different lengths.
4. I'm wondering how this approach would scale to negative examples and/or
stochastic actions?
5. How was the learning rate of 0.0001 found? (It seems that it
was held constant before the hyperparameter optimization.)
6. How well does this method work when the amount of data or the
complexity of questions increases significantly? Can it handle
larger datasets without losing accuracy or speed?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoaB9M2BcF | Causal Question Answering with Reinforcement Learning | [
"Lukas Blübaum",
"Stefan Heindorf"
] | Causal questions inquire about causal relationships between different events or phenomena. Specifically, they often aim to determine whether there is a relationship between two phenomena, or to identify all causes/effects of a phenomenon. Causal questions are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with CauseNet, a large-scale dataset of causal relations and their provenance data. Inspired by recent, successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on CauseNet for causal question answering. We introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting less than 30 nodes per question compared to over 3,000 nodes by a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, CauseNet stores its original source on the web allowing for easy verification of paths. | [
"question answering",
"causality graphs",
"reinforcement learning"
] | https://openreview.net/pdf?id=CoaB9M2BcF | 6CYXXWXUeA | decision | 1,705,909,232,147 | CoaB9M2BcF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The reviewers agree that the paper is novel and original. There are points of concern, in particular the specificity to a single KG (CauseNet), as well as suggestions for improvement. Yet there seems to be consensus on the strengths, including the solid experimental setup and the interesting problem being tackled.
CoaB9M2BcF | Causal Question Answering with Reinforcement Learning | [
"Lukas Blübaum",
"Stefan Heindorf"
] | Causal questions inquire about causal relationships between different events or phenomena. Specifically, they often aim to determine whether there is a relationship between two phenomena, or to identify all causes/effects of a phenomenon. Causal questions are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with CauseNet, a large-scale dataset of causal relations and their provenance data. Inspired by recent, successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on CauseNet for causal question answering. We introduce an Actor-Critic based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting less than 30 nodes per question compared to over 3,000 nodes by a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, CauseNet stores its original source on the web allowing for easy verification of paths. | [
"question answering",
"causality graphs",
"reinforcement learning"
] | https://openreview.net/pdf?id=CoaB9M2BcF | 3EF6PsUNne | official_review | 1,700,778,370,082 | CoaB9M2BcF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1786/Reviewer_ABPN"
] | review: This paper proposes to use reinforcement learning methods for performing explanatory binary causal question answering. The main contributions of the paper are centred around the use of knowledge graphs coupled with reinforcement learning for deriving causal paths that can be used for understanding how particular YES decisions are found. In general, the article is well written and the approach is well explained. From the description of the algorithms, it is not totally clear how different the proposed approach is from previous works, besides the use of the CauseNet knowledge graph.
One important discussion missing from the result comparison and discussion sections is the impact of not using NO causal relationships for pre-training the model, and the fact that at inference time the approach defaults to a NO answer when no cause and effect can be found. This behaviour may lead to false negatives, which are not discussed in the paper. Wouldn't it be more appropriate to return UNKNOWN when there is simply a lack of information in the knowledge graph? I believe the paper should include a discussion about the impact of the defaulting behaviour in relation to false negatives (this may explain the low recall of the proposed model, which is not discussed in the paper).
The discussion of the results also focuses mostly on precision, which tends to favour the proposed models compared to the other metrics (for example, the accuracy of the proposed model is very low on MS MARCO compared to UnifiedQA). This lack of detailed discussion weakens the significance of the paper.
Pros:
- The paper is well written and the running example helps to clarify the aims of the paper.
- The ablation study is thorough and adds some additional knowledge about the strength of the individual model components.
- The proposed reinforcement learning approach appears to have clear benefits when compared to the BFS baseline.
Cons:
- The impact of defaulting to a NO answer is not discussed in the paper, nor how it may affect the results (in particular the false negatives).
- The authors focus their analysis on precision and fail to acknowledge the other reported metrics, such as the really poor accuracy of their model on MS MARCO; moreover, the evaluation only focuses on one alternative model.
- Path explanations are only valid for YES answers. There is no discussion about how future work could deal with such an issue.
- The novelty of the approach is unclear besides the use of CauseNet (most of the model appears to be heavily based on DeepPath, MINERVA and SRN).
--- Rebuttal Update ---
After reading the author's comments, I have revised my novelty rating. Please make sure that the information provided in the comments is added to the paper.
questions: - How does the proposed approach differ (besides the use of CauseNet) from DeepPath, MINERVA, SRN and others?
- Why do the proposed models have lower recall than the other models? Why is the accuracy much lower on MS MARCO?
- What is the impact of only pre-training on YES relationships? What is the impact of defaulting to a NO answer when no cause and effect can be found?
- What would be a good approach for dealing with NO explanation paths (future work)?
ethics_review_flag: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoCiPVtNSu | GNNShap: Scalable and Accurate GNN Explanation using Shapley Values | [
"Selahattin Akkas",
"Ariful Azad"
] | Graph neural networks (GNNs) are popular machine learning models for graphs with many applications across scientific domains.
However, GNNs are considered black box models, and it is challenging to understand how the model makes predictions. Game theory-based Shapley value approaches are popular explanation methods in other domains but are not well-studied for graphs. Some studies have proposed Shapley value-based GNN explanations, yet they have several limitations: they consider limited samples to approximate Shapley values; some mainly focus on small and large coalition sizes, and they are an order of magnitude slower than other explanation methods, making them inapplicable to even moderate-size graphs. In this work, we propose GNNShap, which provides explanations for edges since they provide more natural explanations for graphs and more fine-grained explanations. We overcome the limitations by sampling from all coalition sizes, parallelizing the sampling on GPUs, and speeding up model predictions by batching. GNNShap gives better fidelity scores and faster explanations than baselines on real-world datasets. | [
"GNN explainability",
"Shapley value",
"game theory"
] | https://openreview.net/pdf?id=CoCiPVtNSu | wdSo6AdI5I | official_review | 1,701,215,309,789 | CoCiPVtNSu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1723/Reviewer_PhfE"
] | review: ## **Summary**
This paper proposes a Shapley value based explainer for GNNs with better efficiency. The authors first prune the computational graph for a node, then apply parallelized fast sampling with better coverage, and utilize an efficient matrix-multiplication-based Shapley value computation to generate explanations. Experimental results show that the proposed method can find explanations with higher fidelity scores in a shorter time.
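For reference (added context, not part of the original summary), the quantity that all of these samplers approximate is the Shapley value of a player $i$ (an edge, in GNNShap's case) under a set function $v$:

$$\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\big(v(S \cup \{i\}) - v(S)\big),$$

so the design choices discussed below, i.e., which coalition sizes $|S|$ are sampled and how they are weighted, directly control the quality of this approximation.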
## **Strong Points**
S1. This paper achieves a better efficiency in Shapley value based GNN explanation.
S2. Experimental results show that it can find explanation with better fidelity scores.
## **Weak Points**
W1. The Shapley value is claimed to be non-structure-aware in GStarX, while the Hamiache-Navarro (HN) value used in GStarX is known to be structure-aware. Can the authors elaborate on the advantage of using the Shapley value instead of the HN value?
W2. From reading the paper, it seems that the main contribution is the efficiency improvement compared to EdgeSHAPer, GraphSVX and SubgraphX. Is it really true that the proposed method can handle all the task scenarios of these three methods? It would be better to provide a more in-depth discussion of the similarities and differences between this work and the others.
W3. Figure 2 is confusing: (1) Traditionally, when we talk about the computational graph, the edge between 1866 and 1701 (and similarly the edge between 1862 and 2582) is not included; the current definition of the computational graph seems to conflict with the commonly used one. (2) This graph treats one undirected edge as two directed edges, which may turn an undirected graph into a directed graph and may not be intuitive for real-world undirected graphs. I am curious about the intuition behind this choice. It seems that this does not benefit the computation in Section 3.5 because, for an undirected edge, you don't need to count it twice in matrix M.
W4. What is the benefit of the proposed method over MCTS-based sampling in SubgraphX?
W5. The authors should provide more details on why $\phi$ is equivalent to computing the Shapley value. Right now there is no derivation, only a single function.
W6. The linear regression model for matrix inversion is very unclear. Why do the authors choose linear regression over SVD? What is the intuition behind using linear regression? What are the objective function, input, and output?
W7. In Table 5, for some datasets the method is much faster than the baselines, while for other datasets it only has a marginal improvement over the baseline methods (e.g., SVXSampler on Coauthor-Physics). Can the authors provide more insights about this?
W8. Still in Table 5, the increase in running time does not seem to follow any pattern: the increase from GNNShap 10k to GNNShap 25k to GNNShap 50k is neither linear nor sublinear nor superlinear based on the provided results. Is it possible to analyze the complexity of generating explanations with GNNShap?
questions: Please see weak points above.
ethics_review_flag: No
ethics_review_description: NA.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
CoCiPVtNSu | GNNShap: Scalable and Accurate GNN Explanation using Shapley Values | [
"Selahattin Akkas",
"Ariful Azad"
] | Graph neural networks (GNNs) are popular machine learning models for graphs with many applications across scientific domains.
However, GNNs are considered black box models, and it is challenging to understand how the model makes predictions. Game theory-based Shapley value approaches are popular explanation methods in other domains but are not well-studied for graphs. Some studies have proposed Shapley value-based GNN explanations, yet they have several limitations: they consider limited samples to approximate Shapley values; some mainly focus on small and large coalition sizes, and they are an order of magnitude slower than other explanation methods, making them inapplicable to even moderate-size graphs. In this work, we propose GNNShap, which provides explanations for edges since they provide more natural explanations for graphs and more fine-grained explanations. We overcome the limitations by sampling from all coalition sizes, parallelizing the sampling on GPUs, and speeding up model predictions by batching. GNNShap gives better fidelity scores and faster explanations than baselines on real-world datasets. | [
"GNN explainability",
"Shapley value",
"game theory"
] | https://openreview.net/pdf?id=CoCiPVtNSu | tNegQpf4t6 | decision | 1,705,909,216,112 | CoCiPVtNSu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper provides a more efficient way to compute shapely values for GNN explanations. All the reviewers liked the paper and also appreciated the extra experimental results conducted by the authors during the rebuttal paper. While some reviewers also noted that a few claims in the paper may be exaggerated, overall this a paper worthy of acceptance. |
CoCiPVtNSu | GNNShap: Scalable and Accurate GNN Explanation using Shapley Values | [
"Selahattin Akkas",
"Ariful Azad"
] | Graph neural networks (GNNs) are popular machine learning models for graphs with many applications across scientific domains.
However, GNNs are considered black box models, and it is challenging to understand how the model makes predictions. Game theory-based Shapley value approaches are popular explanation methods in other domains but are not well-studied for graphs. Some studies have proposed Shapley value-based GNN explanations, yet they have several limitations: they consider limited samples to approximate Shapley values; some mainly focus on small and large coalition sizes, and they are an order of magnitude slower than other explanation methods, making them inapplicable to even moderate-size graphs. In this work, we propose GNNShap, which provides explanations for edges since they provide more natural explanations for graphs and more fine-grained explanations. We overcome the limitations by sampling from all coalition sizes, parallelizing the sampling on GPUs, and speeding up model predictions by batching. GNNShap gives better fidelity scores and faster explanations than baselines on real-world datasets. | [
"GNN explainability",
"Shapley value",
"game theory"
] | https://openreview.net/pdf?id=CoCiPVtNSu | nfKTcDd52N | official_review | 1,700,689,500,465 | CoCiPVtNSu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1723/Reviewer_DyPd"
] | review: Graph neural networks (GNNs) are powerful tools for making predictions over graphs. This work introduces GNNShap, an approach that provides explanations for GNNs. GNNShap employs comprehensive sampling across all coalition sizes, parallelizes sampling on GPUs, and enhances prediction speed through batching.
Pros:
- The paper is easy to read.
- The method is faster than previous Shapley-value-based methods for GNN explanation.
- The efficiency is achieved by working only on the computational graphs.
Cons:
- It is hard to evaluate the novelty of the proposed method.
- The method is motivated via efficiency, but the datasets used in the experiments are small.
- Some of the experimental design needs more explanation. The results overall are also not strong.
questions: 1. Could you please explain how the running times are being reported for all the algorithms? Usually, the learning-based algorithms are much faster. The numbers don't make sense.
2. Could you please elaborate on the exact differences from GraphSVX, given that Shapley values have been used for this problem in the past and the sampling techniques are known? There are some papers (e.g., [1]) that even prove guarantees for sampling to compute Shapley values fast (a minimal sampling sketch is included after the reference below).
3. Do you have more results for Fidelity? The experiments only involve 30% removal. What happens in other settings?
[1] Mitchell, Rory, Joshua Cooper, Eibe Frank, and Geoffrey Holmes. "Sampling permutations for shapley value estimation." The Journal of Machine Learning Research 23, no. 1 (2022): 2082-2127.
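To make question 2 concrete, here is a minimal, self-contained sketch (my own illustration, not code from GNNShap, GraphSVX, or [1]) of the classical permutation-sampling estimator to which such guarantees typically apply; the toy value function and player names are hypothetical.

```python
import random

def shapley_permutation_estimate(players, value_fn, num_permutations=1000, seed=0):
    """Monte Carlo estimate of Shapley values via random permutations.

    players: list of hashable player ids (e.g., edge ids of a computational graph).
    value_fn: maps a frozenset of players to a real payoff, e.g., the model's
              prediction when only those edges are kept.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(num_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev_value = value_fn(frozenset(coalition))
        for p in order:
            coalition.add(p)
            cur_value = value_fn(frozenset(coalition))
            phi[p] += cur_value - prev_value  # marginal contribution of p
            prev_value = cur_value
    return {p: total / num_permutations for p, total in phi.items()}

# Toy usage: an additive game, so the Shapley values equal the per-player weights.
weights = {"e1": 1.0, "e2": 2.0, "e3": -0.5}
est = shapley_permutation_estimate(list(weights), lambda S: sum(weights[p] for p in S), 200)
print(est)  # {'e1': 1.0, 'e2': 2.0, 'e3': -0.5} (exact here, since the game is additive)
```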
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
CoCiPVtNSu | GNNShap: Scalable and Accurate GNN Explanation using Shapley Values | [
"Selahattin Akkas",
"Ariful Azad"
] | Graph neural networks (GNNs) are popular machine learning models for graphs with many applications across scientific domains.
However, GNNs are considered black box models, and it is challenging to understand how the model makes predictions. Game theory-based Shapley value approaches are popular explanation methods in other domains but are not well-studied for graphs. Some studies have proposed Shapley value-based GNN explanations, yet they have several limitations: they consider limited samples to approximate Shapley values; some mainly focus on small and large coalition sizes, and they are an order of magnitude slower than other explanation methods, making them inapplicable to even moderate-size graphs. In this work, we propose GNNShap, which provides explanations for edges since they provide more natural explanations for graphs and more fine-grained explanations. We overcome the limitations by sampling from all coalition sizes, parallelizing the sampling on GPUs, and speeding up model predictions by batching. GNNShap gives better fidelity scores and faster explanations than baselines on real-world datasets. | [
"GNN explainability",
"Shapley value",
"game theory"
] | https://openreview.net/pdf?id=CoCiPVtNSu | U1dRy06ngL | official_review | 1,700,810,463,323 | CoCiPVtNSu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1723/Reviewer_QTcW"
] | review: Strengths:
1. The adaptation of Shapley values from game theory to GNN interpretability is a natural fit, and I am pleased to see the authors develop work in this direction.
2. The authors provide an excellent and clear summary of current Shapley value-based interpretability algorithms, which is beneficial for the development of this field.
3. The method section is well designed: it is very clear, easy to read, friendly to readers, and logically well structured.
4. A wealth of experiments validate that the proposed method has clear advantages over traditional Shapley value-based interpretability algorithms.
Weaknesses of the Paper:
1. There are instances of overclaiming, such as the statement in the contribution section that "GNNShap detects many unimportant edges that can be removed from the graph to expedite GNN inferences." Without substantial experimental validation, this should not be presented as a contribution. To my knowledge, most interpretability papers do not mention this in the contribution section because the impact of directly removing unimportant subgraphs is unpredictable.
2. Section 3.3 needs a clearer motivation for the innovative design of the weights. The current text only explains how the new design of the weights works, as shown in Figure 3. However, after reading the entire Section 3.3, I still cannot fully understand why the previous weights (like the green line in Figure 3) are detrimental to the calculation of Shapley values. I think some theoretical explanation or an ablation experiment is needed on this point (for reference, the standard KernelSHAP coalition weights are sketched after this list).
3. A major issue currently facing post-hoc interpretability is the OOD problem, i.e., there is a distribution difference between the explanation subgraph and the original graph. As a result, traditional removal-based evaluation metrics (such as fidelity) may not truly reflect the real effect of an interpretability algorithm. Therefore, I think the authors should consider adding datasets that include ground truth and use ground-truth-based evaluation metrics (such as recall and precision) to assess the pros and cons of interpretability algorithms.
4. Some experimental baselines are missing. Although I understand that the authors have already demonstrated that the proposed method is better than most known Shapley value-based methods, many of the latest post-hoc interpretability methods are not based on Shapley values, and they are still worth comparing against, such as RCExplainer, ReFine, Gem, SubgraphX, etc.
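To make the point about coalition weights in weakness 2 concrete (my addition; whether Figure 3 plots exactly this quantity is an assumption on my part), the standard KernelSHAP weight assigned to a coalition $S$ out of $M$ players is

$$\pi(S) \;=\; \frac{M-1}{\binom{M}{|S|}\,|S|\,(M-|S|)},$$

which grows very large for very small and very large $|S|$; explaining why deviating from, or re-normalizing, this curve helps would address the concern above.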
questions: See the weaknesses
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoCiPVtNSu | GNNShap: Scalable and Accurate GNN Explanation using Shapley Values | [
"Selahattin Akkas",
"Ariful Azad"
] | Graph neural networks (GNNs) are popular machine learning models for graphs with many applications across scientific domains.
However, GNNs are considered black box models, and it is challenging to understand how the model makes predictions. Game theory-based Shapley value approaches are popular explanation methods in other domains but are not well-studied for graphs. Some studies have proposed Shapley value-based GNN explanations, yet they have several limitations: they consider limited samples to approximate Shapley values; some mainly focus on small and large coalition sizes, and they are an order of magnitude slower than other explanation methods, making them inapplicable to even moderate-size graphs. In this work, we propose GNNShap, which provides explanations for edges since they provide more natural explanations for graphs and more fine-grained explanations. We overcome the limitations by sampling from all coalition sizes, parallelizing the sampling on GPUs, and speeding up model predictions by batching. GNNShap gives better fidelity scores and faster explanations than baselines on real-world datasets. | [
"GNN explainability",
"Shapley value",
"game theory"
] | https://openreview.net/pdf?id=CoCiPVtNSu | NfA667Qa8F | official_review | 1,700,707,147,839 | CoCiPVtNSu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1723/Reviewer_YZ7F"
] | review: Considering the overwhelming time consumption of existing Shapley value-based GNN explanation methods, this paper introduces several techniques to reduce it. The proposed techniques are well motivated and reasonable, but the technical contribution seems to be limited.
**Pros**:
1. GNN explanation is a critical problem in graph learning. The proposed techniques in this paper are well motivated and reasonable.
2. The authors propose several mechanisms to reduce the time consumption of Shapley value-based GNN explanation methods.
3. The authors conduct many experiments to evaluate the proposed method.
**Cons**:
1. Although the proposed mechanisms for reducing time consumption are technically reasonable, they are either simple and intuitive or merely simple modifications of existing methods. The technical contribution of the proposed mechanisms is somewhat limited.
2. An ablation study is lacking. The effects of the different components of the proposed method are unclear.
3. The time consumption of GNNShap on several datasets is greater than that of SVXSampler.
questions: As listed in Cons.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CoCiPVtNSu | GNNShap: Scalable and Accurate GNN Explanation using Shapley Values | [
"Selahattin Akkas",
"Ariful Azad"
] | Graph neural networks (GNNs) are popular machine learning models for graphs with many applications across scientific domains.
However, GNNs are considered black box models, and it is challenging to understand how the model makes predictions. Game theory-based Shapley value approaches are popular explanation methods in other domains but are not well-studied for graphs. Some studies have proposed Shapley value-based GNN explanations, yet they have several limitations: they consider limited samples to approximate Shapley values; some mainly focus on small and large coalition sizes, and they are an order of magnitude slower than other explanation methods, making them inapplicable to even moderate-size graphs. In this work, we propose GNNShap, which provides explanations for edges since they provide more natural explanations for graphs and more fine-grained explanations. We overcome the limitations by sampling from all coalition sizes, parallelizing the sampling on GPUs, and speeding up model predictions by batching. GNNShap gives better fidelity scores and faster explanations than baselines on real-world datasets. | [
"GNN explainability",
"Shapley value",
"game theory"
] | https://openreview.net/pdf?id=CoCiPVtNSu | KRlr7woieD | official_review | 1,700,726,446,657 | CoCiPVtNSu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1723/Reviewer_eQAE"
] | review: This paper proposes a Shapley Value-based GNN explanation method. The authors introduce several mechanisms to reduce the computational costs of computing Shapley Values. The motivation of this paper is well described and the proposed method is technically sound.
**Pros**:
1. This paper is well-motivated and the proposed method is technically sound.
2. The experiments are comprehensive and convincing.
**Cons**:
1. The technical contribution of this paper remains unclear. The authors should clarify the difference between the proposed sampling method and existing methods.
2. The baselines are somewhat outdated; more recent baselines should be compared.
3. In Table 5, the running times of SVXSampler and the GNNShap variants vary irregularly with the numbers of nodes and edges; the authors should analyze such results further.
questions: See Cons.
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
CPt3ZmekXe | Efficient Exact and Approximate Betweenness Centrality Computation for Temporal Graphs | [
"Tianming Zhang",
"Yunjun Gao",
"jie zhao",
"Lu Chen",
"Lu Jin",
"Zhengyi Yang",
"Bin Cao",
"JING FAN"
] | Betweenness centrality of a vertex in a graph evaluates how often the vertex occurs in the shortest paths. It is a widely used metric of vertex importance in graph analytics. While betweenness centrality on static graphs has been extensively investigated, many real-world graphs are time-varying and modeled as temporal graphs. Examples include social networks, telecommunication networks, and transportation networks, where a relationship between two vertices occurs at a specific time. Hence, in this paper, we target efficient methods for temporal betweenness centrality computation. We firstly propose an exact algorithm with the new notion of time instance graph, based on which, we derive a temporal dependency accumulation theory for iterative computation. To reduce the size of the time instance graph and improve the efficiency, we propose an additional optimization, which compresses the time instance graph with equivalent vertices and edges, and extends the dependency theory to the compressed graph. Since it is theoretically complex to compute temporal betweenness centrality, we further devise a probabilistically guaranteed, high-quality approximate method to handle massive temporal graphs. Extensive experimental results on real-world temporal networks demonstrate the superior performance of the proposed methods. In particular, our exact and approximate methods outperform the state-of-the-art methods by up to two and five orders of magnitude, respectively. | [
"Temporal Graph",
"Temporal Path",
"Betweenness Centrality",
"Algorithm"
] | https://openreview.net/pdf?id=CPt3ZmekXe | gHs9hGmlFf | decision | 1,705,909,233,940 | CPt3ZmekXe | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This paper considers the computation of Temporal Betweenness Centrality in temporal graphs, providing algorithms with theoretical guarantees along with experimental results.
The paper is quite technical and hard to read, but the results are interesting and valuable, and I could not find flaws; neither did any of the reviewers, as far as I can see.
One reviewer in particular was very critical, essentially due to some missing intuitions that could help the reader understand the definitions and why/how they are sound.
The same problem was also raised by another reviewer, who was nevertheless finally convinced of the soundness of the results.
The first reviewer, even after a long debate with the authors, concluded that (s)he was still unsatisfied.
I read the paper myself, and I agree that the technical part does not help the reader much.
Nonetheless, I think that the authors did a good job in presenting their proofs in the appendix, and did their best during the discussion to convince the reviewers.
They at least convinced me. My suggestion is that, in case of acceptance, they try to follow the directions suggested by the reviewers to improve the writing as much as possible. |
CPt3ZmekXe | Efficient Exact and Approximate Betweenness Centrality Computation for Temporal Graphs | [
"Tianming Zhang",
"Yunjun Gao",
"jie zhao",
"Lu Chen",
"Lu Jin",
"Zhengyi Yang",
"Bin Cao",
"JING FAN"
] | Betweenness centrality of a vertex in a graph evaluates how often the vertex occurs in the shortest paths. It is a widely used metric of vertex importance in graph analytics. While betweenness centrality on static graphs has been extensively investigated, many real-world graphs are time-varying and modeled as temporal graphs. Examples include social networks, telecommunication networks, and transportation networks, where a relationship between two vertices occurs at a specific time. Hence, in this paper, we target efficient methods for temporal betweenness centrality computation. We firstly propose an exact algorithm with the new notion of time instance graph, based on which, we derive a temporal dependency accumulation theory for iterative computation. To reduce the size of the time instance graph and improve the efficiency, we propose an additional optimization, which compresses the time instance graph with equivalent vertices and edges, and extends the dependency theory to the compressed graph. Since it is theoretically complex to compute temporal betweenness centrality, we further devise a probabilistically guaranteed, high-quality approximate method to handle massive temporal graphs. Extensive experimental results on real-world temporal networks demonstrate the superior performance of the proposed methods. In particular, our exact and approximate methods outperform the state-of-the-art methods by up to two and five orders of magnitude, respectively. | [
"Temporal Graph",
"Temporal Path",
"Betweenness Centrality",
"Algorithm"
] | https://openreview.net/pdf?id=CPt3ZmekXe | VYO111NsrH | official_review | 1,701,082,957,116 | CPt3ZmekXe | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission681/Reviewer_zWci"
] | review: The paper proposes new algorithms to compute Temporal Betweenness Centrality (TBC), along with theoretical guarantees and empirical results.
The first contribution is the definition of a transformed time instance graph and the derivation of a new recursive temporal dependency formulation, which is at the core of the efficiency of the proposed algorithms.
The first algorithm proposed, called Exact Temporal Betweenness Centrality (ETBC), arises directly from the transformed time instance graph and the new recursive temporal dependency formulation, and it is suitable for different optimal temporal paths (namely, shortest temporal path, earliest temporal path, and their combination).
The second algorithm proposed, called Optimized Temporal Betweenness Centrality (OTBC), is, as the name suggests, an optimized (and still exact) version of ETBC. The optimization is obtained by compressing vertices and edges of the time instance graph. As a matter of fact, I see no advantage in using ETBC instead of OTBC, but having both in the paper is helpful in understanding the derivation process that ultimately led to OTBC.
Finally, an approximate algorithm, called Approximate Temporal Betweenness Centrality (ATBC), is introduced. Error upper bounds are derived using Rademacher Averages.
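For readers unfamiliar with the tool mentioned above (my gloss, not the paper's exact statement): Rademacher-average arguments yield uniform deviation bounds of the flavor

$$\sup_{f \in \mathcal{F}} \Big|\, \mathbb{E}[f] - \frac{1}{m}\sum_{i=1}^{m} f(X_i) \Big| \;\le\; 2\,\widehat{\mathcal{R}}_m(\mathcal{F}) + c\,\sqrt{\frac{\ln(2/\delta)}{m}}$$

with probability at least $1-\delta$, for $[0,1]$-valued functions and a suitable constant $c$; here $\mathcal{F}$ would presumably collect the per-vertex contribution functions and $m$ the number of samples drawn by ATBC.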
Pros:
- A new recursive temporal dependency formulation is introduced.
- Both the exact and approximate algorithms proposed are really efficient.
- Extensive experiments are performed, all methods compared are implemented in C++ and the entire experimental setup seems really fair.
Cons:
- Perhaps a more discursive style would help the reader better understand all the ideas behind the article. On the other hand I understand that there is a limit of 8 pages and in these cases a more concise and formal form is preferred.
---
I have read other reviewers questions and authors responses and I confirm my evaluation as it is.
questions: The following is a mix of questions and suggestions:
- The quality of Figure 1 can be improved. Additionally, adding at least two parallel edges would help the reader better interpret the meaning of the illustrated toy example.
- (Lines 310-314) When you say "the subpaths of optimal temporal paths may not be optimal" it may be useful to provide a brief counterexample for each of the two optimal temporal paths considered.
- In tables, especially Table 4 which extensively uses numbers with three decimal places, it is hard to distinguish between commas and dots. Maybe consider using spacing instead of commas to represent large numbers.
- (Lines 868-869) "OTBC does not finish on wikitalk". In Table 4, no error is reported for wikitalk (as expected), but the "OTBC" column contains values; is this a typo?
ethics_review_flag: No
ethics_review_description: None.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 7
technical_quality: 7
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
CPt3ZmekXe | Efficient Exact and Approximate Betweenness Centrality Computation for Temporal Graphs | [
"Tianming Zhang",
"Yunjun Gao",
"jie zhao",
"Lu Chen",
"Lu Jin",
"Zhengyi Yang",
"Bin Cao",
"JING FAN"
] | Betweenness centrality of a vertex in a graph evaluates how often the vertex occurs in the shortest paths. It is a widely used metric of vertex importance in graph analytics. While betweenness centrality on static graphs has been extensively investigated, many real-world graphs are time-varying and modeled as temporal graphs. Examples include social networks, telecommunication networks, and transportation networks, where a relationship between two vertices occurs at a specific time. Hence, in this paper, we target efficient methods for temporal betweenness centrality computation. We firstly propose an exact algorithm with the new notion of time instance graph, based on which, we derive a temporal dependency accumulation theory for iterative computation. To reduce the size of the time instance graph and improve the efficiency, we propose an additional optimization, which compresses the time instance graph with equivalent vertices and edges, and extends the dependency theory to the compressed graph. Since it is theoretically complex to compute temporal betweenness centrality, we further devise a probabilistically guaranteed, high-quality approximate method to handle massive temporal graphs. Extensive experimental results on real-world temporal networks demonstrate the superior performance of the proposed methods. In particular, our exact and approximate methods outperform the state-of-the-art methods by up to two and five orders of magnitude, respectively. | [
"Temporal Graph",
"Temporal Path",
"Betweenness Centrality",
"Algorithm"
] | https://openreview.net/pdf?id=CPt3ZmekXe | IDX7hHmECS | official_review | 1,700,866,143,840 | CPt3ZmekXe | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission681/Reviewer_6UGA"
] | review: The topic and the paper are very interesting. The authors investigate the problem of computing betweenness centrality in temporal graphs (in this case, in discrete time). The authors provide exact and approximate algorithms and run them on real-world datasets.
My only question, which could be addressed during the rebuttal, concerns the related work: the authors do not consider fastest paths, even though they make sense in the setting of betweenness centrality (more than foremost paths, for example, which induce a strong bias on time). The authors also missed lines of work that already define and compute betweenness centrality on temporal graphs. See the following papers:
Stream graphs and link streams for the modeling of interactions over time
Temporal betweenness centrality in dynamic graphs
Temporal node centrality in complex networks
Temporal Betweenness Centrality on Shortest Paths Variants
The authors addressed my comments in a satisfactory manner.
questions: See above
ethics_review_flag: No
ethics_review_description: --
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 7
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
CPt3ZmekXe | Efficient Exact and Approximate Betweenness Centrality Computation for Temporal Graphs | [
"Tianming Zhang",
"Yunjun Gao",
"jie zhao",
"Lu Chen",
"Lu Jin",
"Zhengyi Yang",
"Bin Cao",
"JING FAN"
] | Betweenness centrality of a vertex in a graph evaluates how often the vertex occurs in the shortest paths. It is a widely used metric of vertex importance in graph analytics. While betweenness centrality on static graphs has been extensively investigated, many real-world graphs are time-varying and modeled as temporal graphs. Examples include social networks, telecommunication networks, and transportation networks, where a relationship between two vertices occurs at a specific time. Hence, in this paper, we target efficient methods for temporal betweenness centrality computation. We firstly propose an exact algorithm with the new notion of time instance graph, based on which, we derive a temporal dependency accumulation theory for iterative computation. To reduce the size of the time instance graph and improve the efficiency, we propose an additional optimization, which compresses the time instance graph with equivalent vertices and edges, and extends the dependency theory to the compressed graph. Since it is theoretically complex to compute temporal betweenness centrality, we further devise a probabilistically guaranteed, high-quality approximate method to handle massive temporal graphs. Extensive experimental results on real-world temporal networks demonstrate the superior performance of the proposed methods. In particular, our exact and approximate methods outperform the state-of-the-art methods by up to two and five orders of magnitude, respectively. | [
"Temporal Graph",
"Temporal Path",
"Betweenness Centrality",
"Algorithm"
] | https://openreview.net/pdf?id=CPt3ZmekXe | 5yBY4lBrci | official_review | 1,701,113,860,281 | CPt3ZmekXe | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission681/Reviewer_ijXx"
] | review: Summary: the goal of this paper is to introduce a new way of computing betweenness centrality in temporal graphs.
In static graphs, shortest paths are used in the definition of betweenness centrality. In temporal graphs, any notion of optimal path can replace shortest paths in the definition: earliest-finish-time paths, fastest paths, shortest paths, etc.
One main problem in the design of efficient algorithms for betweenness centrality in temporal graphs is that the recursive formula of Brandes [1] is invalid on temporal graphs.
The paper proposes transforming a given temporal graph into a static graph, which the authors call a time instance graph (definition presented in lines 320-342). In Lemma 2 (line 367) they show a recursive formula similar to that of Brandes. Using this formulation, they claim that they can design an efficient algorithm for all the definitions of betweenness centrality in temporal graphs.
[1] Brandes, A faster algorithm for betweenness centrality 2001, Journal of mathematical sociology.
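For reference, the static-graph recursion from [1] that this discussion refers to is

$$\delta_{s\cdot}(v) \;=\; \sum_{w \,:\, v \in P_s(w)} \frac{\sigma_{sv}}{\sigma_{sw}}\,\big(1 + \delta_{s\cdot}(w)\big), \qquad BC(v) \;=\; \sum_{s \neq v} \delta_{s\cdot}(v),$$

where $\sigma_{st}$ counts shortest $s$-$t$ paths and $P_s(w)$ is the set of predecessors of $w$ on them. It breaks on temporal graphs because subpaths of optimal temporal paths need not be optimal, which is what the time instance graph construction is meant to work around.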
Cons: (1) the paper is hard to read and contains many mistakes; (2) I am not sure whether the results are valid.
Detailed comments:
Line 344: in the definition of $\delta_{s\cdot}$ the sums should be over all $v$. The way it is written, it seems like the sum is over the triple $s, z, v$. The same notation problem appears in Definition 6.
In definition 6 is s a vertex in the original graph or a vertex instance in time instance graph?
In Algorithm 1, how can the FLAG be obtained from a shortest-path algorithm so that it is adaptable to earliest/shortest and all other notions of optimality?
I didn't understand the proof of Lemma 2. In particular, what is $\sigma_{sz}((v,t_v), \{(v,t_v), (w,t_w)\})$?
----
After reading the authors' explanation, I still don't have an answer to my questions.
I don't have any intuition for why the results would hold true, and I got lost verifying some of the proofs. I think the paper would benefit from major revisions.
questions: In definition 6 is s a vertex in the original graph or a vertex instance in time instance graph?
In Algorithm 1, how can the FLAG be obtained from a shortest-path algorithm so that it is adaptable to earliest/shortest and all other notions of optimality?
I didn't understand the proof of Lemma 2. In particular, what is $\sigma_{sz}((v,t_v), \{(v,t_v), (w,t_w)\})$?
ethics_review_flag: No
ethics_review_description: no issue
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 1
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
CPt3ZmekXe | Efficient Exact and Approximate Betweenness Centrality Computation for Temporal Graphs | [
"Tianming Zhang",
"Yunjun Gao",
"jie zhao",
"Lu Chen",
"Lu Jin",
"Zhengyi Yang",
"Bin Cao",
"JING FAN"
] | Betweenness centrality of a vertex in a graph evaluates how often the vertex occurs in the shortest paths. It is a widely used metric of vertex importance in graph analytics. While betweenness centrality on static graphs has been extensively investigated, many real-world graphs are time-varying and modeled as temporal graphs. Examples include social networks, telecommunication networks, and transportation networks, where a relationship between two vertices occurs at a specific time. Hence, in this paper, we target efficient methods for temporal betweenness centrality computation. We firstly propose an exact algorithm with the new notion of time instance graph, based on which, we derive a temporal dependency accumulation theory for iterative computation. To reduce the size of the time instance graph and improve the efficiency, we propose an additional optimization, which compresses the time instance graph with equivalent vertices and edges, and extends the dependency theory to the compressed graph. Since it is theoretically complex to compute temporal betweenness centrality, we further devise a probabilistically guaranteed, high-quality approximate method to handle massive temporal graphs. Extensive experimental results on real-world temporal networks demonstrate the superior performance of the proposed methods. In particular, our exact and approximate methods outperform the state-of-the-art methods by up to two and five orders of magnitude, respectively. | [
"Temporal Graph",
"Temporal Path",
"Betweenness Centrality",
"Algorithm"
] | https://openreview.net/pdf?id=CPt3ZmekXe | 0sLsA62c8u | official_review | 1,700,822,081,786 | CPt3ZmekXe | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission681/Reviewer_MivX"
] | review: The paper proposes a novel method for computing betweenness centrality in temporal graphs. There are three parts to the contribution: a temporal influence accumulation method (ETBC), a compression method (OTBC) to accelerate TBC computation, and an approximate method (ATBC) to further reduce the algorithmic complexity. The proposed method is technically sound and is shown to be efficient in both the theoretical analysis and the experiments.
questions: There are some questions that may help me understand more about the proposed method:
1. Can the proposed time instance graph extraction and compression method help to compute other graph statistics, such as the clustering coefficient and graph entropy?
2. Can the proposed temporal influence accumulation method support the computation of temporal resistance distances?
3. What is the detailed distribution of betweenness centrality over the nodes of the temporal graph under different extraction methods?
4. For network updates, how does the error accumulate under edge insertions or deletions? How does the initial graph size influence this accumulation?
ethics_review_flag: No
ethics_review_description: no
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
C94r4icrBb | Modeling the Impact of Timeline Algorithms on Opinion Dynamics Using Low-rank Updates | [
"Tianyi Zhou",
"Stefan Neumann",
"Kiran Garimella",
"Aristides Gionis"
] | Timeline algorithms are key parts of online social networks, but during recent years they have been blamed for increasing the polarization and disagreement in popular social networks. One of the key obstacles to explaining these phenomena is that polarization and disagreement appear in a *global network-level*, whereas timeline algorithms operate on a *local user-level*. Bridging between these two levels of abstraction is a major challenge. In particular, while network-level polarization and disagreement have been successfully studied using opinion-formation models, it has remained an open question of how these models can be augmented to take into account the fine-grained impact of user-level timeline algorithms.
We make progress on this question by providing a way to model the impact of timeline algorithms on opinion dynamics. Specifically, we show how the popular Friedkin--Johnsen opinion-formation model can be augmented based on *aggregate information*, extracted from timeline data. Our idea is to combine the underlying follow-graph of the online social network with a graph that is induced by data from a timeline algorithm. The aggregate information that we consider are the topics that are discussed in the social network, as well as the users' interests and influence on these topics. To the best of our knowledge, this is the first work that allows to obtain theoretical guarantees for combining an opinion-formation model with a graph induced by a timeline algorithm.
We use our model to study the problem of minimizing the polarization and disagreement; we assume that we are allowed to make small changes to the users' timeline compositions by strengthening some topics of discussion and penalizing some others. We present a gradient descent-based algorithm for this problem, and show that under realistic parameter settings, our algorithm computes a $(1+\varepsilon)$-approximate solution in time $\tilde{O}(m\sqrt{n} \lg(1/\varepsilon))$, where $m$ is the number of edges in the graph and $n$ is the number of vertices. We also present an algorithm that provably computes an $\varepsilon$-approximation of our model in near-linear time. We evaluate our method on real-world data and show that it effectively reduces the polarization and disagreement in the network. We also show that our algorithm is orders of magnitude faster than a non-optimized black-box optimization approach. Finally, we release an anonymized graph dataset with ground-truth opinions and more than 27,000 nodes (the previously largest publicly available dataset contains less than 550 nodes). | [
"opinion dynamics",
"opinion-formation",
"Friedkin--Johnsen model",
"social networks",
"polarization and disagreement in social networks"
] | https://openreview.net/pdf?id=C94r4icrBb | tOIupMXqBt | official_review | 1,700,714,741,104 | C94r4icrBb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2502/Reviewer_ZLbT"
] | review: The authors address the potential societal issues caused by existing timeline algorithms and the technical challenges of integrating different levels, namely network-level and user-level. In response to these challenges, the authors extend the existing FJ model to aggregate information from different levels. This allows for the mitigation of polarization by modifying user timelines. In this process, the GDPM is introduced, and the authors provide approximate bounds on its time complexity and solution. To foster the development of the research community, the authors have made the dataset openly accessible.
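For context (my addition, not part of the original summary): in the standard Friedkin-Johnsen model that the paper augments, each user $i$ holds an innate opinion $s_i$, and the expressed opinions at equilibrium are

$$z^{*} \;=\; (I + L)^{-1} s,$$

where $L$ is the Laplacian of the (weighted) follow graph; polarization and disagreement are then quadratic forms in $z^{*}$, and the paper's low-rank update can be read as perturbing $L$ with the graph induced by the timeline data while keeping this linear system approximately solvable in near-linear time.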
While the paper is not well-written, its content aligns with the aim and scope of WWW. Specific strengths and weaknesses are outlined as follows.
**Strengths**
+ S1: Significant research question.
+ S2: Contribution to the research community through data open access.
+ S3: Thorough theoretical validation.
The authors address the potential impact of existing timeline algorithms on polarization and disagreement in societies within social networks, which is a significant and pressing issue in contemporary society. This imbues the study with strong real-world relevance. Furthermore, the authors have made their data openly accessible, greatly expanding the existing dataset's scale. The increase in the number of nodes by one to two orders of magnitude provides a more comprehensive foundation for research in this field, making it a notable contribution. Additionally, the authors provide thorough theoretical proofs to ensure that their proposed algorithm can achieve near-linear time complexity, along with bounds on the approximation range. Such theoretical guarantees are relatively uncommon in related research.
**Weaknesses**
- W1: Lengthy and unclear abstract.
- W2: Poor paper writing.
- W3: Lack of clear overall framework.
The most significant issue in this paper is the excessively long and intricate abstract. The abstract fails to indicate the authors' work and contributions, making it challenging for readers to gauge the insights the paper offers. Similarly, the organization of the main body of the paper is problematic, with the section from lines 123 to 183 excessively elaborating on the contributions, leading to a lack of clarity. Authors should avoid overly detailed descriptions of their methods in this section. Additionally, an overarching framework diagram could help readers better understand the authors' intentions.
questions: 1. The authors conducted experiments with a substantial amount of datasets, suggesting that their proposed method applies to a broad range of data distribution patterns. What mechanism enables this adaptability? Are there potential data scenarios in real-world applications where this method might face challenges in adaptation?
2. The authors mention the integration of network-level and user-level, but how is this integration specifically manifested? Are there similar mechanisms in other existing methods?
ethics_review_flag: No
ethics_review_description: none
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
C94r4icrBb | Modeling the Impact of Timeline Algorithms on Opinion Dynamics Using Low-rank Updates | [
"Tianyi Zhou",
"Stefan Neumann",
"Kiran Garimella",
"Aristides Gionis"
] | Timeline algorithms are key parts of online social networks, but during recent years they have been blamed for increasing the polarization and disagreement in popular social networks. One of the key obstacles to explaining these phenomena is that polarization and disagreement appear in a *global network-level*, whereas timeline algorithms operate on a *local user-level*. Bridging between these two levels of abstraction is a major challenge. In particular, while network-level polarization and disagreement have been successfully studied using opinion-formation models, it has remained an open question of how these models can be augmented to take into account the fine-grained impact of user-level timeline algorithms.
We make progress on this question by providing a way to model the impact of timeline algorithms on opinion dynamics. Specifically, we show how the popular Friedkin--Johnsen opinion-formation model can be augmented based on *aggregate information*, extracted from timeline data. Our idea is to combine the underlying follow-graph of the online social network with a graph that is induced by data from a timeline algorithm. The aggregate information that we consider are the topics that are discussed in the social network, as well as the users' interests and influence on these topics. To the best of our knowledge, this is the first work that allows to obtain theoretical guarantees for combining an opinion-formation model with a graph induced by a timeline algorithm.
We use our model to study the problem of minimizing the polarization and disagreement; we assume that we are allowed to make small changes to the users' timeline compositions by strengthening some topics of discussion and penalizing some others. We present a gradient descent-based algorithm for this problem, and show that under realistic parameter settings, our algorithm computes a $(1+\varepsilon)$-approximate solution in time $\tilde{O}(m\sqrt{n} \lg(1/\varepsilon))$, where $m$ is the number of edges in the graph and $n$ is the number of vertices. We also present an algorithm that provably computes an $\varepsilon$-approximation of our model in near-linear time. We evaluate our method on real-world data and show that it effectively reduces the polarization and disagreement in the network. We also show that our algorithm is orders of magnitude faster than a non-optimized black-box optimization approach. Finally, we release an anonymized graph dataset with ground-truth opinions and more than 27,000 nodes (the previously largest publicly available dataset contains less than 550 nodes). | [
"opinion dynamics",
"opinion-formation",
"Friedkin--Johnsen model",
"social networks",
"polarization and disagreement in social networks"
] | https://openreview.net/pdf?id=C94r4icrBb | gwQVg7Yrjs | decision | 1,705,909,237,252 | C94r4icrBb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The paper introduces a compelling variant of the Friedkin-Johnsen (FJ) model to depict the influence of timeline recommendation algorithms on user exposure to various topics. It proposes an optimization problem aiming to minimize the disagreement-polarization index and presents efficient algorithms with theoretical guarantees and strong practical performance.
Overall, the paper offers a novel approach to an intriguing problem, supported by a balanced mix of theoretical and experimental results. However, the writing quality leaves something to be desired. |
C94r4icrBb | Modeling the Impact of Timeline Algorithms on Opinion Dynamics Using Low-rank Updates | [
"Tianyi Zhou",
"Stefan Neumann",
"Kiran Garimella",
"Aristides Gionis"
] | Timeline algorithms are key parts of online social networks, but during recent years they have been blamed for increasing the polarization and disagreement in popular social networks. One of the key obstacles to explaining these phenomena is that polarization and disagreement appear in a *global network-level*, whereas timeline algorithms operate on a *local user-level*. Bridging between these two levels of abstraction is a major challenge. In particular, while network-level polarization and disagreement have been successfully studied using opinion-formation models, it has remained an open question of how these models can be augmented to take into account the fine-grained impact of user-level timeline algorithms.
We make progress on this question by providing a way to model the impact of timeline algorithms on opinion dynamics. Specifically, we show how the popular Friedkin--Johnsen opinion-formation model can be augmented based on *aggregate information*, extracted from timeline data. Our idea is to combine the underlying follow-graph of the online social network with a graph that is induced by data from a timeline algorithm. The aggregate information that we consider are the topics that are discussed in the social network, as well as the users' interests and influence on these topics. To the best of our knowledge, this is the first work that allows to obtain theoretical guarantees for combining an opinion-formation model with a graph induced by a timeline algorithm.
We use our model to study the problem of minimizing the polarization and disagreement; we assume that we are allowed to make small changes to the users' timeline compositions by strengthening some topics of discussion and penalizing some others. We present a gradient descent-based algorithm for this problem, and show that under realistic parameter settings, our algorithm computes a $(1+\varepsilon)$-approximate solution in time $\tilde{O}(m\sqrt{n} \lg(1/\varepsilon))$, where $m$ is the number of edges in the graph and $n$ is the number of vertices. We also present an algorithm that provably computes an $\varepsilon$-approximation of our model in near-linear time. We evaluate our method on real-world data and show that it effectively reduces the polarization and disagreement in the network. We also show that our algorithm is orders of magnitude faster than a non-optimized black-box optimization approach. Finally, we release an anonymized graph dataset with ground-truth opinions and more than 27,000 nodes (the previously largest publicly available dataset contains less than 550 nodes). | [
"opinion dynamics",
"opinion-formation",
"Friedkin--Johnsen model",
"social networks",
"polarization and disagreement in social networks"
] | https://openreview.net/pdf?id=C94r4icrBb | MjJteXlaY8 | official_review | 1,700,697,783,302 | C94r4icrBb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2502/Reviewer_Hcuw"
] | review: In this paper, the authors augment the Friedkin-Johnsen model with aggregate information from timeline algorithms to address polarization. My review focuses on the following points:
Positive Points:
1) The topic is socially relevant and important, especially in the context of responsible AI.
2) The paper proposes a gradient-based algorithm as a theoretical contribution.
3) I recognize the theoretical results in the appendix as an important contribution. However, I note that some of them are only straightforward derivations based on basic linear-algebra manipulations. Moreover, they are not written in a standard way, redefining each variable (see, for example, C.6), which makes them harder to read.
Negative Points:
1) The organization of the paper needs significant improvement. While the text is easy to read, the organization of some sections is suboptimal. For instance, the contributions are discussed at length in one section, covering topics dealt with in other parts of the paper. The abstract is excessively long. The experimental section needs a better organization and flow of ideas.
2) Some of the motivations presented by the authors lack substantiation in related works. For instance, the claim that timeline algorithms rely only on a user's local neighborhood information is vague. There is an extensive related literature on personalization that deals with polarization, and these works are neglected. An extended section analyzing the problem from the perspective of personalization would enhance the paper.
3) The proposed baselines seem to be very weak. Could you better motivate the choice of the baselines?
4) Although I recognized the gradient-based algorithm as a positive aspect for optimizing the problem, I feel that the comparison with black box solvers seems overstated. Note that, while I agree that the authors' method should be more efficient over time, the conditions of comparison regarding memory are not clear.
5) Minor: There is an issue with the numbering in the paper. Lemma 1, Problem 2, Proposition 3: all of them should be numbered independently.
questions: Could you please provide more detailed explanations for the motivation behind your model, particularly concerning local-information-based timeline algorithms?
Can you clarify the conditions for comparison with optimizers? You cite Convex.jl as a black box, but it relies on popular solvers. Are they really black boxes? Could you elaborate on why, in terms of memory efficiency, your method is still more efficient than such solvers based on that discussion?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
C94r4icrBb | Modeling the Impact of Timeline Algorithms on Opinion Dynamics Using Low-rank Updates | [
"Tianyi Zhou",
"Stefan Neumann",
"Kiran Garimella",
"Aristides Gionis"
] | Timeline algorithms are key parts of online social networks, but during recent years they have been blamed for increasing the polarization and disagreement in popular social networks. One of the key obstacles to explaining these phenomena is that polarization and disagreement appear in a *global network-level*, whereas timeline algorithms operate on a *local user-level*. Bridging between these two levels of abstraction is a major challenge. In particular, while network-level polarization and disagreement have been successfully studied using opinion-formation models, it has remained an open question of how these models can be augmented to take into account the fine-grained impact of user-level timeline algorithms.
We make progress on this question by providing a way to model the impact of timeline algorithms on opinion dynamics. Specifically, we show how the popular Friedkin--Johnsen opinion-formation model can be augmented based on *aggregate information*, extracted from timeline data. Our idea is to combine the underlying follow-graph of the online social network with a graph that is induced by data from a timeline algorithm. The aggregate information that we consider are the topics that are discussed in the social network, as well as the users' interests and influence on these topics. To the best of our knowledge, this is the first work that allows to obtain theoretical guarantees for combining an opinion-formation model with a graph induced by a timeline algorithm.
We use our model to study the problem of minimizing the polarization and disagreement; we assume that we are allowed to make small changes to the users' timeline compositions by strengthening some topics of discussion and penalizing some others. We present a gradient descent-based algorithm for this problem, and show that under realistic parameter settings, our algorithm computes a $(1+\varepsilon)$-approximate solution in time $\tilde{O}(m\sqrt{n} \lg(1/\varepsilon))$, where $m$ is the number of edges in the graph and $n$ is the number of vertices. We also present an algorithm that provably computes an $\varepsilon$-approximation of our model in near-linear time. We evaluate our method on real-world data and show that it effectively reduces the polarization and disagreement in the network. We also show that our algorithm is orders of magnitude faster than a non-optimized black-box optimization approach. Finally, we release an anonymized graph dataset with ground-truth opinions and more than 27,000 nodes (the previously largest publicly available dataset contains less than 550 nodes). | [
"opinion dynamics",
"opinion-formation",
"Friedkin--Johnsen model",
"social networks",
"polarization and disagreement in social networks"
] | https://openreview.net/pdf?id=C94r4icrBb | LUPwMEP8SP | official_review | 1,698,353,323,049 | C94r4icrBb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2502/Reviewer_fhCx"
] | review: ### Summary
The authors consider the problem of reducing polarization and disagreement in a variant of the Friedkin-Johnsen (FJ) model that captures the effect of timeline recommendation algorithms on user exposure to different topics. In their FJ variant, the adjacency matrix of the graph is augmented with a weighted matrix capturing user influence via a low-rank decomposition through topics. This provides a natural way of modeling the influence of a platform's recommendation algorithm, which can increase the fraction of content a user sees on a particular topic. The authors propose an optimization problem to minimize the disagreement-polarization index of the graph with a constrained change to the user-topic exposure matrix. While the optimization problem is convex, and therefore solvable in polynomial time, naive black-box optimization is still prohibitively expensive. To address this, the authors provide an efficient algorithm to approximate the expressed opinions in their modified FJ model. Then, they develop an approximate gradient descent algorithm to efficiently solve the optimization problem. These algorithms are supported by approximation guarantees. The authors evaluate their algorithm on several real-world graphs, including a novel Twitter dataset which includes reasonable ground-truth opinions. Their approach is orders of magnitude faster than black-box optimization, which exceeds resource constraints in some cases. Moreover, it produces better solutions than two greedy baselines.
### Overall impression
Thank you to the authors for an interesting submission! I think this is a great paper, with a nice mix of theoretical guarantees and experiments. The approach to capturing timeline recommendation in the FJ model is much more natural than some of the direct edge addition approaches in prior literature, and the speed and performance of the proposed algorithm is very impressive. The new Twitter dataset is also a nice bonus. However, I think the paper would benefit from a little bit of polishing, particularly in the structure of Section 6.
### Strengths
1. The problem addressed is important and the new variant of the Friedkin-Johnsen model provides a useful new approach for tying together the theory of opinion dynamics and the practice of timeline recommendation.
2. The algorithms are supported by theoretical guarantees and perform well in practice, being much faster than naive black-box optimization and giving better results than heuristic baselines.
3. The paper is very well-written and generally quite clear.
4. The technical quality seems very good. (But, given the length of the appendix, I cannot attest to the correctness of the proofs).
### Weaknesses
1. The results section is structured awkwardly, but I think this is easy to fix (see comments below for suggestions).
### Comments
1. The matrix $M$ used in the statement of Proposition 3 is only defined in the proof sketch. It would be clearer to define $M$ alongside $U$ and $V$.
2. I'm used to smaller learning rate corresponding to smaller steps, whereas the "learning rate" in section 6 is the inverse (larger = smaller steps). My suggestion would be to redefine $L$ to what is currently $L^{-1}$, so you can call $L$ the learning rate in a more usual sense.
3. Section 6 felt out of order. I think it would be clearest to focus on results in order of importance: (1) GDPM gives good solutions, (2) it does so quickly, and (3) here's how. This would suggest putting "Comparison with greedy baselines" and "performance of the optimization algorithms" first, followed by "understanding the behavior of GDPM".
4. Along the lines of the previous comment, I don't think "Impact of learning rate" should be the first subsection in the evaluation, as this isn't really critical. This could be moved to the appendix without losing much. In fact, I think Figure 2 should be in the appendix in favor of Figure 7 that shows a more important result: that GDPM is very fast (or some other figure demonstrating fast runtime, maybe one that also includes BL-1 and BL-2).
5. I don't think Figure 1 benefits from the quadratic fits--I would suggest removing them. I also think Figure 1 should come after the figures that demonstrate good performance and runtime, as Figure 1 is about investigating how GDPM achieves its good performance (which we don't need until we've seen the good performance).
6. I found myself forgetting what $\theta$ was by the time I got to Figure 3. I would remind the reader what $\theta$ represents in the text of "comparison with greedy baselines" and/or the Figure 3 caption (a reminder about $C$ would also be useful).
7. It seems like TwitterLarge and TwitterSmall could be added to Table 2, as could the results of the black-box solver Convex.jl (if it ran out of memory or times out, then this can be noted and is even more evidence for the benefit of GDPM; or, if it achieves the same improvement as GDPM, then this attests to the correctness of your algorithm).
8. GDPM, through Algorithm 1, relies on the Solve() subroutine, which as far as I can tell isn't described in the paper, and is only mentioned to be an algorithm of Koutis, Miller and Peng in the appendix. Since Solve() is used in the proof sketch of Proposition 3, I suggest adding a citation in the main text and saying explicitly that you use as a subroutine their algorithm, which you call Solve(), and referring the reader to their paper for details.
9. In general, when referring to a figure or table in the appendix, I would suggest saying explicitly that they are located in the appendix (for instance "see Tables 3 and 2" on p.7, line 750).
questions: 1. Is there any intuition for the bound on $||V M^{-1} U||_2$ in Proposition 3, or for what this matrix captures?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 7
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
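Reviewer fhCx's question about the matrix $M$ and the bound on $\|VM^{-1}U\|_2$ touches on how a low-rank modification of a Laplacian-type system can be inverted cheaply. Without access to the paper's exact definitions, one plausible reading is a Woodbury-style update, sketched below: the inverse of $A + UV$ is expressed through solves against $A$ plus a small $k \times k$ capacitance matrix, which is what makes a fast Laplacian solver (the Solve() subroutine mentioned in the review) useful. All matrices here are random stand-ins, and whether the paper's $M$ is exactly this capacitance matrix is an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                                    # n nodes, rank-k update with k << n (illustrative)

# A stands in for a well-conditioned system matrix such as I + L in the FJ setting;
# here it is a simple SPD matrix, not the paper's actual Laplacian.
A = np.eye(n) + np.diag(rng.random(n) + 0.5)

U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

# Woodbury identity: (A + U V)^{-1} = A^{-1} - A^{-1} U (I_k + V A^{-1} U)^{-1} V A^{-1}.
# In practice A^{-1} x would never be formed explicitly; each product is a solve against A,
# which a near-linear-time Laplacian solver can provide.
A_inv = np.linalg.inv(A)                        # explicit inverse only for this toy check
cap = np.eye(k) + V @ A_inv @ U                 # the small k x k "capacitance" matrix
woodbury = A_inv - A_inv @ U @ np.linalg.solve(cap, V @ A_inv)

direct = np.linalg.inv(A + U @ V)
print(np.max(np.abs(direct - woodbury)))        # ~1e-12: the two expressions agree
```

This is only meant to convey why a rank-k correction keeps the per-query cost close to that of a few solves against the base system; how the paper's bound on $\|VM^{-1}U\|_2$ enters its analysis is not reconstructed here.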
C94r4icrBb | Modeling the Impact of Timeline Algorithms on Opinion Dynamics Using Low-rank Updates | [
"Tianyi Zhou",
"Stefan Neumann",
"Kiran Garimella",
"Aristides Gionis"
] | Timeline algorithms are key parts of online social networks, but during recent years they have been blamed for increasing the polarization and disagreement in popular social networks. One of the key obstacles to explaining these phenomena is that polarization and disagreement appear in a *global network-level*, whereas timeline algorithms operate on a *local user-level*. Bridging between these two levels of abstraction is a major challenge. In particular, while network-level polarization and disagreement have been successfully studied using opinion-formation models, it has remained an open question of how these models can be augmented to take into account the fine-grained impact of user-level timeline algorithms.
We make progress on this question by providing a way to model the impact of timeline algorithms on opinion dynamics. Specifically, we show how the popular Friedkin--Johnsen opinion-formation model can be augmented based on *aggregate information*, extracted from timeline data. Our idea is to combine the underlying follow-graph of the online social network with a graph that is induced by data from a timeline algorithm. The aggregate information that we consider are the topics that are discussed in the social network, as well as the users' interests and influence on these topics. To the best of our knowledge, this is the first work that allows to obtain theoretical guarantees for combining an opinion-formation model with a graph induced by a timeline algorithm.
We use our model to study the problem of minimizing the polarization and disagreement; we assume that we are allowed to make small changes to the users' timeline compositions by strengthening some topics of discussion and penalizing some others. We present a gradient descent-based algorithm for this problem, and show that under realistic parameter settings, our algorithm computes a $(1+\varepsilon)$-approximate solution in time $\tilde{O}(m\sqrt{n} \lg(1/\varepsilon))$, where $m$ is the number of edges in the graph and $n$ is the number of vertices. We also present an algorithm that provably computes an $\varepsilon$-approximation of our model in near-linear time. We evaluate our method on real-world data and show that it effectively reduces the polarization and disagreement in the network. We also show that our algorithm is orders of magnitude faster than a non-optimized black-box optimization approach. Finally, we release an anonymized graph dataset with ground-truth opinions and more than 27,000 nodes (the previously largest publicly available dataset contains less than 550 nodes). | [
"opinion dynamics",
"opinion-formation",
"Friedkin--Johnsen model",
"social networks",
"polarization and disagreement in social networks"
] | https://openreview.net/pdf?id=C94r4icrBb | 8Al2fmJWC5 | official_review | 1,700,753,961,307 | C94r4icrBb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2502/Reviewer_Dxqo"
] | review: This paper introduces a method to modify the recommendations of the timeline algorithm, with the objective of reducing polarization and disagreement by allowing subtle adjustments to users' attention towards specific topics. To this end, the paper first presents a model that can quantify the impact of timeline algorithms on polarization and disagreements, and then presents a theoretical analysis of the approximation and time complexity bounds of the proposed algorithm. This paper supports future research by publishing its code and collected datasets.
Strong points.
- S1: This paper provides its code and collected datasets, surpassing the size of previously available datasets with ground-truth opinions. The accessibility of this code and dataset will prove valuable for future research.
- S2: This paper introduces a novel model based on the Friedkin-Johnsen (FJ) model, incorporating aggregate information from the timeline algorithm. This new model quantifies the influence of timeline algorithms on polarization and disagreements.
- S3: This paper introduces a gradient descent-based algorithm aimed at minimizing polarization and disagreement through minor adjustments to users' attention towards specific topics. Additionally, this paper provides theoretical guarantees on the algorithm's approximation bound and time complexity.
Weak points.
- W1: The studied problem may be appealing in theory, but it is not clear if there is any real social network that will use the proposed method to reduce polarization and disagreements. Evidence to support the practical utility of the problem setting (i.e., Problem 2) would be great. In addition, I am wondering how well the adopted index I(G) can measure polarization in real-world applications.
- W2: This paper conducts experiments on two different parameters, C and \theta, by separately changing one while keeping the other constant. It would be a plus if this paper conducted experiments where both C and \theta are varied simultaneously.
- W3: It would be better if the paper could explain why the first baseline fails to minimize polarization and disagreement.
questions: Please see the weak points.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
C94r4icrBb | Modeling the Impact of Timeline Algorithms on Opinion Dynamics Using Low-rank Updates | [
"Tianyi Zhou",
"Stefan Neumann",
"Kiran Garimella",
"Aristides Gionis"
] | Timeline algorithms are key parts of online social networks, but during recent years they have been blamed for increasing the polarization and disagreement in popular social networks. One of the key obstacles to explaining these phenomena is that polarization and disagreement appear in a *global network-level*, whereas timeline algorithms operate on a *local user-level*. Bridging between these two levels of abstraction is a major challenge. In particular, while network-level polarization and disagreement have been successfully studied using opinion-formation models, it has remained an open question of how these models can be augmented to take into account the fine-grained impact of user-level timeline algorithms.
We make progress on this question by providing a way to model the impact of timeline algorithms on opinion dynamics. Specifically, we show how the popular Friedkin--Johnsen opinion-formation model can be augmented based on *aggregate information*, extracted from timeline data. Our idea is to combine the underlying follow-graph of the online social network with a graph that is induced by data from a timeline algorithm. The aggregate information that we consider are the topics that are discussed in the social network, as well as the users' interests and influence on these topics. To the best of our knowledge, this is the first work that allows to obtain theoretical guarantees for combining an opinion-formation model with a graph induced by a timeline algorithm.
We use our model to study the problem of minimizing the polarization and disagreement; we assume that we are allowed to make small changes to the users' timeline compositions by strengthening some topics of discussion and penalizing some others. We present a gradient descent-based algorithm for this problem, and show that under realistic parameter settings, our algorithm computes a $(1+\varepsilon)$-approximate solution in time $\tilde{O}(m\sqrt{n} \lg(1/\varepsilon))$, where $m$ is the number of edges in the graph and $n$ is the number of vertices. We also present an algorithm that provably computes an $\varepsilon$-approximation of our model in near-linear time. We evaluate our method on real-world data and show that it effectively reduces the polarization and disagreement in the network. We also show that our algorithm is orders of magnitude faster than a non-optimized black-box optimization approach. Finally, we release an anonymized graph dataset with ground-truth opinions and more than 27,000 nodes (the previously largest publicly available dataset contains less than 550 nodes). | [
"opinion dynamics",
"opinion-formation",
"Friedkin--Johnsen model",
"social networks",
"polarization and disagreement in social networks"
] | https://openreview.net/pdf?id=C94r4icrBb | 5LWesXOAla | official_review | 1,698,669,758,160 | C94r4icrBb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2502/Reviewer_inex"
] | review: The authors point out critical problems of existing studies on polarization and disagreement in online social networks, with a particular focus on the impact of timeline algorithms. As a solution, they propose an augmented FJ model that combines a fixed underlying graph with timeline algorithm-derived aggregate information. This model is optimized by a novel algorithm that guarantees bounded running time.
The proposed method is also empirically evaluated on 27 real-world datasets, including newly collected ones. Extensive experiments, including important ablation studies, are conducted to confirm the model's effectiveness and verify its design components. The code is submitted and will be released to the public, and the future directions proposed by the authors sound reasonable and interesting.
The paper is constructive and well-organized. However, I have some questions for the authors, which I would like to clarify.
questions: - The authors fixed C=0.1 in most of their experiments. How is the running time affected by different C values?
- The authors compared their method with two greedy baselines. It would be valuable to explore other baseline methods for a more comprehensive evaluation.
- The method in this paper is specifically rooted in the FJ model. However, it would be worthwhile to explore (or at least discuss) potential extensions of this approach to other popular algorithms.
- Have the authors considered other metrics such as internal conflict, controversy, or disagreement-controversy? Please refer to the papers below.
Quantifying and minimizing risk of conflict in social networks (KDD 2018)\
Measuring and moderating opinion polarization in social networks (Data Mining and Knowledge Discovery 2017)\
Minimizing Polarization and Disagreement in Social Networks (WWW 2018)
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
C76EThDBIo | Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties | [
"Zhaohua Chen",
"Mingwei Yang",
"Chang Wang",
"Jicheng Li",
"Zheng Cai",
"Yukun Ren",
"zhihua zhu",
"Xiaotie Deng"
] | In today's online advertising markets, it is common for advertisers to set long-term budgets.
Correspondingly, advertising platforms adopt budget control methods to ensure that advertisers' payments lie within their budgets.
Most budget control methods rely on the value distributions of advertisers.
However, due to the complex advertising landscape and potential privacy concerns, the platform hardly learns advertisers' true priors.
Thus, it is crucial to understand how budget control auction mechanisms perform under unassured priors.
This work answers this problem from multiple aspects.
Specifically, we examine five budget-constrained parameterized mechanisms: bid-discount/pacing first-price/second-price auctions and the Bayesian revenue-optimal auction.
We consider the unassured prior game among the seller and all buyers induced by these five mechanisms in the stochastic model.
We restrict the parameterized mechanisms to satisfy the budget-extracting condition, which maximizes the seller's revenue by extracting buyers' budgets as effectively as possible.
Our main result shows that the Bayesian revenue-optimal mechanism and the budget-extracting bid-discount first-price mechanism yield the same set of Nash equilibrium outcomes in the unassured prior game.
This implies that simple mechanisms can be as robust as the optimal mechanism under unassured priors in the budget-constrained setting.
In the symmetric case, we further show that all these five (budget-extracting) mechanisms share the same set of possible outcomes.
We further dig into the structural properties of these mechanisms.
We characterize sufficient and necessary conditions on the budget-extracting parameter tuple for bid-discount/pacing first-price auctions.
Meanwhile, when buyers do not take strategic behaviors, we exploit the dominance relationships of these mechanisms by revealing their intrinsic structures.
In summary, our results establish vast connections among budget-constrained auctions with unassured priors and explore their structural properties, particularly highlighting the advantages of first-price mechanisms. | [
"Budget-Constrained Auctions",
"Unassured Priors",
"Strategic Equivalence",
"Structural Properties"
] | https://openreview.net/pdf?id=C76EThDBIo | rUO4jFreNE | decision | 1,705,909,206,922 | C76EThDBIo | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The review team uniformly appreciated the fundamental nature of the results presented in the paper. The question of revenue- and strategic-equivalence between auctions with unknown priors is a very interesting one, and has recently seen important results, but without budget constraints. This paper extends those results to a setting with budget constraints, and also generalizes by considering many different auction types. The writing is clear, and the results will definitely be of interest to the AGT community researchers and practitioners alike.
A couple more references on the budget-pacing literature:
1) The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems; Santiago Balseiro, Haihao Lu and Vahab Mirrokni
2) Analysis of Dual-Based PID Controllers through Convolutional Mirror Descent; Santiago Balseiro, Haihao Lu, Vahab Mirrokni and Balasubramanian Sivan
C76EThDBIo | Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties | [
"Zhaohua Chen",
"Mingwei Yang",
"Chang Wang",
"Jicheng Li",
"Zheng Cai",
"Yukun Ren",
"zhihua zhu",
"Xiaotie Deng"
] | In today's online advertising markets, it is common for advertisers to set long-term budgets.
Correspondingly, advertising platforms adopt budget control methods to ensure that advertisers' payments lie within their budgets.
Most budget control methods rely on the value distributions of advertisers.
However, due to the complex advertising landscape and potential privacy concerns, the platform hardly learns advertisers' true priors.
Thus, it is crucial to understand how budget control auction mechanisms perform under unassured priors.
This work answers this problem from multiple aspects.
Specifically, we examine five budget-constrained parameterized mechanisms: bid-discount/pacing first-price/second-price auctions and the Bayesian revenue-optimal auction.
We consider the unassured prior game among the seller and all buyers induced by these five mechanisms in the stochastic model.
We restrict the parameterized mechanisms to satisfy the budget-extracting condition, which maximizes the seller's revenue by extracting buyers' budgets as effectively as possible.
Our main result shows that the Bayesian revenue-optimal mechanism and the budget-extracting bid-discount first-price mechanism yield the same set of Nash equilibrium outcomes in the unassured prior game.
This implies that simple mechanisms can be as robust as the optimal mechanism under unassured priors in the budget-constrained setting.
In the symmetric case, we further show that all these five (budget-extracting) mechanisms share the same set of possible outcomes.
We further dig into the structural properties of these mechanisms.
We characterize sufficient and necessary conditions on the budget-extracting parameter tuple for bid-discount/pacing first-price auctions.
Meanwhile, when buyers do not take strategic behaviors, we exploit the dominance relationships of these mechanisms by revealing their intrinsic structures.
In summary, our results establish vast connections among budget-constrained auctions with unassured priors and explore their structural properties, particularly highlighting the advantages of first-price mechanisms. | [
"Budget-Constrained Auctions",
"Unassured Priors",
"Strategic Equivalence",
"Structural Properties"
] | https://openreview.net/pdf?id=C76EThDBIo | fCubef6hnP | official_review | 1,700,698,586,166 | C76EThDBIo | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission175/Reviewer_K4dc"
] | review: The paper studies various budget-constrained parametrized mechanisms in a multiple buyer setting with unassured priors. The paper provides theoretical results showing strategic equivalence and revenue-dominance relationships among budget-constrained parametrized mechanisms.
Strengths:
The paper’s model is motivated by the fact that platforms do not know the prior value distributions or actual values of advertisers. In practice, platforms observe only advertisers’ bids and may not know their values. This setting is different from existing works on budget control methods.
The paper’s strategic equivalence results provide a justification for major platforms’ recent transition to first-price auctions. More specifically, they show a strategic equivalence between the Bayesian revenue-optimal mechanism and the budget-extracting bid-discount first-price mechanism in the unassured prior setting.
questions: What is a utility-revenue profile? Is this a pair of realized utility and revenue? In Definition 4.1, are the utility-revenue profile and revenue-utility profile the same thing?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 7
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
C76EThDBIo | Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties | [
"Zhaohua Chen",
"Mingwei Yang",
"Chang Wang",
"Jicheng Li",
"Zheng Cai",
"Yukun Ren",
"zhihua zhu",
"Xiaotie Deng"
] | In today's online advertising markets, it is common for advertisers to set long-term budgets.
Correspondingly, advertising platforms adopt budget control methods to ensure that advertisers' payments lie within their budgets.
Most budget control methods rely on the value distributions of advertisers.
However, due to the complex advertising landscape and potential privacy concerns, the platform hardly learns advertisers' true priors.
Thus, it is crucial to understand how budget control auction mechanisms perform under unassured priors.
This work answers this problem from multiple aspects.
Specifically, we examine five budget-constrained parameterized mechanisms: bid-discount/pacing first-price/second-price auctions and the Bayesian revenue-optimal auction.
We consider the unassured prior game among the seller and all buyers induced by these five mechanisms in the stochastic model.
We restrict the parameterized mechanisms to satisfy the budget-extracting condition, which maximizes the seller's revenue by extracting buyers' budgets as effectively as possible.
Our main result shows that the Bayesian revenue-optimal mechanism and the budget-extracting bid-discount first-price mechanism yield the same set of Nash equilibrium outcomes in the unassured prior game.
This implies that simple mechanisms can be as robust as the optimal mechanism under unassured priors in the budget-constrained setting.
In the symmetric case, we further show that all these five (budget-extracting) mechanisms share the same set of possible outcomes.
We further dig into the structural properties of these mechanisms.
We characterize sufficient and necessary conditions on the budget-extracting parameter tuple for bid-discount/pacing first-price auctions.
Meanwhile, when buyers do not take strategic behaviors, we exploit the dominance relationships of these mechanisms by revealing their intrinsic structures.
In summary, our results establish vast connections among budget-constrained auctions with unassured priors and explore their structural properties, particularly highlighting the advantages of first-price mechanisms. | [
"Budget-Constrained Auctions",
"Unassured Priors",
"Strategic Equivalence",
"Structural Properties"
] | https://openreview.net/pdf?id=C76EThDBIo | HGkCgIYxrV | official_review | 1,700,597,922,759 | C76EThDBIo | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission175/Reviewer_3JmV"
] | review: The authors consider an auction problem inspired by an auto-bidding context where an auctioneer aiming to maximize revenue faces a set of buyers with public ex-ante budget constraints (i.e., expected payments must be less than a budget cap) and private values drawn from independent distributions who aim to maximize their quasi-linear utility subject to the budget constraint. In contrast to the standard fully Bayesian model, the authors assume that these value distributions are unknown and the auctioneer instead only has access to the distribution of past bids (and, hence, the bidding quantile functions) of the participants. The goal of this paper is then to investigate the equilibria and strategic properties of various well-studied auction formats in this model - namely, bid-discounting and pacing first and second price auctions as well as the Bayesian revenue-optimal auction for the fully Bayesian setting.
The central results in this paper are that the Bayesian revenue-optimal auction is strategically equivalent to the budget-extracting bid-discount first-price auction (i.e., there is a mapping between strategies in one auction and the other such that outcomes are the same). Moreover, they demonstrate that the equilibrium outcomes in these two auctions are the same. They further show strategic equivalence between first and second price auctions when bidders are symmetric. Finally, they compare the revenue obtained by these auctions when bidders are not strategic and demonstrate the revenue superiority of the bid-discount first-price auction.
This paper has many strengths. First, it considers a well-studied model of auto-bidding that is of interest to the WebConf community from a new, and seemingly natural, angle of unassured priors. Second, it provides a very comprehensive set of results comparing many auctions proposed in the literature and in practice. Third, the proofs are non-trivial and the results offer interesting insights regarding the use of “simpler” mechanisms (e.g., first-price auctions) versus “complex” ones (e.g., the Bayesian revenue-optimal auction) in auto-bidding settings with unassured priors by showing that the simpler mechanisms induce the same outcomes as more complex ones.
On the negative side, this paper is, in a sense, "purely" theoretical, so it is not clear whether or not the theoretical insights are borne out in practice on real-world data. Second, while the unassured priors model is interesting and moves in a good direction, in my view, away from fully Bayesian assumptions, it isn't totally clear that it is the "right" model for practice. Finally, I think the paper would benefit from a higher-level "roadmap" of the results and some intuition for the proofs (which are quite technical). However, despite these (smaller) drawbacks, I am positive about this paper and think it makes a nice contribution to the literature on auctions in the auto-bidding world.
Smaller comments
Line 269-271: “there exists a maximum budget-feasible parameter…, and is budget-extracting” -> I think “and” should probably be replaced with “and this parameter” in this sentence.
Line 1294: I would suggest writing out what “qf” means here. You mention it in the body of the paper after the statement of the lemma which this proof is for.
[After rebuttal] Thank you for your responses. I remain positive about this paper and would recommend acceptance.
questions: Can you comment on what would fundamentally change in your modelling and in the results if we required the budget constraint to hold ex-post, rather than in expectation?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
C76EThDBIo | Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties | [
"Zhaohua Chen",
"Mingwei Yang",
"Chang Wang",
"Jicheng Li",
"Zheng Cai",
"Yukun Ren",
"zhihua zhu",
"Xiaotie Deng"
] | In today's online advertising markets, it is common for advertisers to set long-term budgets.
Correspondingly, advertising platforms adopt budget control methods to ensure that advertisers' payments lie within their budgets.
Most budget control methods rely on the value distributions of advertisers.
However, due to the complex advertising landscape and potential privacy concerns, the platform hardly learns advertisers' true priors.
Thus, it is crucial to understand how budget control auction mechanisms perform under unassured priors.
This work answers this problem from multiple aspects.
Specifically, we examine five budget-constrained parameterized mechanisms: bid-discount/pacing first-price/second-price auctions and the Bayesian revenue-optimal auction.
We consider the unassured prior game among the seller and all buyers induced by these five mechanisms in the stochastic model.
We restrict the parameterized mechanisms to satisfy the budget-extracting condition, which maximizes the seller's revenue by extracting buyers' budgets as effectively as possible.
Our main result shows that the Bayesian revenue-optimal mechanism and the budget-extracting bid-discount first-price mechanism yield the same set of Nash equilibrium outcomes in the unassured prior game.
This implies that simple mechanisms can be as robust as the optimal mechanism under unassured priors in the budget-constrained setting.
In the symmetric case, we further show that all these five (budget-extracting) mechanisms share the same set of possible outcomes.
We further dig into the structural properties of these mechanisms.
We characterize sufficient and necessary conditions on the budget-extracting parameter tuple for bid-discount/pacing first-price auctions.
Meanwhile, when buyers do not take strategic behaviors, we exploit the dominance relationships of these mechanisms by revealing their intrinsic structures.
In summary, our results establish vast connections among budget-constrained auctions with unassured priors and explore their structural properties, particularly highlighting the advantages of first-price mechanisms. | [
"Budget-Constrained Auctions",
"Unassured Priors",
"Strategic Equivalence",
"Structural Properties"
] | https://openreview.net/pdf?id=C76EThDBIo | 4aUUcX4FVL | official_review | 1,701,216,503,749 | C76EThDBIo | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission175/Reviewer_JHvH"
] | review: Summary
This paper introduces a notion of “strategic equivalence” between classes of auctions for budget-constrained bidders with unknown priors, and characterizes when common classes of auctions are equivalent.
More specifically, the authors consider an auction setting where bidders know their own priors but the auctioneer does not. At the beginning of the game, the bidders can report (perhaps untruthfully) their priors to the auctioneer. The auctioneer then chooses a mechanism from a parameterized class of mechanisms (e.g. a FPA where bidder i’s bid is weighted by alpha_i) and runs an auction amongst all the bidders. Instead of trying to solve the resulting game, the paper asks which parameterized classes give rise to the same underlying game between bidders and auctioneer. For example, is there a simple reduction from the case where a bidder chooses to run a FPA (with arbitrary bid weights) vs a SPA (with arbitrary bid weights)?
The authors consider 5 different classes of auction: discounted and pacing versions of both FPAs and SPAs, and the Bayesian revenue-optimal auction, each parametrized by weighting the bid of bidder i (or in the revenue-optimal case, the virtual bid) by a scaling parameter. The authors also introduce two notions of equivalence to capture this relationship: weak equivalence, which can transform the parameters arbitrarily, and strong equivalence, which must act independently on each bidder’s parameter.
The main result is that (under some Lipschitz continuity / budget consumption assumptions) the class of Bayesian-revenue optimal auctions is strongly equivalent to the class of bid-discounted FPAs. Furthermore, if all bidders are symmetric, it is possible to show that all these classes of auctions are weakly equivalent.
Evaluation
The growth of adoption of autobidding (with budget and target constraints) has made the classic auction design problem increasingly complex. This is an interesting result in auction theory as it (in some sense) simplifies the space of possible auctions to run, in a similar way that e.g. the revelation principle and Myerson’s lemma simplify the space of possible auctions in the classic prior-aware setting. With the caveat that I have not deeply thought about these questions before, I find it pretty interesting that the somewhat complex class of Bayesian revenue-optimal auctions (with virtual-value-reweighting) is equivalent to the class of first-price auctions (with bid discounts). I support this paper for acceptance -- I think it will be of interest to the more theoretical mechanism design / algorithmic economics crowd attending WebConf.
The paper was generally well-written and easy to read (although it is definitely a little tricky to parse the results when they first appear amidst the sea of acronyms).
questions: Feel free to respond to any comments / potential misunderstandings in the above review.
One thing which was not too clear to me from reading the paper (as someone who is not very familiar with the prior work) -- are these (or similar) strategic equivalence results already known in the setting where bidders are not budget-constrained, or is this notion of strategic equivalence entirely novel (even in that setting)? If it is, do any of the equivalence results hold in the budget-unconstrained setting? (I know some of the results require the parametrized auction to consume the entire budget -- perhaps this prevents some results from clearly extending).
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
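The parameterized auctions summarized in this review can be made concrete with a small simulation. The sketch below implements a pacing-style first-price auction (allocation to the highest multiplier-scaled bid, with the winner paying that scaled bid, which is the usual pacing convention) and then searches coordinate-wise for multipliers under which each buyer's expected payment roughly exhausts min(budget, unconstrained spend). This is only a crude stand-in for the paper's budget-extracting condition; the value distributions, budgets, and the assumption that buyers bid their values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_buyers, n_samples = 3, 100_000
bids = rng.uniform(0.0, 1.0, size=(n_samples, n_buyers))   # stand-in bid distributions
budgets = np.array([0.05, 0.15, 0.40])                     # caps on *expected* per-round payment

def expected_payments(alpha):
    """Monte Carlo expected payments in a pacing-style parameterized first-price auction:
    the item goes to argmax_i alpha_i * b_i and the winner pays alpha_i * b_i."""
    scaled = alpha * bids
    winners = scaled.argmax(axis=1)
    return np.array([scaled[winners == i, i].sum() for i in range(n_buyers)]) / n_samples

print(expected_payments(np.ones(n_buyers)))   # without pacing, some budgets are exceeded

# Coordinate-wise bisection on the multipliers: push each alpha_i up until buyer i's
# expected payment is (approximately) min(budget_i, what she would spend at alpha_i = 1).
alpha = np.ones(n_buyers)
for _ in range(8):                            # outer sweeps, since the buyers interact
    for i in range(n_buyers):
        lo, hi = 0.0, 1.0
        for _ in range(20):
            alpha[i] = 0.5 * (lo + hi)
            if expected_payments(alpha)[i] <= budgets[i]:
                lo = alpha[i]
            else:
                hi = alpha[i]
        alpha[i] = lo                         # stay on the budget-feasible side

print(np.round(alpha, 3), np.round(expected_payments(alpha), 3), budgets)
```

Under strategic reporting of priors, which is the paper's actual focus, the multipliers would be computed from the reported distributions rather than from simulated bids; the sketch only illustrates the budget-extraction mechanics for one of the five mechanism families.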
C76EThDBIo | Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties | [
"Zhaohua Chen",
"Mingwei Yang",
"Chang Wang",
"Jicheng Li",
"Zheng Cai",
"Yukun Ren",
"zhihua zhu",
"Xiaotie Deng"
] | In today's online advertising markets, it is common for advertisers to set long-term budgets.
Correspondingly, advertising platforms adopt budget control methods to ensure that advertisers' payments lie within their budgets.
Most budget control methods rely on the value distributions of advertisers.
However, due to the complex advertising landscape and potential privacy concerns, the platform hardly learns advertisers' true priors.
Thus, it is crucial to understand how budget control auction mechanisms perform under unassured priors.
This work answers this problem from multiple aspects.
Specifically, we examine five budget-constrained parameterized mechanisms: bid-discount/pacing first-price/second-price auctions and the Bayesian revenue-optimal auction.
We consider the unassured prior game among the seller and all buyers induced by these five mechanisms in the stochastic model.
We restrict the parameterized mechanisms to satisfy the budget-extracting condition, which maximizes the seller's revenue by extracting buyers' budgets as effectively as possible.
Our main result shows that the Bayesian revenue-optimal mechanism and the budget-extracting bid-discount first-price mechanism yield the same set of Nash equilibrium outcomes in the unassured prior game.
This implies that simple mechanisms can be as robust as the optimal mechanism under unassured priors in the budget-constrained setting.
In the symmetric case, we further show that all these five (budget-extracting) mechanisms share the same set of possible outcomes.
We further dig into the structural properties of these mechanisms.
We characterize sufficient and necessary conditions on the budget-extracting parameter tuple for bid-discount/pacing first-price auctions.
Meanwhile, when buyers do not take strategic behaviors, we exploit the dominance relationships of these mechanisms by revealing their intrinsic structures.
In summary, our results establish vast connections among budget-constrained auctions with unassured priors and explore their structural properties, particularly highlighting the advantages of first-price mechanisms. | [
"Budget-Constrained Auctions",
"Unassured Priors",
"Strategic Equivalence",
"Structural Properties"
] | https://openreview.net/pdf?id=C76EThDBIo | 3r5nFo7BOg | official_review | 1,700,963,053,558 | C76EThDBIo | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission175/Reviewer_EYLe"
] | review: Summary:
This paper studies 5 different mechanism designs for budget constrained bidders: Bid-discount/Pacing x SPA/FPA, and Bayesian revenue-optimal auction, when the prior on the bidders value distributions is unknown.
Under mild assumptions, the authors show that
(1) budget-extracting bid-discount first-price auction is strongly strategic-equivalent to the Bayesian revenue-optimal auction.
(2) in the symmetric case, first-price and second-price auctions are weakly strategic-equivalent.
(3) without strategic bidding, bid-discount first-price auction dominates Bayesian revenue-optimal auction and pacing first-price auction, while Bayesian revenue-optimal auction outperforms two variants of second-price auctions.
Comments:
This paper is generally well-written and clear. The problem studied in this paper is interesting and relevant to auction design in online advertising. The collection of results is non-trivial and technically strong. The strong strategic equivalence between the bid-discount first-price auction and the Bayesian revenue-optimal auction is particularly interesting and offers insight that may be applicable in practice. As most proofs are deferred to the appendix, the reviewer did not check their details, but they look plausible in hindsight.
questions: 1. The reviewer wonders what happens if the bidders can manipulate their budgets as well. Do the strategic equivalence results continue to hold?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bvz8q5MzIF | Fact Embedding through Diffusion Model for Knowledge Graph Completion | [
"xiao Long",
"Liansheng Zhuang",
"Aodi Li",
"Houqiang Li",
"Shafei Wang"
] | Knowledge graph embedding (KGE) is an efficient and scalable method for knowledge graph completion tasks. Existing KGE models typically map entities and relations into a unified continuous vector space and define a score function to capture the connectivity patterns among the elements (entities and relations) of facts. The score on a fact measures its plausibility in a knowledge graph (KG). However, since the connectivity patterns are very complex in a real knowledge graph, it is difficult to define an explicit and efficient score function to capture them, which also limits their performance. This paper argues that plausible facts in a knowledge graph come from a distribution in the low-dimensional fact space. Inspired by this insight, this paper proposes a novel framework called Fact Embedding through Diffusion Model (FDM) to address the knowledge graph completion task. Instead of defining a score function to measure the plausibility of facts in a knowledge graph, this framework directly learns the distribution of plausible facts from the known knowledge graph and casts the entity prediction task into the conditional fact generation task. Specifically, we concatenate the elements embedding in a fact as a whole and take it as input. Then, we introduce a Conditional Fact Denoiser to learn the reverse denoising diffusion process and generate the target fact embedding from noised data. Extensive experiments demonstrate that FDM significantly outperforms existing state-of-the-art methods in three benchmark datasets. Especially on FB15k-237, FDM achieves a 16.8\% relative improvement in MRR scores compared to the state-of-the-art methods. | [
"Knowledge Graph",
"Knowledge Graph Embedding",
"Knowledge Graph Completetion",
"Diffusion Model"
] | https://openreview.net/pdf?id=Bvz8q5MzIF | QdMdKPeEeo | official_review | 1,701,161,930,411 | Bvz8q5MzIF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission760/Reviewer_u5qZ"
] | review: The paper presents a link prediction method based on fact embeddings with diffusion models. The model uses Fact Embedding through Diffusion Model with Denoising Diffusion Probabilistic Models to learn the distribution of plausible facts. It casts the entity prediction task into a conditional fact generation task. The method is evaluated on 4 standard benchmark datasets against a large variety of recent link prediction models. The presented model outperforms all of them by a rather large margin.
Overall, this is a novel and highly technical paper, with a state-of-the-art result on a variety of benchmark datasets. The extensive experimental results show the advantages of the presented methods and explain different design choices by additional ablation studies.
**Strengths:**
- Well-motivated paper
- State-of-the-art results on a variety of common benchmark datasets
- Extensive related work
- Informative ablation studies for some of the design choices in the model
**Weaknesses:**
- Results only on smaller benchmark datasets
questions: - Why did you not use any of the larger datasets, e.g. Codex, Wikidata5m, and YAGO3-10?
- Do you have ideas for future work?
- What are the shortcomings of your method?
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Bvz8q5MzIF | Fact Embedding through Diffusion Model for Knowledge Graph Completion | [
"xiao Long",
"Liansheng Zhuang",
"Aodi Li",
"Houqiang Li",
"Shafei Wang"
] | Knowledge graph embedding (KGE) is an efficient and scalable method for knowledge graph completion tasks. Existing KGE models typically map entities and relations into a unified continuous vector space and define a score function to capture the connectivity patterns among the elements (entities and relations) of facts. The score on a fact measures its plausibility in a knowledge graph (KG). However, since the connectivity patterns are very complex in a real knowledge graph, it is difficult to define an explicit and efficient score function to capture them, which also limits their performance. This paper argues that plausible facts in a knowledge graph come from a distribution in the low-dimensional fact space. Inspired by this insight, this paper proposes a novel framework called Fact Embedding through Diffusion Model (FDM) to address the knowledge graph completion task. Instead of defining a score function to measure the plausibility of facts in a knowledge graph, this framework directly learns the distribution of plausible facts from the known knowledge graph and casts the entity prediction task into the conditional fact generation task. Specifically, we concatenate the elements embedding in a fact as a whole and take it as input. Then, we introduce a Conditional Fact Denoiser to learn the reverse denoising diffusion process and generate the target fact embedding from noised data. Extensive experiments demonstrate that FDM significantly outperforms existing state-of-the-art methods in three benchmark datasets. Especially on FB15k-237, FDM achieves a 16.8\% relative improvement in MRR scores compared to the state-of-the-art methods. | [
"Knowledge Graph",
"Knowledge Graph Embedding",
"Knowledge Graph Completetion",
"Diffusion Model"
] | https://openreview.net/pdf?id=Bvz8q5MzIF | GK6kZaYiRJ | official_review | 1,700,625,704,152 | Bvz8q5MzIF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission760/Reviewer_gCsk"
] | review: Summary:
This paper introduces a framework called Fact Embedding through Diffusion Model (FDM). FDM learns the distribution of plausible facts directly from the known knowledge graph, transforming the entity prediction task into a conditional fact generation task. It uses a Conditional Fact Denoiser to create fact embeddings from noised data. Experiments show that FDM achieves SOTA on the FB15k-237 dataset.
Strengths:
1. State-of-the-art performance on FB15k-237 dataset.
2. The model architecture is quite novel.
Weaknesses:
1. The motivation could be clarified further. The issue of explicit scoring functions being inadequate for modeling KG patterns needs more comprehensive discussion.
2. It would be beneficial if the authors included data on training and inference times, along with the size of the model parameters for the proposed FDM, comparing these aspects with baseline methods.
3. The ablation study appears overly simplistic, with merely substituting a condition encoder for a transformer-based architecture. More extensive ablation studies are suggested to substantiate the optimal model design.
4. In the case study, using only a single sample does not sufficiently demonstrate FDM's effectiveness. Additional analysis, particularly on the advantages of the diffusion architecture, would be valuable.
5. The current experiments are generic and could apply to any KGE work. Conducting experiments specifically tailored to the diffusion model design would strengthen the paper's contribution.
questions: N/A
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bvz8q5MzIF | Fact Embedding through Diffusion Model for Knowledge Graph Completion | [
"xiao Long",
"Liansheng Zhuang",
"Aodi Li",
"Houqiang Li",
"Shafei Wang"
] | Knowledge graph embedding (KGE) is an efficient and scalable method for knowledge graph completion tasks. Existing KGE models typically map entities and relations into a unified continuous vector space and define a score function to capture the connectivity patterns among the elements (entities and relations) of facts. The score on a fact measures its plausibility in a knowledge graph (KG). However, since the connectivity patterns are very complex in a real knowledge graph, it is difficult to define an explicit and efficient score function to capture them, which also limits their performance. This paper argues that plausible facts in a knowledge graph come from a distribution in the low-dimensional fact space. Inspired by this insight, this paper proposes a novel framework called Fact Embedding through Diffusion Model (FDM) to address the knowledge graph completion task. Instead of defining a score function to measure the plausibility of facts in a knowledge graph, this framework directly learns the distribution of plausible facts from the known knowledge graph and casts the entity prediction task into the conditional fact generation task. Specifically, we concatenate the elements embedding in a fact as a whole and take it as input. Then, we introduce a Conditional Fact Denoiser to learn the reverse denoising diffusion process and generate the target fact embedding from noised data. Extensive experiments demonstrate that FDM significantly outperforms existing state-of-the-art methods in three benchmark datasets. Especially on FB15k-237, FDM achieves a 16.8\% relative improvement in MRR scores compared to the state-of-the-art methods. | [
"Knowledge Graph",
"Knowledge Graph Embedding",
"Knowledge Graph Completetion",
"Diffusion Model"
] | https://openreview.net/pdf?id=Bvz8q5MzIF | EAcEmvZwX1 | decision | 1,705,909,230,268 | Bvz8q5MzIF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The reviewers have noted the following strengths associated with the paper:
* Good motivation / relevant problem (though one reviewer believes the motivation could be improved).
* Promising experimental results.
* Good experimental design (though one reviewer finds the ablation study simplistic).
* Extensive related work.
* Good relevance to conference and track.
* Novel approach.
* The paper is relatively easy to follow.
And the following weaknesses:
* Only small datasets are considered when larger datasets are available.
* Lacking detail about related works involving diffusion models.
* Missing qualitative analysis (considering limitations of benchmarks used).
* Missing training and inferencing times.
* Missing experiments tailored to the diffusion model design.
* Concerns about efficiency.
In their responses, the authors address some of the aforementioned concerns, leading some reviewers to improve their score.
Overall, the scores and comments of the reviewers consistently lean towards a positive evaluation (while not being overly positive) concerning relevance, technical quality and novelty. Given the consistently positive reviews, and the lack of any clear reason against, I recommend an Accept.
Bvz8q5MzIF | Fact Embedding through Diffusion Model for Knowledge Graph Completion | [
"xiao Long",
"Liansheng Zhuang",
"Aodi Li",
"Houqiang Li",
"Shafei Wang"
] | Knowledge graph embedding (KGE) is an efficient and scalable method for knowledge graph completion tasks. Existing KGE models typically map entities and relations into a unified continuous vector space and define a score function to capture the connectivity patterns among the elements (entities and relations) of facts. The score on a fact measures its plausibility in a knowledge graph (KG). However, since the connectivity patterns are very complex in a real knowledge graph, it is difficult to define an explicit and efficient score function to capture them, which also limits their performance. This paper argues that plausible facts in a knowledge graph come from a distribution in the low-dimensional fact space. Inspired by this insight, this paper proposes a novel framework called Fact Embedding through Diffusion Model (FDM) to address the knowledge graph completion task. Instead of defining a score function to measure the plausibility of facts in a knowledge graph, this framework directly learns the distribution of plausible facts from the known knowledge graph and casts the entity prediction task into the conditional fact generation task. Specifically, we concatenate the elements embedding in a fact as a whole and take it as input. Then, we introduce a Conditional Fact Denoiser to learn the reverse denoising diffusion process and generate the target fact embedding from noised data. Extensive experiments demonstrate that FDM significantly outperforms existing state-of-the-art methods in three benchmark datasets. Especially on FB15k-237, FDM achieves a 16.8\% relative improvement in MRR scores compared to the state-of-the-art methods. | [
"Knowledge Graph",
"Knowledge Graph Embedding",
"Knowledge Graph Completetion",
"Diffusion Model"
] | https://openreview.net/pdf?id=Bvz8q5MzIF | 9bpr5yWLHI | official_review | 1,700,774,700,624 | Bvz8q5MzIF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission760/Reviewer_iToA"
] | review: The authors propose a graph embedding method for knowledge graph completion called Fact Embedding through Diffusion Model (FDM). The method takes as inspiration the diffusion models approach used e.g., for image generation models, and applies its principles to the graph embeddings training task. The model consists of two processes: the forward process consequently applying gaussian noise to the initial input and the reverse denoising diffusion process to generate the target fact embedding from noised data. The authors use an MLP-based denoiser as opposed to a transformer model. The authors test the method on 4 benchmark datasets (FB15k-237, WN18RR, Kinship, and UMLS) and show performance improvements in comparison with the state of the art.
The topic and the method are clearly relevant for the conference and the track. Adapting the diffusion models’ principles for graph representation learning is an interesting idea. The method looks promising, particularly, the comparative evaluation results.
There are, however, some aspects which make it difficult to evaluate properly the added value of the approach. First, applying diffusion models’ principles to graph data is not a completely novel idea [1] and the related work section should mention existing work in this area and highlight the difference of the proposed approach to existing algorithms.
There are also well-known issues with commonly used benchmarks, in particular, inherent biases of FB15k-237 and WN18RR [2, 3], such as symmetric relations, over-representation of “popular” entities becoming default answers, etc. Given the popularity of the known benchmarks it is hard to avoid using them for comparison tests, but at least it would be interesting to see a more in-depth discussion, e.g., on what kind of triples the proposed method tends to outperform the state of the art and what kind of triples are handled less well (e.g., the observation that FDM performs worse on WN18RR, which was found to be less affected by the 3 types of biases selected in [2]).
One question regarding the evaluation: why does the comparative evaluation include different sets of methods in Tables 2 and 3 (e.g., TransE only in Table 2, but ComplEx only in Table 3)?
Typos:
- P.3 $X^\tau = [X^h; X^r; X^t] \in \mathbb{R}^{2 \times r + e}$ $\rightarrow$ shouldn't it be $2 \times e + r$?
- Fig. 1 (a) FDM Architecther -> FDM Architecture
1. Zhang, M. et al. A Survey on Graph Diffusion Models: Generative AI in Science for Molecule, Protein and Material. https://arxiv.org/abs/2304.01565
2. Rossi, A. et al. Knowledge Graph Embeddings or Bias Graph Embeddings? A Study of Bias in Link Prediction Models. 2022
3. Akrami, F. et al. Realistic Re-evaluation of Knowledge Graph Completion Methods: An Experimental Study. 2020
questions: (see the review section)
- How is the proposed method different from other algorithms applying diffusion models' principles to knowledge graph data?
- Given the inherent biases of common benchmarks, is it possible to evaluate how affected the proposed method is by them?
- Minor: why does the comparative evaluation include different sets of methods in Tables 2 and 3?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Bvz8q5MzIF | Fact Embedding through Diffusion Model for Knowledge Graph Completion | [
"xiao Long",
"Liansheng Zhuang",
"Aodi Li",
"Houqiang Li",
"Shafei Wang"
] | Knowledge graph embedding (KGE) is an efficient and scalable method for knowledge graph completion tasks. Existing KGE models typically map entities and relations into a unified continuous vector space and define a score function to capture the connectivity patterns among the elements (entities and relations) of facts. The score on a fact measures its plausibility in a knowledge graph (KG). However, since the connectivity patterns are very complex in a real knowledge graph, it is difficult to define an explicit and efficient score function to capture them, which also limits their performance. This paper argues that plausible facts in a knowledge graph come from a distribution in the low-dimensional fact space. Inspired by this insight, this paper proposes a novel framework called Fact Embedding through Diffusion Model (FDM) to address the knowledge graph completion task. Instead of defining a score function to measure the plausibility of facts in a knowledge graph, this framework directly learns the distribution of plausible facts from the known knowledge graph and casts the entity prediction task into the conditional fact generation task. Specifically, we concatenate the elements embedding in a fact as a whole and take it as input. Then, we introduce a Conditional Fact Denoiser to learn the reverse denoising diffusion process and generate the target fact embedding from noised data. Extensive experiments demonstrate that FDM significantly outperforms existing state-of-the-art methods in three benchmark datasets. Especially on FB15k-237, FDM achieves a 16.8\% relative improvement in MRR scores compared to the state-of-the-art methods. | [
"Knowledge Graph",
"Knowledge Graph Embedding",
"Knowledge Graph Completetion",
"Diffusion Model"
] | https://openreview.net/pdf?id=Bvz8q5MzIF | 8G0aEzKpiY | official_review | 1,700,566,042,808 | Bvz8q5MzIF | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission760/Reviewer_qnKh"
] | review: The paper introduces the recent popular Denoising Diffusion Probabilistic Models (DDPM) into the KGE field, transforming the entity prediction task into a conditional fact generation task. Departing from traditional methods of defining a score function to capture the intricate connectivity patterns, the proposed FDM method learns the distribution of plausible facts directly from the knowledge graphs. This approach employs a Conditional Fact Denoiser to learn the reverse denoising diffusion process, thereby generating the target fact embedding from noised data. Experimental results on four datasets demonstrate the effectiveness of FDM.
Strengths:
1. This paper offers a fresh perspective by learning the distribution of plausible facts rather than relying on a score function. It is a creative use of diffusion models in the context of KGE, suggesting a novel direction for future research in this field.
2. A broad range of baselines is compared in the experiments, including both embedding-based and non-embedding-based recent models. The notable improvement on the FB15k-237 dataset strongly supports the proposed framework's superiority over existing methods.
3. The manuscript is well-organized and easy to follow.
Weaknesses:
1. The paper does not fully discuss the computational complexity and scalability of FDM, particularly when applied to large-scale knowledge graphs. The datasets used in the experiments are relatively small; it would be better to evaluate on larger KGs, such as YAGO3-10 and ogbl-wikikg2.
2. The hidden size of the CFDenoiser is too large (such as 500, 1600, 2000). How about the dimension of the embedding vectors? The efficiency issue of FDM is concerning.
3. Although the authors present a case study on FB15k-237, the reason for the significant performance gain on this dataset is still unclear. In fact, except for FB15k-237, the superiority of FDM on the other 3 datasets is not obvious enough. The advantages of the diffusion perspective should be analyzed in depth. In addition, the effect of explicit conditional constraints is not fully discussed in the experiments.
questions: please see the weaknesses in the above review
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bukc7HhE3Y | Query in Your Tongue: Reinforce Large Language Models with Retrievers for Cross-lingual Search Generative Experience | [
"Ping Guo",
"Yue Hu",
"Yanan Cao",
"Yubing Ren",
"Yunpeng Li",
"Heyan Huang"
] | In the contemporary digital landscape, search engines play an invaluable role in information access, yet they often face challenges in Cross-Lingual Information Retrieval (CLIR). Though attempts are made to improve CLIR, current methods still leave users grappling with issues such as misplaced named entities and lost cultural context when querying in non-native languages. While some advances have been made using Neural Machine Translation models and cross-lingual representation, these are not without limitations. Enter the paradigm shift brought about by Large Language Models (LLMs), which have transformed search engines from simple retrievers to generators of contextually relevant information. This paper introduces the Multilingual Information Model for Intelligent Retrieval MIMIR. Built on the power of LLMs, MIMIR directly responds in the language of the user's query, reducing the need for post-search translations. Our model's architecture encompasses a dual-module system: a retriever for searching multilingual documents and a responder for crafting answers in the user's desired language. Through a unique unified training framework, with the retriever serving as a reward model supervising the responder, and in turn, the responder producing synthetic data to refine the retriever's proficiency, MIMIR's retriever and responder iteratively enhance each other. Performance evaluations via CLEF and MKQA benchmarks reveal MIMIR's superiority over existing models, effectively addressing traditional CLIR challenges. | [
"Large Language Models",
"Search Generative Experience",
"Cross-lingual Information Retrieval"
] | https://openreview.net/pdf?id=Bukc7HhE3Y | zvOYgNpybI | official_review | 1,700,874,598,411 | Bukc7HhE3Y | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2374/Reviewer_33jp"
review: This paper introduces a new iterative approach to CLIR, based on the use of an LLM to train a better retrieval-side model, and feed its output back to improve the LLM (Fig 1). Specifically, the LLM "responder" would first generate multilingual positive/negative queries for selected documents. The generated data is fed into the retriever to improve its effectiveness by using contrastive learning. The same model is later used to provide reward signals for the responder-side generation. This algorithm is detailed in Sec 3.3, with the core design following the policy-gradient approach PPO but applied to a cross-lingual setting. This research work is evaluated on two CLIR tasks, one based on CLEF and the other on the MKQA dataset. The experimental results (Tables 1 & 2) show that the proposed method can outperform several baselines, including LLMs such as BLOOMZ-7B1 and GPT 3.5 Turbo.
I think on the methodology side the paper has some novelty. Performing contrastive learning over positive/negative query pairs is a very interesting twist (Eq 1). Have this or similar ideas been explored before (say in monolingual IR), and most importantly, has this been compared to any doc-pair counterparts already? In your setting a straight-up comparison with a doc-pair-trained model is perhaps not possible, but it might be worth some discussion.
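To make the comparison point concrete, this is what I understand by contrastive learning over query pairs: the document embedding is the anchor, and the generated positive/negative queries play the roles usually played by positive/negative documents. A minimal sketch with assumed names and temperature — not necessarily the paper's Eq 1:

```python
# Sketch of a query-pair contrastive objective (my own illustration, not the paper's code).
import torch
import torch.nn.functional as F

def query_pair_contrastive_loss(doc_emb, pos_q_emb, neg_q_emb, temperature=0.05):
    """All inputs: (batch, dim) embeddings produced by the retriever's encoders."""
    doc = F.normalize(doc_emb, dim=-1)
    pos = F.normalize(pos_q_emb, dim=-1)
    neg = F.normalize(neg_q_emb, dim=-1)
    pos_sim = (doc * pos).sum(-1, keepdim=True)           # (batch, 1) positive query
    neg_sim = (doc * neg).sum(-1, keepdim=True)           # (batch, 1) generated hard negative
    logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
    labels = torch.zeros(doc.size(0), dtype=torch.long)   # index 0 = positive query
    return F.cross_entropy(logits, labels)
```

The doc-pair counterpart would simply swap the roles of queries and documents in the above, which is why a direct comparison (or at least a discussion) seems feasible.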
X-PPO has this notion of a dynamic clipping range $\epsilon_l$ (Eq 5), which is a main departure from PPO. Is this a novel invention of this work? Have you conducted any experiments to understand its impact (compared to the original PPO)?
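For reference, the standard PPO surrogate clips the probability ratio with a fixed $\epsilon$; my reading of X-PPO is that this range becomes language-dependent. A sketch of the textbook objective with a per-language $\epsilon_l$ (not the paper's exact Eq 5):

$$
r_\theta = \frac{\pi_\theta(y \mid x)}{\pi_{\theta_{\mathrm{old}}}(y \mid x)}, \qquad
\mathcal{L}^{\mathrm{CLIP}}(\theta) = \mathbb{E}\left[\min\left(r_\theta A,\; \mathrm{clip}\left(r_\theta,\, 1-\epsilon_l,\, 1+\epsilon_l\right) A\right)\right]
$$

An ablation with a fixed $\epsilon$ versus the dynamic $\epsilon_l$ would answer the question above directly.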
The literature review is a bit too broad: it covers some ground on the development of CLIR but doesn't really touch on the main influences that inspire this work. A focused treatment of relevant topics such as data augmentation and RL-guided generation would provide better context for the research audience. Sec 2.2 has mentioned some recent research on leveraging LLMs to improve IR. I would love to see this narrative expanded to also cover how generative models are used for data augmentation, and maybe some prior art on reward learning.
Is CLEF still the best benchmark nowadays for assessing CLIR performance? I'm asking because the dataset is already 20 years old; I would like to know whether more recent options such as CLIRMatrix were also considered.
It seems evident from Tables 1 & 2 that MIMIC (MIMIR?) outperforms all listed baseline methods, but the ablation study is a bit lacking - it's hard to see how much improvement is made by performing query-pair based contrastive learning compared to no data augmentation at all. I would appreciate the experiments more if there were some comparison between zero-shot LLMs (no RL) and the proposed method.
Some minor comments
- The PPO training loop (Step 3, similar to [41]) should also be covered in Algorithm 1.
- In Sec 4.4, please provide some references for the MT components in SMT+BM25 and NMT+BM25.
- Table 1 should include a retriever only model (BLOOMZ-7B1?) as the baseline
Pros
- Proposing a novel iterative approach to CLIR, employing an LLM to enhance the retrieval-side model through contrastive learning with positive/negative query pairs.
- Empirical evidence of improvement over prominent LLMs such as BLOOMZ-7B1 and GPT 3.5 Turbo.
Cons
- Lack of a focused discussion on the main influence inspiring the work.
- Lack of suitable controls in the experiments to tease apart the contributions of the individual components.
questions: - Is the warm-up step important to CLIR systems?
- Is the BLOOMZ-7B1 run zero-shot or fine-tuned?
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bukc7HhE3Y | Query in Your Tongue: Reinforce Large Language Models with Retrievers for Cross-lingual Search Generative Experience | [
"Ping Guo",
"Yue Hu",
"Yanan Cao",
"Yubing Ren",
"Yunpeng Li",
"Heyan Huang"
] | In the contemporary digital landscape, search engines play an invaluable role in information access, yet they often face challenges in Cross-Lingual Information Retrieval (CLIR). Though attempts are made to improve CLIR, current methods still leave users grappling with issues such as misplaced named entities and lost cultural context when querying in non-native languages. While some advances have been made using Neural Machine Translation models and cross-lingual representation, these are not without limitations. Enter the paradigm shift brought about by Large Language Models (LLMs), which have transformed search engines from simple retrievers to generators of contextually relevant information. This paper introduces the Multilingual Information Model for Intelligent Retrieval MIMIR. Built on the power of LLMs, MIMIR directly responds in the language of the user's query, reducing the need for post-search translations. Our model's architecture encompasses a dual-module system: a retriever for searching multilingual documents and a responder for crafting answers in the user's desired language. Through a unique unified training framework, with the retriever serving as a reward model supervising the responder, and in turn, the responder producing synthetic data to refine the retriever's proficiency, MIMIR's retriever and responder iteratively enhance each other. Performance evaluations via CLEF and MKQA benchmarks reveal MIMIR's superiority over existing models, effectively addressing traditional CLIR challenges. | [
"Large Language Models",
"Search Generative Experience",
"Cross-lingual Information Retrieval"
] | https://openreview.net/pdf?id=Bukc7HhE3Y | zEGOYiME1G | official_review | 1,701,422,621,255 | Bukc7HhE3Y | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2374/Reviewer_mj6W"
] | review: This paper focuses on improving cross-lingual retrieval and cross-lingual QA. The authors propose to generate synthetic cross-lingual queries using an LLM, creating training data for the retriever. The retriever than in-turn provides reward signals for the generator. The authors evaluated both the retrieval performance and the QA performance, and show that the proposed framework can improve both tasks.
Pros:
- This LLM-based framework simplifies the traditional translation + heuristics pipeline. I believe there is much headroom on cross-lingual retrieval and QA tasks that can be addressed with this framework.
- Generating negative queries is novel, since most existing work on dense retrieval relies on negative passages.
- Good experimental results.
- The paper is clearly written and well-organized.
Cons:
- It is unclear to me why RL can improve the generator on QA tasks, since the reward is on query generation, not answer generation. I hope the authors can better explain the intuition for RL.
- Most retrieval baselines used in the paper are BM25-based, but there are several multi-lingual/cross-lingual dense retrievers from existing work, e.g., mDPR ("Mr. TyDi: A Multilingual Benchmark for Dense Retrieval", Zhang et al., 2021) and mContriever ("Unsupervised Dense Information Retrieval with Contrastive Learning", Izacard et al., 2022). They should be discussed in related work and added to the baselines. Comparing to these more recent baselines would justify whether the synthetic query generation is necessary for training the retriever.
questions: Can you address my 2 concerns in the "Cons" section?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bukc7HhE3Y | Query in Your Tongue: Reinforce Large Language Models with Retrievers for Cross-lingual Search Generative Experience | [
"Ping Guo",
"Yue Hu",
"Yanan Cao",
"Yubing Ren",
"Yunpeng Li",
"Heyan Huang"
] | In the contemporary digital landscape, search engines play an invaluable role in information access, yet they often face challenges in Cross-Lingual Information Retrieval (CLIR). Though attempts are made to improve CLIR, current methods still leave users grappling with issues such as misplaced named entities and lost cultural context when querying in non-native languages. While some advances have been made using Neural Machine Translation models and cross-lingual representation, these are not without limitations. Enter the paradigm shift brought about by Large Language Models (LLMs), which have transformed search engines from simple retrievers to generators of contextually relevant information. This paper introduces the Multilingual Information Model for Intelligent Retrieval MIMIR. Built on the power of LLMs, MIMIR directly responds in the language of the user's query, reducing the need for post-search translations. Our model's architecture encompasses a dual-module system: a retriever for searching multilingual documents and a responder for crafting answers in the user's desired language. Through a unique unified training framework, with the retriever serving as a reward model supervising the responder, and in turn, the responder producing synthetic data to refine the retriever's proficiency, MIMIR's retriever and responder iteratively enhance each other. Performance evaluations via CLEF and MKQA benchmarks reveal MIMIR's superiority over existing models, effectively addressing traditional CLIR challenges. | [
"Large Language Models",
"Search Generative Experience",
"Cross-lingual Information Retrieval"
] | https://openreview.net/pdf?id=Bukc7HhE3Y | VkHPXllasX | official_review | 1,701,452,712,132 | Bukc7HhE3Y | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2374/Reviewer_nHcN"
] | review: The paper addresses the critical role of search engines in facilitating information access within the contemporary digital landscape, emphasizing the challenges faced in Cross-Lingual Information Retrieval (CLIR). While previous efforts have aimed to enhance CLIR through methods involving Neural Machine Translation models and cross-lingual representation, the paper rightly points out persistent issues such as misplaced named entities and lost cultural context in non-native language queries.
This work introduces a novel approach by proposing the utilization of a retriever to unsupervisedly provide reward signals, guiding the optimization of large models to generate more relevant queries. This aspect is commendable and demonstrates innovation in the field.
I am intrigued by the role of positive and negative prompts in training the retriever. Could the authors consider a more streamlined approach where only positive prompts are used to generate queries, treating them as positive queries, while utilizing other queries in the batch as negative queries for training the retriever? This method seems more elegant and aligns with the principle of Occam's Razor. This streamlined approach could potentially simplify the training process while maintaining effectiveness. I recommend the authors explore and discuss the feasibility and potential advantages of this approach in their work.
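Concretely, the streamlined variant I have in mind would drop the negative prompt entirely and rely on in-batch negatives. A minimal sketch under my own assumptions, not the paper's objective:

```python
# Sketch of the suggested alternative: only positive generated queries, with the other
# documents in the batch serving as negatives (standard InfoNCE-style loss).
import torch
import torch.nn.functional as F

def in_batch_negative_loss(query_emb, doc_emb, temperature=0.05):
    """query_emb[i] is the generated positive query for doc_emb[i]; every other
    document in the batch acts as a negative for that query."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.t() / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0))        # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```

A comparison between this simpler objective and the explicit negative-prompt design would clarify how much the generated negatives actually contribute.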
questions: 1. Could you elaborate on the decision-making process behind choosing specific positive and negative prompts for training the retriever? How do you ensure the representativeness of these prompts in capturing relevant and non-relevant information?
2. The proposed approach involves using both positive and negative prompts for training the retriever. Have you explored or considered an approach where only positive prompts are used, treating them as positive queries, while using other queries in the batch as negative queries?
3. How sensitive is MIMIR's performance to changes in the size of the retriever and responder modules? Could you conduct experiments to analyze the impact of varying model sizes on efficiency and effectiveness?
ethics_review_flag: No
ethics_review_description: no
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
Bukc7HhE3Y | Query in Your Tongue: Reinforce Large Language Models with Retrievers for Cross-lingual Search Generative Experience | [
"Ping Guo",
"Yue Hu",
"Yanan Cao",
"Yubing Ren",
"Yunpeng Li",
"Heyan Huang"
] | In the contemporary digital landscape, search engines play an invaluable role in information access, yet they often face challenges in Cross-Lingual Information Retrieval (CLIR). Though attempts are made to improve CLIR, current methods still leave users grappling with issues such as misplaced named entities and lost cultural context when querying in non-native languages. While some advances have been made using Neural Machine Translation models and cross-lingual representation, these are not without limitations. Enter the paradigm shift brought about by Large Language Models (LLMs), which have transformed search engines from simple retrievers to generators of contextually relevant information. This paper introduces the Multilingual Information Model for Intelligent Retrieval MIMIR. Built on the power of LLMs, MIMIR directly responds in the language of the user's query, reducing the need for post-search translations. Our model's architecture encompasses a dual-module system: a retriever for searching multilingual documents and a responder for crafting answers in the user's desired language. Through a unique unified training framework, with the retriever serving as a reward model supervising the responder, and in turn, the responder producing synthetic data to refine the retriever's proficiency, MIMIR's retriever and responder iteratively enhance each other. Performance evaluations via CLEF and MKQA benchmarks reveal MIMIR's superiority over existing models, effectively addressing traditional CLIR challenges. | [
"Large Language Models",
"Search Generative Experience",
"Cross-lingual Information Retrieval"
] | https://openreview.net/pdf?id=Bukc7HhE3Y | J4GvENEHBQ | decision | 1,705,909,224,433 | Bukc7HhE3Y | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This paper presents an approach for cross-lingual generative retrieval based on reinforcement learning, retriever training with synthetic queries, and query generation guided by reward signals from the retriever.
The paper was reviewed by five reviewers. The paper clearly has some merits. All reviewers agree on the technical quality and novelty of the paper, but they also raise some comments that still require a proper explanation. Please clarify these points in the camera-ready copy.
Bukc7HhE3Y | Query in Your Tongue: Reinforce Large Language Models with Retrievers for Cross-lingual Search Generative Experience | [
"Ping Guo",
"Yue Hu",
"Yanan Cao",
"Yubing Ren",
"Yunpeng Li",
"Heyan Huang"
] | In the contemporary digital landscape, search engines play an invaluable role in information access, yet they often face challenges in Cross-Lingual Information Retrieval (CLIR). Though attempts are made to improve CLIR, current methods still leave users grappling with issues such as misplaced named entities and lost cultural context when querying in non-native languages. While some advances have been made using Neural Machine Translation models and cross-lingual representation, these are not without limitations. Enter the paradigm shift brought about by Large Language Models (LLMs), which have transformed search engines from simple retrievers to generators of contextually relevant information. This paper introduces the Multilingual Information Model for Intelligent Retrieval MIMIR. Built on the power of LLMs, MIMIR directly responds in the language of the user's query, reducing the need for post-search translations. Our model's architecture encompasses a dual-module system: a retriever for searching multilingual documents and a responder for crafting answers in the user's desired language. Through a unique unified training framework, with the retriever serving as a reward model supervising the responder, and in turn, the responder producing synthetic data to refine the retriever's proficiency, MIMIR's retriever and responder iteratively enhance each other. Performance evaluations via CLEF and MKQA benchmarks reveal MIMIR's superiority over existing models, effectively addressing traditional CLIR challenges. | [
"Large Language Models",
"Search Generative Experience",
"Cross-lingual Information Retrieval"
] | https://openreview.net/pdf?id=Bukc7HhE3Y | Hr2oZS3lsP | official_review | 1,700,840,897,251 | Bukc7HhE3Y | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2374/Reviewer_8YvP"
review: The authors propose a framework utilizing RL for the problem of Cross-Lingual Information Retrieval (CLIR). Instead of post-retrieval-translation-based solutions, the authors train an end-to-end retriever-and-responder combo using RL together with synthetic queries and contrastive learning, which is the biggest contribution of the paper.
Strong Points
1. The paper is fairly easy to follow.
2. Exploration of an end-to-end architecture borrowing GAN's main idea utilizing RL is novel and sounds interesting.
3. The paper provides extensive set of experiments to validate their proposed solution.
Weak Points
1. The number of datasets on which experiments are carried out is not sufficient. Using only a single dataset for each of the retrieval and responder comparisons is too limited to validate the work.
2. The choice of baselines and the decision to separate experiments for retrieval and responder are confusing.
3. I believe the biggest problem is that the comparison with baselines does not look fair. The performance gain is most probably due to fine-tuning on the synthetic dataset, which is only used for a single baseline in the comparison.
4. Related to the previous point, the performance gain, especially for retrieval, becomes incremental given that there are only 151 queries in the dataset.
questions: None
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Bukc7HhE3Y | Query in Your Tongue: Reinforce Large Language Models with Retrievers for Cross-lingual Search Generative Experience | [
"Ping Guo",
"Yue Hu",
"Yanan Cao",
"Yubing Ren",
"Yunpeng Li",
"Heyan Huang"
] | In the contemporary digital landscape, search engines play an invaluable role in information access, yet they often face challenges in Cross-Lingual Information Retrieval (CLIR). Though attempts are made to improve CLIR, current methods still leave users grappling with issues such as misplaced named entities and lost cultural context when querying in non-native languages. While some advances have been made using Neural Machine Translation models and cross-lingual representation, these are not without limitations. Enter the paradigm shift brought about by Large Language Models (LLMs), which have transformed search engines from simple retrievers to generators of contextually relevant information. This paper introduces the Multilingual Information Model for Intelligent Retrieval MIMIR. Built on the power of LLMs, MIMIR directly responds in the language of the user's query, reducing the need for post-search translations. Our model's architecture encompasses a dual-module system: a retriever for searching multilingual documents and a responder for crafting answers in the user's desired language. Through a unique unified training framework, with the retriever serving as a reward model supervising the responder, and in turn, the responder producing synthetic data to refine the retriever's proficiency, MIMIR's retriever and responder iteratively enhance each other. Performance evaluations via CLEF and MKQA benchmarks reveal MIMIR's superiority over existing models, effectively addressing traditional CLIR challenges. | [
"Large Language Models",
"Search Generative Experience",
"Cross-lingual Information Retrieval"
] | https://openreview.net/pdf?id=Bukc7HhE3Y | 6bQxNnByva | official_review | 1,701,431,514,946 | Bukc7HhE3Y | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2374/Reviewer_5K8Q"
review: The paper presents an approach for cross-lingual generative retrieval based on reinforcement learning, retriever training with synthetic queries, and query generation guided by reward signals from the retriever. The motivation is clearly stated and the evaluation is done accordingly. Generative IR is a trendy topic, while cross-lingual aspects seem to be less investigated despite the multilingual capacity of LLMs. The idea of combining reinforcement learning, synthetic queries, and cross-lingual proximal policy optimization seems to be novel. The proposal is particularly effective for low-resource languages but performs less well on medium- and high-resource languages.
Limitations:
- The results are reported on a single dataset per aspect (retrieval/QA). The obtained conclusions might be biased to the design of a particular dataset.
- The limitations of the SOTA mentioned in L96-99 (NER issues and cultural topic context loss) and the advantages of Mimic are shown on only 2 examples.
- The quality of the generated positive and negative queries is not evaluated directly
Minors:
- the paper has too many self-promoting claims
- L376: the sentence seems to be unfinished
- L670: "in Table 1. An overarching observation is the dominance of Mimic across all 7 tested languages" does not reflect the Table 1 content correctly
- L596: Which NMT method?
- L732: it is not clear why Translate-Test is called the "strongest" baseline. It does not seem to be the strongest baseline for low-resource languages
- Fig.2: Why did the authors decide to present languages on X and beta values as trends and not vice-versa?
- L120-121: the claim is not evident
questions: - Section 5.5 does not provide enough evidence about the advantages of Mimic over the previous translation models. Only 2 examples are analyzed. Why are these 2 examples representative?
- The results of Table 2 are very interesting. Might this high performance of posthoc translation of top-ranked passages from English Wikipedia be explained by the design of the MKQA dataset as the raw answers in this dataset were searched on the web and linked to WikiData? Might it occur that the MKQA dataset contains a lot of answers very close to Wikipedia passages?
- The results are reported based on a single dataset per aspect (retrieval/QA). How do the authors ensure that the obtained results are not biased to a particular dataset design?
- Was the MKQA dataset used for pre-training the LLMs from the baselines or the proposed method?
- It would be interesting to investigate the quality of the positive and negative query generation based on the given prompts.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bl7rhZxtrQ | POLISH: Adaptive Online Cross-Modal Hashing for Class Incremental Data | [
"Yu-Wei Zhan",
"Xin Luo",
"Zhen-Duo Chen",
"Yongxin Wang",
"Yinwei Wei",
"Xin-Shun Xu"
] | In recent years, hashing-based online cross-modal retrieval has garnered growing attention. This trend is motivated by the fact that web data is increasingly delivered in a streaming manner as opposed to batch processing. Simultaneously, the sheer scale of web data sometimes makes it impractical to fully load for the training of hashing models. Despite the evolution of online cross-modal hashing techniques, several challenges remain: 1) Most existing methods learn hash codes by considering the relevance among newly arriving data or between new data and the existing data, often disregarding valuable global semantic information. 2) A common but limiting assumption in many methods is that the label space remains constant, implying that all class labels should be provided within the first data chunk. This assumption does not hold in real-world scenarios, and the presence of new labels in incoming data chunks can severely degrade or even break these methods.
To tackle these issues, we introduce a novel supervised online cross-modal hashing method named adaPtive Online cLass-Incremental haSHing (POLISH). Leveraging insights from language models, POLISH generates representations for new class label from multiple angles. Meanwhile, POLISH treats label embeddings, which remain unchanged once learned, as stable global information to produce high-quality hash codes. POLISH also puts forward an efficient optimization algorithm for hash code learning. Extensive experiments on two real-world benchmark datasets show the effectiveness of the proposed POLISH for class incremental data in the cross-modal hashing domain. | [
"Cross-modal Retrieval",
"Learning to Hash",
"Online Hashing",
"Efficient Discrete Optimization"
] | https://openreview.net/pdf?id=Bl7rhZxtrQ | qrUXPJo367 | official_review | 1,700,567,636,805 | Bl7rhZxtrQ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2504/Reviewer_7bam"
review: The study proposes a method to create hash-code representations of high-dimensional streaming data (images and text) to be used in ANN search methods. It is targeted at online (streaming) cross-modal retrieval.
The main contributions are:
- able to work with incremental label spaces.
- designed to be used as a plug-in to any cross-modal hashing method.
- makes use of LLMs to create embeddings for new labels, and then transforms them into hash code.
- efficient in terms of time-complexity.
Some weak areas:
- some choices are not very clear. In the related work they introduce the Hadamard matrices, but do not explore alternatives, if any exist (see the short illustrative sketch right after this list).
- other clarification points in questions.
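For context on the Hadamard point above (and on question 2 below): the usual Sylvester construction already yields target codes whose columns are mutually orthogonal and, except for the all-ones column, perfectly bit-balanced, which is presumably why it is adopted. A short generic illustration, not the authors' code:

```python
# Generic illustration of the properties of Sylvester-constructed Hadamard matrices.
import numpy as np

def sylvester_hadamard(k):
    """Return the 2^k x 2^k Hadamard matrix with entries in {+1, -1}."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(5)                               # 32 x 32, e.g. for 32-bit codes
assert np.allclose(H.T @ H, 32 * np.eye(32))            # columns are mutually orthogonal
assert all(H[:, j].sum() == 0 for j in range(1, 32))    # bit balance for columns 1..31
```

Whether other target-code constructions with similar properties would work equally well is essentially what question 2 below asks.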
questions: 1. Regarding the ablation studies, POLISH-2 is better than POLISH in one case, why is that? Why does not considering the correlation improve the performance? Could it have a relation with the number of labels or their properties?
2. How would this method work without the Hadamard matrices? As far as I understand, it is due to these matrices that the R(t) representation has the bit balance and maximal information properties. Would the method work correctly without such properties?
3. It is mentioned that the method grows in time linearly with regard to the size of the newly arrived data. (q5) Will this drastically affect the training time of methods that grow linearly themselves (e.g., LEMON)?
4. The smallest representations you have experimented with have 32 bits. Was there a reason you did not look into smaller ones(8 or 16)?
5. Small typo in Equation (1), it minimizes for R(1)
6. The POLISH-3 always performs worse, typo I think
ethics_review_flag: No
ethics_review_description: no issue
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Bl7rhZxtrQ | POLISH: Adaptive Online Cross-Modal Hashing for Class Incremental Data | [
"Yu-Wei Zhan",
"Xin Luo",
"Zhen-Duo Chen",
"Yongxin Wang",
"Yinwei Wei",
"Xin-Shun Xu"
] | In recent years, hashing-based online cross-modal retrieval has garnered growing attention. This trend is motivated by the fact that web data is increasingly delivered in a streaming manner as opposed to batch processing. Simultaneously, the sheer scale of web data sometimes makes it impractical to fully load for the training of hashing models. Despite the evolution of online cross-modal hashing techniques, several challenges remain: 1) Most existing methods learn hash codes by considering the relevance among newly arriving data or between new data and the existing data, often disregarding valuable global semantic information. 2) A common but limiting assumption in many methods is that the label space remains constant, implying that all class labels should be provided within the first data chunk. This assumption does not hold in real-world scenarios, and the presence of new labels in incoming data chunks can severely degrade or even break these methods.
To tackle these issues, we introduce a novel supervised online cross-modal hashing method named adaPtive Online cLass-Incremental haSHing (POLISH). Leveraging insights from language models, POLISH generates representations for new class label from multiple angles. Meanwhile, POLISH treats label embeddings, which remain unchanged once learned, as stable global information to produce high-quality hash codes. POLISH also puts forward an efficient optimization algorithm for hash code learning. Extensive experiments on two real-world benchmark datasets show the effectiveness of the proposed POLISH for class incremental data in the cross-modal hashing domain. | [
"Cross-modal Retrieval",
"Learning to Hash",
"Online Hashing",
"Efficient Discrete Optimization"
] | https://openreview.net/pdf?id=Bl7rhZxtrQ | phvVoOUwRY | official_review | 1,700,759,384,890 | Bl7rhZxtrQ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2504/Reviewer_feBM"
] | review: **Quality and Clarity:**
- The paper is clearly written, succinctly explaining the problem of adapting hashing-based retrieval systems to streaming web data. It then articulates the challenges in existing methods and proposes a solution with clarity.
**Originality:**
- The originality lies in the creation of the POLISH method, which adapts online cross-modal hashing to class incremental data. Incorporating category correlation and semantic information for hash code generation is an innovative aspect of this work.
**Significance:**
- The significance of this work is high, particularly in the context of real-time web data processing and retrieval.
- Offers potential improvement over batch-based cross-modal hashing methods, which is a substantial advancement in the field.
**Pros:**
- Addresses a crucial and timely issue in data retrieval with an innovative solution.
- POLISH is designed to update hash functions effectively, which is a challenge in current systems.
- Introduction of an efficient optimization algorithm for discrete learning of hash codes and label embeddings.
- Thorough experiments and comparative analysis demonstrate the effectiveness of the POLISH method in streaming data environments.
**Cons:**
- While the approach is novel, the efficiency compared to state-of-the-art methods is not discussed.
questions: What's the efficiency of POLISH compared with other methods? Is it scalable?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Bl7rhZxtrQ | POLISH: Adaptive Online Cross-Modal Hashing for Class Incremental Data | [
"Yu-Wei Zhan",
"Xin Luo",
"Zhen-Duo Chen",
"Yongxin Wang",
"Yinwei Wei",
"Xin-Shun Xu"
] | In recent years, hashing-based online cross-modal retrieval has garnered growing attention. This trend is motivated by the fact that web data is increasingly delivered in a streaming manner as opposed to batch processing. Simultaneously, the sheer scale of web data sometimes makes it impractical to fully load for the training of hashing models. Despite the evolution of online cross-modal hashing techniques, several challenges remain: 1) Most existing methods learn hash codes by considering the relevance among newly arriving data or between new data and the existing data, often disregarding valuable global semantic information. 2) A common but limiting assumption in many methods is that the label space remains constant, implying that all class labels should be provided within the first data chunk. This assumption does not hold in real-world scenarios, and the presence of new labels in incoming data chunks can severely degrade or even break these methods.
To tackle these issues, we introduce a novel supervised online cross-modal hashing method named adaPtive Online cLass-Incremental haSHing (POLISH). Leveraging insights from language models, POLISH generates representations for new class label from multiple angles. Meanwhile, POLISH treats label embeddings, which remain unchanged once learned, as stable global information to produce high-quality hash codes. POLISH also puts forward an efficient optimization algorithm for hash code learning. Extensive experiments on two real-world benchmark datasets show the effectiveness of the proposed POLISH for class incremental data in the cross-modal hashing domain. | [
"Cross-modal Retrieval",
"Learning to Hash",
"Online Hashing",
"Efficient Discrete Optimization"
] | https://openreview.net/pdf?id=Bl7rhZxtrQ | pHZ2yLMYMw | decision | 1,705,909,258,188 | Bl7rhZxtrQ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: Overall, the reviewers recognize substantial novelty and potential for impact. The authors have engaged constructively in the reviewing process. In some cases the reviewers have acknowledged responses, and in any case, I believe most questions raised by the reviewers have been adequately addressed.
One reviewer in particular gives relatively low scores for novelty and technical quality. Their concerns are mostly presentational, and not anything that would block acceptance. The authors have responded to these concerns. While the reviewer did not comment further, in my opinion the authors have adequately addressed the concerns.
Below, I recommend oral presentation because I believe the paper would be interesting to a relatively broad range of attendees. However I am not well calibrated on the breakpoint between oral and poster presentation. |
Bl7rhZxtrQ | POLISH: Adaptive Online Cross-Modal Hashing for Class Incremental Data | [
"Yu-Wei Zhan",
"Xin Luo",
"Zhen-Duo Chen",
"Yongxin Wang",
"Yinwei Wei",
"Xin-Shun Xu"
] | In recent years, hashing-based online cross-modal retrieval has garnered growing attention. This trend is motivated by the fact that web data is increasingly delivered in a streaming manner as opposed to batch processing. Simultaneously, the sheer scale of web data sometimes makes it impractical to fully load for the training of hashing models. Despite the evolution of online cross-modal hashing techniques, several challenges remain: 1) Most existing methods learn hash codes by considering the relevance among newly arriving data or between new data and the existing data, often disregarding valuable global semantic information. 2) A common but limiting assumption in many methods is that the label space remains constant, implying that all class labels should be provided within the first data chunk. This assumption does not hold in real-world scenarios, and the presence of new labels in incoming data chunks can severely degrade or even break these methods.
To tackle these issues, we introduce a novel supervised online cross-modal hashing method named adaPtive Online cLass-Incremental haSHing (POLISH). Leveraging insights from language models, POLISH generates representations for new class labels from multiple angles. Meanwhile, POLISH treats label embeddings, which remain unchanged once learned, as stable global information to produce high-quality hash codes. POLISH also puts forward an efficient optimization algorithm for hash code learning. Extensive experiments on two real-world benchmark datasets show the effectiveness of the proposed POLISH for class incremental data in the cross-modal hashing domain. | [
"Cross-modal Retrieval",
"Learning to Hash",
"Online Hashing",
"Efficient Discrete Optimization"
] | https://openreview.net/pdf?id=Bl7rhZxtrQ | JpkPfHA7et | official_review | 1,700,542,726,624 | Bl7rhZxtrQ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2504/Reviewer_Kp2U"
] | review: The paper introduces a supervised online cross-modal hashing method named adaPtive Online cLass Incremental haSHing (POLISH) method. It does not rely on the assumption that the label space remains constant.
questions: Advantage
1. The topic of the paper is interesting. Cross-modal retrieval and online settings are useful and important in real-world applications.
2. The paper applies language models to guide model learning. It is a reasonable method.
3. In experiments, the proposed method outperforms baselines.
Disadvantage
1. The motivation should be further explained. The problem is not formulated clearly. Although the paper has a problem description in Section 3.1.2, the paper does not explain the input and output of the model. And based on the paper, it seems to learn the hash codes for each data directly instead of leveraging a deep neural network. However, deep multi-modality models such as CLIP can produce high-quality representation. If we add a hash layer at the top of CLIP to achieve hash codes, it seems the online scenario is not a big issue.
2. The usage of the pre-trained language model is confusing. Language models need context to generate meaningful embeddings, yet the paper just uses label words. Additionally, computing similarity directly from raw language-model embeddings is not effective [1], which explains why using a more powerful language model does not result in better performance. Furthermore, it is not reasonable to use BLOOM, which is a generative model and not well suited to producing good representations (a minimal embedding-similarity sketch is given after the reference below).
3. The writing should be improved. First, there are multiple typos, such as line 354 \tilde{E}. Second, some technical details should be explained. For example, the meaning of line 496 “loss function used by the original method” should be explained. Third, there are many colloquial expressions.
[1] Reimers N, Gurevych I. Sentence-bert: Sentence embeddings using siamese bert-networks[J]. arXiv preprint arXiv:1908.10084, 2019.
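To make the embedding point in item 2 concrete, the following is a minimal sketch of computing label-word similarity with a sentence-embedding model in the spirit of [1]; the model name and label words are illustrative assumptions and are not taken from the paper under review.

```python
# Minimal sketch: label-word similarity via sentence embeddings, in the spirit of [1].
# The model name and the label words are illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

labels = ["dog", "cat", "airplane", "vehicle"]  # hypothetical class labels

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(labels, normalize_embeddings=True)

# Cosine similarity between every pair of label embeddings.
similarity = util.cos_sim(embeddings, embeddings)
print(similarity)  # e.g., "airplane" vs. "vehicle" should score higher than "airplane" vs. "cat"
```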
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
Bl7rhZxtrQ | POLISH: Adaptive Online Cross-Modal Hashing for Class Incremental Data | [
"Yu-Wei Zhan",
"Xin Luo",
"Zhen-Duo Chen",
"Yongxin Wang",
"Yinwei Wei",
"Xin-Shun Xu"
] | In recent years, hashing-based online cross-modal retrieval has garnered growing attention. This trend is motivated by the fact that web data is increasingly delivered in a streaming manner as opposed to batch processing. Simultaneously, the sheer scale of web data sometimes makes it impractical to fully load for the training of hashing models. Despite the evolution of online cross-modal hashing techniques, several challenges remain: 1) Most existing methods learn hash codes by considering the relevance among newly arriving data or between new data and the existing data, often disregarding valuable global semantic information. 2) A common but limiting assumption in many methods is that the label space remains constant, implying that all class labels should be provided within the first data chunk. This assumption does not hold in real-world scenarios, and the presence of new labels in incoming data chunks can severely degrade or even break these methods.
To tackle these issues, we introduce a novel supervised online cross-modal hashing method named adaPtive Online cLass-Incremental haSHing (POLISH). Leveraging insights from language models, POLISH generates representations for new class labels from multiple angles. Meanwhile, POLISH treats label embeddings, which remain unchanged once learned, as stable global information to produce high-quality hash codes. POLISH also puts forward an efficient optimization algorithm for hash code learning. Extensive experiments on two real-world benchmark datasets show the effectiveness of the proposed POLISH for class incremental data in the cross-modal hashing domain. | [
"Cross-modal Retrieval",
"Learning to Hash",
"Online Hashing",
"Efficient Discrete Optimization"
] | https://openreview.net/pdf?id=Bl7rhZxtrQ | DBHwfdXenh | official_review | 1,700,402,190,321 | Bl7rhZxtrQ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2504/Reviewer_i6cM"
] | review: This paper introduces an approach named POLISH for adaptive online cross-modal hashing. The authors highlight the importance of preserving label correlation relationships and propose a loss function to learn the embedding of class labels. Experimental results show that this method has better performance than the baselines.
**Paper Strengths**
The topic itself is very interesting. This paper is generally well-written and organized, making it easy to follow. The authors provide clear explanations of the concepts and techniques used in the proposed method.
**Paper Weaknesses**
1. This paper could benefit from further clarification of the notation used. It is inappropriate to use ~ and -> superscripts to distinguish matrix notations. Moreover, there are too many inline formulas used in the paper.
2. More detailed explanations of the experimental setup and experiments would support the proposed method. See details in Question.
questions: 1. Would the proposed method still work if a training round arrives with a large number of new classes, possibly far more than the existing ones? This is very likely to happen, especially at the beginning of training, yet it is not considered in the paper and there are no corresponding experiments. In addition, does the data volume of a new training round have an impact on the results? This also needs to be considered.
2. Why are the baselines of the NUS-WIDE dataset different from MIRFlickr in Table 1?
3. There were only two datasets used in the experiment. Does it still work on other datasets? For example, the MSCOCO dataset was used in OCMFH.
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
Bl7rhZxtrQ | POLISH: Adaptive Online Cross-Modal Hashing for Class Incremental Data | [
"Yu-Wei Zhan",
"Xin Luo",
"Zhen-Duo Chen",
"Yongxin Wang",
"Yinwei Wei",
"Xin-Shun Xu"
] | In recent years, hashing-based online cross-modal retrieval has garnered growing attention. This trend is motivated by the fact that web data is increasingly delivered in a streaming manner as opposed to batch processing. Simultaneously, the sheer scale of web data sometimes makes it impractical to fully load for the training of hashing models. Despite the evolution of online cross-modal hashing techniques, several challenges remain: 1) Most existing methods learn hash codes by considering the relevance among newly arriving data or between new data and the existing data, often disregarding valuable global semantic information. 2) A common but limiting assumption in many methods is that the label space remains constant, implying that all class labels should be provided within the first data chunk. This assumption does not hold in real-world scenarios, and the presence of new labels in incoming data chunks can severely degrade or even break these methods.
To tackle these issues, we introduce a novel supervised online cross-modal hashing method named adaPtive Online cLass-Incremental haSHing (POLISH). Leveraging insights from language models, POLISH generates representations for new class labels from multiple angles. Meanwhile, POLISH treats label embeddings, which remain unchanged once learned, as stable global information to produce high-quality hash codes. POLISH also puts forward an efficient optimization algorithm for hash code learning. Extensive experiments on two real-world benchmark datasets show the effectiveness of the proposed POLISH for class incremental data in the cross-modal hashing domain. | [
"Cross-modal Retrieval",
"Learning to Hash",
"Online Hashing",
"Efficient Discrete Optimization"
] | https://openreview.net/pdf?id=Bl7rhZxtrQ | 5nD9WxEN9R | official_review | 1,700,774,252,818 | Bl7rhZxtrQ | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2504/Reviewer_jB6V"
] | review: POLISH presents a novel and technically sophisticated solution to the problem of incremental label spaces in online cross-modal hashing. The incorporation of language models and the strategic use of embeddings for global guidance are commendable technical strengths. To enhance the paper’s technical depth, further exploration of language models and a more extensive comparative analysis could be beneficial.
questions: Strength:
Online cross-modal hashing involves learning representations (hash codes) for data instances, such as images and text, in an incremental and online manner. This means the system adapts to new data as it arrives, updating its knowledge without retraining on the entire dataset.
POLISH presents a distinctive viewpoint by explicitly taking into account situations where the label space undergoes incremental changes. This deviates from conventional approaches, which typically assume a static set of labels. The system dynamically adjusts to the incorporation of new classes in each learning iteration, showcasing a forward-thinking approach for real-world applications.
An essential technical advancement in POLISH involves incorporating language models such as Word2Vec, BERT, and CLIP to tap into latent knowledge embedded in labels. Through utilization of these models, POLISH produces valuable label embeddings that encompass both semantic details and correlations among labels. This methodology surpasses conventional hashing techniques, enhancing the creation of more comprehensive representations.
Weakness:
While the paper compares POLISH against several state-of-the-art methods, a more in-depth comparative analysis, especially against methods addressing incremental label spaces, could further highlight the unique advantages of POLISH. This would strengthen the argument for its novelty in addressing this specific challenge.
While POLISH demonstrates robust performance across different language models (Word2Vec, BERT, CLIP, etc.), the paper does not extensively explore the impact of different models on the system’s performance. Further investigation into the choice of language model and its implications could enhance the understanding of POLISH’s versatility.
The sensitivity of the system to the β parameter is acknowledged, and the paper reports consistent favorable results when β is set to 10. However, a deeper exploration of the sensitivity across a broader range of values and its implications on performance could provide more insights into the system’s behavior under different settings.
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
BZeQmMsYLg | Entity Disambiguation with Extreme Multi-label Ranking | [
"Jyun-Yu Jiang",
"Wei-Cheng Chang",
"Jiong Zhang",
"Cho-Jui Hsieh",
"Hsiang-Fu Yu"
] | Entity disambiguation is one of the most important natural language tasks to identify entities behind ambiguous surface mentions within a knowledge base. Although many recent studies apply deep learning to achieve decent results, they need exhausting pre-training and mediocre recall in the retrieval stage. In this paper, we propose a novel framework, eXtreme Multi-label Ranking for Entity Disambiguation (XMRED), to address this challenge. An efficient zero-shot entity retriever with auxiliary data is first pre-trained to recall relevant entities based on linear models. Specifically, the retrieval process can be considered as an extreme multi-label ranking (XMR) task. Entities are first clustered at different scales to form a label tree, thereby learning multi-scale entity retrievers over the label tree with high recall. Moreover, XMRED applies deep cross-encoder as a re-ranker to achieve high precision based on high-quality candidates. Extensive experimental results based on the AIDA-CoNLL benchmark and five zero-shot testing datasets demonstrate that XMRED obtains 98% and over 95% recall scores for in-domain and zero-shot datasets with top-10 retrieved entities. With a deep cross-encoder as the re-ranker, XMRED further outperforms the previous state-of-the-art by 1.74% in In-KB micro-F1 scores on average with a significant improvement on the training efficiency from days to 3.48 hours. In addition, XMRED also beats the state-of-the-art for page-level document retrieval by 2.38% in accuracy and 1.90% in recall@5. | [
"entity disambiguation",
"extreme multi-label ranking",
"entity retriever"
] | https://openreview.net/pdf?id=BZeQmMsYLg | pTCGL1nApO | official_review | 1,700,883,249,810 | BZeQmMsYLg | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1141/Reviewer_4z7X"
] | review: Summary:
The authors first design a simple entity retriever to improve the efficiency of pre-training on different knowledge bases. The entity retrieval problem is then cast as an eXtreme Multi-label Ranking problem, and the prior probability relied on by previous studies is discarded. Finally, BERT is used as the re-ranker for candidate entities and mentions.
Strengths:
1. The paper conducts extensive experiments to prove the effectiveness of the method
2. The paper makes a good attempt to abandon the strategy of prior probability, which requires a lot of prior knowledge.
Weaknesses:
1. The framework figure is a little cluttered, and it's hard for me to see what the key point of the framework is.
2. It seems that the newest baseline is from 2022. It would be better to include more recent methods from 2023 as baselines.
3. This paper is relatively difficult to understand and requires further effort to refine and polish.
questions: 1. It would be better to provide a clear definition of the eXtreme Multi-label Ranking task, which would make the paper more readable for researchers.
2. I think the claim proposed in the introduction that "Deep learning models could be too complicated to consider the whole entity space" is not very convincing. Therefore, a simpler model is not necessarily needed as an entity retriever, and for complex entity retrievers the entity representations can be computed in advance without real-time computation (see the sketch below).
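As a small illustration of the precomputation argument in question 2, here is a sketch in which entity representations are encoded once offline so that online retrieval reduces to a matrix product and a top-k selection; the placeholder encoder and entity names are hypothetical and unrelated to XMRED.

```python
# Minimal sketch: precomputed entity representations for fast retrieval.
# `encode` is a hypothetical stand-in for any (possibly expensive) encoder.
import zlib
import numpy as np

def encode(texts, dim=128):
    # Placeholder encoder: deterministic pseudo-embeddings, one per text (illustration only).
    vecs = np.stack([
        np.random.default_rng(zlib.crc32(t.encode("utf-8"))).normal(size=dim)
        for t in texts
    ])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

entities = ["Paris (city)", "Paris Hilton", "Paris (mythology)"]  # hypothetical entity titles
entity_matrix = encode(entities)           # computed once, offline: (num_entities, dim)

def retrieve(mention_with_context, k=2):
    query = encode([mention_with_context])[0]   # the only encoding done at query time
    scores = entity_matrix @ query              # cosine scores (all rows are unit-norm)
    top_k = np.argsort(-scores)[:k]
    return [(entities[i], float(scores[i])) for i in top_k]

print(retrieve("Paris is the capital of France"))
```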
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BZeQmMsYLg | Entity Disambiguation with Extreme Multi-label Ranking | [
"Jyun-Yu Jiang",
"Wei-Cheng Chang",
"Jiong Zhang",
"Cho-Jui Hsieh",
"Hsiang-Fu Yu"
] | Entity disambiguation is one of the most important natural language tasks to identify entities behind ambiguous surface mentions within a knowledge base. Although many recent studies apply deep learning to achieve decent results, they need exhausting pre-training and mediocre recall in the retrieval stage. In this paper, we propose a novel framework, eXtreme Multi-label Ranking for Entity Disambiguation (XMRED), to address this challenge. An efficient zero-shot entity retriever with auxiliary data is first pre-trained to recall relevant entities based on linear models. Specifically, the retrieval process can be considered as an extreme multi-label ranking (XMR) task. Entities are first clustered at different scales to form a label tree, thereby learning multi-scale entity retrievers over the label tree with high recall. Moreover, XMRED applies deep cross-encoder as a re-ranker to achieve high precision based on high-quality candidates. Extensive experimental results based on the AIDA-CoNLL benchmark and five zero-shot testing datasets demonstrate that XMRED obtains 98% and over 95% recall scores for in-domain and zero-shot datasets with top-10 retrieved entities. With a deep cross-encoder as the re-ranker, XMRED further outperforms the previous state-of-the-art by 1.74% in In-KB micro-F1 scores on average with a significant improvement on the training efficiency from days to 3.48 hours. In addition, XMRED also beats the state-of-the-art for page-level document retrieval by 2.38% in accuracy and 1.90% in recall@5. | [
"entity disambiguation",
"extreme multi-label ranking",
"entity retriever"
] | https://openreview.net/pdf?id=BZeQmMsYLg | nrTfF2zTu0 | official_review | 1,701,244,345,101 | BZeQmMsYLg | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1141/Reviewer_LZmk"
] | review: This paper proposes an extreme multi-label ranking framework for entity disambiguation. It also applies a deep cross-encoder as a re-ranker given high-quality candidates. Extensive experiments demonstrate both the effectiveness and efficiency of the proposed method, in both in-domain and zero-shot settings.
I have a few questions regarding the methodology design.
1. What if the entities do not follow hierarchical tree structure but instead a graph-like structure with more complex inter-entity relationships?
2. Is there a comparative analysis of deep features versus traditional TF-IDF features within the context of this framework?
3. Is it valid to assume the tree is balanced? How do other clustering algorithms affect the performance and the efficiency of the algorithm? (A small tree-building sketch is given below.)
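Regarding questions 1 and 3, here is a minimal sketch of building a label tree by recursive k-means over entity feature vectors and retrieving candidates with beam search; it uses plain (unbalanced) k-means, random features, and untrained stand-in scorers, so it only illustrates the mechanics rather than the construction actually used in XMRED.

```python
# Minimal sketch: a label tree built by recursive k-means plus beam-search retrieval.
# Features are random and the per-node linear scorers are untrained stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
entity_features = rng.normal(size=(1000, 64))   # stand-in for TF-IDF/PIFA vectors

def build_tree(indices, branch=4, leaf_size=16):
    """Recursively cluster entity indices into a label tree."""
    if len(indices) <= leaf_size:
        return {"leaf": True, "entities": indices}
    k = min(branch, len(indices))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(entity_features[indices])
    children = [build_tree(indices[labels == c], branch, leaf_size) for c in range(k)]
    scorers = rng.normal(size=(k, entity_features.shape[1]))  # would be learned in practice
    return {"leaf": False, "children": children, "scorers": scorers}

tree = build_tree(np.arange(len(entity_features)))

def beam_search(query_vec, tree, beam=3):
    """Traverse the tree, keeping only the `beam` best-scoring nodes per level."""
    frontier, candidates = [(0.0, tree)], []
    while frontier:
        next_frontier = []
        for score, node in frontier:
            if node["leaf"]:
                candidates.extend(node["entities"].tolist())
                continue
            child_scores = node["scorers"] @ query_vec
            next_frontier.extend(
                (score + float(s), child) for s, child in zip(child_scores, node["children"])
            )
        next_frontier.sort(key=lambda x: -x[0])
        frontier = next_frontier[:beam]
    return candidates

print(len(beam_search(rng.normal(size=64), tree)))  # size of the retrieved candidate pool
```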
questions: Please see my questions in the "Review" part above. Additionally, I am curious about the paper's claim regarding the limitations of sequence-to-sequence approaches in handling sparse entity spaces. Given their ability to utilize semantic relationships among labels, these methods seem well-suited for transferring knowledge from well-represented labels to those with fewer instances.
ethics_review_flag: No
ethics_review_description: No need for ethical review.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
BZeQmMsYLg | Entity Disambiguation with Extreme Multi-label Ranking | [
"Jyun-Yu Jiang",
"Wei-Cheng Chang",
"Jiong Zhang",
"Cho-Jui Hsieh",
"Hsiang-Fu Yu"
] | Entity disambiguation is one of the most important natural language tasks to identify entities behind ambiguous surface mentions within a knowledge base. Although many recent studies apply deep learning to achieve decent results, they need exhausting pre-training and mediocre recall in the retrieval stage. In this paper, we propose a novel framework, eXtreme Multi-label Ranking for Entity Disambiguation (XMRED), to address this challenge. An efficient zero-shot entity retriever with auxiliary data is first pre-trained to recall relevant entities based on linear models. Specifically, the retrieval process can be considered as an extreme multi-label ranking (XMR) task. Entities are first clustered at different scales to form a label tree, thereby learning multi-scale entity retrievers over the label tree with high recall. Moreover, XMRED applies deep cross-encoder as a re-ranker to achieve high precision based on high-quality candidates. Extensive experimental results based on the AIDA-CoNLL benchmark and five zero-shot testing datasets demonstrate that XMRED obtains 98% and over 95% recall scores for in-domain and zero-shot datasets with top-10 retrieved entities. With a deep cross-encoder as the re-ranker, XMRED further outperforms the previous state-of-the-art by 1.74% in In-KB micro-F1 scores on average with a significant improvement on the training efficiency from days to 3.48 hours. In addition, XMRED also beats the state-of-the-art for page-level document retrieval by 2.38% in accuracy and 1.90% in recall@5. | [
"entity disambiguation",
"extreme multi-label ranking",
"entity retriever"
] | https://openreview.net/pdf?id=BZeQmMsYLg | Q4rWNUNWI5 | official_review | 1,701,181,437,239 | BZeQmMsYLg | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1141/Reviewer_A9VL"
] | review: The paper is well-written
It offers novel entity disambiguation algorithms
Testing with other datasets and methods is thorough
questions: Will XMRED be available on GitHub?
ethics_review_flag: No
ethics_review_description: X
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
BZeQmMsYLg | Entity Disambiguation with Extreme Multi-label Ranking | [
"Jyun-Yu Jiang",
"Wei-Cheng Chang",
"Jiong Zhang",
"Cho-Jui Hsieh",
"Hsiang-Fu Yu"
] | Entity disambiguation is one of the most important natural language tasks to identify entities behind ambiguous surface mentions within a knowledge base. Although many recent studies apply deep learning to achieve decent results, they need exhausting pre-training and mediocre recall in the retrieval stage. In this paper, we propose a novel framework, eXtreme Multi-label Ranking for Entity Disambiguation (XMRED), to address this challenge. An efficient zero-shot entity retriever with auxiliary data is first pre-trained to recall relevant entities based on linear models. Specifically, the retrieval process can be considered as an extreme multi-label ranking (XMR) task. Entities are first clustered at different scales to form a label tree, thereby learning multi-scale entity retrievers over the label tree with high recall. Moreover, XMRED applies deep cross-encoder as a re-ranker to achieve high precision based on high-quality candidates. Extensive experimental results based on the AIDA-CoNLL benchmark and five zero-shot testing datasets demonstrate that XMRED obtains 98% and over 95% recall scores for in-domain and zero-shot datasets with top-10 retrieved entities. With a deep cross-encoder as the re-ranker, XMRED further outperforms the previous state-of-the-art by 1.74% in In-KB micro-F1 scores on average with a significant improvement on the training efficiency from days to 3.48 hours. In addition, XMRED also beats the state-of-the-art for page-level document retrieval by 2.38% in accuracy and 1.90% in recall@5. | [
"entity disambiguation",
"extreme multi-label ranking",
"entity retriever"
] | https://openreview.net/pdf?id=BZeQmMsYLg | OcSZz3w5T4 | decision | 1,705,909,253,477 | BZeQmMsYLg | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: Proposes and evaluates "an extreme multi-label ranking model" for entity disambiguation. The topic is a good fit for the conference. The paper is fun to read. The approach has a high level of novelty. The experiments are thorough and convincing.
The authors have engaged with the reviewers, and in my opinion, they have adequately addressed all concerns raised by the reviewers. One reviewer, in particular, took time to go back and forth with the authors on several points.
I note that one review consists of only two lines and gives relatively high scores. Even if I discount this review, I think the remaining reviews provide support for acceptance.
I think this paper would be a fine candidate for oral presentation since it would attract some interest and attention. I also suspect the authors would give a good presentation. |
BZeQmMsYLg | Entity Disambiguation with Extreme Multi-label Ranking | [
"Jyun-Yu Jiang",
"Wei-Cheng Chang",
"Jiong Zhang",
"Cho-Jui Hsieh",
"Hsiang-Fu Yu"
] | Entity disambiguation is one of the most important natural language tasks to identify entities behind ambiguous surface mentions within a knowledge base. Although many recent studies apply deep learning to achieve decent results, they need exhausting pre-training and mediocre recall in the retrieval stage. In this paper, we propose a novel framework, eXtreme Multi-label Ranking for Entity Disambiguation (XMRED), to address this challenge. An efficient zero-shot entity retriever with auxiliary data is first pre-trained to recall relevant entities based on linear models. Specifically, the retrieval process can be considered as an extreme multi-label ranking (XMR) task. Entities are first clustered at different scales to form a label tree, thereby learning multi-scale entity retrievers over the label tree with high recall. Moreover, XMRED applies deep cross-encoder as a re-ranker to achieve high precision based on high-quality candidates. Extensive experimental results based on the AIDA-CoNLL benchmark and five zero-shot testing datasets demonstrate that XMRED obtains 98% and over 95% recall scores for in-domain and zero-shot datasets with top-10 retrieved entities. With a deep cross-encoder as the re-ranker, XMRED further outperforms the previous state-of-the-art by 1.74% in In-KB micro-F1 scores on average with a significant improvement on the training efficiency from days to 3.48 hours. In addition, XMRED also beats the state-of-the-art for page-level document retrieval by 2.38% in accuracy and 1.90% in recall@5. | [
"entity disambiguation",
"extreme multi-label ranking",
"entity retriever"
] | https://openreview.net/pdf?id=BZeQmMsYLg | N3UZgLrwul | official_review | 1,699,235,063,406 | BZeQmMsYLg | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1141/Reviewer_ocLt"
] | review: The paper proposes XMRED, a new method for entity disambiguation (linking). XMRED is based on formulating the first retrieval stage of entity linking as an extreme multi-label ranking problem. The goal is to reduce the huge computational time needed by contextualized neural language models to achieve good recall in the first retrieval phase of entity linking. The authors proposed to build a hierarchical label tree for all the entities in the knowledge base, using TF-IDF features computed either with Positive Instance Feature Aggregation (PIFA) for mentions of entities in the training dataset, or from the title and abstract in the metadata of the Cirrus Search Wikipedia dump for entities missing from the training dataset. The experimental results show the effectiveness and efficiency of XMRED for the entity disambiguation task.
Pros
S1. The authors proposed to formulate the first retrieval stage of entity linking as a multi-label ranking problem. This formulation of the entity linking task is novel and interesting, as it introduces fast operations compared to the previously proposed contextualized language models for entity linking that require costly training in the first phase.
S2. Multiple one-versus-all linear SVMs are trained for the internal nodes of the label tree to predict the probabilistic rank score for each node given a TF-IDF representation of an input mention with its context. The computational cost of training is reduced by only considering negative samples from the same parent node.
S3. The authors showed the effectiveness of their proposed method by reporting evaluation metrics for both entity disambiguation and page-level document retrieval, and the efficiency by comparing the training time with multiple baselines for both the pre-training and fine-tuning phases.
Cons
W1. The authors chose to build the label tree using TF-IDF features, but the tree can actually be built from other features such as the text embeddings of mentions and contexts or titles and abstracts. Therefore, the choice of TF-IDF should be experimentally supported.
W2. For the SVM models, as the label tree is traversed deeper, the amount of data used to train each SVM model is further reduced, and this can lead to the last SVM models overfitting to some specific exact-matching tokens.
W3. The pre-trained contextualized language models for entity linking are more suitable than the label tree with SVMs to handle the case of adding a completely new entity to the knowledge base because of the semantic matching signals that are captured by these contextualized language models.
I acknowledge that I have read the rebuttal(s).
questions: In this paper, the authors proposed a new method, called XMRED, for the entity linking task. The main goal of the paper is to overcome the exhausting pre-trained in the first ranking stage of entity linking. This is achieved by formulating the first ranking stage as an extreme multi-label ranking where the label tree is built from TF-IDF based features, and then efficiently traversed with beam search. There are some points that should be taken into consideration:
1. The mentions and contexts in the training data, and the abstracts and texts in the knowledge base, are mapped to TF-IDF vectors. TF-IDF only captures the exact-matching signal, so how does that compare to a text-embedding representation that can capture richer semantics? In addition, the TF-IDF is only computed for unigrams and bigrams; is that enough for good generalization of the model? What about mentions and contexts that are expressed differently from the pre-computed TF-IDF of the entity? What is the dimension of the TF-IDF vector? Also, applying TF-IDF usually comes with additional text preprocessing to facilitate exact matching; it would be interesting to briefly describe these preprocessing steps in the experimental setup part (a minimal vectorization sketch is given after this list).
2. As the tree is traversed deeper, the number of instances used to train each SVM model is reduced. For example, if I understand correctly, at the last level of internal nodes each node has 100 instances (when setting B to 100 as explained in the experimental setup), so there are 100 SVM models for this node, and each model is trained with 1 positive instance and 99 negative instances. This may clearly lead to overfitting in the trained SVMs at this level. Please correct me if I’m missing anything, and please comment on this overfitting aspect.
3. In the case of adding completely new entities to the knowledge base, I can see that the pre-trained language models are still suitable, given that they are pre-trained to capture rich semantic features. But, in the case of XMRED, how difficult is it to adapt the model to newly added entities? Some insights on this aspect would help the reader better understand the applicability of the method.
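As a minimal illustration of the vectorization asked about in question 1, the sketch below builds unigram/bigram TF-IDF vectors for toy entity texts and for a mention with context; the n-gram range, preprocessing options, and strings are assumptions, not the settings used in XMRED.

```python
# Minimal sketch: unigram/bigram TF-IDF for entity texts vs. a mention with context.
# All settings and strings are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

entity_texts = [
    "Paris capital and most populous city of France",          # hypothetical abstracts
    "Paris Hilton American media personality and socialite",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), lowercase=True, stop_words="english")
entity_vecs = vectorizer.fit_transform(entity_texts)             # sparse (num_entities, vocab)

mention = "she visited Paris , the capital of France"
mention_vec = vectorizer.transform([mention])

print(cosine_similarity(mention_vec, entity_vecs))               # rewards exact n-gram overlap only
print(len(vectorizer.vocabulary_))                               # dimension of the TF-IDF vectors
```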
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BO8Shlh4Cn | Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective | [
"Zexin Wang",
"Changhua Pei",
"Minghua Ma",
"Xin Wang",
"Zhihan Li",
"Dan Pei",
"Saravan Rajmohan",
"Dongmei Zhang",
"Qingwei Lin",
"Haiming Zhang",
"Jianhui li",
"Gaogang Xie"
] | Time series Anomaly Detection (AD) plays a crucial role for web systems. Various web systems rely on time series data to monitor and identify anomalies in real time, as well as to initiate diagnosis and remediation procedures. Variational Autoencoders (VAEs) have gained popularity in recent decades due to their superior de-noising capabilities, which are useful for anomaly detection. However, our study reveals that VAE-based methods face challenges in capturing long-periodic heterogeneous patterns and detailed short-periodic trends simultaneously. To address these challenges, we propose Frequency-enhanced Conditional Variational Autoencoder (FCVAE), a novel unsupervised AD method for univariate time series. To ensure an accurate AD, FCVAE exploits an innovative approach to concurrently integrate both the global and local frequency features into the condition of Conditional Variational Autoencoder (CVAE) to significantly increase the accuracy of reconstructing the normal data. Together with a carefully designed "target attention" mechanism, our approach allows the model to pick the most useful information from the frequency domain for better short-periodic trend construction. Our FCVAE has been evaluated on public datasets and a large-scale cloud system, and the results demonstrate that it outperforms state-of-the-art methods. This confirms the practical applicability of our approach in addressing the limitations of current VAE-based anomaly detection models. | [
"Univariate time series",
"Anomaly detection",
"Conditional variational autoencoder",
"Frequency information"
] | https://openreview.net/pdf?id=BO8Shlh4Cn | jL6VUT7Z5T | official_review | 1,700,387,034,504 | BO8Shlh4Cn | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2479/Reviewer_N23X"
] | review: This paper presents a Conditional Variational Autoencoder-based anomaly detection method that incorporates frequency information.
**Pros:**
1. The paper is well-organized, featuring high-quality figures.
2. Challenge 1 is an important and interesting problem, which refers to detecting anomalies with a unified model on data with diverse normal patterns.
**Cons:**
1. The challenges proposed in this paper lack a comprehensive assessment of current anomaly detection methods. For instance, Challenge 2 asserts that Variational Autoencoders (VAEs) struggle with capturing detailed trends because they focus on minimizing overall reconstruction error instead of point-to-point dependencies. However, existing works, such as OmniAnomaly[1], fuse VAE with recurrent networks, considering temporal dependencies and effectively handling detailed trends.
[1] Su Y, Zhao Y, Niu C, et al. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. 2019: 2828-2837.
2. The motivation behind introducing frequency information is unclear. The authors claim that VAE-based models fail to capture both heterogeneous periodic patterns and detailed trend patterns due to missing information in the frequency domain (lines 217-222). However, the connection between capturing heterogeneous periodic patterns and utilizing frequency domain information is not self-evident and requires further clarification.
3. It is difficult to say that the proposed method is significantly novel, as it merely combines FEDformer, MLP, attention mechanisms, and CVAE.
4. The proposed method only demonstrates a strong advantage over baselines on one dataset (WSD), while showing only marginal improvement on other datasets, especially KPI and NAB.
5. Some parts of the paper are hard to parse. For instance, non-standard terms are used without clarification (e.g., "sub-frequencies" in line 200). Additionally, there are grammar issues (e.g., 'A large number of sub-frequencies make the signal in condition noisy and difficult to use.' What does 'make the signal in condition noisy' mean?).
questions: Please refer to cons in review.
ethics_review_flag: No
ethics_review_description: No ethics issue.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BO8Shlh4Cn | Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective | [
"Zexin Wang",
"Changhua Pei",
"Minghua Ma",
"Xin Wang",
"Zhihan Li",
"Dan Pei",
"Saravan Rajmohan",
"Dongmei Zhang",
"Qingwei Lin",
"Haiming Zhang",
"Jianhui li",
"Gaogang Xie"
] | Time series Anomaly Detection (AD) plays a crucial role for web systems. Various web systems rely on time series data to monitor and identify anomalies in real time, as well as to initiate diagnosis and remediation procedures. Variational Autoencoders (VAEs) have gained popularity in recent decades due to their superior de-noising capabilities, which are useful for anomaly detection. However, our study reveals that VAE-based methods face challenges in capturing long-periodic heterogeneous patterns and detailed short-periodic trends simultaneously. To address these challenges, we propose Frequency-enhanced Conditional Variational Autoencoder (FCVAE), a novel unsupervised AD method for univariate time series. To ensure an accurate AD, FCVAE exploits an innovative approach to concurrently integrate both the global and local frequency features into the condition of Conditional Variational Autoencoder (CVAE) to significantly increase the accuracy of reconstructing the normal data. Together with a carefully designed "target attention" mechanism, our approach allows the model to pick the most useful information from the frequency domain for better short-periodic trend construction. Our FCVAE has been evaluated on public datasets and a large-scale cloud system, and the results demonstrate that it outperforms state-of-the-art methods. This confirms the practical applicability of our approach in addressing the limitations of current VAE-based anomaly detection models. | [
"Univariate time series",
"Anomaly detection",
"Conditional variational autoencoder",
"Frequency information"
] | https://openreview.net/pdf?id=BO8Shlh4Cn | i3cSIaFXxM | official_review | 1,700,478,624,705 | BO8Shlh4Cn | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2479/Reviewer_4SxQ"
] | review: The authors present a solution to identify anomalies in univariate Time Series. They propose to enhance the performance of VAEs (Variational Autoencoders) by combining global and local features into an autoencoder.
While the authors added a "Relevance Statement" paragraph to the paper to justify its fit with the "Systems and Infrastructure for Web, Mobile, and WoT" track, in my opinion the paper is clearly out of the scope of this track, and it should be rejected for this reason.
The authors provide three arguments to justify the fit of the paper into this track:
1 - Web and WoT systems use anomaly detection on time series data for monitoring system performance. This might be true, but it does not justify the paper being a fit for the track. Web systems are currently deployed in cloud providers (e.g., AWS, Google Cloud, etc.), but an improvement in the performance of deploying applications in a cloud system would not make a paper on that topic fit the current track. Likewise, web system performance depends on the network infrastructure, hence a paper that proposes an improvement in the network infrastructure or network protocols (e.g., a new version of TCP which makes web systems faster) wouldn't be a fit for this track, even though a web system may benefit from using the new version of the TCP protocol.
2- The paper offers a new perspective on data management and stream processing for web applications, while also sharing experiences and lessons from the deployment of our innovative web-based algorithm.
I haven't seen anywhere in the paper a description of how this solution would be applied to data management or stream processing for web applications. And I don't know where the term "web-based" algorithm comes from. The authors define an ML algorithm; it is not a web-based algorithm. Moreover, the authors have deployed their algorithm in a cloud system, not in a web system.
3. Previous editions of the WWW conference had papers on this topic. This is correct, but again this fact does not justify fitting the paper into the Systems and Infrastructure for Web, Mobile, and WoT track. One should go to previous editions of the conference, identify the tracks offered in those editions, and check which tracks the papers referred to by the authors were submitted to. In this year's edition, there might be tracks where this paper could be a good fit, but not the Systems and Infrastructure for Web, Mobile, and WoT track.
In this respect, in my experience as a reviewer of probably hundreds of papers, I have never seen authors add a "Relevance Statement" to a paper. This is an indication that the authors themselves have serious doubts regarding the fit of the paper in this track, and that is why they added this paragraph.
Indeed, this is the wrong approach. If the paper is a good fit for the track, the authors should be able to make it clear in the introduction of the paper. There, they should have been able to frame their work such that there were no doubts that the paper was a good fit for the track, by defining a problem/context which is obviously in the scope of the track.
Having clarified the scope issue, I would like to acknowledge that my expertise falls in the area of the track, and thus I consider myself unable to properly assess the quality or novelty of the paper, since its topic is far from my area of expertise.
My only consideration with regard to novelty is the fact that this paper focuses on univariate time series, while, as far as I know, the most innovative works on time series and functional data analysis focus on multivariate data. This would in principle affect the novelty of the paper. But as I acknowledged already, I am not an expert in the topic and thus cannot guarantee the correctness of this statement.
For the above reasons, my review will stick to those aspects I am able to assess, such as the structure and clarity of the paper and the evaluation of the solution.
1. As I said before, the introduction of the paper fails to frame the paper within the scope of the track.
2. The paper is in general well structured; the description of the proposed solution is fair and there is a likewise fair evaluation of the system.
3. My main comments relate to the evaluation of the system:
* The authors make the following statement: "Regarding hyperparameters, we conducted a grid search to identify the most effective parameters for different datasets". This, I assume, works well when using labeled datasets. However, when the solution is deployed in a real system, where there is no ground truth for the anomalies, how can the authors guarantee that the solution is operating in its best configuration? For instance, in the cloud system where they have deployed it, how do they know that the ~10% improvement they are getting is the best the solution can achieve?
* The authors only report the F1-score, which is fine. However, this does not allow one to understand where the failures of the solution come from. Reporting other metrics such as accuracy, precision, FPR, or TPR would be beneficial to understand the performance of the solution in detail (a minimal metrics sketch is given after this list).
* The evaluation lacks a proper analysis of the computational cost of the proposed solution vs. the state of the art. The selection of the algorithm to use in a real use case depends not only on the performance the algorithm offers but also on its associated cost.
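To make the metrics point above concrete, here is a minimal sketch of deriving precision, recall, FPR, and F1 from point-wise anomaly labels; the label arrays are toy values, and the point-adjust style of evaluation often used for time series is ignored here.

```python
# Minimal sketch: metrics beyond the F1-score for point-wise anomaly labels.
# The label arrays are toy values for illustration only.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # ground-truth anomaly labels
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])   # model decisions after thresholding

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)                 # equals TPR
f1 = f1_score(y_true, y_pred)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)

print(f"precision={precision:.2f} recall={recall:.2f} FPR={fpr:.2f} F1={f1:.2f}")
```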
questions: 1- Can the authors provide a credible justification regarding the fit of the paper in the track?
2- Could the authors extend the evaluation to other metrics in addition to the F1-score?
3- Could the authors provide an analysis of the computational cost of their solution in comparison with the state of the art?
ethics_review_flag: No
ethics_review_description: None
scope: 1: The work is irrelevant to the Web
novelty: 4
technical_quality: 4
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
BO8Shlh4Cn | Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective | [
"Zexin Wang",
"Changhua Pei",
"Minghua Ma",
"Xin Wang",
"Zhihan Li",
"Dan Pei",
"Saravan Rajmohan",
"Dongmei Zhang",
"Qingwei Lin",
"Haiming Zhang",
"Jianhui li",
"Gaogang Xie"
] | Time series Anomaly Detection (AD) plays a crucial role for web systems. Various web systems rely on time series data to monitor and identify anomalies in real time, as well as to initiate diagnosis and remediation procedures. Variational Autoencoders (VAEs) have gained popularity in recent decades due to their superior de-noising capabilities, which are useful for anomaly detection. However, our study reveals that VAE-based methods face challenges in capturing long-periodic heterogeneous patterns and detailed short-periodic trends simultaneously. To address these challenges, we propose Frequency-enhanced Conditional Variational Autoencoder (FCVAE), a novel unsupervised AD method for univariate time series. To ensure an accurate AD, FCVAE exploits an innovative approach to concurrently integrate both the global and local frequency features into the condition of Conditional Variational Autoencoder (CVAE) to significantly increase the accuracy of reconstructing the normal data. Together with a carefully designed "target attention" mechanism, our approach allows the model to pick the most useful information from the frequency domain for better short-periodic trend construction. Our FCVAE has been evaluated on public datasets and a large-scale cloud system, and the results demonstrate that it outperforms state-of-the-art methods. This confirms the practical applicability of our approach in addressing the limitations of current VAE-based anomaly detection models. | [
"Univariate time series",
"Anomaly detection",
"Conditional variational autoencoder",
"Frequency information"
] | https://openreview.net/pdf?id=BO8Shlh4Cn | hFRMuGM5qn | decision | 1,705,909,242,058 | BO8Shlh4Cn | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: In this paper, the authors propose a novel VAE-based method, FCVAE, for time series anomaly detection, addressing limitations in capturing periodic patterns and detailed trends. Reviewers recognize the value and the comprehensive evaluation of the proposed method, but overall reviewer confidence is very low (lowest in my batch of papers). Therefore, there are major concerns regarding the fit of the paper in the scope of the conference track. Some reviewers also raised concerns regarding the perceived significance of the improvement and the clarity of the methodology.
The authors have been very active during the rebuttal phase, trying to convince the reviewers about the fit within the track scope, while also answering technical questions about the results. Even after many discussions, there is still one reviewer who is absolutely not convinced that this contribution fits the track, and who still recommends rejection. Specifically, the reviewer argues that the paper monitors a system that is not a web system but a cloud system with a web interface for accessing the information related to the cloud system.
BO8Shlh4Cn | Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective | [
"Zexin Wang",
"Changhua Pei",
"Minghua Ma",
"Xin Wang",
"Zhihan Li",
"Dan Pei",
"Saravan Rajmohan",
"Dongmei Zhang",
"Qingwei Lin",
"Haiming Zhang",
"Jianhui li",
"Gaogang Xie"
] | Time series Anomaly Detection (AD) plays a crucial role for web systems. Various web systems rely on time series data to monitor and identify anomalies in real time, as well as to initiate diagnosis and remediation procedures. Variational Autoencoders (VAEs) have gained popularity in recent decades due to their superior de-noising capabilities, which are useful for anomaly detection. However, our study reveals that VAE-based methods face challenges in capturing long-periodic heterogeneous patterns and detailed short-periodic trends simultaneously. To address these challenges, we propose Frequency-enhanced Conditional Variational Autoencoder (FCVAE), a novel unsupervised AD method for univariate time series. To ensure an accurate AD, FCVAE exploits an innovative approach to concurrently integrate both the global and local frequency features into the condition of Conditional Variational Autoencoder (CVAE) to significantly increase the accuracy of reconstructing the normal data. Together with a carefully designed "target attention" mechanism, our approach allows the model to pick the most useful information from the frequency domain for better short-periodic trend construction. Our FCVAE has been evaluated on public datasets and a large-scale cloud system, and the results demonstrate that it outperforms state-of-the-art methods. This confirms the practical applicability of our approach in addressing the limitations of current VAE-based anomaly detection models. | [
"Univariate time series",
"Anomaly detection",
"Conditional variational autoencoder",
"Frequency information"
] | https://openreview.net/pdf?id=BO8Shlh4Cn | g94aV8LEXa | official_review | 1,700,747,153,035 | BO8Shlh4Cn | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2479/Reviewer_gTj8"
] | review: This paper addresses the challenges faced by Variational Autoencoders (VAEs) in capturing both long-periodic heterogeneous patterns and detailed short-periodic trends. The authors propose the Frequency-enhanced Conditional Variational Autoencoder (FCVAE), an unsupervised anomaly detection method for univariate time series. They incorporate a "target attention" mechanism designed to extract the most valuable information from the frequency domain for improved short-periodic trend construction.
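As background for the frequency-domain conditioning summarized above, here is a minimal sketch of extracting magnitude-spectrum features from a sliding window with an FFT; the window length, the number of retained frequencies, and the toy series are assumptions, and FCVAE's actual global/local frequency modules and target attention are considerably more involved.

```python
# Minimal sketch: frequency-domain features of a univariate sliding window, of the
# kind that could serve as a CVAE condition. All sizes and the series are toy values.
import numpy as np

def frequency_features(window, top_k=8):
    """Return the magnitudes of the top-k non-DC frequency components."""
    spectrum = np.abs(np.fft.rfft(window))        # magnitude spectrum of the window
    spectrum = spectrum[1:]                       # drop the DC component
    top = np.sort(spectrum)[::-1][:top_k]         # keep the strongest components
    return top / (np.linalg.norm(top) + 1e-8)     # normalize for use as a condition

t = np.arange(0, 1440)                            # e.g., one day at one-minute granularity
series = (np.sin(2 * np.pi * t / 1440)            # long-periodic pattern
          + 0.3 * np.sin(2 * np.pi * t / 60)      # short-periodic pattern
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

window = series[-120:]                            # the most recent window
print(frequency_features(window))                 # condition vector fed alongside the window
```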
There are a few weaknesses in this paper. For example, it is relatively hard to read because experimental details appear in the Introduction without a high-level description. The novelty is not clearly demonstrated, some parts of the methodology are not clear enough, and the evaluation metrics are limited compared with similar research.
questions: 1. In Section 1 Introduction, what’s the “optimal performance” mentioned in line 92?
2. In Section 1 Introduction, the statement like “we aim to re-examine the VAE model and improve its effectiveness in anomaly detection” is not strong enough in novelty.
3. In Section 1 (Introduction), it is hard to follow the experimental details that are presented without high-level explanations, for example the meaning of "reconstruction error" for anomaly detection effectiveness (a minimal scoring sketch is given after this list).
4. In Section 2.2 (VAEs and CVAEs), it would be better to state what the relationship is between FCVAE and these two frameworks.
5. In Section 3, only data augmentation is demonstrated in Figure 3, while other preprocessing steps are missing. In addition, the relationship between Figure 3 and Figure 5 is not clear from the figures.
6. In the experimental results, only the F1 score is used for comparison. What about other metrics such as accuracy, precision, and recall? Why choose the F1 score only?
7. There are some small errors. For example, in Section 2.1 line 244, “Given a UTS data”
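Regarding the "reconstruction error" question in item 3, here is a minimal sketch of how a reconstruction-based anomaly score is usually derived; the reconstruction comes from a placeholder moving-average smoother rather than from FCVAE, and the threshold is an arbitrary illustration.

```python
# Minimal sketch: reconstruction-error anomaly scoring. The "reconstruction" is a
# placeholder moving-average smoother standing in for a (C)VAE output, and the
# threshold is arbitrary; both are for illustration only.
import numpy as np

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)
x[700] += 3.0                                    # inject a point anomaly

def reconstruct(signal, k=25):
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode="same")

x_hat = reconstruct(x)                           # stand-in for the model's reconstruction
score = np.abs(x - x_hat)                        # point-wise reconstruction error

threshold = score.mean() + 3 * score.std()       # simple, arbitrary decision rule
anomalies = np.where(score > threshold)[0]
print(anomalies)                                 # includes index 700 (possibly a few neighbors)
```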
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
BO8Shlh4Cn | Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective | [
"Zexin Wang",
"Changhua Pei",
"Minghua Ma",
"Xin Wang",
"Zhihan Li",
"Dan Pei",
"Saravan Rajmohan",
"Dongmei Zhang",
"Qingwei Lin",
"Haiming Zhang",
"Jianhui li",
"Gaogang Xie"
] | Time series Anomaly Detection (AD) plays a crucial role for web systems. Various web systems rely on time series data to monitor and identify anomalies in real time, as well as to initiate diagnosis and remediation procedures. Variational Autoencoders (VAEs) have gained popularity in recent decades due to their superior de-noising capabilities, which are useful for anomaly detection. However, our study reveals that VAE-based methods face challenges in capturing long-periodic heterogeneous patterns and detailed short-periodic trends simultaneously. To address these challenges, we propose Frequency-enhanced Conditional Variational Autoencoder (FCVAE), a novel unsupervised AD method for univariate time series. To ensure an accurate AD, FCVAE exploits an innovative approach to concurrently integrate both the global and local frequency features into the condition of Conditional Variational Autoencoder (CVAE) to significantly increase the accuracy of reconstructing the normal data. Together with a carefully designed "target attention" mechanism, our approach allows the model to pick the most useful information from the frequency domain for better short-periodic trend construction. Our FCVAE has been evaluated on public datasets and a large-scale cloud system, and the results demonstrate that it outperforms state-of-the-art methods. This confirms the practical applicability of our approach in addressing the limitations of current VAE-based anomaly detection models. | [
"Univariate time series",
"Anomaly detection",
"Conditional variational autoencoder",
"Frequency information"
] | https://openreview.net/pdf?id=BO8Shlh4Cn | BDU2qBhKZN | official_review | 1,700,812,734,953 | BO8Shlh4Cn | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2479/Reviewer_cfBX"
] | review: **Summary:**
Time series anomaly detection plays a crucial role in practical applications. This paper introduces a novel VAE-based method to address existing limitations in VAE-based anomaly detection.
The authors' approach uniquely considers periodic patterns and detailed trends across different frequencies, employing specialized modules for each.
Their results demonstrate superior performance compared to existing methods.
**Strengths:**
- The paper is well-structured, effectively mapping the three identified challenges to the designs of the LFM, GFM, and attention mechanisms.
- The evaluation section comprehensively discusses these designs, offering valuable insights into their implementation and impact.
**Weaknesses:**
The overall improvement does not seem significant.
questions: What is the precision and recall performance of the proposed method? While the F1 score provides a holistic view, precision and recall are often crucial in specific scenarios. They are also used in [34].
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
BKfdwlU00z | AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention | [
"Kiho Lee",
"Chaejin Lim",
"Beomjin Jin",
"Taeyoung Kim",
"Hyoungshick Kim"
] | Ad blocking and web tracking prevention tools are widely used, but traditional filter list-based methods struggle to cope with web content manipulation. Machine learning-based approaches have been proposed to address these limitations, but they have primarily focused on improving detection accuracy at the expense of practical considerations such as deployment overhead. In this paper, we present AdFlush, a lightweight machine learning model for ad blocking and web tracking prevention that is practically designed for the Chrome browser. To develop AdFlush, we first evaluated the effectiveness of 883 features, including 350 existing and 533 new features, and ultimately identified 27 key features that achieve optimal detection performance. We then evaluated AdFlush using a dataset of 10,000 real-world websites, achieving an F1 score of 0.98, which outperforms state-of-the-art models such as AdGraph (F1 score: 0.93), WebGraph (F1 score: 0.90), and WTAgraph (F1 score: 0.84). Importantly, AdFlush also exhibits a significantly reduced computational footprint, requiring 56% less CPU and 80% less memory than AdGraph. We also evaluated the robustness of AdFlush against adversarial manipulation, such as URL manipulation and JavaScript obfuscation. Our experimental results show that AdFlush exhibits superior robustness with F1 scores of 0.89–0.98, outperforming AdGraph and WebGraph, which achieved F1 scores of 0.81–0.87 against adversarial samples. To demonstrate the real-world applicability of AdFlush, we have implemented it as a Chrome browser extension and made it publicly available. We also conducted a six-month longitudinal study, which showed that AdFlush maintained a high F1 score above 0.97 without retraining, demonstrating its effectiveness. Additionally, AdFlush detected 642 URLs across 108 domains that were missed by commercial filter lists, which we reported to filter list providers. | [
"Ad blocking",
"Web tracking",
"Machine learning",
"Deployability",
"Web security"
] | https://openreview.net/pdf?id=BKfdwlU00z | utLMmwZ8yU | decision | 1,705,909,228,738 | BKfdwlU00z | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper presents AdFlush, a lightweight ML-based ad/tracker detection system. To develop AdFlush, the authors first identify and analyze hundreds of features, and then condense them into 27 features that are used to build a classifier. AdFlush compares favorably against state-of-the-art graph-based ad detection systems, such as AdGraph, WebGraph, and WTAgraph. During the discussion phase, the authors also provided a comparison with traditional ad blockers such as uBlock Origin. Besides extensive comparisons against previous work in terms of accuracy and performance, the paper also includes an analysis of potential evasions, showing that AdFlush is more robust to adversarial manipulations, compared to previous work. Additionally, the paper also includes a longitudinal study of AdFlush's performance, showing that AdFlush remains effective over time.
Overall, the reviewers seem to appreciate the work, though they had some requests for improvement. For instance, they asked for a comparison with traditional ad blockers and evaluation results that are more centered around reducing false positives, rather than simply measuring the F1 score. Also, they asked for a more in-depth analysis of the features. The authors responded well to the reviewers' questions and presented convincing evidence to support the effectiveness of AdFlush as a practical ad blocker.
--- |
BKfdwlU00z | AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention | [
"Kiho Lee",
"Chaejin Lim",
"Beomjin Jin",
"Taeyoung Kim",
"Hyoungshick Kim"
] | Ad blocking and web tracking prevention tools are widely used, but traditional filter list-based methods struggle to cope with web content manipulation. Machine learning-based approaches have been proposed to address these limitations, but they have primarily focused on improving detection accuracy at the expense of practical considerations such as deployment overhead. In this paper, we present AdFlush, a lightweight machine learning model for ad blocking and web tracking prevention that is practically designed for the Chrome browser. To develop AdFlush, we first evaluated the effectiveness of 883 features, including 350 existing and 533 new features, and ultimately identified 27 key features that achieve optimal detection performance. We then evaluated AdFlush using a dataset of 10,000 real-world websites, achieving an F1 score of 0.98, which outperforms state-of-the-art models such as AdGraph (F1 score: 0.93), WebGraph (F1 score: 0.90), and WTAgraph (F1 score: 0.84). Importantly, AdFlush also exhibits a significantly reduced computational footprint, requiring 56% less CPU and 80% less memory than AdGraph. We also evaluated the robustness of AdFlush against adversarial manipulation, such as URL manipulation and JavaScript obfuscation. Our experimental results show that AdFlush exhibits superior robustness with F1 scores of 0.89–0.98, outperforming AdGraph and WebGraph, which achieved F1 scores of 0.81–0.87 against adversarial samples. To demonstrate the real-world applicability of AdFlush, we have implemented it as a Chrome browser extension and made it publicly available. We also conducted a six-month longitudinal study, which showed that AdFlush maintained a high F1 score above 0.97 without retraining, demonstrating its effectiveness. Additionally, AdFlush detected 642 URLs across 108 domains that were missed by commercial filter lists, which we reported to filter list providers. | [
"Ad blocking",
"Web tracking",
"Machine learning",
"Deployability",
"Web security"
] | https://openreview.net/pdf?id=BKfdwlU00z | Y87cuDwgb4 | official_review | 1,701,537,240,981 | BKfdwlU00z | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2363/Reviewer_iKtG"
] | review: Thank you for submitting your paper to WWW 2024. I do like the direction that you take in your work, but I am unconvinced about its efficacy and whether it is actually practical. This makes it difficult for me to support acceptance. The main problem I see is that you do not compare or evaluate your approach to non-ML ad blockers in either performance or accuracy. Instead, your approach tests whether you can detect some parts of the existing filter lists, but it is not clear which ones and what their reach is. Moreover, considering especially approaches like ad proxying, CNAME cloaking, and the recent tricks YouTube has been playing, blocking at the request-level may be insufficient for current advertisements (and some filter lists go way beyond specifying just request URLs, for example, uBlock Origin's lists, which are very different from EasyList filter lists).
Therefore, I find it necessary to also evaluate against "traditional" ad blockers that are actually the state of the art in ad blocking (especially uBlock Origin), rather than focusing on ML-based ad blockers and research prototypes that have not been adopted or used widely. Indeed, many of the "traditional" ad blockers do not only block at the request level based on filter lists (like EasyList), but go beyond those, rendering a comparison at this level inaccurate and superficial.
Overall, I would have loved to see an actual end-to-end evaluation of your ad blocker on a portion of some top website list (e.g., Tranco) against existing traditional ad blockers and ML-based blockers, investigating how many ads you actually block and also what the performance overhead is. Currently, you do these investigations separately, without considering the environment your ad blocker runs in, which I do not find convincing.
questions: - If you would deploy AdFlush in a browser, how do website load times change? Prior work at WWW 2020 has shown that privacy-focused browser extensions can reduce load times, and it would be useful to understand how AdFlush compares to them.
- What are the ads that AdFlush cannot detect? Do they follow any pattern, or do they fall into specific groups?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BKfdwlU00z | AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention | [
"Kiho Lee",
"Chaejin Lim",
"Beomjin Jin",
"Taeyoung Kim",
"Hyoungshick Kim"
] | Ad blocking and web tracking prevention tools are widely used, but traditional filter list-based methods struggle to cope with web content manipulation. Machine learning-based approaches have been proposed to address these limitations, but they have primarily focused on improving detection accuracy at the expense of practical considerations such as deployment overhead. In this paper, we present AdFlush, a lightweight machine learning model for ad blocking and web tracking prevention that is practically designed for the Chrome browser. To develop AdFlush, we first evaluated the effectiveness of 883 features, including 350 existing and 533 new features, and ultimately identified 27 key features that achieve optimal detection performance. We then evaluated AdFlush using a dataset of 10,000 real-world websites, achieving an F1 score of 0.98, which outperforms state-of-the-art models such as AdGraph (F1 score: 0.93), WebGraph (F1 score: 0.90), and WTAgraph (F1 score: 0.84). Importantly, AdFlush also exhibits a significantly reduced computational footprint, requiring 56% less CPU and 80% less memory than AdGraph. We also evaluated the robustness of AdFlush against adversarial manipulation, such as URL manipulation and JavaScript obfuscation. Our experimental results show that AdFlush exhibits superior robustness with F1 scores of 0.89–0.98, outperforming AdGraph and WebGraph, which achieved F1 scores of 0.81–0.87 against adversarial samples. To demonstrate the real-world applicability of AdFlush, we have implemented it as a Chrome browser extension and made it publicly available. We also conducted a six-month longitudinal study, which showed that AdFlush maintained a high F1 score above 0.97 without retraining, demonstrating its effectiveness. Additionally, AdFlush detected 642 URLs across 108 domains that were missed by commercial filter lists, which we reported to filter list providers. | [
"Ad blocking",
"Web tracking",
"Machine learning",
"Deployability",
"Web security"
] | https://openreview.net/pdf?id=BKfdwlU00z | Urourpm6xJ | official_review | 1,701,388,774,802 | BKfdwlU00z | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2363/Reviewer_rrga"
] | review: Firstly, I would like to thank the authors for submitting their paper to WWW'24! It was definitely interesting to read and addresses a highly relevant and timely topic (ad/tracker blocking), and focuses on the practicality of deploying such a countermeasure. Overall, the paper demonstrates an improved performance, both in terms of detection ability as well as resource usage. Furthermore, I appreciated the longitudinal study showing that the feature drift over time is limited and consequently the drop in performance is acceptable.
Nevertheless, there are a couple of concerns with the study that I believe should be addressed:
- The study operates under the premise that advertisers/trackers are not trying to actively circumvent detection. Assuming that the model is published for all users (which would be required to run the extension on the users' browsers), the advertiser/tracker could keep modifying the scripts and setup until they are able to bypass the detection. As long as the recall is not 1, such cases can exist.
- Given that only 27 features are considered, it would have been interesting to see an overview of how difficult it would be for an advertiser/tracker to modify such features. For instance, reducing the number of storage gets to just a single one seems like mostly a development issue.
- The key feature identification was performed separately for existing and new features, and those 47 and 11 features were then simply combined. This implicitly assumes that combining them does not affect their performance (a sketch of a joint selection over the combined pool follows below).
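To illustrate what a joint selection over the combined pool could look like, here is a hedged scikit-learn sketch; the data is a synthetic stand-in, and the estimator, step size, and scoring are illustrative choices rather than anything from the paper:

```python
# Sketch: jointly select features from the full candidate pool instead of selecting
# existing and new features separately. X/y are synthetic stand-ins, not the AdFlush
# dataset; 883 mirrors the number of candidate features mentioned in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = make_classification(n_samples=2000, n_features=883, n_informative=30, random_state=0)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=20,          # drop 20 features per elimination round
    cv=5,
    scoring="f1",
    n_jobs=-1,
)
selector.fit(X, y)
joint_features = selector.get_support(indices=True)
print(len(joint_features))  # compare against the union of the separately chosen sets
```

Comparing the jointly selected set (and its cross-validated F1) against the separately selected 47 + 11 features would directly test the assumption pointed out above.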
questions: - Which features are intrinsic to the advertising/tracking ecosystem, i.e., which features would be extremely difficult to manipulate or adjust without changing the way the ecosystem works?
- Why were the new and existing features considered separately for the key feature identification?
ethics_review_flag: No
ethics_review_description: No ethical concerns.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BKfdwlU00z | AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention | [
"Kiho Lee",
"Chaejin Lim",
"Beomjin Jin",
"Taeyoung Kim",
"Hyoungshick Kim"
] | Ad blocking and web tracking prevention tools are widely used, but traditional filter list-based methods struggle to cope with web content manipulation. Machine learning-based approaches have been proposed to address these limitations, but they have primarily focused on improving detection accuracy at the expense of practical considerations such as deployment overhead. In this paper, we present AdFlush, a lightweight machine learning model for ad blocking and web tracking prevention that is practically designed for the Chrome browser. To develop AdFlush, we first evaluated the effectiveness of 883 features, including 350 existing and 533 new features, and ultimately identified 27 key features that achieve optimal detection performance. We then evaluated AdFlush using a dataset of 10,000 real-world websites, achieving an F1 score of 0.98, which outperforms state-of-the-art models such as AdGraph (F1 score: 0.93), WebGraph (F1 score: 0.90), and WTAgraph (F1 score: 0.84). Importantly, AdFlush also exhibits a significantly reduced computational footprint, requiring 56% less CPU and 80% less memory than AdGraph. We also evaluated the robustness of AdFlush against adversarial manipulation, such as URL manipulation and JavaScript obfuscation. Our experimental results show that AdFlush exhibits superior robustness with F1 scores of 0.89–0.98, outperforming AdGraph and WebGraph, which achieved F1 scores of 0.81–0.87 against adversarial samples. To demonstrate the real-world applicability of AdFlush, we have implemented it as a Chrome browser extension and made it publicly available. We also conducted a six-month longitudinal study, which showed that AdFlush maintained a high F1 score above 0.97 without retraining, demonstrating its effectiveness. Additionally, AdFlush detected 642 URLs across 108 domains that were missed by commercial filter lists, which we reported to filter list providers. | [
"Ad blocking",
"Web tracking",
"Machine learning",
"Deployability",
"Web security"
] | https://openreview.net/pdf?id=BKfdwlU00z | 5SqHcZ9vue | official_review | 1,700,223,877,537 | BKfdwlU00z | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2363/Reviewer_UFnp"
] | review: The authors have done a very thorough piece of work in which they create a machine learning-based algorithm that uses 27 features and removes (effectively detects) ads. They are, of course, limited by Google's Manifest requirements; however, they do the best that can be done within those limits. They use different machine learning algorithms, mostly based on boosting. The most interesting part is the longitudinal study over a period of 6 months. The work is reasonably solid and the comparison with related work is quite sound. The authors claim to have compared with three state-of-the-art works. The results are also favorable.
My review scores are still not stellar because, as a problem, this is a very old one. With ML algorithms, it is always possible to do a lot of fine-tuning and tweaking to make the model more robust and achieve good accuracy. I would have ideally loved to see a new kind of ML technique being used, such as transformers or diffusion-based models (though I am not sure about their relevance for this problem). Something new could have definitely made this paper shine. Also, there was no need to send so much data to a third-party server (requestly.io). Can something be done more locally?
questions: 1. Can we say something definitive about the state-of-the-art works? What makes them inferior?
2. Were more sophisticated ML models considered? Are the boosting-based approaches considered by the authors the best choice?
3. Is there an element of explainability in the model?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BKfdwlU00z | AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention | [
"Kiho Lee",
"Chaejin Lim",
"Beomjin Jin",
"Taeyoung Kim",
"Hyoungshick Kim"
] | Ad blocking and web tracking prevention tools are widely used, but traditional filter list-based methods struggle to cope with web content manipulation. Machine learning-based approaches have been proposed to address these limitations, but they have primarily focused on improving detection accuracy at the expense of practical considerations such as deployment overhead. In this paper, we present AdFlush, a lightweight machine learning model for ad blocking and web tracking prevention that is practically designed for the Chrome browser. To develop AdFlush, we first evaluated the effectiveness of 883 features, including 350 existing and 533 new features, and ultimately identified 27 key features that achieve optimal detection performance. We then evaluated AdFlush using a dataset of 10,000 real-world websites, achieving an F1 score of 0.98, which outperforms state-of-the-art models such as AdGraph (F1 score: 0.93), WebGraph (F1 score: 0.90), and WTAgraph (F1 score: 0.84). Importantly, AdFlush also exhibits a significantly reduced computational footprint, requiring 56% less CPU and 80% less memory than AdGraph. We also evaluated the robustness of AdFlush against adversarial manipulation, such as URL manipulation and JavaScript obfuscation. Our experimental results show that AdFlush exhibits superior robustness with F1 scores of 0.89–0.98, outperforming AdGraph and WebGraph, which achieved F1 scores of 0.81–0.87 against adversarial samples. To demonstrate the real-world applicability of AdFlush, we have implemented it as a Chrome browser extension and made it publicly available. We also conducted a six-month longitudinal study, which showed that AdFlush maintained a high F1 score above 0.97 without retraining, demonstrating its effectiveness. Additionally, AdFlush detected 642 URLs across 108 domains that were missed by commercial filter lists, which we reported to filter list providers. | [
"Ad blocking",
"Web tracking",
"Machine learning",
"Deployability",
"Web security"
] | https://openreview.net/pdf?id=BKfdwlU00z | 5LewebN3yY | official_review | 1,700,688,273,376 | BKfdwlU00z | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2363/Reviewer_xnxa"
] | review: Pros:
- The paper demonstrates practical improvements to many issues facing ML ad blockers.
- The problem is well motivated, since manual lists are easy for advertisers to bypass.
- The implementation uses clever ways to operate within the limits of the Manifest V3 requirements.
Cons:
- Claims sometimes stronger than evidence provided in the paper
My main review is about toning down the strength of some claims. For example:
"Manual maintenance of these filter lists requires significant human effort and they are prone to false-positive and false-negative errors." Machine learning models also arguably require human effort to curate. The model requires human effort to collect ground truth data, and periodically retrain with new data. Furthermore, the paper argues that the ML model is more robust against adversarial examples. The paper supports the claim that AdFlush is more robust to adversarial examples than other ML methods, but not that it is more robust than filter lists. I think more acknowledgment of these things in a limitations section would help.
The TPR at an FPR of 0 is a more relevant metric for this situation than F1. For a practical deployment in the web scenario, controlling false positives is very important to users, since a false positive can break websites. Therefore, for this application, I would argue that the most relevant metric would be the true positive rate at a false positive rate close to zero (somewhere between 10e-4 and 10e-7).
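As an illustration of the metric being asked for, here is a short sketch using scikit-learn's ROC utilities; the labels and scores are random stand-ins, and the FPR budget is just an example value:

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, scores, max_fpr=1e-3):
    """Highest true-positive rate achievable while keeping the false-positive rate <= max_fpr."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    feasible = fpr <= max_fpr
    return tpr[feasible].max() if feasible.any() else 0.0

# Synthetic stand-in predictions, for illustration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)
scores = y_true * 0.8 + rng.normal(0.0, 0.3, size=10_000)
print(tpr_at_fpr(y_true, scores, max_fpr=1e-3))
```

Reporting this value alongside F1 would make the "does it break benign sites?" concern directly measurable.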
For performance evaluation, it would be more relevant to report the latency impact on loading a whole page. The metrics per request look promising, but the user experience is more closely linked to the entire latency impact on the page. A comparison to existing latency for filter lists would also be good here.
questions: Questions:
- What is the model's TPR at an FPR close to 0?
- Of the 642 additional detections by the model, how many were not manually verified?
- How does latency compare to existing filter list implementations?
- How does latency affect the entire load time of the page?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
BKfdwlU00z | AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention | [
"Kiho Lee",
"Chaejin Lim",
"Beomjin Jin",
"Taeyoung Kim",
"Hyoungshick Kim"
] | Ad blocking and web tracking prevention tools are widely used, but traditional filter list-based methods struggle to cope with web content manipulation. Machine learning-based approaches have been proposed to address these limitations, but they have primarily focused on improving detection accuracy at the expense of practical considerations such as deployment overhead. In this paper, we present AdFlush, a lightweight machine learning model for ad blocking and web tracking prevention that is practically designed for the Chrome browser. To develop AdFlush, we first evaluated the effectiveness of 883 features, including 350 existing and 533 new features, and ultimately identified 27 key features that achieve optimal detection performance. We then evaluated AdFlush using a dataset of 10,000 real-world websites, achieving an F1 score of 0.98, which outperforms state-of-the-art models such as AdGraph (F1 score: 0.93), WebGraph (F1 score: 0.90), and WTAgraph (F1 score: 0.84). Importantly, AdFlush also exhibits a significantly reduced computational footprint, requiring 56% less CPU and 80% less memory than AdGraph. We also evaluated the robustness of AdFlush against adversarial manipulation, such as URL manipulation and JavaScript obfuscation. Our experimental results show that AdFlush exhibits superior robustness with F1 scores of 0.89–0.98, outperforming AdGraph and WebGraph, which achieved F1 scores of 0.81–0.87 against adversarial samples. To demonstrate the real-world applicability of AdFlush, we have implemented it as a Chrome browser extension and made it publicly available. We also conducted a six-month longitudinal study, which showed that AdFlush maintained a high F1 score above 0.97 without retraining, demonstrating its effectiveness. Additionally, AdFlush detected 642 URLs across 108 domains that were missed by commercial filter lists, which we reported to filter list providers. | [
"Ad blocking",
"Web tracking",
"Machine learning",
"Deployability",
"Web security"
] | https://openreview.net/pdf?id=BKfdwlU00z | 52k8zqR2vS | official_review | 1,700,646,982,482 | BKfdwlU00z | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2363/Reviewer_c4mY"
] | review: The paper presents a lightweight ML-based ad blocker that outperforms state-of-the-art solutions in both effectiveness (F1 score) and computation.
I appreciate the contributions and presentation of the work done, and would be happy to see the paper accepted at the conference. The practical angle makes the work all the more valuable. The paper is also written very clearly.
The paper makes a convincing case that it outperforms previous ML-based ad blocking methods. The finding that, despite making the approach more lightweight, the model also achieves a better detection rate was very interesting and somewhat counterintuitive to me. I do not know how feasible it is to go deeper into why this is the case (e.g., providing an intuition for the feature set in Appendix B), but it is intriguing nonetheless.
There are quite a few deep and comprehensive insights that I find very valuable, for example the feature distributions (Figure 2), the longitudinal study, and the robustness analysis.
I greatly appreciate that there will be a publicly available Chrome extension, and that the authors made the effort to report the undetected ad URLs to the main filter list providers.
I see a few main areas for improving the paper.
The first is to provide evidence for certain statements that are relevant to the contributions but are not (sufficiently) substantiated:
* S1: _"AdFlush does not transmit User-Agent values or cookies to third parties, upholding a strong commitment to user privacy"_ - who does this then?
* S2.3: _"automated filter rule generation is inherently unsuitable
for real-time ads and trackers detection" - why?
* S3.4: _"Additionally, obfuscation-related features, such
as Identifier lengths in ASTs and character counts in code lines,
would also enhance performance."_ - any proof for this?
* S4.2: _"We
analyzed the training data for each feature selection, identified the
top 10 models based on F1 scores, and selected the best-performing
model using 5-fold cross-validation"_ - can you give the results? It is unclear whether the GBM vastly outperforms the other models, or whether their performances are very similar instead.
The second is showing more concretely where the newly developed features (Sections 3.2, 3.4) stem from, and whether they were solicited systematically. The paper does not go into much detail beyond "we identified new features"; the only (short) explanation I found is in Section 3.2: _"These new features are derived from requested JavaScript source code or embedded HTML scripts, including n-gram frequencies parsed from the abstract syntax tree (AST), the structure of the AST, and various script-based metrics."_
I would value a more detailed description of how these features were developed and selected.
The third is addressing the impact of two error-related issues:
* Figure 4: there is a fixed delay timeout of 300ms. What would happen if the evaluation takes longer than 300ms? Will the ad then be shown to the user? In this area, I was also confused by the fact that in S5.4, the standard deviation for a single request is 0.990s, bringing the total request time well above 300ms.
* What is the impact of false positives/negatives on the user experience, also per HTTP request type? Some broad insights would be helpful. For example, does it matter that there is a FNR of 13% for `main_frame` requests? Does this mean up to 13% of ads can be shown to the user? Or for the FPR of 2% for `ping`, are potentially necessary requests blocked, and could this break website functionality?
For the Runtime Overhead Evaluation (S5.4), I am confused by the use of such a high-resource machine (Intel Xeon CPU, 256 GB RAM) for evaluating what is claimed to be a "lightweight" approach. The results suggest that AdFlush is indeed rather lightweight (e.g., 40 MB RAM usage), but I do not see why this should be evaluated on a non-consumer device. Some more detail on whether the evaluation results apply in total, to one request, to one web page, ... would also be very helpful. This feels necessary to me to clarify, as the computational performance is one of the main contributions claimed.
A final comment on the extension (S4.3): I wonder what the impact is of already having to resort to an older version of `declarativeNetRequest`. Could this mean that the extension could very quickly stop working if that version were fully deprecated or its use discouraged? (This comment does not affect my evaluation of the paper in any way.)
Minor nitpicks:
- S1. Introduction: _"To address these issues several tools are available, ..."_: missing a "such as"?
- S3.1: an average of 32.58 seconds per crawl gives me 32.58 s * 10000 sites / 7 crawlers = 13 hours, yet 11 hours are mentioned
- Appendix A (Table 5) could benefit from the "# of values" column that is already in Table 6 - this would also resolve the "confusion" touched upon in footnote 1.
The reasoning for my novelty score is that the area of ML-based ad blocking has had quite a body of research already (so the problem/solution category is less "novel"), but the paper still contributes greatly to improving the state of the art.
**I have read the rebuttal.**
questions: - Explain the procedure behind soliciting and selecting the newly developed features.
- Clarify the impact of the 300ms timeout, and the reason why a single request could take (much) longer to execute.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
B5LpbOJ8MH | MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning | [
"Yun Zhu",
"Haizhou Shi",
"Zhenshuo Zhang",
"Siliang Tang"
] | In contemporary research, large-scale graphs and graph neural networks (GNNs) serve as prevalent tools for organizing and modeling web-related data. Nevertheless, the dynamic nature of web content, characterized by continual change and evolution over time (e.g., the prevailing trends and citation patterns in online citation networks), presents a formidable challenge to the adaptability of GNNs in addressing these distributional shifts.
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
To improve the robustness against such distributional shifts, we propose a $\underline{M}$odel-$\underline{A}$gnostic $\underline{R}$ecipe for $\underline{I}$mproving $\underline{O}$OD generalizability of unsupervised graph contrastive learning methods, which we refer to as MARIO. MARIO introduces two principles aimed at developing distributional-shift-robust graph contrastive methods to overcome the limitations of existing frameworks: (i) Invariant principle that incorporates adversarial graph augmentation to obtain invariant representations and (ii) Information Bottleneck (IB) principle for achieving generalizable representations through refining representation contrasting.
To the best of our knowledge, this is the first work that investigates the OOD generalization problem of graph contrastive learning, with a specific focus on node-level tasks. Through extensive experiments, we demonstrate that our method achieves state-of-the-art performance on the OOD test set, while maintaining comparable performance on the in-distribution test set when compared to existing approaches. | [
"Graph Neural Networks",
"Domain Generalization",
"Self-Supervised Learning",
"Graph Representation Learning",
"Pre-Training"
] | https://openreview.net/pdf?id=B5LpbOJ8MH | Uu5dms7GWp | official_review | 1,699,866,852,560 | B5LpbOJ8MH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission77/Reviewer_Bco1"
] | review: This paper describes a new recipe for graph contrastive learning that improves OOD generalization performance in the unsupervised scenario. The recipe includes two common principles: invariant representation learning and the information bottleneck. Empirical results on multiple datasets show the superiority of the proposed framework.
Strengths:
1) This paper is well motivated. Investigating the generalization ability of unsupervised graph learning is an important and interesting direction.
2) This paper includes extensive experiments on graph tasks to verify the effectiveness of the proposed method.
3) This paper gives a detailed background of the techniques used to introduce the proposed method.
Weaknesses:
1) The paper is not well written and its organization is unclear. Some of the concepts and equations used are not well explained, such as the concrete definition of environment labels. Some proofs taken from existing work are not contributions of this paper, so it is confusing to include them in the Appendix.
2) The model combines many incremental improvements and is therefore quite complicated, yet its runtime complexity is not given.
3) Materials for reproducing the experiments are not provided.
questions: See above weaknesses.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
B5LpbOJ8MH | MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning | [
"Yun Zhu",
"Haizhou Shi",
"Zhenshuo Zhang",
"Siliang Tang"
] | In contemporary research, large-scale graphs and graph neural networks (GNNs) serve as prevalent tools for organizing and modeling web-related data. Nevertheless, the dynamic nature of web content, characterized by continual change and evolution over time (e.g., the prevailing trends and citation patterns in online citation networks), presents a formidable challenge to the adaptability of GNNs in addressing these distributional shifts.
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
To improve the robustness against such distributional shifts, we propose a $\underline{M}$odel-$\underline{A}$gnostic $\underline{R}$ecipe for $\underline{I}$mproving $\underline{O}$OD generalizability of unsupervised graph contrastive learning methods, which we refer to as MARIO. MARIO introduces two principles aimed at developing distributional-shift-robust graph contrastive methods to overcome the limitations of existing frameworks: (i) Invariant principle that incorporates adversarial graph augmentation to obtain invariant representations and (ii) Information Bottleneck (IB) principle for achieving generalizable representations through refining representation contrasting.
To the best of our knowledge, this is the first work that investigates the OOD generalization problem of graph contrastive learning, with a specific focus on node-level tasks. Through extensive experiments, we demonstrate that our method achieves state-of-the-art performance on the OOD test set, while maintaining comparable performance on the in-distribution test set when compared to existing approaches. | [
"Graph Neural Networks",
"Domain Generalization",
"Self-Supervised Learning",
"Graph Representation Learning",
"Pre-Training"
] | https://openreview.net/pdf?id=B5LpbOJ8MH | R2vw6LSLSq | official_review | 1,700,719,876,916 | B5LpbOJ8MH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission77/Reviewer_oqvZ"
] | review: Contributions:
The authors observe that graph contrastive learning (GCL) methods differ in their robustness to OOD tests. To address this, the authors analyze the limitations of these methods and propose a contrastive learning framework called MARIO to improve robustness against distributional shifts. MARIO comprises adversarial augmentations (to push for invariant representations) and improved representation contrasting. The authors evaluate MARIO's performance on OOD and in-distribution test sets.
My recommendation is based on S1, S2, W1, W2, W3. I am happy to raise my scores based on the authors' responses to my questions and clarification/justification of W1, W2, W3.
Quality:
Pros:
- (S2) The authors' theoretical analysis of augmentations and representation contrasting is interesting and motivates the adversarial augmentation and CMI minimization components of MARIO well.
- (S3) Experiments: The authors evaluate MARIO in transductive and inductive settings and with respect to various relevant supervised, IRM, and graph OOD baselines. In the transductive setting, MARIO mostly outperforms the baselines both in-distribution and OOD for the datasets, but MARIO's accuracy is not significantly higher. MARIO consistently outperforms baselines in the inductive setting.
Cons:
- (W3) The authors should provide further justification for why online clustering labels serve as good pseudo-labels.
- What is the added time complexity of online clustering?
- Experiments: The authors should comment on the validity of using graph OOD benchmarks to evaluate their node OOD method.
Clarity:
Pros:
- The writing is extremely clear and well-organized, with vivid examples and solid interpretations of theoretical results.
Originality:
Pros:
- To the best of the authors' knowledge, they are the first to study the OOD generalization of graph contrastive learning for node-level tasks.
- The authors thoroughly compare/contrast MARIO to related methods.
Cons:
- (W1) The authors adopt FLAG (an existing graph augmentation mechanism) and the common practice of using prototypes in GCL (e.g., [1, 2]).
Significance:
Pros:
- (S1) The authors focus on node-level tasks, which are more challenging due to the interconnected nature of nodes and the diverse types of possible distribution shifts.
Cons:
- (W2) The authors should comment on the limitations of their method.
[1] Zhang, Shichang, et al. "Motif-driven contrastive learning of graph representations." arXiv preprint arXiv:2012.12533 (2020).
[2] Li, Bolian, Baoyu Jing, and Hanghang Tong. "Graph communal contrastive learning." Proceedings of the ACM Web Conference 2022. 2022.
EDIT: I have read the authors' rebuttal.
questions: Please see Review (above).
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
B5LpbOJ8MH | MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning | [
"Yun Zhu",
"Haizhou Shi",
"Zhenshuo Zhang",
"Siliang Tang"
] | In contemporary research, large-scale graphs and graph neural networks (GNNs) serve as prevalent tools for organizing and modeling web-related data. Nevertheless, the dynamic nature of web content, characterized by continual change and evolution over time (e.g., the prevailing trends and citation patterns in online citation networks), presents a formidable challenge to the adaptability of GNNs in addressing these distributional shifts.
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
To improve the robustness against such distributional shifts, we propose a $\underline{M}$odel-$\underline{A}$gnostic $\underline{R}$ecipe for $\underline{I}$mproving $\underline{O}$OD generalizability of unsupervised graph contrastive learning methods, which we refer to as MARIO. MARIO introduces two principles aimed at developing distributional-shift-robust graph contrastive methods to overcome the limitations of existing frameworks: (i) Invariant principle that incorporates adversarial graph augmentation to obtain invariant representations and (ii) Information Bottleneck (IB) principle for achieving generalizable representations through refining representation contrasting.
To the best of our knowledge, this is the first work that investigates the OOD generalization problem of graph contrastive learning, with a specific focus on node-level tasks. Through extensive experiments, we demonstrate that our method achieves state-of-the-art performance on the OOD test set, while maintaining comparable performance on the in-distribution test set when compared to existing approaches. | [
"Graph Neural Networks",
"Domain Generalization",
"Self-Supervised Learning",
"Graph Representation Learning",
"Pre-Training"
] | https://openreview.net/pdf?id=B5LpbOJ8MH | IfcOXOPnz5 | official_review | 1,701,272,093,771 | B5LpbOJ8MH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission77/Reviewer_S5XT"
] | review: Summary
The paper focuses on graph-structured data applications and the challenges in out-of-distribution (OOD) generalization, particularly with unlabeled graph data. It identifies two primary challenges: the non-Euclidean nature of graphs causing complex distributional shifts and the heavy reliance on label information in existing OOD generalization methods. To address these, the paper introduces MARIO (Model-Agnostic Recipe for Improving OOD generalization of GCL methods), which operates on two key aspects of GCL: view generation and representation contrasting. MARIO integrates the Invariance principle, using adversarial graph augmentation, and the Information Bottleneck principle, aiming to develop GCL methods that are robust against distributional shifts. Extensive experiments demonstrate that MARIO effectively enhances the OOD generalization capabilities of GCL methods.
Pros
(1) Relevance of Study: The paper tackles important tasks in the domain of unsupervised learning on graph data, focusing on improving OOD generalization.
(2) New Approach: Introducing MARIO, a novel method that addresses the key challenges in unsupervised OOD generalization, is a significant contribution.
(3) Extensive Experiments: The paper's comprehensive experimental analysis lends credibility to its findings and the effectiveness of MARIO in various scenarios.
Cons
(1) Clarity and Readability: The paper requires improved clarity, especially in Section 3. The connection of Definition 2 and Theorem 3.1 to the context is unclear, necessitating better explanations and possibly pseudocode for illustrating the framework. Some of the concepts and equations used are not well explained.
(2) Lack of Experimental Details: There is an absence of detailed information about the parameters used in baseline methods, which is crucial for replicability and understanding the experiments.
questions: (1) Can the proposed objective be regarded as a plug-in that also helps to improve supervised methods? If not, can more explanation be provided?
ethics_review_flag: No
ethics_review_description: no
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
B5LpbOJ8MH | MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning | [
"Yun Zhu",
"Haizhou Shi",
"Zhenshuo Zhang",
"Siliang Tang"
] | In contemporary research, large-scale graphs and graph neural networks (GNNs) serve as prevalent tools for organizing and modeling web-related data. Nevertheless, the dynamic nature of web content, characterized by continual change and evolution over time (e.g., the prevailing trends and citation patterns in online citation networks), presents a formidable challenge to the adaptability of GNNs in addressing these distributional shifts.
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
To improve the robustness against such distributional shifts, we propose a $\underline{M}$odel-$\underline{A}$gnostic $\underline{R}$ecipe for $\underline{I}$mproving $\underline{O}$OD generalizability of unsupervised graph contrastive learning methods, which we refer to as MARIO. MARIO introduces two principles aimed at developing distributional-shift-robust graph contrastive methods to overcome the limitations of existing frameworks: (i) Invariant principle that incorporates adversarial graph augmentation to obtain invariant representations and (ii) Information Bottleneck (IB) principle for achieving generalizable representations through refining representation contrasting.
To the best of our knowledge, this is the first work that investigates the OOD generalization problem of graph contrastive learning, with a specific focus on node-level tasks. Through extensive experiments, we demonstrate that our method achieves state-of-the-art performance on the OOD test set, while maintaining comparable performance on the in-distribution test set when compared to existing approaches. | [
"Graph Neural Networks",
"Domain Generalization",
"Self-Supervised Learning",
"Graph Representation Learning",
"Pre-Training"
] | https://openreview.net/pdf?id=B5LpbOJ8MH | E8BHaoRB68 | official_review | 1,700,386,079,106 | B5LpbOJ8MH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission77/Reviewer_96AN"
] | review: The authors propose a model-agnostic recipe, motivated by the invariant learning and information bottleneck principles, to address the challenges of distribution shifts in graph contrastive learning. The authors provide sufficient theoretical proofs and experiments. The experimental results also illustrate the validity of the proposed model.
questions: 1. Do not add watermarks to the draft paper.
2. Since the authors propose many loss functions, a model diagram or a detailed algorithm should be included in the paper. The current Algorithm 1 in the appendix does not clearly show the procedure of the algorithm.
3. Why not include Citation 40 (Sihang Li, Xiang Wang, An Zhang, Yingxin Wu, Xiangnan He, and Tat-Seng Chua. 2022. Let invariant rationale discovery inspire graph contrastive learning. In International Conference on Machine Learning. PMLR, 13052-13065) as a baseline?
ethics_review_flag: No
ethics_review_description: no
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
B5LpbOJ8MH | MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning | [
"Yun Zhu",
"Haizhou Shi",
"Zhenshuo Zhang",
"Siliang Tang"
] | In contemporary research, large-scale graphs and graph neural networks (GNNs) serve as prevalent tools for organizing and modeling web-related data. Nevertheless, the dynamic nature of web content, characterized by continual change and evolution over time (e.g., the prevailing trends and citation patterns in online citation networks), presents a formidable challenge to the adaptability of GNNs in addressing these distributional shifts.
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
To improve the robustness against such distributional shifts, we propose a $\underline{M}$odel-$\underline{A}$gnostic $\underline{R}$ecipe for $\underline{I}$mproving $\underline{O}$OD generalizability of unsupervised graph contrastive learning methods, which we refer to as MARIO. MARIO introduces two principles aimed at developing distributional-shift-robust graph contrastive methods to overcome the limitations of existing frameworks: (i) Invariant principle that incorporates adversarial graph augmentation to obtain invariant representations and (ii) Information Bottleneck (IB) principle for achieving generalizable representations through refining representation contrasting.
To the best of our knowledge, this is the first work that investigates the OOD generalization problem of graph contrastive learning, with a specific focus on node-level tasks. Through extensive experiments, we demonstrate that our method achieves state-of-the-art performance on the OOD test set, while maintaining comparable performance on the in-distribution test set when compared to existing approaches. | [
"Graph Neural Networks",
"Domain Generalization",
"Self-Supervised Learning",
"Graph Representation Learning",
"Pre-Training"
] | https://openreview.net/pdf?id=B5LpbOJ8MH | 7EDaUNBLGY | official_review | 1,700,726,301,606 | B5LpbOJ8MH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission77/Reviewer_HF5w"
] | review: The paper proposes a novel model-agnostic recipe for the OOD generalization problem of graph contrastive learning methods, which mainly works on the view generation and representation contrasting components of GCL.
Strengths
1) The authors investigate the OOD generalization problem of graph contrastive learning specifically for node-level tasks for the first time.
2) Formally formulated the problem of graph contrastive learning for OOD generalization.
3) The experiments are extensive with various baselines to certify the effectiveness of the approach.
4) The paper is well presented and structured.
Weaknesses
1) It would be better to provide a framework figure or an overview of MARIO, which would be clearer and friendlier for readers.
2) The sensitivity analysis does not cover all hyperparameters.
questions: See the weaknesses.
ethics_review_flag: No
ethics_review_description: No issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
B5LpbOJ8MH | MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning | [
"Yun Zhu",
"Haizhou Shi",
"Zhenshuo Zhang",
"Siliang Tang"
] | In contemporary research, large-scale graphs and graph neural networks (GNNs) serve as prevalent tools for organizing and modeling web-related data. Nevertheless, the dynamic nature of web content, characterized by continual change and evolution over time (e.g., the prevailing trends and citation patterns in online citation networks), presents a formidable challenge to the adaptability of GNNs in addressing these distributional shifts.
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
To improve the robustness against such distributional shifts, we propose a $\underline{M}$odel-$\underline{A}$gnostic $\underline{R}$ecipe for $\underline{I}$mproving $\underline{O}$OD generalizability of unsupervised graph contrastive learning methods, which we refer to as MARIO. MARIO introduces two principles aimed at developing distributional-shift-robust graph contrastive methods to overcome the limitations of existing frameworks: (i) Invariant principle that incorporates adversarial graph augmentation to obtain invariant representations and (ii) Information Bottleneck (IB) principle for achieving generalizable representations through refining representation contrasting.
To the best of our knowledge, this is the first work that investigates the OOD generalization problem of graph contrastive learning, with a specific focus on node-level tasks. Through extensive experiments, we demonstrate that our method achieves state-of-the-art performance on the OOD test set, while maintaining comparable performance on the in-distribution test set when compared to existing approaches. | [
"Graph Neural Networks",
"Domain Generalization",
"Self-Supervised Learning",
"Graph Representation Learning",
"Pre-Training"
] | https://openreview.net/pdf?id=B5LpbOJ8MH | 6N3hYjTnPL | decision | 1,705,909,209,578 | B5LpbOJ8MH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: **Meta-review**: This paper proposes MARIO, a recipe for improving the adaptability of graph contrastive learning methods. Reviewers generally liked the paper, and the discussion was productive, with several reviewers revising their ratings.
**Strengths**:
+ Relevance of study (S5XT)
+ Interesting theoretical connection (oqvZ)
+ Extensive experimentation (oqvZ,Bco1)
**Weaknesses**: *mostly addressed during discussion*
- Writing needs improvement (S5XT, Bco1) |
B2xziJstXB | Global News Synchrony During the Start of the COVID-19 Pandemic | [
"Xi Chen",
"Scott A. Hale",
"David Jurgens",
"Mattia Samory",
"Ethan Zuckerman",
"Przemyslaw A. Grabowicz"
] | News coverage profoundly affects how countries and individuals behave in international relations. Yet, we have little empirical evidence of how news coverage varies across countries, languages, locations, political blocs, and time, because of challenges related to measuring and comparing news coverage at a global scale.
To address these challenges, we develop an efficient computational pipeline that comprises three components: 1) a transformer model to estimate multilingual news similarity; 2) a global event identification system that clusters news based on their similarity network; and 3) a method estimating and explaining the synchrony of news across countries and diversity of news within a country, measured based on the news coverage of global events. Each component achieves state-of-the art performance, scaling seamlessly to massive datasets of millions of news articles.
We apply the pipeline to study news articles published between January 1 and June 30, 2020, across 124 countries and 10 languages, and identify the factors explaining biases in national and international news coverage. Our analysis reveals that:
(1) news media tend to cover a more diverse set of events in countries that are internally varied: those with federalist governments, larger populations, more official languages, and higher inequality;
(2) news coverage is more synchronized between countries that not only actively participate in commercial and political relations---such as, pairs of countries with high bilateral trade volume, and countries that belong to the NATO military alliance or BRICS group of major emerging economies---but also countries that share certain traits---an official language, high GDP, and high democracy indices. | [
"international news network",
"news event synchrony",
"computational social science"
] | https://openreview.net/pdf?id=B2xziJstXB | xqaCecQxBX | official_review | 1,700,771,648,238 | B2xziJstXB | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1954/Reviewer_tpnj"
] | review: The manuscript studies news synchrony during 2020 by analyzing news articles across the globe. The authors build a machine learning model to measure multilingual news similarity and create a method to identify events. Using the identified events, the authors quantify the diversity of news coverage within countries and the similarities between countries. Through regression analyses, the authors identify key factors that correlate with news diversity and synchrony. I find it an interesting paper; however, some aspects of the study remain unclear to me.
According to the title, the research is about the synchrony of news coverage among different countries, and for most of the paper the focus is indeed on news synchrony. However, the authors also include analyses of news diversity in the paper. It is not clear how these two concepts are closely related. In fact, I think removing the diversity component would probably help keep the manuscript more streamlined and focused. But if the authors really think that part is critical, then I would suggest modifying the title, abstract, introduction, and conclusion accordingly to highlight the necessity of including it.
Another concern I have while reading the paper is that, although I think the analysis is interesting, it's not obvious to me why the findings matter other than deepening our understanding of global news coverage. The factors correlated with news synchrony between countries are not very surprising, and I have a hard time coming up with the study's implications. The authors suggest that the framework can assist studies on agenda settings, but again, it's not very obvious to me how. Please elaborate.
Some of the technical details are not clear to me either. For instance, I cannot find a detailed description of the transformer model the authors adopted (or built?) to quantify multilingual news similarity. The authors suggest that their proposed method is more scalable than traditional cross-encoders, which have quadratic (N^2) complexity. But if I understand it correctly, after converting the news articles into vector embeddings using the bi-encoders, one still needs to compare each pair to calculate their similarity. So the complexity is still N^2, and this does not solve the scalability issue. Please elaborate. Finally, the choices of thresholds used to reduce the number of news articles seem arbitrary. The authors include Figure 6 in the appendix, but I do not think it automatically justifies the choices. Please provide more details on how Figure 6 is generated and what the result means.
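To make the complexity point concrete, here is a minimal sketch of the pairwise step referred to above; the embeddings, dimension, and similarity threshold are random or illustrative stand-ins, not values from the paper:

```python
import numpy as np

# N bi-encoder embeddings of dimension d (random stand-ins for article embeddings).
N, d = 2_000, 384
emb = np.random.default_rng(0).normal(size=(N, d))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Exhaustive pairwise cosine similarity still touches ~N^2/2 pairs,
# even though each embedding was produced only once by the bi-encoder.
sim = emb @ emb.T                               # (N, N) similarity matrix
pairs = np.argwhere(np.triu(sim > 0.85, k=1))   # candidate same-event pairs above a threshold
print(pairs.shape)
```

The per-pair cost of a dot product is far cheaper than a cross-encoder forward pass per pair, but the number of pairs compared is unchanged, which is the crux of the question.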
I also have a minor question. What does the sentence "A naive measure of news article synchrony or diversity could be the average news similarity between two countries of within a country, respectively." mean?
questions: Please see my comments above.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
B2xziJstXB | Global News Synchrony During the Start of the COVID-19 Pandemic | [
"Xi Chen",
"Scott A. Hale",
"David Jurgens",
"Mattia Samory",
"Ethan Zuckerman",
"Przemyslaw A. Grabowicz"
] | News coverage profoundly affects how countries and individuals behave in international relations. Yet, we have little empirical evidence of how news coverage varies across countries, languages, locations, political blocs, and time, because of challenges related to measuring and comparing news coverage at a global scale.
To address these challenges, we develop an efficient computational pipeline that comprises three components: 1) a transformer model to estimate multilingual news similarity; 2) a global event identification system that clusters news based on their similarity network; and 3) a method estimating and explaining the synchrony of news across countries and diversity of news within a country, measured based on the news coverage of global events. Each component achieves state-of-the art performance, scaling seamlessly to massive datasets of millions of news articles.
We apply the pipeline to study news articles published between January 1 and June 30, 2020, across 124 countries and 10 languages, and identify the factors explaining biases in national and international news coverage. Our analysis reveals that:
(1) news media tend to cover a more diverse set of events in countries that are internally varied: those with federalist governments, larger populations, more official languages, and higher inequality;
(2) news coverage is more synchronized between countries that not only actively participate in commercial and political relations---such as, pairs of countries with high bilateral trade volume, and countries that belong to the NATO military alliance or BRICS group of major emerging economies---but also countries that share certain traits---an official language, high GDP, and high democracy indices. | [
"international news network",
"news event synchrony",
"computational social science"
] | https://openreview.net/pdf?id=B2xziJstXB | o4TfiLdYNy | official_review | 1,700,724,965,962 | B2xziJstXB | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1954/Reviewer_LZ1V"
] | review: The paper addresses the gap in comprehensive studies of global news coverage by introducing a novel NLP methodology and overcoming some technical challenges. For the study, the authors use a dataset of 60M news articles across 10 languages from 124 countries that spans January to June 2020. Then, the study employs a transformer model for multilingual news similarity and identifies global news events to measure synchrony and diversity in global news coverage. The results provide insights into systematic differences in media agenda-setting and international relations, with implications for future media studies.
Strengths
- S1. While comprehending the global news ecosystem is imperative, its systematic analysis faces challenges due to language limitations and the vast volume of data. The paper offers insights from the analysis of 60M news articles in 10 different languages, which is impressive.
- S2. The proposed methods demonstrate solidity and undergo rigorous evaluation. They not only facilitate broader, long-term analyses across millions of articles in diverse languages but also provide valuable data for academics and media monitoring agencies.
- S3. Their findings are noteworthy, such as the correlation between culturally diverse countries and more diverse news event coverage, and the stronger news synchrony among countries sharing public interests, ideology, or similar social issues rather than mere geographical proximity.
- S4. The paper exhibits exceptional clarity, being extremely well-written and easy to follow.
I cannot think of any reason to reject this paper; however, I find the below points require further clarification.
- Media Cloud data. A short paragraph about the representativeness of the Media Cloud data would be appreciated.
- Were all other languages translated into English? If so, which tool or method was employed for translation?
- How was the number of clusters (i.e., the number of news events) determined in the OSLOM algorithm?
- Who were the three annotators?
questions: - In Section 5.4., the authors mention "By anchoring the news diversity and synchrony measures in global news events, we progress from an analysis rooted in 13.6 million news article pairs to an analysis encompassing 1.4 billion article pairs that are within clusters." It is unclear how those analyses can encompass 1.4 billion article pairs.
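To spell out what I find unclear: the count of "pairs within clusters" grows quadratically with cluster sizes, so a small set of directly scored pairs can indeed expand into a much larger implicit set. A toy sketch (the cluster sizes below are made up for illustration, not taken from the paper):

```python
# Within a cluster of n articles, every one of the n*(n-1)/2 pairs is implicitly
# treated as covering the same event, even if only a fraction was directly scored.

def within_cluster_pairs(cluster_sizes):
    return sum(n * (n - 1) // 2 for n in cluster_sizes)

print(within_cluster_pairs([10, 10, 10]))      # 135 pairs from only 30 articles
print(within_cluster_pairs([1_700] * 1_000))   # ~1.4 billion pairs (illustrative sizes)
```

If this is the intended reading, stating the cluster-size distribution would make the 1.4 billion figure easy to verify.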
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
B2xziJstXB | Global News Synchrony During the Start of the COVID-19 Pandemic | [
"Xi Chen",
"Scott A. Hale",
"David Jurgens",
"Mattia Samory",
"Ethan Zuckerman",
"Przemyslaw A. Grabowicz"
] | News coverage profoundly affects how countries and individuals behave in international relations. Yet, we have little empirical evidence of how news coverage varies across countries, languages, locations, political blocs, and time, because of challenges related to measuring and comparing news coverage at a global scale.
To address these challenges, we develop an efficient computational pipeline that comprises three components: 1) a transformer model to estimate multilingual news similarity; 2) a global event identification system that clusters news based on their similarity network; and 3) a method estimating and explaining the synchrony of news across countries and diversity of news within a country, measured based on the news coverage of global events. Each component achieves state-of-the art performance, scaling seamlessly to massive datasets of millions of news articles.
We apply the pipeline to study news articles published between January 1 and June 30, 2020, across 124 countries and 10 languages, and identify the factors explaining biases in national and international news coverage. Our analysis reveals that:
(1) news media tend to cover a more diverse set of events in countries that are internally varied: those with federalist governments, larger populations, more official languages, and higher inequality;
(2) news coverage is more synchronized between countries that not only actively participate in commercial and political relations---such as, pairs of countries with high bilateral trade volume, and countries that belong to the NATO military alliance or BRICS group of major emerging economies---but also countries that share certain traits---an official language, high GDP, and high democracy indices. | [
"international news network",
"news event synchrony",
"computational social science"
] | https://openreview.net/pdf?id=B2xziJstXB | jA96lk6X0R | decision | 1,705,909,236,364 | B2xziJstXB | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Like all reviewers, I concur that the paper addresses an important topic - the challenge of examining news coverage at a global scale. The work contributes a novel computational pipeline spanning a dataset of 60 million news articles across 10 languages and 124 countries. I commend the authors for this humongous effort. I am in agreement with reviewer LZ1V's comment about the rigorous evaluation and the systematic approach to collecting and analyzing data. k1gB also highlighted the strengths of the experimental results and the large data collection. Reviewers did raise a few concerns:
- around the implications of this work, such as why this work matters (tpnj) or what's inspiring about the findings (nbHL),
- raised questions around generalizability (k1gB)
- asked for additional methodological details (LZ1V, nbHL)
- asked for alternative methodological approaches and questioned about bias in data (k1gB)
I appreciate the authors' thoughtful and sincere attempts to respond to each of these concerns, in some cases running additional analyses (e.g., training a gradient boosting regressor) and reframing sections of the paper.
I am happy to recommend acceptance for this work under the assumption that the authors will take the reviewers' suggestions and their own responses into consideration while revising the final camera-ready version of this paper.
B2xziJstXB | Global News Synchrony During the Start of the COVID-19 Pandemic | [
"Xi Chen",
"Scott A. Hale",
"David Jurgens",
"Mattia Samory",
"Ethan Zuckerman",
"Przemyslaw A. Grabowicz"
] | News coverage profoundly affects how countries and individuals behave in international relations. Yet, we have little empirical evidence of how news coverage varies across countries, languages, locations, political blocs, and time, because of challenges related to measuring and comparing news coverage at a global scale.
To address these challenges, we develop an efficient computational pipeline that comprises three components: 1) a transformer model to estimate multilingual news similarity; 2) a global event identification system that clusters news based on their similarity network; and 3) a method estimating and explaining the synchrony of news across countries and diversity of news within a country, measured based on the news coverage of global events. Each component achieves state-of-the art performance, scaling seamlessly to massive datasets of millions of news articles.
We apply the pipeline to study news articles published between January 1 and June 30, 2020, across 124 countries and 10 languages, and identify the factors explaining biases in national and international news coverage. Our analysis reveals that:
(1) news media tend to cover a more diverse set of events in countries that are internally varied: those with federalist governments, larger populations, more official languages, and higher inequality;
(2) news coverage is more synchronized between countries that not only actively participate in commercial and political relations---such as, pairs of countries with high bilateral trade volume, and countries that belong to the NATO military alliance or BRICS group of major emerging economies---but also countries that share certain traits---an official language, high GDP, and high democracy indices. | [
"international news network",
"news event synchrony",
"computational social science"
] | https://openreview.net/pdf?id=B2xziJstXB | X8sdAOLZl7 | official_review | 1,700,423,423,239 | B2xziJstXB | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1954/Reviewer_nbHL"
review: This paper presents a study on “Global News Synchrony During the Start of the COVID-19 Pandemic”. As such, the contribution addresses an interesting and relevant topic of the Social Networks, Social Media, and Society track of The Web Conference 2024. In general, the paper is clearly structured and well-written (except for some very minor errors). In addition, the authors mention that they will release the data and code in case of acceptance, which is a definite plus! However, the paper has severe “technical” and “conceptual” shortcomings. In the first place, the paper lacks a mathematical formulation, so the overall understanding of the presented approach remains very vague. In particular, the technical details are almost entirely unclear (e.g., “By exploiting this property, we develop a heuristic to identify a smaller set of 13.6 million news article pairs that are more likely to be similar than random pairs and compute news similarity only for this reduced set of pairs. The heuristic preserves the computational viability of this study and it is based on the observation that related articles tend to mention the same named entities, e.g., people, organizations, or locations.”). Apart from that, the actual study is somewhat “uninspiring” since the results are quite (trivially) foreseeable (“similar” languages and cultures match, so what?). The overall paper is mostly pure engineering and a genuine scientific contribution is missing. As indicated above, the technical details are very vague and, e.g., Appendix B and Appendix C do not really help in understanding what is going on *in detail*. As a result, the paper lacks a take-home message (at least from a scientific point of view).
In the light of the above, I recommend to reject the paper. At the same time, it might be worth considering the submission of this paper to one of the many workshops, in particular, those addressing spatio-temporal aspects.
I acknowledge that there were no responses.
questions: - What are your heuristics *in detail*?
ethics_review_flag: No
ethics_review_description: n.a.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 2
technical_quality: 2
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
B2xziJstXB | Global News Synchrony During the Start of the COVID-19 Pandemic | [
"Xi Chen",
"Scott A. Hale",
"David Jurgens",
"Mattia Samory",
"Ethan Zuckerman",
"Przemyslaw A. Grabowicz"
] | News coverage profoundly affects how countries and individuals behave in international relations. Yet, we have little empirical evidence of how news coverage varies across countries, languages, locations, political blocs, and time, because of challenges related to measuring and comparing news coverage at a global scale.
To address these challenges, we develop an efficient computational pipeline that comprises three components: 1) a transformer model to estimate multilingual news similarity; 2) a global event identification system that clusters news based on their similarity network; and 3) a method estimating and explaining the synchrony of news across countries and diversity of news within a country, measured based on the news coverage of global events. Each component achieves state-of-the art performance, scaling seamlessly to massive datasets of millions of news articles.
We apply the pipeline to study news articles published between January 1 and June 30, 2020, across 124 countries and 10 languages, and identify the factors explaining biases in national and international news coverage. Our analysis reveals that:
(1) news media tend to cover a more diverse set of events in countries that are internally varied: those with federalist governments, larger populations, more official languages, and higher inequality;
(2) news coverage is more synchronized between countries that not only actively participate in commercial and political relations---such as, pairs of countries with high bilateral trade volume, and countries that belong to the NATO military alliance or BRICS group of major emerging economies---but also countries that share certain traits---an official language, high GDP, and high democracy indices. | [
"international news network",
"news event synchrony",
"computational social science"
] | https://openreview.net/pdf?id=B2xziJstXB | 2BGUw0bam5 | official_review | 1,700,960,402,770 | B2xziJstXB | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1954/Reviewer_k1gB"
review: This paper studies how news coverage synchronizes and varies across countries. An efficient pipeline is proposed to estimate multilingual news similarity, perform cluster analysis, and explain synchrony mechanisms. With the pipeline, the authors identify global events in 2020 and reveal the key factors of news synchrony. Experimental results show both the effectiveness and efficiency of the pipeline.
Strengths:
S1. The experimental results are quite convincing. Plenty of data and classic statistical methods make the results quite sound and reliable.
S2. The article is well written. It’s a pleasure to read the article.
S3. The idea of clustering news into global events is quite novel. With these clusters, nice experimental results emerge naturally.
Weaknesses:
W1. The time complexity is an issue. Pipelines do not easily generalize to other efforts.
W2. Diversity within countries may have a little problem. The articles unrelated to others are eliminated, which might influence the measured diversity within countries.
W3. More experiments are needed for the regression. Other explainable algorithms, such as tree-based methods, could be used.
questions: Q1: Is it possible to reduce the time complexity? In line 432, ‘13.6 million news article pairs across 2.2 million unique news article’. On average, no more than seven nodes are connected to one node. More nodes might be eliminated.
Q2: Does the data filtering process cause bias? The articles not related to others are eliminated before experiments. However, irrelevance means diversity to some extent. Will it introduce bias to the results?
Q3: Why don't you use tree-based methods for the regression? R^2 is about 0.5 in the experiments. Typically, boosting methods can get much better results. What's more, feature importance is a useful criterion to explain the results.
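For concreteness, something along the lines of the following sketch is what I have in mind (the data here are synthetic placeholders, not the paper's country-level covariates):

```python
# A minimal sketch of the suggested tree-based alternative using scikit-learn,
# reporting both out-of-sample R^2 and feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                  # placeholder covariates
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=500)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2:", r2_score(y_te, model.predict(X_te)))
print("feature importances:", model.feature_importances_)
```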
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
AnGaF6rEo4 | FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval | [
"Chen Xu",
"Jun Xu",
"Yiming Ding",
"Xiao Zhang",
"Qi Qi"
] | Driven by considerations of fairness, business, and balanced development needs, the recommender system (RS) often necessitates ensuring that certain groups have a minimum level of exposure within a period of time. For example, RS platforms often have the demand to ensure adequate exposure for new providers or specific categories of items according to their needs.
Modern industry RS usually adopts a two-stage pipeline: stage-1 (retrieval stage) retrieves hundreds of candidates from millions of items distributed across various servers, and stage-2 (ranking stage) focuses on presenting a small-size but accurate selection from items chosen in stage-1. Existing efforts for ensuring amortized group exposures focus on stage-2, however, stage-1 is also critical for the task.
Without a high-quality set of candidates, the stage-2 ranker cannot ensure the required exposure for the selected groups.
Previous fairness-aware works designed for stage-2 typically require accessing and traversing of all items. In stage-1, however, millions of items are distributively stored in servers, making it infeasible to traverse all of them. How to ensure the global amortized group exposures in the distributed retrieval process is a challenging question. To address this issue, we introduce a model named FairSync, which transforms the problem into a constrained distributed optimization problem. Specifically, FairSync resolves the issue by moving it to the dual space, where a central node aggregates historical fairness data into a vector and distributes it to all servers. In theory, with local and distributed searching, we can ensure the amortized exposures within the dual space. To trade-off the efficiency and retrieval accuracy, the gradient descent technique is used to periodically update the parameter of the dual vector. The experiment results on two public recommender retrieval datasets showcased that FairSync outperformed all the baselines, achieving the desired minimum level of exposures while maintaining a high level of retrieval accuracy. | [
"distributed retrieval",
"recommender system",
"amortized group exposures"
] | https://openreview.net/pdf?id=AnGaF6rEo4 | qwc9EGYxtM | official_review | 1,700,792,446,755 | AnGaF6rEo4 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission561/Reviewer_bGxK"
] | review: ### Summary
The paper tackles the issue of fair recommendation retrieval. While most works focus on the second stage of retrieval, in which a short list of candidates has already been selected, this paper focuses on generating a fair shortlist. For this reason, scalability is of particular concern as the set of candidate items is much larger in the first stage. The authors present a scalable algorithm that augments user embeddings with a "dual vector", which essentially provides a handicap for certain item categories. The dual vector can be calculated by optimizing an unconstrained objective. The authors present results showing that their method increases group exposure at the item level while preserving recommendation accuracy.
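To check my reading of the mechanism, here is a rough Python sketch of the idea (the variable names and the subgradient-style update are my own illustration, not the paper's exact formulation):

```python
import numpy as np

# Items from under-exposed groups get a score "handicap" via a per-group dual
# vector, which is nudged upward whenever a group falls short of its target.

def retrieve_top_k(user_emb, item_embs, item_groups, dual, k):
    scores = item_embs @ user_emb + dual[item_groups]  # relevance + group handicap
    return np.argsort(-scores)[:k]

def update_dual(dual, exposure, min_exposure, lr=0.1):
    shortfall = np.maximum(min_exposure - exposure, 0.0)  # unmet exposure per group
    return dual + lr * shortfall                          # subgradient-style step
```

If this is roughly right, the interpretability point under Pros follows directly: each dual coordinate is the current boost given to one group.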
### Pros
* The problem definition is well-defined and the goals are made clear.
* The authors discuss scalability in a manner that considers the distributed nature of the servers.
* The algorithm presented is easily integrable on top of existing embedding algorithms and the dual vector is interpretable.
### Cons
* My main concern is that several steps in the proof for Theorem 1 appear to lack justification or make incorrect assumptions. I will elaborate in the questions section. As Theorem 1 introduces the dual objective and is the crux of the paper, my assessment will depend heavily on clarifications of the proof.
* The problem setup in section 4.2, specifically Equation 2, considers the offline setting in which the item decisions for all time steps are optimized in one shot. In contrast, FairSync considers a setting in which decisions are made in an online setting one user at a time. It would be helpful to explain the transition from the offline setting to the online setting.
* I appreciate the goal of Figure 4 however the visual illustration of FairSync is not very clear as the top and middle rows are not very distinguishable.
### Miscellaneous
* On page 4 right below Equation 5 it should be the "i-th row" instead of the "i-th" column.
* On page 3 after Equation 1, I would consider d to be a distance measure instead of a distance metric as not all axioms of metrics are satisfied by the dot product.
questions: * Regarding Theorem 1, I am unsure of the use of the knapsack problem in this setting. Specifically, the application of Equation 8 appears to disregard the exposure values $e_g$, which are in fact defined in terms of $x_{u_t, i}$, as well as the exposure constraints. Given that the exposure values are dependent on $x$, it is not necessarily the case that the top K items will be recommended. Am I missing anything with the knapsack substitution?
* Expanding on the previous "con", it would be helpful if the authors could explain the connection between the offline problem introduced in Equation 2 and the online problem that FairSync solves. Is the dual vector still valid for an individual user when it is originally optimized over a set of users?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
AnGaF6rEo4 | FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval | [
"Chen Xu",
"Jun Xu",
"Yiming Ding",
"Xiao Zhang",
"Qi Qi"
] | Driven by considerations of fairness, business, and balanced development needs, the recommender system (RS) often necessitates ensuring that certain groups have a minimum level of exposure within a period of time. For example, RS platforms often have the demand to ensure adequate exposure for new providers or specific categories of items according to their needs.
Modern industry RS usually adopts a two-stage pipeline: stage-1 (retrieval stage) retrieves hundreds of candidates from millions of items distributed across various servers, and stage-2 (ranking stage) focuses on presenting a small-size but accurate selection from items chosen in stage-1. Existing efforts for ensuring amortized group exposures focus on stage-2, however, stage-1 is also critical for the task.
Without a high-quality set of candidates, the stage-2 ranker cannot ensure the required exposure for the selected groups.
Previous fairness-aware works designed for stage-2 typically require accessing and traversing of all items. In stage-1, however, millions of items are distributively stored in servers, making it infeasible to traverse all of them. How to ensure the global amortized group exposures in the distributed retrieval process is a challenging question. To address this issue, we introduce a model named FairSync, which transforms the problem into a constrained distributed optimization problem. Specifically, FairSync resolves the issue by moving it to the dual space, where a central node aggregates historical fairness data into a vector and distributes it to all servers. In theory, with local and distributed searching, we can ensure the amortized exposures within the dual space. To trade-off the efficiency and retrieval accuracy, the gradient descent technique is used to periodically update the parameter of the dual vector. The experiment results on two public recommender retrieval datasets showcased that FairSync outperformed all the baselines, achieving the desired minimum level of exposures while maintaining a high level of retrieval accuracy. | [
"distributed retrieval",
"recommender system",
"amortized group exposures"
] | https://openreview.net/pdf?id=AnGaF6rEo4 | d8YNrvYtIo | official_review | 1,701,406,447,664 | AnGaF6rEo4 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission561/Reviewer_sFmC"
] | review: The paper proposes FairSync which is a retrieval method for ensuring that group exposure constraints can be satisfied in an amortized fashion when doing distributed retrieval in recommender systems. The problem is important because currently there isn't a known practice for implementing constrained nearest neighbor retrieval (say for fairness constraints). The method is very intuitive and straightforward though -- the authors translate the constrained nearest neighbor problem to an unconstrained dual form that works well with distributed retrieval. The dual form is also pretty intuitive -- each user embedding is appended with the current exposure info for different groups, and each item embedding (stored on the clusters) is appended with the group information of the items.
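Concretely, my understanding of the concatenation trick is something like the following illustrative sketch (not the authors' code), which explains why unmodified maximum-inner-product search can still run on each shard:

```python
import numpy as np

# Appending the dual/exposure vector to the user embedding and a group one-hot
# to each item embedding turns the fairness-adjusted score into a plain inner
# product: <aug_user, aug_item> == <user, item> + dual[group_id].

def augment_user(user_emb: np.ndarray, dual: np.ndarray) -> np.ndarray:
    return np.concatenate([user_emb, dual])

def augment_item(item_emb: np.ndarray, group_id: int, num_groups: int) -> np.ndarray:
    one_hot = np.zeros(num_groups)
    one_hot[group_id] = 1.0
    return np.concatenate([item_emb, one_hot])
```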
The authors show, through extensive experiments and comparisons with baseline algorithms, that Fairsync satisfies exposure constraints while maintaining high recall, NDCG, hit rate metrics.
Some potential weaknesses and limitations of the work are:
- The details for the distributed retrieval are fairly limited. The paper does mention that it is the distributed dense retrieval architecture, but that is also pretty generic and does not say much about other systems that might handle indexing and serving differently.
- What happens when the embedding space has a lot of very homogeneous neighborhoods, i.e., the nearest neighborhoods of user embeddings are mostly composed of the same group, so that searching for items from underrepresented groups can be very expensive in such an embedding? If the embedding homogeneity indeed affects retrieval performance, it might be worth testing for such extremes on a toy dataset/embedding.
- The paper only considers a single kind of fairness constraint, where the exposure is constrained globally. How does the method work when the retrieved set may need to be controlled at a per-query level, i.e., when different users have different demands for diversity in the retrieved set?
- The baselines are not described with enough detail. Please provide better descriptions of the baseline methods in the main text or the appendix. For example, without reading the cited paper, it is not clear how the IPW method does not ensure that the constraint is satisfied.
- The paper is technically well written but has a bunch of typos and grammatical errors affecting readability. e.g.
-- Page 7, Section 5.1.3 Baselines and Base Models, first paragraph: "We implemente FairSync" missing "d" at the end of implemente.
-- Page 8, Figure 4 caption: "Figure 4 utilized t-SNE" should be "utilizes" or "uses" instead of "utilized".
-- Page 3, Section 4.1 Extra comma after ei should be removed.
-- Line 355, "can be written as" instead of "can be write as"
- The problem tackled in this paper is motivated in a recent industry paper, and a solution similar to one of the baselines (kNN method?) is also presented in: Representation Online Matters: Practical End-to-End Diversification in Search and Recommender Systems (Silva et al. 2023).
Overall, the paper studies an important and somewhat ignored problem in the space of fairness for recommender systems. The simplicity of the solution lends itself to practical impact in real world recommender systems, and hence the paper could be a good contribution to the community. However, the authors are encouraged to consider the feedback above.
questions: The questions to the authors are mentioned in the limitations and weaknesses part of the review above. Here are short summaries of the question:
- (How) does the method apply to other distributed retrieval architectures?
- What happens in the extreme case when the constraints might not be satisfiable? or the constraints cost a lot in terms of latency or recall?
- How do query level constraints work with this method?
- Can you clarify the baselines, and how some of them may or may not satisfy the constraint?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
AnGaF6rEo4 | FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval | [
"Chen Xu",
"Jun Xu",
"Yiming Ding",
"Xiao Zhang",
"Qi Qi"
] | Driven by considerations of fairness, business, and balanced development needs, the recommender system (RS) often necessitates ensuring that certain groups have a minimum level of exposure within a period of time. For example, RS platforms often have the demand to ensure adequate exposure for new providers or specific categories of items according to their needs.
Modern industry RS usually adopts a two-stage pipeline: stage-1 (retrieval stage) retrieves hundreds of candidates from millions of items distributed across various servers, and stage-2 (ranking stage) focuses on presenting a small-size but accurate selection from items chosen in stage-1. Existing efforts for ensuring amortized group exposures focus on stage-2, however, stage-1 is also critical for the task.
Without a high-quality set of candidates, the stage-2 ranker cannot ensure the required exposure for the selected groups.
Previous fairness-aware works designed for stage-2 typically require accessing and traversing of all items. In stage-1, however, millions of items are distributively stored in servers, making it infeasible to traverse all of them. How to ensure the global amortized group exposures in the distributed retrieval process is a challenging question. To address this issue, we introduce a model named FairSync, which transforms the problem into a constrained distributed optimization problem. Specifically, FairSync resolves the issue by moving it to the dual space, where a central node aggregates historical fairness data into a vector and distributes it to all servers. In theory, with local and distributed searching, we can ensure the amortized exposures within the dual space. To trade-off the efficiency and retrieval accuracy, the gradient descent technique is used to periodically update the parameter of the dual vector. The experiment results on two public recommender retrieval datasets showcased that FairSync outperformed all the baselines, achieving the desired minimum level of exposures while maintaining a high level of retrieval accuracy. | [
"distributed retrieval",
"recommender system",
"amortized group exposures"
] | https://openreview.net/pdf?id=AnGaF6rEo4 | Qqxd6S3Vy4 | official_review | 1,700,765,002,562 | AnGaF6rEo4 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission561/Reviewer_9dFZ"
] | review: ## Summary
The paper proposes a new method to provide better group exposure in the recommendation system through updates in the retrieval stage. The text is easy to follow and the proposed method seems to be novel and well-described. Overall, I believe the paper meets the criteria of the conference, with some comments below.
## Strengths
S1 - Points out the current weakness of the existing group exposure methods as they are more focused on the ranking stage.
S2 - Provides a thorough description of the novel method which could ensure the exposure in the retrieval stage.
S3 - The figures in the paper provide a good overview of the data and are mostly easy to follow.
## Weaknesses
W1 - Some figures can be relocated to another place for better readability.
W2 - The experiment process could be further described.
questions: ## Suggestions for improvements
- In Section 2, the paper mainly discusses related work that focuses on stage 2 to improve group exposure. As the paper focuses on stage-1 updates, it would be better to include more discussion of existing stage-1 methods in the related work section.
- Within Section 4.3, the paper refers to Figure 2 multiple times. However, Figure 2 is on the previous page and is not referred to on that page. For that reason, it would be better to relocate Figure 2 to the page of Section 4.3, which could largely improve the readability of that part.
- Section 5.1 points out that the experiment uses the Amazon-Book and Taobao datasets for testing; it also mentions that the pre-defined categories in the datasets play an important role. It is a bit confusing for readers that there seems to be no discussion of whether these two different datasets are treated equivalently in the experiment.
- The experimental results are discussed in Section 5.2, which also refers to the results of the multiple baselines. Currently, there is not much description comparing the results of the baselines and the new method.
# Questions
- What caused the differences in the performance of the models on the Amazon-Book and Taobao datasets?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
AnGaF6rEo4 | FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval | [
"Chen Xu",
"Jun Xu",
"Yiming Ding",
"Xiao Zhang",
"Qi Qi"
] | Driven by considerations of fairness, business, and balanced development needs, the recommender system (RS) often necessitates ensuring that certain groups have a minimum level of exposure within a period of time. For example, RS platforms often have the demand to ensure adequate exposure for new providers or specific categories of items according to their needs.
Modern industry RS usually adopts a two-stage pipeline: stage-1 (retrieval stage) retrieves hundreds of candidates from millions of items distributed across various servers, and stage-2 (ranking stage) focuses on presenting a small-size but accurate selection from items chosen in stage-1. Existing efforts for ensuring amortized group exposures focus on stage-2, however, stage-1 is also critical for the task.
Without a high-quality set of candidates, the stage-2 ranker cannot ensure the required exposure for the selected groups.
Previous fairness-aware works designed for stage-2 typically require accessing and traversing of all items. In stage-1, however, millions of items are distributively stored in servers, making it infeasible to traverse all of them. How to ensure the global amortized group exposures in the distributed retrieval process is a challenging question. To address this issue, we introduce a model named FairSync, which transforms the problem into a constrained distributed optimization problem. Specifically, FairSync resolves the issue by moving it to the dual space, where a central node aggregates historical fairness data into a vector and distributes it to all servers. In theory, with local and distributed searching, we can ensure the amortized exposures within the dual space. To trade-off the efficiency and retrieval accuracy, the gradient descent technique is used to periodically update the parameter of the dual vector. The experiment results on two public recommender retrieval datasets showcased that FairSync outperformed all the baselines, achieving the desired minimum level of exposures while maintaining a high level of retrieval accuracy. | [
"distributed retrieval",
"recommender system",
"amortized group exposures"
] | https://openreview.net/pdf?id=AnGaF6rEo4 | DM6O8aOE7v | decision | 1,705,909,219,640 | AnGaF6rEo4 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Our decision is to accept. Please see the AC's review below and improve the work considering that and the reviewers' feedback for the camera-ready submission.
"Reviewers generally praised this paper as studying an important problem relevant to TheWebConf, and producing a method that could be applied in real recommender systems. While some concerns were raised about the correctness of the theory in the paper, these seem to have been resolved in the rebuttal period. I think this paper provides incremental value to the conference and recommend it is accepted." |
AnGaF6rEo4 | FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval | [
"Chen Xu",
"Jun Xu",
"Yiming Ding",
"Xiao Zhang",
"Qi Qi"
] | Driven by considerations of fairness, business, and balanced development needs, the recommender system (RS) often necessitates ensuring that certain groups have a minimum level of exposure within a period of time. For example, RS platforms often have the demand to ensure adequate exposure for new providers or specific categories of items according to their needs.
Modern industry RS usually adopts a two-stage pipeline: stage-1 (retrieval stage) retrieves hundreds of candidates from millions of items distributed across various servers, and stage-2 (ranking stage) focuses on presenting a small-size but accurate selection from items chosen in stage-1. Existing efforts for ensuring amortized group exposures focus on stage-2, however, stage-1 is also critical for the task.
Without a high-quality set of candidates, the stage-2 ranker cannot ensure the required exposure for the selected groups.
Previous fairness-aware works designed for stage-2 typically require accessing and traversing of all items. In stage-1, however, millions of items are distributively stored in servers, making it infeasible to traverse all of them. How to ensure the global amortized group exposures in the distributed retrieval process is a challenging question. To address this issue, we introduce a model named FairSync, which transforms the problem into a constrained distributed optimization problem. Specifically, FairSync resolves the issue by moving it to the dual space, where a central node aggregates historical fairness data into a vector and distributes it to all servers. In theory, with local and distributed searching, we can ensure the amortized exposures within the dual space. To trade-off the efficiency and retrieval accuracy, the gradient descent technique is used to periodically update the parameter of the dual vector. The experiment results on two public recommender retrieval datasets showcased that FairSync outperformed all the baselines, achieving the desired minimum level of exposures while maintaining a high level of retrieval accuracy. | [
"distributed retrieval",
"recommender system",
"amortized group exposures"
] | https://openreview.net/pdf?id=AnGaF6rEo4 | D8jonmLhmQ | official_review | 1,700,787,269,917 | AnGaF6rEo4 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission561/Reviewer_Mn9G"
review: The article focuses on stage 1 of the two-stage process of displaying recommendations to a user: the retrieval of candidate items from a distributed system of servers, so that in stage-2 these items can be ranked for presentation. The authors argue that most fair ranking work has focused on stage 2 even though the pool of candidates for sorting is heavily influenced by the retrieval in stage 1. To address this gap, the authors introduce a system for ensuring equitable group exposure at the retrieval level. The proposed system is tested on standard benchmark datasets and it outperforms other fairness approaches at a small cost in computation time.
The article is difficult to parse at times because of small editing problems (or details that are not clarified enough for me to understand):
- in Remark 1 (Distributed solution in dual space). After the transformation of the original problem (Equation (1)) into its dualform (Equation (1)) -> the second equation should be Equation 2?
- in the description of Figure 4 authors write "Figure 4(a) illustrates that at the initial retrieval process (t = 30), the exposure levels for various categories (as depicted in the third column’s bar plots) are nearly equalized." Figure 4a is for t=30 but is not the third column and the exposure levels are not equalized? Cat 3 has exposure 1.0 (but the x-axis says 2.0?) and others have exposure 0. Then the authors write that in 4b,c Cat 3 and Cat 4 experience dominance in exposure levels but that does not track with the histograms in b) (it does track with c)). Then "In the original space (ComiRec-DR embeddings), it is evident that the user embeddings are closely aligned with the embeddings of category 3 and 4, resulting in the dominance of category 3 and 4." How is it evident in the figure? The item categories do not seem to be segregated on the two dimensions, so I'm not sure how one would judge that. Also Cat 4 is at no point dominant according to the histogram, so I'm not sure how to read this.
questions: The authors should clarify the description of Figure 4 or perhaps rethink how they are presenting these results. The item embeddings do not change across the three time points, yet this information is repeated six times, and it's not clear why it's included in the first place.
**EDIT**: I read the rebuttal and I'm satisfied with the proposed answers to the issues I raised.
ethics_review_flag: No
ethics_review_description: No issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
AXEtC5inq8 | NCTM: A Novel Coded Transmission Mechanism for Short Video Deliveries | [
"Xu Zhenge",
"Qing Li",
"Wanxin Shi",
"Yong Jiang",
"Zhenhui Yuan",
"Peng Zhang",
"Gabriel Muntean"
] | With the rapid popularity of short video applications, a large number of short video transmissions occupy the bandwidth, placing a heavy load on the Internet. Due to the extensive number of short videos and the predominant service for mobile users, traditional approaches (e.g., CDN delivery, edge caching) struggle to achieve the expected performance, leading to a significant number of redundant transmissions. In order to reduce the amount of traffic, we design a Novel Coded Transmission Mechanism (NCTM), which transmits XOR-coded data instead of the original video content. NCTM caches the short videos that users have already watched in user devices, and encodes, broadcasts, and decodes XOR-coded files separately at the server, edge nodes, and clients, with the assistance of cached content. This approach enables NCTM to deliver more short video data given the limited bandwidth. Our extensive trace-driven simulations show how NCTM reduces network load by 3.02\%-14.75\%, cuts peak traffic by 23.01\%, and decreases rebuffering events by 43\%-85\% in comparison to a CDN-supported scheme and a naive edge caching scheme. Additionally, NCTM also increases the user's buffered video duration by 1.21x-13.53x, ensuring improved playback smoothness. | [
"short video delivery",
"coded transmission",
"client-side cache"
] | https://openreview.net/pdf?id=AXEtC5inq8 | v8hJdEBU1J | official_review | 1,700,718,659,345 | AXEtC5inq8 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission425/Reviewer_xFeW"
review: The paper aims at reducing the network overhead of streaming short videos over the Internet. The key idea is to cache (some of) the videos that users have already watched on end devices, encode multiple videos in the same response, and decode the response given the cached videos. The goal of the paper is to maximize the number of encoded videos in a response while satisfying all user requests. The problem is transformed into a minimum clique coverage problem that the paper addresses. The paper also introduces a mechanism to modify the order of videos shown to users to leverage the previous algorithm. The paper evaluates the system using trace-driven simulations.
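To make the grouping idea concrete for later comments, here is a hypothetical sketch of how pairwise cache compatibility and a greedy clique cover could decide which requests share one coded response (my own illustration of the idea, not NCTM's actual algorithm):

```python
# Users i and j are "compatible" if each has the other's requested video cached,
# so a single XOR-coded broadcast of their two requests serves both of them.

def compatible(i, j, requests, caches):
    return requests[j] in caches[i] and requests[i] in caches[j]

def greedy_clique_cover(users, requests, caches):
    uncovered = set(users)
    groups = []
    while uncovered:
        clique = [uncovered.pop()]
        for u in list(uncovered):
            if all(compatible(u, v, requests, caches) for v in clique):
                clique.append(u)
                uncovered.remove(u)
        groups.append(clique)
    return groups

requests = {1: "A", 2: "B", 3: "C"}
caches = {1: {"B"}, 2: {"A"}, 3: set()}
print(greedy_clique_cover([1, 2, 3], requests, caches))  # e.g., [[1, 2], [3]]
```

Even under this simplified view, the synchronization concern below remains: the coded response only pays off if compatible requests arrive close together in time.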
Strengths
- The caching, encoding and decoding approach is intuitive and novel to some extent.
- The paper is generally well-written.
Weaknesses
- Although the approach looks promising and intuitive, the whole solution doesn't seem to be practical:
- For the video response to be sent, multiple users need to request a specific set of videos. This requires a synchronization "primitive" between *all* clients and the server, which is hard to achieve and is not scalable.
- The paper doesn't discuss how broadcast is actually done. Assuming cellular networks, base stations need to set up multicast groups, and user equipment need to synchronize to receive contents of the resource blocks. Both issues are not evaluated or addressed in the paper. Similar challenges of multicast appear in other networks as well.
- The solution doesn't address the fact that a single chunk is often encoded at multiple bitrates.
- It is not clear whether the proposed minimum clique coverage algorithm may result in fast churns or instabilities in the graph.
- Evaluation:
- The evaluation is based on simulations. It is not clear whether the system will perform effectively in real deployments.
- Many simulation details are missing, such as the implementation of synchronization and broadcast, as well as the request rate, network topology, CDN caching policies, etc.
- The evaluation simulates ~10K videos and around 1400 users. These numbers are low given the high popularity of short videos noted by the paper. Also, this setup doesn't stress the proposed system in terms of performance gains and costs.
- The paper doesn't explain how the network load savings are achieved and how much the broadcast/multicast contribute to these savings.
General comments:
- The term "relatively optimal solution" is not scientifically accurate.
- References [16] and [17] do not necessarily back up the claims made by the paper. For example, it seems that [17] focuses mostly on the impact of COVID-19's lockdowns on video and other traffic, and not necessarily related to short videos.
- In Figure 9, the paper mentions that the average network load decreases by 14%. CDFs cannot be used to report averages for general distributions.
questions: - Can you provide more details about the simulation? And why does it represent real deployments?
- What are the results of running the system in large-scale setups?
- How is broadcast/multicast implemented and realized?
- Can you address or elaborate on the need of synchronization discussed above?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
AXEtC5inq8 | NCTM: A Novel Coded Transmission Mechanism for Short Video Deliveries | [
"Xu Zhenge",
"Qing Li",
"Wanxin Shi",
"Yong Jiang",
"Zhenhui Yuan",
"Peng Zhang",
"Gabriel Muntean"
] | With the rapid popularity of short video applications, a large number of short video transmissions occupy the bandwidth, placing a heavy load on the Internet. Due to the extensive number of short videos and the predominant service for mobile users, traditional approaches (e.g., CDN delivery, edge caching) struggle to achieve the expected performance, leading to a significant number of redundant transmissions. In order to reduce the amount of traffic, we design a Novel Coded Transmission Mechanism (NCTM), which transmits XOR-coded data instead of the original video content. NCTM caches the short videos that users have already watched in user devices, and encodes, broadcasts, and decodes XOR-coded files separately at the server, edge nodes, and clients, with the assistance of cached content. This approach enables NCTM to deliver more short video data given the limited bandwidth. Our extensive trace-driven simulations show how NCTM reduces network load by 3.02\%-14.75\%, cuts peak traffic by 23.01\%, and decreases rebuffering events by 43\%-85\% in comparison to a CDN-supported scheme and a naive edge caching scheme. Additionally, NCTM also increases the user's buffered video duration by 1.21x-13.53x, ensuring improved playback smoothness. | [
"short video delivery",
"coded transmission",
"client-side cache"
] | https://openreview.net/pdf?id=AXEtC5inq8 | gULl0fDLBu | decision | 1,705,909,238,369 | AXEtC5inq8 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: I summarise the pros and cons of the paper as follows.
Overall Pros:
Well-written and organized paper.
Thorough evaluation with suitable metrics.
Innovative system design that creatively applies mathematical problems.
Superior performance compared to existing solutions.
Clear presentation of findings.
Overall Cons:
Lack of real-world testing and evaluation.
Lack of comprehensive literature review and discussion of research gap.
Lack of clarity and thorough explanation in certain sections.
Potential limitations and scalability issues of the proposed system.
Lack of justification for the use of XOR codes for short video data.
Most Significant Issue:
The most significant issue raised in the reviews is the lack of real-world testing and evaluation, as this may affect the overall validity and applicability of the proposed system.
AXEtC5inq8 | NCTM: A Novel Coded Transmission Mechanism for Short Video Deliveries | [
"Xu Zhenge",
"Qing Li",
"Wanxin Shi",
"Yong Jiang",
"Zhenhui Yuan",
"Peng Zhang",
"Gabriel Muntean"
] | With the rapid popularity of short video applications, a large number of short video transmissions occupy the bandwidth, placing a heavy load on the Internet. Due to the extensive number of short videos and the predominant service for mobile users, traditional approaches (e.g., CDN delivery, edge caching) struggle to achieve the expected performance, leading to a significant number of redundant transmissions. In order to reduce the amount of traffic, we design a Novel Coded Transmission Mechanism (NCTM), which transmits XOR-coded data instead of the original video content. NCTM caches the short videos that users have already watched in user devices, and encodes, broadcasts, and decodes XOR-coded files separately at the server, edge nodes, and clients, with the assistance of cached content. This approach enables NCTM to deliver more short video data given the limited bandwidth. Our extensive trace-driven simulations show how NCTM reduces network load by 3.02\%-14.75\%, cuts peak traffic by 23.01\%, and decreases rebuffering events by 43\%-85\% in comparison to a CDN-supported scheme and a naive edge caching scheme. Additionally, NCTM also increases the user's buffered video duration by 1.21x-13.53x, ensuring improved playback smoothness. | [
"short video delivery",
"coded transmission",
"client-side cache"
] | https://openreview.net/pdf?id=AXEtC5inq8 | fWjtj1Jigu | official_review | 1,700,561,933,502 | AXEtC5inq8 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission425/Reviewer_NGsN"
] | review: The paper proposes a new transmission mechanism that harnesses the advantages of XOR codes and client-side caching for short video streaming. NCTM primarily consists of three components: the management of client cache information, the matching of client video queues for optimization opportunities, and the XOR encoder.
Pros:
1. The paper is well written and does a great job analyzing existing problems and data in short video streaming.
2. It introduces XOR coding for short video data and applies a novel mechanism to reduce bandwidth usage.
3. The design of the Minimum Clique Coverage algorithm is interesting, though it does not seem able to guarantee a feasible solution within a limited number of iterations or limited time.
Cons:
1. Most of the paper focuses on analyzing and evaluating the effectiveness of algorithms for "Coded Chances Exploration", but the motivation for using XOR codes on short video data has not been well addressed and tested. My concern is that the actual performance of XOR codes largely depends on the characteristics of video contents and video codecs. The authors need to provide more information and reasons for this.
2. The frequent updating of clients and the video queue may lead to constant computing overhead for the cloud server. Additionally, the update latency from mobile clients to edge nodes and ultimately to the cloud server may cause untimely decisions for NCTM, resulting in further performance degradation of the system.
3. As mentioned in Appendix C, NCTM is not suitable for situations with frequent handovers between base stations or frequent updates of client playbacks, such as watching progress and skip-overs. This poses a significant limitation for short video streaming, considering that many users consume videos on their phones while commuting in cars, buses, or subways.
4. The last sentence in paragraph 4 of Appendix B appears to be incomplete.
questions: Please refer to the cons.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 7
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
AXEtC5inq8 | NCTM: A Novel Coded Transmission Mechanism for Short Video Deliveries | [
"Xu Zhenge",
"Qing Li",
"Wanxin Shi",
"Yong Jiang",
"Zhenhui Yuan",
"Peng Zhang",
"Gabriel Muntean"
] | With the rapid popularity of short video applications, a large number of short video transmissions occupy the bandwidth, placing a heavy load on the Internet. Due to the extensive number of short videos and the predominant service for mobile users, traditional approaches (e.g., CDN delivery, edge caching) struggle to achieve the expected performance, leading to a significant number of redundant transmissions. In order to reduce the amount of traffic, we design a Novel Coded Transmission Mechanism (NCTM), which transmits XOR-coded data instead of the original video content. NCTM caches the short videos that users have already watched in user devices, and encodes, broadcasts, and decodes XOR-coded files separately at the server, edge nodes, and clients, with the assistance of cached content. This approach enables NCTM to deliver more short video data given the limited bandwidth. Our extensive trace-driven simulations show how NCTM reduces network load by 3.02\%-14.75\%, cuts peak traffic by 23.01\%, and decreases rebuffering events by 43\%-85\% in comparison to a CDN-supported scheme and a naive edge caching scheme. Additionally, NCTM also increases the user's buffered video duration by 1.21x-13.53x, ensuring improved playback smoothness. | [
"short video delivery",
"coded transmission",
"client-side cache"
] | https://openreview.net/pdf?id=AXEtC5inq8 | boW7Z2XXfc | official_review | 1,700,714,537,179 | AXEtC5inq8 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission425/Reviewer_oYcd"
review: This paper proposes NCTM, which is a caching technique designed for short-form, TikTok-like videos. The key idea behind NCTM is to leverage the similarity of videos between users to create and transmit a single file generated through an XOR operation on a set of videos. Users receive the XOR version and retrieve their requested video by running XOR operations using the videos they have individually cached. To make this work, the paper maintains the history of each user in a user cache table and performs user grouping through a minimum clique coverage algorithm. Evaluations against an LRU cache show that NCTM is able to reduce peak bandwidth needs and reduces the number and duration of rebuffer events.
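For readers unfamiliar with coded caching, the core mechanic is simple; here is a minimal, hypothetical sketch (toy byte strings standing in for video files, not NCTM's implementation):

```python
# The server XORs two (padded, equal-length) files into one broadcast payload;
# each client recovers its own request by XORing with the file it already caches.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    length = max(len(a), len(b))
    a, b = a.ljust(length, b"\0"), b.ljust(length, b"\0")  # pad to equal length
    return bytes(x ^ y for x, y in zip(a, b))

video_a = b"...payload of video A..."
video_b = b"...payload of video B..."
coded = xor_bytes(video_a, video_b)  # one broadcast instead of two unicasts

# Client 1 cached B and requested A; client 2 cached A and requested B.
assert xor_bytes(coded, video_b)[: len(video_a)] == video_a
assert xor_bytes(coded, video_a)[: len(video_b)] == video_b
```

The savings therefore hinge entirely on finding users whose caches and requests line up, which is where the concerns below come from.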
Pros:
- TikTok like short form videos are increasing in popularity
- CDNs continue to be interested in performant caching algorithms
Cons:
- Aspects of the design are not fully explained
- Overheads associated with the NCTM design are not evaluated thoroughly
- XOR coded caching has been proposed before.
questions: ## Design
NCTM requires significant synchronization and collaboration between user requests; however, there is very little evidence presented in the paper that opportunities for such a collaborative technique exist. Given the availability of real traces, the paper should furnish a measurement study to demonstrate that such opportunities for its design actually occur.
The paper makes a case that short videos are harder for CDNs to cache effectively, because users have different preferences and thus the access pattern is random. I'm not sure short videos are meaningfully different from other long-tail workloads such as long-form VOD or other popularity-driven content. The paper needs to make a clearer case to differentiate short videos from other web workloads.
For grouping to work well, there are two assumptions central to the design. First, it needs users to have complementary playback queues (perhaps this can be achieved with Recommendation Reorder). Second, it requires that user requests arrive very close to each other, and it is unclear how this synchronization can be achieved in practice.
How does NCTM handle the following cases:
- A newly released short video which has not been watched before?
- First video requested by a user after opening their app (such that no local cached videos exist)
It seems that NCTM will need to cache not only the original videos but also the XOR-coded versions.
## Evaluation
The evaluation mostly focuses on bandwidth reductions and reduced rebuffer times; however, this is of secondary importance. First, the paper needs to show how hit rates under NCTM compare against baselines for a range of cache sizes.
Second, in the evaluations shown, the baseline caching algorithm (CDN delivery mode, edge caching mode) is vanilla LRU. LRU is not the strongest baseline; there has been significant work on caching algorithms, and I recommend that the paper compare against stronger baselines such as GDS/GDSF, LFU-DA, LHD, AdaptSize, and AViC. A good list of video caching algorithms can be obtained from AViC: https://dl.acm.org/doi/abs/10.1145/3359989.3365423
The claim of sufficient time complexity in Sec 5.3 is not at all convincing. The paper needs to show the total time taken to handle an individual GET request and whether that is fast enough to handle highly concurrent workloads.
What is the memory overhead of the user cache table, and what is the compute complexity of keeping this table updated? This is an important aspect of the NCTM design and must be given attention in the evaluation.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
AXEtC5inq8 | NCTM: A Novel Coded Transmission Mechanism for Short Video Deliveries | [
"Xu Zhenge",
"Qing Li",
"Wanxin Shi",
"Yong Jiang",
"Zhenhui Yuan",
"Peng Zhang",
"Gabriel Muntean"
] | With the rapid popularity of short video applications, a large number of short video transmissions occupy the bandwidth, placing a heavy load on the Internet. Due to the extensive number of short videos and the predominant service for mobile users, traditional approaches (e.g., CDN delivery, edge caching) struggle to achieve the expected performance, leading to a significant number of redundant transmissions. In order to reduce the amount of traffic, we design a Novel Coded Transmission Mechanism (NCTM), which transmits XOR-coded data instead of the original video content. NCTM caches the short videos that users have already watched in user devices, and encodes, broadcasts, and decodes XOR-coded files separately at the server, edge nodes, and clients, with the assistance of cached content. This approach enables NCTM to deliver more short video data given the limited bandwidth. Our extensive trace-driven simulations show how NCTM reduces network load by 3.02\%-14.75\%, cuts peak traffic by 23.01\%, and decreases rebuffering events by 43\%-85\% in comparison to a CDN-supported scheme and a naive edge caching scheme. Additionally, NCTM also increases the user's buffered video duration by 1.21x-13.53x, ensuring improved playback smoothness. | [
"short video delivery",
"coded transmission",
"client-side cache"
] | https://openreview.net/pdf?id=AXEtC5inq8 | BRWIOW3hLI | official_review | 1,701,343,339,302 | AXEtC5inq8 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission425/Reviewer_ceV6"
] | review: The paper introduces NCTM, a system that aims to reduce the bandwidth strain caused by the rising popularity of short-form video. NCTM uses XOR-coded data transmission, leveraging cached data on user devices for encoding and decoding. NCTM is shown to reduce network load, peak traffic, and rebuffering events, outperforming traditional CDN and edge caching.
The paper is technically sound, with a detailed description of the NCTM architecture and principles, indicating a thorough approach to system design. The methodology, using trace-driven simulations for evaluation, is rigorous, though real-world testing would have made the paper more convincing.
The paper has a well-organized structure and a clear exposition of technical concepts. However, the related works section is not as comprehensive as it could be, particularly with regard to short-form video delivery.
The paper demonstrates originality through the application of existing techniques, specifically coded caching, to the context of short-form video delivery. However, the field itself is quite niche, and while the approach is somewhat novel, it does not mark a substantial departure from existing work.
One drawback is the lack of clarity in Appendix C regarding the simplification of treating video chunks in adaptive bitrate streaming as distinct videos. This oversight raises questions about the applicability and scalability of NCTM.
To enhance the paper, a more extensive literature review and a deeper discussion on the implications of treating video chunks as distinct videos are advised.
In summary, the paper presents a well-conceived system for short-form video delivery. However, its contribution to the broader field is somewhat limited, and there are areas, particularly with regard to real-world applicability, where improvements can be made to strengthen the paper’s relevance and impact.
questions: * Is NCTM adaptable to other types of video content beyond short-form video?
* How does NCTM affect the quality of experience in terms of video playback and latency?
* How does NCTM scale in larger, more diverse network environments, and what challenges might arise in such scenarios?
* Can you elaborate on the limitations or implications of treating video chunks in adaptive bitrate streaming as distinct videos?
* Are there plans to conduct real-world testing to complement the trace-driven simulations and validate the practical applicability of NCTM?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
AXEtC5inq8 | NCTM: A Novel Coded Transmission Mechanism for Short Video Deliveries | [
"Xu Zhenge",
"Qing Li",
"Wanxin Shi",
"Yong Jiang",
"Zhenhui Yuan",
"Peng Zhang",
"Gabriel Muntean"
] | With the rapid popularity of short video applications, a large number of short video transmissions occupy the bandwidth, placing a heavy load on the Internet. Due to the extensive number of short videos and the predominant service for mobile users, traditional approaches (e.g., CDN delivery, edge caching) struggle to achieve the expected performance, leading to a significant number of redundant transmissions. In order to reduce the amount of traffic, we design a Novel Coded Transmission Mechanism (NCTM), which transmits XOR-coded data instead of the original video content. NCTM caches the short videos that users have already watched in user devices, and encodes, broadcasts, and decodes XOR-coded files separately at the server, edge nodes, and clients, with the assistance of cached content. This approach enables NCTM to deliver more short video data given the limited bandwidth. Our extensive trace-driven simulations show how NCTM reduces network load by 3.02\%-14.75\%, cuts peak traffic by 23.01\%, and decreases rebuffering events by 43\%-85\% in comparison to a CDN-supported scheme and a naive edge caching scheme. Additionally, NCTM also increases the user's buffered video duration by 1.21x-13.53x, ensuring improved playback smoothness. | [
"short video delivery",
"coded transmission",
"client-side cache"
] | https://openreview.net/pdf?id=AXEtC5inq8 | 0zRIGewHMy | official_review | 1,699,111,696,492 | AXEtC5inq8 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission425/Reviewer_mGmY"
review: The paper presents **NCTM**, a novel mechanism that transmits short videos as coded data instead of raw video content. **NCTM** XOR-encodes the videos requested by multiple clients while ensuring successful decoding on each client by making effective use of user-side cached (watched) videos. **NCTM** is implemented using a novel algorithm (the Minimum Clique Coverage algorithm) developed based on the concept of "cliques" and the clique cover problem. This algorithm assists in efficiently encoding multiple videos requested by a group of clients, thus reducing the bandwidth demand. The paper also introduces a Recommendation Reorder algorithm, which reorders the recommended list of videos to create more opportunities for coded transmission. The system is evaluated comprehensively using real-life datasets and a testbed of CDN, edge, and clients. **NCTM** reduces the bandwidth demand and rebuffering events (time and number) compared to traditional CDN delivery and edge caching approaches.
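To give a rough sense of how clique-based grouping could work, the sketch below is a hypothetical reconstruction (the review does not reproduce the Minimum Clique Coverage algorithm, so the compatibility rule and the greedy covering here are my assumptions): users whose cached content lets them decode each other's requests form a clique, and each clique can then be served by a single XOR-coded broadcast.

```python
# Hypothetical reconstruction of the grouping step (details assumed, not from the paper).
# requests[u] is the video user u wants; caches[u] is the set of videos u already holds.
requests = {"u1": "A", "u2": "B", "u3": "C", "u4": "A"}
caches   = {"u1": {"B", "C"}, "u2": {"A", "C"}, "u3": {"A", "B"}, "u4": {"D"}}

def compatible(u: str, v: str) -> bool:
    # u and v can share one coded broadcast if each has cached the other's request.
    return requests[v] in caches[u] and requests[u] in caches[v]

def greedy_clique_cover(users):
    remaining, cliques = set(users), []
    while remaining:
        seed = remaining.pop()
        clique = [seed]
        for u in sorted(remaining):
            if all(compatible(u, v) for v in clique):
                clique.append(u)
        remaining -= set(clique)
        cliques.append(clique)
    return cliques

print(greedy_clique_cover(requests))
# e.g. [['u1', 'u2', 'u3'], ['u4']] -> one coded broadcast serves u1-u3; u4 is served alone
```

The number of cliques then determines how many coded files must be broadcast, which is presumably why the paper frames the problem as a clique cover.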
**Pros:**
- The paper is well-written and organized and adds a significant amount of new knowledge.
- The system design and methodology are sound.
- The authors present an innovative system that creatively applies mathematical problems to a popular real-world application and solves critical issues, i.e., the high bandwidth demand of streaming short videos.
- The evaluation is comprehensive with suitable metrics, figures, and tables that enhance the clarity of the findings.
- The system performs very well in sufficient and limited bandwidth conditions and presents superior performance to edge caching and conventional CDN delivery.
**Cons:**
- Although the limitations of the SOTA were discussed very well in the intro, the research gap could be articulated more clearly in the related works section. Currently, the motivation seems weak in the last paragraph of that section.
questions: - I would recommend discussing the research gap in the related works
- What is the difference between Figure 12 and Figure 13? Does each figure present the rebuffer request proportion in different settings of bandwidth?
ethics_review_flag: No
ethics_review_description: No ethical issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 7
technical_quality: 7
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
ANcZw18XWb | Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials | [
"Mingguo He",
"Zhewei Wei",
"shikun feng",
"Zhengjie Huang",
"Weibin Li",
"Yu Sun",
"Dianhai Yu"
] | Heterogeneous Graph Neural Networks (HGNNs) have gained significant popularity in various heterogeneous graph learning tasks. However, most existing HGNNs rely on spatial domain-based methods to aggregate information, i.e., manually selected meta-paths or some heuristic modules, lacking theoretical guarantees. Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which have limited expressiveness. To tackle these issues, we present a positive spectral heterogeneous graph convolution via positive noncommutative polynomials. Then, using this convolution, we propose PSHGCN, a novel heterogeneous graph convolutional network. PSHGCN offers a simple yet effective method for learning valid heterogeneous graph filters. Moreover, we demonstrate the rationale of PSHGCN in the graph optimization framework. We conducted an extensive experimental study to show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks. Notably, PSHGCN exhibits remarkable scalability, efficiently handling large real-world graphs comprising millions of nodes and edges. Our codes are available in the anonymous link: https://anonymous.4open.science/r/PSHGCN_Code-DFDC. | [
"Heterogeneous Graph Neural Networks",
"Spectral Graph Convolutions",
"Positive Noncommutative Polynomials",
"Graph Optimization."
] | https://openreview.net/pdf?id=ANcZw18XWb | tpjJoVIonK | official_review | 1,700,857,746,263 | ANcZw18XWb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1217/Reviewer_umC3"
review: This work aims to develop a heterogeneous graph neural network model from the spectral perspective that can learn arbitrary valid heterogeneous graph filters. The authors propose the Positive Spectral Heterogeneous Graph Convolutional Network (PSHGCN), which leverages spectral graph convolutions and positive noncommutative polynomials. This method ensures that the acquired graph filters are positive semidefinite, and its rationale is justified by a generalized graph optimization framework (also presented by the authors). Experiments examine PSHGCN's performance on both node classification and link prediction tasks, and ablation studies are given to examine its scalability and the influence of the hyperparameter K (the maximum order) on model performance.
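The positive-semidefiniteness property highlighted above can be illustrated numerically: if a filter is parameterized in a sum-of-squares form $h = g^{\top} g$, with $g$ a noncommutative polynomial in relation-specific matrices, then $h$ is PSD by construction. The snippet below is only a toy illustration with random placeholder matrices and made-up coefficients, not the paper's implementation:

```python
import numpy as np

# Toy illustration (not the paper's code): a sum-of-squares filter h = g^T g is PSD.
rng = np.random.default_rng(0)
n = 6
A1 = rng.random((n, n))          # placeholder "relation" matrices; in PSHGCN these
A2 = rng.random((n, n))          # would be normalized relation-specific adjacencies

# g(A1, A2): a noncommutative polynomial with arbitrary illustrative coefficients.
g = 0.5 * np.eye(n) + 0.3 * A1 + 0.2 * A2 + 0.1 * (A1 @ A2)

h = g.T @ g                      # sum-of-squares form => positive semidefinite
eigvals = np.linalg.eigvalsh(h)  # h is symmetric, so eigvalsh applies
print(eigvals.min() >= -1e-10)   # True: all eigenvalues are (numerically) non-negative
```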
Pros:
1. The paper is well-organized and its structure is easy to follow.
2. Code is provided.
3. The experimental setting is very clear, details are provided in the main paper and supplementary.
4. Developing HGNN models from a spectral perspective is a meaningful topic.
Cons:
1. The performance improvement is marginal except on the IMDB dataset.
2. The model design seems to be equivalent to considering all possible metapaths within hop-k and learning a coefficient for each metapath.
3. What the learned coefficients would be is not clearly presented.
My major concerns are in the model design (cons 2,3) and in the performance (cons 1). In addition, I have a few questions listed in the Questions section. Therefore, I currently would like to vote for a reject.
questions: 1. For coefficients cr1...rk in formula (7), are they learnable? Comparing (6) and (7), the only difference is we remove the first MLP step so that we can pre-compute the Y?
2. It seems to me that g() essentially considers all possible combinations of the adjacency matrices Ari within order k (i.e., all potential metapaths within hop-k). In that case, I am wondering: are all of these metapaths important and worth considering? Would too many metapaths introduce noise?
3. What kind of coefficients (i.e., the ci's in formula (7)) can the model learn? Can the authors please provide some visualizations (e.g., heatmaps) of these?
4. Are the reported results obtained with a non-decoupled version or a decoupled version? Can the author please provide both?
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
ANcZw18XWb | Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials | [
"Mingguo He",
"Zhewei Wei",
"shikun feng",
"Zhengjie Huang",
"Weibin Li",
"Yu Sun",
"Dianhai Yu"
] | Heterogeneous Graph Neural Networks (HGNNs) have gained significant popularity in various heterogeneous graph learning tasks. However, most existing HGNNs rely on spatial domain-based methods to aggregate information, i.e., manually selected meta-paths or some heuristic modules, lacking theoretical guarantees. Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which have limited expressiveness. To tackle these issues, we present a positive spectral heterogeneous graph convolution via positive noncommutative polynomials. Then, using this convolution, we propose PSHGCN, a novel heterogeneous graph convolutional network. PSHGCN offers a simple yet effective method for learning valid heterogeneous graph filters. Moreover, we demonstrate the rationale of PSHGCN in the graph optimization framework. We conducted an extensive experimental study to show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks. Notably, PSHGCN exhibits remarkable scalability, efficiently handling large real-world graphs comprising millions of nodes and edges. Our codes are available in the anonymous link: https://anonymous.4open.science/r/PSHGCN_Code-DFDC. | [
"Heterogeneous Graph Neural Networks",
"Spectral Graph Convolutions",
"Positive Noncommutative Polynomials",
"Graph Optimization."
] | https://openreview.net/pdf?id=ANcZw18XWb | tEN7z0QVjw | official_review | 1,700,197,372,699 | ANcZw18XWb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1217/Reviewer_gXv8"
] | review: **Summary**:
In this paper, the authors propose a spectral domain-based heterogeneous graph neural network (HGNN) method, PSHGCN, with a positive spectral heterogeneous graph convolution. PSHGCN enables the heterogeneous graph filter to satisfy a positive semidefinite constraint. Their experimental results show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks.
**Strengths**:
**S1**. The paper thoroughly explores the design philosophy of PSHGCN from multiple perspectives. Specifically, it delves into detailed comparisons with existing spectral HGNNs, examining the graph optimization and complexity perspectives.
**S2**. The paper is well-written and easy to follow.
**S3**. Code is provided.
**Weaknesses**:
**W1**. No significance test is reported between the best and the second-best performance in Table 2. The difference between the best-performing and second-best methods is slight on all datasets, so a significance test is needed.
questions: **Q1**. Please further discuss why models that are not equivalent to this noncommutative polynomial form would have limited expressiveness.
**Q2**. Can you provide statistical analyses, such as significance tests over repeated trials, to show that the better results of PSHGCN are not accidental?
ethics_review_flag: No
ethics_review_description: Not Applicable. The paper does not have any ethical considerations to address.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
ANcZw18XWb | Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials | [
"Mingguo He",
"Zhewei Wei",
"shikun feng",
"Zhengjie Huang",
"Weibin Li",
"Yu Sun",
"Dianhai Yu"
] | Heterogeneous Graph Neural Networks (HGNNs) have gained significant popularity in various heterogeneous graph learning tasks. However, most existing HGNNs rely on spatial domain-based methods to aggregate information, i.e., manually selected meta-paths or some heuristic modules, lacking theoretical guarantees. Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which have limited expressiveness. To tackle these issues, we present a positive spectral heterogeneous graph convolution via positive noncommutative polynomials. Then, using this convolution, we propose PSHGCN, a novel heterogeneous graph convolutional network. PSHGCN offers a simple yet effective method for learning valid heterogeneous graph filters. Moreover, we demonstrate the rationale of PSHGCN in the graph optimization framework. We conducted an extensive experimental study to show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks. Notably, PSHGCN exhibits remarkable scalability, efficiently handling large real-world graphs comprising millions of nodes and edges. Our codes are available in the anonymous link: https://anonymous.4open.science/r/PSHGCN_Code-DFDC. | [
"Heterogeneous Graph Neural Networks",
"Spectral Graph Convolutions",
"Positive Noncommutative Polynomials",
"Graph Optimization."
] | https://openreview.net/pdf?id=ANcZw18XWb | nUaN7Soudp | official_review | 1,700,642,239,144 | ANcZw18XWb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1217/Reviewer_Xyka"
] | review: This paper investigates the field of heterogeneous graph neural networks (HGNNs). To enhance the expressive power, this paper presents a positive spectral heterogeneous graph convolution via polynomials and proposes PSHGCN.
Strength:
1. This paper presents an interesting idea to extend spectral graph filters to heterogeneous graphs.
2. An explicit time complexity analysis is provided for the proposed method.
3. The proposed PSHGCN is extensively evaluated in the experiments in terms of both efficiency and effectiveness.
4. Necessary theoretical analysis is provided.
Weakness:
1. It seems that the number of trainable weights in the heterogeneous graph filters is exponential in the number of relations in heterogeneous graphs, which raises concerns about scalability.
2. The total running times are missing for the tested large datasets. I am curious about the training time corresponding to the results in Table 4 for PSHGCN.
Minor issue:
PSHGCN should be defined before it is first used.
questions: Please see the weaknesses above.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
ANcZw18XWb | Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials | [
"Mingguo He",
"Zhewei Wei",
"shikun feng",
"Zhengjie Huang",
"Weibin Li",
"Yu Sun",
"Dianhai Yu"
] | Heterogeneous Graph Neural Networks (HGNNs) have gained significant popularity in various heterogeneous graph learning tasks. However, most existing HGNNs rely on spatial domain-based methods to aggregate information, i.e., manually selected meta-paths or some heuristic modules, lacking theoretical guarantees. Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which have limited expressiveness. To tackle these issues, we present a positive spectral heterogeneous graph convolution via positive noncommutative polynomials. Then, using this convolution, we propose PSHGCN, a novel heterogeneous graph convolutional network. PSHGCN offers a simple yet effective method for learning valid heterogeneous graph filters. Moreover, we demonstrate the rationale of PSHGCN in the graph optimization framework. We conducted an extensive experimental study to show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks. Notably, PSHGCN exhibits remarkable scalability, efficiently handling large real-world graphs comprising millions of nodes and edges. Our codes are available in the anonymous link: https://anonymous.4open.science/r/PSHGCN_Code-DFDC. | [
"Heterogeneous Graph Neural Networks",
"Spectral Graph Convolutions",
"Positive Noncommutative Polynomials",
"Graph Optimization."
] | https://openreview.net/pdf?id=ANcZw18XWb | Du4s7J1NxA | decision | 1,705,909,213,874 | ANcZw18XWb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This paper develops a new heterogeneous graph method that applies the spectral method on a heterogeneous graph.
Pros:
* The paper applies a spectral method to heterogeneous graphs, which commonly exist in real-world settings. This is quite novel and important for the field.
* The paper also analyzes the computation complexity of the work and demonstrates its scalability both theoretically and empirically.
* The method requires less configuration than the current SOTA while achieving comparable or better performance.
Cons:
* Although the authors have explained the model performance in the rebuttal, the performance improvement is still quite minor, except on the IMDB dataset, when compared with the strong baseline (SeHGNN).
ANcZw18XWb | Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials | [
"Mingguo He",
"Zhewei Wei",
"shikun feng",
"Zhengjie Huang",
"Weibin Li",
"Yu Sun",
"Dianhai Yu"
] | Heterogeneous Graph Neural Networks (HGNNs) have gained significant popularity in various heterogeneous graph learning tasks. However, most existing HGNNs rely on spatial domain-based methods to aggregate information, i.e., manually selected meta-paths or some heuristic modules, lacking theoretical guarantees. Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which have limited expressiveness. To tackle these issues, we present a positive spectral heterogeneous graph convolution via positive noncommutative polynomials. Then, using this convolution, we propose PSHGCN, a novel heterogeneous graph convolutional network. PSHGCN offers a simple yet effective method for learning valid heterogeneous graph filters. Moreover, we demonstrate the rationale of PSHGCN in the graph optimization framework. We conducted an extensive experimental study to show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks. Notably, PSHGCN exhibits remarkable scalability, efficiently handling large real-world graphs comprising millions of nodes and edges. Our codes are available in the anonymous link: https://anonymous.4open.science/r/PSHGCN_Code-DFDC. | [
"Heterogeneous Graph Neural Networks",
"Spectral Graph Convolutions",
"Positive Noncommutative Polynomials",
"Graph Optimization."
] | https://openreview.net/pdf?id=ANcZw18XWb | 5A3YayID4c | official_review | 1,700,800,364,935 | ANcZw18XWb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1217/Reviewer_zcQH"
] | review: This paper introduces a positive spectrum heterogeneous graph convolution based on positive semidefinite polynomials. Building upon this, a novel heterogeneous graph convolutional network, PSHGCN, is proposed. The fundamental principles of PSHGCN are demonstrated within a graph optimization framework.
Strengths:
1) This paper offers some theoretical analysis.
2) The proposed PSHGCN addresses the issues of poor theoretical guarantees and limited expressiveness observed in existing HGNNs.
3) Some experiments were conducted to demonstrate the effectiveness of the proposed method.
Weaknesses:
1) The issues addressed by the proposed PSHGCN in this paper are not novel problems, and there are some aspects of the specific implementation process that draw inspiration from previous works, indicating a lack of novelty.
2) Figure 1 lacks detailed explanations.
3) The paper mentions ‘simplifying the Sum of Squares form by utilizing a single polynomial’ in Section 4.3, but there is no theoretical proof provided for the feasibility of doing so.
4) The improvement in performance for node classification is not significant and the experiments lack comparisons with the latest methods.
questions: --How is the search for hyperparameters conducted?
--Why does GCN outperform many newer methods on the link prediction task in the Amazon dataset?
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
ANcZw18XWb | Spectral Heterogeneous Graph Convolutions via Positive Noncommutative Polynomials | [
"Mingguo He",
"Zhewei Wei",
"shikun feng",
"Zhengjie Huang",
"Weibin Li",
"Yu Sun",
"Dianhai Yu"
] | Heterogeneous Graph Neural Networks (HGNNs) have gained significant popularity in various heterogeneous graph learning tasks. However, most existing HGNNs rely on spatial domain-based methods to aggregate information, i.e., manually selected meta-paths or some heuristic modules, lacking theoretical guarantees. Furthermore, these methods cannot learn arbitrary valid heterogeneous graph filters within the spectral domain, which have limited expressiveness. To tackle these issues, we present a positive spectral heterogeneous graph convolution via positive noncommutative polynomials. Then, using this convolution, we propose PSHGCN, a novel heterogeneous graph convolutional network. PSHGCN offers a simple yet effective method for learning valid heterogeneous graph filters. Moreover, we demonstrate the rationale of PSHGCN in the graph optimization framework. We conducted an extensive experimental study to show that PSHGCN can learn diverse heterogeneous graph filters and outperform all baselines on open benchmarks. Notably, PSHGCN exhibits remarkable scalability, efficiently handling large real-world graphs comprising millions of nodes and edges. Our codes are available in the anonymous link: https://anonymous.4open.science/r/PSHGCN_Code-DFDC. | [
"Heterogeneous Graph Neural Networks",
"Spectral Graph Convolutions",
"Positive Noncommutative Polynomials",
"Graph Optimization."
] | https://openreview.net/pdf?id=ANcZw18XWb | 31NH7hIWr3 | official_review | 1,701,121,202,167 | ANcZw18XWb | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1217/Reviewer_ic7Q"
] | review: Summary: This paper designs convolution matrices for graph neural networks in the case where graphs have nodes and edges with different "types". The convolution matrices are guaranteed to be positive semidefinite by virtue of the fact that they can be written in a sum of squares form. The authors motivate this by recalling the connection between spectral graph filters and solutions to a certain energy functional minimization problem on graph signals. In order for this connection to hold, the filter matrix must be positive semidefinite.
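For readers unfamiliar with this connection, a common instance of the energy minimization the reviewer refers to is written below; this is an assumed illustrative form, since equation (3) of the paper is not reproduced in the review.

```latex
% Assumed illustrative form of the graph-signal energy minimization (not copied from the paper).
\mathbf{y}^{*}
  = \arg\min_{\mathbf{y}} \;
    \|\mathbf{y}-\mathbf{x}\|_2^2 + \mathbf{y}^{\top}\gamma(\mathbf{L})\,\mathbf{y}
  = \bigl(\mathbf{I} + \gamma(\mathbf{L})\bigr)^{-1}\mathbf{x}
```

Under this form, the implied filter $h(\mathbf{L}) = (\mathbf{I}+\gamma(\mathbf{L}))^{-1}$ is positive semidefinite whenever $\gamma(\mathbf{L})$ is, which illustrates why positive semidefiniteness of the filter matrix is tied to the existence of a valid energy function.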
Pros:
1.) The development of the ideas is fairly clear.
2.) The empirical performance of the resulting methods at least matches baselines.
Cons:
1.) The empirical improvements are minimal, given the magnitudes of the reported variances. In fact, this holds even in comparison to versions of the proposed architecture that do not enforce the positive semidefiniteness constraint.
2.) The theoretical contributions seem overstated. Specifically, I am not sure if the expressiveness guarantee on page 5 holds when, as the authors do, one only uses a single monomial $g$ (as stated at the beginning of Section 4.3).
3.) The paper's approach is motivated by the fact that, if a filter $h$ is PSD, then it corresponds to the unique solution of equation (3) for some choice of the energy function $\gamma(L)$. It is not clearly argued why this is so important for performance, though.
4.) Some notation needs to be reworked. For example, in equation (3), the right-hand side seems to be the definition of the function $f$. One should not write an optimization in this way. One needs $\min_{y}$ on both sides.
questions: 1.) Can the authors clarify points 2 and 3 in the "cons" section of this review? Cons 1-3 are crucial in my evaluation of the paper.
2.) Can the authors define "meta-path" more precisely?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |