forum_id: string (length 8–20)
forum_title: string (length 4–171)
forum_authors: sequence (length 0–25)
forum_abstract: string (length 4–4.27k)
forum_keywords: sequence (length 1–10)
forum_pdf_url: string (length 38–50)
note_id: string (length 8–13)
note_type: string (6 classes)
note_created: int64 (1,360B–1,736B)
note_replyto: string (length 8–20)
note_readers: sequence (length 1–5)
note_signatures: sequence (length 1–1)
note_text: string (length 10–16.6k)
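A dump with this schema can be regrouped into per-forum review threads. The sketch below assumes the rows have already been parsed into Python dicts keyed by the field names above; the sample rows reuse ids that appear later in this dump, and the helper name is ours, not part of the dataset.

```python
# Sketch: regroup flat note rows into per-forum threads, sorted by creation time.
# Field names follow the schema above; group_notes_by_forum is a hypothetical helper.
from collections import defaultdict

def group_notes_by_forum(rows):
    """Group note rows by forum_id and sort each thread by note_created (ms timestamps)."""
    forums = defaultdict(list)
    for row in rows:
        forums[row["forum_id"]].append(row)
    for notes in forums.values():
        notes.sort(key=lambda r: r["note_created"])
    return dict(forums)

# Tiny sample using ids and timestamps that occur later in this dump.
rows = [
    {"forum_id": "AFJYWMkVCh", "note_id": "xOygzMWtQe", "note_type": "decision",
     "note_created": 1_705_909_216_853},
    {"forum_id": "AFJYWMkVCh", "note_id": "N4uxVmzecp", "note_type": "official_review",
     "note_created": 1_698_155_025_925},
    {"forum_id": "ADBeqIR10E", "note_id": "kPYzesFGzQ", "note_type": "official_review",
     "note_created": 1_700_688_119_923},
]

threads = group_notes_by_forum(rows)
# The review predates the decision, so it comes first after sorting.
assert [n["note_id"] for n in threads["AFJYWMkVCh"]] == ["N4uxVmzecp", "xOygzMWtQe"]
```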
AFJYWMkVCh
GNNs as Adapters for LLMs on Text-Attributed Graphs
[ "Xuanwen Huang", "Kaiqiao Han", "Yang Yang", "Dezheng Bao", "Quanjin Tao", "Ziwei Chai", "Qi Zhu" ]
Text-attributed Graphs (TAGs), which interlace textual information with graph structures, pose unique challenges and opportunities for joint text and graph modeling. Recently, large language models (LLMs) have greatly advanced the generative and predictive power of text modeling. However, existing research on jointly modeling text and graph structures either incurs high computational costs or offers limited representational power. In this work, we propose GraphAdapter to harness the power of the LLM without fine-tuning its weights on Text-Attributed Graphs. Given a TAG, an adapter GNN is trained to reduce the LLM's error in predicting the next word of text sequences on nodes. Once trained, this GNN adapter can be seamlessly fine-tuned for various downstream tasks. Through extensive node classification experiments across multiple domains, GraphAdapter demonstrates an average improvement of 5\% while being more computationally efficient than baselines. We further validate its effectiveness with various language models, including RoBERTa, GPT-2, and Llama 2.
[ "Text-attributed graph; graph neural network; language model" ]
https://openreview.net/pdf?id=AFJYWMkVCh
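The mechanism described in the abstract (a GNN adapter whose output is mean-pooled with the frozen LLM's hidden state before next-word prediction) can be illustrated with a toy numeric sketch. Everything here is hypothetical: the 3-dimensional hidden states, the 2-word vocabulary, and the `vocab_proj` matrix stand in for the real model components and are not the authors' implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fused_next_token_logits(llm_hidden, gnn_hidden, vocab_proj):
    # Mean-pool the frozen LLM's hidden state with the GNN adapter's output,
    # then project to vocabulary logits (vocab_proj: vocab_size x hidden_dim).
    fused = [(a + b) / 2.0 for a, b in zip(llm_hidden, gnn_hidden)]
    return [sum(w + 0.0 for w in []) + sum(w * h for w, h in zip(row, fused))
            for row in vocab_proj]

# Toy values: 3-dim hidden states, 2-word vocabulary (all made up).
llm_hidden = [0.2, -0.1, 0.4]
gnn_hidden = [0.0, 0.3, 0.2]
vocab_proj = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]]

probs = softmax(fused_next_token_logits(llm_hidden, gnn_hidden, vocab_proj))
assert abs(sum(probs) - 1.0) < 1e-9  # still a valid distribution
```

During pre-training, the adapter would be optimized so that these fused probabilities assign higher likelihood to the true next token than the LLM alone; this sketch only shows the pooling and projection shape of that idea.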
xOygzMWtQe
decision
1,705,909,216,853
AFJYWMkVCh
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: All the reviewers' concerns were addressed during the rebuttal, and all reviews are positive. I recommend a weak acceptance for this paper.
N4uxVmzecp
official_review
1,698,155,025,925
AFJYWMkVCh
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1885/Reviewer_jej8" ]
review: The paper introduces GraphAdapter, an innovative approach to harnessing the predictive power of large language models (LLMs) for Text-Attributed Graphs (TAGs). The paper aims to resolve the limitations of computational costs and representation power in jointly modeling text and graph structures. The authors propose an adapter GNN that works with pre-trained LLMs like RoBERTa, GPT-2, and Llama 2, showing computational efficiency and an average accuracy improvement of approximately 5% across multiple tasks and domains. The following are three strengths: 1. The paper provides an in-depth understanding of TAG challenges, laying a strong foundation for GraphAdapter's necessity and approach. 2. The paper tackles computational inefficiency by introducing a parameter-efficient GNN adapter, reducing trainable parameters significantly. 3. Comprehensive node classification experiments validate the model's effectiveness across multiple domains, showing a 5% accuracy improvement. questions: 1. Since the framework is efficient, I think there should be experiments on larger datasets, for example, Ogbn-Products. 2. The prompt also seems to play an important role in this framework. Nonetheless, there seems to be no detailed discussion of it. 3. The generative task in Figure 2(a) is not clear to me. ethics_review_flag: No ethics_review_description: No scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
M5iYWUGq0g
official_review
1,700,740,232,122
AFJYWMkVCh
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1885/Reviewer_Lkj4" ]
review: This paper proposes a novel method, called GraphAdapter, which uses LLMs on graph-structured data with parameter-efficient tuning. The method uses a GNN as an adapter for frozen LMs and pre-trains the GNN to align with the LMs. Strength: It is interesting to use a GNN as an adapter for LLMs, so as to integrate LLMs with GNNs on Text-Attributed Graphs. Weaknesses: 1) The reported performance of the baseline TAPE (GPT-3.5) in Table 3 on the Arxiv dataset (0.7672) differs from the original paper's result (0.7750 ± 0.0012), which seems unfair, and the results are not competitive enough. 2) In the experiments, different prompts were set on different datasets, and an analysis of prompt robustness is lacking. Can another similar prompt achieve comparable performance in GraphAdapter? 3) The paper is not well written and is somewhat unclear. questions: See the weaknesses. ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
C1YmgQ7PrK
official_review
1,701,254,147,653
AFJYWMkVCh
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1885/Reviewer_vwz9" ]
review: The submitted work explores the use of LLMs on Text-Attributed Graphs. The authors propose a GNN-based parameter-efficient tuning method for LLMs. They also propose a residual learning procedure to pre-train the GNN adapter with LLMs. Pros 1. The idea of using LLMs on structural and textual data is meaningful. 2. The proposed GNN adapter is lightweight and convenient. 3. The paper is well-written and the experiments are sufficient. Cons 1. Although significant improvements in performance are shown in Table 2, the experimental results in Table 4 also show limited performance on the Arxiv and Reddit datasets compared with the baseline GIANT when using the same LM. 2. As a parameter-efficient tuning method, the authors do not present an experimental comparison with popular parameter-efficient tuning methods, such as LoRA. questions: 1. What's the performance comparison of your method and the baselines GIANT and GLEM with the same LLMs, such as GPT-2 and Llama 2? 2. What's the performance improvement of your GraphAdapter compared with popular parameter-efficient tuning methods, such as LoRA? 3. What are the running time and GPU cost of your method with different PLMs? 4. What's the effect of different GNNs with the same LM, such as graph attention networks? ethics_review_flag: No ethics_review_description: N/A scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
24YhiZkYst
official_review
1,700,054,420,027
AFJYWMkVCh
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1885/Reviewer_PixD" ]
review: This work aims to address the challenges of high computational cost and insufficient model representation capacity in the joint learning of text and graph structures in TAGs. The paper introduces its own solution, GraphAdapter, to tackle these issues. However, the article does not focus on addressing how the proposed method specifically resolves these problems; instead, it is limited to introducing the method itself. The logical coherence of the Introduction section is lacking, as it fails to clearly articulate the problems it aims to solve. Additionally, the experimental section lacks comparative experiments with Large Language Models (LLMs). questions: 1. in line 87-97 and line 99-107, > ...... However, when considering cascading GNN-LMs, existing techniques cannot be scaled up to billion-scale models like Llama 2 [33] and GPT-3 [2]. Another pioneering research effort has ventured to fine-tune language models using unsupervised graph information ..... > ...... While cascading GNNs and LLMs proves infeasible for training, we draw inspiration from works on parameter-efficient tuning of LLMs to harness the power of large language models on TAGs ..... The concatenation method incurs computational overhead, and self-supervised graph fine-tuning is introduced. You point out that graphs can assist language models in extracting node information, but it is not explicitly explained why this fine-tuning approach demonstrates that graphs help language models understand textual information. You've mentioned the computational cost of the concatenation method in your motivation. Then, you employ parameter fine-tuning, but you haven't clarified the difference between this and the self-supervised fine-tuning mentioned earlier. So, what is the innovation in your approach? 2. in line 117-132, > ...... we employ the GNN adapter only at the transformer's last layer and implement residual learning for autoregressive next token prediction.
Different from a traditional adapter, we perform mean-pooling on the hidden representations from a GNN adapter and LLMs, then optimize the adapter to improve the next-word prediction of the LLMs. ....... Your main motivation is to address the computational cost associated with the concatenation method. However, the new approach you propose does not explicitly explain why using a GNN at the last layer of the transformer reduces computational overhead. Additionally, while you suggest that graphs can help language models understand textual information, there is no structural innovation in your approach to enhance the effectiveness of graphs for language models. You haven't clarified the differences between the concatenation method and your approach in terms of how graphs assist language models, and why your method is superior. Therefore, what is your specific contribution in addressing these issues? 3. in line 317, > Text-Attributed Graph ...... Your study focuses on text-attributed graphs, yet there is a lack of detailed introduction to this task. Instead, a substantial portion of the content is dedicated to explaining Pre-trained Language Models (PLMs) and Graph Neural Networks (GNNs), which may result in an illogical organization of the material. It's important to prioritize a comprehensive and clear explanation of the text-attributed graph task to ensure that readers understand the context and significance of your research. 4. in line 435-439, > ...... We separately calculated the prediction probabilities of the language model alone and the probabilities that mixed the graph structure and the previous predictions. The two probabilities are then averaged to obtain the final prediction result .......
Given your assertion that language models struggle to predict graph-related word information, the decision to average the probabilities from the standalone language model and the graph-structured model raises several concerns: (1) The potentially poor performance of the standalone language model on graph-related words may adversely affect the overall effectiveness after averaging. Have you considered how the discrepancies in performance might be mitigated or addressed? (2) Averaging probabilities from language and graph models introduces additional computational steps, contradicting your initial motivation to address computational overhead. How does this align with the goal of reducing computational costs? (3) The rationale behind this specific approach, combining probabilities through averaging, is not clearly justified. Why choose this particular method of fusion, and how does it address the challenges posed by the difficulty of language models in predicting graph-related word information? Providing a more detailed explanation or rationale for this aspect of your methodology would help address these concerns and strengthen the coherence of your approach. 5. in line 209 and line 646, > ..... LLMs for Graph > table 2 You rightly point out a potential gap in the evaluation of the proposed method. While the related work suggests the conversion of graphs to text for processing with Large Language Models (LLMs), there is a lack of comparative experiments in this aspect. The absence of experiments comparing the performance of the proposed method against approaches that directly utilize LLMs for graph processing makes it challenging to discern whether the improvement in model capability stems from the base model or from the fusion of graph and language model information. The absence of this comparison is also notable in the ablation experiment, where the experimental design falls short of demonstrating the clear advantages of the proposed model. 
Including experiments that specifically isolate and compare the contributions of the base model and the graph-language model fusion would enhance the robustness of your findings and better support your model's superiority claims. 6. in line 777, > Are all components comprising GraphAdapter valid? Table 5 indicates that the decrease in results is primarily attributed to pre-training. Removing both pre-training and the graph structure, and comparing the results with models that only exclude pre-training, shows that the model's inference ability here is still predominantly derived from the base language model. This observation suggests that pre-training has a more significant impact on the overall performance than the exclusion of graph structures. 7. in line 866, > ...... Graph structure is the basis of pre-training ..... You make a valid point regarding the extensive analysis of pre-training versus no pre-training models. The analysis focuses on the idea that pre-training enables the Graph Neural Network (GNN) to learn structural information from the graph. However, it's crucial to connect this back to the initial proposition in the introduction, where you suggested that in the Text-Attributed Graph (TAG) task, graphs can complement node text attributes through structural proximity. To strengthen your argument and provide a more comprehensive analysis, consider conducting sample analyses that highlight instances where the graph indeed supplements the text attributes of nodes. By examining specific examples, you can elucidate the cases where the graph contributes valuable structural information, thereby addressing the question of which samples benefit from the inclusion of graph structures. This, in turn, allows for a more nuanced understanding of the challenges the model faces and provides additional insights into the effectiveness of the proposed approach.
ethics_review_flag: No ethics_review_description: None scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
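The probability-averaging step the reviewer questions in point 4 reduces, in the simplest reading, to an element-wise average of two categorical distributions over the same vocabulary. This minimal sketch (with made-up numbers) only illustrates that the average of two normalized distributions remains normalized; it is not the paper's actual fusion code.

```python
def average_predictions(p_lm, p_graph):
    # Element-wise average of two categorical distributions over the same vocabulary.
    assert len(p_lm) == len(p_graph)
    return [(a + b) / 2.0 for a, b in zip(p_lm, p_graph)]

# Toy distributions over a 3-word vocabulary (made-up values).
p_lm = [0.7, 0.2, 0.1]     # language model alone
p_graph = [0.1, 0.6, 0.3]  # graph-augmented prediction
p_final = average_predictions(p_lm, p_graph)

# The average of two valid distributions is itself a valid distribution.
assert abs(sum(p_final) - 1.0) < 1e-9
assert all(abs(x - y) < 1e-9 for x, y in zip(p_final, [0.4, 0.4, 0.2]))
```

As the sketch makes visible, a confidently wrong distribution from either side pulls the average toward it, which is precisely the concern raised in sub-point (1) of the review.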
ADBeqIR10E
Detecting and Understanding Self-Deleting JavaScript Code
[ "Xinzhe Wang", "Zeyang Zhuang", "Wei Meng", "James Cheng" ]
Self-deletion is a well-known strategy frequently utilized by malware to evade detection. Recently, this technique has found its way into client-side JavaScript code, significantly raising the complexity of JavaScript analysis. In this work, we systematically study the emerging client-side JavaScript self-deletion behavior on the web. We tackle various technical challenges associated with JavaScript dynamic analysis and introduce JSRay, a browser-based JavaScript runtime monitoring system designed to comprehensively study client-side script deletion. We conduct a large-scale measurement of one million popular websites, revealing that script self-deletion is prevalent in the real world. While our findings indicate that most developers employ self-deletion for legitimate purposes, we also discover that self-deletion has already been employed together with other anti-analysis techniques for cloaking suspicious operations in client-side JavaScript.
[ "JavaScript", "anti-analysis techniques", "web browser" ]
https://openreview.net/pdf?id=ADBeqIR10E
kPYzesFGzQ
official_review
1,700,688,119,923
ADBeqIR10E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1311/Reviewer_CxzJ" ]
review: Pros: - Always interesting to see a deep dive into an anti-analysis technique. - The tool created by the work shows a useful technique for browser instrumentation. The tool and technical measurement approach seem very useful in general, potentially even for other measurements. Cons: - It is not clear that the vast majority of website authors are intentionally self-deleting code. The claims made are stronger than the data seems to indicate. - It is unclear what the impact of these findings is, since the self-deleting code was found to be a weaker signal for security-relevant things. The claims that the work makes in multiple places seem to me to be stronger than the data indicates. For example, in Section 6.2, accessing the listed APIs, network access, and presence on the EasyPrivacy ad-blocking list are all things that seem more related to privacy and ad-blocking preferences than to security-impactful concerns. The article assumes and implies that malicious code will score higher on these metrics than benign code, whereas in practice I am not sure whether this is the case. The work states that JavaScript self-deletion is an anti-debugging technique, whereas the results shown seem to imply that many website authors may not even know that they are creating self-deleting code when they use jQuery or Google tags. In Section 6.1, the authors assume that the website authors intend to use self-deleting code, but the paper doesn't provide any evidence that the majority of these cases are intentional. I could imagine that website authors just use the library or tool that helps them achieve their goals and don't know or care about the details of how jQuery or the Google tags library are implemented. I would like to see more details provided around the manual classification of benign code and suspicious code in Section 6.1. Manual classification into benign and suspicious is very hard to do reliably, even for an experienced researcher.
Many times, researchers performing manual classification just lack the context of what the site's intentions and reasoning are. It would be good to provide explicit definitions for what is suspicious or malicious and what evidence is used to reach that conclusion. Furthermore, inter-rater reliability statistics would be useful here, since this is an area where personal judgment can influence the results. Finally, I'm surprised that the manual classification didn't have any borderline or unknown cases, since, in my experience, there are many situations where it is impossible to know why a piece of data is being used, and many possible suspicious or non-suspicious reasons for it being collected. The manual classification experiment is the only experiment that draws a direct link between this behavior and malicious or suspicious code, so it is important for these findings to be more rigorous. Finally, I would like more clarification in the work about what the impact of these findings is. It would help if the authors could clarify how the tool or the knowledge gained from the study should impact real systems. I have a vague feeling that the findings are potentially security-relevant, but nothing explicit about how the findings of this study can make the state of things better. questions: - What is the impact of the findings? - How reliable is the link between self-deleting JavaScript and security? - How reliable is the manual classification of suspicious and benign JavaScript? ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
k4bLFhsTmF
official_review
1,700,689,872,401
ADBeqIR10E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1311/Reviewer_pNLR" ]
review: Pros: - Well-executed measurement of an interesting phenomenon on the Web. I wasn't even aware that this is done in the wild. Thanks for describing, analyzing and measuring this. - I also very much appreciate that the authors open-sourced an artifact for this study, so that the reader can check how certain parts are implemented and future work can benefit from the implementation, thanks. - The various examples and the case study made it easy to understand the concept and the results of the paper. Cons: - My biggest concern here is that although there is a subsection about the potential security impact and an assessment of sensitive API accesses, I don't see much Web-security relevance in this measurement work. Thus, I would recommend moving this to one of the measurement tracks, e.g. the search track, or submitting it to conferences that target Web measurements, e.g. IMC. - I was wondering about the exact reasons for those self-deleting scripts in the non-malicious use case. For example, in the case of protection of intellectual property, it is weird, as someone interested in the code could just directly request it from the source or collect eval invocations via e.g. the Trusted Types API. I understand that a survey study is out of scope here, but asking the Web operators about their reasons, to get an initial idea, would be a nice addition to the paper. - It should be made clear why the authors decided to alter Chromium instead of using MutationObserver and Trusted Types together with stack traces of the invocations to implement this fully with client-side code instead of altering the execution engine. I'm sure there are reasons for both ways, but the paper would benefit from a more detailed explanation of this choice, as it might point out why and in which cases each works better. - I was very surprised about the high number of sites that have deleting scripts in their application.
Although I don't question the correctness of the assessment, it would be nice to have a detailed analysis of the root cause here, e.g. extending the library analysis to see how often libraries are causing this and why they do so. This has already been hinted at with the jQuery case, but I think the paper would benefit from a more detailed analysis here. - A minor thing: the eval API seems to be used here as a general representative of string-to-code conversion functions. setInterval and setTimeout can also be used to do that; given that the analysis is based on JS parser invocations, those cases are not missed, but it should be mentioned somewhere that they are also covered by your tool. questions: - Did you contact the developers or operators of the deleting scripts to assess the reasoning behind the deletion, especially for the benign use cases? - Why exactly did you choose to alter Chromium instead of using client-side APIs that cover all cases by default? ethics_review_flag: No ethics_review_description: No issues scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 6 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
fMN3AkZRyi
official_review
1,700,225,267,498
ADBeqIR10E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1311/Reviewer_7A9o" ]
review: This paper is undoubtedly interesting and is perhaps the best paper in my pile of papers. The authors propose modifications to the V8 JS engine (+700 lines of code). The authors add hooks to the JS code handlers that are responsible for inserting and deleting scripts. They clearly show that merely monitoring the script tags is not enough: there is a need to go under the hood and add annotations. Compared to merely script-based interventions, actually modifying the V8 code and then integrating it into Chromium is clearly a very solid contribution. There are some things that stop the paper from becoming stellar. 1. The scope is somewhat limited because we are not looking at script modification. 2. A lot of subtle bugs are being missed where, instead of deleting a script, the environment is changed such that certain parts of the loaded scripts are active and the rest are inactive. This is conceptually similar to deletion, but such cases go undetected in this system. 3. The security of the logging mechanism should be discussed. questions: 1. How are script modification and changing the control flow in a script by modifying the environment tackled? 2. What is the security of the logging mechanism? ethics_review_flag: No ethics_review_description: I think it is fine. scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
ADBeqIR10E
Detecting and Understanding Self-Deleting JavaScript Code
[ "Xinzhe Wang", "Zeyang Zhuang", "Wei Meng", "James Cheng" ]
Self-deletion is a well-known strategy frequently utilized by malware to evade detection. Recently, this technique has found its way into client-side JavaScript code, significantly raising the complexity of JavaScript analysis. In this work, we systematically study the emerging client-side JavaScript self-deletion behavior on the web. We tackle various technical challenges associated with JavaScript dynamic analysis and introduce JSRay, a browser-based JavaScript runtime monitoring system designed to comprehensively study client-side script deletion. We conduct a large-scale measurement of one million popular websites, revealing that script self-deletion is prevalent in the real world. While our findings indicate that most developers employ self-deletion for legitimate purposes, we also discover that self-deletion has already been employed together with other anti-analysis techniques for cloaking suspicious operations in client-side JavaScript.
[ "JavaScript", "anti-analysis techniques", "web browser" ]
https://openreview.net/pdf?id=ADBeqIR10E
Htd8JoCtu0
decision
1,705,909,226,874
ADBeqIR10E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: ## Summary This paper conducts a comprehensive study of self-deleting JavaScript behavior on the web, introducing a novel tool, JSRay, to monitor and analyze this phenomenon. The study covers one million popular websites, uncovering the prevalence of script self-deletion, often used for legitimate purposes but also found in conjunction with other anti-analysis techniques in suspicious operations. While self-deletion is identified as a method to evade detection, the study also reveals its use for benign reasons, adding complexity to the analysis of JavaScript security. ## Evaluation **Strengths:** 1. **Innovative Research:** The study tackles an underexplored area in JavaScript analysis, focusing on the emerging trend of self-deleting scripts. 2. **Comprehensive Tool:** JSRay, the browser-based tool developed for this study, effectively captures and analyzes script deletion behavior. 3. **Significant Findings:** The research reveals that self-deleting scripts are prevalent and used for various reasons, including legitimate ones, challenging traditional assumptions about script deletion in web security. **Weaknesses:** 1. **Scope of Security Relevance:** The paper conflates security issues with non-security issues, lacking a detailed evaluation of truly security-sensitive behaviors. 2. **Data Collection Limitations:** The study's data collection method failed for a significant fraction of sites, potentially introducing bias. 3. **Lack of Clarity in Contributions:** The paper does not clearly differentiate its contributions from existing dynamic analysis techniques, particularly for handling obfuscated code. ## Suggestions for Improvement 1. **Distinguish Security Relevance:** Provide a more detailed analysis to differentiate between security-sensitive and non-security issues related to self-deleting scripts. 2. 
**Address Data Collection Gaps:** Clarify the impact of the failed data collection on the overall findings and explore ways to minimize biases. 3. **Clarify Technical Contributions:** Elaborate on how JSRay's approach to monitoring self-deleting scripts differs from and improves upon existing methods. ## Overall Impression The paper presents valuable insights into the complex nature of self-deleting JavaScript scripts on the web, highlighting both legitimate and suspicious uses. While the study offers significant contributions to understanding JavaScript behavior, it requires refinement in distinguishing between security and non-security aspects and clarifying the novelty of its technical approach. Furthermore, given that the connection to web security isn't robustly established, the Program Committee suggests that this paper would be more appropriately situated in the Web Mining and Content Analysis track, rather than the Security and Privacy track. This recommendation is based on the paper's stronger alignment with content analysis and web data mining, rather than direct implications for web security. ---
ADBeqIR10E
Detecting and Understanding Self-Deleting JavaScript Code
[ "Xinzhe Wang", "Zeyang Zhuang", "Wei Meng", "James Cheng" ]
Self-deletion is a well-known strategy frequently utilized by malware to evade detection. Recently, this technique has found its way into client-side JavaScript code, significantly raising the complexity of JavaScript analysis. In this work, we systematically study the emerging client-side JavaScript self-deletion behavior on the web. We tackle various technical challenges associated with JavaScript dynamic analysis and introduce JSRay, a browser-based JavaScript runtime monitoring system designed to comprehensively study client-side script deletion. We conduct a large-scale measurement of one million popular websites, revealing that script self-deletion is prevalent in the real world. While our findings indicate that most developers employ self-deletion for legitimate purposes, we also discover that self-deletion has already been employed together with other anti-analysis techniques for cloaking suspicious operations in client-side JavaScript.
[ "JavaScript", "anti-analysis techniques", "web browser" ]
https://openreview.net/pdf?id=ADBeqIR10E
CpLOtLBukJ
official_review
1,698,747,139,185
ADBeqIR10E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1311/Reviewer_5iQr" ]
review: **Strengths** + interesting findings about self-deleting scripts on the Web + technique to identify self-deleting scripts **Weaknesses** - experimental evidence does not support the correlation between self-deletion and malicious intent on the Web - data collection failed for substantial fraction of sites - clarification of ethical compliance for non-intrusive data collection - capturing dynamically generated JS code is not a new contribution - open-source tooling unclear Thank you for submitting your work to WWW'24. The paper was an interesting read. In fact, studying self-deleting scripts on the Web is crucial for understanding evolving security threats and protecting against them. The main strength of this paper lies in the findings about self-deleting scripts on the Web, such as the prevalence of script deletion, the characterization of first-party and third-party scripts involved (important for understanding possible defense mechanisms), and the different types of self-deleting techniques used in the wild. The tooling to identify such self-deleting behaviours is also an added plus. Another strength of this paper is the surprising findings presented in Section 6.2. The paper effectively establishes a correlation between self-deletion and access to sensitive browser APIs. Moreover, it highlights that self-deleted scripts are more prone to being blocked by ad-blocking lists like EasyList. However, I also have a few noteworthy concerns: - First, the results of data collection in Section 5.1 suggests that requests to about 130K domains were blocked, which, according to the paper, were due to the high volume of requests. This brings up two observations: i) what ethical considerations have you taken into account for non-intrusive data collection? ii) the failed sites are a substantial fraction of the dataset (more than one out of every ten), which can introduce potential bias into the derived conclusions. 
- An additional concern related to data collection pertains to the methodology employed, in that the study exclusively acquires and analyzes a single webpage from each website (the landing page), raising questions about the dataset's representativeness and comprehensiveness. - Then, there is also the question of whether self-deletion could indicate maliciousness on the Web. The paper presents weak evidence for a direct correlation between self-deleting scripts and malicious intent. Specifically, the authors reviewed 600 self-deleting scripts in Section 6.1, and found that only 8% of them appeared to engage in suspicious activities. This suggests that self-deleting scripts are more likely to signal benign functionality rather than malicious intent in the context of the Web. Furthermore, the quality and scale of this part of the study are also constrained by the limits of manual analysis, i.e., 600 scripts only, hard to reproduce and error-prone. - In Section 4.2.1, the authors mention that capturing dynamically generated code via eval() is a challenge and create a modified version of a browser engine. However, existing works (e.g., [1, 2]) use the 'Debugger.scriptParsed' event of the Chrome CDP [3] to capture all parsed scripts. This includes all scripts (including those that are dynamically loaded or dynamically generated via string evaluations). Therefore, this is not a new contribution. If the sole challenge addressed by RQ1 in Section 2.2 is the collection of JavaScript source code, it has already been accomplished in previous research. - Another issue that lies in the intersection of presentation and technical correctness is the lack of clarity regarding what the paper contributes for each of the challenges enumerated in Section 2.2. For example, it is lost on me how the proposed system, JSRay, can handle obfuscated code and in what aspects it differs from existing dynamic analysis and runtime monitoring techniques.
- Finally, it is unclear if JSRay will be open-source to benefit the community. Overall, I believe the authors have raised interesting points, and the paper holds the potential for acceptance provided that the authors clarify their compliance with ethical standards for non-intrusive data collection. References - [1] https://dl.acm.org/doi/10.1145/3372297.3417267 - [2] https://ieeexplore.ieee.org/document/10179403 - [3] https://chromedevtools.github.io/devtools-protocol/tot/Debugger/#event-scriptParsed ## Update After Rebuttal Thank you for answering my questions. The rebuttal addresses (most of) my concerns. After reading your answers, I have a few recommendations to improve the manuscript: - Consider incorporating clarifications about CDP methods like Debugger.scriptParsed into the paper. In Section 2.2, the "Dynamic Code" paragraph seems to discuss a challenge addressed in prior work using Debugger.scriptParsed. It focuses on covering dynamically loaded or generated JavaScript code, highlighting the limitation of network request-based approaches due to the absence of a source URL for inline scripts. However, what the paper actually contributes is addressing the challenge of identifying script tag containers (i.e., DOM nodes) on top of that. - Consider including a discussion covering ethical considerations (e.g., for data collection) and the limitations of your work (e.g., coverage of crawling, failed requests, etc). - Please make it clear in your paper that your tool will be open source. All in all, I think this is an excellent piece of work, and I am happy to recommend it for acceptance. questions: - In Section 5.1, what ethical considerations have you taken into account for non-intrusive data collection? - Will your tool be open source and publicly accessible to benefit our community?
- How does JSRay differ from existing dynamic analysis / runtime monitoring techniques, particularly for handling obfuscated code, as motivated in Section 2.2 ethics_review_flag: No ethics_review_description: Section 5.1 suggests that requests to about 130K domains were blocked, which, according to the paper, were due to the high volume of requests. This introduces the question as to whether the authors employed intrusive data collection and crawling methods? scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
ADBeqIR10E
Detecting and Understanding Self-Deleting JavaScript Code
[ "Xinzhe Wang", "Zeyang Zhuang", "Wei Meng", "James Cheng" ]
Self-deletion is a well-known strategy frequently utilized by malware to evade detection. Recently, this technique has found its way into client-side JavaScript code, significantly raising the complexity of JavaScript analysis. In this work, we systematically study the emerging client-side JavaScript self-deletion behavior on the web. We tackle various technical challenges associated with JavaScript dynamic analysis and introduce JSRay, a browser-based JavaScript runtime monitoring system designed to comprehensively study client-side script deletion. We conduct a large-scale measurement of one million popular websites, revealing that script self-deletion is prevalent in the real world. While our findings indicate that most developers employ self-deletion for legitimate purposes, we also discover that self-deletion has already been employed together with other anti-analysis techniques for cloaking suspicious operations in client-side JavaScript.
[ "JavaScript", "anti-analysis techniques", "web browser" ]
https://openreview.net/pdf?id=ADBeqIR10E
B0mo1FtA2F
official_review
1,700,312,269,554
ADBeqIR10E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1311/Reviewer_kBNH" ]
review: This work presents a large-scale analysis of the behaviors of self-deleting JavaScript code in the top 1 million pages. The authors present a novel browser-based solution that collects inclusion trees, and more importantly, records deletion behaviors of running JS code. Analyzing the behavior of in-the-wild JavaScript with respect to making the web a safer place for consumers remains an important focus of the attention of the security community. The authors present an evaluation that ~8% of sampled self-deleting scripts perform some kind of malicious activity, which sounds concerning. But the evaluation conflates legitimate issues with questionable security-relevant issues. I.e., tracking user activities is a legitimate practice employed by website owners to monetize their business. Similarly, relying on tracking or ad lists to correlate "undesired" behavior does not have immediate security impact. The evaluation of the suspicious activity requires a more fine-grained evaluation to allow for proper evaluation of security-sensitive vs "undesired" behaviors. Overall, I feel like the tangible security issues presented by the paper are conflated with non-security issues and that the work might be better suited for a non-security focussed track in its current form. The paper claims that it is the first one to investigate the inclusion trees spanned outside of those observable via network requests, which was already done in [34]. Similarly, the authors are distinguishing between 1st and 3rd parties without considering 1st party CDNs etc, similar to the concept of extended same party of [34]. Further analyses that might help shed more light onto the landscape of self-deleting scripts: - Where do most of the scripts performing the deletion come from? E.g., are those mostly libraries/integration with ad/tracking vendors? E.g., what about a malvertising script distributed via a legitimate ad vendor.
We would count the ad as trying to "hide" whereas the distribution mechanism actually performs the deletion as "good" practice to not pollute the DOM. - What is the exact distribution among the 8% of scripts that are hiding their malicious behavior? - Does self-deletion happen predominantly together with obfuscation? I would assume that obfuscation is the stronger "hiding" technique? Pros: - JSRay is a nice system to understand script inclusion and deletion behavior - Anti-analysis techniques have the potential to be very security sensitive Cons: - security impact is shallow, and needs more detailed/extensive evaluation to shine - main takeaways fit better into a measurement focussed track than a security track - techniques are based on an already outdated Chrome Nitpicks: - use they/them instead of other pronouns for, e.g., the attacker questions: - Where are the deletions predominantly happening, e.g., if we see them happening in the head that is part of a different workflow than, e.g., somewhere in the middle of the document? - Are self-deletion practices correlated with single page applications? - Why would we employ self-deletion over obfuscation, i.e., looking at an obfuscated piece of JS code feels a lot harder to analyze than a self-deleting one? - Is self-deletion predominantly a library feature? E.g., besides jQuery I would assume that Rocket Loader and similar technology contributes to the majority of cases. - If, in the case study, the script performs a redirect anyway, is there a need to perform the self-deletion? E.g., if it is a network script we can easily investigate in the sources devtools tab, or if we pause the script before the navigation we could investigate the DOM for an inline script, or add breakpoints on deletions from the DOM.
ethics_review_flag: No ethics_review_description: n/a scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
A711xj2EWt
Query Optimization for Ontology-Mediated Query Answering
[ "Wafaa EL HUSSEINI", "Cheikh Brahim EL VAIGH", "François Goasdoué", "Helene Jaudoin" ]
Ontology-mediated query answering (OMQA) consists in asking database queries on knowledge bases (KBs); a KB is a set of facts called the KB's database, which is described by domain knowledge called the KB's ontology. A widely-investigated OMQA technique is FO-rewriting: every query asked on a KB is reformulated w.r.t. the KB's ontology, so that the query answers are computed by the relational evaluation of the query reformulation on the KB's database. Crucially, because FO-rewriting compiles the domain knowledge relevant to queries in their reformulations, query reformulations may be complex and their optimization is the crux of efficiency. We devise a novel optimization framework for a large set of OMQA settings that enjoy FO-rewriting: conjunctive queries, i.e., the core select-project-join queries, asked on KBs expressed in datalog+/- and existential rules, description logic and OWL, or RDF/S. We optimize the query reformulations produced by state-of-the-art FO-rewriting algorithms by computing rapidly, with the help of a KB's database summary, simpler (contained) queries with the same answers that can be evaluated faster by RDBMSs. We show on a well-established OMQA benchmark that time performance is significantly improved by our optimization framework in general, up to three orders of magnitude.
[ "Ontology-Mediated Query Answering", "FO-rewriting", "Query optimization", "Query containment", "Database summaries" ]
https://openreview.net/pdf?id=A711xj2EWt
zv2AR87tQf
official_review
1,701,003,545,641
A711xj2EWt
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1499/Reviewer_KbRj" ]
review: The submission studies the problem of ontology-mediated query answering. The focus is on optimizing query rewritings using a summary of the database. Some experiments are conducted and show that the technique works well for rewritings in the shape of unions of conjunctive queries (UCQs) and in half of the cases for rewritings in the shape of joins of UCQs (JUCQs). UCQs are prone to becoming very large in practice, but the fact that the optimization works there is hardly surprising - most of the combinations of atoms do not co-occur in the test database (which is what the submission actually demonstrates). JUCQs in general actually include UCQs, but the authors focus on one particular way of generating them (and this is not very well explained in the submission), which is why the presented results are somewhat surprising. To sum up - the experimental evaluation is not particularly extensive and a bit difficult to draw any conclusions from. The list "datalog\pm and existential rules, description logic and OWL, or RDF/S" does not make much sense: the "and" seems to join alternative names for the same languages; also, RDF/S normally denotes "RDF or RDFS" - as there is not much FO-rewriting in plain RDF, it should really be RDFS, but then it's subsumed by OWL. More seriously though, this list tries to cover too much in too few words (additional explanations are needed). And it's used twice in the text - in the abstract and then again on p 1, without any elaboration. The note that "non-recursive datalog programs ... unfold into UCQs" (p 1) is irrelevant and actually misleading - JUCQs and USCQs also unfold into UCQs, but it does not characterize them. The point is that non-recursive datalog not only generalizes JUCQs and USCQs, but also allows shared subqueries without any duplication. The difference between minimal and compact reformulations is unclear (without reading the cited papers). Normally, minimal would be compact too...
The authors claim novelty and originality of the optimization framework in Introduction and then re-iterate it in Conclusions. However, if we look at Example 4 in [11], then we'll also see an example of a data-dependent optimization: (1), (4) and (8) are derived from (0) by instantiating variable y with the three class names -- this could only work if we know that these are the only classes in the triple store (in other words, we have a summary of the triple store, which tells us which classes are empty and which are not - what if someone adds a triple of the form IRI rdf:type :CrimeNovel?). Such optimizations are quite natural and implemented, for example, in Ontop: [*1] R. Kontchakov, M. Rezk, M. Rodriguez-Muro, G. Xiao, M. Zakharyaschev: Answering SPARQL Queries over Databases under OWL 2 QL Entailment Regime. ISWC (1) 2014: 552-567 So, perhaps, more credit needs to be given to the authors of [11] and [*1]. Also, there is a considerable amount of work on using integrity constraints on the data in optimizing query answering: e.g., [*2] J. Mora, R. Rosati, O. Corcho: kyrie2: Query Rewriting under Extensional Constraints in ELHIO. ISWC (1) 2014: 568-583 Integrity constraints (or ABox dependencies) can also be viewed as "summaries" of the data (they are certainly data-dependent). The explanation "a simpler contained one, i.e., a simpler more specific one, with the same answers on a fixed database" is incomplete and misleading: first, the notion of query containment is not defined (and "a contained one" is actually quite an awkward way of bringing query containment into the sentence); second, "a more specific one" does not add any clarity to it; third, "the same answers" does not correspond to query containment (which only guarantees the subset relation between answers).
In the definition of FO queries, the authors could be more specific - the proposed technique does not work with queries containing negation (even in the contexts where it can be eliminated, e.g., \neg (\neg A \lor \neg B)), so, perhaps, it would make sense to concentrate on existential positive queries from the very beginning. The claim on p 3 that "datalog-nr reformulations must be unfolded into UCQs reformulations... to be evaluated by RDBMSs" is not factually correct - some DB engines support enough of Common Table Expressions (CTEs) to deal with non-recursive datalog. It may not be the most efficient support, but saying "must be unfolded" is clearly incorrect. The authors list 4 papers for "the worst-case number of CQs that are maximally-contained in a CQ ... is exponential in the size of the CQ" - but really, the result is a simple observation made in [17] - the rest of the references are not needed. Also, what is "a lightweight RDFS"? What is datalog\pm0? The paragraph after Problem 1 does not clarify anything at all. Why is this a bad "optimization"? It delivers the required results, does it not? The first item in the definition of quotient database (page 5) is unreadable. In Definition 4.9, is it not easier to say "the minimal equivalence relation containing all (t1,t2) such that both t1 and t2 are terms of the same unary relation"? It also needs to be made clearer that it is the same unary relation in D. But what is a unary relation in D? "Concept" does not clarify this, as A \sqcap B and \exists R.C are also concepts in DLs. Is the vocabulary of classes and properties assumed to be fixed in advance? Is this "unary relation" then a class name (from a fixed vocabulary)? Fixing vocabulary in advance is not very typical in RDF (and in DLs and logic in general, it is quite common to assume a countably infinite vocabulary). In Section 5, the "etc." in the list of rewriters is missing some names, e.g., Clipper.
The Conclusions section raises an important issue of computing and maintaining the summary. And this is where the usefulness of the proposed technique becomes less clear. First, in the materialization-based approaches, the "large and complex chase" can be stored in a compact way but it makes the query rewriting and answering steps very easy (in fact, almost trivial). The main drawback here is actually the need to have "write access" to the data - incrementally or not, the data needs to be extended. The rewriting approaches, on the other hand, have a penalty of expensive query rewriting but are applicable where the data cannot be changed or extended. The proposed approach seems to take the worst of the two - it does require some sort of "write access" to data and it also requires a potentially expensive query rewriting step. The argument with incremental updates does not really improve the situation - of course, we can imagine some sort of triggers that incrementally update the summary stored in some temporary tables, but that is still a sort of access that is often unavailable. If it is available, then why not go whole hog and materialize the chase? Typos: line 23: same -> the same; line 204: something is missing after the conjunctions over rules and facts or is it meant to be \bigwedge O and \bigwedge D? Footnote 1 is unnecessary - it's really basic stuff (but looks too technical for the level of the material). In DL-Lite_R, the L in Lite should be capital. Footnote 2 is poor as it introduces a notion. In line 498, the comma after "Then" should be removed. It's quite unusual to have a Conclusions paragraph in an individual section (such as Section 5). questions: Could the authors clarify how the submission fits the Call for Papers?
RDF and RDFS are briefly mentioned in the text, but are not really an essential component - the technique (as the authors write) applies to existential rules, and this conference, perhaps, is not the best place for a paper on existential rules. Also, any comment on why this approach is better than materialization (given that incremental updates of the summary would cost more or less the same)? ethics_review_flag: No ethics_review_description: N/A scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 3 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
A711xj2EWt
Query Optimization for Ontology-Mediated Query Answering
[ "Wafaa EL HUSSEINI", "Cheikh Brahim EL VAIGH", "François Goasdoué", "Helene Jaudoin" ]
Ontology-mediated query answering (OMQA) consists in asking database queries on knowledge bases (KBs); a KB is a set of facts called the KB's database, which is described by domain knowledge called the KB's ontology. A widely-investigated OMQA technique is FO-rewriting: every query asked on a KB is reformulated w.r.t. the KB's ontology, so that the query answers are computed by the relational evaluation of the query reformulation on the KB's database. Crucially, because FO-rewriting compiles the domain knowledge relevant to queries in their reformulations, query reformulations may be complex and their optimization is the crux of efficiency. We devise a novel optimization framework for a large set of OMQA settings that enjoy FO-rewriting: conjunctive queries, i.e., the core select-project-join queries, asked on KBs expressed in datalog+/- and existential rules, description logic and OWL, or RDF/S. We optimize the query reformulations produced by state-of-the-art FO-rewriting algorithms by computing rapidly, with the help of a KB's database summary, simpler (contained) queries with the same answers that can be evaluated faster by RDBMSs. We show on a well-established OMQA benchmark that time performance is significantly improved by our optimization framework in general, up to three orders of magnitude.
[ "Ontology-Mediated Query Answering", "FO-rewriting", "Query optimization", "Query containment", "Database summaries" ]
https://openreview.net/pdf?id=A711xj2EWt
myjXlD9PJJ
official_review
1,700,246,685,241
A711xj2EWt
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1499/Reviewer_oeDs" ]
review: # Overview: The paper studies the classical ontology based query answering but with a twist: while the traditional approach was to rewrite the ontology part into a huge query that works over any suitable set of base data, introducing potentially huge cost in terms of query evaluation, in the current paper the authors prune this rewriting so that it keeps only the part of the query relevant for the database we have at hand. An additional optimization which does this pruning based on the summary of the base data is also described and implemented. The proposed solution is quite simple, sound, and seems to work well in practice. # Strengths: - The setting/studied problem is very interesting and relevant. The approach which simply pushes all the logic/inference into the query rewriting had been the norm until now, and the optimization that is proposed here (which has to obtain this rewriting as well) is quite natural for this setting. Description of the studied problem in Problem 1 is spot on in that sense. - Overall, the writing is quite clear, the proofs are correct, and it is easy to understand the proposed solution. - The experimental data is available in a GitHub repository. - The experimental setup is decent and it nicely showcases for which class of queries the proposed solution works well, and where it runs into roadblocks. # Weaknesses: - The time complexity aspect of Problem 1 was never fully tackled. One could argue that the experimental evaluation does the trick here, but some theoretical guarantees would also be nice I guess. This is particularly relevant since the q^\fancyO rewriting can already be quite big, and decomposing it further might cause some more blowups (I am not sure if I am off here, but distributing ors over ands can cause exponential rise in the number of terms generally). Perhaps some discussion on this issue is warranted. - The writing, while generally excellent, is lacking several details in places.
For someone reviewing query containment papers for the past twenty years this might not be an issue, but a non-expert user might be lost at times. I provide concrete suggestions for tightening the writing below. - Dwelling a bit deeper into different data summaries might be interesting for the experimental evaluation, but I understand that the space is very limited. # Recommendation: Overall, I would be more than happy to provide my support for this paper. While the proposed solution is quite simple and natural, I think it fills a nice niche in the area of ontology mediated query answering. Pushing this further, perhaps the real contribution is the setup for this problem to be stated. The solution itself is almost trivial, but getting there might not be and I would like to fully acknowledge this. # Post Rebuttal: I would like to thank the authors for their very detailed responses which indeed clarified my doubts. I will stick with my original recommendation and am suggesting the paper to be accepted. # Some comments on writing: - On page 2, when defining query answers it should be either said that you consider logical implication, or define what certain answers are (providing references as well). - On page 3, relational evaluation of q^\fancyO is not clear since the database can have variables. Please clarify. - At the beginning of Section 4.1, I am getting a bit lost with the nomenclature. Most notably as to why ww(h,x) is derived. A similar confusion arises when reading Theorem 4.5. I guess that the issue is the fact that we lose the context of the presentation. Perhaps it would be worth stressing again here that we are given q^\fancyO and we run from there. This q^\fancyO already derived all the facts it needed, so we just look at the logical structure of the formulas we are processing. - On page 5, when defining the quotient \sigma, why not just spell out the formula, e.g. \sigma(t)=c^i_\equiv , where t\in c^i_\equiv. - The use of "e.g."
in the Intro is off. questions: - Are there any particular reasons why UCQ optimizations have the lowest optimization ratio in Table 4? From the time results I would expect the opposite behavior, but perhaps I misunderstood the optimization ratio metric. - What is the size of the q^\fancyO rewritings obtained for the LUBM ontology? - As a curiosity, since a relatively large disk is used, are there any space bottlenecks for the implementation? I know these would just boil down to Postgres, but possibly the rewriting of tested queries pushes it quite far? - Did the authors try to scale the experiment to, e.g., 1B facts? Does the solution scale to this degree? ethics_review_flag: No ethics_review_description: No issues detected scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
A711xj2EWt
Query Optimization for Ontology-Mediated Query Answering
[ "Wafaa EL HUSSEINI", "Cheikh Brahim EL VAIGH", "François Goasdoué", "Helene Jaudoin" ]
Ontology-mediated query answering (OMQA) consists in asking database queries on knowledge bases (KBs); a KB is a set of facts called the KB's database, which is described by domain knowledge called the KB's ontology. A widely-investigated OMQA technique is FO-rewriting: every query asked on a KB is reformulated w.r.t. the KB's ontology, so that the query answers are computed by the relational evaluation of the query reformulation on the KB's database. Crucially, because FO-rewriting compiles the domain knowledge relevant to queries in their reformulations, query reformulations may be complex and their optimization is the crux of efficiency. We devise a novel optimization framework for a large set of OMQA settings that enjoy FO-rewriting: conjunctive queries, i.e., the core select-project-join queries, asked on KBs expressed in datalog+/- and existential rules, description logic and OWL, or RDF/S. We optimize the query reformulations produced by state-of-the-art FO-rewriting algorithms by computing rapidly, with the help of a KB's database summary, simpler (contained) queries with same answers that can be evaluated faster by RDBMSs. We show on a well-established OMQA benchmark that time performance is significantly improved by our optimization framework in general, up to three orders of magnitude.
[ "Ontology-Mediated Query Answering", "FO-rewriting", "Query optimization", "Query containment", "Database summaries" ]
https://openreview.net/pdf?id=A711xj2EWt
bfWSI3cMjd
official_review
1,700,882,175,091
A711xj2EWt
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1499/Reviewer_b2Sf" ]
review: The paper proposes optimization techniques for Ontology-Mediated Query Answering. The proposed approach makes use of FO-rewriting to first formulate the required query, which is then further optimized by computing simpler (contained) queries with the same answers that can be evaluated faster by RDBMSs. The optimization makes use of a KB's database summary. The evaluation is based on the LUBM benchmark and the results are promising. In general, the topic of the paper is relevant to the conference topics. Unfortunately, I found the paper hard to follow in general. Maybe some running examples could have made it easier to follow. questions: Q1. The evaluation results are based on only 9 queries. Do you really think that these queries are sufficient to draw solid conclusions? Q2. Are the selected techniques used for comparison with the proposed approach state of the art? Q3. I am a little curious whether ontology-mediated query answering is used in practice. Some real-world usage might be interesting to discuss. ethics_review_flag: No ethics_review_description: Nothing scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 4 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
A711xj2EWt
Query Optimization for Ontology-Mediated Query Answering
[ "Wafaa EL HUSSEINI", "Cheikh Brahim EL VAIGH", "François Goasdoué", "Helene Jaudoin" ]
Ontology-mediated query answering (OMQA) consists in asking database queries on knowledge bases (KBs); a KB is a set of facts called the KB's database, which is described by domain knowledge called the KB's ontology. A widely-investigated OMQA technique is FO-rewriting: every query asked on a KB is reformulated w.r.t. the KB's ontology, so that the query answers are computed by the relational evaluation of the query reformulation on the KB's database. Crucially, because FO-rewriting compiles the domain knowledge relevant to queries in their reformulations, query reformulations may be complex and their optimization is the crux of efficiency. We devise a novel optimization framework for a large set of OMQA settings that enjoy FO-rewriting: conjunctive queries, i.e., the core select-project-join queries, asked on KBs expressed in datalog+/- and existential rules, description logic and OWL, or RDF/S. We optimize the query reformulations produced by state-of-the-art FO-rewriting algorithms by computing rapidly, with the help of a KB's database summary, simpler (contained) queries with same answers that can be evaluated faster by RDBMSs. We show on a well-established OMQA benchmark that time performance is significantly improved by our optimization framework in general, up to three orders of magnitude.
[ "Ontology-Mediated Query Answering", "FO-rewriting", "Query optimization", "Query containment", "Database summaries" ]
https://openreview.net/pdf?id=A711xj2EWt
O5DlvVge2w
official_review
1,700,560,971,812
A711xj2EWt
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1499/Reviewer_uAN2" ]
review: This paper deals with the problem of query optimization when ontological knowledge needs to be taken into account. There are several approaches to this problem, the most common one being to rewrite the query in order to take into account ontological knowledge. This is a generic method, that does not consider the current contents of the database, only the ontological knowledge, and, thus, it is applicable to all possible databases. The authors' work takes this approach one step further and considers the underlying database as well during query rewriting. The way this is done is by taking the generic query reformulation (rewriting) above, and removing conjunctive query clauses (CQs) that definitely have no answer in the specific database. The identification of the CQs with no answer can be done by querying the database itself, or a properly generated summary of the database (the latter is more efficient, of course). This is experimentally shown (using the LUBM benchmark) to improve query evaluation performance, in the general case. The paper is well-written, albeit a bit verbose at places. The considered problem is an important one. The method is rather simple and not highly sophisticated (especially the summary-generation process; see below) but seems sound and effective for its stated purposes. My main comments are associated with the summary-generation process, which I found a bit simplistic. It is clear that the effectiveness of the authors' method depends on the form of the summary. The chosen summary-generation process essentially collapses all instances of the same class into one. This has the side-effect that if there are instances belonging to multiple classes then this will cause all instances of all involved classes to collapse as well. And this can have cascading effects, essentially collapsing large parts of the database. To paraphrase the authors' example, suppose that we have two more classes, male and female. 
Since we have male PhD students (and supervisors) and female PhD students (and supervisors), we will end up with a summary having a single instance, essentially collapsing the members of two classes that are supposed to be disjoint (PhD students and supervisors). The fact that disjoint classes can collapse will cause many false negatives in the process of identifying CQs with no answers. Note that this is a common pattern in rich ontologies, where each instance may be classified against multiple different characteristics (e.g., profession, gender, nationality, ...) which are orthogonal to each other and will cause "collapses" in the above sense. This will significantly hinder the effectiveness of the method, leading to a lot of false negatives as regards the identification of empty CQs. Despite the fact that the experimental evaluation showed good results under this summary-generation algorithm, I'm still concerned about its effectiveness in the general case. Note that the experiments only considered one single dataset (LUBM), and LUBM, as far as I remember, does not have Male/Female classes or other such major "orthogonal" classifications. Also, are implicit instantiations considered during the summary generation? I suppose (and hope) that this is not the case, but this is not clarified in the paper. If so, the existence of a general class (e.g., similar to owl:Thing or rdf:Resource) will ruin any chance of identifying empty CQs. I'm pretty sure the answer is "no", otherwise the mere existence of the class "Person" in LUBM would cause the approach to fail. On Section 5: the authors provide only some of the figures of their experimental evaluation due to space considerations. It would be good to have the remaining ones in the appendix (like the theorem proofs). Typo: - "up to more 3 orders" I wish to thank the authors for their comments, clarifications and acknowledging the observation about efficiency. 
It is understood that the modelling pattern I mentioned is not ubiquitous. However, it exists, and thus limits the applicability of the authors' approach. I understand and agree that the problem could be resolved with an alternative summarization method. I believe that considering the effect of different summarization methods could be a nice addition to a future paper. questions: See main review, in particular the comments about the summary-generation process. The authors are asked to comment on the comments stated above. ethics_review_flag: No ethics_review_description: i selected NO above and yet description is needed. Please fix the bug. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
A711xj2EWt
Query Optimization for Ontology-Mediated Query Answering
[ "Wafaa EL HUSSEINI", "Cheikh Brahim EL VAIGH", "François Goasdoué", "Helene Jaudoin" ]
Ontology-mediated query answering (OMQA) consists in asking database queries on knowledge bases (KBs); a KB is a set of facts called the KB's database, which is described by domain knowledge called the KB's ontology. A widely-investigated OMQA technique is FO-rewriting: every query asked on a KB is reformulated w.r.t. the KB's ontology, so that the query answers are computed by the relational evaluation of the query reformulation on the KB's database. Crucially, because FO-rewriting compiles the domain knowledge relevant to queries in their reformulations, query reformulations may be complex and their optimization is the crux of efficiency. We devise a novel optimization framework for a large set of OMQA settings that enjoy FO-rewriting: conjunctive queries, i.e., the core select-project-join queries, asked on KBs expressed in datalog+/- and existential rules, description logic and OWL, or RDF/S. We optimize the query reformulations produced by state-of-the-art FO-rewriting algorithms by computing rapidly, with the help of a KB's database summary, simpler (contained) queries with same answers that can be evaluated faster by RDBMSs. We show on a well-established OMQA benchmark that time performance is significantly improved by our optimization framework in general, up to three orders of magnitude.
[ "Ontology-Mediated Query Answering", "FO-rewriting", "Query optimization", "Query containment", "Database summaries" ]
https://openreview.net/pdf?id=A711xj2EWt
LxrxD77gaI
decision
1,705,909,231,555
A711xj2EWt
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: * (+) There are no concerns with scope * (+) Novelty and technical quality scores are largely uniform. An accepted paper should include promised changes/improvements/clarifications from the discussion period
A711xj2EWt
Query Optimization for Ontology-Mediated Query Answering
[ "Wafaa EL HUSSEINI", "Cheikh Brahim EL VAIGH", "François Goasdoué", "Helene Jaudoin" ]
Ontology-mediated query answering (OMQA) consists in asking database queries on knowledge bases (KBs); a KB is a set of facts called the KB's database, which is described by domain knowledge called the KB's ontology. A widely-investigated OMQA technique is FO-rewriting: every query asked on a KB is reformulated w.r.t. the KB's ontology, so that the query answers are computed by the relational evaluation of the query reformulation on the KB's database. Crucially, because FO-rewriting compiles the domain knowledge relevant to queries in their reformulations, query reformulations may be complex and their optimization is the crux of efficiency. We devise a novel optimization framework for a large set of OMQA settings that enjoy FO-rewriting: conjunctive queries, i.e., the core select-project-join queries, asked on KBs expressed in datalog+/- and existential rules, description logic and OWL, or RDF/S. We optimize the query reformulations produced by state-of-the-art FO-rewriting algorithms by computing rapidly, with the help of a KB's database summary, simpler (contained) queries with same answers that can be evaluated faster by RDBMSs. We show on a well-established OMQA benchmark that time performance is significantly improved by our optimization framework in general, up to three orders of magnitude.
[ "Ontology-Mediated Query Answering", "FO-rewriting", "Query optimization", "Query containment", "Database summaries" ]
https://openreview.net/pdf?id=A711xj2EWt
KCFZFlasqu
official_review
1,700,836,622,326
A711xj2EWt
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1499/Reviewer_3U1r" ]
review: The paper considers a general scenario of ontology-mediated query answering by query rewriting. The details vary, but the general idea is to rewrite queries in the presence of an ontology such that the answers to the rewritten query, without any more reasoning, are the same as the certain answers to the original query taking into account the ontology. It is central to this approach that the rewritten query is independent of the actual data. As a consequence, however, it will often have poor performance on specific datasets because of parts of the query being redundant. This work suggests an approach where the rewritten query is further optimised using the dataset, while still preserving the query answers. To avoid the inefficiencies of considering the whole database for this purpose, the method works on a "summary" of the database. Depending on the summary used, the optimisation will be more or less effective at removing redundant parts of the query. I find this to be valuable work in general, and novel as far as I can tell. I have a few questions below that I would like to have answered. questions: 1) The statement of Problem 1 contains requirement (1), followed by an explanation that other optimisations, that are not subqueries of the original, are not of interest. I ask why? If the algorithm can guarantee that rewriting "Where does TheWebConf 2024 take place" to "Where does Petra live" is sound FOR THIS DATASET, and also that doing this optimisation and evaluating the resulting query will take less time than evaluating the original query, what is wrong with that? It seems to me that requirement (1) is a property of your solution and not something necessarily required for solving the actual problem. 2) The use of the computation time τ(.) in a formal definition is problematic. The computation time is not well defined. Can this be replaced by a reasonable, formally defined metric on queries? 
3) The problem of irrelevant disjuncts in query rewritings has been studied previously. E.g., the following paper considers the problem in the case where the dataset is described not by a summary, but by certain kinds of constraints: OBDA Constraints for Effective Query Answering, https://link.springer.com/chapter/10.1007/978-3-319-42019-6_18 In general, I would have liked to see more comparison of this summary-based approach to that of optimising the rewritten query based on database constraints, SHACL shapes, etc. 4) The approach is presented as an integral part of an OMQA approach. It seems to me that the paper could be simply about the optimisation of queries using dataset summaries. Possibly the types of redundancies the presented approach is effective for are particularly prevalent in the result of FO rewriting? ethics_review_flag: No ethics_review_description: not applicable scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9jj7cMOXQo
Towards Cross-Table Masked Pretraining for Web Data Mining
[ "Chao Ye", "Guoshan Lu", "Haobo Wang", "Liyao Li", "Sai Wu", "Gang Chen", "Junbo Zhao" ]
Tabular data --- also known as structured data --- pervades the landscape of the World Wide Web, playing a foundational role in the digital architecture that underpins online information. Given the recent influence of large-scale pre-trained models like ChatGPT and SAM across various domains, exploring the application of pretraining techniques for mining tabular data on the web has emerged as a highly promising research direction. Indeed, there have been some recent works around this topic where most (if not all) of them are limited in the scope of a fixed-schema/single table. Due to the scale of the dataset and the parameter size of the prior models, we believe that we have not reached the ''BERT moment'' for the ubiquitous tabular data. The development on this line significantly lags behind the counterpart research domains such as natural language processing. In this work, we first identify the crucial research challenges behind tabular data pretraining, particularly overcoming the cross-table hurdle. As a pioneering endeavor, this work mainly (i)-contributes a high-quality real-world tabular dataset, (ii)-proposes an innovative, generic and efficient cross-table pretraining framework, dubbed as CM2, where the core to it comprises a semantic-aware tabular neural network that uniformly encodes heterogeneous tables without much restriction and (iii)-introduces a novel pretraining objective --- Prompt Masked Table Modeling (pMTM) --- inspired from NLP but intricately tailored to scalable pretraining on tables. Our extensive experiments demonstrate CM2's state-of-the-art performance and validate that cross-table pretraining can enhance the performance of various downstream tasks.
[ "Tabular Data", "Data Mining", "Pre-training" ]
https://openreview.net/pdf?id=9jj7cMOXQo
qX7O0Eskyq
official_review
1,700,423,675,673
9jj7cMOXQo
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2456/Reviewer_zHj3" ]
review: Summary ======= The paper proposes a large-scale dataset (OpenTabs) and pretrained model for cross-table predictions (CM2), where the model is pretrained on a different set of tables with a different number and order of columns than when making predictions. The dataset contains over 2,000 tables with over 46 million rows in total (2.9 GB of compressed data). The model encodes the tables in a column-invariant way by means of (set) transformers and a new self-supervised training objective. Pros ==== - Novel large-scale dataset - Novel large-scale pretrained model for tabular data which is highly relevant for many tasks on the web - Outperforms previous methods on various downstream tasks (regression, anomaly detection, missing value imputation) - Source code and data are publicly available Cons ==== - Some details of the approach could be described in more detail (see below) Details ======= - Section 4.1 introduces y_i (classes of labels). However, it is unclear how y_i is used later on. The model is trained by masking and does not require a supervised label. Is the y_i used for the downstream tasks (regression, anomaly detection, missing value imputation)? - More details about the constructed tabular dataset, such as the sources, types of data included, and any preprocessing steps would be helpful - "we refine the atomic units "feature" within the table into a sequence of words, which also includes the corresponding column name schema information" How exactly is the column name schema information included? - How exactly are Sections 4.2.1 and 4.2.2 related? I assume that 4.2.2 describes the encoder. However, it would be great to explicitly state this to make it easier for the reader. By the way, there is some related work on set transformers available. - "Different from previous tabular reconstruction endeavors [1, 48], our first attempt to use column names as prompt to assist in predicting masked features." 
This is not a complete sentence and a verb is missing. questions: - Can you provide more details on how the tabular dataset was curated? How did you decide which tables to include and which tables to exclude? Can you provide more detailed statistics on the dataset? (exact number of tables by source, number of rows, ...) ethics_review_flag: No ethics_review_description: no concerns scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9jj7cMOXQo
Towards Cross-Table Masked Pretraining for Web Data Mining
[ "Chao Ye", "Guoshan Lu", "Haobo Wang", "Liyao Li", "Sai Wu", "Gang Chen", "Junbo Zhao" ]
Tabular data --- also known as structured data --- pervades the landscape of the World Wide Web, playing a foundational role in the digital architecture that underpins online information. Given the recent influence of large-scale pre-trained models like ChatGPT and SAM across various domains, exploring the application of pretraining techniques for mining tabular data on the web has emerged as a highly promising research direction. Indeed, there have been some recent works around this topic where most (if not all) of them are limited in the scope of a fixed-schema/single table. Due to the scale of the dataset and the parameter size of the prior models, we believe that we have not reached the ''BERT moment'' for the ubiquitous tabular data. The development on this line significantly lags behind the counterpart research domains such as natural language processing. In this work, we first identify the crucial research challenges behind tabular data pretraining, particularly overcoming the cross-table hurdle. As a pioneering endeavor, this work mainly (i)-contributes a high-quality real-world tabular dataset, (ii)-proposes an innovative, generic and efficient cross-table pretraining framework, dubbed as CM2, where the core to it comprises a semantic-aware tabular neural network that uniformly encodes heterogeneous tables without much restriction and (iii)-introduces a novel pretraining objective --- Prompt Masked Table Modeling (pMTM) --- inspired from NLP but intricately tailored to scalable pretraining on tables. Our extensive experiments demonstrate CM2's state-of-the-art performance and validate that cross-table pretraining can enhance the performance of various downstream tasks.
[ "Tabular Data", "Data Mining", "Pre-training" ]
https://openreview.net/pdf?id=9jj7cMOXQo
RtKGTnC24j
decision
1,705,909,257,997
9jj7cMOXQo
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: The paper addresses the underexplored application of pretraining techniques, specifically focusing on tabular data mining on the web, a domain that has not yet experienced a "BERT moment." The proposed CM2 framework introduces a high-quality real-world tabular dataset, an innovative cross-table pretraining approach, and a novel pretraining objective, pMTM, demonstrating state-of-the-art performance and highlighting the potential of cross-table pretraining for improving downstream tasks. - The paper is well written, addresses an under-explored area, and provides a solid benchmark as well as a new dataset. - Experimental details are unclear, with incremental improvements in certain tasks and some lack of clarity in the last sections
9jj7cMOXQo
Towards Cross-Table Masked Pretraining for Web Data Mining
[ "Chao Ye", "Guoshan Lu", "Haobo Wang", "Liyao Li", "Sai Wu", "Gang Chen", "Junbo Zhao" ]
Tabular data --- also known as structured data --- pervades the landscape of the World Wide Web, playing a foundational role in the digital architecture that underpins online information. Given the recent influence of large-scale pre-trained models like ChatGPT and SAM across various domains, exploring the application of pretraining techniques for mining tabular data on the web has emerged as a highly promising research direction. Indeed, there have been some recent works around this topic where most (if not all) of them are limited in the scope of a fixed-schema/single table. Due to the scale of the dataset and the parameter size of the prior models, we believe that we have not reached the ''BERT moment'' for the ubiquitous tabular data. The development on this line significantly lags behind the counterpart research domains such as natural language processing. In this work, we first identify the crucial research challenges behind tabular data pretraining, particularly overcoming the cross-table hurdle. As a pioneering endeavor, this work mainly (i)-contributes a high-quality real-world tabular dataset, (ii)-proposes an innovative, generic and efficient cross-table pretraining framework, dubbed as CM2, where the core to it comprises a semantic-aware tabular neural network that uniformly encodes heterogeneous tables without much restriction and (iii)-introduces a novel pretraining objective --- Prompt Masked Table Modeling (pMTM) --- inspired from NLP but intricately tailored to scalable pretraining on tables. Our extensive experiments demonstrate CM2's state-of-the-art performance and validate that cross-table pretraining can enhance the performance of various downstream tasks.
[ "Tabular Data", "Data Mining", "Pre-training" ]
https://openreview.net/pdf?id=9jj7cMOXQo
J2dxiYGVQ7
official_review
1,700,616,366,237
9jj7cMOXQo
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2456/Reviewer_Vefn" ]
review: This paper addresses the challenges in tabular data pretraining, including proposing a high-quality real-world tabular dataset, an efficient cross-table pretraining framework and a novel pretraining objective, following the prompting and masking ideas from the NLP domain. The pretrained model can be used in various downstream tasks, such as classification, regression and anomaly detection. As I am not very familiar with tabular data processing, I am not sure whether there are some related works not covered in the paper, besides those appearing in Table 1, especially for the scale issue. Pros: 1. The pretraining model for tabular data is very significant and worth studying, and the dataset is an important contribution for future research. 2. The model is sound and the experiments are extensive enough to demonstrate the superiority of this model. 3. The structure as well as the presentation of the paper is fine, and the paper is easy to follow. Core sentences are emphasized to help readers understand the key ideas of the paper. Cons: 1. The information in Figure 1 is a little insufficient. The challenges of tabular data pretraining are not reflected. 2. There are some typos in the paper. For example, on page 2, line -2, what does "cule" mean? Should it be "cue"? questions: There are some typos in the paper. For example, on page 2, line -2, what does "cule" mean? Should it be "cue"? ethics_review_flag: No scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9jj7cMOXQo
Towards Cross-Table Masked Pretraining for Web Data Mining
[ "Chao Ye", "Guoshan Lu", "Haobo Wang", "Liyao Li", "Sai Wu", "Gang Chen", "Junbo Zhao" ]
Tabular data --- also known as structured data --- pervades the landscape of the World Wide Web, playing a foundational role in the digital architecture that underpins online information. Given the recent influence of large-scale pre-trained models like ChatGPT and SAM across various domains, exploring the application of pretraining techniques for mining tabular data on the web has emerged as a highly promising research direction. Indeed, there have been some recent works around this topic where most (if not all) of them are limited in the scope of a fixed-schema/single table. Due to the scale of the dataset and the parameter size of the prior models, we believe that we have not reached the ''BERT moment'' for the ubiquitous tabular data. The development on this line significantly lags behind the counterpart research domains such as natural language processing. In this work, we first identify the crucial research challenges behind tabular data pretraining, particularly overcoming the cross-table hurdle. As a pioneering endeavor, this work mainly (i)-contributes a high-quality real-world tabular dataset, (ii)-proposes an innovative, generic and efficient cross-table pretraining framework, dubbed as CM2, where the core to it comprises a semantic-aware tabular neural network that uniformly encodes heterogeneous tables without much restriction and (iii)-introduces a novel pretraining objective --- Prompt Masked Table Modeling (pMTM) --- inspired from NLP but intricately tailored to scalable pretraining on tables. Our extensive experiments demonstrate CM2's state-of-the-art performance and validate that cross-table pretraining can enhance the performance of various downstream tasks.
[ "Tabular Data", "Data Mining", "Pre-training" ]
https://openreview.net/pdf?id=9jj7cMOXQo
D4uTiOYbP7
official_review
1,700,081,317,211
9jj7cMOXQo
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2456/Reviewer_aot2" ]
review: The paper presents a new pre-trained model for tabular data called CM2. In addition, the paper also contributes a set of tabular datasets to be used for pre-training and fine-tuning. Overall, the paper is easy to follow and introduces a new method to address tabular data. I do, however, have some concerns about the scope of the problem addressed and the analysis provided in the paper. Please see the list of Pros and Cons below. Pros: S1: The paper is very easy to follow, providing nice illustrations and efficiently utilizing font gestures (bold/underline/...) to highlight the main ideas and insights. S2: The paper provides open-source resources for the community including code, model and pre-training datasets. This effort also includes a nice benchmarking of tasks (mainly Table 2) that can take us one step closer to a GLUE[1]-like environment for tabular data. S3: The benefits of deep learning (and more specifically pre-trained models based on transformers) are still under-explored for tabular data and I believe that this paper takes this research one step forward. Cons: W1: The scope of Tabular Data is unclear: W1.1: The paper claims to address cross-table training. It is not completely clear to me where "cross-table" comes into play. Specifically, Section 4.3, which discusses "cross-table" pretraining, uses masks over single tables, aiming to address heterogeneity. I think renaming is in order here. This leads me to cross-table tasks. W1.2: Not only is the name "cross-table" pretraining confusing here, the paper also does not introduce real cross-table tasks such as Entity Matching [2], Unionable/Joinable Table Search [3], and others, which are explicit cross-table tasks that can potentially be used for pretraining. I would, at least, expect a discussion about these lines of work in the related work section and would have deeply appreciated using these tasks for (pre)training and inference. 
W1.3: How does this model address missing data and metadata (e.g., lack of headers)? Specifically, it seems like the training of the model depends on the existence of metadata, which seems to be a scarce resource in contemporary environments. [4] W1.4: The problem definition is confusing. Based on the definition given in Section 4.1, it seems like the tasks being solved are only "row-based"; is that really the case (a single yi for xi, which represents a row)? How does this translate to the cls token? W2: Some experiment details (+dataset) are vague and require more details: W2.1: Some of the improvements seem marginal. It would have been helpful if an additional indication were added to the tables (e.g., a stat. sig. test or SD). For example, while the authors claim that "CM2 has an excellent advantage on regression tasks", looking into Table 4 reveals that, other than the SAT 11 dataset, all improvements are extremely marginal. W2.2: I find Section 5.5 too vague and very hard to follow. I do understand that this is due to space constraints; however, since it was added, I do consider it to be confusing. W3: Figures are not color-blind inclusive and also cannot be understood in B&W. Other Comments: D1: Relevance to the track is not declared. D2: The term "BERT moment" is used continuously. I have not heard of this term before; is it coined by this paper? If not, I would appreciate a clear explanation in the paper or a reference. D3: I think adding the avg. sizes of tables to Table 1 can be beneficial. D4: I find it confusing that the term "web-tables" is commonly used to refer to smaller tables while in this paper it refers to larger tables. [1] https://gluebenchmark.com/ [2] Barlaug, Nils, and Jon Atle Gulla. "Neural networks for entity matching: A survey." ACM Transactions on Knowledge Discovery from Data (TKDD) 15.3 (2021): 1-37. [3] Fan, Grace, et al. "Table Discovery in Data Lakes: State-of-the-art and Future Directions." 
Companion of the 2023 International Conference on Management of Data. 2023. [4] Nargesian, Fatemeh, et al. "Data lake management: challenges and opportunities." Proceedings of the VLDB Endowment 12.12 (2019): 1986-1989. I acknowledge reading the rebuttal and have responded accordingly. questions: See "Review", specifically W1, W2, and D2. Q1: How do you handle columns having non-numeric/textual/categorical values (e.g., dates)? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 4 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9jj7cMOXQo
Towards Cross-Table Masked Pretraining for Web Data Mining
[ "Chao Ye", "Guoshan Lu", "Haobo Wang", "Liyao Li", "Sai Wu", "Gang Chen", "Junbo Zhao" ]
Tabular data --- also known as structured data --- pervades the landscape of the World Wide Web, playing a foundational role in the digital architecture that underpins online information. Given the recent influence of large-scale pre-trained models like ChatGPT and SAM across various domains, exploring the application of pretraining techniques for mining tabular data on the web has emerged as a highly promising research direction. Indeed, there have been some recent works around this topic where most (if not all) of them are limited in the scope of a fixed-schema/single table. Due to the scale of the dataset and the parameter size of the prior models, we believe that we have not reached the ''BERT moment'' for the ubiquitous tabular data. The development on this line significantly lags behind the counterpart research domains such as natural language processing. In this work, we first identify the crucial research challenges behind tabular data pretraining, particularly overcoming the cross-table hurdle. As a pioneering endeavor, this work mainly (i)-contributes a high-quality real-world tabular dataset, (ii)-proposes an innovative, generic and efficient cross-table pretraining framework, dubbed as CM2, where the core to it comprises a semantic-aware tabular neural network that uniformly encodes heterogeneous tables without much restriction and (iii)-introduces a novel pretraining objective --- Prompt Masked Table Modeling (pMTM) --- inspired from NLP but intricately tailored to scalable pretraining on tables. Our extensive experiments demonstrate CM2's state-of-the-art performance and validate that cross-table pretraining can enhance the performance of various downstream tasks.
[ "Tabular Data", "Data Mining", "Pre-training" ]
https://openreview.net/pdf?id=9jj7cMOXQo
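The Prompt Masked Table Modeling objective described in the abstract above builds on the familiar masked-modeling recipe: hide some table cells and train the model to reconstruct them. As a rough illustration only — this is a generic masked-cell sketch with hypothetical names and parameters, not CM2's actual pMTM implementation:

```python
import random

def mask_cells(rows, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace table cells with a mask token, recording the
    (row, column) -> original value targets the model must reconstruct."""
    rng = random.Random(seed)
    masked, targets = [], []
    for r, row in enumerate(rows):
        new_row = list(row)
        for c, cell in enumerate(row):
            if rng.random() < mask_rate:
                targets.append(((r, c), cell))
                new_row[c] = mask_token
        masked.append(new_row)
    return masked, targets

table = [["alice", 34, "engineer"],
         ["bob", 29, "doctor"],
         ["carol", 41, "lawyer"]]
masked, targets = mask_cells(table, mask_rate=0.3)
```

A pretraining loop would then feed `masked` through the table encoder and score its predictions against `targets`; the mask rate and token are illustrative placeholders.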
BRcBH1ex9J
official_review
1,700,774,049,083
9jj7cMOXQo
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2456/Reviewer_Z57d" ]
review: The paper introduces CM2, a novel approach for tabular data analysis, leveraging cross-table pretraining with a focus on feature interaction modeling. CM2 employs a pretraining objective, Prompt Masked Table Modeling (pMTM), to effectively capture structural information in tabular datasets. The paper suggests the potential for further scaling and adaptation to different data domains in the future. questions: Strengths: The introduction of pMTM is an interesting pretraining objective, showcasing the paper's contribution to capturing prior structural information in tabular data. The paper demonstrates performance gains in few-shot learning settings, making it particularly effective in scenarios with limited annotated data. This represents a practical advantage in domains where labeled data is scarce. CM2's tabular representations exhibit remarkable versatility across diverse downstream tasks, including regression, anomaly detection, and missing value imputation. This showcases its adaptability and applicability to a wide range of real-world scenarios. Weaknesses: While the paper provides an extensive list of baseline models, a more detailed analysis and discussion of their strengths and weaknesses in the context of tabular data could significantly enhance the paper's evaluation section. Additionally, the absence of widely recognized and popular methods like TaBERT, TUTA, TURL, TAPAS, etc. for tabular data learning from the baseline comparisons is a notable gap. Including these models in the evaluation would provide a more comprehensive assessment of CM2's performance and clarify its standing compared to state-of-the-art tabular learning methods. The paper mentions tuning the Transformer architecture for the permutation invariance property of tabular data but lacks a detailed discussion of this tuning process. 
Providing more insights into how the model specifically addresses permutation invariance and the implications of discarding positional encoding would enhance the understanding of the model's design choices. The paper does not extensively justify the choice of a 128-dimensional embedding for tokens in the transformer architecture, especially when the widely adopted BERT base model employs a 768-dimensional embedding. The absence of an explicit justification raises questions about the trade-offs and potential loss of information when transitioning from the original 768-dimensional space to the chosen 128-dimensional embedding. The paper mentions the release of a large pre-trained tabular model (CM2_V1) trained on 2k datasets. However, the adequacy of this data size for training a model with approximately 50 million parameters is not extensively discussed. ethics_review_flag: No ethics_review_description: - scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 2 technical_quality: 2 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9hFAdnR3CH
A Similarity-based Approach for Efficient Large Quasi-clique Detection
[ "Jiayang Pang", "Chenhao Ma", "Yixiang Fang" ]
Identifying dense subgraphs called quasi-cliques is pivotal in various graph mining tasks across domains like biology, social networks, and e-commerce. However, recent algorithms still suffer from efficiency issues when mining large quasi-cliques in massive and complex graphs. Our key insight is that vertices within a quasi-clique exhibit similar neighborhoods to some extent. Based on this, we introduce NBSim and FastNBSim, efficient algorithms that find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. FastNBSim further uses MinHash approximations to reduce the time complexity for similarity computation. Empirical evaluation on 10 real-world graphs shows that our algorithms deliver up to three orders of magnitude speedup versus the state-of-the-art algorithms, while ensuring high-quality quasi-clique extraction.
[ "quasi-cliques", "neighborhoods", "similarity", "MinHash" ]
https://openreview.net/pdf?id=9hFAdnR3CH
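The MinHash approximation mentioned in the abstract above trades exact neighborhood-similarity computation for cheap signature comparison: the fraction of matching MinHash slots estimates the Jaccard similarity of two vertices' neighbor sets. A minimal sketch of the underlying idea (generic MinHash with illustrative parameters, not the paper's actual FastNBSim code):

```python
import random

P = 2_147_483_647  # a Mersenne prime larger than any vertex id used below

def make_hash_seeds(k, seed=0):
    """k random (a, b) pairs for universal hash functions h(x) = (a*x + b) mod P."""
    rng = random.Random(seed)
    return [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(k)]

def minhash_signature(neighborhood, seeds):
    """Signature = minimum hash value of the set under each hash function."""
    return [min((a * x + b) % P for x in neighborhood) for a, b in seeds]

def estimated_jaccard(sig_u, sig_v):
    """Fraction of matching slots is an unbiased estimate of Jaccard similarity."""
    return sum(s == t for s, t in zip(sig_u, sig_v)) / len(sig_u)

seeds = make_hash_seeds(256)
n_u = set(range(100))       # neighborhood of vertex u
n_v = set(range(50, 150))   # neighborhood of vertex v; true Jaccard = 50/150
est = estimated_jaccard(minhash_signature(n_u, seeds), minhash_signature(n_v, seeds))
```

With k signature slots, computing similarity costs O(k) per vertex pair instead of an intersection over full neighbor lists, which is the source of the speedup the abstract claims.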
yqEym5GbFe
decision
1,705,909,210,460
9hFAdnR3CH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: The reviewers are in consensus about this work's broad relevance to WWW, but there are some concerns about whether the experiments are sufficiently conclusive.
9hFAdnR3CH
A Similarity-based Approach for Efficient Large Quasi-clique Detection
[ "Jiayang Pang", "Chenhao Ma", "Yixiang Fang" ]
Identifying dense subgraphs called quasi-cliques is pivotal in various graph mining tasks across domains like biology, social networks, and e-commerce. However, recent algorithms still suffer from efficiency issues when mining large quasi-cliques in massive and complex graphs. Our key insight is that vertices within a quasi-clique exhibit similar neighborhoods to some extent. Based on this, we introduce NBSim and FastNBSim, efficient algorithms that find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. FastNBSim further uses MinHash approximations to reduce the time complexity for similarity computation. Empirical evaluation on 10 real-world graphs shows that our algorithms deliver up to three orders of magnitude speedup versus the state-of-the-art algorithms, while ensuring high-quality quasi-clique extraction.
[ "quasi-cliques", "neighborhoods", "similarity", "MinHash" ]
https://openreview.net/pdf?id=9hFAdnR3CH
xvaHrobdHZ
official_review
1,700,806,342,145
9hFAdnR3CH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission357/Reviewer_wNEU" ]
review: The paper investigates the maximum quasi-clique detection problem and presents an algorithm based on structural similarity in which min-hash is utilized to accelerate the computation. Experiments demonstrate improvements in edge density and result size on some of the datasets. Nevertheless, the algorithm design is not quite consistent with the objective of the problem and the theoretical contribution is limited. Strong points: S1. The paper adopts structural similarity to refine the SOTA algorithm [16]. It also naturally incorporates min-hash to improve efficiency. S2. The experimental section validates the effectiveness of all the optimization techniques. S3. The overall presentation is clear. Opportunities for improvement: O1. The objective of the studied problem is to maximize the size of the resulting quasi-clique, while the algorithm design and the experiments emphasize the edge density of the results. If we focus on the objective, the result sizes of the proposed algorithms are smaller than those of the SOTA algorithms on many datasets, as shown in Table 3. A clarification is needed. O2. The advantage of the proposed algorithms is not obvious. In the experiments, the quality of solutions obtained by the NBSim algorithm is similar to that of NuQClq, but its efficiency is worse on several graphs. The efficiency of FastNBSim is better than that of NBSim, while the result quality is lower. The advantages and the use cases of the proposed algorithms should be made clear. O3. In comparison to the SOTA solution [16], this paper lacks a rigorous theoretical analysis of the effectiveness of using structural similarity to compute maximum quasi-cliques. O4. In the review of related works, the shortcomings of prior work in the field and the novelty of the technique design in this paper should be discussed. O5. It would be better to add explicit connections to the web, e.g., the practical applications of the problem and case studies on web data. 
questions: Please refer to O1-O5. ethics_review_flag: No ethics_review_description: NA scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9hFAdnR3CH
A Similarity-based Approach for Efficient Large Quasi-clique Detection
[ "Jiayang Pang", "Chenhao Ma", "Yixiang Fang" ]
Identifying dense subgraphs called quasi-cliques is pivotal in various graph mining tasks across domains like biology, social networks, and e-commerce. However, recent algorithms still suffer from efficiency issues when mining large quasi-cliques in massive and complex graphs. Our key insight is that vertices within a quasi-clique exhibit similar neighborhoods to some extent. Based on this, we introduce NBSim and FastNBSim, efficient algorithms that find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. FastNBSim further uses MinHash approximations to reduce the time complexity for similarity computation. Empirical evaluation on 10 real-world graphs shows that our algorithms deliver up to three orders of magnitude speedup versus the state-of-the-art algorithms, while ensuring high-quality quasi-clique extraction.
[ "quasi-cliques", "neighborhoods", "similarity", "MinHash" ]
https://openreview.net/pdf?id=9hFAdnR3CH
jMrGGYF2om
official_review
1,700,532,365,701
9hFAdnR3CH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission357/Reviewer_oqGf" ]
review: The paper considers the quasi-clique finding problem and proposes a new algorithm that utilizes overlapping neighborhood similarities and minhashing. The proposed technique is simple but powerful. The paper is well-written and evaluation is well-done. - One thing that needs further explanation is how the parameters $\gamma$ and $\beta$ are set. For any new graph, is it always feasible to use the 0.9 and 0.6 values? Is there any connection to the graph structure? questions: See above. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9hFAdnR3CH
A Similarity-based Approach for Efficient Large Quasi-clique Detection
[ "Jiayang Pang", "Chenhao Ma", "Yixiang Fang" ]
Identifying dense subgraphs called quasi-cliques is pivotal in various graph mining tasks across domains like biology, social networks, and e-commerce. However, recent algorithms still suffer from efficiency issues when mining large quasi-cliques in massive and complex graphs. Our key insight is that vertices within a quasi-clique exhibit similar neighborhoods to some extent. Based on this, we introduce NBSim and FastNBSim, efficient algorithms that find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. FastNBSim further uses MinHash approximations to reduce the time complexity for similarity computation. Empirical evaluation on 10 real-world graphs shows that our algorithms deliver up to three orders of magnitude speedup versus the state-of-the-art algorithms, while ensuring high-quality quasi-clique extraction.
[ "quasi-cliques", "neighborhoods", "similarity", "MinHash" ]
https://openreview.net/pdf?id=9hFAdnR3CH
gwwySNdX5j
official_review
1,700,394,125,264
9hFAdnR3CH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission357/Reviewer_GQW6" ]
review: Computing the maximum quasi-clique is an important problem in graph data analysis. In this paper, the authors propose a similarity-based approach to detect large quasi-cliques in graphs. Following the containment score, they propose two algorithms to find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. Extensive experiments are conducted to evaluate the proposed algorithms. However, the proposed algorithms can only find quasi-cliques in the ego-network of a specific vertex, and the proposed algorithms are not evaluated comprehensively, which significantly weakens the contribution of this paper. Strengths: S1. The problem studied in the paper is interesting. S2. New algorithms are proposed to address the problem. S3. Experiments are conducted to evaluate the proposed algorithms. Weaknesses: W1. The technical contribution of this paper is limited. W2. The datasets used in the paper are small. W3. Some parts of the algorithm are not well evaluated in the experiments. questions: Q1. The proposed method only focuses on quasi-cliques in the ego-network of a specific vertex, which means the diameter of the detected results is always at most 2. However, based on the definition of quasi-clique, there is no such property, which significantly limits the generalization of the proposed algorithms. Q2. Based on the problem definition, the given parameter \alpha also affects the returned results. The performance when directly varying \alpha is unclear. Q3. The datasets used to evaluate the performance are small. Based on the time complexity of the proposed algorithms, it would be more convincing if larger datasets were used to evaluate the performance. Q4. Some other works related to maximal clique enumeration are not discussed, such as I/O-efficient MCE and diversified clique enumeration. Q5. Although some guidelines for setting \gamma and b are given, it still seems tricky to set these two parameters appropriately in practice. 
ethics_review_flag: No ethics_review_description: No scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9hFAdnR3CH
A Similarity-based Approach for Efficient Large Quasi-clique Detection
[ "Jiayang Pang", "Chenhao Ma", "Yixiang Fang" ]
Identifying dense subgraphs called quasi-cliques is pivotal in various graph mining tasks across domains like biology, social networks, and e-commerce. However, recent algorithms still suffer from efficiency issues when mining large quasi-cliques in massive and complex graphs. Our key insight is that vertices within a quasi-clique exhibit similar neighborhoods to some extent. Based on this, we introduce NBSim and FastNBSim, efficient algorithms that find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. FastNBSim further uses MinHash approximations to reduce the time complexity for similarity computation. Empirical evaluation on 10 real-world graphs shows that our algorithms deliver up to three orders of magnitude speedup versus the state-of-the-art algorithms, while ensuring high-quality quasi-clique extraction.
[ "quasi-cliques", "neighborhoods", "similarity", "MinHash" ]
https://openreview.net/pdf?id=9hFAdnR3CH
e7nmpKc8Rq
official_review
1,700,837,141,739
9hFAdnR3CH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission357/Reviewer_JuDa" ]
review: This paper tackles the problem of detecting maximum quasi-cliques. The main contribution of the paper is to devise methods for this problem which are more efficient and/or effective than the state-of-the-art ones. The main strengths of the paper are as follows: S1) The paper comes with convincing motivation. S2) The proposed methods are well-designed and sound. S3) The experimental evaluation is well-designed and satisfactorily complete. The paper also comes with a number of major weaknesses: W1) The main claims of the paper seem to be not fully supported by experimental evidence. Or, at least, some findings are not sufficiently discussed/motivated. Specifically: W1.a) Table 3: NB outperforms NBSim in 8 out of 10 datasets. This somehow contradicts the theoretical design of NBSim, which is mainly devoted to being more effective (and efficient) than NB. The fact that NBSim's quasi-cliques are denser than NB's (Table 2) is not really a valid argument, as the two algorithms are supposed and designed to detect maximum-sized cliques, not densest ones. W1.b) Figure 4: in several datasets NBSim is (consistently) outperformed by NB. This, again, somehow contradicts the main claims/goal of the paper, according to which NBSim needs to be faster than NB. The authors discuss this by simply stating that it is "due to different computing paradigms". However, at least a more detailed discussion and precise motivation of this behavior should be provided. In the end, the comparison of NBSim vs. NB is central to the paper. W1.c) Table 3: FastNBSim detects quasi-cliques of size comparable to or, in two datasets (i.e., FB and ER), consistently larger than the size of the quasi-cliques detected by NBSim. As FastNBSim is an approximation of NBSim, this looks surprising. W2) The proposed NBSim algorithm (and its faster approximation, FastNBSim) are not conceptually compared to the state-of-the-art NB algorithm. 
The main technical differences and novelties, along with the corresponding motivations, should be discussed in detail; otherwise the technical contribution and novelty of the proposed method(s) appear questionable. W3) In Section 5, it is said that the time complexity of the proposed NBSim algorithm is O(m d_max), whereas, in Section 6.2, it is said that NBSim shares the same time complexity (O(m^{3/2})) as NB. What is the true time complexity? Also, a detailed time complexity analysis of the proposed NBSim algorithm (and FastNBSim too) should be provided, as designing more efficient algorithms is a central aspect of the paper. W4) The paper lacks a proper discussion of how the tackled problem is relevant in a Web setting. questions: Please comment on W1.b), W1.c), W2), W3), W4). ethics_review_flag: No ethics_review_description: No ethical issues. scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 3 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9hFAdnR3CH
A Similarity-based Approach for Efficient Large Quasi-clique Detection
[ "Jiayang Pang", "Chenhao Ma", "Yixiang Fang" ]
Identifying dense subgraphs called quasi-cliques is pivotal in various graph mining tasks across domains like biology, social networks, and e-commerce. However, recent algorithms still suffer from efficiency issues when mining large quasi-cliques in massive and complex graphs. Our key insight is that vertices within a quasi-clique exhibit similar neighborhoods to some extent. Based on this, we introduce NBSim and FastNBSim, efficient algorithms that find near-maximum quasi-cliques by exploiting vertex neighborhood similarity. FastNBSim further uses MinHash approximations to reduce the time complexity for similarity computation. Empirical evaluation on 10 real-world graphs shows that our algorithms deliver up to three orders of magnitude speedup versus the state-of-the-art algorithms, while ensuring high-quality quasi-clique extraction.
[ "quasi-cliques", "neighborhoods", "similarity", "MinHash" ]
https://openreview.net/pdf?id=9hFAdnR3CH
EdVXn8qoY4
official_review
1,700,747,362,273
9hFAdnR3CH
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission357/Reviewer_jv5T" ]
review: **Short summary** The authors address the problem of identifying the maximum quasi-clique in an undirected graph. That is, given a graph and a parameter controlling the clique density, the authors develop heuristic algorithms for reporting the largest quasi-clique, with density controlled by two parameters (i.e., $\gamma$ and $b$). The authors develop algorithms based on node similarities, i.e., by carefully exploiting node neighborhoods according to node degrees. Additionally, the authors provide an improved variant of their algorithm based on hashing, for which they characterize the density of the output of their algorithm under specific conditions. The authors then perform experiments to validate their proposed algorithms, both comparing with the state of the art and studying the various parameters of the algorithms. **Strengths** 1. The problem addressed is interesting and important for the community: Identifying maximum quasi-cliques is an important graph-mining problem with several WWW applications. Additionally, given the hardness of the problem, efficient algorithms are required. 2. The paper overall is clearly written and easy to follow: The writing of the paper is simple and easy to follow, and most of the techniques proposed by the authors are described such that the reader can understand them beyond the technical details. 3. The authors are able to provide some lower bounds, under specific conditions, for their output quasi-clique: They obtain some bounds on the density of the quasi-cliques produced by their algorithms. It is unclear if such bounds are tight, and under which conditions they can be improved. Perhaps adding such a discussion can also help ongoing research in this field. For example, the results obtained in Lemma 5.2 do not seem of much practical interest to me (especially if we consider the experimental evaluation, i.e., the values of $k$ used in practice by the authors). 4. 
The authors provide the source code for their experimental evaluation, a description of the public datasets they used, and the parameters used for the experimental evaluation. **Weaknesses** 1. The experimental procedure can be improved. - To my understanding, the various algorithms compared were executed only once; this could be strengthened by running the various methods multiple times, to attain better statistical power for statistics such as running time. - Results on the memory usage of the various baselines are missing. - The authors study the impact of the various parameters in their algorithms (such as $\gamma$ and $b$) but do not provide a hint on how to set them on specific datasets. This can be helpful for someone who needs to execute their algorithms. 2. There could be important missing references (I report some of them below). In particular, I think that a more detailed and exhaustive review is needed, given that the problem has been widely studied and is closely related to many other foundational problems. - Solving maximum quasi-clique problem by a hybrid artificial bee colony approach [Peng et al. (Information Sciences, 2021)] - Mining Largest Maximal Quasi-Cliques [Sanei-Mehri et al. (TKDD, 2021)] - On Effectively Finding Maximal Quasi-cliques in Graphs [Brunato et al. (LNCS, 2007)] - Lightning Fast and Space Efficient k-clique Counting [Ye et al. (WWW, 2022)] - Provably and Efficiently Approximating Near-cliques using the Turán Shadow: PEANUTS [Jain and Seshadhri (WWW, 2020)] 3. Some aspects of the presentation can be improved. - Adding a table showing the different guarantees on the output density, the time complexity, and the memory requirements of the proposed algorithms (also comparing with the existing state of the art) can help the overall presentation, showing the achieved improvements. - In Section 6.2, Tables 2 and 3 can easily be merged by using multicolumns; this can save much space. 
- The connection to the Web should be better highlighted for the proposed problem (e.g., in the introduction), such as finding specific applications that use quasi-cliques for web-based mining tasks. - The parameter $\rho$ in Lemma 5.2 was never discussed before and is not defined in that statement. **Minor and typos** - Math notation should be properly applied across the manuscript, e.g., line 282 v -> $v$, line 603 >= -> $\ge$, line 799 k -> $k$ (also in Figure 6), and so on. - Line 793: the symbol $r$ was not defined previously. - In Figure 6, I think the value "base" is not supposed to be there. - Avoid using the symbol "*" to denote a product; just use $\cdot$ or nothing, e.g., line 445, line 553. questions: It would help if the authors could discuss the obtained bound on the density of the solution compared to the density of the solutions obtained in practice (i.e., how tight the obtained result is in practice). ethics_review_flag: No ethics_review_description: No issues. scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9fkOcX5dGT
LinkNER: Linking Local Named Entity Recognition Models to Large Language Models using Uncertainty
[ "Zhen Zhang", "Yuhua Zhao", "Hang Gao", "Mengting Hu" ]
Named Entity Recognition (NER) serves as a fundamental task in natural language understanding, bearing direct implications for web content analysis, search engines, and information retrieval systems. Fine-tuned NER models exhibit satisfactory performance on standard NER benchmarks. However, due to limited fine-tuning data and lack of knowledge, they perform poorly on unseen entity recognition. As a result, the usability and reliability of NER models in web-related applications are compromised. Instead, Large Language Models (LLMs) like GPT-4 possess extensive external knowledge, but research indicates that they lack specialty for NER tasks. Furthermore, non-public and large-scale weights make tuning LLMs difficult. To address these challenges, we propose a framework that combines small fine-tuned models with LLMs (LinkNER) and an uncertainty-based linking strategy called RDC that enables fine-tuned models to complement black-box LLMs, achieving better performance. We conduct experiments on standard NER test sets as well as noisy social media datasets. We find that LinkNER can improve performance on NER tasks, especially outperforming SOTA models in challenging robustness tests (with a 3.04\% $\sim$ 21.30\% improvement in the F1 score). Additionally, we conduct a quantitative study to examine the impact of key components, such as uncertainty estimation methods, LLMs, and in-context learning, on various NER tasks and provide targeted web-related recommendations.
[ "Information extraction", "uncertainty estimation", "robustness", "large language models" ]
https://openreview.net/pdf?id=9fkOcX5dGT
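The uncertainty-based linking the abstract describes can be pictured as a simple router: keep the local model's prediction when it is confident, and defer only uncertain entities to the LLM for re-classification. A hedged sketch of that routing idea — all function names and the threshold `tau` are hypothetical placeholders, not the paper's actual RDC implementation:

```python
def link_ner(sentence, local_model, llm_classify, tau=0.5):
    """Route entities the local model is uncertain about to an LLM.

    local_model(sentence) -> iterable of (span, label, uncertainty) triples;
    llm_classify(sentence, span) -> refined entity label.
    Both callables stand in for the real components.
    """
    results = []
    for span, label, uncertainty in local_model(sentence):
        if uncertainty > tau:  # low-confidence prediction: defer to the LLM
            label = llm_classify(sentence, span)
        results.append((span, label))
    return results
```

Note the division of labor this implies: the local model fixes the entity spans, and the LLM only re-types uncertain ones — which is exactly the limitation one reviewer below probes (the LLM cannot repair span errors).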
m2Eur4ubK6
official_review
1,700,723,565,987
9fkOcX5dGT
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission563/Reviewer_7FBK" ]
review: This paper focuses on named entity recognition (NER) by combining a local NER model with a large language model (LLM). The motivation of this paper is that the local model performs poorly on unseen entity recognition due to "lack of knowledge", while the LLM possesses extensive external knowledge but "lacks specialty" for NER tasks. Therefore, in order to complement each other, the authors propose LinkNER, which combines small fine-tuned models with LLMs via an uncertainty-based linking strategy. Strengths: 1. The named entity recognition (NER) task is important, and combining small fine-tuned NER models with large language models is especially timely in the era of LLMs. 2. The extensive experiments are solid. 3. The writing is easy to follow. Weaknesses: 1. Some experimental settings are unclear. 2. The efficiency of linking local models to LLMs needs further exploration. questions: 1. OOD is a scenario that this paper focuses on; the corresponding dataset is WikiGold, which contains multiple domains. Is there any overlap in domain between the training and testing sets? I don't seem to have seen the experimental results on the WikiGold dataset. 2. The proposed framework works as follows: a fine-tuned local model is used for entity recognition, its output uncertainty probabilities are used for uncertain entity detection, and uncertain entities are then sent to the LLM for entity type classification. For OOV or OOD entities, span detection is also challenging for the local model, which leads to some errors in entity spans. However, can the subsequent LLM, which is only responsible for entity type classification, correct these span errors? 3. What function was used for ENN in this paper, an exponential function or Softplus? 4. What is the entity density? 5. Lines 503-504: what are K and N set to? 6. Figure 5 is confusing to me: the uncertainty threshold has been set to 0.0, so why does the F1 score of the fully linked LinkNER still change with the Uncertainty Intervals? 7. 
For Link-SpanNER (Confidence), Link-SpanNER (Entropy) and Link-SpanNER (MCD), what is the local model? 8. How is the performance of naรฏve Llama2 (13B)? 9. โ€œHogwartsโ€ was mistakenly written as โ€œDumbledoreโ€ in Figure 1. ethics_review_flag: No ethics_review_description: None scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
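The routing pipeline this review describes (the local model detects spans and uncertainties; uncertain entities go to the LLM for type classification only) can be made concrete with a minimal sketch. The function names, the (span, type, uncertainty) interface, and the 0.5 threshold below are illustrative assumptions, not LinkNER's actual API.

```python
# Illustrative sketch of uncertainty-based routing: a fine-tuned local NER
# model emits (span, type, uncertainty) triples, and any span whose
# uncertainty exceeds a threshold is deferred to an LLM for entity type
# classification only. All names and the threshold are invented here.

def route_entities(local_predictions, llm_classify, threshold=0.5):
    """Keep confident local predictions; defer uncertain ones to the LLM."""
    final = []
    for span, local_type, uncertainty in local_predictions:
        if uncertainty > threshold:
            # Uncertain entity: the LLM re-classifies the type; the span
            # itself is kept as detected, so span errors cannot be corrected.
            final.append((span, llm_classify(span)))
        else:
            final.append((span, local_type))
    return final

# Toy stand-in for an LLM classifier.
preds = [("Hogwarts", "PER", 0.9), ("London", "LOC", 0.1)]
routed = route_entities(preds, llm_classify=lambda span: "LOC")
```

Note that the span is fixed before the LLM is consulted, which is exactly why span errors from the local model (question 2 in the review) cannot be corrected at this stage.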
9fkOcX5dGT
LinkNER: Linking Local Named Entity Recognition Models to Large Language Models using Uncertainty
[ "Zhen Zhang", "Yuhua Zhao", "Hang Gao", "Mengting Hu" ]
Named Entity Recognition (NER) serves as a fundamental task in natural language understanding, bearing direct implications for web content analysis, search engines, and information retrieval systems. Fine-tuned NER models exhibit satisfactory performance on standard NER benchmarks. However, due to limited fine-tuning data and lack of knowledge, it performs poorly on unseen entity recognition. As a result, the usability and reliability of NER models in web-related applications are compromised. Instead, Large Language Models (LLMs) like GPT-4 possess extensive external knowledge, but research indicates that they lack specialty for NER tasks. Furthermore, non-public and large-scale weights make tuning LLMs difficult. To address these challenges, we propose a framework that combines small fine-tuned models with LLMs (LinkNER) and an uncertainty-based linking strategy called RDC that enables fine-tuned models to complement black-box LLMs, achieving better performance. We conduct experiments on standard NER test sets as well as noisy social media datasets. We find that LinkNER can improve performance on NER tasks, especially outperforming SOTA models in challenging robustness tests (with a 3.04\% $\sim$ 21.30\% improvement in the F1 score). Additionally, we conduct a quantitative study to examine the impact of key components, such as uncertainty estimation methods, LLMs, and in-context learning, on various NER tasks and provide targeted web-related recommendations.
[ "Information extraction", "uncertainty estimation", "robustness", "large language models" ]
https://openreview.net/pdf?id=9fkOcX5dGT
ly7uW3451Z
decision
1,705,909,252,113
9fkOcX5dGT
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: Proposes and evaluates a method for named entity recognition that combines a small, fine-tuned model with a black-box LLM. The topic is a reasonable fit with the conference. The paper is very readable. The idea represents sufficient novelty to support publication. The experiments are appropriate and support the conclusions. Although there are only three reviews, they are consistent. The authors have engaged with the reviewing process, and in my opinion, have adequately addressed the concerns of the reviewers. I don't see why we wouldn't accept this paper. I recommend poster presentation because I'm not sure this paper will have broad appeal. It's interesting, but the topic is a bit narrow.
9fkOcX5dGT
LinkNER: Linking Local Named Entity Recognition Models to Large Language Models using Uncertainty
[ "Zhen Zhang", "Yuhua Zhao", "Hang Gao", "Mengting Hu" ]
Named Entity Recognition (NER) serves as a fundamental task in natural language understanding, bearing direct implications for web content analysis, search engines, and information retrieval systems. Fine-tuned NER models exhibit satisfactory performance on standard NER benchmarks. However, due to limited fine-tuning data and lack of knowledge, it performs poorly on unseen entity recognition. As a result, the usability and reliability of NER models in web-related applications are compromised. Instead, Large Language Models (LLMs) like GPT-4 possess extensive external knowledge, but research indicates that they lack specialty for NER tasks. Furthermore, non-public and large-scale weights make tuning LLMs difficult. To address these challenges, we propose a framework that combines small fine-tuned models with LLMs (LinkNER) and an uncertainty-based linking strategy called RDC that enables fine-tuned models to complement black-box LLMs, achieving better performance. We conduct experiments on standard NER test sets as well as noisy social media datasets. We find that LinkNER can improve performance on NER tasks, especially outperforming SOTA models in challenging robustness tests (with a 3.04\% $\sim$ 21.30\% improvement in the F1 score). Additionally, we conduct a quantitative study to examine the impact of key components, such as uncertainty estimation methods, LLMs, and in-context learning, on various NER tasks and provide targeted web-related recommendations.
[ "Information extraction", "uncertainty estimation", "robustness", "large language models" ]
https://openreview.net/pdf?id=9fkOcX5dGT
kgUzrMHeGT
official_review
1,700,790,584,358
9fkOcX5dGT
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission563/Reviewer_LtfW" ]
review: ## Overview In this paper, the authors propose a framework called LinkNER which combines a local NER model with an LLM for the NER task. Uncertainty estimation methods are utilized to decide whether to seek the prediction from the LLM for the final entity type classification. Experiments on multiple datasets show the advantages of the proposed method. ## Strengths of this paper - The paper is clearly written and the method is simple. - Experimental results show that existing NER methods can be further improved when equipped with LinkNER, especially for OOV/OOD scenarios. - On multiple OOV datasets, the proposed method outperforms previous SOTA. ## Weaknesses of this paper - The uncertainty threshold seems to be dataset-dependent. This setting can be difficult for real-world applications. - Since Table 3 already shows the SOTA results, I am wondering whether LinkNER could further improve its performance. - The proposed method can be expensive since LLM inference usually costs more and has higher latency. Since the authors already know that LLMs can perform better on some uncertainty interval buckets, why not distill the LLM's ability on those buckets to smaller models? - I would like to see the performance of the LLM (i.e., Llama 2) after fine-tuning with NER data. questions: see details in the review. ethics_review_flag: No ethics_review_description: no scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 5 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9fkOcX5dGT
LinkNER: Linking Local Named Entity Recognition Models to Large Language Models using Uncertainty
[ "Zhen Zhang", "Yuhua Zhao", "Hang Gao", "Mengting Hu" ]
Named Entity Recognition (NER) serves as a fundamental task in natural language understanding, bearing direct implications for web content analysis, search engines, and information retrieval systems. Fine-tuned NER models exhibit satisfactory performance on standard NER benchmarks. However, due to limited fine-tuning data and lack of knowledge, it performs poorly on unseen entity recognition. As a result, the usability and reliability of NER models in web-related applications are compromised. Instead, Large Language Models (LLMs) like GPT-4 possess extensive external knowledge, but research indicates that they lack specialty for NER tasks. Furthermore, non-public and large-scale weights make tuning LLMs difficult. To address these challenges, we propose a framework that combines small fine-tuned models with LLMs (LinkNER) and an uncertainty-based linking strategy called RDC that enables fine-tuned models to complement black-box LLMs, achieving better performance. We conduct experiments on standard NER test sets as well as noisy social media datasets. We find that LinkNER can improve performance on NER tasks, especially outperforming SOTA models in challenging robustness tests (with a 3.04\% $\sim$ 21.30\% improvement in the F1 score). Additionally, we conduct a quantitative study to examine the impact of key components, such as uncertainty estimation methods, LLMs, and in-context learning, on various NER tasks and provide targeted web-related recommendations.
[ "Information extraction", "uncertainty estimation", "robustness", "large language models" ]
https://openreview.net/pdf?id=9fkOcX5dGT
HDznQ5WJLP
official_review
1,699,808,054,422
9fkOcX5dGT
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission563/Reviewer_AWSH" ]
review: Quality - **High Quality**: The paper demonstrates high quality in its research design, methodology, and analysis. The experiments are well-structured and adequately support the claims made. The use of both standard NER test sets and challenging datasets like noisy social media data ensures a comprehensive evaluation of the LinkNER framework. - **Attention to Detail**: The paper provides detailed insights into the integration of fine-tuned models with LLMs, addressing the challenges in NER tasks, especially in recognizing unseen entities. Clarity - **Well-Organized and Clear**: The paper is well-organized, with a clear flow from the introduction to the conclusion. Each section logically leads to the next, making it easy for readers to follow the progression of the research. - **Accessible Language**: The authors use clear and concise language, making complex concepts accessible to a broad audience, including those not specialized in NER or LLMs. Originality - **Innovative Approach**: The paper's approach to integrating fine-tuned local models with LLMs for NER tasks is original and innovative. This novel strategy addresses a significant gap in the existing literature. - **Unique Contribution**: The introduction of the RDC linking strategy based on uncertainty estimation is a unique contribution, setting this work apart from previous studies in the field. Significance - **Substantial Impact**: The paper's findings have substantial significance in the field of NER and NLP. By demonstrating improved performance in recognizing unseen entities, the research contributes to enhancing the effectiveness of NER systems in various applications. - **Broad Relevance**: The relevance of this work extends beyond NER tasks, offering insights that could be applicable to other areas of NLP and AI. It provides a foundation for future research in integrating different types of models for improved performance in complex language processing tasks. 
What can be improved: - Check the values for "Ratio@SOTA" in Table 2, should be 70.95\% for CoNLL'03, 56.52\% for Onto. 5.0 and 52.57\% for JNLPBA; - Check the value of "Min $\Delta$ LinkNER vs. GPT-3.5" @ Onto 5.0 ID, should be 37.57; - Check the value of "Max $\Delta$ LinkNER vs. GPT-3.5" @ CoNLL'03 OOV, should be 11.00; - Correct the improvements over SOTA in the Abstract, you showed Max $\Delta$ LinkNER (LinkGPT3.5) vs. LocalNER instead. Correct numbers for Best LinkNER vs. SOTA should be: CoNLL'03 Typos: -1.05, CoNLL'03 OOV: -5.41, CoNLL'03 OOD: 2.87, WNUT'17 OOV: 18.18, TweetNER OOV: 3.41, JNLPBA OOV: 8.34; - Add "Best LinkNER vs. SOTA" to Table 3 to align the results to the statement in the Abstract. Overall Pros: - Innovative integration of fine-tuned models with LLMs; - Significant improvement in NER performance, especially in challenging environments; - Comprehensive experimental validation and analysis. Overall Cons: - Limited exploration of the framework's generalizability among domains (e.g., legal, financial, or technical texts) and other languages besides English; - Heavy reliance on LLMs in some cases, raising sustainability questions; - Some small technical issues; - Wrong results in the Abstract and Conclusion: Max $\Delta$ LinkNER (LinkGPT3.5) vs. LocalNER instead of Best LinkNER vs. SOTA. questions: - Could you elaborate on the amount of shots (K) leading to the results in Table 3? Is this a fixed amount per model, a fixed amount per dataset or a hyperparameter? How did you find the best value for K? - How do you envision the long-term sustainability of the LinkNER framework, particularly in the context of the evolving landscape of LLMs? - In your view, how much does the performance of the framework change in the multilingual environment? ethics_review_flag: No ethics_review_description: No ethical issues. 
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 7 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9UudHPxH27
Benchmark and Neural Architecture for Conversational Entity Retrieval from a Knowledge Graph
[ "Mona Zamiri", "Yao Qiang", "Fedor Nikolaev", "Dongxiao Zhu", "Alexander Kotov" ]
This paper introduces a novel information retrieval (IR) task of Conversational Entity Retrieval from a Knowledge Graph (CER-KG). CER-KG extends non-conversational entity retrieval from a knowledge graph (KG) to the conversational scenario. The user queries in CER-KG dialog turns may rely on the results of the preceding turns, which are KG entities. Similar to the conversational document IR, CER-KG can be viewed as a sequence of interrelated ranking tasks. To enable future research on CER-KG, we created QBLink-KG, a publicly available benchmark that was adapted from QBLink, a benchmark for text-based conversational reading comprehension of Wikipedia. In our initial approach to CER-KG, we experimented with Transformer- and LSTM-based dialog context encoders in combination with the Neural Architecture for Conversational Entity Retrieval (NACER), our proposed feature-based neural architecture for entity ranking in CER-KG. NACER computes the ranking score of a candidate KG entity by taking into account a large number of lexical and semantic matching signals between various KG components in its neighborhood, such as entities, categories, and literals, as well as entities in the results of the preceding turns in dialog history. The experimental results for our initial approach to CER-KG reveal the key challenges of the proposed task along with the possible future directions for developing new approaches to it.
[ "Conversational IR", "Entity Retrieval", "Knowledge Graphs", "Deep Learning", "IR Benchmarks" ]
https://openreview.net/pdf?id=9UudHPxH27
fcoUCg5QKj
decision
1,705,909,224,284
9UudHPxH27
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: This paper introduces a new task, Conversational Entity Retrieval from a Knowledge Graph, and proposes a model leveraging handcrafted features designed for this task. The paper was reviewed by five reviewers. The paper clearly has some merits. Most reviewers agree on the technical quality and novelty of the paper, but they also raise some comments that still require a proper explanation. Please clarify these points in the camera-ready copy.
9UudHPxH27
Benchmark and Neural Architecture for Conversational Entity Retrieval from a Knowledge Graph
[ "Mona Zamiri", "Yao Qiang", "Fedor Nikolaev", "Dongxiao Zhu", "Alexander Kotov" ]
This paper introduces a novel information retrieval (IR) task of Conversational Entity Retrieval from a Knowledge Graph (CER-KG). CER-KG extends non-conversational entity retrieval from a knowledge graph (KG) to the conversational scenario. The user queries in CER-KG dialog turns may rely on the results of the preceding turns, which are KG entities. Similar to the conversational document IR, CER-KG can be viewed as a sequence of interrelated ranking tasks. To enable future research on CER-KG, we created QBLink-KG, a publicly available benchmark that was adapted from QBLink, a benchmark for text-based conversational reading comprehension of Wikipedia. In our initial approach to CER-KG, we experimented with Transformer- and LSTM-based dialog context encoders in combination with the Neural Architecture for Conversational Entity Retrieval (NACER), our proposed feature-based neural architecture for entity ranking in CER-KG. NACER computes the ranking score of a candidate KG entity by taking into account a large number of lexical and semantic matching signals between various KG components in its neighborhood, such as entities, categories, and literals, as well as entities in the results of the preceding turns in dialog history. The experimental results for our initial approach to CER-KG reveal the key challenges of the proposed task along with the possible future directions for developing new approaches to it.
[ "Conversational IR", "Entity Retrieval", "Knowledge Graphs", "Deep Learning", "IR Benchmarks" ]
https://openreview.net/pdf?id=9UudHPxH27
WfeYxoPb1r
official_review
1,701,277,127,707
9UudHPxH27
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2152/Reviewer_MZxL" ]
review: Summary: The paper introduces Conversational Entity Retrieval from a Knowledge Graph (CER-KG), a novel information retrieval task where user queries in a conversational setting depend on previous dialog turns that involve Knowledge Graph (KG) entities. The authors propose a Neural Architecture for Conversational Entity Retrieval (NACER), which ranks KG entities based on their relevance to the dialog context. NACER uses a feature-based approach to consider lexical and semantic matching signals between dialog turn, preceding answers, and KG entities. For evaluation, the authors adapted an existing benchmark, QBLink, to create QBLink-KG, a CER-KG benchmark for DBpedia. Strengths: 1. The paper addresses a gap in conversational information retrieval by focusing on entity retrieval from KGs in a dialog setting, which is a unique and relevant area given the advancements in conversational AI. 2. NACER's design is robust, considering a wide range of lexical and semantic features from dialog contexts and KG entities, indicating a thorough approach to the problem. 3. Adapting the QBLink benchmark to create QBLink-KG for DBpedia is a practical approach, facilitating further research in this new area. Weaknesses: 1. The paper primarily introduces the architecture and the benchmark, but lacks the source code of the benchmark. 2. The intricate design of NACER, while comprehensive, might pose challenges in implementation and optimization, especially in the dynamic KG. questions: 1. How does NACER handle ambiguities and evolving contexts in prolonged conversational settings? 2. Are there plans to extend the evaluation of NACER beyond the QBLink-KG benchmark, possibly in more diverse real-world datasets? 3. How does the performance of NACER compare with other established IR systems, especially in handling complex KG queries? 
ethics_review_flag: No ethics_review_description: no scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9UudHPxH27
Benchmark and Neural Architecture for Conversational Entity Retrieval from a Knowledge Graph
[ "Mona Zamiri", "Yao Qiang", "Fedor Nikolaev", "Dongxiao Zhu", "Alexander Kotov" ]
This paper introduces a novel information retrieval (IR) task of Conversational Entity Retrieval from a Knowledge Graph (CER-KG). CER-KG extends non-conversational entity retrieval from a knowledge graph (KG) to the conversational scenario. The user queries in CER-KG dialog turns may rely on the results of the preceding turns, which are KG entities. Similar to the conversational document IR, CER-KG can be viewed as a sequence of interrelated ranking tasks. To enable future research on CER-KG, we created QBLink-KG, a publicly available benchmark that was adapted from QBLink, a benchmark for text-based conversational reading comprehension of Wikipedia. In our initial approach to CER-KG, we experimented with Transformer- and LSTM-based dialog context encoders in combination with the Neural Architecture for Conversational Entity Retrieval (NACER), our proposed feature-based neural architecture for entity ranking in CER-KG. NACER computes the ranking score of a candidate KG entity by taking into account a large number of lexical and semantic matching signals between various KG components in its neighborhood, such as entities, categories, and literals, as well as entities in the results of the preceding turns in dialog history. The experimental results for our initial approach to CER-KG reveal the key challenges of the proposed task along with the possible future directions for developing new approaches to it.
[ "Conversational IR", "Entity Retrieval", "Knowledge Graphs", "Deep Learning", "IR Benchmarks" ]
https://openreview.net/pdf?id=9UudHPxH27
WAEslbL48f
official_review
1,700,823,709,767
9UudHPxH27
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2152/Reviewer_Kb2n" ]
review: The subject of the article is very interesting from many perspectives. The authors propose a new dataset and approach to evaluate Entity Retrieval in a conversational setting. Using an available dataset, i.e., QBLink, they build a new benchmark to evaluate the capacity of systems to extract the appropriate entity that answers a given query in a multi-turn QA system. It seems that the paper's goal is clear, and the approaches to conduct the experiments are sound. Nonetheless, the presentation in the paper, especially from the perspective of a researcher more focused on Conversational Search systems, is a bit hard to follow. More specifically, in section 3, when describing the filtering phase of the original QBLink dataset, the authors filter based on a set of candidate entities Y. This part seems unclear and may require further specification. Also, the part regarding the baselines, section 5.1, was very specific and, therefore, hard to understand for a more general audience. Overall, the topic of the paper is interesting. It is a good idea to release a new dataset for the evaluation of entity retrieval in a Conversational setting. This could also be used in different tasks such as query expansion and query rewriting. The evaluation of the paper is good overall; with some revisions concerning the presentation of the experiments, it can achieve an even better quality. questions: As stated in the review, one question concerns the third filtering step. I was not able to guess what you refer to with the "set of candidate entities Y" in section 3. Being more familiar with Conversational Search systems, it was a bit hard to follow the more technical parts of the paper. If you'd like to make it more accessible, maybe you can try to better stress the task and its different phases before describing in detail what you did. Last, for future developments, you could try to test NACER on some different tasks, such as query expansion. 
ethics_review_flag: No ethics_review_description: does not apply scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9UudHPxH27
Benchmark and Neural Architecture for Conversational Entity Retrieval from a Knowledge Graph
[ "Mona Zamiri", "Yao Qiang", "Fedor Nikolaev", "Dongxiao Zhu", "Alexander Kotov" ]
This paper introduces a novel information retrieval (IR) task of Conversational Entity Retrieval from a Knowledge Graph (CER-KG). CER-KG extends non-conversational entity retrieval from a knowledge graph (KG) to the conversational scenario. The user queries in CER-KG dialog turns may rely on the results of the preceding turns, which are KG entities. Similar to the conversational document IR, CER-KG can be viewed as a sequence of interrelated ranking tasks. To enable future research on CER-KG, we created QBLink-KG, a publicly available benchmark that was adapted from QBLink, a benchmark for text-based conversational reading comprehension of Wikipedia. In our initial approach to CER-KG, we experimented with Transformer- and LSTM-based dialog context encoders in combination with the Neural Architecture for Conversational Entity Retrieval (NACER), our proposed feature-based neural architecture for entity ranking in CER-KG. NACER computes the ranking score of a candidate KG entity by taking into account a large number of lexical and semantic matching signals between various KG components in its neighborhood, such as entities, categories, and literals, as well as entities in the results of the preceding turns in dialog history. The experimental results for our initial approach to CER-KG reveal the key challenges of the proposed task along with the possible future directions for developing new approaches to it.
[ "Conversational IR", "Entity Retrieval", "Knowledge Graphs", "Deep Learning", "IR Benchmarks" ]
https://openreview.net/pdf?id=9UudHPxH27
VzJIUzthye
official_review
1,700,915,724,935
9UudHPxH27
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2152/Reviewer_NKqk" ]
review: In this paper, the authors extend an entity-retrieval dataset, QBLink-KG, to a conversational setting (CER-KG). The task is to determine the correct answer (a Knowledge Graph entity) to a query in a given dialog turn, considering the context of all preceding queries and their answers. The authors used the English subset of the September 2021 DBpedia snapshot as the target Knowledge Graph for QBLink-KG. The authors also propose a baseline (NACER) which computes various features derived using neural embeddings to retrieve the entities in the conversational setting. The authors miss an important relevant dataset: Wizard of Wikipedia [1], which focuses on conversational models that leverage knowledge from Wikipedia. It primarily involves training conversational agents to effectively use and reference Wikipedia knowledge during conversations. This approach emphasizes the integration of vast unstructured textual data into conversational AI. While this dataset is not directly designed for entity retrieval, it can easily be adapted by linking the DBpedia entity to the Wikipedia page the answer comes from. [1] Dinan, Emily, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. "Wizard of wikipedia: Knowledge-powered conversational agents." arXiv preprint arXiv:1811.01241 (2018). Strengths: 1. The proposed approach is structured to consider KG aspects, rather than just textual content. 2. The computed features are more human-interpretable. 3. The paper is well written and easy to read. Limitations: 1. The authors miss relevant works. 2. The dataset has a narrow scope, covering only DBpedia and entity-oriented conversations. 3. A comparison to larger generative models is missing. questions: 1. What are the novel aspects of your dataset and baseline methods compared to Wizard of Wikipedia and their baselines? 2. How do few-shot and in-context learning with larger LLMs compare to the proposed feature-based approaches? 3. 
It is unclear how the QBLink dataset was extended to the conversational setting. How were the conversations created? ethics_review_flag: No ethics_review_description: None scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 4 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9UudHPxH27
Benchmark and Neural Architecture for Conversational Entity Retrieval from a Knowledge Graph
[ "Mona Zamiri", "Yao Qiang", "Fedor Nikolaev", "Dongxiao Zhu", "Alexander Kotov" ]
This paper introduces a novel information retrieval (IR) task of Conversational Entity Retrieval from a Knowledge Graph (CER-KG). CER-KG extends non-conversational entity retrieval from a knowledge graph (KG) to the conversational scenario. The user queries in CER-KG dialog turns may rely on the results of the preceding turns, which are KG entities. Similar to the conversational document IR, CER-KG can be viewed as a sequence of interrelated ranking tasks. To enable future research on CER-KG, we created QBLink-KG, a publicly available benchmark that was adapted from QBLink, a benchmark for text-based conversational reading comprehension of Wikipedia. In our initial approach to CER-KG, we experimented with Transformer- and LSTM-based dialog context encoders in combination with the Neural Architecture for Conversational Entity Retrieval (NACER), our proposed feature-based neural architecture for entity ranking in CER-KG. NACER computes the ranking score of a candidate KG entity by taking into account a large number of lexical and semantic matching signals between various KG components in its neighborhood, such as entities, categories, and literals, as well as entities in the results of the preceding turns in dialog history. The experimental results for our initial approach to CER-KG reveal the key challenges of the proposed task along with the possible future directions for developing new approaches to it.
[ "Conversational IR", "Entity Retrieval", "Knowledge Graphs", "Deep Learning", "IR Benchmarks" ]
https://openreview.net/pdf?id=9UudHPxH27
ISQXDaSRun
official_review
1,701,377,609,802
9UudHPxH27
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2152/Reviewer_wroa" ]
review: In this paper, a novel task named Conversational Entity Retrieval from a Knowledge Graph (CER-KG) is introduced. This task involves treating conversational contexts and previous answers as queries, with the goal of retrieving entities in the Knowledge Graph (KG). The authors also construct a benchmarking dataset based on QB-Link and develop feature-based Learning to Rank (LTR) models for reranking candidates identified through entity-linking. Despite these contributions, I find that the drawbacks of this work outweigh its strengths. **Pros:** - The paper is well-written, providing clear descriptions of technical details, including figures and tables. - The proposed technical contribution demonstrates the best performance, as evaluated by the adopted metrics. - The authors conduct a thorough analysis of the results, encompassing both successes and failures. - Introducing a new task could potentially interest other researchers in the field. **Cons:** - While the proposed task emphasizes entity "retrieval," the NACER approach is essentially focused on "reranking" candidates identified by an entity linker. This mismatch may mislead readers about the nature of this work. - The technical contribution primarily revolves around identifying the best LTR feature based on different text encoders with limited novelty. - The connection between CER-KG and CQA-KG, and their complementary effects, as discussed in the introduction, seems far-fetched. The utility of CER-KG for CQA-KG is not clearly justified. - The authors assert that "NACER makes no restrictive assumptions about the dialog context" in section 1 but contradict this by stating, "Due to practical considerations, such as the limit on the model capacity imposed by the benchmark size, we only use the answer to the previous turn a_{k-1} and query in the current turn q_k in both the baselines and NACER." - The use of the baseline GENRE appears inappropriate. 
Candidates are exposed to the model during the ranking stage for KV-MemNN and NACER but not for GENRE. Additionally, it seems that GENRE is not optimized on the training set of the collection, placing it and other models (LM-based generative ones) at a significant disadvantage. While generative models theoretically handle longer and variable-length conversations, the experimental settings overlook these factors. questions: - Could the authors provide further discussion on the connection between CER-KG and CQA-KG and elaborate on its importance? Additionally, why is the task CER-KG important independently? In what scenarios would a user pose a series of questions where the answers involve multiple connected entities? - What specific "practical considerations" led the authors to use only the answer to the previous turn a_{k-1} and the query in the current turn q_k in both the baselines and NACER? In the case of GENRE, considering the entire context as the input query is possible. Is it more about NACER's limitations in handling a variable number of turns in a conversation? If only the last turn is considered, it significantly diminishes the utility of the entire conversational setting. ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 3 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
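The retrieval-versus-reranking distinction this reviewer draws can be illustrated with a minimal sketch: candidates must already be produced by an entity linker before any scoring happens. The linear feature combination over the current query q_k and previous answer a_{k-1}, the overlap features, and the weights below are invented for illustration; they are not NACER's actual feature set.

```python
# Illustrative sketch of feature-based candidate reranking: an entity linker
# proposes candidates, and each is scored by a weighted combination of
# matching features against the current query and the previous answer.
# Features and weights are invented for demonstration.

def lexical_overlap(reference, candidate):
    """Fraction of reference tokens that also appear in the candidate text."""
    ref = set(reference.lower().split())
    cand = set(candidate.lower().split())
    return len(ref & cand) / max(len(ref), 1)

def rerank(candidates, query, prev_answer, w_query=0.7, w_prev=0.3):
    """Sort candidates by a linear combination of overlap features."""
    scored = [
        (w_query * lexical_overlap(query, c) + w_prev * lexical_overlap(prev_answer, c), c)
        for c in candidates
    ]
    return [c for _, c in sorted(scored, reverse=True)]

# Candidate strings stand in for an entity name plus a short KG abstract.
top = rerank(
    candidates=["Harry Potter fictional wizard Hogwarts", "Harry S. Truman US president"],
    query="Which wizard attends Hogwarts with Harry",
    prev_answer="Hogwarts",
)
# top[0] is the Hogwarts-related candidate.
```

The sketch also makes the reviewer's second point concrete: the scorer only sees q_k and a_{k-1}, so earlier turns of the conversation contribute nothing to the ranking.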
9UudHPxH27
Benchmark and Neural Architecture for Conversational Entity Retrieval from a Knowledge Graph
[ "Mona Zamiri", "Yao Qiang", "Fedor Nikolaev", "Dongxiao Zhu", "Alexander Kotov" ]
This paper introduces a novel information retrieval (IR) task of Conversational Entity Retrieval from a Knowledge Graph (CER-KG). CER-KG extends non-conversational entity retrieval from a knowledge graph (KG) to the conversational scenario. The user queries in CER-KG dialog turns may rely on the results of the preceding turns, which are KG entities. Similar to the conversational document IR, CER-KG can be viewed as a sequence of interrelated ranking tasks. To enable future research on CER-KG, we created QBLink-KG, a publicly available benchmark that was adapted from QBLink, a benchmark for text-based conversational reading comprehension of Wikipedia. In our initial approach to CER-KG, we experimented with Transformer- and LSTM-based dialog context encoders in combination with the Neural Architecture for Conversational Entity Retrieval (NACER), our proposed feature-based neural architecture for entity ranking in CER-KG. NACER computes the ranking score of a candidate KG entity by taking into account a large number of lexical and semantic matching signals between various KG components in its neighborhood, such as entities, categories, and literals, as well as entities in the results of the preceding turns in dialog history. The experimental results for our initial approach to CER-KG reveal the key challenges of the proposed task along with the possible future directions for developing new approaches to it.
[ "Conversational IR", "Entity Retrieval", "Knowledge Graphs", "Deep Learning", "IR Benchmarks" ]
https://openreview.net/pdf?id=9UudHPxH27
DsxuRKHyUX
official_review
1,701,095,580,503
9UudHPxH27
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2152/Reviewer_YShU" ]
review: The paper introduces a new task, Conversational Entity Retrieval from a Knowledge Graph (CER-KG), and proposes NACER, a model built on handcrafted features designed for this task. The paper employs a complex notation that hampers readability. Understanding how NACER performs retrieval is challenging due to the abundance of subscripts, varied fonts, and symbols used. The authors suggest that while NACER, in its current form, could theoretically incorporate features based on previous answers and queries, practical constraints limit this to utilizing only the prior turn's answer (a_{k-1}) and the current turn's query (q_k). As a result, the task seems less conversational and more aligned with classical KG Entity Retrieval (ER). Do W_{a1,a2} and b_{a1,a2} refer to the first and second answers? If not, the notation remains unclear. The experiments primarily compare NACER against two baseline models. Table 5 seems more focused on analyzing the components of NACER rather than benchmarking against the current state-of-the-art (SOTA). Additionally, the most robust comparison, KV-MemNN, is eight years old. Table 6 is anecdotal and not really informative. While intriguing, it might be better suited as an appendix rather than occupying an entire page in the main body of the paper. questions: Do W_{a1,a2} and b_{a1,a2} refer to the first and second answers? If not, the notation remains unclear. Are the baselines chosen indeed the most effective approaches, considering particularly that NACER is based only on the last utterance and can be easily mapped on classical ER? ethics_review_flag: No ethics_review_description: No ethical issues scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9SYnNa4WUl
RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules
[ "Miaomiao Li", "Jiaqi Zhu", "Yang Wang", "Yi Yang", "Yilin Li", "Hongan Wang" ]
Weakly supervised text classification (WSTC), also called zero-shot or dataless text classification, has attracted increasing attention due to its applicability in classifying a mass of texts within the dynamic and open Internet environment, since it requires only a limited set of seed words (label names) for each category instead of labeled data. With the help of recently popular prompting pre-trained language models (PLMs), many studies leveraged manually crafted and/or automatically identified verbalizers to estimate the likelihood of categories, but they failed to differentiate the effects of these category-indicative words, let alone capture their correlations and realize adaptive adjustments according to the unlabeled corpus. In this paper, in order to let the PLM better understand each category, we at first propose a novel form of rule-based knowledge using logical expressions to characterize the meanings of categories. Then, we develop a prompting PLM-based approach named RulePrompt for the WSTC task, consisting of a rule mining module and a rule-enhanced pseudo label generation module, plus a self-supervised fine-tuning module to make the PLM align with this task. Within this framework, the inaccurate pseudo labels assigned to texts and the imprecise logical rules associated with categories mutually enhance each other in an alternative manner, establishing a self-iterative closed loop of knowledge (rule) acquisition and utilization, with seed words serving as the starting point. Extensive experiments validate the effectiveness and robustness of our approach, which outperforms state-of-the-art weakly supervised methods. Importantly, our approach yields interpretable category rules, proving it advantageous for disambiguating easily-confused categories.
[ "weak supervision", "text classification", "seed word", "pre-trained language model", "prompt", "logical rule", "rule mining", "pseudo label" ]
https://openreview.net/pdf?id=9SYnNa4WUl
pTNmSOWZ3G
official_review
1,700,808,382,872
9SYnNa4WUl
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1740/Reviewer_KiMf" ]
review: The paper introduces RulePrompt, an innovative approach aiming to overcome the limitations of weakly supervised text classification reliant solely on seed words. It proposes a novel method to represent category meanings using automatically mined logical rules derived from pseudo labels of texts, iteratively self-optimized to enhance the understanding of categories. By integrating prompting Pre-trained Language Models (PLMs) into the rule-based iteration process, RulePrompt effectively harnesses symbolic knowledge, improving generative capability and semantic representations. Experimentally, RulePrompt consistently outperforms state-of-the-art weakly supervised methods, providing intuitive logical rules that aid in disambiguating confusing categories, especially on larger datasets. The conclusion highlights the need for future work to enrich rule expressiveness and develop more effective iteration strategies. **Strengths** - The paper introduces a novel method of deriving logical rules from pseudo labels, enhancing weakly supervised text classification beyond seed words. - Comprehensive experiments demonstrate RulePrompt's consistent outperformance of existing methods, showcasing its effectiveness. - The incorporation of PLMs into the iterative rule-based process leverages the potential of these models, improving semantic representations. **Weaknesses** - The method involves various steps, including iterative self-optimization and PLM integration, potentially raising computational complexity. - While performing well, the approach's effectiveness might vary concerning specific datasets or domain-specific texts not covered in the evaluation. questions: How does the proposed method's computational complexity scale with larger datasets or more complex rules? 
ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9SYnNa4WUl
RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules
[ "Miaomiao Li", "Jiaqi Zhu", "Yang Wang", "Yi Yang", "Yilin Li", "Hongan Wang" ]
Weakly supervised text classification (WSTC), also called zero-shot or dataless text classification, has attracted increasing attention due to its applicability in classifying a mass of texts within the dynamic and open Internet environment, since it requires only a limited set of seed words (label names) for each category instead of labeled data. With the help of recently popular prompting pre-trained language models (PLMs), many studies leveraged manually crafted and/or automatically identified verbalizers to estimate the likelihood of categories, but they failed to differentiate the effects of these category-indicative words, let alone capture their correlations and realize adaptive adjustments according to the unlabeled corpus. In this paper, in order to let the PLM better understand each category, we at first propose a novel form of rule-based knowledge using logical expressions to characterize the meanings of categories. Then, we develop a prompting PLM-based approach named RulePrompt for the WSTC task, consisting of a rule mining module and a rule-enhanced pseudo label generation module, plus a self-supervised fine-tuning module to make the PLM align with this task. Within this framework, the inaccurate pseudo labels assigned to texts and the imprecise logical rules associated with categories mutually enhance each other in an alternative manner, establishing a self-iterative closed loop of knowledge (rule) acquisition and utilization, with seed words serving as the starting point. Extensive experiments validate the effectiveness and robustness of our approach, which outperforms state-of-the-art weakly supervised methods. Importantly, our approach yields interpretable category rules, proving it advantageous for disambiguating easily-confused categories.
[ "weak supervision", "text classification", "seed word", "pre-trained language model", "prompt", "logical rule", "rule mining", "pseudo label" ]
https://openreview.net/pdf?id=9SYnNa4WUl
guXi3egQDR
official_review
1,700,224,395,044
9SYnNa4WUl
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1740/Reviewer_QeZV" ]
review: The paper proposes a text classification approach for a weakly supervised scenario. The approach combines frequent pattern mining (to find representative words and word pairs for categories in texts) and prompting pre-trained language models (to assign pseudo-labels and extract signal words). The approach is compared against seven competitors which it beats on three datasets. However, PESCO [27] (cited in Line 200) is not among them, though it also outperforms many of the competitors. For example, on the AG News dataset, PESCO has an accuracy of 89.6 compared to 86.4 for LOTClass. Also, other approaches are compared on other/further datasets, e.g., DBpedia, Yahoo Answers, or Amazon. For comparison it would be crucial to use more of these common datasets and also compare against PESCO. Otherwise, the benefits of the proposed method over the state of the art are hard to judge. The idea of combining LLMs with rule-based approaches is appealing and I appreciate that the paper tries to pursue this goal. However, I find the claim misleading that the approach uses logical rules. Basically, frequent words and word pairs are used to compute text similarities (using embeddings, cf. Eq. 17 and 18). Also, in Sec. 4.4.3 all words are connected by "and" (cf. l. 584). Thus, I could not find any application of logics. Even if I regard this as "rule-based", the paper would need to justify why a "rule-based knowledge representation for categories" is novel. Disjunctions of (conjunctions of) words are nothing special and have been used before (just think of Boolean search). Finally, the ablation study should evaluate whether the word pairs really bring some benefits – how would the approach perform, if those words were treated like the individual words? Suggestions for improvement: - You might consider writing "Web" instead of "Internet", since that better fits to the observations you describe.
- The effect of the rules and the combination of the signal words is quite difficult to judge. Please provide some representative examples of rules which were generated by the proposed approach. The example in lines 790 to 793 is not convincing, since "palestinian" and "war" are quite restrictive words and I wonder why, for instance, the word "politician" is not among the words. - The text contains many grammar issues and is at times difficult to understand. Please invest some time to improve the writing of the paper. questions: - (Notwithstanding how the rules are actually used:) Why can in each rule only two words be joined by a conjunction and why are arbitrary conjunctions of words not allowed? This seems rather restrictive. - How did you choose the support thresholds? How sensitive is the approach to other choices? ethics_review_flag: No ethics_review_description: N/A scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 4 technical_quality: 4 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9SYnNa4WUl
RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules
[ "Miaomiao Li", "Jiaqi Zhu", "Yang Wang", "Yi Yang", "Yilin Li", "Hongan Wang" ]
Weakly supervised text classification (WSTC), also called zero-shot or dataless text classification, has attracted increasing attention due to its applicability in classifying a mass of texts within the dynamic and open Internet environment, since it requires only a limited set of seed words (label names) for each category instead of labeled data. With the help of recently popular prompting pre-trained language models (PLMs), many studies leveraged manually crafted and/or automatically identified verbalizers to estimate the likelihood of categories, but they failed to differentiate the effects of these category-indicative words, let alone capture their correlations and realize adaptive adjustments according to the unlabeled corpus. In this paper, in order to let the PLM better understand each category, we at first propose a novel form of rule-based knowledge using logical expressions to characterize the meanings of categories. Then, we develop a prompting PLM-based approach named RulePrompt for the WSTC task, consisting of a rule mining module and a rule-enhanced pseudo label generation module, plus a self-supervised fine-tuning module to make the PLM align with this task. Within this framework, the inaccurate pseudo labels assigned to texts and the imprecise logical rules associated with categories mutually enhance each other in an alternative manner, establishing a self-iterative closed loop of knowledge (rule) acquisition and utilization, with seed words serving as the starting point. Extensive experiments validate the effectiveness and robustness of our approach, which outperforms state-of-the-art weakly supervised methods. Importantly, our approach yields interpretable category rules, proving it advantageous for disambiguating easily-confused categories.
[ "weak supervision", "text classification", "seed word", "pre-trained language model", "prompt", "logical rule", "rule mining", "pseudo label" ]
https://openreview.net/pdf?id=9SYnNa4WUl
QuJNMIKAYf
decision
1,705,909,256,498
9SYnNa4WUl
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: The authors discuss an approach that combines LLM and rule-based methods to overcome the limitations of weakly supervised text classification. More specifically, the method combines frequent pattern mining to find representative words and word pairs for categories in texts and prompting pre-trained language models to assign pseudo-labels and extract signal words. The method is thoroughly evaluated, and the empirical results demonstrate its performance.
9SYnNa4WUl
RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules
[ "Miaomiao Li", "Jiaqi Zhu", "Yang Wang", "Yi Yang", "Yilin Li", "Hongan Wang" ]
Weakly supervised text classification (WSTC), also called zero-shot or dataless text classification, has attracted increasing attention due to its applicability in classifying a mass of texts within the dynamic and open Internet environment, since it requires only a limited set of seed words (label names) for each category instead of labeled data. With the help of recently popular prompting pre-trained language models (PLMs), many studies leveraged manually crafted and/or automatically identified verbalizers to estimate the likelihood of categories, but they failed to differentiate the effects of these category-indicative words, let alone capture their correlations and realize adaptive adjustments according to the unlabeled corpus. In this paper, in order to let the PLM better understand each category, we at first propose a novel form of rule-based knowledge using logical expressions to characterize the meanings of categories. Then, we develop a prompting PLM-based approach named RulePrompt for the WSTC task, consisting of a rule mining module and a rule-enhanced pseudo label generation module, plus a self-supervised fine-tuning module to make the PLM align with this task. Within this framework, the inaccurate pseudo labels assigned to texts and the imprecise logical rules associated with categories mutually enhance each other in an alternative manner, establishing a self-iterative closed loop of knowledge (rule) acquisition and utilization, with seed words serving as the starting point. Extensive experiments validate the effectiveness and robustness of our approach, which outperforms state-of-the-art weakly supervised methods. Importantly, our approach yields interpretable category rules, proving it advantageous for disambiguating easily-confused categories.
[ "weak supervision", "text classification", "seed word", "pre-trained language model", "prompt", "logical rule", "rule mining", "pseudo label" ]
https://openreview.net/pdf?id=9SYnNa4WUl
CgGAzzM0Qw
official_review
1,700,967,818,202
9SYnNa4WUl
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1740/Reviewer_BcjZ" ]
review: ### Quality: The paper presents a new approach to the Weakly supervised text classification (WSTC) task that combines logical rules and prompting pre-trained language models. The authors provide a thorough description of their methodology and experimental setup. The method design and evaluation seem sound to me. ### Clarity: The paper is well-written and easy to follow. ### Originality: The approach presented in this paper is relatively new. Although the concrete modules (e.g., prompting PLMs and enrichment of supervision) borrow insights from previous work in WSTC, the overall framework that incorporates logical rules to improve WSTC is novel. There is also some novelty in generating pseudo labels based on embeddings and rules. ### Significance: The significance of this work lies in its potential to improve WSTC, which is an important task in text classification. The authors demonstrate that their approach outperforms existing methods on popular datasets, indicating that it could be a valuable tool. ### **Pros**: - A relatively novel approach to WSTC that combines logical rules and prompting pre-trained language models - Thorough explanation and sound designs of methodology and evaluation - The proposed method demonstrates effectiveness and robustness on popular datasets over strong baselines ### **Cons**: - The evaluation would benefit from incorporating large language model (LLMs) baseline results or at least discuss how LLMs could assist the WSTC task, though it is acceptable for a paper on WSTC to not compare to LLMs as prior work in WSTC usually assumes a small encoder model as the classifier. Some relevant studies can be found in the reference list below. - The experiments would benefit from more datasets. I found the current three datasets used to be acceptable, but they are indeed on the easier side of text classification tasks.
Also, it would be better to report the sensitivity of the method (e.g., via standard deviation over several runs) as the method seems to be a bit complex with iterative loops. - (Minor suggestion) It looks like the referenced paper "PromptClass: Weakly-Supervised Text Classification with Prompting Enhanced Noise Robust Self-Training" has been updated with a different title "PIEClass: Weakly-Supervised Text Classification with Prompting and Noise-Robust Iterative Ensemble Training" Reference: * Meng et al. "Generating Training Data with Language Models: Towards Zero-Shot Language Understanding." NeurIPS (2022). * Ye et al. "ZeroGen: Efficient Zero-shot Learning via Dataset Generation." EMNLP (2022). questions: Please address the "cons" raised in my main review. ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9SYnNa4WUl
RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules
[ "Miaomiao Li", "Jiaqi Zhu", "Yang Wang", "Yi Yang", "Yilin Li", "Hongan Wang" ]
Weakly supervised text classification (WSTC), also called zero-shot or dataless text classification, has attracted increasing attention due to its applicability in classifying a mass of texts within the dynamic and open Internet environment, since it requires only a limited set of seed words (label names) for each category instead of labeled data. With the help of recently popular prompting pre-trained language models (PLMs), many studies leveraged manually crafted and/or automatically identified verbalizers to estimate the likelihood of categories, but they failed to differentiate the effects of these category-indicative words, let alone capture their correlations and realize adaptive adjustments according to the unlabeled corpus. In this paper, in order to let the PLM better understand each category, we at first propose a novel form of rule-based knowledge using logical expressions to characterize the meanings of categories. Then, we develop a prompting PLM-based approach named RulePrompt for the WSTC task, consisting of a rule mining module and a rule-enhanced pseudo label generation module, plus a self-supervised fine-tuning module to make the PLM align with this task. Within this framework, the inaccurate pseudo labels assigned to texts and the imprecise logical rules associated with categories mutually enhance each other in an alternative manner, establishing a self-iterative closed loop of knowledge (rule) acquisition and utilization, with seed words serving as the starting point. Extensive experiments validate the effectiveness and robustness of our approach, which outperforms state-of-the-art weakly supervised methods. Importantly, our approach yields interpretable category rules, proving it advantageous for disambiguating easily-confused categories.
[ "weak supervision", "text classification", "seed word", "pre-trained language model", "prompt", "logical rule", "rule mining", "pseudo label" ]
https://openreview.net/pdf?id=9SYnNa4WUl
465vb0CeIh
official_review
1,700,663,121,986
9SYnNa4WUl
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1740/Reviewer_6haf" ]
review: The authors propose a rules-based and prompt-guided methodology, RulePrompt, to tackle (extremely) weakly supervised text classification. The framework consists of rule-mining and rule-enhancing pseudo label generation modules. Pseudo-labels are refined via an iterative process, making the weakly supervised signal stronger. The results indicate RulePrompt is SOTA or on par with three widely used datasets. The framework is well-motivated and justified. While it could be somewhat complicated at first glance, the authors are able to clearly articulate each equation, idea, etc. The results are somewhat convincing. The framework (completely) outperforms other baselines on 2/3 datasets (AGnews and IMDB), although it isn't by much (0.895 & 0.895 -> 0.897 & 0.897 for AGnews and 0.939 & 0.939 -> 0.941 & 0.941 for IMDB). The authors have a nice ablation study that not only looks at all of their components individually, but also across all datasets used. I believe experiments on more datasets are warranted, since RulePrompt is, at best, barely better than other frameworks. Nevertheless, it is definitely comparable with SOTA techniques. While not required, I believe the authors should have done a case study so that they could identify any potential patterns from incorrect predictions. questions: NA ethics_review_flag: No ethics_review_description: No scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9QA3TpxM3U
Cardinality Counting in "Alcatraz": A Privacy-aware Federated Learning Approach
[ "Nan Wu", "Xin Yuan", "Shuo Wang", "Hongsheng Hu", "Jason Xue" ]
The task of cardinality counting, pivotal for data analysis, endeavors to quantify unique elements within datasets and has significant applications across various sectors like healthcare, marketing, cybersecurity, and web analytics. Current methods, categorized into deterministic and probabilistic, often fail to prioritize data privacy. Given the fragmentation of datasets across various organizations, there is an elevated risk of inadvertently disclosing sensitive information during collaborative data studies using state-of-the-art cardinality counting techniques. This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. Our approach involves employing a locally differentially private data encoding for initial processing, followed by a privacy-aware federated $K$-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. The efficacy of our methodology is underscored by promising results from tests on both real-world and simulated datasets, pointing towards a transformative approach to privacy-sensitive cardinality counting in contemporary data science.
[ "Differential privacy", "federated learning", "data privacy" ]
https://openreview.net/pdf?id=9QA3TpxM3U
yq2aabybZZ
official_review
1,700,770,386,213
9QA3TpxM3U
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2039/Reviewer_aqda" ]
review: This paper presents a privacy-aware approach to ensure cardinality counting for distinct datasets without amalgamation. The idea is to employ a locally differentially private data encoding followed by a privacy-aware K-means clustering. Pros: - Interesting and practical use case. - Good writing overall. Cons: - Missing related literature and explanation for the design choice. - Evaluation is not thorough. Overall, I think this paper addresses an interesting problem in privacy-aware collaborations on sensitive datasets. The design choice is unclear. The paper does not discuss existing literature to justify why the proposed design is effective. For example, Bloom filters and local DP encoding are known methods, but this design choice is discussed too briefly. A missing related work (https://dl.acm.org/doi/10.1145/3372224.3419188) proposed a pretty similar design idea for submodel learning. The paper only evaluates the counting accuracy. It is important to also discuss the overheads (e.g., computation, communication) and potential limitations in larger-scale data analysis collaborations. questions: 1. What are the existing practices other than Bloom filter and local DP? 2. What are the overheads or limitations? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 4 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9QA3TpxM3U
Cardinality Counting in "Alcatraz": A Privacy-aware Federated Learning Approach
[ "Nan Wu", "Xin Yuan", "Shuo Wang", "Hongsheng Hu", "Jason Xue" ]
The task of cardinality counting, pivotal for data analysis, endeavors to quantify unique elements within datasets and has significant applications across various sectors like healthcare, marketing, cybersecurity, and web analytics. Current methods, categorized into deterministic and probabilistic, often fail to prioritize data privacy. Given the fragmentation of datasets across various organizations, there is an elevated risk of inadvertently disclosing sensitive information during collaborative data studies using state-of-the-art cardinality counting techniques. This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. Our approach involves employing a locally differentially private data encoding for initial processing, followed by a privacy-aware federated $K$-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. The efficacy of our methodology is underscored by promising results from tests on both real-world and simulated datasets, pointing towards a transformative approach to privacy-sensitive cardinality counting in contemporary data science.
[ "Differential privacy", "federated learning", "data privacy" ]
https://openreview.net/pdf?id=9QA3TpxM3U
gKO2u3310g
official_review
1,700,735,255,050
9QA3TpxM3U
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2039/Reviewer_8Vhp" ]
review: This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. This approach involves a combination of local differentially private data encoding and a privacy-aware federated K-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. Pros: 1. This paper proposes a practical method. 2. This paper proposes a privacy-preserving method for the cardinality counting dilemma, which is crucial in sectors like healthcare, marketing, cybersecurity, and web analytics. 3. This paper tests the method on both real-world and simulated datasets, which indicates the robustness of the proposed method. 4. This paper conducts detailed experiments. Cons: 1. More related works are needed to highlight the novelty of the proposed method. 2. Although Algorithm 2 summarizes the privacy-preserving federated K-means clustering algorithm for cardinality counting, more details on the steps of the algorithm should be provided in Section 3.4. 3. Federated learning can be resource-intensive, especially with privacy-preserving techniques like differentially private data encoding. This may pose challenges in terms of computational costs and efficiency. Is it possible to provide potential solutions to address the above challenges? questions: see cons. ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 7 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9QA3TpxM3U
Cardinality Counting in "Alcatraz": A Privacy-aware Federated Learning Approach
[ "Nan Wu", "Xin Yuan", "Shuo Wang", "Hongsheng Hu", "Jason Xue" ]
The task of cardinality counting, pivotal for data analysis, endeavors to quantify unique elements within datasets and has significant applications across various sectors like healthcare, marketing, cybersecurity, and web analytics. Current methods, categorized into deterministic and probabilistic, often fail to prioritize data privacy. Given the fragmentation of datasets across various organizations, there is an elevated risk of inadvertently disclosing sensitive information during collaborative data studies using state-of-the-art cardinality counting techniques. This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. Our approach involves employing a locally differentially private data encoding for initial processing, followed by a privacy-aware federated $K$-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. The efficacy of our methodology is underscored by promising results from tests on both real-world and simulated datasets, pointing towards a transformative approach to privacy-sensitive cardinality counting in contemporary data science.
[ "Differential privacy", "federated learning", "data privacy" ]
https://openreview.net/pdf?id=9QA3TpxM3U
bVrRd3xukF
official_review
1,700,423,805,443
9QA3TpxM3U
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2039/Reviewer_cwZd" ]
review: Paper Summary: This paper proposes a privacy-preserving cardinality counting algorithm. Specifically, the algorithm combines Bloom Filter Encoding, Locally Differentially Private Data Encoding, and a DP-perturbed federated K-Means. Cardinality counting has significant applications across various domains such as web mining, marketing, cybersecurity, and healthcare. However, the existing methods do not fully consider privacy during the counting process. The study proposes a privacy-centric solution for cardinality counting that can reduce privacy risks. Strengths: - The research problem in this paper is well-motivated. - The paper is well-written and easy to follow. Weaknesses: - Some technical details need to be clarified. - Lack of practical guidance on selecting the hyperparameters. This paper studies an important problem in contemporary data analysis; the authors provide sufficient background knowledge and motivating examples to formulate the problem. The paper is well-written and easy to follow. The experiments are performed on a real-world dataset. While I enjoyed reading this paper very much, I have some concerns about the technical details. Firstly, the main contribution of this paper is a modified version of previous work [34,36]. The authors design a purity score calculation part for the privacy-preserving federated K-means clustering. When the selected clients share the updated centroids with the server, whether this is guaranteed to be privacy-preserving is not proven. Secondly, there are many hyperparameters that can affect the trade-off between estimation accuracy and privacy protection ability. The authors need to provide some practical guidance on selecting the hyperparameters. For example, the authors can provide some empirical results on how the number of clusters affects the model's utility and efficiency. Thirdly, the authors do not analyze the computation complexity of the proposed algorithm.
The authors need to provide some theoretical analysis of the computation complexity. Fourth, the authors do not provide a convergence guarantee. In the proposed system, it is unclear whether the server needs to pre-decide a K for each client. What if the selected clients hold different Ks? Can the server side still converge to perform the clustering? Besides, it would be great if the authors could provide some empirical results on the scalability of the proposed algorithm. For example, the authors can provide some empirical results on how the number of clients affects the model's utility and efficiency. Minor comments: - In Section 2.2, a "linkage unit" is mentioned to collect data from multiple clients. What is a linkage unit? How does it differ from a server? - Citations 14 and 15 seem repeated. questions: 1. How to determine the K for the K-means clustering in the heterogeneous FL setting? 2. Can the proposed algorithm be applied to the case where the number of clusters is different for each client? 3. Are there any potential privacy risks in the proposed algorithm? 4. How to guarantee the convergence of the proposed algorithm? 5. Is there any theoretical analysis of the computation complexity? ethics_review_flag: No ethics_review_description: I do not see any ethical issues with this paper. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9QA3TpxM3U
Cardinality Counting in "Alcatraz": A Privacy-aware Federated Learning Approach
[ "Nan Wu", "Xin Yuan", "Shuo Wang", "Hongsheng Hu", "Jason Xue" ]
The task of cardinality counting, pivotal for data analysis, endeavors to quantify unique elements within datasets and has significant applications across various sectors like healthcare, marketing, cybersecurity, and web analytics. Current methods, categorized into deterministic and probabilistic, often fail to prioritize data privacy. Given the fragmentation of datasets across various organizations, there is an elevated risk of inadvertently disclosing sensitive information during collaborative data studies using state-of-the-art cardinality counting techniques. This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. Our approach involves employing a locally differentially private data encoding for initial processing, followed by a privacy-aware federated $K$-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. The efficacy of our methodology is underscored by promising results from tests on both real-world and simulated datasets, pointing towards a transformative approach to privacy-sensitive cardinality counting in contemporary data science.
[ "Differential privacy", "federated learning", "data privacy" ]
https://openreview.net/pdf?id=9QA3TpxM3U
W2XcXLe5JW
official_review
1,701,165,694,371
9QA3TpxM3U
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2039/Reviewer_ouiC" ]
review: # Summary In this paper, the authors propose a method for cardinality counting within the federated learning framework. This approach enables cardinality counting in a privacy-preserving manner, without revealing the actual data. # Strengths Good logic flow and writing. # Weaknesses The paper appears to overlook a few significant contributions in the field of privacy-preserving cardinality estimation. Despite the emphasis on the importance of this problem, there seems to be a lack of reference to some recent notable works on the topic. - For instance, the paper by Wright et al., entitled "Privacy-Preserving Secure Cardinality and Frequency Estimation" (https://storage.googleapis.com/pub-tools-public-publication-data/pdf/3e44af84a8404c28aaebff347a4bd5e305a62eda.pdf), introduces advanced methods for cardinality and frequency estimation by combining aspects of HyperLogLog (HLL) and Bloom filters. This work is particularly relevant as it presents a scalable secure multi-party computation protocol that is crucial for the topic at hand. - Another significant contribution is the NeurIPS 2020 paper, "The Flajolet-Martin Sketch Itself Preserves Differential Privacy: Private Counting with Minimal Space" (https://proceedings.neurips.cc/paper/2020/file/e3019767b1b23f82883c9850356b71d6-Paper.pdf). This paper discusses privacy preservation using the Flajolet-Martin Sketch, which is a key technique in the realm of cardinality estimation. Considering the relevance and impact of these works to the problem under discussion, it would greatly benefit the paper to include these in the discussion of related works. This could provide a more comprehensive background for the study, and further strengthen the positioning and novelty of the current work. questions: Why are the related works not mentioned & evaluated?
ethics_review_flag: No ethics_review_description: No ethical issue scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
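For context on the sketch-based estimators this review cites, a Flajolet-Martin style counter can be sketched roughly as follows. This is an illustrative toy version, not the construction from the cited papers; averaging many independently salted sketches is used here simply to reduce variance:

```python
import hashlib

def trailing_zeros(x):
    """Position of the lowest set bit (rho in Flajolet-Martin terminology)."""
    if x == 0:
        return 32
    n = 0
    while x & 1 == 0:
        x >>= 1
        n += 1
    return n

def fm_estimate(items, num_sketches=32):
    """Track the maximum trailing-zero count per salted hash, average the
    maxima, and correct by the Flajolet-Martin constant ~0.77351."""
    PHI = 0.77351
    max_rho = [0] * num_sketches
    for item in items:
        for s in range(num_sketches):
            h = int(hashlib.sha256(f"{s}:{item}".encode()).hexdigest(), 16) & 0xFFFFFFFF
            max_rho[s] = max(max_rho[s], trailing_zeros(h))
    mean_rho = sum(max_rho) / num_sketches
    return (2 ** mean_rho) / PHI
```

Because duplicates hash identically, repeated items never change the sketch; that is the property that makes such estimators suitable for distinct (cardinality) counting.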
9QA3TpxM3U
Cardinality Counting in "Alcatraz": A Privacy-aware Federated Learning Approach
[ "Nan Wu", "Xin Yuan", "Shuo Wang", "Hongsheng Hu", "Jason Xue" ]
The task of cardinality counting, pivotal for data analysis, endeavors to quantify unique elements within datasets and has significant applications across various sectors like healthcare, marketing, cybersecurity, and web analytics. Current methods, categorized into deterministic and probabilistic, often fail to prioritize data privacy. Given the fragmentation of datasets across various organizations, there is an elevated risk of inadvertently disclosing sensitive information during collaborative data studies using state-of-the-art cardinality counting techniques. This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. Our approach involves employing a locally differentially private data encoding for initial processing, followed by a privacy-aware federated $K$-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. The efficacy of our methodology is underscored by promising results from tests on both real-world and simulated datasets, pointing towards a transformative approach to privacy-sensitive cardinality counting in contemporary data science.
[ "Differential privacy", "federated learning", "data privacy" ]
https://openreview.net/pdf?id=9QA3TpxM3U
N6pVhDR7DT
official_review
1,700,934,519,506
9QA3TpxM3U
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2039/Reviewer_9oJj" ]
review: This work proposes a privacy-preserving, FL-based solution for cardinality counting of datasets. The approach provides privacy guarantees by first applying a Bloom filter encoding and then local DP on the client side. The authors evaluated their approach using real and synthetic datasets. Strengths of this work: - Well-written and easy-to-follow manuscript - Work is very well motivated. - I appreciated all the background information, which makes the paper easy to read and understand Weaknesses: - My main complaint is the contribution of this work. The solution looks very trivial (Bloom filter encoding > DP > local K-means > global K-means). - The authors mention: "the first federated cardinality counting framework that allows cardinality counting to occur across distinct datasets". Why is FL important in your case? What are the limitations that FL solves? - Limited related work (section is mixed with Introduction). Are there any related works? You mention "the existing methods do not fully consider privacy during the counting process" and refer to a supporting citation, but what are these works? - The baseline used is central clustering with local DP. I appreciate the effort, but would expect to also compare with other existing approaches. - Possible missing related work: "Learning with Privacy at Scale", Differential Privacy Team, Apple questions: - What are the available related works and how do they compare to your approach? ethics_review_flag: No ethics_review_description: Datasets used are public scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 3 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
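The pipeline this review summarizes (Bloom filter encoding on the client, DP perturbation, local K-means statistics, global aggregation) can be sketched as below. This is an illustrative reconstruction: the paper's noise placement, privacy calibration, and the purity-score component another reviewer mentions are not reproduced, and `noise_scale` is a hypothetical parameter:

```python
import random

def local_kmeans_step(points, centroids):
    """One Lloyd step on a client's local (privately encoded) points:
    assign each point to its nearest centroid, return per-cluster sums/counts."""
    dim = len(centroids[0])
    sums = [[0.0] * dim for _ in centroids]
    counts = [0] * len(centroids)
    for p in points:
        j = min(range(len(centroids)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        counts[j] += 1
        for d in range(dim):
            sums[j][d] += p[d]
    return sums, counts

def dp_noise(x, scale, rng):
    # Gaussian-mechanism-style perturbation; calibration to a formal
    # (epsilon, delta) budget is omitted in this sketch.
    return x + rng.gauss(0.0, scale)

def federated_round(clients, centroids, noise_scale, rng):
    """Server-side aggregation of noised statistics from every client."""
    dim, K = len(centroids[0]), len(centroids)
    agg_sums = [[0.0] * dim for _ in range(K)]
    agg_counts = [0.0] * K
    for pts in clients:
        sums, counts = local_kmeans_step(pts, centroids)
        for j in range(K):
            agg_counts[j] += dp_noise(counts[j], noise_scale, rng)
            for d in range(dim):
                agg_sums[j][d] += dp_noise(sums[j][d], noise_scale, rng)
    new_centroids = []
    for j in range(K):
        if agg_counts[j] > 0:
            new_centroids.append([agg_sums[j][d] / agg_counts[j] for d in range(dim)])
        else:
            new_centroids.append(list(centroids[j]))
    return new_centroids
```

Under such a scheme, near-duplicate encodings of the same entity fall into one cluster, so (with K chosen large enough) the number of non-empty clusters can serve as the basis for the cardinality estimate.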
9QA3TpxM3U
Cardinality Counting in "Alcatraz": A Privacy-aware Federated Learning Approach
[ "Nan Wu", "Xin Yuan", "Shuo Wang", "Hongsheng Hu", "Jason Xue" ]
The task of cardinality counting, pivotal for data analysis, endeavors to quantify unique elements within datasets and has significant applications across various sectors like healthcare, marketing, cybersecurity, and web analytics. Current methods, categorized into deterministic and probabilistic, often fail to prioritize data privacy. Given the fragmentation of datasets across various organizations, there is an elevated risk of inadvertently disclosing sensitive information during collaborative data studies using state-of-the-art cardinality counting techniques. This study introduces an innovative privacy-centric solution for the cardinality counting dilemma, leveraging a federated learning framework. Our approach involves employing a locally differentially private data encoding for initial processing, followed by a privacy-aware federated $K$-means clustering strategy, ensuring that cardinality counting occurs across distinct datasets without necessitating data amalgamation. The efficacy of our methodology is underscored by promising results from tests on both real-world and simulated datasets, pointing towards a transformative approach to privacy-sensitive cardinality counting in contemporary data science.
[ "Differential privacy", "federated learning", "data privacy" ]
https://openreview.net/pdf?id=9QA3TpxM3U
7SpOK86mAx
decision
1,705,909,241,441
9QA3TpxM3U
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: This paper proposes a privacy-preserving federated clustering method for cardinality counting. The approach provides privacy guarantees by first applying a Bloom filter encoding and then local DP on the client side, followed by privacy-aware K-means clustering. Most reviewers agree the paper is well-written and well-motivated. The idea, although not very novel, is technically sound and practical. The evaluation is extensive and the results are convincing. Suggestion: Please add more references to related work.
9Ob8Kmia9E
Mechanism Design for Large Language Models
[ "Paul Duetting", "Vahab Mirrokni", "Renato Paes Leme", "Haifeng Xu", "Song Zuo" ]
We investigate auction mechanisms to support the emerging format of AI-generated content. We in particular study how to aggregate several LLMs in an incentive compatible manner. In this problem, the preferences of each agent over stochastically generated contents are described/encoded as an LLM. A key motivation is to design an auction format for AI-generated ad creatives to combine inputs from different advertisers. We argue that this problem, while generally falling under the umbrella of mechanism design, has several unique features. We propose a general formalism---the *token auction* model---for studying this problem. A key feature of this model is that it acts on a token-by-token basis and lets LLM agents influence generated contents through single dimensional bids. We first explore a robust auction design approach, in which all we assume is that agent preferences entail partial orders over outcome distributions. We formulate two natural incentive properties, and show that these are equivalent to a monotonicity condition on distribution aggregation. We also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. We then move to designing concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. We conclude with experimental results in support of the token auction formulation.
[ "auction design", "large language models", "content creation" ]
https://openreview.net/pdf?id=9Ob8Kmia9E
vYifTW7aFM
official_review
1,701,103,859,193
9Ob8Kmia9E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1201/Reviewer_rLpi" ]
review: This paper explores the problem of designing auction mechanisms to support the emerging format of AI-generated content, with a focus on aggregating several Large Language Models (LLMs) in an incentive compatible manner. The authors propose a general formalism called the token auction model for studying this problem. The paper first proposes a robust auction design approach that assumes agent preferences entail partial orders over outcome distributions. The authors formulate two natural incentive properties and show that these are equivalent to a monotonicity condition on distribution aggregation. They also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. The authors then design concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. questions: This paper considers a mechanism design problem from an innovative perspective and it seems that the model can be applied to realistic scenarios. Even though part of the results focuses on the property characterization of mechanisms, it is the first paper to take LLMs into mechanism design, which may stimulate subsequent work. Can the authors give more explanation on the utilities of advertisers and how the bids affect these utilities? ethics_review_flag: No ethics_review_description: No scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 4 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
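The two aggregation rules discussed across the abstract and reviews, a linear (convex-combination) rule and a log-linear (weighted geometric mean) rule over token distributions, can be sketched over explicit probability vectors as follows. This is illustrative only; the paper's exact mapping from bids to weights is not reproduced here:

```python
import math

def linear_aggregate(dists, weights):
    """Weighted convex combination: p(t) = sum_i w_i * p_i(t) / sum_i w_i."""
    total = sum(weights)
    return [sum(w * d[t] for w, d in zip(weights, dists)) / total
            for t in range(len(dists[0]))]

def log_linear_aggregate(dists, weights):
    """Normalized weighted geometric mean: p(t) proportional to
    prod_i p_i(t)^(w_i / sum_j w_j). Assumes strictly positive entries."""
    total = sum(weights)
    logs = [sum((w / total) * math.log(d[t]) for w, d in zip(weights, dists))
            for t in range(len(dists[0]))]
    m = max(logs)  # subtract the max before exponentiating, for stability
    exps = [math.exp(v - m) for v in logs]
    z = sum(exps)
    return [e / z for e in exps]
```

As one agent's weight grows relative to the other's, both rules move the aggregate toward that agent's preferred distribution, which is the qualitative behavior the toy two-agent experiment described in the reviews exhibits.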
9Ob8Kmia9E
Mechanism Design for Large Language Models
[ "Paul Duetting", "Vahab Mirrokni", "Renato Paes Leme", "Haifeng Xu", "Song Zuo" ]
We investigate auction mechanisms to support the emerging format of AI-generated content. We in particular study how to aggregate several LLMs in an incentive compatible manner. In this problem, the preferences of each agent over stochastically generated contents are described/encoded as an LLM. A key motivation is to design an auction format for AI-generated ad creatives to combine inputs from different advertisers. We argue that this problem, while generally falling under the umbrella of mechanism design, has several unique features. We propose a general formalism---the *token auction* model---for studying this problem. A key feature of this model is that it acts on a token-by-token basis and lets LLM agents influence generated contents through single dimensional bids. We first explore a robust auction design approach, in which all we assume is that agent preferences entail partial orders over outcome distributions. We formulate two natural incentive properties, and show that these are equivalent to a monotonicity condition on distribution aggregation. We also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. We then move to designing concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. We conclude with experimental results in support of the token auction formulation.
[ "auction design", "large language models", "content creation" ]
https://openreview.net/pdf?id=9Ob8Kmia9E
McHEgneWRF
official_review
1,700,500,490,331
9Ob8Kmia9E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1201/Reviewer_KiYJ" ]
review: The paper studies a scenario of online ad auctions when the bidding agents are LLMs. They propose a formalism for studying these auctions and propose auctions that can merge content from different advertisers, thus effectively allowing for more bidders to win the auction. The paper technically focuses on the domain of text generation, but it seems (as also noted in footnote 2) that the abstraction may be useful for other domains as well. I believe the paper touches on an interesting and certainly relevant topic of automated agents (and in particular, LLMs) operating in economic online markets and makes an interesting proposal for how to auction ad positions to such automated agents. I have some questions about the assumptions and economic modeling in this paper as well as some other comments (see Questions below). The paper does make progress on this important topic of markets with automated agents and makes an interesting and novel contribution as to how to theoretically model LLM agents. The theoretical analysis seems to be well performed, given the assumptions, as far as I saw. Overall, I think the paper is interesting and believe that some of the economic issues that arise may be addressed by follow-up work. I would be happy to hear the authors' view on the points raised in "Questions" below. Minor comments: - The way I understand it, the main motivation for not using a standard auction design is that there is an opportunity to improve welfare by having more than one winner even when there is a single item for sale (a single ad position). I think it would be good to emphasize further this motivation of improving welfare. Currently in paragraph 2 it is not very clear why the simple auction is not a good-enough solution. - The first time the acronym LLM appears, it would be better to spell it out. - Line 190, the statement is not very clear. Did the authors mean the following?
If for two different bids x and y of the same agent, the final distribution when the bid is x is closer to the preferred distribution than the final distribution when the bid is y for some bids of the other agents, then it should be so for all bids of the other agents. If this is the intention, it would be good to clarify. - The section on additional related work in the body of the paper is very brief and not so informative. I guess that this may have been shortened due to the space limitation in the submission. Perhaps the authors could consider also deferring this paragraph to the appendix (or if there is space in the body of the paper, merging it with appendix A inside the paper as a full section). questions: 1) Combining competing ads: A concern that arises with the idea of merging content of different bidders, as in the example on page 1, is about cases when the bidders promote competing products, or more generally have competing economic interests. I could imagine how this can be a reasonable scenario, since if the bidders are competing for the same ad space (i.e., want their ad to be presented to the same consumers at the same times and locations, and, who perhaps were even searching for similar products), then the advertiser may be selling the same type of product. An LLM will produce an output combining both ads even in cases when advertiser interests are conflicting (e.g., take a flight to Hawaii with firm-A-airlines and with firm-B-airlines). How can a mechanism such as the one proposed prevent such cases? Or more generally, assure that it creates ads that make sense in the market (at least are not economically self-contradictory)? 2) Selling an item that is different from the one that the buyers asked to buy: Continuing the previous point, in the classic model of auctions, as the paper also mentions, it is assumed that bidders want to buy the item that is being sold, which is modeled as some value that they have for it.
In particular, for ads, the bidders have some value that they perceive for the right to use the ad space, and the auction model makes it completely the responsibility of the bidders (who win the auction) to form their valuations. If I understand correctly, in the proposed framework, this is no longer the case. The advertisers wish to present some content, but then they may end up winning the auction and paying the auctioneer for presenting different content. Is there a way to verify that the buyer (bidder) actually wants to pay for the resulting product they buy? The first issue is with merging content, and a different and perhaps more subtle issue is that payments are set by the average of the output distribution, where the actual output sampled for the ad might be far from some of the agents' preferences. As LLMs are black boxes, it seems challenging to have guarantees that their output makes sense economically (or makes sense at all), which is part of the reason this paper is interesting. I believe that some discussion of the above points may be useful for the paper. 3) Calibration of payments to actual willingness to pay: The payment rule suggested addresses the important point of making the payment aligned with ordinal preferences that the agents have (partially) over outcomes. However, eventually, the agents operate for firms that need to pay when winning the auction. It is not clear how these payments relate to actual monetary amounts that the owners of the agents are willing to pay. Specifically, if no values, budgets, or other monetary preferences were given to the agents, how can they calibrate their preferences over tokens to actual money? It seems unlikely that the owner of the LLM will specify to their LLM agent their willingness to pay for every possible outcome or distribution over outcomes.
4) Stateless agents: In Section 1.2, first paragraph, the statement "One salient feature of the state-of-the-art LLMs is that they are stateless, i.e., they maintain no internal memory or state" is not entirely clear to me. I believe that the intention is that the models are trained offline and their neural network itself is then fixed. However, LLM sessions do have some form of a state which records the history of input and output tokens across different prompts - e.g., it is possible to ask an LLM to print again the first prompt in the current session, or to refer it to the last two prompts and generate a combined prompt and respond to it, etc. It would be good to clarify this part further. Does this kind of memory have implications on the framework? If so, it would be good to discuss them. 5) Related work: Appendix A does seem to describe the technically related work and provide some key pointers on mechanism design, learning, and auctions. The context of this paper is studying how mechanisms should operate when, instead of classic players, the game is played by automated learning algorithms. This connects to a recent line of work that is currently missing from the discussion, which studies how incentives are generated for users of learning algorithms in various contexts, including in auctions (though using different models). I suggest adding some discussion of references along these lines. [1] https://proceedings.neurips.cc/paper_files/paper/2022/file/b39fcf2e88dad4c38386b3af6edf88c7-Paper-Conference.pdf [2] https://dl.acm.org/doi/pdf/10.1145/3485447.3512055 [3] https://arxiv.org/pdf/2307.07374.pdf [4] https://dl.acm.org/doi/pdf/10.1145/3543507.3583416 ethics_review_flag: No ethics_review_description: The paper does theoretical modeling, there does not seem to be any ethical issue.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9Ob8Kmia9E
Mechanism Design for Large Language Models
[ "Paul Duetting", "Vahab Mirrokni", "Renato Paes Leme", "Haifeng Xu", "Song Zuo" ]
We investigate auction mechanisms to support the emerging format of AI-generated content. We in particular study how to aggregate several LLMs in an incentive compatible manner. In this problem, the preferences of each agent over stochastically generated contents are described/encoded as an LLM. A key motivation is to design an auction format for AI-generated ad creatives to combine inputs from different advertisers. We argue that this problem, while generally falling under the umbrella of mechanism design, has several unique features. We propose a general formalism---the *token auction* model---for studying this problem. A key feature of this model is that it acts on a token-by-token basis and lets LLM agents influence generated contents through single dimensional bids. We first explore a robust auction design approach, in which all we assume is that agent preferences entail partial orders over outcome distributions. We formulate two natural incentive properties, and show that these are equivalent to a monotonicity condition on distribution aggregation. We also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. We then move to designing concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. We conclude with experimental results in support of the token auction formulation.
[ "auction design", "large language models", "content creation" ]
https://openreview.net/pdf?id=9Ob8Kmia9E
HzivUN7FbL
official_review
1,700,628,237,311
9Ob8Kmia9E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1201/Reviewer_zwT4" ]
review: The authors consider the problem of designing auctions and mechanisms in an advertising setting with content to be generated from large language models. More specifically, the authors propose a model which they call the "token auction" model wherein bidders specify public LLMs which are modeled as preferred distributions over tokens and submit bids to influence how closely a generated aggregate distribution over tokens resembles their desired distribution. To this end, the authors define two natural incentive properties which essentially "pin down" the possible aggregation functions over the LLMs which allow for truthful implementation (akin to a Myersonian monotonicity condition in traditional auction settings). They further use this characterization to define second-price-like payment rules. Finally, the authors examine possible aggregation functions inspired by the training of LLMs and argue that an aggregation function inspired by KL divergence (i.e., a linear aggregation rule) - a loss function common in the first stage of LLM training - is monotone whereas an alternative aggregation function inspired by RL-stage training (i.e., a log-linear aggregation rule) is non-monotone. They ultimately evaluate their aggregation functions on a toy model with two LLM agents demonstrating that as one agent's bid becomes gradually larger (relative to the other bid) the resulting aggregate token generation becomes gradually closer to the agent's preferred token generation. On the positive side, this paper feels very timely and introduces an interesting auction model for a natural emerging setting which is likely to be of interest to many in the WebConf community. I agree with the sentiment in the paper that LLMs are likely to be an important part of the ad auction ecosystem, and I think this paper presents a nice "proof of concept" first step toward thinking about how one should design mechanisms in this new space.
Furthermore, the results in the paper, while not particularly technically demanding, in my opinion, are the "right" suite of results for an initial paper proposing a new line of larger open questions and, thus, "clear the bar" in my view. On the negative side, although I like the model and results, I am not sure it captures some of the fundamental tradeoffs in this setting. In particular, consider two competing firms offering a similar service. A generated set of tokens which mentions both competing firms may not have positive value to either firm (due to the externality generated by the mention of the other). I do not think such a scenario can neatly be captured in the proposed token auction framework in this paper. However, I do not think oversights in the initial model the authors propose significantly detract from the work, but I would suggest that the authors add discussion about extensions to, drawbacks of, and future questions regarding their proposed setting to paint a more complete picture. On the whole, I am positive about this submission. Lines 75-93: I would suggest using a consistent capitalization/case choice for "Maui Airlines" and "Stingray Resort" Line 303: I wonder if "obvious" is the right word to use here (and elsewhere). Perhaps "natural" is better? I don't think it is too significant, but "obvious preferences"/"obvious strategyproofness" now have strong behavioral game theory connotations (see, e.g., [Zhang and Levin 2017] "Partition Obvious Preference and Mechanism Design: Theory and Experiment") Line 839: "we need to peak" -> "we need to peek" Line 846: "we start with a based model" -> "we start with a base model" [After rebuttal] I thank the authors for their responses to my questions as well as the questions of the other reviewers. I am positive about this paper and would recommend acceptance.
questions: The objectives of the bidders (and welfare function of the central planner) that you propose seem to share some common "spirit" with the literature on truthful aggregation of budget proposals (see, e.g., [Freeman et al. 2021] in the Journal of Economic Theory). Can you comment on whether insights from that literature have any relevance to your setting (or vice-versa)? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9Ob8Kmia9E
Mechanism Design for Large Language Models
[ "Paul Duetting", "Vahab Mirrokni", "Renato Paes Leme", "Haifeng Xu", "Song Zuo" ]
We investigate auction mechanisms to support the emerging format of AI-generated content. We in particular study how to aggregate several LLMs in an incentive compatible manner. In this problem, the preferences of each agent over stochastically generated contents are described/encoded as an LLM. A key motivation is to design an auction format for AI-generated ad creatives to combine inputs from different advertisers. We argue that this problem, while generally falling under the umbrella of mechanism design, has several unique features. We propose a general formalism---the *token auction* model---for studying this problem. A key feature of this model is that it acts on a token-by-token basis and lets LLM agents influence generated contents through single dimensional bids. We first explore a robust auction design approach, in which all we assume is that agent preferences entail partial orders over outcome distributions. We formulate two natural incentive properties, and show that these are equivalent to a monotonicity condition on distribution aggregation. We also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. We then move to designing concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. We conclude with experimental results in support of the token auction formulation.
[ "auction design", "large language models", "content creation" ]
https://openreview.net/pdf?id=9Ob8Kmia9E
GBw1y4kUM0
official_review
1,700,751,527,480
9Ob8Kmia9E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1201/Reviewer_7WDb" ]
review: ## After rebuttal After reading the author's response and the other reviews, I still believe this is a solid paper for the conference. ## Summary: The paper model agents that have preferences over content created by large language models (LLMs) and propose a proper auction design to aggregate the created content together, for example, to create an aggregated advertisement for various products or services in a dialog of a famous video game. They demonstrate how to design an auction in the spirit of the well-known "second-price auction" and propose aggregation functions for this process. ## Evaluation: The paper proposes a new model, essentially combining mechanism design with a topic of extreme interest. Crucially, the connection is not straightforward. ### Pros: 1. The paper works on an area of extreme interest and makes the first connection of mechanism design techniques to the area. 2. The paper makes a first and quite successful attempt to model preferences for LLM agents. 3. The paper is well-written and technically sound. ### Cons: I could not identify any major concerns. questions: 1. While the obvious preferences modeling seems compelling for LLM agents, have you considered any alternatives? It would be nice to see any modeling examples that didn't work well or some high-level thoughts on possible future directions. 2. Regarding the necessity of randomization in line 109: Can you add some indicative citations or a quick explanation for the interested reader to be able to follow up with this? ethics_review_flag: No ethics_review_description: - scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 7 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9Ob8Kmia9E
Mechanism Design for Large Language Models
[ "Paul Duetting", "Vahab Mirrokni", "Renato Paes Leme", "Haifeng Xu", "Song Zuo" ]
We investigate auction mechanisms to support the emerging format of AI-generated content. We in particular study how to aggregate several LLMs in an incentive compatible manner. In this problem, the preferences of each agent over stochastically generated contents are described/encoded as an LLM. A key motivation is to design an auction format for AI-generated ad creatives to combine inputs from different advertisers. We argue that this problem, while generally falling under the umbrella of mechanism design, has several unique features. We propose a general formalism---the *token auction* model---for studying this problem. A key feature of this model is that it acts on a token-by-token basis and lets LLM agents influence generated contents through single dimensional bids. We first explore a robust auction design approach, in which all we assume is that agent preferences entail partial orders over outcome distributions. We formulate two natural incentive properties, and show that these are equivalent to a monotonicity condition on distribution aggregation. We also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. We then move to designing concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. We conclude with experimental results in support of the token auction formulation.
[ "auction design", "large language models", "content creation" ]
https://openreview.net/pdf?id=9Ob8Kmia9E
5sY9W38ivB
decision
1,705,909,208,289
9Ob8Kmia9E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: This paper studies how to design auction mechanisms for ads that are generated by LLMs, and bidders can submit both LLMs as well as bids. It proposes a novel model for this setting, a novel auction format generalizing second-price auctions for this model, and analyze the proposed auction theoretically and with experiments. The review team identified the following strengths and weaknesses of the submission: Strengths: - Novel, timely, and innovative mechanism design problem that is likely to be of broad interest to the WebConf community - The proposed model is non-trivial, original, and makes sense for this novel setting. - The technical results are natural, sound and seem to be the 'right' results for this new model Weaknesses: - There are some clear extensions of the model (e.g. dealing with substitutes) that are of first order of interest for the settings described in the paper, but which are not addressed by the authors in the submission Overall, the review team unanimously finds the paper novel, well-executed, and likely to be of broad interest. It has potential to be a landmark paper sparking a new line of research linking LLMs and mechanism design. I recommend to accept the paper.
9Ob8Kmia9E
Mechanism Design for Large Language Models
[ "Paul Duetting", "Vahab Mirrokni", "Renato Paes Leme", "Haifeng Xu", "Song Zuo" ]
We investigate auction mechanisms to support the emerging format of AI-generated content. We in particular study how to aggregate several LLMs in an incentive compatible manner. In this problem, the preferences of each agent over stochastically generated contents are described/encoded as an LLM. A key motivation is to design an auction format for AI-generated ad creatives to combine inputs from different advertisers. We argue that this problem, while generally falling under the umbrella of mechanism design, has several unique features. We propose a general formalism---the *token auction* model---for studying this problem. A key feature of this model is that it acts on a token-by-token basis and lets LLM agents influence generated contents through single dimensional bids. We first explore a robust auction design approach, in which all we assume is that agent preferences entail partial orders over outcome distributions. We formulate two natural incentive properties, and show that these are equivalent to a monotonicity condition on distribution aggregation. We also show that for such aggregation functions, it is possible to design a second-price auction, despite the absence of bidder valuation functions. We then move to designing concrete aggregation functions by focusing on specific valuation forms based on KL-divergence, a commonly used loss function in LLM. The welfare-maximizing aggregation rules turn out to be the weighted (log-space) convex combination of the target distributions from all participants. We conclude with experimental results in support of the token auction formulation.
[ "auction design", "large language models", "content creation" ]
https://openreview.net/pdf?id=9Ob8Kmia9E
5VvnVxIOWe
official_review
1,700,830,676,921
9Ob8Kmia9E
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1201/Reviewer_mj3d" ]
review: The authors study a mechanism design problem for large language models, that is motivated by the fact that AI-generated ads can combine input from different advertisers. They formulate the problem based on a token auction model, and they design two incentive properties that end up to be equivalent to monotonicity conditions on distribution aggregation. They then show that for such aggregation functions it is possible to design second price auctions despite the fact that in this model there are no bidder valuation functions. Finally, they design specific aggregation functions and provide analysis of the auction both theoretically and experimentally. Although I am not that familiar with the area, I liked the problem that the paper studies, I found it well-motivated, and in general I think that it is well-written and manages to convey the message even to the unfamiliar reader. The formulation that the authors propose is concrete and the analysis of the model is also nice and involved. I do not have any major complaints apart from the fact that the complexity of the model makes some parts hard to evaluate. Overall, I would say that this is an interesting paper that presents a nice collection of results and probably is a good match for the conference. questions: None. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9M81IjqvPm
Ad vs Organic: Revisiting Incentive Compatible Mechanism Design in E-commerce Platforms
[ "Ningyuan Li", "Yunxuan Ma", "Yang Zhao", "Qian Wang", "Zhilin Zhang", "Chuan Yu", "Jian Xu", "Bo Zheng", "Xiaotie Deng" ]
On typical e-commerce platforms, a product can be dislayed to the user in two possible forms, as an ad item or an organic item. Usually ad and organic items are selected separately by the advertising system and recommendation system, and then combined by a merging mechanism. Although the design of the merging mechanism have been extensively studied, little attention has been paid to a critical situation that arises when the set of candidate ad items and organic items overlap. Despite its common occurrence, this situation is not correctly handled by almost all existing works, potentially causing incentive problems for advertisers and violation of economic constraints. To this end, we revisit the design of the merging mechanism. We identify a necessary property called form stability, and provide simplification results of the mechanism design problem. Moreover, we design simple mechanisms strictly ensuring economic properties such as incentive compatibility, and demonstrate that they are approximately optimal under certain assumptions.
[ "E-Commerce", "Mechanism Design", "Online Advertising", "Competitive Ratio" ]
https://openreview.net/pdf?id=9M81IjqvPm
wpkMz3oUP4
official_review
1,700,354,614,608
9M81IjqvPm
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1932/Reviewer_YhnN" ]
review: Summary of Main Content: The paper explores an intriguing yet overlooked scenario in existing literature: designing the merging mechanism of ad and organic items, wherein ad and organic items can overlap. The paper formally articulates the mechanism design problem in the aforementioned scenario and simplifies this problem by introducing a necessary condition named form stability. Finally, this paper designs two mechanisms, called the FIX mechanism and CHANGE mechanism, and analyzes their competitive ratios under certain conditions. Strengths: The paper explores the intersection of advertising auctions and recommendation systems, a topic that is novel, intriguing, and has garnered considerable attention. From the introduction of the problem background to the formulation and simplification of the mechanism design problem, and then to the design and analysis of two mechanisms in terms of their performance, the logic is clear, progressively deepened, and easy to comprehend. Weaknesses: There are many typos throughout the paper. For example, 'display' is misspelled in the first line; in Definition 5.1, there are two consecutive letters 'j'; and there are numerous instances of garbled references to equations, such as in lines 346 and 381. While these issues do not hinder the understanding of the content, they do significantly affect the reader's experience, giving the impression that the authors did not rigorously quality-check the paper before submission. Advice: The paper is generally well-written and technically sound. The only suggestion I have is to diligently revise and correct the typographical errors. Before submitting in the future, ensure a thorough quality check to prevent such typographical issues from affecting the outcomes and credibility of the paper.
questions: 1. Given the extensive research in integrating ad and organic items in online advertising and recommender systems, how does your work differentiate from these existing approaches, especially in terms of handling the overlap of ad and organic items? 2. Please explain in detail the incentive problems present in existing works, the challenges they pose for mechanism design, and how you have addressed and overcome these issues. 3. Is there any real-world data indicating that the candidate ad and organic items may overlap, thereby validating the practical relevance of the research topic addressed in this paper? ethics_review_flag: No ethics_review_description: no scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9M81IjqvPm
Ad vs Organic: Revisiting Incentive Compatible Mechanism Design in E-commerce Platforms
[ "Ningyuan Li", "Yunxuan Ma", "Yang Zhao", "Qian Wang", "Zhilin Zhang", "Chuan Yu", "Jian Xu", "Bo Zheng", "Xiaotie Deng" ]
On typical e-commerce platforms, a product can be dislayed to the user in two possible forms, as an ad item or an organic item. Usually ad and organic items are selected separately by the advertising system and recommendation system, and then combined by a merging mechanism. Although the design of the merging mechanism have been extensively studied, little attention has been paid to a critical situation that arises when the set of candidate ad items and organic items overlap. Despite its common occurrence, this situation is not correctly handled by almost all existing works, potentially causing incentive problems for advertisers and violation of economic constraints. To this end, we revisit the design of the merging mechanism. We identify a necessary property called form stability, and provide simplification results of the mechanism design problem. Moreover, we design simple mechanisms strictly ensuring economic properties such as incentive compatibility, and demonstrate that they are approximately optimal under certain assumptions.
[ "E-Commerce", "Mechanism Design", "Online Advertising", "Competitive Ratio" ]
https://openreview.net/pdf?id=9M81IjqvPm
nLGZmbg7Et
official_review
1,701,230,648,512
9M81IjqvPm
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1932/Reviewer_dJeo" ]
review: The paper considers a very interesting problem on the advertisers' incentive of organic vs ad presence of their items --- I have constantly encountered this kind of problem when searching on google & amazon: an item is promoted while also being a top result organically. The paper formulates this problem as a mechanism design problem, and seeks the truthful mechanism that maximizes a combination of ad revenue and user experience. The paper explicitly constructs several natural mechanisms that approximate the optimal objective under symmetric assumptions. Overall, I would like to recommend acceptance of this paper for the cute problem studied in this paper and its clean solution styles. That said, the paper still has some issues that need to be addressed before acceptance. For example, there are several typos in this paper ("dislayed" in the abstract, "Proof of ???" in Theorem 5.10). These issues make me worry about the other potential issues in its technical proofs, but I unfortunately do not have time to check them so I recommend the authors to carefully self-check these parts in the draft as well. questions: To what extent do you think the proposed mechanism can be applied to the multi-slot case where a ranked list of items (possibly ad or organic) is displayed to the user? ethics_review_flag: No ethics_review_description: n/a scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9M81IjqvPm
Ad vs Organic: Revisiting Incentive Compatible Mechanism Design in E-commerce Platforms
[ "Ningyuan Li", "Yunxuan Ma", "Yang Zhao", "Qian Wang", "Zhilin Zhang", "Chuan Yu", "Jian Xu", "Bo Zheng", "Xiaotie Deng" ]
On typical e-commerce platforms, a product can be dislayed to the user in two possible forms, as an ad item or an organic item. Usually ad and organic items are selected separately by the advertising system and recommendation system, and then combined by a merging mechanism. Although the design of the merging mechanism have been extensively studied, little attention has been paid to a critical situation that arises when the set of candidate ad items and organic items overlap. Despite its common occurrence, this situation is not correctly handled by almost all existing works, potentially causing incentive problems for advertisers and violation of economic constraints. To this end, we revisit the design of the merging mechanism. We identify a necessary property called form stability, and provide simplification results of the mechanism design problem. Moreover, we design simple mechanisms strictly ensuring economic properties such as incentive compatibility, and demonstrate that they are approximately optimal under certain assumptions.
[ "E-Commerce", "Mechanism Design", "Online Advertising", "Competitive Ratio" ]
https://openreview.net/pdf?id=9M81IjqvPm
Us16IjBaIl
decision
1,705,909,208,990
9M81IjqvPm
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: The paper studies the practically well-motivated problem of the incentive issues that arise in merging candidates selected for organic results and sponsored results in e-commerce platforms. There is clear consensus about the importance of carefully studying this. The paper formulates a clean theoretical model of this setting and provides characterizations of truthful mechanisms, which are clean, but not too deep. The approximation results for the simple and nice mechanisms constructed in this work apply to quite a restrictive setting of two bidders, or when bidders are identical, and the results are for the single-slot case. Overall, the results in this paper are worth publishing to hopefully stimulate further study on this topic, even if the paper is not too strong.
9M81IjqvPm
Ad vs Organic: Revisiting Incentive Compatible Mechanism Design in E-commerce Platforms
[ "Ningyuan Li", "Yunxuan Ma", "Yang Zhao", "Qian Wang", "Zhilin Zhang", "Chuan Yu", "Jian Xu", "Bo Zheng", "Xiaotie Deng" ]
On typical e-commerce platforms, a product can be dislayed to the user in two possible forms, as an ad item or an organic item. Usually ad and organic items are selected separately by the advertising system and recommendation system, and then combined by a merging mechanism. Although the design of the merging mechanism have been extensively studied, little attention has been paid to a critical situation that arises when the set of candidate ad items and organic items overlap. Despite its common occurrence, this situation is not correctly handled by almost all existing works, potentially causing incentive problems for advertisers and violation of economic constraints. To this end, we revisit the design of the merging mechanism. We identify a necessary property called form stability, and provide simplification results of the mechanism design problem. Moreover, we design simple mechanisms strictly ensuring economic properties such as incentive compatibility, and demonstrate that they are approximately optimal under certain assumptions.
[ "E-Commerce", "Mechanism Design", "Online Advertising", "Competitive Ratio" ]
https://openreview.net/pdf?id=9M81IjqvPm
Am3f4Crfps
official_review
1,700,825,849,197
9M81IjqvPm
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1932/Reviewer_VU33" ]
review: The paper studies an ad-auction problem, where a product can be displayed to the user either as an organic item or as an ad item. In particular, the authors focus on the single-slot scenario under this setting, and their goal is the design of mechanisms that are truthful, individually rational, and provide good approximation guarantees in terms of revenue and user experience. Among others they characterize the content merging mechanisms in this setting, and also design two truthful mechanisms and analyze theoretically their performance. The introductory sections of the paper, in my opinion, are not that well-written. I have read the intro part many times and I was not able to understand what is the problem that the paper studies or how the problem is motivated (as the description of the real-life application is not clear). The same goes for the preliminaries section and the presentation of the model that is not that formal. All these sections should be restructured and rewritten. Technically the results that the paper presents are not trivial, but on the other hand the approaches that are followed are kind of standard (e.g., adaptations of Myerson's lemma etc.). Overall the paper tries to capture and analyze theoretically some real life problems in ad-auctions, but mostly due to the way that it is written I am not sure if it can be considered for publication. questions: None. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 3 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9M81IjqvPm
Ad vs Organic: Revisiting Incentive Compatible Mechanism Design in E-commerce Platforms
[ "Ningyuan Li", "Yunxuan Ma", "Yang Zhao", "Qian Wang", "Zhilin Zhang", "Chuan Yu", "Jian Xu", "Bo Zheng", "Xiaotie Deng" ]
On typical e-commerce platforms, a product can be dislayed to the user in two possible forms, as an ad item or an organic item. Usually ad and organic items are selected separately by the advertising system and recommendation system, and then combined by a merging mechanism. Although the design of the merging mechanism have been extensively studied, little attention has been paid to a critical situation that arises when the set of candidate ad items and organic items overlap. Despite its common occurrence, this situation is not correctly handled by almost all existing works, potentially causing incentive problems for advertisers and violation of economic constraints. To this end, we revisit the design of the merging mechanism. We identify a necessary property called form stability, and provide simplification results of the mechanism design problem. Moreover, we design simple mechanisms strictly ensuring economic properties such as incentive compatibility, and demonstrate that they are approximately optimal under certain assumptions.
[ "E-Commerce", "Mechanism Design", "Online Advertising", "Competitive Ratio" ]
https://openreview.net/pdf?id=9M81IjqvPm
7c2UBGDu7m
official_review
1,700,586,853,234
9M81IjqvPm
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1932/Reviewer_9yLW" ]
review: **Summary:** The paper discusses ad auctions where organic and paid impressions may overlap. This presents a challenge to the platform that wishes to maximize both revenue and the quality of its recommendation system (user experience), and requires a rethinking of incentive-compatibility for the advertisers. The paper studies the case of a single slot. It characterizes possible mechanisms following Myerson's analysis of the single-item auction, and then presents two content-merging mechanisms and proves their (approximate-)optimality in different cases. **Strengths** - The motivating idea of the paper is very nice, relevant, and grounded in realistic settings. - The analysis seems well-founded, and the suggested mechanisms yield good results. **Weaknesses** - No proof of Lemma 5.5 in the appendix. **Minor comments** - Several undefined references, e.g., line 346, line 381, line 807. - Please include a reference to the 236.90 billion USD statistic. - Line 158: "FIX is 4/5-competitive relative to the optimal objective" - what optimal objective? - Line 160: "and all bid distributions are identical" - and independent? - Line 221: "which is a standard assumption in economics" - refer to Myerson '81 - Line 253: You refer to X_i as the total -probability-, but this is before you make the feasibility requirement (line 285) that guarantees it is indeed a probability. - Line 256: "The incentive compatibility requires" - rephrase. questions: - Line 415: "and this is basically equivalent to the following" - Do you mean that Lemma 4.4 and Lemma 4.5 are equivalent? Unclear. - An important part of Myerson's result is that the highest revenue DSIC mechanism is also the highest revenue BIC mechanism. Is that also the case here? It seems possible that the platform could convince the product owner to pay for an ad in a Bayesian fashion, even when it should be organically displayed, and this may help the objective.
ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9M81IjqvPm
Ad vs Organic: Revisiting Incentive Compatible Mechanism Design in E-commerce Platforms
[ "Ningyuan Li", "Yunxuan Ma", "Yang Zhao", "Qian Wang", "Zhilin Zhang", "Chuan Yu", "Jian Xu", "Bo Zheng", "Xiaotie Deng" ]
On typical e-commerce platforms, a product can be dislayed to the user in two possible forms, as an ad item or an organic item. Usually ad and organic items are selected separately by the advertising system and recommendation system, and then combined by a merging mechanism. Although the design of the merging mechanism have been extensively studied, little attention has been paid to a critical situation that arises when the set of candidate ad items and organic items overlap. Despite its common occurrence, this situation is not correctly handled by almost all existing works, potentially causing incentive problems for advertisers and violation of economic constraints. To this end, we revisit the design of the merging mechanism. We identify a necessary property called form stability, and provide simplification results of the mechanism design problem. Moreover, we design simple mechanisms strictly ensuring economic properties such as incentive compatibility, and demonstrate that they are approximately optimal under certain assumptions.
[ "E-Commerce", "Mechanism Design", "Online Advertising", "Competitive Ratio" ]
https://openreview.net/pdf?id=9M81IjqvPm
62ER618xw5
official_review
1,700,952,537,963
9M81IjqvPm
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1932/Reviewer_zCf5" ]
review: Summary: This paper studies the auction design problem when a product can be displayed in two possible forms of an ad item or an organic item. Building on top of the classic Myerson's lemma, the authors provide characterizations of truthful auctions and demonstrate an interesting property of form stability. The authors further propose two mechanisms, FIX and CHANGE, that are approximately optimal when there are two bidders or bidders are identical. Comments: This paper is generally well-written, clear, and easy to follow. The problem studied in the paper is interesting and relevant to the auction design in online advertising. The characterizations are clean and elegant while the proposed mechanisms are simple but effective. 1. The characterizations are neat but not deep or surprising as they are mostly based on the classic Myerson's lemma. 2. It seems that the authors implicitly restrict their attention to deterministic mechanisms. It would be interesting to see whether results/characterizations can be generalized to randomized mechanisms. 3. More justification is needed for the assumption that the organic result provides a better click-through rate than an ad item. 4. It seems that the proof of Lemma 4.4 depends on assumptions mentioned in 2 and 3 above. Do they continue to hold without these assumptions? 5. Proofs are needed to show that FIX and CHANGE are truthful as mechanisms are combined by taking a max (while usually, mechanisms are combined via randomization, which maintains truthfulness). In particular, it seems that FIX might not be truthful: it looks like it is possible that the winners are the same for all k but the payments are different. 6. The approximation results only apply to restricted settings in which there are two bidders or bidders are identical. The paper would be stronger if the approximation results can be extended to more general settings. questions: Could the authors comment on the questions mentioned in the review above?
ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9D1dlappB8
Air-CAD: Edge-Assisted Multi-Drone Network for Real-time Crowd Anomaly Detection
[ "Yuanzheng Tan", "Qing Li", "Junkun Peng", "Zhenhui Yuan", "Yong Jiang" ]
Drones connected via the web are increasingly being used for crowd anomaly detection (CAD). Existing solutions, however, face many challenges, such as low accuracy and high latency due to drones' dynamic shooting distances and angles as well as limited computing and networking capabilities. In this paper, we propose Air-CAD, an edge-assisted multi-drone network that uses air-ground cooperation to achieve fast and accurate CAD. Air-CAD consists of two stages: person detection and multi-feature analysis. To improve CAD accuracy, Air-CAD dynamically adjusts the inference of person detection model based on drones' shooting distances, and assigns appropriate feature analysis tasks to drones shooting at variable angles. To achieve fast CAD, edge devices connected to drones are deployed to offload assigned feature analysis tasks from drones. Air-CAD schedules the connection between each drone and edge to accelerate processing based on drone's assigned task and the computing/network resources of the edge device. To validate the performance of Air-CAD, we generate a new simulated human stampede dataset captured from various drone-view recordings. We deploy and evaluate Air-CAD in both simulation and real-world testbed. Experimental results show that Air-CAD achieves 95.33% AUROC and real-time inference latency within 0.47 seconds.
[ "Systems and Infrastructure for WoT", "Multi-drone network", "Edge computing", "Crowd anomaly detection" ]
https://openreview.net/pdf?id=9D1dlappB8
gFxCvgzKMB
official_review
1,700,794,049,374
9D1dlappB8
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission299/Reviewer_zdPM" ]
review: This paper proposes Air-CAD, an edge-assisted multi-drone network for crowd anomaly detection. It achieves high accuracy and real-time inference latency. However, the idea of this paper is not novel enough, and it lacks comparisons with state-of-the-art methods. Besides, the authors used the wrong template for the WWW paper. questions: 1. The paper uses the wrong template. 2. In the submitted paper, the authors use model-assisted DQN for the scheduler. Why is DQN selected? Could you clarify the reason for this choice? 3. Besides, could you compare the proposed model-based DQN method with other deep reinforcement learning methods, such as PPO, A3C, etc.? 4. Although the authors have evaluated the performance of the proposed framework in terms of different aspects, no details on the parameter settings are provided in the evaluation section. Could you please provide more details on the evaluation of the proposed Air-CAD framework, such as parameter settings, datasets, etc.? 5. In the ablation experiment of flight conditions awareness, is it reasonable to set the input of flight conditions to zero so that Air-CAD is unaware of the flight conditions? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9D1dlappB8
Air-CAD: Edge-Assisted Multi-Drone Network for Real-time Crowd Anomaly Detection
[ "Yuanzheng Tan", "Qing Li", "Junkun Peng", "Zhenhui Yuan", "Yong Jiang" ]
Drones connected via the web are increasingly being used for crowd anomaly detection (CAD). Existing solutions, however, face many challenges, such as low accuracy and high latency due to drones' dynamic shooting distances and angles as well as limited computing and networking capabilities. In this paper, we propose Air-CAD, an edge-assisted multi-drone network that uses air-ground cooperation to achieve fast and accurate CAD. Air-CAD consists of two stages: person detection and multi-feature analysis. To improve CAD accuracy, Air-CAD dynamically adjusts the inference of person detection model based on drones' shooting distances, and assigns appropriate feature analysis tasks to drones shooting at variable angles. To achieve fast CAD, edge devices connected to drones are deployed to offload assigned feature analysis tasks from drones. Air-CAD schedules the connection between each drone and edge to accelerate processing based on drone's assigned task and the computing/network resources of the edge device. To validate the performance of Air-CAD, we generate a new simulated human stampede dataset captured from various drone-view recordings. We deploy and evaluate Air-CAD in both simulation and real-world testbed. Experimental results show that Air-CAD achieves 95.33% AUROC and real-time inference latency within 0.47 seconds.
[ "Systems and Infrastructure for WoT", "Multi-drone network", "Edge computing", "Crowd anomaly detection" ]
https://openreview.net/pdf?id=9D1dlappB8
e7lQNuHIDw
official_review
1,698,685,697,483
9D1dlappB8
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission299/Reviewer_9ZFj" ]
review: The paper presents Air-CAD, an edge-assisted multi-drone network for crowd anomaly detection (CAD). After investigating the impact of flight conditions on CAD performance as motivation, it presents a design of Air-CAD consisting of person detection and multi-feature analysis. For accurate and fast detection, the design includes primarily three modules: 1) Zoom Detector to dynamically adjust the depth and focus based on the drones' shooting distances for fast and accurate person detection, 2) Feature Scheduler to efficiently offload data and feature analysis tasks to suitable edge devices, and 3) Feature Analyser, a multi-feature anomaly detection algorithm. The system is comprehensively evaluated using both simulation and real-world experiments. The simulation relies on a newly collected dataset, i.e., ArmyStampede, that is synthetic yet large-scale and comprehensive, encompassing direct and indirect crowd anomalies. The proposed system outperforms the selected benchmarks, providing fast (lowest latency) and accurate CAD (highest AUROC). **Pros:** - The paper is well-written and organized, with a logical flow of ideas. - The system design and methodology are sound. - To inspire new architectural stages that improve performance, the authors investigated the performance of general CAD and derived insights for choosing impacting parameters. - The authors introduce a novel approach by combining drone networks with edge computing, supported with novel modules. - The evaluation is comprehensive with suitable metrics, figures, and tables that enhance the clarity of the findings. **Cons:** - While I believe the work is solid and original, it doesn't seem very suitable for submission to the WebConf conference. - AUROC has inherent limitations in imbalanced datasets, with significant disparity between the number of normal and abnormal instances. AUROC can still be high even if the model's performance on the minority class is poor.
In such cases, AUROC is not enough, and other metrics, such as a precision-recall curve or F1 score, would convey a clearer picture of the performance. - Figure 8 shows that Air-CAD performs similarly or slightly better compared to benchmarks in terms of F1 score. This indicates that the dataset is imbalanced. - Some writing issues led to confusion. For instance, the text refers to Figure 13a and Figure 13b, while Figure 13 is entirely missing. Perhaps 14a & 14b instead! In Figure 4, there is a typo. Perhaps "Shooting Parameters" not "Paraments". questions: - "This track solicits novel research contributions describing the construction of systems architecture, and performance related to the Web, and Web-based mobile and ubiquitous computing". What makes this work relevant to this WebConf track? The authors mentioned WoT in the intro. However, it is not clear how Air-CAD could be integrated into WoT. - In Figure 2a, you use AUROC to study the impact of shooting parameters on overall AUROC, while you use accuracy to study the impact of shooting parameters on detection accuracy in Figure 3a. What is the difference and how did you measure the accuracy in Figure 3a? How did you measure accuracy in Figure 11? - What is the percentage of normal and abnormal instances, or direct and indirect anomalies, in the ArmyStampede dataset, and how is it labeled? - Figure 13 is missing. I believe you mean 14a & 14b. - In Figure 4, there is a typo. I believe you mean "Shooting Parameters" not "Paraments" ethics_review_flag: No ethics_review_description: No ethical issues scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 6 technical_quality: 7 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
9D1dlappB8
Air-CAD: Edge-Assisted Multi-Drone Network for Real-time Crowd Anomaly Detection
[ "Yuanzheng Tan", "Qing Li", "Junkun Peng", "Zhenhui Yuan", "Yong Jiang" ]
Drones connected via the web are increasingly being used for crowd anomaly detection (CAD). Existing solutions, however, face many challenges, such as low accuracy and high latency due to drones' dynamic shooting distances and angles as well as limited computing and networking capabilities. In this paper, we propose Air-CAD, an edge-assisted multi-drone network that uses air-ground cooperation to achieve fast and accurate CAD. Air-CAD consists of two stages: person detection and multi-feature analysis. To improve CAD accuracy, Air-CAD dynamically adjusts the inference of person detection model based on drones' shooting distances, and assigns appropriate feature analysis tasks to drones shooting at variable angles. To achieve fast CAD, edge devices connected to drones are deployed to offload assigned feature analysis tasks from drones. Air-CAD schedules the connection between each drone and edge to accelerate processing based on drone's assigned task and the computing/network resources of the edge device. To validate the performance of Air-CAD, we generate a new simulated human stampede dataset captured from various drone-view recordings. We deploy and evaluate Air-CAD in both simulation and real-world testbed. Experimental results show that Air-CAD achieves 95.33% AUROC and real-time inference latency within 0.47 seconds.
[ "Systems and Infrastructure for WoT", "Multi-drone network", "Edge computing", "Crowd anomaly detection" ]
https://openreview.net/pdf?id=9D1dlappB8
TloLAMDic3
official_review
1,700,617,308,880
9D1dlappB8
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission299/Reviewer_YYk3" ]
review: pros: 1.The Air-CAD system has the advantages of high efficiency and real-time. Through air-ground coordination, dynamic adjustment of drone shooting distance and angle, and deployment of edge devices, it achieves high accuracy, rapidity and real-time crowd anomaly detection. At the same time, Air-CAD has high practical value in practical applications. 2.Air-CAD proposes a new dataset (called ArmyStampede) to simulate human panic escape, which is derived from the recording of various drone perspectives, and provides a new experimental verification method for drone crowd anomaly detection. 3.The experiment of the paper is very sufficient. The experimental results show that Air-CAD performs well in both simulated and real environments, achieving 95.33% AUROC and real-time inference delay within 0.47 seconds. cons: 1.In this paper, the security and privacy protection measures of drone network in data transmission, storage and processing are not mentioned. In practical applications, these problems need to be paid close attention to and solved. 2.The paper does not discuss the feasibility and sustainability of Air-CAD in practical applications, such as drone battery life and drone control difficulty. 3.The paper does not fully discuss the applicability of Air-CAD in different scenarios, such as different scale activities, outdoor and indoor environments. questions: 1.During the practical application of the Air-CAD system, it is essential to implement security and privacy protection measures in data transmission, storage, and processing to safeguard against potential risks and maintain user privacy. 2.The paper has the following deficiencies in discussing the feasibility and sustainability of the Air-CAD system: 2.1 Drone battery life: The paper does not discuss how to ensure the battery life of drones during long-term operation.
2.2 Drone control difficulty: The paper does not address how to solve the problem of controlling multiple drones, thus improving operation efficiency and reducing labor costs. 2.3 Cost-effectiveness: The paper does not discuss the cost-effectiveness of the Air-CAD system, including drone, edge device, and operational costs. 2.4 Regulation and policy: Real-world testing of CAD on drones has been limited due to constraints on flight conditions. 3.The paper fails to comprehensively explore the applicability of Air-CAD in various scenarios. For instance, it does not thoroughly discuss how well the system performs in different scale activities. Additionally, the paper does not provide sufficient information on how Air-CAD adapts to outdoor and indoor environments. ethics_review_flag: No ethics_review_description: NO scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9D1dlappB8
Air-CAD: Edge-Assisted Multi-Drone Network for Real-time Crowd Anomaly Detection
[ "Yuanzheng Tan", "Qing Li", "Junkun Peng", "Zhenhui Yuan", "Yong Jiang" ]
Drones connected via the web are increasingly being used for crowd anomaly detection (CAD). Existing solutions, however, face many challenges, such as low accuracy and high latency due to drones' dynamic shooting distances and angles as well as limited computing and networking capabilities. In this paper, we propose Air-CAD, an edge-assisted multi-drone network that uses air-ground cooperation to achieve fast and accurate CAD. Air-CAD consists of two stages: person detection and multi-feature analysis. To improve CAD accuracy, Air-CAD dynamically adjusts the inference of person detection model based on drones' shooting distances, and assigns appropriate feature analysis tasks to drones shooting at variable angles. To achieve fast CAD, edge devices connected to drones are deployed to offload assigned feature analysis tasks from drones. Air-CAD schedules the connection between each drone and edge to accelerate processing based on drone's assigned task and the computing/network resources of the edge device. To validate the performance of Air-CAD, we generate a new simulated human stampede dataset captured from various drone-view recordings. We deploy and evaluate Air-CAD in both simulation and real-world testbed. Experimental results show that Air-CAD achieves 95.33% AUROC and real-time inference latency within 0.47 seconds.
[ "Systems and Infrastructure for WoT", "Multi-drone network", "Edge computing", "Crowd anomaly detection" ]
https://openreview.net/pdf?id=9D1dlappB8
MVdr6voYJ7
official_review
1,701,010,929,664
9D1dlappB8
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission299/Reviewer_qs8i" ]
review: This paper proposes a design and application of an edge network with drones for real-time crowd disaster anomaly detection. The paper focuses on a "zoom detector" for person detection using image processing and a "feature scheduler" for anomaly detection. The paper includes evaluation from a 3D simulator and real-world data collected with a limited setup. Pros: - The paper includes both simulated and real-world experiments for crowd anomaly detection. - The paper considers various engineering techniques in their design of the drone network. Cons: - The application seems a bit poorly motivated. It seems rather an over-engineered solution for the crowd anomaly detection. - The solution may have problems with practical application aspects. - The work does not seem relevant to the Web. Detailed comments below: - The setup of the real-world experiment seems very different than the simulated setup. I am not sure if any crowd disaster could be detected with only 10 participants in a relatively large ground. Real crowd disasters may happen in very different urban setups with buildings or unexpected environments. - The three scenarios seem rather arbitrary although they are taken from a study. Crowd behaviors may be more complex and detection of them might not be as straightforward. - Related work seems rather unsatisfactory. The work related to crowd behavior detection is not only based on using drones. Actually, the drone application seems more on the exotic side compared to existing work on crowd behavior detection. - Although mentioned in the title, abstract and introduction, the paper does not really focus on the networking problems; it rather focuses more on the machine learning computation through image processing. - The motivation is a bit unclear, considering the given background and related work.
For instance, it is not clear why a drone-edge-network is needed for a solution, whereas many cameras are already deployed on the ground (with their possible edge capabilities). It is also not clear how a real solution can be implemented considering existing problems of drones (e.g., battery) and the spontaneity of crowd gatherings. - The solution seems to have practical issues, such as maintaining such an edge network and drones and operating in a real setup. - In addition to the practical application, the ethical concerns would still apply (even though mentioned in the paper). This is mentioned in the paper even for data collection for research purposes. It is hard to imagine when they will be solved and, when they are solved, whether the current engineering solutions (e.g., processing capability) would still be relevant. questions: Although the authors listed various considerations on the engineering solution, I would like to ask a few questions regarding the real application of the solution. - What would be a real application scenario for such a solution? - What would be the cost of an edge network in a real setup and how can it be maintained? - How could the ethical aspects be addressed for the deployment of the solution? ethics_review_flag: No ethics_review_description: NA scope: 1: The work is irrelevant to the Web novelty: 2 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
9D1dlappB8
Air-CAD: Edge-Assisted Multi-Drone Network for Real-time Crowd Anomaly Detection
[ "Yuanzheng Tan", "Qing Li", "Junkun Peng", "Zhenhui Yuan", "Yong Jiang" ]
Drones connected via the web are increasingly being used for crowd anomaly detection (CAD). Existing solutions, however, face many challenges, such as low accuracy and high latency due to drones' dynamic shooting distances and angles as well as limited computing and networking capabilities. In this paper, we propose Air-CAD, an edge-assisted multi-drone network that uses air-ground cooperation to achieve fast and accurate CAD. Air-CAD consists of two stages: person detection and multi-feature analysis. To improve CAD accuracy, Air-CAD dynamically adjusts the inference of person detection model based on drones' shooting distances, and assigns appropriate feature analysis tasks to drones shooting at variable angles. To achieve fast CAD, edge devices connected to drones are deployed to offload assigned feature analysis tasks from drones. Air-CAD schedules the connection between each drone and edge to accelerate processing based on drone's assigned task and the computing/network resources of the edge device. To validate the performance of Air-CAD, we generate a new simulated human stampede dataset captured from various drone-view recordings. We deploy and evaluate Air-CAD in both simulation and real-world testbed. Experimental results show that Air-CAD achieves 95.33% AUROC and real-time inference latency within 0.47 seconds.
[ "Systems and Infrastructure for WoT", "Multi-drone network", "Edge computing", "Crowd anomaly detection" ]
https://openreview.net/pdf?id=9D1dlappB8
DEMUmHBNuR
official_review
1,701,191,833,673
9D1dlappB8
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission299/Reviewer_g4cd" ]
review: Overall, I think this is an interesting work. It reveals interesting observations in the motivational studies and designs corresponding modules based on these observations. The evaluation is based on a generated dataset and a real-world setting. The paper is also well presented. questions: What is the difference between general person detection and crowd anomaly detection? What new challenges can this objective bring to the problem? The reviewer also suggests the authors clearly state the limitations of others in the related work section. ethics_review_flag: No ethics_review_description: n/a scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
9D1dlappB8
Air-CAD: Edge-Assisted Multi-Drone Network for Real-time Crowd Anomaly Detection
[ "Yuanzheng Tan", "Qing Li", "Junkun Peng", "Zhenhui Yuan", "Yong Jiang" ]
Drones connected via the web are increasingly being used for crowd anomaly detection (CAD). Existing solutions, however, face many challenges, such as low accuracy and high latency due to drones' dynamic shooting distances and angles as well as limited computing and networking capabilities. In this paper, we propose Air-CAD, an edge-assisted multi-drone network that uses air-ground cooperation to achieve fast and accurate CAD. Air-CAD consists of two stages: person detection and multi-feature analysis. To improve CAD accuracy, Air-CAD dynamically adjusts the inference of person detection model based on drones' shooting distances, and assigns appropriate feature analysis tasks to drones shooting at variable angles. To achieve fast CAD, edge devices connected to drones are deployed to offload assigned feature analysis tasks from drones. Air-CAD schedules the connection between each drone and edge to accelerate processing based on drone's assigned task and the computing/network resources of the edge device. To validate the performance of Air-CAD, we generate a new simulated human stampede dataset captured from various drone-view recordings. We deploy and evaluate Air-CAD in both simulation and real-world testbed. Experimental results show that Air-CAD achieves 95.33% AUROC and real-time inference latency within 0.47 seconds.
[ "Systems and Infrastructure for WoT", "Multi-drone network", "Edge computing", "Crowd anomaly detection" ]
https://openreview.net/pdf?id=9D1dlappB8
2ZLvxFEEaf
decision
1,705,909,238,202
9D1dlappB8
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: Overall, the paper presents a research work on an edge-assisted multi-drone network for crowd anomaly detection. I summarise the pros and cons from the reviewers as follows. Pros: Interesting observations and corresponding design modules based on those observations. Evaluation based on both generated dataset and real-world setting. Well-presented paper. Includes both simulated and real-world experiments. Considers various engineering techniques in the design. Achieves high efficiency and real-time crowd anomaly detection. New dataset for experimental verification. Sufficient experiments with good results. Well-written and organized paper. Sound system design and methodology. Novel approach with drone networks and edge computing. Comprehensive evaluation with suitable metrics. Cons: Lack of clear limitations of related work. Application seems over-engineered for crowd anomaly detection. Potential issues with practical application. Not relevant to the Web. Lack of novelty and comparisons with state-of-the-art methods. Use of wrong template. Missing details on parameter settings and datasets. Questionable input setting in the ablation experiment. No discussion on security and privacy protection measures. No discussion on feasibility and sustainability in practical applications. No discussion on applicability in different scenarios. Not suitable for submission to the WebConf conference. Limitations of AUROC in imbalanced datasets. Confusion in figure references and typos in the text. The paper has balanced pros and cons. There is an issue, pointed out by the reviewers, about the paper's relevance to the Web conference.
8oczaP1YKD
SPRING: Improving the Throughput of Sharding Blockchain via Deep Reinforcement Learning Based State Placement
[ "Pengze Li", "Mingxuan Song", "Mingzhe Xing", "Zhen Xiao", "QIUYU DING", "Shengjie Guan", "Jieyi Long" ]
Sharding provides an opportunity to overcome the inherent scalability challenges of the blockchain. In a sharding blockchain, the state, and computation are partitioned into smaller groups, known as "shards," to facilitate parallel transaction processing and improve throughput. However, since the states are placed on different shards, cross-shard transactions are inevitable, which is detrimental to the performance of the sharding blockchain. Existing sharding solutions place states based on heuristic algorithms or redistribute states via graph-partitioning-based methods, which are either less effective or costly. In this paper, we present Spring, the first deep-reinforcement-learning(DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process, which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Experimental results based on real Ethereum transaction data demonstrate the superiority of Spring compared to other state placement solutions. In particular, it decreases the cross-shard transaction ratio by up to 26.63% and boosts throughput by up to 36.03%, all without unduly sacrificing the workload balance among shards. Moreover, updating the training model and making decisions takes only 0.1s and 0.002s, respectively, which shows the overhead introduced by Spring is acceptable.
[ "blockchain", "sharding", "reinforcement learning", "scalability" ]
https://openreview.net/pdf?id=8oczaP1YKD
fkeXDhDIT4
official_review
1,700,569,448,627
8oczaP1YKD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission423/Reviewer_gtQj" ]
review: This article proposes Spring, a blockchain sharding framework based on deep reinforcement learning. This paper models the state placement of a sharding blockchain as a Markov model and provides a solution to reduce cross-shard transactions using deep reinforcement learning. Evaluation is conducted on the historical dataset of Ethereum. The results show that Spring can reduce the cross-shard transaction ratio by about 26%, with a small computational overhead. Pros: + The state placement problem of sharding blockchains is a very interesting topic that can have a significant impact on the actual operation of blockchains. + Compared to the work of SkyChain and others, Spring's experiments are based on real Ethereum historical datasets. + Writing is good. The author's writing looks very smooth. Cons: + While I believe there are some novel features in Spring, at the end of reading the paper I am not completely sure it surpasses existing literature (i.e. SkyChain, mostly) in a way that warrants a top conference publication. + Compared to SkyChain, Spring considers load balancing and ST characteristics between different shards when establishing a reward mechanism for reinforcement learning. Please explain why Spring has made significant improvements in CSTR compared to SkyChain in Figure 7. Is it because Spring utilizes historical data? Is it because there are many duplicate input data in Ethereum's real data? + The authors also mentioned in the article that there is a trade-off between the cross-shard ratio and workload balance. According to Figure 8, Spring's load balancing seems to be inferior to SkyChain's. I think Spring sacrifices workload balance to improve CSTR. Both of these seem to have an impact on throughput. So, why does Spring with poor load balance and high CSTR outperform other solutions in terms of throughput? + Spring uses deep reinforcement learning for state placement, but the text does not seem to mention where reinforcement learning should run.
If running on a blockchain using smart contracts, does the smart contract support the cost of deep reinforcement learning? As far as I know, Ethereum smart contracts have gas limits that seem insufficient to support retraining for deep reinforcement learning. If it is executed offline, then reinforcement learning is executed on each agent node, and then consensus is required? + The authors introduced λ in the reward to balance the impact of workload balance and CSTR on the reward. What impact will this hyperparameter have on throughput and CSTR? Please explain how the hyperparameter is set. + The font of the images is too small (such as the legends in Figures 5 and 6), and it will definitely not be clear when printed. questions: + Please explain the impact of load balancing and CSTR on throughput. + Please explain why Spring outperforms other solutions in terms of the CSTR metric. + Please explain the operational location of deep reinforcement learning. + Please explain the impact of the λ selection in the reward on throughput. ethics_review_flag: No ethics_review_description: No scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
8oczaP1YKD
SPRING: Improving the Throughput of Sharding Blockchain via Deep Reinforcement Learning Based State Placement
[ "Pengze Li", "Mingxuan Song", "Mingzhe Xing", "Zhen Xiao", "QIUYU DING", "Shengjie Guan", "Jieyi Long" ]
Sharding provides an opportunity to overcome the inherent scalability challenges of the blockchain. In a sharding blockchain, the state, and computation are partitioned into smaller groups, known as "shards," to facilitate parallel transaction processing and improve throughput. However, since the states are placed on different shards, cross-shard transactions are inevitable, which is detrimental to the performance of the sharding blockchain. Existing sharding solutions place states based on heuristic algorithms or redistribute states via graph-partitioning-based methods, which are either less effective or costly. In this paper, we present Spring, the first deep-reinforcement-learning(DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process, which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Experimental results based on real Ethereum transaction data demonstrate the superiority of Spring compared to other state placement solutions. In particular, it decreases the cross-shard transaction ratio by up to 26.63% and boosts throughput by up to 36.03%, all without unduly sacrificing the workload balance among shards. Moreover, updating the training model and making decisions takes only 0.1s and 0.002s, respectively, which shows the overhead introduced by Spring is acceptable.
[ "blockchain", "sharding", "reinforcement learning", "scalability" ]
https://openreview.net/pdf?id=8oczaP1YKD
cy2nNpgszd
official_review
1,699,272,961,353
8oczaP1YKD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission423/Reviewer_TgcR" ]
review: ### Summary: The paper looks into the challenge of reducing the overhead of cross-shard transactions by applying specific modeling to better exploit reinforcement learning-based coordination approaches. By tapping into spatial-temporal information, the proposed approach SPRING tries to address the weakness (unutilized potential) of other approaches in the area. The paper also features an extensive evaluation, which also incorporates real-world data, to detail how SPRING outperforms related approaches. ### Pros: +1: The evaluation section is quite extensive and considers related work as well as real-world transaction data. +2: The paper is well written and nicely presents the approach and findings. ### Cons: -1: The difference between SPRING and SpringChain is not clearly presented in the paper. -2: Certain aspects of the evaluation could be improved to further strengthen the paper's contributions. In my view, the paper is in good shape, which allows the reader to easily follow the presentation, the conducted experiments, and the drawn conclusions. The ratio of CSTs is still high, but the approach appears to be the best that we have these days. I also like how the authors manage to implicitly embed motivation and research gap in Sections 2.2 and 2.4. As I will detail below, from my point of view, a few minor issues impair the paper's quality, but nothing too critical. ### Detailed Comments: #### -1: SPRING vs. SpringChain In the current version, the term "SpringChain" simply pops up at the beginning of Section 3. Since the paper has no paper organization and the relation between SPRING and SpringChain is not discussed either, this aspect introduces a little bit of confusion. I would like the authors to make more explicit what the differences are, why they are needed, and what implications this separation has. From my understanding, the separation is mostly between Sections 3 and 4. Is this conclusion correct? Finally, the writing style of SPRING is inconsistent. 
Sometimes, it is capitalized, but most of the time, it is not. To add a certain recognition value, I recommend the authors to always use the capitalized form. Moreover, this change can help to better separate it from the term/concept SpringChain. #### -2: Evaluation The evaluation is already quite extensive, and most aspects are well argued, also in terms of which evaluations have been conducted and how the evaluation parameters have been chosen. Regardless, a few minor improvements remain, in my opinion. First, I would like to know more about the hyperparameter settings that have been selected and whether they are universally applicable. This information is only briefly discussed in the appendix. Second, the overhead evaluation/discussion is rather brief. To what do the individual processing times accumulate in real-world settings? What is the frequency of these actions/processing steps? Additionally, the paper could better stress which parties are affected by the respective overhead. This information is not given at the moment. I believe that being more precise in this part of the evaluation would improve the paper. Third, I am not able to follow how the real transaction data, or more precisely, the addresses, are split across shards. To my understanding, this behavior is not directly related to the state, is it? I doubt that this will have a significant impact on the results, but I would still like to know how the authors deal with this aspect. Finally, the performance of SPRING is not compared to CBDS, which has also been introduced in the related work section (jointly with SkyChain). Without a closer look, I cannot find a reason for this decision. Why did the authors decide to not compare SPRING to CBDS? #### Other: - The current presentation has few pointers to the web. Hence, the paper could stress more explicitly why it is relevant for the conference's and track's topics. 
- In Section 3.3, why are the proposer and other nodes highlighted as part of the commit step? Is my assumption that this step covers all nodes incorrect? - I am a little bit surprised that most of the baselines (especially Monoxide) have not been introduced in the related work section. Why has Monoxide not been presented in the related work section? - The side note on 9 addresses being responsible for 74% of the transactions is interesting. Wow, I did not know that. ### Nits: - Figure 1: The white text on yellow color is hard to read, especially on a printout. - Section 3.2: The writing style of A-Shard and T-Shard is inconsistent. - Section 3.2: Is "CSTS" correct, or should the last "s" rather be lowercase? - Section 3.2: "who designates" should probably be "which designates". - Section 4: "Appendix.B" has a period, which should be removed. - Section 5: The spacing in "Fig.X" seems to be missing repeatedly. - Section 5.3: Adding a reference to the initial statement of this section or at least a pointer to the related work section could be beneficial. - Figures 4-7: The figure sorting is not in order of appearance, but I also have no better idea at this point how to address this nit. - Figure 8: I suggest including the year of the underlying evaluation data in the figure caption as well. - Section 5.4: "Table.1" has a period, which should be removed. - Quite a few times throughout the paper, a space before the citation marker is missing, e.g., "analysis[8,45]". ### Post-Rebuttal I kindly thank the authors for responding to the reviews. While the response helps to clarify a lot of aspects, ~~it fails to convey whether changes will be made to the original manuscript~~ (except for my query on SPRING vs. SpringChain). This approach makes it challenging to assess whether a revised version could convincingly resolve all presentation issues of the paper. Moreover, the situation is amplified when considering the breadth of the feedback raised by all reviewers. 
*Update:* Thank you for providing more details on your revision plan, which is helpful to estimate the planned changes. Unfortunately, the plan only lists a few omissions, making it challenging to really incorporate all changes in a convincing way within the page limits. Personally, I believe that the paper would benefit from another round of reviews once the outlined changes have been made. questions: What is the reason for not considering CBDS as a baseline in the evaluation? ethics_review_flag: No ethics_review_description: n/a scope: 2: The connection to the Web is incidental, e.g., use of Web data or API novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
8oczaP1YKD
SPRING: Improving the Throughput of Sharding Blockchain via Deep Reinforcement Learning Based State Placement
[ "Pengze Li", "Mingxuan Song", "Mingzhe Xing", "Zhen Xiao", "QIUYU DING", "Shengjie Guan", "Jieyi Long" ]
Sharding provides an opportunity to overcome the inherent scalability challenges of the blockchain. In a sharding blockchain, the state, and computation are partitioned into smaller groups, known as "shards," to facilitate parallel transaction processing and improve throughput. However, since the states are placed on different shards, cross-shard transactions are inevitable, which is detrimental to the performance of the sharding blockchain. Existing sharding solutions place states based on heuristic algorithms or redistribute states via graph-partitioning-based methods, which are either less effective or costly. In this paper, we present Spring, the first deep-reinforcement-learning(DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process, which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Experimental results based on real Ethereum transaction data demonstrate the superiority of Spring compared to other state placement solutions. In particular, it decreases the cross-shard transaction ratio by up to 26.63% and boosts throughput by up to 36.03%, all without unduly sacrificing the workload balance among shards. Moreover, updating the training model and making decisions takes only 0.1s and 0.002s, respectively, which shows the overhead introduced by Spring is acceptable.
[ "blockchain", "sharding", "reinforcement learning", "scalability" ]
https://openreview.net/pdf?id=8oczaP1YKD
Y941WhW6ai
official_review
1,700,645,601,109
8oczaP1YKD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission423/Reviewer_Vhz8" ]
review: The paper at hand proposes a protocol to improve the transaction throughput of sharding blockchains by systematically reducing the number of costly transactions between shards using deep reinforcement learning. Namely, a network of agent nodes maintains a model for assigning newly observed addresses to a shard based on the blockchain's recent history. Overall, the authors' goal is intuitive and the chosen approach seems sensible, but the paper raises some important but unanswered questions regarding the underlying (effective) scenario and some of the design choices (see Questions section). Furthermore, I believe that an extension of the relevant background on deep reinforcement learning and previous approaches, while keeping the information condensed, would help a broader audience to better understand the concepts and approaches. Similarly, the authors should ensure that the notation and terminology are used consistently and easy to follow. Crucially, the deep-learning approach presented in Section 4 seems to have notable flaws as far as I am concerned: - The components of the reward function $r_{t}$, $r_{cstr}$ and $r_{wlb}$, have different effective domains that the weighting parameter $\lambda$ cannot capture. Namely, $r_{cstr}$ can grow (theoretically) to infinity, whereas $r_{wlb}$ lies between 1.0 and 0.0. - In a performance-wise ideal scenario, there would be no CSTs at all, leading to a hypothetical division by zero for $r_{cstr}$. - In fact, the authors assume a low ratio of CSTs (increasing $r_{cstr}$). However, this assumption stands in stark contrast to their observation in Section 5.4.1, indicating that current approaches suffer from approximately 94% of all transactions being CSTs. *Suggested improvements:* - The introduction already dives into a detailed technical discussion of sharding blockchains. 
I suggest keeping this discussion as concise as possible to still support the motivation, but move details to Section 2.2 to have all background information in one place. - Make more clear what the *state* is in a sharding blockchain, where it is located, and what implications shuffling nodes between committees has on transferring the required state - Further, please make the distinction between the state of the RL-model and the blockchain's state (addresses of users and smart contracts) as early as possible and ensure to avoid ambiguities. In the presented manuscript, this aspect especially is a source for confusion. - Section 3.2 already discusses performance overheads. I suggest limiting this aspect to a mere intuition at this point and move the detailed discussion to Section 5. - Section 4 relies on unintuitive notation, as $num\textunderscore tx_{11}$, for example, could be misinterpreted. Possible alternatives are $num\textunderscore tx_{1,1}$ or $num\textunderscore tx_{1}^{i}$. *Minor issues:* - Spaces are sometimes missing when using references or citations, or introducing an acronym. - There is an inconsistency in capitalizing Spring/SPRING in the title and body of the paper. - In Section 2.3, the layout of the state graph used by BrokerChain remains unclear without referring to [15, Section III-B], and it is also unclear whether these details are needed going forward; however, this issue could also relate to the ambiguous use of "state" (see above). - In Section 3.1, $f = 1/3$ should read $f < 1/3$. - Minor inconsistency: In Section 3.3, PBFT is being used as an "example," but Section 3.2 already fixed the usage of PBFT as a design choice. - Furthermore, it is only implicit that the pre-prepare, prepare, etc. messages are part of PBFT and not Spring. - In Section 5.3, there seems to be an artefact of an older version of the sentence referencing Figure 4. Update: I acknowledge that I have read the authors' rebuttal comments. 
questions: **Scenario-related:** - What are the implicit underlying payment patterns assumed? While Section 3.1 mentions some assumptions, I have two unaddressed concerns/questions: 1. How stable must the transaction flows be to even have a chance to build even a short-term model in the general case? I assume that stable, reoccurring patterns lend themselves to the authors' approach, but I would expect more seemingly random, one-off payments to decrease the effectiveness of Spring. 2. Similarly, what is the *impact* of heavy hitters, i.e., few addresses that occur in many transactions, such as exchange services? Section 5.4.2 discusses that heavy hitters indeed exist, but does that not imply that such heavy hitters become more likely to be involved with transactions from every shard regardless? This question probably boils down to the following: Of the 94% reported cross-shard transactions, what is the theoretical baseline for achievable reduction given the presence of heavy hitters? - Regarding Section 5.4: Isn't it to be expected that Random, Shard Scheduler, and Monoxide behave very similarly with respect to CSTs? None of the strategies takes the payment flows into account and thus I would have expected that they all show random behavior in these experiments. **Design choices:** - I had a hard time grasping the DRL state layout proposed in Section 4: - Does $s$ cover a global state or does it encode a single update based on one transaction? The overall state layout seems to imply the former, but then I do not understand how the flag $f$ works. - What is the intuition behind $sender\_pos_{i}$? I did not understand the provided explanation. 
ethics_review_flag: No ethics_review_description: - scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
8oczaP1YKD
SPRING: Improving the Throughput of Sharding Blockchain via Deep Reinforcement Learning Based State Placement
[ "Pengze Li", "Mingxuan Song", "Mingzhe Xing", "Zhen Xiao", "QIUYU DING", "Shengjie Guan", "Jieyi Long" ]
Sharding provides an opportunity to overcome the inherent scalability challenges of the blockchain. In a sharding blockchain, the state, and computation are partitioned into smaller groups, known as "shards," to facilitate parallel transaction processing and improve throughput. However, since the states are placed on different shards, cross-shard transactions are inevitable, which is detrimental to the performance of the sharding blockchain. Existing sharding solutions place states based on heuristic algorithms or redistribute states via graph-partitioning-based methods, which are either less effective or costly. In this paper, we present Spring, the first deep-reinforcement-learning(DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process, which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Experimental results based on real Ethereum transaction data demonstrate the superiority of Spring compared to other state placement solutions. In particular, it decreases the cross-shard transaction ratio by up to 26.63% and boosts throughput by up to 36.03%, all without unduly sacrificing the workload balance among shards. Moreover, updating the training model and making decisions takes only 0.1s and 0.002s, respectively, which shows the overhead introduced by Spring is acceptable.
[ "blockchain", "sharding", "reinforcement learning", "scalability" ]
https://openreview.net/pdf?id=8oczaP1YKD
KJOCLxgCBW
official_review
1,700,371,240,726
8oczaP1YKD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission423/Reviewer_sqUb" ]
review: The authors present Spring, a deep-reinforcement-learning (DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Pros: 1. This paper proposes a practical method. 2. The authors conducted detailed experiments. Cons: 1. This paper only discusses two related papers that use RL in sharding blockchains. This paper lacks discussion and comparison of some important papers, such as "DQN-Based Optimization Framework for Secure Sharded Blockchain Systems" and "Sharding for Blockchain based Mobile Edge Computing System: A Deep Reinforcement Learning Approach". 2. In Section 5.2, the authors claim that "updating the training model (UTM) costs about 0.1 seconds". It is necessary to explain under which experimental parameters this result is obtained. 3. Some minor issues: a) The font in the figures is too small. b) Insufficient description of the dataset in Section 5.1. questions: 1. Discussion and comparison of more relevant papers. ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 4 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
8oczaP1YKD
SPRING: Improving the Throughput of Sharding Blockchain via Deep Reinforcement Learning Based State Placement
[ "Pengze Li", "Mingxuan Song", "Mingzhe Xing", "Zhen Xiao", "QIUYU DING", "Shengjie Guan", "Jieyi Long" ]
Sharding provides an opportunity to overcome the inherent scalability challenges of the blockchain. In a sharding blockchain, the state, and computation are partitioned into smaller groups, known as "shards," to facilitate parallel transaction processing and improve throughput. However, since the states are placed on different shards, cross-shard transactions are inevitable, which is detrimental to the performance of the sharding blockchain. Existing sharding solutions place states based on heuristic algorithms or redistribute states via graph-partitioning-based methods, which are either less effective or costly. In this paper, we present Spring, the first deep-reinforcement-learning(DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process, which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Experimental results based on real Ethereum transaction data demonstrate the superiority of Spring compared to other state placement solutions. In particular, it decreases the cross-shard transaction ratio by up to 26.63% and boosts throughput by up to 36.03%, all without unduly sacrificing the workload balance among shards. Moreover, updating the training model and making decisions takes only 0.1s and 0.002s, respectively, which shows the overhead introduced by Spring is acceptable.
[ "blockchain", "sharding", "reinforcement learning", "scalability" ]
https://openreview.net/pdf?id=8oczaP1YKD
9ozbbU1xMJ
decision
1,705,909,238,356
8oczaP1YKD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: The paper received 5 reviews. One was negative leaning; the others were positive leaning. The authors engaged with the reviewers during the rebuttal phase and addressed several concerns. I received final recommendations from 3 reviewers, which were borderline, weak accept, and accept. Based on these recommendations, the reviews and the discussions with the authors, I recommend a weak accept. The paper has strong merits, but a few issues prevent an unequivocal positive recommendation. The main issue is the technical novelty given the existence of the referenced SkyChain work, as well as clarity around the DRL approach and how it fits within the framework.
8oczaP1YKD
SPRING: Improving the Throughput of Sharding Blockchain via Deep Reinforcement Learning Based State Placement
[ "Pengze Li", "Mingxuan Song", "Mingzhe Xing", "Zhen Xiao", "QIUYU DING", "Shengjie Guan", "Jieyi Long" ]
Sharding provides an opportunity to overcome the inherent scalability challenges of the blockchain. In a sharding blockchain, the state, and computation are partitioned into smaller groups, known as "shards," to facilitate parallel transaction processing and improve throughput. However, since the states are placed on different shards, cross-shard transactions are inevitable, which is detrimental to the performance of the sharding blockchain. Existing sharding solutions place states based on heuristic algorithms or redistribute states via graph-partitioning-based methods, which are either less effective or costly. In this paper, we present Spring, the first deep-reinforcement-learning(DRL)-based sharding framework for state placement. Spring formulates the state placement as a Markov Decision Process, which takes into consideration the cross-shard transaction ratio and workload balancing, and employs DRL to learn the effective state placement policy. Experimental results based on real Ethereum transaction data demonstrate the superiority of Spring compared to other state placement solutions. In particular, it decreases the cross-shard transaction ratio by up to 26.63% and boosts throughput by up to 36.03%, all without unduly sacrificing the workload balance among shards. Moreover, updating the training model and making decisions takes only 0.1s and 0.002s, respectively, which shows the overhead introduced by Spring is acceptable.
[ "blockchain", "sharding", "reinforcement learning", "scalability" ]
https://openreview.net/pdf?id=8oczaP1YKD
4lNm6xW4V3
official_review
1,699,965,929,700
8oczaP1YKD
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission423/Reviewer_f3Ry" ]
review: **Paper summary** This paper targets the scalability issue in the state placement in sharding blockchains. It develops Spring, which takes the cross-shard transaction ratio and workload balancing into consideration, and then uses deep reinforcement learning to infer state placement policy. SPRING is applied in sharding blockchains, and evaluated using real Ethereum transaction data in 2015, 2019, and 2023. Experimental results demonstrate that it performs better than four baselines in terms of throughput and reduction of CST ratio. **Strengths** + The formulation of the state placement problem as an MDP is a reasonable modeling choice, aligning with the sequential nature of blockchain transactions. + The paper uses real Ethereum transaction data for experimental evaluation. + The paper is well-structured. **Weaknesses** - Novelty and new technical contributions seem lacking. Given the solution of SkyChain, it is a bit difficult to justify the novelty and the new technical contributions provided by this paper. - Some technical details about the DRL process should be made clear in the experimental settings, such as the model architecture, training parameters, and hyperparameter tuning. **Detailed comments** Spring proposes to use DRL to address the problem of state placement in sharding blockchains. It formulates the problem as an MDP, so that DRL can well handle the sequential nature of blockchain transactions. Below I mainly elaborate on the weaknesses listed above. *Novelty and innovations* Despite the common limitations discussed in Section 2.4, the novelty and new technical contributions of Spring haven't been adequately justified, especially when compared to the existing solution SkyChain. I suggest the paper include a detailed and explicit discussion on the unique technical aspects that differentiate Spring from SkyChain. In the current writing, it is a bit challenging to identify the innovation and inspiring contributions of Spring. 
*Technical details* The paper should also provide more technical details regarding the architecture of the DRL model, training methodologies, and hyperparameter tuning. This will enhance its reproducibility. How is Ethereum transaction data embedded with BlockEmulator? Or is it just used for training? There should be a benchmarking on the reproduced SkyChain, to ensure the fidelity of its re-implementation. Figures 5 and 6 show that Spring's performance on the data of 2015, 2019 and 2023 differs. What causes these differences? questions: See my review above ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
8fw7fmQunO
Hierarchical Position Embedding of Graphs with Landmarks and Clustering for Link Prediction
[ "Minsang Kim", "Seung Jun Baek" ]
Learning positional information of nodes in a graph is important for link prediction tasks. We propose a representation of positional information using representative nodes called landmarks. A small number of nodes with high degree centrality are selected as landmarks, which serve as reference points for the nodes' positions. We justify this selection strategy for well-known random graph models, and derive closed-form bounds on the average path lengths involving landmarks. In a model for scale-free networks, we prove that landmarks provide asymptotically exact information on inter-node distances. We apply theoretical insights to practical networks, and propose Hierarchical Position embedding with Landmarks and Clustering (HPLC). HPLC combines graph clustering and landmark selection, where the graph is partitioned into densely connected clusters in which nodes with the highest degree are selected as landmarks. HPLC leverages the positional information of nodes based on landmarks at various levels of hierarchy such as nodes' distances to landmarks, inter-landmark distances and hierarchical grouping of clusters. Experiments show that HPLC achieves state-of-the-art performances of link prediction on various datasets in terms of HIT@K, MRR, and AUC.
[ "Link Prediction", "Network Science", "Graph Neural Networks" ]
https://openreview.net/pdf?id=8fw7fmQunO
y8s7YRlz6d
official_review
1,700,208,020,550
8fw7fmQunO
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission351/Reviewer_12xJ" ]
review: This research presents a method for representing positional information through the use of representative nodes known as landmarks. The researchers delve into combining landmark selection with graph clustering, leveraging the positional data of nodes in order to boost performance levels in link prediction. They select landmarks and organize the graph in a principled way, unlike previous methods using random selection. **Strengths:** 1. The authors employ a principled approach to selecting landmarks, specifically choosing nodes with high degree instead of random selection. Furthermore, they provide a theoretical justification for this method. 2. Extensive experimental analysis on 8 datasets demonstrates positive outcomes, indicating the potential applicability of the proposed method in link prediction tasks. **Weaknesses:** 1. Reproducibility issue. The source code has not been shared. 2. The implementation details of the position encoder are not well described. Please provide more details of the position encoder. 3. The experimental analysis of the ablation experiment is less clear. Please provide relevant explanations and details of the ablation experiment. questions: See the weaknesses part. ethics_review_flag: No ethics_review_description: None scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
8fw7fmQunO
Hierarchical Position Embedding of Graphs with Landmarks and Clustering for Link Prediction
[ "Minsang Kim", "Seung Jun Baek" ]
Learning positional information of nodes in a graph is important for link prediction tasks. We propose a representation of positional information using representative nodes called landmarks. A small number of nodes with high degree centrality are selected as landmarks, which serve as reference points for the nodes' positions. We justify this selection strategy for well-known random graph models, and derive closed-form bounds on the average path lengths involving landmarks. In a model for scale-free networks, we prove that landmarks provide asymptotically exact information on inter-node distances. We apply theoretical insights to practical networks, and propose Hierarchical Position embedding with Landmarks and Clustering (HPLC). HPLC combines graph clustering and landmark selection, where the graph is partitioned into densely connected clusters in which nodes with the highest degree are selected as landmarks. HPLC leverages the positional information of nodes based on landmarks at various levels of hierarchy such as nodes' distances to landmarks, inter-landmark distances and hierarchical grouping of clusters. Experiments show that HPLC achieves state-of-the-art performances of link prediction on various datasets in terms of HIT@K, MRR, and AUC.
[ "Link Prediction", "Network Science", "Graph Neural Networks" ]
https://openreview.net/pdf?id=8fw7fmQunO
oonQUDiNWq
decision
1,705,909,210,445
8fw7fmQunO
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: The paper makes a solid contribution in the positional embedding of graphs using landmarks. The authors have responded to reviewer comments in an exhaustive manner.
8fw7fmQunO
Hierarchical Position Embedding of Graphs with Landmarks and Clustering for Link Prediction
[ "Minsang Kim", "Seung Jun Baek" ]
Learning positional information of nodes in a graph is important for link prediction tasks. We propose a representation of positional information using representative nodes called landmarks. A small number of nodes with high degree centrality are selected as landmarks, which serve as reference points for the nodes' positions. We justify this selection strategy for well-known random graph models, and derive closed-form bounds on the average path lengths involving landmarks. In a model for scale-free networks, we prove that landmarks provide asymptotically exact information on inter-node distances. We apply theoretical insights to practical networks, and propose Hierarchical Position embedding with Landmarks and Clustering (HPLC). HPLC combines graph clustering and landmark selection, where the graph is partitioned into densely connected clusters in which nodes with the highest degree are selected as landmarks. HPLC leverages the positional information of nodes based on landmarks at various levels of hierarchy such as nodes' distances to landmarks, inter-landmark distances and hierarchical grouping of clusters. Experiments show that HPLC achieves state-of-the-art performances of link prediction on various datasets in terms of HIT@K, MRR, and AUC.
[ "Link Prediction", "Network Science", "Graph Neural Networks" ]
https://openreview.net/pdf?id=8fw7fmQunO
dEHvWLuhZm
official_review
1,701,407,147,550
8fw7fmQunO
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission351/Reviewer_eTou" ]
review: The authors consider the task of link prediction using graph neural networks (GNNs). It has recently been proposed to use positional information in the node embedding in GNNs. The authors propose a scalable way to do so using a cluster + landmark strategy. They also provide an analysis on landmark selection in random graph models that is complementary to the positional embedding contribution. They demonstrate strong empirical results for link prediction tasks. *Note:* I have reviewed a previous version of this paper for a different venue and used my previous review as a starting point for this review. I have updated the relevant portions of the review to account for changes made by the authors since the previous submission. *After author rebuttal:* My opinion on this paper hasn't changed much. There's nothing particularly wrong with it. As the authors point out, ad-hoc innovations leading to better performance are indeed a valuable contribution. There's also nothing that particularly excites me about it, but it is solid research that I would hope to see published in a reputable venue. ## Strengths - Interesting analysis on landmark selection in both the Erdős-Rényi and Barabási-Albert (B-A) models that provides insights on how to choose landmarks on such Poisson and scale-free networks, respectively. These results could be of independent interest to the network science community. - Many small innovations leading to strong empirical performance on a variety of data sets compared to lots of other methods. Ablation studies are also provided to justify the need for the different innovations. ## Weaknesses - While the analysis in Section 2 is rigorous and principled, a lot of the proposed innovations in Section 3 are very much ad-hoc with lots of choices of hyperparameters. 
While I still consider this a weakness, this area has improved from the previous submission, where the authors now try to connect some of the design decisions in Section 3 with the analysis in Section 2, e.g., in the paragraph labeled "Effect of Clustering on Landmarks". ### Minor presentation issues: - Equations (1) and (2) come after equation (3) in the paper. - Possible error in bolding in Table 7: $\eta = 4$ with COLLAB looks like it has higher accuracy than for $\eta = 5$. questions: 1. In most of the 6 data sets in Table 7, the accuracy looks to increase with $\eta$. Does this trend continue in general? Is there a reason to stop increasing $\eta$ aside from increasing computation time and memory requirement? ethics_review_flag: No ethics_review_description: No concerns scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
aNnTR7trXH
official_review
1,701,043,275,077
8fw7fmQunO
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission351/Reviewer_9ma2" ]
review: Message passing GNNs are known to be as powerful as 1-WL. Hence, there are cases where they cannot distinguish between non-isomorphic graphs. In this work, each node is enhanced with distance information with respect to a set of landmark nodes that are chosen as high-degree nodes. The idea of enhancing node information with position or distance information is not new; however, the approach of using landmark nodes and an embedding of landmarks as cluster centroids is novel. The proofs are modified versions of [17], adapted to the landmark setting. Pros: * The paper is in general well written and easy to follow. * The results regarding E-R and B-A are not surprising, but act as a sanity check and inspiration for the applicability of this method. * The results on link prediction outperform a wide variety of traditional link prediction baselines, as well as more recent GNN-based approaches. * There is substantial experimental verification of most claims in this work (theoretical, ablation, etc.). Cons: * The exact setting for link prediction is not clearly described. While a subset of edges is chosen for training, the number of negative samples (non-edges) is not specified. More precisely, how do the authors choose the negative samples: is it the whole set of non-edges (in which case the dataset is highly imbalanced) or a subset of them? Moreover, different measures are used to evaluate the methods for each dataset, making the comparison rather confusing. * There is prior work on landmark selection that is not discussed, some with theoretical results -- mostly related to the NP-completeness of choosing landmarks for general graphs. (E.g., Potamias et al., "Fast shortest path distance estimation in large networks" & Zhao et al., "Orion: Shortest Path Estimation for Large Social Graphs"). * The evaluation is only performed for link prediction, while the method could easily be applied to node classification. 
* The proof in A1 relies on a lemma that applies to mutually independent events; however, the events under consideration are not independent. There is a discussion stating that "the fraction of such correlations becomes negligible", but this is more an intuitive explanation than a concrete argument. Also, the Bernoulli approximation is applied in Eq. (12). For it to be correct, the required conditions on $\frac{h_\lambda^2}{N \langle h^2 \rangle} \cdot (s-1)$ should be stated. questions: Q1: How is the set of non-edges chosen? Is it ${N \choose 2} - E$, or a random sample of them? Q2: How are clusters merged into macro-clusters in Section 3.3? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 3 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
DgEkAoYFMF
official_review
1,698,769,060,148
8fw7fmQunO
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission351/Reviewer_YFYA" ]
review: This paper proposes a node embedding method for the link prediction task, which takes local positional information into consideration by calculating the distances between nodes and local landmarks after clustering. The authors provide a sufficient theoretical foundation for how many and what kind of landmarks should be chosen in different kinds of graphs. The authors compare the proposed method against fifteen related methods on seven real-world datasets, demonstrating promising results. Overall, the paper is well organized and easy to follow. Pros: (1) This paper is easy to follow. The method description and the experiments are clear in logic. (2) The results show the promise of the proposed approach. Cons: (1) This paper is insufficiently motivated. In the Introduction, the authors divide current position-based methods into two categories and summarize the limitations as "fall short of state-of-the-art", "does not scale well" and "may not outperform" without clear explanation. Besides, I really want to know why the proposed method overcomes the current dilemma but cannot find such a description. (2) The experiments are insufficient. There are only experiments to validate the effectiveness of HPLC. The authors claim that HPLC is efficient (at line 94) and robust (at line 151). However, no experimental results support these claims. Besides, the ablation study is also insufficient. I want to know how HPLC performs with only CE or MV. (3) The theoretical analysis in Section 2 seems less related to the method. If I understand correctly, the authors analyze landmarks in three kinds of graphs with different node degree distributions and propose that the core question is how many and what kind of landmarks should be chosen. However, after the analysis, I still do not know why high-degree nodes should be chosen as landmarks. (4) The landmark graph is designed to be a complete graph, where nodes are fully connected with each other. 
I think membership encoding based on such full connectivity may affect the distinguishability between nodes. questions: (1) What are the limitations of current position-based competitors, and why is HPLC better than them? (2) There are two important hyperparameters, K and R, which result from two simple formulas. Why such operations? ethics_review_flag: No ethics_review_description: NA scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 3 technical_quality: 3 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
DAzPJk66pm
official_review
1,700,783,827,467
8fw7fmQunO
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission351/Reviewer_ZZpU" ]
review: Summary This paper investigates the hierarchical position encoding of graphs with landmarks. Position encoding encodes a node by exploiting the distances to a set of select nodes named landmarks. The authors provide a detailed theoretical analysis of the strategy of selecting landmarks on various types of random graphs. With the insights, the authors then propose a hierarchical position encoding, in which they first partition the graph into small clusters and select landmark nodes in each cluster separately. Experiments show that the proposed method achieves state-of-the-art performance. Strong points The paper is well-written and easy to follow. The theoretical analysis is solid. The insights may be useful for other researchers. Opportunities for improvement The whole analysis focuses on how the path length via the landmarks resembles the actual distance. But it is not obvious whether a tight lower bound/upper bound leads to a good position encoding. For instance, if all nodes are selected as landmarks, the path length via landmarks would be the actual distance. It would be interesting to see if the authors can test this on a toy graph. questions: The whole analysis focuses on how the path length via the landmarks resembles the actual distance. But it is not obvious whether a tight lower bound/upper bound leads to a good position encoding. The quality of position encoding perhaps could be more related to how the landmarks are distributed in the graph. For instance, if all nodes are selected as landmarks, the path length via landmarks would be the actual distance. It would be interesting to see if the authors can test this on a toy graph. ethics_review_flag: No ethics_review_description: Na scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
8f8GrRqb2l
Using Model Calibration to Evaluate Link Prediction in Knowledge Graphs
[ "Aishwarya Rao", "Narayanan Asuri Krishnan", "Carlos Rivero" ]
Link prediction models assign scores to predict new, plausible edges to complete knowledge graphs. In link prediction evaluation, the score of an existing edge (positive) is ranked w.r.t.~the scores of its synthetically corrupted counterparts (negatives). An accurate model ranks positives higher than negatives, assuming ascending order. Since the number of negatives are typically large for a single positive, link prediction evaluation is computationally expensive. As far as we know, only one approach has proposed to replace rank aggregations by a distance between sample positives and negatives. Unfortunately, the distance does not consider individual ranks, so edges in isolation cannot be assessed. In this paper, we propose an alternative protocol based on posterior probabilities of positives rather than ranks. A calibration function assigns posterior probabilities to edges that measure their plausibility. We propose to assess our alternative protocol in various ways, including whether expected semantics are captured when using different strategies to synthetically generate negatives. Our experiments show that posterior probabilities and ranks are highly correlated. Also, the time reduction of our alternative protocol is quite significant: more than 77\% compared to rank-based evaluation. We conclude that link prediction evaluation based on posterior probabilities is viable and significantly reduces computational costs.
[ "Knowledge Graph Embedding", "Link Prediction", "Model Calibration" ]
https://openreview.net/pdf?id=8f8GrRqb2l
oBoG7t2R2Q
official_review
1,701,118,057,264
8f8GrRqb2l
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1179/Reviewer_Yaad" ]
review: This work proposes a protocol based on posterior probabilities to assess link prediction models in knowledge graphs instead of using ranks. This protocol consists of learning a posterior probability function $f$ in the evaluation step. Given a score function $x$ for triples $t$, the posterior probability is $f(x(t))$. The authors evaluate the new protocol on various embedding models and datasets. The results show that their protocol yields significant improvements in the time required for model evaluation. The proposed method would have a high impact given the issues of the rank-based protocol. The paper is very well-written and sound. questions: 1. In Section 4 you said that, unlike Tabacof and Costabello, you use all available negatives. This way, you would avoid the sampling bias. Did you measure the effect of such bias compared with your solution? Is the elimination of this bias the only reason to avoid such sampling? 2. In Section 6.1 you said that the size of the validation split and the variety of the triples contained in it were detrimental to learning calibration functions. I am not sure what you mean by the variety of the triples. Regarding the size, you decided to have the validation and test splits contain 1.6% of the triples each. Does this mean that the protocol may be unreliable for larger sizes? Does it mean that this method could not be used in production for datasets with many missing triples? ethics_review_flag: No ethics_review_description: None scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 7 reviewer_confidence: 1: The reviewer's evaluation is an educated guess
hwDfsWpgho
official_review
1,700,196,085,506
8f8GrRqb2l
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1179/Reviewer_b9uU" ]
review: This paper studies the efficiency issues of evaluating KG link prediction models. It is well-written and easy to follow. The proposed method using model calibration is intuitive and straightforward. My detailed comments are as follows: 1. Although the efficiency problem of evaluating link prediction models seems to be important according to the complexity analysis in Section 3, it is worth noting that in practice this task is performed in parallel on GPUs. This makes the motivation of this paper less convincing. More importantly, it is unclear whether this consideration is taken into account in the experiments. What is the evaluation setting and hardware for the results in Section 6.3? 2. The proposed calibration method seems to be specific to ranking-based scoring functions (with negative samples). How about the other case of using cross-entropy loss (learning without negative samples), such as ConvE? 3. The application of classical calibration methods (Platt scaling and isotonic regression) is intuitive, but lacks novelty. In particular, the experiments show that there are unexpected results on some link prediction techniques (BoxE, RotatE, and RotPro) in Section 6.2 and also on some datasets (Hetionet and WN18) in Section 6.3, without a strong explanation. Further investigation of these issues and designing a corresponding calibration method robust against these cases would be a strong plus for this work. 4. One more suggestion is to also consider instance completion tasks in KGs, such as predicting (h, ?, ?); see the reference below. The efficiency issue is more serious in this task. - Rosso, Paolo, Dingqi Yang, Natalia Ostapuk, and Philippe Cudré-Mauroux. "Reta: A schema-aware, end-to-end solution for instance completion in knowledge graphs." In Proceedings of the Web Conference 2021, pp. 845-856. 2021. questions: See the comments 1, 2, and 3 above. 
ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 4 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
YvR7cQqK0G
official_review
1,700,770,155,687
8f8GrRqb2l
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1179/Reviewer_Y3XG" ]
review: This article introduces a novel protocol that leverages posterior probabilities of positive outcomes, rather than ranking systems, to evaluate link predictions in knowledge graphs. It also details a calibration function designed to assign posterior probabilities to edges. The paper is well-composed and presents its ideas clearly. The subject matter is pertinent to the field, and the analysis of the new technique, which focuses on posterior probabilities, effectively highlights the limitations of current methodologies. The approach appears sound and is elaborately described. The evaluation of this methodology is thorough, applying it to nine alternative methods across eight well-established benchmarks in the field. Notably, this approach significantly reduces the time required for computing link prediction. However, the paper states that "Models, source code, and results will be made publicly available, with the URL to be disclosed after the double-blind review process." This presents a challenge for reviewers, as they are unable to examine these materials during the review process. In today's context, it is relatively straightforward to anonymously share such materials for conference review, and the lack of this provision is a notable shortcoming of the paper. questions: Can you share the material on an anonymous link? ethics_review_flag: No ethics_review_description: no concerns scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
Ynet62dkQx
official_review
1,701,428,835,542
8f8GrRqb2l
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1179/Reviewer_UnCj" ]
review: ## Summary This work tackles the problem of evaluating link prediction in knowledge graphs for approaches based on knowledge graph embeddings (KGE). Existing evaluation protocols mainly rely on ranking-based metrics (e.g., hits@k, mean rank, or mean reciprocal rank) that require generating a large number of possible candidates for a triple, i.e., generating positives and negatives. These metrics introduce two major drawbacks to the evaluation protocol: (1) they are computationally expensive, and (2) assessing the membership of a triple to the set of positives is not straightforward as a ranking is required. To overcome these shortcomings, this work proposes a novel protocol that alleviates the computation of rankings during evaluation by learning a calibration model over the scoring function of the KGE model. The model is learned using the full set of negatives — generated with different strategies such as Global Naive (GN), Typed Constraint LCWA (TC), and Local Naive (LC) — and using Platt scaling or isotonic regression. The paper also introduces the metrics to assess the quality of the calibration model — i.e., the weighted Brier score BS_w and the weighted coefficient of determination R^2_w — and metrics to compare these values with ranks — i.e., the Pearson correlation r_{xy} with adjusted ranks. The experiments on eight datasets and nine KGE models show that the proposed protocol produces results that slightly correlate with ranking-based approaches and the evaluation time can be reduced by up to 98.9%. ## Strong Points S1. This work tackles a relevant, timely problem for knowledge graphs and link prediction approaches based on **machine learning**. S2. The proposed evaluation protocol is well-motivated. The authors clearly state the limitations of the state of the art (i.e., rank-based metrics), and explain how the presented solution overcomes the current limitations. S3. 
The experimental evaluation includes several knowledge graph embeddings to show the behavior of the new protocol over different models. ## Weak Points W1. The presentation of the paper requires major improvement. Unfortunately, the structure of the paper and presentation of the experimental results make it difficult to assess the soundness of the overall contribution. W2. The robustness of the evaluation protocol is sensitive to the learning of the calibration model, whose quality can be impacted by factors (e.g., the dataset, the strategy for generating negatives, the calibration function, etc.) independent of the link prediction model. This could lead to misleading conclusions about the quality of the link prediction approaches. W3. The interpretability of the new metrics is not sufficiently analysed. W4. The reproducibility of this work is unknown. ## Detailed Comments - **Presentation of the paper:** Each section is well-written; however, the overall paper structure is hard to follow. Below are just a few examples of how the different pieces of the work are scattered in different sections: - The concept of model calibration is introduced in Section 2.2. as part of the background. Here, only two calibration functions for model calibration, i.e., Platt scaling and isotonic regression, are defined, which are the ones investigated in the paper. Note that there might be other techniques (e.g., Beta calibration) that are not discussed in the paper. So, this section is a mixture of background and proposed solutions simultaneously. - Section 3 presents a discussion that refers to Equation 2, but this is coming too late as Section 2 finishes with Equation 9 with the definitions of TP, FP, TN, and FN. The flow of the paper is broken here. - Section 5 explains the metrics used to assess the calibration model. Yet, two metrics have already been introduced in the Background (concretely, Section 2.2), and now a new metric, BA, is introduced here. 
In addition, Section 5 includes another metric to compare the calibration model to rank-based metrics. Is this truly part of the proposed approach? Or is this a metric specific to the experimental study of this paper, which should instead be defined in Section 6? - **Soundness of the proposed solution:** As mentioned before, it is difficult to assess the soundness of the proposed evaluation protocol, as the pieces are not presented coherently. Unfortunately, the presentation of the experimental results is also hard to follow. Please see the detailed comments below: - The paper does not sufficiently discuss the limitations of the proposed protocol, for example, that it is sensitive to the learning of the calibration model. Furthermore, there are no guidelines on which calibration functions, negative generation techniques, etc., to apply in the future to obtain meaningful results for an evaluation. - Equation 17: The paper does not explain the choice of the arithmetic mean over other means, for example, the harmonic mean. - Figures 1 and 2 show the results for the best calibration function, but it would be interesting to report the results for each calibration function investigated. Without these levels of detail, it is impossible to understand the impact of the different "components" of the new protocol on the observed results. - Table 1 shows the time difference (in %) between the proposed and existing solutions. However, the time difference is defined as the sum of the times of the approaches, normalized by the time of the link prediction approach. Why the sum and not the difference? This might lead to values higher than 100%. Table 1 could be simplified by presenting the compared approaches' raw times. - Table 2 is also hard to follow.
- **Interpretability of the metrics:** - One important aspect when developing a novel benchmarking protocol is to demonstrate that the results obtained with the new techniques can be easily interpreted, i.e., to confirm that the new protocol is behaving as expected. The paper includes several passages about this matter, but this is not demonstrated with the experimental results. - The paper does not show how the proposed protocol scores individual triples using BA, BS_w, or R^2_w. This was one of the limitations of the state of the art discussed in the introduction. It would be great if the paper provided examples of how this is achieved with the new protocol. - **Reproducibility:** - Footnote 4 indicates that the required sources to reproduce the results will be available after the review. But at the time of review, it is not possible to assess how easy the reproducibility of this work is: Does the repository include all the necessary files? Does the repository include a well-documented README with instructions to repeat the experiments? - The authors may consider using services like Anonymous Github (https://anonymous.4open.science) for future submissions. This allows reviewers to assess the reproducibility of the work without compromising the double anonymization. - **Relevance to the Web:** - This work perfectly fits the topic of "knowledge graphs". Yet, the connection of this paper to the overarching scheme of the conference, i.e., The Web, is not straightforward. - This type of work is more suitable for a machine learning or representation learning conference. - This remark does not (negatively) influence the overall rating of this paper, but it is more of a suggestion for fitting venues for this work. questions: Q1. About the sensitivity of the proposed evaluation protocol, how do the different factors (dataset, generation of negatives, calibration function, etc.) affect the robustness of the results obtained with the new protocol? Q2.
In Equation (17), why not use the harmonic mean between TPR and TNR (i.e., which resembles the F-measure between precision and recall) instead of the arithmetic mean? Q3. Can you provide concrete examples of how triples are scored with the new protocol and compare them to the ranking obtained with a rank-based metric? Q4. Do you have any concrete guidelines on configuring the proposed protocol to ensure high robustness and interpretability of the results? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 4 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
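As a concrete illustration of the reviewer's Q2 (arithmetic vs. harmonic mean of TPR and TNR in Equation 17), the following self-contained sketch uses made-up rate values, not numbers from the paper, to show how the two means diverge when the rates are imbalanced:

```python
# Hypothetical illustration of Q2: balanced accuracy (arithmetic mean of
# TPR and TNR) versus a harmonic-mean variant, which penalizes imbalance
# between the two rates much more strongly.

def balanced_accuracy(tpr, tnr):
    """Arithmetic mean of TPR and TNR (the form used in Equation 17)."""
    return (tpr + tnr) / 2

def harmonic_mean(tpr, tnr):
    """Harmonic mean of TPR and TNR, resembling the F-measure."""
    if tpr + tnr == 0:
        return 0.0
    return 2 * tpr * tnr / (tpr + tnr)

# Balanced rates: both means agree.
print(round(balanced_accuracy(0.8, 0.8), 4), round(harmonic_mean(0.8, 0.8), 4))

# Imbalanced rates: the arithmetic mean stays at 0.5, while the harmonic
# mean collapses toward the smaller rate.
print(round(balanced_accuracy(0.99, 0.01), 4))  # 0.5
print(round(harmonic_mean(0.99, 0.01), 4))      # 0.0198
```

This makes the trade-off behind the question concrete: the harmonic mean would reward only models that keep both rates high, whereas the arithmetic mean can mask a degenerate classifier.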
8f8GrRqb2l
Using Model Calibration to Evaluate Link Prediction in Knowledge Graphs
[ "Aishwarya Rao", "Narayanan Asuri Krishnan", "Carlos Rivero" ]
Link prediction models assign scores to predict new, plausible edges to complete knowledge graphs. In link prediction evaluation, the score of an existing edge (positive) is ranked w.r.t. the scores of its synthetically corrupted counterparts (negatives). An accurate model ranks positives higher than negatives, assuming ascending order. Since the number of negatives is typically large for a single positive, link prediction evaluation is computationally expensive. As far as we know, only one approach has proposed to replace rank aggregations by a distance between sample positives and negatives. Unfortunately, the distance does not consider individual ranks, so edges in isolation cannot be assessed. In this paper, we propose an alternative protocol based on posterior probabilities of positives rather than ranks. A calibration function assigns posterior probabilities to edges that measure their plausibility. We propose to assess our alternative protocol in various ways, including whether expected semantics are captured when using different strategies to synthetically generate negatives. Our experiments show that posterior probabilities and ranks are highly correlated. Also, the time reduction of our alternative protocol is quite significant: more than 77% compared to rank-based evaluation. We conclude that link prediction evaluation based on posterior probabilities is viable and significantly reduces computational costs.
[ "Knowledge Graph Embedding", "Link Prediction", "Model Calibration" ]
https://openreview.net/pdf?id=8f8GrRqb2l
XRflJ8NvZQ
decision
1,705,909,230,815
8f8GrRqb2l
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: The paper is sound in methods and experiments, as acknowledged by the reviewers. The issues regarding methodology and fitness to the conference have been addressed; hence, the paper's evaluation has also been raised toward weak acceptance.
GkhNJMSGkn
official_review
1,699,288,362,387
8f8GrRqb2l
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1179/Reviewer_FvPt" ]
review: In the article "Using Model Calibration to Evaluate Link Prediction in Knowledge Graphs", the authors propose to make a contribution to the evaluation of methods that aim to predict missing triples in a triplestore. The main problem with this article is that its contribution is purely about KR / KG and no link to the Web is made. So it violates the relevance rule: "Every submission must clearly state how the work is relevant to the Web and to the track in the first page. Submissions that merely use a Web artifact---e.g., a dataset or a Web Application Programmer Interface (API) or a social network---rather than answering a specific Web-related scientific research challenge, are out of scope and will be desk-rejected." This paper never formulated any explicitly Web-related scientific research challenge. It is clear from the list of contributions on page 2 that the submission would have been suitable for a purely AI/KG conference but not for TheWebConf: "• We discuss how to learn a calibration function for link prediction evaluation using Platt scaling and isotonic regression. As far as we know, it is the first time this has been studied. • We propose an alternative protocol for link prediction evaluation based on the output of the calibration function learned. This new protocol only works with positives. • We propose several ways of assessing the accuracy and reliability of the calibration function learned, and of our alternative protocol to evaluate link prediction. • We conduct experiments involving popular methods, such as BoxE, HAKE, QuatE and TransE, and datasets, such as FB15K-237, NELL-995, WN18RR and YAGO3-10." questions: How is your paper compliant with the relevance rule of the CfP of TheWebConf? "Every submission must clearly state how the work is relevant to the Web and to the track in the first page.
Submissions that merely use a Web artifact---e.g., a dataset or a Web Application Programmer Interface (API) or a social network---rather than answering a specific Web-related scientific research challenge, are out of scope and will be desk-rejected." What is your paper contributing to the Web? ethics_review_flag: No ethics_review_description: none scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
8Tp7MPWF9L
Don't bite off more than you can chew: Investigating Excessive Permission Requests in Trigger-Action Integrations
[ "Liuhuo Wan", "Kailong Wang", "Kulani Tharaka Mahadewa", "Haoyu Wang", "Guangdong Bai" ]
Various web-based trigger-action platforms (TAPs) enable users to integrate diverse Internet of Things (IoT) systems and online services into trigger-action integrations (TAIs), designed to facilitate the functionality-rich automation tasks called applets. A typical TAI involves at least three cooperative entities, i.e., the TAP, and the participating trigger and action service providers. This multi-party nature can, however, render the integration susceptible to security and privacy challenges. Issues such as privileged action mis-triggering and sensitive data leakage have been continuously reported from existing applets by recent studies. In this work, we investigate the cross-entity permission management in TAIs, addressing the root causes of the applet-level security and privacy issues that have been the focus of the literature in this area. We advocate the permission-functionality consistency, aiming to reclaim fairness when the user is requested for permissions. We develop PFCon, which extracts the required permissions based on all functionalities offered by an entity, and checks the consistency between the required and requested permissions on users' assets. PFCon is featured in leveraging advanced GPT-based language models to address the challenge in the TAI context that the textual artifacts are short and written in an unformatted manner. We conduct a large-scale study on all TAIs built around IFTTT, the most popular TAP. Our study unveils that nearly one third of the services in these integrations request excessive permissions. Our findings raise an alert to all service providers involved in TAIs, and encourage them to enforce the permission-functionality consistency.
[ "Trigger-Action Platform", "Permission Minimization" ]
https://openreview.net/pdf?id=8Tp7MPWF9L
zCBmq30Tqx
decision
1,705,909,242,224
8Tp7MPWF9L
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept (Oral) comment: The paper presents a framework to discover when services on so-called trigger-action platforms such as IFTTT ask for more permissions than they need. Authors find some problematic cases in IFTTT, which were also responsibly disclosed. Detailed reviews raised several questions to which the authors provide suitable answers, and also make several promises for improving clarity in writing and adding additional details. All these should be implemented in the next version submitted.
rb3lVZNizc
official_review
1,699,274,480,157
8Tp7MPWF9L
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2547/Reviewer_wLAG" ]
review: ### Summary: Using the newly proposed approach PFCon, the paper studies security and privacy issues in the context of trigger action platforms (TAPs). In particular, the paper considers issues that result from an excess of object and operational-level permissions. PFCon utilizes LLM prompts to identify the requested and required permissions to subsequently model their relationship using a lattice-based approach. For the study, the paper considers 427 services that are offered on the IFTTT website and discovers significant issues in one-third of the studied services. To validate and confirm the accuracy of their approach, the authors conduct a small user study (involving people with a computer science background), where the participants manually classify 64 services. ### Pros: +1: Very well-written and presented paper +2: Quite the interesting and novel approach to tackle the outlined challenges ### Cons: -1: Some minor issues/questions remain The paper is a very interesting read and nicely outlines the problem, the design, and the evaluation, including the results and potential reasons for the observed situation. From my point of view, I am not able to identify significant flaws in the work, which is why I would like to see it at the main conference. I also like the extra content in the appendix of the paper. Below, I will attach a list of minor comments that could allow the authors to further improve their paper before publication. ### Detailed Comments: #### -1: Minor issues (sorted by occurrence) - Unfortunately, the paper does not indicate whether PFCon will be publicly available (open-sourced). Should third parties be allowed to run the proposed approach? - My impression is that Section 3.2 offers quite a lot of engineering details that are not required to understand the design of PFCon. 
As a result, I would recommend that the authors move most of the details regarding the identification of login fields, etc., to the appendix of the paper and use the space for more important content. - The paper outlines that 700 services are available and that 427 have been selected for the evaluation. While the paper gives a few pointers as to why the services are considered in the evaluation, the corresponding paragraph is quite vague. Are foreign languages and the lack of OAuth authentication the only reasons for exclusion (what is the distribution)? Or are there other reasons at play that reduce the number by quite a lot? - The authors state that they reached out to the service developers to inform them about their findings. Unfortunately, the paper does not report whether any changes have been made and how many responses the authors received. I would like to see additional details in this regard, possibly as part of the appendix. Moreover, it would be interesting to observe how the situation evolves now that PFCon is available and allows for repeated studies. - The order of Tables 4 and 5 is different from their first reference in the text. I would recommend swapping them. - The conclusion of the paper is rather short. I hope that the authors can add a few more details to this part of the paper once they have condensed Section 3.2 (see above). - The text embedding of the tables in the appendix is quite brief. I would like to have some more elaborate context for each of the subsections. Since the number of pages for the appendix is not limited by the submission requirements, the authors can easily extend it. - Picking up on the previous comment, the semantics of Table 9 are not pointed out in the current version of the paper. What is the rationale behind the grouping of different approaches? #### Nits: - There is a typo in Figure 6: "carema" should be "camera" - There are a few lines that exceed the column width, for example, in Sections 4.3.3 and 5.
- I believe Android should be capitalized in Section 5. Moreover, I think the plural of "app" is needed in the same sentence. - The first sentence of Section 6 most likely sounds better if it is written in past tense. ### Post-Rebuttal I kindly thank the authors for responding to the reviews and outlining their proposed changes. After these comments, I do not have any follow-up questions concerning the aspects that I initially raised as part of my review. I am curious to see whether the authors still find the time to respond to the response by Reviewer mciy. Certainly, the approach is "flawed" in the sense that it can only work with the textual information that is available. Personally, I would not discredit the proposed approach for this reason because effectively any design that relies on the textual information is impacted/limited in the same way. questions: n/a ethics_review_flag: No ethics_review_description: n/a scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
qsjMnaLxDt
official_review
1,699,765,917,554
8Tp7MPWF9L
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2547/Reviewer_hRwH" ]
review: ### Summary In this paper, the authors develop a tool, named PFCon, which is dedicated to detecting the permission-functionality consistency issue in trigger-action integrations. Specifically, the consistency issue is divided into the object level and the operation level. The authors first collect TAIs from IFTTT. Then, taking advantage of ChatGPT, they extract the required and requested permissions for each TAI. Last, they utilize a lattice system to identify excessive permission requests. The evaluation depicts the landscape of this issue in IFTTT and conducts a large-scale analysis. ### Strength - The story of this paper is quite complete: the motivation, methodology, and evaluation are organized logically and fluently. - The authors reveal permission excess issues in TAIs, which are widespread according to their experimental results. This is interesting. - The adopted methodology is systematic, and the evaluation is comprehensive. ### Weakness - The adopted lattice system needs to be better illustrated. - Some key parts are missing or deserve more attention. For example, there is no "threats to validity" section before the conclusion. ### Comments First of all, I think this paper is quite complete and deserves to be published. However, some revisions should be conducted before that. I will detail some concrete concerns in the following. In Fig. 2, the authors should add sub-captions to distinguish the requested permissions from the required permissions. This will increase the readability of this figure. One of the main concerns is the adopted lattice system. Firstly, in Section 2.3, the authors define $S$ and $R$ as requested and required permissions, respectively. Thus, can I simply extract the abused permission as the $p$ that satisfies $\exists p \in S. p \notin R$? Moreover, the description in Section 3.4 is unclear. For example, in Section 3.4.1, the authors detail the built object lattice and operation lattice. How are these two lattices built?
Through ChatGPT? In Fig. 6, why adopt *bottom* instead of *top* as the default symbol? In Section 3.4.2, in object-level detection, the authors say $\exists (OP, OB) \in R$ such that $S.OB \preceq R.OB$. The example only illustrates that the *comment* object is on the same level as the other required permissions. Can you revise the example to illustrate the $\prec$ relation? In operation-level detection, why does the tool proceed to check the operation fields after successfully detecting object-level permission abuse? Moreover, I think a universal quantifier is needed before $S.OP \preceq R.OP$. Last but not least, if you have constructed a lattice system, why do you still perform the comparison with the help of ChatGPT instead of performing it directly, given that ChatGPT cannot guarantee 100% precision on such a task? In Section 4.1, there is a brief description of the ethical considerations. However, I think more details should be discussed here. For example, how long did it take you to disclose the corresponding permission abuse to IFTTT and the service providers after you discovered it? How many of the cases have been recognized and patched in a timely manner? In the evaluation part, I would prefer to see a separate paragraph after each RQ to summarize the findings of that RQ. In Table 6(a), the positive cases are defined as $S-R$, which does not distinguish object-level from operation-level permission excess. On the one hand, if they are not distinguished, is it necessary to distinguish them in the methodology part? On the other hand, I would like to see a break-down analysis here, i.e., how many cases suffer object-level permission abuse and how many suffer operation-level abuse. What are tasks 1 to 4 in Table 6(b)? The authors should clarify this. Last but not least, the authors should add an explicit paragraph to discuss the threats to validity. questions: Please refer to the `Review` part.
ethics_review_flag: Yes ethics_review_description: The authors have addressed part of the ethical considerations; however, I think they should be discussed in more detail. scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 6 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
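The lattice check this review asks about ($S.OB \preceq R.OB$) can be sketched as a simple ancestor test over an object hierarchy. The hierarchy, object names, and edges below are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation) of an object-level
# consistency check: a requested object OB is covered when S.OB <= R.OB
# for some required permission, i.e., the requested object lies at or
# below a required object in the object lattice.

# Hypothetical object lattice, encoded as child -> parent edges.
OBJECT_PARENT = {
    "comment": "post",
    "tag": "post",
    "post": "account",
    "profile": "account",
}

def leq(a, b):
    """True if object a <= b, i.e., b is a (or an ancestor of a)."""
    while a is not None:
        if a == b:
            return True
        a = OBJECT_PARENT.get(a)
    return False

def excessive_objects(requested, required):
    """Requested objects not covered by any required object."""
    return {ob for ob in requested if not any(leq(ob, r) for r in required)}

requested = {"comment", "profile"}
required = {"post"}  # the functionality only needs access to posts
print(excessive_objects(requested, required))  # {'profile'}
```

Here `comment` is covered because it sits below `post`, while `profile` is flagged as excessive because no required object dominates it in the hierarchy.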
fsE6cYgPhA
official_review
1,700,841,372,750
8Tp7MPWF9L
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2547/Reviewer_mciy" ]
review: Summary The authors study cross-entity permission management in trigger-action integrations (TAIs). More specifically, they propose and develop a prototype named PFCON to analyze the permission-functionality consistency in TAIs. In detail, PFCON employs a large language model to analyze the required permissions based on functionalities offered by an entity and checks the consistency between the required and requested permissions on users' assets. Applying PFCON to TAIs built upon IFTTT, the authors find that several TAIs request excessive permissions. Strengths - The authors take the first step to investigate the problem of excessive permission requests in TAIs and develop a prototype named PFCON to detect the problem. - The authors do find problematic cases in real-world TAIs. Weaknesses - PFCON does not take applets (i.e., code) into analysis, making the results unreliable. - The feedback on the discovered problematic cases is not disclosed. - The source code of the prototype and the dataset have not been released. Detailed comments The paper studies an important problem in TAIs. However, I have the following suggestions and comments on the current version of the paper. 1. Clarifying the correctness of the proposed approach. PFCON mainly analyzes the documentation of a TAI's interfaces and functionalities and the corresponding description of the authorization page to identify excessive permission requests. My main concern is whether such textual information is reliable. If the textual information is incorrect, the analysis results of PFCON become unreliable. In my view, it is common that the textual information is incorrect or imprecise. For example, in Figure 2, can "tag" denote "name"? Can the "comments" refer to "modified_time" and/or "modified_by"? It seems that there is no ground truth about the correspondence between different descriptions.
Such a problem makes me worry about the correctness of the proposed approach. 2. Clarifying why PFCON does not analyze applets. In my view, code is more reliable than textual information. I think PFCON can also perform code analysis on applets to verify the correctness of the results obtained by purely analyzing the textual information. If the authors do not think so, provide a corresponding discussion in the paper. 3. Disclosing the detected problematic TAIs to IFTTT and including the feedback in the paper. The authors should follow the responsible disclosure policy to disclose their findings to IFTTT. If IFTTT can help confirm the correctness of the detection results, the authors could add the feedback to the paper to demonstrate the effectiveness of PFCON. 4. Releasing the source code and dataset. Will the authors release the source code of PFCON and the dataset used in the evaluation? questions: 1. Is the textual information about TAIs' interfaces, functionalities, and authorization pages reliable? 2. Will the authors disclose their findings to IFTTT? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 4 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
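The top-level consistency condition the reviews of this paper reference ($\exists p \in S.\ p \notin R$, with positives defined as $S-R$) amounts to a per-service set difference between requested and required permissions. A minimal sketch, with invented service names and permission sets (not data from the paper):

```python
# Hypothetical sketch of the excessive-permission check: a service's
# excess permissions are the requested set S minus the required set R.
# All names and sets below are made up for illustration.

services = {
    "smart_light": {
        "requested": {"read_state", "write_state", "read_location"},
        "required":  {"read_state", "write_state"},
    },
    "calendar": {
        "requested": {"read_events"},
        "required":  {"read_events"},
    },
}

def excess(service):
    """Permissions requested but never needed by any functionality."""
    return service["requested"] - service["required"]

# A service is flagged when the difference S - R is non-empty.
flagged = {name for name, svc in services.items() if excess(svc)}
print(flagged)  # {'smart_light'}
```

In this toy example, `smart_light` is flagged because it requests `read_location` without any functionality that requires it, which mirrors the fairness violation the permission-functionality consistency principle is meant to catch.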
8Tp7MPWF9L
Don't bite off more than you can chew: Investigating Excessive Permission Requests in Trigger-Action Integrations
[ "Liuhuo Wan", "Kailong Wang", "Kulani Tharaka Mahadewa", "Haoyu Wang", "Guangdong Bai" ]
Various web-based trigger-action platforms (TAPs) enable users to integrate diverse Internet of Things (IoT) systems and online services into trigger-action integrations (TAIs), designed to facilitate the functionality-rich automation tasks called applets. A typical TAI involves at least three cooperative entities, i.e., the TAP, and the participating trigger and action service providers. This multi-party nature can render the integration susceptible to security and privacy challenges though. Issues such as privileged action mis-triggering and sensitive data leakage have been continuously reported from existing applets by recent studies. In this work, we investigate the cross-entity permission management in TAIs, addressing the root causes of the applet-level security and privacy issues that have been the focus of the literature in this area. We advocate the permission-functionality consistency, aiming to reclaim fairness when the user is requested for permissions. We develop PFCon, which extracts the required permissions based on all functionalities offered by an entity, and checks the consistency between the required and requested permissions on users' assets. PFCon is featured in leveraging advanced GPT-based language models to address the challenge in the TAI context that the textual artifacts are short and written in an unformatted manner. We conduct a large-scale study on all TAIs built around IFTTT, the most popular TAP. Our study unveils that nearly one third of the services in these integrations request excessive permissions. Our findings raise an alert to all service providers involved in TAIs, and encourage them to enforce the permission-functionality consistency.
[ "Trigger-Action Platform", "Permission Minimization" ]
https://openreview.net/pdf?id=8Tp7MPWF9L
C8lyrYq7ta
official_review
1,700,749,362,653
8Tp7MPWF9L
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2547/Reviewer_kKe4" ]
review: ----------- Summary ----------- The paper investigates the cross-entity permission management in TAIs, and tries to address the root causes of the applet-level security and privacy issues that have been the focus of the literature in this area. They develop PFCon, which extracts the required permissions based on all functionalities offered by an entity, and checks the consistency between the required and requested permissions on users' assets. The study reveals some interesting findings, such as that nearly one third of the services in these integrations request excessive permissions. ----------- Strengths ----------- S1. The paper is well written, and the logic is clear. S2. The authors conduct a large-scale study to examine the permission excess issues. S3. The paper proposes an assessment approach to automatically identify permission excess issues from TAIs. ----------- Weaknesses ----------- W1. The motivation is not clear. It is unclear how serious the permission excess problem is in TAIs. W2. It is not clear what resources (services) the paper used to collect the artifact. How are the services selected? What are the statistics of the collected information? W3. The rationale of the methodology design is lacking. For example, how and why is TAIFU selected as the analyzer? Similarly, how is the GPT-4 model selected and used? How are the prompts designed? W4. It is unclear why the two permission lattice systems (object lattice and operation lattice) are effective in capturing the permissions. How would the performance of this step affect the result of the detection of excessive permission? W5. The sample size is too small (64 samples). W6. I also have doubts about the novelty of PFCon, as it seems to combine existing tools. The authors may want to clarify the innovation of the approach clearly. questions: - What is the motivation of the work? Why is solving the permission excess problem important?
- What are the resources (services) the paper used to collect the artifact? How are the services selected? What are the statistics of the collected information? - What is the rationale of the methodology design? For example, how and why is TAIFU selected as the analyzer? Similarly, how is the GPT-4 model selected and used? How are the prompts designed? - Why are the two permission lattice systems (object lattice and operation lattice) effective in capturing the permissions? How would the performance of this step affect the result of the detection of excessive permission? - What is the novelty of PFCon, as it seems to combine existing tools? ethics_review_flag: No ethics_review_description: N/A scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 3 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
8KMXZxEnA4
Malicious Package Detection using Metadata Information
[ "Sajal Halder", "Michael Bewong", "Arash Mahboubi", "Yinhao Jiang", "Rafiqul Islam", "Zahid Islam", "Ryan H.L. Ip", "Muhammad Ejaz Ahmed", "Gowri Sankar Ramachandran", "Muhammad Ali Babar" ]
Protecting software supply chains from malicious packages is paramount in the evolving landscape of software development. Attacks on the software supply chain involve attackers injecting harmful software into commonly used packages or libraries in a software repository. For instance, JavaScript uses Node Package Manager (NPM), and Python uses Python Package Index (PyPi) as their respective package repositories. In the past, NPM has had vulnerabilities such as the event-stream incident, where a malicious package was introduced into a popular NPM package, potentially impacting a wide range of projects. As the integration of third-party packages becomes increasingly ubiquitous in modern software development, accelerating the creation and deployment of applications, the need for a robust detection mechanism has become critical. On the other hand, due to the sheer volume of new packages being released daily, the task of identifying malicious packages presents a significant challenge. To address this issue, in this paper, we introduce a metadata-based malicious package detection model, MeMPtec. This model extracts a set of features from package metadata information. These extracted features are classified as either easy-to-manipulate (ETM) or difficult-to-manipulate (DTM) features based on monotonicity and restricted control properties. By utilising these metadata features, not only do we improve the effectiveness of detecting malicious packages, but also we demonstrate its resistance to adversarial attacks in comparison with existing state-of-the-art. Our experiments indicate a significant reduction in both false positives (up to 97.56\%) and false negatives (up to 91.86\%).
[ "NPM Metadata", "Malicious Detection", "Feature Extractions", "Adversarial Attacks", "Software Supply Chain" ]
https://openreview.net/pdf?id=8KMXZxEnA4
vtDh6dEMEU
decision
1,705,909,227,033
8KMXZxEnA4
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Accept comment: ## Summary This paper introduces MeMPtec, a metadata-based machine learning model for detecting malicious packages in software repositories like NPM and PyPi. The model classifies features into easy-to-manipulate (ETM) and difficult-to-manipulate (DTM) categories, significantly reducing false positives and negatives compared to existing methods. It represents a critical step in securing software supply chains, demonstrating how metadata can be leveraged to enhance package safety. ## Evaluation **Strengths:** 1. **Clear Methodology:** The paper outlines a distinct approach to feature extraction based on metadata, improving the resilience of the detection model against adversarial attacks. 2. **Notable Performance:** Experimental results show substantial improvements in detection accuracy over existing methods. **Weaknesses:** 1. **Lack of Detail:** Some aspects, such as the dataset used and the specific features in existing techniques, need more clarification. 2. **Presentation Issues:** The paper could benefit from clearer explanations of self-defined terms and improved figure organization. 3. **Artifact Availability:** The absence of publicly available artifacts limits the ability for extended validation and replication. ## Suggestions for Improvement 1. **Enhance Dataset Description:** Provide more details about the dataset's collection process and label accuracy. 2. **Improve Presentation:** Clarify self-defined terms and align figures with their references in the text. 3. **Release Artifacts:** Publicly share the code and data post-review to facilitate further research. ## Recommendations The paper presents notable advancements in the field of software security, particularly focusing on the detection of malicious packages through metadata analysis. The research is underpinned by robust methodologies, and its contributions are noteworthy. 
However, further refinements in terms of detail and clarity could significantly amplify its impact. To achieve the highest quality in the final paper, the Program Committee recommends appointing a shepherd to guide its finalization. ## Author's Rebuttal The authors have addressed several concerns raised by the reviewers, including clarifying their novel approach to feature extraction and the significance of their contribution in the context of adversarial attacks. They acknowledged the limitations related to the dataset and agreed to include more detailed discussions and clarifications in the final version of the paper. Additionally, they plan to make the code and data publicly available, which will contribute to the field's advancement by allowing for further research and validation. ---
hYRUrbu4z2
official_review
1,700,764,174,877
8KMXZxEnA4
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1343/Reviewer_P2EH" ]
review: I thank the authors for this submission; this paper is well written, well motivated, and easy to follow. The authors present a new ML methodology for detecting malicious NPM packages, which uses a set of features that are highly resilient to modifications by malicious actors. They show that their models outperform relevant previous work. My main questions and recommendations for this paper are the following: - Intro: It would be good to add some numbers about the popularity of PyPi, similar to those shown for NPM. If PyPi is not as relevant, then it could also just be mentioned as the Python alternative to NPM. - The categories presented in Table 1 are different from those presented in S4. They should either be introduced further in S2 or at least be consistent with S4 so that readers can find a definition of these categories. - The authors assume packages on NPM are not malicious. This is probably true for the vast majority of packages, but it would be good if they could provide any indication of how often malicious packages are found and how long they often survive before being removed. - Is metadata on NPM packages self-reported by developers or is there some sort of verification process? This should be further developed in this paper, as relying on self-reported metadata has its own set of inherent challenges. It becomes clearer in S5 that an adversarial attacker can modify those, but it is unclear until then. - On a similar note, I think there's a lack of discussion on how likely it is that malicious actors do change these values. The authors show clear examples as to why an actor would do this, but it is unclear if there is any evidence of this sort of behavior happening in the wild. questions: - The authors show that even manipulating 100% of features leads to only small decreases in accuracy and recall. This begs the question of how needed these features are and whether it would make sense to only use those that are more resilient.
ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
amShwTkVOj
official_review
1,700,809,945,205
8KMXZxEnA4
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1343/Reviewer_q17h" ]
review: Summary: The paper presents MeMPtec, a metadata-based model for detecting malicious packages in software repositories like NPM and PyPi. MeMPtec classifies metadata features into easy-to-manipulate (ETM) and difficult-to-manipulate (DTM) categories, improving detection effectiveness and showing resilience to adversarial attacks. Experimental results indicate a significant reduction in both false positives (up to 97.56%) and false negatives (up to 91.86%) compared to existing methods. MeMPtec addresses the critical challenge of securing software supply chains by leveraging metadata information. Strengths: + Well-written and easy to follow + The metadata-related features considered are relatively comprehensive. Weaknesses: - Unavailable artifacts - Missing some details - Presentation needs to be improved questions: For the most part, I enjoyed reading this paper. My biggest concern is that, in my opinion, the contribution of this paper is mainly to propose more features based on metadata information compared to other work. This makes the novelty and contribution of this paper less impressive. The details about the dataset used are not clear. For example, how do you collect the dataset? Can you make sure that the labels (benign/malicious) of your data are all accurate? By the way, the size of the dataset seems to be relatively small. It's not clear how well MeMPtec performs in real-world scenarios. Table 6 takes up a lot of space but is not well interpreted in the text. It's not clear what specific features are in Existing_tec. It would be interesting to discuss which of your proposed features are more important compared to these features and thereby enhance the classification model. There is no discussion of the limitations of the work. Public release of the artifacts might help readers who may want to extend this work further. Currently missing.
The presentation could be improved: (1) Table 2 only lists the self-defined name of each piece of metadata, which is not clear enough for readers to understand the specific information. It would be better to expand the table to three columns presenting the metadata information: self-defined names, descriptions, and examples. (2) There are many self-defined terms in the paper. Sometimes it's not easy to understand a term, e.g., what is D_imp_L in Table 6? What is I_new? (3) It is recommended that the figures be numbered in the order in which they appear in the text; however, Figure 3 is mentioned before Figure 2. ethics_review_flag: No ethics_review_description: No ethical concerns. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 3 technical_quality: 3 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
KBJYKbHTsM
official_review
1,699,888,657,225
8KMXZxEnA4
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1343/Reviewer_JnBT" ]
review: This work focuses on the big, unsolved problem of supply-chain security. Indeed, especially in the context of open source, third party libraries and applications are commonly used in software development. This poses a significant threat for developers that might include in their code base unwanted software such as operational libraries exhibiting malicious behavior. To cope with this issue, the authors introduce a novel detection model that leverages metadata information to infer the reputation of a third party package. The model is tested against adversarial attacks and compared with previous work. The paper is well written and easy to follow. The authors did a good job in motivating their work and discussing their contributions in this research field. While other researchers had previously suggested using metadata information to infer the reputation of a software package, this work extends previous research. The authors introduced a novel set of features (i.e., temporal and interaction in Table 1) and did meticulous work in feature engineering (the main contribution) that resulted in a model which proved to outperform previous work (as per Table 6). However, there are some limitations and considerations that I would like to mention. See the questions section. questions: - Figure 2 shows the resilience of the model to adversarial attacks. For example, a) depicts how the accuracy of the model decreases with the increase in the percentage of feature manipulation. Could the authors explain why their best model (in yellow, MeMPtec) still performs 80% detection when all the features (100% rate) are actually manipulated?
[Rebuttal update: this point will be better elaborated in the camera-ready version of the paper] - To get a better feeling for the prevalence of packages with shady metadata information, it would be interesting if the authors could give some numbers (based on their experience) of how many malicious packages are actually found in the wild out of the total, e.g. on a daily basis. - How many of these packages are actually malicious by nature, and how many are benign packages being hijacked? In the latter case, I assume the model won't be able to detect them because the metadata was not changed. Could the authors better discuss this point? - The paper is lacking a discussion of some false positive examples. Why do these occur, and how could the model be improved to further reduce them? - Do the authors plan to introduce a novel class, e.g. suspicious, to handle suspicious packages, e.g. packages with generic metadata information that might fool the detection? - s/true/original on line 6 of Section 6.3 ethics_review_flag: No ethics_review_description: Nothing scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
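The Figure 2 robustness question above could be probed with a simple experiment: perturb an increasing fraction of feature columns and re-measure accuracy. A minimal sketch (hypothetical; `manipulate_fraction` is an invented helper, and the attacker model of overwriting columns with random values is an assumption, not the paper's setup):

```python
import numpy as np

def manipulate_fraction(X, frac, rng):
    """Overwrite a given fraction of feature columns with attacker-chosen
    random values, simulating manipulation of easy-to-manipulate features."""
    X = X.copy()
    k = int(round(frac * X.shape[1]))
    cols = rng.choice(X.shape[1], size=k, replace=False)
    X[:, cols] = rng.random((X.shape[0], k))
    return X

# Sweeping frac from 0.0 to 1.0 and re-evaluating a trained classifier on
# manipulate_fraction(X_test, frac, rng) traces out a curve analogous to
# Figure 2(a); accuracy staying near 80% at frac=1.0 would suggest the model
# relies on signal that this kind of overwrite does not destroy.
```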
AwtjqtpYfW
official_review
1,700,409,367,619
8KMXZxEnA4
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1343/Reviewer_8yKD" ]
review: This work aims to identify malicious NPM packages by using two groups of features, those that are easy to manipulate and those that are difficult to manipulate, and compares the resulting model with one using features from existing work (for detection tasks among software packages). The dataset used consists of benign NPM packages and malicious ones obtained by leveraging a GitHub project. The experimental results show that (1) the trained model improves performance by 1-3 percentage points compared with the baseline model using features from existing work, and (2) the difficult-to-manipulate features play a significant role. Pros: 1. A large-enough dataset. Cons: 1. This work puts a lot of effort into feature selection, and thus its research contribution is unclear. 2. Regarding the feature selection, 1. since the SVM model with only the difficult-to-manipulate features already works pretty well (in Table 6, Figures 2, 5, 6, and 7), is it necessary to come up with the easy-to-manipulate features? 2. is it necessary to separate name_exist from name_length, as an example, in Table 4? We can set name_length to -1 to represent name_not_exist. 3. It is unfair to compare the proposed model with the model using features from existing work [1, 15, 26, 31, 36], since many of them are not for the same purpose as this work. In addition, it is better to list the features used in the baseline model. 4. In Algorithm 2 (Appendix A.2), it seems that when M.predict(a'), a is still in the model M, where a is the original data and a' is the manipulated data. This approach isn't realistic. It is better to remove a from M when M.predict(a'). 5. Minor mistakes in paper writing. 1. In line 628, it should not be "10% malicious packages" but 1/11 according to Table 5. 2. In line 749, the "FP in Figure 3 (b)" should be "FN in Figure 3 (b)". questions: Concerns raised in the cons above.
ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 3 technical_quality: 3 reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature
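Point 4 above (about Algorithm 2) can be made concrete with a leave-one-out sketch: the original sample a is removed from the training data before the model predicts on the manipulated a'. This is a hypothetical illustration with invented names (`nn_predict`, `evaluate_adversarial`), using a 1-nearest-neighbour stand-in rather than the paper's actual classifier:

```python
import numpy as np

def nn_predict(X_train, y_train, a):
    """1-nearest-neighbour prediction, a stand-in for the trained model M."""
    dists = np.linalg.norm(X_train - a, axis=1)
    return y_train[np.argmin(dists)]

def evaluate_adversarial(X, y, manipulate):
    """For each sample a, predict on the manipulated a' with a itself
    removed from the training data, so M cannot simply memorize a."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i      # remove a from M
        a_prime = manipulate(X[i])         # adversarially manipulated a'
        correct += int(nn_predict(X[mask], y[mask], a_prime) == y[i])
    return correct / len(X)
```

Evaluating this way avoids the unrealistic setting where the model has already seen the untouched original of the sample it is asked to classify.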
9Wj7odi6Ar
official_review
1,700,790,921,288
8KMXZxEnA4
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission1343/Reviewer_DPBn" ]
review: # Summary The paper proposes MeMPtec, a system for detecting malicious source code packages using metadata. The approach includes a feature extraction technique that identifies features that are easy to manipulate, and those that are difficult, and leverages them to find attack-resistant features. An experimental evaluation demonstrates significant reduction in FPs and FNs relative to state-of-the-art feature selection approaches. # Strengths + The paper identifies key properties that a feature needs to possess if it is to be used reliably for analysis without the fear of it being manipulated by an adversary. + The inclusion of features that are difficult to manipulate (DTM) leads to improved results over the state-of-the-art. + By constructing and testing with adversarial samples, the paper demonstrates how including the DTM features helps prevent misclassification (although it does not completely eliminate the problem). # Weaknesses - The improvement over prior work in the base scenario (i.e., without adversarial/manipulated features) is not significant. # Additional Comments The paper makes an important contribution to the literature by identifying and leveraging features that are difficult to manipulate (based on well-reasoned properties) for the task of detecting malicious NPM packages. I was particularly surprised by how fragile existing work (and features) were in the face of adversarial samples, and how using the DTM features significantly improved performance in the same scenario. That said, the values in Table 6 show that existing techniques are not far behind the proposed approach in non-adversarial situations. The paper attempts to explain this by describing how, in absolute terms, the proposed approach generates fewer FPs than existing techniques (i.e., 2.6 vs 23). However, I don't see how the absolute number of false positives/negatives is even relevant, given that they become insignificant when considering the number of total samples.
I don't think this diminishes the value of the paper, but rather that this unnecessary dissection of the FPs/FNs distracts from the overall message of the paper, and can be removed. To summarize, this is a good paper that clearly outlines the properties that make features hard to manipulate, then systematically identifies features that exhibit the properties, and experimentally demonstrates improved detection capabilities even when the adversary manipulates metadata. questions: Please clarify if I have misunderstood the intention behind discussing the number of FPs/FNs (when the percentages already demonstrated some improvement). ethics_review_flag: No ethics_review_description: No ethical issues. scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community novelty: 5 technical_quality: 6 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
8HTwfqUYRz
Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms
[ "Shengwei Xu", "Yichi Zhang", "Paul Resnick", "Grant Schoenebeck" ]
Because high-quality data is like oxygen for AI systems, effectively eliciting information from crowdsourcing workers has become a first-order problem for developing high-performance machine learning algorithms. Two prevalent paradigms, spot-checking and peer prediction, enable the design of mechanisms to evaluate and incentivize high-quality data from human labelers. So far, at least three metrics have been proposed to compare the performances of these techniques [Zhang and Schoenebeck 2023, Gao et al. 2016, Burrell and Schoenebeck 2023]. However, different metrics lead to divergent and even contradictory results in various contexts. In this paper, we harmonize these divergent stories, showing that two of these metrics are actually the same within certain contexts and explain the divergence of the third. Moreover, we unify these different contexts by introducing Spot Check Equivalence, which offers an interpretable metric for the effectiveness of a peer prediction mechanism. Finally, we present two approaches to compute spot check equivalence in various contexts, where simulation results prove the effectiveness of our proposed metric.
[ "Algorithmic Game Theory", "Information Elicitation", "Incentive for Effort", "Peer Prediction" ]
https://openreview.net/pdf?id=8HTwfqUYRz
gd03uyJ2Bp
official_review
1,700,548,785,436
8HTwfqUYRz
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2200/Reviewer_N9WQ" ]
review: ## Quality 1. The background, purpose and results of the research are clearly stated. Spot Check Equivalence is proposed, measurement integrity and sensitivity are unified, and two methods for computing Spot Check Equivalence are proposed, suitable for settings with and without ground-truth data. The work is very detailed, and there are no obvious logical flaws. 2. The research methods and conclusions are based on empirical study and theoretical analysis, which lends them scientific rigor and reliability. At the same time, the experimental process is described and explained in detail, which ensures the overall quality of the research. ## Clarity 1. The article is clearly organized, from concept introduction and theorem derivation to model construction, experimental analysis and presentation of results. Each section follows a logical order, so that the reader can easily follow the author's point of view and line of argument. 2. The concepts and terms used in the paper are accurate, logical and easy to understand. For specialized concepts and theories, the author also provides appropriate explanations, so that readers can better understand the main content of the paper. 3. However, the fact that much of the important content is deferred from the main text to the appendix may not be very friendly to beginners or readers outside the field. Details are listed in the questions section. ## Originality 1. This paper proposes a new method to evaluate and incentivize high-quality data, namely evaluation via "Spot Check Equivalence", which is highly original. 2. In addition, the existing evaluation metrics are compared and analyzed, which further strengthens the originality. 3. Finally, the author puts forward their own view on "peer prediction makes things worse". 
## Significance &nbsp; &nbsp; &nbsp; The problems studied in this paper are of clear importance for the development of high-performance machine learning algorithms. With the continuous development of artificial intelligence technology, the demand for high-quality data keeps increasing. Therefore, how to effectively obtain high-quality data from crowdsourcing workers has become an important problem. The method proposed in this paper provides an effective tool for designers to address this problem, and thus has practical significance and application value. questions: 1. Too much important content is placed in the appendix, and the main text skips over too much if read on its own. For example, in Chapter 5: how the confusion matrix is used and the detailed definitions of the evaluation metrics (the main text introduces f-MI, PTS and other metrics in Fig. 4 and Fig. 5 too briefly, which is difficult to understand without reading the appendix); likewise the experimental process, Algorithm 2, and the sources of the formulas. 2. There are many minor flaws in the writing. Please review the whole article again to check the word usage. For example, the use of singular and plural forms in line 503; and in lines 82 and 133 the agent is referred to with "her", while in lines 291, 562, 563, 622, 646, 810, 1175, 1179, 1370, 1430, 1475, 1630 and 1631 the pronoun changes to "his". 3. Wording issues: Isn't line 93 too absolute when it says that arbitrary incentives can be measured using spot check equivalence? Is it appropriate to use "prove" in the abstract (line 24), or would "verify" be better? 4. Under what circumstances might two different metrics lead to divergent or even contradictory results? 5. Are the scenarios considered too simple? In more complex models, how should incentive effects be studied when individuals have heterogeneous cost functions? 6. How is the use of a quadratic cost function in the experiment (line 612) justified? Would a linear, cubic, or other function be more appropriate? 7. 
It is mentioned in the abstract that there are three metrics (line 14), but it is not clearly stated which three they are. After reading the whole paper, one can only infer that what you want to unify is measurement integrity and sensitivity. 8. Would it be better to explain the assumption that $r_{ij} = o_{ij}$ (line 244)? 9. You specifically state in lines 293 and 294 that report quality is not a component of the information elicitation context, but this component appears in Figure 2, and you say nothing about Figure 2. Please give a reasonable explanation for this. 10. Some terms are not explained. For example: What does u.a.r. stand for? In line 491, $\sigma'$ is used without being introduced beforehand, and the definition of IR is not given in line 734. 11. The appendix points out that Assumptions 3.1 and 3.2 are made according to the central limit theorem (line 1139); given this, is the scale of the experiments too small (lines 610 and 615 consider only 50 agents and 500 tasks)? ethics_review_flag: No ethics_review_description: NA scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct
8HTwfqUYRz
Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms
[ "Shengwei Xu", "Yichi Zhang", "Paul Resnick", "Grant Schoenebeck" ]
Because high-quality data is like oxygen for AI systems, effectively eliciting information from crowdsourcing workers has become a first-order problem for developing high-performance machine learning algorithms. Two prevalent paradigms, spot-checking and peer prediction, enable the design of mechanisms to evaluate and incentivize high-quality data from human labelers. So far, at least three metrics have been proposed to compare the performances of these techniques [Zhang and Schoenebeck 2023, Gao et al. 2016, Burrell and Schoenebeck 2023]. However, different metrics lead to divergent and even contradictory results in various contexts. In this paper, we harmonize these divergent stories, showing that two of these metrics are actually the same within certain contexts and explain the divergence of the third. Moreover, we unify these different contexts by introducing Spot Check Equivalence, which offers an interpretable metric for the effectiveness of a peer prediction mechanism. Finally, we present two approaches to compute spot check equivalence in various contexts, where simulation results prove the effectiveness of our proposed metric.
[ "Algorithmic Game Theory", "Information Elicitation", "Incentive for Effort", "Peer Prediction" ]
https://openreview.net/pdf?id=8HTwfqUYRz
WvMe8LCEmU
official_review
1,700,343,107,817
8HTwfqUYRz
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2200/Reviewer_1yF5" ]
review: The paper develops a novel metric called Spot Check Equivalence (SCE) for measuring the performance of information gathering systems like crowdsourcing. SCE aims to assess both motivational proficiency and ex-post fairness. The authors demonstrate its effectiveness through theoretical analysis and agent-based model simulations. Strengths + The proposed metric SCE provides a fresh perspective on evaluating information elicitation mechanisms. Further, SCE utilizes the spot-checking ratio as a reference point, making it interpretable. + Overall the paper is well written and easy to follow + The paper establishes that two existing metrics, sensitivity and measurement integrity, are equivalent under certain assumptions. This connection enhances the credibility of the proposed metric. Weaknesses - The assumptions made during SCE's development are scattered through the text. Stating them upfront and also discussing the limitations of SCE would enhance transparency - While theoretical analysis and simulations are valuable, empirical validation using real-world data or experiments would situate the paper's claims better questions: Please refer to weaknesses pointed out in the review ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 5 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
8HTwfqUYRz
Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms
[ "Shengwei Xu", "Yichi Zhang", "Paul Resnick", "Grant Schoenebeck" ]
Because high-quality data is like oxygen for AI systems, effectively eliciting information from crowdsourcing workers has become a first-order problem for developing high-performance machine learning algorithms. Two prevalent paradigms, spot-checking and peer prediction, enable the design of mechanisms to evaluate and incentivize high-quality data from human labelers. So far, at least three metrics have been proposed to compare the performances of these techniques [Zhang and Schoenebeck 2023, Gao et al. 2016, Burrell and Schoenebeck 2023]. However, different metrics lead to divergent and even contradictory results in various contexts. In this paper, we harmonize these divergent stories, showing that two of these metrics are actually the same within certain contexts and explain the divergence of the third. Moreover, we unify these different contexts by introducing Spot Check Equivalence, which offers an interpretable metric for the effectiveness of a peer prediction mechanism. Finally, we present two approaches to compute spot check equivalence in various contexts, where simulation results prove the effectiveness of our proposed metric.
[ "Algorithmic Game Theory", "Information Elicitation", "Incentive for Effort", "Peer Prediction" ]
https://openreview.net/pdf?id=8HTwfqUYRz
R8iXBNoFPO
official_review
1,700,719,294,790
8HTwfqUYRz
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2200/Reviewer_AFvq" ]
review: Summary: The paper introduces "Spot Check Equivalence" (SCE), a metric for evaluating information elicitation mechanisms in AI and machine learning. SCE assesses how well these mechanisms perform compared to a baseline spot-checking method under different conditions. It incorporates concepts like motivational proficiency and uses metrics such as Sensitivity and Measurement Integrity. The paper demonstrates SCE's application through simulations and experiments, showing its effectiveness in different scenarios. This research offers a new framework for improving data quality in machine learning contexts. Pros: * The paper addresses an important problem in AI systems - effectively eliciting information from crowdsourcing workers. * The paper presents a logical flow of ideas, systematically introducing the concept of Spot Check Equivalence (SCE) and its relevance in the context of information elicitation. * The paper unifies different metrics and provides a comprehensive understanding of how they relate to each other. Cons: 1. Real-world scenarios often involve agents with heterogeneous cost functions. However, this paper does not consider them. 2. The paper could benefit from more evaluation and ablation studies to validate the proposed method. questions: 1. Could you provide more insights into how SCE compares with other existing metrics in the field? Were there specific metrics that you found SCE to be particularly more effective or less effective against? 2. Can you provide more details on the implementation of the proposed approaches for computing spot check equivalence? ethics_review_flag: No ethics_review_description: n.a. scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper
8HTwfqUYRz
Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms
[ "Shengwei Xu", "Yichi Zhang", "Paul Resnick", "Grant Schoenebeck" ]
Because high-quality data is like oxygen for AI systems, effectively eliciting information from crowdsourcing workers has become a first-order problem for developing high-performance machine learning algorithms. Two prevalent paradigms, spot-checking and peer prediction, enable the design of mechanisms to evaluate and incentivize high-quality data from human labelers. So far, at least three metrics have been proposed to compare the performances of these techniques [Zhang and Schoenebeck 2023, Gao et al. 2016, Burrell and Schoenebeck 2023]. However, different metrics lead to divergent and even contradictory results in various contexts. In this paper, we harmonize these divergent stories, showing that two of these metrics are actually the same within certain contexts and explain the divergence of the third. Moreover, we unify these different contexts by introducing Spot Check Equivalence, which offers an interpretable metric for the effectiveness of a peer prediction mechanism. Finally, we present two approaches to compute spot check equivalence in various contexts, where simulation results prove the effectiveness of our proposed metric.
[ "Algorithmic Game Theory", "Information Elicitation", "Incentive for Effort", "Peer Prediction" ]
https://openreview.net/pdf?id=8HTwfqUYRz
IxDrmG7Ycc
official_review
1,700,777,274,161
8HTwfqUYRz
[ "everyone" ]
[ "ACM.org/TheWebConf/2024/Conference/Submission2200/Reviewer_9KMv" ]
review: This paper introduces the concept of Spot Check Equivalence, which combines a spot-checking mechanism with peer prediction as a standard to measure the motivational effectiveness of any incentive mechanism. The authors evaluate SCE using two criteria, Measurement Integrity and Sensitivity, showcasing its effectiveness as a metric for measuring motivational proficiency. Strengths: * S1: This paper addresses a significant challenge associated with assessing and motivating human laborers to produce high-quality data. * S2: The authors showcased existing methods and highlighted the distinctions of their approach through experiments. Weaknesses: * W1: Part of the introduction could potentially be better placed within the related work section. Although referencing previous work and its limitations is valuable, reducing the depth of detail in the introduction might improve its flow. * W2: The paper lacks a demonstration of how the approach performs differently across various tasks or contexts. While I might have overlooked it, the authors did not explicitly show in their evaluation which types of contexts they address and in which contexts their approach might excel more than others. * W3: The paper does not reference metrics beyond Measurement Integrity and Sensitivity, nor does it explain the rationale for selecting these metrics over others (I added a question about this in the questions section below) questions: * Q1: I comprehend why the authors examined Spot Check Equivalence using Measurement Integrity and Sensitivity to showcase its effectiveness as a motivational proficiency metric. However, have they explored additional metrics that might also have an impact, such as consistency, bias and fairness, or cost-effectiveness? Did the authors conduct experiments with other metrics? Should certain metrics take priority based on the task context? 
* Q2: How do the authors envision their approach being applied in light of the increasing use of LLMs and the growing reliance on AI for labeling and annotating data? * Q3: Could the authors provide additional information regarding the acquisition of ground truth through the crowdsourcing task on Amazon Mechanical Turk? ethics_review_flag: No ethics_review_description: N/A scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community novelty: 4 technical_quality: 5 reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper