forum_id
stringlengths 8
20
| forum_title
stringlengths 4
171
| forum_authors
sequencelengths 0
25
| forum_abstract
stringlengths 4
4.27k
| forum_keywords
sequencelengths 1
10
| forum_pdf_url
stringlengths 38
50
| note_id
stringlengths 8
13
| note_type
stringclasses 6
values | note_created
int64 1,360B
1,736B
| note_replyto
stringlengths 8
20
| note_readers
sequencelengths 1
5
| note_signatures
sequencelengths 1
1
| note_text
stringlengths 10
16.6k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
4zmXFCXFI7 | Efficiency of Non-Truthful Auctions in Auto-bidding with Budget Constraints | [
"Christopher Liaw",
"Aranyak Mehta",
"Wennan Zhu"
] | We study the efficiency of non-truthful auctions for auto-bidders with both return on spend (ROS) and budget constraints. The efficiency of a mechanism is measured by the price of anarchy (PoA), which is the worst case ratio between the liquid welfare of any equilibrium and the optimal (possibly randomized) allocation. Our first main result is that the first-price auction (FPA) is optimal, among deterministic mechanisms, in this setting. Without any assumptions, the PoA of FPA is $n$ which we prove is tight for any deterministic mechanism. However, under a mild assumption that a bidder's value for any query does not exceed their total budget, we show that the PoA is at most $2$. This bound is also tight as it matches the optimal PoA without a budget constraint. We next analyze two randomized mechanisms: randomized FPA (rFPA) and "quasi-proportional" FPA. We prove two results that highlight the efficacy of randomization in this setting. First, we show that the PoA of rFPA for two bidders is at most $1.8$ without requiring any assumptions. This extends prior work which focused only on an ROS constraint. Second, we show that quasi-proportional FPA has a PoA of $2$ for any number of bidders, without any assumptions. Both of these bypass lower bounds in the deterministic setting. Finally, we study the setting where bidders are assumed to bid uniformly. We show that uniform bidding can be detrimental for efficiency in deterministic mechanisms while being beneficial for randomized mechanisms, which is in stark contrast with the settings without budget constraints. | [
"auto-bidding",
"auction design",
"price of anarchy",
"mechanism design"
] | https://openreview.net/pdf?id=4zmXFCXFI7 | D93kGHO1Ck | official_review | 1,700,587,681,307 | 4zmXFCXFI7 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1925/Reviewer_a1B5"
] | review: The authors study the efficiency of non-truthful auctions under auto-bidding and budget constraints. In particular, in the studied model, each advertiser has a value for each query, and their goal is to maximize their total value so that this value does not exceed their total spend. Then, the auto-bidding agents try to solve this optimization problem on behalf of the advertisers. Under this setting, the authors consider the first price auction (FPA), which is known to be non-truthful, and prove the following:
1) The FPA is optimal among all deterministic mechanisms in this setting. Specifically, its price of anarchy (PoA) is n (where n is the number of advertisers), and this is a bound for any deterministic mechanism. For the special case where the value of a bidder for any query does not exceed their total budget, the prove that the PoA is 2.
2) The authors consider 2 randomized versions of the FPA, and (among others) they show that the PoA can be improved to 2, bypassing thus the impossibility results that hold for deterministic mechanisms.
3) Finally, the authors consider bidders that bid uniformly, and they show that this affects negatively the performance (in terms of efficiency) of deterministic mechanisms, while it is beneficial for randomized mechanisms.
Strengths
1) The paper studies an interesting and well-motivated problem. It is in general well-written, and has a clear focus.
2) The authors provide a nice collection of results, give a clear picture of the performance of the celebrated FPA in this setting, and this picture is also complete as the paper considers both the case of deterministic and randomized mechanisms
3) The paper technically is not trivial, and as far as I checked is sound and correct.
Weaknesses
I do not have any major complaints apart from the fact that, although as I said the paper is well-written, it is not always easy to follow. A revision on the introductory sections so that the model is more clear would be appreciated.
Overall, I would say that this is a nice paper, with an interesting set of results. It is not always easy to follow but regardless, I believe that it is a good match for the conference.
questions: None.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
4qtxfjSyFE | Advancing Web 3.0: Making Smart Contracts Smarter on Blockchain | [
"Junqin Huang",
"Linghe Kong",
"Guanjie Cheng",
"Qiao Xiang",
"Guihai Chen",
"Gang Huang",
"Xue Liu"
] | Blockchain and smart contracts are one of key technologies promoting Web 3.0. However, due to security considerations and consistency requirements, smart contracts currently only support simple and deterministic programs, which significantly hinders their deployment in intelligent Web 3.0 applications. To enhance smart contracts intelligence on the blockchain, we propose $\texttt{SMART}$, a plug-in smart contract framework that supports efficient AI model inference while being compatible with existing blockchains. To handle the high complexity of model inference, we propose an on-chain and off-chain joint execution model, which separates the $\texttt{SMART}$ contract into two parts: the deterministic code still runs inside an on-chain virtual machine, while the complex model inference is offloaded to off-chain compute nodes. To solve the non-determinism brought by model inference, we leverage Trusted Execution Environments (TEEs) to endorse the integrity and correctness of the off-chain execution. We also design distributed attestation and secret key provisioning schemes to further enhance the system security and model privacy. We implement a $\texttt{SMART}$ prototype and evaluate it on a popular Ethereum Virtual Machine (EVM)-based blockchain. Theoretical analysis and prototype evaluation show that $\texttt{SMART}$ not only achieves the security goals of correctness, liveness, and model privacy, but also has approximately 5 orders of magnitude faster inference efficiency than existing on-chain solutions. | [
"Web 3.0",
"Smart contract",
"Blockchain",
"Model inference",
"Trusted execution environment"
] | https://openreview.net/pdf?id=4qtxfjSyFE | UI1QfWgeXM | official_review | 1,700,116,912,069 | 4qtxfjSyFE | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission42/Reviewer_HzVW"
] | review: This article introduces an approach for integrating AI model inference with smart contract execution in Web3 applications. Model inference is computationally expensive in comparison to most smart contract executions and thus is impractical to be implemented directly in smart contract code. Furthermore, model inference can be non-deterministic which conflicts with the consensus mechanisms of most blockchain systems. The proposal here is to split deterministic smart contract code which can mutate state on chain from non-deterministic model inference, which is run on off-chain nodes that do not alter state or participate in consensus.
The technical quality of the system implementation, including the precompiled contracts and attestation service, is sound. My main question reading this paper is rather a more general issue of the utility of this approach for a decentralized system. I note a lack of a "decentralization" dimension in the star chart shown in Figure 2, but if it were there the SMART system would score low in contrast to an on-chain approach that validates the model inference output through consensus. My understanding of the system is that each time a contract is executed at step 4-a the TEE provider loads the model from off-chain storage. What happens if the storage service goes offline? At that point no TEE provider will be able to complete its execution.
Furthermore, there is no verification of the actual model output value--only validity of the signed quote. (step 8) What if a compromised TEE provider simply gives incorrect model outputs? How would this system verify that? The security analysis only considers malicious TEE providers ability to affect system liveness. This once again highlights the lack of decentralization in this approach.
Fundamentally, given the reliance on available storage services to store the models, I wonder what makes the SMART system model better than making a Dapp that runs model inference on a private server, taking the output and sending that as input to a normal smart contract execution? In that case the blockchain node is not triggering the model inference, the client is but I am struggling to see how that materially changes the kinds of applications you can build. The benefit of an on-chain approach is that the inference is decentralized and the nodes arrive at consensus about the model output.
In terms of related work, approaches for on-chain inference are largely dismissed out of hand. However, quantized models can be used in resource-contrained embedded systems despite being somewhat degraded in accuracy. Are such methods being utilized, and what kinds of "smart" web3 applications might be enabled by smaller, on-chain models? It would be good to see a slightly more balanced take on this, as well as a discussion of limiations that come from your proposed method (putting it off-chain).
Overall, the paper is well written and I appreciate the sharing of source code and effort involved. I also appreciate the approach to implement in a plug-in manner in an existing EVM tech stack. The evaluation shows that off-chain computation is significantly more efficient, though this is not especially surprising. My main question/concerns stem from the issues stated above.
Additional questions:
Line 261: Ref to Ekiden says it is off-chain but Oasis Network (mainnet derived from Ekiden) uses TEEs for on-chain compute.
What is to stop the TEE provider from stealing the secret key of the private model (See sec 4.3, step 4-c)?
How do you meter gas for inference on TEE provider? Can repeated calls to TEE.inference() halt the chain?
"Note that the blockchain node could outsource multiple model inference tasks to TEE providers simultaneously, and wait for responses in an asynchronous mode, thus the off-chain inference does not congest block generation" How is this possible since solidity is a synchronous language?
Because model inference is run in SGX does that mean it can only do CPU inference or is GPU inference possible?
Grammar issues:
Line 45: "are one of key" -> "are one of the key"
Line 82: "build more fancy Web 3.0 applications" is informal language
Line 144: "Even though current two most" -> "Even though the current two most"
Line 289: plural agreement
Line 497: "provide an quote" -> "provide a quote"
Line 1056: "to do evil" is informal language
### Update (10 Dec 2023)
I acknowledge that I have read the rebuttal.
questions: See questions in review above.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
4qtxfjSyFE | Advancing Web 3.0: Making Smart Contracts Smarter on Blockchain | [
"Junqin Huang",
"Linghe Kong",
"Guanjie Cheng",
"Qiao Xiang",
"Guihai Chen",
"Gang Huang",
"Xue Liu"
] | Blockchain and smart contracts are one of key technologies promoting Web 3.0. However, due to security considerations and consistency requirements, smart contracts currently only support simple and deterministic programs, which significantly hinders their deployment in intelligent Web 3.0 applications. To enhance smart contracts intelligence on the blockchain, we propose $\texttt{SMART}$, a plug-in smart contract framework that supports efficient AI model inference while being compatible with existing blockchains. To handle the high complexity of model inference, we propose an on-chain and off-chain joint execution model, which separates the $\texttt{SMART}$ contract into two parts: the deterministic code still runs inside an on-chain virtual machine, while the complex model inference is offloaded to off-chain compute nodes. To solve the non-determinism brought by model inference, we leverage Trusted Execution Environments (TEEs) to endorse the integrity and correctness of the off-chain execution. We also design distributed attestation and secret key provisioning schemes to further enhance the system security and model privacy. We implement a $\texttt{SMART}$ prototype and evaluate it on a popular Ethereum Virtual Machine (EVM)-based blockchain. Theoretical analysis and prototype evaluation show that $\texttt{SMART}$ not only achieves the security goals of correctness, liveness, and model privacy, but also has approximately 5 orders of magnitude faster inference efficiency than existing on-chain solutions. | [
"Web 3.0",
"Smart contract",
"Blockchain",
"Model inference",
"Trusted execution environment"
] | https://openreview.net/pdf?id=4qtxfjSyFE | JohJWIjwRX | decision | 1,705,909,224,619 | 4qtxfjSyFE | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper proposes an architecture for blockchains where some of the off-chain computations are done by TEEs.
Overall, the paper has a nice architecture and some interesting experimental results.
From the novelt point of view such ideas have been proposed before and the suggested modifications seem rather incremental.
--- |
4qtxfjSyFE | Advancing Web 3.0: Making Smart Contracts Smarter on Blockchain | [
"Junqin Huang",
"Linghe Kong",
"Guanjie Cheng",
"Qiao Xiang",
"Guihai Chen",
"Gang Huang",
"Xue Liu"
] | Blockchain and smart contracts are one of key technologies promoting Web 3.0. However, due to security considerations and consistency requirements, smart contracts currently only support simple and deterministic programs, which significantly hinders their deployment in intelligent Web 3.0 applications. To enhance smart contracts intelligence on the blockchain, we propose $\texttt{SMART}$, a plug-in smart contract framework that supports efficient AI model inference while being compatible with existing blockchains. To handle the high complexity of model inference, we propose an on-chain and off-chain joint execution model, which separates the $\texttt{SMART}$ contract into two parts: the deterministic code still runs inside an on-chain virtual machine, while the complex model inference is offloaded to off-chain compute nodes. To solve the non-determinism brought by model inference, we leverage Trusted Execution Environments (TEEs) to endorse the integrity and correctness of the off-chain execution. We also design distributed attestation and secret key provisioning schemes to further enhance the system security and model privacy. We implement a $\texttt{SMART}$ prototype and evaluate it on a popular Ethereum Virtual Machine (EVM)-based blockchain. Theoretical analysis and prototype evaluation show that $\texttt{SMART}$ not only achieves the security goals of correctness, liveness, and model privacy, but also has approximately 5 orders of magnitude faster inference efficiency than existing on-chain solutions. | [
"Web 3.0",
"Smart contract",
"Blockchain",
"Model inference",
"Trusted execution environment"
] | https://openreview.net/pdf?id=4qtxfjSyFE | JDSPUUs26Q | official_review | 1,700,759,715,697 | 4qtxfjSyFE | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission42/Reviewer_DehU"
] | review: Rebuttal response
----------------------------
You have made an extensive effort to reply to my review, for which I am grateful. Unfortunately, it has only affected my view a little. Nevertheless: enough to (slightly) increase my grade on novelty.
Summary
----------------------------
The paper proposes a way to enable smart contracts to make use of AI output. They do so by proposing a new type of blockchain, one that can forward calls to a new party (TEE providers).
Weak points
----------------------------
- Scope
This is neither a web paper, nor a security paper.
- Ethics
This paper aims to find a way to enable the 2 currently most energy-sucking
technologies work well together. It is highly likely that that is a net
negative for society - a point that at least merits a discussion.
- Novelty
There are already ways for smart contracts to interact with the world outside
the blockchain. While these may be hacky, this paper makes no effort to
distinguish its contribution from such efforts.
Overall evaluation
----------------------------
This is a weird paper. First of all: it has a scope problem. Its contribution is not a security contribution, but a programming paradigm contribution. Moreover, it has little to nothing to do with the Web. That implies that it is not that interesting for TheWebConference's security track.
Second of all: the introduction makes it sound as if including AI models in smart contracts is a desperately needed idea. But it fails to motivate that, from the smart contract end as well as from the AI end. Moreover, obviously AI models can much better be offered by cloud providers in an AI-as-a-service model. Of course, the paper recognises this and does the obvious thing: offload intense computation off the blockchain.
So, in effect, the paper proposes a way for on-chain code to interact with off-chain code. Nowhere near as lofty as is claimed.
Lastly: merging the two most energy-hogging technologies ever emerged from the field of CS raises moral questions -- questions that this paper fails to recognise.
All in all, I'm not a fan of the positioning of this paper. On top of that, I do not think it fits for WWW-SEC.
Comments for authors
----------------------------
- General:
+ do not use "utilize". It leaves a rather bad impression.
+ no spaces before footnotes.
- Intro:
The idea to run AI on smart contracts seems patently stupid: this is not what
blockchains / the EVM were designed for, nor what AI is designed for.
Moreover, it is not clear what usecase this addresses: if one can interact
with the blockchain, one could presumably interact with an AI-as-a-service
provider. A solution desperately looking for a problem.
- pg 2, "TEE": acronym not explained. Just write it out full the first time.
- pg 2, footnote: this is not a grammatically well-formed sentence; its meaning
is ambiguous.
- pg 2, "In specific,": this is not English. You probably meant "Specifically,"
- pg 3, Fig 2: on what do you base the scores for "on-chain" and "off-chain"?
And why do you group all previous solutions into two diagrams?
This figure comes across more like a poor attempt at hype than an actual
comparison.
- pg 3, sec 2.2: note that Intel has deprecated SGX. Because it wasn't secure.
That is: there is no ubiquitous secure enclave standard any more (well,
turns out there never was). This takes most of the shine away from this
section; you should at the very least acknowledge both SGX's deprecation and
its inability to deliver on the assumptions you need for your implementation.
- Sec 3, "SMART"
You should've called it "smartr contracts". Sounds better, is recognisable, and
probably still trademarkable.
- Sec 3.2
+ Methodology missing: how did you arrive at these threats?
+ speaking of which: there are some threats missing.
One of the missing ones concerns a TEE piece of code being malicious.
questions: -
ethics_review_flag: Yes
ethics_review_description: No discussion of the detrimental societal impact of uniting two energy-hogging technologies.
scope: 1: The work is irrelevant to the Web
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
4qtxfjSyFE | Advancing Web 3.0: Making Smart Contracts Smarter on Blockchain | [
"Junqin Huang",
"Linghe Kong",
"Guanjie Cheng",
"Qiao Xiang",
"Guihai Chen",
"Gang Huang",
"Xue Liu"
] | Blockchain and smart contracts are one of key technologies promoting Web 3.0. However, due to security considerations and consistency requirements, smart contracts currently only support simple and deterministic programs, which significantly hinders their deployment in intelligent Web 3.0 applications. To enhance smart contracts intelligence on the blockchain, we propose $\texttt{SMART}$, a plug-in smart contract framework that supports efficient AI model inference while being compatible with existing blockchains. To handle the high complexity of model inference, we propose an on-chain and off-chain joint execution model, which separates the $\texttt{SMART}$ contract into two parts: the deterministic code still runs inside an on-chain virtual machine, while the complex model inference is offloaded to off-chain compute nodes. To solve the non-determinism brought by model inference, we leverage Trusted Execution Environments (TEEs) to endorse the integrity and correctness of the off-chain execution. We also design distributed attestation and secret key provisioning schemes to further enhance the system security and model privacy. We implement a $\texttt{SMART}$ prototype and evaluate it on a popular Ethereum Virtual Machine (EVM)-based blockchain. Theoretical analysis and prototype evaluation show that $\texttt{SMART}$ not only achieves the security goals of correctness, liveness, and model privacy, but also has approximately 5 orders of magnitude faster inference efficiency than existing on-chain solutions. | [
"Web 3.0",
"Smart contract",
"Blockchain",
"Model inference",
"Trusted execution environment"
] | https://openreview.net/pdf?id=4qtxfjSyFE | 8MSxTxlFT9 | official_review | 1,701,008,734,858 | 4qtxfjSyFE | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission42/Reviewer_Sexm"
] | review: Smart contracts are at the core of Web 3.0 and guaranteeing their correctness is paramount. In this work, the authors propose SMART, a smart contract framework that is compatible with current blockchains and at the same time supports efficient AI model inference. Given the complexity of existing AI models, the authors propose a solution combining on-chain execution for the deterministic part, and an off-chain execution to compute the complex AI models. The non-determinism derived from the AI models is dealt by leveraging TEEs in order to endorse the integrity and correctness of these off-chain executions, allowing this way all nodes to validate the offloaded computations. The authors also design distributed attestation and secret key provisioning schemes to allow the usage of private models. A prototype of SMART is presented and evaluated in EVM-based blockchains, and the authors show that it achieves not only the intended goals of correctness, liveness, and model privacy, but also that is more efficient that existing on-chain solutions.
The paper is well-written and well-structured providing a clear view of it goals. The theme is appropriate for the track, and timely with the widespread usage of LLMs.
Pros
- Proposal of a novel framework to support off-chain model inference
- Development of a prototype that is available online
Cons
- The paper would benefit from some clarifications as indicated in the comments
- The key-management committee is fixed making the system dependent on $t$ out of $n$ of these members
questions: - pg 5: V_TEE and H_enclave are used in section 4.1 but are not defined previously
- pg6: clarify what does it mean ``V_TEE satisfies the minimum value and H_enclave is the desired one."
- Section 4.3, secret key provisioning: It is not clear how the key-management committee is set. From the text there is a fixed set of $n$ TEE providers that form this committee, and that have shares for all the existing private models. However, this implies that no more than $n - t$ of these leave the network, otherwise it would be impossible to decrypt the private models
- As for the distribution of these keys, the distribution over trusted channels is unclear. Which secure channels exist between the client and the committee? Why are these communications trusted? It is not immediate why identity attestation between TEEs and clients guarantees the existence of secure channels
- A follow up, how are these keys loaded into the TEE?
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
4ieLqLgu2q | Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation | [
"Zhen Zhang",
"Meihan Liu",
"an hui wang",
"Hongyang Chen",
"Zhao Li",
"Jiajun Bu",
"Bingsheng He"
] | Unsupervised graph domain adaptation has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph, when there is a scarcity of labels in target graph. However, most of existing methods require a labelled source graph to provide supervision signals, which might not be accessible in the real-world scenarios due to regulations and privacy concerns. In this paper, we explore the scenario of source-free unsupervised graph domain adaptation, which tries to address the domain adaptation problem without accessing the labelled source graph. Specifically, we present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively through a series of procedures: (1) conduct model adaptation based on node's neighborhood predictions in target graph considering both local and global information; (2) perform graph adaptation by updating graph structure and node attributes via neighborhood constrastive learning; and (3) the updated graph serves as an input to facilitate the subsequent iteration of model adaptation, thereby establishing a collaborative loop between model adaptation and graph adaptation. Comprehensive experiments are conducted on various public datasets including transaction, social, and citation graphs. The experimental results demonstrate that our proposed model outperforms recent source free baselines by large margins. Our source code and datasets are available at https://anonymous.4open.science/r/GraphCTA-code. | [
"Graph Neural Networks",
"Source-Free Unsupervised Graph Domain Adaptation"
] | https://openreview.net/pdf?id=4ieLqLgu2q | vJifnlA0dT | official_review | 1,700,574,563,588 | 4ieLqLgu2q | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1186/Reviewer_nwhM"
] | review: The paper addresses the graph domain adaptation problem in source-free unsupervised setting. The novel paradigm, GraphCTA is introduced, which follows a series of procedures: model adaptation, graph adaptation, and model adaptation with updated graph as new input. This can be viwed as a collaborative loop between model adaptation and graph adaptation. Extensive experiments supports the efficacy of Graph CTA.
- The paper addresses an interesting problem: GDA in unsupervised setting.
- The paper is well organized, and easy to follow. The theorems are stated clearly, and proven nicely.
- Extensive experiments are conducted. Source code is provided.
questions: Q1. While extensive experiments are conducted, only accuracy is used as main evaluation metric. Is there any reason F1 was not used for evaluation?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
4ieLqLgu2q | Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation | [
"Zhen Zhang",
"Meihan Liu",
"an hui wang",
"Hongyang Chen",
"Zhao Li",
"Jiajun Bu",
"Bingsheng He"
] | Unsupervised graph domain adaptation has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph, when there is a scarcity of labels in target graph. However, most of existing methods require a labelled source graph to provide supervision signals, which might not be accessible in the real-world scenarios due to regulations and privacy concerns. In this paper, we explore the scenario of source-free unsupervised graph domain adaptation, which tries to address the domain adaptation problem without accessing the labelled source graph. Specifically, we present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively through a series of procedures: (1) conduct model adaptation based on node's neighborhood predictions in target graph considering both local and global information; (2) perform graph adaptation by updating graph structure and node attributes via neighborhood constrastive learning; and (3) the updated graph serves as an input to facilitate the subsequent iteration of model adaptation, thereby establishing a collaborative loop between model adaptation and graph adaptation. Comprehensive experiments are conducted on various public datasets including transaction, social, and citation graphs. The experimental results demonstrate that our proposed model outperforms recent source free baselines by large margins. Our source code and datasets are available at https://anonymous.4open.science/r/GraphCTA-code. | [
"Graph Neural Networks",
"Source-Free Unsupervised Graph Domain Adaptation"
] | https://openreview.net/pdf?id=4ieLqLgu2q | eAVdc7n6IK | official_review | 1,698,155,970,927 | 4ieLqLgu2q | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1186/Reviewer_rjLH"
] | review: The paper introduces a new framework named GraphCTA, designed for tackling the complex challenges of source-free graph domain adaptation. By ingeniously combining model and graph adaptation, the framework addresses the distribution shifts and source hypothesis bias commonly observed in graph-structured data. GraphCTA employs a dual adaptation strategy: on the model side, it uses local neighborhood predictions and global class prototypes for adaptation, while on the graph side, it leverages predictions made by the model along with stored data in memory banks for further refinement. Extensive experiments validate the utility of GraphCTA, showcasing its superior performance over existing state-of-the-art baselines across multiple scenarios.
questions: 1. While the paper introduces GraphCTA as a novel framework, its technical contributions, specifically the use of pseudo-labeling and contrastive learning, are well-established techniques in the field. Therefore, the innovation in methodology could be considered incremental. What is the own advantage of your methods? More detail is better.
2. In the introduction, the paper could modify its claim to better align with the mention of related work later in the text. Instead of stating that "there has been limited investigation of source-free adaptation techniques for the non-iid graph-structured data," it could say: "While there has been some work (add cite), which focuses on source-free unsupervised graph domain adaptation, the field remains relatively under-explored." This adjustment would provide a more accurate picture of the current state of research and seamlessly connect with the discussion of SOGA in the related work section.
3. While the paper claims the novelty of collaboratively bi-directional adaptations as a significant contribution, the absence of experimental results focusing solely on one type of adaptation limits the ability to measure the effectiveness of this collaborative approach. Moreover, the paper could benefit from additional experiments specifically designed to validate the claimed synergistic effects of model and graph adaptation working in tandem.
ethics_review_flag: No
ethics_review_description: No
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
4ieLqLgu2q | Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation | [
"Zhen Zhang",
"Meihan Liu",
"an hui wang",
"Hongyang Chen",
"Zhao Li",
"Jiajun Bu",
"Bingsheng He"
] | Unsupervised graph domain adaptation has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph, when there is a scarcity of labels in target graph. However, most of existing methods require a labelled source graph to provide supervision signals, which might not be accessible in the real-world scenarios due to regulations and privacy concerns. In this paper, we explore the scenario of source-free unsupervised graph domain adaptation, which tries to address the domain adaptation problem without accessing the labelled source graph. Specifically, we present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively through a series of procedures: (1) conduct model adaptation based on node's neighborhood predictions in target graph considering both local and global information; (2) perform graph adaptation by updating graph structure and node attributes via neighborhood constrastive learning; and (3) the updated graph serves as an input to facilitate the subsequent iteration of model adaptation, thereby establishing a collaborative loop between model adaptation and graph adaptation. Comprehensive experiments are conducted on various public datasets including transaction, social, and citation graphs. The experimental results demonstrate that our proposed model outperforms recent source free baselines by large margins. Our source code and datasets are available at https://anonymous.4open.science/r/GraphCTA-code. | [
"Graph Neural Networks",
"Source-Free Unsupervised Graph Domain Adaptation"
] | https://openreview.net/pdf?id=4ieLqLgu2q | dQOYwP3izc | official_review | 1,699,871,764,500 | 4ieLqLgu2q | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1186/Reviewer_ZFxg"
] | review: Strength
The paper addresses a significant challenge in graph domain adaptation: adapting a pre-trained model to a target graph without access to the labeled source graph. This is particularly relevant in scenarios where data privacy and regulation constraints make source data inaccessible
Weakness
1. The representation of the paper is bad. First, the domain is not cleared defined in the beginning which causes difficulty in reading the introduction part. Fsecond, there is almost no explaination for Figure 1, which makes the figure useless.
2. The experiment didn't show what if the graph is used for a higher level domain adaptation like transfer from different datasets.
3. Adversarial attack is also an important domain shifting. Experiments should be provided to compare the robustness of their method against adversarial attack.
4. For different methods, the backbone training on source domain should use the same training procedure or even the same pre-trained model. The author use the code of different adaption method, which may cause some unfair comparison. The accuracy of source graph should be reported to make sure the performance of proposed method is not from the extensive hyperpamameter tuning on source graph.
questions: What is the performance on the source domain? That is important for comparison. Since the author didn’t use the same code, the performance on source domain may be different. But they should be the same for fairness.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
4ieLqLgu2q | Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation | [
"Zhen Zhang",
"Meihan Liu",
"an hui wang",
"Hongyang Chen",
"Zhao Li",
"Jiajun Bu",
"Bingsheng He"
] | Unsupervised graph domain adaptation has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph, when there is a scarcity of labels in target graph. However, most of existing methods require a labelled source graph to provide supervision signals, which might not be accessible in the real-world scenarios due to regulations and privacy concerns. In this paper, we explore the scenario of source-free unsupervised graph domain adaptation, which tries to address the domain adaptation problem without accessing the labelled source graph. Specifically, we present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively through a series of procedures: (1) conduct model adaptation based on node's neighborhood predictions in target graph considering both local and global information; (2) perform graph adaptation by updating graph structure and node attributes via neighborhood constrastive learning; and (3) the updated graph serves as an input to facilitate the subsequent iteration of model adaptation, thereby establishing a collaborative loop between model adaptation and graph adaptation. Comprehensive experiments are conducted on various public datasets including transaction, social, and citation graphs. The experimental results demonstrate that our proposed model outperforms recent source free baselines by large margins. Our source code and datasets are available at https://anonymous.4open.science/r/GraphCTA-code. | [
"Graph Neural Networks",
"Source-Free Unsupervised Graph Domain Adaptation"
] | https://openreview.net/pdf?id=4ieLqLgu2q | caC8Rd1DKm | official_review | 1,700,885,883,975 | 4ieLqLgu2q | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1186/Reviewer_sGRw"
] | review: In this paper, authors propose a novel framework for source-free graph domain adaptation. They conduct model adaptation and graph adaptation collaboratively with different training process. Complexity analysis is also provided for all modules. Experiments on different datasets could show the superiority of their model on the source-free scenario. Ablation experiments could prove the effectiveness of all components in their framework.
The strong points of this work could be summarized as follows:
1.The problem they study is meaningful.
2.The model they propose is novel enough.
3.The experiments on three datasets among different settings could show the superiority of their method.
4.The parameter analysis and ablation experiments could prove the effectiveness of all components in their framework.
However, there are some drawbacks as:
1.The variable should be defined before utilization. For example, the m and z_i in Eq(2) are not defined, which make this work hard to be understood.
2.The scales of the datasets they use are not large enough.
3.The performance improvement is tiny.
questions: Could this model be applied on large dataset?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
4ieLqLgu2q | Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation | [
"Zhen Zhang",
"Meihan Liu",
"an hui wang",
"Hongyang Chen",
"Zhao Li",
"Jiajun Bu",
"Bingsheng He"
] | Unsupervised graph domain adaptation has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph, when there is a scarcity of labels in target graph. However, most of existing methods require a labelled source graph to provide supervision signals, which might not be accessible in the real-world scenarios due to regulations and privacy concerns. In this paper, we explore the scenario of source-free unsupervised graph domain adaptation, which tries to address the domain adaptation problem without accessing the labelled source graph. Specifically, we present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively through a series of procedures: (1) conduct model adaptation based on node's neighborhood predictions in target graph considering both local and global information; (2) perform graph adaptation by updating graph structure and node attributes via neighborhood constrastive learning; and (3) the updated graph serves as an input to facilitate the subsequent iteration of model adaptation, thereby establishing a collaborative loop between model adaptation and graph adaptation. Comprehensive experiments are conducted on various public datasets including transaction, social, and citation graphs. The experimental results demonstrate that our proposed model outperforms recent source free baselines by large margins. Our source code and datasets are available at https://anonymous.4open.science/r/GraphCTA-code. | [
"Graph Neural Networks",
"Source-Free Unsupervised Graph Domain Adaptation"
] | https://openreview.net/pdf?id=4ieLqLgu2q | Z63tagcFSm | decision | 1,705,909,213,636 | 4ieLqLgu2q | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: **Meta-review**: The paper addresses the graph domain adaptation problem in a *source-free*, unsupervised setting. Unsupervised GDA is an interesting problem and reviewers generally liked the paper. The discussion was productive and improved the paper.
**Strengths**:
+ problem setting; unsupervised GDA (nwhM, sGRw, ZFxg, rjLH)
+ paper organization (nwhM)
+ experiments (nwhM, sGRw)
**Weaknesses**:
- relatively small performance improvement (sGRw) |
432AJU0zEt | Efficient Computation for Diagonal of Forest Matrix via Variance-Reduced Forest Sampling | [
"Haoxin Sun",
"Zhongzhi Zhang"
] | The forest matrix, particularly its diagonal elements, has far-reaching implications in network science and machine learning. The state-of-the-art algorithms for the diagonal of forest matrix computation are based on a fast Laplacian solver. However, these algorithms encounter limitations when applied to digraphs due to the incapacity of the Laplacian solver. To overcome the issue, in this paper, we propose three novel sampling-based algorithms: SCF, SCFV, and SCFV. Our first algorithm SCF leverages a probability interpretation of the diagonal of the forest matrix and utilizes an expansion of Wilson's algorithm to sample spanning converging forests. To reduce the variance in the forest sampling, we develop two novel variance-reduced techniques. The first technique, leading to the proposal of the SCFV algorithm, is inspired by opinion dynamics in graphs and applies matrix-vector iteration to the spanning forest sampling. While SCFV achieves reduced variance compared to SCF, the cross-product term in its variance expression can be complex and potentially large in certain graphs. Therefore, we develop another technique, leading to a new iteration equation and the SCFV+ algorithm. SCFV+ achieves further reduced variance without the cross-product term in the variance of SCFV. We prove that SCFV+ can achieve a relative error guarantee with high probability and maintain a linear time complexity relative to the nodes of graphs, presenting a superior theoretical result compared to state-of-the-art algorithms. Finally, we conduct extensive experiments on various real-world networks, showing that our algorithms achieve better estimation accuracy and are more time-efficient than the state-of-the-art algorithms. Moreover, our algorithms are scalable to massive graphs with more than twenty million nodes in both undirected and directed graphs. | [
"Forest matrix",
"Wilson's algorithm",
"spanning converging forest",
"variance reduction"
] | https://openreview.net/pdf?id=432AJU0zEt | ugu74goSgm | official_review | 1,700,675,346,231 | 432AJU0zEt | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1568/Reviewer_Nn6r"
] | review: **Summary**
This work proposes three new algorithms for estimating the diagonal entries of
the forest matrix for directed graphs: SCF, SCFV, and SCFV+. These samplers are
based on Wilson's algorithm and a probabilistic interpretation of the diagonal
entries (Theorem 4.1). Concretely, the forest matrix is
$\Omega = (I + L)^{-1}$ where $L$ is the graph Laplacian.
The inverse of the $i$-th diagonal entry of $\Omega$ is the average size of the
tree containing $i$ across all converging spanning forests where $i$ is a root
node.
All of these estimators are unbiased, and the relative variance of SCFV+ is
bounded by a constant (Lemma 6.2).
Previous works for estimating these diagonal entries are based on (undirected)
Laplacian solvers:
* JLT (Jin et al., ICDM 2019)
* UST (van der Grinten et al., ICDM 2021)
This work offers comprehensive experiments comparing their three algorithms
with JLT and UST (for undirected graphs), for a wide range of graph sizes.
Overall, this is a very strong paper, both theoretically and experimentally.
See the questions box for potential weaknesses and suggested improvements.
**Typos and suggestions**
- [line 043] suggestion: Start by defined $L$ in terms of the adjacency matrix
of graph $G=(V,E)$ for unfamiliar readers.
- [line 124] typo: Missing space in "PageRank[22]"
- [line 171] suggestion: Replace "In the sequel," by "For brevity,"
- [line 183] suggestion: Italicize "rooted converging tree"
- [line 193] suggestion: Replace "surpass" by "dominate"
- [line 292] suggestion: Use $\mathbb{E}[\cdot]$ notation for expected values,
i.e., brackets not braces.
- [line 350] typo: Strange behavior with ":" in the algorithm input/output. Same
for Algorithm 2
- [line 354] suggestion: Just initialize with the vector value $\mathbf{0}_n$.
- [line 426] typo: $i$-th is boldfaced
- [line 792] typo: "average(upper)" --> "average (upper)"
questions: - [line 256] Is Theorem 4.1 novel? It seems that this equation should exist in
the (directed) spectral graph theory literature. If so, would you cite the
original reference?
- [line 333] Is the probability distribution on spanning converging forests as
described in Section 4.2 really uniform? In particular, does Wilson's
algorithm sample rooted trees converging trees from $G'$ uniformly? This could
use a proof or citation from [4, 37,44].
- [line 390] Why are we writing the lower bound for $\omega_{ii}$ in terms of
$\sigma$ when we also know that $\omega_{ii} \le \frac{1}{1 + d_i}$?
Can this be tight? How do we estimate $\sigma$ when there's room for improvement?
- [line 530] If the variance of SCFV is lower than SCF (Lemma 5.2), why can't
we use the bound for $l$ in SCF? Is the discussion here more about not being
able to come up with a smaller value of $l$?
- [line 711] Does "average relative error" mean arithmetic or geometric mean?
- [line 847] How long do the "ground truth" computations take with GMRES?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
432AJU0zEt | Efficient Computation for Diagonal of Forest Matrix via Variance-Reduced Forest Sampling | [
"Haoxin Sun",
"Zhongzhi Zhang"
] | The forest matrix, particularly its diagonal elements, has far-reaching implications in network science and machine learning. The state-of-the-art algorithms for the diagonal of forest matrix computation are based on a fast Laplacian solver. However, these algorithms encounter limitations when applied to digraphs due to the incapacity of the Laplacian solver. To overcome the issue, in this paper, we propose three novel sampling-based algorithms: SCF, SCFV, and SCFV. Our first algorithm SCF leverages a probability interpretation of the diagonal of the forest matrix and utilizes an expansion of Wilson's algorithm to sample spanning converging forests. To reduce the variance in the forest sampling, we develop two novel variance-reduced techniques. The first technique, leading to the proposal of the SCFV algorithm, is inspired by opinion dynamics in graphs and applies matrix-vector iteration to the spanning forest sampling. While SCFV achieves reduced variance compared to SCF, the cross-product term in its variance expression can be complex and potentially large in certain graphs. Therefore, we develop another technique, leading to a new iteration equation and the SCFV+ algorithm. SCFV+ achieves further reduced variance without the cross-product term in the variance of SCFV. We prove that SCFV+ can achieve a relative error guarantee with high probability and maintain a linear time complexity relative to the nodes of graphs, presenting a superior theoretical result compared to state-of-the-art algorithms. Finally, we conduct extensive experiments on various real-world networks, showing that our algorithms achieve better estimation accuracy and are more time-efficient than the state-of-the-art algorithms. Moreover, our algorithms are scalable to massive graphs with more than twenty million nodes in both undirected and directed graphs. | [
"Forest matrix",
"Wilson's algorithm",
"spanning converging forest",
"variance reduction"
] | https://openreview.net/pdf?id=432AJU0zEt | tXlfK71g0K | official_review | 1,701,193,545,909 | 432AJU0zEt | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1568/Reviewer_GvTg"
] | review: This paper considers the problem of approximating the diagonal of the forest matrix. The forest matrix has many applications, such as in the computation of the forest closeness centrality. This matrix can be computed exactly by inverting the Laplacian, but this approach is expensive on large graphs. Previous methods for approximate computation of diagonal entries of the forest matrix leverage fast Laplacian solvers, which are only available for undirected graphs.
In this paper an alternative interpretation of the $i$-th diagonal entry is considered: its value is the fraction of converging spanning trees such that $i$ belongs is one of the roots. A natural idea is then to estimate this value as the fraction of randomly sampled spanning trees that have $i$ as one of the roots.
Based on this idea, the paper proposes three estimators of gradually smaller variances, and derives accuracy bounds via the Chernoff bound. The variance-reducing techniques are interesting.
In general, the description of the algorithms and estimators is clear.
On the other hand, the motivation for the importance of considering the diagonal entries of the forest matrix is implicitly assumed, with an extremely limited discussion. For example, in line 125: "The calculation of forest closeness centrality is inherently tied to the diagonal elements of the forest matrix"; how are they related? I would discuss more clearly how to use such entries to compute/approximate the forest closeness centrality, providing a clear motivation for the considered problem. Are there other applications where having accurate estimates of the diagonal is important? Could you provide more details on this?
The new algorithms are tested in practice on several large real-world graphs; the best-proposed method (SCFV+) is shown to be much more accurate than previous methods for the same amount of work (the sampling number $l$).
Regarding the experiments on directed graphs: the results are shown for fixed values of the number of samples $l$. When $l=1000$, the time to conclude for SCFV+ is approx 23 minutes. Are these values of $l$ useful to derive any approximation guarantee, as derived in the theoretical analysis (e.g., Theorem 6.3)?
Minor comments:
- line 69: memory instead of memories
- line 99: "superior theoretical result" is not very clear
- the used chernoff bound (Lemma A.1) seems to be Bernstein's inequality?
questions: - provide more details of at least one application where the diagonal of the forest matrix is important (e.g., forest closeness centrality)
- add more details on the obtained approximation guarantees for the considered parameters in the experiments
ethics_review_flag: No
ethics_review_description: No issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
432AJU0zEt | Efficient Computation for Diagonal of Forest Matrix via Variance-Reduced Forest Sampling | [
"Haoxin Sun",
"Zhongzhi Zhang"
] | The forest matrix, particularly its diagonal elements, has far-reaching implications in network science and machine learning. The state-of-the-art algorithms for the diagonal of forest matrix computation are based on a fast Laplacian solver. However, these algorithms encounter limitations when applied to digraphs due to the incapacity of the Laplacian solver. To overcome the issue, in this paper, we propose three novel sampling-based algorithms: SCF, SCFV, and SCFV. Our first algorithm SCF leverages a probability interpretation of the diagonal of the forest matrix and utilizes an expansion of Wilson's algorithm to sample spanning converging forests. To reduce the variance in the forest sampling, we develop two novel variance-reduced techniques. The first technique, leading to the proposal of the SCFV algorithm, is inspired by opinion dynamics in graphs and applies matrix-vector iteration to the spanning forest sampling. While SCFV achieves reduced variance compared to SCF, the cross-product term in its variance expression can be complex and potentially large in certain graphs. Therefore, we develop another technique, leading to a new iteration equation and the SCFV+ algorithm. SCFV+ achieves further reduced variance without the cross-product term in the variance of SCFV. We prove that SCFV+ can achieve a relative error guarantee with high probability and maintain a linear time complexity relative to the nodes of graphs, presenting a superior theoretical result compared to state-of-the-art algorithms. Finally, we conduct extensive experiments on various real-world networks, showing that our algorithms achieve better estimation accuracy and are more time-efficient than the state-of-the-art algorithms. Moreover, our algorithms are scalable to massive graphs with more than twenty million nodes in both undirected and directed graphs. | [
"Forest matrix",
"Wilson's algorithm",
"spanning converging forest",
"variance reduction"
] | https://openreview.net/pdf?id=432AJU0zEt | gHILQhZZia | official_review | 1,700,932,071,555 | 432AJU0zEt | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1568/Reviewer_CpSa"
] | review: This paper is about estimating the diagonal entries of the matrix (I+L)^{-1} (also known as the forest matrix), where I is the identity matrix and L is the Laplacian matrix associated with an input graph. The problem itself is fundamental, lies at the core of network science, and has found several applications in Markov processes, opinion dynamics, graph signal processing, etc.
Compared to the state-of-the-art, where the emphasis has been on undirected graphs, the focus of this paper is on *directed* graphs. The core idea is to leverage the probabilistic interpretation of the diagonal elements of the forest matrix. The paper then presents a sampling estimation technique that builds upon and extends the seminal work of Wilson that uses loop-erased random walks for generating random spanning trees. A direct application of this yields accurate estimates only when the diagonal entries are large enough. To overcome this, the paper presents a new estimator with reduced variance and also argues about the quality of such an estimator.
The part of the paper concerning the experimental evaluations looks extensive and adequate. The proposed unbiased estimator appears to perform well and scales to relatively large networks.
*Pros
-The paper studies a very well-motivated and fundamental problem that seems to be important for many applications.
-Some effort has been put into trying to give provable guarantees for the presented algorithms, which is always welcome.
*Cons
-Unfortunately, I think the paper does make some inadequate claims, and some of the sentences/notations are a bit non-standard and hard to understand. All of these indicate that the paper may not be ready for wider dissemination. I have elaborated more in the comments to the authors below.
-Also, for the particular problem studied in the paper, I’m not very much convinced that directed graphs are harder than undirected ones.
-The approach presented here is also very similar to other works, e.g, [45]
-Evaluation
Overall, I think this is a sold contribution but due to my concerns above, I’m not that sure that it passes the bar for the WebConference.
questions: Generic comment: If you assume you have access to a very efficient directed Laplacian solver, what's then the main contribution of your work?
-In the abstract, you said “encounter limitations when applied to digraphs due to incapacity of the Laplacian solver”. What does this sentence mean? There are also directed Laplacian solvers presented in the literature (https://arxiv.org/abs/1811.10722) – why can’t you use those? Or what about practical solvers like multi-grid?
- Lines 52,53, the sentences are not connected well with each other.
- Line 88, what does “rooted probability” mean?
- Line 92, what does “sampling number” mean?
- Line 70, the Laplacian solver you cited is not the state-of-the-art Laplacian solver. Please do a more thorough literature review
- Line 137, again, Laplacian solvers for directed graphs exist
- Generic comment: why did you use spanning converging forests for describing forests of directed graphs? I think you might need to check the notion of pseudoforest online – it seems that’s what you need
- Line 215, a bad notation to use $\phi$ for a forest – it’s standard to use capital letters (and not Greek letters) for defining such objects
- Line 525, what’s a requisite sampling number?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
432AJU0zEt | Efficient Computation for Diagonal of Forest Matrix via Variance-Reduced Forest Sampling | [
"Haoxin Sun",
"Zhongzhi Zhang"
] | The forest matrix, particularly its diagonal elements, has far-reaching implications in network science and machine learning. The state-of-the-art algorithms for the diagonal of forest matrix computation are based on a fast Laplacian solver. However, these algorithms encounter limitations when applied to digraphs due to the incapacity of the Laplacian solver. To overcome the issue, in this paper, we propose three novel sampling-based algorithms: SCF, SCFV, and SCFV. Our first algorithm SCF leverages a probability interpretation of the diagonal of the forest matrix and utilizes an expansion of Wilson's algorithm to sample spanning converging forests. To reduce the variance in the forest sampling, we develop two novel variance-reduced techniques. The first technique, leading to the proposal of the SCFV algorithm, is inspired by opinion dynamics in graphs and applies matrix-vector iteration to the spanning forest sampling. While SCFV achieves reduced variance compared to SCF, the cross-product term in its variance expression can be complex and potentially large in certain graphs. Therefore, we develop another technique, leading to a new iteration equation and the SCFV+ algorithm. SCFV+ achieves further reduced variance without the cross-product term in the variance of SCFV. We prove that SCFV+ can achieve a relative error guarantee with high probability and maintain a linear time complexity relative to the nodes of graphs, presenting a superior theoretical result compared to state-of-the-art algorithms. Finally, we conduct extensive experiments on various real-world networks, showing that our algorithms achieve better estimation accuracy and are more time-efficient than the state-of-the-art algorithms. Moreover, our algorithms are scalable to massive graphs with more than twenty million nodes in both undirected and directed graphs. | [
"Forest matrix",
"Wilson's algorithm",
"spanning converging forest",
"variance reduction"
] | https://openreview.net/pdf?id=432AJU0zEt | VfaC6pqmDX | decision | 1,705,909,214,839 | 432AJU0zEt | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The reviewers agreed on the importance and broad applicability of the problem, that of obtaining the diagonal entries of the forest matrix of a directed graph. There was some discussion about whether sufficiently efficiently theoretical algorithms already exist for this problem, and that the current paper does not do a convincing job of highlighting the shortcomings of these algorithms from a practical perspective. However, the new algorithm proposed is simple and the rigorous analysis presented well. Moreover, the experiments convincingly establish the practical utility of the algorithm vis-a-vis previous theoretical work on undirected graph, i.e., in a simpler setting than directed graphs. Overall, the reviewers were positive about the paper, although not very enthusiastic. The paper can be (weakly) recommended for acceptance. |
432AJU0zEt | Efficient Computation for Diagonal of Forest Matrix via Variance-Reduced Forest Sampling | [
"Haoxin Sun",
"Zhongzhi Zhang"
] | The forest matrix, particularly its diagonal elements, has far-reaching implications in network science and machine learning. The state-of-the-art algorithms for the diagonal of forest matrix computation are based on a fast Laplacian solver. However, these algorithms encounter limitations when applied to digraphs due to the incapacity of the Laplacian solver. To overcome the issue, in this paper, we propose three novel sampling-based algorithms: SCF, SCFV, and SCFV. Our first algorithm SCF leverages a probability interpretation of the diagonal of the forest matrix and utilizes an expansion of Wilson's algorithm to sample spanning converging forests. To reduce the variance in the forest sampling, we develop two novel variance-reduced techniques. The first technique, leading to the proposal of the SCFV algorithm, is inspired by opinion dynamics in graphs and applies matrix-vector iteration to the spanning forest sampling. While SCFV achieves reduced variance compared to SCF, the cross-product term in its variance expression can be complex and potentially large in certain graphs. Therefore, we develop another technique, leading to a new iteration equation and the SCFV+ algorithm. SCFV+ achieves further reduced variance without the cross-product term in the variance of SCFV. We prove that SCFV+ can achieve a relative error guarantee with high probability and maintain a linear time complexity relative to the nodes of graphs, presenting a superior theoretical result compared to state-of-the-art algorithms. Finally, we conduct extensive experiments on various real-world networks, showing that our algorithms achieve better estimation accuracy and are more time-efficient than the state-of-the-art algorithms. Moreover, our algorithms are scalable to massive graphs with more than twenty million nodes in both undirected and directed graphs. | [
"Forest matrix",
"Wilson's algorithm",
"spanning converging forest",
"variance reduction"
] | https://openreview.net/pdf?id=432AJU0zEt | Im2gXJsoSr | official_review | 1,701,413,938,756 | 432AJU0zEt | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1568/Reviewer_sTM8"
] | review: The paper studies the problem of estimating the diagonal entries of $(I+L)^{-1}$ where L is the Laplacian matrix of a directed graph. This matrix is called the forest matrix and it has some important applications that are mentioned in the paper. The authors give new variance-reduced techniques to estimate these quantities efficiently. The algorithms presented are necessary since the standard fast algorithms are constrained by fast directed Laplacian solvers.
Pros:
Algorithm is simple and fairly easy to analyze. It has good practical performance and this seems to be important.
Cons:
It is not clear to me why are these methods required. There are near linear time laplacian solvers for directed graphs as well and the significance of this particular work is unclear.
questions: 1. The comparison to previous work on directed graphs is unclear.
2. The matrix (I+L) should be diagonally dominant and I am wondering why isn't there an easier method to do this via some power iterations?
3. There are fast directed Laplacian solvers and it is unclear why the previous works do not work. Also, the barrier between the directed and undirected graphs is unclear. More precisely, why is the problem harder in directed graphs for this setting?
It seems to me that the main reason these new algorithms are required is due to the directed structure of the graph and not the Laplacian solvers. It would be good if the authors can clarify what exactly is the need for these.
Otherwise, the algorithms are presented fairly well and are easy to follow.
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
432AJU0zEt | Efficient Computation for Diagonal of Forest Matrix via Variance-Reduced Forest Sampling | [
"Haoxin Sun",
"Zhongzhi Zhang"
] | The forest matrix, particularly its diagonal elements, has far-reaching implications in network science and machine learning. The state-of-the-art algorithms for the diagonal of forest matrix computation are based on a fast Laplacian solver. However, these algorithms encounter limitations when applied to digraphs due to the incapacity of the Laplacian solver. To overcome the issue, in this paper, we propose three novel sampling-based algorithms: SCF, SCFV, and SCFV. Our first algorithm SCF leverages a probability interpretation of the diagonal of the forest matrix and utilizes an expansion of Wilson's algorithm to sample spanning converging forests. To reduce the variance in the forest sampling, we develop two novel variance-reduced techniques. The first technique, leading to the proposal of the SCFV algorithm, is inspired by opinion dynamics in graphs and applies matrix-vector iteration to the spanning forest sampling. While SCFV achieves reduced variance compared to SCF, the cross-product term in its variance expression can be complex and potentially large in certain graphs. Therefore, we develop another technique, leading to a new iteration equation and the SCFV+ algorithm. SCFV+ achieves further reduced variance without the cross-product term in the variance of SCFV. We prove that SCFV+ can achieve a relative error guarantee with high probability and maintain a linear time complexity relative to the nodes of graphs, presenting a superior theoretical result compared to state-of-the-art algorithms. Finally, we conduct extensive experiments on various real-world networks, showing that our algorithms achieve better estimation accuracy and are more time-efficient than the state-of-the-art algorithms. Moreover, our algorithms are scalable to massive graphs with more than twenty million nodes in both undirected and directed graphs. | [
"Forest matrix",
"Wilson's algorithm",
"spanning converging forest",
"variance reduction"
] | https://openreview.net/pdf?id=432AJU0zEt | A2FmnsGIwu | official_review | 1,701,067,862,796 | 432AJU0zEt | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1568/Reviewer_FVNU"
] | review: The paper studies the computation of the diagonal elements of the forest matrix $\Omega$ for directed graphs, where $\Omega = (I+L)^{-1}$ with $I$ being the identity matrix and $L$ being the graph Laplacian. These diagonal elements have implications of the structural importance of the corresponding nodes in the network. Previous nearly-linear algorithms are known for undirected graph based on the nearly linear time Laplacian solver (for undirected graphs). This paper gives three sampling based approximation algorithms SCF, SCFV and SCFV+ for directed graphs, and the best one SCFV+ can approximate all diagonal element within a multiplicative error of $\epsilon$ with probability at least $1-\delta$ in expected running time $O(\frac{n}{\epsilon^2}\log\frac{2}{\delta})$. This matches (or even slightly better) than the algorithm for undirected graphs, and is conceptually simpler as it doesn't involve Laplacian solver.
The basic idea is an interpretation of the diagonal element $\omega_{ii}$ as the probability that node $i$ is a root node in a randomly sampled spanning converging forest. The SCF algorithm faithfully follow this interpretation by randomly sampling $\ell$ spanning converging forests and using the empirical mean of node $i$ being a root as the estimated $\omega_{ii}$ for any $i$. To sample a random spanning converging forest, the algorithm adds dummy node $x$ and edges from every original node directed toward $x$, then uses the classic loop-erasing algorithm of Wilson to sample a directed spanning tree rooted at $x$, and finally remove $x$ to get the forest. Chernoff bound suggests when $\ell = \Omega(\frac{1}{\sigma\epsilon^2}\log\frac{2}{\delta})$ suffices when $\omega_{ii}\geq \sigma$ for all $i$. The further improve the sampling complexity and avoid the dependence on $\sigma$, the author(s) device two variance reduction techniques, and both are based on a known property of the $\Omega$ matrix from the opinion evolution literature. In particular, if one starts with some initial configuration $z(0)$ and follow a simple random walk type dynamic (parametrized by some seed vector $s$, then $z(0),z(1),\ldots,z(t)$ converges to $z=\Omega\cdot s$. This suggests that $\omega_i = \Omega\cdot e_i$, and if one can start with a $z(0)$ that is a fairly good estimation of $\omega_i$, then running a small number (even 1 step) of the dynamic can improve the quality of the estimate. The main idea of the improved algorithms SCFV and SCFV+ is to start with the estimation of SCF and run 1 step of the dynamic (or its transpose variant). For SCFV+ the authors can show a provable guarantee that $\ell=\Omega(\frac{1}{\epsilon^2}\log\frac{2}{\delta})$ is sufficient.
In the empirical evaluations, on undirected graphs the new algorithms show better approximation error (both on average and max) over previous algorithms under comparable complexity. On directed graphs, for several dataset of small size, the authors were able to compute the exact solution using numerical methods to demonstrate good approximation quality of their algorithm, and on large graphs (without ground truth to measure quality), the new algorithms (especially SCFV and SCFV+) are shown to scale reasonably well.
questions: I don't have any specific question.
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3yW6F0bUhC | List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | [
"Shicheng Xu",
"Liang Pang",
"Jun Xu",
"Huawei Shen",
"Xueqi Cheng"
] | The results of information retrieval (IR) are usually presented in the form of a ranked list of candidate documents, such as web search for humans and retrieval-augmented paradigm for large language models (LLMs). List-aware retrieval aims to capture the list-level contextual features to return a better list, mainly including reranking and truncation. Reranking finely re-scores the documents in the list. Truncation dynamically determines the cut-off point of the ranked list to achieve the trade-off between overall relevance and avoiding misinformation from irrelevant documents. Previous studies treat them as two separate tasks and model them separately. However, the separation is not optimal. First, it is hard to share information between the two tasks. Specifically, reranking can provide fine-grained relevance information for truncation, while truncation can provide utility requirement for reranking. Second, the separate pipeline usually meets the error accumulation problem, where the small error from the reranking stage can largely affect the truncation stage. To solve these problems, we propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently. GenRT integrates reranking and truncation via generative paradigm based on encoder-decoder architecture. We also design the novel loss functions for joint optimization to make the model learn both tasks. Sharing parameters by the joint model is conducive to making full use of the common modeling information of the two tasks. Besides, the two tasks are performed concurrently and co-optimized to solve the error accumulation problem between separate stages. Experimentats on public learning-to-rank benchmarks and open-domain Q&A tasks show that our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs. To the best of our knowledge, this is the first work that discusses list-aware retrieval (esp. truncation task) in retrieval-augmented LLMs. | [
"Reranking",
"Truncation",
"Retrieval-augmented large language models"
] | https://openreview.net/pdf?id=3yW6F0bUhC | yAy3jUu2H6 | official_review | 1,698,917,307,671 | 3yW6F0bUhC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission129/Reviewer_fzkQ"
] | review: The paper proposes a joint model called GenRT for learning and performing reranking and truncation concurrently in list-aware retrieval systems. The key idea is using an encoder-decoder architecture where the encoder captures global list-level features and the decoder generates the reranked list step-by-step while making truncation decisions. The parameters of the encoder are shared in the joint optimization, and to learn the truncation model in the dynamic reranking process, a local backward window is introduced to provide backward context. A couple of reranking and truncation losses are designed to train the model for the two tasks. The authors have conducted extensive experiment with both LTR datasets and retrieval augmented LLMs on QA datasets.
Strengths:
- The paper is mostly easy to read.
- The authors have conducted extensive experiments to show the effectiveness of their methods
- Overall, the proposed method is reasonable and the experiment results support the authors’ arguments in the paper. The learning of truncation with dynamic list is particularly interesting.
Weakness:
- The idea of jointly training a reranking model with a truncation model is good but a bit incremental since LeCut has already considered jointly training the ranking models with the truncation models. The authors have clearly explained their differences with LeCut, but the differences are mostly on the model side, which means that main framework is not surprising new in this paper.
- The proposed TDCG reward/metric is not full grounded by user studies or previous studies. This could be problematic considering that it is the only truncation metric used in this paper. Particularly, there is no justification on how the gamma is selected.
questions: - Section 3.1, what is the ranking score l_i in feature-based datasets? The ranking score from the initial list? Or the position in the initial list?
- NCI is a widely used metric in truncation. Why not use NCI but proposed a new metric TDCG?
- Line 643, does \gamma means \gamma(y*x)?
- In Table 4, what does’t mean to compute TCDG for retrieval-augmented LLMs? How to compute it exactly?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3yW6F0bUhC | List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | [
"Shicheng Xu",
"Liang Pang",
"Jun Xu",
"Huawei Shen",
"Xueqi Cheng"
] | The results of information retrieval (IR) are usually presented in the form of a ranked list of candidate documents, such as web search for humans and retrieval-augmented paradigm for large language models (LLMs). List-aware retrieval aims to capture the list-level contextual features to return a better list, mainly including reranking and truncation. Reranking finely re-scores the documents in the list. Truncation dynamically determines the cut-off point of the ranked list to achieve the trade-off between overall relevance and avoiding misinformation from irrelevant documents. Previous studies treat them as two separate tasks and model them separately. However, the separation is not optimal. First, it is hard to share information between the two tasks. Specifically, reranking can provide fine-grained relevance information for truncation, while truncation can provide utility requirement for reranking. Second, the separate pipeline usually meets the error accumulation problem, where the small error from the reranking stage can largely affect the truncation stage. To solve these problems, we propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently. GenRT integrates reranking and truncation via generative paradigm based on encoder-decoder architecture. We also design the novel loss functions for joint optimization to make the model learn both tasks. Sharing parameters by the joint model is conducive to making full use of the common modeling information of the two tasks. Besides, the two tasks are performed concurrently and co-optimized to solve the error accumulation problem between separate stages. Experimentats on public learning-to-rank benchmarks and open-domain Q&A tasks show that our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs. To the best of our knowledge, this is the first work that discusses list-aware retrieval (esp. truncation task) in retrieval-augmented LLMs. | [
"Reranking",
"Truncation",
"Retrieval-augmented large language models"
] | https://openreview.net/pdf?id=3yW6F0bUhC | diNW72jc6k | decision | 1,705,909,222,211 | 3yW6F0bUhC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This is a metareview. It is based on the reviews, the author's feedback, and my own opinion. This paper proposes a list-aware reranking with early termination. While the reviews were reasonably detailed and pointed out important issues, it is my opinion that the authors did a very good job of carefully addressing all of these. I must admit that I am a little disappointed that several of the reviewers did not seem to acknowledge/engage with the the auth
ors on their detailed responses, I want to commend the authors. It is my opi
nion that this paper should be accepted.
A few small comments regarding potential changes:
* A few questions were raised regarding the efficiency / complexity aspect of the early termination mechanism.
The authors provided a detailed response to this issue, which should be included in the camera ready.
* Reviewer 3 has made a few concrete suggestions about the order of
presentation that the authors should consider in the camera ready.
* Other detailed experimental responses should be included in the camera
ready if possible as they clearly improve the quality of the work in my
opinion.
This is a nice piece of work. Well done. |
3yW6F0bUhC | List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | [
"Shicheng Xu",
"Liang Pang",
"Jun Xu",
"Huawei Shen",
"Xueqi Cheng"
] | The results of information retrieval (IR) are usually presented in the form of a ranked list of candidate documents, such as web search for humans and retrieval-augmented paradigm for large language models (LLMs). List-aware retrieval aims to capture the list-level contextual features to return a better list, mainly including reranking and truncation. Reranking finely re-scores the documents in the list. Truncation dynamically determines the cut-off point of the ranked list to achieve the trade-off between overall relevance and avoiding misinformation from irrelevant documents. Previous studies treat them as two separate tasks and model them separately. However, the separation is not optimal. First, it is hard to share information between the two tasks. Specifically, reranking can provide fine-grained relevance information for truncation, while truncation can provide utility requirement for reranking. Second, the separate pipeline usually meets the error accumulation problem, where the small error from the reranking stage can largely affect the truncation stage. To solve these problems, we propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently. GenRT integrates reranking and truncation via generative paradigm based on encoder-decoder architecture. We also design the novel loss functions for joint optimization to make the model learn both tasks. Sharing parameters by the joint model is conducive to making full use of the common modeling information of the two tasks. Besides, the two tasks are performed concurrently and co-optimized to solve the error accumulation problem between separate stages. Experimentats on public learning-to-rank benchmarks and open-domain Q&A tasks show that our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs. To the best of our knowledge, this is the first work that discusses list-aware retrieval (esp. truncation task) in retrieval-augmented LLMs. | [
"Reranking",
"Truncation",
"Retrieval-augmented large language models"
] | https://openreview.net/pdf?id=3yW6F0bUhC | ZxnXXCGfgk | official_review | 1,701,065,219,667 | 3yW6F0bUhC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission129/Reviewer_5vwi"
] | review: Summary
----
The article introduces the GenRT model, a novel approach that simultaneously addresses reranking and truncation through a generative framework built on an encoder-decoder architecture.
Strong Points
----
1. The innovative integration of reranking with truncation tasks.
2. Demonstrated improvements in performance through tests on widely available datasets.
3. The paper is clear to read and follow.
Weak Points
----
1. A key area of concern with the GenRT model is its application in real-time or online environments, where retrieval-augmented generation is usually used. The paper does not adequately address the potential increase in computational complexity and latency that the model might introduce in such scenarios. This oversight is significant, as it leaves questions about the model's practicality and efficiency in real-world applications. I would suggest the authors to include a detailed comparison of the GenRT model's latency and complexity against existing baseline methods. Such an analysis would provide a clearer understanding of the model's performance in time-sensitive applications. This additional analysis could offer more comprehensive insights into the trade-offs between model performance and operational efficiency.
questions: See weak points.
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3yW6F0bUhC | List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | [
"Shicheng Xu",
"Liang Pang",
"Jun Xu",
"Huawei Shen",
"Xueqi Cheng"
] | The results of information retrieval (IR) are usually presented in the form of a ranked list of candidate documents, such as web search for humans and retrieval-augmented paradigm for large language models (LLMs). List-aware retrieval aims to capture the list-level contextual features to return a better list, mainly including reranking and truncation. Reranking finely re-scores the documents in the list. Truncation dynamically determines the cut-off point of the ranked list to achieve the trade-off between overall relevance and avoiding misinformation from irrelevant documents. Previous studies treat them as two separate tasks and model them separately. However, the separation is not optimal. First, it is hard to share information between the two tasks. Specifically, reranking can provide fine-grained relevance information for truncation, while truncation can provide utility requirement for reranking. Second, the separate pipeline usually meets the error accumulation problem, where the small error from the reranking stage can largely affect the truncation stage. To solve these problems, we propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently. GenRT integrates reranking and truncation via generative paradigm based on encoder-decoder architecture. We also design the novel loss functions for joint optimization to make the model learn both tasks. Sharing parameters by the joint model is conducive to making full use of the common modeling information of the two tasks. Besides, the two tasks are performed concurrently and co-optimized to solve the error accumulation problem between separate stages. Experimentats on public learning-to-rank benchmarks and open-domain Q&A tasks show that our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs. To the best of our knowledge, this is the first work that discusses list-aware retrieval (esp. truncation task) in retrieval-augmented LLMs. | [
"Reranking",
"Truncation",
"Retrieval-augmented large language models"
] | https://openreview.net/pdf?id=3yW6F0bUhC | MjeFjBeoxE | official_review | 1,701,250,866,849 | 3yW6F0bUhC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission129/Reviewer_ZWGp"
] | review: This paper proposes a Reranking-Truncation joint model for list-aware retrieval in web search and retrieval-augmented LLM. The experimental results have verified the effectiveness of the method, and the experimental design is reasonable.
This paper has a well-organized structure.
Truncation and Reranking are important problems in retrieval and integrating two parts together is an important issue.
Efficiency Analysis of List-aware Retrieval in section 4.4 is very necessary.
questions: More detailed information is needed on the time-consuming data of truncation and reranking using traditional methods.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3yW6F0bUhC | List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | [
"Shicheng Xu",
"Liang Pang",
"Jun Xu",
"Huawei Shen",
"Xueqi Cheng"
] | The results of information retrieval (IR) are usually presented in the form of a ranked list of candidate documents, such as web search for humans and retrieval-augmented paradigm for large language models (LLMs). List-aware retrieval aims to capture the list-level contextual features to return a better list, mainly including reranking and truncation. Reranking finely re-scores the documents in the list. Truncation dynamically determines the cut-off point of the ranked list to achieve the trade-off between overall relevance and avoiding misinformation from irrelevant documents. Previous studies treat them as two separate tasks and model them separately. However, the separation is not optimal. First, it is hard to share information between the two tasks. Specifically, reranking can provide fine-grained relevance information for truncation, while truncation can provide utility requirement for reranking. Second, the separate pipeline usually meets the error accumulation problem, where the small error from the reranking stage can largely affect the truncation stage. To solve these problems, we propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently. GenRT integrates reranking and truncation via generative paradigm based on encoder-decoder architecture. We also design the novel loss functions for joint optimization to make the model learn both tasks. Sharing parameters by the joint model is conducive to making full use of the common modeling information of the two tasks. Besides, the two tasks are performed concurrently and co-optimized to solve the error accumulation problem between separate stages. Experimentats on public learning-to-rank benchmarks and open-domain Q&A tasks show that our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs. To the best of our knowledge, this is the first work that discusses list-aware retrieval (esp. truncation task) in retrieval-augmented LLMs. | [
"Reranking",
"Truncation",
"Retrieval-augmented large language models"
] | https://openreview.net/pdf?id=3yW6F0bUhC | HQmxEy16Xl | official_review | 1,700,821,792,627 | 3yW6F0bUhC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission129/Reviewer_da2Z"
] | review: This paper proposes GenRT, which jointly does reranking and ranking truncation in one model structure. GenRT exploits global dependency encoder to capture the information of the whole ranked list and decide which document to pick and whether to be truncated simultaneously in each step of the sequential decoding phase.
Positive Feedback:
1. The proposed model, GenRT, effectively combines reranking and ranking truncation within a novel model structure.
2. The presentation of the paper is clear, with a well-structured description and helpful visualizations.
3. The inclusion of baselines for both reranking and truncation, along with comparisons in different settings (w/o T, w/o R), demonstrates the behavior and flexibility of the proposed model.
Concerns and Suggestions:
1. For experiment results, while statistical significance is indicated, it would be beneficial to elaborate on the actual impact of the observed improvements, particularly in cases where they are small, such as the reranking results for Yahoo! and reranking results on retrieval-augmented LLMs for NQ.
2. In the results of truncation, it would be valuable to include the TDCG of optimal truncation to provide a deeper understanding of the performance of truncation models. Additionally, including the truncation performance of GenRT with other reranking models can offer insights into GenRT's truncation performance without the benefit of the end-to-end process and the same model structure.
3. Given the proposed acceleration strategy, it is suggested to include an efficiency analysis in the experimental results to provide a comprehensive understanding of the model's efficiency.
4. According to Table 4, it is noted that the highest accuracy for NQ is achieved on the fixed-x setting with the largest x. To gain a more comprehensive understanding of truncation performance, it would be valuable to explore the performance of all models with larger x which causes worse accuracy. This analysis can provide insights into the value of truncation.
5. The symbol 'p' is used in equations 7 and 9 to represent different meanings.
questions: It would be helpful if the authors could address the concerns about model performance and evaluation.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3yW6F0bUhC | List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | [
"Shicheng Xu",
"Liang Pang",
"Jun Xu",
"Huawei Shen",
"Xueqi Cheng"
] | The results of information retrieval (IR) are usually presented in the form of a ranked list of candidate documents, such as web search for humans and retrieval-augmented paradigm for large language models (LLMs). List-aware retrieval aims to capture the list-level contextual features to return a better list, mainly including reranking and truncation. Reranking finely re-scores the documents in the list. Truncation dynamically determines the cut-off point of the ranked list to achieve the trade-off between overall relevance and avoiding misinformation from irrelevant documents. Previous studies treat them as two separate tasks and model them separately. However, the separation is not optimal. First, it is hard to share information between the two tasks. Specifically, reranking can provide fine-grained relevance information for truncation, while truncation can provide utility requirement for reranking. Second, the separate pipeline usually meets the error accumulation problem, where the small error from the reranking stage can largely affect the truncation stage. To solve these problems, we propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently. GenRT integrates reranking and truncation via generative paradigm based on encoder-decoder architecture. We also design the novel loss functions for joint optimization to make the model learn both tasks. Sharing parameters by the joint model is conducive to making full use of the common modeling information of the two tasks. Besides, the two tasks are performed concurrently and co-optimized to solve the error accumulation problem between separate stages. Experimentats on public learning-to-rank benchmarks and open-domain Q&A tasks show that our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs. To the best of our knowledge, this is the first work that discusses list-aware retrieval (esp. truncation task) in retrieval-augmented LLMs. | [
"Reranking",
"Truncation",
"Retrieval-augmented large language models"
] | https://openreview.net/pdf?id=3yW6F0bUhC | 6Get8iHV0t | official_review | 1,700,887,300,130 | 3yW6F0bUhC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission129/Reviewer_F3xg"
] | review: The study introduces list-aware retrieval, which involves presenting search results as a ranked list of documents. The tasks involved are reranking and truncation. Previous studies treated them separately, but the proposed joint model, GenRT, performs them concurrently using a generative paradigm. This approach improves information sharing and addresses the issue of error accumulation. Experimental results show that GenRT achieves better performance in both reranking and truncation tasks for web search and retrieval-augmented language models.
Pros:
1.The proposed combined training of document reranking and truncation appears to be novel compared to existing approaches.
2.The experimental design clearly demonstrates that the combination of reranking and truncation improved both tasks on the information retrieval datasets.
3.The paper does a decent job of describing their models and experiments. Readers should be able to reproduce their work based on the details provided in the paper.
Cons:
1.There are some parts of the writing that require clarification.
2.The improvement on the information retrieval dataset appears to be moderate.
3.The application of truncation to the large language model question-answering (LLM QA) task seems to reduce the algorithm's performance.
questions: 1.It would be beneficial to define and explain truncation and retrieval-augmented LLMs earlier, such as in the introduction or abstract section.
2.Justification for selecting learning-to-rank and QA as testing tasks should be provided earlier in Section 1.
3.The meaning of "they do not satisfy the permutation invariant" in Section 2 is unclear.
4.Additional details on how document embeddings are obtained are necessary in Section 3.1.
5.The importance and necessity of position embedding should be clarified, and supporting experiments could be added.
6.Further justification is needed for using TDCG as an evaluation metric and how it penalizes irrelevant documents.
7.In Section 4.4, an explanation is needed for why SetRank is a suitable baseline in Fig 5.
8.Table 4 seems to suggest that applying truncation negatively impacts LLM performance on QA tasks. This makes people question the importance of the truncation task.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3mtJnDbfo9 | Top-Personalized-K Recommendation | [
"WONBIN KWEON",
"SeongKu Kang",
"Sanghwan Jang",
"Hwanjo Yu"
] | The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists.
However, is this fixed-size top-K recommendation the optimal approach for every user’s satisfaction?
Not necessarily.
We point out that providing fixed-size recommendations without taking into account user utility can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure to relevant ones.
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
As a solution to the proposed task, we develop a model-agnostic framework named PerK.
PerK estimates the expected user utility by leveraging calibrated interaction probabilities, subsequently selecting the recommendation size that maximizes this expected utility.
Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK in Top-Personalized-K recommendation task.
We expect that Top-Personalized-K recommendation has the potential to offer enhanced solutions for various real-world recommendation scenarios, based on its great compatibility with existing models. | [
"Recommender System",
"Collaborative Filtering",
"Personalization",
"Recommendation Size"
] | https://openreview.net/pdf?id=3mtJnDbfo9 | ab7sykXEo5 | official_review | 1,700,510,059,954 | 3mtJnDbfo9 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission568/Reviewer_uRD6"
] | review: # Summary
The paper presents a new technique to determine the ideal number of recommended items for each user. The authors argue that fixed list sizes are not optimal as they may either include irrelevant items or limit exposure to relevant ones. The problem is treated as an optimization problem over a general user utility function from which various evaluation metrics (such as NDCG and F1) can be derived. Since there is no ground truth for the unobserved user-item interactions, the objective function cannot be optimized directly. Instead, they use the Expected User Utility that can be estimated by treating the interaction labels for unobserved items as Bernoulli random variables. The Expected User Utility is computed over calibrated interaction probabilities. Experiments are conducted on several datasets, considering different baselines, where the proposed method shows superior results over the baselines.
# Clarity, Originality, and Significance
The paper is well-written, and the methodology is sound. The motivation is appealing, and the main ideas are built on established research. For example, the Expected User Utility comes from [45] and [46], while the calibration of Interaction Probability through Platt scaling comes from [12], [11], [34], [10], and [24]. The latter method was adapted to calibrate at the user level. The method and results have a high potential impact on academia and industry.
# Pros
- Paper is well written and technically sound;
- The method is built on sound principles and previous research;
- The experiments include several datasets and baselines;
- An ablation study to isolate the impact of calibration is included, showing that user-wise calibration indeed helps;
- The overall results are encouraging.
# Cons
- The recommendation task involves implicit feedback data, but some of the chosen datasets are rather explicit feedback transformed to accommodate the task. For example, two versions of MovieLens are used, while many pure implicit feedback data available could be better suited to this task. Random train/test splits might also be an issue. See "Balázs Hidasi and Ádám Tibor Czapp. 2023. Widespread Flaws in Offline Evaluation of Recommender Systems. In Proceedings of the 17th ACM Conference on Recommender Systems (RecSys '23)" for common problems in offline evaluation and how to mitigate them.
- I understand that the proposed method is agnostic to the recommendation model. Still, I would try to include more recommendation models, especially those appearing at well-known benchmarks, for this recommendation task. Please refer to "Steffen Rendle, Walid Krichene, Li Zhang, and Yehuda Koren. 2022. Revisiting the Performance of iALS on Item Recommendation Benchmarks. In Proceedings of the 16th ACM Conference on Recommender Systems (RecSys '22)."
questions: In Fig.2, for MovieLens 25M, there is a curious peak for k=50 in all cases. I wonder whether it is realistic to consider that the best choice is to recommend 50 items to a significant fraction of the users in any real-world application. Could the authors elaborate on that observation?
What would happen with non-personalized baselines? Would we observe the same gains?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3mtJnDbfo9 | Top-Personalized-K Recommendation | [
"WONBIN KWEON",
"SeongKu Kang",
"Sanghwan Jang",
"Hwanjo Yu"
] | The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists.
However, is this fixed-size top-K recommendation the optimal approach for every user’s satisfaction?
Not necessarily.
We point out that providing fixed-size recommendations without taking into account user utility can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure to relevant ones.
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
As a solution to the proposed task, we develop a model-agnostic framework named PerK.
PerK estimates the expected user utility by leveraging calibrated interaction probabilities, subsequently selecting the recommendation size that maximizes this expected utility.
Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK in Top-Personalized-K recommendation task.
We expect that Top-Personalized-K recommendation has the potential to offer enhanced solutions for various real-world recommendation scenarios, based on its great compatibility with existing models. | [
"Recommender System",
"Collaborative Filtering",
"Personalization",
"Recommendation Size"
] | https://openreview.net/pdf?id=3mtJnDbfo9 | KVO0BPNDp6 | official_review | 1,700,751,975,121 | 3mtJnDbfo9 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission568/Reviewer_k6gS"
] | review: ## summary:
This paper challenges the conventional fixed-size top-K recommendation
approach, arguing that it may not optimize user satisfaction. The
authors introduce the Top-Personalized-K Recommendation task, where
the recommendation size varies for each user to maximize individual
satisfaction. They propose the PerK framework, a model-agnostic
solution that estimates expected user utility using calibrated
interaction probabilities. PerK selects the recommendation size that
maximizes this expected utility. The framework involves a bi-level
optimization problem, using the concept of expected user utility and
calibrated interaction probabilities obtained through user-wise
calibration.
## strengths:
* s1. PerK is designed to seamlessly collaborate with any pre-trained
recommender model, offering flexibility and adaptability to diverse
recommendation scenarios. PerK is highly adaptable, suitable for
integration into any item recommendation scenario with existing
recommenders. Its simplicity doesn't compromise effectiveness.
* s2. The theoretical derivations are meticulously presented,
contributing to a thorough understanding of the model.
* s3. The model undergoes a comprehensive evaluation with a detailed
complexity analysis and AB testing conducted on an Amazon dataset.
* s4. Outperforming all baseline models by approximately 7% across four
public datasets, the proposed model demonstrates robust performance.
## weaknesses:
* w1. The authors used three base recommender systems: Bayesian Personalized Ranking (BPR), Neural Collaborative Filtering (NCF), and LightGCN (LGCN). Notably, the authors did not provide explicit justification for the selection of these three base recommenders. Additionally, these particular models were not employed as base recommenders in baseline papers. The performance of this proposed model on top of advanced recommender systems remains uncertain, and the potential computational resources, particularly for large-scale recommendation systems, may necessitate significant resources for optimizing such a system.
questions: see weakness above.
ethics_review_flag: No
ethics_review_description: n/a
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3mtJnDbfo9 | Top-Personalized-K Recommendation | [
"WONBIN KWEON",
"SeongKu Kang",
"Sanghwan Jang",
"Hwanjo Yu"
] | The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists.
However, is this fixed-size top-K recommendation the optimal approach for every user’s satisfaction?
Not necessarily.
We point out that providing fixed-size recommendations without taking into account user utility can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure to relevant ones.
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
As a solution to the proposed task, we develop a model-agnostic framework named PerK.
PerK estimates the expected user utility by leveraging calibrated interaction probabilities, subsequently selecting the recommendation size that maximizes this expected utility.
Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK in Top-Personalized-K recommendation task.
We expect that Top-Personalized-K recommendation has the potential to offer enhanced solutions for various real-world recommendation scenarios, based on its great compatibility with existing models. | [
"Recommender System",
"Collaborative Filtering",
"Personalization",
"Recommendation Size"
] | https://openreview.net/pdf?id=3mtJnDbfo9 | GkePihUarx | decision | 1,705,909,244,374 | 3mtJnDbfo9 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper introduces Top-Personalized-K Recommendation, an innovative approach to recommendation tasks that addresses the constraints of traditional top-K recommendations, which rely on a universally fixed-size list. The reviewers find the idea of the paper intriguing and the experiments persuasive. They advise the authors to clearly justify their choice of the three base recommenders used, and to expand their research by conducting additional experiments with a broader range of recommendation models. |
3mtJnDbfo9 | Top-Personalized-K Recommendation | [
"WONBIN KWEON",
"SeongKu Kang",
"Sanghwan Jang",
"Hwanjo Yu"
] | The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists.
However, is this fixed-size top-K recommendation the optimal approach for every user’s satisfaction?
Not necessarily.
We point out that providing fixed-size recommendations without taking into account user utility can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure to relevant ones.
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
As a solution to the proposed task, we develop a model-agnostic framework named PerK.
PerK estimates the expected user utility by leveraging calibrated interaction probabilities, subsequently selecting the recommendation size that maximizes this expected utility.
Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK in Top-Personalized-K recommendation task.
We expect that Top-Personalized-K recommendation has the potential to offer enhanced solutions for various real-world recommendation scenarios, based on its great compatibility with existing models. | [
"Recommender System",
"Collaborative Filtering",
"Personalization",
"Recommendation Size"
] | https://openreview.net/pdf?id=3mtJnDbfo9 | DL2tJ2R2jM | official_review | 1,700,638,572,793 | 3mtJnDbfo9 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission568/Reviewer_fsvR"
] | review: Summary:
- The paper discusses limitations of fixed-size top-K recommendation systems, proposing Top-Personalized-K Recommendation for personalized-sized lists. The PerK model, leveraging calibrated interaction probabilities, maximizes user satisfaction by selecting an optimal recommendation size. Extensive experiments show PerK's superiority, suggesting its potential for enhancing various real-world recommendation scenarios.
Pros:
1. This paper proposes a very interesting Top-Personalized-K recommendation task. The idea is simple and makes sense.
2. The paper is well-written and easy to follow.
3. The experimental results validate the efficacy of proposed PerK method.
questions: In real-world applications, if a screen can display 10 items, but the user's personalized k is 3, what should we do with the remaining space?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3mtJnDbfo9 | Top-Personalized-K Recommendation | [
"WONBIN KWEON",
"SeongKu Kang",
"Sanghwan Jang",
"Hwanjo Yu"
] | The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists.
However, is this fixed-size top-K recommendation the optimal approach for every user’s satisfaction?
Not necessarily.
We point out that providing fixed-size recommendations without taking into account user utility can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure to relevant ones.
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
As a solution to the proposed task, we develop a model-agnostic framework named PerK.
PerK estimates the expected user utility by leveraging calibrated interaction probabilities, subsequently selecting the recommendation size that maximizes this expected utility.
Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK in Top-Personalized-K recommendation task.
We expect that Top-Personalized-K recommendation has the potential to offer enhanced solutions for various real-world recommendation scenarios, based on its great compatibility with existing models. | [
"Recommender System",
"Collaborative Filtering",
"Personalization",
"Recommendation Size"
] | https://openreview.net/pdf?id=3mtJnDbfo9 | D3csE7ZchI | official_review | 1,701,051,791,630 | 3mtJnDbfo9 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission568/Reviewer_F7hN"
] | review: This paper proposed a Top-Personalized-K Recommendation, a new recommendation task resolving the limitation of the top-K recommendation with a globally fixed-size recommended list.
Pros:
1. The motivation is clearly clarified and the solution is well formulated.
2. Extensive experiments are conducted to demonstrate the performance and rationality of the proposed framework.
Cons:
1. In my opinion, this motivation is not reasonable in practice. The recommendation size K is constrained by business design rather than the algorithms. The example in Fig 1 seems unreasonable, the irrelevant items in the tail can not prove the impact of the personalized size, maybe just a result of the suboptimal ranking models.
2. The understanding of the application scenarios seems inappropriate. E.g. Multi-domain recommendation is a topic aiming at recommending items with a unified model which unitizes the mixed data from various domains. What you want to express here may be the page-level recommendation like[1].
3. The comparison in the experiments is not convincing enough for the unequal recommendation list.
[1] https://arxiv.org/pdf/2211.09303.pdf
questions: Please refer to the Cons.
ethics_review_flag: No
ethics_review_description: There is no ethics issue
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3ebYzt0obL | SatGuard: Concealing Endless and Bursty Packet Losses in LEO Satellite Networks for Delay-Sensitive Web Applications | [
"Jihao Li",
"Hewu Li",
"Zeqi Lai",
"Qian Wu",
"Yijie Liu",
"Qi Zhang",
"Yuanjie Li",
"Jun Liu"
] | Delay-sensitive Web services are crucial applications in emerging low-earth orbit (LEO) satellite networks (LSNs). However, our realworld measurement study based on SpaceX’s Starlink, the most widely used commercial LSN today reveals that the endless and
bursty packet losses over unstable LEO satellite links impose significant challenges on guaranteeing the quality of experience (QoE) of Web applications. We propose SatGuard, a distributed in-orbit loss recovery mechanism that can reduce user-perceived delay by completely concealing packet losses in the unstable and lossy LSN environment from endpoints. Specifically, SatGuard adopts a series of techniques to: (i) correctly migrate on-board packet buffer to support link local retransmission under LEO dynamics; (ii) efficiently detect packet losses on satellite links; and (iii) ensure packets ordering for endpoints. We implement a SatGuard prototype, and conduct extensive trace-driven evaluations guided by public constellation information and real-world measurements. Our experiments demonstrate that, in comparison with other state-of-the-art approaches, SatGuard can significantly improve Web-based QoE, by reducing: (i) up to 48.3% of page load time for Web browsing; and (ii) up to 57.4% end-to-end communication delay for WebRTC. | [
"LEO satellite networks",
"webRTC",
"web browsing",
"loss recovery"
] | https://openreview.net/pdf?id=3ebYzt0obL | t0zBzkQIuJ | official_review | 1,701,386,142,053 | 3ebYzt0obL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1935/Reviewer_PyHy"
] | review: In this paper, the authors address the challenge of delivering delay-sensitive Web applications via low earth orbit satellites. The authors propose several techniques to comply with LEO satellites characteristics and to reduce delay. The analysis is supported a prototype and trace-driven emulation.
I like the perspective of the authors that delay, even though in low orbit, is less of an issue compared to unstable links.
Without doubt, the paper is nicely written und includes clear thoughts. What would be nice to see is a much better articulation about the novelty of the proposal.
Some comments:
(1) The in-network caching approach is related to types of prior work. First, anything that relates to information-centric (or named-data) networking. Second, approaches that try to cope with IP mobility. In particular, Fast Mobile IPv6 had a very similar idea, i.e., sending buffered content ahead to bridge handover gaps. How would you compare from a principle design perspective?
(2) The authors propose to use some kind of p2p communication between the satellites (i.e., spacerouting). I'm wondering how realistic is this to implement.
(3) The proposal seems to require synchronized clocks to align with the GS schedule. What about clock drift?
(4) The general problem of lossy links is also prevalent in IoT networks. What could we learn from proposal in this domain?
Editorial remarks:
* Figure 17: StarGuard should probably be SatGuard.
questions: See above.
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3ebYzt0obL | SatGuard: Concealing Endless and Bursty Packet Losses in LEO Satellite Networks for Delay-Sensitive Web Applications | [
"Jihao Li",
"Hewu Li",
"Zeqi Lai",
"Qian Wu",
"Yijie Liu",
"Qi Zhang",
"Yuanjie Li",
"Jun Liu"
] | Delay-sensitive Web services are crucial applications in emerging low-earth orbit (LEO) satellite networks (LSNs). However, our realworld measurement study based on SpaceX’s Starlink, the most widely used commercial LSN today reveals that the endless and
bursty packet losses over unstable LEO satellite links impose significant challenges on guaranteeing the quality of experience (QoE) of Web applications. We propose SatGuard, a distributed in-orbit loss recovery mechanism that can reduce user-perceived delay by completely concealing packet losses in the unstable and lossy LSN environment from endpoints. Specifically, SatGuard adopts a series of techniques to: (i) correctly migrate on-board packet buffer to support link local retransmission under LEO dynamics; (ii) efficiently detect packet losses on satellite links; and (iii) ensure packets ordering for endpoints. We implement a SatGuard prototype, and conduct extensive trace-driven evaluations guided by public constellation information and real-world measurements. Our experiments demonstrate that, in comparison with other state-of-the-art approaches, SatGuard can significantly improve Web-based QoE, by reducing: (i) up to 48.3% of page load time for Web browsing; and (ii) up to 57.4% end-to-end communication delay for WebRTC. | [
"LEO satellite networks",
"webRTC",
"web browsing",
"loss recovery"
] | https://openreview.net/pdf?id=3ebYzt0obL | lk7pDvYy6A | official_review | 1,700,489,789,988 | 3ebYzt0obL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1935/Reviewer_9EwC"
] | review: This paper proposes an in-network link layer retransmission scheme for LEO satellite network such as Starlink. The approach is called SatGuard.
I noticed a few problems in the motivation and the claimed properties of Starlink:
1. "Upper layer applications are marching to the new era of Web-3.0."
* No, they are really not. These are hyperbolic marketing claims of web-3.0 proponents. It's not a good idea to repeat them in a scientific publication.
* Even if they were, it would really not be relevant to the topic of this paper.
* A link layer retransmission as proposed here does not really have a strong connection with web topics. The mechanisms would apply to *any* type of communication – maybe the paper would be a better fit for a satellite communication publication venue?
2. Starlink's "endless and bust packet losses".
* You seem to exaggerate the performance problems, specifically the packet loss rates. Other measurements and publications report a less problematic network performance, e.g.:
- https://www.netforecast.com/wp-content/uploads/FixedWireless_LEO_CableComparisonReport_NFR5148-1.pdf
- https://blog.apnic.net/2022/11/28/fact-checking-starlinks-performance-figures/
* From my own experience, packet loss occurs at low rates. Higher rates are often caused by sub-optimal antenna position, FOV blocking, or antenna mobility.
With respect to the proposed technical approach, I have a few comments:
3. You did not really analyze how current LEO link layers work. This would have been a good basis for your design.
4. Considering the current low (or at least acceptable) loss rates, inventing a handover-aware buffer migration scheme is obviously quite complex and costly. You do not discuss this a lot.
5. The assumptions for the partial traffic processing feature (that you could QoS classification) is unrealistic in today's Internet. Diffserv is not used inter-domain, and given ubiquitous encryption, it is hard/impossible to classify traffic in the network.
6. It's good that you performed application-level performance tests. The discussion could be more technical. E.g., what protocols were actually used (RTP, I assume?). How did the packet loss without your scheme affect the FPS?
questions: 1. Can you reconsider your claims on the unreliability of current the Starlink network? There may be other LEO networks that match those claims.
2. Can you base your design on actual design of current LEO link layer protocols?
3. Can you discuss the complexity issue?
4. Can you discuss the impact on WebRTC communication with more technical depth?
ethics_review_flag: No
ethics_review_description: no issues
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 4
technical_quality: 3
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3ebYzt0obL | SatGuard: Concealing Endless and Bursty Packet Losses in LEO Satellite Networks for Delay-Sensitive Web Applications | [
"Jihao Li",
"Hewu Li",
"Zeqi Lai",
"Qian Wu",
"Yijie Liu",
"Qi Zhang",
"Yuanjie Li",
"Jun Liu"
] | Delay-sensitive Web services are crucial applications in emerging low-earth orbit (LEO) satellite networks (LSNs). However, our realworld measurement study based on SpaceX’s Starlink, the most widely used commercial LSN today reveals that the endless and
bursty packet losses over unstable LEO satellite links impose significant challenges on guaranteeing the quality of experience (QoE) of Web applications. We propose SatGuard, a distributed in-orbit loss recovery mechanism that can reduce user-perceived delay by completely concealing packet losses in the unstable and lossy LSN environment from endpoints. Specifically, SatGuard adopts a series of techniques to: (i) correctly migrate on-board packet buffer to support link local retransmission under LEO dynamics; (ii) efficiently detect packet losses on satellite links; and (iii) ensure packets ordering for endpoints. We implement a SatGuard prototype, and conduct extensive trace-driven evaluations guided by public constellation information and real-world measurements. Our experiments demonstrate that, in comparison with other state-of-the-art approaches, SatGuard can significantly improve Web-based QoE, by reducing: (i) up to 48.3% of page load time for Web browsing; and (ii) up to 57.4% end-to-end communication delay for WebRTC. | [
"LEO satellite networks",
"webRTC",
"web browsing",
"loss recovery"
] | https://openreview.net/pdf?id=3ebYzt0obL | RPSbpz6If2 | decision | 1,705,909,241,219 | 3ebYzt0obL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The paper proposes SatGuard, addressing packet loss challenges in low earth orbit (LEO) satellite constellations like Starlink. SatGuard employs three mechanisms for fast loss recovery, supported by a measurement study on Starlink, simulations, and an open-source implementation. Reviewers appreciate the paper's insights, well-written nature, and timely relevance to a significant problem. However, some reviewers raised concerns around the proposal's novelty and its realistic implementation. Some reviewers question the exaggerated portrayal of Starlink's performance issues and raise doubts about the proposed link layer retransmission scheme's practicality.
The work unboubtedly provides valuable insights from a measurement study on Starlink, but it may overlap with more recent studies, such as:
Aravindh Raman, Matteo Varvello, Hyunseok Chang, Nishanth Sastry, and Yasir Zaki. 2023. Dissecting the Performance of Satellite Network Operators. Proc. ACM Netw. 1, CoNEXT3, Article 15 (December 2023), 25 pages. https://doi.org/10.1145/3629137
Nevertheless, SatGuard's contribution to fast packet loss recovery is deemed relevant and well-timed by the reviewers. The paper is well-written, and the application-level performance tests provide technical insights. The use of simulations to evaluate SatGuard's impact on end-to-end delay and web page load times is acknowledged.
The authors have engaged with the reviewers during the rebuttal phase, and presented ways in which they plan to address the feedback they received, and thus improve their submission.
I would recommend conditionally accepting this paper -- I would probably encourage appointing a shepherd to oversee the camera ready, especially since (from the rebuttal phase) I understand that some reviewers might want to actually see new experiments being performed. Specifically, I'd like that the authors clarify and articulate the novelty of the proposal more explicitly, and that they address concerns about the practicality and complexity of the proposed handover-aware buffer migration scheme. |
3ebYzt0obL | SatGuard: Concealing Endless and Bursty Packet Losses in LEO Satellite Networks for Delay-Sensitive Web Applications | [
"Jihao Li",
"Hewu Li",
"Zeqi Lai",
"Qian Wu",
"Yijie Liu",
"Qi Zhang",
"Yuanjie Li",
"Jun Liu"
] | Delay-sensitive Web services are crucial applications in emerging low-earth orbit (LEO) satellite networks (LSNs). However, our realworld measurement study based on SpaceX’s Starlink, the most widely used commercial LSN today reveals that the endless and
bursty packet losses over unstable LEO satellite links impose significant challenges on guaranteeing the quality of experience (QoE) of Web applications. We propose SatGuard, a distributed in-orbit loss recovery mechanism that can reduce user-perceived delay by completely concealing packet losses in the unstable and lossy LSN environment from endpoints. Specifically, SatGuard adopts a series of techniques to: (i) correctly migrate on-board packet buffer to support link local retransmission under LEO dynamics; (ii) efficiently detect packet losses on satellite links; and (iii) ensure packets ordering for endpoints. We implement a SatGuard prototype, and conduct extensive trace-driven evaluations guided by public constellation information and real-world measurements. Our experiments demonstrate that, in comparison with other state-of-the-art approaches, SatGuard can significantly improve Web-based QoE, by reducing: (i) up to 48.3% of page load time for Web browsing; and (ii) up to 57.4% end-to-end communication delay for WebRTC. | [
"LEO satellite networks",
"webRTC",
"web browsing",
"loss recovery"
] | https://openreview.net/pdf?id=3ebYzt0obL | NJvQaaw2lg | official_review | 1,700,627,053,023 | 3ebYzt0obL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1935/Reviewer_dZp2"
] | review: This paper proposes Satguard which is an approach to achieve fast packet loss recovery for low earth orbit satellite constellations. The paper performs a measurement study on Starlink to show persistent and periodic packets losses which correlate with satellite handovers. Satguard achieves fast loss recovery through three mechanisms (i) advance sender buffer migration to future ingress satellite before a handover, (ii) early loss detection through beam specific timers and (iii) in-network ordered packet delivery. Evaluations are conducted on a simulated satellite network constellation and shows that Satguard minimizes end-to-end delay, improves web page load times and sustains stable high frame rate in WebRTC streams.
### Pros:
- Valuable insights from measurement study.
- Paper agrees to open-source Satguard.
- Paper is on a timely topic and is well written for most parts.
### Cons:
- The overheads introduced by Satguard are not evaluated.
- Some aspects of the design such as connectivity cache are glossed over.
questions: - The in-network packet ordering mechanism on the ground terminal functions like a jitter buffer but for many concurrent flows. What is the additional latency introduced by it to order the packets before delivery to sender? Furthermore, what is the overhead and impact of per flow state it needs to maintain?
- How is the connectivity cache kept updated? Even if a global scheduler is deployed, are there guarantees that pre-calculated connectivity plans will not change dynamically at runtime?
- While the paper notes the additional overhead of marking packets, it doesn’t receive much attention in the evaluation section so it remains unclear whether the approach is practical. Furthermore, how can satellite ISPs reliably determine traffic class to annotate packet headers?
- It seems Satguard assumes that packets can not be lost between intermediate nodes? It is unclear if this is a reasonable assumption to make.
### Writing nits.
- **Sec 1:** statue quo -> status quo
- **Sec 1:** *due to the high LEO dynamics* . “High LEO dynamics” is vague, while the context helps to explain what is meant, but it is better to be clear what this refers to in the intro.
- **Sec 2.2**: *523.3% of medium value* -> 523.3% of median value
- **Sec 4.2**: effectively recovery packet loss -> effectively recover packet loss
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3ebYzt0obL | SatGuard: Concealing Endless and Bursty Packet Losses in LEO Satellite Networks for Delay-Sensitive Web Applications | [
"Jihao Li",
"Hewu Li",
"Zeqi Lai",
"Qian Wu",
"Yijie Liu",
"Qi Zhang",
"Yuanjie Li",
"Jun Liu"
] | Delay-sensitive Web services are crucial applications in emerging low-earth orbit (LEO) satellite networks (LSNs). However, our realworld measurement study based on SpaceX’s Starlink, the most widely used commercial LSN today reveals that the endless and
bursty packet losses over unstable LEO satellite links impose significant challenges on guaranteeing the quality of experience (QoE) of Web applications. We propose SatGuard, a distributed in-orbit loss recovery mechanism that can reduce user-perceived delay by completely concealing packet losses in the unstable and lossy LSN environment from endpoints. Specifically, SatGuard adopts a series of techniques to: (i) correctly migrate on-board packet buffer to support link local retransmission under LEO dynamics; (ii) efficiently detect packet losses on satellite links; and (iii) ensure packets ordering for endpoints. We implement a SatGuard prototype, and conduct extensive trace-driven evaluations guided by public constellation information and real-world measurements. Our experiments demonstrate that, in comparison with other state-of-the-art approaches, SatGuard can significantly improve Web-based QoE, by reducing: (i) up to 48.3% of page load time for Web browsing; and (ii) up to 57.4% end-to-end communication delay for WebRTC. | [
"LEO satellite networks",
"webRTC",
"web browsing",
"loss recovery"
] | https://openreview.net/pdf?id=3ebYzt0obL | 3t4F1Tg2WL | official_review | 1,700,199,892,594 | 3ebYzt0obL | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1935/Reviewer_LxcQ"
] | review: This paper first measures the network and application performance in Starlink, a production LSN. The measurement results show that LSN is limited by frequent handovers in LIS. The handovers cause heavy packet loss and latency. To improve the network performance in LSN, this paper propose SatGuard which recover the lost packets by link local retransmission. In LSN simulation environment, SatGuard can improve the LSN performance compared with other LSN mechanisms. I think this paper is interesting and the evaluation is good.
Strengths:
-This paper reveals that handover between satellite is the main factor that influence the network performance in LSN.
-The evaluation shows that SatGuard (by link local retransmission, packet loss detection) can improve the performance of LSN.
Major weaknesses:
-Link local retransmission mechanisms are not new and have been deployed in wireless network. There are not comparisons (mechanism rule or evaluation) between SatGuard and these ones.
questions: Overall, I like this paper and I thinks its meaningful for the community. And here are some comments which perhaps can be discussed further.
1) In the evaluation, the experiments lack prove of the mechanism, such as how the lost packets are recovered, how the buffer is migrated when handover happens.
2) There are not description of the experiments environments, such as the packets loss rate, bandwidth, delay.
3) Link local retransmission mechanisms are not new and have been deployed in wireless network. There are not comparisons (mechanism rule or evaluation) between SatGuard and these ones.
4) I think some discussion about how the handover affect TCP mechanism (not just loss packet recovery) such as the change of CWND, RTT calculation can make this paper stronger.
5) The handover in LSN is similar to some extent with the mobility between cells in cellular network. And there have been some TCP mechanisms([1],[2],[3]) specially designed for the mobility in cellular network. Perhaps the ideas in these mechanisms can further help to improve the data transfer in LSN.
[1] Leong, Wai Kay, Zixiao Wang, and Ben Leong. "TCP congestion control beyond bandwidth-delay product for mobile cellular networks." In Proceedings of the 13th International Conference on emerging Networking EXperiments and Technologies, pp. 167-179. 2017.
[2] Lee, Jinsung, Sungyong Lee, Jongyun Lee, Sandesh Dhawaskar Sathyanarayana, Hyoyoung Lim, Jihoon Lee, Xiaoqing Zhu et al. "PERCEIVE: deep learning-based cellular uplink prediction using real-time scheduling patterns." In Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services, pp. 377-390. 2020.
[3] Abbasloo, Soheil, Yang Xu, and H. Jonathan Chao. "C2TCP: A flexible cellular TCP to meet stringent delay requirements." IEEE Journal on Selected Areas in Communications 37, no. 4 (2019): 918-932.
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3cna82jLrS | Heterogeneous Subgraph Transformer for Fake News Detection | [
"Yuchen Zhang",
"Xiaoxiao Ma",
"Jia Wu",
"Jian Yang",
"Hao Fan"
] | Fake news is pervasive on social media, inflicting substantial harm on public discourse and societal well-being.
We investigate the explicit structural information and textual features of news pieces by constructing a heterogeneous graph with regard to the relations among news topics, entities, and content.
Through our study, we reveal that fake news can be effectively detected in terms of the atypical heterogeneous subgraphs centered on them.
These subgraphs encapsulate the essential semantics of news articles as well as the intricate relations between different news articles, topics, and entities. However, suffering from the heterogeneity of topics, entities, and news content, exploring such heterogeneous subgraphs remains an open problem.
To bridge the gap, this work proposes a hierarchical framework - heterogeneous subgraph transformer (HeteroSGT) - to exploit subgraphs in our constructed heterogeneous graph.
In HeteroSGT, we first apply a pre-trained dual-attention language model to derive textual features in accordance with word-level and sentence-level semantics.
Then, we employ random walk with restart (RWR) to extract subgraphs centered on each news. The extracted subgraphs are further fed to our proposed subgraph Transformer to encode the subgraph surrounding each news piece for quantifying its authenticity.
Extensive experiments on five real-world datasets demonstrate the superior performance of HeteroSGT over five baselines.
Further case and ablation studies validate our motivation in investigating the subgraphs centered on news and demonstrate that performance improvement stems from our specially designed components. The source code of HeteroSGT is available at https://github.com/HeteroSGT/HeteroSGT} | [
"fake news detection",
"misinformation and disinformation",
"subgraph mining",
"heterogeneous graph"
] | https://openreview.net/pdf?id=3cna82jLrS | hkFe4uyllO | official_review | 1,700,840,654,800 | 3cna82jLrS | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2204/Reviewer_5pb4"
] | review: This paper proposes HeteroSGT, a heterogeneous subgraph transformer that captures both the textual features (sentence-level and word-level) and the structure for fake news detection. They first apply a dual-attention LM to derive textual features and then employ random walks with restart to extract subgraphs centered on each news. The extracted subgraphs are further fed to the proposed subgraph transformer for encoding and detecting the presence of fake news. Extensive experiments show the effectiveness of HeteroSGT over 5 baselines and on 5 datasets.
1. The paper is well-motivated and easy to understand.
2. It tackles an important problem of fake news detection and proposes HeteroSGT, a heterogeneous subgraph transformer that captures both the textual features and the graph structure for fake news detection.
3. It conducts comprehensive experiments and ablation studies on 5 datasets demonstrating the effectiveness of the approach.
Usually, random walk based approaches are time and memory intensive. It is worth comparing their approach with the SOTA approaches in terms of the training/inference time and the memory requirements.
“For MM COVID, the optimal walk length is 11, and the restart probability is 0.1” – did you use the same/different walk length and restart probability for the other 4 datasets? It’s important to report that in the paper.
From figure 5, it is difficult to infer which walk length and restart probability are effective as the line plots show zig-zag trends and for some walk lengths/restart probabilities, m-precision is higher with lower m-recall leading to low m-F1 score.
I appreciate authors' thorough rebuttal response and running additional experiments. Upon adding these additional technical details and experiments, the next version of the paper will be much stronger. My concerns have been addressed successfully. I believe the current ratings give justice to the work.
questions: “For MM COVID, the optimal walk length is 11, and the restart probability is 0.1” – did you use the same/different walk length and restart probability for the other 4 datasets? It’s important to report that in the paper.
Random walk-based approaches are usually time and memory intensive. What do you think? Is it worth comparing the proposed approach with the SOTA approaches in terms of the training/inference time and the memory requirements?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3cna82jLrS | Heterogeneous Subgraph Transformer for Fake News Detection | [
"Yuchen Zhang",
"Xiaoxiao Ma",
"Jia Wu",
"Jian Yang",
"Hao Fan"
] | Fake news is pervasive on social media, inflicting substantial harm on public discourse and societal well-being.
We investigate the explicit structural information and textual features of news pieces by constructing a heterogeneous graph with regard to the relations among news topics, entities, and content.
Through our study, we reveal that fake news can be effectively detected in terms of the atypical heterogeneous subgraphs centered on them.
These subgraphs encapsulate the essential semantics of news articles as well as the intricate relations between different news articles, topics, and entities. However, suffering from the heterogeneity of topics, entities, and news content, exploring such heterogeneous subgraphs remains an open problem.
To bridge the gap, this work proposes a hierarchical framework - heterogeneous subgraph transformer (HeteroSGT) - to exploit subgraphs in our constructed heterogeneous graph.
In HeteroSGT, we first apply a pre-trained dual-attention language model to derive textual features in accordance with word-level and sentence-level semantics.
Then, we employ random walk with restart (RWR) to extract subgraphs centered on each news. The extracted subgraphs are further fed to our proposed subgraph Transformer to encode the subgraph surrounding each news piece for quantifying its authenticity.
Extensive experiments on five real-world datasets demonstrate the superior performance of HeteroSGT over five baselines.
Further case and ablation studies validate our motivation in investigating the subgraphs centered on news and demonstrate that performance improvement stems from our specially designed components. The source code of HeteroSGT is available at https://github.com/HeteroSGT/HeteroSGT} | [
"fake news detection",
"misinformation and disinformation",
"subgraph mining",
"heterogeneous graph"
] | https://openreview.net/pdf?id=3cna82jLrS | azHRtq0ZbU | decision | 1,705,909,221,821 | 3cna82jLrS | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: Our decision is to accept. Please see the AC's review below and improve the work considering that and the reviewers' feedback for cemera-ready submission. We also ask that you more clearly state and defend assumptions around atypical fake news subgraphs in your camera-ready submission.
"This paper proposes a heterogeneous subgraph transformer that captures textual and structural features for fake news detection. They use a dual-attention LM to derive the textual features and then use random walks with restart to extract subgraphs centered on each news. The extracted subgraphs are further fed to the proposed subgraph transformer for encoding and detecting the presence of fake news. The experimental results show the effectiveness of the tool over several baselines and datasets.
Strengths:
- The paper address a relevant problem, fake news detection, with a novel approach
- It performs extensive experiments and ablation studies on 5 baselines and datasets demonstrating the effectiveness of the approach.
- The model is well described and includes a code repository, providing good reproducibility
- The paper is well-motivated and easy to read.
Weaknesses:
- The research is based on the strong assumption that all fake news will have atypical graph structures and that true news will not have them. In general this will not be the case and hence more work is needed to understand the impact of false negatives and positives.
- The components of the model can be improved as there are newer alternatives. Why those were not used?
- In spite of the combination of techniques, the improvements are incremental
The rebuttal generated new results that should be included in the final version if the paper is accepted.
Scope: 4; Novelty: 5; Quality: 4" |
3cna82jLrS | Heterogeneous Subgraph Transformer for Fake News Detection | [
"Yuchen Zhang",
"Xiaoxiao Ma",
"Jia Wu",
"Jian Yang",
"Hao Fan"
] | Fake news is pervasive on social media, inflicting substantial harm on public discourse and societal well-being.
We investigate the explicit structural information and textual features of news pieces by constructing a heterogeneous graph with regard to the relations among news topics, entities, and content.
Through our study, we reveal that fake news can be effectively detected in terms of the atypical heterogeneous subgraphs centered on them.
These subgraphs encapsulate the essential semantics of news articles as well as the intricate relations between different news articles, topics, and entities. However, suffering from the heterogeneity of topics, entities, and news content, exploring such heterogeneous subgraphs remains an open problem.
To bridge the gap, this work proposes a hierarchical framework - heterogeneous subgraph transformer (HeteroSGT) - to exploit subgraphs in our constructed heterogeneous graph.
In HeteroSGT, we first apply a pre-trained dual-attention language model to derive textual features in accordance with word-level and sentence-level semantics.
Then, we employ random walk with restart (RWR) to extract subgraphs centered on each news. The extracted subgraphs are further fed to our proposed subgraph Transformer to encode the subgraph surrounding each news piece for quantifying its authenticity.
Extensive experiments on five real-world datasets demonstrate the superior performance of HeteroSGT over five baselines.
Further case and ablation studies validate our motivation in investigating the subgraphs centered on news and demonstrate that performance improvement stems from our specially designed components. The source code of HeteroSGT is available at https://github.com/HeteroSGT/HeteroSGT} | [
"fake news detection",
"misinformation and disinformation",
"subgraph mining",
"heterogeneous graph"
] | https://openreview.net/pdf?id=3cna82jLrS | Py68i4D3wh | official_review | 1,701,061,783,313 | 3cna82jLrS | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2204/Reviewer_ny21"
] | review: This paper proposes a heterogeneous subgraph transformer (HeteroSGT) to detect fake news, which integrates pre-trained dual-attention module to get node representation and random walks to get subraph sequence. Here are some pros and cons of this paper:
Pros:
1. This paper proposes a novel heterogeneous subgraph transformer (HeteroSGT) - to exploit subgraphs in constructed heterogeneous graph for fake news detection.
2. This paper also performs a comprehensive results analysis and ablation study to show the effectiveness of HeteroSGT.
However, this paper also has some cons:
1. It seems that edges are not considered in the feature learning process. How to link the entities and topics with different edges and how to learn representations via BERT are not clearly justified.
2. Why not use BERT or Sentence-BERT in 3.3.1 and 3.3.2 rather than Bi-GRU? Since Bi-GRU is not the latest and most effective technology.
3. It’s not clear why RWR is used in the METHODOLOGY section.
4. Baseline settings are somewhat simple, missing some SOTA comparisons such as KAN, dEFEND.
questions: 1. RWR is essentially a random walk and why can it confirm position information? Will there not be a situation where a topic or entity at a later position in the sequence is more relevant to the news node? Assume that these entities are directly connected to news (line456).
2. Why not use BERTopic rather than LDA?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3cna82jLrS | Heterogeneous Subgraph Transformer for Fake News Detection | [
"Yuchen Zhang",
"Xiaoxiao Ma",
"Jia Wu",
"Jian Yang",
"Hao Fan"
] | Fake news is pervasive on social media, inflicting substantial harm on public discourse and societal well-being.
We investigate the explicit structural information and textual features of news pieces by constructing a heterogeneous graph with regard to the relations among news topics, entities, and content.
Through our study, we reveal that fake news can be effectively detected in terms of the atypical heterogeneous subgraphs centered on them.
These subgraphs encapsulate the essential semantics of news articles as well as the intricate relations between different news articles, topics, and entities. However, suffering from the heterogeneity of topics, entities, and news content, exploring such heterogeneous subgraphs remains an open problem.
To bridge the gap, this work proposes a hierarchical framework - heterogeneous subgraph transformer (HeteroSGT) - to exploit subgraphs in our constructed heterogeneous graph.
In HeteroSGT, we first apply a pre-trained dual-attention language model to derive textual features in accordance with word-level and sentence-level semantics.
Then, we employ random walk with restart (RWR) to extract subgraphs centered on each news. The extracted subgraphs are further fed to our proposed subgraph Transformer to encode the subgraph surrounding each news piece for quantifying its authenticity.
Extensive experiments on five real-world datasets demonstrate the superior performance of HeteroSGT over five baselines.
Further case and ablation studies validate our motivation in investigating the subgraphs centered on news and demonstrate that performance improvement stems from our specially designed components. The source code of HeteroSGT is available at https://github.com/HeteroSGT/HeteroSGT} | [
"fake news detection",
"misinformation and disinformation",
"subgraph mining",
"heterogeneous graph"
] | https://openreview.net/pdf?id=3cna82jLrS | MdKu5PwGC9 | official_review | 1,700,826,518,517 | 3cna82jLrS | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2204/Reviewer_PirY"
] | review: ## Summary:
The authors propose a novel method for fake news classification that combines news articles, textual features, topics, and entity annotations in a heterogeneous network repreentation. A classifier is trained to recognize strucural outliers in this network representation and utilize them for fake news classification. The performance of the proposed method is demonstrated to improve over the SOTA on several benchmark data sets.
## Overall recommendation:
On the upside, the paper is well written, the proposed method is novel in design and provides good performance, and the provided experimentation is extensive. On the downside, the paper is incremental in a well researched task and the premise behind atypical subgraph-based classification is not sufficiently explored. Overall, the novelty is limited, but the paper would be a suitable fit to the program.
## Strengths:
* S1 By focusing on the classification of mis/disinformation, the paper addresses a timely and relevant topic
* S2 The proposed model utilizes a novel approach that the authors demonstrate to provide SOTA performance.
* S3 The paper includes exensive experiments, incl. ablative testing
* S4 The model description is extensive and a code repository is included, providing excellent reproducibility
## Weaknesses:
W1 Veracity of the central premise / assumption
I have concerns regarding the central premise that the authors use to motivate their approach, namely the focus on atypical graph structures. While I agree that some fake news is likely going to results in atypical graph structures, I am not convinced that this is necessarily always the case. In particular, would one not expect this approach to fail spectacularly in the case of long running propaganda campaigns? Conversely, some of the most influential true breaking news stories will also cause atypical patterns by linking hitherto unconnected entities. It would be good to perform an initial analysis into this premise, as well as an error analysis to demonstrate that the method does not falsely classify the most important real news alongside fake news. Alternatively, it would be good to explore the overlap of correctly classified examples with regard to prior baselines to establish the novelty.
W2 Model components
While the model description itself is extensive, little attention is given to some of the key components. Why is such an old tool like LDA used to determine topics when we have newer alternatives? I can see how the performance might still be good enough or even better, but it would be good to see this tested empirically. It would also be good to have an estimate of error propagation for entity detection and its impact on the proposed model.
W3 Incrementality
While the model itself is new, it is pushing performance on a well-researched task. Overall, despite SOTA performance, the paper is incremental in method development (i.e., the contribution boils down to how one can include even more features/information as input for a transformer arcitecture) and in performance as compared to existing baselines.
## Minor remarks:
* In line 135, "identifying and matching these subgraphs rely on the investigation of the heterogeneous graph, which is NP-hard" is quite vague. Can you be more concrete?
* In line 485 "we take the advantage of" -> "we take advantage of the"
* Figure 3 is quite difficult to read
## Post review update:
Increased novelty score (3 -> 4) after author feedback
questions: Q1: In case you have performed further investigation into the suitability of using atypical graphs for fake news classification beyond measurin the model's performance, could you please share them?
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3cna82jLrS | Heterogeneous Subgraph Transformer for Fake News Detection | [
"Yuchen Zhang",
"Xiaoxiao Ma",
"Jia Wu",
"Jian Yang",
"Hao Fan"
] | Fake news is pervasive on social media, inflicting substantial harm on public discourse and societal well-being.
We investigate the explicit structural information and textual features of news pieces by constructing a heterogeneous graph with regard to the relations among news topics, entities, and content.
Through our study, we reveal that fake news can be effectively detected in terms of the atypical heterogeneous subgraphs centered on them.
These subgraphs encapsulate the essential semantics of news articles as well as the intricate relations between different news articles, topics, and entities. However, suffering from the heterogeneity of topics, entities, and news content, exploring such heterogeneous subgraphs remains an open problem.
To bridge the gap, this work proposes a hierarchical framework - heterogeneous subgraph transformer (HeteroSGT) - to exploit subgraphs in our constructed heterogeneous graph.
In HeteroSGT, we first apply a pre-trained dual-attention language model to derive textual features in accordance with word-level and sentence-level semantics.
Then, we employ random walk with restart (RWR) to extract subgraphs centered on each news. The extracted subgraphs are further fed to our proposed subgraph Transformer to encode the subgraph surrounding each news piece for quantifying its authenticity.
Extensive experiments on five real-world datasets demonstrate the superior performance of HeteroSGT over five baselines.
Further case and ablation studies validate our motivation in investigating the subgraphs centered on news and demonstrate that performance improvement stems from our specially designed components. The source code of HeteroSGT is available at https://github.com/HeteroSGT/HeteroSGT} | [
"fake news detection",
"misinformation and disinformation",
"subgraph mining",
"heterogeneous graph"
] | https://openreview.net/pdf?id=3cna82jLrS | 1gbWJOGvcb | official_review | 1,700,663,861,860 | 3cna82jLrS | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2204/Reviewer_89hq"
] | review: The paper proposes the detection of fake news by identifying irregular subgraph structures and features in a heterogeneous graph. The graph captures word- and sentence-level semantic patterns and structural information among news, entities, and topics. The model is evaluated across 5 datasets that span various subject areas. The authors also perform 4 case studies, an ablation study, and a hyperparameter sensitivity analysis to better understand the strengths and limitations of their model.
The paper is written very well overall with very detailed experiments. Although the individual parts themselves are not necessarily new approaches (i.e., random walk subgraph sampling, heterogeneous graph transformer) there are two interesting tweaks that improve the model performance. The first is to leverage the random walk subgraph sampling to provide the relative positional encoding for use with the transformer and which embedding to use in the subgraph transformer to use as the representation without incurring additional computational cost.
The only complaint is that some of the discussed related work isn't compared in the text and some of the baseline methods used are not discussed in the related work section either. This seems to cause a slight inconsistency between the two.
questions: (1) Why is the bidirectional GRU used instead of say pre-trained embedding models?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3ZEQ0TENg1 | Taxonomy Completion via Implicit Concept Insertion | [
"Jingchuan Shi",
"Hang Dong",
"Jiaoyan Chen",
"ZHE WU",
"Ian Horrocks"
] | High quality taxonomies play a critical role in various domains such as e-commerce, web search and ontology engineering. While there has been extensive work on expanding taxonomies from externally mined data, there has been less attention paid to enriching taxonomies by exploiting existing concepts and structure within the taxonomy. In this work, we show the usefulness of this kind of enrichment, and explore its viability with a new taxonomy completion system ICON (Implicit CONcept Insertion). ICON generates new concepts by identifying implicit concepts based the existing concept structure, generating names for such concepts and inserting them in appropriate positions within the taxonomy. ICON integrates techniques from entity retrieval, text summary, and subsumption prediction; this modular architecture offers high flexibility while achieving state-of-the-art performance. We have evaluated ICON on two e-commerce taxonomies, and the results show that it offers significant advantages over strong baselines including recent taxonomy completion models and the large language model, ChatGPT. | [
"Taxonomy Completion",
"Taxonomy Enrichment",
"Ontology Engineering",
"Text Summarisation",
"Pre-trained Language Model"
] | https://openreview.net/pdf?id=3ZEQ0TENg1 | t6jvMqXeAk | official_review | 1,700,657,306,335 | 3ZEQ0TENg1 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1605/Reviewer_og6N"
] | review: The paper proposes a solution for a problem that the authors define as “implicit concept insertion” which consists on adding concepts to an incomplete taxonomy. The authors apply their approach to 2 real world taxonomies: one from eBay in English and another from alibaba in Chinese.
The authors divide the task in 3 steps: identifying implicit concepts, naming implicit concepts and inserting them in the taxonomy. Those steps are further detailed and the authors describe the different algorithms and approaches employed.
I appreciate that the paper is very well written and I didn’t find any important typo. The motivating example with clothes is clear and the description of related work and preliminaries is also useful. Maybe one minor point is that section 3.2 titled “problem statement” only refers to the insertion of a concept q in the taxonomy and not the whole problem statement described in the paper.
I found quite interesting that the authors also compare their results with ChatGPT although looking to the prompts described in appendix C, I am not quite sure about the quality of the results.
One issue for me is that the authors don’t include a reference to any source code or dataset that could be used to check the reproducibility of their approach. The authors include a simple section "implementation details" in appendix B and another one about the hyperparameters in appendix A but there is not further details about the implementation and its availability.
After reading the paper, I wonder if the results could be improved with the use of multilingual taxonomies, i. E. taxonomies whose concepts could be labeled, for example, in both Chinese and English.
questions: - Are the source code/datasets available?
ethics_review_flag: No
ethics_review_description: I think there are no ethical issues, although the authors indicate that they used the platform appen to obtain human labelling. I am not sure if that could have some ethical concern.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
3ZEQ0TENg1 | Taxonomy Completion via Implicit Concept Insertion | [
"Jingchuan Shi",
"Hang Dong",
"Jiaoyan Chen",
"ZHE WU",
"Ian Horrocks"
] | High quality taxonomies play a critical role in various domains such as e-commerce, web search and ontology engineering. While there has been extensive work on expanding taxonomies from externally mined data, there has been less attention paid to enriching taxonomies by exploiting existing concepts and structure within the taxonomy. In this work, we show the usefulness of this kind of enrichment, and explore its viability with a new taxonomy completion system ICON (Implicit CONcept Insertion). ICON generates new concepts by identifying implicit concepts based the existing concept structure, generating names for such concepts and inserting them in appropriate positions within the taxonomy. ICON integrates techniques from entity retrieval, text summary, and subsumption prediction; this modular architecture offers high flexibility while achieving state-of-the-art performance. We have evaluated ICON on two e-commerce taxonomies, and the results show that it offers significant advantages over strong baselines including recent taxonomy completion models and the large language model, ChatGPT. | [
"Taxonomy Completion",
"Taxonomy Enrichment",
"Ontology Engineering",
"Text Summarisation",
"Pre-trained Language Model"
] | https://openreview.net/pdf?id=3ZEQ0TENg1 | pgfy4xLJbD | official_review | 1,700,773,814,673 | 3ZEQ0TENg1 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1605/Reviewer_de3b"
] | review: The paper aims to complete existing taxonomies by 1) discovering implicit concepts that can be hidden, 2) naming them and 3) inserting them at the right place. The idea of “implicit concept” is new compared to the state of art. The proposal named ICON for Implicit CONcept Insertion, combines different well-known algorithms to solve the three tasks : Entity retrieval and KNN for discovering implicit concepts, text summarization for naming the implicit concepts, and BERTSubs to insert new concepts in the right place.
The approach is validated on two real taxonomies: the one from ebay in english and the other one form AliOpenKG in Chinese.
Strong points:
* The paper is relevant for the track Semantic and knowledge of the conference.
* The paper is nicely written, with convincing examples, and easy to follow.
* The idea of implicit concepts is original and interesting.
* The proposal is well positioned vs SOTA approaches.
Weak points:
* Thanks to the (nice) examples of the introduction, we have an intuitive understanding of what is an implicit concept, but there is no real definition of what it is. As it is, implicit concepts are defined by examples. It is written “careful identification of intermediate nodes is required in order to find useful implicit concepts that can improve the taxonomy’s quality and benefit downstream applications”. I fully agree with that, but what are the expected formal properties to be filled by an implicit concept?
* The proposal in section 4 is presented mainly by explaining *how* the three subtasks are realised, but not really why it should be done like that, especially for the first subtask : implicit concept discovery. For example, it is not obvious for me that implicit concepts should appear in KNN-cluster: It is written : “The reason we select similar concepts is to increase the chance a subset built from these concepts identifies an implicit concept.” Does this method identify all “good” implicit concepts or just some ?? How can we be sure that we discovered an interesting implicit concept like that?
* The experiment validates the proposal by artificially creating implicit concepts by masking” some of the existing taxonomy concepts. I agree that this method is a way to validate the ICON approach, but it does not give evidence of implicit concepts that may not exist in Ebay and AliOpenKG. It is a serious issue for me as we cannot see the impact of ICON on existing taxonomies. I expect more than just validating the approach. I expect to see how ICON can improve existing taxonomies by inserting new pertinent concepts in the taxonomy. To be more general, if implicit concept is an appealing idea, we don’t know if its usage change marginally or drastically existing taxonomy ie. We don't know the usefulness of the whole approach.
questions: * How ICON impact the Ebay taxonomy? How many useful new implicit concepts you discovered in ebay? Can we see them somewhere?
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3ZEQ0TENg1 | Taxonomy Completion via Implicit Concept Insertion | [
"Jingchuan Shi",
"Hang Dong",
"Jiaoyan Chen",
"ZHE WU",
"Ian Horrocks"
] | High quality taxonomies play a critical role in various domains such as e-commerce, web search and ontology engineering. While there has been extensive work on expanding taxonomies from externally mined data, there has been less attention paid to enriching taxonomies by exploiting existing concepts and structure within the taxonomy. In this work, we show the usefulness of this kind of enrichment, and explore its viability with a new taxonomy completion system ICON (Implicit CONcept Insertion). ICON generates new concepts by identifying implicit concepts based the existing concept structure, generating names for such concepts and inserting them in appropriate positions within the taxonomy. ICON integrates techniques from entity retrieval, text summary, and subsumption prediction; this modular architecture offers high flexibility while achieving state-of-the-art performance. We have evaluated ICON on two e-commerce taxonomies, and the results show that it offers significant advantages over strong baselines including recent taxonomy completion models and the large language model, ChatGPT. | [
"Taxonomy Completion",
"Taxonomy Enrichment",
"Ontology Engineering",
"Text Summarisation",
"Pre-trained Language Model"
] | https://openreview.net/pdf?id=3ZEQ0TENg1 | joXYBvYWUx | decision | 1,705,909,231,808 | 3ZEQ0TENg1 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: The paper proposes a method to extend an existing (incomplete) taxonomy by inserting new concepts in its hierarchical structure. The paper is clear, and proposals are innovative and quite appropriately tested on relevant taxonomies (even if the evaluation could be more convincing, or at least convincingly written). The topic is quite specific, but as many in the community do work with taxonomies that are very imperfect, the paper should be read with interest by more than a few. |
3ZEQ0TENg1 | Taxonomy Completion via Implicit Concept Insertion | [
"Jingchuan Shi",
"Hang Dong",
"Jiaoyan Chen",
"ZHE WU",
"Ian Horrocks"
] | High quality taxonomies play a critical role in various domains such as e-commerce, web search and ontology engineering. While there has been extensive work on expanding taxonomies from externally mined data, there has been less attention paid to enriching taxonomies by exploiting existing concepts and structure within the taxonomy. In this work, we show the usefulness of this kind of enrichment, and explore its viability with a new taxonomy completion system ICON (Implicit CONcept Insertion). ICON generates new concepts by identifying implicit concepts based the existing concept structure, generating names for such concepts and inserting them in appropriate positions within the taxonomy. ICON integrates techniques from entity retrieval, text summary, and subsumption prediction; this modular architecture offers high flexibility while achieving state-of-the-art performance. We have evaluated ICON on two e-commerce taxonomies, and the results show that it offers significant advantages over strong baselines including recent taxonomy completion models and the large language model, ChatGPT. | [
"Taxonomy Completion",
"Taxonomy Enrichment",
"Ontology Engineering",
"Text Summarisation",
"Pre-trained Language Model"
] | https://openreview.net/pdf?id=3ZEQ0TENg1 | eE1r9RmsWH | official_review | 1,700,130,043,370 | 3ZEQ0TENg1 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1605/Reviewer_C743"
] | review: This paper proposed the algorithm to discover implicit concepts in taxonomies with deep learning methods.
The unique feature of the work is the combination of three different learning models corresponding to the step of clustering, generation of new concepts, and locating them in taxonomies. The result show positive comparing with the existing methods and Chat-GPT.
The paper is well written technically so that the technical achievement is clear.
On the other land, the limitation of applicability is not clear. One reason is It that the targeted datasets are only e-commerce ones so that it is unclear whether the proposed method is meaningful beyond e-commerce domain. The other is that the targeted concepts are only the second layer concepts from the bottom in taxonomies. It is not certain that it would be applicable to other concepts.
The more fundamental question is whether it is really discovery of implicit concepts. The experiment is conducted to evaluate how it can re-discover artificially hidden concepts in the taxonomies. It is meaningful as the performance of the algorithm but not as the discovery of new hidden concepts.
Rather, RQ1, RQ2, and RQ3 with the experiment indicate how it can refine human-curated taxonomies automatically. The evaluation for RQ2 allows semantic similarity in concept names and one for RQ3 includes the discovery of implicit sub-concepts. The refinement of taxonomies itself is the important task so that the work would be re-organized for this direction.
questions: 1.
Do you have any indication how the algorithm would discover new hidden concepts rather than the re-discovery of hidden concepts?
2.
Have you tested it with datasets in other domains, in particular, those with deeper hierarchy? The datasets in the paper are shallow as taxonomy.
ethics_review_flag: No
ethics_review_description: None
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3ZEQ0TENg1 | Taxonomy Completion via Implicit Concept Insertion | [
"Jingchuan Shi",
"Hang Dong",
"Jiaoyan Chen",
"ZHE WU",
"Ian Horrocks"
] | High quality taxonomies play a critical role in various domains such as e-commerce, web search and ontology engineering. While there has been extensive work on expanding taxonomies from externally mined data, there has been less attention paid to enriching taxonomies by exploiting existing concepts and structure within the taxonomy. In this work, we show the usefulness of this kind of enrichment, and explore its viability with a new taxonomy completion system ICON (Implicit CONcept Insertion). ICON generates new concepts by identifying implicit concepts based the existing concept structure, generating names for such concepts and inserting them in appropriate positions within the taxonomy. ICON integrates techniques from entity retrieval, text summary, and subsumption prediction; this modular architecture offers high flexibility while achieving state-of-the-art performance. We have evaluated ICON on two e-commerce taxonomies, and the results show that it offers significant advantages over strong baselines including recent taxonomy completion models and the large language model, ChatGPT. | [
"Taxonomy Completion",
"Taxonomy Enrichment",
"Ontology Engineering",
"Text Summarisation",
"Pre-trained Language Model"
] | https://openreview.net/pdf?id=3ZEQ0TENg1 | WBJwbAIEhG | official_review | 1,701,209,615,982 | 3ZEQ0TENg1 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1605/Reviewer_GSGN"
] | review: The paper describes a technique for integrating taxonomies with additional terms emerging from existing terms in the taxonomy. It has been evaluated using popular large taxonomies available in the world of online commerce, which helps place this work aptly in the context of TheWebConf and of this track.
The paper is overall well-structured and easy to follow, without significant language issues (only, please correct line 15 in the abstract: "based *on* the existing concept structure". This helped formulate a few points where further clarification would be helpful and help a re-assessment of this work which is otherwise valid and pertinent. See the following Questions:
questions: - The Related Work section is a little disappointing in that it doesn't quite offer a critique of the state of the art, highlighing where the proposed approach differs and is expected to be novel or overcome limitations (also GenTaxo is only discussed outside that section). I don't think it would be very burdensome or lengthy to adapt the section: would it be possible to do so?
- Where does the Tolerance factor described in page 4 fit in Algorithm 1? Is it part of the SUB_MODEL?
- Where an evaluation is carried out in terms of precision, recall and F-Score, it is not clear to me how exactly the necessary correctness is assessed. Was it a user evaluation? Does the Human Labelling section in Appendix D have anything to do with it? Please explain.
- I get the GenTaxo as a term of comparison, but why evaluate this approach against ChatGPT which, as admitted in the evaluation section, suffers from a handicap (size of corpus, not being re-trainable) that doesn't place it optimally in the use case at hand?
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3ZEQ0TENg1 | Taxonomy Completion via Implicit Concept Insertion | [
"Jingchuan Shi",
"Hang Dong",
"Jiaoyan Chen",
"ZHE WU",
"Ian Horrocks"
] | High quality taxonomies play a critical role in various domains such as e-commerce, web search and ontology engineering. While there has been extensive work on expanding taxonomies from externally mined data, there has been less attention paid to enriching taxonomies by exploiting existing concepts and structure within the taxonomy. In this work, we show the usefulness of this kind of enrichment, and explore its viability with a new taxonomy completion system ICON (Implicit CONcept Insertion). ICON generates new concepts by identifying implicit concepts based the existing concept structure, generating names for such concepts and inserting them in appropriate positions within the taxonomy. ICON integrates techniques from entity retrieval, text summary, and subsumption prediction; this modular architecture offers high flexibility while achieving state-of-the-art performance. We have evaluated ICON on two e-commerce taxonomies, and the results show that it offers significant advantages over strong baselines including recent taxonomy completion models and the large language model, ChatGPT. | [
"Taxonomy Completion",
"Taxonomy Enrichment",
"Ontology Engineering",
"Text Summarisation",
"Pre-trained Language Model"
] | https://openreview.net/pdf?id=3ZEQ0TENg1 | TyMIhHkB87 | official_review | 1,700,741,262,966 | 3ZEQ0TENg1 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1605/Reviewer_c1vu"
] | review: The authors study the phenomenon of implicit concepts within taxonomies and propose the task of implicit taxonomy completion along three sub-tasks in a semiautomatic manner supported by a system called ICON which combines entity retrieval, text summary, and subsumption prediction approaches. The authors tested and evaluated their framework against real world taxonomies with promising results.
Quality:
The paper addresses a very interesting, obviously under researched topic at a high formal level. The research questions are well defined, formalized and operationalized, and an evaluation of the proposed framework has been carried out. The research is well contextualized in the related work and white spots in the state of the art, limitations of the proposed solution and future work have been addressed. Nevertheless, there is room for improvement in the latter aspect, preferably with a more substantial reference to the convergence of graph-technologies and LLMs for the specific purpose of this paper.
Clarity:
The paper is well written, and all addressed concepts are well described and formalized, especially given the supplementary material provided in the Appendices of the paper. It is a pity that the framework has not been released under an appropriate license for testing purposes (at least not information on this is available in the paper).
Originality:
The paper introduces a sound approach for implicit concept detection and taxonomy enrichment which is worth discussing. Although the claim that there is little work available in this area can be confirmed the authors should additionally scan the following literature for relevance:
https://arxiv.org/abs/2202.00070
https://www.iti.gr/~bmezaris/publications/csvt19_preprint.pdf
Gözüaçik, Ö., & Can, F. (2020). Concept learning using one-class classifiers for implicit drift detection in evolving data streams. Artificial Intelligence Review, 54, 3725 - 3747.
Significance:
The paper contains relevant questions and methodological inputs contributing and advancing the subject matter.
questions: Given the growing interest in the convergence of graph-based data representation and probabilistic methods, can you be more specific on how graph-based approaches shall and could be used to improve your suggested approach? Where do you see benefits? What are the trade-offs?
ethics_review_flag: No
ethics_review_description: There are noo ethical issues related to this topic.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3YYtaQir1f | Fingerprinting the Shadows: Unmasking Malicious Servers with Machine Learning-Powered TLS Analysis | [
"Andreas Theofanous",
"Eva Papadogiannaki",
"Alexander Shevtsov",
"Sotiris Ioannidis"
] | Over the last few years, the adoption of encryption in network
traffic has been constantly increasing. The percentage of encrypted
communications worldwide is estimated to exceed 90%. Although
network encryption protocols mainly aim to secure and protect
users’ online activities and communications, they have been exploited
by malicious entities that hide their presence in the network.
It was estimated that in 2022, more than 85% of the malware used
encrypted communication channels.
In this work, we examine state-of-the-art fingerprinting techniques
and extend a machine learning pipeline for effective and practical
server classification. Specifically, we actively contact servers to
initiate communication over the TLS protocol and through exhaustive
requests, we extract communication metadata. We investigate
which features favor an effective classification, while we utilize and
evaluate state-of-the-art approaches. Our extended pipeline can
indicate whether a server is malicious or not with 91% precision and
95% recall, while it can specify the botnet family with 99% precision
and 99% recall. | [
"TLS",
"TLS Fingerprinting",
"Active Probing",
"Botnet",
"Command and Control",
"Server Characterization",
"Machine Learning",
"Explainability"
] | https://openreview.net/pdf?id=3YYtaQir1f | uwfmH1ZmvE | official_review | 1,700,855,329,041 | 3YYtaQir1f | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2531/Reviewer_7oou"
] | review: This research aims to identify servers that are used for malicious purposes such as C&C by leveraging TLS fingerprinting through machine learning techniques. Notably, they utilize active probing of the servers using crafted TLS Client Hello requests to monitor the gather the difference in the responses sent by the server. Later, features identified from these responses per server are used to train the ML models. They obtain the ground truth for benign servers by probing servers from Tranco’s top 10K ranked domains. Through this methodology, their approaches prove successful in classifying benign and malicious servers. Further, they also report that such TLS-based features can be successfully used to identify servers of different botnet families.
Strengths:
1) Interesting insights on the difference in configuration between benign and malicious servers
2) Uses a large dataset of benign and malicious samples to test their approach. Dataset on making it public would prove useful to the community.
3) Experiments, feature selection and model selection are conducted methodically.
Weaknesses:
1) Focuses only on C&C servers
2) Detection approach is not robust enough and can be evaded by modifying server configurations to mimic legitimate servers
questions: 1) Provide more clarifications on the data collected from blocklists. Specifically, are these IPs solely associated with Botnet servers
2) While calculating overlaps of IPs between databases, it would be useful to identify how many of these malicious IPs are owned by popular services and draw relations (if any) between the overlaps between these malicious servers and the benign servers
3) Provide more discussion on how robust is this classification if attackers choose to mimic legitimate behavior.
**This is to acknowledge that I have read the rebuttal.**
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3YYtaQir1f | Fingerprinting the Shadows: Unmasking Malicious Servers with Machine Learning-Powered TLS Analysis | [
"Andreas Theofanous",
"Eva Papadogiannaki",
"Alexander Shevtsov",
"Sotiris Ioannidis"
] | Over the last few years, the adoption of encryption in network
traffic has been constantly increasing. The percentage of encrypted
communications worldwide is estimated to exceed 90%. Although
network encryption protocols mainly aim to secure and protect
users’ online activities and communications, they have been exploited
by malicious entities that hide their presence in the network.
It was estimated that in 2022, more than 85% of the malware used
encrypted communication channels.
In this work, we examine state-of-the-art fingerprinting techniques
and extend a machine learning pipeline for effective and practical
server classification. Specifically, we actively contact servers to
initiate communication over the TLS protocol and through exhaustive
requests, we extract communication metadata. We investigate
which features favor an effective classification, while we utilize and
evaluate state-of-the-art approaches. Our extended pipeline can
indicate whether a server is malicious or not with 91% precision and
95% recall, while it can specify the botnet family with 99% precision
and 99% recall. | [
"TLS",
"TLS Fingerprinting",
"Active Probing",
"Botnet",
"Command and Control",
"Server Characterization",
"Machine Learning",
"Explainability"
] | https://openreview.net/pdf?id=3YYtaQir1f | tklKlYE02b | official_review | 1,700,250,575,619 | 3YYtaQir1f | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2531/Reviewer_RP9R"
] | review: This work aims to classify malicious servers and their particular categories using machine learning. The machine learning features were learned from server_hello messages, as responses to fabricated client_hello messages, while the later ones are used for forcing servers to select possibly different cipher suites for TLS connection. The experiments were performed on websites of Tranco 10K list, Feodo, and Blocklists. Overall, the results look promising but maybe incomplete.
Pros:
1. Using cipher suites and other parameters, e.g., those about the certificate, for the classification of malicious servers, which has not yet been studied.
2. A proper selection of benign and malicious datasets.
Cons:
1. [57] is highly likely the baseline to compare with, which is missing in this work, since [57] adopted the same strategy to obtain parameters from TLS handshake (although for a different purpose).
2. When saying "no prior work has delved into the use of machine learning models for ...Our research seeks to address this limitation..." in line 547, it is unclear about why not using machine learning becomes a limitation.
3. In line 317 it is mentioned to "evaluate the effectiveness of various state-of-the-art approaches" but I could not find such an evaluation in the paper.
4. It is unclear about how the unlabeled Blocklists dataset served experiments regarding machine learning and fingerprinting in Sections 6 and 7, as the paper stated "Blocklists ... refers to these unlabled lists" in line 279, while no manual labeling was mentioned in the paper.
5. It is better to provide more details about the "additional layer of complexity" when using machine learning in line 181.
6. Why does it use 10 but not 11 or 9 client_hello messages in line 289?
7. In Section 4.3, how large the sets of incomplete, disrupted, and those don't match up or follow the rules are? Can they serve fingerprinting? In browser fingerprinting, no fingerprint can also be fingerprinted.
8. "20,000 features" in line 400 should be "20,700 features".
9. Why does it not just use the hash, but perform a concatenation on the hash, in lines 670 to 673?
questions: The cons above.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3YYtaQir1f | Fingerprinting the Shadows: Unmasking Malicious Servers with Machine Learning-Powered TLS Analysis | [
"Andreas Theofanous",
"Eva Papadogiannaki",
"Alexander Shevtsov",
"Sotiris Ioannidis"
] | Over the last few years, the adoption of encryption in network
traffic has been constantly increasing. The percentage of encrypted
communications worldwide is estimated to exceed 90%. Although
network encryption protocols mainly aim to secure and protect
users’ online activities and communications, they have been exploited
by malicious entities that hide their presence in the network.
It was estimated that in 2022, more than 85% of the malware used
encrypted communication channels.
In this work, we examine state-of-the-art fingerprinting techniques
and extend a machine learning pipeline for effective and practical
server classification. Specifically, we actively contact servers to
initiate communication over the TLS protocol and through exhaustive
requests, we extract communication metadata. We investigate
which features favor an effective classification, while we utilize and
evaluate state-of-the-art approaches. Our extended pipeline can
indicate whether a server is malicious or not with 91% precision and
95% recall, while it can specify the botnet family with 99% precision
and 99% recall. | [
"TLS",
"TLS Fingerprinting",
"Active Probing",
"Botnet",
"Command and Control",
"Server Characterization",
"Machine Learning",
"Explainability"
] | https://openreview.net/pdf?id=3YYtaQir1f | nZuqsfpyTA | official_review | 1,700,488,022,929 | 3YYtaQir1f | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2531/Reviewer_3dyX"
] | review: The paper presents a study on the fingerprintability of server configurations with respect to the TLS protocol, to classify them as either benign or malicious. The main contribution is the exploration of different techniques, including existing ML-based ones, on a large dataset. For the ML part, the authors apply an existing pipeline with different sets of features that they extracted through active probing, instead of passive.
Pros:
* Large dataset collected in a reproducible way
* Comparison between ML-based and fingerprinting- (i.e., signature-) based techniques
* Sound methodology and clear description
* Ground truth labelling clearly explained and justified
Cons:
* Feature selection for fingerprinting method is not sufficiently described (Sec. 6.2): "Based on current approaches we create 4 distinct combinations of features.".
* Missing details about experimental machine measurements (how long did the training take? How much memory was needed? etc.)
* Lack of comparison of results with related work (for the fingerprinting-based techniques). As this work basically compares existing techniques (aside from the new ML features), it should contain the comparison with the prior works and explanation of their potentially differing results.
* Unnecessary repetitions in Sec. 4.1 from this section itself and from the previous one.
questions: None.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3YYtaQir1f | Fingerprinting the Shadows: Unmasking Malicious Servers with Machine Learning-Powered TLS Analysis | [
"Andreas Theofanous",
"Eva Papadogiannaki",
"Alexander Shevtsov",
"Sotiris Ioannidis"
] | Over the last few years, the adoption of encryption in network
traffic has been constantly increasing. The percentage of encrypted
communications worldwide is estimated to exceed 90%. Although
network encryption protocols mainly aim to secure and protect
users’ online activities and communications, they have been exploited
by malicious entities that hide their presence in the network.
It was estimated that in 2022, more than 85% of the malware used
encrypted communication channels.
In this work, we examine state-of-the-art fingerprinting techniques
and extend a machine learning pipeline for effective and practical
server classification. Specifically, we actively contact servers to
initiate communication over the TLS protocol and through exhaustive
requests, we extract communication metadata. We investigate
which features favor an effective classification, while we utilize and
evaluate state-of-the-art approaches. Our extended pipeline can
indicate whether a server is malicious or not with 91% precision and
95% recall, while it can specify the botnet family with 99% precision
and 99% recall. | [
"TLS",
"TLS Fingerprinting",
"Active Probing",
"Botnet",
"Command and Control",
"Server Characterization",
"Machine Learning",
"Explainability"
] | https://openreview.net/pdf?id=3YYtaQir1f | YkT46NhmSv | decision | 1,705,909,228,941 | 3YYtaQir1f | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: ### Meta Review:
**Pros:**
1. **Novelty in Server Classification:** The paper introduces a novel approach to classify servers as benign or malicious using machine learning features extracted from TLS handshake messages, focusing on cipher suites and other parameters.
2. **Proper Dataset Selection:** The authors utilize a proper selection of benign and malicious datasets, including websites from Tranco 10K list, Feodo, and Blocklists, enhancing the credibility of their experiments.
3. **Reproducibility:** The paper emphasizes reproducibility, and the large dataset collection is acknowledged as a strength.
4. **Methodological Clarity:** The methodology for active TLS fingerprinting is well-described, and the paper maintains a clear structure.
**Cons:**
1. **Missing Baseline Comparison:** The absence of a baseline for comparison, particularly with [57], which employs a similar strategy for obtaining parameters from TLS handshake, is noted as a significant drawback.
2. **Unclear Limitations Statement:** The paper mentions that "no prior work has delved into the use of machine learning models," but it is unclear why not using machine learning is considered a limitation.
3. **Lack of Expected Evaluation:** The promise to "evaluate the effectiveness of various state-of-the-art approaches" is noted in the review, but the evaluation is not found in the paper, raising concerns about clarity.
4. **Dataset Uncertainty:** The handling of the unlabeled Blocklists dataset raises concerns, especially when the paper refers to it as "unlabeled lists" but does not describe the manual labeling process.
5. **Lack of Detail:** Some aspects require additional details, such as the "additional layer of complexity" in using machine learning, the choice of using 10 client_hello messages, and the size of sets in Section 4.3.
6. **Inconsistencies in Figures:** Inconsistencies are noted in figures, such as the mention of "20,000 features" when it should be "20,700 features" and the choice of concatenation on the hash in lines 670 to 673.
**Suggestions:**
1. **Baseline Comparison:** Include a baseline comparison, especially with [57], to provide a more comprehensive evaluation of the proposed approach.
2. **Clarify Limitations:** Provide a clearer explanation of why not using machine learning is considered a limitation.
3. **Ensure Expected Evaluation:** Address the statement about evaluating state-of-the-art approaches by incorporating the expected evaluation in the paper.
4. **Clarify Dataset Handling:** Clearly explain the handling of the unlabeled Blocklists dataset and how it serves experiments regarding machine learning and fingerprinting.
5. **Provide Additional Details:** Offer more details about the "additional layer of complexity" when using machine learning, the choice of using 10 client_hello messages, and the size of sets in Section 4.3.
6. **Check Figure Consistency:** Address inconsistencies in figures, such as the mention of "20,000 features," and provide accurate information.
### Conclusion:
The paper introduces a novel approach to classify servers as benign or malicious through machine learning features extracted from TLS handshake messages. While the methodology and dataset selection are acknowledged strengths, the lack of a baseline comparison, unclear limitations statement, missing expected evaluation, dataset handling uncertainties, and inconsistencies in figures are significant drawbacks. Addressing these issues, providing additional details, and ensuring figure consistency are crucial for enhancing the paper's overall quality.
--- |
3YYtaQir1f | Fingerprinting the Shadows: Unmasking Malicious Servers with Machine Learning-Powered TLS Analysis | [
"Andreas Theofanous",
"Eva Papadogiannaki",
"Alexander Shevtsov",
"Sotiris Ioannidis"
] | Over the last few years, the adoption of encryption in network
traffic has been constantly increasing. The percentage of encrypted
communications worldwide is estimated to exceed 90%. Although
network encryption protocols mainly aim to secure and protect
users’ online activities and communications, they have been exploited
by malicious entities that hide their presence in the network.
It was estimated that in 2022, more than 85% of the malware used
encrypted communication channels.
In this work, we examine state-of-the-art fingerprinting techniques
and extend a machine learning pipeline for effective and practical
server classification. Specifically, we actively contact servers to
initiate communication over the TLS protocol and through exhaustive
requests, we extract communication metadata. We investigate
which features favor an effective classification, while we utilize and
evaluate state-of-the-art approaches. Our extended pipeline can
indicate whether a server is malicious or not with 91% precision and
95% recall, while it can specify the botnet family with 99% precision
and 99% recall. | [
"TLS",
"TLS Fingerprinting",
"Active Probing",
"Botnet",
"Command and Control",
"Server Characterization",
"Machine Learning",
"Explainability"
] | https://openreview.net/pdf?id=3YYtaQir1f | SmYk7Vd5uN | official_review | 1,700,641,016,743 | 3YYtaQir1f | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2531/Reviewer_qykq"
] | review: This paper proposes a new method for extracting server classification and fingerprinting features. Besides, this paper leverages a range of active TLS fingerprinting techniques to examine server behavior. The authors implement binary classification systems to label benign/malicious servers and multi-class classification models to identify specific botnet families. The evaluation part shows the effectiveness of the proposed methods.
Pros
1. Well-structured
2. Important topic
3. Detailed evaluation of the proposed methods
Cons
1. Limited novelty
2. Lack of comparison with existing works
3. Some parts are not clear
questions: 1. The novelty of this paper is not clear. The novelty of this paper is mainly located in adopting the mean number of successful handshakes and the selected cipher suites as the feature for classification and fingerprinting, which is not enough. The authors are supposed to demonstrate their novelty more clearly and pertinently.
2. This paper lacks comparisons with existing works, such as ATSF, JARM, and DissecTLS. Quantitative comparisons with these works regarding accuracy, efficiency, recall, etc., are needed for better evaluation.
3. Some parts of this paper need to be clarified. First, the reasons for selecting the parameters in Table 8 are not clear. Were these choices based on some previous works? Besides, the details of the selected features in Table 3 are not presented.
4. I am curious about why the choice of cipher suites can be used to identify benign and malicious servers. I would appreciate it if the authors provided some intuitive explanations.
5. Another problem is why the features formalized by a hash function could be used for classification, as the hashed data will vary significantly with slight differences in the inputs.
ethics_review_flag: No
ethics_review_description: No issues found.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3YYtaQir1f | Fingerprinting the Shadows: Unmasking Malicious Servers with Machine Learning-Powered TLS Analysis | [
"Andreas Theofanous",
"Eva Papadogiannaki",
"Alexander Shevtsov",
"Sotiris Ioannidis"
] | Over the last few years, the adoption of encryption in network
traffic has been constantly increasing. The percentage of encrypted
communications worldwide is estimated to exceed 90%. Although
network encryption protocols mainly aim to secure and protect
users’ online activities and communications, they have been exploited
by malicious entities that hide their presence in the network.
It was estimated that in 2022, more than 85% of the malware used
encrypted communication channels.
In this work, we examine state-of-the-art fingerprinting techniques
and extend a machine learning pipeline for effective and practical
server classification. Specifically, we actively contact servers to
initiate communication over the TLS protocol and through exhaustive
requests, we extract communication metadata. We investigate
which features favor an effective classification, while we utilize and
evaluate state-of-the-art approaches. Our extended pipeline can
indicate whether a server is malicious or not with 91% precision and
95% recall, while it can specify the botnet family with 99% precision
and 99% recall. | [
"TLS",
"TLS Fingerprinting",
"Active Probing",
"Botnet",
"Command and Control",
"Server Characterization",
"Machine Learning",
"Explainability"
] | https://openreview.net/pdf?id=3YYtaQir1f | 8R2BUzFWlU | official_review | 1,700,933,334,013 | 3YYtaQir1f | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2531/Reviewer_dteh"
] | review: ## Summary
This paper introduces a machine learning (ML) based method to identify malicious servers through TLS handshakes. The authors utilize DissectTLS to examine a set of popular websites and blocklisted sites, aiming to detect unique features and fingerprints for training a classifier. They further develop a multi-class classifier to distinguish malware families based on server characteristics. The evaluation encompasses 10,000 well-known sites and a number of blocklisted sites, the exact count of which is not specified.
## Strengths
- The use of datasets from various periods enhances the robustness of the evaluation.
- The research addresses a compelling problem, offering a potential method for detecting malicious servers through active scanning.
- The technical integration of TLS cipher information is noteworthy and has broader applications in areas like vulnerability analysis and compliance.
## Weaknesses
- The paper could benefit from a clearer presentation, particularly in explaining ML feature extraction and lacking practical examples.
- The methodology for transformation and parameter selection is vaguely described, lacking an in-depth discussion of the details.
- Standard feature transformation processes are mentioned but fail to offer insightful details about the experimental setup.
- While feature analysis is covered, it lacks direct references to figures in the appendix.
- The method of differentiating between successful probing and disrupted (timed out) cases is unclear.
- Validation of blocklists is absent; these often contain false positives due to CDNs and shared hosting, raising questions about handling such scenarios.
- The results, particularly in section 5.1 regarding response analysis, are speculative and lack concrete empirical backing.
- The rationale for excluding the analysis of the last handshake from the scope is not explained.
- In Figure 2 regarding blocklists, the absence of IP address validation casts doubt on the reliability of the results.
- Concerning fingerprinting techniques:
- The rationale behind not employing PCA or a feature ranking approach for selecting crucial fingerprint features is not addressed.
- The fingerprinting technique itself is not validated, leading to uncertainty about the stability and distinctiveness of these fingerprints.
- The reproducibility of fingerprints upon repeated TLS extraction is questionable.
- The problem of overlap in fingerprints between blocklisted IPs and Tranco (clean) sites, as seen in Figure 3 and Section 7.2.2, is a significant concern.
Overall, the paper has promising early results, but I do not think it is ready for publishing. Additional evaluation and validation is required. Specifically, I recommend the authors to focus on:
1. Characterizing the IP addresses from Tranco and the blocklist (ASNs, Orgs, Geo distribution, etc.) gives us a sense of the data representation. Additionally, including top sites from various regions across the world would improve the confidence in the results.
2. Validate the uniqueness and reproducibility of the fingerprints.
3. Provide explicit numbers regarding the dataset, how many IP addresses, what was filtered, what the overlap between datasets, and how the final training and testing dataset stems from the initial IPs.
4. Investigation into the last handshake anomaly and providing some explanation of why these factors happen, how they can impact the collected dataset, and how to mitigate it.
5. Focus on feature ranking and what are the key factors that differentiate malicious from benign. What key features differentiate malware families?
questions: Did the authors validate their source of top sites and blocklists?
Did the authors validate the reproducibility of the TLS feature extraction? Is it deterministic?
Did the authors validate their fingerprinting hash? How does it impact the results?
What factor limited the use of only Fedeo list? Why not also include Threatfox and broader list from abuse.ch?
ethics_review_flag: No
ethics_review_description: the authors discuss ethical considerations, but not required for this study.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 2
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3TSpM7X2aY | DRAM-like Architecture with Asynchronous Refreshing for Continual Relation Extraction | [
"Tianci Bu",
"Kang Yang",
"Wenchuan Yang",
"Jiawei Feng",
"Xiaoyu Zhang",
"Xin Lu"
] | Continual Relation Extraction (CRE) has found widespread web applications (e.g., search engines) in recent times. One significant challenge in this task is the phenomenon of catastrophic forgetting, where models tend to forget earlier information. Existing approaches in this field predominantly rely on memory-based methods to alleviate catastrophic forgetting, which overlooks the inherent challenge posed by the varying memory requirements of different relations and the need for a suitable memory refreshing strategy. Drawing inspiration from the mechanisms of Dynamic Random Access Memory (DRAM), our study introduces a novel CRE architecture with an asynchronous refreshing strategy to tackle these challenges. We first design a DRAM-like architecture, comprising three key modules: perceptron, controller, and refresher. This architecture dynamically allocates memory, enabling the consolidation of well-remembered relations while allocating additional memory for revisiting poorly learned relations. Furthermore, we propose a compromising asynchronous refreshing strategy to find the pivot between over-memorization and overfitting, which focuses on the current learning task and mixed-memory data asynchronously. Additionally, we explain the existing refreshing strategies in CRE from the DRAM perspective. Our proposed method has experimented on two benchmarks and overall outperforms ConPL (the SOTA method) by an average of 1.50\% on accuracy, which demonstrates the efficiency of the proposed architecture and refreshing strategy. | [
"Continual Relation Extraction",
"Dynamic Random Access Memory",
"Memory Allocation",
"Refreshing Strategy"
] | https://openreview.net/pdf?id=3TSpM7X2aY | z55zhfTIRQ | official_review | 1,702,608,222,981 | 3TSpM7X2aY | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1833/Reviewer_ppqz"
] | review: The paper proposes an approach for continuous relation extraction with a particular focus on addressing the challenge referred to as "catastrophic forgetting". In particular, the goal is to dynamically allocate memory in such as manner that consolidates
of well-remembered relations while allocating additional memory for revisiting poorly learned relations.
The approach proposed is based on an architecture that is inspired by Dynamic Random Access Memory and comprises of a perceptron, a controller and a refresher. In addition, an asynchronous refresh strategy is proposed to find a pivot between
over-memorization and overfitting, which concentrates on current tasks and asynchronously training mixed-memory data.
A comparative evaluation against a state-of-the-art approach using two benchmarks shows some promising benefits of proposed approach
+ relevant problem with interesting solution strategy.
+ intuition and rationale for approach are well motivated.
- some basic editorial improvements are needed throughout the paper.
questions: None
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 7
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3TSpM7X2aY | DRAM-like Architecture with Asynchronous Refreshing for Continual Relation Extraction | [
"Tianci Bu",
"Kang Yang",
"Wenchuan Yang",
"Jiawei Feng",
"Xiaoyu Zhang",
"Xin Lu"
] | Continual Relation Extraction (CRE) has found widespread web applications (e.g., search engines) in recent times. One significant challenge in this task is the phenomenon of catastrophic forgetting, where models tend to forget earlier information. Existing approaches in this field predominantly rely on memory-based methods to alleviate catastrophic forgetting, which overlooks the inherent challenge posed by the varying memory requirements of different relations and the need for a suitable memory refreshing strategy. Drawing inspiration from the mechanisms of Dynamic Random Access Memory (DRAM), our study introduces a novel CRE architecture with an asynchronous refreshing strategy to tackle these challenges. We first design a DRAM-like architecture, comprising three key modules: perceptron, controller, and refresher. This architecture dynamically allocates memory, enabling the consolidation of well-remembered relations while allocating additional memory for revisiting poorly learned relations. Furthermore, we propose a compromising asynchronous refreshing strategy to find the pivot between over-memorization and overfitting, which focuses on the current learning task and mixed-memory data asynchronously. Additionally, we explain the existing refreshing strategies in CRE from the DRAM perspective. Our proposed method has experimented on two benchmarks and overall outperforms ConPL (the SOTA method) by an average of 1.50\% on accuracy, which demonstrates the efficiency of the proposed architecture and refreshing strategy. | [
"Continual Relation Extraction",
"Dynamic Random Access Memory",
"Memory Allocation",
"Refreshing Strategy"
] | https://openreview.net/pdf?id=3TSpM7X2aY | wxBqhbCI5p | official_review | 1,700,681,786,566 | 3TSpM7X2aY | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1833/Reviewer_EPcd"
] | review: This paper looks at the problem of relation extraction in a continual learning step where new sets of relations are introduced over time. The aim then is to be able to extract new relations learnt at particular time without forgetting about relations learned earlier. This is a fairly widely studied task in NLP. This paper looks at a new way of implementing external memory for such continual learning systems that focuses on updating the memory from previously seen examples. The paper makes an analogy to physical computing memory (i.e. DRAM chips) to focus on I what I would term a variant of rehearsal learning - which is common in continual learning setups. The variant is to refresh the memory in dynamic fashion depending on a separate
The results show better performance against other methods in this domain on two relation extraction databases converted to be used in a continual learning setup.
Strengths
- Slightly better performance than other SOTA methods
- Interesting ablation studies
- Asynchronous refreshing of memory looks to be unique in continual learning
Weaknesses
- The paper makes a lot of its analogy to physical DRAM chips but I think that's somewhat distracting to the reader when maybe more time should be spent just focusing on the importance of the asynchronous cycle
- It would have been interesting to see the potential bounds of relation extraction performance by just training a whole model not in a continual setup.
- The connection to rehearsal learning could have been made clearer.
- The memory employed is fairly small.
__After Rebuttal__
The rebuttal addressed my main questions in particular the additional experiments I think help a lot and the clearer connection to rehearsal learning.
questions: How is your approach related to rehearsal learning? Is it a variant of that approach? If not how is it different?
ethics_review_flag: Yes
ethics_review_description: no ethics concerns
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3TSpM7X2aY | DRAM-like Architecture with Asynchronous Refreshing for Continual Relation Extraction | [
"Tianci Bu",
"Kang Yang",
"Wenchuan Yang",
"Jiawei Feng",
"Xiaoyu Zhang",
"Xin Lu"
] | Continual Relation Extraction (CRE) has found widespread web applications (e.g., search engines) in recent times. One significant challenge in this task is the phenomenon of catastrophic forgetting, where models tend to forget earlier information. Existing approaches in this field predominantly rely on memory-based methods to alleviate catastrophic forgetting, which overlooks the inherent challenge posed by the varying memory requirements of different relations and the need for a suitable memory refreshing strategy. Drawing inspiration from the mechanisms of Dynamic Random Access Memory (DRAM), our study introduces a novel CRE architecture with an asynchronous refreshing strategy to tackle these challenges. We first design a DRAM-like architecture, comprising three key modules: perceptron, controller, and refresher. This architecture dynamically allocates memory, enabling the consolidation of well-remembered relations while allocating additional memory for revisiting poorly learned relations. Furthermore, we propose a compromising asynchronous refreshing strategy to find the pivot between over-memorization and overfitting, which focuses on the current learning task and mixed-memory data asynchronously. Additionally, we explain the existing refreshing strategies in CRE from the DRAM perspective. Our proposed method has experimented on two benchmarks and overall outperforms ConPL (the SOTA method) by an average of 1.50\% on accuracy, which demonstrates the efficiency of the proposed architecture and refreshing strategy. | [
"Continual Relation Extraction",
"Dynamic Random Access Memory",
"Memory Allocation",
"Refreshing Strategy"
] | https://openreview.net/pdf?id=3TSpM7X2aY | s39qaEflt4 | official_review | 1,700,654,005,595 | 3TSpM7X2aY | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1833/Reviewer_tyxL"
] | review: # Summary
The paper presents an approach for continual relation extraction. The framework aims to overcome the challenge of catastrophic forgetting via a novel architecture inspired by the mechanisms of Dynamic Random Access Memory, called DAAR. The approach has been evaluated on two benchmarks (FewRel and TACRED) and compared with three baselines (EMAR, ERDA and ConPL). Performance is measured in terms of the overall accuracy.
# Significance
Automatic extraction of relations from text is of paramount importance for streamlining the knowledge graph construction problem. In an era where textual content is continuously produced, attention has to be paid to the continual extraction problem and avoiding problems such as catastrophic forgetting. Similar problems have been addressed in computer engineering and therefore, the architecture presented is a direction worthwhile of further investigations.
# Relevance
It is relevant as relation extraction is a technique for constructing knowledge graphs.
# Readability
I’m not familiar with learning architectures and I found Section 3 (the section presenting the approach) hard to read. The authors assume readers are very familiar with the topic and this is necessary the case, since the conference is on web technologies and the track attains to semantics and knowledge. Maybe the readability can be improved by giving intuitions, examples and additional explanations of concepts like the Prototype (what is that? Can you provide the readers with an example? Why do we need it?), Perceptual score, memory etc.. After reading a couple of times the section I have only a vague intuition of the meaning of these concepts. In other words, the mathematics is clear, but the design rationale is vague. An important question is why do we need this component? The answer has to be clear to the reader when the component is introduced. I couldn’t find it in the text.
# *Novelty*
The problem is not original, but the approach is novel.
# Positioning wrt state of the art.*
To my knowledge, most of the relevant work is cited and positioned to the authors' contribution.
# Potential impact.*
Again, as I’m not very familiar with the research on this topic, so, I’m not sure how to assess the significance of the improvement of 1.5% in accuracy over the baselines. I’d value the impact as limited, but maybe this constitutes a big leap in the field.
# Technical soundness.
The approach appears sound and reasonable to me.
# Reproducibility.
Although the architecture is described in detail, the lack of source code hinders the reproducibility of the experiments.
questions: See the review
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 6
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
3TSpM7X2aY | DRAM-like Architecture with Asynchronous Refreshing for Continual Relation Extraction | [
"Tianci Bu",
"Kang Yang",
"Wenchuan Yang",
"Jiawei Feng",
"Xiaoyu Zhang",
"Xin Lu"
] | Continual Relation Extraction (CRE) has found widespread web applications (e.g., search engines) in recent times. One significant challenge in this task is the phenomenon of catastrophic forgetting, where models tend to forget earlier information. Existing approaches in this field predominantly rely on memory-based methods to alleviate catastrophic forgetting, which overlooks the inherent challenge posed by the varying memory requirements of different relations and the need for a suitable memory refreshing strategy. Drawing inspiration from the mechanisms of Dynamic Random Access Memory (DRAM), our study introduces a novel CRE architecture with an asynchronous refreshing strategy to tackle these challenges. We first design a DRAM-like architecture, comprising three key modules: perceptron, controller, and refresher. This architecture dynamically allocates memory, enabling the consolidation of well-remembered relations while allocating additional memory for revisiting poorly learned relations. Furthermore, we propose a compromising asynchronous refreshing strategy to find the pivot between over-memorization and overfitting, which focuses on the current learning task and mixed-memory data asynchronously. Additionally, we explain the existing refreshing strategies in CRE from the DRAM perspective. Our proposed method has experimented on two benchmarks and overall outperforms ConPL (the SOTA method) by an average of 1.50\% on accuracy, which demonstrates the efficiency of the proposed architecture and refreshing strategy. | [
"Continual Relation Extraction",
"Dynamic Random Access Memory",
"Memory Allocation",
"Refreshing Strategy"
] | https://openreview.net/pdf?id=3TSpM7X2aY | gALot7C7sQ | official_review | 1,700,532,465,905 | 3TSpM7X2aY | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1833/Reviewer_nScK"
] | review: The paper proposed a new method called Dynamic Random Access Memory (DRAM) for Continual Relation Extraction. It is inspired from DRAM update as physical memory. Its feature is to allocate memory dynamically to consolidate well-remembered relations and also to allocate additional memory for poor learned relations. It also includes asynchronous refreshing strategy to find the pivot between over-memorization and overfitting.
Apart from analogy of DRAM, the proposed method is a well-considered and practical method to solve the problems for CRE. The authors tested it by comparing the existing methods and claimed that it is superior than them. Its performance is constantly better than others through the series tasks mostly. The experiments and their evaluation are fairly so seem reliable.
questions: In Table 4, the forgetting performance is shown. It is interesting, but it only shows up to T5. It is possible to show the result until T8?
ethics_review_flag: No
ethics_review_description: None
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3TSpM7X2aY | DRAM-like Architecture with Asynchronous Refreshing for Continual Relation Extraction | [
"Tianci Bu",
"Kang Yang",
"Wenchuan Yang",
"Jiawei Feng",
"Xiaoyu Zhang",
"Xin Lu"
] | Continual Relation Extraction (CRE) has found widespread web applications (e.g., search engines) in recent times. One significant challenge in this task is the phenomenon of catastrophic forgetting, where models tend to forget earlier information. Existing approaches in this field predominantly rely on memory-based methods to alleviate catastrophic forgetting, which overlooks the inherent challenge posed by the varying memory requirements of different relations and the need for a suitable memory refreshing strategy. Drawing inspiration from the mechanisms of Dynamic Random Access Memory (DRAM), our study introduces a novel CRE architecture with an asynchronous refreshing strategy to tackle these challenges. We first design a DRAM-like architecture, comprising three key modules: perceptron, controller, and refresher. This architecture dynamically allocates memory, enabling the consolidation of well-remembered relations while allocating additional memory for revisiting poorly learned relations. Furthermore, we propose a compromising asynchronous refreshing strategy to find the pivot between over-memorization and overfitting, which focuses on the current learning task and mixed-memory data asynchronously. Additionally, we explain the existing refreshing strategies in CRE from the DRAM perspective. Our proposed method has experimented on two benchmarks and overall outperforms ConPL (the SOTA method) by an average of 1.50\% on accuracy, which demonstrates the efficiency of the proposed architecture and refreshing strategy. | [
"Continual Relation Extraction",
"Dynamic Random Access Memory",
"Memory Allocation",
"Refreshing Strategy"
] | https://openreview.net/pdf?id=3TSpM7X2aY | RX4nctU7Hq | official_review | 1,700,143,351,334 | 3TSpM7X2aY | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1833/Reviewer_QqUN"
] | review: This paper proposes a new architecture for continual relation extraction (CRE) that uses an asynchronous refreshing strategy to tackle catastrophic forgetting. It is inspired by Dynamic Random Access Memory (DRAM) mechanisms and consists of two main components: a DRAM-like architecture containing three other subcomponents (perceptron, controller, and refresher), and the asynchronous refreshing strategies. Both components are carefully described in the paper, consolidating its two main contributions. It also evaluates the approach against two established benchmarks and three baselines. The numbers reported are not that impressive to me, as the current approach only improves the baselines by about 1.5%. I appreciate the ablation studies reported in Section 4.5, as they allow all component contributions to be effectively understood.
Although the paper is easy to read in general, it also shows some rare sentences and minor typos that must be checked. A revision of the language could help to improve the quality of the final manuscript. On the other hand, I recommend the author to rewrite the Conclusions section, because the current one is like a clone of the abstract, even it provides less information. I think they are missing the opportunity to show the impact of the proposal in a more assertive way, which would perhaps help a reader like me to see more clearly the contribution of the article, which in principle describes an interesting approach but with results that do not seem a substantial improvement on the state of the art.
questions: The authors do not provide any resources to facilitate the reproducibility of their experiments. Have they considered doing so in the future? What requirements should the platform meet in order to use the current approach?
ethics_review_flag: No
ethics_review_description: I have not check "Yes" to the ethics review flag.
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
3TSpM7X2aY | DRAM-like Architecture with Asynchronous Refreshing for Continual Relation Extraction | [
"Tianci Bu",
"Kang Yang",
"Wenchuan Yang",
"Jiawei Feng",
"Xiaoyu Zhang",
"Xin Lu"
] | Continual Relation Extraction (CRE) has found widespread web applications (e.g., search engines) in recent times. One significant challenge in this task is the phenomenon of catastrophic forgetting, where models tend to forget earlier information. Existing approaches in this field predominantly rely on memory-based methods to alleviate catastrophic forgetting, which overlooks the inherent challenge posed by the varying memory requirements of different relations and the need for a suitable memory refreshing strategy. Drawing inspiration from the mechanisms of Dynamic Random Access Memory (DRAM), our study introduces a novel CRE architecture with an asynchronous refreshing strategy to tackle these challenges. We first design a DRAM-like architecture, comprising three key modules: perceptron, controller, and refresher. This architecture dynamically allocates memory, enabling the consolidation of well-remembered relations while allocating additional memory for revisiting poorly learned relations. Furthermore, we propose a compromising asynchronous refreshing strategy to find the pivot between over-memorization and overfitting, which focuses on the current learning task and mixed-memory data asynchronously. Additionally, we explain the existing refreshing strategies in CRE from the DRAM perspective. Our proposed method has experimented on two benchmarks and overall outperforms ConPL (the SOTA method) by an average of 1.50\% on accuracy, which demonstrates the efficiency of the proposed architecture and refreshing strategy. | [
"Continual Relation Extraction",
"Dynamic Random Access Memory",
"Memory Allocation",
"Refreshing Strategy"
] | https://openreview.net/pdf?id=3TSpM7X2aY | NltD1D3XR5 | decision | 1,705,909,232,191 | 3TSpM7X2aY | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept (Oral)
comment: This article considers the problem of relation extraction in continual learning steps, where a new technique is introduced for implementing external memory that can be dynamically updated. Results show slight improvements compared to existing approaches.
All reviewers agree that this work is relevant to the Web Conference and solves a relevant problem with a novel solution.
After discussion, it was agreed that this work deserves to be accepted.
We do recommend the reviewers to include the suggested changes from the reviewers, such as the suggested editorial fixes and improvements to readability. |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | irO3XZs8tX | decision | 1,705,909,206,293 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: Summary: A scheme to automate the porting of deep-learning models from cloud servers to resource-constrained edge devices.
Strengths:
+ Address an important practical challenge
+ New method using NAS and SSL
+ Clear technical motivation
+ Easy to follow
Weaknesses:
- Concerns about limited novelty
- Concerns about generalizability of the methods
- Rationale for using NAS is less clear
- Better related work would be helpful
- Need to discuss limitations
Recommendation: The paper addresses a useful practical problem and has a method, but some elements of novelty and rationale are debated by the reviewers. |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | WyYCRINtg9 | official_review | 1,701,427,472,043 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1301/Reviewer_btnQ"
] | review: This paper presents a neural network porting scheme, named MatchNAS, to automate network porting for mobile platforms. MatchNAS addresses two bottlenecks present in existing techniques: need for large amount of data and difficulty in producing high-quality artificial labels. MatchNAS' evaluation on four image classification datasets show good performance improvement. Further, its deployment show better latency-accuracy trade-off compared to baselines.
Strengths:
- The paper clearly describes technical challenges with existing techniques and explains the novelty of the proposed method w.r.t to these techniques
- Related works on Neural Architecture Search (NAS) explains in detail the preliminaries necessary to understand the main contribution
- Results compared to SOTA are promising
Weaknesses:
- Whereas the paper describes the contributions in context of mobile network porting, the broader applicability of the contribution to the Web is not clear. The technique as described looks generalisable but generalisability aspect needs to be made explicit.
- The proposed approach is about mobile networks but the evaluation is on image classification datasets. Why the four datasets employed in evaluation are relevant is not clarified.
questions: - What are the unique challenges in the domain of mobile number porting that differentiates it from other domain? <Pg1, right column> does briefly touches upon it but I ask this to better understand the generalisability of the proposed approach.
- Why the four image datasets employed in evaluation are relevant for evaluating technique specific to mobile network porting?
ethics_review_flag: No
ethics_review_description: -
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 4
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | V4orDOMzbv | official_review | 1,701,400,552,734 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1301/Reviewer_qmLc"
] | review: This paper proposes an approach for Deep network porting in mobile devices when the enough labelled data is not available.
The authors essentially combines semi-supervised learning and neural architecture search to achieve the goal.
The paper has limited novelty. All one has to do apply semi-supervised learning and then compress the network to fit in mobile platform.
I did not find any rationale behind the use of architecture search student-teacher paradigm to achieve this. NASes are already unstable and in combination with small labeled data this makes more unreliable.
Rather a straightforward approach would be to train a network in semi-supervised settings and prune it within the fixed budget for deployment.
questions: Did the authors consider pruning based network fine-tuning?
What was the performance of the NAS model used by the authors on data with varying labels?
What happens if we simpler approach like this: train a Semi-supervised model and then use network pruning to compress the network?
ethics_review_flag: No
ethics_review_description: None.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 4
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | U0o7WXmgwT | official_review | 1,700,247,183,384 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1301/Reviewer_1ciL"
] | review: In essence, the paper explores the deployment of large models trained on the cloud to edge devices. It acknowledges the challenges posed by the diversity and resource constraints of various edge devices. To address this, the paper introduces a novel application of Neural Architecture Search (NAS), term MatchNAS.
Strengths
+ The paper addresses an important practical problem in the field of ML deployemnt.
+ Overall, the paper is easy to follow.
Weaknesses
- The novelty of the work could be further emphasized. While the application of teacher training to NAS is insightful, it is widely employed in zero-shot and few-shot NAS algorithms. This aspect makes the paper seem like a deployment specificaiton with application of existing techniques.
- A comparison with more recent works that utilize self or semi-supervised learning to mitigate the data scarcity problem in NAS would have been beneficial like [1,2].
- The inclusion of additional benchmarks datasets considered by the NAS community, such as Pascal VOC and Cityscapes, would have enriched the evaluation process
[1]Semi-Supervised Neural Architecture Search. Renqian Luo, Xu Tan, Rui Wang, Tao Qin, Enhong Chen, and Tie-Yan Liu. CoRR abs/2002.10389 (2020).
[2]Self Semi Supervised Neural Architecture Search for Semantic Segmentation. Loïc Pauletto, Massih-Reza Amini, and Nicolas Winckler. CoRR abs/2201.12646 (2022).
questions: Please see the weaknesses pointed out in the Review
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | NtMQmwp1Gp | official_review | 1,699,499,927,529 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1301/Reviewer_HPf6"
] | review: This paper presents MatchNAS, a neural network porting scheme designed to automate the process for mobile platforms, especially in contexts with scarce labeled data.
Pros\
S1: The significance of this work lies in its potential to bridge the gap between cloud AI and edge AI, improving the efficiency of porting networks to edge devices. This is particularly relevant as edge computing grows with the expansion of IoT and mobile devices.\
S2: The paper introduces MatchNAS, which appears to be a novel approach combining NAS schemes and semi-supervised learning techniques to reduce the effort in network fine-tuning and address the challenge of limited labeled data. \
S3: The paper provides a clear methodology, making it replicable for further research and practical application.
Cons\
W1: The paper may lack detailed discussion on limitations and potential trade-offs of the MatchNAS approach.
questions: Q1: What is the limitation of the proposed MatchNAS model?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | 6oDSg6sCXN | official_review | 1,700,818,826,704 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1301/Reviewer_oTat"
] | review: This paper presents a semi-supervised-network-architecture-search algorithm, named MatchNAS, which combines NAS and SSL methods to automate network porting for mobile platforms in label-scarce contexts.
Strength
* The motivation is relatively clear and reasonable.
* Paper is well-written.
Weakness
* The main concern is novelty. It seems that it only combines NAS and SSL technology, lacking technological innovation points.
* The experiment was only based on the MobileNetV3 structure, and it is unclear whether this method is applicable to networks with other structures.
* Abbreviated words need to be marked with their full name where they first appear, e.g., SSL in Introduction.
questions: Please see weakness.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
3CojD79xYh | MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment | [
"Hongtao Huang",
"Lina Yao",
"Xiaojun Chang",
"Wen Hu"
] | Recent years have seen the explosion of edge intelligence with powerful deep learning models. As 5G technology becomes more widespread, it has opened up new possibilities for edge intelligence, where the cloud-edge scheme has emerged to overcome the limited computational capabilities of edge devices. Deep-learning models can be trained on powerful cloud servers and then ported to smart edge devices after model lightweight. However, porting models to match a variety of edge platforms with real-world data, especially in sparse-label data contexts, is a labour-intensive and resource-costing task. In this paper, we present MatchNAS, a neural network porting scheme, to automate network porting for mobile platforms in label-scarce contexts. Specifically, we employ neural architecture search schemes to reduce human effort in network fine-tuning and semi-supervised learning techniques to overcome the challenge of lacking labelled data. MatchNAS can serve as an intermediary that helps bridge the gap between cloud AI and edge AI, facilitating both porting efficiency and network performance. | [
"Edge AI",
"mobile intelligence",
"deep neural network",
"AutoML"
] | https://openreview.net/pdf?id=3CojD79xYh | 26REAQfb70 | official_review | 1,699,695,612,896 | 3CojD79xYh | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1301/Reviewer_CPPQ"
] | review: This paper proposes to combine neural architecture search techniques with mobile deployment of DNNs and proposes a new network porting scheme MatchNAS. MatchNAS transforms a trained DNN to a supernet and conducts a semi-supervised NAS training process to transfer the supernet to a label-scarce dataset. Experimental results shows both the effectiveness and the training efficiency of MatchNAS. The on device performance further validate its network performance.
The paper is generally well structured and there are comprehensive experiments on the proposed MatchNAS. Since not being an expert in either edge AI nor NAS field, this reviewer couldn't discern any obvious weaknesses within this paper.
questions: Since the authors have mentioned the efficiency of the proposed MatchNAS, it would be more persuasive to explicitly analyze the computational complexity of MatchNAS.
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
39AIGw9x8m | Interface Illusions: Uncovering the Rise of Visual Scams in Cryptocurrency Wallets | [
"Guoyi Ye",
"Geng Hong",
"Yuan Zhang",
"Min Yang"
] | Cryptocurrencies, while revolutionary, have become a magnet for malicious actors. With numerous reports underscoring cyberattacks and scams in this domain, our paper takes the lead in characterizing visual scams associated with cryptocurrency wallets—a fundamental component of Web3. Specifically, scammers capitalize on the omission of vital wallet interface details, such as token symbols, wallet addresses, and smart contract function names, to mislead users, potentially resulting in unintended financial losses. Analyzing Ethereum blockchain transactions from July 2022 to June 2023, we uncovered a total of 24,901,115 visual scam incidents, which include 3,585,493 counterfeit token attacks, 21,281,749 zero-transfer attacks, and 33,873 function name attacks, orchestrated by 6,768 distinct attackers. Shockingly, over 28,414 victims fell prey to these scams, with losses surpassing 27 million USD. This alarming data underscores the pressing need for robust protective measures. By profiling the typical victims and attackers, we are able to propose mitigation strategies informed by our findings. | [
"cybercrime",
"scam",
"cryptocurrency wallet",
"phishing",
"visual scam"
] | https://openreview.net/pdf?id=39AIGw9x8m | vy8f6XgWC2 | official_review | 1,700,145,247,394 | 39AIGw9x8m | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission198/Reviewer_vtA4"
] | review: The paper investigates three specific types of visual deceptions in cryptocurrency wallets, providing an analytic exploration of their mechanisms and detailing detection methodologies. Running these detection strategies on Ethereum transaction data, the research delivers a measurement study of the cryptocurrency scam ecosystem, as well as conducting a revenue analysis.
The paper provides a substantial measurement study across three varied types of visual scams targeting Ethereum wallets. However, it lacks a clear definition of the threat model. For instance, in the case of the counterfeit token scam and the function name scam, the paper should provide a detailed description of the attack method: how attackers display counterfeit tokens or deceptive function names in a user's wallet. A more explanation of the threat model for each attack is needed to enhance the paper's comprehensibility.
Regarding the detection techniques, the paper lacks an analysis of the potential introduction of false positives or false negatives for the proposed methods. For the counterfeit token scam, there should be a discussion on the risk of legitimate token might exhibit characteristics that resemble counterfeit tokens when applying the forgery methods to the top-200 tokens. Similarly, in the zero-transfer scam, the paper should consider cases where legitimate addresses may coincidentally exhibit similar display patterns to deceptive ones.
While the paper presents a discussion to validate the detected results (i.e., scam cases). However, the validation process is not entirely convincing. The number of samples used for checking recall appears to be too small, and the standards employed for manually validating precision are not adequately explained. A more rigorous validation process is necessary to ensure the validity of the subsequent measurement study and its findings.
Other minor concerns include:
(1) In the function name scam detection, the paper should elaborate on the criteria used to select misleading function names during the manual inspection. Additionally, a complete list of these function names should be included to facilitate a better understanding of the analysis. There appears to be a discrepancy in Table 4 in Appendix A, where the origin of the final 17 distinct function names is noted to be from 7 instead of 8 sources.
(2) It is unclear why the paper does not provide revenue estimations for counterfeit tokens.
(3) There are some presentation issues in the paper, such as "phony effect,""elusive semantics," which should be addressed for clarity.
questions: 1. Can you provide a detailed explanation of the threat models for each type of visual scam addressed in your study?
2. What was the rationale behind the sample size chosen for checking the recall of your detection methods, and could you elaborate on the standards used for manually validating precision?
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
39AIGw9x8m | Interface Illusions: Uncovering the Rise of Visual Scams in Cryptocurrency Wallets | [
"Guoyi Ye",
"Geng Hong",
"Yuan Zhang",
"Min Yang"
] | Cryptocurrencies, while revolutionary, have become a magnet for malicious actors. With numerous reports underscoring cyberattacks and scams in this domain, our paper takes the lead in characterizing visual scams associated with cryptocurrency wallets—a fundamental component of Web3. Specifically, scammers capitalize on the omission of vital wallet interface details, such as token symbols, wallet addresses, and smart contract function names, to mislead users, potentially resulting in unintended financial losses. Analyzing Ethereum blockchain transactions from July 2022 to June 2023, we uncovered a total of 24,901,115 visual scam incidents, which include 3,585,493 counterfeit token attacks, 21,281,749 zero-transfer attacks, and 33,873 function name attacks, orchestrated by 6,768 distinct attackers. Shockingly, over 28,414 victims fell prey to these scams, with losses surpassing 27 million USD. This alarming data underscores the pressing need for robust protective measures. By profiling the typical victims and attackers, we are able to propose mitigation strategies informed by our findings. | [
"cybercrime",
"scam",
"cryptocurrency wallet",
"phishing",
"visual scam"
] | https://openreview.net/pdf?id=39AIGw9x8m | iFj8JOjCjl | official_review | 1,700,583,989,653 | 39AIGw9x8m | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission198/Reviewer_dG3W"
] | review: This paper evaluates the feasibility and prevalence of three specific "visual scams" which deceive users of cryptocurrencies into transferring money to the wrong person. Two of them depend on a mismatch between the datum used to identify someone at the protocol level (an "address", which is the fingerprint of a public key, therefore a long string of random bits, meaningless to a human) and the datum used in the user interface (human-readable text and/or graphics, _chosen by the attacker_ -- possibly augmented with a shortened version of the address, short enough that the attacker can also select it by brute-force key generation). The third depends on the fact that there is no necessary connection between the _name_ of a software function, and what it actually does. A "smart contract" named `securityUpdate` is just as capable of stealing your money as one named `stealYourMoney`.
All of these attacks are well-known in previous literature. The authors have done a nice job of documenting both their feasibility in this particular context, and assessing that they do indeed occur in this particular context. If I were reviewing for a security-focused conference, or for one focused on distributed ledgers, I would not hesitate to vote for publication (with some suggestions for improvement, see "questions" section). However, I do not see _any_ relevance to the Web. It is 100% about cryptocurrency, security flaws in the user interfaces of cryptocurrency client software, and security flaws in the basic design of "smart contracts".
questions: Please explain why you submitted this paper to The Web Conference and not to Financial Crypto, Advances in Financial Technologies, IEEE S&P, USENIX Security, or any other conference that is actually about security and/or distributed ledgers.
You considered only blockchain scams and visual similarity attacks on websites in your evaluation of related work. You therefore missed an entire subfield of previous work on visual similiarity attacks on _public key fingerprints_: for example
* PGP key presentation spoofing: "[Johnny, you are fired!](https://www.usenix.org/system/files/sec19-muller.pdf)"
* PGP short key ID spoofing: "[Evil 32](https://evil32.com/)" (short key IDs are shortened fingerprints, just like those presented by crypto wallet software)
* Visual presentation of complete fingerprints so that they aren't just "line noise": "[Can Unicorns Help Users Compare Crypto Key Fingerprints?](https://dl.acm.org/doi/abs/10.1145/3025453.3025733)"
* Petname systems and Zooko's Triangle: "[Petname Systems: Background, Theory and Applications](https://www.researchgate.net/profile/Md-Sadek-Ferdous/publication/265188437_Petname_Systems_Background_Theory_and_Applications/links/563218a908ae13bc6c371f73/Petname-Systems-Background-Theory-and-Applications.pdf)"
There has also been _some_ work on defenses against underhanded and/or mislabeled code, although [Rice's Theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem) precludes any truly perfect solution. Start with "[Initial Analysis of Underhanded Source
Code](https://apps.dtic.mil/sti/pdfs/AD1122149.pdf)". Object-capability security theory might also be relevant, though I'm not aware of any work specifically on exploits that seek to deceive end users about what capabilities a blob of code possesses.
Broadening your survey to include these classes of related work would help you situate your paper and describe how wallet software might better defend against at least the first two visual scams you describe.
ethics_review_flag: No
ethics_review_description: n/a
scope: 1: The work is irrelevant to the Web
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
39AIGw9x8m | Interface Illusions: Uncovering the Rise of Visual Scams in Cryptocurrency Wallets | [
"Guoyi Ye",
"Geng Hong",
"Yuan Zhang",
"Min Yang"
] | Cryptocurrencies, while revolutionary, have become a magnet for malicious actors. With numerous reports underscoring cyberattacks and scams in this domain, our paper takes the lead in characterizing visual scams associated with cryptocurrency wallets—a fundamental component of Web3. Specifically, scammers capitalize on the omission of vital wallet interface details, such as token symbols, wallet addresses, and smart contract function names, to mislead users, potentially resulting in unintended financial losses. Analyzing Ethereum blockchain transactions from July 2022 to June 2023, we uncovered a total of 24,901,115 visual scam incidents, which include 3,585,493 counterfeit token attacks, 21,281,749 zero-transfer attacks, and 33,873 function name attacks, orchestrated by 6,768 distinct attackers. Shockingly, over 28,414 victims fell prey to these scams, with losses surpassing 27 million USD. This alarming data underscores the pressing need for robust protective measures. By profiling the typical victims and attackers, we are able to propose mitigation strategies informed by our findings. | [
"cybercrime",
"scam",
"cryptocurrency wallet",
"phishing",
"visual scam"
] | https://openreview.net/pdf?id=39AIGw9x8m | dwfQl7ppDl | official_review | 1,700,874,148,127 | 39AIGw9x8m | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission198/Reviewer_NpVh"
] | review: The paper focuses on multiple forms of cryptocurrency wallet scams in the wild by performing an analysis of the blockchain transactions. The paper identified more than 25M scam incidents, surpassing 27 million US dollar. The paper also provides an analysis of the distribution of these attacks and describes possible mitigation strategies.
**Strengths**
The paper focuses on an emerging form of scam not discussed in-depth
The dataset covers a large number of more prevalent forms of visual scams
**Weaknesses**
It would be helpful to provide more details on the threat model
The campaign analysis part needs more clarifications.
**Detailed Comments**
I would like to thank the authors for defining the project. The paper is well-written and contains several interesting findings. The idea of visual scam in crypto wallets is interesting and not very well studied in the prior work.
The paper focuses on multiple aspects of the scams in crypto wallets from the types of scams, to the monetization factor and provides interesting insights on the topics. The longitudinal analysis of the attacks were also interesting and unique.
That said, I think the paper needs to discuss a few areas in greater detail and clarify the definitions used in the paper. One example is campaign analysis. What is the definition of campaign in this context? The scams generated by the same group of adversaries?
If this is the case, how could the authors cluster different scams under one campaign? The paper might mean something else, but it is not very clear that is. It was a bit unclear to me what the authors wanted to communited without knowing the definition.
It was also not very clear to me how the paper performed an analysis of the distribution of the scam types. The number of samples acquired based on transaction history were significant. How the authors distinguished the type of scam. A more clear description on the methodology could be very helpful.
questions: Please read the reviews.
ethics_review_flag: No
ethics_review_description: No ethical concern
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
39AIGw9x8m | Interface Illusions: Uncovering the Rise of Visual Scams in Cryptocurrency Wallets | [
"Guoyi Ye",
"Geng Hong",
"Yuan Zhang",
"Min Yang"
] | Cryptocurrencies, while revolutionary, have become a magnet for malicious actors. With numerous reports underscoring cyberattacks and scams in this domain, our paper takes the lead in characterizing visual scams associated with cryptocurrency wallets—a fundamental component of Web3. Specifically, scammers capitalize on the omission of vital wallet interface details, such as token symbols, wallet addresses, and smart contract function names, to mislead users, potentially resulting in unintended financial losses. Analyzing Ethereum blockchain transactions from July 2022 to June 2023, we uncovered a total of 24,901,115 visual scam incidents, which include 3,585,493 counterfeit token attacks, 21,281,749 zero-transfer attacks, and 33,873 function name attacks, orchestrated by 6,768 distinct attackers. Shockingly, over 28,414 victims fell prey to these scams, with losses surpassing 27 million USD. This alarming data underscores the pressing need for robust protective measures. By profiling the typical victims and attackers, we are able to propose mitigation strategies informed by our findings. | [
"cybercrime",
"scam",
"cryptocurrency wallet",
"phishing",
"visual scam"
] | https://openreview.net/pdf?id=39AIGw9x8m | d9Lx6yf5JD | official_review | 1,700,682,660,226 | 39AIGw9x8m | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission198/Reviewer_b2PJ"
] | review: Overall, I really enjoyed reading this paper and felt like the authors did a good job in presenting their findings in a clear and concise manner.
__Pros:__
1. They did a good job categorizing each type of attack and breaking each type of attack down into subcategories (e.g., the different types of token forgeries).
2. The dataset used in the paper is extremely large with over 24M scam incidents and 6,768 attackers.
4. The distinguish themselves from prior work by focusing largely on the visual scams associated with crypto wallets.
5. Propose mitigation approaches based on their findings.
6. At first glance, the scams demonstrated seem somewhat elementary. However, the authors were able to show the significance of these scams in their Revenue Estimation section, which showed that these attacks lead to over 27M in losses from 5,693 victims.
__Cons:__
1. It is difficult to empirically understand Figure 2, and the author's should come up with a better way to summarize this information, such as a table.
__Response Discussion:__ I appreciate the authors' comments and they provided clarification into my minor concerns. Additionally, I did update my novelty score from 5 to 6 to be better reflect my views of the new novel insights the paper provides.
questions: 1. There is little justification for why the Zero-Transfer Scam went to zero starting in 23-03. It appears to be a fairly straightforward attack, so it's surprising that it stopped so suddenly. Why is this the case?
ethics_review_flag: No
ethics_review_description: There are no ethics review needed.
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
39AIGw9x8m | Interface Illusions: Uncovering the Rise of Visual Scams in Cryptocurrency Wallets | [
"Guoyi Ye",
"Geng Hong",
"Yuan Zhang",
"Min Yang"
] | Cryptocurrencies, while revolutionary, have become a magnet for malicious actors. With numerous reports underscoring cyberattacks and scams in this domain, our paper takes the lead in characterizing visual scams associated with cryptocurrency wallets—a fundamental component of Web3. Specifically, scammers capitalize on the omission of vital wallet interface details, such as token symbols, wallet addresses, and smart contract function names, to mislead users, potentially resulting in unintended financial losses. Analyzing Ethereum blockchain transactions from July 2022 to June 2023, we uncovered a total of 24,901,115 visual scam incidents, which include 3,585,493 counterfeit token attacks, 21,281,749 zero-transfer attacks, and 33,873 function name attacks, orchestrated by 6,768 distinct attackers. Shockingly, over 28,414 victims fell prey to these scams, with losses surpassing 27 million USD. This alarming data underscores the pressing need for robust protective measures. By profiling the typical victims and attackers, we are able to propose mitigation strategies informed by our findings. | [
"cybercrime",
"scam",
"cryptocurrency wallet",
"phishing",
"visual scam"
] | https://openreview.net/pdf?id=39AIGw9x8m | 7GT5IQJhgv | official_review | 1,700,568,464,094 | 39AIGw9x8m | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission198/Reviewer_YGcT"
] | review: This study provides a comprehensive characterization and analysis of visual scams in cryptocurrency wallets through an extensive longitudinal measurement study. The authors identified approximately 24M scam incidents carried out by 6,768 attackers, resulting in a total loss of 27M USD. The research reveals the diverse aspects and dynamics of the visual scam ecosystem within cryptocurrency wallets.
+ pros
(maybe) the first attempt to characterize the visual scams in cryptocurrency wallets
timely topic
well written
- cons
lack of details in some points
Thank you for your insightful research on visual scams in cryptocurrency wallets, a topic of growing importance in the Web3 era. Your paper effectively unveils the entire ecosystem and scale of these scams, impressively structured with illustrative examples, detailed scam logic, and methodical detection approaches at the implementation level.
However, I have several suggestions that could enhance the clarity and quality of the paper:
Section 3.1 - Implementation: The method for detecting 'combo forgery' needs further details. The current description of 'testing all possible scenarios' is somewhat vague. Providing additional details, such as the set of keywords used or the strategy for adding them, would substantially clarify this section.
Section 4.1 - Timeline: For improved coherence, please reference Figure 3 directly in the main text, linking the narrative more closely with the illustrative figure.
Section 4.1 - Distribution: In Table 3, it is necessary to specify that only the top distribution, rather than the entire distribution, is presented for certain scam types. This clarification will aid in accurately interpreting the data.
Section 4.2 - Findings 3 and 4: Each scam type should be distinctly identified to ensure that the findings are not mistakenly generalized to other scam types.
Section 4.3 - Scamming Toolkits: An introduction to the toolkits employed by Zero-Transfer scammers for generating multiple Ethereum accounts resembling target addresses would be a valuable addition.
Section 4.3: There appears to be no mention of Figure 5 in the main text, nor an explanation of its contents. Integrating a reference and description of this figure would enhance understanding.
Section 4.4: The rationale for excluding the Counterfeit Token Scam from the revenue estimation is not clear. If there's a specific reason for this exclusion, please clarify it. Otherwise, for a comprehensive analysis, consider including the revenue trends of the Counterfeit Token Scam
questions: Did you try to measure/evaluate the mitigation methods proposed in the paper?
ethics_review_flag: No
ethics_review_description: none
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
39AIGw9x8m | Interface Illusions: Uncovering the Rise of Visual Scams in Cryptocurrency Wallets | [
"Guoyi Ye",
"Geng Hong",
"Yuan Zhang",
"Min Yang"
] | Cryptocurrencies, while revolutionary, have become a magnet for malicious actors. With numerous reports underscoring cyberattacks and scams in this domain, our paper takes the lead in characterizing visual scams associated with cryptocurrency wallets—a fundamental component of Web3. Specifically, scammers capitalize on the omission of vital wallet interface details, such as token symbols, wallet addresses, and smart contract function names, to mislead users, potentially resulting in unintended financial losses. Analyzing Ethereum blockchain transactions from July 2022 to June 2023, we uncovered a total of 24,901,115 visual scam incidents, which include 3,585,493 counterfeit token attacks, 21,281,749 zero-transfer attacks, and 33,873 function name attacks, orchestrated by 6,768 distinct attackers. Shockingly, over 28,414 victims fell prey to these scams, with losses surpassing 27 million USD. This alarming data underscores the pressing need for robust protective measures. By profiling the typical victims and attackers, we are able to propose mitigation strategies informed by our findings. | [
"cybercrime",
"scam",
"cryptocurrency wallet",
"phishing",
"visual scam"
] | https://openreview.net/pdf?id=39AIGw9x8m | 1tTEhqGJYU | decision | 1,705,909,224,963 | 39AIGw9x8m | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: **Meta Review:**
**Pros:**
1. **Comprehensive Study:** The paper conducts a comprehensive analysis of visual scams in cryptocurrency wallets, providing valuable insights into the mechanisms, dynamics, and revenue aspects of these scams.
2. **Large Dataset:** The use of a substantial dataset comprising over 24 million scam incidents and 6,768 attackers strengthens the study and contributes to the understanding of the prevalence of visual scams.
3. **Categorization and Mitigation:** The paper categorizes different types of visual scams, including subcategories, and proposes mitigation approaches based on the findings. This contributes to practical solutions for addressing the identified issues.
4. **Timely and Relevant:** The study addresses a timely and relevant topic in the era of Web3 and cryptocurrency, shedding light on a growing concern in the cryptocurrency wallet ecosystem.
5. **Revenue Analysis:** The inclusion of a revenue analysis, estimating over 27 million USD in losses from 5,693 victims, provides a concrete understanding of the financial impact of visual scams.
**Cons:**
1. **Lack of Clear Threat Model Definition:** Reviewers express concerns about the lack of a clear definition of the threat model, particularly for specific visual scams like the counterfeit token scam and the function name scam. More detailed descriptions of attack methods are requested.
2. **Detection Technique Analysis:** The paper is criticized for not analyzing the potential introduction of false positives or false negatives for the proposed detection methods. Specific scenarios, such as the risk of legitimate tokens resembling counterfeit tokens, should be discussed.
3. **Validation Process Concerns:** The validation process for detected results is deemed not entirely convincing. Reviewers express reservations about the sample size used for checking recall and the lack of clear standards for manually validating precision.
4. **Presentation Issues:** Some presentation issues, including terminology like "phony effect" and "elusive semantics," are noted, suggesting a need for improved clarity in the paper's language.
5. **Incomplete Information and Discrepancies:** There are concerns about incomplete information, discrepancies in tables, and missing details, such as the rationale for excluding revenue estimations for counterfeit tokens.
**Suggestions and Questions:**
1. **Detailed Threat Model:** Clarification and detailed explanations of threat models for each type of visual scam are recommended to enhance comprehensibility.
2. **Methodology Clarification:** A more transparent description of the methodology used for the distribution analysis and distinguishing between scam types is suggested for improved understanding.
3. **Further Detail in Sections:** Specific sections, such as Implementation and Timeline, are suggested to include further details for clarity, coherence, and accurate interpretation of data.
4. **Broadening Related Work:** The review recommends broadening the survey of related work to include a broader range of visual similarity attacks, such as those on public key fingerprints, to better situate the paper in the existing literature.
5. **Explanation of Trends:** Clarification is sought for trends observed in the Zero-Transfer Scam, particularly the sudden drop in incidents in March 2023.
**Conclusion:**
The paper is acknowledged for its contribution to understanding visual scams in cryptocurrency wallets, but improvements in threat model definition, detection technique analysis, validation process, and presentation are recommended to enhance its overall quality and impact. Addressing these concerns could lead to a more robust and compelling contribution to the academic community.
--- |
32oBtcUTfz | IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials | [
"Zonglun Li",
"Shuhao Zheng",
"Junliang Luo",
"Ziyue Xin",
"Xue Liu"
] | Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose a unique R1CS circuit design tailored for IDE. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks that operate without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision. | [
"Integrity-driven Editing (IDE)",
"Decentralized Anonymous Credential (DAC)",
"Edit-bound conformity check"
] | https://openreview.net/pdf?id=32oBtcUTfz | z6CmsamH43 | official_review | 1,700,570,465,062 | 32oBtcUTfz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2060/Reviewer_XXbe"
] | review: This paper present IDEA-DAC, a new method for enabling Integrity-Driven Editing in decentralized anonymous credentials (DACs), in order to address shortcomings of the current revoke-and-reissue paradigm. The proposed method, leveraging R1CS circuits' properties, enables constraint-adhering credential amendments through edit-bound conformity checks, while maintaining DAC security and privacy properties. Importantly, IDEA-DAC requires a one-time setup and deployment for homogeneous JSON credentials.
While I'm not extremely familiar with the area, I find that the paper was relatively easy to follow and well motivated; the authors seem to properly relate their work to prior systems, while clearly highlighting the gap which they aim to fill. Also, despite the sheer number of (necessary) definitions and notations, I think most of them are presented in a rather intuitive way, easing readability.
My first question concerns the maximum length of variables, which allows for IDEA-DAC's one-time setup. I assume that picking an extremely large threshold to accommodate lengthy edits might affect the system's efficiency; is this true? If so, how is this maximum length optimally selected and which one did you use for your use-case evaluation? Also, while I understand the authors' assumption on JSON documents' static nature, i.e., maintaining data types, I would like to ask how would the system behave or what would it require to operate if, for instance, the JSON credential changes format, data types etc.
Moreover, I would expect the evaluation to be more thorough, given that IDEA-DAC can facilitate different modifications. Specifically, I understand that the edits made to the example, use-case credential are to add more publications, meanwhile increasing the document size. Additional edits could be adding new dictionary fields, changing string/numeric values etc. Are these incorporated in the current evaluation and, if not, would they affect the presented performance results?
- Pros
+ Enables precise, integrity-driven editing in DACs
+ Requires one-time setup for homogeneous JSON documents
- Cons
+ Evaluation could possibly be more comprehensive
questions: - How is the maximum length predetermined and how does it affect the system's efficiency?
- What are IDEA-DAC's requirements for accommodating new credential formats, e.g., changing types?
- What precise edits were carried out in the evaluation?
ethics_review_flag: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 1: The reviewer's evaluation is an educated guess |
32oBtcUTfz | IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials | [
"Zonglun Li",
"Shuhao Zheng",
"Junliang Luo",
"Ziyue Xin",
"Xue Liu"
] | Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose a unique R1CS circuit design tailored for IDE. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks that operate without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision. | [
"Integrity-driven Editing (IDE)",
"Decentralized Anonymous Credential (DAC)",
"Edit-bound conformity check"
] | https://openreview.net/pdf?id=32oBtcUTfz | lhGAEwEWzY | official_review | 1,700,719,581,981 | 32oBtcUTfz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2060/Reviewer_uChu"
] | review: A method of IDEA-DAC, integrated driver editing (IDE) with zk-snark and R1CS circuits for editing process is proposed. The efficiency of R1CS design for JSON serialization is optimized, and the editing integrity of dac is improved.
questions: 1, the experimental design is insufficient, the lack of horizontal comparison data of similar software;
2. It is mentioned that the proof time can be further optimized by using a more powerful CPU and advanced ZKP protocol. Does the current scheme encounter performance bottlenecks when dealing with large-scale JSON certificates?
3, please explain how to support selective disclosure and predicate proof of credentials, and how to ensure the anonymity and privacy of users;
4. When describing completeness, the paper mentions the identified limitations in the existing system. Please describe the limitations in detail.
ethics_review_flag: No
ethics_review_description: NULL
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
32oBtcUTfz | IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials | [
"Zonglun Li",
"Shuhao Zheng",
"Junliang Luo",
"Ziyue Xin",
"Xue Liu"
] | Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose a unique R1CS circuit design tailored for IDE. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks that operate without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision. | [
"Integrity-driven Editing (IDE)",
"Decentralized Anonymous Credential (DAC)",
"Edit-bound conformity check"
] | https://openreview.net/pdf?id=32oBtcUTfz | fjozV5S5Xc | official_review | 1,700,668,175,792 | 32oBtcUTfz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2060/Reviewer_324e"
] | review: In this paper, the authors introduce IDEA-DAC, a novel methodology designed to facilitate Integrity-Driven Editing (IDE) for Accountable Decentralized Anonymous Credentials. Overall, the paper is well-crafted and presents concepts in a clear manner.
The abstract goes right away to the point and might not be easy to follow. It has many technical details and might be a roadblock for many potential readers. To improve this, you might want to avoid acronyms and try to explain your proposal assuming that readers have no prior knowledge on the field. Recall that WebConf is not a crypto conference so the writing style should be adjusted accordingly.
A key concern revolves around the practical feasibility of the proposed methodology in real-world environments. As highlighted in Section 7, the proving time is non-linear and time-consuming. Considering that users typically lack resources such as 32 vCPUs and 256GB of memory, the paper should address the question of whether this schema is currently feasible in practical scenarios.
Additionally, given that the proposal introduces a new DAC schema for enabling users to edit JSON documents, the absence of source code is a notable limitation. Including the source code would not only allow readers to validate the proposed results and deploy the solution in a real environment but also encourage contributions to future research in the field.
Minor things:
-Enlarge Table 1 to utilize available empty spaces, enhancing readability.
-Consider merging Sections 6 and 7 as their separation seems unnecessary.
-
Review and avoid the use of abbreviations for improved clarity.
questions: -Using [24] as a baseline, there is a notable lack of detailed information regarding the practicality of the solution in terms of machine capabilities and the performance of the proposed schema. It would be beneficial to clarify the minimum specifications required for all parties involved in deploying the proposed IDEA-DAC.
-The experiments were conducted on a "standard AWS EC2 r5a.8xlarge instance, equipped with 32 vCPUs and 256GB of memory." However, this configuration significantly deviates from a typical computer setup. In comparison to [24], where experiments were carried out on a "2021 Intel i9-11900KB CPU with 8 physical cores and 64GiB RAM," the need for such powerful hardware should be justified.
-Given that the paper introduces a new DAC schema, the inclusion of the source code is crucial. As the proposal itself is a fundamental aspect, providing access to the source code would enhance the transparency and reproducibility of your work.
ethics_review_flag: No
ethics_review_description: n/a
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
32oBtcUTfz | IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials | [
"Zonglun Li",
"Shuhao Zheng",
"Junliang Luo",
"Ziyue Xin",
"Xue Liu"
] | Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose a unique R1CS circuit design tailored for IDE. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks that operate without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision. | [
"Integrity-driven Editing (IDE)",
"Decentralized Anonymous Credential (DAC)",
"Edit-bound conformity check"
] | https://openreview.net/pdf?id=32oBtcUTfz | bOd2wgWZjU | decision | 1,705,909,228,213 | 32oBtcUTfz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: # Summary
Decentralized Anonymous Credential (DAC) systems enable users to verify specific attributes of their identity without revealing their complete identity to the verifier. This paper specifically focuses on the problem of how to update existing credentials. Prior DAC systems allow updates to existing credentials in a revoke-and-reissue approach which increases the verification overhead. This paper presents IDEA-DAC that enables edits directly to a JSON credential document utilizing zero knowledge proofs built on rank-1-constraint-system. This is useful in smart contract applications (and thus relevant to TheWebConf).
# Strengths
+ The paper addresses an important problem.
+ Verification time experiments show significant improvement.
+ Paper is well-written and easy-to-follow, even for non-experts.
# Weaknesses
- Evaluation could be improved to traditional DAC techniques.
# Recommendation
Overall, the reviewers felt that this paper addresses an important problem and proposes an interesting solution that pushes the state-of-the-art forward. In addition, the reviewers also appreciated that the authors engaged in the discussion process, as that answered several of the reviewers questions/concerns on the paper. Therefore, given the strengths and the fit for TheWebConf, I recommend accepting this paper.
--- |
32oBtcUTfz | IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials | [
"Zonglun Li",
"Shuhao Zheng",
"Junliang Luo",
"Ziyue Xin",
"Xue Liu"
] | Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose a unique R1CS circuit design tailored for IDE. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks that operate without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision. | [
"Integrity-driven Editing (IDE)",
"Decentralized Anonymous Credential (DAC)",
"Edit-bound conformity check"
] | https://openreview.net/pdf?id=32oBtcUTfz | K6oU0RYMsU | official_review | 1,701,437,465,339 | 32oBtcUTfz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2060/Reviewer_H1SQ"
] | review: Decentralized Anonymous Credential (DAC) systems enable users to verify specific attributes of their identity without revealing the complete identity to the verifier. This paper focusses on an interesting problem. Traditionally DAC systems allow updates to existing credentials in a revoke-and-reissue approach which increases the verification overhead. This paper presents IDEA-DAC that enables edits directly to a JSON credential document utilizing zero knowledge proofs built on rank-1-constraint-system.
I enjoyed reading this paper, and find the research problem interesting and novel. The work is important in the space of privacy-centric authentication because IDEA-DAC helps with reducing computational redundancy and verification overhead. It also ensures integrity in the editing process of the credentials.
Pros
------
* The paper works on an important and timely problem.
* The paper clearly describes the IDEA-DAC mechanism and provides a use case to explain the end-to-end functionality.
* The verification time performance is encouraging.
Cons
-------
* The paper does not compare its performances with a state-of-the-art system or a traditional DAC system.
* The paper does not discuss other formats of verifiable credentials.
Comments
------------
This paper discusses an important problem, and proposes a system that may have significant impact on the DAC ecosystem. The paper does an excellent work in explaining the IDEA-DAC system. I have few questions for the authors -
* In Section 7 there is no baseline to compare IDEA-DAC’s performance against. Could you show how IDEA-DAC’s performance improves over traditional revoke-reissue credential systems? How IDEA-DAC is better than closely related work like Candid, Coconut etc?
* While I understand that IDEA-DAC focusses on JSON formatted credential documents, the discussion on other types of credential formats is missing. What types of other credential formats are there? How IDEA-DAC would need to be extended to support those formats?
* What are the limitations of IDEA-DAC?
questions: Please see review comments.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
32oBtcUTfz | IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials | [
"Zonglun Li",
"Shuhao Zheng",
"Junliang Luo",
"Ziyue Xin",
"Xue Liu"
] | Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose a unique R1CS circuit design tailored for IDE. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks that operate without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision. | [
"Integrity-driven Editing (IDE)",
"Decentralized Anonymous Credential (DAC)",
"Edit-bound conformity check"
] | https://openreview.net/pdf?id=32oBtcUTfz | Fs6IjhgN0t | official_review | 1,700,782,233,021 | 32oBtcUTfz | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2060/Reviewer_bc4R"
] | review: **Summary**
This paper presents IDEA-DAC, a decentralized anonymous credential system that is designed to support the modification of JSON credential documents of variable length, while ensuring the integrity of these edits. The proposed system employs R1CS and ZK-SNARKs to implement edit-bound conformity checks, ensuring adherence to predefined rules and compliance with integrity standards. One important aspect of this work is that the proposed circuits can support JSON documents up to a specific size without the need for recompilation, setup, or smart contract redeployment.
**Comments**
- The paper is well written in general, easy to follow and understand. I enjoyed reading this paper.
- It's an interesting and important work in the area of DAC systems. It proposes a solution to the issue of credential editing while ensuring their integrity, without the need for the system to revoke and reissue the credentials. I find that this work contributes to advancements in the area of credential management.
- One issue with this paper is that some of the complementary materials included in the appendix are, indeed, very useful for better understanding and following the paper. I would definitely suggest having some algorithms and protocols in the main body of the paper, but I understand that this may be difficult due to the space limitation.
- The experimental evaluation presents key metrics such as the circuit size, and proving and verifying time, for credential documents of different sizes. In that regard, I would like to see a comparison of this system’s performance with other DAC systems (for example those in Table 1), and also an exploration of the trade-offs between any additional verification overheads that incur in this system in comparison to the costs involved in existing systems that revoke and reissue credentials.
questions: It seems that the proving time does not scale linearly. Which operations are responsible for this increase, and can these be further optimised? It seems to me that the proving time might become unacceptably high If the JSON credentials increase significantly in size.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
2w8F73ZffH | Is Contrastive Learning Necessary? A Study of Data Augmentation vs Contrastive Learning in Sequential Recommendation | [
"Peilin Zhou",
"You-Liang Huang",
"Yueqi XIE",
"Jingqi Gao",
"Shoujin Wang",
"Jaeboum KIM",
"Sunghun Kim"
] | Sequential recommender systems (SRS) are designed to predict users’ future behaviors based on their historical interaction data. Recent research has increasingly utilized contrastive learning (CL) to leverage unsupervised signals to alleviate the data sparsity issue in SRS. In general, CL-based SRS first augments the raw sequential interaction data by using data augmentation strategies and employs a contrastive training scheme to enforce the representations of those sequences from the same raw interaction data to be similar. Despite the growing popularity of CL, data augmentation, as a basic component of CL, has not received sufficient attention. This raises the question: Is data augmentation sufficient to achieve superior recommendation results? To answer this question, we benchmark a large amount of data augmentation strategies, as well as state-of-the-art CL-based SRS methods, on four real-world datasets under both warm- and cold-start settings. Intriguingly, the conclusion drawn from our study is that data augmentation is sufficient and CL may not be necessarily required. In fact, utilizing augmentation alone can significantly alleviate the data sparsity issue and certain data augmentation can achieve similar or even superior performance compared with CL-based methods. We hope that our study can further inspire more fundamental studies on the key functional components of complex CL techniques. Our processed datasets and codes will be released once our paper is accepted. | [
"Data Augmentation",
"Sequential Recommendation",
"Contrastive Learning"
] | https://openreview.net/pdf?id=2w8F73ZffH | fd8B3gi7dg | official_review | 1,701,313,214,664 | 2w8F73ZffH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2080/Reviewer_ch1L"
] | review: This work presents a critical comparison of combining data augmentation or contrastive learning (CL) for the sequential recommender systems (SRS), which often suffers from data sparsity. Experiments on four datasets, comparing different data augmentation strategies with state-of-the-art CL-based SRS methods, showing that data augmentation is sufficient and CL may not be necessarily required.
**Pros:**
- (P1) The work systematically studies different data augmentation techniques for sequence data and how they contribute to the sequential recommendation task compared to some state-of-the-art CL-based methods. Notably, slide-window strategy, an augmentation technique, can significantly enhance the performance of SRS (including CL-based methods) and has a synergistic effect when combined with other data augmentation techniques.
- (P2) The codes and datasets of this work will be provided for reproducibility.
- (P3) The experimental results look promising and show relative improvements over baselines.
**Cons:**
- (C1) Slide-window (SW) strategy for sequential data augmentation produces rich augmented data that also preserves sequential context. This obviously significantly increases the amount of data for training. This augmentation strategy thus enhances the performance in contrast to training without SW. Moreover, SW also helps to enhance other CL-based methods performance. The results in Table 3 show that SW also enhances other CL-based methods to achieve the best ranking performance on Yelp and ML-1m datasets. It is not clear how SW combined with other augmentation strategy might superior CL-based methods with SW.
- (C2) On Table 3, it creates an unfair advantage when comparing CL-based + SW with SASRec + SW + another augmentation strategy. It is good to show whether applying similar augmentation strategies when comparing CL-based with non-CL-based models to show the advantages or disadvantage when using CL-based models over non-CL-based models. For example, comparing SASRec + SW + subset-split and ICLRec + SW + subset-split.
questions: I have only one question related to (C2) discussed above.
- (Q1) How do CL-based models perform when applying SW + another augmentation strategy (if possible)? This will help clarify whether applying CL with the same augmentation strategies enhances ranking performance.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2w8F73ZffH | Is Contrastive Learning Necessary? A Study of Data Augmentation vs Contrastive Learning in Sequential Recommendation | [
"Peilin Zhou",
"You-Liang Huang",
"Yueqi XIE",
"Jingqi Gao",
"Shoujin Wang",
"Jaeboum KIM",
"Sunghun Kim"
] | Sequential recommender systems (SRS) are designed to predict users’ future behaviors based on their historical interaction data. Recent research has increasingly utilized contrastive learning (CL) to leverage unsupervised signals to alleviate the data sparsity issue in SRS. In general, CL-based SRS first augments the raw sequential interaction data by using data augmentation strategies and employs a contrastive training scheme to enforce the representations of those sequences from the same raw interaction data to be similar. Despite the growing popularity of CL, data augmentation, as a basic component of CL, has not received sufficient attention. This raises the question: Is data augmentation sufficient to achieve superior recommendation results? To answer this question, we benchmark a large amount of data augmentation strategies, as well as state-of-the-art CL-based SRS methods, on four real-world datasets under both warm- and cold-start settings. Intriguingly, the conclusion drawn from our study is that data augmentation is sufficient and CL may not be necessarily required. In fact, utilizing augmentation alone can significantly alleviate the data sparsity issue and certain data augmentation can achieve similar or even superior performance compared with CL-based methods. We hope that our study can further inspire more fundamental studies on the key functional components of complex CL techniques. Our processed datasets and codes will be released once our paper is accepted. | [
"Data Augmentation",
"Sequential Recommendation",
"Contrastive Learning"
] | https://openreview.net/pdf?id=2w8F73ZffH | dvQSewUZsk | official_review | 1,700,702,970,371 | 2w8F73ZffH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2080/Reviewer_UeyS"
] | review: The paper, through meticulous experimentation, demonstrates that optimal performance can be achieved through simple data augmentation or the combination of various data augmentation methods. The results even surpass some current CL-based methods, suggesting that there may be no need to design complex contrastive losses in the future.
These findings have the potential to simplify the development of temporal recommendation, guiding researchers towards more fundamental studies in contrastive learning. The emphasis should shift from attempting to increase the complexity of contrastive learning to enhancing recommendation effectiveness.
----------- Strengths -----------
1. The authors hope to inspire researchers to conduct more fundamental studies on the key functional components of CL techniques rather than finding ways to come up with more and more complex comparative learning methods.
2. The authors demonstrate with exhaustive experiments that the effectiveness of sequence recommendation can be improved by data augmentation.
3. The paper is well-organized and easy to follow.
----------- Weaknesses -----------
1. The authors say that the data augmentation strategy is simpler and more effective compared to CL-based methods, and the experimental section proves the effectiveness through a large number of experiments but lacks the associated complexity analysis.
2. I noticed that in the experimental section, the authors give two baseline models, CL4SRec and ICLRec, on the Beauty and Sports datasets, which are quite different from those given in the original article.
3. In Figure 9, what you're trying to say is that different slide window sizes don't have a significant effect on recommendation performance, but there should be some differences between them, even if they're small.
questions: Please see the weakness part.
ethics_review_flag: No
ethics_review_description: No
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2w8F73ZffH | Is Contrastive Learning Necessary? A Study of Data Augmentation vs Contrastive Learning in Sequential Recommendation | [
"Peilin Zhou",
"You-Liang Huang",
"Yueqi XIE",
"Jingqi Gao",
"Shoujin Wang",
"Jaeboum KIM",
"Sunghun Kim"
] | Sequential recommender systems (SRS) are designed to predict users’ future behaviors based on their historical interaction data. Recent research has increasingly utilized contrastive learning (CL) to leverage unsupervised signals to alleviate the data sparsity issue in SRS. In general, CL-based SRS first augments the raw sequential interaction data by using data augmentation strategies and employs a contrastive training scheme to enforce the representations of those sequences from the same raw interaction data to be similar. Despite the growing popularity of CL, data augmentation, as a basic component of CL, has not received sufficient attention. This raises the question: Is data augmentation sufficient to achieve superior recommendation results? To answer this question, we benchmark a large amount of data augmentation strategies, as well as state-of-the-art CL-based SRS methods, on four real-world datasets under both warm- and cold-start settings. Intriguingly, the conclusion drawn from our study is that data augmentation is sufficient and CL may not be necessarily required. In fact, utilizing augmentation alone can significantly alleviate the data sparsity issue and certain data augmentation can achieve similar or even superior performance compared with CL-based methods. We hope that our study can further inspire more fundamental studies on the key functional components of complex CL techniques. Our processed datasets and codes will be released once our paper is accepted. | [
"Data Augmentation",
"Sequential Recommendation",
"Contrastive Learning"
] | https://openreview.net/pdf?id=2w8F73ZffH | XGkODj05pB | official_review | 1,701,371,364,614 | 2w8F73ZffH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2080/Reviewer_pDy8"
] | review: This paper designs a comparison framework to evaluate two approaches in sequential recommendations, Data Augmentation and Contrastive Learning. The intellectual contributions of this paper are weak in my opinion for a conference at WWW level. I found the paper to be focused too much on application, seems to be a benchmarking paper. Additionally, the contributions to web community are not clearly stated which is required by the conference.
questions: Major:
1- All evaluated datasets are quite sparse. Experimenting also on a slightly denser dataset will be nice for comparison.
2- How could hyperparameter optimization for different methods be carried out? In the text, it says that 0.5 dropout has been used for all transformers. However, that will not be fair.
3- The contribution to the web community is not clearly stated anywhere in text, which is required by the conference.
Minor:
1- Domain specific evaluation metrics should be referenced, and explained in the text.
2- Ablation study should be performed more in depth.
ethics_review_flag: No
ethics_review_description: N/A
scope: 2: The connection to the Web is incidental, e.g., use of Web data or API
novelty: 2
technical_quality: 2
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
2w8F73ZffH | Is Contrastive Learning Necessary? A Study of Data Augmentation vs Contrastive Learning in Sequential Recommendation | [
"Peilin Zhou",
"You-Liang Huang",
"Yueqi XIE",
"Jingqi Gao",
"Shoujin Wang",
"Jaeboum KIM",
"Sunghun Kim"
] | Sequential recommender systems (SRS) are designed to predict users’ future behaviors based on their historical interaction data. Recent research has increasingly utilized contrastive learning (CL) to leverage unsupervised signals to alleviate the data sparsity issue in SRS. In general, CL-based SRS first augments the raw sequential interaction data by using data augmentation strategies and employs a contrastive training scheme to enforce the representations of those sequences from the same raw interaction data to be similar. Despite the growing popularity of CL, data augmentation, as a basic component of CL, has not received sufficient attention. This raises the question: Is data augmentation sufficient to achieve superior recommendation results? To answer this question, we benchmark a large amount of data augmentation strategies, as well as state-of-the-art CL-based SRS methods, on four real-world datasets under both warm- and cold-start settings. Intriguingly, the conclusion drawn from our study is that data augmentation is sufficient and CL may not be necessarily required. In fact, utilizing augmentation alone can significantly alleviate the data sparsity issue and certain data augmentation can achieve similar or even superior performance compared with CL-based methods. We hope that our study can further inspire more fundamental studies on the key functional components of complex CL techniques. Our processed datasets and codes will be released once our paper is accepted. | [
"Data Augmentation",
"Sequential Recommendation",
"Contrastive Learning"
] | https://openreview.net/pdf?id=2w8F73ZffH | GHkZZWV9Dh | decision | 1,705,909,249,706 | 2w8F73ZffH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: If we're willing to treat the one negative review as an outlier, the remaining reviews make a pretty good argument for acceptance. The rebuttal is fairly persuasive (though not all reviewers engaged with it), and most of the issues raised seem to be about issues that could be clarified in a revision (though there are quite a lot of these, and I dare say the authors have a lot to do to revise the paper). |
2w8F73ZffH | Is Contrastive Learning Necessary? A Study of Data Augmentation vs Contrastive Learning in Sequential Recommendation | [
"Peilin Zhou",
"You-Liang Huang",
"Yueqi XIE",
"Jingqi Gao",
"Shoujin Wang",
"Jaeboum KIM",
"Sunghun Kim"
] | Sequential recommender systems (SRS) are designed to predict users’ future behaviors based on their historical interaction data. Recent research has increasingly utilized contrastive learning (CL) to leverage unsupervised signals to alleviate the data sparsity issue in SRS. In general, CL-based SRS first augments the raw sequential interaction data by using data augmentation strategies and employs a contrastive training scheme to enforce the representations of those sequences from the same raw interaction data to be similar. Despite the growing popularity of CL, data augmentation, as a basic component of CL, has not received sufficient attention. This raises the question: Is data augmentation sufficient to achieve superior recommendation results? To answer this question, we benchmark a large amount of data augmentation strategies, as well as state-of-the-art CL-based SRS methods, on four real-world datasets under both warm- and cold-start settings. Intriguingly, the conclusion drawn from our study is that data augmentation is sufficient and CL may not be necessarily required. In fact, utilizing augmentation alone can significantly alleviate the data sparsity issue and certain data augmentation can achieve similar or even superior performance compared with CL-based methods. We hope that our study can further inspire more fundamental studies on the key functional components of complex CL techniques. Our processed datasets and codes will be released once our paper is accepted. | [
"Data Augmentation",
"Sequential Recommendation",
"Contrastive Learning"
] | https://openreview.net/pdf?id=2w8F73ZffH | 7z17Qd3ihb | official_review | 1,701,268,387,344 | 2w8F73ZffH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2080/Reviewer_o33W"
] | review: **Summary**
The paper presents an empirical study on the role of contrastive learning (CL) in data augmentation-based sequential recommendation systems (SRS). This seems like an unexplored and understudied problem where a thorough empirical investigation may benefit the research community. The paper poses and investigates the research question – whether CL is necessary to improve SRS performance. Despite the growing popularity of CL in SRS, the paper makes an intriguing conclusion -- data augmentation is sufficient, and CL is not necessary.
**Pros**
- This paper appears to be the first comprehensive study that compares 8 different augmentation strategies and 3 popular CL baselines. It also studies the effect of augmentation size and sampling strategy.
- The research questions are mostly meaningful and impactful. The experiments are carefully designed to answer these questions.
**Cons**
- *Questionable results*: The results of CL baselines in Table 2 are wildly different from the reported ones in their respective original papers.
- *Weird results in RQ4*: The sequence length in the beauty dataset is <10 per user. However, according to the results presented in Fig. 7, it appears that using up to 10 augmentations (which may include deletions) in a sequence does not affect recall.
- *Unclear insights in RQ3*: The authors conclude from Fig. 5 that all methods perform well for popular items. It is not clear why this is an insightful observation. Also, at least from this figure, it is not clear if there is a clear winner between data augmentation and CL methods across different item groups.
- *Low Reproducibility*: Code not provided.
**Recommendation**
This is a borderline paper due to questionable confidence in experimental results and conclusions drawn from them. As an empirical study, the lack of confidence or robust justification undermines its contribution.
questions: - Please justify the discrepancy in the results of baseline CL methods (see cons).
- Would analyzing performance across different user groups, rather than item groups (RQ3), provide more meaningful insights? It is suggested that such analyses be added to the paper.
- Are the results (recall and ndcg) reported as percentages? This is not explicitly mentioned.
- The different colors in Fig. 5 are very hard to distinguish.
ethics_review_flag: No
ethics_review_description: no ethical concerns
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2w8F73ZffH | Is Contrastive Learning Necessary? A Study of Data Augmentation vs Contrastive Learning in Sequential Recommendation | [
"Peilin Zhou",
"You-Liang Huang",
"Yueqi XIE",
"Jingqi Gao",
"Shoujin Wang",
"Jaeboum KIM",
"Sunghun Kim"
] | Sequential recommender systems (SRS) are designed to predict users’ future behaviors based on their historical interaction data. Recent research has increasingly utilized contrastive learning (CL) to leverage unsupervised signals to alleviate the data sparsity issue in SRS. In general, CL-based SRS first augments the raw sequential interaction data by using data augmentation strategies and employs a contrastive training scheme to enforce the representations of those sequences from the same raw interaction data to be similar. Despite the growing popularity of CL, data augmentation, as a basic component of CL, has not received sufficient attention. This raises the question: Is data augmentation sufficient to achieve superior recommendation results? To answer this question, we benchmark a large amount of data augmentation strategies, as well as state-of-the-art CL-based SRS methods, on four real-world datasets under both warm- and cold-start settings. Intriguingly, the conclusion drawn from our study is that data augmentation is sufficient and CL may not be necessarily required. In fact, utilizing augmentation alone can significantly alleviate the data sparsity issue and certain data augmentation can achieve similar or even superior performance compared with CL-based methods. We hope that our study can further inspire more fundamental studies on the key functional components of complex CL techniques. Our processed datasets and codes will be released once our paper is accepted. | [
"Data Augmentation",
"Sequential Recommendation",
"Contrastive Learning"
] | https://openreview.net/pdf?id=2w8F73ZffH | 6kEZay450T | official_review | 1,701,830,782,465 | 2w8F73ZffH | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2080/Reviewer_cchU"
] | review: This work investigates the role of data augmentation and contrastive learning in sequential recommendation systems (SRS). The authors conduct a comprehensive experimental study comparing the performance of SRS based on data augmentation alone and those based on contrastive learning. They explore the effectiveness of eight popular sequence-level augmentation strategies and benchmark them against state-of-the-art contrastive learning-based methods on real-world datasets. The results show that certain data augmentation strategies can achieve comparable or even superior performance to contrastive learning-based methods while requiring less training and inference time. This study highlights the potential of data augmentation as a standalone technique to address the data sparsity issue in SRS and questions the necessity of contrastive learning in this context.
The findings of the study validate the efficacy of data augmentation as a standalone technique for improving the performance of SRS. The results demonstrate that certain sequence-level data augmentation strategies can effectively mitigate the data sparsity issue, achieving comparable or even superior performance compared to contrastive learning-based methods.
To improve this work, the authors could consider including detailed parameter tuning and settings of the compared baselines in the evaluation section. This additional information would provide a clearer understanding of the experimental setup and allow for better reproducibility of the results. Specifically, the authors could provide details on the hyperparameters and configurations used for each baseline method, including the contrastive learning methods and data augmentation strategies.
To enhance the comprehensiveness of the work, conducting a more detailed experiment comparison between the newly proposed method and baseline models in terms of computational and memory cost would be beneficial. This additional analysis would provide insights into the efficiency and resource requirements of the different approaches, which is an important aspect to consider in real-world applications. The authors could compare the training and inference times of each method, measure the memory usage during the experiments, and report any significant differences observed. Additionally, they could explore the scalability of the proposed method and baselines by varying the dataset size or model complexity to assess their performance under different computational and memory constraints.
questions: To improve this work, the authors should include detailed parameter tuning and baseline settings in the evaluation section. This would enhance the understanding and reproducibility of the experimental setup. Additionally, conducting a more comprehensive experiment comparison between the proposed method and baselines in terms of computational and memory cost is necessary. This analysis would provide insights into the efficiency and resource requirements of each approach, benefiting real-world applications. Comparing training and inference times, measuring memory usage, and exploring scalability would contribute to a more comprehensive evaluation.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2kMfEmBbT5 | Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem | [
"Chen Huang",
"Haoyang Li",
"Yifan Zhang",
"Wenqiang Lei",
"Jiancheng Lv"
] | The vanilla Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology, which may lead to the over-smoothing problem when GCN goes deep. To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology. However, these methods heavily rely on topological information and ignore the node attribute space, which severely sacrifices the expressive power of the deep GCNs, especially when dealing with disassortative graphs.
In this paper, we propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
Specifically, we first derive a tailored attribute-based high-pass filter that can be interpreted theoretically as a minimizer for semi-supervised kernel ridge regression.
Then, we cast the topology-based low-pass filter as a Mercer’s kernel within the context of GCNs. This serves as a foundation for combining it with the attribute-based filter to capture the adaptive-frequency information.
Finally, we derive the cross-space filter via an effective multiple-kernel learning strategy, which unifies the attribute-based high-pass filter and the topology-based low-pass filter. This helps to address the over-smoothing problem while maintaining effectiveness.
Extensive experiments demonstrate that CSF not only successfully alleviates the over-smoothing problem but also promotes the effectiveness of the node classification task. | [
"Graph convolutional network",
"over-smoothing",
"node attribute"
] | https://openreview.net/pdf?id=2kMfEmBbT5 | voedYF2Eqo | official_review | 1,700,994,585,442 | 2kMfEmBbT5 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1600/Reviewer_SCn8"
] | review: Based on the multiple-kernel learning, this paper proposes a cross-space adaptive filter that includes attributed-based high-pass filter and topology-based low-pass filter, which can alleviate the over-smoothing problem, especially when processing disassortative graphs. Besides, the proposed attributed-based filter is interpretable based on semi-supervised kernel ridge regression. Experiments demonstrate that the proposed CSF can alleviate the over-smoothing problem as GNN goes deep.
Strength:
S1. This paper attempts to alleviate the over-smoothing problem of GNNs from the perspectives of graph kernel theory, where both attribute information and topology structure are employed to design different filters. This idea is sound and interesting.
S2. This paper provides theoretical analysis for the proposed attributed-based high-pass filter, which is convincing.
S3. The experimental results compared to baselines are promising.
Weakness:
W1. In a semi-supervised learning setting, how to process unlabeled nodes in Z in Eq (1) in this paper? would their labels be inferred during model training? If not, how does this approach differ from supervised Kernel Ridge Regression?
W2. GNN-BC [1] has validated that attribute information and topological structure can be simultaneously employed to alleviate the over-smoothing issue in GNNs. This paper should clarify the advantages of employing both information in graph kernel theory compared to GNN-BC.
Furthermore, the performance of GNN-BC reported in this manuscript is considerably lower than that in the original paper. While the authors mention the absence of available code for GNN-BC, its code was made available at [2] last year. Please check.
W3. The paper shows that each filter is important for graph learning. However, in this cross-space filter, it should be clarified how to achieve a balance between the high-pass and low-pass filters. Furthermore, an experimental analysis of the CSF performance across various adaptive scenarios would be insightful.
Reference:
[1] Liang Yang, Wenmiao Zhou, Weihang Peng, Bingxin Niu, Junhua Gu, Chuan Wang, Xiaochun Cao, Dongxiao He. "Graph Neural Networks Beyond Compromise Between Attribute and Topology". WWW 2022.
[2] https://github.com/GitEventHandler/GNNBC
questions: Please refer to weakness above.
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2kMfEmBbT5 | Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem | [
"Chen Huang",
"Haoyang Li",
"Yifan Zhang",
"Wenqiang Lei",
"Jiancheng Lv"
] | The vanilla Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology, which may lead to the over-smoothing problem when GCN goes deep. To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology. However, these methods heavily rely on topological information and ignore the node attribute space, which severely sacrifices the expressive power of the deep GCNs, especially when dealing with disassortative graphs.
In this paper, we propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
Specifically, we first derive a tailored attribute-based high-pass filter that can be interpreted theoretically as a minimizer for semi-supervised kernel ridge regression.
Then, we cast the topology-based low-pass filter as a Mercer’s kernel within the context of GCNs. This serves as a foundation for combining it with the attribute-based filter to capture the adaptive-frequency information.
Finally, we derive the cross-space filter via an effective multiple-kernel learning strategy, which unifies the attribute-based high-pass filter and the topology-based low-pass filter. This helps to address the over-smoothing problem while maintaining effectiveness.
Extensive experiments demonstrate that CSF not only successfully alleviates the over-smoothing problem but also promotes the effectiveness of the node classification task. | [
"Graph convolutional network",
"over-smoothing",
"node attribute"
] | https://openreview.net/pdf?id=2kMfEmBbT5 | v4fV2JKZXd | official_review | 1,701,387,102,471 | 2kMfEmBbT5 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1600/Reviewer_tnra"
] | review: The paper introduces a novel method, Cross-Space Adaptive Filter (CSF), designed to enhance the performance of deepGCNs in node classification tasks. The authors propose a attribute-based high-pass filter and a topology-based low-pass filter, combining them to capture adaptive-frequency information. The construction of the proposed high-pass filter is interpreted through kernel ridge regression, offering a new approach to alleviate the over-smoothing problem. The authors conduct comparative experiments with various baselines under different
numbers of convolution layers and reveal more characteristics of CSF by conducting ablation studies on graph topology and node attributes. The experimental results demonstrate CSF's ability to mitigate the over-smoothing issue and enhance the effectiveness of deep GCN.
However, the writing can be improved, especially in section 1 and 2, there some almost identical sentences were repeated many times. In summary, the paper exhibits good novelty, solid analysis, and readability.
questions: In the process of node updating, there is a concatenation with the raw feature X. Can the authors provide an ablation study by comparing the performance with a GCN that includes such a skip connection?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
2kMfEmBbT5 | Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem | [
"Chen Huang",
"Haoyang Li",
"Yifan Zhang",
"Wenqiang Lei",
"Jiancheng Lv"
] | The vanilla Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology, which may lead to the over-smoothing problem when GCN goes deep. To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology. However, these methods heavily rely on topological information and ignore the node attribute space, which severely sacrifices the expressive power of the deep GCNs, especially when dealing with disassortative graphs.
In this paper, we propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
Specifically, we first derive a tailored attribute-based high-pass filter that can be interpreted theoretically as a minimizer for semi-supervised kernel ridge regression.
Then, we cast the topology-based low-pass filter as a Mercer’s kernel within the context of GCNs. This serves as a foundation for combining it with the attribute-based filter to capture the adaptive-frequency information.
Finally, we derive the cross-space filter via an effective multiple-kernel learning strategy, which unifies the attribute-based high-pass filter and the topology-based low-pass filter. This helps to address the over-smoothing problem while maintaining effectiveness.
Extensive experiments demonstrate that CSF not only successfully alleviates the over-smoothing problem but also promotes the effectiveness of the node classification task. | [
"Graph convolutional network",
"over-smoothing",
"node attribute"
] | https://openreview.net/pdf?id=2kMfEmBbT5 | fo41Ox7Bij | official_review | 1,700,837,627,652 | 2kMfEmBbT5 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1600/Reviewer_orgj"
] | review: This paper aims to solve the over-smoothing and improve the effectiveness of deep GCN on downstream tasks. It designs a cross-space filter named CSF. Specifically, it first designs a high-pass filter based on correlations of node attributes, which can be interpreted as a minimization of semi-supervised kernel ridge regression. It then combines the high-pass filter with the conventional low-pass filter of GNNs. Finally, an effective multiple-kernel learning strategy is employed to unify these two filters and adaptively control the frequency information. Extensive experiments demonstrate the effectiveness of CSF.
> Quality
Pros: Most arguments are well-supported. Extensive experiments demonstrate the effectiveness of the proposed method. This paper displays a variety of visualization reports, e.g., Figures 2 and 3, which make the model more accessible and convincing.
Cons: This paper doesn’t compare with AM-GCN [1], which similarly unifies the graph topology and node attributes. In addition, this paper lacks some important baselines and an analysis of $\gamma$ in Eq(5). Please see Questions 2.
> Clarity
Pros: The paper is well-organized in general and experimental details are clear.
Cons: The paper lacks a theoretical derivation process of the equations in Session 4.1. Additionally, there are some minor mistakes in these equations. Please see Question 1.
> Originality
This paper solves the over-smoothing problem via a cross-space filter, which adaptively integrates an attributed-based high-pass filter and a topology-based low-pass filter to capture the frequency information.
> Significance
The lack of interpretability of hand-crafted high-pass filters is a significant challenge. This paper is the first to interpret the attribute-based high-pass as a minimization of semi-supervised kernel ridge regression.
[1] AM-GCN: Adaptive multi-channel graph convolutional networks. KDD 2020.
questions: 1. There are some minor mistakes in equations 2 and 3. Specifically, the first term should be $||\mathbb{Y}^T\left(I-\Gamma\left(K, a_3\right)\right)\mathbb{Y}||_{F}$ rather than $\mathbb{Y}^T\left(I-\Gamma\left(K, a_3\right)\right)\mathbb{Y}$ if $\mathbb{Y}$ is a matrix.
Additionally, I think it would be better to provide the derivation process of the equations in Session 4.1, especially the process from Eq 1 to Eq 2 and the solution for Eq 3.
2. The benchmarks lack experiments evaluating polynomial-based spectral GNNs, such as GPR-GNN [1] and ChebNetII [2], as well as GNNs with cross-space convolution, such as AM-GCN. GPR-GNN and ChebNetII not only solve the over-smoothing problems but also demonstrate competitive performance on the disassortative graphs.
[1] Adaptive universal generalized pagerank graph neural network.
[2] Convolutional neural networks on graphs with Chebyshev approximation, revisited.
ethics_review_flag: No
ethics_review_description: No ethics issue
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2kMfEmBbT5 | Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem | [
"Chen Huang",
"Haoyang Li",
"Yifan Zhang",
"Wenqiang Lei",
"Jiancheng Lv"
] | The vanilla Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology, which may lead to the over-smoothing problem when GCN goes deep. To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology. However, these methods heavily rely on topological information and ignore the node attribute space, which severely sacrifices the expressive power of the deep GCNs, especially when dealing with disassortative graphs.
In this paper, we propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
Specifically, we first derive a tailored attribute-based high-pass filter that can be interpreted theoretically as a minimizer for semi-supervised kernel ridge regression.
Then, we cast the topology-based low-pass filter as a Mercer’s kernel within the context of GCNs. This serves as a foundation for combining it with the attribute-based filter to capture the adaptive-frequency information.
Finally, we derive the cross-space filter via an effective multiple-kernel learning strategy, which unifies the attribute-based high-pass filter and the topology-based low-pass filter. This helps to address the over-smoothing problem while maintaining effectiveness.
Extensive experiments demonstrate that CSF not only successfully alleviates the over-smoothing problem but also promotes the effectiveness of the node classification task. | [
"Graph convolutional network",
"over-smoothing",
"node attribute"
] | https://openreview.net/pdf?id=2kMfEmBbT5 | Ua36a8MGzw | official_review | 1,700,748,717,283 | 2kMfEmBbT5 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1600/Reviewer_dLdN"
] | review: This paper studies the over-smoothing problem of graph convolutional networks. Specifically, the authors propose an interpretable high-pass filter to capture the correlations among node attributes and extract high-frequency information from the attribution space. By integrating the high-pass filter and the low-pass filter from graph topology space, the proposed cross-space adaptive filter (CSF) shows promising performance on addressing the over-smoothing problem, and helps GCN models achieve superior performance on the challenging disassortative graphs.
Pros:
1. This paper is well organized and easy to follow.
2. This paper explores the over-smoothing problem of GCNs --- an essential challenge in the filed of graph neural network, and propose an effective solution.
3. This work can be interpreted theoretically.
Cons:
1. The authors do not explain why the high-pass filter come from the attribute space, while low-pass filter come from the topology space. The motivation of such a design is desired to be clarified.
2. The authors conduct experiments on the disassortative graphs (i.e., heterophilic graphs). More related baselines such as [1] and [2] should be considered for comparison.
[1] Auto-HeG: Automated Graph Neural Network on Heterophilic Graphs. WWW 2023.
[2] Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited. NeurIPS 2022.
questions: Please response according to the aforementioned cons.
ethics_review_flag: No
ethics_review_description: No ethical issues
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2kMfEmBbT5 | Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem | [
"Chen Huang",
"Haoyang Li",
"Yifan Zhang",
"Wenqiang Lei",
"Jiancheng Lv"
] | The vanilla Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology, which may lead to the over-smoothing problem when GCN goes deep. To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology. However, these methods heavily rely on topological information and ignore the node attribute space, which severely sacrifices the expressive power of the deep GCNs, especially when dealing with disassortative graphs.
In this paper, we propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
Specifically, we first derive a tailored attribute-based high-pass filter that can be interpreted theoretically as a minimizer for semi-supervised kernel ridge regression.
Then, we cast the topology-based low-pass filter as a Mercer’s kernel within the context of GCNs. This serves as a foundation for combining it with the attribute-based filter to capture the adaptive-frequency information.
Finally, we derive the cross-space filter via an effective multiple-kernel learning strategy, which unifies the attribute-based high-pass filter and the topology-based low-pass filter. This helps to address the over-smoothing problem while maintaining effectiveness.
Extensive experiments demonstrate that CSF not only successfully alleviates the over-smoothing problem but also promotes the effectiveness of the node classification task. | [
"Graph convolutional network",
"over-smoothing",
"node attribute"
] | https://openreview.net/pdf?id=2kMfEmBbT5 | QbAIWeg6bp | decision | 1,705,909,215,067 | 2kMfEmBbT5 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The paper develops a method to tackle the over-smoothing problem in graph learning. Specifically, the method uses an attribute-based high-pass filter and a topology-based low-pass filter.
Pros:
* Oversmoothing is a well-known problem in graph learning and the method developed in the paper shows pretty good performance over the baseline especially when the number of layers increases.
* The paper is easy to follow and understand.
* The method is novel and has theoretical support.
Cons:
* The method is only studied on small datasets, but the real-world datasets in a single graph setting are usually very large. Even though the author claims that the method has a complexity of O(m x N x N), this complexity is too large to scale to large datasets. This limits the applicability of the method in the real-world setting.
* The method doesn't really increase the model performance when the number of layers increases. The authors should find the settings/datasets where increasing the number of layers can improve performance. Otherwise, the studied method is not useful in practice. |
2kMfEmBbT5 | Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem | [
"Chen Huang",
"Haoyang Li",
"Yifan Zhang",
"Wenqiang Lei",
"Jiancheng Lv"
] | The vanilla Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology, which may lead to the over-smoothing problem when GCN goes deep. To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology. However, these methods heavily rely on topological information and ignore the node attribute space, which severely sacrifices the expressive power of the deep GCNs, especially when dealing with disassortative graphs.
In this paper, we propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
Specifically, we first derive a tailored attribute-based high-pass filter that can be interpreted theoretically as a minimizer for semi-supervised kernel ridge regression.
Then, we cast the topology-based low-pass filter as a Mercer’s kernel within the context of GCNs. This serves as a foundation for combining it with the attribute-based filter to capture the adaptive-frequency information.
Finally, we derive the cross-space filter via an effective multiple-kernel learning strategy, which unifies the attribute-based high-pass filter and the topology-based low-pass filter. This helps to address the over-smoothing problem while maintaining effectiveness.
Extensive experiments demonstrate that CSF not only successfully alleviates the over-smoothing problem but also promotes the effectiveness of the node classification task. | [
"Graph convolutional network",
"over-smoothing",
"node attribute"
] | https://openreview.net/pdf?id=2kMfEmBbT5 | Cvszebx7XQ | official_review | 1,701,397,399,460 | 2kMfEmBbT5 | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1600/Reviewer_Nnvd"
] | review: This paper proposes to a novel method for alleviating the over-smoothing problem in GCN by applying a low-pass filter in the graph topology space and a high-pass filter in the node attribute space.
pros:
1. The paper is well-written and easy to follow.
2. The method is supported with comprehensive theory.
3. The experiments demonstrate good improvements over baselines.
cons:
1. The method has high time complexity of $O(n^3)$, which makes it infeasible for medium and large graphs.
2. The experimental setup seems biased towards the proposed method. Table 1 shows the average classification accuracy over models with varying depth. However, it is well-known that most GNN architectures achieve the best accuracy at 2 or 3 layers and experience serious accuracy degradation with increasing depth. As shown in figure 4, the proposed method CSF does not achieve better accuracy with increasing depth in most cases. So having many layers seems very unnecessary.
3. The authors are missing on some important baselines, e.g. RevGNN [1], which also uses a deep architecture.
References
[1] Li, Guohao, et al. "Training graph neural networks with 1000 layers." International conference on machine learning. PMLR, 2021.
questions: Please see the full review for questions.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
2UZXqt9hpK | Beyond Labels and Topics: Discovering Causal Relationships in Neural Topic Modeling | [
"Yi-Kun Tang",
"Heyan Huang",
"Xuewen Shi",
"Xian-Ling Mao"
] | Topic models that can take advantage of labels are broadly used in identifying interpretable topics from textual data.
However, existing topic models tend to merely view labels as names of topic clusters or as categories of texts, thereby neglecting the potential causal relationships between supervised information and latent topics, as well as within these elements themselves.
In this paper, we focus on uncovering possible causal relationships both between and within the supervised information and latent topics to better understand the mechanisms behind the emergence of the topics and the labels.
To this end, we propose Causal Relationship-Aware Neural Topic Model (CRNTM), a novel neural topic model that can automatically uncover interpretable causal relationships between and within supervised information and latent topics, while concurrently discovering high-quality topics.
In CRNTM, both supervised information and latent topics are treated as nodes, with the causal relationships represented as directed edges in a Directed Acyclic Graph (DAG).
A Structural Causal Model (SCM) is employed to model the DAG.
Experiments are conducted on three public corpora with different types of labels.
Experimental results show that the discovered causal relationships are both reliable and interpretable, and
the learned topics are of high quality comparing with seven start-of-the-art topic model baselines. | [
"Causal relationships discovery",
"Neural topic model",
"Structural Causal Model"
] | https://openreview.net/pdf?id=2UZXqt9hpK | xYuGsPJSW9 | official_review | 1,700,773,097,692 | 2UZXqt9hpK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2503/Reviewer_uvXZ"
] | review: ### Quality
The authors propose a method CRNTM, which aims to leverage the causal relationships among labels in topic modeling. In CRNTM, both supervised information and latent topics are treated as nodes, with the causal relationships represented as directed edges in a DAG modeled with a Structural Causal Model (SCM). The method design and execution seem sound to me.
The method's performance is evaluated on three datasets and is compared to several (supervised) topic modeling methods. The authors use standard evaluation metrics including topic coherence, topic uniqueness, and topic quality. They also showcase the method's ability to uncover causal relationships between supervised information and latent topics. The experiments are well-executed as well (albeit some additional baselines could be included; see below).
### Clarity
The paper is overall well-written and easy to follow. The figures and tables are also well-presented.
### Originality
The proposed method, CRNTM, is novel based on my knowledge. The major novelty is to explicitly model the causal relationships between supervised information and latent topics.
### Significance
This work is under the category of topic modeling and will facilitate the applications related to topic discovery.
### **Pros**
- The proposed method, CRNTM, is new and interesting.
- The method design is sound.
- The authors provide a thorough evaluation of the method's performance, comparing it to several topic modeling methods.
### **Cons**
- I'm not sure if the statement "casual relationships between supervised information and latent topics have not been visited in prior work" is completely accurate. Although past work may not be under exactly the same setup as studied in this paper, the line of hierarchical topic models can also model topics as DAGs to capture their casual relationships. I would imagine it wouldn't be too hard to extend those methods to incorporate the relationship introduced by supervised information. The paper would benefit from including more discussions of hierarchical topic models.
- I believe CatE (Meng et al.) is also a relevant baseline that can be compared to. I'd encourage the authors to include it in the evaluation.
Reference:
Meng et al. “Discriminative Topic Mining via Category-Name Guided Text Embedding.” WWW 2020.
questions: Please address the cons raised in my main review.
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2UZXqt9hpK | Beyond Labels and Topics: Discovering Causal Relationships in Neural Topic Modeling | [
"Yi-Kun Tang",
"Heyan Huang",
"Xuewen Shi",
"Xian-Ling Mao"
] | Topic models that can take advantage of labels are broadly used in identifying interpretable topics from textual data.
However, existing topic models tend to merely view labels as names of topic clusters or as categories of texts, thereby neglecting the potential causal relationships between supervised information and latent topics, as well as within these elements themselves.
In this paper, we focus on uncovering possible causal relationships both between and within the supervised information and latent topics to better understand the mechanisms behind the emergence of the topics and the labels.
To this end, we propose Causal Relationship-Aware Neural Topic Model (CRNTM), a novel neural topic model that can automatically uncover interpretable causal relationships between and within supervised information and latent topics, while concurrently discovering high-quality topics.
In CRNTM, both supervised information and latent topics are treated as nodes, with the causal relationships represented as directed edges in a Directed Acyclic Graph (DAG).
A Structural Causal Model (SCM) is employed to model the DAG.
Experiments are conducted on three public corpora with different types of labels.
Experimental results show that the discovered causal relationships are both reliable and interpretable, and
the learned topics are of high quality comparing with seven start-of-the-art topic model baselines. | [
"Causal relationships discovery",
"Neural topic model",
"Structural Causal Model"
] | https://openreview.net/pdf?id=2UZXqt9hpK | unyYwNTYPb | official_review | 1,701,274,218,056 | 2UZXqt9hpK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2503/Reviewer_DNFr"
] | review: Supervised topic modelling has remained a dominant way to model latent topics because the topic models have additional side information in the form of labels that could help guide the latent topic model to generate interpretable topics. Efforts such as those Supervised topic models and maximum margin topic models have been developed a long time ago.
In maximum-margin topic models (https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/38352.pdf), generative and discriminative learning paradigms are exploited. The underlying model is a posterior regularisation framework that regularises the latent space to discover interpretable topics. This paper would have benefitted had such key works been cited.
In this paper, the authors focus not on discovering topics like supervised topic models but on developing causal relationships. The key idea of the model is to jointly discover the causal relationships within a supervised learning framework and discover latent topic information.
From the experimental study, the model improves upon existing methods.
While there are advantages, there are a few disadvantages to this work too.
One advantage is that this work models causal relationships under a supervised learning setting. The key limitation of this work is that it fixes the number of topics rather than finding the number of topics using techniques such as tuning or cross-validation. There is another line of work that not only model word order but also model topic correlations such as the correlated topic model and NTSeg (https://dl.acm.org/doi/pdf/10.1145/2484028.2484062). It would be interesting to find out how this work is different from those works. This is where the discussion on causation and correlation might be important. It is also important that the authors conduct experiments on some downstream applications such as document classification or information retrieval to find out whether the model generalises reliably on downstream applications too. Such experimental analyses have become very popular in the last decade.
questions: The authors can find the questions in my main comments. I wanted to post questions as I go along my comments to remain coherent.
ethics_review_flag: No
ethics_review_description: NIL
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
2UZXqt9hpK | Beyond Labels and Topics: Discovering Causal Relationships in Neural Topic Modeling | [
"Yi-Kun Tang",
"Heyan Huang",
"Xuewen Shi",
"Xian-Ling Mao"
] | Topic models that can take advantage of labels are broadly used in identifying interpretable topics from textual data.
However, existing topic models tend to merely view labels as names of topic clusters or as categories of texts, thereby neglecting the potential causal relationships between supervised information and latent topics, as well as within these elements themselves.
In this paper, we focus on uncovering possible causal relationships both between and within the supervised information and latent topics to better understand the mechanisms behind the emergence of the topics and the labels.
To this end, we propose Causal Relationship-Aware Neural Topic Model (CRNTM), a novel neural topic model that can automatically uncover interpretable causal relationships between and within supervised information and latent topics, while concurrently discovering high-quality topics.
In CRNTM, both supervised information and latent topics are treated as nodes, with the causal relationships represented as directed edges in a Directed Acyclic Graph (DAG).
A Structural Causal Model (SCM) is employed to model the DAG.
Experiments are conducted on three public corpora with different types of labels.
Experimental results show that the discovered causal relationships are both reliable and interpretable, and
the learned topics are of high quality comparing with seven start-of-the-art topic model baselines. | [
"Causal relationships discovery",
"Neural topic model",
"Structural Causal Model"
] | https://openreview.net/pdf?id=2UZXqt9hpK | f47RfCNHfr | official_review | 1,700,065,634,718 | 2UZXqt9hpK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2503/Reviewer_WaQX"
] | review: This work proposes a new neural topic model that is capable of identifying causal relationships between/within supervised information and discovered topics. The three components (pre-encoding, joint encoding, and causal relationship learning) are employed to learn latent topics and corresponding casual relationships. Overall, the proposed idea is interesting and the proposed method is well justified. The clear presentation helps with readability. There is room for improvement in discussing motivations for the idea and methods. It is not sure if the codes will be provided for reproducibility.
questions: 1. It would be a general question, but personally, it is not clear to me what "influence" means in the identified causal relationships. For instance, there is an example saying "A topic Computer Science influences a topic Topic Modeling". Simply speaking, does this imply that Topic Modeling can be a topic because of Computer Science? Some discussion on the meaning of causal relationships between topics would be helpful.
2. The authors argue that identifying causal relationships is more useful in many scenarios than merely getting correlation and hierarchical relationships. Could the authors give more specific examples of how the topic modeling with causal relationships is practically useful?
3. Along the same line, the role and necessity of supervised information in this work is not clear. How does the supervised information affect the performance of the proposed model?
4. (Minor) Can the authors provide justification or motivation for using Dirichlet distributions instead of Gaussian distributions to model latent topics?
5. (Comment) Sections 3.2 and 3.3 were a bit overwhelming with a lot of complex concepts and theoretical background. I believe it would be nice to supplement the intuitive explanations. and save the specific formulas and such for the appendix.
6. (Comment) Authors are strongly encouraged to provide the source codes for reproducibility.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
2UZXqt9hpK | Beyond Labels and Topics: Discovering Causal Relationships in Neural Topic Modeling | [
"Yi-Kun Tang",
"Heyan Huang",
"Xuewen Shi",
"Xian-Ling Mao"
] | Topic models that can take advantage of labels are broadly used in identifying interpretable topics from textual data.
However, existing topic models tend to merely view labels as names of topic clusters or as categories of texts, thereby neglecting the potential causal relationships between supervised information and latent topics, as well as within these elements themselves.
In this paper, we focus on uncovering possible causal relationships both between and within the supervised information and latent topics to better understand the mechanisms behind the emergence of the topics and the labels.
To this end, we propose Causal Relationship-Aware Neural Topic Model (CRNTM), a novel neural topic model that can automatically uncover interpretable causal relationships between and within supervised information and latent topics, while concurrently discovering high-quality topics.
In CRNTM, both supervised information and latent topics are treated as nodes, with the causal relationships represented as directed edges in a Directed Acyclic Graph (DAG).
A Structural Causal Model (SCM) is employed to model the DAG.
Experiments are conducted on three public corpora with different types of labels.
Experimental results show that the discovered causal relationships are both reliable and interpretable, and
the learned topics are of high quality comparing with seven start-of-the-art topic model baselines. | [
"Causal relationships discovery",
"Neural topic model",
"Structural Causal Model"
] | https://openreview.net/pdf?id=2UZXqt9hpK | 1X3IsU75D4 | decision | 1,705,909,258,151 | 2UZXqt9hpK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: This paper proposes a new neural topic model that is capable of identifying causal relationships among discovered topics. Three components, including pre-encoding, joint encoding, and causal relationship learning, are designed to learn latent topics and corresponding casual relationships. The proposed method is based on sound rationale, and the evaluations are convincing. |
2UZXqt9hpK | Beyond Labels and Topics: Discovering Causal Relationships in Neural Topic Modeling | [
"Yi-Kun Tang",
"Heyan Huang",
"Xuewen Shi",
"Xian-Ling Mao"
] | Topic models that can take advantage of labels are broadly used in identifying interpretable topics from textual data.
However, existing topic models tend to merely view labels as names of topic clusters or as categories of texts, thereby neglecting the potential causal relationships between supervised information and latent topics, as well as within these elements themselves.
In this paper, we focus on uncovering possible causal relationships both between and within the supervised information and latent topics to better understand the mechanisms behind the emergence of the topics and the labels.
To this end, we propose Causal Relationship-Aware Neural Topic Model (CRNTM), a novel neural topic model that can automatically uncover interpretable causal relationships between and within supervised information and latent topics, while concurrently discovering high-quality topics.
In CRNTM, both supervised information and latent topics are treated as nodes, with the causal relationships represented as directed edges in a Directed Acyclic Graph (DAG).
A Structural Causal Model (SCM) is employed to model the DAG.
Experiments are conducted on three public corpora with different types of labels.
Experimental results show that the discovered causal relationships are both reliable and interpretable, and
the learned topics are of high quality comparing with seven start-of-the-art topic model baselines. | [
"Causal relationships discovery",
"Neural topic model",
"Structural Causal Model"
] | https://openreview.net/pdf?id=2UZXqt9hpK | 0DLldbpJR3 | official_review | 1,700,811,766,503 | 2UZXqt9hpK | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission2503/Reviewer_i12s"
] | review: The paper introduces CRNTM, a Causal Relationship-Aware Neural Topic Model, designed to uncover causal relationships within supervised information and latent topics while discovering high-quality topics in neural topic modeling. CRNTM employs a Directed Acyclic Graph (DAG) to represent these relationships, utilizing Structural Causal Models (SCM) to imbue representations with causality. Experimental results demonstrate the reliability and interpretability of the discovered causal relationships, along with the high quality of the learned topics. This novel approach enhances the overall interpretability of neural topic modeling by simultaneously uncovering meaningful causal links and generating coherent and valuable topics.
**Strengths**
- The paper conducts comprehensive experiments across multiple datasets and compares CRNTM with state-of-the-art models, showcasing its superiority in terms of topic quality and interpretability.
- CRNTM not only generates high-quality topics but also provides interpretable causal relationships between supervised information and topics, adding depth to the understanding of topic modeling.
**Areas of Improvement**
- While the paper presents a complex model, some technical components (e.g., SCM) could be explained more intuitively for readers less familiar with causal modeling.
questions: - I am curious to know what are the specific limitations or edge cases where CRNTM might struggle to establish accurate causal relationships between supervised information and latent topics?
- How sensitive is CRNTM to hyperparameters or variations in the experimental setup, and does it demonstrate robust performance across different types of corpora or domains?
ethics_review_flag: No
ethics_review_description: N/A
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 6
technical_quality: 5
reviewer_confidence: 2: The reviewer is willing to defend the evaluation, but it is likely that the reviewer did not understand parts of the paper |
2MjX9ZDFeC | Link Prediction on Multilayer Networks through Learning of Within-Layer and Across-Layer Node-Pair Structural Features and Node Embedding Similarity | [
"Lorenzo Zangari",
"Domenico Mandaglio",
"Andrea Tagarelli"
] | Link prediction has traditionally been studied in the context of simple graphs, although real-world networks are inherently complex as they are often comprised of multiple interconnected components, or layers. Predicting links in such network systems, or multilayer networks, require to consider both the internal structure of a target layer as well as the structure of the other layers in a network, in addition to layer-specific node-attributes when available. This problem poses several challenges, even for graph neural network based approaches despite their successful and wide application to a variety of graph learning problems. In this work, we aim to fill a lack of multilayer graph representation learning methods designed for link prediction. Our proposal is a novel neural-network-based learning framework for link prediction on (attributed) multilayer networks, whose key idea is to combine (i) pairwise similarities of multilayer node embeddings learned by a graph neural network model, and (ii) structural features learned from both within-layer and across-layer link information based on overlapping multilayer neighborhoods. Extensive experimental results have shown that our framework consistently outperforms both single-layer and multilayer methods for link prediction on popular real-world multilayer networks, with an average percentage increase in AUC up to 38\%. We make source code and evaluation data available to the research community at https://shorturl.at/cOUZ4. | [
"graph-based machine learning",
"link prediction",
"multilayer networks"
] | https://openreview.net/pdf?id=2MjX9ZDFeC | u233Pr9USd | decision | 1,705,909,217,097 | 2MjX9ZDFeC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Program_Chairs"
] | title: Paper Decision
decision: Accept
comment: The consensus is that this is a good paper with strong experimental results. There were some discussions on whether the experimental setting was somehow too easy. (But it is consistent with previous work, so there is a clear improvement in this paper.)
One reviewer felt that the data sets themselves are too easy, but that the random negative sampling approach is making the link prediction task too easy.
If possible, it would be good to have some comments on this in the camera-ready version. |
2MjX9ZDFeC | Link Prediction on Multilayer Networks through Learning of Within-Layer and Across-Layer Node-Pair Structural Features and Node Embedding Similarity | [
"Lorenzo Zangari",
"Domenico Mandaglio",
"Andrea Tagarelli"
] | Link prediction has traditionally been studied in the context of simple graphs, although real-world networks are inherently complex as they are often comprised of multiple interconnected components, or layers. Predicting links in such network systems, or multilayer networks, require to consider both the internal structure of a target layer as well as the structure of the other layers in a network, in addition to layer-specific node-attributes when available. This problem poses several challenges, even for graph neural network based approaches despite their successful and wide application to a variety of graph learning problems. In this work, we aim to fill a lack of multilayer graph representation learning methods designed for link prediction. Our proposal is a novel neural-network-based learning framework for link prediction on (attributed) multilayer networks, whose key idea is to combine (i) pairwise similarities of multilayer node embeddings learned by a graph neural network model, and (ii) structural features learned from both within-layer and across-layer link information based on overlapping multilayer neighborhoods. Extensive experimental results have shown that our framework consistently outperforms both single-layer and multilayer methods for link prediction on popular real-world multilayer networks, with an average percentage increase in AUC up to 38\%. We make source code and evaluation data available to the research community at https://shorturl.at/cOUZ4. | [
"graph-based machine learning",
"link prediction",
"multilayer networks"
] | https://openreview.net/pdf?id=2MjX9ZDFeC | oLjJnM8QkL | official_review | 1,699,542,185,187 | 2MjX9ZDFeC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1955/Reviewer_ggFv"
] | review: The authors propose a neural-network-based approach for link prediction in multilayer networks which they call ML-Link.
They combine GNN-based node embeddings, intra-layer node similarity, and oberlapping inter-layer node neighbourhoods using an attention mechanism.
Applied to a set of real networks, the authors find that their approach outperforms current baseline methods, achieving high AUC and AP values around 99% on average.
Upon examining the learned attention weights, the authors find that their proposed method assigns different attention weights to different layers.
A hyperparameter controls how much weight should be put on layer-internal vs. -external features, however, the authors show that their method is not very sensitive to the setting of this parameter, achieving similar results over a relatively large range for values of this hyperparameter.
The paper is well-written and follows a simple idea: to use observed link patterns across multiple layers to inform link prediction.
The overall approach is well-motivated and all mathematical symbols used in the text are defined.
However, at times it seems like the authors could express the same ideas in a slighly less formal way to help the readers follow their story.
I believe this paper will be useful for the research community and practitioners.
My main concern about the paper is the relatively small number of quite small networks that were used for evaluation, and I have several questions, detailed below.
questions: 1. ML-Link was evaluated mainly on quite small networks with a small number of layers, except for the arXiv network with close to 15,000 nodes. What is the reason for this?
2. Could ML-Link be applied for link prediction in single-layer networks? I suppose this would be the same as saying that one of the layers should not be paired with any other layer. How does ML-Link perform in this case?
3. Did I understand it right that ML-Link operates on unweighted and undirected networks? If so, how is ML-Link used to predict links for the directed datasets?
4. The results show that ML-Link performs very well on networks from different domains. The authors mention that some of the used baseline methods do not perform well on the Lazega network because of its structural properties. Are there any such limitations that apply to ML-Link? That is, what structural properties make it harder for ML-Link to perform well?
5. ML-Link uses node metadata, if available, to compute node embeddings with GNNs. Could metadata labels also be used to further inform internal and external structure learning?
ethics_review_flag: No
ethics_review_description: -
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2MjX9ZDFeC | Link Prediction on Multilayer Networks through Learning of Within-Layer and Across-Layer Node-Pair Structural Features and Node Embedding Similarity | [
"Lorenzo Zangari",
"Domenico Mandaglio",
"Andrea Tagarelli"
] | Link prediction has traditionally been studied in the context of simple graphs, although real-world networks are inherently complex as they are often comprised of multiple interconnected components, or layers. Predicting links in such network systems, or multilayer networks, require to consider both the internal structure of a target layer as well as the structure of the other layers in a network, in addition to layer-specific node-attributes when available. This problem poses several challenges, even for graph neural network based approaches despite their successful and wide application to a variety of graph learning problems. In this work, we aim to fill a lack of multilayer graph representation learning methods designed for link prediction. Our proposal is a novel neural-network-based learning framework for link prediction on (attributed) multilayer networks, whose key idea is to combine (i) pairwise similarities of multilayer node embeddings learned by a graph neural network model, and (ii) structural features learned from both within-layer and across-layer link information based on overlapping multilayer neighborhoods. Extensive experimental results have shown that our framework consistently outperforms both single-layer and multilayer methods for link prediction on popular real-world multilayer networks, with an average percentage increase in AUC up to 38\%. We make source code and evaluation data available to the research community at https://shorturl.at/cOUZ4. | [
"graph-based machine learning",
"link prediction",
"multilayer networks"
] | https://openreview.net/pdf?id=2MjX9ZDFeC | bckj8t3Vsn | official_review | 1,701,091,627,612 | 2MjX9ZDFeC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1955/Reviewer_o3yt"
] | review: Summary:
This paper discusses the problem of link prediction in multilayer networks and proposes a novel neural-network-based learning framework called ML-Link. The framework combines pairwise similarities of multilayer node embeddings learned by a graph neural network model with structural features learned from within-layer and across-layer link information based on overlapping multilayer neighborhoods.
Pros:
1. This paper is well-written and easy to follow.
2. The method proposed in the paper is novel. ML-Link is the first to propose augmenting multilayer GNNs with node-pair features learned from both within-layer and across-layer structural information. The authors consider comprehensively the internal structure of a target layer, the structure of the other layers in a network, and the layer-specific node-attributes.
3. Experimental verification is comprehensive, involving a diverse range of datasets, performance metrics, and evaluation methodologies.
Cons:
1. Baselines are not new enough and should be considered with some recent studies on multiplex networks.
2. The model diagram is a bit unclear and intuitive.
questions: 1. Since the connection existence of each module is calculated through the cosine similarity between representations, can some multilayer network representation learning methods be considered in baselines? Some references are as follows:DMGI[1], BPHGNN[2], DualHGNN[3], HDMI[4] and MHGCN[5].
[1] Park, Chanyoung, et al. "Unsupervised attributed multiplex network embedding." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.
[2] Fu, Chaofan, et al. "Multiplex Heterogeneous Graph Neural Network with Behavior Pattern Modeling." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
[3] Xue, Hansheng, et al. "Multiplex bipartite network embedding using dual hypergraph convolutional networks." Proceedings of the Web Conference 2021. 2021.
[4] Jing, Baoyu, Chanyoung Park, and Hanghang Tong. "Hdmi: High-order deep multiplex infomax." Proceedings of the Web Conference 2021. 2021.
[5] Yu, Pengyang, et al. "Multiplex heterogeneous graph convolutional network." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.
2. In Table 1, one might question the seemingly exceptional performance of ML-Link in terms of AUC and AP across numerous datasets, i.e., most results are more than 99%.
Please see the above cons.
ethics_review_flag: No
ethics_review_description: NA
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 3
technical_quality: 3
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
2MjX9ZDFeC | Link Prediction on Multilayer Networks through Learning of Within-Layer and Across-Layer Node-Pair Structural Features and Node Embedding Similarity | [
"Lorenzo Zangari",
"Domenico Mandaglio",
"Andrea Tagarelli"
] | Link prediction has traditionally been studied in the context of simple graphs, although real-world networks are inherently complex as they are often comprised of multiple interconnected components, or layers. Predicting links in such network systems, or multilayer networks, require to consider both the internal structure of a target layer as well as the structure of the other layers in a network, in addition to layer-specific node-attributes when available. This problem poses several challenges, even for graph neural network based approaches despite their successful and wide application to a variety of graph learning problems. In this work, we aim to fill a lack of multilayer graph representation learning methods designed for link prediction. Our proposal is a novel neural-network-based learning framework for link prediction on (attributed) multilayer networks, whose key idea is to combine (i) pairwise similarities of multilayer node embeddings learned by a graph neural network model, and (ii) structural features learned from both within-layer and across-layer link information based on overlapping multilayer neighborhoods. Extensive experimental results have shown that our framework consistently outperforms both single-layer and multilayer methods for link prediction on popular real-world multilayer networks, with an average percentage increase in AUC up to 38\%. We make source code and evaluation data available to the research community at https://shorturl.at/cOUZ4. | [
"graph-based machine learning",
"link prediction",
"multilayer networks"
] | https://openreview.net/pdf?id=2MjX9ZDFeC | RUIIhWCA8N | official_review | 1,701,464,617,087 | 2MjX9ZDFeC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1955/Reviewer_vieH"
] | review: The authors propose the ML-Link approach for link prediction on multi-layer networks with node attributes. There are two main components: a multi-layer graph neural network (GNN) and node pair features learned from within- and across-layer structural information. The latter is their primary new contribution. Experiment results shown extremely impressive gains in accuracy compared to other link prediction methods, including others designed for multi-layer networks.
*After author rebuttal:* The authors clarified their reasoning for choosing data sets, and I do agree that maintaining data sets used in prior work is an important thing to make fair comparisons. To potentially improve the paper, perhaps the authors could also try to create some more challenging setting (e.g., by taking negative samples differently).
## Strengths
- Extremely strong empirical performance across multiple data sets. Unlike most papers that report a small increase over competing methods, the improvement offered by ML-Link is often so large that it seems almost unbelievable in some cases.
- The nearest-neighbor-based node-pair neighborhood (NN-NPN) feature extraction component looks to be novel, particularly the external structure learning and context-level attention components.
- Very well written and comprehensive paper with a good balance of technical details and results.
## Weaknesses
- The extremely high accuracy values achieved suggest that the experimental evaluation approach may be too easy (see question 1 below).
- Heavy use of acronyms throughout the paper that makes it difficult to keep up at times. For example, in Table 2, instead of acronym soup, perhaps it would be better to write Internal and External instead of ISL and ESL.
questions: 1. In most of the data sets, your ML-Link method achieves 99+% in the area under the ROC curve (AUC) and average precision (AP) metrics. This is extremely high and far beyond what I would expect to be achievable for so many data sets. How should we interpret these results? Are the data sets perhaps too easy, or is it due to the negative sampling procedure resulting in negative samples that are too easy?
2. What are some limitations and areas for future improvement that you see? Please include these in your conclusion for the benefit of the reader.
3. Are the three loss functions $\mathcal{I}\_{npn}$, $\mathcal{I}\_{ne}$, $\mathcal{I}\_{p}$ directly summed or first weighted with tunable weights?
ethics_review_flag: No
ethics_review_description: No concerns
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2MjX9ZDFeC | Link Prediction on Multilayer Networks through Learning of Within-Layer and Across-Layer Node-Pair Structural Features and Node Embedding Similarity | [
"Lorenzo Zangari",
"Domenico Mandaglio",
"Andrea Tagarelli"
] | Link prediction has traditionally been studied in the context of simple graphs, although real-world networks are inherently complex as they are often comprised of multiple interconnected components, or layers. Predicting links in such network systems, or multilayer networks, require to consider both the internal structure of a target layer as well as the structure of the other layers in a network, in addition to layer-specific node-attributes when available. This problem poses several challenges, even for graph neural network based approaches despite their successful and wide application to a variety of graph learning problems. In this work, we aim to fill a lack of multilayer graph representation learning methods designed for link prediction. Our proposal is a novel neural-network-based learning framework for link prediction on (attributed) multilayer networks, whose key idea is to combine (i) pairwise similarities of multilayer node embeddings learned by a graph neural network model, and (ii) structural features learned from both within-layer and across-layer link information based on overlapping multilayer neighborhoods. Extensive experimental results have shown that our framework consistently outperforms both single-layer and multilayer methods for link prediction on popular real-world multilayer networks, with an average percentage increase in AUC up to 38\%. We make source code and evaluation data available to the research community at https://shorturl.at/cOUZ4. | [
"graph-based machine learning",
"link prediction",
"multilayer networks"
] | https://openreview.net/pdf?id=2MjX9ZDFeC | GFFMU0FFUn | official_review | 1,700,819,109,720 | 2MjX9ZDFeC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1955/Reviewer_J3hv"
] | review: The paper presents a neural network-based learning framework for link prediction on (attributed) multilayer networks. The authors claim that it is the first work to augment multilayer GNNs with node-pair features learned from both within-layer and across-layer
structural information. The experiments demonstrate the effectiveness of the proposal.
questions: The major concern is the baselines. Whether the baselines are strong competitors is not clearly introduced. In addition, the proposal can outperform the other baselines by a large margin in CKM, but the result is not well explained.
ethics_review_flag: No
ethics_review_description: no
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2MjX9ZDFeC | Link Prediction on Multilayer Networks through Learning of Within-Layer and Across-Layer Node-Pair Structural Features and Node Embedding Similarity | [
"Lorenzo Zangari",
"Domenico Mandaglio",
"Andrea Tagarelli"
] | Link prediction has traditionally been studied in the context of simple graphs, although real-world networks are inherently complex as they are often comprised of multiple interconnected components, or layers. Predicting links in such network systems, or multilayer networks, require to consider both the internal structure of a target layer as well as the structure of the other layers in a network, in addition to layer-specific node-attributes when available. This problem poses several challenges, even for graph neural network based approaches despite their successful and wide application to a variety of graph learning problems. In this work, we aim to fill a lack of multilayer graph representation learning methods designed for link prediction. Our proposal is a novel neural-network-based learning framework for link prediction on (attributed) multilayer networks, whose key idea is to combine (i) pairwise similarities of multilayer node embeddings learned by a graph neural network model, and (ii) structural features learned from both within-layer and across-layer link information based on overlapping multilayer neighborhoods. Extensive experimental results have shown that our framework consistently outperforms both single-layer and multilayer methods for link prediction on popular real-world multilayer networks, with an average percentage increase in AUC up to 38\%. We make source code and evaluation data available to the research community at https://shorturl.at/cOUZ4. | [
"graph-based machine learning",
"link prediction",
"multilayer networks"
] | https://openreview.net/pdf?id=2MjX9ZDFeC | 2Blawti5AC | official_review | 1,700,449,846,639 | 2MjX9ZDFeC | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission1955/Reviewer_UoJr"
] | review: The paper aims to fill a lack of multilayer graph representation learning methods designed for link prediction. The model considers the link prediction task very well. It not only considers the similarity between nodes, but also considers the influence of graph structure on link prediction tasks from the layer-level perspective. The performance of the model on multilayer graph link prediction and the effectiveness of cross-layer are demonstrated through extensive experimental results.
## Strength
1. The paper is clearly written and describes the problem definition in detail.
2. The paper is well-considered for link prediction and the experimental results are very good in performance.
3. The paper provides a complete implementation setting and evaluation baseline.
## Weakness
1. Novelty. The Across-Layer feature propagation has been proposed in GNN with many solutions, and the solution in this paper is trivial, I hope the authors can compare the related methods in more depth, and the challenges in it.
2. Redundant feature propagation. There are inevitably many redundant information propagations between ML-GAT and ISL, ESL in the model. When the propagation meets noise nodes, there will be a large extent through the cross-layer diffusion. Even if there are no early noise effects, it also has many unnecessary redundant propagations. Hope to be further optimized.
3. Complexity. The model not only needs to train a GNN model to deal with pairwise node similarity, but also needs to add an extra NN module for information propagation from layer level. Learning across layers requires more detailed experiments on the order parameters. we are unable to judge the superiority of the improved method compared to other end-to-end methods in terms of time and memory, especially when the graph size is relatively large.
questions: See Review.
ethics_review_flag: No
ethics_review_description: none
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 4
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2IwSOTWvXu | Convergence-Aware Online Model Selection with Time-Increasing Bandits | [
"Yu Xia",
"Fang Kong",
"Tong Yu",
"Liya Guo",
"Ryan A. Rossi",
"Sungchul Kim",
"Shuai Li"
] | Web-based applications such as chatbots, search engines and news recommendations continue to grow in scale and complexity with the recent surge in the adoption of large language models (LLMs). Online model selection has thus garnered increasing attention due to the need to choose the best model among a diverse set while balancing task reward and exploration cost. Organizations faces decisions like whether to employ a costly API-based LLM or a locally finetuned small LLM, weighing cost against performance. Traditional selection methods often evaluate every candidate model before choosing one, which are becoming impractical given the rising costs of training and finetuning LLMs. Moreover, it is undesirable to allocate excessive resources towards exploring poor-performing models. While some recent works leverage online bandit algorithm to manage such exploration-exploitation trade-off in model selection, they tend to overlook the increasing-then-converging trend in model performances as the model is iteratively finetuned, leading to less accurate predictions and suboptimal model selections.
In this paper, we propose a time-increasing bandit algorithm TI-UCB, which effectively predicts the increase of model performances due to training or finetuning and efficiently balances exploration and exploitation in model selection. To further capture the converging points of models, we develop a change detection mechanism by comparing consecutive increase predictions. We theoretically prove that our algorithm achieves a lower regret upper bound, improving from prior works' polynomial regret to logarithmic in a similar setting. The advantage of our method is also empirically validated through extensive experiments on classification model selection and online selection of LLMs. Our results highlight the importance of utilizing increasing-then-converging pattern for more efficient and economic model selection in the deployment of LLMs. | [
"Online Model Selection",
"Increasing Bandits"
] | https://openreview.net/pdf?id=2IwSOTWvXu | fa0fm3Tk11 | official_review | 1,701,424,895,453 | 2IwSOTWvXu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission600/Reviewer_87is"
] | review: Pros:
1. It is novel to predict the reward increases and detect converging points with a sliding-window change detection mechanism.
2. The authors theoretically prove the lower regret upper bound in their method.
Cons:
1. More backgrounds are needed for online model selection.
2. The experiments are conducted in a synthetic environment. Real online experiments are needed to make the results convincing.
3. It is not representative enough to verify the online model selection strategy on the text summarization task.
questions: Do the candidate models influence the model selection?
ethics_review_flag: No
ethics_review_description: NA
scope: 3: The work is somewhat relevant to the Web and to the track, and is of narrow interest to a sub-community
novelty: 4
technical_quality: 4
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
2IwSOTWvXu | Convergence-Aware Online Model Selection with Time-Increasing Bandits | [
"Yu Xia",
"Fang Kong",
"Tong Yu",
"Liya Guo",
"Ryan A. Rossi",
"Sungchul Kim",
"Shuai Li"
] | Web-based applications such as chatbots, search engines and news recommendations continue to grow in scale and complexity with the recent surge in the adoption of large language models (LLMs). Online model selection has thus garnered increasing attention due to the need to choose the best model among a diverse set while balancing task reward and exploration cost. Organizations faces decisions like whether to employ a costly API-based LLM or a locally finetuned small LLM, weighing cost against performance. Traditional selection methods often evaluate every candidate model before choosing one, which are becoming impractical given the rising costs of training and finetuning LLMs. Moreover, it is undesirable to allocate excessive resources towards exploring poor-performing models. While some recent works leverage online bandit algorithm to manage such exploration-exploitation trade-off in model selection, they tend to overlook the increasing-then-converging trend in model performances as the model is iteratively finetuned, leading to less accurate predictions and suboptimal model selections.
In this paper, we propose a time-increasing bandit algorithm TI-UCB, which effectively predicts the increase of model performances due to training or finetuning and efficiently balances exploration and exploitation in model selection. To further capture the converging points of models, we develop a change detection mechanism by comparing consecutive increase predictions. We theoretically prove that our algorithm achieves a lower regret upper bound, improving from prior works' polynomial regret to logarithmic in a similar setting. The advantage of our method is also empirically validated through extensive experiments on classification model selection and online selection of LLMs. Our results highlight the importance of utilizing increasing-then-converging pattern for more efficient and economic model selection in the deployment of LLMs. | [
"Online Model Selection",
"Increasing Bandits"
] | https://openreview.net/pdf?id=2IwSOTWvXu | dPnNusfTIL | official_review | 1,701,390,082,905 | 2IwSOTWvXu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission600/Reviewer_bFMa"
] | review: This paper focuses on the online model selection problem, where the models are fine-tuned continuously as when they receive feedback and show increasing-then-converging reward behavior. In this setting, the authors propose a time-increasing bandit algorithm, TI- UCB, which essentially predicts the increase of candidate model performances via finetuning and handles the usual exploration-exploitation scenarios in the bandit setting (for the model selection problem). The main novelty of the proposed algorithm comes from the change detection mechanism, where the authors compare consecutive increase predictions. The authors provide theoretical bounds on the cumulative regret which improves in comparison to the state-of-the-art baseline. Experiments on both simulated and real-world setting are provided which show the efficacy of the proposed algorithm. The application on selection of LLMs is also interesting.
Pros:
1. Relevant paper for the community
2. Contains appropriate theoretical and experimental contributions.
3. Solid presentation and does not require much effort for understanding.
Cons: I enjoyed reading the paper and do not have cons. Only a minor one stated below:
1. (Minor) Presentation can be improved by removing imprecise statements.
questions: Overall, I found the paper to be very useful for the community. It correctly highlights its importance via theoretical and experimental evaluation. The presentation of the paper is also solid. I have a few questions on which the authors can rebut on.
1. I do not know if the increasing-then-converging is a terminology used anywhere else. I believe the authors want to state the sublinear curve, right? Or, do the authors still want to use increasing-then-converging terminology?
2. In figure 2, is the reward shown the cumulative reward of the banding setting (eq 2) or the instantaneous reward of the models? I believe it’s the latter.
3. Line 358-359, the authors mention that v_i is the convergence point when the reward become “stable”. The word stable is imprecise. The authors should mention what do they mean by stable more formally.
4. Theorem 1 mentions that the regret becomes logarithmic in T; whereas, R-ed-UCB’s regret is polynomial in T. Does this really make the algorithm efficient since Theorem 1 also mentions that the algorithm’s regret is polynomial in (1/\Delta_min), and (1/\Delta_min) again depends on T due to non-stationarity. Can the authors comment on that?
5. How did the authors choose the m values for smaller models (in line 805)? Is it really 100 times cheaper to fine tune smaller models than the API based models?
6. In Figure 5(b), I see a sharp convergence of regret for TI-UCB. Can the authors comment on the time-step difference between the convergence of models’ rewards vs convergence of regret by T1-UCB? Or, are they not correlated?
Minor:
1. In eq1, it should be x_{A_s, s} instead of the random variable X itself.
2. Line 390, “and thus the cumulative regret” -> “and thus to minimize the cumulative regret”
ethics_review_flag: No
ethics_review_description: N/A
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 6
technical_quality: 6
reviewer_confidence: 4: The reviewer is certain that the evaluation is correct and very familiar with the relevant literature |
2IwSOTWvXu | Convergence-Aware Online Model Selection with Time-Increasing Bandits | [
"Yu Xia",
"Fang Kong",
"Tong Yu",
"Liya Guo",
"Ryan A. Rossi",
"Sungchul Kim",
"Shuai Li"
] | Web-based applications such as chatbots, search engines and news recommendations continue to grow in scale and complexity with the recent surge in the adoption of large language models (LLMs). Online model selection has thus garnered increasing attention due to the need to choose the best model among a diverse set while balancing task reward and exploration cost. Organizations faces decisions like whether to employ a costly API-based LLM or a locally finetuned small LLM, weighing cost against performance. Traditional selection methods often evaluate every candidate model before choosing one, which are becoming impractical given the rising costs of training and finetuning LLMs. Moreover, it is undesirable to allocate excessive resources towards exploring poor-performing models. While some recent works leverage online bandit algorithm to manage such exploration-exploitation trade-off in model selection, they tend to overlook the increasing-then-converging trend in model performances as the model is iteratively finetuned, leading to less accurate predictions and suboptimal model selections.
In this paper, we propose a time-increasing bandit algorithm TI-UCB, which effectively predicts the increase of model performances due to training or finetuning and efficiently balances exploration and exploitation in model selection. To further capture the converging points of models, we develop a change detection mechanism by comparing consecutive increase predictions. We theoretically prove that our algorithm achieves a lower regret upper bound, improving from prior works' polynomial regret to logarithmic in a similar setting. The advantage of our method is also empirically validated through extensive experiments on classification model selection and online selection of LLMs. Our results highlight the importance of utilizing increasing-then-converging pattern for more efficient and economic model selection in the deployment of LLMs. | [
"Online Model Selection",
"Increasing Bandits"
] | https://openreview.net/pdf?id=2IwSOTWvXu | Za8gtpBnOT | official_review | 1,700,474,840,840 | 2IwSOTWvXu | [
"everyone"
] | [
"ACM.org/TheWebConf/2024/Conference/Submission600/Reviewer_K1rg"
] | review: The authors of the paper “Convergence-Aware Online Model Selection with Time-Increasing Bandits” present a novel framework for online model selection based on the multi-armed bandits formulation. The framework takes into account the property of loss convergence (or performance metric convergence) of learned models as part of the reward formulation. The authors support their proposed algorithm with both extensive experimentation and formal proofs.
In my opinion the work is very interesting and important given the present advancements in the LLMs field. The ability to effectively select a model from a pool of models in an online manner is highly desirable. The writing of the paper is clear and the overall presentation of results looks sound and the claims/RQs are well supported.
All in all, I think the experimental setup is well explained and easy to follow which increases the chances of reproducibility.
questions: Are you going to release the code upon acceptance? I think the experiments are well explained, however, the settings might be a bit complex in some cases.
ethics_review_flag: No
ethics_review_description: .
scope: 4: The work is relevant to the Web and to the track, and is of broad interest to the community
novelty: 5
technical_quality: 5
reviewer_confidence: 3: The reviewer is confident but not certain that the evaluation is correct |
Subsets and Splits