forum_id: string (lengths 9-20) | forum_title: string (lengths 3-179) | forum_authors: sequence (lengths 0-82) | forum_abstract: string (lengths 1-3.52k) | forum_keywords: sequence (lengths 1-29) | forum_decision: string (22 classes) | forum_pdf_url: string (lengths 39-50) | forum_url: string (lengths 41-52) | venue: string (46 classes) | year: date (2013-01-01 to 2025-01-01) | reviews: sequence |
---|---|---|---|---|---|---|---|---|---|---|
bdZyAoTTC2 | Distributional Information Embedding: A Framework for Multi-bit Watermarking | [
"Haiyun He",
"Yepeng Liu",
"Ziqiao Wang",
"Yongyi Mao",
"Yuheng Bu"
] | This paper introduces a novel problem, distributional information embedding, motivated by the practical demands of multi-bit watermarking for large language models (LLMs). Unlike traditional information embedding, which embeds information into a pre-existing host signal, LLM watermarking actively controls the text generation process—adjusting the token distribution—to embed a detectable signal. We develop an information-theoretic framework to analyze this distributional information embedding problem, characterizing the fundamental trade-offs among three critical performance metrics: text quality, detectability, and information rate. In the asymptotic regime, we demonstrate that the maximum achievable rate with vanishing error corresponds to the entropy of the LLM's output distribution and increases with higher allowable distortion. We also characterize the optimal watermarking scheme to achieve this rate. Extending the analysis to the finite-token case, we identify schemes that maximize detection probability while adhering to constraints on false alarm and distortion. | [
"Multi-bit Watermarking",
"Large Language Models",
"Theory",
"Hypothesis Testing"
] | Accept | https://openreview.net/pdf?id=bdZyAoTTC2 | https://openreview.net/forum?id=bdZyAoTTC2 | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"h8ldT73mon"
],
"note_type": [
"decision"
],
"note_created": [
1741250135511
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
ZiLEUaf8ZC | Digital Art Creation and Copyright Protection in Pollock Style Using GANs, Fractal Analysis, and NFT Generation | [
"WangXu",
"YiquanWang",
"Jiazhuo Pan"
] | The rapid evolution of artificial intelligence has revolutionized digital art creation, enabling the development of novel methodologies that integrate artistic synthesis with robust intellectual property protection. In this study, we propose an integrated framework that combines Generative Adversarial Networks (GANs), fractal analysis, and wavelet-based turbulence modeling to generate abstract artworks inspired by Jackson Pollock's drip paintings. Beyond emulating Pollock’s dynamic style via neural style transfer, our approach quantitatively characterizes the artworks' intrinsic complexity using fractal dimension and turbulence power spectrum metrics. Importantly, we introduce a comprehensive watermark robustness testing protocol that embeds imperceptible digital watermarks into the generated images and rigorously assesses their resilience against common perturbations—including Gaussian noise, JPEG compression, and spatial distortions. By merging these watermarks with NFT metadata, our framework ensures secure provenance and immutability of digital assets. Experimental results demonstrate the feasibility and efficacy of this multifaceted approach in advancing both artistic innovation and reliable digital copyright protection. | [
"Digital Art",
"Neural Style Transfer",
"Fractal Analysis",
"Digital Watermarking",
"NFT Authentication"
] | Accept | https://openreview.net/pdf?id=ZiLEUaf8ZC | https://openreview.net/forum?id=ZiLEUaf8ZC | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"dFaLdm87Py"
],
"note_type": [
"decision"
],
"note_created": [
1741250134405
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
YzaML0S9Di | WINTER SOLDIER: HYPNOTIZING LANGUAGE MODELS AT PRE-TRAINING WITH INDIRECT DATA POISONING | [
"Wassim Bouaziz",
"Mathurin VIDEAU",
"Nicolas Usunier",
"El-Mahdi El-Mhamdi"
] | The pre-training of large language models (LLMs) relies on massive text datasets sourced from diverse and difficult-to-curate origins.
While membership inference attacks and hidden canaries have been explored to trace data usage, such methods rely on memorization of the training data, which LM providers try to limit.
We suggest instead performing an indirect data poisoning (where the targeted behavior is hidden) to protect a dataset before sharing it.
Using gradient-based prompt-tuning optimization, we make a model learn arbitrary *secret sequences*: secret responses to secret prompts that are **absent from the training corpus**.
We demonstrate our approach on language models pre-trained from scratch and show that less than $0.005\%$ of poisoned tokens are sufficient to covertly make an LM learn a secret, and to detect it with a theoretically certifiable $p$-value as low as $10^{-55}$.
All without performance degradation (as measured on LM benchmarks) and despite secrets **never appearing in the training set**. | [
"data poisoning",
"dataset watermarking",
"dataset ownership verification"
] | Accept | https://openreview.net/pdf?id=YzaML0S9Di | https://openreview.net/forum?id=YzaML0S9Di | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"DmyAwXMRhK"
],
"note_type": [
"decision"
],
"note_created": [
1741250135745
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
VdXBLMTpD7 | Machine never said that: Defending spoofing attacks by diverse fragile watermark | [
"Yuhang Cai",
"Yaofei Wang",
"Donghui Hu",
"Chen Gu"
] | Misuse of large language models (LLMs) has intensified the need for robust generated-text detection through watermarking. Existing watermark methods prioritize robustness but remain vulnerable to spoofing attacks, where modified text retains detectable watermarks, falsely attributing malicious content to the LLM. We propose the Multiple-Sampling Fragile Watermark (MSFW), the first framework to integrate local fragile watermarks to defend against such attacks. By embedding context-dependent watermarks through a multiple-sampling strategy, MSFW enables two critical detection capabilities: (1) Modification detection via localized watermark fragility, where any modification disrupts the adjacent watermark and is reflected through localized watermark extraction; (2) Generated-text detection using unaffected global watermarks. Meanwhile, our watermarking method is unbiased and improves the diversity of the output through the multiple-sampling strategy. This work bridges the gap between robustness and fragility in LLM watermarking, offering a practical defense against spoofing attacks without compromising utility. | [
"LLM",
"LLM Watermark",
"Modification Detection",
"Fragile Watermark"
] | Accept | https://openreview.net/pdf?id=VdXBLMTpD7 | https://openreview.net/forum?id=VdXBLMTpD7 | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"ajVWQ3W0AL"
],
"note_type": [
"decision"
],
"note_created": [
1741250134796
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
VQPJ0hfoPs | SpARK: An Embarrassingly Simple Sparse Watermarking in LLMs with Enhanced Text Quality | [
"Duy Cao Hoang",
"Thanh Quoc Hung Le",
"Rui Chu",
"Ping Li",
"Weijie Zhao",
"Yingjie Lao",
"Khoa D Doan"
] | With the widespread adoption of Large Language Models (LLMs), concerns about potential misuse have emerged. To this end, watermarking has been adapted to LLMs, enabling a simple and effective way to detect and monitor generated text. However, while the existing methods can differentiate between watermarked and unwatermarked text with high accuracy, they often face a trade-off between the quality of the generated text and the effectiveness of the watermarking process. In this work, we present a novel type of LLM watermark, *Sparse Watermark*, which aims to mitigate this trade-off by applying watermarks to a small subset of generated tokens distributed across the text. To demonstrate this type of watermark, we introduce **SpARK**, a **Sp**arse Waterm**ARK** method that achieves sparsity by anchoring watermarked tokens to words that have specific Part-of-Speech (POS) tags. Our experimental results demonstrate that the proposed watermarking scheme, albeit *embarrassingly simple*, is *incredibly effective*, achieving high detectability while generating text that outperforms previous LLM watermarking methods in quality across various tasks. | [
"watermarking",
"large language models"
] | Accept | https://openreview.net/pdf?id=VQPJ0hfoPs | https://openreview.net/forum?id=VQPJ0hfoPs | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"DBNllhoZYl"
],
"note_type": [
"decision"
],
"note_created": [
1741250134357
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
SIBkIV48gF | Watermarking Degrades Alignment in Language Models: Analysis and Mitigation | [
"Apurv Verma",
"Hai Phan",
"Shubhendu Trivedi"
] | Watermarking techniques for large language models (LLMs) can significantly impact output quality, yet their effects on truthfulness, safety, and helpfulness remain critically underexamined. This paper presents a systematic analysis of how two popular watermarking approaches (Gumbel and KGW) affect these core alignment properties across four aligned LLMs. Our experiments reveal two distinct degradation patterns: guard attenuation, where enhanced helpfulness undermines model safety, and guard amplification, where excessive caution reduces model helpfulness. These patterns emerge from watermark-induced shifts in token distribution, surfacing the fundamental tension that exists between alignment objectives.
To mitigate these degradations, we propose Alignment Resampling (AR), an inference-time sampling method that uses an external reward model to restore alignment. We establish a theoretical lower bound on the improvement in expected reward score as the sample size is increased and empirically demonstrate that sampling just 2-4 watermarked generations effectively recovers or surpasses baseline (unwatermarked) alignment scores. To overcome the limited response diversity of standard Gumbel watermarking, our modified implementation sacrifices strict distortion-freeness while maintaining robust detectability, ensuring compatibility with AR. Experimental results confirm that AR successfully recovers baseline alignment in both watermarking approaches, while maintaining strong watermark detectability. This work reveals the critical balance between watermark strength and model alignment, providing a simple inference-time solution to responsibly deploy watermarked LLMs in practice. | [
"watermarking",
"alignment",
"rejection-sampling"
] | Accept | https://openreview.net/pdf?id=SIBkIV48gF | https://openreview.net/forum?id=SIBkIV48gF | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"CrfSVgEnLl"
],
"note_type": [
"decision"
],
"note_created": [
1741250135015
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
RrEuRQn5mB | Detection Limits and Statistical Separability of Tree Ring Watermarks in Rectified Flow-based Text-to-Image Generation Models | [
"Ved Umrajkar",
"Aakash Kumar Singh"
] | Tree-Ring Watermarking is a significant technique for authenticating AI-generated images. However, its effectiveness in rectified flow-based models remains unexplored, particularly given the inherent challenges of these models with noise latent inversion. Through extensive experimentation, we evaluated and compared the detection and separability of watermarks between SD 2.1 and FLUX.1-dev models.
By analyzing various text guidance configurations and augmentation attacks, we demonstrate how inversion limitations affect both watermark recovery and the statistical separation between watermarked and unwatermarked images. Our findings provide valuable insights into the current limitations of Tree-Ring Watermarking in SOTA models and highlight the critical need for improved inversion methods to achieve reliable watermark detection and separability. | [
"Watermarking",
"Diffusion",
"Flow-Matching",
"Rectified Flow",
"Text-to-Image Generation"
] | Accept | https://openreview.net/pdf?id=RrEuRQn5mB | https://openreview.net/forum?id=RrEuRQn5mB | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"I4pgvDyZnE"
],
"note_type": [
"decision"
],
"note_created": [
1741250134541
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
PruDvpRJ0a | Productionizing Audio Watermarking for Short-Form Video | [
"Elias Lumpert"
] | In this work, the application of audio watermarking on large-scale short-form video platforms is explored, addressing challenges with a particular focus on minimizing watermark audibility while maximizing detectability. Experimental results are presented, discussing approaches to improve imperceptibility, such as using a mixing gain for the watermark signal and applying it only on speech segments. The experiments also examine the impact of multiple audio encodings and music mixing on watermark detectability, proposing solutions to enhance robustness. | [
"audio watermarking",
"large scale video platforms",
"audio watermark attacks",
"audio watermark robustness"
] | Accept | https://openreview.net/pdf?id=PruDvpRJ0a | https://openreview.net/forum?id=PruDvpRJ0a | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"9k7lGF4Ubj"
],
"note_type": [
"decision"
],
"note_created": [
1741250135278
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
OKjZvLJxOY | Towards Watermarking of Open-Source LLMs | [
"Thibaud Gloaguen",
"Nikola Jovanović",
"Robin Staab",
"Martin Vechev"
] | While watermarks for closed LLMs have matured and have been included in large-scale deployments, these methods are not applicable to open-source models, which allow users full control over the decoding process. This setting is understudied yet critical, given the rising performance of open-source models. In this work, we lay the foundation for systematic study of open-source LLM watermarking. For the first time, we explicitly formulate key requirements, including durability against common model modifications such as model merging, quantization, or finetuning, and propose a concrete evaluation setup. Given the prevalence of these modifications, durability is crucial for an open-source watermark to be effective. We survey and evaluate existing methods, showing that they are not durable. We also discuss potential ways to improve their durability and highlight remaining challenges. We hope our work enables future progress on this important problem. | [
"llm",
"open source"
] | Accept | https://openreview.net/pdf?id=OKjZvLJxOY | https://openreview.net/forum?id=OKjZvLJxOY | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"NEmNL970Fa"
],
"note_type": [
"decision"
],
"note_created": [
1741250135702
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
NtRFzUxuYh | MultiNeRF: Multiple Watermark Embedding for Neural Radiance Fields | [
"Yash Kulthe",
"Andrew Gilbert",
"John Collomosse"
] | We present MultiNeRF, a novel 3D watermarking method that enables embedding multiple uniquely keyed watermarks within images rendered by a single Neural Radiance Field (NeRF) model while maintaining high visual quality. Our approach extends the TensoRF NeRF model by incorporating a dedicated watermark grid alongside the existing geometry and appearance grids. This ensures higher watermark capacity without entangling watermark signals with scene content. We propose a FiLM-based conditional modulation mechanism that dynamically activates watermarks based on input identifiers, allowing multiple independent watermarks to be embedded and extracted without requiring model retraining. We validate MultiNeRF on the NeRF-Synthetic and LLFF datasets, demonstrating statistically significant improvements in robust capacity without compromising on rendering quality. By generalizing single-watermark NeRF methods into a flexible multi-watermarking framework, MultiNeRF provides a scalable solution for securing ownership and attribution in 3D content. | [
"NeRF",
"Watermarking",
"3D",
"Copyright",
"Provenance"
] | Accept | https://openreview.net/pdf?id=NtRFzUxuYh | https://openreview.net/forum?id=NtRFzUxuYh | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"tUJI5nsRxT"
],
"note_type": [
"decision"
],
"note_created": [
1741250136022
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
MS7QPrBngC | Visual Fidelity vs. Robustness: Trade-Off Analysis of Image Adversarial Watermark Mitigated by SSIM Loss | [
"Jiwoo Choi",
"Jinwoo Kim",
"Sejong Yang",
"Seon Joo Kim"
] | Adversarial watermark is an important technique for protecting digital images from unauthorized use and illegal AI training. However, conventional methods often introduce visually unpleasant artifacts, making the watermark easily perceptible. This results in an inherent trade-off between robustness and visual fidelity, where stronger protection comes at the cost of degraded image quality. In this work, we address this challenge by integrating SSIM loss into the perturbation embedding process using the Fully-trained Surrogate Model Guidance (FSMG) from baseline. By employing tunable SSIM weights, our approach balances the adversarial loss—designed to hinder unauthorized model training—with a perceptual loss that preserves image fidelity. Experimental results on CelebA-HQ and VGGFace2 show that our method effectively enhances image quality while preserving robustness, as validated by quantitative metrics and user evaluations confirming its practical viability for content protection. | [
"Adversarial watermark",
"AI watermark",
"Perturbation imperceptibility"
] | Accept | https://openreview.net/pdf?id=MS7QPrBngC | https://openreview.net/forum?id=MS7QPrBngC | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"ZX8A0jKKJA"
],
"note_type": [
"decision"
],
"note_created": [
1741250134243
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
Lzi8raVEQu | Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach | [
"Haiyun He",
"Yepeng Liu",
"Ziqiao Wang",
"Yongyi Mao",
"Yuheng Bu"
] | Watermarking has emerged as a crucial method to distinguish AI-generated text from human-created text. In this paper, we present a novel theoretical framework for watermarking Large Language Models (LLMs) that jointly optimizes both the watermarking scheme and the detection process. Our approach focuses on maximizing detection performance while maintaining control over the worst-case Type-I error and text distortion. We characterize \emph{the universally minimum Type-II error}, showing a fundamental trade-off between watermark detectability and text distortion. Importantly, we identify that the optimal watermarking schemes are adaptive to the LLM generative distribution. Building on our theoretical insights, we propose an efficient, model-agnostic, distribution-adaptive watermarking algorithm, utilizing a surrogate model alongside the Gumbel-max trick. Experiments conducted on Llama2-13B and Mistral-8$\times$7B models confirm the effectiveness of our approach. Additionally, we examine incorporating robustness into our framework, paving the way for future watermarking systems that withstand adversarial attacks more effectively. | [
"LLM",
"Watermark",
"Theory",
"Distribution-Adaptive",
"Hypothesis Testing",
"Trustworthy"
] | Accept | https://openreview.net/pdf?id=Lzi8raVEQu | https://openreview.net/forum?id=Lzi8raVEQu | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"Mm9tPM8Hsl"
],
"note_type": [
"decision"
],
"note_created": [
1741250135960
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
Lnij8CaFFO | Optimized Couplings For Watermarking Large Language Models | [
"Carol Xuan Long",
"Dor Tsur",
"Claudio Mayrink Verdun",
"Hsiang Hsu",
"Haim H. Permuter",
"Flavio Calmon"
] | Large-language models (LLMs) are now able to produce text that is indistinguishable from human-generated content. This has fueled the development of watermarks that imprint a ``signal'' in LLM-generated text with minimal perturbation of an LLM's output. This paper provides an analysis of text watermarking in a one-shot setting. Through the lens of hypothesis testing with side information, we formulate and analyze the fundamental trade-off between watermark detection power and distortion in generated textual quality. We argue that a key component in watermark design is generating a coupling between the side information shared with the watermark detector and a random partition of the LLM vocabulary. Our analysis identifies the optimal coupling and randomization strategy under the worst-case LLM next-token distribution that satisfies a min-entropy constraint. We provide a closed-form expression of the resulting detection rate under the proposed scheme and quantify the cost in a max-min sense. Finally, we numerically compare the proposed scheme with the theoretical optimum. | [
"large language model watermarking",
"information theory"
] | Accept | https://openreview.net/pdf?id=Lnij8CaFFO | https://openreview.net/forum?id=Lnij8CaFFO | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"3W5wXUS5bM"
],
"note_type": [
"decision"
],
"note_created": [
1741250135128
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
Lcn0WNCVA9 | Can LLM Watermarking Robustly Prevent Unauthorized Knowledge Distillation? | [
"Leyi Pan",
"Aiwei Liu",
"Shiyu Huang",
"Yijian LU",
"Xuming Hu",
"Lijie Wen",
"Irwin King",
"Philip S. Yu"
] | The radioactive nature of Large Language Model (LLM) watermarking enables the detection of watermarks inherited by student models when trained on the outputs of watermarked teacher models, making it a promising tool for preventing unauthorized knowledge distillation. However, the robustness of watermark radioactivity against adversarial actors remains largely unexplored. In this paper, we investigate whether student models can acquire the capabilities of teacher models through knowledge distillation while avoiding watermark inheritance. We propose two categories of watermark removal approaches: pre-distillation removal through untargeted and targeted training data paraphrasing (UP and TP), and post-distillation removal through inference-time watermark neutralization (WN). Extensive experiments across multiple model pairs, watermarking schemes and hyper-parameter settings demonstrate that both TP and WN thoroughly eliminate inherited watermarks, with WN achieving this while maintaining knowledge transfer efficiency and low computational overhead. Given the ongoing deployment of watermarking techniques in production LLMs, these findings emphasize the urgent need for more robust defense strategies. | [
"watermark",
"knowledge distillation",
"robustness"
] | Accept | https://openreview.net/pdf?id=Lcn0WNCVA9 | https://openreview.net/forum?id=Lcn0WNCVA9 | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"0Gh95TIs4h"
],
"note_type": [
"decision"
],
"note_created": [
1741250133936
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |
JyrjeQJ8VK | Provable Watermark Extraction | [
"Tomer Solberg"
] | Introducing zkDL++, a novel framework designed for provable AI. Leveraging zkDL++, we
address a key challenge in generative AI watermarking—maintaining privacy while
ensuring provability. By enhancing the watermarking system developed by Meta, zkDL++
solves the problem of needing to keep watermark extractors private to avoid attacks,
offering a more secure solution. Beyond watermarking, zkDL++ proves the integrity of any
deep neural network (DNN) with high efficiency. In this post, we outline our approach,
evaluate its performance, and propose avenues for further optimization. | [
"Watermark",
"CNN",
"Zero Knowledge",
"Privacy"
] | Accept | https://openreview.net/pdf?id=JyrjeQJ8VK | https://openreview.net/forum?id=JyrjeQJ8VK | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"XC3tkwGMIm"
],
"note_type": [
"decision"
],
"note_created": [
1741250135578
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
JdNfGpVH6r | Watermarking and Metadata for GenAI Transparency at Scale - Lessons Learned and Challenges Ahead | [
"Elizabeth Hilbert",
"Gretchen Greene",
"Michael Godwin",
"Sarah Shirazyan"
] | The proliferation of generative-AI (“GenAI”) technology promises to revolutionize content creation across online platforms. This advancement has sparked significant public debate concerning transparency around AI-generated content. As the difference between human-generated and synthetic content is blurred, people increasingly want to know where the boundary lies. Invisible and visible watermarks, content labels, and IPTC and C2PA metadata are some of the technical approaches in use by Meta and by the industry at large today to enable transparency of AI-created or AI-edited content online. This paper examines Meta’s approach to marking AI content and providing user transparency, highlighting lessons learned–and the challenges ahead–in striving for effective AI transparency, including suggestions for research areas most likely to advance industry solutions for indirect disclosure and user transparency for GenAI content. Key challenges have included the lack of robustness of metadata, imperfect robustness of watermarks, difficulty in defining "materiality" for AI edits, how to provide users appropriate transparency, and evolving understanding and expectations over time. We provide details of Meta’s experience launching labels for first- and third-party content–both fully AI generated and AI edited–at a global scale using GenAI signals from IPTC, C2PA, and known invisible watermarks and the challenge of meeting user expectations related to materiality of edits and choice of language, resulting in changes to our approach. This paper focuses specifically on transparency related to user-generated content that is non-commercial in nature. | [
"watermarking",
"metadata",
"generative AI",
"robustness",
"research",
"regulation",
"regulators",
"transparency",
"provenance",
"labeling",
"disclosure",
"materiality"
] | Accept | https://openreview.net/pdf?id=JdNfGpVH6r | https://openreview.net/forum?id=JdNfGpVH6r | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"TEUSWJwcuK"
],
"note_type": [
"decision"
],
"note_created": [
1741250134581
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
JGTRj6h0Cv | Mark Your LLM: Detecting the Misuse of Open-Source Large Language Models via Watermarking | [
"Yijie Xu",
"Aiwei Liu",
"Xuming Hu",
"Lijie Wen",
"Hui Xiong"
] | As open-source large language models (LLMs) like Llama3 become more capable, it is crucial to develop watermarking techniques to detect their potential misuse. Existing watermarking methods either add watermarks during LLM inference, which is unsuitable for open-source LLMs, or primarily target classification LLMs rather than recent generative LLMs. Adapting these watermarks to open-source LLMs for misuse detection remains an open challenge. This work defines two misuse scenarios for open-source LLMs: intellectual property (IP) violation and LLM Usage Violation. Then we explore the application of inference-time watermark distillation and backdoor watermarking in these contexts. We propose comprehensive evaluation methods to assess the impact of various real-world further fine-tuning scenarios on watermarks and the effect of these watermarks on LLM performance. Our experiments reveal that backdoor watermarking could effectively detect IP Violation, while inference-time watermark distillation is applicable in both scenarios but less robust to further fine-tuning and has a more significant impact on LLM performance compared to backdoor watermarking. Exploring more advanced watermarking methods for open-source LLMs to detect their misuse should be an important future direction. | [
"ethical considerations in NLP applications",
"llm watermark"
] | Accept | https://openreview.net/pdf?id=JGTRj6h0Cv | https://openreview.net/forum?id=JGTRj6h0Cv | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"1YWjSibR81"
],
"note_type": [
"decision"
],
"note_created": [
1741250136317
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
ImrmzMDq5z | Scalable Fingerprinting of Large Language Models | [
"Anshul Nasery",
"Jonathan Hayase",
"Creston Brooks",
"Peiyao Sheng",
"Himanshu Tyagi",
"Pramod Viswanath",
"Sewoong Oh"
] | Model fingerprinting has emerged as a powerful tool for model owners to identify their shared model given API access. However, to lower false discovery rate, fight fingerprint leakage, and defend against coalitions of model users attempting to bypass detection, we argue that scaling up the number of fingerprints one can embed into a model is critical. Hence, we pose Scalability as a crucial requirement for good fingerprinting schemes. We experiment with fingerprint design at larger scales than previously considered, and propose a new method, dubbed Perinucleus sampling, to generate scalable, persistent, and harmless fingerprints. We demonstrate that this scheme can add 24,576 fingerprints to a Llama-3.1-8B model --- two orders of magnitude more than existing schemes --- without degrading the model's utility. Our inserted fingerprints persist even after supervised fine-tuning on other data. We further describe security risks for fingerprinting, and theoretically and empirically show how a scalable fingerprinting scheme like ours can help mitigate these risks. | [
"Fingerprinting"
] | Accept | https://openreview.net/pdf?id=ImrmzMDq5z | https://openreview.net/forum?id=ImrmzMDq5z | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"ZudTBZIIEZ"
],
"note_type": [
"decision"
],
"note_created": [
1741250136098
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
FcP5qPweaL | WaterFlow: Learning Fast & Robust Watermarks using Stable Diffusion | [
"Vinay Shukla",
"Prachee Sharma",
"Ryan A. Rossi",
"Sungchul Kim",
"Tong Yu",
"Aditya Grover"
] | The ability to embed watermarks in images is a fundamental problem of interest for computer vision, and is exacerbated by the rapid rise of generated imagery in recent times. Current state-of-the-art techniques suffer from computational and statistical challenges such as the slow execution speed for practical deployments. In addition, other works trade off fast watermarking speeds but suffer greatly in their robustness or perceptual quality. In this work, we propose WaterFlow (WF), a fast and extremely robust approach for high fidelity visual watermarking based on a learned latent-dependent watermark. Our approach utilizes a pretrained latent diffusion model to encode an arbitrary image into a latent space and produces a learned watermark that is then planted into the Fourier Domain of the latent. The transformation is specified via invertible flow layers that enhance the expressivity of the latent space of the pre-trained model to better preserve image quality while permitting robust and tractable detection. Most notably, WaterFlow demonstrates *state-of-the-art performance on general robustness* and is the *first method capable of effectively defending against difficult combination attacks*. We validate our findings on three widely used real and generated datasets: MS-COCO, DiffusionDB, and WikiArt. | [
"image watermarking",
"latent diffusion models",
"computer vision"
] | Accept | https://openreview.net/pdf?id=FcP5qPweaL | https://openreview.net/forum?id=FcP5qPweaL | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"w6TzHbEy9h"
],
"note_type": [
"decision"
],
"note_created": [
1741250134190
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
EGwOI0deaU | Detecting Benchmark Contamination Through Watermarking | [
"Tom Sander",
"Pierre Fernandez",
"Saeed Mahloujifar",
"Alain Oliviero Durmus",
"Chuan Guo"
] | Benchmark contamination poses a significant challenge to the reliability of Large Language Model (LLM) evaluations, as it is difficult to assert whether a model has been trained on a test set. We introduce a solution to this problem by watermarking benchmarks before their release. The embedding involves reformulating the original questions with a watermarked LLM, in a way that does not alter the benchmark quality and utility. During evaluation, we can detect ``radioactivity'', i.e. traces that the text watermarks leave in the model during training, using a theoretically grounded statistical test. We test our method by pre-training 1B models from scratch on 10B tokens with controlled benchmark contamination, and validate its effectiveness in detecting contamination on ARC-Easy, ARC-Challenge, and MMLU. Results show similar benchmark utility post-rephrasing and successful contamination detection when models are contaminated enough to enhance performance, e.g. p-val = $10^{-3}$ for +5% on ARC-Easy. | [
"LLM",
"Watermarking",
"Benchmark",
"Contamination"
] | Accept | https://openreview.net/pdf?id=EGwOI0deaU | https://openreview.net/forum?id=EGwOI0deaU | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"8MKZwS9Wfo"
],
"note_type": [
"decision"
],
"note_created": [
1741250135600
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
CQjafCMd5c | Discovering Spoofing Attempts on Language Model Watermarks | [
"Thibaud Gloaguen",
"Nikola Jovanović",
"Robin Staab",
"Martin Vechev"
] | LLM watermarks stand out as a promising way to attribute ownership of LLM-generated text.
One threat to watermark credibility comes from spoofing attacks, where an unauthorized third party forges the watermark, enabling it to falsely attribute arbitrary texts to a particular LLM.
Despite recent work demonstrating that state-of-the-art schemes are, in fact, vulnerable to spoofing, no prior work has focused on post-hoc methods to discover spoofing attempts.
In this work, we for the first time propose a reliable statistical method to distinguish spoofed from genuinely watermarked text, suggesting that current spoofing attacks are less effective than previously thought.
In particular, we show that regardless of their underlying approach, all current learning-based spoofing methods consistently leave observable artifacts in spoofed texts, indicative of watermark forgery.
We build upon these findings to propose rigorous statistical tests that reliably reveal the presence of such artifacts and thus demonstrate that a watermark has been spoofed.
Our experimental evaluation shows high test power across all learning-based spoofing methods, providing insights into their fundamental limitations and suggesting a way to mitigate this threat. | [
"LLM watermarks; watermark spoofing"
] | Accept | https://openreview.net/pdf?id=CQjafCMd5c | https://openreview.net/forum?id=CQjafCMd5c | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"Sw8hkbawfS"
],
"note_type": [
"decision"
],
"note_created": [
1741250135804
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
A7qD0g6DoT | High payload robust watermarking of generative models with multiple triggers and channel coding | [
"Jianwei Fei",
"Benedetta Tondi",
"Mauro Barni"
] | We present a robust and high-payload black-box multi-bit watermarking scheme for generative models. In order to embed a high-payload message while retaining robustness against modifications of the watermarked network, we rely on the use of channel codes with strong error correction capacity (polar codes). This, in turn, increases the number of (coded) bits to be embedded within the network, thus challenging the embedding capabilities of the watermarking scheme. For this reason, we split the watermark bits into several chunks, each of which is associated with a different watermark triggering input. Through extensive experiments on the StyleGAN family of generative models, we show that the proposed method has excellent payload and robustness performance, allowing great flexibility to trade off between payload and robustness. Notably, our method demonstrates the capability of embedding over 100,000 coded bits for a net payload of up to 8192 bits while maintaining high image quality, with a PSNR exceeding 37 dB. Experiments demonstrate that the proposed high-payload strategy effectively improves the robustness of messages via high-performance channel codes against white-box model attacks such as fine-tuning and pruning. Codes at: https://github.com/jumpycat/CCMark | [
"Generative Model Watermarking",
"DNN Watermarking",
"Intellectual Property Right protection",
"Channel coding"
] | Accept | https://openreview.net/pdf?id=A7qD0g6DoT | https://openreview.net/forum?id=A7qD0g6DoT | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"VgmHh5HzFe"
],
"note_type": [
"decision"
],
"note_created": [
1741250133593
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |
6IfvMfNYrv | NSmark: Null Space Based Black-box Watermarking Defense Framework for Language Models | [
"Haodong Zhao",
"Jinming Hu",
"Peixuan Li",
"Fangqi Li",
"Jinrui Sha",
"Tianjie Ju",
"PeixuanChen",
"Zhuosheng Zhang",
"Gongshen Liu"
] | Language models (LMs) have emerged as critical intellectual property (IP) assets that necessitate protection. Although various watermarking strategies have been proposed, they remain vulnerable to Linear Functionality Equivalence Attack (LFEA), which can invalidate most existing white-box watermarks without prior knowledge of the watermarking scheme or training data. This paper further analyzes and extends the attack scenarios of LFEA to the commonly employed black-box settings for LMs by considering Last-Layer outputs (dubbed LL-LFEA). We discover that the null space of the output matrix remains invariant against LL-LFEA attacks. Based on this finding, we propose NSmark, a black-box watermarking scheme that is task-agnostic and capable of resisting LL-LFEA attacks. NSmark consists of three phases: (i) watermark generation using the digital signature of the owner, enhanced by spread spectrum modulation for increased robustness; (ii) watermark embedding through an output mapping extractor that preserves the LM performance while maximizing watermark capacity; (iii) watermark verification, assessed by extraction rate and null space conformity. Extensive experiments on both pre-training and downstream tasks confirm the effectiveness, scalability, reliability, fidelity, and robustness of our approach. Code is available at https://anonymous.4open.science/r/NSmark-2FC1. | [
"watermark",
"security/privacy",
"LM"
] | Accept | https://openreview.net/pdf?id=6IfvMfNYrv | https://openreview.net/forum?id=6IfvMfNYrv | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"AUFNi42LZv"
],
"note_type": [
"decision"
],
"note_created": [
1741250136267
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
56ZC5dqvJO | DeepMark Benchmark: Redefining Audio Watermarking Robustness | [
"Slavko Kovačević",
"Murilo Z. Silvestre",
"Kosta Pavlović",
"Petar Nedić",
"Igor Djurović"
] | This paper introduces DeepMark Benchmark, a novel and comprehensive framework for evaluating the robustness of audio watermarking algorithms. Designed with modularity and scalability in mind, the benchmark enables systematic testing of watermarking methods against a diverse set of attacks. These include basic audio editing operations, advanced desynchronization techniques, and deep learning-based attacks that leverage generative models and neural processing methods. Additionally, we introduce a new class of attacks, termed Process Disruption Attacks, which target generative AI (GenAI) platforms. These attacks do not rely on prior knowledge of the system’s architecture or signal processing methods and can arise inadvertently within the GenAI workflows. The code is available at: https://github.com/deepmarkpy/deepmarkpy-benchmark. | [
"deep learning; audio watermarking; benchmark; ai attacks",
"process disruption attacks; audio editing attacks; desynchronization attacks"
] | Accept | https://openreview.net/pdf?id=56ZC5dqvJO | https://openreview.net/forum?id=56ZC5dqvJO | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"VyUUm3XpeN"
],
"note_type": [
"decision"
],
"note_created": [
1741250134723
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
4kKLLnh63z | A Taxonomy of Watermarking Methods for AI-Generated Content | [
"Pierre Fernandez",
"Hady Elsahar",
"Sylvestre-Alvise Rebuffi",
"Tomas Soucek",
"Valeriu Lacatusu",
"Tuan Tran",
"Alexandre Mourachko"
] | As AI-generated content features more prominently in our lives, it becomes important to develop methods for tracing their origin. Watermarking is a promising approach, but a clear categorization of existing techniques is lacking. We propose a simple taxonomy of watermarking methods for generative AI based on where they are applied in the deployment of the models: (1) *post-hoc watermarking*, adding watermarks after content generation; (2) *out-of-model watermarking*, embedding watermarks during generation without modifying the model; (3) *in-model watermarking*, integrating watermarks directly into the model's parameters. By providing a structured overview of existing techniques across image, audio, and text domains, this taxonomy aims to help researchers, policymakers, and regulators make informed decisions about which approach best fits their needs, acknowledging that no single method is universally superior and that different approaches may be suited to specific use cases and requirements. | [
"Watermarking",
"Taxonomy",
"Generative AI"
] | Accept | https://openreview.net/pdf?id=4kKLLnh63z | https://openreview.net/forum?id=4kKLLnh63z | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"Eh4iIvImCP"
],
"note_type": [
"decision"
],
"note_created": [
1741250136271
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
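The three categories above are defined purely by where the watermark enters the deployment pipeline. A tiny, non-normative sketch of that classification (the example mapping is a reading of the abstract, not a claim from the paper):

```python
from enum import Enum

class WatermarkStage(Enum):
    POST_HOC = "added to the content after generation"
    OUT_OF_MODEL = "applied during generation, model weights unchanged"
    IN_MODEL = "embedded in the model parameters themselves"

# Hypothetical bucketing of familiar method families under this taxonomy.
examples = {
    "pixel-domain watermark stamped onto a finished image": WatermarkStage.POST_HOC,
    "sampling-rule / logit-biasing watermark for LLM text": WatermarkStage.OUT_OF_MODEL,
    "generator fine-tuned so every output carries the mark": WatermarkStage.IN_MODEL,
}
```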
4eXshz6L8m | Watermark Smoothing Attacks against Language Models | [
"Hongyan Chang",
"Hamed Hassani",
"Reza Shokri"
] | Watermarking is a key technique for detecting AI-generated text. In this work, we study its vulnerabilities and introduce the Smoothing Attack, a novel watermark removal method. By leveraging the relationship between the model’s confidence and watermark detectability, our attack selectively smoothes the watermarked content, erasing watermark traces while preserving text quality. We validate our attack on open-source models ranging from 1.3B to 30B parameters on 10 different watermarks, demonstrating its effectiveness. Our findings expose critical weaknesses in existing watermarking schemes and highlight the need for stronger defenses. | [
"Watermark",
"Language models"
] | Accept | https://openreview.net/pdf?id=4eXshz6L8m | https://openreview.net/forum?id=4eXshz6L8m | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"q2wWmHJxFD"
],
"note_type": [
"decision"
],
"note_created": [
1741250135306
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
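The abstract describes the attack at a high level: positions where a (proxy) model is least confident tend to carry most of the per-token watermark evidence, so those are the tokens that get re-sampled from a smoothed distribution. A minimal sketch under that reading — the threshold, the proxy model, and the resampling rule are illustrative assumptions, not the paper's exact procedure:

```python
def smooth_tokens(token_ids, proxy_logprobs, resample_fn, conf_threshold=-2.5):
    """
    token_ids:      list[int], the watermarked text as token ids.
    proxy_logprobs: list[float], log-probability a proxy model assigns to each token.
    resample_fn:    callable(position) -> int, drawing a replacement token from a
                    smoothed (e.g. temperature-raised, watermark-free) distribution.
    Positions where the proxy model is unconfident are the ones most likely to carry
    per-token watermark evidence, so only those tokens are overwritten.
    """
    out = []
    for pos, (tok, logprob) in enumerate(zip(token_ids, proxy_logprobs)):
        out.append(resample_fn(pos) if logprob < conf_threshold else tok)
    return out
```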
4SjAevJ5uE | Bayesian Inference for Robust Video Watermarking | [
"Wonhyuk Ahn",
"Jihyeon Kang",
"Seung-Hun Nam"
] | We propose a simple yet effective Bayesian extractor for multi-frame video watermarking that can be plugged into any existing image-based watermarking method, such as HiDDeN, CIN, MBRS, TrustMark, WAM, or VideoSeal.
In particular, we focus on challenging real-world conditions where videos undergo repeated or strong compression (e.g., H.264, H.265) or frame-rate changes that typically degrade watermark signals severely.
When all frames carry the same hidden bits, our Bayesian extractor treats each frame’s output as an independent observation and aggregates the log-likelihood ratios across frames, in contrast to naive averaging.
Despite only modifying the extraction phase, this approach consistently boosts bit accuracy under moderate-to-aggressive compression, frame-rate conversions, and other distortions—while preserving the same watermark imperceptibility and embedding efficiency as the baseline.
Experiments on diverse transformations and watermarking models show that these benefits are particularly pronounced when frames encounter uneven or heavy distortions, making our Bayesian extraction a lightweight but potent upgrade for robust video watermarking. | [
"video watermarking",
"bayesian inference",
"video compression"
] | Accept | https://openreview.net/pdf?id=4SjAevJ5uE | https://openreview.net/forum?id=4SjAevJ5uE | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"0lOt9QfCx2"
],
"note_type": [
"decision"
],
"note_created": [
1741250133907
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |
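The key extraction-side change described above is to fuse frames by summing per-bit log-likelihood ratios rather than averaging soft scores. A minimal sketch, assuming each frame yields a probability per payload bit (the [0, 1] convention and variable names are assumptions, not the paper's interface):

```python
import numpy as np

def aggregate_bits(frame_probs: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """
    frame_probs: shape (num_frames, num_bits); entry (f, b) is the extractor's
                 probability that payload bit b equals 1 given frame f alone.
    Returns the fused 0/1 payload after summing per-frame log-likelihood ratios.
    """
    p = np.clip(frame_probs, eps, 1.0 - eps)
    llr = np.log(p) - np.log(1.0 - p)   # per-frame, per-bit log-likelihood ratio
    return (llr.sum(axis=0) > 0).astype(int)

def average_bits(frame_probs: np.ndarray) -> np.ndarray:
    """Naive baseline: average soft scores across frames, then threshold."""
    return (frame_probs.mean(axis=0) > 0.5).astype(int)
```

Under the independence assumption, the LLR sum is the Bayes-optimal fusion rule: frames that decode a bit confidently contribute proportionally more evidence, whereas plain averaging weights every frame equally regardless of how distorted it is.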
44TCZ5XTuR | Deep Audio Watermarks are Shallow: Limitations of Post-Hoc Watermarking Techniques for Speech | [
"Patrick O'Reilly",
"Zeyu Jin",
"Jiaqi Su",
"Bryan Pardo"
] | In the audio modality, state-of-the-art watermarking methods leverage deep neural networks to allow the embedding of human-imperceptible signatures in generated audio. The ideal is to embed signatures that can be detected with high accuracy when the watermarked audio is altered via compression, filtering, or other transformations. Existing audio watermarking techniques operate in a post-hoc manner, manipulating "low-level" features of audio recordings after generation (e.g. through the addition of a low-magnitude watermark signal). We show that this post-hoc formulation makes existing audio watermarks vulnerable to transformation-based removal attacks. Focusing on speech audio, we (1) unify and extend existing evaluations of the effect of audio transformations on watermark detectability, and (2) demonstrate that state-of-the-art post-hoc audio watermarks can be removed with no knowledge of the watermarking scheme and minimal degradation in audio quality. | [
"audio watermarking",
"deepfakes",
"speech synthesis"
] | Accept | https://openreview.net/pdf?id=44TCZ5XTuR | https://openreview.net/forum?id=44TCZ5XTuR | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"WDKS8WmMFM"
],
"note_type": [
"decision"
],
"note_created": [
1741250134989
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
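To make the threat model concrete, here is a toy chain of generic, watermark-agnostic transformations of the kind the paper argues post-hoc watermarks are vulnerable to. The specific chain (resample down/up, low-pass, requantize) is an assumption chosen for illustration and is far cruder than the attacks evaluated in the paper:

```python
import numpy as np

def resample(audio: np.ndarray, factor: float) -> np.ndarray:
    idx = np.linspace(0, len(audio) - 1, max(2, int(len(audio) * factor)))
    return np.interp(idx, np.arange(len(audio)), audio)

def moving_average_lowpass(audio: np.ndarray, width: int = 5) -> np.ndarray:
    return np.convolve(audio, np.ones(width) / width, mode="same")

def requantize(audio: np.ndarray, bits: int = 8) -> np.ndarray:
    levels = 2 ** (bits - 1)
    return np.round(np.clip(audio, -1.0, 1.0) * levels) / levels

def removal_attack(audio: np.ndarray) -> np.ndarray:
    # Down- then up-resampling plus low-pass filtering discards the high-frequency,
    # low-magnitude residual where post-hoc watermark signals often live; coarse
    # requantization removes what survives in the low-amplitude bits.
    x = resample(audio, 0.5)
    x = resample(x, len(audio) / len(x))
    x = moving_average_lowpass(x)
    return requantize(x)
```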
3K4oAgZTcO | RINTAW: A Robust Invisible Watermark for Tabular Generative Models | [
"Liancheng Fang",
"Aiwei Liu",
"Henry Peng Zou",
"Hengrui Zhang",
"Philip S. Yu"
] | Watermarking tabular generative models is critical for preventing misuse of synthetic tabular data. However, existing watermarking methods for tabular data often lack robustness against common attacks (e.g., row shuffling) or are limited to specific data types (e.g., numerical), restricting their practical utility. To address these challenges, we propose RINTAW, a novel watermarking framework for tabular generative models that is robust to common attacks while preserving data fidelity. RINTAW embeds watermarks by leveraging a subset of column values as seeds. To ensure the pseudorandomness of the watermark key, RINTAW employs an adaptive column selection strategy and a masking mechanism to enforce distribution uniformity. This approach guarantees minimal distortion to the original data distribution and is compatible with any tabular data format (numerical, categorical, or mixed) and generative model architecture. We validate RINTAW on six real-world tabular datasets, demonstrating that the quality of watermarked tables remains nearly indistinguishable from non-watermarked ones while achieving high detectability even under strong post-editing attacks. The code is available at https://github.com/fangliancheng/RINTAW. | [
"Tabular watermark"
] | Accept | https://openreview.net/pdf?id=3K4oAgZTcO | https://openreview.net/forum?id=3K4oAgZTcO | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"hp4GKYu0hL"
],
"note_type": [
"decision"
],
"note_created": [
1741250135676
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. The reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing discussion in the field.\\nWhile some weaknesses were identified, they do not warrant rejection given the non-archival nature of this workshop. We encourage the authors to address the major concerns raised by the reviewers to strengthen your work, and to submit the revised version by the 5th of April.\"}"
]
} |
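The construction sketched in the abstract keys the watermark on a subset of column values, which makes detection independent of row order. A deliberately simplified sketch of that idea — the parity-based carrier test, the fixed seed columns, and the z-score detector are illustrative choices, not RINTAW's actual mechanism:

```python
import hashlib
import math

def row_bit(row: dict, seed_cols: list, key: bytes) -> int:
    """Keyed pseudorandom bit derived from the row's seed-column values."""
    payload = key + "|".join(str(row[c]) for c in seed_cols).encode()
    return hashlib.sha256(payload).digest()[0] & 1

def detect(rows: list, seed_cols: list, carrier_col: str, key: bytes) -> float:
    """
    Z-score of the watermark test; the carrier column is assumed numeric, and the
    'embedded' signal is the parity of its second decimal digit matching the keyed bit.
    Under the null (no watermark), hits ~ Binomial(n, 0.5).
    """
    hits = sum(
        1 for row in rows
        if (int(round(float(row[carrier_col]) * 100)) & 1) == row_bit(row, seed_cols, key)
    )
    n = len(rows)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

An embedding counterpart would adjust the carrier column (within a small tolerance) so each row's parity matches its keyed bit; detection then reduces to the one-sided binomial test above, which is unaffected by row shuffling.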
3BSgwbC7Q6 | Hidden in the Noise: Two-Stage Robust Watermarking for Images | [
"Kasra Arabi",
"Benjamin Feuer",
"R. Teal Witter",
"Chinmay Hegde",
"Niv Cohen"
] | As the quality of image generators continues to improve, deepfakes become a topic of considerable societal debate. Image watermarking allows responsible model owners to detect and label their AI-generated content, which can mitigate the harm. Yet, current state-of-the-art methods in image watermarking remain vulnerable to forgery and removal attacks. This vulnerability occurs in part because watermarks distort the distribution of generated images, unintentionally revealing information about the watermarking techniques.
In this work, we first demonstrate a distortion-free watermarking method for images, based on a diffusion model's initial noise. However, detecting the watermark requires comparing the initial noise reconstructed for an image to all previously used initial noises. To mitigate these issues, we propose a two-stage watermarking framework for efficient detection. During generation, we augment the initial noise with generated Fourier patterns to embed information about the group of initial noises we used. For detection, we (i) retrieve the relevant group of noises, and (ii) search within the given group for an initial noise that might match our image. This watermarking approach achieves state-of-the-art robustness to forgery and removal against a large battery of attacks. | [
"Watermarking"
] | Accept | https://openreview.net/pdf?id=3BSgwbC7Q6 | https://openreview.net/forum?id=3BSgwbC7Q6 | ICLR.cc/2025/Workshop/WMARK | 2025 | {
"note_id": [
"DGaGlMHKsN"
],
"note_type": [
"decision"
],
"note_created": [
1741250134166
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/WMARK/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept\", \"comment\": \"We are pleased to inform you that your paper has been accepted to the first workshop on Watermarking for Generative AI. While some reviews were negative, we believe that the core idea of your paper is compelling and has the potential to spark interest and discussion within the community.\\nThe reviewers found that the paper aligns well with the workshop's theme and contributes to the ongoing dialogue in the field. Although certain weaknesses were noted, they do not warrant rejection, especially given the non-archival nature of this workshop.\\nWe encourage you to address the major concerns raised by the reviewers to strengthen your work and submit the revised version by the 5th of April.\"}"
]
} |
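Detection as described in the abstract is two-stage: recover which group of initial noises was used from the embedded Fourier pattern, then search only within that group for a matching initial noise. A high-level sketch, assuming the diffusion inversion that produces `recon_noise` is done elsewhere (similarity measures, shapes, and names are illustrative assumptions):

```python
import numpy as np

def best_group(recon_noise: np.ndarray, group_patterns: list) -> int:
    """Stage 1: pick the group whose Fourier-domain pattern best matches the noise."""
    spec = np.abs(np.fft.fft2(recon_noise))
    scores = [float(np.vdot(spec, np.abs(np.fft.fft2(p)))) for p in group_patterns]
    return int(np.argmax(scores))

def best_noise(recon_noise: np.ndarray, group_noises: list):
    """Stage 2: within the retrieved group, return (index, score) of the closest noise."""
    x = recon_noise.ravel()
    x = (x - x.mean()) / (x.std() + 1e-9)
    best_idx, best_score = -1, -np.inf
    for i, z in enumerate(group_noises):
        zf = z.ravel()
        zf = (zf - zf.mean()) / (zf.std() + 1e-9)
        score = float(np.dot(x, zf)) / x.size   # normalized cross-correlation
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```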