🔥 As a newly released project, we welcome PRs! If you have implemented an LDM watermarking algorithm or are interested in contributing one, we'd love to include it in MarkDiffusion. Join our community and help make generative watermarking more accessible to everyone!
- Notes
- Updates
- Introduction to MarkDiffusion
- Installation
- Quick Start
- How to Use the Toolkit
- Citation
As the MarkDiffusion repository has grown in both content and size, we have created a model storage repository on Hugging Face, called Generative-Watermark-Toolkits, to facilitate usage. It hosts the default models for watermarking algorithms that rely on self-trained weights; those weights have been removed from the corresponding ckpts/ folders in the main repository. Before running the code, please download the required models from the Hugging Face repository according to the config paths and save them to the ckpts/ directory.
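For example, the weights can be fetched with the huggingface_hub library, as in the minimal sketch below. The repo ID shown is a placeholder assumption; use the actual Hugging Face path of Generative-Watermark-Toolkits and the subfolder required by your algorithm's config.

```python
from huggingface_hub import snapshot_download

# Download the default model weights into the local ckpts/ directory.
# NOTE: "your-org/Generative-Watermark-Toolkits" is a placeholder repo ID;
# substitute the actual Hugging Face repository path.
snapshot_download(
    repo_id="your-org/Generative-Watermark-Toolkits",
    local_dir="ckpts/",
)
```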
🎯 (2025.10.10) Add the Mask, Overlay, and AdaptiveNoiseInjection image attack tools. Thanks to Zheyu Fu for his PR!
🎯 (2025.10.09) Add the VideoCodecAttack, FrameRateAdapter, and FrameInterpolationAttack video attack tools. Thanks to Luyang Si for his PR!
🎯 (2025.10.08) Add the SSIM, BRISQUE, VIF, and FSIM image quality analyzers. Thanks to Huan Wang for her PR!
✨ (2025.10.07) Add the SFW watermarking method. Thanks to Huan Wang for her PR!
✨ (2025.10.07) Add the VideoMark watermarking method. Thanks to Hanqian Li for his PR!
✨ (2025.09.29) Add the GaussMarker watermarking method. Thanks to Luyang Si for his PR!
MarkDiffusion is an open-source Python toolkit for generative watermarking of latent diffusion models (LDMs). As diffusion-based generative models see wider use, ensuring the authenticity and origin of generated media becomes critical. MarkDiffusion simplifies access to, understanding of, and assessment of watermarking technologies, making them accessible to both researchers and the broader community. Note: if you are interested in LLM watermarking (text watermarking), please refer to the MarkLLM toolkit from our group.
The toolkit comprises three key components: a unified implementation framework for streamlined watermarking algorithm integration and user-friendly interfaces; a mechanism visualization suite that intuitively showcases added and extracted watermark patterns to aid public understanding; and a comprehensive evaluation module offering standard implementations of 32 tools (see the tables below) across three essential aspects (detectability, robustness, and output quality), plus 8 automated evaluation pipelines.
- Unified Implementation Framework: MarkDiffusion provides a modular architecture supporting state-of-the-art generative image/video watermarking algorithms for LDMs.
- Comprehensive Algorithm Support: Implements watermarking algorithms from two major categories, pattern-based methods (Tree-Ring, Ring-ID, ROBIN, WIND) and key-based methods (Gaussian-Shading, PRC, SEAL, VideoShield), plus recently added methods such as GaussMarker, VideoMark, and SFW (see the updates above).
- Visualization Solutions: Custom visualization tools offer clear, insightful views into how different watermarking algorithms operate under various scenarios, helping demystify their mechanisms for users.
- Evaluation Module: With 32 evaluation tools covering detectability, robustness, and impact on output quality, MarkDiffusion provides comprehensive assessment capabilities. It features 8 automated evaluation pipelines: two for watermark detection and six for image/video quality analysis.
MarkDiffusion supports eight pipelines: two for detection (WatermarkedMediaDetectionPipeline and UnWatermarkedMediaDetectionPipeline) and six for quality analysis. The table below details the quality analysis pipelines.
Quality Analysis Pipeline | Input Type | Required Data | Applicable Metrics |
---|---|---|---|
DirectImageQualityAnalysisPipeline | Single image | Generated watermarked/unwatermarked image | Metrics for single image evaluation |
ReferencedImageQualityAnalysisPipeline | Image + reference content | Generated watermarked/unwatermarked image + reference image/text | Metrics requiring computation between single image and reference content (text/image) |
GroupImageQualityAnalysisPipeline | Image set (+ reference image set) | Generated watermarked/unwatermarked image set (+reference image set) | Metrics requiring computation on image sets |
RepeatImageQualityAnalysisPipeline | Image set | Repeatedly generated watermarked/unwatermarked image set | Metrics for evaluating repeatedly generated image sets |
ComparedImageQualityAnalysisPipeline | Two images for comparison | Generated watermarked and unwatermarked images | Metrics measuring differences between two images |
DirectVideoQualityAnalysisPipeline | Single video | Generated video frame set | Metrics for overall video evaluation |
Tool Name | Evaluation Category | Function Description | Output Metrics |
---|---|---|---|
FundamentalSuccessRateCalculator | Detectability | Calculate classification metrics for fixed-threshold watermark detection | Various classification metrics |
DynamicThresholdSuccessRateCalculator | Detectability | Calculate classification metrics for dynamic-threshold watermark detection | Various classification metrics |
**Image Attack Tools** | | | |
Rotation | Robustness (Image) | Image rotation attack, testing watermark resistance to rotation transforms | Rotated images/frames |
CrSc (Crop & Scale) | Robustness (Image) | Cropping and scaling attack, evaluating watermark robustness to size changes | Cropped/scaled images/frames |
GaussianNoise | Robustness (Image) | Gaussian noise attack, testing watermark resistance to noise interference | Noise-corrupted images/frames |
GaussianBlurring | Robustness (Image) | Gaussian blur attack, evaluating watermark resistance to blur processing | Blurred images/frames |
JPEGCompression | Robustness (Image) | JPEG compression attack, testing watermark robustness to lossy compression | Compressed images/frames |
Brightness | Robustness (Image) | Brightness adjustment attack, evaluating watermark resistance to brightness changes | Brightness-modified images/frames |
Mask | Robustness (Image) | Image masking attack, testing watermark resistance to partial occlusion by random black rectangles | Masked images/frames |
Overlay | Robustness (Image) | Image overlay attack, testing watermark resistance to graffiti-style strokes and annotations | Overlaid images/frames |
AdaptiveNoiseInjection | Robustness (Image) | Adaptive noise injection attack, testing watermark resistance to content-aware noise (Gaussian/Salt-pepper/Poisson/Speckle) | Noisy images/frames with adaptive noise |
**Video Attack Tools** | | | |
MPEG4Compression | Robustness (Video) | MPEG-4 video compression attack, testing video watermark compression robustness | Compressed video frames |
FrameAverage | Robustness (Video) | Frame averaging attack, destroying watermarks through inter-frame averaging | Averaged video frames |
FrameSwap | Robustness (Video) | Frame swapping attack, testing robustness by changing frame sequences | Swapped video frames |
VideoCodecAttack | Robustness (Video) | Codec re-encoding attack simulating platform transcoding (H.264/H.265/VP9/AV1) | Re-encoded video frames |
FrameRateAdapter | Robustness (Video) | Frame rate conversion attack that resamples frames while preserving duration | Resampled frame sequence |
FrameInterpolationAttack | Robustness (Video) | Frame interpolation attack inserting blended frames to alter temporal density | Interpolated video frames |
**Image Quality Analyzers** | | | |
InceptionScoreCalculator | Quality (Image) | Evaluate generated image quality and diversity | IS score |
FIDCalculator | Quality (Image) | Fréchet Inception Distance, measuring distribution difference between generated and real images | FID value |
LPIPSAnalyzer | Quality (Image) | Learned Perceptual Image Patch Similarity, evaluating perceptual quality | LPIPS distance |
CLIPScoreCalculator | Quality (Image) | CLIP-based text-image consistency evaluation | CLIP similarity score |
PSNRAnalyzer | Quality (Image) | Peak Signal-to-Noise Ratio, measuring image distortion | PSNR value (dB) |
NIQECalculator | Quality (Image) | Natural Image Quality Evaluator, reference-free quality assessment | NIQE score |
SSIMAnalyzer | Quality (Image) | Structural Similarity Index between two images | SSIM value |
BRISQUEAnalyzer | Quality (Image) | Blind/Referenceless Image Spatial Quality Evaluator, evaluating perceptual quality of an image without requiring a reference | BRISQUE score |
VIFAnalyzer | Quality (Image) | Visual Information Fidelity analyzer, comparing a distorted image with a reference image to quantify the amount of visual information preserved | VIF value |
FSIMAnalyzer | Quality (Image) | Feature Similarity Index analyzer, comparing structural similarity between two images based on phase congruency and gradient magnitude | FSIM value |
**Video Quality Analyzers** | | | |
SubjectConsistencyAnalyzer | Quality (Video) | Evaluate consistency of subject objects in video | Subject consistency score |
BackgroundConsistencyAnalyzer | Quality (Video) | Evaluate background coherence and stability in video | Background consistency score |
MotionSmoothnessAnalyzer | Quality (Video) | Evaluate smoothness of video motion | Motion smoothness metric |
DynamicDegreeAnalyzer | Quality (Video) | Measure dynamic level and change magnitude in video | Dynamic degree value |
ImagingQualityAnalyzer | Quality (Video) | Comprehensive evaluation of video imaging quality | Imaging quality score |
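All attack tools share one usage pattern: they are composed into a media editor list that the evaluation pipelines apply, in order, to the generated media before detection. The sketch below is a minimal illustration; JPEGCompression(quality=60) is taken from the detection pipeline example later in this README, while the constructor arguments of the other tools in the table are not shown here, so consult each tool's documentation.

```python
from evaluation.tools.image_editor import JPEGCompression

# Editors are applied in sequence to each generated image/frame before
# the watermark detector runs (see the detection pipeline example below).
media_editor_list = [JPEGCompression(quality=60)]
```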
- Python 3.10+
- PyTorch
- Install dependencies:
```bash
pip install -r requirements.txt
```
Note: Some algorithms may require additional setup steps. Please refer to individual algorithm documentation for specific requirements.
Here's a simple example to get you started with MarkDiffusion:
```python
import torch
from watermark.auto_watermark import AutoWatermark
from utils.diffusion_config import DiffusionConfig
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
# Device setup
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Configure the diffusion pipeline (replace "model_path" with a local path or a Hugging Face model ID)
scheduler = DPMSolverMultistepScheduler.from_pretrained("model_path", subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained("model_path", scheduler=scheduler).to(device)
diffusion_config = DiffusionConfig(
scheduler=scheduler,
pipe=pipe,
device=device,
image_size=(512, 512),
num_inference_steps=50,
guidance_scale=7.5,
gen_seed=42,
inversion_type="ddim"
)
# Load watermark algorithm
watermark = AutoWatermark.load('TR',
algorithm_config='config/TR.json',
diffusion_config=diffusion_config)
# Generate watermarked media
prompt = "A beautiful sunset over the ocean"
watermarked_image = watermark.generate_watermarked_media(prompt)
# Detect watermark
detection_result = watermark.detect_watermark_in_media(watermarked_image)
print(f"Watermark detected: {detection_result}")
We provide extensive examples in MarkDiffusion_demo.ipynb.
```python
import torch
from watermark.auto_watermark import AutoWatermark
from utils.diffusion_config import DiffusionConfig
# Load the watermarking algorithm (reuses the diffusion_config built in Quick Start)
mywatermark = AutoWatermark.load(
    'GS',
    algorithm_config='config/GS.json',
    diffusion_config=diffusion_config
)
# Generate watermarked image
watermarked_image = mywatermark.generate_watermarked_media(
input_data="A beautiful landscape with a river and mountains"
)
# Visualize the watermarked image
watermarked_image.show()
# Detect watermark
detection_result = mywatermark.detect_watermark_in_media(watermarked_image)
print(detection_result)
```
The toolkit includes custom visualization tools that enable clear and insightful views into how different watermarking algorithms operate under various scenarios. These visualizations help demystify the algorithms' mechanisms, making them more understandable for users.
```python
from visualize.auto_visualization import AutoVisualizer
# Get data for visualization
data_for_visualization = mywatermark.get_data_for_visualize(watermarked_image)
# Load Visualizer
visualizer = AutoVisualizer.load('GS',
data_for_visualization=data_for_visualization)
# Draw diagrams on Matplotlib canvas
fig = visualizer.visualize(rows=2, cols=2,
                           methods=['draw_watermark_bits',
                                    'draw_reconstructed_watermark_bits',
                                    'draw_inverted_latents',
                                    'draw_inverted_latents_fft'])
```
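Since visualize returns a standard Matplotlib figure, it can be saved with the usual Matplotlib API (the filename and options below are just examples):

```python
# Persist the diagram to disk; any format Matplotlib supports works.
fig.savefig("gs_visualization.png", dpi=200, bbox_inches="tight")
```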
- Watermark Detection Pipeline
```python
from evaluation.dataset import StableDiffusionPromptsDataset
from evaluation.pipelines.detection import (
WatermarkedMediaDetectionPipeline,
UnWatermarkedMediaDetectionPipeline,
DetectionPipelineReturnType
)
from evaluation.tools.image_editor import JPEGCompression
from evaluation.tools.success_rate_calculator import DynamicThresholdSuccessRateCalculator
# Dataset
my_dataset = StableDiffusionPromptsDataset(max_samples=200)
# Set up detection pipelines
pipeline1 = WatermarkedMediaDetectionPipeline(
dataset=my_dataset,
media_editor_list=[JPEGCompression(quality=60)],
show_progress=True,
return_type=DetectionPipelineReturnType.SCORES
)
pipeline2 = UnWatermarkedMediaDetectionPipeline(
dataset=my_dataset,
media_editor_list=[],
show_progress=True,
return_type=DetectionPipelineReturnType.SCORES
)
# Configure detection parameters
detection_kwargs = {
    "num_inference_steps": 50,
    "guidance_scale": 1.0,
}

# Calculate success rates
# NOTE: the three values below are example settings; adjust them to your needs.
labels = ['TPR', 'F1']  # classification metrics to report (example)
rules = 'best'          # threshold selection rule (example)
target_fpr = 0.01       # target false positive rate (example)
calculator = DynamicThresholdSuccessRateCalculator(
    labels=labels,
    rule=rules,
    target_fpr=target_fpr
)
# Evaluate both pipelines; my_watermark is an AutoWatermark instance
# loaded as in the Quick Start example.
results = calculator.calculate(
pipeline1.evaluate(my_watermark, detection_kwargs=detection_kwargs),
pipeline2.evaluate(my_watermark, detection_kwargs=detection_kwargs)
)
print(results)
```
- Image Quality Analysis Pipeline
```python
from watermark.auto_watermark import AutoWatermark
from evaluation.dataset import StableDiffusionPromptsDataset, MSCOCODataset
from evaluation.pipelines.image_quality_analysis import (
DirectImageQualityAnalysisPipeline,
ReferencedImageQualityAnalysisPipeline,
GroupImageQualityAnalysisPipeline,
RepeatImageQualityAnalysisPipeline,
ComparedImageQualityAnalysisPipeline,
QualityPipelineReturnType
)
from evaluation.tools.image_quality_analyzer import (
NIQECalculator, CLIPScoreCalculator, FIDCalculator,
InceptionScoreCalculator, LPIPSAnalyzer, PSNRAnalyzer
)
# Choose the metric to evaluate and build the matching pipeline.
# NOTE: `metric` and `max_samples` below are example settings for this demo.
metric = 'NIQE'    # one of: 'NIQE', 'CLIP', 'FID', 'IS', 'LPIPS', 'PSNR'
max_samples = 200  # example sample budget

# NIQE (Natural Image Quality Evaluator, reference-free)
if metric == 'NIQE':
my_dataset = StableDiffusionPromptsDataset(max_samples=max_samples)
pipeline = DirectImageQualityAnalysisPipeline(
dataset=my_dataset,
watermarked_image_editor_list=[],
unwatermarked_image_editor_list=[],
analyzers=[NIQECalculator()],
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# CLIP Score
elif metric == 'CLIP':
my_dataset = MSCOCODataset(max_samples=max_samples)
pipeline = ReferencedImageQualityAnalysisPipeline(
dataset=my_dataset,
watermarked_image_editor_list=[],
unwatermarked_image_editor_list=[],
analyzers=[CLIPScoreCalculator()],
unwatermarked_image_source='generated',
reference_image_source='natural',
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# FID (Fréchet Inception Distance)
elif metric == 'FID':
my_dataset = MSCOCODataset(max_samples=max_samples)
pipeline = GroupImageQualityAnalysisPipeline(
dataset=my_dataset,
watermarked_image_editor_list=[],
unwatermarked_image_editor_list=[],
analyzers=[FIDCalculator()],
unwatermarked_image_source='generated',
reference_image_source='natural',
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# IS (Inception Score)
elif metric == 'IS':
my_dataset = StableDiffusionPromptsDataset(max_samples=max_samples)
pipeline = GroupImageQualityAnalysisPipeline(
dataset=my_dataset,
watermarked_image_editor_list=[],
unwatermarked_image_editor_list=[],
analyzers=[InceptionScoreCalculator()],
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# LPIPS (Learned Perceptual Image Patch Similarity)
elif metric == 'LPIPS':
my_dataset = StableDiffusionPromptsDataset(max_samples=10)
pipeline = RepeatImageQualityAnalysisPipeline(
dataset=my_dataset,
prompt_per_image=20,
watermarked_image_editor_list=[],
unwatermarked_image_editor_list=[],
analyzers=[LPIPSAnalyzer()],
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# PSNR (Peak Signal-to-Noise Ratio)
elif metric == 'PSNR':
my_dataset = StableDiffusionPromptsDataset(max_samples=max_samples)
pipeline = ComparedImageQualityAnalysisPipeline(
dataset=my_dataset,
watermarked_image_editor_list=[],
unwatermarked_image_editor_list=[],
analyzers=[PSNRAnalyzer()],
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# Load watermark and evaluate
algorithm_name = 'GS'  # example algorithm name; any supported method works
my_watermark = AutoWatermark.load(
    algorithm_name,
    algorithm_config=f'config/{algorithm_name}.json',
    diffusion_config=diffusion_config  # built as in the Quick Start example
)
print(pipeline.evaluate(my_watermark))
```
- Video Quality Analysis Pipeline
```python
import torch

from evaluation.dataset import VBenchDataset
from evaluation.pipelines.video_quality_analysis import DirectVideoQualityAnalysisPipeline
from evaluation.tools.video_quality_analyzer import (
SubjectConsistencyAnalyzer,
MotionSmoothnessAnalyzer,
DynamicDegreeAnalyzer,
BackgroundConsistencyAnalyzer,
ImagingQualityAnalyzer
)
# Example settings for this demo
device = 'cuda' if torch.cuda.is_available() else 'cpu'
metric = 'subject_consistency'  # which video quality metric to compute
dimension = metric              # VBench dimension (example: reuse the metric name)

# Load VBench dataset
my_dataset = VBenchDataset(max_samples=200, dimension=dimension)

# Initialize analyzer based on metric
if metric == 'subject_consistency':
analyzer = SubjectConsistencyAnalyzer(device=device)
elif metric == 'motion_smoothness':
analyzer = MotionSmoothnessAnalyzer(device=device)
elif metric == 'dynamic_degree':
analyzer = DynamicDegreeAnalyzer(device=device)
elif metric == 'background_consistency':
analyzer = BackgroundConsistencyAnalyzer(device=device)
elif metric == 'imaging_quality':
analyzer = ImagingQualityAnalyzer(device=device)
else:
    raise ValueError(
        f'Invalid metric: {metric}. Supported metrics: '
        'subject_consistency, motion_smoothness, dynamic_degree, '
        'background_consistency, imaging_quality'
    )
# Create video quality analysis pipeline
# (QualityPipelineReturnType is imported as in the image quality example above)
pipeline = DirectVideoQualityAnalysisPipeline(
dataset=my_dataset,
watermarked_video_editor_list=[],
unwatermarked_video_editor_list=[],
watermarked_frame_editor_list=[],
unwatermarked_frame_editor_list=[],
analyzers=[analyzer],
show_progress=True,
return_type=QualityPipelineReturnType.MEAN_SCORES
)
# my_watermark is an AutoWatermark instance, loaded as in the Quick Start example
print(pipeline.evaluate(my_watermark))
```
```bibtex
@article{pan2025markdiffusion,
  title={MarkDiffusion: An Open-Source Toolkit for Generative Watermarking of Latent Diffusion Models},
  author={Pan, Leyi and Guan, Sheng and Fu, Zheyu and Si, Luyang and Wang, Zian and Hu, Xuming and King, Irwin and Yu, Philip S and Liu, Aiwei and Wen, Lijie},
  journal={arXiv preprint arXiv:2509.10569},
  year={2025}
}
```