
HCP-Diffusion V2


📘 English document | 📘 Chinese document (中文文档)

The old HCP-Diffusion V1 lives on the main branch.

Introduction

HCP-Diffusion is a diffusion model toolbox built on top of the 🐱 RainbowNeko Engine.
It features a clean code structure and flexible Python-based configuration files, making it easier to conduct and manage complex experiments. It ships with a wide variety of training components and, compared to existing frameworks, is more extensible, flexible, and user-friendly.

HCP-Diffusion lets you use a single .py config file to unify training workflows across popular methods and model architectures, including Prompt-tuning (Textual Inversion), DreamArtist, Fine-tuning, DreamBooth, LoRA, ControlNet, and more.
These techniques can also be freely combined.

This framework also implements DreamArtist++, an upgraded version of DreamArtist based on LoRA. It achieves high generalization and controllability from just a single training image, and compared to the original DreamArtist it offers better stability, image quality, and controllability, with faster training.


Installation

Install PyTorch first.

Install via pip:

pip install hcpdiff
# Initialize configuration
hcpinit

Install from source:

git clone https://github.com/7eu7d7/HCP-Diffusion.git
cd HCP-Diffusion
pip install -e .
# Initialize configuration
hcpinit

Use xFormers to reduce memory usage and accelerate training:

# Choose the appropriate xformers version for your PyTorch version
pip install xformers==?
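
A quick sanity check after installing (plain Python, not an HCP-Diffusion command) is to import both packages and print their versions:

import torch
import xformers

# An ImportError or CUDA symbol error here usually means the xformers
# build does not match the installed PyTorch version.
print(torch.__version__, xformers.__version__)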

🚀 Python Configuration Files

RainbowNeko Engine supports configuration files written in a Python-like syntax. This allows users to call functions and classes directly within the configuration file, with function parameters inheritable from parent configuration files. The framework automatically handles the formatting of these configuration files.

For example, consider the following configuration file:

dict(
    layer=Linear(in_features=4, out_features=4)
)

During parsing, this will be automatically compiled into:

dict(
    layer=dict(_target_=Linear, in_features=4, out_features=4)
)

After parsing, the framework will instantiate the components accordingly. This means users can write configuration files using familiar Python syntax.
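
Nested calls compile the same way. The sketch below uses standard torch.nn classes; the surrounding keys (model, backbone, lr) are hypothetical and only illustrate the syntax, not the framework's actual config schema:

from torch.nn import Linear, ReLU

# A hypothetical config in the same style as the example above.
dict(
    model=dict(
        backbone=Linear(in_features=4, out_features=8),  # becomes dict(_target_=Linear, ...)
        activation=ReLU(),                               # becomes dict(_target_=ReLU)
    ),
    lr=1e-4,  # plain values pass through unchanged
)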


✨ Features

📦 Model Support

Model Name                 | Status
Stable Diffusion 1.5       | ✅ Supported
Stable Diffusion XL (SDXL) | ✅ Supported
PixArt                     | ✅ Supported
FLUX                       | ✅ Supported
Stable Diffusion 3 (SD3)   | 🚧 In Development

🧠 Fine-Tuning Capabilities

Feature                       | Description/Support
LoRA Layer-wise Configuration | ✅ Supported (including Conv2d)
Layer-wise Fine-Tuning        | ✅ Supported
Multi-token Prompt-Tuning     | ✅ Supported
Layer-wise Model Merging      | ✅ Supported
Custom Optimizers             | ✅ Supported (Lion, DAdaptation, pytorch-optimizer, etc.)
Custom LR Schedulers          | ✅ Supported

🧩 Extension Method Support

Method                            | Status
ControlNet (including training)   | ✅ Supported
DreamArtist / DreamArtist++       | ✅ Supported
Token Attention Adjustment        | ✅ Supported
Max Sentence Length Extension     | ✅ Supported
Textual Inversion (Custom Tokens) | ✅ Supported
CLIP Skip                         | ✅ Supported

🚀 Training Acceleration

Tool/Library  | Status
🤗 Accelerate | ✅ Supported
Colossal-AI   | ✅ Supported
xFormers      | ✅ Supported (UNet and text encoder)

🗂 Dataset Support

Feature                      | Description
Aspect Ratio Bucket (ARB)    | ✅ Auto-clustering supported
Multi-source / Multi-dataset | ✅ Supported
LMDB                         | ✅ Supported
webdataset                   | ✅ Supported
Local Attention Enhancement  | ✅ Supported
Tag Shuffling & Dropout      | ✅ Multiple tag editing strategies

📉 Supported Loss Functions

Loss Type | Status
Min-SNR   | ✅ Supported
SSIM      | ✅ Supported
GWLoss    | ✅ Supported

🌫 Supported Diffusion Strategies

Strategy Type | Status
DDPM          | ✅ Supported
EDM           | ✅ Supported
Flow Matching | ✅ Supported

🧠 Automatic Evaluation (Step Selection Assistant)

Feature       | Description/Status
Image Preview | ✅ Supported (workflow preview)
FID           | 🚧 In Development
CLIP Score    | 🚧 In Development
CCIP Score    | 🚧 In Development
Corrupt Score | 🚧 In Development

⚡️ Image Generation

Feature                      | Description/Support
Batch Generation             | ✅ Supported
Generate from Prompt Dataset | ✅ Supported
Image to Image               | ✅ Supported
Inpaint                      | ✅ Supported
Token Weight                 | ✅ Supported

Getting Started

Training

HCP-Diffusion provides training scripts based on 🤗 Accelerate.

# Multi-GPU training, configure GPUs in cfgs/launcher/multi.yaml
hcp_train --cfg cfgs/train/py/your_config.py

# Single-GPU training, configure GPU in cfgs/launcher/single.yaml
hcp_train_1gpu --cfg cfgs/train/py/your_config.py

You can also override config items via command line:

# Override base model path
hcp_train --cfg cfgs/train/py/your_config.py model.wrapper.models.ckpt_path=pretrained_model_path
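
The dot-path in an override mirrors the nesting of the config file. Below is a minimal, hypothetical sketch of the structure implied by the override above; the real configs in cfgs/train/py/ contain many more fields:

# Hypothetical config structure; only the ckpt_path key path is taken
# from the command-line example above.
dict(
    model=dict(
        wrapper=dict(
            models=dict(
                # Reached from the CLI as model.wrapper.models.ckpt_path=...
                ckpt_path='pretrained_model_path',
            )
        )
    )
)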

Image Generation

Use the workflow defined in the Python config to generate images:

hcp_run --cfg cfgs/workflow/text2img.py

Or override parameters via command line:

hcp_run --cfg cfgs/workflow/text2img_cli.py \
    pretrained_model=pretrained_model_path \
    prompt='positive_prompt' \
    negative_prompt='negative_prompt' \
    seed=42

📚 Tutorials


Contributing

We welcome contributions to support more models and features.


Team

Maintained by HCP-Lab at Sun Yat-sen University.


Citation

@article{DBLP:journals/corr/abs-2211-11337,
  author    = {Ziyi Dong and
               Pengxu Wei and
               Liang Lin},
  title     = {DreamArtist: Towards Controllable One-Shot Text-to-Image Generation
               via Positive-Negative Prompt-Tuning},
  journal   = {CoRR},
  volume    = {abs/2211.11337},
  year      = {2022},
  doi       = {10.48550/arXiv.2211.11337},
  eprinttype = {arXiv},
  eprint    = {2211.11337},
}
