Hybrid Sigma Scheduler for Stable Diffusion
A blend of Karras & Exponential scheduling with adaptive, randomized control.
SimpleKEScheduler is a custom hybrid scheduler designed to replace or augment the default schedulers used in Stable Diffusion pipelines. It combines Karras-style sigma sampling with exponential decay, allowing for dynamic control over noise, sharpness, and step transitions.
This scheduler is highly configurable, supports structured randomization, and was built for developers and users who want fine-tuned control or experimental behavior beyond standard sampling techniques.
Note: Version 1.2 is unstable and should only be used for testing purposes until I can stabilize it.
- Version 1.2 adds a new prepass system that lets the scheduler automatically adjust the number of steps based on how quickly the image starts to stabilize. It might take a request for 50 steps and use only 35 because the generated sigma schedules suggest faster convergence. In the second pass, the step count from the prepass is fed into the primary scheduler, which generates a new sigma sequence from the reduced step count. (A sketch of this flow appears after these notes.)
- Purpose of version 1.2: make early_stopping_threshold actually work. In the previous version it existed but did nothing; this version was created specifically to fix that. It's close: before, it was buggy and stopped too soon; now it indicates (via logs) that it could stop but continues processing.
- For v1.2, a new folder named simple_kes_v2 has been added to both the A1111 and Forge folders. The code inside is meant to replace version 1: rename the folder to drop the version suffix, then place it in the proper "modules" folder of each program.
- Early stopping methods added: mean, max, and sum. In my tests, all three methods behaved the same in that I did not notice a visual change, but the logs gave different reasons for stopping. Examples: "Early stopping triggered by mean method. Mean change: 2.1957881450653076", "Early stopping triggered by max method. Max change: 11.830986022949219", "Early stopping triggered by sum method. Stable steps: 25 / 25"
- Early stopping will now occur when sigmas start to converge
- An option to turn early stopping off has been added so you can run in version 1 mode. It isn't exactly the same as version 1, but it skips the prepass function that uses the early stopping methods to reduce the step count.
- New config options affecting early stopping have been added.
- More detailed logging is available when enabled.
- A graph can be generated, when enabled, showing where the steps converge.
- I have tested and updated the default_config values for sigma_scale_factor, sigma_auto_mode (use sigma_min), rho, sigma_min, sigma_max, start_blend, end_blend, sharpness, and early_stopping_threshold, along with their recommended defaults and rand_min / rand_max values. As long as you stay within those defaults you should be fine. Below rand_min you will likely see no visible change; above rand_max you will probably see negative effects that degrade image quality. For example, increasing sharpness to 15 produces very big changes, almost like a different seed, but at 50 the image becomes warped or fuzzy. Sharpness isn't even applied above 0.95, so raising it past that goes beyond its intended purpose; below 0.75 it was not noticeable in my tests.
- Step progress modes have been added. Linear (the default) is what the previous version used; you can also try exponential, logarithmic, or sigmoid. Each has its own formula and affects how noise is scheduled and how quickly it is removed. The four modes are described below.
- Early stopping is experimental and performs well as long as hires fix is not involved.
**Linear (default).** Smooth, predictable pacing where each step changes by the same amount.

```python
progress_value = self.progress[i]
```

**Exponential.** Stronger texture retention early, fast cleanup later. A higher exp_power gives an even steeper curve.

```python
progress_value = self.progress[i] ** self.settings.get("exp_power", 2)
```

**Logarithmic.** Faster image structure formation early, more careful refinement later.

```python
progress_value = torch.log1p(self.progress[i] * (torch.exp(torch.tensor(1.0)) - 1))
```

**Sigmoid.** Natural-feeling pacing that mimics many physical processes. Great for creating balanced transitions.

```python
progress_value = 1 / (1 + torch.exp(-12 * (self.progress[i] - 0.5)))
```
- The progress curve affects how aggressively or softly your step size changes over time.
- This influences how noise is removed and how details are formed.
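To see how the modes differ numerically, here is a minimal, runnable sketch of the four curves on a normalized progress vector (a simplified stand-in for self.progress; the default steps and exp_power values are illustrative):

```python
import math
import torch

def progress_curve(mode: str, steps: int = 30, exp_power: float = 2.0) -> torch.Tensor:
    """Illustrative versions of the four step-progress modes above."""
    p = torch.linspace(0.0, 1.0, steps)  # normalized progress in [0, 1]
    if mode == "linear":
        return p
    if mode == "exponential":
        return p ** exp_power  # higher exp_power -> steeper curve
    if mode == "logarithmic":
        return torch.log1p(p * (math.e - 1))  # maps [0, 1] onto [0, 1]
    if mode == "sigmoid":
        return 1 / (1 + torch.exp(-12 * (p - 0.5)))
    raise ValueError(f"unknown mode: {mode}")

for mode in ("linear", "exponential", "logarithmic", "sigmoid"):
    print(mode, progress_curve(mode)[:3])
```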
- Hires fix does not work if early stopping triggers. Since early stopping is currently bugged and never actually triggers, this issue shouldn't be a problem (LOL).
- I have not tested the best values for initial_step_size, final_step_size, step_size_factor, initial_noise_scale, final_noise_scale, smooth_blend_factor, noise_scale_factor, or the new early_stopping settings.
I think it's a better scheduler than before, despite its issue of not exiting soon enough. I'm still working on it every day and hope to have a fix in by the end of the week.
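For readers who want the gist of the prepass, here is a hypothetical sketch of the convergence check, reconstructed from the notes and log messages above rather than from the actual simple_kes_v2 code; the trailing `window` parameter is my own assumption, not a documented setting:

```python
import torch

def prepass_step_count(sigmas: torch.Tensor, threshold: float,
                       method: str = "mean", window: int = 5) -> int:
    """Hypothetical sketch of the v1.2 prepass convergence check."""
    deltas = (sigmas[:-1] - sigmas[1:]).abs()  # per-step change in sigma
    for i in range(window, len(deltas) + 1):
        recent = deltas[i - window:i]  # trailing window of recent changes
        if method == "mean":
            converged = recent.mean().item() < threshold
        elif method == "max":
            converged = recent.max().item() < threshold
        elif method == "sum":
            converged = recent.sum().item() < threshold
        else:
            raise ValueError(f"unknown method: {method}")
        if converged:
            return i  # second pass regenerates sigmas with this reduced count
    return len(sigmas)  # no convergence detected; keep the requested steps

# e.g. reduced = prepass_step_count(sigmas, threshold=0.01, method="mean")
```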
- Automatic1111 (A1111): to install, click 'A1111 - simple_kes' and follow the installation instructions.
- Forge WebUI: to install, click 'Forge' and follow the installation instructions.
- Hybrid Scheduling: Smooth blend between Karras and exponential sigma schedules.
- Parameter Blending: `start_blend` and `end_blend` dynamically interpolate during sampling.
- Structured Randomization: Enable full or partial parameter randomization for variability.
- Sharpening Controls: Apply adaptive sharpening masks for low-sigma values.
- Early Stopping: Optional convergence threshold for faster inference.
- Modular Design: Easy integration into pipelines, CLI tools, or training scripts.
- Inspired by A1111 & Diffusers: Seamlessly replaces scheduler logic while remaining transparent to end users.
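To make the hybrid idea concrete, here is a minimal, illustrative sketch of blending a Karras schedule with an exponential one; the function and the exact blending details are assumptions for illustration, not the actual code in simple_kes.py:

```python
import math
import torch

def blended_sigmas(steps=30, sigma_min=0.01, sigma_max=1.0, rho=7.0,
                   start_blend=0.1, end_blend=0.5):
    """Illustrative Karras/exponential blend (not the exact simple_kes code)."""
    t = torch.linspace(0.0, 1.0, steps)

    # Karras schedule: interpolate in sigma^(1/rho) space (Karras et al., 2022).
    inv_rho = 1.0 / rho
    karras = (sigma_max ** inv_rho
              + t * (sigma_min ** inv_rho - sigma_max ** inv_rho)) ** rho

    # Exponential schedule: interpolate in log-sigma space.
    exponential = torch.exp(
        torch.linspace(math.log(sigma_max), math.log(sigma_min), steps))

    # Blend weight ramps from start_blend at step 0 to end_blend at the end.
    blend = start_blend + t * (end_blend - start_blend)
    return (1.0 - blend) * karras + blend * exponential

sigmas = blended_sigmas()
```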
Clone this repo:

```bash
git clone https://github.com/Kittensx/Simple_KES
cd SimpleKEScheduler
```

Ensure your environment has the required packages:

```bash
pip install torch pyyaml
```
```python
from simple_kes import SimpleKEScheduler

# Load from a YAML config file
scheduler = SimpleKEScheduler(
    steps=30,
    device='cuda',
    config='kes_config/default_config.yaml'  # or a custom YAML path
)

# Or use a Python dict:
config_dict = {
    "sigma_min": 0.01,
    "sigma_max": 1.0,
    "start_blend": 0.1,
    "end_blend": 0.5,
    "sharpness": 0.95,
    "early_stopping_threshold": 0.01,
    "initial_step_size": 0.9,
    "final_step_size": 0.2,
    "initial_noise_scale": 1.25,
    "final_noise_scale": 0.8,
    "smooth_blend_factor": 8,
    "step_size_factor": 0.75,
    "noise_scale_factor": 0.85,
    "rho": 7.0,
    "global_randomize": False
}
scheduler = SimpleKEScheduler(steps=30, device='cuda', config=config_dict)

# Generate sigmas
sigmas = scheduler.compute_sigmas(steps=30, device='cuda')  # returns torch.FloatTensor
```
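The v1.2 notes mention a built-in graph option; for a quick manual look at the schedule, a simple plot also works (assuming matplotlib is installed):

```python
import matplotlib.pyplot as plt

sigmas = scheduler.compute_sigmas(steps=30, device='cuda').cpu()
plt.plot(sigmas.numpy(), marker='o')  # sigma value at each step
plt.xlabel('step')
plt.ylabel('sigma')
plt.title('SimpleKES sigma schedule')
plt.show()
```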
```
SimpleKEScheduler/
├── simple_kes.py              # Main scheduler logic
├── get_sigmas.py              # Karras & exponential helpers
├── __init__.py                # Plugin hooks & config loader
├── kes_config/
│   └── default_config.yaml    # Default configuration file
└── proof it works.png         # Screenshot test proof
```
Use the included `default_config.yaml` or create your own. All parameters support optional `_rand`, `_rand_min`, and `_rand_max` settings to enable structured randomness.
| Name | Type | Description |
|---|---|---|
| `sigma_min` / `sigma_max` | float | Range of sigmas |
| `start_blend` / `end_blend` | float | Controls blending between methods |
| `sharpness` | float | Adaptive sharpening on low sigmas |
| `initial_step_size` / `final_step_size` | float | Controls progression pacing |
| `initial_noise_scale` / `final_noise_scale` | float | Noise scaling from start to end |
| `global_randomize` | bool | Force randomization on all supported params |
For the full schema, see `__init__.py:get_settings()`.
- More flexible than static Karras or exponential schedulers
- Excellent for experiments, LoRA/finetuning, or style variation
- Compatible with CLI tools or training scripts
- Designed to "just work" with proper config input
- Structured randomization lets you simulate creative variability while preserving safety
Enable `global_randomize: true` in YAML or pass it in your config dict. You can also toggle flags like `sigma_min_rand` individually and specify ranges with `sigma_min_rand_min` / `sigma_min_rand_max`.
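For example, a config dict that randomizes only `sigma_min` might look like this (the bounds are illustrative, not recommended values):

```python
# Illustrative: randomize only sigma_min, leaving everything else fixed.
config_dict = {
    "global_randomize": False,    # don't randomize every supported param
    "sigma_min_rand": True,       # opt in for sigma_min only
    "sigma_min_rand_min": 0.005,  # illustrative bounds; tune to taste
    "sigma_min_rand_max": 0.02,
}
scheduler = SimpleKEScheduler(steps=30, device='cuda', config=config_dict)
```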
MIT License
Built with inspiration from the A1111 and Diffusers scheduler implementations.
Feel free to open an issue or contribute improvements to the config system, plugin support, or integration with samplers!