A high-performance, highly concurrent, distributed Snowflake ID generator in Rust.
This implementation is lock-free, designed for maximum throughput and minimum latency on multi-core CPUs.
- Lock-Free Concurrency: Uses `AtomicU64` and CAS (Compare-And-Swap) operations to manage internal state, completely eliminating the overhead of `Mutex` contention and context switching.
- High Performance: The lock-free design makes ID generation extremely fast, performing exceptionally well under high concurrency.
- Highly Customizable: The `Builder` pattern allows you to flexibly configure:
  - `start_time`: The epoch timestamp, which shortens the time component of generated IDs.
  - `machine_id` and `data_center_id`: Identifiers for your machines and data centers.
  - The bit length of each component (`time`, `sequence`, `machine_id`, `data_center_id`).
- Smart IP Fallback: With the `ip-fallback` feature enabled, if `machine_id` or `data_center_id` is not provided, the system automatically derives it from the machine's local IP address.
  - Supports both IPv4 and IPv6: It prioritizes private IPv4 addresses and falls back to private IPv6 addresses if none are found.
  - Conflict-Free: To ensure uniqueness, `machine_id` and `data_center_id` are derived from distinct parts of the IP address:
    - IPv4: `data_center_id` from the 3rd octet, `machine_id` from the 4th octet.
    - IPv6: `data_center_id` from the 7th segment, `machine_id` from the 8th (last) segment.
- Thread-Safe: `Snowflake` instances can be safely cloned and shared across threads. Cloning is a lightweight operation (just an `Arc` reference-count increment).
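The IPv4 derivation scheme described above is simple byte extraction. The sketch below is purely illustrative (`ids_from_ipv4` is not part of the crate's public API) and shows how the 3rd and 4th octets map onto the two identifiers:

```rust
use std::net::Ipv4Addr;

// Illustrative helper (not the crate's API): derive the two identifiers
// from distinct octets of a private IPv4 address, as the ip-fallback
// scheme describes.
fn ids_from_ipv4(ip: Ipv4Addr) -> (u8, u8) {
    let octets = ip.octets();
    let data_center_id = octets[2]; // 3rd octet
    let machine_id = octets[3]; // 4th octet
    (data_center_id, machine_id)
}

fn main() {
    let (dc, m) = ids_from_ipv4(Ipv4Addr::new(10, 0, 3, 17));
    println!("data_center_id = {dc}, machine_id = {m}"); // 3 and 17
}
```

Because the two IDs come from different octets, two hosts on the same subnet that differ only in the last octet still get distinct `machine_id` values.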
The generated ID is a 64-bit unsigned integer (`u64`) with the following default structure:

```text
+--------------------------+-------------------------+-------------------------+---------------------+--------------------+
| 1 Bit (Unused, Sign Bit) | 41 Bits (Timestamp, ms) | 5 Bits (Data Center ID) | 5 Bits (Machine ID) | 12 Bits (Sequence) |
+--------------------------+-------------------------+-------------------------+---------------------+--------------------+
```
- Sign Bit (1 bit): Always 0 to ensure the ID is positive.
- Timestamp (41 bits): Milliseconds elapsed since your configured `start_time`. 41 bits can represent about 69 years.
- Data Center ID (5 bits): Allows for up to 32 data centers.
- Machine ID (5 bits): Allows for up to 32 machines per data center.
- Sequence (12 bits): The number of IDs that can be generated per millisecond on a single machine. 12 bits allow for 4096 IDs per millisecond.

Note: The bit lengths of all components are customizable via the Builder, but their sum must be 63.
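The capacity figures above follow directly from the bit widths; a short sketch of the arithmetic:

```rust
fn main() {
    // Timestamp: 2^41 milliseconds, converted to years.
    let timestamp_ms = 1u64 << 41;
    let years = timestamp_ms as f64 / (1000.0 * 60.0 * 60.0 * 24.0 * 365.25);
    println!("41-bit timestamp covers ~{years:.1} years"); // ~69.7 years

    // 5-bit fields: 2^5 distinct values each.
    println!("data centers: {}", 1u64 << 5); // 32
    println!("machines per data center: {}", 1u64 << 5); // 32

    // 12-bit sequence: 2^12 IDs per millisecond per machine.
    println!("IDs per millisecond: {}", 1u64 << 12); // 4096
}
```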
Add this library to your `Cargo.toml`:

Important Notice

The dependency name has been changed from `snowflake_me` to `snowflake-me` in version 0.5.0. Please use the new name in your `Cargo.toml`: `snowflake-me = "0.5.0"`. No Rust code changes are required — the crate is still imported as `snowflake_me` — so only the dependency name in `Cargo.toml` needs updating.
To enable automatic IP-based IDs, add the `ip-fallback` feature:

```toml
[dependencies]
snowflake-me = { version = "0.5.0", features = ["ip-fallback"] }
```
```rust
use snowflake_me::Snowflake;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a generator with the default configuration.
    // Note: This requires the `ip-fallback` feature to auto-detect
    // machine and data center IDs.
    let sf = Snowflake::new()?;

    // Generate a unique ID.
    let id = sf.next_id()?;
    println!("Generated Snowflake ID: {}", id);
    Ok(())
}
```

`Snowflake` instances can be efficiently cloned and shared between threads.
```rust
use snowflake_me::Snowflake;
use std::collections::HashSet;
use std::sync::Arc;
use std::thread;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Manually configure machine_id and data_center_id using the Builder.
    // This is the recommended approach for production environments.
    let sf = Snowflake::builder()
        .machine_id(&|| Ok(10))
        .data_center_id(&|| Ok(5))
        .finalize()?;
    let sf_arc = Arc::new(sf);

    let mut handles = vec![];
    for _ in 0..10 {
        let sf_clone = Arc::clone(&sf_arc);
        let handle = thread::spawn(move || {
            let mut ids = Vec::new();
            for _ in 0..10000 {
                ids.push(sf_clone.next_id().unwrap());
            }
            ids
        });
        handles.push(handle);
    }

    let mut all_ids = HashSet::new();
    for handle in handles {
        for id in handle.join().unwrap() {
            // Verify that all IDs are unique.
            assert!(all_ids.insert(id), "Found duplicate ID: {}", id);
        }
    }
    println!("Successfully generated {} unique IDs across 10 threads.", all_ids.len());
    Ok(())
}
```

You can decompose a Snowflake ID back into its components for debugging or analysis.
```rust
use snowflake_me::{Snowflake, DecomposedSnowflake};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Ensure you use the same bit-length configuration as when the ID
    // was generated.
    let bit_len_time = 41;
    let bit_len_sequence = 12;
    let bit_len_data_center_id = 5;
    let bit_len_machine_id = 5;

    let sf = Snowflake::builder()
        .bit_len_time(bit_len_time)
        .bit_len_sequence(bit_len_sequence)
        .bit_len_data_center_id(bit_len_data_center_id)
        .bit_len_machine_id(bit_len_machine_id)
        .machine_id(&|| Ok(15))
        .data_center_id(&|| Ok(7))
        .finalize()?;

    let id = sf.next_id()?;
    let decomposed = DecomposedSnowflake::decompose(
        id,
        bit_len_time,
        bit_len_sequence,
        bit_len_data_center_id,
        bit_len_machine_id,
    );

    println!("ID: {}", decomposed.id);
    println!("Time: {}", decomposed.time);
    println!("Data Center ID: {}", decomposed.data_center_id);
    println!("Machine ID: {}", decomposed.machine_id);
    println!("Sequence: {}", decomposed.sequence);

    assert_eq!(decomposed.machine_id, 15);
    assert_eq!(decomposed.data_center_id, 7);
    Ok(())
}
```

Issues and Pull Requests are welcome.
This project is dual-licensed under the MIT and Apache 2.0 licenses.