[RocksDB] Preset & RAM‑backed Disk Virtualization #681

@michaelsutton

Description

Background

A Discord community contributor, Callidon of Kaspa, demonstrated that archival nodes can keep pace with 10 bps / full DAG (~3,000 tps) even on HDDs by combining:

  1. Tuned RocksDB options that sharply reduce write‑amplification.
  2. External RAM write‑buffer “virtualization” (e.g. tmpfs/FUSE overlay) that absorbs intermediate writes in RAM and flushes only finalized state to disk.

Integrating these ideas will make archival operation more approachable for power users and researchers.


Scope

1 · Expose Callidon's RocksDB Preset

Goal: Provide an opt-in CLI flag so operators can benchmark Callidon's settings.

Tasks:
- Import the configuration into the codebase.
- Add a `--rocksdb-preset <default|archive>` flag.
- Apply the chosen preset when initializing RocksDB; allow further per-flag overrides.
- Document the trade-offs (higher RAM usage, higher compaction latency) in docs/archival.md.
- Supply a one-line Docker example for archival operators.
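The preset plumbing could be sketched roughly as below. This is only an illustration: the enum, the `DbTuning` field set, and all numeric values are placeholder assumptions, not Callidon's actual configuration; in the node these values would be translated into `rocksdb::Options` setter calls at DB open time.

```rust
use std::str::FromStr;

/// Which tuning profile `--rocksdb-preset` selects (names assumed).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RocksDbPreset {
    Default,
    Archive,
}

impl FromStr for RocksDbPreset {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "default" => Ok(Self::Default),
            "archive" => Ok(Self::Archive),
            other => Err(format!("unknown --rocksdb-preset value: {other}")),
        }
    }
}

/// Plain bundle of option values; applied to the real RocksDB
/// `Options` struct when the database is initialized.
pub struct DbTuning {
    pub write_buffer_size: usize,     // bytes per memtable
    pub max_write_buffer_number: i32, // memtables kept in RAM
    pub dynamic_level_bytes: bool,    // level-compaction sizing mode
}

impl RocksDbPreset {
    pub fn tuning(self) -> DbTuning {
        match self {
            // Stock profile: modest RAM footprint.
            Self::Default => DbTuning {
                write_buffer_size: 64 << 20,
                max_write_buffer_number: 2,
                dynamic_level_bytes: false,
            },
            // Archive profile: larger in-RAM buffers trade memory for
            // lower write-amplification (placeholder numbers).
            Self::Archive => DbTuning {
                write_buffer_size: 512 << 20,
                max_write_buffer_number: 6,
                dynamic_level_bytes: true,
            },
        }
    }
}

fn main() {
    let preset: RocksDbPreset = "archive".parse().expect("valid preset");
    let t = preset.tuning();
    println!("write_buffer_size = {} MiB", t.write_buffer_size >> 20);
}
```

Keeping the preset as plain data (rather than scattered setter calls) makes the later "per-flag overrides" task straightforward: parse the preset first, then let individual CLI flags overwrite single fields.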

2 · Document & Prototype Disk Virtualization

Goal: Make Callidon's RAM-backed write-cache approach reproducible and explore minimal automation.

Tasks:
- Write docs/disk_virtualization.md with a step-by-step guide (tmpfs/FUSE/ZFS-cache variants).
- Survey self-contained Rust solutions for an in-process write-back layer.
- Prototype a `--rocksdb-ram-cache <size>` flag that mounts a tmpfs/FUSE layer beneath the DB path at launch (Linux only for v0).
- Expose runtime metrics: cached bytes, flush bytes, eviction count.
- Benchmark the impact on prune latency, write IOPS, and HDD wear.
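To make the in-process write-back idea concrete, here is a toy sketch: writes are absorbed in RAM and only reach disk on an explicit flush, with counters mirroring the metrics listed above. Every name here (`RamWriteCache`, the eviction policy, the capacity cap) is an assumption for illustration, not a proposed implementation or a tmpfs replacement.

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::PathBuf;

/// Toy write-back layer: buffers key/value writes in RAM and touches
/// disk only on flush. Counters mirror the requested runtime metrics.
pub struct RamWriteCache {
    dir: PathBuf,
    pending: HashMap<String, Vec<u8>>,
    pub cached_bytes: u64,
    pub flushed_bytes: u64,
    pub evictions: u64,
    capacity: u64, // soft cap on cached_bytes before spilling to disk
}

impl RamWriteCache {
    pub fn new(dir: PathBuf, capacity: u64) -> Self {
        Self {
            dir,
            pending: HashMap::new(),
            cached_bytes: 0,
            flushed_bytes: 0,
            evictions: 0,
            capacity,
        }
    }

    /// Buffer a write entirely in RAM; spill everything if over capacity.
    pub fn put(&mut self, key: &str, value: Vec<u8>) -> io::Result<()> {
        self.cached_bytes += value.len() as u64;
        if let Some(old) = self.pending.insert(key.to_string(), value) {
            self.cached_bytes -= old.len() as u64; // overwrote an entry
        }
        if self.cached_bytes > self.capacity {
            self.flush()?; // naive eviction: flush the whole buffer
            self.evictions += 1;
        }
        Ok(())
    }

    /// Write all pending entries to their backing files on disk.
    pub fn flush(&mut self) -> io::Result<()> {
        for (key, value) in self.pending.drain() {
            fs::write(self.dir.join(&key), &value)?;
            self.flushed_bytes += value.len() as u64;
        }
        self.cached_bytes = 0;
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("ram_cache_demo");
    fs::create_dir_all(&dir)?;
    let _ = fs::remove_file(dir.join("block_0001")); // clean prior runs
    let mut cache = RamWriteCache::new(dir.clone(), 1 << 20);
    cache.put("block_0001", b"header bytes".to_vec())?;
    assert!(!dir.join("block_0001").exists()); // still RAM-only
    cache.flush()?;
    assert!(dir.join("block_0001").exists()); // now durable
    println!("flushed {} bytes", cache.flushed_bytes);
    Ok(())
}
```

A real prototype would sit beneath RocksDB's file API rather than above it, but even this toy version shows where the three metrics naturally live and why a capacity-triggered flush doubles as the eviction event.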
