AIStore: High-Performance, Scalable Storage for AI Workloads
AIStore (AIS) is a lightweight distributed storage stack tailored for AI applications. It's an elastic cluster that can grow and shrink at runtime and can be ad-hoc deployed, with or without Kubernetes, anywhere from a single Linux machine to a bare-metal cluster of any size. Built from scratch, AIS provides linear scale-out, consistent performance, and a flexible deployment model.
AIS consistently shows balanced I/O distribution and linear scalability across an arbitrary number of clustered nodes. The system supports fast data access, reliability, and rich customization for data transformation workloads.
- ✅ Multi-Cloud Access: Seamlessly access and manage content across multiple cloud backends (including AWS S3, GCS, Azure, and OCI), with the additional benefit of fast-tier performance and configurable data redundancy.
- ✅ Deploy Anywhere: AIS runs on any Linux machine, virtual or physical. Deployment options range from a single Docker container and Google Colab to petascale Kubernetes clusters. There are no built-in limitations on deployment size or functionality.
- ✅ High Availability: Redundant control and data planes. Self-healing, end-to-end protection, n-way mirroring, and erasure coding. Arbitrary number of lightweight access points.
- ✅ HTTP-based API: A feature-rich, native API (with user-friendly SDKs for Go and Python) and a compliant Amazon S3 API for running unmodified S3 clients; see the Python sketch after this list.
- ✅ Monitoring: Comprehensive observability with integrated Prometheus metrics, Grafana dashboards, detailed logs with configurable verbosity, and CLI-based performance tracking for complete cluster visibility and troubleshooting. See AIStore Observability for details.
- ✅ Chunked Objects: High-performance chunked object representation, with independently retrievable chunks, metadata v2, and checksum-protected manifests. Supports rechunking, parallel reads, and seamless integration with Get-Batch, blob-downloader, and multipart uploads to supported cloud backends.
- ✅ Secure Redirects (cluster-key): Configurable cryptographic signing of redirect URLs using HMAC-SHA256 with a versioned cluster key.
- ✅ Load-Aware Throttling: Dynamic request throttling based on a five-dimensional load vector (CPU, memory, disk, FDs, goroutines) to protect AIS clusters under stress.
- ✅ Unified Namespace: Attach AIS clusters together to provide fast, unified access to the entirety of hosted datasets, allowing users to reference shared buckets with cluster-specific identifiers.
- ✅ Turn-key Cache: In addition to robust data protection features, AIS offers a per-bucket configurable LRU-based cache with eviction thresholds and storage capacity watermarks.
- ✅ ETL Offload: Execute I/O intensive data transformations close to the data, either inline (on-the-fly as part of each read request) or offline (batch processing, with the destination bucket populated with transformed results).
- ✅ Get-Batch: Retrieve multiple objects and/or archived files with a single call. Designed for ML/AI pipelines, Get-Batch fetches an entire training batch in one operation, assembling a TAR (or other supported serialization format) that contains all requested items in the exact user-specified order.
- ✅ Data Consistency: Guaranteed consistency across all gateways, with write-through semantics in the presence of remote backends.
- ✅ Serialization & Sharding: Native support for TAR, TGZ, TAR.LZ4, and ZIP archives for efficient storage and processing of small-file datasets. Features include seamless integration with existing unmodified workflows across all APIs and subsystems.
- ✅ Kubernetes: For production, AIS runs natively on Kubernetes. The dedicated ais-k8s repository includes the AIS/K8s Operator, Ansible playbooks, Helm charts, and deployment guidance.
- ✅ Batch Jobs: More than 30 cluster-wide batch operations that you can start, monitor, and control; the current list includes:
```
$ ais show job --help
NAME:
   archive          blob-download    cleanup          copy-bucket      copy-objects      delete-objects
   download         dsort            ec-bucket        ec-get           ec-put            ec-resp
   elect-primary    etl-bucket       etl-inline       etl-objects      evict-objects     evict-remote-bucket
   get-batch        list             lru-eviction     mirror           prefetch-objects  promote-files
   put-copies       rebalance        rechunk          rename-bucket    resilver          summary
   warm-up-metadata
```

The feature set continues to grow and also includes: blob-downloader; adding/removing nodes at runtime; runtime management of TLS certificates; listing, copying, prefetching, and transforming virtual directories; executing presigned S3 requests; adaptive rate limiting; and more.
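To make the native API concrete, here is a minimal Python SDK sketch. It assumes `pip install aistore` and an AIS gateway listening at `http://localhost:8080`; the bucket and object names are hypothetical:

```python
# Minimal sketch, assuming a local AIS gateway at http://localhost:8080;
# bucket and object names below are hypothetical.
from aistore.sdk import Client

client = Client("http://localhost:8080")

# Create a bucket (no-op if it already exists) and write a small object.
bucket = client.bucket("demo")
bucket.create(exist_ok=True)
bucket.object("hello.txt").put_content(b"hello, AIStore")

# Read it back and list the bucket's contents.
print(bucket.object("hello.txt").get().read_all())
for entry in bucket.list_all_objects():
    print(entry.name)
```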
For the original white paper and design philosophy, please see AIStore Overview, which also includes a high-level block diagram, terminology, APIs, CLI, and more. For our 2024 KubeCon presentation, please see AIStore: Enhancing petascale Deep Learning across Cloud backends.
AIS includes an integrated, scriptable CLI for managing clusters, buckets, and objects, running and monitoring batch jobs, viewing and downloading logs, generating performance reports, and more:
$ ais <TAB-TAB>
advanced config get object scrub tls
alias cp help performance search wait
archive create job prefetch show
auth download log put space-cleanup
blob-download dsort ls remote-cluster start
bucket etl ml rmb stop
cluster evict mpu rmo storageAIS runs natively on Kubernetes and features open format - thus, the freedom to copy or move your data from AIS at any time using the familiar Linux tar(1), scp(1), rsync(1) and similar.
For developers and data scientists, there's also:
- Go API used in CLI and benchmarking tools
- Python SDK + Reference Guide
- PyTorch integration and usage examples
- Boto3 support
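For Boto3 specifically, unmodified S3 clients can point at the cluster's S3-compliant endpoint. A hedged sketch, assuming a local gateway at `http://localhost:8080`, the `/s3` path, and the `aistore[botocore]` extra installed (bucket and key names are made up):

```python
# Sketch: boto3 against AIS's S3-compliant endpoint.
# Assumes `pip install aistore[botocore]` and a gateway at localhost:8080.
from aistore.botocore_patch import botocore  # noqa: F401  # patches botocore to follow AIS redirects
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8080/s3")

s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello via S3 API")
resp = s3.get_object(Bucket="demo", Key="hello.txt")
print(resp["Body"].read())
```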
- Read the Getting Started Guide for a 5-minute local install, or
- Run a minimal AIS cluster consisting of a single gateway and a single storage node, or
- Clone the repo and run `make kill cli aisloader deploy`, followed by `ais show cluster`
AIS deployment options, as well as intended (development vs. production vs. first-time) usages, are all summarized here.
Since the prerequisites essentially boil down to having Linux with a disk, the deployment options range from an all-in-one container to a petascale bare-metal cluster of any size, and from a single VM to multiple racks of high-end servers. Practical use cases require, of course, further consideration.
Some of the most popular deployment options include:
| Option | Use Case |
|---|---|
| Local playground | AIS developers and first-time users, Linux or macOS. Run `make kill cli aisloader deploy <<< $'N\nM'`, where N is the number of targets and M the number of gateways |
| Minimal production-ready deployment | Uses a prebuilt Docker image; targets first-time users and researchers who want to start training models on smaller datasets right away |
| Docker container | Quick testing and evaluation; single-node setup |
| GCP/GKE automated install | Developers, first-time users, AI researchers |
| Large-scale production deployment | Requires Kubernetes; provided via ais-k8s |
For performance tuning, see performance and AIS K8s Playbooks.
AIS supports multiple ingestion modes:
- ✅ On Demand: Transparent cloud access during workloads.
- ✅ PUT: Locally accessible files and directories.
- ✅ Promote: Import local target directories and/or NFS/SMB shares mounted on AIS targets.
- ✅ Copy: Full buckets, virtual subdirectories (recursively or non-recursively), lists or ranges (via Bash expansion).
- ✅ Download: HTTP(S)-accessible datasets and objects.
- ✅ Prefetch: Remote buckets or selected objects (from remote buckets), including subdirectories, lists, and/or ranges.
- ✅ Archive: Group and store related small files from an original dataset.
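To make a couple of these modes concrete, here is a hedged Python SDK sketch covering PUT (a locally accessible file) and on-demand access to a remote bucket. It assumes a gateway at `http://localhost:8080` and appropriately configured cloud credentials; all names are hypothetical:

```python
# Sketch of two ingestion modes via the Python SDK; the endpoint, bucket,
# and object names are hypothetical.
from aistore.sdk import Client

client = Client("http://localhost:8080")

# PUT: upload a locally accessible file into an AIS bucket.
bucket = client.bucket("my-data")
bucket.create(exist_ok=True)
bucket.object("samples/0001.jpg").put_file("/tmp/0001.jpg")

# On Demand: the first GET of an object in a remote (here, s3://) bucket
# transparently pulls it through AIS, caching it for subsequent reads.
remote = client.bucket("my-s3-bucket", provider="aws")
data = remote.object("train/shard-000.tar").get().read_all()
```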
You can install the CLI and benchmarking tools using:
```
./scripts/install_from_binaries.sh --help
```

The script installs aisloader and the CLI from the latest (or previous) GitHub release and enables CLI auto-completions.
PyTorch integration is a growing set of datasets (both iterable and map-style), samplers, and dataloaders:
- Taxonomy of abstractions and API reference
- AIS plugin for PyTorch: usage examples
- Jupyter notebook examples
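The same idea also fits in a few lines of plain PyTorch on top of the SDK. Below is a minimal, hand-rolled map-style sketch (endpoint and bucket name assumed; the plugin's own datasets and samplers are the documented route):

```python
# Hand-rolled map-style dataset over an AIS bucket; a simplified
# stand-in for the plugin's ready-made dataset classes.
from torch.utils.data import DataLoader, Dataset
from aistore.sdk import Client

class AISBucketDataset(Dataset):
    """Reads each object in the bucket as raw bytes."""

    def __init__(self, endpoint: str, bucket_name: str, prefix: str = ""):
        self._bucket = Client(endpoint).bucket(bucket_name)
        self._names = [e.name for e in self._bucket.list_all_objects(prefix=prefix)]

    def __len__(self) -> int:
        return len(self._names)

    def __getitem__(self, idx: int) -> bytes:
        return self._bucket.object(self._names[idx]).get().read_all()

ds = AISBucketDataset("http://localhost:8080", "training-data")
# Payloads are variable-length bytes, so keep the default collate out of the way.
loader = DataLoader(ds, batch_size=32, collate_fn=lambda batch: batch)
```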
Let others know your project is powered by high-performance AI storage:
[Powered by AIStore](https://github.com/NVIDIA/aistore)

- Overview and Design
- Getting Started
- Buckets and Bucket Management
- Observability
- Technical Blog
- S3 Compatibility
- Batch Jobs
- Performance and CLI: performance
- CLI Reference
- Production Deployment: Kubernetes Operator, Ansible Playbooks, Helm Charts, Monitoring
- See Extended Index
- Use the CLI `search` command, e.g.: `ais search copy`
- Clone the repository and run `git grep`, e.g.: `git grep -n out-of-band -- "*.md"`
License: MIT

Author: Alex Aizman (NVIDIA)