Dockerized distribution of zero-knowledge proof services. This repository bundles two independent proving services with consistent deployment and observability:
- Circom Prover: a service for proving Circom circuits.
- Gnark Prover: a service for proving Gnark circuits.
Both services expose a gRPC interface and support CPU and NVIDIA GPU runtime modes.
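Once a service is up, its gRPC surface can be inspected with grpcurl. The following is a sketch, assuming the default ports documented below and that the servers enable gRPC server reflection; if reflection is disabled, point grpcurl at the .proto definitions under each service's src/ directory instead.

# List the gRPC services each prover exposes (assumes reflection is enabled)
grpcurl -plaintext localhost:60051 list   # Circom Prover
grpcurl -plaintext localhost:60060 list   # Gnark Prover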
prover/
├── circom-prover/ # Circom proving service
│ ├── src_cpu/ # CPU entrypoint
│ ├── src_gpu/ # GPU entrypoint
│ ├── src/ # Service sources and gRPC definitions
│ ├── lib/ # Shared library code
│ ├── pull.sh # Deployment helper script
│ └── go.mod # Go module
├── gnark-prover/ # Gnark proving service
│ ├── src_cpu/ # CPU entrypoint
│ ├── src_gpu/ # GPU entrypoint
│ ├── src/ # Service sources and gRPC definitions
│ ├── pull.sh # Deployment helper script
│ └── go.mod # Go module
└── README.md # This document
- Hardware: any modern CPU for CPU mode; an NVIDIA GPU for GPU mode (see the CUDA requirement below).
- CUDA: ICICLE targets CUDA Toolkit ≥ 12.0. Older GPUs that only support CUDA 11 may still work, but this is not an officially supported configuration.
- GPU Memory (VRAM): Depends on circuit size and proving key; 8 GB+ recommended, and 16 GB+ is advisable for medium–large circuits.
This project supports both CPU and GPU (NVIDIA) runtime modes. The GPU-related version requirements primarily stem from our dependency on ingonyama-zk/icicle-gnark (hereinafter referred to as icicle-gnark). Please be aware of the following limitations and installation requirements.
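Before pulling the GPU image, it is worth confirming that the host driver meets these requirements. A quick check with nvidia-smi (it ships with the driver; the CUDA version it reports is the highest the installed driver supports, not what is installed in any container):

# Show GPU model, driver version, and total VRAM
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
# The header of plain nvidia-smi also prints the highest supported CUDA version
nvidia-smi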
- Docker and Docker Compose installed (latest version recommended).
- Permissions to pull images and run containers (root or membership in the docker group).
- Internet access to pull images and dependencies.
- No additional requirements (NVIDIA components are not needed).
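A quick sanity check for the CPU-mode prerequisites (assumes the Compose v2 plugin; on older installations the binary is docker-compose):

# Verify Docker and Compose are installed and the daemon is reachable
docker --version
docker compose version
sudo docker run --rm hello-world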
- NVIDIA Graphics Driver (version must support CUDA 12.x; 535+ recommended).
  Refer to the official compatibility matrix: https://docs.nvidia.com/deploy/cuda-compatibility/index.html
- NVIDIA Container Toolkit (version as referenced in icicle-gnark; CUDA Toolkit ≥ 12.0 recommended):
  https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
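To verify that the NVIDIA Container Toolkit is wired up correctly, run any CUDA 12.x base image with GPU access; the image tag below is illustrative:

# Should print the same GPU table as running nvidia-smi on the host
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi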
# Circom Prover (CPU)
cd circom-prover
sudo ./pull.sh cpu [port]   # default port: 60051

# Circom Prover (GPU)
cd circom-prover
sudo ./pull.sh gpu [port]   # default port: 60051

# Gnark Prover (CPU)
cd gnark-prover
sudo ./pull.sh cpu [port]   # default port: 60060

# Gnark Prover (GPU)
cd gnark-prover
sudo ./pull.sh gpu [port]   # default port: 60060

# Check container status
sudo docker ps
# Test service connectivity
telnet localhost 60051 # Circom Prover
telnet localhost 60060 # Gnark Prover
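# If telnet is unavailable, bash's built-in /dev/tcp gives an equivalent check
# (exit status 0 means the port is accepting connections):
timeout 2 bash -c '</dev/tcp/localhost/60051' && echo "Circom Prover reachable"
timeout 2 bash -c '</dev/tcp/localhost/60060' && echo "Gnark Prover reachable"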
# View service logs
sudo docker logs -f circom-prover-cpu   # or: circom-prover-gpu
sudo docker logs -f gnark-prover-cpu    # or: gnark-prover-gpu
# Stop a container
sudo docker stop <container>
# Start a container
sudo docker start <container>
# Restart a container
sudo docker restart <container>
# Pause / unpause a container
sudo docker pause <container>
sudo docker unpause <container>

Both subprojects include a convenience script to pull and run a pre-built Docker image.
sudo ./pull.sh [mode] [port]

Parameter Descriptions:
- `mode`: `cpu` or `gpu` (default: `gpu`)
- `port`: host port number (optional; default values as specified in each service description)
Environment Variables:
- `IMAGE`: custom Docker image name
- `NAME`: custom container name
- `HOST_PORT`: host port (lower priority than the command-line argument)
- `CUDA_VISIBLE_DEVICES`: GPU device selection (GPU mode only)
- `NVIDIA_VISIBLE_DEVICES`: NVIDIA device visibility (GPU mode only)
Examples:
# Use default settings
sudo ./pull.sh
# Specify CPU mode and port
sudo ./pull.sh cpu 8080
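The environment variables above combine with the positional arguments; the values here are illustrative (the `sudo VAR=value ./pull.sh` form passes the variable through to the script):

# Pin GPU mode to the first device and use a custom container name
sudo CUDA_VISIBLE_DEVICES=0 NAME=circom-prover-gpu-0 ./pull.sh gpu 60051
# Select the host port via environment variable (lower priority than the argument)
sudo HOST_PORT=8080 ./pull.sh cpu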