A blazingly fast gRPC client & server implementation in Zig, designed for maximum performance and minimal overhead.
- 🔥 Blazingly Fast: Built from the ground up in Zig for maximum performance
 - 🔐 Full Security: Built-in JWT authentication and TLS support
 - 🗜️ Compression: Support for gzip and deflate compression
 - 🌊 Streaming: Efficient bi-directional streaming
 - 💪 HTTP/2: Full HTTP/2 support with proper flow control
 - 🏥 Health Checks: Built-in health checking system
 - 🎯 Zero Dependencies: Pure Zig implementation
 - 🔍 Type Safety: Leverages Zig's comptime for compile-time checks
 
Quick start:

```zig
// Server
const server = try GrpcServer.init(allocator, 50051, "secret-key");
try server.handlers.append(.{
    .name = "SayHello",
    .handler_fn = sayHello,
});
try server.start();
// Client
var client = try GrpcClient.init(allocator, "localhost", 50051);
const response = try client.call("SayHello", "World", .none);
```
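The `sayHello` handler registered above isn't shown in the snippet. A minimal sketch of what it could look like, assuming handlers take the request bytes and return the response bytes (the real handler signature is defined in `server.zig`):

```zig
const std = @import("std");

// Hypothetical handler shape; the exact signature is an assumption,
// not the library's confirmed API.
fn sayHello(allocator: std.mem.Allocator, request: []const u8) ![]const u8 {
    // Build a greeting from the request payload as the response.
    return std.fmt.allocPrint(allocator, "Hello, {s}!", .{request});
}
```

The final argument to `client.call` selects the compression codec; given the gzip and deflate support listed above, `.gzip` or `.deflate` would presumably be passed there instead of `.none` to enable compression.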
A complete server example:

```zig
const std = @import("std");
const GrpcServer = @import("server.zig").GrpcServer;
pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    var server = try GrpcServer.init(gpa.allocator(), 50051, "secret-key");
    defer server.deinit();
    try server.start();
}
```
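A matching client program, assuming a `client.zig` module laid out like the server import above (the import path and `deinit` are assumptions):

```zig
const std = @import("std");
const GrpcClient = @import("client.zig").GrpcClient; // assumed path

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var client = try GrpcClient.init(gpa.allocator(), "localhost", 50051);
    defer client.deinit(); // assumed to mirror the server's deinit

    const response = try client.call("SayHello", "World", .none);
    std.debug.print("response: {s}\n", .{response});
}
```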
try stream.push("First message", false);
try stream.push("Final message", true);- Fetch the dependency:
 
To add gRPC-zig to your project:

- Fetch the dependency:

```sh
zig fetch --save "git+https://ziglana/grpc-zig/gRPC-zig#main"
```

- Add the module to your `build.zig`:

```zig
const grpc_zig = b.dependency("grpc_zig", .{});
exe.addModule("grpc", grpc_zig.module("grpc"));
```
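With the module wired into `build.zig`, it can then be imported from your source files; a one-liner, assuming the module name `"grpc"` registered above:

```zig
// Exposes the library's declarations; exact paths depend on the
// module's root source file.
const grpc = @import("grpc");
```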
Benchmarked against other gRPC implementations (latency per call, lower is better):

```
gRPC-zig    │████████░░░░░░░░░░│  2.1ms
gRPC Go     │██████████████░░░░│  3.8ms
gRPC C++    │████████████████░░│  4.2ms
```
The repository includes a built-in benchmarking tool to measure performance:
```sh
# Build the benchmark tool
zig build
# Run benchmarks with default settings
zig build benchmark
# Run with custom parameters
./zig-out/bin/grpc-benchmark --help
./zig-out/bin/grpc-benchmark --requests 1000 --clients 10 --output json
# Or use the convenience script
./scripts/run_benchmark.sh
```
Benchmark Options:

- `--host <host>`: Server host (default: localhost)
- `--port <port>`: Server port (default: 50051)
- `--requests <n>`: Number of requests per client (default: 1000)
- `--clients <n>`: Number of concurrent clients (default: 10)
- `--size <bytes>`: Request payload size (default: 1024)
- `--output <format>`: Output format: text|json (default: text)

Benchmark Metrics:
- Latency statistics (min, max, average, P95, P99; see the percentile sketch after this list)
 - Throughput (requests per second)
 - Error rates and success rates
 - Total execution time
 
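P95 and P99 are latency percentiles: the time under which 95% (respectively 99%) of requests completed. For illustration, a nearest-rank percentile can be computed from recorded samples like this (a sketch, not the benchmark tool's actual code):

```zig
const std = @import("std");

/// Nearest-rank percentile of `samples` for p in 1..100.
/// Sorts the slice in place; `samples` must be non-empty.
fn percentile(samples: []u64, p: usize) u64 {
    std.mem.sort(u64, samples, {}, std.sort.asc(u64));
    // ceil(p * n / 100) is the 1-based rank of the percentile value.
    const rank = (p * samples.len + 99) / 100;
    return samples[rank - 1];
}
```

With 1000 recorded latencies, `percentile(samples, 95)` returns the 950th smallest value.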
The benchmarks automatically run in CI/CD on every pull request and provide performance feedback.
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the Unlicense - see the LICENSE file for details.
If you find this project useful, please consider giving it a star on GitHub to show your support!
- Spice - For the amazing Protocol Buffers implementation
 - Tonic - For inspiration on API design
 - The Zig community for their invaluable feedback and support
 
Made with ❤️ in Zig