A cluster provider for OpenMCP that uses kind (Kubernetes IN Docker) to provision and manage Kubernetes clusters. This provider enables you to create and manage multiple Kubernetes clusters running as Docker containers, making it ideal for:
- Local Development: Quickly spin up multiple clusters for testing multi-cluster scenarios
- E2E Testing: Automated testing of multi-cluster applications and operators
- CI/CD Pipelines: Lightweight cluster provisioning for testing environments
Before using this cluster provider, ensure you have:
- Docker: A running Docker daemon with its socket accessible
- kind: The kind CLI tool installed
- kubectl: For interacting with the Kubernetes clusters
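A quick way to confirm these prerequisites are in place (a minimal sketch; adjust to your environment):

```bash
# Check that the Docker daemon is reachable via its socket
docker info >/dev/null && echo "Docker daemon is reachable"

# Check that the kind and kubectl CLIs are installed
kind version
kubectl version --client
```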
In combination with the OpenMCP Operator, this cluster provider can be deployed via a simple Kubernetes resource:
```yaml
apiVersion: openmcp.cloud/v1alpha1
kind: ClusterProvider
metadata:
  name: kind
spec:
  image: "ghcr.io/openmcp-project/images/cluster-provider-kind:<latest-version>"
```
To run the operator locally for development:

- Start a platform kind cluster:

  ```bash
  kind create cluster --name platform
  kubectl config use-context kind-platform
  ```

- Install the Platform CRDs of the openmcp-operator: apply the CRDs from the OpenMCP operator repository.

- Initialize the CRDs:

  ```bash
  go run ./cmd/cluster-provider-kind/main.go init
  ```

- Run the operator:

  ```bash
  go run ./cmd/cluster-provider-kind/main.go run
  ```
Create a new kind cluster by applying a Cluster resource:
```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: Cluster
metadata:
  name: my-managedcontrolplane
  namespace: default
spec:
  profile: kind # This tells the kind provider to handle this cluster
  tenancy: Exclusive
```

```bash
kubectl apply -f cluster.yaml
```
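To follow the provisioning, both the Cluster resource and the resulting kind cluster can be inspected; the fully qualified resource name below assumes the plural is clusters:

```bash
# Watch the Cluster resource created above (the plural "clusters" is an assumption)
kubectl get clusters.clusters.openmcp.cloud -n default

# On the host machine, the newly provisioned kind cluster should show up
kind get clusters
```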
Create an AccessRequest to get kubeconfig for a cluster:
```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: AccessRequest
metadata:
  name: my-access
  namespace: default
spec:
  clusterRef:
    name: my-managedcontrolplane
    namespace: default
  permissions: []
```

The kubeconfig will be stored in a Secret in the same namespace as the AccessRequest.
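Once the AccessRequest has been granted, the kubeconfig can be pulled from that Secret; the Secret name and data key below are assumptions, so check the AccessRequest status for the actual reference:

```bash
# Extract the kubeconfig from the Secret (name and key are assumptions)
kubectl get secret my-access -n default -o jsonpath='{.data.kubeconfig}' | base64 -d > mcp-kubeconfig.yaml

# Use it to access the managed cluster
kubectl --kubeconfig mcp-kubeconfig.yaml get nodes
```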
In order to create new kind clusters from within a kind cluster, the Docker socket (usually `/var/run/docker.sock`) needs to be available to the `cluster-provider-kind` pod. As a prerequisite, the Docker socket of the host machine must be mounted into the nodes of the platform kind cluster. In this case, there is only a single node (`platform-control-plane`). The socket can then be mounted by the `cluster-provider-kind` pod using a `hostPath` volume.
```mermaid
flowchart TD
    subgraph HostMachine
        DockerSocket
        subgraph "platform-control-plane"
            /var/run/docker.sock
            cluster-provider-kind
        end
        DockerSocket -- extraMount --> /var/run/docker.sock
        /var/run/docker.sock -- volumeMount --> cluster-provider-kind
        subgraph mcp-control-plane
            SomeResource
        end
        subgraph mcp-workload
            SomePod
        end
        cluster-provider-kind -- creates --> mcp-control-plane
        cluster-provider-kind -- creates --> mcp-workload
    end
    style HostMachine fill:#eee
```
The kind configuration for the platform cluster may look like this:
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock
```
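Assuming the configuration above is stored as platform-cluster.yaml (the file name is illustrative), the platform cluster can be created with it:

```bash
# Create the platform cluster with the host Docker socket mounted into its node
kind create cluster --name platform --config platform-cluster.yaml
```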
In order to test that the socket is functional, a simple pod can be deployed:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - image: ubuntu
    name: ubuntu
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker
    command:
    - sleep
    - "3600"
  volumes:
  - name: docker
    hostPath:
      path: /var/run/host-docker.sock
      type: Socket
```
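Inside that pod, the Docker CLI and kind can then be installed. The following is a rough sketch; the package name, kind release version, and architecture are assumptions to adapt to your environment:

```bash
# Run inside the ubuntu pod, e.g. via: kubectl exec -it ubuntu -- bash
apt-get update && apt-get install -y curl docker.io

# Download a kind release binary (replace <version> with a concrete release)
curl -Lo /usr/local/bin/kind "https://kind.sigs.k8s.io/dl/<version>/kind-linux-amd64"
chmod +x /usr/local/bin/kind

# The host socket is mounted at /var/run/docker.sock, the default location
# both docker and kind use, so no further configuration should be necessary
docker info
```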
After installing the Docker CLI and kind, it should be possible to create a new kind cluster on the level of the host machine:
```
$ kind create cluster --name test
Creating cluster "test" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-test"
You can now use your cluster with:

kubectl cluster-info --context kind-test

Thanks for using kind! 😊
```
This can be verified by running `kind get clusters` directly on the host machine:

```
$ kind get clusters
platform
test
```
This project is open to feature requests/suggestions, bug reports etc. via GitHub issues. Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our Contribution Guidelines.
If you find any bug that may be a security problem, please follow the instructions in our security policy on how to report it. Please do not create GitHub issues for security-related doubts or problems.
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its Code of Conduct at all times.
Copyright 2025 SAP SE or an SAP affiliate company and cluster-provider-kind contributors. Please see our LICENSE for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available via the REUSE tool.