Estimated reading time: 7 minutes ⏱️
This Kubernetes cluster is powered by kubeadm and managed using Vagrant virtual machines. 🎉
- Kubernetes: kubeadm
- Container Runtime: containerd
- CNI Plugin: Calico 🌱
- 1 Control Plane Node: Connected to the public network, accessible from the host 🌍
- 2 Worker Nodes: Connected to a private network 🔒
Before starting, make sure the following are installed:
- VirtualBox: Download and install VirtualBox 💻
- Vagrant: Download and install Vagrant ⚙️
- Initialize the Vagrant environment:

  ```shell
  vagrant init
  ```

  This creates a `Vagrantfile` in your current directory, which you can edit to define the configuration of your virtual machines.
- Start the Vagrant machines:

  ```shell
  vagrant up
  ```

  This starts and provisions the virtual machines based on the configuration in the `Vagrantfile`. 🌱 You can modify the `Vagrantfile` to suit your needs (e.g., increase worker nodes, customize resources, etc.). 📝
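As a starting point, a `Vagrantfile` for this topology might look like the sketch below. The box name, IP addresses, and resource sizes are assumptions — adapt them to your environment. The control plane's public IP should match the `--apiserver-advertise-address` used later in this guide (192.168.1.200):

```ruby
# Sketch of a Vagrantfile: one control plane node on a public (bridged)
# network plus two workers on a private network. Box, IPs, and resources
# are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  config.vm.define "master" do |master|
    master.vm.hostname = "k8s-master"
    # Public network so the API server is reachable from the host.
    master.vm.network "public_network", ip: "192.168.1.200"
    master.vm.provider "virtualbox" do |vb|
      vb.memory = 2048
      vb.cpus = 2
    end
  end

  (1..2).each do |i|
    config.vm.define "worker#{i}" do |worker|
      worker.vm.hostname = "k8s-worker#{i}"
      # Private network for the worker nodes.
      worker.vm.network "private_network", ip: "192.168.56.#{10 + i}"
      worker.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
        vb.cpus = 2
      end
    end
  end
end
```

Each `config.vm.define` block adds one machine, so adding a third worker is just a matter of widening the range.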
- SSH into the master (control plane) node:

  ```shell
  vagrant ssh master
  ```
- Initialize the Kubernetes control plane node with the following command:

  ```shell
  sudo kubeadm init --apiserver-advertise-address=192.168.1.200 --pod-network-cidr=192.168.0.0/16
  ```

  Here `--apiserver-advertise-address` is the control plane node's IP on the public network, and `--pod-network-cidr=192.168.0.0/16` matches the default pod CIDR used by Calico's quickstart manifests. The final output includes the `kubeadm join` command you will need for the worker nodes. Save it!
- Configure `kubectl` for your user on the control plane node:

  ```shell
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```
- Install the Calico CNI plugin (see Quickstart for Calico on Kubernetes):

  ```shell
  CALICO_VERSION=3.29.1
  kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v$CALICO_VERSION/manifests/tigera-operator.yaml
  kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v$CALICO_VERSION/manifests/custom-resources.yaml
  ```

  You can choose a different CNI plugin if needed.
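For orientation, the pod CIDR that Calico uses is set in the `Installation` resource inside `custom-resources.yaml`. A trimmed sketch of that resource (most fields omitted; see the downloaded manifest for the full version):

```yaml
# Sketch of the relevant part of custom-resources.yaml (simplified).
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 192.168.0.0/16   # should match --pod-network-cidr from kubeadm init
```

If you change `--pod-network-cidr` during `kubeadm init`, edit this `cidr` to match before applying the manifest.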
- Verify that Calico is installed correctly by watching its pods until they are all `Running`:

  ```shell
  watch kubectl get pods -n calico-system
  ```
- (Optional) Remove the control plane taint to allow scheduling pods on the control plane node:

  ```shell
  kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  ```
Follow the instructions displayed after running the `kubeadm init` command to configure and join your worker nodes. 📝 For each worker node, perform the following steps:
- SSH into the worker node (e.g., `worker1`):

  ```shell
  vagrant ssh worker1
  ```
- On the worker node, run the `kubeadm join` command that was displayed during the `kubeadm init` process on the control plane node:

  ```shell
  sudo kubeadm join <your-kubeadm-join-command>
  ```

  If you no longer have the command, you can regenerate it on the control plane node with `kubeadm token create --print-join-command`.
Repeat these two steps for each remaining worker node. 🔄
Once the nodes are joined, you can log in to the master node and enjoy your Kubernetes setup!
Your cluster is accessible from the host. To manage it with `kubectl` on your host machine, copy the kubeconfig from the Vagrant master node and configure it:
- Copy the kubeconfig file from the master node:

  ```shell
  vagrant ssh master
  sudo cat /etc/kubernetes/admin.conf
  ```

  Copy the printed contents to a file on your host (or fetch it in one step with `vagrant ssh master -c "sudo cat /etc/kubernetes/admin.conf" > config`).
- Set the `KUBECONFIG` environment variable on your host:

  ```shell
  export KUBECONFIG=/path/to/config
  ```

  Replace `/path/to/config` with the actual path to the `admin.conf` file you copied.
```shell
$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   20m   v1.31.3
k8s-worker1   Ready    <none>          12m   v1.31.3
k8s-worker2   Ready    <none>          11m   v1.31.3
```
Happy Kuberneting! 🐳🎉
To stop and remove the virtual machines created by Vagrant:

```shell
vagrant destroy -f
```
To completely clean up Vagrant-related files from your project directory:

```shell
rm -rf Vagrantfile .vagrant
```