Multinode Kubernetes cluster in VirtualBox

Submitted on Fri, 08/13/2021 - 09:12


Updated on: December 13, 2023

In this tutorial, we will learn to set up a local Kubernetes cluster using VirtualBox, kubeadm, and Docker.

In VirtualBox, two VMs have been configured with Ubuntu 22.04.

It should also be noted that each of these VMs is connected to two networks. One is a bridged network, so that we can easily SSH into the VMs for further configuration. The second is a NAT network, which will be used for communication between the master and worker nodes in the Kubernetes cluster. This avoids network conflicts.

Bridged network: 192.168.1.0/24

NAT network: 192.168.10.0/24

[Figure: network layout of the two VMs]

LET'S BEGIN!

1. Update the package repository (All hosts)

apt-get update

2. Install Docker (All hosts)

apt-get install docker.io -y

3. Install prerequisites to access repos via HTTPS (All hosts)

apt-get install -y apt-transport-https ca-certificates curl gpg

4. Add Kubernetes keyring (All hosts)

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
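Note: on releases older than Ubuntu 22.04 / Debian 12, the /etc/apt/keyrings directory may not exist by default; if the command above complains, create it first:

mkdir -p /etc/apt/keyrings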

5. Add the Kubernetes repo to the apt sources list (All hosts)

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

6. Update the package repository and install K8s components (All hosts):

apt update

apt install -y kubelet kubeadm kubectl

apt-mark hold kubelet kubeadm kubectl
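To confirm that the components were installed correctly:

kubeadm version

kubectl version --client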

7. Add the hosts entry (All hosts)
For proper communication between the master and worker nodes, we will use the hosts file to point the hostname "kube-master" to the master's private IP. This must be done on both nodes.

Scenario:

  • master node private IP: 192.168.10.6 (from the NAT network)
  • master node hostname: kube-master

Edit the hosts file (/etc/hosts)

The hosts file should include the following line:

192.168.10.6 kube-master

Note: Remove any other entry (such as 127.0.1.1) that maps to the same hostname.

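To verify that the name resolves to the NAT-network IP, ping it from both nodes:

ping -c 2 kube-master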

8. Disable SWAP (All hosts)

swapoff -a

Then edit /etc/fstab and comment out the swap entry so that swap stays disabled after a reboot.

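On a default Ubuntu install, the swap entry usually points to /swapfile; once commented out it looks something like this (your device name may differ):

#/swapfile    none    swap    sw    0    0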

9. Configure containerd to use systemd cgroups for resource management (EXTREMELY IMPORTANT!)

In some cases, containerd (installed as a dependency of the docker.io package) is missing its configuration file. Without it, the control-plane pods such as the API server keep crashing shortly after starting. To fix this, run the following commands on the master node (running them on the worker nodes as well keeps the cgroup driver consistent and does no harm):

mkdir -p /etc/containerd/

containerd config default | tee /etc/containerd/config.toml

sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

The last command updates the value of the SystemdCgroup parameter from false to true. You can also make this change manually.
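If you prefer to edit by hand, the parameter lives in the runc options section of /etc/containerd/config.toml (section path as of containerd 1.x):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true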

Restart containerd with the updated config file:

systemctl restart containerd

10. Pull the Kubernetes container images (Master node)

kubeadm config images pull

11. Initialize the control plane (Master node)

kubeadm init --control-plane-endpoint kube-master:6443 --pod-network-cidr 10.10.0.0/16

Note: kube-master refers to the master node's hostname

If successful, you will see the following message at the bottom of the output:

Your Kubernetes control-plane has initialized successfully!

Note: Later, for the worker nodes to join this cluster, run the following command as root on each worker node (as seen in the output):

kubeadm join kube-master:6443 --token 5nsm1v.6msi18iuv6bbdkdr \
    --discovery-token-ca-cert-hash sha256:7448be7353ce5a8b292f04a68cb81c13d9c84f2377156b2ad87e1adec09a1f55

Hence, it's best to copy this join command from the output.

12. After the control plane is initialized, run the following commands as a non-root user on the master node so that kubectl works without root:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

For root to run kubectl, add the following line at the end of /root/.bashrc on the master node:

export KUBECONFIG=/etc/kubernetes/admin.conf

Save, exit, and run the following command to load the updated .bashrc:

source /root/.bashrc
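kubectl should now work. Note that the master node will report a NotReady status until a pod network add-on is installed in the next step:

kubectl get nodes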

13. Calico configuration (Master node)
Download the Calico manifest:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml -O

Edit calico.yaml and set CALICO_IPV4POOL_CIDR to the value passed to kubeadm init as --pod-network-cidr (10.10.0.0/16 in this case), as shown below. Note that the line is initially commented out, so remove the hash first.

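After the edit, the relevant block in calico.yaml should look like this:

- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"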

Apply the manifest with the following command:

kubectl apply -f calico.yaml
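You can watch the Calico pods start; once calico-node and calico-kube-controllers are Running, the master node should report Ready:

kubectl get pods -n kube-system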

14. Joining the Kubernetes cluster from the worker nodes

In case you forgot to copy the join command from the kubeadm init output in step 11, do the following on the master node:

14.1 Create a token

kubeadm token create

The output is the new token, for example: y6gwxx.69iot5jbjz47l2vq
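Tip: instead of steps 14.1 to 14.3, kubeadm can also create a token and print the complete join command in one step:

kubeadm token create --print-join-command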

14.2 Find the CA certificate hash

This is the SHA-256 hash of the public key of the Certificate Authority (CA) certificate that the cluster uses for secure communication:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

The output is the hash, for example: c9d1c247f9e50d04965b62a4023d9950abf4d6688246938c815d51679f3c7b4b

14.3 Join the cluster with token and hash values

The command syntax to join the cluster from worker nodes is as follows:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Hence, in our case it would be as follows:

kubeadm join --token y6gwxx.69iot5jbjz47l2vq kube-master:6443 --discovery-token-ca-cert-hash sha256:c9d1c247f9e50d04965b62a4023d9950abf4d6688246938c815d51679f3c7b4b

At the end, you should have a multinode cluster set up!

Check from the master node:

kubectl get nodes

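If everything went well, both nodes are listed as Ready, along these lines (kube-worker is just an illustrative hostname; names, ages, and versions will match your setup):

NAME          STATUS   ROLES           AGE   VERSION
kube-master   Ready    control-plane   12m   v1.28.2
kube-worker   Ready    <none>          3m    v1.28.2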