A guide to Kubernetes

Why:

 

Applications today are increasingly based on microservices rather than monoliths

 

Microservices are useful for:

 

  • High performance / Scalability
  • Agility
  • Flexibility
  • Reliability/fault isolation/disaster recovery
  • High Availability

 

Kubernetes helps manage applications that use a large number of containers

 

K8s offers

 

High performance (load balancing) and less downtime (self-healing)



 

What:

 

Open Source Container Orchestration Tool

 

Orchestration: managing/organizing multiple components to achieve a greater desired effect

 

Originally developed by Google; now maintained by the Cloud Native Computing Foundation (CNCF)

 

 

Key concepts/objects:

 

Pod

Smallest deployable unit

Container abstraction

Has its own IP address, which can change

Ephemeral

Node

Host for pods

Can be physical/virtual

Master / Worker type

 

ReplicaSet

Pod abstraction

Defines number of pod replicas

Maintains the desired state of pod instances

For fault tolerance

 

Deployment

ReplicaSet abstraction

Highest level of abstraction

Administrators work with deployments
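For illustration, a minimal Deployment manifest might look like the following sketch (the nginx name, image, and replica count are placeholders, not from this guide):

apiVersion: apps/v1
kind: Deployment
metadata:
 name: nginx
spec:
 replicas: 2
 selector:
  matchLabels:
   app: nginx
 template:
  metadata:
   labels:
    app: nginx
  spec:
   containers:
   - name: nginx
     image: nginx

Applying it creates the underlying ReplicaSet and pods automatically.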

 

Service

Provides a static IP address for a set of pods

Independent from pod itself

Internal / External type

For communication among pods

Acts as an intelligent load balancer

 

Ingress

Routes requests by domain name (host) and path

Entrypoint to the web application

Forwards to service

 

ConfigMap

Application configurations

Such as database URL

Independent of the application itself
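A minimal sketch of a ConfigMap (the app-config name and the key/value are placeholder assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
 name: app-config
data:
 database-url: mongodb-service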

 

Secrets

Confidential configurations

Credentials

Uses base64 encoding (encoding, not encryption)
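A minimal sketch of a Secret (the name and key are placeholders); the value must be base64-encoded first, e.g. echo -n 'password' | base64 yields cGFzc3dvcmQ=:

apiVersion: v1
kind: Secret
metadata:
 name: app-secret
type: Opaque
data:
 password: cGFzc3dvcmQ=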

 

Volume

For data persistence

Similar to Docker volumes

Local / remote / cloud

 

!Important: k8s doesn't explicitly manage data persistence. It's the administrator’s responsibility.

 

StatefulSet

For stateful applications such as replicated databases

Tedious to set up!

 

Note: Databases are stateful, so replicating them works differently and risks data inconsistency. For this reason, databases are often hosted outside the k8s cluster.

 

 

K8S Architecture

 

Master Node

  • Coordinates the worker nodes
  • Requires fewer resources

 

Components:

API server

Cluster entrypoint

Handles requests from other components in the cluster

etcd

Cluster brain

Key-value store

Stores cluster state data, not application data

Controller Manager

Monitors the cluster and detects state changes

Scheduler

Intelligently assigns pods to nodes

 

Worker Node

  • Does the actual work
  • Requires more resources

 

Components:

Kube-proxy (handles network communication; intelligently forwards traffic to services)

Kubelet

Works with the container runtime to run pods and their containers on the node

Container Runtime (Docker, containerd, etc.)


 

A cluster usually has multiple master nodes for high availability. A cluster is scalable: both master and worker nodes can be added to an existing cluster.
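On minikube, the control-plane components above run as pods in the kube-system namespace; assuming a running cluster, they can be listed with:

kubectl get pods -n kube-system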

 

How:

 

Minikube and kubectl

 

minikube

 

  • For when resources are limited
  • For learning purposes only
  • Not suitable for production environments
  • One-node Kubernetes cluster
  • Master and worker processes run in one node
  • Container runtime included
  • No need to set up Docker additionally
  • Needs a hypervisor such as Hyper-V, VirtualBox, or HyperKit


 

kubectl

 

  • Command-line tool to interact with a K8s cluster through the API server


 

 

Minikube Installation

 

Go to,

minikube.sigs.k8s.io/docs/start/

 

Choose OS details and proceed.

 

Note: This also bundles kubectl and a container runtime


 

 

Minikube commands


 

Check minikube version

 

minikube version


 

Start a cluster

 

minikube start --driver=hyperkit --nodes=3 --disk-size=5g

 

Note: starts a cluster of 3 nodes using HyperKit as the VM driver, with a disk size of 5 GB per node. (--vm-driver is the older, deprecated spelling of --driver.)

 

!Important: If Docker is used as the driver, the cluster IP may not be reachable (pingable) from the host OS, which can make accessing web apps from the host challenging later.


 

Check cluster status

 

minikube status


 

List clusters

 

minikube profile list


 

Delete cluster

 

minikube delete

 

Note: When no profile is passed, the default profile, named minikube, is used


 

Other important commands

 

  • minikube dashboard
  • minikube logs
  • minikube ip

 

Note: This IP, along with a nodePort, is used for accessing services from the host OS.




 

Kubectl commands

 

Check version

 

kubectl version

 

Get details about a K8S object

 

kubectl get <object> [-o wide]

 

Examples,

kubectl get nodes | pods | deployments | services | configmaps | secrets [-o wide]

 

Note: Pass the argument -w at the end to watch updates in real time


 

Delete an object

 

kubectl delete <object> <object name>


 

Create a deployment

 

kubectl create deployment <name> --image=<image name>

 

Example,

kubectl create deployment nginx --image=nginx

 

Note: We only need to work with the deployment. The underlying replicaSet and pods are created by K8s itself.
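To see this, list the created objects together:

kubectl get deployments,replicasets,pods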


 

Create the deployment with 4 replica pods

 

kubectl create deployment nginx --image=nginx --replicas=4


 

Describe an object

 

kubectl describe <object> <object name>


 

Check logs

 

kubectl logs <pod name>



 

Run a command in a pod

 

kubectl exec -it <pod name> -- <command>


 

Enter a pod

 

kubectl exec -it <pod name> -- bash
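Note: If bash is not available in the image, sh usually is:

kubectl exec -it <pod name> -- sh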


 

Edit deployment

 

kubectl edit deployment <deployment name>

 

Note: When changes are made to the deployment, associated replicaSets and pods are automatically updated.


 

Scale deployment

 

kubectl scale deployment <deployment name> --replicas=<number>


 

For detailed configuration, 

 

kubectl apply -f <configuration-file.yaml>
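Example (assuming a manifest such as the Deployment sketch shown earlier is saved as nginx-deployment.yaml),

kubectl apply -f nginx-deployment.yaml

Note: Re-running apply after editing the file updates the existing objects in place.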


 

 

COMMUNICATION PORTS

 

nodePort

For access by external clients. Forwards to a service's port. Must be in the range 30000-32767.

 

Port

Service port on which a service receives a connection. Forwards to a targetPort.

 

targetPort

Port on which a pod is listening. Service forwards the connection to this port on a pod.

 

containerPort

The port the container in the pod listens on. Should be the same as targetPort.

 

Note: nodePort is part of the service definition; containerPort is part of the pod definition.
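As a sketch, the pod side of this chain is declared in the deployment's container spec (assuming an nginx container listening on port 80):

spec:
 containers:
 - name: nginx
   image: nginx
   ports:
   - containerPort: 80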

Accessing web app using minikube IP and nodePort


 

Step 1. Create a deployment

 

kubectl create deployment nginx --image=nginx


 

Step 2. Create a configuration file for a service, named nginx-service.yaml.

 

apiVersion: v1
kind: Service
metadata:
 name: nginx-service
spec:
 selector:
  app: nginx  # matches the app label set on the deployment's pods
 ports:
  - port: 80
    targetPort: 80
    nodePort: 30010
 type: NodePort

 

!Important: Don't use tabs in the YAML file, and keep indentation consistent (this guide indents with one space per level).


 

Step 3. Apply the configuration file to create the service

 

kubectl apply -f nginx-service.yaml


 

Step 4. Find the cluster IP 

 

minikube ip


 

Step 5. Access the web app using minikube IP and nodePort 

 

http://192.168.66.13:30010



 

 

Accessing web app using ingress through a local domain 


 

Step 1. Enable the ingress addon

 

minikube addons enable ingress


 

Step 2. Create a config file named nginx-ingress.yaml as follows:

 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
 name: nginx-ingress
spec:
 rules:
  - host: "kube.local"
    http:
     paths:
     - path: /
       pathType: Prefix
       backend:
        service:
         name: nginx-service
         port:
          number: 80



 

Step 3. Apply the config file

 

kubectl apply -f nginx-ingress.yaml


 

Step 4. Update the hosts file (e.g. /etc/hosts) with the ingress domain and minikube IP

 

Example,

 

192.168.66.3 kube.local

 

Step 5. Access the web application at:

 

http://kube.local

Self Healing, Fault tolerance and Scalability


 

Self Healing


 

Delete all the running pods

 

kubectl delete pods --all


 

Observe the pods being recreated automatically using

 

kubectl get pods -o wide -w



 

Fault tolerance


 

Delete a node from the cluster

 

kubectl delete node minikube-m02

 

Pods on the node will be terminated, and new pods will be created on the remaining nodes.


 

Scalability


 

Scale an active deployment 

 

kubectl scale deployment nginx --replicas=4

 

New pods will be created to match the desired replica count


 

Add a new node to the cluster

 

minikube node add
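To confirm the new node has joined, list the nodes:

kubectl get nodes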


 

 

Horizontal Pod Autoscaler (HPA)


 

Step 1. Define resources in the deployment by editing it

 

kubectl edit deployment nginx

 

Under the resources section, define the resources required per pod as follows:

 

resources:
  requests:
    cpu: 50m
    memory: 128Mi

 

Note: 50m stands for 50 millicores (0.05 of a CPU core).


 

Step 2. Define HPA policy as follows:

 

kubectl autoscale deployment nginx --cpu-percent=10 --min=1 --max=5

 

Explanation: When average CPU utilization exceeds 10%, the policy kicks in and adds new pods, up to a maximum of 5.
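Note: HPA relies on resource metrics; on minikube, the metrics-server addon provides them and can be enabled with:

minikube addons enable metrics-server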


 

Step 3. Generate traffic by running a Docker container with the hey tool

 

docker run --rm williamyeh/hey -n 10000 -c 8000 http://192.168.66.13:30010

 

!Important: The kube.local domain will not work here, since this command runs inside a container that does not share the host's hosts file.


 

Check current resource utilization

 

kubectl top pods

 

kubectl get hpa