WELCOME TO THE FUTURE OF INFRASTRUCTURE
In a single sentence: Kubernetes intends to radically simplify the task of building, deploying and maintaining distributed systems.
Kubernetes (often abbreviated K8s) is open-source software, originally created by Google to take on the burden of managing container sprawl: applications and microservices running across potentially tens or hundreds of individual containers (isolated processes sharing a single Linux kernel, unlike full virtual machines) spread over multiple hosts. Google later donated the project to the Cloud Native Computing Foundation.
In Kubernetes, there is a master node and multiple worker nodes.
Each worker node runs multiple pods. A pod is a group of one or more containers deployed together as a single working unit.
Application developers design their applications around pods. Once those pods are defined, you give the master node the pod definitions and tell it how many replicas need to be deployed.
Kubernetes schedules the pods onto the worker nodes. In the event a worker node goes down, Kubernetes redeploys its pods onto a functioning worker node. The complexity of managing many containers is removed.
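As a sketch of what "telling the master node the pod definitions and how many to deploy" looks like in practice, a pod template is usually wrapped in a Deployment that also states the replica count. The names and image below are hypothetical, not from this post:

```yaml
# Hypothetical Deployment: asks Kubernetes to keep 3 replicas
# of a pod running a single nginx container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-server
  template:
    metadata:
      labels:
        app: my-web-server
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

You would hand this to the cluster with kubectl apply -f deployment.yaml, and Kubernetes takes care of placing and replacing the pods.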
It is a large and complex system for automating, deploying, scaling and operating applications running on containers.
Rather than create a separate post on MiniKube, I’ll incorporate it here in my Kubernetes post. MiniKube is a way of learning Kubernetes by running a single-node cluster locally on a laptop/desktop machine. The commands required to get it up and running on Linux Mint/Debian are shown below.
INSTALLATION OF MINIKUBE ON DEBIAN/MINT
sudo apt-get install virtualbox virtualbox-qt virtualbox-ext-pack virtualbox-guest-additions-iso virtualbox-guest-utils
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
#INSTALL MINIKUBE VIA DIRECT DOWNLOAD
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
#ADD MINIKUBE EXECUTABLE TO PATH
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/
minikube start --driver=virtualbox
#ENABLE BASH COMPLETION
kubectl completion bash
#ENABLE KUBECTL AUTOCOMPLETION
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl >/dev/null
#IF YOU USE A COMMAND ALIAS FOR KUBECTL
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
#START AND OPEN MINIKUBE DASHBOARD
You can check the status of MiniKube, and stop and start it, using these commands.
minikube status
minikube stop
minikube start --driver=virtualbox
Now that minikube is running, we’re ready to open the console and see our Kubernetes single-node cluster that is running on our local machine.
minikube dashboard
BACK TO KUBERNETES…
CONTAINERS AND ORCHESTRATION
To understand kubernetes, we must first understand containers and orchestration. So make sure you’ve read and understood Docker first.
A Kubernetes (K8s) cluster was originally developed by Google to manage containers at very large scale.
In Kubernetes, the control plane (which you interact with via the kubectl command) wraps each Docker container in a “Pod”, groups identical Pods into a ReplicaSet, and manages ReplicaSets through a “Deployment”. A Service sits in front of a Deployment and gives its Pods a stable network identity: a ClusterIP provides a single virtual IP for communication between pods inside the cluster, a NodePort exposes the Service on a port of every node, and a LoadBalancer exposes it externally. A DaemonSet ensures that a copy of a particular Pod runs on every node, which keeps node-level agents consistent across the cluster. An Ingress Controller serves to protect the Services’ external IPs from being visible, by accepting inbound URL and API requests and routing them to the appropriate Services, and from there to the Deployments, Pods and containers.
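As a hedged sketch of the Service layer described above, here is what a ClusterIP Service fronting the Pods of a hypothetical my-web-server Deployment might look like (the names, labels and ports are illustrative):

```yaml
# Hypothetical Service: routes cluster-internal traffic on port 80
# to any Pod carrying the label app=my-web-server.
apiVersion: v1
kind: Service
metadata:
  name: my-web-service
spec:
  type: ClusterIP     # change to NodePort or LoadBalancer for external access
  selector:
    app: my-web-server
  ports:
  - port: 80          # port the Service listens on
    targetPort: 80    # port the container listens on
```

Pods in the cluster can then reach the Deployment’s Pods at the Service’s single virtual IP (or DNS name) rather than tracking individual Pod addresses.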
Using Horizontal Pod Autoscalers (which scale the number of Pods) and Cluster Autoscalers (which scale the number of nodes), the number of containers deployed can be scaled automatically to meet service-level requirements, all managed using the kubectl command against the control plane.
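As an illustrative (hypothetical) example of a Horizontal Pod Autoscaler, this manifest asks Kubernetes to keep between 2 and 10 replicas of a my-web-server Deployment, scaling on CPU utilisation:

```yaml
# Hypothetical HorizontalPodAutoscaler: scales the my-web-server
# Deployment between 2 and 10 replicas, targeting 80% average CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-server
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

The same effect can be achieved imperatively with kubectl autoscale deployment my-web-server --min=2 --max=10 --cpu-percent=80.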
Rolling out an updated image to thousands of pods, or rolling it back, is done with kubectl: the kubectl rolling-update command in older releases, or kubectl set image and kubectl rollout undo for Deployments in current releases.
kubectl run --replicas=1000 my-web-mywebserver           #Run 1000 instances of an image in a Kubernetes cluster
kubectl scale --replicas=2000 my-web-mywebserver         #Scale the cluster up to 2000 containers
kubectl rolling-update my-web-server --image=web-server:2 #Perform a rolling update to a Deployment of Pods
kubectl rolling-update my-web-server --rollback          #Roll a Deployment Pod image back to the previous version
kubectl run hello-minikube                               #Deploy an app to the cluster
kubectl cluster-info                                     #View info about the cluster
kubectl get nodes                                        #View nodes that are part of the cluster
A Kubernetes system consists of the following seven components.
API SERVER
This is the front-end component that handles management commands from users, other management components and third-party integrations (storage vendors etc.).
ETCD
This is the distributed, reliable key-value store where the cluster state and the information required to access all the containers is kept and maintained.
SCHEDULER
This is responsible for distributing work to the containers across multiple nodes. It watches for newly created containers and assigns them to nodes.
CONTROLLER
This is the brain behind orchestration. It is responsible for noticing and responding when endpoints, containers or nodes go down.
CONTAINER RUNTIME
The software used to run containers. In our case that is Docker, but Kubernetes can be used to manage other container runtimes too.
KUBELET
The agent that runs on each node in the cluster. It is responsible for making sure the containers are running on the nodes as expected, acting on instructions from the API Server.
KUBE PROXY SERVICE
The Kube Proxy Service also runs on each node, and makes sure all the networking required for communication between containers is in place.
So to summarise so far, a Kubernetes Architecture consists of the following key components spread across the Master Node and Worker Nodes in the Cluster.
At the beginning of this post, we covered the steps for setting up a MiniKube single-node Kubernetes cluster with a single worker node. Setting up a multi-node Kubernetes cluster manually is a tedious task. kubeadm is a command-line tool that allows us to set up multi-node clusters more easily.
- You will need multiple virtual machines set up, ready to act as Docker hosts and as the Management and Worker Nodes.
Note that the minikube VM must be started from the command line using the command
minikube start --driver=virtualbox
not from Oracle VM VirtualBox Manager (virtualbox-qt).
Open the minikube web console in your default browser with the command
minikube dashboard
XML, JSON and YAML
In DevOps, you’ll find yourself interacting with or creating files in XML, JSON or YAML format, depending on what you’re doing. All three file formats are ways of representing structured data, and a comparison of the three formats representing the same data is shown below.
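As an illustration (the record itself is made up for the example), here is the same small piece of data in each of the three formats:

```xml
<user>
  <name>Alice</name>
  <age>30</age>
  <roles>
    <role>admin</role>
    <role>dev</role>
  </roles>
</user>
```

```json
{
  "user": {
    "name": "Alice",
    "age": 30,
    "roles": ["admin", "dev"]
  }
}
```

```yaml
user:
  name: Alice
  age: 30
  roles:
    - admin
    - dev
```

Note how YAML carries the same structure with the least syntactic noise, which is one reason Kubernetes manifests are written in it.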