Kubernetes

WELCOME TO THE FUTURE OF INFRASTRUCTURE

In a single sentence: Kubernetes intends to radically simplify the task of building, deploying and maintaining distributed systems.

Kubernetes, or K8s, is open-source software originally created by Google to take on the burden of managing container sprawl: applications/microservices running in containers (isolated user-space instances sharing a single kernel on a Linux/UNIX OS) that scale out onto potentially tens or hundreds of individual containers across multiple hosts. Google later handed it to the Cloud Native Computing Foundation.

In Kubernetes, there is a master node and multiple worker nodes.

Each worker node manages multiple pods. A pod is a group of one or more closely related containers, deployed together as a single working unit.

Application developers design their application based on pods. Once those pods are ready, you tell the master node the pod definitions and how many need to be deployed.

Kubernetes takes the pods and deploys them to the worker nodes. In the event that a worker node goes down, Kubernetes deploys replacement pods onto a functioning worker node. The complexity of managing many containers is removed.

It is a large and complex system for automating the deployment, scaling and operation of applications running in containers.

MINIKUBE

Rather than create a separate post on MiniKube, I’ll incorporate it here in my Kubernetes post. MiniKube is a way of learning Kubernetes by running a single-node cluster locally on a laptop/desktop machine. The commands required to get it up and running on Linux Mint/Debian are shown below.

INSTALLATION OF MINIKUBE ON DEBIAN/MINT

#INSTALL VIRTUALBOX
sudo apt-get install virtualbox virtualbox-qt virtualbox-ext-pack virtualbox-guest-additions-iso virtualbox-guest-utils

#INSTALL KUBECTL
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

#INSTALL MINIKUBE VIA DIRECT DOWNLOAD
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube

#ADD MINIKUBE EXECUTABLE TO PATH
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/

#VERIFY INSTALLATION
minikube start --driver=virtualbox

#CHECK STATUS
minikube status

#STOP MINIKUBE
minikube stop

#START MINIKUBE
minikube start

#ENABLE BASH COMPLETION
kubectl completion bash

#ENABLE KUBECTL AUTOCOMPLETION
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl >/dev/null   #writing to /etc requires root privileges

#IF YOU USE A COMMAND ALIAS FOR KUBECTL
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc

#START AND OPEN MINIKUBE DASHBOARD
minikube dashboard
I had to reboot and enable VTX (virtualisation support) in the BIOS on my HP EliteDesk.

There was an issue with the VirtualBox driver that needed to be resolved: running minikube config set driver virtualbox, then minikube delete and minikube start resolved it. The minikube virtual machine can then be seen running in the VirtualBox console (virtualbox-qt).

Now that minikube is running, we’re ready to open the console and see our Kubernetes single-node cluster that is running on our local machine.

The minikube dashboard command enables the dashboard and opens its URL in your web browser. The Kubernetes dashboard, running on our minikube virtual machine in VirtualBox, is displayed in the Brave browser.

BACK TO KUBERNETES…

CONTAINERS AND ORCHESTRATION

To understand Kubernetes, we must first understand containers and orchestration, so make sure you’ve read and understood Docker first.

page under construction…


Docker

When it comes to containers, Docker is the most popular technology out there. So, why do we need containers? How do they differ from VMs? Very briefly, a VM (as in a VMware ESXi virtual machine) is an entirely self-contained installation of an entire operating system that sits on top of a “bare metal hypervisor” layer, which sits on top of the physical hardware. Unlike installing an OS directly on the physical hardware, the bare-metal hypervisor layer allows multiple installations of multiple OSes to co-exist on the same hardware, entirely separate apart from the physical resources they share. There may be a cluster of ESXi hosts, managed by a central vCenter server, which allows better distribution of multiple VMs across multiple physical hosts for the purposes of Distributed Resource Scheduling and High Availability. This is a more efficient use of physical servers, but still quite wasteful of resources and storage, since many Windows VMs, for example, would be running the same binaries and storing the same information many times over.

Containers, on the other hand, are logically separated applications and their dependencies that each reside in an isolated “container” (called a “zone” or “jail” on some UNIX flavours) on a single instance of an underlying UNIX/Linux-based OS. They share common components of the underlying OS, which is a more efficient use of space and physical resources, since only one instance of the operating system runs on the physical machine. This also reduces the overhead of patching and, to some extent, monitoring: compared with hosting an application on multiple, entirely separate full-stack VMs in a virtual environment, only the parts of the stack with unique or incompatible dependencies are separated into their own containers. This means that a container may be very small indeed compared to a VM, and containers are typically restarted in the time it takes to start a daemon or service, compared to the time it takes to boot an entire OS.

So in summary, a container is a more intelligent, more efficient way of implementing the various layers of a full-stack application whose components won’t otherwise co-exist on the same OS, due to their individual dependencies on slightly different versions of surrounding binaries and/or libraries.

On VMWare ESXi, there is no Operating System layer (shown in orange in the diagram), but VMWare Workstation or Oracle VirtualBox provides similar full-OS VM separation within a software hypervisor running as an application in its own right atop an underlying desktop OS such as Windows or Linux. Hence the term bare-metal hypervisor (since the hypervisor layer shown in blue runs atop the server hardware shown in grey). Docker is similar to a software hypervisor, but rather than store multiple similar full OS/app stacks, it provides separation further up the stack, above the OS layer, such that just the unique requirements of the app/daemon/microservice are hosted in any given container, and nothing more, in an effort to be as efficient as possible.

REASONS FOR ADOPTING CONTAINERISATION

In most information technology departments, there’ll be a team of developers who code and build apps using combinations of their own code, databases, messaging, web servers and programming languages, each of which may have different dependencies in terms of binaries/libraries and versions of those binaries/libraries. This is referred to as the Matrix from Hell, as each developer will be building, knocking down and rebuilding their own development environment, which likely runs on their own laptop. There’ll be a Development environment too, the intention of which is to mirror the Production environment, although there’s no guarantee of that. There may be other Staging or Pre-Production environments too, again with no guaranteed consistency despite everybody’s best efforts. The problems arise when deploying an app from a Development environment to the Production environment, only to find it doesn’t work as intended.

The solution to this problem is to put each component of the application you ultimately want to ship into production/cloud into its own container.

All the components of an application running on a single Linux OS that has Docker installed can be placed in their own containers. Once an application component is contained within its own container, all the dependencies that component has (other Linux packages and libraries) will also be contained within the same container. So each component has exclusive access to just the packages and libraries it needs, without the potential to interfere with and break adjacent components on the same underlying host operating system.

In order to isolate the dependencies and libraries in this way, a typical Docker container will have its own Processes, Network interfaces and Mount points, just sharing the same underlying kernel.

Two containers with their own processes, network interfaces and mounts share the kernel of the OS that Docker is running on.

Linux containers traditionally run on LXC, a low-level container technology that is tricky to set up and maintain, hence Docker was born to provide higher-level tools that make the process of setting up containers easier.

Since Docker containers only share the kernel of the underlying OS, and have their own processes, network interfaces and mounts, it is possible to run entirely different linux OSes inside each container, since the only part of the underlying OS that the containers rely on is its kernel!

If you want, entirely different linux distributions that are able to run on the same kernel can be run in docker containers! However, since the kernel is very small, this is arguably as wasteful as simply using VMWare ESXi to host individual VMs, each running a different linux distro?!

Since the underlying kernel is the shared component, only OSes that are capable of running on the same kernel can exist on the same docker host. Windows could not run in the scenario above, and would need to be run on a Windows Server based Docker host instead. This is not a restriction for the VMWare ESXi bare-metal hypervisor of course, where Windows and Linux can co-exist on the same physical host, since their kernels are contained within each VM, along with everything else.

HOW IS DOCKER CONTAINERISATION DONE?

The good news is that containers are nothing new. They’ve been about for over a decade and most software vendors make their operating systems, databases, services and tools available in container format via Docker Hub.

Some of the official container images available on Docker Hub.

Once you identify the images you need and install docker on your host, bringing up an application stack for the component you want is as easy as running a docker command.

docker run ansible  #downloads and runs a container running ansible
docker run mongodb  #downloads and runs a container running mongodb
docker run redis    #downloads and runs a container running redis
docker run nodejs   #downloads and runs a container running node.js

A docker image can be installed multiple times, for example in a scenario where you want multiple instances of node.js (you’d need a load balancer in front of the docker host(s)), so that in the event of a node.js container going down, docker would re-launch a new instance of that container.

So, the traditional scenario, where a developer puts together a bunch of components and builds an application, then hands it to operations, only for it to not install or run as expected because the platform differs in some way, is eliminated by ensuring all the dependencies of each component are contained in its own container, where it is thus guaranteed to run. The docker image can subsequently be deployed on any docker host, on-prem or in the cloud.

It worked first time!

INSTALLING DOCKER

My everyday desktop machines all run Linux Mint for its ease of use and its propensity to just work when it comes to my various desktop computing requirements. You’d likely not run it on your servers though, instead choosing Debian or Ubuntu (which Mint is actually based on, but not guaranteed to be exactly the same). Your server linux distro choice should be based on support, and by that I mean support for any problems that arise, as well as support in terms of software vendors and, in our case, docker image availability.

So, since I’m blogging on this same Mint machine, I’m going to install Docker via the Software Manager for immediacy. I will however cover installation on Ubuntu below.

Quickest and most reliable way of installing a working Docker on Mint, is to use the Software Manager.

The first step is to get docker. Start here.

Take the Docker for Linux option, unless you’re running it on a Windows or Mac Desktop machine

Follow the instructions here to install docker. The steps are also shown below.

#INSTALLING DOCKER ON UBUNTU
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo docker run hello-world
sudo docker run hello-world will download the hello-world container, install it and produce the output shown.
Always precede your docker command with sudo. It needs those root level privileges to communicate with the docker daemon.

sudo docker run -it ubuntu bash opens a root shell on container ‘ade951cb999’

From the ubuntu bash container, issuing a few linux commands quickly shows that the container has no reach outside of itself, but shares stats returned by the kernel, such as disk space for the /dev/sda2 partition (the docker host’s root partition), cpu, memory and swap utilisation.

The hostname is different, there is only one process running, bash (excluding the ps -ef command itself), it can see how much physical disk space is used on the docker host (67% /dev/sda2), it has its own set of directories (/home shows as being empty) and the output from top shows only the 1 running process.

Standard linux commands being run inside the ubuntu bash container
The CPU, Memory and Swap statistics are the same as the Docker host that the container is running on since they share the same kernel.
/etc/issue says Ubuntu despite the Docker host being Linux Mint
The Docker host running Linux Mint.
cat /etc/*release* reveals more information about the operating system running in the container.

To display the version of docker that’s been installed, use sudo docker version

sudo docker version

DOCKER IMAGES AND COMMANDS

Remember, you probably need to precede docker commands with sudo.

Here are some initial commands before I stop to make an important point…

#DOCKER COMMANDS
docker run nginx  #checks for a local copy of the nginx container image, if there isn't one, it'll go out to docker hub
                  # and download it from there.  For each subsequent command, it'll use the local copy.
docker ps         #lists all running containers
docker ps -a      #lists all containers present, running or otherwise
docker start <container-id> or <container-name>   #(re)start a non-running container
docker stop <container-id> or <container-name>    #stop a running container
docker rm <container-id> or <container-name>      #remove container
docker images     #shows all docker images downloaded from docker hub on the local system
docker rmi nginx    #removes the docker image from the system (make sure none are running)
docker pull ubuntu  #pulls ubuntu image to local system but don't run it until docker run command is issued
Containers are only running while the command executed inside them is running. Once the process stops, the container stops running. This is an important distinction from VMs, which stay running and consume system resources regardless. Note also the final column, a randomly assigned “name” for the container.

An important distinction between containers and VMs is that whereas a VM stays running all the time, a container only runs while the command inside it is running. Once the process for the command completes, the container is shut down, thus handing back any and all resources to the docker host.

Taking the 1st, 2nd and 3rd columns from the sudo docker ps -a command above for closer inspection, you can see that there is a container ID, the docker image, and the command run within that docker image, e.g.

Earlier we executed the command sudo docker run ubuntu bash, and the docker host checked for a local copy of the ubuntu image, didn’t find one, so downloaded one from docker hub. It then started the container and ran the bash command within it, and thus we were left at a running bash prompt inside our container running ubuntu. As soon as we typed exit, the bash terminal closed, and since there were no running processes remaining on that container, the container was subsequently shut down.

Another container, docker/whalesay, was also downloaded; it ran the command cowsay Hello-World! and then exited, dropping us straight back at our own prompt on the docker host, unlike ubuntu bash. This is because once the cowsay Hello-World! command had executed, there was no further need for the container, so it was shut down by the docker host.

docker exec mystifying_hofstadter cat /etc/hosts    #execute a command on an existing container
docker start <container-id> or <container-name> #starts an existing non-running container
docker stop <container-id> or <container-name> #stops a running container that's been STARTed

So, docker run <image-name> <command> will get a docker image, start it and execute a command within it, downloading the image from docker hub if required. But once the image is stored locally and that container exists on our docker host system, albeit in an exited state, what if we want to run the command again? docker run <image-name> <command> would create another, duplicate container and execute the command in that. To execute the same command in the existing container, we use the docker start command followed by the docker exec command, and optionally finish up with the docker stop command, e.g.

Before using docker exec to execute a command on an existing container, you’ll need to docker start it first.

DETACH and ATTACH (background and foreground)

If you’re running a container that produces some kind of output to the foreground but you want to run it and return to the prompt instead, like backgrounding a linux command, you can docker run -d <container> to run it in detached mode. To bring that running container to the foreground, you can docker attach <container>. Where <container> is the id or name of the container.

docker run -d kodekloud/simple-webapp    #runs a container in the background if it would otherwise steal foreground
docker attach <container-id>             #bring a detached container (running in the background) to the foreground

If you have a docker image like redis that, when run with docker run, stays running interactively in the foreground, there is no way to background (detach) it without hitting CTRL-C to kill the process, then re-running it with docker run -d so that it runs in detached mode. However, if you run your containers with docker run -it, then you can use the key sequence CTRL-P CTRL-Q to detach without killing first. Reattach using the docker attach <container> command. According to the docker run --help page, -i runs it in interactive mode and -t allocates a pseudo tty (terminal) to the running container.

DOCKER COMMANDS AND HELP SYSTEM

Docker has a very nicely thought out help system. Simply type docker and all the management commands and docker commands are listed along with descriptions. Type docker <command> --help and you’ll see more information on that particular command.

docker commands. Use docker <command> --help to dig deeper.

RUN TAG

If we run a container, e.g. redis with docker run redis, we can see in the output that the version of redis is Redis version=5.0.8

The version TAG is Redis version=5.0.8 in our redis container.

If we wanted to run a different version of redis, say version 4.0, then we can do so by specifying the TAG, separated by a colon e.g. docker run redis:4.0

Run a different version of the redis container by specifying the TAG in the docker run redis:4.0 command

In fact, if you specify no TAG, then what you’re actually doing is specifying the :latest tag, which is the default if no tag is specified. To see all the Tags supported by the container, go to docker hub, search for the container and you’ll see the View Available Tags link underneath the command.

RUN -STDIN

Already mentioned above, if you have a simple shell script that prompts for user input, then produces an output, e.g.

#!/bin/bash
echo "What is your name?"
read varname
echo "Hello $varname. It's nice to meet you."
exit
hello.sh needs to prompt the user for input before producing an output.

If this simple program were containerised with docker, when run, it needs to prompt the user for input before it can produce an output. So, the command needed to run this container would be docker run -i -t <image>. The -i runs the container in interactive mode so you can enter stdin, and the -t allocates a pseudo terminal so you get to see the stdout.

PORT MAPPING

Before talking about port mapping, I’ll first cover how to see the internal ip address assigned to the container and the port the container is listening on. The output of docker ps will display a PORTS column, showing what port the container is listening on, then use docker inspect <container-name> to see the IP Address.

display the port using docker ps and use docker inspect to display the internal ip address.

The internal IP address is not visible outside of the docker host. In order for users to connect to the port on the container, they must first connect to a port on the docker host itself, that is then mapped to the port on the container i.e.

Here we see port 80 on the docker host is mapped to port 5000 on the container running a web app.

To map a local port on the docker host to a listening port on the container, we use the docker run -p 80:5000 <image-name> command. The -p stands for publish and creates a firewall rule allowing the flow of traffic through the specified ports. By default a container doesn’t publish any of its ports.

Users can connect to the IP and Port on the Docker host, and be tunnelled through to the container.

VOLUME MAPPING AND PERSISTENT DATA

If you’re running a container that stores data, any changes that occur are written inside that container, e.g. a mysql container will store its tablespace files in /var/lib/mysql inside the container.

A MySQL database will write data to its internal file system. But how does that work?

docker run -v /opt/datadir:/var/lib/mysql mysql mounts the directory /opt/datadir on the docker host into /var/lib/mysql on the mysql container. Any data that is subsequently written to /var/lib/mysql by the mysql container will land on /opt/datadir on the docker host, and as such will be retained in the event that the mysql container is destroyed by docker rm mysql.

CONTAINER INFORMATION

As mentioned previously, the docker inspect command returns many details about the container in JSON format: ID, Name, Path, Status, IP Address and many other details.

LOGS AND STDOUT

So, you can run the docker run -it redis command and see the standard output, but if you have an existing container that you start with docker start <container-name> and then attach to with docker attach <container-name>, you won’t see any stdout being sent to the screen. This is because, unlike running it interactively with an assigned tty, simply starting the container and attaching to it will not assign a tty. In order to view the stdout on the container, use docker logs <container-name> and you’ll see the same output that you would with the docker run -it redis command. Obviously, using docker run redis would create a new container from the redis image, not start an existing redis container.

Starting and attaching to a container that produces stdout will not display the stdout
Using the docker logs <container-name> command to view the stdout on that container.

ENVIRONMENT VARIABLES

Consider the following python code, web-page.py, which creates a web server that serves a web page with a defined background colour and a message. If the python program has been packaged up into a docker image called my-web-page, you’d run it using the docker run my-web-page command, then connect to it from the web browser on the docker host on port 8080 to view the page.

import os
from flask import Flask, render_template  #render_template must be imported too

app = Flask(__name__)

color = "red"

@app.route("/")
def main():
    print(color)
    return render_template('hello.html', color=color)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

The python program has the variable color = "red" within it, but you want to be able to pass in values for that variable from the docker host when you run the container. To do this, you can move the variable outside of the python code by replacing the color = "red" line with color = os.environ.get('APP_COLOR')

import os
from flask import Flask, render_template

app = Flask(__name__)

color = os.environ.get('APP_COLOR')  #read the colour from an environment variable

@app.route("/")
def main():
    print(color)
    return render_template('hello.html', color=color)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

On the docker host, you can create and export a variable and re-run the app: export APP_COLOR=blue; python web-page.py. Refresh the page and the colour will change, since its value is now being read from an environment variable on the docker host.

To run a docker image and pass in a variable, you can use the command

docker run -e APP_COLOR=orange my-web-page

to pass the variable APP_COLOR=orange into the container image my-web-page before the container is started.

To find the environment variables set on a container, you can use the docker inspect <container-name> command; in the JSON output, under the "Config": { section, in the "Env": [ subsection, you’ll see the "APP_COLOR=blue" variable, along with some other variables too.

docker inspect will show the variables passed in from the docker host

CREATING A DOCKER IMAGE

So, you now have a good idea of how to interact with docker images and docker containers running on your linux system. We’ve even seen some code that can be containerised, but we’ve not elaborated on how you get from a python program or shell script to a docker container image. Let’s cover that important topic next.

Firstly, let’s ask “Why would you want to dockerize a program?”. There are two answers to that question. The first is that you cannot find what you want on docker hub already, so you want to make it yourself. The second is that you have a program on your development laptop/environment and want to containerise it for ease of shipping to operations teams or docker hub and deployment on production infrastructure.

So, let’s take the above example of a web server and web page python script called web-page.py that uses the python flask module. If I were to build a system to serve that page, I’d follow these steps.

  1. Install the Ubuntu OS
  2. Perform a full update of all packages using apt-get update && apt-get dist-upgrade
  3. Install python3.x and any dependencies using apt-get install python3
  4. Install python3.x script module dependencies using the python pip package manager
  5. Create/Copy the python source code into /opt directory
  6. Run the web server using the flask command.

DOCKER FILE

A Dockerfile is basically the sequence of commands required to perform the sequence of steps listed above. It is written in an INSTRUCTION argument format: everything on the left in CAPS is an instruction, and everything that follows it is an argument.

It looks like this…

#Dockerfile for cyberfella/my-web-page
FROM ubuntu

RUN apt-get update
RUN apt-get install -y python python-pip

RUN pip install flask
RUN pip install flask-mysql

COPY . /opt/source-code

ENTRYPOINT FLASK_APP=/opt/source-code/web-server.py flask run
The first line contains a FROM instruction. This specifies the base OS or another docker image.

To build the docker image, use the docker build -t cyberfella/my-web-page . command from the directory containing the Dockerfile. This will create a local docker image called cyberfella/my-web-page.

To push the image to Docker Hub, use the command docker push cyberfella/my-web-page

Note that cyberfella is my Docker Hub login name. You will need to register on Docker Hub first so that there’s a repository to push to. You can also link your Docker Hub repository with your GitHub repository and automate new Docker image builds when updated code is pushed to GitHub.


Savage leaky programs

It’s come to my attention recently that, despite a fresh install of Linux Mint, certain programs leak memory like a basket and hang around after they’re closed, too.

I’d noticed my machine freezing intermittently and adding the memory monitor panel item revealed that the system memory was filling up.

The blue mem bar fills up over time when Brave is left open. Disappointing for such an otherwise excellent Web Browser.

xreader and brave seemed to be the main culprits but since rebuilding my desktop machine, I’ve not been using many other programs apart from ledger live to track the value of my crypto currency portfolio while the fed prints money ad infinitum during the coronavirus pandemic. I digress.

Killing processes gets old really quick, so I wrote a quick’n’dirty little shell script to do it for me. Rather than killing individual processes, it savages all processes by the same name.

I shall call it savage.sh and share it with the world, right here. Not on github.

Killing all running processes for ledger and brave using savage.sh
#!/bin/bash
# savage.sh finds all process ID's for the specified program running under your own user account and kills them
# in order to free up system resources.  Some programs have severe memory leaks and consume vast amounts of RAM and 
# swap if left running over time.
#
# Usage: savage.sh 
#
# Written by M. D. Bradley during Coronavirus pandemic, March 2020

#Variables
user=$(whoami)
memfree=$(free | grep Mem | awk '{print $4}')
#Code
echo "Program to kill e.g. xreader?: "
read program
pidcount=$(ps -fu "$user" | grep "$program" | grep -v grep | wc -l)  #grep -v grep stops the grep process itself being counted
ps -fu "$user" | grep "$program" | grep -v grep | awk '{print $2}' | while read eachpid; do 
	kill "$eachpid" >/dev/null 2>&1
done
memfree2=$(free | grep Mem | awk '{print $4}')
freedmem=$(( memfree2 - memfree ))  #free reports kibibytes by default
if [ $pidcount -eq 1 ] 
then
	echo "Found $pidcount process running for $program"
	echo "Killed it.  Freed up $freedmem kB."
fi
if [ $pidcount -gt 1 ] 
then
	echo "Found $pidcount processes running for $program"
	echo "Savaged them. Freed up $freedmem kB."
fi

Python

INTRODUCTION

Python is a simple, easy to learn programming language that bears some resemblance to shell script. It has gained popularity very quickly due to its shallow learning curve. It is supported on all operating systems. https://www.python.org/

INSTALLATION

Installers for all operating systems are available here, and on linux it tends to be installed by default in most distributions. This is quickly and easily checked using the python3 -V command.

You may find that the version installed in your distribution lags slightly behind the very latest available from python.org. If you want to install the very latest version, then you can either download the source code and compile it, or add the repository and install it using your package management system. Check the version you want isn’t already included in your package management system first using apt-cache search python3.8

INSTALLATION VIA SOURCE CODE (debian based distributions)

Ensure the pre-requisites are installed first from the distro’s default repo’s.

sudo apt-get install build-essential checkinstall

sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev

Download the source code from here, or use wget, and extract it using tar -zxvf Python-3.8.2.tgz

Download the python source code from the command line using
sudo wget https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tgz

Extract the archive using tar xzf Python-3.8.2.tgz
cd into Python-3.8.2 directory
sudo ./configure --enable-optimizations to create makefile
sudo make altinstall to install python without overwriting the version already installed in /usr/bin/python by your distro
Check python version with the python3.8 -V command

EXECUTING A PYTHON SCRIPT

Before getting into coding in python, I’ll put this section in here just to satisfy your curiosity about how you actually execute a python script, since python ain’t shell script…

The simplest of all python scripts? The simple “hello world” script hello-world.py
Attempting to execute a python script like you would a shell script doesn’t end well. Python ain’t Shell after all.

The hint was in the use of the python3.8 -V command earlier to check the version of python, i.e. to execute your python script using python 3.8.2, you’d use the command python3.8 hello-world.py

PYTHON PROGRAMMING

COMMENTS

Comment your python code throughout for readability by placing a # at the front of the line. The python interpreter will ignore any lines beginning with a hash symbol. Alternatively, use triple quotes, e.g. ''', but hashes are the official method of commenting a line of code/notes.

WORKING WITH STRINGS

hello world.

print(“hello world”) -prints the output hello world to the screen

country_name = “England” -create a variable country_name and assign string value of England

number_of_years = 2020 -create a variable number_of_years and assign numeric value of 2020

brexit_event = True -create a boolean variable with a true or false value

print(“hello ” + country_name + ” and welcome to the year ” + str(number_of_years) -Note that you can’t concatenate a string and an integer so you need to convert the integer to a string using the str() function

executing the hello.py script comprised of the three lines above

print (“Cyberfella Limited”\n”python tutorial”) -puts a new line between the two strings

print (“Cyberfella\\Limited”) -escape character permits the print of the backslash or other special character that would otherwise cause a syntax error such as a “

phrase = “CYBERFELLA”

print (phrase.lower()) -prints the string phrase in lowercase. There are other functions builtin besides upper and lower to change the case.

print (phrase.islower()) -returns False if the string phrase is not lower case

print (phrase.lower().islower()) -returns True after converting the uppercase string phrase to lowercase.

print (len(phrase)) -returns the length of the string, i.e. 10

print (phrase[0]) -returns the first letter of the string, i.e. C

print (phrase.index(“Y”)) -returns the location of the first matching parameter in the string i.e. 1 Note you can have a string as a parameter e.g. CYB

print (phrase.replace(“FELLA”,”FELLA LTD”)) -replaces matching part of the string (substring) FELLA with FELLA LTD
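
Putting those string examples together, here they are as one runnable script (expected output shown in the comments):

phrase = "CYBERFELLA"
print(phrase.lower())                         # cyberfella
print(phrase.islower())                       # False
print(phrase.lower().islower())               # True
print(len(phrase))                            # 10
print(phrase[0])                              # C
print(phrase.index("Y"))                      # 1
print(phrase.replace("FELLA", "FELLA LTD"))   # CYBERFELLA LTD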

WORKING WITH NUMBERS

Python can do basic arithmetic as follows, print (2.5 + 3.2 * 5 / 2)

python performing arithmetic: 2.5 + 3.2 * 5 / 2 = 10.5, based on the PEMDAS order of operations. The “operations” are addition, subtraction, multiplication, division, exponentiation and grouping; the “order” states which operations take precedence over others: parentheses first, then exponents, then multiplication and division (from left to right), then addition and subtraction (from left to right).

To change the order of operations, place the higher priority arithmetic in parenthesis, e.g. print (2 * (3 + 2))

3+2 = 5 and 2 * 5 = 10. The addition is prioritised over the multiplication by placing the addition in () to be evaluated first.

print (10 % 3) is called the Modulus function, i.e. 10 mod 3. This will divide 10 by 3 and give the remainder, i.e. 10 / 3 = 3 remainder 1. So it outputs 1.

How to perform a MOD function, i.e. give the remainder when two numbers are divided. e.g. 10 % 3 (10 mod 3) = 1

The absolute value can be obtained using the abs function, e.g. print (abs(my_num))

The variable my_num had been assigned a value of -98

print (pow(3,2)) prints the outcome of 3 to the power of 2, i.e. 3 squared.

3 squared is 9

print (max(4,6)) prints the larger of the two numbers, i.e. 6 min does the opposite

6 is larger than 4

print (round(3.7)) rounds the number, i.e. 4

3.7 rounded is 4

There are many more math functions but they need to be imported from the external math module. This is done by including the line from math import * in your code

print (floor(3.7)) takes the 3 and chops off the 7 (rounds down) ceil does the opposite (rounds up)

The floor function imported from the math module, returns the whole number but not the fraction

print (sqrt(9)) returns the square root of a number, i.e. 3.

the square root of 9 is 3.0 according to python's sqrt function

GETTING INPUT FROM USERS

name = input("Enter your name: ") will create a variable name and assign it the value entered by the user when prompted by the input command.

num1 = input("Enter a number: ")

num2 = input("Enter another number: ")

Any input from a user is treated as a string, so in order to perform arithmetic functions on it, you need to use int or float to tell python to convert the string to an integer or floating point number if it contains a decimal point.

result = int(num1) + int(num2) or result = float(num1) + float(num2)

print (result)
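
As one runnable example, the fragments above come together like this (float() is used so that decimal input also works):

num1 = input("Enter a number: ")
num2 = input("Enter another number: ")
result = float(num1) + float(num2)  # convert the string input before adding
print(result)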

WORKING WITH LISTS

Multiple values are stored in square brackets, in quotes, separated by commas,

friends = ["Eldred", "Chris", "Jules", "Chris"]

You can store strings, numbers and booleans together in a list,

friend = ["Eldred", 1, True]

Elements in the list start with an index of zero. So Chris would be index 1.

print(friends[1])

Note that print (friends[-1]) would return the value on the other end of the list, and print (friends[1:]) will print the value at index 1 and everything after it. print (friends[1:3]) will return the element at index 1 and everything after it, up to but not including the element at index position 3.

To replace the values in a list,

friends[1] = "Benny" would replace Chris with Benny

USING LIST FUNCTIONS

lucky_numbers = [23, 8, 90, 44, 42, 7, 3]

To extend the list of friends with the values stored in lucky_numbers, effectively joining the two lists together,

friends.extend(lucky_numbers) Note that since Python 3 you can't sort a list containing a mix of integers and strings, so you'd need to convert the numbers first, e.g. friends.extend([str(n) for n in lucky_numbers]), before using functions such as sort, or you'll receive an error when attempting to sort the combined list.

To simply add a value to the existing list,

friends.append("Helen")

To insert a value into a specific index position,

friends.insert(1, "Sandeep")

To remove a value from the list,

friends.remove("Benny")

To clear a list, use friends.clear()

To remove the last element of the list, use friends.pop()

To see if a value exists in the list and to return its index value,

print (friends.index("Julian"))

To count the number of similar elements in the list,

print (friends.count("Chris"))

To sort a list into alphabetical order,

friends.sort()

print (friends)

To sort a list of numbers in numerical order,

lucky_numbers.sort()

print (lucky_numbers)

To reverse a list, lucky_numbers.reverse()

Create a copy of a list with,

friends2 = friends.copy()

WORKING WITH TUPLES (pronounced tupples)

A tuple is a type of data structure that stores multiple values but has a few key differences to a list. A tuple is immutable. You can’t change it, erase elements, add elements or any of the above examples of ways you can manipulate a list. Once set, that’s it.

Create a tuple the same way you would a list, only using parenthesis instead of square brackets,

coordinates = (3, 4)

print (coordinates[0]) returns the first element in the tuple, just as it does in a list.

Generally, if python stores data for any reason whereby it doesn’t stand to get manipulated in any way, it’s stored in a tuple, not a list.
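
Attempting to change a tuple demonstrates the immutability; the second line of this sketch raises an error:

coordinates = (3, 4)
coordinates[0] = 10  # raises TypeError: 'tuple' object does not support item assignment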

FUNCTIONS

Just as with Shell Scripting, a function is a collection of code that can be called from within the script to execute a sequence of commands. Function names should be in all lowercase, and underscores are optional where you want to show a space in the function name for better readability, e.g. hello_world or helloworld are both acceptable.

def hello_world():
    print ("Hello World!")

Commands inside the function MUST be indented. To call the function from within the program, just use the name of the function followed by parenthesis, e.g.

hello_world()

You can pass in parameters to a function as follows,

def hello_user(name):
    print ("Hello " + name)

Pass the name in from the program with hello_user("Bob")

def hello_age(name, age):
    print ("Hello " + name + " you are " + str(age))

hello_age("Matt", 45)

RETURN STATEMENT

In the following example, we’ll create a function to cube a number and return a value

def cube(num):
    return num ** 3  #note: ** is python's exponent operator; ^ is bitwise XOR, not "to the power of"

Call it with print (cube(3)), which prints 27. Note that without the return statement, the function would return nothing despite performing the math as instructed.

result = cube(4)

print (result)

Note that any code placed after the return statement in a function will never execute, since the function has already handed control back to the caller.
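
A quick illustration of that point:

def cube(num):
    return num ** 3
    print("this never executes")  # unreachable: the function has already returned

print(cube(3))  # 27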

IF STATEMENTS

Firstly, set a Boolean value to a variable,

is_male = True

An if statement in python executes the indented code when the boolean value of the variable in the if statement is True, i.e.

if is_male:
    print ("You are a male")

This would print You are a male to the screen, whereas if is_male = False, it’d do nothing. Adding an else: block covers the False case:

if is_male:
    print ("You are a male")
else:
    print ("You are not a male")

Now, what about an if statement that checks multiple boolean variables? e.g.

is_male = True

is_tall = True

if is_male or is_tall:
    print ("You're either male or tall or both")
else:
    print ("You're neither male nor tall")

An alternative to using or is to use and, e.g.

if is_male and is_tall:
    print ("You are a tall male")
else:
    print ("You're either not male or not tall or both")

Finally, by using the elif statement(s) between the if and else statements, we can execute a command or commands in the event that is_male = True but is_tall is False, i.e.

if is_male and is_tall:
    print ("You are male and tall")
elif is_male and not(is_tall):
    print ("You are not a tall male")
elif not(is_male) and is_tall:
    print ("You are tall but not male")
else:
    print ("You are neither male nor tall")

IF STATEMENTS AND COMPARISONS

The following examples show how you might compare numbers or strings using a function containing if statements and comparison operators.

#Comparison Operators
#Function to return the biggest number of three numbers
def max_num(num1, num2, num3):
    if num1 >= num2 and num1 >= num3:
        return num1
    elif num2 >= num1 and num2 >= num3:
        return num2
    else:
        return num3

print (max_num(3, 4, 5))

#Function to compare three strings
def match(str1, str2, str3):
    if str1 == str2 and str1 == str3:
        return "All strings match"
    elif str1 == str2 and str1 != str3:
        return "Only the first two match"
    elif str1 != str2 and str2 == str3:
        return "Only the second two match"
    elif str1 == str3 and str1 != str2:
        return "Only the first and last match"
    else:
        return "None of them match"

print (match("Bob", "Alice", "Bob"))

Note that Python 2 also supported <> as an alternative to !=, but this was removed in Python 3, so use != only. Python 3 can also compare strings and numbers: an equality test such as '12' == 12 simply returns False, while an ordering comparison such as '12' < 12 raises a TypeError.
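
A quick Python 3 demonstration of those comparison behaviours:

print('12' != 12)  # True - a string never equals an integer
print('12' == 12)  # False
# print('12' < 12) would raise TypeError: '<' not supported between instances of 'str' and 'int'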

BUILDING A CALCULATOR

This calculator will be able to perform all basic arithmetic, addition, subtraction, multiplication and division.

#A Calculator
num1 = float(input("Enter first number: "))
op = input("Enter operator: ")
num2 = float(input("Enter second number: "))

if op == "+":
    print(num1 + num2)
elif op == "-":
    print(num1 - num2)
elif op == "/":
    print(num1 / num2)
elif op == "*":
    print(num1 * num2)
else:
    print("Invalid operator")

DICTIONARIES

Key and Value pairs can be stored in python dictionaries. To create a dictionary to store say, three letter months and full month names, you’d use the following structure. Note that in a dictionary the keys must be unique.

monthConversions = {
"Jan": "January",
"Feb": "February",
"Mar": "March",
"Apr": "April",
"May": "May",
"Jun": "June",
"Jul": "July",
"Aug": "August",
"Sep": "September",
"Oct": "October",
"Nov": "November",
"Dec": "December",
}

To retrieve the value for a given key, use print(monthConversions["Sep"]) or print(monthConversions.get("Sep")) using the get function.

The get function also allows you to specify a default value to return in the event the key is not found in the dictionary, e.g.

print(monthConversions.get("Bob", "Key not in dictionary"))

WHILE LOOPS

The following example starts at 1 then loops until 10

#WHILE LOOP
i = 1
while i <= 10:
    print(i)
    i = i + 1  #or use i += 1 to increment by 1

print ("Done with loop")

The while loop will execute the indented code while the condition remains True.

BUILDING A GUESSING GAME

#GUESSING GAME
secret_word = "cyberfella"
guess = ""
tries = 0
limit = 3
out_of_guesses = False

while guess != secret_word and not out_of_guesses: #will loop code while conditions are True
    if tries < limit:
        guess = input("Enter guess: ")
        tries += 1
    else:
        out_of_guesses = True

if out_of_guesses:
    #executes if condition/boolean variable is True
    print ("Out of guesses, you lose!")
else:
    #executes if boolean condition is False
    print ("You win")

FOR LOOPS

Here are some examples of for loops

#FOR LOOPS
for eachletter in "Cyberfella Ltd":
    print (eachletter)

friends = ["Bob", "Alice", "Matt"]
for eachfriend in friends:
    print(eachfriend)

for index in range(10):
    print(index) #prints all numbers starting at 0 excluding 10

for index in range(3,10):
    print(index) #prints all numbers between 3 and 9 but not 10

for index in range (len(friends)):
    print(friends[index]) #prints out all friends at position 0, 1, 2 etc depending on the length of the list or tuple of friends

for index in range (5):
    if index == 0:
         print ("first iteration of loop")
    else:
         print ("not first iteration")

EXPONENT FUNCTIONS

print (2**3) #prints 2 to the power of 3

Create a function called raise_to_power to take a number and multiply it by itself a number of times,

def raise_to_power(base_num, pow_num):
    result = 1
    for index in range(pow_num):
        #carry on multiplying the number by itself until you hit the range limit specified by pow_num
        result = result * base_num
    return result
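
Calling it, for example:

print(raise_to_power(2, 10))  # 1024
print(raise_to_power(5, 3))   # 125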

2D LISTS AND NESTED LOOPS

In python, you can store values in a table, or 2D list, and print the values from certain parts of the table depending on their row and column positions. note that positions start at zero, not 1.

#Create a grid of numbers, that is 4 rows and 3 columns
number_grid = [
[1,2,3],
[4,5,6],
[7,8,9],
[0]
]
#return the value from the first row (row 0), first column (position 0)
print(number_grid[0][0])
#returns 1

#return the value from the third row (row 2), third column (position 2)
print(number_grid[2][2])
#returns 9

for eachrow in number_grid:
    print(eachrow)

for eachrow in number_grid:
    for column in eachrow:
        print(column)
#returns the value of each column in each row until it hits the end

BUILD A TRANSLATOR

This little program is an example of nested if statements that take user input and translate any vowels in the string input to an upper or lowercase x

#CONVERTS ANY VOWELS TO AN X
def translate(phrase):
    translation = ""
    for letter in phrase:
        if letter.lower() in "aeiou":
            if letter.isupper():
                translation = translation + "X"
            else:
                translation = translation + "x"
        else:
            translation = translation + letter
    return translation

print(translate(input("Enter a phrase: ")))

TRY EXCEPT

Catching errors in your program prevents the whole program from stopping when something unexpected happens. As an example of error handling in python: if you prompt the user for numerical input and they provide alphanumerical input, the program would error and stop.

number = int(input("Enter a number: "))

The variable number is set based upon the numerical, or more specifically the integer, value of the user's input. In order to handle all the potential pitfalls, we can create a "try except" block, whereby the code that could "go wrong" is indented after a try: and the code to execute in the event of an error is indented after the except: block, e.g.

try:
    number = int(input("Enter a number: "))
    print(number)
except:
    print("Invalid input, that number's not an integer")

Specific error types can be caught by specifying the type of error after except. Using an editor like pycharm will display the possible error types that can be caught, but the specific error will be in the output of the script when it stops.

So if we execute the code outside of a try: block and enter a letter when asked for an integer, we get a ValueError error output that we can then use to create our try: except: block, handling that specific ValueError error type in future.

You can add multiple except: blocks in a try: except: block of code.

If you want to capture a certain type of error and then just display that error, rather than a custom message or some alternative code, you can assign the caught error to a variable and print it. This can be useful during troubleshooting.
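
A minimal sketch of both patterns, based on the integer-input example above:

try:
    number = int(input("Enter a number: "))
    print(number)
except ValueError as err:
    print(err)  # e.g. invalid literal for int() with base 10: 'abc'
except KeyboardInterrupt:
    print("Input cancelled")  # a second except: block for a different error type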

READING FROM FILES

You can “open” a file in different modes, read “r”, write “w”, append “a”, read and write “r+”

employee_file = open("employees.txt", "r") #Opens a file named employees.txt in read mode

It’s worth checking that the file can be read, print(employee_file.readable())

You need to close the file once you’re done with it,

employee_file.close()

The examples below show different ways you can read from a file

#READING FROM FILES
employee_file = open("employees.txt", "r") #opens file read mode
print(employee_file.readable()) #checks file is readable
print(employee_file.read()) #reads entire file to screen
print(employee_file.readline()) #can be used multiple times to read next line in file
print(employee_file.readlines()) #reads each line into a list
print(employee_file.readlines()[1]) #reads into a list and displays item at position 1 in the list
employee_file.close() #closes file

employee_file = open("employees.txt", "r") #re-open the file first, since it was closed above
for employee in employee_file.readlines():
    print (employee)
employee_file.close()

WRITING AND APPENDING TO FILES

You can append to a file by opening it in append mode, or overwrite/write a new file by opening it in write mode. You may need to add newline characters in append mode to avoid appending your new line onto the end of the existing last line.

#WRITING AND APPENDING TO FILE
employee_file = open("employees.txt", "a") #opens file append mode
employee_file.write("Toby - HR") #in append mode will append this onto the end of the last line, not after the last line
employee_file.write("\nToby - HR") #in append mode will add a newline char to the end of the last line, then write a new line

employee_file = open("employees.txt", "w") #opens file write mode
employee_file.write("Toby - HR") #in write mode, this will overwrite the file entirely

employee_file.close()

MODULES AND PIP

Besides the builtin modules in python, python comes with some additional modules that can be read in by your python code to increase the functionality available to you. This can save time, since many things you may want to achieve have already been written in one of these modules. This is really what python is all about: the ability to pull in just the modules you need and keep everything light.

IMPORTING FUNCTIONS FROM EXTERNAL FILES

import useful_tools
print(useful_tools.roll_dice(10))
# MORE MODULES AT docs.python.org/3/py-modindex.html

The external file useful_tools.py contains some useful constants and functions:

import random  #required by roll_dice

feet_in_mile = 5280
metres_in_kilometer = 1000
beatles = ["John", "Ringo", "Paul", "George"]

def get_file_ext(filename):
    return filename[filename.index(".") + 1:]

def roll_dice(num):
    return random.randint(1, num)

To use a function from the external file, use print(useful_tools.roll_dice(10)) for example (rolls a 10 sided dice).

EXTERNAL MODULES

Besides the additional internal modules that can be read into your python script, there are also many external modules maintained by a huge community of python programmers. Many of these external modules can be installed using the builtin pip command that comes as part of python. e.g. pip install python-docx will install the external python module that allows you to read and write to Microsoft Word documents.

You can install pip using your package manager

To uninstall a python module, use pip uninstall python-docx for example.
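
As a hedged example of what an external module gives you, here's a minimal python-docx sketch (assuming pip install python-docx has been run; the file name demo.docx is just an arbitrary example):

from docx import Document  #the python-docx package is imported as 'docx'

document = Document()
document.add_heading("Cyberfella Ltd", 0)
document.add_paragraph("This paragraph was written by python.")
document.save("demo.docx")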

CLASSES AND OBJECTS

A class defines a datatype. In the example, we create a datatype or class for a Student.

class Student:
    def __init__(self, name, major, gpa, is_on_probation):
        self.name = name
        self.major = major
        self.gpa = gpa
        self.is_on_probation = is_on_probation

This can be saved in its own class file called Student.py and can be imported into your python script using the command from Student import Student i.e. from the Student file, I want to import the Student class.

To create an object that is an instance of a class, or in our case, a student that is an instance of the Student class, we can use student1 = Student("Bob", "IT", 3.7, False)

print (student1.name) will print the name attribute of the student1 object.

BUILDING A MULTIPLE CHOICE QUIZ

If you’re using pycharm to create your python code, then Click File, New, Python File and create an external class named Question.py. This will define the data type for the questions in your main multiple choice quiz code.

class Question:
    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer

Now in your main code, read in that Question.py class, create some questions in a list called question_prompts, and define the answers to those questions in another list called questions, e.g.

from Question import Question

question_prompts = [
    "What colour are apples?\n(a) Red/Green\n(b) Purple\n(c) Orange\n\n",
    "What colour are Bananas\n(a) Teal\n(b) Magenta\n(c) Yellow\n\n",
    "What colour are Strawberries?\n(a) Yellow\n(b) Red\n(c) Blue\n\n"
]

questions = [
    Question(question_prompts[0], "a"),
    Question(question_prompts[1], "c"),
    Question(question_prompts[2], "b"),
]

Now create a function to ask the questions, e.g.

def run_test(questions):
    score = 0
    for question in questions:
        answer = input(question.prompt)
        if answer == question.answer:
            score += 1
    print("You got " + str(score) + "/" + str(len(questions)) + " correct")

and lastly, create one line of code in the main section that runs the test.

run_test (questions)

OBJECT FUNCTIONS

Consider the following scenario – a python class that defines a Student data type, i.e.

class Student:
    def __init__(self, name, major, gpa):
        self.name = name
        self.major = major
        self.gpa = gpa

and some code as follows,

from Student import Student

student1 = Student("Oscar", "Accounting", 3.1)
student2 = Student("Phyllis", "Business", 3.8)

An object function is a function that exists within a class. In this example, we’ll add a function that determines if the student is on the honours list, based upon their gpa being above 3.5

In the Student.py class, we'll add an on_honours_list function

class Student:
    def __init__(self, name, major, gpa):
        self.name = name
        self.major = major
        self.gpa = gpa

    def on_honours_list(self):
        if self.gpa >= 3.5:
            return True
        else:
            return False

and in our app, we’ll add a line of code to check if a particular student is on the honours list.

from Student import Student

student1 = Student("Oscar", "Accounting", 3.1)
student2 = Student("Phyllis", "Business", 3.8)

print(student1.on_honours_list())

INHERITANCE

Classes can inherit functions from other classes. This is done as follows: consider a class that defines a Chef object, e.g.

class Chef:
    def make_chicken(self):
        print("The chef makes a chicken")

    def make_salad(self):
        print("The chef makes a salad")

    def make_special_dish(self):
        print("The chef makes bbq ribs")

Within the main app code, you can instruct the Chef as follows.

from Chef import Chef
myChef = Chef()
myChef.make_chicken()
myChef.make_special_dish()

But what if there was a Chinese Chef who could do everything that the Chef could do, but also made additional dishes and a different special dish? Creating an additional class for the ChineseChef as follows would facilitate this. e.g. ChineseChef.py would contain…

from Chef import Chef

class ChineseChef(Chef):
    def make_special_dish(self):
        print("The chef makes Orange Chicken")

    def make_fried_rice(self):
        print("The chef makes fried rice")

So, the ChineseChef.py file imports the Chef class and the ChineseChef class inherits from it; the skills unique to the Chinese Chef are then added, and any shared skills are overridden by re-defining them within the ChineseChef class.
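In the main app code, the ChineseChef can then be used just like the Chef – a quick sketch, with the expected output shown in comments:

from ChineseChef import ChineseChef

myChef = ChineseChef()
myChef.make_chicken()       # inherited from Chef: The chef makes a chicken
myChef.make_fried_rice()    # unique to ChineseChef: The chef makes fried rice
myChef.make_special_dish()  # overridden: The chef makes Orange Chicken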

PYTHON INTERPRETER

On Windows, Mac or Linux, you can access the python command line interpreter to perform some quick and dirty tests of your commands. Note that python is very particular about its tab-indented code – something that was done in, say, Shell Scripting at the discretion of the programmer for ease of readability – python really enforces it, e.g.
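For example, in a python3 interpreter session (more on python vs python3 below), an unindented block is rejected outright – the exact message varies slightly by version:

>>> for i in range(3):
... print(i)
IndentationError: expected an indented block
>>> for i in range(3):
...     print(i)
...
0
1
2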

Using the python command will likely open an interpreter for python v2.x whereas the python3 command will open the interpreter for python v3.x. Be sure to add the path to the PATH environment variable if using Windows.

For coding in python, it’s best to use a good text editor such as Notepad++ (notepadqq on linux), a proper coding editor such as Visual Studio Code (runs on linux as well as Windows and Mac) or Atom, or the editor best dedicated to writing python in particular, PyCharm. The Community Edition is free, or there is a paid-for Professional Edition.

I found PyCharm to be available via my Software Manager on Linux Mint.


What is DevOps?

DevOps is the key to Continuous Delivery. How is this achieved? First it is useful to consider the evolution of the previous models of project management, namely Waterfall and Agile.

AGILE

Agile addressed the gap between client requirements and development, but a disconnect remained between the developers and the operations teams, i.e. applications were being developed on different systems to the ones they would ultimately be deployed upon, with the assumption that the production infrastructure was bigger and more powerful than the development laptop, so it would all be fine.

Client + Requirements <—> Developers + Testers <- X -> Operations + Infrastructure

DEVOPS

DevOps is a logical evolution of the Agile shift and addresses the link between developers and operations so that continuous delivery and continuous integration can be achieved along with the promise of fast product to market times and quicker return of value to the client. This utopia is further realised since much infrastructure is now hosted in the cloud and is in itself code (infrastructure as code). This doesn’t so much bring the operations and dev teams closer together as blur the divide between them, since they now use many common tools.

Client + Requirements <—> Developers + Testers <—> Operations + Infrastructure

It also facilitates a feedback loop rather than a left-to-right delivery paradigm.

The DevOps lifecycle is typically drawn as a figure-of-8 loop showing the sequence of phases that facilitate continuous delivery, continuous integration and continuous deployment. More on what that means below… First, let’s have a quick look at each of the phases, starting with the Planning Phase.

It’s worth noting at this point that many open-source tools are used in each stage of the DevOps process. We’ll cover some of the more commonly used tools as we go along.

It is also worth noting that many of these tools are designed to automate the functions of a build engineer, tester or operator.

PLAN

The aim here is to sit down with business teams and understand their goals. Tools used in this phase include Subversion and IntelliJ IDEA.

CODE

Programmers design code using git to carefully control versioning and branches of code that may be a collaborative effort, ultimately merging the branches into a new build. More on the elementary use of git in the git cheat sheet later in this document. Code may be shell script, python, powershell or any other language, and git can maintain version control of developers’ local repositories of code and the project’s main private and public repositories held online at GitHub, which collaborating devs keep in sync with.

BUILD

Build tools such as Maven and Gradle take code from different repositories and combine them to build the complete application.

TEST

Testing of code is automated using tools such as Selenium and JUnit to ensure software quality. The testing environment is scripted just as the build environment is.

INTEGRATE

Jenkins integrates new features once testing is complete, to the already existing codebase. Another tool used in the integration phase is Bamboo.

DEPLOY

Tools such as BMC and XebiaLabs can be used to package the application after the Jenkins release; it is then deployed from the Development Server to the Production Server.

OPERATE

Operational elements such as servers, VMs and containers are deployed using tools such as Puppet and Chef, and their configuration is managed and maintained using tools such as Ansible and Docker. Like the application hosted on the platform, these tools are used to execute code in the cloud, and that code can be maintained using git etc. just the same as application code, for a consistent, self-healing deployment where the scale of the application may require many identically configured elements to host an application that could also be subjected to attacks.

MONITOR

Monitoring frameworks such as Nagios are used to schedule scripted checks of parts of the solution, with the results collated by Groundwork and/or Splunk. A nagios monitoring script should exit with a status of 0, 1 or 2 (OK, Warning, Critical); the results may be displayed on a board for operators to see, but may also feed back into the development cycle automatically.
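As an illustration, a minimal Nagios-style check might look like the following python sketch – the disk-usage subject, the 80%/90% thresholds and the output wording are illustrative assumptions; only the exit-code convention comes from Nagios:

#!/usr/bin/env python3
# Minimal sketch of a Nagios-style check: print a single status line,
# then exit 0 (OK), 1 (Warning) or 2 (Critical).
import sys
import shutil

usage = shutil.disk_usage("/")                 # total, used and free bytes for /
percent_used = usage.used / usage.total * 100  # percentage of disk consumed

if percent_used < 80:
    print("DISK OK - %.0f%% used" % percent_used)
    sys.exit(0)
elif percent_used < 90:
    print("DISK WARNING - %.0f%% used" % percent_used)
    sys.exit(1)
else:
    print("DISK CRITICAL - %.0f%% used" % percent_used)
    sys.exit(2)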

So, all these tools, all this code and the utopia of entirely automated, cloud infrastructure is often described as Continuous Integration/Continuous Delivery/Continuous Deployment, or “CI/CD” for short. Let’s make the distinction between the three…

CONTINUOUS DELIVERY

This is effectively the outcome of the PLAN, CODE, BUILD and TEST phases.

CD = PLAN <–> CODE <–> BUILD <–> TEST

CONTINUOUS INTEGRATION

This is effectively the outcome of the CD phases above plus the outcome of the RELEASE phase, i.e.

CI = CD <–> RELEASE

or

CI = PLAN <–> CODE <–> BUILD <–> TEST <–> RELEASE

whereby the outcome of the RELEASE phase (defect or success) is fed back into the Continuous Delivery phases above or moved into the DEPLOY, OPERATE and MONITOR phases or “Continuous Deployment” phases respectively, i.e.

CONTINUOUS DEPLOYMENT

“success” outcome from CI –> DEPLOY <–> OPERATE <–> MONITOR

So in summary, the terms Continuous Delivery, Continuous Integration and Continuous Deployment are simply collective terms for multiple phases of the DevOps cycle…

COMPARISON WITH WATERFALL and AGILE

Waterfall projects can take weeks, months or years before the first deployment of a working product, only to find bugs when it is released into the wild on a much larger user base than the developer and testing teams. This is an extremely stressful time if you’re the dev tasked with finding and fixing the root cause of the bugs in the days after go-live, especially when the production system is separate in every sense of the word from the developers’ working environment – not to mention the potential for poor public image, poor customer PR and spiralling costs after heavy upfront costs.

REQUIREMENTS ANALYSIS –> DESIGN –> DEVELOPMENT –> TESTING –> MAINTENANCE

Agile projects use kanban boards to monitor tasks in the Pending, Active, Complete and Resolved columns. Agreed sprints lasting 2 or 4 weeks (the sprint cadence), each ultimately resulting in a new release, drive value back to the customer on a guaranteed schedule, with outstanding tasks and bugs still being worked on during the next sprint.

SPRINT = [ PLAN <–> CODE <–> TEST <–> REVIEW ] + SCRUM

DevOps in comparison, heavily leverages automation and a diverse toolset to bring the sprint cadence down to days or even a daily release.

-> PLAN <–> CODE <–> BUILD <–> TEST <–> INTEGRATE <–> DEPLOY <–> OPERATE <–> MONITOR <-

ADVANTAGES OF DEVOPS

As an example, Netflix reportedly accounts for around a third of all network traffic on the internet, yet its DevOps team is just 70 people.

The time taken to create and deliver software is greatly reduced.

The complexity of maintaining an application is reduced via automation and scripting.

Teams aren’t siloed according to discrete skill sets. They work cohesively at various phases in the loop, their roles assigned during daily scrums.

Value is delivered more readily to the customer and up-front costs reduced.


git Cheat Sheet

My super concise git notes

Developed by Linus Torvalds, git is a…

  1. Distributed Version Control System (VCS) for any type of file
  2. Co-ordinates work between multiple developers
  3. Tracks who made what changes and when
  4. Lets you revert back at any time
  5. Supports local and remote repositories (hosted on GitHub, Bitbucket)

It keeps track of code history and takes snapshots of your files
You decide when to take a snapshot by making a commit
You can visit any snapshot at any time
You can stage files before committing

INSTALLING git
sudo apt-get install git (debian)
sudo yum install git (red hat)
https://git-scm.com (installers for mac and windows)
Git Bash is a linux-like command cli for windows

CONFIGURING git
git config --global user.name 'matt bradley'
git config --global user.email 'matt@cyberfella.co.uk'
touch .gitignore creates an empty .gitignore file
echo "log.txt" >> .gitignore adds a file to be ignored by git, e.g. a log file generated by a script
echo "/log" >> .gitignore adds a directory to be ignored, e.g. a log directory
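The resulting .gitignore would then contain:

log.txt
/log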

BASIC COMMANDS (local repository)
git init Initialises a local git repository (creates a hidden .git subdirectory in the directory)
git add <file> Adds file(s) to the index and staging area ready for commit
git add . Adds all files in the directory to the staging area
git status Checks the status of the working tree, showing files in the staging area and any untracked files you still need to add
git commit Commits changes in the index – takes files in the staging area and puts them in the local repository
git commit -m 'my comment' Skips the git editing stage, adding the comment from the command line
git rm --cached <file> Removes a file from the staging area (untracked/unstaged)

BASIC COMMANDS (remote repo)
git push push files to remote repository
git pull pull latest version from remote repo
git clone clone repo into a local directory

git clone https://github.com/cyberfella/cyberfella.git clones my cyberfella repository

git --version shows version of git installed

BRANCHES
git branch loginarea creates a branch from master called "loginarea"
git checkout loginarea switches to the "loginarea" branch
git checkout master switches back to the master branch version
git merge loginarea (run from the master branch) merges changes made in the loginarea branch into the master branch

REMOTE REPOSITORY
https://github.com/new
Create a public or private repository
Shows the commands required to create a new repository on the command line or push an existing repository from the command line

README.md
A README.md (markdown format) file displays nicely in GitHub.

# MyApp

This is my app

Basically it should render like this in GitHub:

MyApp

This is my app

USEFUL COMPLEMENTARY INFORMATION

Atom is a very nice, simple text editor for programmers that supports integration with git. https://flight-manual.atom.io/getting-started/sections/installing-atom/


Create Windows 10 bootable USB on Linux

The following commands install the WoeUSB program used to create a bootable USB stick for the installation of Windows 10.

First add the repository (assuming Ubuntu or Linux Mint OS)

sudo add-apt-repository ppa:nilarimogard/webupd8

sudo apt-get update

sudo apt-get install woeusb

The WoeUSB GUI can be found in your Applications Menu, but I don’t recommend it. The likelihood is, you’ll run into the problem described here

Using gparted, create an NTFS partition on your USB stick – you may need to install ntfs-tools from your repository to do this.

Create the USB stick using the following command

sudo umount /dev/sdb1

sudo woeusb --target-filesystem NTFS --device Win10_1809Oct_EnglishInternational_x64.iso /dev/sdb

Download the Windows 10 ISO from Microsoft here


Make bootable USB from .iso in Linux

The following command will write a downloaded .iso file of your favourite distro to a USB stick.  You can then boot off it and install to hardware.

sudo dd bs=4M if=./manjaro-xfce-18.0-stable-x86_64.iso of=/dev/sdb status=progress
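Here bs=4M sets the block size for the copy, if= is the input .iso file, of= is the output device (note: the whole device, /dev/sdb, not a partition such as /dev/sdb1) and status=progress reports throughput as it writes.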

Note that in my example, there was no partition on the usb stick to start with.  I’d removed it using gparted (not necessary though).


Conky

One of my first ever posts was about conky and wbar on crunchbang linux.

Crunchbang has since been replaced with a community-led fork, BunsenLabs, and it’s well worth checking out.  I’m so impressed with it that it’s my laptop OS of choice, giving me very little grief installing onto my disappointingly-not-particularly-linux-friendly Dell XPS 15, unlike other popular distros.  Suffice to say, BunsenLabs has saved my XPS 15 from the financial damage limitation exercise known as ebay.

In any case, I thought I’d include a link to my own .conkyrc file.  It’s simple and neat, nothing too fancy.

The download file is called conkyrc.  Once downloaded, just rename it to .conkyrc, i.e. put the dot in front (making it a hidden file, the conky default), and copy it to your home directory, remembering to back up any existing .conkyrc file already in your home directory first.

If you want to edit yours to make it your own, the man page for conky is very good, but I find this better.


Linux disk space consumption analysis.

Desktop distros have wonderful graphical disk space analysis programs such as Baobab, KDirStat, QDirStat, xdiskusage, duc and JDiskReport, and with your desktop distro being connected to the internet, even if you don’t already have them installed, installing them from your repositories is easy.  You can quickly drill down using these treemap programs and find the culprit for filling your disk up.

In the datacentre, things are never so easy.  You have no internet access and no local repository configured, and even if you did, you have no change control to install it on a live system, and even if you did, no GUI to view it.  All you have is a production problem, a stressed out ops manager and a flashing cursor winking at you – oh, and native tools.

Sure, you can use the find command to go looking for files over a certain size,

find ./ -type f -size +1000000M -exec ls -al {} \;

removing a zero and re-running as required until it starts finding something, but you’ll fight with the find command syntax for 15 minutes trying to get it to work, only to be unconvinced of the results.  As good as find is, it’s not exactly easy to put together a command that does something that should be simple.

Here is a much simpler solution.  Just use du.  In particular…

du -h --max-depth=1

This will summarize the size of the top-level sub-directories underneath your present working directory.  You then cd into the biggest one, run it again and repeat until you basically end up digging down and arriving at the largest file on disk – in my case a 32GB mysql database in /var/lib/mysql/zabbix.
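If you ever want to script that drill-down, a rough python equivalent of the du summary might look like this – a sketch only, since it sums file sizes rather than real block usage, so the numbers will differ slightly from du’s:

#!/usr/bin/env python3
# Rough equivalent of "du -h --max-depth=1": print the total size of
# each top-level sub-directory under the current directory.
import os

def dir_size(path):
    """Sum the sizes of all files under path, skipping unreadable entries."""
    total = 0
    for root, _, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

for entry in os.scandir("."):
    if entry.is_dir(follow_symlinks=False):
        print("%8.1f MB  %s" % (dir_size(entry.path) / 1024**2, entry.name))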

So there you go.  Have a play with du and you’ll see what I mean.  It’s my favourite way of finding out what’s eating all my disk space.
