Here are some Linux commands that everyone should be familiar with. In fact, you could argue that these are the first commands to memorise and build out your repertoire from there.
#BASIC LINUX COMMANDS
#Clear the terminal window
clear
#Show kernel version
uname -a
#Show all tunable kernel parameters in the /proc/sys directory
sudo /sbin/sysctl -a
#Set a kernel parameter on the fly without persistence
sudo /sbin/sysctl -w kernel.sysrq="1"
#Set a kernel parameter with persistence
/etc/sysctl.conf
#Kernel parameters startup script
/etc/rc.d/rc.sysinit
#Show network interfaces
ifconfig
ip addr show
#Configure network interface with persistence
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-eth0
#Show all filesystems and space
df -ah
#Show service status
service udev status
systemctl status udev
#How much disk space is used by a given directory
du ~/Downloads
#What TCP and UDP ports is the system listening on?
netstat -tulpn
sudo netstat -tulpn #gives more info on process name
#Show information about a given process
ps aux | grep containerd
#Show free memory stats
free
#List block storage devices known to the system
lsblk
#Show mounted storage devices
mount
#Show filesystems that should be mounted at boot
cat /etc/fstab
#Mount everything in /etc/fstab
mount -a
#Mount a block storage device
mount /dev/sdb1 /mnt
#LVM Commands
pvdisplay pvcreate pvremove pvchange
vgdisplay vgcreate vgextend vgremove vgchange
lvdisplay lvcreate lvextend lvremove lvchange
mkfs.ext4
#Copy files
cp
rsync
dd
#Show command history
history
#Look up a command
man -k <search-string>
man grep
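Two of the groups above benefit from a worked example. First, to persist the sysctl change across reboots (a minimal sketch; kernel.sysrq is just the parameter used above):
#Append the parameter to /etc/sysctl.conf, then reload and verify
echo "kernel.sysrq = 1" | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl -p
sudo /sbin/sysctl kernel.sysrq
And the LVM commands chain together like this (device and volume names are illustrative, so substitute your own):
#Create a physical volume, volume group and logical volume, then format and mount it
sudo pvcreate /dev/sdb1
sudo vgcreate vg_data /dev/sdb1
sudo lvcreate -n lv_data -L 10G vg_data
sudo mkfs.ext4 /dev/vg_data/lv_data
sudo mount /dev/vg_data/lv_data /mnt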
It's come to my attention recently that despite a fresh install of Linux Mint, certain programs seem to leak like a sieve, and hang around after they're closed, too.
I’d noticed my machine freezing intermittently and adding the memory monitor panel item revealed that the system memory was filling up.
xreader and brave seemed to be the main culprits, but since rebuilding my desktop machine I've not been using many other programs, apart from Ledger Live to track the value of my cryptocurrency portfolio while the Fed prints money ad infinitum during the coronavirus pandemic. I digress.
Killing processes gets old really quick, so I wrote a quick'n'dirty little shell script to do it for me. Rather than killing individual processes, it savages all processes with the same name.
I shall call it savage.sh and share it with the world, right here. Not on github.
#!/bin/bash
# savage.sh finds all process IDs for the specified program running under your own user account and kills them
# in order to free up system resources. Some programs have severe memory leaks and consume vast amounts of RAM
# and swap if left running over time.
#
# Usage: savage.sh
#
# Written by M. D. Bradley during Coronavirus pandemic, March 2020
#Variables
user=$(whoami)
memfree=$(free | awk '/^Mem/ {print $4}') #free memory in KiB
#Code
echo -n "Program to kill e.g. xreader?: "
read -r program
#pgrep avoids matching the grep process itself, which the old ps|grep approach counted too
pids=$(pgrep -u "$user" -x "$program")
pidcount=$(echo "$pids" | grep -c .)
for eachpid in $pids; do
kill "$eachpid" >/dev/null 2>&1
done
sleep 1 #give the kernel a moment to reclaim the memory
memfree2=$(free | awk '/^Mem/ {print $4}')
freedmem=$(( memfree2 - memfree ))
if [ "$pidcount" -eq 1 ]
then
echo "Found $pidcount process running for $program"
echo "Killed it. Freed up $freedmem KiB."
fi
if [ "$pidcount" -gt 1 ]
then
echo "Found $pidcount processes running for $program"
echo "Savaged them. Freed up $freedmem KiB."
fi
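Incidentally, if your distro ships pkill (it's part of procps, so nearly all do), the same savagery can be had in one line; this sketch assumes the same exact-name matching the script uses:
pkill -u "$USER" -x xreader #kill every process named exactly "xreader" owned by you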
GIT
Distributed Version Control System (VCS) for any type of file
Co-ordinates work between multiple developers
Tracks who made what changes and when
Revert back at any time
Supports local and remote repositories (hosted on GitHub, Bitbucket)
It keeps track of code history and takes snapshots of your files. You decide when to take a snapshot by making a commit. You can visit any snapshot at any time. You can stage files before committing.
INSTALLING git
sudo apt-get install git (Debian)
sudo yum install git (Red Hat)
https://git-scm.com (installers for Mac and Windows)
Git Bash is a Linux-like CLI for Windows
CONFIGURING git
git config --global user.name 'matt bradley'
git config --global user.email 'matt@cyberfella.co.uk'
touch .gitignore
echo "log.txt" >> .gitignore (adds a file to be ignored by git, e.g. a log file generated by a script)
echo "/log" >> .gitignore (adds a directory to be ignored, e.g. a log directory)
BASIC COMMANDS (local repository)
git init (initializes a local git repository; creates a hidden .git subdirectory in the directory)
git add (adds file(s) to the index and staging area, ready for commit)
git add . (adds all files in the directory to the staging area)
git status (checks the status of the working tree; shows files in the staging area and any untracked files you still need to add)
git commit (commits changes in the index; takes files in the staging area and puts them in the local repository)
git commit -m 'my comment' (skips the git editing stage, adding the comment from the command line)
git rm --cached (removes a file from the staging area, leaving it untracked/unstaged)
BASIC COMMANDS (remote repository)
git push (pushes files to the remote repository)
git pull (pulls the latest version from the remote repository)
git clone (clones a repository into a local directory)
git clone https://github.com/cyberfella/cyberfella.git (clones my cyberfella repository)
git --version (shows the version of git installed)
BRANCHES
git branch loginarea (creates a branch from master called "loginarea")
git checkout loginarea (switches to the "loginarea" branch)
git checkout master (switches back to the master branch)
git merge loginarea (merges changes made in the loginarea branch into the master branch)
REMOTE REPOSITORY
https://github.com/new (create a public or private repository; shows the commands required to create a new repository on the command line, or to push an existing repository from the command line)
README.md: a readme.md file (markdown format) displays nicely in GitHub.
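Putting the local and remote commands together, a typical first session looks something like this (the remote URL is an illustrative placeholder, not a real repository):
git init
echo "hello" > README.md
git add README.md
git commit -m 'first commit'
git remote add origin https://github.com/cyberfella/example.git
git push -u origin master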
One of my first ever posts was about conky and wbar on crunchbang linux.
Crunchbang has since been replaced with a community led fork, Bunsenlabs, and it’s well worth checking out. I’m so impressed with it that it’s my laptop OS of choice, giving me very little grief installing onto my disappointingly-not-particularly-linux-friendly Dell XPS 15, unlike other popular distros. Suffice to say, Bunsenlabs has saved my XPS15 from the financial damage limitation exercise known as ebay.
In any case, I thought I’d include a link to my own .conkyrc file. It’s simple and neat, nothing too fancy.
The download file is called conkyrc. Once downloaded, just rename it to .conkyrc, i.e. put the dot in front (making it a hidden file, the conky default), and copy it to your home directory, remembering first to back up any existing .conkyrc file already there.
If you want to edit yours to make it your own, the man page for conky is very good, but I find this better.
Desktop distros have wonderful graphical disk space analysis programs such as Baobab, KDirStat/QDirStat, xdiskusage, duc and JDiskReport, and with your desktop distro being connected to the internet, even if you don't already have them installed, installing them from your repositories is easy. You can quickly drill down using these treemapper programs and find the culprit that's filling your disk up.
In the datacentre, things are never so easy. You have no internet access and no local repository configured; even if you did, you'd have no change control to install anything on a live system, and even if you did, no GUI to view it with. All you have is a production problem, a stressed-out ops manager and a flashing cursor winking at you. Oh, and native tools.
Sure, you can use the find command to go looking for files over a certain size,
find ./ -type f -size +1000000M -exec ls -al {} \;
removing a zero and re-running as required until it starts finding something. But you'll fight with the find command syntax for 15 minutes trying to get it to work, only to be unconvinced of the results. As good as find is, it's not exactly easy to put together a command that does something that should be simple.
Here is a much simpler solution. Just use du. In particular…
du -h --max-depth=1
This will summarise the size of the top-level subdirectories underneath your present working directory. You then cd into the biggest one, run it again and repeat until you end up digging down to the largest file on disk: in my case, a 32GB mysql database in /var/lib/mysql/zabbix.
So there you go. Have a play with it and you’ll see what I mean. It’s my favourite way of finding out what’s eating all my disk space.
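One refinement worth knowing: if your sort supports human-numeric sorting (GNU coreutils does), you can rank the subdirectories by size in one go:
du -h --max-depth=1 | sort -h #biggest directory ends up at the bottom of the list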
Using QDIRSTAT on headless servers
We live in strange times, where despite the best efforts of the likes of Edward Snowden to open our eyes to the fact that we're being monitored at any and every opportunity by the intelligence community, we're still hell bent on moving our enterprise computing into huge corporate cloud data centres that the CIA and NSA have back doors into. If you think "That's OK, I have nothing to hide," then great. How 'bout you hand me your phone and let me go and have a good look around it? Oh, that's not OK? Well make your mind up, will you? You think you're gonna be as successful as Google and Amazon if you use their cloud services? Whose cloud service do you think they use? That's right, their own. So your cloud is their on-prem. I know, I'm such a cynic.
For those who are tasked with monitoring disk space consumption on their cloud servers, containers and other headless stuff, you can use QDirStat's neat little cache-writer script to generate a cache file that you can then open in qdirstat on your workstation for analysis.
I’ve summarised its use below, assuming you’ll understand what each command is doing.
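A minimal sketch of that workflow, with illustrative host names and paths (qdirstat-cache-writer is the script that ships with QDirStat):
#On the headless server, walk the filesystem and write a gzipped cache file
qdirstat-cache-writer / /tmp/server-root.cache.gz
#Pull the cache file down to your workstation
scp user@server:/tmp/server-root.cache.gz .
#Then open it in QDirStat on the workstation via File > Read Cache File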
I’d like to issue a special thanks to Mike Schlegel in the comments section below for dragging me kicking and screaming into the 21st Century. I guess there’s still some of us out there who are clever enough to be working with Linux but stupid enough that we didn’t buy Bitcoin at 10$ back in 2012 when I started this blog.
Do you ever need to find a file and then perform some action on it, and get caught up in curly brackets, backslashes and syntax errors when you could swear "this command worked in the past"? It's one of the joys of Linux, I guess, but it quickly becomes tedious when you're working against a problem and are under stress.
Here is a reference find command that works. I hope it helps. It’ll no doubt help me at some point (the entire purpose of my blog is to actually remind myself how to do half of this stuff from time to time).
sudo find ./ -name "*.mkv" -exec ls {} \;
Something I like to do is create shell functions in the .bashrc file in your home directory to simplify commonly used commands that are long to type and quite syntax sensitive.
This is a nice useful one that can be used to find any files that have the specified string anywhere in the filename. Just type f All to find any files with the word All occurring anywhere in the filename. The definition (reconstructed along the same lines as the fr variant below) is:
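f() { find ./ -name "*$1*" -exec ls {} \; ; }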
You could create other versions such as this one, that will find and remove files with a specified string in the filename – but I’d really not recommend it.
fr() { find ./ -name "*$1*" -exec rm {} \; ; }
Be sure to run type fr first to check that your shell function name isn't the name of an existing binary or alias on the system!
With the recent announcement that Dropbox is dropping its support for linux filesystems (other than ext4) in November, you’ll no doubt be searching for an alternative cloud storage provider that supports linux file system synchronisation.
In short, Relax and Recover (ReaR) is a tool that creates .tar.gz images of the running server and creates bootable rescue media as .iso images.
ReaR generates an appropriate rescue image from a running system and also acts as a migration tool for physical-to-virtual or virtual-to-virtual migrations of running Linux hosts.
It is not, per se, a file-level backup tool. It is akin to Clonezilla, another popular bare-metal backup tool also used to migrate Linux into virtual environments.
There are two main commands, rear mkbackuponly and rear mkrescue, which create the backup archive and the bootable image respectively. They can be combined in the single command rear mkbackup.
The bootable .iso provides a configured rescue environment that, provided your backup is configured correctly in /etc/rear/local.conf, makes recovery as simple as typing rear recover at the recovery prompt.
You can back up to an NFS or CIFS share, or to a USB block storage device pre-formatted by running rear format /dev/sdX
A LITTLE MORE DETAIL
A professional recovery system is much more than a simple backup tool.
Experienced admins know they must control and test the entire workflow for the recovery process in advance, so they are certain all the pieces will fall into place in case of an emergency.
Versatile replacement hardware must be readily available, and you might not have the luxury of using a replacement system that exactly matches the original.
The partition layout or the configuration of a RAID system must correspond.
If the crashed system’s patch level was not up to date, or if the system contained an abundance of manually installed software, problems are likely to occur with drivers, configuration settings, and other compatibility issues.
Relax and Recover (ReaR) is a true disaster recovery solution that creates recovery media from a running Linux system.
If a hardware component fails, an administrator can boot the standby system with the ReaR rescue media and put the system back to its previous state.
ReaR takes care of the partitioning and formatting of the hard disk, the restoration of all data, and the boot loader configuration.
ReaR is well suited as a migration tool, because the restoration does not have to take place on the same hardware as the original.
ReaR builds the rescue medium with all existing drivers, and the restored system adjusts automatically to the changed hardware.
ReaR even detects changed network cards, as well as different storage scenarios with their respective drivers (migrating IDE to SATA or SATA to CCISS) and modified disk layouts.
The ReaR documentation provides a number of mapping files and examples.
An initial full backup of the protected system is the foundation.
ReaR works in collaboration with many backup solutions, including Bacula/Bareos, SEP SESAM, Tivoli Storage Manager, HP Data Protector, Symantec NetBackup, CommVault Galaxy and EMC Legato/NetWorker.
WORKING EXAMPLE
Below is a working example of ReaR in action, performed on fresh CentOS VMs running on VirtualBox in my own lab environment.
Note: This example uses a CentOS 7 server and an NFS server on the same network subnet.
START A BACKUP
On the CentOS machine
Add the following lines to /etc/rear/local.conf:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_TYPE=incremental
BACKUP_PROG=tar
FULLBACKUPDAY="Mon"
BACKUP_URL="nfs://NFSSERVER/path/to/nfs/export/servername"
BACKUP_PROG_COMPRESS_OPTIONS="--gzip"
BACKUP_PROG_COMPRESS_SUFFIX=".gz"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' )
BACKUP_OPTIONS="nfsvers=3,nolock"
Now make a backup
[root@centos7 ~]# rear mkbackup -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
mkdir: created directory '/var/lib/rear/output'
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-centos7.iso (90M)
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.QnDt1Ehk25Vqurp/outputfs/centos7/2014-08-21-1548-F.tar.gz'
Archived 406 MiB [avg 3753 KiB/sec]OK
Archived 406 MiB in 112 seconds [avg 3720 KiB/sec]
Now look on your NFS server
You’ll see all the files you’ll need to perform the disaster recovery.
total 499M
drwxr-x--- 2 root root 4.0K Aug 21 23:51 .
drwxr-xr-x 3 root root 4.0K Aug 21 23:48 ..
-rw------- 1 root root 407M Aug 21 23:51 2014-08-21-1548-F.tar.gz
-rw------- 1 root root 2.2M Aug 21 23:51 backup.log
-rw------- 1 root root  202 Aug 21 23:49 README
-rw------- 1 root root  90M Aug 21 23:49 rear-centos7.iso
-rw------- 1 root root 161K Aug 21 23:49 rear.log
-rw------- 1 root root    0 Aug 21 23:51 selinux.autorelabel
-rw------- 1 root root  277 Aug 21 23:49 VERSION
INCREMENTAL BACKUPS
ReaR is not a file-level recovery tool (look at fwbackups for that); however, you can perform incremental backups: the BACKUP_TYPE=incremental parameter takes care of that.
As you can see from the file list above, the letter "F" before the .tar.gz extension indicates that this is a full backup.
Actually, it's better to make the rescue ISO separately from the backup.
The command rear mkbackup -v makes both the bootstrap ISO and the backup itself, but running rear mkbackup -v twice won't create incremental backups for some reason.
So first:
[root@centos7 ~]# time rear mkrescue -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-centos7.iso (90M)
Copying resulting files to nfs location
real 0m49.055s
user 0m15.669s
sys 0m10.043s
And then:
[root@centos7 ~]# time rear mkbackuponly -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Encrypting disabled
Creating tar archive '/tmp/rear.fXJJ3VYpHJa9Za9/outputfs/centos7/2014-08-21-1605-F.tar.gz'
Archived 406 MiB [avg 4166 KiB/sec]OK
Archived 406 MiB in 101 seconds [avg 4125 KiB/sec]
real 1m44.455s
user 0m56.089s
sys 0m16.967s
Run again (for incrementals)
[root@centos7 ~]# time rear mkbackuponly -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Encrypting disabled
Creating tar archive '/tmp/rear.Tk9tiafmLyTvKFm/outputfs/centos7/2014-08-21-1608-I.tar.gz'
Archived 85 MiB [avg 2085 KiB/sec]OK
Archived 85 MiB in 43 seconds [avg 2036 KiB/sec]
real 0m49.106s
user 0m10.852s
sys 0m3.822s
Now look again at those backup files: -F.tar.gz is the full backup, -I.tar.gz is the incremental. There are also basebackup.txt and timestamp.txt files.
total 585M
drwxr-x--- 2 root root 4.0K Aug 22 00:09 .
drwxr-xr-x 3 root root 4.0K Aug 22 00:04 ..
-rw-r--r-- 1 root root 407M Aug 22 00:07 2014-08-21-1605-F.tar.gz
-rw-r--r-- 1 root root  86M Aug 22 00:09 2014-08-21-1608-I.tar.gz
-rw-r--r-- 1 root root 2.6M Aug 22 00:09 backup.log
-rw-r--r-- 1 root root   25 Aug 22 00:05 basebackup.txt
-rw------- 1 root root  202 Aug 22 00:05 README
-rw------- 1 root root  90M Aug 22 00:05 rear-centos7.iso
-rw------- 1 root root 161K Aug 22 00:05 rear.log
-rw-r--r-- 1 root root    0 Aug 22 00:09 selinux.autorelabel
-rw-r--r-- 1 root root   11 Aug 22 00:05 timestamp.txt
-rw------- 1 root root  277 Aug 22 00:05 VERSION
RECOVERY
ReaR is designed to create bootable .iso images, making recovery very easy and flexible. .iso files can be booted from CD/DVD optical media, USB block storage devices and hard disks, and also in VMware and VirtualBox.
To recover a system, you first need to boot the .iso that was created with the backup.
You may use your favourite method of booting the .iso, whether that's creating a bootable USB stick, burning it to a CD, mounting it in iDRAC, etc.
Just boot to it on the server you want to restore to.
When the recovery screen loads, select the top option to recover.
Type root to log in.
To start recovery, type rear -v recover
# Navigate to and list the files in /var/lib/rear/layout/xfs
# Edit each file ending in .xfs with vi and remove "sunit=0 blks" from the "log" section.
# In my case, the following files (then save them):
vi /var/lib/rear/layout/xfs/fedora_serv--build-root.xfs
vi /var/lib/rear/layout/xfs/sda1.xfs
vi /var/lib/rear/layout/xfs/sdb2.xfs
# Run the following commands to get a list of LVs and VGs:
lvdisplay
vgdisplay
# Run the following commands to remove the above listed LVs and VGs:
lvremove
vgremove
Unlike VMware/VirtualBox virtualisation, containers wrap up individual workloads (instead of the entire OS and kernel) and their dependencies into relatively tiny containers (or "jails" if you're talking FreeNAS/AIX, "zones" if you're talking Solaris, "snaps" if you're talking Ubuntu Core). There are many solutions for Linux containerisation in the marketplace at present, and LXC is free.
This post serves as a go-to reference page for examples of all the most commonly used lxc commands when dealing with linux containers. I highly recommend completing all the sections below, running all the commands on the test server available at linuxcontainers.org, to fully appreciate the context of what you are doing. This post is a work in progress and will likely be augmented over time with examples from my own lab, time permitting.
Your first container
LXD is image based, however by default no images are loaded into the image store as can be seen with:
lxc image list
LXD knows about 3 default image servers:
ubuntu: (for Ubuntu stable images)
ubuntu-daily: (for Ubuntu daily images)
images: (for a bunch of other distributions)
The stable Ubuntu images can be listed with:
lxc image list ubuntu: | less
To launch a first container called “first” using the Ubuntu 16.04 image, use:
lxc launch ubuntu:16.04 first
Your new container will now be visible in:
lxc list
Running state details and configuration can be queried with:
lxc info first
lxc config show first
Limiting resources
By default your container comes with no resource limitation and inherits from its parent environment. You can confirm it with:
free -m
lxc exec first -- free -m
To apply a memory limit to your container, do:
lxc config set first limits.memory 128MB
And confirm that it’s been applied with:
lxc exec first -- free -m
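Other limits follow the same pattern; for example, to cap the container at a single CPU core:
lxc config set first limits.cpu 1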
Snapshots
LXD supports snapshotting and restoring container snapshots.
Before making a snapshot, let's make some changes to the container, for example, updating it:
lxc exec first -- apt-get update
lxc exec first -- apt-get dist-upgrade -y
lxc exec first -- apt-get autoremove --purge -y
Now that the container is all updated and cleaned, let’s make a snapshot called “clean”:
lxc snapshot first clean
Let’s break our container:
lxc exec first -- rm -Rf /etc /usr
Confirm the breakage with (then exit):
lxc exec first -- bash
And restore everything to the snapshotted state (be sure to execute this from the container host, not from inside the container, or it won't work):
lxc restore first clean
And confirm everything’s back to normal (then exit):
lxc exec first -- bash
Creating images
As you probably noticed earlier, LXD is image based: all containers must be created from either a copy of an existing container or from an image.
You can create new images from an existing container or a container snapshot.
To publish our “clean” snapshot from earlier as a new image with a user friendly alias of “clean-ubuntu”, run:
lxc publish first/clean --alias clean-ubuntu
At which point we won’t need our “first” container, so just delete it with:
lxc stop first
lxc delete first
And lastly we can start a new container from our image with:
lxc launch clean-ubuntu second
Accessing files from the container
To pull a file from the container you can use the “lxc file pull” command:
lxc file pull second/etc/hosts .
Let’s add an entry to it:
echo "1.2.3.4 my-example" >> hosts
And push it back where it came from:
lxc file push hosts second/etc/hosts
You can also use this mechanism to access log files:
lxc file pull second/var/log/syslog - | less
We won’t be needing that container anymore, so stop and delete it with:
lxc delete --force second
Use a remote image server
The lxc client tool supports multiple “remotes”, those remotes can be read-only image servers or other LXD hosts.
LXC upstream runs one such server at https://images.linuxcontainers.org which serves a set of automatically generated images for various Linux distributions.
It comes pre-configured in a default LXD installation, but you can remove or change it if you don't want it.
You can list the available images with:
lxc image list images: | less
And spawn a new CentOS 7 container with:
lxc launch images:centos/7 third
Confirm it's indeed CentOS 7 with:
lxc exec third -- cat /etc/redhat-release
And delete it:
lxc delete -f third
The list of all configured remotes can be obtained with:
lxc remote list
Interact with remote LXD servers
For this step, you’ll need a second demo session, so open a new one here
Copy/paste the “lxc remote add” command from the top of the page of that new session into the shell of your old session.
Then confirm the server fingerprint for the remote server.
Note that it may take a few seconds for the new LXD daemon to listen to the network, just retry the command until it answers.
At this point you can list the remote containers with:
lxc list tryit:
And its images with:
lxc image list tryit:
Now, let’s start a new container on the remote LXD using the local image we created earlier.
lxc launch clean-ubuntu tryit:fourth
You now have a container called “fourth” running on the remote host “tryit”. You can spawn a shell inside it with (then exit):
lxc exec tryit:fourth bash
Now let’s copy that container into a new one called “fifth”:
lxc copy tryit:fourth tryit:fifth
And just for fun, move it back to our local lxd while renaming it to “sixth”:
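lxc move tryit:fifth sixth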