Dropbox alternative for Linux users

With the recent announcement that Dropbox is dropping its support for Linux filesystems other than ext4 in November, you'll no doubt be searching for an alternative cloud storage provider that supports Linux file system synchronisation.

Look no further than MEGA.

50GB for free, local filesystem synchronisation, the ability to download and retain your own private key, and a great, easy-to-use web browser client.

File system sync client: https://mega.nz/sync


Bare metal DR and Linux Migration with Relax and Recover (rear)

INTRODUCTION

In short, Relax and Recover (rear) is a tool that creates .tar.gz images of the running server and bootable rescue media as .iso images.

Relax and Recover (ReaR) generates an appropriate rescue image from a running system and also acts as a migration tool for Physical to Virtual or Virtual to Virtual migrations of running Linux hosts.

It is not, per se, a file-level backup tool.  It is akin to Clonezilla, another popular bare metal backup tool that is also used to migrate Linux into virtual environments.

There are two main commands: rear mkbackuponly and rear mkrescue, which create the backup archive and the bootable image respectively.  They can be combined in the single command rear mkbackup.

The bootable iso provides a configured rescue environment that, provided your backup is configured correctly in /etc/rear/local.conf, makes recovery as simple as typing rear recover at the recovery prompt.

You can back up to an NFS or CIFS share, or to a USB block storage device pre-formatted by running rear format /dev/sdX.
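For the USB case, a minimal /etc/rear/local.conf might look like the sketch below.  This follows ReaR's documented usb:// URL scheme; REAR-000 is the label that rear format assigns by default, so treat the exact label as an assumption and verify yours with blkid first.

OUTPUT=USB
BACKUP=NETFS
BACKUP_URL="usb:///dev/disk/by-label/REAR-000"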

A LITTLE MORE DETAIL

A professional recovery system is much more than a simple backup tool.
Experienced admins know they must control and test the entire workflow for the recovery process in advance, so they are certain all the pieces will fall into place in case of an emergency.
Versatile replacement hardware must be readily available, and you might not have the luxury of using a replacement system that exactly matches the original.
The partition layout and the configuration of any RAID system must correspond to the original.
If the crashed system’s patch level was not up to date, or if the system contained an abundance of manually installed software, problems are likely to occur with drivers, configuration settings, and other compatibility issues.
Relax and Recover (ReaR) is a true disaster recovery solution that creates recovery media from a running Linux system.
If a hardware component fails, an administrator can boot the standby system with the ReaR rescue media and put the system back to its previous state.
ReaR takes care of the partitioning and formatting of the hard disk, the restoration of all data, and the boot loader configuration.

ReaR is well suited as a migration tool, because the restoration does not have to take place on the same hardware as the original.
ReaR builds the rescue medium with all existing drivers, and the restored system adjusts automatically to the changed hardware.
ReaR even detects changed network cards, as well as different storage scenarios with their respective drivers (migrating IDE to SATA or SATA to CCISS) and modified disk layouts.
The ReaR documentation provides a number of mapping files and examples.
An initial full backup of the protected system is the foundation.
ReaR works in collaboration with many backup solutions, including Bacula/Bareos, SEP SESAM, Tivoli Storage Manager, HP Data Protector, Symantec NetBackup, CommVault Galaxy, and EMC Legato/Networker.

WORKING EXAMPLE

Below is a working example of rear in action, performed on fresh CentOS VMs running on VirtualBox in my own lab environment.

Note: This example uses a CentOS 7 server and an NFS server on the same network subnet.

INSTALLATION
Add EPEL repository
yum install wget
wget http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm
rpm -ivh epel-release-7-0.2.noarch.rpm
yum install rear

START A BACKUP
On the CentOS machine
Add the following lines to /etc/rear/local.conf:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_TYPE=incremental
BACKUP_PROG=tar
FULLBACKUPDAY="Mon"
BACKUP_URL="nfs://NFSSERVER/path/to/nfs/export/servername"
BACKUP_PROG_COMPRESS_OPTIONS="--gzip"
BACKUP_PROG_COMPRESS_SUFFIX=".gz"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' )
BACKUP_OPTIONS="nfsvers=3,nolock"

Now make a backup
[root@centos7 ~]# rear mkbackup -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
mkdir: created directory '/var/lib/rear/output'
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-centos7.iso (90M)
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.QnDt1Ehk25Vqurp/outputfs/centos7/2014-08-21-1548-F.tar.gz'
Archived 406 MiB [avg 3753 KiB/sec]OK
Archived 406 MiB in 112 seconds [avg 3720 KiB/sec]

Now look on your NFS server
You’ll see all the files you’ll need to perform the disaster recovery.
total 499M
drwxr-x--- 2 root root 4.0K Aug 21 23:51 .
drwxr-xr-x 3 root root 4.0K Aug 21 23:48 ..
-rw------- 1 root root 407M Aug 21 23:51 2014-08-21-1548-F.tar.gz
-rw------- 1 root root 2.2M Aug 21 23:51 backup.log
-rw------- 1 root root  202 Aug 21 23:49 README
-rw------- 1 root root  90M Aug 21 23:49 rear-centos7.iso
-rw------- 1 root root 161K Aug 21 23:49 rear.log
-rw------- 1 root root    0 Aug 21 23:51 selinux.autorelabel
-rw------- 1 root root  277 Aug 21 23:49 VERSION

INCREMENTAL BACKUPS
ReaR is not a file-level recovery tool (look at fwbackups for that); however, it can perform incremental backups: the BACKUP_TYPE=incremental parameter takes care of that.
As you can see from the file list above, the archive name carries the letter "F" before the .tar.gz extension, indicating that this is a full backup.
It's actually better to make the rescue ISO separately from the backup.
The command "rear mkbackup -v" makes both the bootstrap ISO and the backup itself, but running "rear mkbackup -v" twice won't create incremental backups for some reason.

So first:
[root@centos7 ~]# time rear mkrescue -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-centos7.iso (90M)
Copying resulting files to nfs location

real 0m49.055s
user 0m15.669s
sys 0m10.043s

And then:
[root@centos7 ~]# time rear mkbackuponly -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Encrypting disabled
Creating tar archive '/tmp/rear.fXJJ3VYpHJa9Za9/outputfs/centos7/2014-08-21-1605-F.tar.gz'
Archived 406 MiB [avg 4166 KiB/sec]OK
Archived 406 MiB in 101 seconds [avg 4125 KiB/sec]

real 1m44.455s
user 0m56.089s
sys 0m16.967s

Run again (for incrementals)
[root@centos7 ~]# time rear mkbackuponly -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Encrypting disabled
Creating tar archive '/tmp/rear.Tk9tiafmLyTvKFm/outputfs/centos7/2014-08-21-1608-I.tar.gz'
Archived 85 MiB [avg 2085 KiB/sec]OK
Archived 85 MiB in 43 seconds [avg 2036 KiB/sec]

real 0m49.106s
user 0m10.852s
sys 0m3.822s

Now look again at those backup files: -F.tar.gz is the full backup, -I.tar.gz is the incremental.  There are also basebackup.txt and timestamp.txt files.
total 585M
drwxr-x--- 2 root root 4.0K Aug 22 00:09 .
drwxr-xr-x 3 root root 4.0K Aug 22 00:04 ..
-rw-r--r-- 1 root root 407M Aug 22 00:07 2014-08-21-1605-F.tar.gz
-rw-r--r-- 1 root root  86M Aug 22 00:09 2014-08-21-1608-I.tar.gz
-rw-r--r-- 1 root root 2.6M Aug 22 00:09 backup.log
-rw-r--r-- 1 root root   25 Aug 22 00:05 basebackup.txt
-rw------- 1 root root  202 Aug 22 00:05 README
-rw------- 1 root root  90M Aug 22 00:05 rear-centos7.iso
-rw------- 1 root root 161K Aug 22 00:05 rear.log
-rw-r--r-- 1 root root    0 Aug 22 00:09 selinux.autorelabel
-rw-r--r-- 1 root root   11 Aug 22 00:05 timestamp.txt
-rw------- 1 root root  277 Aug 22 00:05 VERSION
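In practice you would schedule all of this from cron rather than running it by hand.  Below is a minimal sketch; the times and the /usr/sbin/rear path are assumptions to adapt.  With FULLBACKUPDAY="Mon" in local.conf, the Monday night run produces the full archive and the other nights produce incrementals.

# /etc/crontab - rebuild the rescue ISO, then take the nightly backup
30 1 * * * root /usr/sbin/rear mkrescue
00 2 * * * root /usr/sbin/rear mkbackuponly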

RECOVERY
ReaR is designed to create bootable .iso images, making recovery very easy and flexible.  The .iso files can be booted from CD/DVD optical media, USB block storage devices and hard disks, and also in VMware and VirtualBox.
To recover a system, you first need to boot to the .iso that was created with the backup.
You may use your favourite method for booting the .iso, whether that's creating a bootable USB stick, burning it to a CD, mounting it in iDRAC, etc.
Just boot to it on the server you want to restore to.
When the recovery screen loads, select the top option to recover.
Type root to log in.
To start recovery, type
rear -v recover

TROUBLESHOOTING RECOVERY
# Create missing directory:
mkdir /run/rpcbind

# Manually start networking:
chmod a+x /etc/scripts/system-setup.d/60-network-devices.sh
/etc/scripts/system-setup.d/60-network-devices.sh

# Navigate to and list files in /var/lib/rear/layout/xfs
# Edit each file ending in .xfs with vi and remove "sunit=0 blks" from the "log" section.
# In my case, the following files, then save them:
vi /var/lib/rear/layout/xfs/fedora_serv--build-root.xfs
vi /var/lib/rear/layout/xfs/sda1.xfs
vi /var/lib/rear/layout/xfs/sdb2.xfs

# Run the following commands to get a list of LVs and VGs:
lvdisplay
vgdisplay

# Run the following commands (passing the LV and VG names listed above) to remove them:
lvremove
vgremove

# Now run recovery again:
rear recover

USEFUL URLs / FURTHER READING
ReaR Project Page:
ReaR on Github:
ReaR in OpenSuse:
YaST Module for Suse:
ReaR User Guide:
SEP-SESAM Support:
ReaR1.15 Release Notes:


Linux Containers with LXC/LXD

Unlike VMware/VirtualBox virtualisation, containerisation wraps up individual workloads (instead of the entire OS and kernel) and their dependencies into relatively tiny containers (or "jails" if you're talking FreeNAS/AIX, or "zones" if you're talking Solaris, or "snaps" if you're talking Ubuntu Core).  There are many solutions to Linux containerisation in the marketplace at present, and LXC is free.

This post serves as a go-to reference page for examples of all the most commonly used lxc commands when dealing with linux containers.  I highly recommend completing all the sections below, running all the commands on the test server available at linuxcontainers.org, to fully appreciate the context of what you are doing.  This post is a work in progress and will likely be augmented over time with examples from my own lab, time permitting.

Your first container

LXD is image based; however, by default no images are loaded into the image store, as can be seen with:

lxc image list

LXD knows about 3 default image servers:

ubuntu: (for Ubuntu stable images)
ubuntu-daily: (for Ubuntu daily images)
images: (for a bunch of other distributions)

The stable Ubuntu images can be listed with:

lxc image list ubuntu: | less

To launch a first container called “first” using the Ubuntu 16.04 image, use:

lxc launch ubuntu:16.04 first

Your new container will now be visible in:

lxc list

Running state details and configuration can be queried with:

lxc info first
lxc config show first


Limiting resources

By default your container comes with no resource limitation and inherits from its parent environment. You can confirm it with:

free -m
lxc exec first -- free -m

To apply a memory limit to your container, do:

lxc config set first limits.memory 128MB

And confirm that it’s been applied with:

lxc exec first -- free -m


Snapshots

LXD supports snapshotting and restoring container snapshots.
Before making a snapshot, let's make some changes to the container, for example, updating it:

lxc exec first -- apt-get update
lxc exec first -- apt-get dist-upgrade -y
lxc exec first -- apt-get autoremove --purge -y

Now that the container is all updated and cleaned, let’s make a snapshot called “clean”:

lxc snapshot first clean

Let’s break our container:

lxc exec first -- rm -Rf /etc /usr

Confirm the breakage with (then exit):

lxc exec first -- bash

And restore everything to the snapshotted state (be sure to execute this from the container host, not from inside the container, or it won't work):

lxc restore first clean

And confirm everything’s back to normal (then exit):

lxc exec first -- bash


Creating images

As you probably noticed earlier, LXD is image based; that is, all containers must be created either from a copy of an existing container or from an image.

You can create new images from an existing container or a container snapshot.

To publish our "clean" snapshot from earlier as a new image with a user-friendly alias of "clean-ubuntu", run:

lxc publish first/clean --alias clean-ubuntu

At which point we won’t need our “first” container, so just delete it with:

lxc stop first
lxc delete first

And lastly we can start a new container from our image with:

lxc launch clean-ubuntu second


Accessing files from the container

To pull a file from the container you can use the “lxc file pull” command:

lxc file pull second/etc/hosts .

Let’s add an entry to it:

echo "1.2.3.4 my-example" >> hosts

And push it back where it came from:

lxc file push hosts second/etc/hosts

You can also use this mechanism to access log files:

lxc file pull second/var/log/syslog - | less

We won’t be needing that container anymore, so stop and delete it with:

lxc delete --force second


Use a remote image server

The lxc client tool supports multiple “remotes”, those remotes can be read-only image servers or other LXD hosts.

LXC upstream runs one such server at https://images.linuxcontainers.org which serves a set of automatically generated images for various Linux distributions.

It comes pre-added in a default LXD installation, but you can remove or change it if you don't want it.

You can list the available images with:

lxc image list images: | less

And spawn a new CentOS 7 container with:

lxc launch images:centos/7 third

Confirm it's indeed CentOS 7 with:

lxc exec third -- cat /etc/redhat-release

And delete it:

lxc delete -f third

The list of all configured remotes can be obtained with:

lxc remote list


Interact with remote LXD servers

For this step, you'll need a second demo session, so open a new one here.

Copy/paste the “lxc remote add” command from the top of the page of that new session into the shell of your old session.
Then confirm the server fingerprint for the remote server.

Note that it may take a few seconds for the new LXD daemon to listen to the network; just retry the command until it answers.

At this point you can list the remote containers with:

lxc list tryit:

And its images with:

lxc image list tryit:

Now, let’s start a new container on the remote LXD using the local image we created earlier.

lxc launch clean-ubuntu tryit:fourth

You now have a container called “fourth” running on the remote host “tryit”. You can spawn a shell inside it with (then exit):

lxc exec tryit:fourth bash

Now let’s copy that container into a new one called “fifth”:

lxc copy tryit:fourth tryit:fifth

And just for fun, move it back to our local LXD while renaming it to "sixth":

lxc move tryit:fifth sixth

And confirm it’s all still working (then exit):

lxc start sixth
lxc exec sixth -- bash

Then clean everything up:

lxc delete -f sixth
lxc delete -f tryit:fourth
lxc image delete clean-ubuntu


Rapid VM Deployment using Vagrant

I need a VM and I need it asap.

Vagrant is designed to be the quickest way to a running VM, and I'm impressed.  I have VirtualBox running on my trusty Dell XPS 13 laptop, "Sputnik" (named after the collaboration between Ubuntu and Dell).

Installing VirtualBox and Vagrant on Linux Mint (or any Debian/Ubuntu derivative) is as easy as typing…

sudo apt-get install virtualbox vagrant

…and thanks to Vagrant and the many virtual machines available for VirtualBox and VMWare platforms, getting your first VM up and running is as simple as typing…

vagrant init centos/7 or vagrant init debian/jessie64

or vagrant init hashicorp/precise64, the latter Hashicorp Ubuntu LTS build being the one that Vagrant's own documentation is based upon.  For my example here, I'm going to start with the RHEL-based CentOS 7 offering.

This creates a text file called Vagrantfile in the current directory.

Rather than have this file in the root of my home directory, I've relocated it to a subdirectory, ~/Vagrant/Centos7.  This will allow me to have other Vagrantfiles for other types of VM, all stored under ~/Vagrant in their own subdirectories.  Probably not a bad idea, as I'll likely want to spin up a few different VMs over time.

I’m now ready to “up” my VM…

vagrant up

Since I don't already have a copy of the image downloaded, Vagrant goes off and sorts all that out.  While it's doing that, there's nothing stopping me from spinning up an Ubuntu Precise64 VM in another terminal window…

Since I already had the hashicorp/precise64 "Box" image from a previous deployment, it procured this VM in seconds while it continued to download the CentOS Box image in the other terminal.

In my other terminal window, CentOS 7 has now also been procured, along with some helpful tips should any issues arise around the non-installation of VirtualBox Guest Additions on my host (in my case, I'm running VirtualBox version 5.1.34 at the time of writing).

Flick across to VirtualBox Manager and you can see the two new running VMs based on the downloaded Boxes have been added to the Inventory.  Note: Do not rename them.

To connect to them, simply use the command…

vagrant ssh

Both VMs allow you to log on instantly over SSH with just this minimalist command, run from within the directory containing the Vagrantfile.

So there you have it, a CentOS VM and an Ubuntu VM up and running in seconds.  Not hours.  Not days.  Not weeks.

It is that simple.  From zero to VirtualBox, Vagrant and being logged on to a running VM of your choice in three commands and, dare I say it, about three seconds.

To shut them down, or bring them online again, use the following commands, just make sure you run them from within the correct subdirectory or you could shut the wrong VM down…

vagrant halt

vagrant up

It’s worth checking out the Vagrantfile and the documentation online as you can copy and re-use the Vagrantfile and make useful modifications to it.  Here are some more vagrant box commands to explore.

You can see here that although the vagrant box list command shows all boxes/images downloaded on your host system, if you execute vagrant box outdated, it’ll only check for updated box images for the box image specified in your local Vagrantfile, not all Boxes on the host system at once.

Note that this is not the same thing as performing sudo apt-get update && sudo apt-get dist-upgrade (or the Red Hat equivalent, yum update) on the VM built from the Box image (shown below).

As with any new VM or Server, you will probably want to bring all packages up to date using the VM’s own OS package management system.

Vagrant Boxes and Shared Folders

As already established, Vagrant images for VMware or VirtualBox can be downloaded from the internet using the vagrant command line or, as a quick Google search will reveal, from here.

The image files (boxes) are stored in ~/.vagrant.d/boxes.

Once an image has been downloaded, a “box” has been “created”.  This doesn’t mean a server (VM) has been created.  It just means that your local installation of vagrant has a box, ready to be deployed as a VM.

Before this can be done, it is prudent to create a "Project" for your VM, to put a structure in place that allows for some separation, given that you're likely to want more than one VM.  This is very easy to do.  Just create a folder, e.g. vagrant-project1, in your home directory, or anywhere you like.

cd into that directory and initialise a new project,

cd ~/vagrant-project1

vagrant init

This will create a file called Vagrantfile in your project folder.  Edit this file to read as follows…

Vagrant.configure("2") do |config|
   config.vm.box = "hashicorp/precise64"
   config.vm.box_version = "1.1.0"
   config.vm.box_url = "https://vagrantcloud.com/hashicorp/precise64"
end

You don’t have to put all three lines in it, just the first one will do, but why not while you’re in there?

You are now at a point where you have a project, you have a box, and you've configured your new project to use that box.  Now you can bring up a VM in VirtualBox by running the following command from inside your project folder,

vagrant up

You can log in with no password by typing

vagrant ssh

Then create your first user and add him/her to the sudoers file, just as you would with any Linux server,

adduser matt && adduser matt sudo

or adduser matt && usermod -aG sudo matt

whichever way you do it, it doesn't matter.

Shared folders

You can edit files on your VM locally; you don't need to ssh to the server in order to access files on it.  Vagrant has mounted your project folder inside the VM, so if you're on the VM and you cd into /vagrant, you'll be in the same folder as when you're on your host machine and cd into ~/vagrant-project1.  Really cool!

Provisioning

Vagrant can also be configured to automatically provision.  For example, the Vagrantfile can be edited as follows to automatically execute a script; in this case, the script installs Apache if it is not already present.
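Here is a minimal sketch of such a Vagrantfile, assuming the script is saved as bootstrap.sh alongside it; config.vm.provision with a shell path is standard Vagrant syntax:

Vagrant.configure("2") do |config|
   config.vm.box = "hashicorp/precise64"
   # Run bootstrap.sh from the project folder when the VM is provisioned
   config.vm.provision "shell", path: "bootstrap.sh"
end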

The bootstrap.sh script referenced in the Vagrantfile above looks something like the following.
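A sketch along these lines; the dpkg check is my assumption of how "only if not already present" might be implemented:

#!/usr/bin/env bash
# Install Apache only if it is not already present
if ! dpkg -s apache2 >/dev/null 2>&1; then
    apt-get update
    apt-get install -y apache2
fi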

Since it was created in my project folder, it is automatically also on the VM, that folder being mounted inside the VM as mentioned previously.

To effect the changes, you can vagrant reload --provision a running VM to quickly restart and re-provision it, or if you've not yet started the VM, vagrant up will do it automatically.

You can see the VM getting restarted by Vagrant (white text shown above) and Apache getting installed (green text) in the console.

The Apache webserver will not be available from your local web browser, but it can be tested on the VM command line with wget -qO- 127.0.0.1

Networking

Port Forwarding

We can forward the webserver's port 80 to our local machine on, say, port 4567, and test the webserver accordingly using our own web browser.
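A one-line addition to the Vagrantfile does it; config.vm.network "forwarded_port" is standard Vagrant syntax, and the port numbers are just the example values above:

Vagrant.configure("2") do |config|
   config.vm.box = "hashicorp/precise64"
   # Forward guest port 80 (Apache) to host port 4567
   config.vm.network "forwarded_port", guest: 80, host: 4567
end

Run vagrant reload to apply the change, then browse to http://127.0.0.1:4567.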

We can see that, thanks to the forward we've created from the VM, by browsing local port 4567 we're seeing what's being served on port 80 on our VM.



Installing ExpressVPN on Manjaro

The title of this post is deliberately misleading, but that's for a good reason.  The likelihood is, you are an ExpressVPN subscriber (the world's most popular VPN service provider and arguably the best) and have just switched from Linux Mint to Manjaro, only to find that Fedora and Debian-based distributions are always well catered for, but Arch Linux-based distributions like Manjaro, well, not so much.

The title is misleading since the solution to this immediate brick wall you've come up against is not to install ExpressVPN at all, but still use it.

Enter OpenVPN.  Installed already in Manjaro, and just waiting for you to perform a manual configuration.  (Cue the groans)

In fact it is no more taxing than installing the regular Fedora or Debian pre-compiled packages and then entering the subscription code obtained by logging onto ExpressVPN's website with the email address and password you set up when you originally subscribed.

On the page where you can download the packages for many different devices and operating systems (except Arch Linux), there is a Manual Config option too.  You can use this with OpenVPN.

Ensure OpenVPN is selected in the right-hand pane, expand your region at the bottom, choose from the list of ExpressVPN servers for, say, Europe, and download the .ovpn file.

Now you can configure OpenVPN to use the ExpressVPN Server of your choice, with the following command…
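It will be something along these lines, assuming the .ovpn file landed in your Downloads folder; the filename here is just a placeholder for whichever server config you downloaded:

sudo openvpn --config ~/Downloads/my_expressvpn_server.ovpn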

You will be immediately prompted for your VPN Username and Password, which you can copy and paste from the same ExpressVPN Manual Config page shown above.

You should see that a connection has been established.  Just be sure to leave the terminal window open (maybe move it to a different workspace to keep it out of harm's way if you're a habitual window-closer like I am).

To close the VPN connection, just CTRL-C it in the Terminal window.

That's it.  But I'm always keen to give that little bit of extra value, so I'll continue, describing how you can also configure it using Network Manager.

Right-click on your network icon in the bottom right-hand corner (or 'systray' as the Windows folks would call it) and you'll see there is an option to Add a VPN connection.

Select Import a saved VPN configuration, not OpenVPN!

Select your preferred .ovpn file downloaded from ExpressVPN’s site.

Copy and Paste the username and password from the ExpressVPN page…

Next, click on the Advanced… button.

Under the General tab, make sure the following boxes are checked:

Use custom gateway port: 1195
Use LZO data compression
Use custom tunnel Maximum Transmission Unit (MTU): 1500
Use custom UDP fragment size: 1300
Restrict tunnel TCP Maximum Segment Size (MSS)
Randomize remote hosts

Under the Security tab…

Under TLS Authentication tab…

Click OK to finish.

You may need to reboot the computer at this point.

To connect to the ExpressVPN Server, simply select it from the Network icon on the bottom right-hand corner…



Linux Cheatsheets

The following post is for convenience, for those occasions when solutions and answers to your everyday IT challenges are not found in the many posts published on this site.

It serves as a single point of download for many useful cheat sheets freely published by other Linux systems admins, not me.

The original authors are credited on each cheatsheet.

Redhat Linux 5 6 7

Regular Expressions

Centos

Linux Command Line

Bash

Bash and ZSH

Basic Systems Admin

Linux Cluster

Pocket Guide Linux Commands

Linux Network Commands

Things I Forget

Linux Systems Admin

Users and Groups

Vim Editor

Fstab and NFS

Puppet

Shell Scripting

Metasploit

Rsync

Yum

LVM Logical Volume Manager

Awk

Logrotate and Cron

Wget

Bash Script Colours

Docker

Git

SSH

Find

Aircrack

DevOps and SecOps



Protect your privacy with a VPN

Protecting your privacy doesn’t need to be as complicated as using all manner of CIA-beating tech to hide yourself and your computer from the evils that lurk on the interwebs these days, where literally nobody is to be trusted.  It’s fun setting all that stuff up, if that’s what you’re into, but for most of you, you just want a nice, easy solution that works and doesn’t affect your day-to-day online experience.

Frankly, everyone should be using a VPN, whether they realise it or not and whether they think they have anything to hide or not.

My personal favourite service (there are a few very good ones) is ExpressVPN.

Sign up for a small monthly fee and download the software for your given operating system – in my case Linux Mint (so I downloaded the Ubuntu 64bit .deb package).

The commands to install it, activate it using the code supplied when you subscribe, and connect to it are shown below…
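For the record, the sequence is roughly as follows; the .deb filename varies by version, so treat it as a placeholder:

sudo dpkg -i expressvpn_*.deb
expressvpn activate
expressvpn connect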

Does it get any easier than that?  I don’t think so.

Once it's installed and running, you should add it to your startup applications so that, for convenience, it starts automatically when you log in.

Lastly and for completeness, you can add the extension for Firefox (not essential but why wouldn’t you?).

You can activate up to 3 devices with your subscription.  All major operating systems and phone operating systems are supported.

It just works.


Scan a network using nmap

Quickly scan a network or subnet to see which hosts are up using nmap. You may need to install it first.

Installation is as easy as sudo apt-get install nmap

Once installation has completed, scan a range of IP addresses to see which ones are live using the following command as an example…

nmap -sP 192.168.1.1-254

The output will be something like this…

Here you can see that in my network, at present, hosts 192.168.1.1 through .5 are up, along with .14 and .15.



Mount USB HDD by UUID in Linux

The danger with USB hard disk drives is that when you have more than one plugged into your workstation, the device names assigned to them by the operating system might not be consistent between reboots, i.e. /dev/sdb1 and /dev/sdc1 might swap places.  A potential disaster if you rsync data from one to the other on a periodic basis.

If permanently mounting USB hard disks, it's much safer to mount according to the UUID of the disk instead of the device name assigned by the OS.

If you change to root using sudo su - and cd into /dev/disk, you'll see that there are multiple links in there, organised into different folders.  The unique ids are written in /dev/disk/by-uuid, where each link maps a UUID to its device name.

You can see which device name is mounted where using df -h.  Then use the output of ls -al on /dev/disk/by-uuid to correlate UUID to filesystem mount.  There are probably other ways to match filesystem to UUID, but this is quick and easy enough to do.

Note that I've also taken the liberty of piping the commands through grep to reduce output, just showing me what I want to know, i.e. the UUIDs linked to devices named sda1, sda2, sdb1 etc.
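Something like this, assuming your disks show up as sda/sdb; adjust the grep pattern to suit your devices:

df -h | grep /dev/sd
ls -al /dev/disk/by-uuid/ | grep sd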

Once you're confident you know which UUID is which disk, you can permanently mount the disk or disks that are permanent fixtures by creating a mount point in the filesystem and adding a line to /etc/fstab.
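A sketch of the idea; the UUID and mount point are placeholders and ext4 is an assumption, so substitute your own values and filesystem type:

sudo mkdir -p /mnt/usbdisk1
# /etc/fstab entry - mount by UUID; nofail stops the boot hanging if the disk is unplugged
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 /mnt/usbdisk1 ext4 defaults,nofail 0 2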

Finally, mount -a will pick up the new entry and mount the disk at the mount point.


Accidentally formatted hard disk recovery

So you had more than one hard disk plugged into your nice new Humax FreeSat set top box, one containing all your existing downloaded media and the other, an empty one intended for recording.

Upon formatting the drive intended for recording, you subsequently discover that your other FAT32 disk, with all your media on it, now has a nice new, empty NTFS partition on it too.  A real WTF moment that absolutely is not your fault.  It happens to the best of us.  It's just happened to me.

It’s in these moments that having a can-do attitude is of the utmost importance.  Congratulations are in order, because Life has just presented a real challenge for you to overcome.

The likelihood is 95% of your friends will feign sympathy and tell you…

“there’s nothing you can do if you’ve re-formatted the drive”

the largely self-appointed “tech experts” (on the basis they have all-the-gear) will likely tell you…

“you’ve reformatted your FAT32 partition with NTFS so you’ve lost everything.”

…like you’d have stood a chance if you’d gone over it with a like-for-like file system format and they could have got all your data back for you (yeah, right).

Well, if you’ve been sensible enough to not make any writes to the drive, then I can tell you that you absolutely can recover all your data.  In fact, there’s no data to recover as it’s all still on the drive, so “recovery” will be instantaneous.   I’m here to tell you…

You need a computer running Linux and you need to install the testdisk package.

In a console window, run sudo testdisk

You may need to unmount the disk first (using gparted, for example), but leave it plugged in.
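On a Debian/Ubuntu-based system the sequence looks something like this; /dev/sdb is a placeholder for your drive, so confirm yours with lsblk first:

sudo apt-get install testdisk
sudo umount /dev/sdb1
sudo testdisk /dev/sdb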

In testdisk, you need to list the partitions; at this point it'll display the new NTFS partition and nothing else.  There is an option to do a "deeper scan", which walks the cylinders looking for any evidence that a previous file system was there.  If you've not done any writes to the drive since it got reformatted with NTFS, it'll instantly find details of the previous FAT32 partition.  You can cancel the scan at this point, as it's found all it needs (see below).

What you need to do now is tell the disk that it’s this format you want on the primary partition, not the current NTFS one.  You can select it, and even list the files on it (P).

This can in some ways be the most frustrating part: you can see that the files and the index are there, but your file manager will still show an empty NTFS disk.  Now you need to switch the NTFS-structured disk back over to FAT32 by writing the previously discovered FAT32 structure over the top of the primary partition.

You'll receive a message along the lines of needing a reboot.  Just quit testdisk, then remove and re-add the hard disk (if it's USB), or reboot if it's an internal drive, and re-run testdisk afterwards to see that the NTFS partition structure has been replaced with the FAT32 one that existed before.

Like before, you can list the files on the partition using testdisk.  Seeing as this partition is now the current live one, the files should also appear in your file manager.  In my case, I’m using the Nemo file manager on Linux Mint 18.1 Serena, Cinnamon 3.0 edition (and I can highly recommend it).

So there you go.  There are a few lessons to be learned here, for all of us, but like many things in life, things are not always as they seem.

Your computer's file manager does not show you what data is on the disk; it is merely reading the contents of the current known-good file allocation table from an address at the front of the disk where the partition is known to begin.  Such file allocation tables will exist all over the disk from previous lives, in between re-formats for re-use.  When you re-format a disk, you're just giving the file allocation table a fresh start at a new address, but the old one will still exist somewhere, and in multiple places, on the disk.

The file allocation table is the index of disk contents that is read by the file manager in order to give you a representation of what it believes to be retrievable data on your disk.  The data itself can then be found starting at the addresses contained in that index for each file.  The data is still there on the parts of the disk that have not yet been written over with replacement blocks, hence if you've not performed any writes, all your data is still there.

So if you want your data to be truly irrecoverable, you must perform multiple random writes over the top of all cylinders using a tool like DBAN (which will take hours to complete), or better, take an angle grinder to it.  Just remember to take a backup first.


The real proof that the data is indeed readable once again would be to open and play a movie file.  So, as proof, here's a little screenie of VLC Media Player playing Penelope Spheeris' 1981 punk rock documentary "The Decline Of Western Civilization".

Coincidentally, 1981 is quite a significant year for me, I was 6 years old and my parents had just bought me my first computer -a BBC Model B micro computer that had just been released.  I began teaching myself BASIC right away.

