Firefox Bookmarks location (Linux)

At some point you may find yourself wishing you could get your Firefox bookmarks from another user's home directory on your laptop, or from a backup.

I want to be able to find them and restore them into Firefox in my new profile.

This is the location in the filesystem where the Firefox bookmarks are kept. Firefox appears to retain backups of the bookmarks on its own, which is very convenient indeed.

/home/matt/.mozilla/firefox/n5ufmg0e.default/bookmarkbackups

Simply copy the most recent file to the same location in your new profile. The filename is fairly cryptic looking, e.g.

bookmarks-2018-09-13_15_AWyxn1GRB8WngmWVnMswTg==.jsonlz4

so my advice would be not to change it at all; just copy it as it is into the same location in your new profile.
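If you prefer to do the copy from a terminal, here's a minimal sketch (both profile directory names are examples; run ls ~/.mozilla/firefox/ to find yours):

old=~/.mozilla/firefox/n5ufmg0e.default/bookmarkbackups
new=~/.mozilla/firefox/ab12cd34.default/bookmarkbackups   # hypothetical new profile
mkdir -p "$new"
cp -p "$old/$(ls -t "$old" | head -n 1)"  "$new/"   # copy the newest backup, name unchanged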

You'll need to restart Firefox before you'll see it as a recoverable item in the Firefox Bookmarks, Show All Bookmarks, Backup and Restore dialog.

In newer versions of Firefox, the menu bar and bookmarks toolbar are missing by default (I hate all this modern minimalism; it smacks of form over function), but you can enable them again by right-clicking on a blank bit of GUI and selecting Menu Bar and Bookmarks Toolbar (highly recommended).


Firefox Bookmarks location (Windows)

At some point you may find yourself wishing you could get your Firefox bookmarks from an old profile still resident on your laptop, but no longer the one that you're logging into.

For example, I have an old profile here for user matthewbradley, but upon getting my account reactivated, I’m now finding myself logging into matthewbradley.UK – i.e. same human, different local profile on the laptop.

The frustrating thing is that all my bookmarks are in Firefox in my old profile and I want to be able to back them up and restore them into Firefox in my new profile.

This is the location in the filesystem where the Firefox bookmarks are kept. Firefox appears to retain backups of the bookmarks on its own, which is very convenient indeed. You should be able to access it via Explorer or a Cmd console.

C:\Users\matthewbradley\AppData\Roaming\Mozilla\Firefox\Profiles\jrc8wnze.default\bookmarkbackups\

Note that AppData is a hidden folder so if you’re using Explorer, you’ll need to change the View options to show Hidden Items before you’ll see it.

Simply copy the most recent file to the same location in your new profile. The filename is fairly cryptic looking, e.g.

bookmarks-2018-09-13_15_AWyxn1GRB8WngmWVnMswTg==.jsonlz4

so my advice would be not to change it at all; just copy it as it is into your new profile.

You'll need to restart Firefox before you'll see it as a recoverable item in the Firefox Bookmarks, Show All Bookmarks, Backup and Restore dialog.

In newer versions of Firefox, the menu bar and bookmarks toolbar are missing by default (I hate all this modern minimalism; it smacks of form over function), but you can enable them again by right-clicking on a blank bit of GUI and selecting Menu Bar and Bookmarks Toolbar (highly recommended).


Simplify linux find command using shell functions

Do you need to find a file and then perform some action on it, and get caught up in curly brackets, backslashes and syntax errors when you could swear "this command worked in the past"? It's one of the joys of Linux I guess, but it quickly becomes tedious when you're working against a problem and are under stress.

Here is a reference find command that works.  I hope it helps.  It’ll no doubt help me at some point (the entire purpose of my blog is to actually remind myself how to do half of this stuff from time to time).

sudo find ./ -name "*.mkv" -exec ls {} \;

(Note the quotes around *.mkv; without them, the shell can expand the wildcard itself before find ever sees it.)

Something I like to do is create shell functions in the .bashrc file in your home directory to simplify commonly used commands that are long to type and quite syntax-sensitive.

#SHELL FUNCTIONS FOR .bashrc
f() { find . -name "*$1*"; }

This is a nice useful one: it finds any files that have the specified string anywhere in the filename. Just type f All to find any files with the word All in their names.
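A case-insensitive variant is a one-character change (my own illustrative addition, not from the original function), using find's -iname:

ff() { find . -iname "*$1*"; }
# usage: ff readme   (matches README, Readme.md, readme.txt, ...)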

You could create other versions such as this one, that will find and remove files with a specified string in the filename – but I’d really not recommend it.

fr() { find ./ -name "*$1*" -exec rm {} \; ; }

(Note the extra ; before the closing brace: the escaped \; belongs to find, so bash still needs its own terminator to close the function body.)

Be sure to run type fr (or man fr) first to check that your shell function name isn't already the name of an existing command or binary on the system!


Dropbox alternative for Linux users

With the recent announcement that Dropbox is dropping its support for Linux filesystems (other than ext4) in November, you'll no doubt be searching for an alternative cloud storage provider that supports Linux filesystem synchronisation.

Look no further than MEGA.

50GB for free, local filesystem synchronisation, the ability to download and retain your own private key, and a great, easy-to-use web browser client.

File system sync client: https://mega.nz/sync


Bare metal DR and Linux Migration with Relax and Recover (rear)

INTRODUCTION

In short, Relax and Recover (ReaR) is a tool that creates .tar.gz images of the running server and bootable rescue media as .iso images.

Relax and Recover (ReaR) generates an appropriate rescue image from a running system and also acts as a migration tool for physical-to-virtual or virtual-to-virtual migrations of running Linux hosts.

It is not, per se, a file-level backup tool. It is akin to Clonezilla, another popular bare metal backup tool also used to migrate Linux into virtual environments.

There are two main commands, rear mkbackuponly and rear mkrescue, which create the backup archive and the bootable image respectively. They can be combined in the single command rear mkbackup.

The bootable iso provides a configured bootable rescue environment that, provided your backup is configured correctly in /etc/rear/local.conf, will make recovery as simple as typing rear recover from the recovery prompt.

You can back up to an NFS or CIFS share, or to a USB block storage device pre-formatted by running rear format /dev/sdX.
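For the USB route, a minimal sketch (the device name is an example; rear format labels the filesystem REAR-000, which the BACKUP_URL below relies on):

rear format /dev/sdb
# then point the backup at it in /etc/rear/local.conf:
OUTPUT=USB
BACKUP_URL="usb:///dev/disk/by-label/REAR-000"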

A LITTLE MORE DETAIL

A professional recovery system is much more than a simple backup tool. Experienced admins know they must control and test the entire workflow of the recovery process in advance, so they are certain all the pieces will fall into place in case of an emergency. Versatile replacement hardware must be readily available, and you might not have the luxury of using a replacement system that exactly matches the original, yet the partition layout or the configuration of a RAID system must still correspond. If the crashed system's patch level was not up to date, or if the system contained an abundance of manually installed software, problems are likely to occur with drivers, configuration settings, and other compatibility issues. Relax and Recover (ReaR) is a true disaster recovery solution that creates recovery media from a running Linux system. If a hardware component fails, an administrator can boot the standby system with the ReaR rescue media and put the system back to its previous state. ReaR handles the partitioning and formatting of the hard disk, the restoration of all data, and the boot loader configuration.

ReaR is well suited as a migration tool, because the restoration does not have to take place on the same hardware as the original. ReaR builds the rescue medium with all existing drivers, and the restored system adjusts automatically to the changed hardware. ReaR even detects changed network cards, as well as different storage scenarios with their respective drivers (migrating IDE to SATA or SATA to CCISS) and modified disk layouts. The ReaR documentation provides a number of mapping files and examples. An initial full backup of the protected system is the foundation. ReaR works in collaboration with many backup solutions, including Bacula/Bareos, SEP sesam, Tivoli Storage Manager, HP Data Protector, Symantec NetBackup, CommVault Galaxy, and EMC NetWorker (formerly Legato).

WORKING EXAMPLE

Below is a working example of ReaR in action, performed on fresh CentOS VMs running on VirtualBox in my own lab environment.

Note: This example uses a CentOS 7 server and an NFS server on the same network subnet.

INSTALLATION
Add EPEL repository
yum install wget
wget http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm
rpm -ivh epel-release-7-0.2.noarch.rpm
yum install rear

START A BACKUP
On the CentOS machine
Add the following lines to /etc/rear/local.conf:
OUTPUT=iso
BACKUP=NETFS
BACKUP_TYPE=incremental
BACKUP_PROG=tar
FULLBACKUPDAY="Mon"
BACKUP_URL="nfs://NFSSERVER/path/to/nfs/export/servername"
BACKUP_PROG_COMPRESS_OPTIONS="--gzip"
BACKUP_PROG_COMPRESS_SUFFIX=".gz"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' )
BACKUP_OPTIONS="nfsvers=3,nolock"

Now make a backup
[root@centos7 ~]# rear mkbackup -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
mkdir: created directory '/var/lib/rear/output'
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-centos7.iso (90M)
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.QnDt1Ehk25Vqurp/outputfs/centos7/2014-08-21-1548-F.tar.gz'
Archived 406 MiB [avg 3753 KiB/sec]OK
Archived 406 MiB in 112 seconds [avg 3720 KiB/sec]

Now look on your NFS server
You’ll see all the files you’ll need to perform the disaster recovery.
total 499M
drwxr-x--- 2 root root 4.0K Aug 21 23:51 .
drwxr-xr-x 3 root root 4.0K Aug 21 23:48 ..
-rw------- 1 root root 407M Aug 21 23:51 2014-08-21-1548-F.tar.gz
-rw------- 1 root root 2.2M Aug 21 23:51 backup.log
-rw------- 1 root root 202 Aug 21 23:49 README
-rw------- 1 root root 90M Aug 21 23:49 rear-centos7.iso
-rw------- 1 root root 161K Aug 21 23:49 rear.log
-rw------- 1 root root 0 Aug 21 23:51 selinux.autorelabel
-rw------- 1 root root 277 Aug 21 23:49 VERSION

INCREMENTAL BACKUPS
ReaR is not a file-level recovery tool (look at fwbackups for that); however, you can perform incremental backups: the BACKUP_TYPE=incremental parameter takes care of that.
As you can see from the file list above, the letter "F" before the .tar.gz extension indicates that this is a full backup.
Actually it's better to make the rescue ISO separately from the backup.
The command "rear mkbackup -v" makes both the bootstrap ISO and the backup itself, but running "rear mkbackup -v" twice won't create incremental backups for some reason.

So first:
[root@centos7 ~]# time rear mkrescue -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-centos7.iso (90M)
Copying resulting files to nfs location

real 0m49.055s
user 0m15.669s
sys 0m10.043s

And then:
[root@centos7 ~]# time rear mkbackuponly -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Encrypting disabled
Creating tar archive '/tmp/rear.fXJJ3VYpHJa9Za9/outputfs/centos7/2014-08-21-1605-F.tar.gz'
Archived 406 MiB [avg 4166 KiB/sec]OK
Archived 406 MiB in 101 seconds [avg 4125 KiB/sec]

real 1m44.455s
user 0m56.089s
sys 0m16.967s

Run again (for incrementals)
[root@centos7 ~]# time rear mkbackuponly -v
Relax-and-Recover 1.16.1 / Git
Using log file: /var/log/rear/rear-centos7.log
Creating disk layout
Encrypting disabled
Creating tar archive '/tmp/rear.Tk9tiafmLyTvKFm/outputfs/centos7/2014-08-21-1608-I.tar.gz'
Archived 85 MiB [avg 2085 KiB/sec]OK
Archived 85 MiB in 43 seconds [avg 2036 KiB/sec]

real 0m49.106s
user 0m10.852s
sys 0m3.822s

Now look again at those backup files: -F.tar.gz is the full backup, -I.tar.gz is the incremental. There are also basebackup.txt and timestamp.txt files.
total 585M
drwxr-x--- 2 root root 4.0K Aug 22 00:09 .
drwxr-xr-x 3 root root 4.0K Aug 22 00:04 ..
-rw-r--r-- 1 root root 407M Aug 22 00:07 2014-08-21-1605-F.tar.gz
-rw-r--r-- 1 root root 86M Aug 22 00:09 2014-08-21-1608-I.tar.gz
-rw-r--r-- 1 root root 2.6M Aug 22 00:09 backup.log
-rw-r--r-- 1 root root 25 Aug 22 00:05 basebackup.txt
-rw------- 1 root root 202 Aug 22 00:05 README
-rw------- 1 root root 90M Aug 22 00:05 rear-centos7.iso
-rw------- 1 root root 161K Aug 22 00:05 rear.log
-rw-r--r-- 1 root root 0 Aug 22 00:09 selinux.autorelabel
-rw-r--r-- 1 root root 11 Aug 22 00:05 timestamp.txt
-rw------- 1 root root 277 Aug 22 00:05 VERSION
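If you want to run this on a schedule, here's an illustrative crontab sketch (times, paths and the log file are my assumptions; with BACKUP_TYPE=incremental and FULLBACKUPDAY="Mon" in local.conf, rear decides for itself whether each run is full or incremental):

# refresh the rescue ISO weekly, back up daily
0 1 * * 1 /usr/sbin/rear mkrescue >> /var/log/rear/cron.log 2>&1
0 2 * * * /usr/sbin/rear mkbackuponly >> /var/log/rear/cron.log 2>&1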

RECOVERY
ReaR is designed to create a bootable .iso, making recovery very easy and flexible in terms of options. The .iso files can be booted from CD/DVD optical media, USB block storage devices and hard disks, and also in VMware and VirtualBox.
To recover a system, you first need to boot the .iso that was created with the backup.
You may use your favorite method for booting the .iso, whether it's creating a bootable USB stick, burning it to a CD, mounting it in iDRAC, etc.
Just boot it on the server that you want to restore to.
When the recovery screen loads, select the top option to recover.
Type root to log in.
To start recovery, type
rear -v recover

TROUBLESHOOTING RECOVERY
# Create missing directory:
mkdir /run/rpcbind

# Manually start networking:
chmod a+x /etc/scripts/system-setup.d/60-network-devices.sh
/etc/scripts/system-setup.d/60-network-devices.sh

# Navigate to /var/lib/rear/layout/xfs and list the files there.
# Edit each file ending in .xfs with vi and remove "sunit=0 blks" from the "log" section.
# In my case that meant the following files; save each one after editing:
vi /var/lib/rear/layout/xfs/fedora_serv–build-root.xfs
vi /var/lib/rear/layout/xfs/sda1.xfs
vi /var/lib/rear/layout/xfs/sdb2.xfs

# Run the following commands to get a list of LVs and VGs:
lvdisplay
vgdisplay

# Run the following commands to remove the LVs and VGs listed above
# (pass the LV paths and VG names reported by lvdisplay/vgdisplay):
lvremove <lv-path>
vgremove <vg-name>

# Now run recovery again:
rear recover



Linux Containers with LXC/LXD

Unlike VMware/VirtualBox virtualisation, containers wrap up individual workloads (instead of the entire OS and kernel) and their dependencies into relatively tiny containers (or "jails" if you're talking FreeNAS/AIX, "zones" if you're talking Solaris, "snaps" if you're talking Ubuntu Core). There are many solutions for Linux containerisation in the marketplace at present, and LXC is free.

This post serves as a go-to reference page for examples of all the most commonly used lxc commands when dealing with linux containers.  I highly recommend completing all the sections below, running all the commands on the test server available at linuxcontainers.org, to fully appreciate the context of what you are doing.  This post is a work in progress and will likely be augmented over time with examples from my own lab, time permitting.

Your first container

LXD is image based; however, by default no images are loaded into the image store, as can be seen with:

lxc image list

LXD knows about 3 default image servers:

ubuntu: (for Ubuntu stable images)
ubuntu-daily: (for Ubuntu daily images)
images: (for a bunch of other distributions)

The stable Ubuntu images can be listed with:

lxc image list ubuntu: | less

To launch a first container called “first” using the Ubuntu 16.04 image, use:

lxc launch ubuntu:16.04 first

Your new container will now be visible in:

lxc list

Running state details and configuration can be queried with:

lxc info first
lxc config show first


Limiting resources

By default your container comes with no resource limitation and inherits from its parent environment. You can confirm it with:

free -m
lxc exec first -- free -m

To apply a memory limit to your container, do:

lxc config set first limits.memory 128MB

And confirm that it’s been applied with:

lxc exec first -- free -m
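Other limits follow the same pattern; for instance, CPU (my addition for illustration, using the standard limits.cpu and limits.cpu.allowance keys):

lxc config set first limits.cpu 1
lxc config set first limits.cpu.allowance 50%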


Snapshots

LXD supports snapshotting and restoring container snapshots.
Before making a snapshot, let's make some changes to the container, for example, updating it:

lxc exec first -- apt-get update
lxc exec first -- apt-get dist-upgrade -y
lxc exec first -- apt-get autoremove --purge -y

Now that the container is all updated and cleaned, let’s make a snapshot called “clean”:

lxc snapshot first clean
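You can confirm it was taken with lxc info, which lists snapshots at the end of its output:

lxc info first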

Let’s break our container:

lxc exec first -- rm -Rf /etc /usr

Confirm the breakage with (then exit):

lxc exec first -- bash

And restore everything to the snapshotted state (be sure to execute this from the container host, not from inside the container, or it won't work):

lxc restore first clean

And confirm everything’s back to normal (then exit):

lxc exec first -- bash


Creating images

As you probably noticed earlier, LXD is image based; that is, all containers must be created from either a copy of an existing container or from an image.

You can create new images from an existing container or a container snapshot.

To publish our “clean” snapshot from earlier as a new image with a user friendly alias of “clean-ubuntu”, run:

lxc publish first/clean --alias clean-ubuntu

At which point we won’t need our “first” container, so just delete it with:

lxc stop first
lxc delete first

And lastly we can start a new container from our image with:

lxc launch clean-ubuntu second


Accessing files from the container

To pull a file from the container you can use the “lxc file pull” command:

lxc file pull second/etc/hosts .

Let’s add an entry to it:

echo "1.2.3.4 my-example" >> hosts

And push it back where it came from:

lxc file push hosts second/etc/hosts

You can also use this mechanism to access log files:

lxc file pull second/var/log/syslog - | less

We won’t be needing that container anymore, so stop and delete it with:

lxc delete --force second


Use a remote image server

The lxc client tool supports multiple "remotes"; those remotes can be read-only image servers or other LXD hosts.

LXC upstream runs one such server at https://images.linuxcontainers.org which serves a set of automatically generated images for various Linux distributions.

It comes pre-added in a default LXD install, but you can remove it or change it if you don't want it.

You can list the available images with:

lxc image list images: | less

And spawn a new CentOS 7 container with:

lxc launch images:centos/7 third

Confirm it's indeed CentOS 7 with:

lxc exec third -- cat /etc/redhat-release

And delete it:

lxc delete -f third

The list of all configured remotes can be obtained with:

lxc remote list


Interact with remote LXD servers

For this step, you'll need a second demo session, so open a new one at linuxcontainers.org.

Copy/paste the “lxc remote add” command from the top of the page of that new session into the shell of your old session.
Then confirm the server fingerprint for the remote server.

Note that it may take a few seconds for the new LXD daemon to listen to the network, just retry the command until it answers.

At this point you can list the remote containers with:

lxc list tryit:

And its images with:

lxc image list tryit:

Now, let’s start a new container on the remote LXD using the local image we created earlier.

lxc launch clean-ubuntu tryit:fourth

You now have a container called “fourth” running on the remote host “tryit”. You can spawn a shell inside it with (then exit):

lxc exec tryit:fourth bash

Now let’s copy that container into a new one called “fifth”:

lxc copy tryit:fourth tryit:fifth

And just for fun, move it back to our local lxd while renaming it to “sixth”:

lxc move tryit:fifth sixth

And confirm it’s all still working (then exit):

lxc start sixth
lxc exec sixth -- bash

Then clean everything up:

lxc delete -f sixth
lxc delete -f tryit:fourth
lxc image delete clean-ubuntu


NFS Server and Client on Centos 7

Here is a quick and dirty working example of an NFS server setup on CentOS 7 that allows anonymous connectivity from any host to an exported filesystem, plus a client mount from Fedora. It can be used to assist in troubleshooting problematic NFS mounts.

On Centos Server, Install NFS

The CentOS server should be able to ping the Fedora machine.

yum install nfs-utils

mkdir /var/nfsshare

chmod -R 755 /var/nfsshare

chown nfsnobody:nfsnobody /var/nfsshare

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

vi /etc/exports

/var/nfsshare *(rw,sync,no_root_squash,no_all_squash)

systemctl restart nfs-server

firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --reload

On  Fedora Client, Mount NFS Export

The Fedora machine should be able to ping the CentOS machine. Edit /etc/hosts if you want to use hostnames (and don't have DNS).

yum install nfs-utils

mkdir -p /mnt/nfs/var/nfsshare

vi /etc/fstab

centos1:/var/nfsshare /mnt/nfs/var/nfsshare nfs defaults 0 0

mount -a

or mount -t nfs centos1:/var/nfsshare /mnt/nfs/var/nfsshare/
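If the mount fails, a quick sanity check from the client is to ask the server what it's exporting (showmount is part of nfs-utils):

showmount -e centos1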

Note that all the above steps are necessary for this to work. Once it works, modify one thing at a time and keep retesting the mount until you find what breaks it.



Installing different Java JRE on Linux

If you need to install a newer/alternative/multiple version(s) of the Java Runtime Environment on a RHEL server, read the following guide. It will enable you to switch between multiple installed JREs. This can be useful for development/pre-prod servers, where prod is running a different version.

If you have a requirement to run multiple JREs on a single RHEL server, then use the “alternatives” package to facilitate switching between them via a convenient menu system.  Notes on configuring alternatives are at the end of this post.

CHECK PROD JRE VER
[matt@CyberfellaProdSvr ~]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

CHECK PRE-PROD JRE VER
[matt@CyberfellaPreSvr JRE 1.7.65]$ java -version
java version "1.6.0_34"
OpenJDK Runtime Environment (IcedTea6 1.13.6) (rhel-1.13.6.1.el6_6-x86_64)
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)

CHECK EXISTING JRE(s) INSTALLED
rpm -qa | sort | grep ^j
java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
java-1.6.0-openjdk-1.6.0.34-1.13.6.1.el6_6.x86_64
java-1.6.0-openjdk-devel-1.6.0.34-1.13.6.1.el6_6.x86_64

DOWNLOAD JRE VERSION
The latest version of JRE (Java Runtime Environment) is easy to find. The older ones, not so much. You’ll need an Oracle Support Account. Don’t panic, it’s free to set up.

WINSCP JRE TO LINUX SERVER
Install WinSCP on Windows. It will prompt to import all your PuTTY sessions. How convenient! I love stuff that saves time.
Using WinSCP on Windows, SCP JRE 1.7.67 to the server.

INSTALL/UPGRADE JRE 1.7.67
rpm -Uvh jre-7u67-linux-x64.rpm

ADDING JAVA to ALTERNATIVES
View current selection of different JREs in alternatives
alternatives --config java
Add a new version after installing the rpm
alternatives --install <link> <name> <path> <priority>
alternatives --install /usr/bin/java java /usr/java/jre1.7.0_67/bin/java 3
Switch to the new version
update-alternatives --config java
or just "alternatives --config java" seems to do the same thing.
Show current JRE
java -version
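For scripted, non-interactive switching, alternatives also supports --set; the path below matches the install example above (a sketch, so adjust to your own JRE path):

alternatives --set java /usr/java/jre1.7.0_67/bin/java
java -version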


Fun with Cowsay

The terminal can get a little tiresome by the end of a full working week, so why not use cowsay to add a little fun to your stdout?

Just be sure to check it's actually installed before you start calling it from your shell scripts. I found it was installed by default on Debian-based distros but not on a CentOS 7 VM I spun up using Vagrant, so your mileage may vary, as they say.
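A minimal guard along those lines for shell scripts (a sketch; nothing assumed beyond standard tools):

if command -v cowsay >/dev/null 2>&1; then
  cowsay "backup complete"
else
  echo "backup complete"   # fall back quietly if cowsay is absent
fi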

Installation

sudo apt-get install cowsay

Basic usage

cowsay "hello"

View all the possible “cows”

ls -1 /usr/share/cowsay/cows | cut -d . -f1 | while read eachline; do cowsay -f $eachline "$eachline"; done

There's loads of them, and more to choose from online too. In the meantime, here's a couple of dragons to whet your appetite…


Automation with Ansible

The DevOps revolution has no end of brilliant projects and products that promise to get you closer to the "Infrastructure as Code" ideology. I've briefly introduced the rapid deployment of virtual machines using Vagrant here, and now it's time to introduce Ansible.

Ansible is a tool that should be available from your repositories already, just like the aforementioned Vagrant. It's a Red Hat project, but is available across the Linux distribution landscape. So on Debian/Ubuntu/Mint, installation is as easy as…

sudo apt-get install ansible

It is agentless, which is great as it radically simplifies the process of getting up and running. Like many agentless tools (the ones that don't require the installation of a client daemon on all machines), you will either need to be using a directory admin account that is already set up to have privileges on all other servers in the domain, or else copy SSH keys out to all the machines that you intend to use Ansible against, in order to bring automation and consistency to your Linux network. The process of setting up passwordless authentication has already been covered here, but it's simple enough, so I'll summarise it here for convenience.

Let's say you have a machine linux1 with a user matt that you want to use as your ansible "server" to run commands against servers linux10, linux11 and linux12. The other servers also have a user matt, but it's a local user, not a user in a directory. In order for matt on linux1 to be accepted as being synonymous with matt on the other servers, matt's SSH keys will need to be generated on linux1 and the public key copied to the linux10, linux11 and linux12 machines.

su - matt

ssh-keygen  (Note: do not use a passphrase; leave it blank, or you'll be prompted every time you attempt a passwordless connection to a remote host, and this will obstruct using your public key authentication as root on the remote system.)

cd .ssh

ssh-copy-id -i /home/matt/.ssh/id_rsa.pub linux10  (then repeat for linux11 and linux12; ssh-copy-id copies to one host at a time. Note: on reflection, always use the full path to the id_rsa.pub file as shown. This is because there is the potential to su to root and land in the previous user's .ssh folder, and subsequently copy that user's keys instead of the root user's. You'll be hours figuring that one out.)

Now that we've got that out of the way, we can get back to the subject at hand, namely Ansible. Ansible does away with having to ssh to every machine in order to execute something locally on each remote machine to make it consistent with the other machines in your enterprise environment.

There are many modules available in Ansible, documented here, but in order to keep this introduction simple, we'll just demo the command module.

There is just one last thing to set up before that: a hosts file that groups together the hosts in your network. Hosts can belong to more than one group, but in this simple demo we need to create a file called hosts and, in it, create a group called [group1] with the linux10, linux11 and linux12 hosts as members…

vi hosts

[group1]
linux10
linux11
linux12

With this group created, we can now execute a command against each of the hosts in the group using ansible.

The syntax is ansible, followed by the group name, followed by -i (inventory), in our case the hosts file (not to be confused with /etc/hosts), followed by -m (module name, in our case the command module), followed by -a (arguments to be passed to the module, in our case "uname -a").

ansible group1 -i ./hosts -m command -a “uname -a”

This will return the results of running uname -a on each of the servers listed in the group in our hosts file, to stdout just as if we had ssh’d to each of them in the same terminal and executed the command.
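Before running real commands, a quick connectivity test with Ansible's ping module (it checks SSH access and Python on each host rather than sending ICMP) can save some head-scratching:

ansible group1 -i ./hosts -m ping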

The example below shows the results of executing uptime against my laptop from a CentOS VM running on VirtualBox, as user matt, where the ssh keys have been previously copied to my laptop, then again as the root user, where they have not. Note also that once the passphrase has been entered once for the user, that's it from that point on, and the ansible host is effectively trusted to execute commands on remote hosts. Powerful and convenient stuff.

If you want to be able to use Ansible as root to execute commands remotely (using Ansible's -b option, i.e. become), then you'll need to copy the root user's ssh keys over to the remote hosts too. You can do this the exact same way as you copy over any other user's ssh keys, only this one comes with an added obstacle: ssh as root is not permitted by default in most modern Linux distributions, as a way of hardening against a brute force attack as root. Sensible stuff, and not that difficult to overcome. You just need to edit the /etc/ssh/sshd_config file on the remote host to permit root login while you copy the keys across.

Just comment out the existing PermitRootLogin prohibit-password line and replace it with PermitRootLogin yes. (Note: not PermitRootLogin PermitRootLogin, a mistake I made at first, after which I couldn't restart sshd.)

service sshd restart

And voila, the root user's ssh keys copy across fine.

Now you need to change the sshd_config file back to PermitRootLogin prohibit-password and restart sshd again to put the system back to its secure default state, whereby the root user is allowed to attempt a connection but is not allowed to send a password. If ssh keys are in place, of course, passwords don't need to be sent; that's the whole point of ssh keys, after all!

Voila, I can now ssh to the remote system as root, even though the ssh daemon on the remote system is configured to not permit password authentication for inbound connections by the user root. That being so, you will now be able to use the Ansible -b option (become) to execute commands or playbooks to configure remote systems as root.

At this point, you may find yourself saying “not on my system it doesn’t!”

If that’s the case, please go to the end of the post and read the Troubleshooting SSH connections section for tips on what to do.

Although ansible now works as root on remote systems, you’ll find that sudoers throws you an error when attempting to use the -b (become root) option when running the ansible command as a user other than root on the ansible server.

Adding the user to the sudo group on the remote host doesn't fix this either, since the sudoers mechanism will still (by default) ask for the user's password in order to run a command as root.
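The edit that fixes it is a NOPASSWD rule added with visudo; the username below is just an illustration (it's vagrant in the example that follows):

# added via visudo; username is an example
vagrant ALL=(ALL) NOPASSWD: ALL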

Once this edit has been made to sudoers using visudo, you can see below that re-running the same ansible -b command as the vagrant user successfully executes the uptime command as root on the remote system.

And therein ends my initial introduction to ansible and hopefully some tips on getting it working the way you want.  Playbooks will be covered in a separate post.

Troubleshooting SSH connections

You may find yourself having issues connecting as root, or as any other user for that matter. Despite having created and copied your public keys to the remote systems, you're still being prompted for passphrases or passwords for the user, defeating the whole point of setting up passwordless authentication.

Here’s a quick checklist of things to look out for and ways to troubleshoot the connection.

service sshd stop && /usr/sbin/sshd -d  (restart sshd in debug mode on the remote machine)

ssh -vv <remote-host> (connect to the remote host using ssh in verbose mode)

Before Googling the errors, make sure you can confirm the following:

When you generated the public keys using ssh-keygen you left the passphrase blank.

When you copied the keys over to the remote machine using ssh-copy-id, you used the full path to the id_rsa.pub file. If you're root, it's quite probable you copied another user's ssh keys over instead of your own!

The .ssh directory in the user's home directory has 700 permissions and the authorized_keys file has 600 permissions.
