Juju, Hadoop and OpenStack. Amazing.

Wow.  This is the coolest thing I've seen since vMotion.  Watch from 15:00 for 40 seconds as Mark Shuttleworth migrates an entire infrastructure-as-a-service stack from one public cloud (Amazon EC2) to another (HP Cloud) with Juju.  Just amazing.



List UIDs of failed files

If you're copying data from an NFS device, the local root user of your NFS client will not have omnipotent access to the data (root is typically squashed), so if the permissions deny access to everyone else, i.e. rw-rw---- or similar (ending in --- instead of r--), then even root will fail to copy some files.

To capture the outstanding files after the initial rsync run as root, you'll need to determine the UIDs of the owners of the failed files, create dummy users for those UIDs and perform subsequent rsyncs su'd to those dummy users.  You won't get read access any other way.

The following shell script will take a look at the log file of failures generated by rsync -au /src/* /dest/ 2> rsynclog and list the UIDs of user accounts that have read access to the failed-to-copy data.  (Note: when using rsync, appending a * will effectively miss .hidden files.  Lose the * and use a trailing slash on the source to capture all files, including hidden files and directories.)

Subsequent rsync operations can be run by each of these users in turn to catch the failed data.  This requires the users to be created on the system performing the copy, e.g. useradd -o -u <UID> -g0 -d /home/dummyuser -s /bin/bash dummyuser

This could also easily be incorporated into the script of course; a sketch of that follows the script below.

#!/usr/bin/bash

#Variables Section

    SRC="/source_dir"
    DEST="/destination_dir"
    LOGFILE="/tmp/rsynclog"
    #The rsync command that generates ${LOGFILE}, shown for reference
    RSYNCCOMMAND="/usr/local/bin/rsync -au ${SRC}/* ${DEST} 2> ${LOGFILE}"
    FAILEDDIRLOG="/tmp/faileddirectorieslog"
    FAILEDFILELOG="/tmp/failedfileslog"
    UIDLISTLOG="/tmp/uidlistlog"
    UNIQUEUIDS="/tmp/uniqueuids"

#Code Section

    #Create a secondary list of all the failed directories
    grep -i opendir ${LOGFILE} | grep -i failed | cut -d\" -f2 > ${FAILEDDIRLOG}

    #Create a secondary list of all the failed files
    grep -i "send_files failed" ${LOGFILE} | cut -d\" -f2 > ${FAILEDFILELOG}

    #You cannot determine the owning UID from a failed directory, but you can for a failed file

    #Remove any existing UID list log file prior to writing a new one
    if [ -f ${UIDLISTLOG} ]; then
        rm ${UIDLISTLOG}
    fi

    #Create a list of UIDs for failed file copies (ls -n prints the numeric UID rather than the username)
    cat ${FAILEDFILELOG} | while read EACHFILE; do
        ls -ln "${EACHFILE}" | awk '{print $3}' >> ${UIDLISTLOG}
    done

    #Sort and remove duplicates from the list
    sort ${UIDLISTLOG} | uniq > ${UNIQUEUIDS}

    cat ${UNIQUEUIDS}

exit

Don’t forget to chmod +x a script before executing it on a Linux/UNIX system.
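As mentioned above, the dummy-user creation and the per-UID rsync passes could be incorporated into the script.  Here is a minimal sketch of that idea, assuming the /tmp/uniqueuids list produced above; the dummy<UID> account names are hypothetical and the rsync paths and options need to match your own run.

#For each UID found, create a dummy user with that UID, re-run rsync as them, then tidy up
while read EACHUID; do
    useradd -o -u ${EACHUID} -g0 -d /home/dummy${EACHUID} -s /bin/bash dummy${EACHUID}
    su - dummy${EACHUID} -c "/usr/local/bin/rsync -au /source_dir/ /destination_dir 2>> /tmp/rsynclog.${EACHUID}"
    userdel dummy${EACHUID}
done < /tmp/uniqueuids

Remember the dummy users will also need write access to the destination, or the same files will simply fail on the other side instead.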


Counting number of files in a Linux/UNIX filesystem

cd to the starting directory, then to count how many files and folders exist beneath,

find . -depth | wc -l

although in practice find . | wc -l works just as well, leaving off -depth.  Or, to count just the files,

find . -type f | wc -l

Note that on Linux, a better way to compare source and destination directories might be to count the inodes used by each filesystem.

df -i

Exclude a hidden directory from the file count, e.g. the .snapshot directory on a NetApp filer.

#find ./ -type f \( ! -name ".snapshot" -prune \) -print | wc -l    (Note: had real trouble with this!)

New approach…  :o(

ls -al | grep ^d | awk '{print $9}' | grep -v "^\." | while read eachdirectory; do

     find "./${eachdirectory}" -depth | wc -l

done

Then add up the numbers at the end.
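With hindsight, the -path/-prune idiom is a more reliable way to do this in one go; a sketch that skips a top-level .snapshot directory and counts everything else:

find . -path ./.snapshot -prune -o -type f -print | wc -l

The -o is the trick: anything under ./.snapshot gets pruned, and everything else falls through to the -type f -print side and gets counted.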

Another way to count files in a large filesystem is to ask the backup software.  If you use emc Networker, the following example may prove useful.

sudo mminfo -ot -q 'client=mynas,level=full,savetime<7 days ago' -r 'name,nfiles'

name                         nfiles

/my-large-volume          894084


Copying the contents of one filesystem to another.

Sometimes on older operating systems, rsync (the first choice for copying files from one filesystem to another) may not be available.  In such circumstances, you can use tar.  If it's an initial copy of a large amount of data, this may actually be 2 to 4 times faster owing to the lack of rsync's checksum calculations, although rsync would be faster for subsequent delta copies.

timex tar -cf - /src_dir | ( cd /dest_dir ; tar -xpf - )

Add a v to the tar -xpf command if you want to see a scrolling list of files as they are copied, but be aware that this will slow it down.  I prefer to leave it out and just periodically ls -al /dest_dir in another terminal to check the files are being written correctly.  timex at the front of the command will show you how long it ran for once it completes (which may be useful to know).

With the lack of verbose output, if you need confirmation that the command is still running, use ps -fu user_name | grep timex, although the originating terminal should not have returned a command prompt unless you backgrounded the process with an & upon execution, or with CTRL-Z, jobs, bg job_id subsequently.  Note that backgrounding the process may hinder your collection of timings, so it is not recommended if you are timing the operation.

Another alternative would be to pipe the output of find . -depth into cpio -p, thus using cpio's pass-through mode…

timex find . -depth | cpio -pamVd /destination_dir

Note that this command can appear to take a little while to start before printing a single dot to the screen per file copied (the capital V verbose option, as opposed to the lowercase v option).

If you wish to copy data from one block storage device to another, it'd be faster to do it at block level rather than file level.  To do this, ensure the filesystems are unmounted, then use the dd command: dd if=/dev/src_device of=/dev/dest_device

Do not use dd on mounted filesystems.  You will corrupt the data.
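A minimal sketch, using hypothetical device and mount point names; a larger block size than dd's default makes a big difference to throughput:

umount /src_mnt
umount /dest_mnt
dd if=/dev/src_device of=/dev/dest_device bs=1024k

(bs=1M also works with GNU dd, but bs=1024k is more portable to older UNIX dd implementations.)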

Overall progress can be monitored throughout the long copy process with df -h in a separate command window.  Prepending the cpio command with timex will not yield any times once the command has completed, but cpio is faster than both tar and rsync for initial large copies of data.

To perform a subsequent catch-up copy of new or changed files, simultaneously deleting any files from the destination that no longer exist on the source for a true synchronisation of the two sides, much like a mirror synchronisation, use…

timex ./rsync -qazu --delete /src_dir/* /dest_dir

Note this will not include hidden files.  To do that, lose the * and add a trailing slash to the source instead, i.e. /src_dir/

or, to catch up the new contents on the Src side to the Dest side without deleting any files on the Dest side that have been deleted on Src, use

rsync -azu --progress /NFS_Src/* /NFS_Dest

a = archive mode; equals -rlptgoD (recursive, links, permissions, times, group, owner and device files preserved)

z = compress file data during transfer (optional, but generally best practice)

u = update

--progress in place of v (verbose) or q (quiet).  A touch faster and more meaningful than a scrolling list of files going up the screen.


FTP backup script

If you have a remote web server, then for a small fee, your hosting company will back it up for you.  This is money for old rope.  If you run Linux at home, you can back it up yourself, just by transferring the contents to a local folder on your computer with a shell script that performs the ftp transfer, and the whole thing can be fully automated by adding it to cron (crontab -e).

#!/bin/bash
HOST='ftp.mywebserver.co.uk' # change the server name/IP address accordingly
USER='myftpusername' # username also change
PASSWD='myftpuserpassword' # password also change
# /www on the server contains the files to be backed up
# /webserverbackup is the local directory to back up to
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
bin
prompt off
cd /www
lcd /webserverbackup
mget *
bye
END_SCRIPT
exit

Don't forget to change the username, password, ftp server name/IP address and the remote and local directories to suit your requirements.  And don't forget to chmod +x the ftpbackup.sh script to make it executable.  Finally, use crontab -e to add a scheduled job to run the script automatically.  You can also extend it to create a readable log file or to warn you via email in the event of an error.
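For example, a crontab entry along these lines (the paths are illustrative) would run the backup at 02:30 every night and keep a log:

30 2 * * * /home/me/ftpbackup.sh >> /home/me/ftpbackup.log 2>&1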



Edit wbar dock and conky in crunchbang/openbox

Besides editing menu.xml to customise the menu, why not install wbar and edit /usr/share/wbar/dot.wbar to add convenient quick-launch icons to the wbar dock for your most commonly called-upon apps.  It's even simpler than editing the menu.xml file, especially if you use vi.

My desktop is quite nicely themed and as conky shows, is very light on resources.

conky (the monitor on the left-hand side of the screen) can be customised by editing .conkyrc in your home directory.  To install it, simply type sudo apt-get install conky, then get hacking.

To effect the changes, simply right-click on wbar, or restart the conky process by sending it a hangup signal.
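A one-liner for that, assuming pidof is available on your system:

kill -HUP $(pidof conky)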


Edit Openbox menus in Crunchbang Linux

Unlike some of the heavier, fully-functional desktop environments typically provided by the top five on DistroWatch, the Openbox window manager used by Crunchbang will not always automatically add newly installed programs to the menu used to subsequently invoke them.

Most folks, who are not as far down the rabbit hole as I am, understandably just want a desktop that works, but they should pause for a moment before turning off to the idea of Openbox and Crunchbang, for the following two reasons.

1. It makes full disk encryption (not just your home directory) available to you during installation, which is very reassuring should your laptop ever get stolen.

2. Each time I consider parting company with it and going back to a heavier distro, I find I can’t bring myself to do it because it does everything I need it to.  Plus it does it more efficiently and in as minimalist a way as my puny hardware resources could ever hope for, so why would I?

On top of the Linux kernel, you'll already be running sufficient packages, put into place by the installation procedure, to provide a working desktop environment.  That handles a bunch of important stuff you won't have thought about: removable devices such as USB sticks, encrypting the files that get written to disk, searching for wireless LANs, and sending a DHCP request if you plug into a network, in the hope that it'll learn of some nearby DNS servers so that your web browser will work when you ask for google.com.  But depending on how lightweight your chosen distro is, it may not have much else.  Crunchbang is one of these.  It does the hard stuff up front and leaves you with a pretty blank canvas on which to build and have fun.  For those of you who say you only need web and email, that's nonsense: there's a whole bunch of stuff that web and email need at the application level to work properly, but rest assured Crunchbang already provides it, despite its blank, black appearance.

It'll even keep on working when you find yourself needing to do real work.  I have to run my own company using just my laptop during the week, when I'm away from home and the rest of my infrastructure, and I use it for entertainment too, so there's really no better test than that.  Reviews are great, but the proof of the pudding is in the eating.  I want to get the work done, and that means I want a fast, super-responsive interface that doesn't mess about.  My laptop isn't for impressing my friends with; it's key to my survival and my only source of free entertainment.  It has to deliver, and if it comes up short, I will find out quickly.  I also like messing about with photographs, so those extra system resources are appreciated.

Additional functionality comes in the form of freely available packages (programs and their dependent libraries), installed and removed at will using Synaptic Package Manager, which downloads all the software you'll ever need from known repositories as and when functionality is required or retired on your desktop, much like your iPhone or Android phone, only Crunchbang doesn't carry the advertising or any of the bad stuff that leaves you wondering if your computer is actually free, or even your own.  Install it and you'll see much blackness!  No childish Fisher-Price icons here to lure in paying consumers, just a blank, black canvas and a package manager.  That's as simple as it gets.

BUT, as I started out saying, it won't necessarily add programs to your desktop menu after they're installed.  Before you let that become an issue and miss out on feeling like that kid in the 70s opening the box and smelling the plastic, read on.  It's easy to edit the menu to add the programs you've just installed.


Settings, Openbox, Edit menu.xml (not Reconfigure, that's for afterwards).

The menu.xml file will open in the Geany text editor.  Anything between <item> and </item> is a, well, item.  So copy an existing block of code and paste it in somewhere appropriate according to what type of program it is (Media, Office, Graphics etc), then just modify the label and executable as required.  I added the xcalc calculator (shown below).
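A typical item block looks something like this; the label and command here are just the xcalc example:

<item label="Calculator">
  <action name="Execute">
    <command>xcalc</command>
  </action>
</item>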

When you’re happy with your edit, save it, then Settings, Openbox, Reconfigure to re-load the .xml file you just modified and see the new item in the menu.  Test it to make sure it works.



vi Reference

Anybody can google the answer, right?  Correct.  However, not everybody can then apply the solution, especially if it involves editing text files from the command line.  Cue the vi editor.

Before you attempt to modify a file with vi, take a copy of the file so you have something to fall back on for when you 1. get it horribly wrong, then 2. subconsciously quit with :wq!, writing your wrongs back to disk.  D'oh!


Navigation

h j k l    Move left, down, up, right

w          Forward one word

b          Back one word

0          Start of line

$          End of line

1G         Top of file

G          End of file

Basic editing

Esc       Switch to Command Mode

a          Append after cursor

i          Insert before cursor

R          Overtype

u          Undo (maintains history)

x          Delete character under cursor

O          Open a new line above the cursor (o opens one below)


Display settings

:set ic               turn search case sensitivity off

:set noic            turn search case sensitivity on

:set nu              turn line numbering on

:set nonu           turn off line numbers


Cut, Copy and Paste                                      

dw        Cut whole word

dd         Cut whole line

cw        Change word

4dd       Cut four lines

d4w      Cut four words

yy         Yank (Copy) whole line

y$         Yank from cursor to end of line

y3w      Yank three words

3yy       Yank three lines

p          Paste after cursor

cc         Change whole line

c4l        Change next 4 chars

c4w      Change next 4 words

c$         Change from cursor to end of line

c0         Change from cursor to beginning of line


Searching and Replacing                                

/word    find “word” (forwards)

?word   find “word” (backwards)

n          goto next match of “word”

N          goto previous match of word

:s/dog/cat/gi                        find and replace all dogs with cats on this line only, ignoring case

:%s/dog/cat/g                        find dog and replace with cat on all lines (globally)

:g/mywrod/s//myword/g                find 'mywrod' and replace it with 'myword' on all lines

:g/matt/s/fooobar/foobar/g           find lines containing 'matt' and replace 'fooobar' with 'foobar' on those lines


Saving, Loading and Quitting

Note: hit Esc to enter Command Mode first…

:w        save with current filename

:wq       save and quit

:q         quit

:q!        forcibly quit

:wq!      forcibly write and quit

:r <filename>    read <filename>


Setting up vi

On UNIX, edit the .exrc file in your home dir and add a line like:  set smd showmatch ic wrapmargin=0 report=1

If your Linux system uses vim instead of vi, then edit .vimrc, not .exrc, to get the same result, though in vim it's probably already set up nicely to start with.

Add syn on in .vimrc to turn syntax highlighting on (nice).  Also set cindent, set autoindent and set nu for indentation and line numbering if you want that too.
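Put together, a minimal .vimrc along those lines might look like this:

" ~/.vimrc
syn on
set cindent
set autoindent
set nu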



Xubuntu 64 bit vs Crunchbang 64 bit

My recent purchase of a Lenovo IdeaCentre Q180 has proved to be interesting.  Hey, Lenovo, your choice of a Radeon graphics chipset was a poor one.  I think.

64-bit Linux has always interested me.  The real UNIXes like HP-UX and AIX are 64-bit, rock-solid number-crunching beasts, so the prospect of running 64-bit UNIX* for free* on my everyday machine (without the cost of purchasing a Mac) has always appealed.  The trouble is, 64-bit Linux has a chequered past on the desktop, with showstopping issues around graphics driver support, the flash plugin, and support for scanning and printing, i.e. all the things that a 64-bit UNIX number-crunching server would never have to worry about.

Despite this, I figured things must have moved on a bit by now, especially with so many inexpensive 64-bit CPUs gracing the system boards of most modern machines, so I decided to give it another go.  The first thing to go was the 32-bit installation of Xubuntu 11.10 on my 11.6″ Dell Inspiron 11z laptop, a trusty servant and a faultless OS, in favour of trialling 64-bit Crunchbang Statler, and not the BPM (backported modules) build on Linux kernel 3, but the more stable, stoical 2.6 Linux kernel.  This is 64-bit desktop OS territory, so stability is important, and buggy, bleeding-edge software modules are not welcome here.  Not on my machine anyway.

The Lenovo IdeaCentre Q180 didn't come with an OS, an attractive proposition (not paying for an unwanted Microsoft licence) and one which helped seal the deal if I'm honest.  I installed 32-bit Xubuntu with all my usual post-install customisations, i.e. adding the Medibuntu repository, installing the recommended hardware drivers, a full apt-get update && apt-get upgrade and a reboot, and finally adding the Adblock Plus and DownThemAll plugins to Firefox and installing Ubuntu One and Dropbox to re-sync all my important stuff stored in the cloud.  After that, for me, apps are just apt-get installed on demand as and when I need them, such as the wonderfully convenient gscan2pdf for scanning receipts and saving them as a PDF in the cloud for safe keeping.  I'm not spending hours trying to think of all the software I need and installing it before I need it.  Life's too short.  Go do something else instead.  If I want to rip and re-encode a DVD to DivX, I'll just apt-get install dvdrip rar libdvdcss2 as and when I need to.  Not that I'd ever want to do that, of course.  I digress.

Once I'd verified that 32-bit Xubuntu ran OK on the hardware, I blew it away in favour of trialling the 64-bit version.  Meanwhile, my initial tests with Crunchbang 64 on the laptop were proving very successful indeed.  It's been on there a couple of weeks now, and I have no intention of replacing it anytime soon.  So a win for Crunchbang.  Yay.

I had to download and use UNetbootin to create a bootable live USB stick from the downloaded .iso image, since the startup disk creator packaged with the OS just didn't like booting on the Lenovo.  Not to worry, UNetbootin worked a treat.  Installation went without a hitch and hardware drivers were installed etc. as per the normal routine detailed above.  I was all set to feel that warm, comfortable "new vanilla OS" feeling experienced by a graffiti artist when they spot a white wall, or a surfer when they turn up at an empty break, when all of a sudden the user interface started to play up.  Frown time.

After some research I quickly realised there are issues with Radeon drivers and Linux full stop, let alone on 64-bit Linux, despite ATI's claim that their driver supports both 32-bit and 64-bit Linux.  It also claims to support Red Hat and SUSE if you look closely enough, which left me wondering about the "automatic" install on Ubuntu I'd just done.

I've tried a number of things, but the graphics card driver is definitely problematic.  So it's going to be Crunchbang 64, on the off-chance it's OK, with a post-install of XFCE for a slightly more user-friendly experience (seeing as how GNOME has gone right off the rails since 3.0).  Failing that, I'll be going back to the more tried and true 32-bit distributions in the hope that the graphics driver behaves itself better.  Reliability is king.  Having a 640-horsepower supercar is no good if it breaks down.  You're better off with a 320-horsepower Evo.


Linux Broadcom Wifi problems

I've seen a few issues lately with some of the more modern Linux distros when connecting to wireless networks with a Broadcom wifi adapter (usually one with built-in Bluetooth).

I couldn't fix it on Xubuntu 12.04 (slow transfer speeds), and now I think I know why, so I'll have to go back and check.

On Crunchbang Statler 64bit, this seems to have worked…

The two commands that actually did the magic were removing the b43 module from the kernel and then reloading it with pio enabled and qos disabled; check your kernel version and the model of your wi-fi card first.
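Something like this (the module options are exactly those just described):

sudo rmmod b43
sudo modprobe b43 pio=1 qos=0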

Now I can connect to the pub’s wifi and, well, blog this.  🙂

If it works for you too, you can make the changes permanent like this…

sudo touch /etc/modprobe.d/b43.conf 

echo "options b43 pio=1 qos=0" | sudo tee -a /etc/modprobe.d/b43.conf