Tuning an SSD-powered Linux PC

So you’ve bought an SSD to give your everyday computing device a performance boost?  Well done.

The good news is, if you’re running Linux, there’s a handful of things you can do to make the most of your new super-powered block storage device, and my results below speak for themselves.  The bad news is, if you’re just a gadget consumer who has to have the latest and greatest, simply buying it, fitting it and reinstalling the OS or cloning your previous drive is not going to cut it.  It’s more common sense than out-and-out rocket science, but whatever your OS, you can use my guide for ideas on how to improve both the performance and possibly the longevity of your device.  Solid state block storage is still relatively new to the consumer market, so how long these devices last remains to be seen.  At least you can do your bit to reduce the number of writes going to the device and (one would think) extend its life.

I chose to buy two relatively small capacity Intel SSDs, connected each one to its own SATA controller on the system board, and mounted / on one and /home on the other.  I don’t see the point in buying large capacity SSDs when it’s performance you’re after rather than huge capacity to store your documents, photos, MP3, movie and software collections on – that’s what relatively cheap 2TB USB HDDs and cloud storage providers like Dropbox and Ubuntu One are for.  Oh, and buy two of those external HDDs too, because nobody wants to see 2TB of their data go irretrievably down the pan.

Incidentally, if you do lose data, there is a previous blog entry on data forensics that will help you get it back.  Search for forensics at the top, or follow this link:

Disk Recovery and Forensics

Anyway, here’s a comparison of the old HDD performance against the new SSDs to whet your appetite.

single hard disk in Lenovo IdeaCentre Q180

New dual SSDs in HP dc7900 SFF PC

Tuning your SSD-powered system…

Make sure the partitions are aligned.  This means that when a block is written to the filesystem, it maps cleanly onto the SSD’s internal pages, so far fewer boundaries are crossed on the SSD with each block written.

Much is written on the web about how to achieve this.  I found the easiest way was to create a small ext2 /boot partition at the front of one drive, swap at the front of the other, and create my big / and /home partitions at the end of the disks (I have two, remember) using the manual partitioning tool gparted during installation.  By doing this, when I divided my starting sector number (returned by fdisk -l) by 512, I found the number was perfectly divisible – which is indicative of properly aligned partitions.  Job done then.
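As a quick sanity check, here’s a sketch of that arithmetic (device name assumed – substitute your own):

sudo fdisk -lu /dev/sda
# e.g. a partition starting at sector 2048: 2048 / 512 = 4, a whole number, so it’s aligned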

In /etc/fstab, for each SSD in your computer, prepend noatime and discard to the mount options, leaving errors=remount-ro or defaults on the end.

/dev/sda1   /   ext4   noatime,discard,errors=remount-ro 0 1

Change the I/O scheduler to deadline (the default is usually cfq).
Add the following line for each SSD in your system:

echo deadline >/sys/block/sda/queue/scheduler
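You can check it took effect – the scheduler in use is shown in square brackets:

cat /sys/block/sda/queue/scheduler
noop [deadline] cfq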

Make it do this each time you reboot.
As root, vi /etc/rc.local and add these lines above the exit 0 line at the end of the file:

echo deadline > /sys/block/sda/queue/scheduler
echo 1 > /sys/block/sda/queue/iosched/fifo_batch

GRUB Boot loader

As root, vi /etc/default/grub and change the following line…

GRUB_CMDLINE_LINUX_DEFAULT="elevator=deadline quiet splash"

Then apply the change:

sudo update-grub

Reduce how aggressively swap is used on the system.  A Linux system with 2GB or more of RAM will hardly ever swap.

echo 1 > /proc/sys/vm/swappiness

Then make it persistent – sudo vi /etc/sysctl.conf and set:

vm.swappiness=1
vm.vfs_cache_pressure=50
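To load the sysctl.conf values without a reboot:

sudo sysctl -p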

Move tmp areas to memory instead of the SSD.  You’ll lose the contents of these temporary filesystems between boots, but on a desktop that may not be important.
In your /etc/fstab, add the following:

tmpfs   /tmp       tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/spool tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/tmp   tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/log   tmpfs   defaults,noatime,mode=0755   0  0

Move the Firefox cache to /tmp (you’ll lose it between boots).
In Firefox, type about:config into the address bar, right-click the preference list, and create a new String preference named

browser.cache.disk.parent_directory

Set it to /tmp.

Boot from a live USB stick so the disks aren’t mounted, and as root, deactivate the journals on the ext4 partitions of your internal SSDs, e.g.

sudo tune2fs -O ^has_journal /dev/sda1
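To confirm the journal has gone, check the filesystem features – has_journal should no longer be listed:

sudo tune2fs -l /dev/sda1 | grep features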

Add a TRIM command to /etc/rc.local for each SSD, i.e. above the exit 0 line, add the following:

fstrim -v /

fstrim -v /home     (only if your /home is mounted on a second SSD)

For computers that are always on, add a trim script as /etc/cron.daily/trim:

#!/bin/sh

fstrim -v / && fstrim -v /home

chmod +x /etc/cron.daily/trim

BIOS Settings

Set the SATA mode to AHCI.  It will probably be set to IDE.  You’ll need to hunt for this setting as it varies between BIOS types.

SSD Firmware

Use lshw to identify your SSD and download the latest firmware from the manufacturer.  For Intel SSDs, go here

https://downloadcenter.intel.com/confirm.aspx?httpDown=http://downloadmirror.intel.com/18363/eng/issdfut_2.0.10.iso&lang=eng&Dwnldid=18363

 

That’s it.  I’ll add other tips to this list as and when I think of them or see them on the net.  You could reboot using a live USB stick and delete the residual files left behind in the tmp directories that you’ll be mounting in RAM from here on, but that’s up to you.  If you do, DO NOT remove the directories themselves, or the system won’t boot.  If you do remove them, fix it by booting from a live USB stick, mounting the / partition into, say, /ssd, and recreating the directories you deleted as /ssd/var/tmp and /ssd/tmp.  Be aware, though, that /tmp and /var/tmp have special permissions set on them: chmod 777 followed by chmod +t sets the sticky bit, giving drwxrwxrwt (equivalently, chmod 1777).
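For clarity, the repair from the live environment looks something like this (assuming / lives on /dev/sda1 – adjust to suit your layout):

sudo mkdir /ssd
sudo mount /dev/sda1 /ssd
sudo mkdir /ssd/tmp /ssd/var/tmp
sudo chmod 1777 /ssd/tmp /ssd/var/tmp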


Backing up your ageing CD collection – efficiently.

Our CDs are getting a bit old now, and if you have a large collection, ripping them into your iTunes collection gets tedious quickly.  The fastest, most efficient way, as always, is to use the command line.  The Linux program abcde (“A Better CD Encoder”) is a fantastic, simple tool for the task.  Like many other Linux packages, it has dependencies.  The following line is an example of how to rip an Audio CD and re-encode the WAV files to 320kbps MP3 files written to your home directory.

abcde -o mp3:"-b 320" -a move,clean

The following script, which I’ve called mytunes.sh, will handle all the dependencies if needed, and run the above command so you don’t have to remember the syntax.  Don’t forget to chmod +x it to make it executable.

#!/bin/sh

# This script will turn your CD into a bunch of fully tagged mp3 files.  Just pop the CD in, and run ./mytunes.sh

# Software pre-req checks…
if [ ! -f /usr/bin/cdparanoia ]; then
    echo "Attempting to retrieve the cdparanoia cd ripping package…"
    sudo apt-get install cdparanoia
fi
if [ ! -f /usr/bin/lame ]; then
    echo "Attempting to retrieve the lame mp3 encoding package…"
    sudo apt-get install lame
fi
if [ ! -f /usr/bin/abcde ]; then
    echo "Attempting to retrieve the abcde A Better CD Encoder package…"
    sudo apt-get install id3v2 cd-discid abcde
fi

#Yes, you read it right.  One line of actual code to do the meaty bit.
abcde -o mp3:"-b 320" -a move,clean

# SOFTWARE PRE-REQUISITES (handled by script if non-existent)
# cdparanoia    Takes the wavs off the CD
# lame          mp3 encoder
# abcde         A Better CD Encoder
# cd-discid     Uses the Disc ID to obtain CDDB information for mp3 files
# id3v2         Command-line id3 tag editor


Merging and Splitting AVIs

Everybody loves DivX/Xvid .avi files.  Here are a couple of useful tips for dealing with them.  You may want to join two halves together, or split a large video file into multiple smaller files for easier handling between storage devices.

 

MERGING AVI FILES

Install transcode (sudo apt-get install transcode), then:

avimerge -o merged.avi -i part1.avi part2.avi

It’s as simple as that.

 

SPLITTING AVI FILES

To split a file into two pieces at the one-hour mark, install mencoder (sudo apt-get install mencoder) and execute the following commands:

mencoder -endpos 01:00:00 -ovc copy -oac copy movie.avi -o first_half.avi

mencoder -ss 01:00:00 -oac copy -ovc copy movie.avi -o second_half.avi

Done!
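If you need more than two pieces, -ss and -endpos can be combined to cut out a middle chunk.  A sketch – but check man mencoder first, as -endpos used together with -ss is treated as a duration from the seek point in some versions:

mencoder -ss 01:00:00 -endpos 00:30:00 -ovc copy -oac copy movie.avi -o middle_part.avi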


Juju, Hadoop and OpenStack. Amazing.

Wow.  This is the coolest thing I’ve seen since vMotion.  Watch from 15:00 for 40 seconds as Mark Shuttleworth migrates an entire infrastructure-as-a-service stack from one public cloud on Amazon EC2 to another public cloud at HP with Juju.  Just amazing.

 


List UIDs of failed files

If you’re copying data from an NFS device, the local root user of your NFS client will not have omnipotent access over the data, so if the permissions are set with no access for everyone else, i.e. rw-rw---- or similar (ending in --- instead of r--), then even root will fail to copy some files.

To capture the outstanding files after the initial rsync run as root, you’ll need to determine the UIDs of the owners of the failed files, create dummy users for those UIDs, and perform subsequent rsyncs su’d to those dummy users.  You won’t get read access any other way.

The following shell script will take a look at the log file of failures generated by rsync -au /src/* /dest/ 2> rsynclog, and list the UIDs of user accounts that have read access to the failed-to-copy data.  (Note: when using rsync, appending a * will effectively miss .hidden files.  Lose the * and use trailing slashes to capture all files, including hidden files and directories.)

Subsequent rsync operations can then be run as each of these users in turn to catch the failed data.  This requires the users to be created on the system performing the copy, e.g. useradd -o -u <UID> -g0 -d/home/dummyuser -s/bin/bash dummyuser

This could also easily be incorporated into the script of course – a sketch follows.
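A sketch of that incorporation, using the uniqueuids list produced by the main script below and the same hypothetical source/destination paths – each dummy user re-runs the rsync to pick up the files it can read:

while read EACHUID; do
    useradd -o -u ${EACHUID} -g0 -d/home/dummyuser -s/bin/bash dummy${EACHUID}
    su - dummy${EACHUID} -c "rsync -au /source_dir/ /destination_dir/"
done < /tmp/uniqueuids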

#!/usr/bin/bash

#Variables Section

    SRC="/source_dir"
    DEST="/destination_dir"
    LOGFILE="/tmp/rsynclog"
    RSYNCCOMMAND="/usr/local/bin/rsync -au ${SRC}/* ${DEST} 2> ${LOGFILE}"
    FAILEDDIRLOG="/tmp/faileddirectorieslog"
    FAILEDFILELOG="/tmp/failedfileslog"
    UIDLISTLOG="/tmp/uidlistlog"
    UNIQUEUIDS="/tmp/uniqueuids"

#Code Section

    #Create a secondary list of all the failed directories
    grep -i opendir ${LOGFILE} | grep -i failed | cut -d\" -f2 > ${FAILEDDIRLOG}

    #Create a secondary list of all the failed files
    grep -i "send_files failed" ${LOGFILE} | cut -d\" -f2 > ${FAILEDFILELOG}

    #You cannot determine the UID of the owner of a directory, but you can for a file

    #Remove any existing UID list log file prior to writing a new one
    if [ -f ${UIDLISTLOG} ]; then
        rm ${UIDLISTLOG}
    fi

    #Create a list of UIDs for failed file copies (ls -ln gives the numeric UID in field 3)
    cat ${FAILEDFILELOG} | while read EACHFILE; do
        ls -ln "${EACHFILE}" | awk '{print $3}' >> ${UIDLISTLOG}
    done

    #Sort and remove duplicates from the list
    cat ${UIDLISTLOG} | sort | uniq > ${UNIQUEUIDS}

    cat ${UNIQUEUIDS}

exit

Don’t forget to chmod +x a script before executing it on a Linux/UNIX system.


Counting number of files in a Linux/UNIX filesystem

cd to the starting directory, then, to count how many files and folders exist beneath it:

find . -depth | wc -l

although in practice find . | wc -l works just as well, leaving off -depth.  Or, to count just the number of files:

find . -type f | wc -l

Note that on Linux, a better way to compare source and destination directories might be to count the inodes used by each filesystem:

df -i

Exclude a hidden directory from the file count, e.g. the .snapshot directory on a NetApp filer:

#find ./ -type f \( ! -name ".snapshot" -prune \) -print | wc -l     # Note: had real trouble with this!

New approach…  :o(

ls -al | grep ^d | awk '{print $9}' | grep -v "^\." | while read eachdirectory; do
     find "./${eachdirectory}" -depth | wc -l
done

Then add up the numbers at the end.
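For completeness, the standard prune idiom is worth another try too – it should skip the named directory and count everything else, though given the trouble above, test it on your filer first:

find . -name .snapshot -prune -o -type f -print | wc -l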

Another way to count files in a large filesystem is to ask the backup software.  If you use EMC NetWorker, the following example may prove useful.

sudo mminfo -ot -q 'client=mynas,level=full,savetime<7 days ago' -r 'name,nfiles'

name                nfiles
/my-large-volume    894084


Copying the contents of one filesystem to another.

Sometimes on older operating systems, rsync (the first choice for copying files from one filesystem to another) may not be available.  In such circumstances, you can use tar.  If it’s an initial copy of a large amount of data you’re doing, this may actually be 2-4 times faster thanks to the lack of rsync’s checksum calculations, although rsync would be faster for subsequent delta copies.

timex tar -cf - /src_dir | ( cd /dest_dir ; tar -xpf - )

Add a v to the tar -xpf command if you want to see a scrolling list of files as they are copied, but be aware that this will slow it down.  I prefer to leave it out and just periodically run ls -al /dest_dir in another terminal to check the files are being written correctly.  timex at the front of the command will show you how long it ran for once it completes (which may be useful to know).

With the lack of verbose output, if you need confirmation that the command is still running, use ps -fu user_name | grep timex, although the originating terminal should not have returned a command prompt unless you backgrounded the process with an & upon execution, or with CTRL-Z, jobs, bg job_id subsequently.  Note that backgrounding the process may hinder your collection of timings, so it is not recommended if you are timing the operation.

Another alternative would be to pipe the output of find . -depth into cpio -p, using cpio’s pass-through mode…

timex find . -depth | cpio -pamVd /destination_dir

Note that this command can appear to take a little while to start before printing a single dot to the screen per file copied (that’s the capital V verbose option, as opposed to the lowercase v option).

If you wish to copy data from one block storage device to another, it’d be faster to do it at block level rather than file level.  To do this, ensure the filesystems are unmounted, then use the dd command: dd if=/dev/src_device of=/dev/dest_device

Do not use dd on mounted filesystems.  You will corrupt the data.
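A sketch of the whole operation, with hypothetical device names – the larger block size speeds things up considerably over dd’s 512-byte default:

umount /dev/src_device
umount /dev/dest_device
dd if=/dev/src_device of=/dev/dest_device bs=1M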

Overall progress can be monitored throughout the long copy process with df -h in a separate command window.  Prepending the cpio command with timex will not yield any times once the command has completed – but cpio is faster than both tar and rsync for initial large copies of data.

To perform a subsequent catch-up copy of new or changed files, simultaneously deleting any files from the destination that no longer exist on the source for a true synchronisation of the two sides, much like a mirror synchronisation, use…

timex ./rsync -qazu --delete /src_dir/* /dest_dir

Note this will not include hidden files.  To do that, lose the * and add a trailing slash to the source fs instead.

Or, to catch up the new contents on the source side to the destination side without deleting any files on the destination that have been deleted on the source, use

rsync -azu --progress /NFS_Src/* /NFS_Dest

a = archive mode; equals -rlptgoD (recursive, links, permissions, times, group, owner and device files preserved)

z = compress file data during transfer (optional, but generally best practice)

u = update (skip files that are newer on the destination)

--progress in place of v (verbose) or q (quiet).  A touch faster and more meaningful than a scrolling list of files going up the screen.


FTP backup script

If you have a remote web server, then for a small fee your hosting company will back it up for you.  This is money for old rope.  If you run Linux at home, you can back it up yourself – just by transferring the contents to a local folder on your computer using a shell script that performs the FTP transfer, which can be fully automated by adding it to cron (crontab -e).

#!/bin/bash
HOST='ftp.mywebserver.co.uk' # change the server name/IP address accordingly
USER='myftpusername' # username - also change
PASSWD='myftpuserpassword' # password - also change
# /www on the server contains the files to be backed up;
# /webserverbackup is the local directory to back them up to.
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
bin
prompt off
cd /www
lcd /webserverbackup
mget *
bye
END_SCRIPT
exit 0

Don’t forget to change the username, password, FTP server name/IP address and the remote and local directories to suit your requirements.  And don’t forget to chmod +x the ftpbackup.sh script to make it executable.  Finally, use crontab -e to add a scheduled job to run this script automatically.  You can also extend it to create a readable log file or to warn you via email in the event of an error.
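For example, a crontab entry along these lines (the path and schedule are just placeholders) runs the backup at 02:30 every morning and keeps a log:

30 2 * * * /home/myuser/ftpbackup.sh >> /home/myuser/ftpbackup.log 2>&1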

 


Edit wbar dock and conky in crunchbang/openbox

Besides editing menu.xml to customise the menu, why not install wbar and edit /usr/share/wbar/dot.wbar to add convenient quick-launch icons to the wbar dock for your most commonly used apps?  It’s even simpler than editing the menu.xml file, especially if you use vi.
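Each dock entry in dot.wbar is just three lines – an icon, a command and a title.  A sketch (the icon path is an assumption; point it at any .png you like):

i: /usr/share/pixmaps/firefox.png
c: firefox
t: Firefox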

My desktop is quite nicely themed and as conky shows, is very light on resources.

conky – the monitor on the left-hand side of the screen – can be customised by editing .conkyrc in your home directory.  To install it, simply type sudo apt-get install conky, then get hacking.

To effect the changes, simply right-click on wbar, or restart the conky process with kill -HUP $(pidof conky).


Edit Openbox menus in Crunchbang Linux

Unlike some of the heavier, fully functional desktop environments typically provided by the top five on DistroWatch, the Openbox window manager used by Crunchbang will not always automatically add the names of newly installed programs to the menu used to subsequently invoke them.

Most folks who are not as far down the rabbit hole as I am understandably just want a desktop that works, but they should pause for a moment before dismissing the idea of Openbox and Crunchbang, for the following two reasons.

1. It makes full disk encryption (not just your home directory) available to you during installation, which is very reassuring should your laptop get stolen.

2. Each time I consider parting company with it and going back to a heavier distro, I find I can’t bring myself to do it because it does everything I need it to.  Plus it does it more efficiently and in as minimalist a way as my puny hardware resources could ever hope for, so why would I?

On top of the Linux kernel, you’ll already be running sufficient packages, put into place by the installation procedure, to provide a working desktop environment.  That environment handles a bunch of important stuff you won’t have thought about: removable devices such as USB sticks, encrypting the files that get written down to disk, searching for wireless LANs, and sending a DHCP request when you plug into a network in the hope of learning of some nearby DNS servers so that your web browser will work when you ask for google.com.  Depending on how lightweight your chosen distro is, though, it may not have much else.  Crunchbang is one of these.  It does the hard stuff up front, and leaves you with a pretty blank canvas on which to build and have fun.  For those of you who say you only need web and email, that’s nonsense – there’s a whole bunch of stuff needed at the application level for web and email to work properly, but rest assured Crunchbang already provides it, despite its blank, black appearance.

It’ll even keep on working when you find yourself needing to do real work. I have to successfully run my own company using just my laptop during the week when I’m away from my home and the rest of my infrastructure and also use it for entertainment so there’s really no better test than that. Reviews are great, but the proof of the pudding is in the eating. I want to get the work done, and that means I want a fast, super responsive interface that doesn’t mess about. My laptop isn’t for impressing my friends with, it’s key to my survival and my only source of free entertainment. It has to deliver and if it comes up short, I will find out quickly. I also like messing about with photographs so those extra system resources are appreciated.

Additional functionality comes in the form of freely available modules (programs and their dependent libraries), installed and removed at will using the Synaptic Package Manager, which downloads all the software you’ll ever need from known repositories as and when functionality is required or retired on your desktop – much like your iPhone or Android phone, only Crunchbang doesn’t carry the advertising or any of the bad stuff that leaves you wondering if your computer is actually free, or even your own.  Install it and you’ll see much blackness!  No childish Fisher-Price icons here to lure in paying consumers, just a blank, black canvas and a package manager.  That’s as simple as it gets.

BUT, as I started out saying, it won’t necessarily add programs to your desktop menu after they’re installed.  Before you let that become an issue and miss out on feeling like that kid in the 70s opening the box and smelling the plastic, read on.  It’s easy to edit the menu to add the programs you’ve just installed.

 

Settings, Openbox, Edit menu.xml (not Reconfigure as shown – that’s for afterwards).

The menu.xml file will open in the Geany text editor.  Anything between <item> and </item> is, well, an item.  So copy an existing block of code, paste it in somewhere appropriate according to what type of program it is (Media, Office, Graphics etc), then just modify the label and executable as required.  I added the xcalc calculator (shown below).
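For illustration, a minimal sketch of what the xcalc entry looks like (the label is your choice):

<item label="Calculator">
    <action name="Execute">
        <execute>xcalc</execute>
    </action>
</item>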

When you’re happy with your edit, save it, then Settings, Openbox, Reconfigure to reload the .xml file you just modified and see the new item in the menu.  Test it to make sure it works.

 
