Tuning an SSD powered Linux PC

So you’ve bought an SSD to give your everyday computing device a performance boost?  Well done.

The good news is that, if you’re running Linux, there’s a handful of things you can do to make the most of your new super-powered block storage device.  My results below speak for themselves.  The bad news is that, if you’re just a gadget consumer who has to have the latest and greatest, simply buying it, fitting it and reinstalling the OS (or cloning your previous drive) is not going to cut it.  It’s more common sense than out-and-out rocket science, but whatever your OS, you can use my guide for ideas on improving both the performance and, possibly, the longevity of your device.  Solid state block storage is relatively new to the consumer market, so its longevity is yet to be seen, but at least you can do your bit to reduce the number of writes going to the device and (one would think) extend its life.

I chose to buy two relatively small capacity Intel SSDs, connected each one to its own SATA controller on the system board and mounted / on one and /home on the other.  I don’t see the point in buying large capacity SSDs when it’s performance you’re after rather than huge capacity to store your documents, photos, mp3, movie and software collections on – that’s what relatively cheap 2TB USB HDDs and cloud storage providers like DropBox and Ubuntu One are for.  Oh, and buy two of those external HDDs too, because nobody wants to see 2TB of their data go irretrievably down the pan.

Incidentally, if you do lose data there is a nice previous blog entry on data forensics that will help you get it back. Search for forensics at the top or follow this link for that…

Disk Recovery and Forensics

Anyway, here’s the comparison of HDD performance to whet your appetite.

single hard disk in Lenovo IdeaCentre Q180

New dual SSDs in HP dc7900 SFF PC

Tuning your SSD powered system…

Make sure the partitions are aligned.  This means that when a block is written to the filesystem, far fewer boundaries are crossed on the SSD with each block written.

Much is written on the web about how to achieve this.  I found the easiest way was to create a small ext2 /boot partition at the front of one drive, swap at the front of the other, and create my big / and /home partitions at the end of the disks (I have two, remember) using the manual partitioning tool gparted during installation.  By doing this, when I divided my starting sector number (returned by fdisk -l) by 512, I found the number was perfectly divisible – which is indicative of properly aligned partitions.  Job done then.
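A quick way to sanity-check the alignment (a minimal sketch – /dev/sda and /dev/sdb are example device names, adjust for your system):

sudo fdisk -l /dev/sda /dev/sdb | grep '^/dev/'     -lists each partition with its starting sector
echo $(( 2048 % 512 ))     -divide the start sector as described above; 0 means it divides evenly, i.e. aligned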

For each SSD-mounted filesystem in /etc/fstab, add noatime and discard to the mount options, leaving errors=remount-ro or defaults on the end.

/dev/sda   /   ext4   noatime,discard,errors=remount-ro 0 1

Change the I/O scheduler to deadline.
Run the following for each SSD in your system:

echo deadline >/sys/block/sda/queue/scheduler

Make it do this each time you reboot.
As root, vi /etc/rc.local and add these lines above the exit 0 line at the end of the file:

echo deadline > /sys/block/sda/queue/scheduler
echo 1 > /sys/block/sda/queue/iosched/fifo_batch

GRUB Boot loader

vi /etc/default/grub    and change the following line…

GRUB_CMDLINE_LINUX_DEFAULT="elevator=deadline quiet splash"

sudo update-grub

Reduce how aggressively the system swaps.  A Linux system with 2GB or more of RAM will hardly ever need to swap.

echo 1 > /proc/sys/vm/swappiness

To make the change persistent, sudo vi /etc/sysctl.conf and set:

vm.swappiness=1
vm.vfs_cache_pressure=50
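To apply the new sysctl values immediately without rebooting:

sudo sysctl -p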

Move tmp areas to memory instead of the SSD.  You’ll lose the contents of these temporary filesystems between boots, but on a desktop that may not be important.
In your /etc/fstab, add the following:

tmpfs   /tmp       tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/spool tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/tmp   tmpfs   defaults,noatime,mode=1777   0  0
tmpfs   /var/log   tmpfs   defaults,noatime,mode=0755   0  0

Move the Firefox cache (you’ll lose this between boots).
In Firefox, type about:config into the address bar, right-click and create a new string value

browser.cache.disk.parent_directory

set it to /tmp

Boot from a live USB stick so the disks aren’t mounted and, as root, deactivate the journal on the ext4 partitions of your internal SSDs, e.g.

sudo tune2fs -O ^has_journal /dev/sda1
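You can confirm the journal has gone by listing the filesystem features again – has_journal should no longer appear (assuming /dev/sda1 as in the example above):

sudo tune2fs -l /dev/sda1 | grep -i features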

Add TRIM command to /etc/rc.local for each SSD i.e.

Above the line exit 0, add the following

fstrim -v /

fstrim -v /home     (only if your /home is mounted on a second SSD)

For computers that are always on, add a trim script as /etc/cron.daily/trim

#!/bin/sh

fstrim -v / && fstrim -v /home

chmod +x /etc/cron.daily/trim

BIOS Settings

Set SATA mode to AHCI.  It will probably be set to IDE.  You’ll need to hunt for this setting as it varies between BIOS types.

SSD Firmware

Use lshw to identify your SSD and download the latest firmware from the manufacturer.  For Intel SSDs, go here

https://downloadcenter.intel.com/confirm.aspx?httpDown=http://downloadmirror.intel.com/18363/eng/issdfut_2.0.10.iso&lang=eng&Dwnldid=18363

 

That’s it.  I’ll add other tips to this list as and when I think of them or see them on the net.  You could reboot using a live USB and delete the residual files left behind in the tmp directories that you’ll be mounting in RAM from here on, but that’s up to you.  If you do, DO NOT remove the directories themselves or the system won’t boot.  If you do remove them, fix it by booting from a live USB stick, mounting the / partition somewhere such as /ssd, and re-creating the directories you deleted in /ssd/var/tmp and /ssd/tmp.  Be aware though that /tmp and /var/tmp have special permissions set on them: chmod 777, followed by chmod +t to set the sticky bit, giving drwxrwxrwt (sticky bit set, with execution).
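For reference, re-creating them from the live USB session looks something like this (assuming / is mounted at /ssd as described above):

sudo mkdir -p /ssd/tmp /ssd/var/tmp
sudo chmod 1777 /ssd/tmp /ssd/var/tmp     -1777 is chmod 777 plus the sticky bit in one go (drwxrwxrwt)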


Networker Cheatsheet

Here is a handy cheatsheet for troubleshooting failing backups and recoveries using EMC’s NetWorker, all taken from real-world experience (and regularly updated).

If it’s helped you out of a pinch, and is worth a dollar, then please consider donating to help maintain this useful blog.


Is the backup server running?

nsrwatch -s backupserver        -Gives a console version of the NMC monitoring screen

Check the daemon.raw is being written to…

cp /nsr/logs/daemon.raw ~/copyofdaemon.raw

nsr_render_log -l ~/copyofdaemon.raw > ~/copyofdaemon.log

tail -10 ~/copyofdaemon.log

You may find mminfo and nsradmin commands are unsuccessful.  The media database may be unavailable and/or you may receive a “program not registered” error, which usually implies the NetWorker daemons/services are not running on the server/client.  This can also occur during busy times, such as when clone groups are running (even though this busyness is not reflected in the load averages on the backup server).

Client config.

Can you ping the client / resolve the hostname or telnet to 7937?

Are the static routes configured (if necessary)?

Can the client resolve the hostnames for the backup interfaces, and does it have connectivity to them?

Does the backup server appear in the /nsr/res/servers file?

Can you run a save -d3 -s /etc on the client?

From the backup server (CLI)…

nsradmin -p 390113 -s client

Note:  If the name field is incorrect according to nsradmin (this happens when machines are re-commissioned without being rebuilt), you need to stop nsrexecd, rename the /nsr/nsrladb folder to /nsr/nsrladb.old, restart nsrexecd, and most importantly, delete and recreate the client on the NetWorker backup server, before retrying a savegrp -vc client_name group_name

Also check that all interface names are in the servers file for all interfaces on all backup servers and storage nodes likely to back the client up.

Can you probe the client?

savegrp -pvc client groupname

savegrp -D2 -pc client groupname (more verbose)

Bulk import of clients

Instead of adding clients manually one at a time in the NMC, you can perform an initial bulk import.

nsradmin -i bulk-import-file

where the bulk-import-file contains many lines like this

create type: NSR Client;name:w2k8r2;comment:SOME COMMENT;aliases:w2k8r2,w2k8r2-b,w2k8r2.cyberfella.co.uk;browse policy:Six Weeks;retention policy:Six Weeks;group:zzmb-Realign-1;server network interface:backupsvrb1;storage nodes:storagenode1b1;

Use Excel to form a large CSV, then use Notepad++ to remove the commas.  Be aware there is a comma within the aliases field, so use an alternative character in Excel to represent it, then replace it with a comma once all the other commas have been removed from the CSV.
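As an alternative to the Excel/Notepad++ shuffle, a small awk sketch can generate the bulk-import lines directly.  This is only an illustration – clients.csv, its semicolon-separated columns (name;comment;aliases) and the policy/group values are assumptions you would adjust to your own environment:

awk -F';' '{ printf "create type: NSR Client;name:%s;comment:%s;aliases:%s;browse policy:Six Weeks;retention policy:Six Weeks;group:zzmb-Realign-1;\n", $1, $2, $3 }' clients.csv > bulk-import-file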

Add user to admin list on bu server

nsraddadmin -u "user=username,host=*"

where username is the username minus the domain name prefix (the domain prefix is not necessary).

Reset NMC Password (Windows)

The default administrator password is administrator.  If that doesn’t work, check to see that the GST service is started using a local system account (it is by default), then in Computer Management, Properties, Advanced Properties, create a System Environment Variable; GST_RESET_PW=1

Stop and start the GST Service and attempt to logon to the NMC using the default username and password pair above.

When done, set GST_RESET_PW=<null>

Starting a Backup / Group from the command line

On the backup server itself:  savegrp -D5 -G <group_name>

Ignore the index save sets if you are just testing a group by adding  -I

Just backing up the index savesets in a group: savegrp -O -G <group_name>

On a client: save -s <backup_server_backupnic_name> <path>

Reporting with mminfo

List names of all clients backed up over the last 2 weeks (list all clients)

mminfo -q "savetime>2 weeks ago" -r 'client' | sort | uniq

mminfo -q 'client=client-name, level=full' -r 'client,savetime,ssid,name,totalsize'

In a script with a variable, use double quotes so that the variable gets evaluated; to sort on the (American format) date column…

mminfo -q client=${clientname},level=full -r 'client,savetime,ssid,level,volume' | sort -k 2.7,2.10n -k 2.1,2.5n -k 2.4,2.5n

mminfo -ot -c client -q "savetime>2 weeks ago"

mminfo -r "ssid,name,totalsize,savetime(16),volume" -q "client=client_name,savetime >10/01/2012,savetime <10/16/2012"

List the last full backup ssids for subsequent use with the recover command (unix clients)

mminfo -q 'client=server1,level=full' -r 'client,savetime,ssid'

Is the client configured properly in the NMC? (see diagram above  for hints on what to check in what tabs)

How many files were backed up in each saveset?  (Useful for counting files on a NetApp, which is slow using the find command at host level.)

sudo mminfo -ot -q 'client=mynetappfiler,level=full,savetime<7 days ago' -r 'name,nfiles'

name                         nfiles

/my_big_volume          894084

You should probably make use of the ssflags option in the mminfo report too, which adds an extra column showing the status of the saveset as one or more of the characters CvrENiRPKIFk, with their meanings listed below.

C Continued, v valid, r purged, E eligible for recycling, N NDMP generated, i incomplete, R raw, P snapshot, K cover, I in progress, F finished, k checkpoint restart enabled.
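For example (client_name is a placeholder), ssflags is simply added as another report column:

mminfo -q "client=client_name" -r "client,name,savetime,ssflags"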

Check Client Index

nsrck -L7 clientname

Backing up Virtual Machines using Networker,VCentre and VADP

To back up virtual machine disk files on VMFS volumes at the VMware level (as opposed to individual file-level backups within each VM), NetWorker can interface with the vCenter servers to discover which VMs reside on the ESXi clusters they manage, and where those VMs live on the shared VMFS LUNs.  For this to work, the shared LUNs also need to be presented/visible to the VADP proxy (a Windows server with the NetWorker client and/or server running as a storage node) in the FC switch fabric zone config.

The communication occurs as follows:

The backup server starts the backup group containing the VADP clients.

The VADP proxy asks vCenter which physical ESXi host has the VM, and where its files reside on the shared storage LUNs.

The VADP proxy / NetWorker storage node then tells the ESXi host to maintain a snapshot of the VM while the VMDK files are locked for backup.

The VMDK files are written to the storage device (in my example, a Data Domain dedupe device).

When the backup is complete, the client index is updated on the backup server, the changes logged by the snapshot are applied to the now unlocked VMDK, and the snapshot is deleted on the ESXi host.

Configuring Networker for VADP Backups via a VADP Proxy Storage Node

The VADP Proxy is just a storage node with fibre connectivity to the SAN and access to the ESXi DataStore LUNs.

In NetWorker, right-click Virtualisation, Enable Auto Discovery

[screenshot: VADP-enable]

Complete the fields, but notice there is an Advanced tab.  This is to be completed as follows…  not necessarily like you’d expect…

[screenshot: vadp-advanced]

Note that the Command Host is the name of the VADP Proxy, NOT the name of the Virtual Center Server.

Finally, Run Auto Discovery.  A map of the infrastructure should build in the NetWorker GUI.

[screenshot: vadp-gui]

Ensure the vCenter, proxy and NetWorker servers all have network comms and can resolve each other’s names.

You should now be ready to configure a VADP client.

Configuring a VADP client (Checklist)

GENERAL TAB

[screenshot: vadp-client-general]

IDENTITY
    COMMENT: application_name – VADP
VIRTUALIZATION
    VIRTUAL CLIENT: (tick)
    PHYSICAL HOST: client_name
BACKUP
    DIRECTIVE: VCB DIRECTIVE
    SAVE SET: *FULL*
    SCHEDULE: Daily Full

APPS AND MODULES TAB

[screenshot: vadp-client-appsmods]

BACKUP
    BACKUP COMMAND: nsrvadp_save -D9
    APPLICATION INFORMATION:
        VADP_HYPERVISOR=fqdn_of_vcenter (hostname in caps)
        VADP_VM_NAME=hostname_of_vm (in caps)
        VADP_TRANSPORT_MODE=san
DEDUPLICATION
    Data Domain Backup
PROXY BACKUP
    VMWare: hostname_of_vadp_proxy:hostname_of_vcenter.fqdn(VADP)

GLOBALS 1 OF 2 TAB
    ALIASES:
        hostname
        hostname.fqdn
        hostname_backup
        hostname_backup.fqdn
        ip_front
        ip_back

GLOBALS 2 OF 2 TAB
    REMOTE ACCESS:
        user=svc_vvadpb,host=hostname_vadp_proxy
        user=SYSTEM,host=hostname_vadp_proxy
        *@*

    OWNER NOTIFICATION:
        /bin/mail -s "client completion : hostname_client" nwmonmail

Recovery using recover on the backup client

sudo recover -s backup_server_backup_interface_name

Once in recover, you can cd into any directory irrespective of permissions on the file system.

Redirected Client Recovery using the command line of the backup server.

Initiate the recover program on the backup server…
sudo recover -s busvr_interface -c client_name -iR -R client_name

or use…  -iN (No Overwrite / Discard)
-iY (Overwrite)

-iR (Rename ~ )

Using recover> console

Navigate around the index of recoverable files just like a UNIX filesystem

Recover>    ls    pwd    cd

Change Browsetime
Recover>    changetime yesterday
1 Nov 2012 11:30:00 PM GMT

Show versions of a folder or filename backed up
Recover>      versions     (defaults to current folder)
Recover>    versions myfile

Add a file to be recovered to the “list” of files to be recovered
Recover>    add
Recover>     add myfile

List the marked files in the “list” to be recovered
Recover>    list

Show the names of the volumes where the data resides
Recover>    volumes

Relocate recovered data to another folder
Recover>    relocate /nsr/tmp/myrecoveredfiles

Recover>  relocate "E:\\Recovered_Files"     (for Redirected Windows Client Recovery from Linux Svr)

View the folder where the recovered files will be recovered to
Recover>    destination

Start Recovery
Recover>    recover

SQL Server Recovery (database copy) on a SQL Cluster

First, RDC to the cluster name and run a command prompt as admin on the cluster name (not a cluster node).
nsrsqlrc -s <bkp-server-name> -d MSSQL:CopyOfMyDatabase -A <sql cluster name> -C MyDatabase_Data=R:\MSSQL10_50.MSSQLSERVER\MSSQL\Data\CopyOfMyDatabase.mdf,MyDatabase_log=R:\MSSQL_10_50\MSSQLSERVER\MSSQL\Data\CopyOfMyDatabase.ldf MSSQL:MyDatabase

Delete the NSR Peer Information for the client/storage node from the NetWorker Server, then delete the NSR Peer Information of the NetWorker Server on the client/storage node.

Please follow the steps given below, specifying the name of the client/storage node in place of client_name.

1. At the NetWorker server command line, go to the location /nsr/res

2. Type the commands:

nsradmin -p nsrexec
print type:nsr peer information; name:client_name
delete
y

3. At the client/storage node command line, go to the location /nsr/res

4. Type the commands:

nsradmin -p nsrexec
print type:nsr peer information
delete
y

VADP Recovery using command line

Prerequisites to a successful VADP restore are that the virtual machine be removed from the inventory in vCenter (right-click the VM, Remove from Inventory), and that the folder containing the virtual machine’s files in the VMware datastore be renamed or removed.  If the VM still exists in VMware or in the datastore, VADP will not recover it.

Log onto the backup server over ssh and obtain the save set ID for your VADP “FULLVM” backup.

mminfo -avot -q "name=FULLVM,level=full"

Make a note of the SSID for the vm/backup client (or copy it to the cut/paste buffer)

e.g. 1021210946

Log onto the VADP Proxy (which has SAN connectivity over fibre necessary to recover the files back to the datastore using the san VADP recover mode)

recover.exe -S 1021210946 -o VADP:host=VC_Svr;VADP:transmode=san

Note that if you want to recover a VM back to a different vCenter, datastore, ESX host and/or a different resource pool, you can do that from the recover command too, rather than waiting to do it using the vSphere client.  This can be used if your VM still exists in VMware and you don’t want to overwrite it.  You can additionally specify VADP:host=  VADP:datacenter=  VADP:resourcepool=  VADP:hostsystem= and VADP:datastore= fields in the recover command, separated by semicolons and no spaces.

I’ve found that whilst the minimal command above may work in some environments, others demand a far more detailed recover.exe command with all VADP parameters set before it’ll communicate with the vCenter.  A working example is shown below, with each VADP parameter separated onto a new line for readability – you’ll need to put it back into a single line, removing any spaces between the parameters (the assembled single-line version is shown after the list).

recover.exe -S 131958294 -o

VADP:host=vc.fqdn;

VADP:transmode=san;

VADP:datacenter=vmware-datacenter-name;

VADP:hostsystem=esxihost.fqdn;

VADP:displayname=VM_DISPLAYNAME;

VADP:datastore="config=VM_DataStore#Hard disk 2=VM_DataStore_LUN_Name#Hard disk 1=VM_DataStore_LUN_Name";

VADP:user=mydomain\vadp_user;

VADP:password=vadp_password
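Joined back into a single line (same placeholder values as the list above), the command looks like this:

recover.exe -S 131958294 -o VADP:host=vc.fqdn;VADP:transmode=san;VADP:datacenter=vmware-datacenter-name;VADP:hostsystem=esxihost.fqdn;VADP:displayname=VM_DISPLAYNAME;VADP:datastore="config=VM_DataStore#Hard disk 2=VM_DataStore_LUN_Name#Hard disk 1=VM_DataStore_LUN_Name";VADP:user=mydomain\vadp_user;VADP:password=vadp_password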

Creating new DataDomain Devices in Networker

In Networker Administrator App from NMC Console, Click Devices button at the top.
Right click Devices in the Left hand pane, New Device Wizard (shown)

Select Data Domain, Next, Next

 Use an existing data domain system
Choose a data domain system in the same physical location as your backup server!
Enter the Data Domain OST username and password

Browse and Select
Create a New Folder in sequence, e.g. D25, tick it.

Highlight the automatically generated Device Name, Copy to clipboard (CTRL-C), Next

Untick Configure Media Pools (label device afterwards using Paste from previous step), Next

Select Storage Node to correspond with device locality from “Use an existing storage node”, Next

Agree to the default SNMP info (unless reconfiguration for custom monitoring environment is required), Next

Configure, Finish

Select new device (unlabelled, Volume name blank), right click, Label

Paste Device Name in clipboard buffer (CTRL-V)
Select Pool to add the Device into, OK.

Slow backups of large amounts of data to DataDomain deduplication device

If you have ridiculously slow backups of large amounts of data, check in the NetWorker NMC to see the name of the storage node (Globals2 tab of the client configuration), then connect to the Data Domain and look under the Data Management, DD Boost screen for “Clients”, of which your storage node will be one.  Check how many CPUs and how much memory it has, and guess which one is the slow one.

Then SSH to the storage node and check which processes are consuming the most CPU and memory (example commands below).
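If the storage node is a Linux box, a few standard commands give the same picture the screenshots used to show (adjust for your platform):

nproc     -number of CPUs
free -m     -memory in MB
ps aux --sort=-%cpu | head     -processes consuming the most CPU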

In this example, despite dedicating a storage node to backing up a single large application’s data, the fact that it only has 4 CPUs and is scanning every file that DD Boost is attempting to deduplicate means that a huge bottleneck is introduced.  This is a typical situation whereby decommissioned equipment has been re-purposed.

Networker Server

SSH to the NetWorker server and issue the nsrwatch command.  It’s a command-line equivalent to connecting to the Enterprise app in the NMC and looking at the monitoring screen.  Useful if you can’t connect to the NMC.

Blank / Empty Monitoring Console

If your NMC is displaying a blank monitoring console, try this before restarting the NMC…

Tick or Un-tick and Re-tick Archive Requests.

[screenshot: monitoring-refresh]

Tape Jukebox Operations

ps -ef | grep nsrjb     -It may be necessary to kill off any pending nsrjb processes before new ones will work.

nsrjb -C | grep <volume>    -Identify the slot that contains the tape (volume)

nsrjb -w -S <slot>      -Withdraw the tape in slot <slot>

nsrjb -d       -Deposit all tapes in the cap/load port into empty slots in the jukebox/library.

Note:  If you are removing and replacing tapes, take note of which pools the removed tapes belong to and allocate the new blank tapes deposited into the library to the same pools, to avoid backups running out of tapes.

Exchange Backups

The application options of the backup client (an Exchange server in DAG1) would be as follows

NSR_SNAP_TYPE=vss

NSR_ALT_PATH=C:\temp

NSR_CHECK_JET_ERRORS=none

NSR_EXCH2010_BACKUP=passive

NSR_EXCH_CHECK=no

NSR_EXCH2010_DAG=GB-DAG1

NSR_EXCH_RETAIN_SNAPSHOTS=no

NSR_DEVICE_INTERFACE=DATA_DOMAIN

NSR_DIRECT_ACCESS=no

Adding a NAS filesystem to backup (using NDMP)

Some pre-reqs on the VNX need to be satisfied before NDMP backups will work.  This is explained here

General tab

[screenshot: general-tab]

The exported fs name can be determined by logging onto the VNX as nasadmin and issuing the following command

server_mountpoint server_2 -list

Apps and Modules tab

[screenshot: apps_modules_tab]

Application Options that have worked in testing NDMP Backups.

Leave Data Domain unticked in NetWorker 8.x and ensure you’ve selected a device pool other than Default, or NetWorker may just sit waiting for a tape while you’re wondering why NDMP backups aren’t starting!

HIST=y
UPDATE=y
DIRECT=y
DSA=y
SNAPSURE=y
#OPTIONS=NT
#NSR_DIRECT_ACCESS=NO
#NSR_DEVICE_INTERFACE=DATA_DOMAIN

Backup Command: nsrndmp_save -s backup_svr -c nas_name -M -T vbb -P storage_node_bu_interface (or omit -P if the backup server acts as the storage node).

To back up an NDMP client to a non-NDMP device, use the -M option.

The value for the NDMP backup type depends on the type of NDMP host. For example, NetApp, EMC, and Procom all support dump, so the value for the Backup Command attribute is:

nsrndmp_save -T dump

Globals 1 tab

[screenshot: globals1]

Globals2 tab

[screenshot: globals2]

List the full paths of VNX filesystems required for configuring an NDMP save client in NetWorker (run on the VNX via SSH)

server_mount server_2

e.g. /root_vdm_2/CYBERFELLA_Test_FS

Important:  If the filesystem being backed up contains more than 5 million files, set the timeout attribute to zero in the backup group’s properties.

Command line equivalent to the NMC’s Monitoring screen

nsrwatch

Command line equivalent to the NMC’s Alerts pane

printf "show pending\nprint type:nsr\n" | /usr/sbin/nsradmin -i-

Resetting Data Domain Devices

Running this in one go if you’ve not done it before is not advised.  Break it up into individual commands (separated here by pipes) and ensure the output is what you’d expect, then re-join commands accordingly so you’re certain you’re getting the result you want.  This worked in practice though.  It will only reset Read Only (.RO) devices so it won’t kill backups, but will potentially kill recoveries or clones if they are in progress.

nsr_render_log -lacedhmpty -S "1 hour ago" /nsr/logs/daemon.raw | grep -i critical | grep RO | awk '{print $10}' | while read eachline; do nsrmm | grep $eachline | cut -d, -f1 | awk '{print $7}'; done | while read eachdevice; do nsrmm -HH -v -y -f "${eachdevice}"; done

Identify OS of backup clients via CLI

The NMC will tell you what the client OS is, but it won’t elaborate and tell you which version, e.g. Solaris, but not Solaris 11; or Linux, but not Linux el6.  Also, as useful as the NMC is, it continually drives me mad how you can’t export the information on the screen to Excel.  (If someone figures this out, leave a comment below.)

So, here’s how I got what I wanted using the good ol’ CLI on the backup server.  Luckily for me the backup server is Linux.
Run the following command on the NetWorker server, logging the putty terminal output to a file:

nsradmin
. type: nsr client
show client OS type
show name
show os type
p

This should get you a list of client names and what OS they’re running according to Networker in your putty.log file.  Copy and paste the list into a new file called mylist.  Extract just the Solaris hosts…

grep -i -B1 solaris mylist | grep name | cut -d: -f2 | cut -d\; -f1 > mysolarislist

sed 's/^ *//' mysolarislist | grep -v \\-bkp > solarislist

You’ll now have a nice clean list of Solaris NetWorker client hostnames.  You can remove any backup interface names by using

grep -v b$

to remove all lines ending in b.

One liner…

grep -i -B1 solaris mylist | grep name | cut -d: -f2 | cut -d\; -f1 | sed 's/^ *//' | grep -v \\-bkp | grep -v b$ | sort | uniq > solarislist

Now this script will use that list of hostnames to ssh to them and retrieve more OS detail with the uname -a command.  Note that if SSH keys aren’t set up, you’ll need to enter your password each time a new SSH session is established.  This isn’t as arduous as it sounds – use PuTTY’s right-click to paste the password each time, reducing the effort to a single mouse click.
#!/bin/bash

cat solarislist | while read eachhost; do
echo "Processing ${eachhost}"
ssh -n -l cyberfella -o StrictHostKeyChecking=no ${eachhost} 'uname -a' >> solaris_os_ver 2>&1
done

This generates a file solaris_os_ver that you can just grep for ^SunOS and end up with a list of all the networker clients and the full details of the OS on them.

grep ^SunOS solaris_os_ver | awk '{print $1, $3, $2}'


Create filesystem on emc VNX

USEFUL CLI EXAMPLES for emc VNX NAS

Information

List all filesystems on the NAS:   nas_fs -list -all

Delete a filesystem:   nas_fs -delete -force

View info for all filesystems on the NAS:   nas_fs -info -all

View human friendly error message description:  nas_message -info 2216

List inodes:   server_df -inode   (the Data Mover name is obtained from the filesystem properties in Unisphere; nas_fs -info -all doesn’t display it).

Create new filesystem

nas_fs -name NFS_Cyberfella_Dest -type uxfs -create size=20G pool=storage=SINGLE -thin no -option slice=y,nbpi=4096,mover=-mount_option mountmover=,mountpoint=/NFS_Cyberfella_Dest,mountmode=rw,accesspolicy=UNIX

Note: It’s best to delete filesystems via Unisphere GUI.  Use CLI for troublesome deletes.

Export/Unexport Filesystem over NFS

server_export -P nfs -list -all

server_export -P nfs -unexport -perm


Backing up your ageing CD collection – efficiently.

Our CDs are getting a bit old now, and if you have a large collection, ripping them to your iTunes collection gets tedious quickly.  The fastest, most efficient way, as always, is to use the command line.  The Linux program abcde (“A Better CD Encoder”) is a fantastic, simple tool for the task.  Like many other Linux packages, it has dependencies.  The following line is an example of how to rip an audio CD and re-encode the wav files as 320kbps mp3 files written to your home directory.

abcde -o mp3:"-b 320" -a move,clean

The following script, which I’ve called mytunes.sh, will handle all dependencies if needed and run the above command so you don’t have to remember the syntax.  Don’t forget to chmod +x it to make it executable.

#!/bin/sh

# This script will turn your CD into a bunch of fully tagged mp3 files.  Just pop the CD in, and run ./mytunes.sh

# Software pre-req checks…
if [ ! -f /usr/bin/cdparanoia ]; then
    echo "Attempting to retrieve the cdparanoia cd ripping package..."
    sudo apt-get install cdparanoia
fi
if [ ! -f /usr/bin/lame ]; then
    echo "Attempting to retrieve the lame mp3 encoding package..."
    sudo apt-get install lame
fi
if [ ! -f /usr/bin/abcde ]; then
    echo "Attempting to retrieve abcde A Better CD Encoder package..."
    sudo apt-get install id3v2 cd-discid abcde
fi

#Yes, you read it right.  One line of actual code to do the meaty bit.
abcde -o mp3:"-b 320" -a move,clean

# SOFTWARE PRE-REQUISITES (handled by script if non-existent)
# cdparanoia     Takes the wavs off the CD
# lame         mp3 encoder
# abcde     A Better CD Encoder
# cd-discid     Uses the Disc ID to obtain CDDB information for mp3 files.
# id3v2     Command line id3 tag editor


Merging and Splitting avi’s

Everybody loves DIVX/XVID .avi files.  Here’s a couple of useful tips when dealing with them.  You may want to join together two halves or split a large video file into multiple, smaller files for easier handling between storage devices.

 

MERGING AVI FILES

install transcode (sudo apt-get install transcode)

avimerge -o merged.avi -i part1.avi part2.avi

It’s as simple as that.

 

SPLITTING AVI FILES

To split a file into two pieces, install mencoder (sudo apt-get install mencoder) and execute the following commands:

mencoder -endpos 01:00:00 -ovc copy -oac copy movie.avi -o first_half.avi

mencoder -ss 01:00:00 -oac copy -ovc copy movie.avi -o second_half.avi

Done!


List UIDs of failed files

If you’re copying data from an NFS device, the local root user of your NFS client will not have omnipotent access over the data, so if the permissions are set with no access for ‘everyone’, i.e. rw-rw---- or similar (ending in --- instead of r--), then even root will fail to copy some files.

To capture the outstanding files after the initial rsync run as root, you’ll need to determine the UID of the owner(s) of the failed files, create dummy users for those uids and perform subsequent rsync’s su’d to those dummy users.  You won’t get read access any other way.

The following shell script will take a look at the log file of failures generated by rsync -au /src/* /dest/ 2> rsynclog and list the UIDs of user accounts that have read access to the failed-to-copy data.  (Note: when using rsync, appending a * will effectively miss .hidden files.  Lose the * and use trailing slashes to capture all files, including hidden files and directories.)

Subsequent rsync operations can be run by each of these users in turn to catch the failed data.  This requires the users to be created on the system performing the copy, e.g. useradd -o -u<UID> -g0 -d/home/dummyuser -s/bin/bash dummyuser

This could also easily be incorporated into the script of course (a sketch of that follow-up step appears after the script below).

#!/usr/bin/bash

#Variables Section

    SRC="/source_dir"
    DEST="/destination_dir"
    LOGFILE="/tmp/rsynclog"
    RSYNCCOMMAND="/usr/local/bin/rsync -au ${SRC}/* ${DEST} 2> ${LOGFILE}"
    FAILEDDIRLOG="/tmp/faileddirectorieslog"
    FAILEDFILELOG="/tmp/failedfileslog"
    UIDLISTLOG="/tmp/uidlistlog"
    UNIQUEUIDS="/tmp/uniqueuids"

#Code Section

    #Create a secondary list of all the failed directories
    grep -i opendir ${LOGFILE} | grep -i failed | cut -d\" -f2 > ${FAILEDDIRLOG}

    #Create a secondary list of all the failed files
    grep -i "send_files failed" ${LOGFILE} | cut -d\" -f2 > ${FAILEDFILELOG}

    #You cannot determine the UID of the owner of a directory, but you can for a file

    #Remove any existing UID list log file prior to writing a new one
    if [ -f ${UIDLISTLOG} ]; then
        rm ${UIDLISTLOG}
    fi

    #Create a list of UIDs for failed file copies
    cat ${FAILEDFILELOG} | while read EACHFILE; do
        ls -al "${EACHFILE}" | awk '{print $3}' >> ${UIDLISTLOG}
    done

    #Sort and remove duplicates from the list
    cat ${UIDLISTLOG} | sort | uniq > ${UNIQUEUIDS}    

    cat ${UNIQUEUIDS}

exit

Don’t forget to chmod +x a script before executing it on a Linux/UNIX system.
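As a follow-on sketch (not part of the original script), the unique UID list could be fed straight into useradd and a per-user rsync.  This assumes the list contains numeric UIDs and re-uses the example paths from the variables section above:

cat /tmp/uniqueuids | while read EACHUID; do
    useradd -o -u "${EACHUID}" -g0 -d/home/dummyuser -s/bin/bash "dummy${EACHUID}"
    su - "dummy${EACHUID}" -c "/usr/local/bin/rsync -au /source_dir/ /destination_dir/"
done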


Counting number of files in a Linux/UNIX filesystem

cd to the starting directory, then to count how many files and folders exist beneath,

find . -depth | wc -l

although in practice find . | wc -l works just as well leaving off -depth.  Or to just count the number of files

find . -type f | wc -l

Note that on Linux, a better way to compare source and destination directories, might be to count the inodes used by either filesystem.

df -i

Exclude a hidden directory from the file count, e.g. the .snapshot directory on a NetApp filer

#find ./ -type f \( ! -name ".snapshot" -prune \) -print | wc -l   – Note: had real trouble with this!

New approach…  :o(

ls -al | grep ^d | awk '{print $9}' | grep -v "^\." | while read eachdirectory; do

     find "./${eachdirectory}" -depth | wc -l

done

Then add up the numbers at the end (or let awk do it for you, as in the sketch below).
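The same loop, piped into awk to do the adding up:

ls -al | grep ^d | awk '{print $9}' | grep -v "^\." | while read eachdirectory; do
     find "./${eachdirectory}" -depth | wc -l
done | awk '{total+=$1} END {print total}'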

Another way to count files in a large filesystem is to ask the backup software.  If you use emc Networker, the following example may prove useful.

sudo mminfo -ot -q 'client=mynas,level=full,savetime<7 days ago' -r 'name,nfiles'

name                         nfiles

/my-large-volume          894084


Customising your bash prompt and titlebar

It still surprises me how many servers I log on to that don’t have friendly prompts.  It strikes me as being downright dangerous in the right hands, let alone the wrong ones.

Solaris defaults to csh, HP-UX to ksh and Linux to bash.  Despite having “grown up” on Korn, I much prefer the intuitiveness of bash (command recall using arrow keys!) and would highly recommend all UNIX sysadmins install it.  One word of caution though – if you configure your user account to use bash as its default shell, then make sure you have some way into the system in the event that a customisation of your bash configuration goes awry.  HP-UX has a convenient backdoor by means of its Service Processor, but you may not be so lucky, subsequently shutting yourself out of your system.  Logging on as root may not be an option for you if local root logins or logging in as root over ssh are disabled (common practice in these times of heightened security).  Lastly, don’t change the root user’s default shell in /etc/passwd.  Stick with the unmodified OS defaults set by the vendor.

OK, with the pitfalls and warnings out of the way, let’s customise our bash prompt.  When bash is invoked (either by typing bash at the command prompt, or by your /etc/passwd entry for your user account), the /etc/bashrc file is read to set things up.  I’d not recommend modifying the /etc/bashrc file.  If you want to make customisations that apply only to yourself, then create a .bashrc file in your home directory and play in there (it’s sourced by the shell rather than executed, so it doesn’t need to be made executable).

Mine reads as follows

#!/bin/sh

PS1="\u@\h \w $ "

clear

MYNAM=`grep ${USER} /etc/passwd | cut -d: -f5 | awk '{print $1}'`

HOSTNAM=`hostname | cut -d. -f1`

echo "${MYNAM}, you are connected to ${HOSTNAM}"

It sets an informative PS1 variable so that the prompt displays my username ‘at’ hostname followed by the present working directory with the obligatory $ prompt on the end.  This is all I need I find, but there are other things you can add if you choose.

\a : an ASCII bell character (07)
\d : the date in “Weekday Month Date” format (e.g., “Tue May 26”)
\D{format} : the format is passed to strftime(3) and the result is inserted into the prompt string; an empty format results in a locale-specific time representation. The braces are required
\e : an ASCII escape character (033)
\h : the hostname up to the first ‘.’
\H : the hostname
\j : the number of jobs currently managed by the shell
\l : the basename of the shell’s terminal device name
\n : newline
\r : carriage return
\s : the name of the shell, the basename of $0 (the portion following the final slash)
\t : the current time in 24-hour HH:MM:SS format
\T : the current time in 12-hour HH:MM:SS format
\@ : the current time in 12-hour am/pm format
\A : the current time in 24-hour HH:MM format
\u : the username of the current user
\v : the version of bash (e.g., 2.00)
\V : the release of bash, version + patch level (e.g., 2.00.0)
\w : the current working directory, with $HOME abbreviated with a tilde
\W : the basename of the current working directory, with $HOME abbreviated with a tilde
\! : the history number of this command
\# : the command number of this command
\$ : if the effective UID is 0, a #, otherwise a $
\nnn : the character corresponding to the octal number nnn
\\ : a backslash
\[ : begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt
\] : end a sequence of non-printing characters

The remainder of my custom .bashrc creates a couple of additional variables MYNAM and HOSTNAM (don’t use names for variables that can be confused with commands e.g. hostname) which extract my first name from the /etc/passwd file and the hostname up to the first dot in the fqdn returned by the hostname command.  These variables are then used to construct a friendly welcome message when you log in.

I’ve kept it simple, but you can be as elaborate as you like.

More reading here on colours etc (can be useful to colour the root user prompt red for example)…

http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html
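For example, a minimal red prompt for the root user – the colour codes are standard ANSI escapes, and the \[ \] pairs mark them as non-printing so line editing isn’t confused:

PS1="\[\e[1;31m\]\u@\h \w # \[\e[0m\]"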

CUSTOMISING THE TITLEBAR TO DISPLAY THE CURRENTLY EXECUTING COMMAND

To have the terminal window display the currently running command in the titlebar (useful info during long scrolling outputs), add the following to the .bashrc file in your home directory.

if [ "$SHELL" = '/bin/bash' ]
then
    case $TERM in
         rxvt|*term)
            set -o functrace
            trap 'echo -ne "\e]0;$BASH_COMMAND\007"' DEBUG
            export PS1="\e]0;$TERM\007$PS1"
         ;;
    esac
fi



Matching an IP Address using sed

Regular expression for matching an IP address.

sed 's/\([0-9]\{1,3\}\.\)\{3\}[0-9]\{1,3\}/&HelloWorld/g' infile > outfile

will append the string HelloWorld directly after an IP address in a file containing IP addresses, e.g.

10.200.200.10 blah

with

10.200.200.10HelloWorld blah

This can be adjusted to replace IP addresses with a string, or used to inject some whitespace between an IP address and a string as required (see the example below).

It looks more complicated than it is, due to the many escape characters \ needed in the regular expression to tell sed that the brackets and braces are grouping and repetition operators rather than literal characters to be matched.
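For example, this variation on the same expression injects a space directly after each IP address instead of appending HelloWorld (& represents the matched address):

sed 's/\([0-9]\{1,3\}\.\)\{3\}[0-9]\{1,3\}/& /g' infile > outfile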
