Mount USB HDD by UUID in Linux

The danger with USB hard disk drives is that when you have more than one plugged into your workstation, the device name assigned to each by the operating system might not be consistent between reboots, i.e. /dev/sdb1 and /dev/sdc1 might swap places.  That’s a potential disaster if you rsync data from one to the other on a periodic basis.

If you’re permanently mounting USB hard disks, it’s much safer to mount according to the UUID of the filesystem instead of the device name assigned by the OS.

If you change to root using sudo su - and cd into /dev/disk, you’ll see that there are multiple links in there, organised into different folders.  The UUIDs are listed in /dev/disk/by-uuid, where each link maps a UUID to its device name.

You can see which device name is mounted where using df -h.  Then use the output of ls -al /dev/disk/by-uuid to correlate UUID to filesystem mount.  There are probably other ways to match filesystem to UUID, but this is quick and easy enough to do.

Note that I’ve also taken the liberty of piping the commands through grep to reduce the output to just what I want to know, i.e. the UUIDs of the devices named sda1, sda2, sdb1 etc.

Once you’re confident you know which UUID belongs to which disk, you can permanently mount the disk or disks that are permanent fixtures by creating a mount point in the filesystem and adding a line to /etc/fstab.

Finally, mount -a will pick up the UUID and mount the filesystem onto the mount point.
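The whole procedure can be sketched as below.  The UUID, mount point and filesystem type are example values of my own invention, so substitute whatever blkid reports for your disk:

```shell
# Find the UUID of the partition (either command shows it)
sudo blkid /dev/sdb1
ls -l /dev/disk/by-uuid | grep sdb1

# Create a mount point for the disk
sudo mkdir -p /mnt/usbdisk

# Example /etc/fstab entry -- 'nofail' stops the boot hanging if the
# disk happens to be unplugged:
# UUID=0a3f5e02-5c8d-4b1e-9f10-123456789abc  /mnt/usbdisk  ext4  defaults,nofail  0  2

# Mount everything listed in fstab, including the new entry
sudo mount -a
```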


Missing sidHistory attributes on migrated accounts

Log onto a server and open a command prompt as Administrator.

Issue the following dsquery to create a four-column text file of all user names, logon names, primary object SIDs and, if applicable, sidHistory SIDs.  Then open this file in Excel and AutoFilter the sidHistory column to show all blanks.  This is the list of accounts that have NOT been subject to an inter-domain user account migration.

dsquery * "OU=Groups,OU=MigratedGroups,OU=Cromford,OU=UK,OU=DEV,OU=VMFARM,DC=cyberfella,DC=co,DC=uk" -filter "(&(objectClass=User))" -attr samAccountName cn ObjectSID sidHistory -limit 20000 > missing-sidhistories.txt



Manually Upgrading WordPress

  1. Get the latest WordPress tar.gz file.
  2. Unpack it on your computer.
  3. Deactivate plugins on your WordPress site.
  4. Using FileZilla, rename the old wp-includes and wp-admin directories on your web host.
  5. Using FileZilla, upload the new wp-includes and wp-admin directories to your web host, in place of the previously renamed directories.
  6. Upload the individual files from the new wp-content folder to your existing wp-content folder, overwriting existing files. Do NOT delete your existing wp-content folder. Do NOT delete any files or folders in your existing wp-content directory (except for the one being overwritten by new files).
  7. Upload all new loose files from the root directory of the new version to your existing WordPress root directory.
  8. Reconnect to the admin page.  If a database update is required, it’ll notify you automatically.
  9. Reactivate all plugins.
  10. If there are any problems, restore the Easyspace backups from the control panel.
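For readers more comfortable at a shell prompt, the core of steps 4 to 6 can be sketched as below.  This is a simulation under /tmp with made-up paths and file names, not a real WordPress install; the point is that the core directories are swapped wholesale, while wp-content is only overlaid, never deleted:

```shell
set -e
SITE=/tmp/wp-site; NEW=/tmp/wp-new   # stand-ins for web host and unpacked tarball

# Fake a minimal existing site and a minimal new release
mkdir -p "$SITE/wp-includes" "$SITE/wp-admin" "$SITE/wp-content/uploads"
mkdir -p "$NEW/wp-includes"  "$NEW/wp-admin"  "$NEW/wp-content"
echo old  > "$SITE/wp-includes/version.php"
echo new  > "$NEW/wp-includes/version.php"
echo keep > "$SITE/wp-content/uploads/media.txt"
echo idx  > "$NEW/wp-content/index.php"

# Steps 4-5: rename the old core directories, then put the new ones in place
mv "$SITE/wp-includes" "$SITE/wp-includes.old"
mv "$SITE/wp-admin"    "$SITE/wp-admin.old"
cp -r "$NEW/wp-includes" "$NEW/wp-admin" "$SITE/"

# Step 6: overlay new wp-content files WITHOUT deleting the existing folder
cp -r "$NEW/wp-content/." "$SITE/wp-content/"

cat "$SITE/wp-includes/version.php"   # core is now the new version
ls "$SITE/wp-content/uploads"         # existing content survives
```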

Deleting inaccessible data such as users’ home directories

Users’ home directories are often hardened such that even Domain Administrators have problems migrating them and subsequently deleting them.  A way to deal with that is already documented here, so this post is really just about the subsequent cleanup of the stubborn source data.

You can sit in Windows Explorer taking ownership and rattling the new permissions down each user’s tree if you like, but it’s a laborious process when you have 2000 users.  It doesn’t always work out 100% successful either.

This is my way of clearing out all users’ home directories that begin with the characters u5, for example.  You can adapt or scale it up to suit your own requirements easily and save yourself a lot of time and effort.

First, make a list of the directories you want to delete.  Whether you have access to them or not is irrelevant at this stage.

dir /ad /b | findstr ^u5 > mylist.txt

dir /ad /b | findstr ^U5 >> mylist.txt

Create an empty folder if you don’t have one already.

mkdir empty

Now mirror that empty folder over the top of the users in the list, exploiting the operating system’s Backup right (/B) in robocopy, which conveniently bypasses NTFS security.

for /f %F in (mylist.txt) DO robocopy empty %F /MIR /B /TIMFIX

This will leave empty folders behind, but the security on them will have been overwritten with that of your empty folder, giving you permission to delete them.

for /f %F in (mylist.txt) DO rmdir %F


Note: Use /TIMFIX with /B to correct the non-copying of datestamps on files, which otherwise results in 02/01/1980 datestamps on all files copied with /B Backup rights.
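As an aside, the same list-then-loop pattern translates directly to a Unix shell.  The sketch below simulates it under /tmp with made-up u5* directory names, minus the permissions trick, which is robocopy-specific:

```shell
# Create some throwaway "home directories" to work on
mkdir -p /tmp/homes/u5001 /tmp/homes/u5002 /tmp/homes/x9000
cd /tmp/homes

# Equivalent of: dir /ad /b | findstr ^u5 > mylist.txt
ls -d u5* > mylist.txt

# Equivalent of: for /f %F in (mylist.txt) DO rmdir %F
while read -r d; do
  rmdir "$d"
done < mylist.txt

ls    # only x9000 and mylist.txt remain
```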


Robocopy folders with ampersands in the name

Don’t use & in file and folder names.

With that little pearl of wisdom out of the way, what about when your users have used ampersand characters in their folder names and you’re trying to robocopy the data to its new home, only to have the copy fail?

Try this…

SET "source=dogs & cats"

SET "destination=dogs & cats"

or if you can get away with it without breaking links…

SET "destination=dogs and cats"

robocopy.exe "%source%" "%destination%" /MIR

For more robocopy wisdom, check this post here
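The same quoting discipline applies in a POSIX shell, where an unquoted & would background the command instead of being taken literally.  A quick illustration under /tmp, with made-up paths:

```shell
src="/tmp/dogs & cats"
dst="/tmp/dogs and cats"

mkdir -p "$src"
echo hello > "$src/file.txt"

# The double quotes keep the '&' literal; without them the shell would
# treat everything after '&' as a separate background command.
cp -r "$src" "$dst"

cat "$dst/file.txt"
```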

In real-world practice, I have found that robocopy is woefully unreliable when it comes to copying permissions (using the /e /sec /xf * switches).  I recommend using emcopy to copy folder structures and their NTFS permissions.  Similar to the robocopy commands above, these emcopy commands worked almost* perfectly for me:

SET "source=dogs & cats"

SET "destination=dogs and cats"

emcopy "%source%" "%destination%" /secfix /xf * /lev:1

*Note how I’ve changed the destination folder to not include the ampersand character.  In practice, permissions were not copied to folders with ampersands in the name using robocopy or emcopy – in fact robocopy didn’t copy permissions at all!

If you’re copying a subset of data from a bigger source set of data, then never use /MIR or you will run a high risk of losing data.   Oh yes you will.  Use the above emcopy commands one folder at a time to get your destination folder structure in place, before finally syncing the subfolder you want into the new destination.  This saves a potentially troublesome cleanup exercise later, deleting superfluous data, e.g.


SET "source=dogs & cats"

SET "destination=dogs and cats"

emcopy "%source%" "%destination%" /secfix /xf * /lev:1

Followed by…

SET "source=dogs & cats\spaniels"

SET "destination=dogs and cats\spaniels"

emcopy "%source%" "%destination%" /secfix /xf * /lev:1

Followed by…

SET "source=dogs & cats\spaniels\springer"

SET "destination=dogs and cats\spaniels\springer"

emcopy "%source%" "%destination%" /secfix /xf * /lev:1

and finally sync your file data into the new secured folder structure…

SET "source=dogs & cats\spaniels\springer"

SET "destination=dogs and cats\spaniels\springer"

Synchronise all file data using your preferred robocopy or emcopy command here.


Manually set IP, Subnet and Gateway addresses on VNX Control Station

How to change the Control Station IP Address and Subnet Mask

Log in to the Control Station as root.

Change the IP address and network mask by using this command syntax:

Note: /sbin/ifconfig -a revealed eth3 to be my cs0 interface.


# /sbin/ifconfig eth3 <ipaddr> netmask <netmask>

e.g. /sbin/ifconfig eth3 netmask


This changes the immediate configuration, but does not persist across restarts.

Edit the network scripts file, /etc/sysconfig/network-scripts/ifcfg-eth3, by using a text editor (that means vi).
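The file contents that originally appeared here are missing, but a typical RHEL-style ifcfg-eth3 looks like the fragment below.  The addresses are placeholders of my own, so use your own values for IPADDR and NETMASK:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth3 (example values)
DEVICE=eth3
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.100
NETMASK=255.255.255.0
```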



Edit the local hosts file, /etc/hosts

Look for lines with the old IP address.

Replace the old IP address with your new IP address.

Save the file and exit.

If you are changing the Control Station IP address, but remaining on the same network, then the SP IP addresses for an integrated model need not be modified. However, if you are changing to a different network, the SP IP addresses must be modified to be on the same physical network as the Control Station for the Integrated model. Use the clariion_mgmt -modify -network command to update the IP addresses on the SP, as it will also update the files and Celerra database with the modified IP addresses.

How to change the Control Station default gateway

Log in to the Control Station as root using SSH. Add a default route by typing:


# /sbin/route add default gw


This changes the immediate configuration, but does not persist across restarts.

Edit the network configuration file, /etc/sysconfig/network-scripts/ifcfg-eth3, by using a text editor.

Add the new gateway IP address to the relevant entries.
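The example entries that originally appeared here are missing; on a RHEL-style system the default gateway can live either in the interface file or in /etc/sysconfig/network.  A hedged example, with a placeholder address:

```shell
# In /etc/sysconfig/network-scripts/ifcfg-eth3 or /etc/sysconfig/network
GATEWAY=192.168.1.1
```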


Save the file and exit.


Download the full Firefox stand-alone installer

There’s nothing more frustrating than downloading an installer that assumes you’re going to have internet access on the machine you subsequently intend to run it on (a so-called stub installer).

For example, downloading Firefox so that you can get to your enterprise storage array’s Java-based admin interface without the agony of Internet Explorer throwing its toys out of the pram over the certificate, while the settings are locked down by this IE policy, that policy and the other policy, all of which exist to make the environment so much more “secure” but actually just don’t allow anything, anywhere, ever.  It’s secure!  It’s been signed off as being suitably unusable to prevent exposing ourselves to any kind of imaginary threat!  Aren’t we clever?  No.  Rant over.

I’ve probably digressed, I can’t tell.  I’m too angry.  And you probably are too, if you’ve ended up here.  Installers that assume an internet connection are completely useless in the enterprise environment (best read in the voice of Clarkson).

What’s even more frustrating is that the stub installer is the only apparent option, judging by Mozilla’s website.  Well, it isn’t the only option: you can still download the full-fat, stand-alone installer from their FTP site.  But FTP is blocked by your firewall!

No bother: just replace ftp:// with http:// at the beginning of the URL, or even better just click here for the 64-bit version (or here for the 32-bit version).



Enable NFSv4 on VNX

To enable NFSv4 on your up-to-date (post VNX OE for File v7.1) VNX Unified storage system and configure a Data Mover to mount a filesystem for NFSv4 access with a MIXED access policy, the following steps serve as a concise guide.  NFSv4 cannot be configured via Unisphere.

Log onto the Control Station as the nasadmin user via SSH using PuTTY.

START NFSv4 Server on VNX
server_nfs server_2 -v4 -service -start

SET DOMAIN NAME to nfsv4.domain (change as required)
server_param server_2 -facility nfsv4 -modify domain -value nfsv4.domain

server_param server_2 -facility nfsv4 -info domain

server_param server_2 -facility nfsv4 -list

MOUNT NFS_TEST_2 on server_2 for NFSv4 access
server_mount server_2 -option accesspolicy=MIXED NFS_TEST_2 /NFS_TEST_2

TRANSLATE existing, mounted NFS filesystem from NATIVE access policy to MIXED access policy
nas_fs -translate NFS_TEST_2 -access_policy start -to MIXED -from NATIVE

server_nfs server_2 -v4 -client -list

NFSv4 requires UNICODE enabled on DM. Check…
server_cifs server_2 | grep I18N
I18N mode = UNICODE

server_nfs server_2 -v4

It’s highly likely that if you require NFSv4, then you’ll also need to authenticate access using a UNIX-based Kerberos KDC.  The following notes cover the configuration steps involved.  Please note that this section below is still a work in progress and you should refer to the official EMC documentation for a complete set of instructions with examples.

SECURE NFS (using UNIX Kerberos Authentication)

server_kerberos server_2 -add realm=<realm-name>,kdc=<fqdn_kdc_name>,kadmin=<kadmin_server>,domain=<domain_name>,defaultrealm
Note: realm, kdc, kadmin and domain should all be entered as FQDNs.

server_kerberos server_2 -list

server_nfs <datamovername> -secnfs
Note: server_2 is already set during VNX installation.

server_nfs <newdatamovername> -secnfs -principal -delete nfs@server_2
Note: this is only required if you change the default Data Mover hostname from server_2 to e.g. Ingbe245.
server_nfs <newdatamovername> -secnfs -principal -create nfs@<server>
Note: <server> is the Data Mover’s name in the realm, and needs to be entered twice: once with the short name, e.g. Ingbe245, and once more with the FQDN.

server_nfs server_2 -secnfs -service -stop
server_nfs Ingbe245 -secnfs -service -start

Copy the /.etc/krb5.keytab file (if it exists) to the Kerberos KDC.

Note: the kadmin steps are performed on the Kerberos KDC, not the VNX.
kadmin: addprinc -randkey nfs/Ingbe245
kadmin: addprinc -randkey nfs/Ingbe245.fqdn.local

kadmin: listprincs

kadmin: ktadd -k <keytab_file_path> <name>
<keytab_file_path> = location of the keytab file
<name> = name of the previously created service principal, e.g. nfs/Ingbe245

Copy the krb5.keytab file from Kerberos KDC to the Data Mover by using FTP and the server_file command.
Note: EMC Common Anti-Virus Agent (CAVA) is also configured using the server_file command, to place and displace the viruschecker.conf file.  There are notes on that here, but to save you the trouble, the command for your convenience is…

server_file server_2 -get krb5.keytab krb5.keytab

server_file server_2 -put krb5.keytab krb5.keytab
server_kerberos Ingbe245 -keytab

server_nfs <datamovername> -secnfs -mapper -info

server_nfs <datamover_name> -secnfs -mapper -set -source auto

server_nfs server_2 -v4 -client -list


Accidentally formatted hard disk recovery

So you had more than one hard disk plugged into your nice new Humax FreeSat set top box, one containing all your existing downloaded media and the other, an empty one intended for recording.

Upon formatting the drive intended for recording, you subsequently discover that your other FAT32 disk, with all your media on it, now has a nice new, empty NTFS partition on it too.  A real WTF moment that absolutely is not your fault.  It happens to the best of us.  It’s just happened to me.

It’s in these moments that having a can-do attitude is of the utmost importance.  Congratulations are in order, because Life has just presented a real challenge for you to overcome.

The likelihood is 95% of your friends will feign sympathy and tell you…

“there’s nothing you can do if you’ve re-formatted the drive”

the largely self-appointed “tech experts” (on the basis they have all-the-gear) will likely tell you…

“you’ve reformatted your FAT32 partition with NTFS so you’ve lost everything.”

…like you’d have stood a chance if you’d gone over it with a like-for-like file system format and they could have got all your data back for you (yeah, right).

Well, if you’ve been sensible enough to not make any writes to the drive, then I can tell you that you absolutely can recover all your data.  In fact, there’s no data to recover as it’s all still on the drive, so “recovery” will be instantaneous.   I’m here to tell you…

You need a computer running Linux and you need to install the testdisk package.

In a console window, run sudo testdisk

You may need to unmount the disk first using gparted but leave it plugged in.

In testdisk, you need to list the partitions, and at this point it’ll display the new NTFS partition and nothing else.  There is an option to do a “deeper scan”, which walks the cylinders looking for any evidence that a previous file system was there.  If you’ve not done any writes to the drive since it got reformatted with NTFS, then it’ll instantly find details of the previous FAT32 partition.  You can cancel the scan at this point, as it’s found all it needs.

What you need to do now is tell testdisk that this recovered FAT32 layout is the one you want on the primary partition, not the current NTFS one.  You can select it, and even list the files on it (P).

This can in some ways be the most frustrating part, as you can see that the files and the index are there, but your file manager will still show an empty NTFS disk.  Now you need to switch the NTFS-structured disk back over to FAT32 by writing the previously discovered FAT32 structure over the top of the primary partition.

You’ll receive a message along the lines of needing a reboot.  Just quit testdisk, then remove and re-add the hard disk (if it’s USB) or reboot (if it’s an internal drive), and re-run testdisk afterwards to see that the NTFS partition structure has been replaced with the FAT32 one that existed before.

Like before, you can list the files on the partition using testdisk.  Seeing as this partition is now the current live one, the files should also appear in your file manager.  In my case, I’m using the Nemo file manager on Linux Mint 18.1 Serena, Cinnamon 3.0 edition (and I can highly recommend it).
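Condensed into commands, the session looks something like this.  The package manager and the /dev/sdb device name are assumptions for a Debian-based system like Mint, and the menu steps are comments because testdisk is interactive:

```shell
sudo apt-get install testdisk   # package name on Debian/Ubuntu/Mint
sudo umount /dev/sdb1           # leave the disk attached, just unmounted
sudo testdisk /dev/sdb

# Inside testdisk's menus:
#   Analyse -> Quick Search, then Deeper Search if nothing is found
#   highlight the recovered FAT32 partition, press P to list its files
#   choose Write to put the old partition structure back, then quit

sudo testdisk /dev/sdb          # re-run after re-attaching, to verify
```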

So there you go.  There are a few lessons to be learned here, for all of us, but like many things in life, things are not always as they seem.

Your computer’s file manager does not show you what data is on the disk.  It is merely reading the contents of the current known-good file allocation table, from an address at the front of the disk where the partition is known to begin.  Such file allocation tables will exist all over the disk from previous lives in between re-formats.  When you re-format a disk, you’re just giving the file allocation table a fresh start at a new address, but the old one will still exist somewhere, and in multiple places, on the disk.  The file allocation table is the index of disk contents that is read by the file manager in order to give you a representation of what it believes to be retrievable data on your disk.  The data itself can then be found starting at the addresses contained in that index for each file.  The data is still there on parts of the disk that have not yet been written over with replacement blocks of data, hence if you’ve not performed any writes, all your data is still there.

So if you want your data to be truly irrecoverable, then you must perform multiple random writes over the top of all cylinders using a tool like DBAN, which will take hours to complete, or better, take an angle grinder to it.  Just remember to take a backup first.

The real proof that the data is indeed readable once again is to open and play a movie file, so here’s a little screenie of VLC Media Player playing Penelope Spheeris’ 1981 punk rock documentary “The Decline Of Western Civilization”.

Coincidentally, 1981 is quite a significant year for me.  I was 6 years old and my parents had just bought me my first computer, a BBC Model B microcomputer that had just been released.  I began teaching myself BASIC right away.