Configure Solaris 11 iSCSI Initiator

With my iSCSI target configured on FreeNAS and my Solaris 11 Global Zone installed, it’s time to configure the iSCSI initiator to discover the target using the second NIC in my Solaris 11 host (or “Global Zone”).

In my lab environment, I have created one big volume called “ONEBIGVOLUME” on my FreeNAS, consisting of 4 x 7500 RPM SATA Disks.  Within this single volume, I have created 5 x 250GB ZVols from which I’ve then created 5 x iSCSI device extents for my Solaris 11 host to discover.  I’ll then create a single ZPool on my Solaris host, using these 5 iSCSI extents on FreeNAS as if they were local disks.

First I need to configure the 2nd NIC that I intend to use for iSCSI traffic on my network.  I’ll refer to my own post here to assist me in configuring that 2nd NIC.

The screen shot below shows the process end-to-end.

The Oracle document here describes the process of enabling iSCSI.

I noticed that the subnet mask was incorrect on my 2nd NIC.  My fault for not specifying it; the OS assumed an 8-bit instead of a 24-bit mask for my network.  I’ve included the steps taken to fix that below.

Note the commands highlighted below that were not accepted by the OS, and how I ultimately fixed it.
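As a minimal sketch of that fix, assuming the iSCSI NIC shows up as net1 and the address is 192.168.1.50 (both hypothetical, substitute your own):

```shell
# Hypothetical interface and address - adjust for your environment.
# Delete the address that was created with the assumed 8-bit mask...
ipadm delete-addr net1/v4
# ...and recreate it with an explicit 24-bit prefix.
ipadm create-addr -T static -a 192.168.1.50/24 net1/v4
# Confirm the mask is now /24
ipadm show-addr net1/v4
```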

Enable iSCSI Initiator

svcadm enable network/iscsi/initiator

From my FreeNAS, Services, iSCSI section, I can see that my base name is…

…and my target is called…

Dynamic Discovery

Here, I use dynamic discovery to find all disks on the FreeNAS iSCSI target, using just the IP Address.

This is probably the simplest way of discovering the disks, but it’s also risky, as the list may include a disk that’s in use by another system (in my case, I have a VMWare DataStore too).

iscsiadm add discovery-address

iscsiadm modify discovery --sendtargets enable

devfsadm -i iscsi
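Putting those three steps together, with a hypothetical FreeNAS address of 192.168.1.100 (substitute your own):

```shell
# Hypothetical FreeNAS address - dynamic (SendTargets) discovery needs only the IP
iscsiadm add discovery-address 192.168.1.100:3260
iscsiadm modify discovery --sendtargets enable
# Create device nodes for the newly discovered LUNs
devfsadm -i iscsi
```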


It is far from easy to correlate which of these “Solaris disks” pertain to which “iSCSI extents” on FreeNAS.  The only giveaway as to which one is my VMWare DataStore is the size, shown below…

So, I definitely do not want to use this disk on the Solaris system as it’s already in use by VMWare.  This is why it’s a good idea to use static discovery and/or authentication!

On my Solaris host, I can go back and remove the FreeNAS discovery address and start over using static discovery instead.

Static Discovery

I know the IP Address, port, base name and target name of my FreeNAS where my iSCSI extents are waiting to be discovered so I may as well use static discovery.

As I’ve already used dynamic discovery, I first need to list the discovery methods, disable Send Targets (dynamic discovery) and enable Static (static discovery).

It’s a bad idea to use both static discovery and dynamic discovery simultaneously.

iscsiadm remove discovery-address

iscsiadm modify discovery -t disable   (Disables Send Targets)

iscsiadm modify discovery -s enable   (Enables Static)

iscsiadm list discovery   (Lists discovery methods)

With static discovery set, I can now re-add the discovery address, not forgetting the port (like I just did, above).

iscsiadm add discovery-address
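On Solaris 11, a static entry can also be added explicitly with add static-config, pairing the target IQN with the address and port.  A sketch with a hypothetical IQN and address (take the real values from the FreeNAS Services, iSCSI section):

```shell
# Hypothetical IQN and address - substitute the base/target names from FreeNAS
iscsiadm add static-config iqn.2011-03.example.istgt:target0,192.168.1.100:3260
# Verify that only the statically configured target is visible
iscsiadm list target
```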

You can see now that, by using static discovery to only discover extents available at the specified target on port 3260, my Solaris 11 host has only discovered the 5 devices (extents) I have in mind for my ZPool, and the VMWare DataStore has not been discovered.

The format command is a convenient way to list the device names for your “disks”, but you don’t need format for anything else here, so press CTRL-C to exit it.
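A non-interactive alternative, if you only want the disk list: feed format an EOF so it prints the available disks and exits by itself.

```shell
# Prints the available disks, then exits at the "Specify disk" prompt
format < /dev/null
```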

Create ZPool

I can use my notes here to help with configuring ZPools and ZFS.

Since my FreeNAS uses ZFS itself to turn 4 x physical 2TB SATA disks into its 7TB “ONEBIGVOLUME”, which is subsequently carved up into a 1TB VMWare DataStore and my 5 x 250GB Solaris 11 ZPOOL1 volumes, the RAIDZ resilience to physical drive failure is set at the NAS level and need not be repeated when configuring the ZPool from the 5 iSCSI extents.

I could therefore have created a single 1TB iSCSI extent and created my ZPool on the Solaris host from just the one “disk”, since the RAIDZ resilience to physical disk failure exists on the FreeNAS.  By creating 5, at least I have the option of creating my ZPool with RAIDZ on the Solaris host in my lab also.

zpool create ZPOOL1 <device1> <device2> <device3> <device4> <device5>

Here you can see the system warning about the lack of RAIDZ redundancy in my new pool.  If the disks were physical, it’d be a risk but in my lab environment, it’s not a problem.
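To avoid that warning and keep RAIDZ on the Solaris side as well, the pool could instead be created like this (the cXtYdZ names below are placeholders, use the device names reported by format on your own host):

```shell
# raidz vdev across the five iSCSI "disks" - device names are hypothetical
zpool create ZPOOL1 raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
zpool status ZPOOL1
```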

Although FreeNAS defaults to compression being turned on when you create a new volume in a pool, I created each of my 5 volumes used as iSCSI extents here with compression disabled.  This is because I intend to use the compression and deduplication options when creating the ZFS file systems that will host my Solaris Zones on my Solaris 11 host instead.
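A sketch of that intent, assuming the pool is ZPOOL1 and the zones will live under a hypothetical ZPOOL1/zones filesystem:

```shell
# Compression and dedup applied at the ZFS filesystem level, not on FreeNAS
zfs create -o compression=gzip -o dedup=on ZPOOL1/zones
zfs get compression,dedup ZPOOL1/zones
```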

I have a separate post here on Administering Solaris 11 Zones with the requisite commands but will post screenshots here from my own lab.

This is really where the post ends within the context of connecting Solaris 11 to iSCSI storage.

Create ZFS mount point for Zones

Create/Configure Zone1

Create system configuration for Zone1

Install the Zone1

Boot Zone1

Ping Zone1

Log into Zone1

SSH From Linux Workstation

ZLOGIN from Solaris Global Zone

So that’s the process end-to-end, from discovering iSCSI SAN storage through to logging into your new Solaris 11 Zone.

Solaris 11 ZFS Administration

This concise post aims to cover the basics of ZFS administration on Solaris.  Excuse the brevity, it is for reference rather than a detailed explanation.

ZFS Pools

zpool list && zpool status rpool

In a lab environment, files can replace actual physical disks:

cd /dev/dsk && mkfile 200m disk{0..9}

Create a ZPool and Expand a ZPool

zpool create labpool raidz disk0 disk1 disk2 && zpool list && zpool list labpool

zpool add labpool raidz disk4 disk5 disk6 && zpool list && zfs list labpool

ZFS Compression

zfs create labpool/zman && zfs set compression=gzip labpool/zman

You can copy files to this ZFS filesystem with gzip compression enabled and save nearly half your disk space.
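To check the saving, assuming the filesystem is mounted at /labpool/zman:

```shell
# Copy some compressible text files in, then ask ZFS for the achieved ratio
cp /var/adm/messages* /labpool/zman/
zfs get compressratio labpool/zman
```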

ZFS Deduplication

zfs create -o dedup=on -o compression=gzip labpool/archive

zfs create labpool/archive/a   (repeat for b, c and d)

By copying multiple instances of the same file into /labpool/archive/a, b, c and d (where the /labpool/archive filesystem has deduplication turned on), you’ll see that zpool list labpool increments the value in the DEDUP column to reflect the deduplication ratio as more identical files are added.
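As a sketch, using a hypothetical sample file:

```shell
# Copy the same file into all four filesystems; identical blocks dedup to one copy
for fs in a b c d; do
  cp /tmp/sample.tar /labpool/archive/$fs/
done
# The DEDUP column rises above 1.00x as identical copies accumulate
zpool list labpool
```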

Note also that although compression is enabled at the ZFS filesystem level, copying an already-gzipped file will not result in further gains: the value returned by zfs get compressratio labpool/archive stays at 1.00x.

ZFS Snapshots

zfs snapshot -r labpool/archive@snap1 && zfs list -r -t all labpool

zfs rollback labpool/archive/a@snap1

Snapshots use copy-on-write, so creating one is near-instant and initially consumes almost no space; the snapshot only grows as blocks in the live filesystem change, and those changes can then be rolled back.  Snapshots left in place on filesystems with high write IO will of course grow accordingly.

ZFS Clones

zfs clone labpool/archive/a@snap1 labpool/a_work

zfs list -r -t all labpool

zfs list -r -t all labpool will show all the zfs filesystems including snapshots and clones.  Changes can be made to the clone filesystem without affecting the original.


Console Access on HP/3COM OfficeConnect Managed Gigabit Switch

  1. Purchase USB console cable
  2. In Windows, plug in the cable, search for Device Manager, then click “Update Driver” on any serial port items that show warnings.  The internet found and installed working drivers for me.
  3. Optionally download the manual for the switch.  OfficeConnect 3CDSG8 Manual
  4. Download and Install PuTTY
  5. Create a serial connection with the following settings: BAUD 38,400 / 8 data bits / no parity / 1 stop bit / no hardware flow control.
  6. Log on to the switch as admin and refer to the screenshot below to disable DHCP and configure a static IP address.

Next ping the new IP address, and attempt to connect using a web browser.

Log in using the same admin and password as with the console.


Mount USB HDD by UUID in Linux

The danger with USB hard disk drives is that when you have more than one plugged into your workstation, the device names assigned by the operating system might not be consistent between reboots, i.e. /dev/sdb and /dev/sdc might swap places.  Potential disaster if you rsync data from one to the other on a periodic basis.

If permanently mounting USB hard disks, it’s much safer to mount according to the UUID of the disk instead of the device name assigned by the OS.

If you change to root using sudo su - and cd into /dev/disk, you’ll see that there are multiple links in there, organised into different folders.  The /dev/disk/by-uuid folder links each device name to its unique ID (UUID).

You can see which device name is mounted where using df -h.  Then use the output of ls -al /dev/disk/by-uuid to correlate UUID to filesystem mount.  There are probably other ways to match filesystem to UUID, but this is quick and easy enough to do.

Note that I’ve also taken the liberty of piping the commands through grep to reduce the output to just what I want to know, i.e. the UUIDs mapped to devices named /dev/sda1, /dev/sda2, /dev/sdb1 etc.

Once you’re confident you know which UUID is which disk, you can permanently mount the disk or disks that are permanent fixtures by creating a mount point in the filesystem and adding a line to /etc/fstab.
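An example /etc/fstab entry, with a hypothetical UUID and mount point:

```shell
# /etc/fstab - UUID and mount point are placeholders;
# nofail stops the boot hanging if the disk happens to be unplugged
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /mnt/usb1  ext4  defaults,nofail  0  2
```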

Finally, mount -a will pick up the UUID and mount the disk into the mount point.


Deleting inaccessible data such as users’ home directories

Users’ home directories are often hardened such that even Domain Administrators have problems migrating them and subsequently deleting them.  A way to deal with that is already documented here, so this post is really just about the subsequent cleanup of the stubborn source data.

You can sit in Windows Explorer taking ownership and rattling the new permissions down each user’s tree if you like, but it’s a laborious process when you have 2000 users.  It doesn’t always work out 100% successfully either.

This is my way of clearing out all users’ home directories that begin with the characters u5, for example.  You can adapt or scale it up to suit your own requirements easily and save yourself a lot of time and effort.

First, make a list of the directories you want to delete.  Whether you have access to them or not is irrelevant at this stage.

dir /ad /b | findstr ^u5 > mylist.txt

dir /ad /b | findstr ^U5 >> mylist.txt

Create an empty folder if you don’t have one already.

mkdir empty

Now mirror that empty folder over the top of the users in the list, exploiting the operating system’s backup right in robocopy (/B) that conveniently bypasses NTFS security.  Note: if you run this from a batch file rather than the command line, double the percent signs, i.e. %%F.

for /f %F in (mylist.txt) DO robocopy empty %F /MIR /B /TIMFIX

This will leave empty folders behind but the security on them will have been overwritten with that of your empty folder, giving you the permission to delete it.

for /f %F in (mylist.txt) DO rmdir %F


Note: Use /TIMFIX with /B to correct non-copying of datestamps on files, resulting in 02/01/1980 datestamps on all files copied with /B Backup rights.


Manually set IP, Subnet and Gateway addresses on VNX Control Station

How to change the Control Station IP Address and Subnet Mask

Log in to the Control Station as root.

Change the IP address and network mask by using this command syntax:

Note: /sbin/ifconfig -a revealed eth3 to be my cs0 interface.


# /sbin/ifconfig eth3 <ipaddr> netmask <netmask>

e.g. /sbin/ifconfig eth3 netmask


This changes the immediate configuration, but does not persist across restarts.

Edit the network scripts file, /etc/sysconfig/network-scripts/ifcfg-eth3, by using a text editor (that means vi)
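The file would end up looking something like this (all values below are placeholders, substitute your own addresses):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth3 - hypothetical values
DEVICE=eth3
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.220
NETMASK=255.255.255.0
```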



Edit the local hosts file, /etc/hosts

Look for lines with the old IP address.

Replace the old IP address with your new IP address.

Save the file and exit.

If you are changing the Control Station IP address, but remaining on the same network, then the SP IP addresses for an integrated model need not be modified. However, if you are changing to a different network, the SP IP addresses must be modified to be on the same physical network as the Control Station for the Integrated model. Use the clariion_mgmt -modify -network command to update the IP addresses on the SP, as it will also update the files and Celerra database with the modified IP addresses.

How to change the Control Station default gateway

Log in to the Control Station as root using SSH. Add a default route by typing:


# /sbin/route add default gw


This changes the immediate configuration, but does not persist across restarts.

Edit the network configuration file, /etc/sysconfig/network-scripts/ifcfg-eth3, by using a text editor.

Add the new gateway IP address for the entries similar to:
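For example (hypothetical gateway address):

```shell
# Appended to /etc/sysconfig/network-scripts/ifcfg-eth3
GATEWAY=192.168.1.1
```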


Save the file and exit.


Download the full Firefox stand-alone installer

There’s nothing more frustrating than downloading an installer that assumes that you’re going to have internet access on the machine that you subsequently intend to run the installer on (called a stub installer).

For example: downloading Firefox so that you can get to your enterprise storage array’s Java-based admin interface without the agony of Internet Explorer’s tendency to throw its toys out of the pram over the certificate, while the settings are locked down by this IE policy, that policy and the other policy that all exist to make the environment so much more “secure” but actually just don’t allow anything, anywhere, ever.  It’s secure!  It’s been signed off as being suitably unusable to prevent exposing ourselves to any kind of imaginary threat!  Aren’t we clever?  No.  Rant over.

I’ve probably digressed, I can’t tell.  I’m too angry.  And you probably are too, if you’ve ended up here.  Installers that assume an internet connection are completely useless in the enterprise environment (best read in the voice of Clarkson).

What’s even more frustrating is that the stub installer is the only apparent option, judging by Mozilla’s website.  Well, it isn’t the only option – you can still download the full-fat, stand-alone installer from their FTP site – but FTP is blocked by your firewall!

No bother, just replace ftp:// with http:// at the beginning of the URL, or even better just click here for the 64 bit version (or here for the 32 bit version).



Enable NFSv4 on VNX

To enable NFSv4 on your up-to-date (post VNX OE for File v7.1) VNX Unified storage system and configure a Data Mover to mount a filesystem for NFSv4 access with a MIXED access policy, the following steps serve as a concise guide.  NFSv4 cannot be configured via Unisphere.

Log on to the Control Station as the nasadmin user via SSH using PuTTY.

START NFSv4 Server on VNX
server_nfs server_2 -v4 -service -start

SET DOMAIN NAME to nfsv4.domain (change as required)
server_param server_2 -facility nfsv4 -modify domain -value nfsv4.domain

server_param server_2 -facility nfsv4 -info domain

server_param server_2 -facility nfsv4 -list

MOUNT NFS_TEST_2 on server_2 for NFSv4 access
server_mount server_2 -option accesspolicy=MIXED NFS_TEST_2 /NFS_TEST_2

TRANSLATE existing, mounted NFS filesystem from NATIVE access policy to MIXED access policy
nas_fs -translate NFS_TEST_2 -access_policy start -to MIXED -from NATIVE

server_nfs server_2 -v4 -client -list

NFSv4 requires UNICODE enabled on DM. Check…
server_cifs server_2 | grep I18N
I18N mode = UNICODE

server_nfs server_2 -v4

It’s highly likely that if you require NFS v4, then you’ll also need to authenticate access, using a UNIX based Kerberos DC.  The following notes cover the configuration steps involved.  Please note that this section below is still a work in progress and you should refer to the official EMC documentation for a complete set of instructions with examples.

SECURE NFS (using UNIX Kerberos Authentication)

server_kerberos server_2 -add realm=<realm-name>,kdc=<fqdn_kdc_name>,kadmin=<kadmin_server>,domain=<domain_name>,defaultrealm
Note: realm, kdc, kadmin and domain should all be entered as FQDNs.

server_kerberos server_2 -list

server_nfs <datamovername> -secnfs
Note: server_2 is already set during VNX installation.

server_nfs <newdatamovername> -secnfs -principal -delete nfs@server_2
Note: this is only required if you change the default Data Mover hostname from server_2 to e.g. Ingbe245.
server_nfs <newdatamovername> -secnfs -principal -create nfs@<server>
Note: <server> is the server name within the realm, and the command needs to be run twice, once with the short name, e.g. Ingbe245, and once more with the FQDN.

server_nfs server_2 -secnfs -service -stop
server_nfs Ingbe245 -secnfs -service -start

Copy the /.etc/krb5.keytab file (if it exists) to the Kerberos KDC.

Note: the kadmin steps are performed on the Kerberos KDC, not the VNX.
kadmin: addprinc -randkey nfs/Ingbe245
kadmin: addprinc -randkey nfs/Ingbe245.fqdn.local

kadmin: listprincs

kadmin: ktadd -k <keytab_file_path> <name>
<keytab_file_path> = location of key file
<name>=name of previously created service principal e.g. nfs/Ingbe245

Copy the krb5.keytab file from Kerberos KDC to the Data Mover by using FTP and the server_file command.
Note: EMC Common Anti-Virus Agent (CAVA) is also configured using the server_file command, to place and retrieve the viruschecker.conf file.  There are notes on that here, but to save you the trouble, the command for your convenience is…

server_file server_2 -get krb5.keytab krb5.keytab

server_file server_2 -put krb5.keytab krb5.keytab
server_kerberos Ingbe245 -keytab

server_nfs <datamovername> -secnfs -mapper -info

server_nfs <datamover_name> -secnfs -mapper -set -source auto

server_nfs server_2 -v4 -client -list


Obtaining disk serial numbers from VNX

Most things VNX can be exported using Unisphere’s little export icon in the top right hand corner of most if not all dialogs.  Disk information would be found under System, Hardware, Disks.  You’ll see there is a part number column, but no serial number column in Unisphere for the disks.

To obtain the serial numbers of the HDDs in your array, download and install naviseccli on your laptop/storage management server and use the following command…

naviseccli -h <sp-ip-address> -User sysadmin -Password ********* -Scope 0 getdisk -serial

If a security file containing the credentials is already present on the storage management server, then you won’t need to specify the username and password in plain text as shown above.
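The security file can be created once with -AddUserSecurity; a sketch with placeholder credentials:

```shell
# Stores credentials for the current OS user (values shown are placeholders)
naviseccli -AddUserSecurity -User sysadmin -Password mypassword -Scope 0
# Later commands then need no credentials on the command line
naviseccli -h <sp-ip-address> getdisk -serial
```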




Change Cisco MDS Admin password

Step 1 Use the show user-account command to verify that your user name has network-admin privileges.

switch# show user-account
this user account has no expiry date

Step 2 If your user name has network-admin privileges, issue the username command to assign a new administrator password.

switch# config t
switch(config)# username admin password <new password>
switch(config)# exit

Step 3 Save the software configuration.

switch# copy running-config startup-config
Full Cisco documentation here (includes password recovery for lost passwords)