Connecting to SPARC Server ILOM console

This post covers connecting to the server console via the SER MGT port and using a terminal emulator to assign a static IP address to the NET MGT port on an Oracle Solaris SPARC Server.

It also covers the process of temporarily preventing automatic boot of the preinstalled OS so that you can boot from alternative installation media of your own choosing, to perform a fresh install for example.

Verify that Device Manager on Windows has the correct drivers loaded for your USB console cable if applicable, making a note of the COM port assigned.

Connect the console cable to the SER MGT port

Configure a terminal emulator such as PuTTY with these settings:

9600 baud
8 bits
No parity
1 Stop bit
No handshake

Oracle Solaris 11 installation options are covered here.

Default username and password

The ILOM's default administrator account is root (the default password has typically been changeme; check the documentation for your firmware release).

To set the password on the ILOM console for the first time, use the following command:

-> set /SP/users/root password

Assign a Static IP Address to the NET MGT Port

If you plan to connect to the SP through its NET MGT port, the SP must have a valid IP address.

By default, the server is configured to obtain an IP address from DHCP services in your network.  If the network your server is connected to does not support DHCP for IP addressing, perform this procedure.

Set the SP to accept a static IP address.
-> set /SP/network pendingipdiscovery=static

Set the IP address for the SP.
-> set /SP/network pendingipaddress=SP-IPaddr

Set the IP address for the SP gateway.
-> set /SP/network pendingipgateway=gateway-IPaddr

Set the netmask for the SP.
-> set /SP/network pendingipnetmask=netmask-value

Your network environment's subnet might require a different netmask from the example; use the netmask most appropriate to your environment.
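If you think in CIDR prefix lengths rather than dotted netmasks, this small helper (my own sketch, not part of the ILOM CLI) converts a prefix length to the dotted form expected by pendingipnetmask:

```shell
#!/bin/bash
# Convert a CIDR prefix length (e.g. 24) to a dotted-quad netmask
# (e.g. 255.255.255.0) for use with pendingipnetmask.
prefix_to_netmask() {
  local p=$1 octet mask=""
  for _ in 1 2 3 4; do
    if [ "$p" -ge 8 ]; then
      octet=255; p=$((p - 8))
    else
      octet=$((256 - (1 << (8 - p)))); p=0
    fi
    mask="${mask:+$mask.}$octet"
  done
  echo "$mask"
}

prefix_to_netmask 24   # 255.255.255.0
prefix_to_netmask 16   # 255.255.0.0
```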

Verify that the parameters were set correctly.
-> show /SP/network -display properties

Commit the changes to the SP network parameters.
-> set /SP/network commitpending=true

Note – You can type the show /SP/network command again to verify that the parameters have been updated.
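Putting the steps together, an end-to-end ILOM session looks like the sketch below. The addresses (192.0.2.100, 192.0.2.1, 255.255.255.0) are illustrative placeholders, not values from my lab:

```
-> set /SP/network pendingipdiscovery=static
-> set /SP/network pendingipaddress=192.0.2.100
-> set /SP/network pendingipgateway=192.0.2.1
-> set /SP/network pendingipnetmask=255.255.255.0
-> show /SP/network -display properties
-> set /SP/network commitpending=true
```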

The Oracle Solaris SPARC Server comes with a preinstalled OS.  You will be prompted to boot it, or it will boot automatically.  If you wish to install a fresh OS from your own boot media, the following section, distilled from the official instructions here, may be useful.

Reach a State to Install a Fresh OS (Oracle ILOM CLI)

-> set /HOST/bootmode script="setenv auto-boot? false"
This setting prevents the server from booting the preinstalled OS. When you use bootmode, the change applies to a single boot only and expires in 10 minutes if power on the host is not reset.

When you are ready to initiate the OS installation, reset the host.
-> reset /System

Switch communication to the server host.
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started. To stop, type #.

The server might take several minutes to complete POST, and then the OpenBoot prompt (ok) is displayed.

Boot from the appropriate boot media for your installation method.

For a list of valid boot commands that you can enter at the OpenBoot prompt, type:

{0} ok help boot

Useful ILOM Console Commands

A link to the ILOM console commands is available here; some regularly used examples are listed below.

You can also show all the CLI commands with

show /SP/cli/commands

cd     e.g. cd /HOST/console

create     creates a target property and sets a value

help     shows help, e.g. help, help targets, help editing or help legal

ls     lists (sub)targets within a target, e.g. ls /HOST/console/bootlog or ls /HOST/console/history

reset     e.g. reset /System

set     e.g. set /HOST/bootmode script="setenv auto-boot? false"

show     e.g. show /SP/network or show /HOST status

start     e.g. start /HOST/console or start /SP/console

version     shows the firmware versions of the SP

ILOM Targets

All HOST, System and SP targets can be listed using the following ILOM console command.

-> help targets

Target Meaning

/ Hierarchy Root
/HOST Manage the Host
/HOST/bootmode Manage the Host Boot Method
/HOST/console Redirect Host Serial Console to SP
/HOST/console/bootlog View Host Console Output From Last Power On
/HOST/console/history View Host Console Output
/HOST/diag Manage Host Power On Self Test Diagnostics
/HOST/domain Manage Logical Domains
/HOST/domain/control Manage Host Control and Guest Boot Methods
/HOST/tpm Manage the Trusted Platform Module Device
/HOST/verified_boot Manage Verified Boot configuration
/HOST/verified_boot/system_certs Verified Boot system certificates
/HOST/verified_boot/system_certs/1 Verified Boot Certificate
/HOST/verified_boot/system_certs/2 Verified Boot Certificate
/HOST/verified_boot/user_certs Verified Boot user certificates
/HOST/verified_boot/user_certs/1 Verified Boot Certificate
/HOST/verified_boot/user_certs/2 Verified Boot Certificate
/HOST/verified_boot/user_certs/3 Verified Boot Certificate
/HOST/verified_boot/user_certs/4 Verified Boot Certificate
/HOST/verified_boot/user_certs/5 Verified Boot Certificate
/System View System Summary
/System/Open_Problems View Open Problems
/System/Processors View Processors Summary
/System/Processors/CPUs View List of CPUs
/System/Processors/CPUs/CPU_0 CPU Details
/System/Processors/CPUs/CPU_1 CPU Details
/System/Memory View Memory Summary
/System/Memory/DIMMs View List of DIMMs
/System/Memory/DIMMs/DIMM_0 DIMM Details
/System/Memory/DIMMs/DIMM_2 DIMM Details
/System/Memory/DIMMs/DIMM_4 DIMM Details
/System/Memory/DIMMs/DIMM_6 DIMM Details
/System/Memory/DIMMs/DIMM_8 DIMM Details
/System/Memory/DIMMs/DIMM_10 DIMM Details
/System/Memory/DIMMs/DIMM_12 DIMM Details
/System/Memory/DIMMs/DIMM_14 DIMM Details
/System/Memory/DIMMs/DIMM_16 DIMM Details
/System/Memory/DIMMs/DIMM_18 DIMM Details
/System/Memory/DIMMs/DIMM_20 DIMM Details
/System/Memory/DIMMs/DIMM_22 DIMM Details
/System/Memory/DIMMs/DIMM_24 DIMM Details
/System/Memory/DIMMs/DIMM_26 DIMM Details
/System/Memory/DIMMs/DIMM_28 DIMM Details
/System/Memory/DIMMs/DIMM_30 DIMM Details
/System/Power View Power Summary
/System/Power/Power_Supplies View List of Power Supplies
/System/Power/Power_Supplies/Power_Supply_0 Power Supply Details
/System/Power/Power_Supplies/Power_Supply_1 Power Supply Details
/System/Cooling View Cooling Summary
/System/Cooling/Fans View List of Fans
/System/Cooling/Fans/Fan_0 Fan Module Details
/System/Cooling/Fans/Fan_1 Fan Module Details
/System/Cooling/Fans/Fan_2 Fan Module Details
/System/Cooling/Fans/Fan_3 Fan Module Details
/System/Cooling/Fans/Fan_4 Fan Module Details
/System/Cooling/Fans/Fan_5 Fan Module Details
/System/Cooling/Fans/Fan_6 Fan Module Details
/System/Cooling/Fans/Fan_7 Fan Module Details
/System/Storage View Storage Summary
/System/Storage/Disks View List of Storage Disks
/System/Storage/Controllers View List of Storage Controllers
/System/Storage/Volumes View List of Storage Volumes
/System/Storage/Expanders View List of Storage Expanders
/System/Networking View Network Summary
/System/Networking/Ethernet_NICs View List of Ethernet NICs
/System/Networking/Ethernet_NICs/Ethernet_NIC_0 Ethernet NIC Details
/System/Networking/Ethernet_NICs/Ethernet_NIC_1 Ethernet NIC Details
/System/Networking/Ethernet_NICs/Ethernet_NIC_2 Ethernet NIC Details
/System/Networking/Ethernet_NICs/Ethernet_NIC_3 Ethernet NIC Details
/System/Networking/Infiniband_HCAs View List of Infiniband HCAs
/System/PCI_Devices View Devices Summary
/System/PCI_Devices/On-board View List of On-board Devices
/System/PCI_Devices/On-board/Device_0 On-board device details
/System/PCI_Devices/On-board/Device_1 On-board device details
/System/PCI_Devices/On-board/Device_2 On-board device details
/System/PCI_Devices/On-board/Device_3 On-board device details
/System/PCI_Devices/On-board/Device_4 On-board device details
/System/PCI_Devices/On-board/Device_5 On-board device details
/System/PCI_Devices/Add-on View List of Add-on Devices
/System/PCI_Devices/Add-on/Device_1 Add-on device details
/System/PCI_Devices/Add-on/Device_2 Add-on device details
/System/PCI_Devices/Add-on/Device_3 Add-on device details
/System/PCI_Devices/Add-on/Device_6 Add-on device details
/System/PCI_Devices/Add-on/Device_7 Add-on device details
/System/PCI_Devices/Add-on/Device_8 Add-on device details
/System/Firmware View Firmware Summary
/System/Firmware/Other_Firmware View List of Other Firmware
/System/Log Manage the System Log
/System/Log/list View System Log Entries
/SP Manage the Service Processor
/SP/alertmgmt Manage Alerts
/SP/alertmgmt/rules Manage Alert Rules (IPMI, SNMP, Email)
/SP/cli Manage Command Line Interface Sessions
/SP/clients Manage Client External Services
/SP/clients/activedirectory Manage Active Directory Authentication
/SP/clients/activedirectory/admingroups Manage Administrator Groups
/SP/clients/activedirectory/alternateservers Manage Alternate Servers
/SP/clients/activedirectory/cert Manage Certificates
/SP/clients/activedirectory/customgroups Manage Custom Groups
/SP/clients/activedirectory/dnslocatorqueries Manage DNS Locator Queries
/SP/clients/activedirectory/opergroups Manage Operator Groups
/SP/clients/activedirectory/userdomains Manage User Domains
/SP/clients/asr Manage Automatic Service Request.
/SP/clients/asr/cert Manage Certificate
/SP/clients/dns Manage Domain Name Service Resolution
/SP/clients/ldap Manage LDAP Authentication
/SP/clients/ldapssl Manage LDAP/SSL Authentication
/SP/clients/ldapssl/admingroups Manage Administrator Groups
/SP/clients/ldapssl/alternateservers Manage Alternate Servers
/SP/clients/ldapssl/cert Manage Certificates
/SP/clients/ldapssl/customgroups Manage Custom Groups
/SP/clients/ldapssl/opergroups Manage Operator Groups
/SP/clients/ldapssl/optionalUserMapping Manage Alternate User Mapping
/SP/clients/ldapssl/userdomains Manage User Domains
/SP/clients/ntp Manage the Network Time Protocol Service
/SP/clients/ntp/server Manage the NTP Servers
/SP/clients/oeshm OESHM status and state info
/SP/clients/oeshm/ssl HMN SSL configuration and status
/SP/clients/oeshm/ssl/agent_cert OESHM SSL Client certificate
/SP/clients/oeshm/ssl/agent_key OESHM SSL Client key
/SP/clients/oeshm/ssl/server_cert OESHM SSL Server certificate
/SP/clients/radius Manage RADIUS Authentication
/SP/clients/radius/alternateservers Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/1 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/2 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/3 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/4 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/5 Alternate RADIUS servers configuration
/SP/clients/smtp Manage the SMTP Server Service
/SP/clients/syslog Manage the Syslogd Remote Logging Server
/SP/clock Manage the SP Clock
/SP/config Manage SP Configuration (Backup/Restore)
/SP/diag Manage SP Diagnostics
/SP/diag/snapshot Save SP Snapshot for Diagnostic Purposes
/SP/faultmgmt Manage System FRU Faults
/SP/faultmgmt/shell Fault Management Shell
/SP/firmware Manage the SP Firmware
/SP/firmware/backupimage Manage Firmware Backup Image Information
/SP/firmware/host Manage Host-accessible SP firmware
/SP/firmware/host/miniroot Miniroot information
/SP/firmware/keys Image Signing Public Keys
/SP/firmware/keys/sun View Sun Keys
/SP/logs Manage Logs
/SP/logs/audit Manage the Audit Log
/SP/logs/audit/list View Audit Log Entries
/SP/logs/event Manage the Event Log
/SP/logs/event/list View Event Log Entries
/SP/network Manage Network Port Configuration
/SP/network/interconnect Manage Internal USB Ethernet Port Configuration
/SP/network/ipv6 Manage IPv6 Network Configuration
/SP/policy Manage System Policies
/SP/preferences Manage SP Preferences
/SP/preferences/banner Manage SP Login Messages
/SP/preferences/banner/connect Manage SP Connect Message
/SP/preferences/banner/login Manage SP Login Message
/SP/preferences/password_policy Manage SP Password Policy
/SP/serial Manage Serial Interfaces
/SP/serial/external Manage the External Serial Port
/SP/services Manage SP Access Services
/SP/services/fips Manage the FIPS mode of ILOM (Federal Information Processing Standards, publication 140-2, Security Requirements for Cryptographic Modules)
/SP/services/http Manage the HTTP Service
/SP/services/https Manage the HTTPS Service
/SP/services/https/ssl Manage the HTTPS SSL Certificate
/SP/services/https/ssl/custom_cert Manage the Custom SSL Certificate
/SP/services/https/ssl/custom_key Manage the Custom SSL Private Key
/SP/services/https/ssl/default_cert View the Default SSL Certificate
/SP/services/ipmi Manage the IPMI Service
/SP/services/kvms Manage the Remote KVMS Service
/SP/services/kvms/host_storage_device Manage the KVMS Host Storage
/SP/services/kvms/host_storage_device/remote Manage the KVMS Remote Virtual Device
/SP/services/servicetag Manage Service Tags
/SP/services/snmp Manage the SNMP Agent Service
/SP/services/snmp/communities Manage SNMP Communities (v2)
/SP/services/snmp/users Manage SNMP Users (v3)
/SP/services/ssh Manage the Secure Shell Service
/SP/services/ssh/keys Manage Secure Shell Authentication
/SP/services/ssh/keys/dsa Manage the SSH DSA Key
/SP/services/ssh/keys/rsa Manage the SSH RSA key
/SP/services/sso Manage the Single Sign-on Service
/SP/sessions View User Sessions
/SP/users Manage Local SP User Accounts




ESXi UI timeout

You may find that your ESXi web-based UI frustratingly times out after a few minutes during attempted uploads of large files, such as ISO images, to your datastore.

You can disable the timeout altogether.

Click the little drop-down at the far right-hand side, then choose:

Settings, Application timeout, Off.


Configure Solaris 11 iSCSI Initiator

With my iSCSI target configured on FreeNAS and my Solaris 11 global zone installed, it's time to configure the iSCSI initiator to discover the iSCSI target using the second NIC in my Solaris 11 host (or "global zone").

In my lab environment, I have created one big volume called "ONEBIGVOLUME" on my FreeNAS, consisting of 4 x 7200 RPM SATA disks.  Within this single volume, I have created 5 x 250GB ZVols, from which I've then created 5 x iSCSI device extents for my Solaris 11 host to discover.  I'll then create a single ZPool on my Solaris host, using these 5 iSCSI extents on FreeNAS as if they were local disks.

First, I need to configure the second NIC that I intend to use for iSCSI traffic on my network.  I'll refer to my own post here for help configuring it.

The screen shot below shows the process end-to-end.

The Oracle document here describes the process of enabling iSCSI.

I noticed that the subnet mask was incorrect on my second NIC.  My fault for not specifying it; the OS assumed an 8-bit instead of a 24-bit mask for my network.  I've included the steps taken to fix that below.

Note the commands highlighted below that were not accepted by the OS, and how I ultimately fixed the problem.

Enable iSCSI Initiator

svcadm enable network/iscsi/initiator

From my FreeNAS, Services, iSCSI section, I can see that my base name is…

…and my target is called…

Dynamic Discovery

Here, I use dynamic discovery to find all disks on the FreeNAS iSCSI target, using just the IP Address.

This is probably the simplest way of discovering the disks, but it is also dangerous: the list may include a disk that is in use by another system (in my case, I have a VMware datastore too).

iscsiadm add discovery-address

iscsiadm modify discovery --sendtargets enable

devfsadm -i iscsi


It is far from easy to correlate which of these Solaris "disks" pertain to which iSCSI extents on FreeNAS.  The only giveaway as to which one is my VMware datastore is the size, shown below…

So I definitely do not want to use this disk on the Solaris system, as it's already in use by VMware.  This is why it's a good idea to use static discovery and/or authentication!
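To make the correlation less painful, target-to-device pairs can be pulled out of iscsiadm list target -S output with a little awk. The sample output below is a hand-written stand-in (the IQNs and device names are invented for illustration; check the real field layout on your own system):

```shell
#!/bin/bash
# Extract "target -> OS device" pairs from (sample) `iscsiadm list target -S`
# output. The sample text below is invented for illustration only.
iscsiadm_output='Target: iqn.2005-10.org.freenas.ctl:extent1
        TPGT: 1
        LUN: 0
             OS Device Name: /dev/rdsk/c0t600144F0AAAA01d0s2
Target: iqn.2005-10.org.freenas.ctl:extent2
        TPGT: 1
        LUN: 0
             OS Device Name: /dev/rdsk/c0t600144F0BBBB01d0s2'

# Remember the current target name; print it alongside each device name.
pairs=$(printf '%s\n' "$iscsiadm_output" | awk '
  /^Target:/          { target = $2 }
  /OS Device Name:/   { print target " -> " $NF }')
printf '%s\n' "$pairs"
```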

On my Solaris host, I can go back and remove the FreeNAS discovery address and start over using static discovery instead.

Static Discovery

I know the IP address, port, base name and target name of my FreeNAS where my iSCSI extents are waiting to be discovered, so I may as well use static discovery.

As I've already used dynamic discovery, I first need to list the discovery methods, disable Send Targets (dynamic discovery) and enable Static (static discovery).

It’s a bad idea to use both static discovery and dynamic discovery simultaneously.

iscsiadm remove discovery-address

iscsiadm modify discovery -t disable   (disables Send Targets / dynamic discovery)

iscsiadm modify discovery -s enable   (enables Static discovery)

iscsiadm list discovery   (lists discovery methods)

With static discovery set, I can now re-add the discovery address, not forgetting the port (like I just did, above).

iscsiadm add discovery-address

You can see now that, by using static discovery to discover only the extents available at the "" target on port 3260, my Solaris 11 host has discovered only the 5 devices (extents) I have in mind for my ZPool, and the VMware datastore has not been discovered.

The format command is a convenient way to list the device names for your "disks", but you don't need format to do anything else with them, so press CTRL-C to exit format.

Create ZPool

I can use my notes here to help with configuring ZPools and ZFS.

Since my FreeNAS uses ZFS itself to turn 4 x physical 2TB SATA disks into its 7TB "ONEBIGVOLUME" (subsequently carved up into a 1TB VMware datastore and my 5 x 250GB Solaris 11 ZPool1 volumes), the RAIDZ resilience to physical drive failure is provided at the NAS level and need not be repeated when configuring the ZPool from the 5 iSCSI extents.  I could have created a single 1TB iSCSI extent and built my ZPool on the Solaris host from just that one "disk".  By creating 5, I at least have the option of creating the ZPool with RAIDZ on the Solaris host in my lab as well.

zpool create ZPOOL1 <device1> <device2> <device3> <device4> <device5>

Here you can see the system warning about the lack of RAIDZ redundancy in my new pool.  If the disks were physical, that would be a risk, but in my lab environment it's not a problem.

Although FreeNAS defaults to compression being turned on when you create a new volume in a pool, I created each of the 5 volumes used as iSCSI extents here with compression disabled.  This is because I intend to use the compression and deduplication options when creating the ZFS file systems that will host my Solaris Zones on my Solaris 11 host instead.

I have a separate post here on Administering Solaris 11 Zones with the requisite commands but will post screenshots here from my own lab.

This is really where the post ends within the context of connecting Solaris 11 to iSCSI storage.

Create ZFS mount point for Zones

Create/Configure Zone1

Create system configuration for Zone1

Install the Zone1

Boot Zone1

Ping Zone1

Log into Zone1

SSH From Linux Workstation

ZLOGIN from Solaris Global Zone

So that's the end-to-end process, from discovering iSCSI SAN storage through to logging into your new Solaris 11 zone.












Solaris 11 Networking with ipadm (Basic)

The following concise post is intended as a reference to the networking commands used to satisfy basic networking requirements on a Solaris 11 host.

Using dladm and ipadm commands modifies the live configuration and the config files in one go, which means that networking changes made this way are persistent, i.e. they survive a system reboot.

Show Network Links

dladm show-link

dladm show-phys

Show Network Addresses

ipadm show-addr

Create IP interface

ipadm create-ip net0 && dladm show-link && dladm show-phys net0

At this point although there is an IPv4 interface, there is no IP address bound to it (just the internal loopback address).

Configure IP interface to use DHCP

ipadm create-addr -T dhcp net0  && ipadm show-addr

Configure Static IP address on IP interface

ipadm create-addr -T static -a net0 && ipadm show-addr

Delete IP interface

In our case, we delete the address object that was configured via DHCP; the -r flag also releases the DHCP lease.

ipadm delete-addr -r net0/v4
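Pulling the commands above together, a hypothetical persistent static-address session might look like the sketch below (192.0.2.50/24 is a stand-in address for illustration, not a value from my lab):

```
# ipadm create-ip net0
# ipadm create-addr -T static -a 192.0.2.50/24 net0/v4
# ipadm show-addr
```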




Oracle Solaris 11 Networking and Virtualization with Zones

This concise post is intended to be used as reference rather than a detailed explanation, so please excuse any apparent brevity.  A more comprehensive explanation can be found here.

The basic steps of creating a zone, installing a zone, installing services in a zone, cloning a zone and monitoring resource use are all set out below in the sequential, logical order that they would be performed.

Create a ZFS Filesystem, VNIC and Configure a Zone

Note:  You first "configure" a zone, then "install" it.  zoneadm list -cv displays their statuses as "configured" and "installed" respectively.
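For illustration, zoneadm list -cv output resembles the sketch below (the zone name and path are hypothetical):

```
  ID NAME     STATUS      PATH           BRAND     IP
   0 global   running     /              solaris   shared
   - zone1    configured  /zones/zone1   solaris   excl
```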

zfs create -o mountpoint=/zones rpool/zones

zfs list rpool/zones

dladm create-vnic -l net0 vnic1

zonecfg -z zone1

zoneadm list -cv shows all zones on the system, namely the global zone and the zone1 zone created above.

Install the zone

Before installing the zone with its own instance of Solaris (that's essentially the definition of a zone: a cordoned-off install of Solaris running on the Solaris "global zone"), you should first create a System Profile.  A System Profile is an answer file in XML format, built by answering the same on-screen questions as when you installed the global zone originally: hostname, NIC, IP address, DNS addresses, timezone and so on.

sysconfig create-profile -o zone1-profile.xml

F2 your way through the screens, filling in the fields as required, before being dropped back at the command prompt.

Next, proceed with installing your zone…

zoneadm -z zone1 install -c /root/zone1-profile.xml

As you can see, it took about 10 minutes to install the first zone.  Subsequent zones install much quicker.  Although installed, the zone is not automatically booted.

zoneadm list -cv

Boot the Zone

zoneadm -z zone1 boot

zoneadm list -cv

Login to Zone

zlogin -C zone1

Note that you cannot log in as root.  Roles cannot log in to zones directly; this is part of the Secure-by-Default configuration's Role-Based Access Control (root-as-a-role) feature.

You must log in with the account created during the creation of the System Profile, prior to installing the zone.  Then you can su - to the root role once logged in.  This is much like Linux with its sudoers mechanism.

View Network Status



Install Apache Web Server in the Zone.

pkg install apache-22

svcadm enable apache22

svcs apache22

Connect to the IP address of your zone from your web browser to see the "It Works!" message from Apache.

Note that this file is located at /var/apache2/2.2/htdocs/index.html and can be modified to reflect the name of the zone you're logged into, as proof that it's the zone's web server responding and not the global zone's.

Create VNIC for second zone

Performed as root, logged on to the global zone.

dladm create-vnic -l net0 vnic2

zonecfg -z zone2

create

set zonepath=/zones/zone2

add net

set physical=vnic2

end

commit

exit


Clone a Zone

You can only clone a zone if it is not running, so halt the zone you want to clone.

zoneadm -z zone1 halt

First create a system profile for the new zone, running through the screens and completing the fields unique to the cloned zone, e.g. hostname, VNIC and IP address:

sysconfig create-profile -o zone2-profile.xml

Then clone zone1:

zoneadm -z zone2 clone -c /root/zone2-profile.xml zone1

Within seconds you’ll see the clone process has completed.

Boot cloned zone

zoneadm -z zone2 boot

zoneadm list -cv

You can see that zone1 is still down from when it was halted for cloning, but zone2 is now running.  Don't forget to boot zone1 again if it's intended to be online.

It takes a little while before the booted clone will have started all its network services.

Log in to Clone

Log into the cloned zone, and view the IP configuration.

zlogin zone2


Check apache is running…

svcs apache22

It's running!  No need to install Apache, as the zone was cloned from an existing zone with Apache already installed.

Monitoring zones

Start zone1 so that both zones are running

zoneadm -z zone1 boot

zoneadm list -cv

You can monitor zones using a single command, zonestat

zonestat 2   (where 2 is the number of seconds between each monitoring interval, i.e. each collection of resource-use data)

Zonestat can be used to summarise resource use over a long period of time.





Solaris 11 ZFS Administration

This concise post aims to cover the basics of ZFS administration on Solaris.  Excuse the brevity, it is for reference rather than a detailed explanation.

ZFS Pools

zpool list && zpool status rpool

In a lab environment, files can replace actual physical disks

cd /dev/dsk && mkfile 200m disk{0..9}
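Note that the brace expansion must be attached to the file name (disk{0..9}, no space); with a space, the shell creates a file named disk plus files named 0 through 9. A quick demonstration using touch, since mkfile is Solaris-only:

```shell
#!/bin/bash
# Brace expansion attaches to the preceding word: disk{0..9} expands to
# disk0 disk1 ... disk9, whereas "disk {0..9}" would create a file named
# "disk" plus files named 0..9.
cd "$(mktemp -d)"
touch disk{0..9}
ls disk* | wc -l   # 10
```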

Create a ZPool and Expand a ZPool

zpool create labpool raidz disk0 disk1 disk2 && zpool list && zpool list labpool

zpool add labpool raidz disk4 disk5 disk6 && zpool list && zfs list labpool

ZFS Compression

zfs create labpool/zman && zfs set compression=gzip labpool/zman

You can copy files to this zfs filesystem that has gzip compression enabled and save nearly half your disk space.

ZFS Deduplication

zfs create -o dedup=on -o compression=gzip labpool/archive

zfs create labpool/archive/a   (repeat for b, c and d)

By copying multiple instances of the same file into /labpool/archive/a, b, c and d (the /labpool/archive filesystem has deduplication turned on), you'll see that zpool list labpool increments the value in the DEDUP column to reflect the deduplication ratio as more identical files are added.
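A back-of-the-envelope illustration of where that DEDUP ratio comes from, using generic shell tools rather than ZFS: four directories holding identical copies of one file mean four references but only one unique copy to store, i.e. a 4.00x ratio.

```shell
#!/bin/bash
# Naive illustration of a dedup ratio: count total file copies versus
# unique file contents (by SHA-256 hash).
cd "$(mktemp -d)"
mkdir a b c d
head -c 65536 /dev/urandom > a/file.bin          # one 64 KiB file
cp a/file.bin b/ ; cp a/file.bin c/ ; cp a/file.bin d/
total=$(ls a/* b/* c/* d/* | wc -l)              # 4 copies referenced
unique=$(sha256sum a/* b/* c/* d/* | awk '{print $1}' | sort -u | wc -l)
echo "dedup ratio: ${total}.00x (${unique} unique copy stored)"
```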

Note also that although compression is enabled at the ZFS file system level, copying an already-gzipped file will not yield further gains; the value returned by zfs get compressratio labpool/archive stays at 1.00x.
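The compression side of this can be reproduced with plain gzip (a generic stand-in for ZFS's gzip compression): repetitive data shrinks dramatically, while high-entropy data, like an already-compressed file, does not shrink at all.

```shell
#!/bin/bash
# Why compressratio stays at 1.00x for already-compressed files:
# repetitive text compresses hugely; random bytes (a stand-in for
# already-compressed data) do not compress at all.
cd "$(mktemp -d)"
yes "solaris zfs compression demo" | head -c 1048576 > text.dat  # repetitive
head -c 262144 /dev/urandom > rand.dat                           # high entropy
gzip -c text.dat > text.gz
gzip -c rand.dat > rand.gz
echo "text:   $(wc -c < text.dat) -> $(wc -c < text.gz) bytes"
echo "random: $(wc -c < rand.dat) -> $(wc -c < rand.gz) bytes"
```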

ZFS Snapshots

zfs snapshot -r labpool/archive@snap1 && zfs list -r -t all labpool

zfs rollback labpool/archive/a@snap1

ZFS snapshots are copy-on-write: they record only the blocks that change after the snapshot is taken, so changes made can be rolled back.  As a result, snapshots don't take up much space, unless left in place on filesystems with high write IO, of course.

ZFS Clones

zfs clone labpool/archive/a@snap1 labpool/a_work

zfs list -r -t all labpool

zfs list -r -t all labpool will show all the zfs filesystems including snapshots and clones.  Changes can be made to the clone filesystem without affecting the original.


Creating bootable Solaris 11 USB drive

Download the requisite Solaris OS image from Oracle here.  You may need to create a free account first.

Note that if you are building a SPARC server e.g. T8-2, it comes with Solaris pre-installed.  You should start with this document here, connecting to the ILOM System Console via the SER MGT Port using the instructions here.

The instructions from Oracle are as follows, but I don't like the way they say to use dmesg | tail to identify the USB device, when lsusb (to identify the make and model) and df -h (to identify the device name) provide much clearer, human-readable output.


  • On Linux:
    1. Insert the flash drive and locate the appropriate device.
      # dmesg | tail
    2. Copy the image.
      # dd if=/path/image.usb of=/dev/diskN bs=16k

For other client operating systems such as Solaris itself or MacOSX, instructions from Oracle can be found here.

In my case, the USB stick was mounted at /dev/sdg1 automatically when plugged into my Linux desktop, so I unmounted /dev/sdg1, changed to the directory containing my Solaris 11 image, and used dd as shown in the screenshot below.

The commands are therefore,

df -h to Identify the USB device e.g. /dev/sdg

sudo umount /dev/sdg1 to unmount the filesystem on the USB device

cd ~/Downloads/Solaris11 to change to the location of your downloaded image file

sudo dd if=sol-11_3.usb of=/dev/sdg bs=16k to write it to the USB device

Since dd is a block-level copy, not a file-level copy, you don't need to make the USB device bootable or do anything like that; all of that is contained in the blocks copied to the device.
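The block-level nature of dd can be demonstrated safely with ordinary files standing in for the image and the USB device (file names here are invented for illustration):

```shell
#!/bin/bash
# dd copies raw blocks, so the destination ends up byte-for-byte identical
# to the source image, boot sectors included. Ordinary files stand in for
# the .usb image and the USB device here.
cd "$(mktemp -d)"
head -c 1048576 /dev/urandom > image.usb       # stand-in for sol-11_3.usb
dd if=image.usb of=usb-device bs=16k 2>/dev/null
cmp -s image.usb usb-device && echo "byte-for-byte identical"
```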



Console Access on HP/3COM OfficeConnect Managed Gigabit Switch

  1. Purchase USB console cable
  2. In Windows, plug in cable, search for Device Manager, then click on “Update Driver” on any Serial port items that show warnings.  The internet found and installed working drivers for me.
  3. Optionally download the manual for the switch.  OfficeConnect 3CDSG8 Manual
  4. Download and Install PuTTY
  5. Create a serial connection with the following settings: 38,400 baud, 8 bits, no parity, 1 stop bit, no hardware flow control
  6. Log on to the switch as admin and refer to the screenshot below to disable DHCP and configure a static IP address.

Next ping the new IP address, and attempt to connect using a web browser.

Log in using the same admin and password as with the console.


Oracle SPARC T8-2 Server


The Oracle SPARC T8-2 is a two-processor server built around the Oracle SPARC M8 processor (each with 32 dynamically threading 8-thread cores running at 5GHz) and Oracle's "Software-in-Silicon" technology.  This massively accelerates operations such as SQL primitives on OLTP Oracle databases, Java applications, queries of large compressed in-memory databases, operations involving floating-point data, virtualization using Solaris 11 and encryption, all with little to no additional processor overhead.

DAX Units (Data Analytics Accelerator)

DAX Units operate on data at full memory speeds, taking advantage of the very high memory bandwidth of the processor.  This results in extreme acceleration of in-memory queries and analytics operations (i.e. generating data about your database data) while the processor cores are freed up to do other useful work.

DAX Units can handle compressed data on the fly, so larger DBs can be held in memory, with less memory needed for a given database size.

The DAX Unit can also be exploited by Java applications, via the API made available to Java application developers.

Oracle Numbers Units

These software-in-silicon units greatly accelerate Oracle database operations involving floating point data.  This results in fast, in-memory analytics on your database without affecting your OLTP (Online Transaction Processing) operations.

Silicon Secured Memory

This is capable of detecting and preventing invalid operations on application data via hardware monitoring of software access to memory.  A hardware approach is much faster than a software-based detection tool, which places additional overhead on your processors.

Each core contains the fastest cryptographic acceleration in the industry with near zero overhead.

Dynamic Threading Technology

Each of the 2 processors has 32 cores, each capable of handling 8 threads, using dynamic threading technology that adapts on the fly, from extreme single-thread performance to massive 256-thread throughput.

Efficient design with Solaris Virtualization technology means that a much larger number of VMs can be supported compared with Intel Xeon based systems, lowering per-VM cost.


This breakthrough in SPARC is enabled by the Solaris 11 OS.

It is a secure, integrated, open platform engineered for large-scale enterprise cloud environments, with unique optimizations for Oracle databases, middleware and application deployments.  Security is easily set up and enabled by default, with single-step patching of the OS running on the logical domain, hosting immutable zones that allow compliance to be maintained easily.

You can create complete application software stacks, lock them securely, deploy them in a cloud and update them in a single step.

Oracle Solaris 11 combines unique management options with powerful application driven software-defined networking for agile deployment of cloud infrastructure.

More here, including full hardware specification, summarized below.



Thirty-two core, 5.0 GHz SPARC M8 processor

Up to 256 threads per processor (up to 8 threads per core)

Eight Data Analytics Accelerator units per processor, each supporting four concurrent in-memory analytics engines with decompression

Thirty-two on-chip encryption instruction accelerators (one per core) with direct non-privileged support for 16 industry-standard cryptographic algorithms: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-3, SHA-384, and SHA-512

Thirty-two floating-point units and thirty-two Oracle Numbers units per processor (one per core)

One random number generator (one per processor)


Level 1: 32 KB instruction and 16 KB data per core

Level 2: 256 KB L2 I$ per four cores, 128 KB L2 D$ per core

Level 3: 64 MB L3$ on chip

System Configuration

SPARC T8-2 servers are always configured with two SPARC M8 processors; not expandable


Sixteen dual inline memory module (DIMM) slots per processor supporting half and fully populated memory configurations using 16, 32, or 64 GB DDR4 DIMMs

2 TB maximum memory configuration with 64 GB DIMMs


Network: Four 10 GbE (100 Mb/sec, 1 Gb/sec, 10 Gb/sec) ports, full duplex only, auto-negotiating

Disks and internal storage: Two SAS-3 controllers providing hardware RAID 0, 1, and 1E/10 (ZFS file system provides higher levels of RAID)

Expansion bus: Eight low-profile PCIe 3.0 (four x8 and four x16) slots

Ports: Four external USB (two front USB 2.0 and two rear USB 3.0), one RJ45 serial management port, console 100Mb/1Gb network port, and two VGA ports (one front, one rear)


Internal storage: Up to six 600 GB or 1,200 GB 2.5-inch SAS-3 drives

Optional internal storage may be installed within the standard drive bays

800 GB solid-state drives (SSDs), maximum of six; 6.4 TB NVMe drives, maximum of four