Solaris P2V Process

This post is a work-in-progress.  The initial content is taken directly from Oracle’s own knowledge base here.  In time, it is my intention to augment the fundamental steps laid out below with real-world observations, screenshots, modifications and additional steps.

The P2V (Physical-to-Virtual) migration process is effectively the technical process of decommissioning a physical SPARC server by relocating its workload to an isolated instance of Solaris, running in software (known as a non-global zone), on a more powerful server, such as a T8 or M8 running Solaris 11 and Oracle VM Server for SPARC (the hosting OS instance being known as the global zone).


Obtain the hostname:
# hostname

Obtain the hostid:
# hostid

Also see Host ID Emulation.

Obtain the root password.

View the software being run on the system:
# ps -eaf

Check the networking configuration on the system:
# ifconfig -a

View the storage utilized, for example, by viewing the contents of /etc/vfstab.

View the amount of local disk storage in use, which determines the size of the archive:
# df -k

Determine the packages and patches that are on the system. See pkginfo(1) for more information.
Examine the contents of /etc/system.
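The information-gathering steps above can be captured in one pass with a short script run on the source system — a sketch only, with an arbitrary report file path:

```shell
#!/bin/sh
# Gather source-system details ahead of a P2V migration.
# The report path below is an arbitrary choice.
REPORT=/var/tmp/p2v-report.txt

{
  echo "=== hostname ===";    hostname
  echo "=== hostid ===";      hostid
  echo "=== processes ===";   ps -eaf
  echo "=== network ===";     ifconfig -a
  echo "=== vfstab ===";      cat /etc/vfstab
  echo "=== disk usage ==="_; df -k
  echo "=== packages ===";    pkginfo
  echo "=== /etc/system ==="; cat /etc/system
} > "$REPORT" 2>&1

echo "Report written to $REPORT"
```

Keep the report with your migration notes; the df -k figures in particular drive the sizing of the flash archive and the target zonepath.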


This example procedure uses NFS to place the flash archive on the target Solaris system, but you could use any method to move the file.

You must be the global administrator in the global zone to perform this procedure.

Become superuser, or assume the Primary Administrator role.

Log in to the source system to be archived.

Change directories to the root directory.
# cd /

Use flarcreate to create a flash archive image file named s10-system on the source system, and place the archive onto the target system:
# flarcreate -S -n s10-system -L cpio /net/target/export/s10-system.flar
Determining which filesystems will be included in the archive…
Creating the archive…
cpio: File size of “etc/mnttab” has
increased by 4352068650 blocks
1 error(s)
Archive creation complete.

The target machine will require root write access to the /export file system. Depending on the size of the file system on the host system, the archive might be several gigabytes in size, so enough space should be available in the target filesystem.

Tip –
In some cases, flarcreate can display errors from the cpio command. Most commonly, these are messages such as File size of etc/mnttab has increased by 435. When these messages pertain to log files or files that reflect system state, they can be ignored. Be sure to review all error messages thoroughly.


Note that the only required elements to create a native non-global zone are the zonename and zonepath properties.  Other resources and properties are optional.  Some optional resources also require choices between alternatives, such as the decision to use either the dedicated-cpu resource or the capped-cpu resource.  See Zone Configuration Data for information on available zonecfg properties and resources.

You must be the global administrator in the global zone to perform this procedure.

Become superuser, or assume the Primary Administrator role.

To create the role and assign the role to a user, see Using the Solaris Management Tools With RBAC (Task Map) in System Administration Guide: Basic Administration.

Set up a zone configuration with the zone name you have chosen.

The name my-zone is used in this example procedure.
global# zonecfg -z my-zone

If this is the first time you have configured this zone, you will see the following system message:
my-zone: No such zone configured

Use 'create' to begin configuring a new zone.
Create the new zone configuration.

This procedure uses the default settings.
zonecfg:my-zone> create

Set the zone path, /export/home/my-zone in this procedure.
zonecfg:my-zone> set zonepath=/export/home/my-zone

Do not place the zonepath on ZFS for releases prior to the Solaris 10 10/08 release.
Set the autoboot value.

If set to true, the zone is automatically booted when the global zone is booted. Note that for the zones to autoboot, the zones service svc:/system/zones:default must also be enabled. The default value is false.
zonecfg:my-zone> set autoboot=true

Set persistent boot arguments for a zone.
zonecfg:my-zone> set bootargs="-m verbose"

Dedicate one CPU to this zone.
zonecfg:my-zone> add dedicated-cpu

Set the number of CPUs.
zonecfg:my-zone:dedicated-cpu> set ncpus=1-2
(Optional) Set the importance.
zonecfg:my-zone:dedicated-cpu> set importance=10
The default is 1.

End the specification.
zonecfg:my-zone:dedicated-cpu> end

Revise the default set of privileges.
zonecfg:my-zone> set limitpriv="default,sys_time"

This line adds the ability to set the system clock to the default set of privileges.
Set the scheduling class to FSS.
zonecfg:my-zone> set scheduling-class=FSS

Add a memory cap.
zonecfg:my-zone> add capped-memory

Set the memory cap.
zonecfg:my-zone:capped-memory> set physical=50m

Set the swap memory cap.
zonecfg:my-zone:capped-memory> set swap=100m

Set the locked memory cap.
zonecfg:my-zone:capped-memory> set locked=30m

End the memory cap specification.
zonecfg:my-zone:capped-memory> end

Add a file system.
zonecfg:my-zone> add fs

Set the mount point for the file system, /usr/local in this procedure.
zonecfg:my-zone:fs> set dir=/usr/local

Specify that /opt/zones/my-zone/local in the global zone is to be mounted as /usr/local in the zone being configured.
zonecfg:my-zone:fs> set special=/opt/zones/my-zone/local

In the non-global zone, the /usr/local file system will be readable and writable.
Specify the file system type, lofs in this procedure.
zonecfg:my-zone:fs> set type=lofs

The type indicates how the kernel interacts with the file system.

End the file system specification.
zonecfg:my-zone:fs> end
This step can be performed more than once to add more than one file system.
(Optional) Set the hostid.
zonecfg:my-zone> set hostid=80f0c086

Add a ZFS dataset named sales in the storage pool tank.
zonecfg:my-zone> add dataset

Specify the path to the ZFS dataset sales.
zonecfg:my-zone:dataset> set name=tank/sales

End the dataset specification.
zonecfg:my-zone:dataset> end

(Sparse Root Zone Only) Add a shared file system that is loopback-mounted from the global zone.

Do not perform this step to create a whole root zone, which does not have any shared file systems. See the discussion for whole root zones in Disk Space Requirements.
zonecfg:my-zone> add inherit-pkg-dir

Specify that /opt/sfw in the global zone is to be mounted in read-only mode in the zone being configured.
zonecfg:my-zone:inherit-pkg-dir> set dir=/opt/sfw
Note – The zone’s packaging database is updated to reflect the packages. These resources cannot be modified or removed after the zone has been installed using zoneadm.

End the inherit-pkg-dir specification.
zonecfg:my-zone:inherit-pkg-dir> end

This step can be performed more than once to add more than one shared file system.

Note –If you want to create a whole root zone but default shared file systems resources have been added by using inherit-pkg-dir, you must remove these default inherit-pkg-dir resources using zonecfg before you install the zone:
zonecfg:my-zone> remove inherit-pkg-dir dir=/lib
zonecfg:my-zone> remove inherit-pkg-dir dir=/platform
zonecfg:my-zone> remove inherit-pkg-dir dir=/sbin
zonecfg:my-zone> remove inherit-pkg-dir dir=/usr
(Optional) If you are creating an exclusive-IP zone, set the ip-type.
zonecfg:my-zone> set ip-type=exclusive

Note –Only the physical device type will be specified in the add net step.

Add a network interface.
zonecfg:my-zone> add net

(shared-IP only) Set the IP address for the network interface.
zonecfg:my-zone:net> set address=

Set the physical device type for the network interface, the hme device in this procedure.
zonecfg:my-zone:net> set physical=hme0

Solaris 10 10/08: (Optional, shared-IP only) Set the default router for the network interface.
zonecfg:my-zone:net> set defrouter=

End the specification.
zonecfg:my-zone:net> end

This step can be performed more than once to add more than one network interface.

Add a device.
zonecfg:my-zone> add device
Set the device match, /dev/sound/* in this procedure.
zonecfg:my-zone:device> set match=/dev/sound/*
End the device specification.
zonecfg:my-zone:device> end
This step can be performed more than once to add more than one device.

Add a zone-wide resource control by using the property name.
zonecfg:my-zone> set max-sem-ids=10485200
This step can be performed more than once to add more than one resource control.

Add a comment by using the attr resource type.
zonecfg:my-zone> add attr
Set the name to comment.
zonecfg:my-zone:attr> set name=comment
Set the type to string.
zonecfg:my-zone:attr> set type=string
Set the value to a comment that describes the zone.
zonecfg:my-zone:attr> set value="This is my work zone."
End the attr resource type specification.
zonecfg:my-zone:attr> end

Verify the zone configuration for the zone.
zonecfg:my-zone> verify

Commit the zone configuration for the zone.
zonecfg:my-zone> commit
Exit the zonecfg command.
zonecfg:my-zone> exit
Note that even if you did not explicitly type commit at the prompt, a commit is automatically attempted when you type exit or an EOF occurs.

Using Multiple Subcommands From the Command Line
Tip –The zonecfg command also supports multiple subcommands, quoted and separated by semicolons, from the same shell invocation.
global# zonecfg -z my-zone "create ; set zonepath=/export/home/my-zone"
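For longer configurations, zonecfg can also read subcommands from a file with its -f option — a sketch, with an assumed file path and a deliberately minimal set of properties:

```shell
# Build a zonecfg command file (the path is an arbitrary choice),
# then apply it non-interactively.
cat > /var/tmp/my-zone.cfg <<'EOF'
create
set zonepath=/export/home/my-zone
set autoboot=true
commit
EOF

zonecfg -z my-zone -f /var/tmp/my-zone.cfg
zonecfg -z my-zone info   # review the resulting configuration
```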


The zoneadm command described in Part II, Zones and in the zoneadm(1M) man page is the primary tool used to install and administer non-global zones. Operations using the zoneadm command must be run from the global zone on the target system.

In addition to unpacking files from the archive, the install process performs checks, required postprocessing, and other functions to ensure that the zone is optimized to run on the host.

You can use an image of a Solaris system that has been fully configured with all of the software that will be run in the zone. See Creating the Image Used to Directly Migrate A Solaris System Into a Zone.

If you created a Solaris system archive from an existing system and use the -p (preserve sysidcfg) option when you install the zone, the zone will have the same identity as the system used to create the image.

If you use the -u (sys-unconfig) option when you install the zone on the target, the zone produced will not have a hostname or name service configured.

Caution –
You must specify either the -p option or the -u option, or an error results.

Installer options and description

-a archive

Location of the archive from which to copy the system image. Full flash archive, cpio, gzip-compressed cpio, bzip2-compressed cpio, and level 0 ufsdump are supported. Refer to the gzip man page available in the SUNWsfman package.

-d path

Location of directory from which to copy system image.

-d -

Use the -d option with the - parameter to direct that the existing directory layout be used in the zonepath. Thus, if the administrator manually sets up the zonepath directory before the installation, the -d - option can be used to indicate that the directory already exists.

-p Preserve system identity.

-s Install silently.

-u sys-unconfig the zone.

-v Verbose output.

-b patchid

One or more -b options can be used to specify a patch ID for a patch installed in the system image. These patches will be backed out during the installation process.

The -a and -d options are mutually exclusive. The -p, -s, -u and -v options are only allowed when either -a or -d is provided.

Become superuser, or assume the Primary Administrator role.

Install the configured zone s-zone by using the zoneadm command with the install subcommand, the -u option, and the -a option with the path to the archive.
global# zoneadm -z s-zone install -u -a /net/machine_name/s-system.flar

You will see various messages as the installation completes. This can take some time.

When the installation completes, use the list subcommand with the -i and -v options to list the installed zones and verify the status.

If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone.

If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F to reset the zone to the configured state.
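Putting the verification and recovery steps together — a sketch, assuming the zone name s-zone used in this example:

```shell
# Confirm the new zone appears with a STATUS of "installed".
zoneadm list -iv

# If the install was interrupted and the zone is stuck in the
# "incomplete" state, reset it to "configured" and retry.
zoneadm -z s-zone uninstall -F
```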

You must be the global administrator in the global zone to perform this procedure.

If the -u option was used, you must also zlogin to the zone console and perform system configuration as described in Performing the Initial Internal Zone Configuration.

Become superuser, or assume the Primary Administrator role.

Use the zoneadm command with the -z option, the name of the zone, which is s-zone, and the boot subcommand to boot the zone.
global# zoneadm -z s-zone boot

When the boot completes, use the list subcommand with the -v option to verify the status.
global# zoneadm list -v
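A typical boot-and-verify sequence, again assuming the zone name s-zone; zlogin -C attaches to the zone console for the initial internal configuration mentioned above:

```shell
zoneadm -z s-zone boot    # boot the zone
zoneadm list -v           # STATUS should now read "running"
zlogin -C s-zone          # attach to the zone console (disconnect with ~.)
```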


Connecting to SPARC Server ILOM console

This post covers connecting to the server console via the SER MGT port and using a terminal emulator to assign a static IP address to the NET MGT port on an Oracle Solaris SPARC Server.

It also covers the process of temporarily preventing automatic boot of the preinstalled OS so that you can boot from alternative installation media of your own choosing, to perform a fresh install for example.

Verify that Device Manager on Windows has the correct drivers loaded for your USB console cable if applicable, making a note of the COM port assigned.

Connect the console cable to the SER MGT port

Configure a terminal emulator such as PuTTY with these settings:

9600 baud
8 bits
No parity
1 Stop bit
No handshake

Oracle Solaris 11 installation options are covered here.

Default username and password


To set the password on the ILOM console for the first time, use the following command… 

set /SP/users/root password

Assign a Static IP Address to the NET MGT Port

If you plan to connect to the SP through its NET MGT port, the SP must have a valid IP address.

By default, the server is configured to obtain an IP address from DHCP services in your network.  If the network your server is connected to does not support DHCP for IP addressing, perform this procedure.

Set the SP to accept a static IP address.
-> set /SP/network pendingipdiscovery=static

Set the IP address for the SP gateway.
-> set /SP/network pendingipgateway=gateway-IPaddr

Set the netmask for the SP.
-> set /SP/network pendingipnetmask=

Your network environment's subnet might require a different netmask; use the netmask most appropriate to your environment.

Verify that the parameters were set correctly.
-> show /SP/network -display properties

Set the changes to the SP network parameters.
-> set /SP/network commitpending=true

Note – You can type the show /SP/network command again to verify that the parameters have been updated.
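The whole static-IP assignment, seen end-to-end, might look like the sketch below. The 192.0.2.x values and /24 netmask are placeholders, not values from this post, and note that the SP's own address (pendingipaddress) also needs setting, although it is not called out in the numbered steps above:

```
-> set /SP/network pendingipdiscovery=static
-> set /SP/network pendingipaddress=192.0.2.10
-> set /SP/network pendingipgateway=192.0.2.1
-> set /SP/network pendingipnetmask=255.255.255.0
-> show /SP/network -display properties
-> set /SP/network commitpending=true
```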

The Oracle Solaris SPARC server comes with a preinstalled OS.  You will be prompted to boot it, or it will boot automatically.  If you wish to install a fresh OS from your own boot media, then the following section, distilled from the official instructions here, may be useful.

Reach a State to Install a Fresh OS (Oracle ILOM CLI)

-> set /HOST/bootmode script="setenv auto-boot? false"
This setting prevents the server from booting from the preinstalled OS. When you use bootmode, the change applies only to a single boot and expires in 10 minutes if the power on the host is not reset.

When you are ready to initiate the OS installation, reset the host.
-> reset /System

Switch communication to the server host.
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started. To stop, type #.

The server might take several minutes to complete POST, and then the OpenBoot prompt (ok) is displayed.

Boot from the appropriate boot media for your installation method.

For a list of valid boot commands that you can enter at the OpenBoot prompt, type:

{0} ok help boot

Useful ILOM Console Commands

A link to the ILOM Console commands available here with some examples used regularly listed below.

You can also show all the CLI commands with

show /SP/cli/commands

cd     e.g. cd /HOST/console

create     creates a target property and sets a value

help     shows help, e.g. help, help targets, help editing or help legal

ls     lists (sub)targets in a target, e.g. ls /HOST/console/bootlog or ls /HOST/console/history

reset     e.g. reset /System

set     e.g. set /HOST/bootmode script="setenv auto-boot? false"

show     e.g. show /SP/network or show /HOST status

start     e.g. start /HOST/console or start /SP/console

version     shows the firmware versions of the SP

ILOM Targets

All HOST, SYSTEM and SP Targets can be listed using the following ILOM console command.

-> help targets

Target Meaning

/ Hierarchy Root
/HOST Manage the Host
/HOST/bootmode Manage the Host Boot Method
/HOST/console Redirect Host Serial Console to SP
/HOST/console/bootlog View Host Console Output From Last Power On
/HOST/console/history View Host Console Output
/HOST/diag Manage Host Power On Self Test Diagnostics
/HOST/domain Manage Logical Domains
/HOST/domain/control Manage Host Control and Guest Boot Methods
/HOST/tpm Manage the Trusted Platform Module Device
/HOST/verified_boot Manage Verified Boot configuration
/HOST/verified_boot/system_certs Verified Boot system certificates
/HOST/verified_boot/system_certs/1 Verified Boot Certificate
/HOST/verified_boot/system_certs/2 Verified Boot Certificate
/HOST/verified_boot/user_certs Verified Boot user certificates
/HOST/verified_boot/user_certs/1 Verified Boot Certificate
/HOST/verified_boot/user_certs/2 Verified Boot Certificate
/HOST/verified_boot/user_certs/3 Verified Boot Certificate
/HOST/verified_boot/user_certs/4 Verified Boot Certificate
/HOST/verified_boot/user_certs/5 Verified Boot Certificate
/System View System Summary
/System/Open_Problems View Open Problems
/System/Processors View Processors Summary
/System/Processors/CPUs View List of CPUs
/System/Processors/CPUs/CPU_0 CPU Details
/System/Processors/CPUs/CPU_1 CPU Details
/System/Memory View Memory Summary
/System/Memory/DIMMs View List of DIMMs
/System/Memory/DIMMs/DIMM_0 DIMM Details
/System/Memory/DIMMs/DIMM_2 DIMM Details
/System/Memory/DIMMs/DIMM_4 DIMM Details
/System/Memory/DIMMs/DIMM_6 DIMM Details
/System/Memory/DIMMs/DIMM_8 DIMM Details
/System/Memory/DIMMs/DIMM_10 DIMM Details
/System/Memory/DIMMs/DIMM_12 DIMM Details
/System/Memory/DIMMs/DIMM_14 DIMM Details
/System/Memory/DIMMs/DIMM_16 DIMM Details
/System/Memory/DIMMs/DIMM_18 DIMM Details
/System/Memory/DIMMs/DIMM_20 DIMM Details
/System/Memory/DIMMs/DIMM_22 DIMM Details
/System/Memory/DIMMs/DIMM_24 DIMM Details
/System/Memory/DIMMs/DIMM_26 DIMM Details
/System/Memory/DIMMs/DIMM_28 DIMM Details
/System/Memory/DIMMs/DIMM_30 DIMM Details
/System/Power View Power Summary
/System/Power/Power_Supplies View List of Power Supplies
/System/Power/Power_Supplies/Power_Supply_0 Power Supply Details
/System/Power/Power_Supplies/Power_Supply_1 Power Supply Details
/System/Cooling View Cooling Summary
/System/Cooling/Fans View List of Fans
/System/Cooling/Fans/Fan_0 Fan Module Details
/System/Cooling/Fans/Fan_1 Fan Module Details
/System/Cooling/Fans/Fan_2 Fan Module Details
/System/Cooling/Fans/Fan_3 Fan Module Details
/System/Cooling/Fans/Fan_4 Fan Module Details
/System/Cooling/Fans/Fan_5 Fan Module Details
/System/Cooling/Fans/Fan_6 Fan Module Details
/System/Cooling/Fans/Fan_7 Fan Module Details
/System/Storage View Storage Summary
/System/Storage/Disks View List of Storage Disks
/System/Storage/Controllers View List of Storage Controllers
/System/Storage/Volumes View List of Storage Volumes
/System/Storage/Expanders View List of Storage Expanders
/System/Networking View Network Summary
/System/Networking/Ethernet_NICs View List of Ethernet NICs
/System/Networking/Ethernet_NICs/Ethernet_NIC_0 Ethernet NIC Details
/System/Networking/Ethernet_NICs/Ethernet_NIC_1 Ethernet NIC Details
/System/Networking/Ethernet_NICs/Ethernet_NIC_2 Ethernet NIC Details
/System/Networking/Ethernet_NICs/Ethernet_NIC_3 Ethernet NIC Details
/System/Networking/Infiniband_HCAs View List of Infiniband HCAs
/System/PCI_Devices View Devices Summary
/System/PCI_Devices/On-board View List of On-board Devices
/System/PCI_Devices/On-board/Device_0 On-board device details
/System/PCI_Devices/On-board/Device_1 On-board device details
/System/PCI_Devices/On-board/Device_2 On-board device details
/System/PCI_Devices/On-board/Device_3 On-board device details
/System/PCI_Devices/On-board/Device_4 On-board device details
/System/PCI_Devices/On-board/Device_5 On-board device details
/System/PCI_Devices/Add-on View List of Add-on Devices
/System/PCI_Devices/Add-on/Device_1 Add-on device details
/System/PCI_Devices/Add-on/Device_2 Add-on device details
/System/PCI_Devices/Add-on/Device_3 Add-on device details
/System/PCI_Devices/Add-on/Device_6 Add-on device details
/System/PCI_Devices/Add-on/Device_7 Add-on device details
/System/PCI_Devices/Add-on/Device_8 Add-on device details
/System/Firmware View Firmware Summary
/System/Firmware/Other_Firmware View List of Other Firmware
/System/Log Manage the System Log
/System/Log/list View System Log Entries
/SP Manage the Service Processor
/SP/alertmgmt Manage Alerts
/SP/alertmgmt/rules Manage Alert Rules (IPMI, SNMP, Email)
/SP/cli Manage Command Line Interface Sessions
/SP/clients Manage Client External Services
/SP/clients/activedirectory Manage Active Directory Authentication
/SP/clients/activedirectory/admingroups Manage Administrator Groups
/SP/clients/activedirectory/alternateservers Manage Alternate Servers
/SP/clients/activedirectory/cert Manage Certificates
/SP/clients/activedirectory/customgroups Manage Custom Groups
/SP/clients/activedirectory/dnslocatorqueries Manage DNS Locator Queries
/SP/clients/activedirectory/opergroups Manage Operator Groups
/SP/clients/activedirectory/userdomains Manage User Domains
/SP/clients/asr Manage Automatic Service Request.
/SP/clients/asr/cert Manage Certificate
/SP/clients/dns Manage Domain Name Service Resolution
/SP/clients/ldap Manage LDAP Authentication
/SP/clients/ldapssl Manage LDAP/SSL Authentication
/SP/clients/ldapssl/admingroups Manage Administrator Groups
/SP/clients/ldapssl/alternateservers Manage Alternate Servers
/SP/clients/ldapssl/cert Manage Certificates
/SP/clients/ldapssl/customgroups Manage Custom Groups
/SP/clients/ldapssl/opergroups Manage Operator Groups
/SP/clients/ldapssl/optionalUserMapping Manage Alternate User Mapping
/SP/clients/ldapssl/userdomains Manage User Domains
/SP/clients/ntp Manage the Network Time Protocol Service
/SP/clients/ntp/server Manage the NTP Servers
/SP/clients/oeshm OESHM status and state info
/SP/clients/oeshm/ssl HMN SSL configuration and status
/SP/clients/oeshm/ssl/agent_cert OESHM SSL Client certificate
/SP/clients/oeshm/ssl/agent_key OESHM SSL Client key
/SP/clients/oeshm/ssl/server_cert OESHM SSL Server certificate
/SP/clients/radius Manage RADIUS Authentication
/SP/clients/radius/alternateservers Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/1 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/2 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/3 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/4 Alternate RADIUS servers configuration
/SP/clients/radius/alternateservers/5 Alternate RADIUS servers configuration
/SP/clients/smtp Manage the SMTP Server Service
/SP/clients/syslog Manage the Syslogd Remote Logging Server
/SP/clock Manage the SP Clock
/SP/config Manage SP Configuration (Backup/Restore)
/SP/diag Manage SP Diagnostics
/SP/diag/snapshot Save SP Snapshot for Diagnostic Purposes
/SP/faultmgmt Manage System FRU Faults
/SP/faultmgmt/shell Fault Management Shell
/SP/firmware Manage the SP Firmware
/SP/firmware/backupimage Manage Firmware Backup Image Information
/SP/firmware/host Manage Host-accessible SP firmware
/SP/firmware/host/miniroot Miniroot information
/SP/firmware/keys Image Signing Public Keys
/SP/firmware/keys/sun View Sun Keys
/SP/logs Manage Logs
/SP/logs/audit Manage the Audit Log
/SP/logs/audit/list View Audit Log Entries
/SP/logs/event Manage the Event Log
/SP/logs/event/list View Event Log Entries
/SP/network Manage Network Port Configuration
/SP/network/interconnect Manage Internal USB Ethernet Port Configuration
/SP/network/ipv6 Manage IPv6 Network Configuration
/SP/policy Manage System Policies
/SP/preferences Manage SP Preferences
/SP/preferences/banner Manage SP Login Messages
/SP/preferences/banner/connect Manage SP Connect Message
/SP/preferences/banner/login Manage SP Login Message
/SP/preferences/password_policy Manage SP Password Policy
/SP/serial Manage Serial Interfaces
/SP/serial/external Manage the External Serial Port
/SP/services Manage SP Access Services
/SP/services/fips Manage the FIPS mode of ILOM (Federal Information Processing Standards, publication 140-2, Security Requirements for Cryptographic Modules)
/SP/services/http Manage the HTTP Service
/SP/services/https Manage the HTTPS Service
/SP/services/https/ssl Manage the HTTPS SSL Certificate
/SP/services/https/ssl/custom_cert Manage the Custom SSL Certificate
/SP/services/https/ssl/custom_key Manage the Custom SSL Private Key
/SP/services/https/ssl/default_cert View the Default SSL Certificate
/SP/services/ipmi Manage the IPMI Service
/SP/services/kvms Manage the Remote KVMS Service
/SP/services/kvms/host_storage_device Manage the KVMS Host Storage
/SP/services/kvms/host_storage_device/remote Manage the KVMS Remote Virtual Device
/SP/services/servicetag Manage Service Tags
/SP/services/snmp Manage the SNMP Agent Service
/SP/services/snmp/communities Manage SNMP Communities (v2)
/SP/services/snmp/users Manage SNMP Users (v3)
/SP/services/ssh Manage the Secure Shell Service
/SP/services/ssh/keys Manage Secure Shell Authentication
/SP/services/ssh/keys/dsa Manage the SSH DSA Key
/SP/services/ssh/keys/rsa Manage the SSH RSA key
/SP/services/sso Manage the Single Sign-on Service
/SP/sessions View User Sessions
/SP/users Manage Local SP User Accounts

Configure Solaris 11 iSCSI Initiator

With my iSCSI target configured on FreeNAS and my Solaris 11 global zone installed, it's time to configure the iSCSI initiator to discover the iSCSI target using the second NIC in my Solaris 11 host (or "global zone").

In my lab environment, I have created one big volume called “ONEBIGVOLUME” on my FreeNAS, consisting of 4 x 7500 RPM SATA Disks.  Within this single volume, I have created 5 x 250GB ZVols from which I’ve then created 5 x iSCSI device extents for my Solaris 11 host to discover.  I’ll then create a single ZPool on my Solaris host, using these 5 iSCSI extents on FreeNAS as if they were local disks.

First I need to configure the 2nd NIC that I intend to use for iSCSI traffic on my network.  I’ll refer to my own post here to assist me in configuring that 2nd NIC.

The screen shot below shows the process end-to-end.

The oracle document here describes the process of enabling iSCSI.

I noticed that the subnet mask was incorrect on my 2nd NIC.  My fault for not specifying it; the OS assumed an 8-bit instead of a 24-bit mask for my network.  I've included the steps taken to fix that below.

Note the commands highlighted below that were not accepted by the OS, and how I ultimately fixed the problem.

Enable iSCSI Initiator

svcadm enable network/iscsi/initiator
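A quick check that the service actually came online doesn't hurt:

```shell
svcadm enable network/iscsi/initiator
svcs network/iscsi/initiator   # the STATE column should read "online"
```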

From my FreeNAS, Services, iSCSI section, I can see that my base name is…

…and my target is called…

Dynamic Discovery

Here, I use dynamic discovery to find all disks on the FreeNAS iSCSI target, using just the IP Address.

This is probably the simplest way of discovering the disks, but it is also dangerous, as there may be another disk amongst the list that is being used by another system (in my case, I have a VMware DataStore too).

iscsiadm add discovery-address

iscsiadm modify discovery --sendtargets enable

devfsadm -i iscsi


It is far from easy to correlate which of these "Solaris disks" pertain to which "iSCSI extents" on FreeNAS.  The only giveaway as to which one is my VMware DataStore is the size, shown below…

So, I definitely do not want to use this disk on the Solaris system as it’s already in use elsewhere by VMWare here.  This is why it’s a good idea to use static discovery and/or authentication!

On my Solaris host, I can go back, remove the FreeNAS discovery address and start over using static discovery instead.

Static Discovery

I know the IP Address, port, base name and target name of my FreeNAS where my iSCSI extents are waiting to be discovered so I may as well use static discovery.

As I've already used dynamic discovery, I first need to list the discovery methods, disable Send Targets (dynamic discovery) and enable Static (static discovery).

It’s a bad idea to use both static discovery and dynamic discovery simultaneously.

iscsiadm remove discovery-address

iscsiadm modify discovery -t disable   (disables Send Targets)

iscsiadm modify discovery -s enable   (enables Static)

iscsiadm list discovery   (lists the discovery methods)

With static discovery set, I can now re-add the discovery address, not forgetting the port (like I just did, above).

iscsiadm add discovery-address

You can see now that, by using static discovery to discover only the extents available at the specified target on port 3260, my Solaris 11 host has discovered just the 5 devices (extents) I have in mind for my ZPool, and the VMware DataStore has not been discovered.

The format command is a convenient way to list the device names for your "disks", but you don't need format to do anything else with them, so press CTRL-C to exit format.
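As a sketch, the static-discovery route end-to-end looks something like the block below. The IP address and IQN are placeholders rather than the values from this lab, and note that with static discovery enabled, Solaris registers individual targets via iscsiadm add static-config:

```shell
# Switch from dynamic (SendTargets) discovery to static discovery.
iscsiadm modify discovery -t disable
iscsiadm modify discovery -s enable
iscsiadm list discovery

# Register the static target: add static-config <IQN>,<IP>[:port]
# (the IQN and address below are placeholders).
iscsiadm add static-config iqn.2005-10.org.example:target0,192.0.2.20:3260

# Create device nodes for the newly discovered LUNs.
devfsadm -i iscsi
```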

Create ZPool

I can use my notes here to help with configuring ZPools and ZFS.

Since my FreeNAS uses ZFS itself to turn 4 x physical 2TB SATA disks into its 7TB "ONEBIGVOLUME", which is subsequently carved up into a 1TB VMware DataStore and my 5 x 250GB Solaris 11 ZPOOL1 volumes, the RAIDZ resilience to physical drive failure is set at the NAS level and need not be repeated when configuring the ZPool from the 5 iSCSI extents.  I could have created a single 1TB iSCSI extent and built my ZPool on the Solaris host from just the one "disk"; by creating 5, at least I keep the option of creating the ZPool with RAIDZ on the Solaris host in my lab as well.

zpool create ZPOOL1 <device1> <device2> <device3> <device4> <device5>

Here you can see the system warning about the lack of RAIDZ redundancy in my new pool.  If the disks were physical, it’d be a risk but in my lab environment, it’s not a problem.

Although FreeNAS defaults to compression being turned on when you create a new volume in a pool, I created each of my 5 volumes used as iscsi extents here with compression disabled.  This is because I intend to use the compression and deduplication options when creating the ZFS file systems that will be hosting my Solaris Zones on my Solaris 11 host instead.
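Enabling those options when creating the zone file systems might look like the sketch below; the dataset name and mountpoint are illustrative, and ZFS deduplication is memory-hungry, so it's worth weighing up before enabling it:

```shell
# Create a file system for zones with compression and deduplication
# enabled (dataset name and mountpoint are illustrative).
zfs create -o mountpoint=/zones -o compression=on -o dedup=on ZPOOL1/zones

# Confirm the properties took effect.
zfs get compression,dedup ZPOOL1/zones
```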

I have a separate post here on Administering Solaris 11 Zones with the requisite commands but will post screenshots here from my own lab.

This is really where the post ends within the context of connecting Solaris 11 to iSCSI storage.

Create ZFS mount point for Zones

Create/Configure Zone1

Create system configuration for Zone1

Install the Zone1

Boot Zone1

Ping Zone1

Log into Zone1

SSH From Linux Workstation

ZLOGIN from Solaris Global Zone

So that's the process, end-to-end, of discovering iSCSI SAN storage through to logging into your new Solaris 11 Zone.












Solaris 11 Networking with ipadm (Basic)

The following concise post is intended as a reference to the networking commands used to satisfy basic networking requirements on a Solaris 11 host.

The dladm and ipadm commands modify both the live configuration and the config files in one go, which means that networking changes made this way are persistent, i.e. they survive a system reboot.

Show Network Links

dladm show-link

dladm show-phys

Show Network Addresses

ipadm show-addr

Create IP interface

ipadm create-ip net0 && dladm show-link && dladm show-phys net0

At this point, although there is an IPv4 interface, there is no IP address bound to it (the only address on the system is the internal loopback address).

Configure IP interface to use DHCP

ipadm create-addr -T dhcp net0  && ipadm show-addr

Configure Static IP address on IP interface

ipadm create-addr -T static -a <ip-address>/<prefix-length> net0 && ipadm show-addr
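Pulling the static-address steps together, a minimal sketch; the address, gateway and the net0/v4static address-object name are placeholder values for illustration:

```shell
# Hypothetical values -- substitute your own.
ADDR="192.168.1.50/24"
GW="192.168.1.1"

# The Solaris 11 sequence, run as root (shown here as comments):
#   ipadm create-ip net0
#   ipadm create-addr -T static -a $ADDR net0/v4static
#   route -p add default $GW          # -p makes the route persistent
#   ipadm show-addr net0/v4static

# create-addr -T static expects CIDR notation, so check the prefix is there.
case $ADDR in
  */*) echo "address ok: $ADDR" ;;
  *)   echo "missing prefix length in $ADDR" >&2; exit 1 ;;
esac
```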

Delete IP interface

In our case, first delete the address object that was configured via DHCP (the -r flag also releases the DHCP lease).

ipadm delete-addr -r net0/v4

The IP interface itself can then be removed with ipadm delete-ip net0.




Oracle Solaris 11 Networking and Virtualization with Zones

This concise post is intended to be used as reference rather than a detailed explanation, so please excuse any apparent brevity.  A more comprehensive explanation can be found here.

The basic steps of creating a zone, installing a zone, installing services in a zone, cloning a zone and monitoring resource use are all set out below in the sequential, logical order that they would be performed.

Create a ZFS Filesystem, VNIC and Configure a Zone

Note:  You first “configure” a zone, then “install” it.  zoneadm list -cv displays their statuses as “configured” and “installed” respectively; once booted, a zone shows as “running”.

zfs create -o mountpoint=/zones rpool/zones

zfs list rpool/zones

dladm create-vnic -l net0 vnic1

zonecfg -z zone1

zoneadm list -cv shows all zones on the system, namely the global zone and the zone1 zone created above.
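The subcommands entered inside the zonecfg -z zone1 session aren’t shown above; here is a minimal sketch, mirroring the zone2 fragment later in this post (the vnic1 and /zones names come from the commands above):

```
create
set zonepath=/zones/zone1
add net
set physical=vnic1
end
verify
commit
exit
```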

Install the zone

Before installing the zone with its own instance of Solaris (that’s basically the definition of a zone, i.e. a cordoned-off install of Solaris, running within the Solaris “global zone”), you should first create a System Profile.  A System Profile is an answer file in .xml format, built by answering the same on-screen questions as when you installed the Global Zone originally, i.e. hostname, NIC, IP address, DNS addresses, timezone and so on.

sysconfig create-profile -o zone1-profile.xml

F2 your way through the screens, filling in the fields as required, before being dropped back at the command prompt.

Next, proceed with installing your zone…

zoneadm -z zone1 install -c /root/zone1-profile.xml

As you can see, it took about 10 minutes to install the first zone.  Subsequent zones install much more quickly.  Although installed, the zone is not automatically booted.

zoneadm list -cv

Boot the Zone

zoneadm -z zone1 boot

zoneadm list -cv

Login to Zone

zlogin -C zone1

Note that you cannot log in as root.  This is because roles cannot log in to zones directly: under the Secure-by-Default configuration, the Role-Based Access Control feature treats root as a role rather than a user (Root-as-a-Role).

You must log in with the account created during the creation of the System Profile, prior to installing the zone.  You can then su – to the root role once logged in.  This is much like Linux with its sudoers mechanism.

View Network Status



Install Apache Web Server in the Zone.

pkg install apache-22

svcadm enable apache22

svcs apache22

Connect to the ip address of your zone from your web browser to see the “It Works!” message from Apache.

Note that this file is located at /var/apache2/2.2/htdocs/index.html and can be modified to reflect the name of the zone you’re logged into, as proof that it’s the zone’s web server responding, not the global zone’s.

Create VNIC for second zone

Performed as root, logged on to the global zone.

dladm create-vnic -l net0 vnic2

zonecfg -z zone2

create

set zonepath=/zones/zone2

add net

set physical=vnic2

end

verify

commit

exit



Clone a Zone

You can only clone a zone if it’s not running, so first halt the zone you want to clone.

zoneadm -z zone1 halt

sysconfig create-profile -o /root/zone2-profile.xml

Run through the system configuration screens, completing the fields unique to the cloned zone, e.g. hostname, VNIC and IP address.

zoneadm -z zone2 clone -c /root/zone2-profile.xml zone1

Within seconds you’ll see the clone process has completed.

Boot cloned zone

zoneadm -z zone2 boot

zoneadm list -cv

You can see that zone1 is still down from when it was halted for cloning, but zone2 is now running.  Don’t forget to boot zone1 again if it’s intended to be online.

It takes a little while before the booted clone will have started all its network services.

Log in to Clone

Log into the cloned zone, and view the IP configuration.

zlogin zone2


Check apache is running…

svcs apache22

It’s running!  No need to install apache as the zone was cloned from an existing zone with apache already installed.

Monitoring zones

Start zone1 so that both zones are running

zoneadm -z zone1 boot

zoneadm list -cv

You can monitor zones using a single command, zonestat

zonestat 2 (where 2 is the interval, in seconds, between each collection of resource-use data)

Zonestat can be used to summarise resource use over a long period of time.





Solaris 11 ZFS Administration

This concise post aims to cover the basics of ZFS administration on Solaris.  Excuse the brevity, it is for reference rather than a detailed explanation.

ZFS Pools

zpool list && zpool status rpool

In a lab environment, files can replace actual physical disks

cd /dev/dsk && mkfile 200m disk{0..9}
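mkfile is Solaris-specific.  If you want to rehearse the same pool commands on a Linux box (against OpenZFS, say), sparse files created with truncate serve the same purpose; the /tmp/labdisks path is my own choice, not from the original lab:

```shell
# Create ten sparse 200MB backing files as stand-in "disks"
# (a Linux equivalent of Solaris's `mkfile 200m disk{0..9}`).
mkdir -p /tmp/labdisks
cd /tmp/labdisks
for i in 0 1 2 3 4 5 6 7 8 9; do
  truncate -s 200M "disk$i"
done
ls disk*
```

Note that file-backed vdevs have to be given to zpool by absolute path, e.g. zpool create labpool raidz /tmp/labdisks/disk0 /tmp/labdisks/disk1 /tmp/labdisks/disk2.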

Create a ZPool and Expand a ZPool

zpool create labpool raidz disk0 disk1 disk2 && zpool list && zpool list labpool

zpool add labpool raidz disk4 disk5 disk6 && zpool list && zfs list labpool

ZFS Compression

zfs create labpool/zman && zfs set compression=gzip labpool/zman

Copying files to this ZFS filesystem, which has gzip compression enabled, can save nearly half your disk space.

ZFS Deduplication

zfs create -o dedup=on -o compression=gzip labpool/archive

zfs create labpool/archive/a  (then repeat for b, c and d)

By copying multiple instances of the same file into /labpool/archive/a, b, c and d, where the /labpool/archive filesystem has deduplication turned on, you’ll see that zpool list labpool increments the value in the DEDUP column to reflect the deduplication ratio as more and more identical files are added.

Note also that although compression is enabled at the ZFS filesystem level, copying an already-gzipped file will not yield further gains: the value returned by zfs get compressratio labpool/archive stays at 1.00x.

ZFS Snapshots

zfs snapshot -r labpool/archive@snap1 && zfs list -r -t all labpool

zfs rollback labpool/archive/a@snap1

ZFS snapshots are copy-on-write: a snapshot initially consumes almost no space, and only grows as blocks in the live filesystem are changed or deleted, so that those changes can be rolled back.  As a result, snapshots are cheap to keep, unless left in place on filesystems with high write IO, of course.

ZFS Clones

zfs clone labpool/archive/a@snap1 labpool/a_work

zfs list -r -t all labpool

zfs list -r -t all labpool will show all the zfs filesystems including snapshots and clones.  Changes can be made to the clone filesystem without affecting the original.


Creating bootable Solaris 11 USB drive

Download the requisite Solaris OS image from Oracle here.  You may need to create a free account first.

Note that if you are building a SPARC server e.g. T8-2, it comes with Solaris pre-installed.  You should start with this document here, connecting to the ILOM System Console via the SER MGT Port using the instructions here.

The instructions from Oracle are as follows, but I don’t like the way they say to use dmesg | tail to identify the USB device, when lsusb (to identify the make and model) and df -h (to identify the device name) provide much clearer, human-readable output.


  • On Linux:
    1. Insert the flash drive and locate the appropriate device.
      # dmesg | tail
    2. Copy the image.
      # dd if=/path/image.usb of=/dev/diskN bs=16k

For other client operating systems such as Solaris itself or MacOSX, instructions from Oracle can be found here.

In my case, the USB stick was mounted at /dev/sdg1 automatically when plugged into my Linux desktop, so I unmounted /dev/sdg1, changed to the directory containing my Solaris 11 image, then used dd as shown in the screenshot below.

The commands are therefore,

df -h to Identify the USB device e.g. /dev/sdg

sudo umount /dev/sdg1 to unmount the filesystem on the USB device

cd ~/Downloads/Solaris11 to change to the location of your downloaded image file

sudo dd if=sol-11_3.usb of=/dev/sdg bs=16k to write it to the USB device

Since dd performs a block-level copy, not a file-level copy, you don’t need to make the USB device bootable or anything like that.  The boot record is contained in the blocks copied to the device.
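Since nothing above confirms the copy actually succeeded, I like to add a verification step (my addition, not in Oracle’s instructions): compare the first N bytes of the device against the image, where N is the image’s size.  A sketch using GNU stat and cmp:

```shell
# Compare an image file against the start of the device it was written to.
# Succeeds silently if they match; reports the first differing byte if not.
verify_image() {
  img=$1
  dev=$2
  cmp -n "$(stat -c %s "$img")" "$img" "$dev"
}

# Usage after dd completes (paths from this walkthrough, run as root):
#   verify_image sol-11_3.usb /dev/sdg && echo "verified"
```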



Oracle SPARC T8-2 Server


The Oracle SPARC T8-2 is a two-processor server built around the SPARC M8 processor (each with 32 dynamically threading cores of 8 threads, running at 5GHz) and Oracle’s “Software-in-Silicon” technology.  This massively accelerates operations such as SQL primitives on OLTP Oracle databases, Java applications, queries of large compressed in-memory databases and operations involving floating-point data, as well as virtualization using Solaris 11 and encryption, all with little to no additional processor overhead.

DAX Units (Data Analytics Accelerator)

DAX Units operate on data at full memory speeds, taking advantage of the very high memory bandwidth of the processor.  This results in extreme acceleration of in-memory queries and analytics operations (i.e. generating data about your database data) while the processor cores are freed up to do other useful work.

DAX Units can handle compressed data on the fly, so larger databases can be held in memory, with less memory needing to be configured for a given database size.

The DAX Units can also be exploited by Java applications whose developers make use of the available API.

Oracle Numbers Units

These software-in-silicon units greatly accelerate Oracle database operations involving floating point data.  This results in fast, in-memory analytics on your database without affecting your OLTP (Online Transaction Processing) operations.

Silicon Secured Memory

This is capable of detecting and preventing invalid operations on application data via hardware monitoring of software access to memory.  A hardware approach is much faster than a software-based detection tool, which places additional overhead on your processors.

Each core contains the fastest cryptographic acceleration in the industry with near zero overhead.

Dynamic Threading Technology

Each of the 2 processors has 32 cores, each capable of handling 8 threads using dynamic threading technology that adapts to extreme single-thread performance or massive throughput 256 thread performance on the fly.

Efficient design with Solaris Virtualization technology means that a much larger number of VMs can be supported compared with Intel Xeon based systems, lowering per-VM cost.


This breakthrough in SPARC is enabled by the Solaris 11 OS.

A secure, integrated, open platform engineered for large-scale enterprise cloud environments, with unique optimization for Oracle databases, middleware and application deployments.  Security is easily set up and enabled by default, with single-step patching of the OS running on the logical domain, hosting immutable zones that allow compliance to be maintained easily.

You can create complete application software stacks, lock them securely, deploy them in a cloud and update them in a single step.

Oracle Solaris 11 combines unique management options with powerful application driven software-defined networking for agile deployment of cloud infrastructure.

More here, including full hardware specification, summarized below.



Thirty-two core, 5.0 GHz SPARC M8 processor

Up to 256 threads per processor (up to 8 threads per core)

Eight Data Analytics Accelerator units per processor, each supporting four concurrent in-memory analytics engines with decompression

Thirty-two on-chip encryption instruction accelerators (one per core) with direct non-privileged support for 16 industry-standard cryptographic algorithms: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-3, SHA-384, and SHA-512

Thirty-two floating point units and thirty-two Oracle Numbers units per processor (one of each per core)

One random number generator per processor


Level 1: 32 KB instruction and 16 KB data per core

Level 2: 256 KB L2 I$ per four cores, 128 KB L2 D$ per core

Level 3: 64 MB L3$ on chip

System Configuration

SPARC T8-2 servers are always configured with two SPARC M8 processors; not expandable


Sixteen dual inline memory module (DIMM) slots per processor supporting half and fully populated memory configurations using 16, 32, or 64 GB DDR4 DIMMs

2 TB maximum memory configuration with 64 GB DIMMs


Network: Four 10 GbE (100 Mb/sec, 1 Gb/sec, 10 Gb/sec) ports, full duplex only, auto-negotiating

Disks and internal storage: Two SAS-3 controllers providing hardware RAID 0, 1, and 1E/10 (ZFS file system provides higher levels of RAID)

Expansion bus: Eight low-profile PCIe 3.0 (four x8 and four x16) slots

Ports: Four external USB (two front USB 2.0 and two rear USB 3.0), one RJ45 serial management port, console 100Mb/1Gb network port, and two VGA ports (one front, one rear)


Internal storage: Up to six 600 GB or 1,200 GB 2.5-inch SAS-3 drives

Optional internal storage may be installed within the standard drive bays

800 GB solid-state drives (SSDs), maximum of six; 6.4 TB NVMe drives, maximum of four



Oracle/Solaris 11 Virtualization on M5-32 and M6-32 Servers

This concise post is intended to provide a terminology and concepts reference for the Oracle M5-32 and M6-32 Servers.  The Domain Configurable Units (DCUs) of these servers are divided into between one and four Physical Domains, each either “Bounded” (a single, isolated DCU) or “Non-bounded” (2-4 DCUs combined).  Combined, non-bounded DCUs are connected via the Scalability Switch Boards in order to pool their resources into a single Physical Domain.  Each Physical Domain can be further divided into up to 192 Logical Domains on M5-32 Servers, or 384 on M6-32 Servers, using “Oracle VM Server for SPARC” software.  Each Logical Domain runs its own instance of the Oracle Solaris 11 operating system, which can in turn run thousands of Zones.  Each zone is a means of isolating applications running on the same Solaris 11 operating system instance, and provides a controlled environment through which you can allocate the exact resources an application requires.  More on Zones in a separate, complementary post.  This post covers the server hardware layer through to the zone layer in the technology stack (illustrated below).

Oracle M5-32 and M6-32 Servers

DCUs provide the building blocks of Physical Domains.

A Physical Domain operates as a server with full hardware isolation from the other physical domains.

DCUs can be combined or divided into 1 – 4 Physical Domains to suit business application requirements.

Each Physical Domain can be restarted without affecting other Physical Domains in the M5-32 / M6-32 Server.

An initial hardware purchase of a minimum of 8 processors can be configured into 1 or 2 Physical Domains, with the remainder purchased later for expansion.

A maximum of 32 processors and 32TB memory per M5/M6 Server is possible.

Scalability Switchboards

The physically separate DCUs can be joined together to make a single Physical Domain that spans multiple Domain Configurable Units.  The communications are serviced by the Scalability Switch Boards.

A “Bounded” Physical Domain is one whereby a single DCU is allocated to a single Physical Domain and is therefore not connected to the Scalability Switch Boards, isolating it from the other DCUs.

A Bounded Physical Domain can operate on 2 processors, whereas non-bounded require a minimum of 4.

A single M5/M6 server can contain a mix of Bounded and Non-bounded (combined) Physical Domains.

Supported Virtualization Software (LDOMs and Solaris Zones)


Oracle VM Server for SPARC is installed and supports the creation of 192/384 Logical Domains on M5-32/M6-32 Servers respectively.

Each LDOM can be configured and rebooted independently of the others.

Each LDOM runs its own instance of the Oracle Solaris 11 operating system.

Solaris Zones

Each Logical Domain running under Oracle VM Server comes with its own pre-installed instance of the Solaris 11 Operating System, and each instance supports Solaris Zones.

Each Zone contains a controlled environment through which you can allocate the exact operating system resources that an application needs.

Zones are ultimately used to isolate applications running on the same instance of Solaris 11 in the same Logical Domain so that they don’t interfere with each other, in terms of both pre-allocated resource maximums and files written to the underlying OS file system.

Solaris 11 supports thousands of zones on any given LDOM.

Links/Further Reading

M5-32 and M6-32 Server Documentation

Best Practices Whitepapers

M5 Documentation

M6 Documentation

Oracle Virtualization Products and Solutions