vCenter Server Appliance installer fails on Linux

If you’ve downloaded the vCenter Server Appliance .iso file, unpacked it to a folder on your Linux workstation and then run the installer, you may hit a problem where the installer fails reading the .ovf file during deployment to your VMware ESXi hypervisor.

./vcsa-ui-installer/lin64/installer

The end of the installation log will read something like this

There were a couple of additional steps I had to do in order to get it to run from my filesystem, rather than from a mounted .iso.

Firstly, chmod -R 777 the whole lot, e.g. if you’ve unpacked the .iso into a folder called /vCentre-deployment then chmod -R 777 /vCentre-deployment

You will likely have to chmod +x the  ./vcsa-ui-installer/lin64/installer file too.  I didn’t need to run it using sudo since the installation is to a remote ESXi host on the network, not the local machine.
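Putting those steps together (assuming the .iso was unpacked to /vCentre-deployment as above), the sequence looks something like this:

cd /vCentre-deployment
chmod -R 777 .                                   # make the extracted files readable/writable
chmod +x ./vcsa-ui-installer/lin64/installer     # make sure the installer itself is executable
./vcsa-ui-installer/lin64/installer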

Upon re-running the installer, you should progress past the point where the installer throws the error shown above and see the following screen.

Note that even for a “tiny” deployment, 10GB of RAM is required on the ESXi host.  A frankly obscene minimum requirement and hence where this blog post subsequently ends.

ESXi UI timeout

You may find that your ESXi web-based UI frustratingly times out after a few minutes during attempted uploads of large files, such as .iso images, to your Datastore.

You can disable the time out altogether.

Click on the little drop down over on the far right-hand side…

Settings, Application timeout, Off.

Configure Solaris 11 iSCSI Initiator

With my iSCSI target configured on FreeNAS and my Solaris 11 Global Zone installed, it’s time to configure the iSCSI initiator to discover the iSCSI target using the second NIC in my Solaris 11 host (or “Global Zone”).

In my lab environment, I have created one big volume called “ONEBIGVOLUME” on my FreeNAS, consisting of 4 x 7500 RPM SATA Disks.  Within this single volume, I have created 5 x 250GB ZVols from which I’ve then created 5 x iSCSI device extents for my Solaris 11 host to discover.  I’ll then create a single ZPool on my Solaris host, using these 5 iSCSI extents on FreeNAS as if they were local disks.

First I need to configure the 2nd NIC that I intend to use for iSCSI traffic on my network.  I’ll refer to my own post here to assist me in configuring that 2nd NIC.

The screen shot below shows the process end-to-end.

The Oracle document here describes the process of enabling iSCSI.

I noticed that the subnet mask was incorrect on my 2nd NIC.  My fault for not specifying it; the OS assumed an 8-bit instead of a 24-bit mask for my 10.0.0.0 network.  I’ve included the steps taken to fix that below.

Note the commands highlighted below that were not accepted by the OS, and how I ultimately fixed it.
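The screenshot is the record of what I actually ran, but for reference, a wrong netmask on a Solaris 11 interface can typically be corrected with ipadm along these lines (net1 and 10.0.0.60 are just example names for the 2nd NIC and its address):

ipadm show-addr                                        # confirm the /8 mask on the 2nd NIC
ipadm delete-addr net1/v4                              # remove the incorrectly-masked address object
ipadm create-addr -T static -a 10.0.0.60/24 net1/v4    # recreate it with the 24-bit mask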

Enable iSCSI Initiator

svcadm enable network/iscsi/initiator

From my FreeNAS, Services, iSCSI section, I can see that my base name is…

…and my target is called…

Dynamic Discovery

Here, I use dynamic discovery to find all disks on the FreeNAS iSCSI target, using just the IP Address.

This is probably the simplest way of discovering the disks, but also dangerous as there may be another disk amongst the list that is being used by another system (in my case, I have a VMWare DataStore too).

iscsiadm add discovery-address 10.0.0.50

iscsiadm modify discovery --sendtargets enable

devfsadm -i iscsi

format

It is far from easy to correlate which of these “solaris disks” pertain to which “iscsi extents” on FreeNAS.  The only give away as to which one is my VMWare DataStore is the size, shown below…

So, I definitely do not want to use this disk on the Solaris system as it’s already in use elsewhere by VMware.  This is why it’s a good idea to use static discovery and/or authentication!

On my Solaris host, I can go back and remove the FreeNAS discovery address and start over using Static Discovery instead.

Static Discovery

I know the IP Address, port, base name and target name of my FreeNAS where my iSCSI extents are waiting to be discovered so I may as well use static discovery.

As I’ve already used dynamic discovery, I first need to list the discovery methods, disable Send Targets (dynamic discovery) and enable Static (static discovery)

It’s a bad idea to use both static discovery and dynamic discovery simultaneously.

iscsiadm remove discovery-address 10.0.0.50

iscsiadm modify discovery -t disable   (Disables Send Targets)

iscsiadm modify discovery -s enable   (Enables Static)

iscsiadm list discovery                                    (Lists discovery methods)

With static discovery set, I can now re-add the discovery address, not forgetting the port (like I just did, above).

iscsiadm add discovery-address 10.0.0.50:3260

You can see now, that by using Static discovery to only discover extents available at the “iqn.2005-10.org.freenas.ctl:solariszp1” target at 10.0.0.50 on port 3260, my Solaris 11 host has only discovered the 5 devices (extents) I have in mind for my ZPool, and the VMWare DataStore has not been discovered.
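For completeness (the screenshots show the actual session), the static entry itself is normally added with iscsiadm’s static-config subcommand, pairing the target IQN with the portal address – something along these lines, using the base/target name and address quoted above:

iscsiadm add static-config iqn.2005-10.org.freenas.ctl:solariszp1,10.0.0.50:3260
iscsiadm list static-config      # confirm the static entry
devfsadm -i iscsi                # rebuild the iSCSI device nodes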

The format command is a convenient way to list the device names for your “disks” but you don’t need to use format to do anything else to them.  So CTRL-C to exit format.

Create ZPool

I can use my notes here to help with configuring ZPools and ZFS.

Since my FreeNAS uses ZFS itself to turn 4 x physical 2TB SATA disks into its 7TB “ONEBIGVOLUME” that is subsequently carved up into a 1TB VMware DataStore and my 5 x 250GB Solaris 11 ZPool1 volumes, the RAIDZ resilience to physical drive failure is set at the NAS level, and need not be used when configuring the ZPool from the 5 iSCSI extents.

I could have created a single 1TB iSCSI extent and created my ZPool on the Solaris host from just the one “disk”, since the RAIDZ resilience to physical disk failure already exists on the FreeNAS.  By creating 5, at least I have the option of creating my ZPool with RAIDZ on the Solaris host in my lab also.

zpool create ZPOOL1 <device1> <device2> <device3> <device4> <device5>

Here you can see the system warning about the lack of RAIDZ redundancy in my new pool.  If the disks were physical, it’d be a risk but in my lab environment, it’s not a problem.
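If I’d wanted RAIDZ at the Solaris level too (the option mentioned above), the same command with the raidz keyword would do it – the device names are placeholders:

zpool create ZPOOL1 raidz <device1> <device2> <device3> <device4> <device5>   # single-parity RAIDZ across the 5 iSCSI "disks"
zpool status ZPOOL1                                                           # verify the pool layout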

Although FreeNAS defaults to compression being turned on when you create a new volume in a pool, I created each of my 5 volumes used as iSCSI extents here with compression disabled.  This is because I intend to use the compression and deduplication options when creating the ZFS file systems that will be hosting my Solaris Zones on my Solaris 11 host instead.
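As a sketch of what that looks like on the Solaris side (the ZPOOL1/zones dataset name is just an example), compression and deduplication are set as properties when the ZFS file system is created:

zfs create -o mountpoint=/zones -o compression=on -o dedup=on ZPOOL1/zones
zfs get compression,dedup ZPOOL1/zones     # confirm the properties took effect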

I have a separate post here on Administering Solaris 11 Zones with the requisite commands but will post screenshots here from my own lab.

This is really where the post ends within the context of connecting Solaris 11 to iSCSI storage.

Create ZFS mount point for Zones

Create/Configure Zone1

Create system configuration for Zone1

Install the Zone1

Boot Zone1

Ping Zone1

Log into Zone1

SSH From Linux Workstation

ZLOGIN from Solaris Global Zone

So that’s the process end-to-end, from discovering iSCSI SAN storage through to logging into your new Solaris 11 Zone.

Oracle Solaris 11 Networking and Virtualization with Zones

This concise post is intended to be used as reference rather than a detailed explanation, so please excuse any apparent brevity.  A more comprehensive explanation can be found here.

The basic steps of creating a zone, installing a zone, installing services in a zone, cloning a zone and monitoring resource use are all set out below in the sequential, logical order that they would be performed.

Create a ZFS Filesystem, VNIC and Configure a Zone

Note:  You first “configure” a zone, then “install” the zone.  zoneadm list -cv displays their statuses as “configured” and “installed” respectively.

zfs create -o mountpoint=/zones rpool/zones

zfs list rpool/zones

dladm create-vnic -l net0 vnic1

zonecfg -z zone1
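The sub-commands entered at the zonecfg prompt aren’t listed here, but a typical session for zone1 (mirroring the zone2 example further down this post, and assuming the vnic1 created above) would be:

create
set zonepath=/zones/zone1
add net
set physical=vnic1
end
exit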

zoneadm list -cv shows all zones on the system, namely the global zone and the zone1 zone created above.

Install the zone

Before installing the zone with its own instance of Solaris (that’s basically the definition of a zone, i.e. a cordoned off install of Solaris, running on the Solaris “global zone”), you should create a System Profile first.  A System Profile is an answer file in .xml format, built by answering the same on-screen questions as when you installed the Global Zone originally, i.e. hostname, NIC, IP Address, DNS addresses, Timezone and so on.

sysconfig create-profile -o zone1-profile.xml

F2 your way through the screens, filling in the fields as required before being dropped back to the command prompt.

Next, proceed with installing your zone…

zoneadm -z zone1 install -c /root/zone1-profile.xml

As you can see, it took about 10 minutes to install the first zone.  Subsequent zones install much quicker.  Although installed, the zone is not automatically booted.

zoneadm list -cv

Boot the Zone

zoneadm -z zone1 boot

zoneadm list -cv

Login to Zone

zlogin -C zone1

Note that you cannot log in as root.  This is because roles cannot log in to zones directly; it’s part of the Secure-by-Default configuration, where Role Based Access Control treats root as a role rather than a login account.

You must log in with the account created during the creation of the System Profile, prior to installing the zone.  Then you can su - to the root user once logged in.  This is much like Linux with its sudoers mechanism.

View Network Status

ipadm

Install Apache Web Server in the Zone.

pkg install apache-22

svcadm enable apache22

svcs apache22

Connect to the ip address of your zone from your web browser to see the “It Works!” message from Apache.

Note that this file is contained in /var/apache2/2.2/htdocs/index.html and can be modified to reflect the name of the zone you’re logged into, as proof it’s the zone’s web server responding, not the global zone’s.
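For example, one quick way to stamp the zone’s hostname into that page from inside the zone (using the path above) would be:

echo "It works from $(hostname)!" > /var/apache2/2.2/htdocs/index.html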

Create VNIC for second zone

Performed as root, logged on to the global zone.

dladm create-vnic -l net0 vnic2

zonecfg -z zone2

create

set zonepath=/zones/zone2

add net

set physical=vnic2

end

exit

Clone a Zone

You can only clone a zone if it’s not online.  Halt the zone you want to clone.

zoneadm -z zone1 halt

sysconfig create-profile -o /root/zone2-profile.xml

Run through the system profile screens, completing the fields unique to the cloned zone, e.g. hostname, VNIC and IP address, then run the clone.

zoneadm -z zone2 clone -c /root/zone2-profile.xml zone1

Within seconds you’ll see the clone process has completed.

Boot cloned zone

zoneadm -z zone2 boot

zoneadm list -cv

You can see that zone1 is still down from when it was cloned, but zone2 is now running.  Don’t forget to boot zone1 again if it’s intended to be online.

It takes a little while before the booted clone will have started all its network services.

Log in to Clone

Log into the cloned zone, and view the IP configuration.

zlogin zone2

ipadm

Check apache is running…

svcs apache22

It’s running!  No need to install apache as the zone was cloned from an existing zone with apache already installed.

Monitoring zones

Start zone1 so that both zones are running

zoneadm -z zone1 boot

zoneadm list -cv

You can monitor zones using a single command, zonestat

zonestat 2 (where 2 is the number of seconds between each monitoring interval/collection of resource use data)

Zonestat can be used to summarise resource use over a long period of time.
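As a sketch, based on zonestat’s interval/duration/report arguments (the figures here are just illustrative), a quiet 24-hour run that prints a single summary at the end looks something like:

zonestat -q -R high 60 24h 24h     # sample every 60s for 24 hours, report the high-utilisation summary once at the end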

Console Access on HP/3COM OfficeConnect Managed Gigabit Switch

  1. Purchase USB console cable
  2. In Windows, plug in cable, search for Device Manager, then click on “Update Driver” on any Serial port items that show warnings.  The internet found and installed working drivers for me.
  3. Optionally download the manual for the switch.  OfficeConnect 3CDSG8 Manual
  4. Download and Install PuTTY
  5. Create a serial connection with the following settings, BAUD 38,400/8 bit/no parity/1 stop bit/no hardware flow control
  6. Log on to the switch as admin and refer to the screenshot below to disable DHCP and configure a static IP address.

Next ping the new IP address, and attempt to connect using a web browser.

Log in using the same admin and password as with the console.

Oracle SPARC T8-2 Server

Overview

The Oracle SPARC T8-2 is a two-processor server with Oracle SPARC M8 processors (each with 32 dynamically threading cores running 8 threads each at 5GHz) and Oracle’s “Software in Silicon” technology.  This massively accelerates operations such as SQL primitives on OLTP Oracle databases, Java applications, queries of large compressed in-memory databases and operations involving floating-point data, as well as virtualization using Solaris 11 and encryption, all with little to no additional processor overhead.

DAX Units (Data Analytics Accelerator)

DAX Units operate on data at full memory speeds, taking advantage of the very high memory bandwidth of the processor.  This results in extreme acceleration of in-memory queries and analytics operations (i.e. generating data about your database data) while the processor cores are freed up to do other useful work.

DAX Units can handle compressed data on the fly, so larger databases can be held in memory, with less memory needing to be configured for a given database size.

The DAX Units can also be exploited by Java applications, via an API available to Java application developers.

Oracle Numbers Units

These software-in-silicon units greatly accelerate Oracle database operations involving floating point data.  This results in fast, in-memory analytics on your database without affecting your OLTP (Online Transaction Processing) operations.

Silicon Secured Memory

This is capable of detecting and preventing invalid operations on application data via hardware monitoring of software access to memory.  A hardware approach is much faster than a software-based detection tool that places additional overhead on your processors.

Each core contains the fastest cryptographic acceleration in the industry with near zero overhead.

Dynamic Threading Technology

Each of the 2 processors has 32 cores, each capable of handling 8 threads using dynamic threading technology that adapts on the fly between extreme single-thread performance and massive 256-thread throughput.

Efficient design with Solaris Virtualization technology means that a much larger number of VMs can be supported compared with Intel Xeon based systems, lowering per-VM cost.

Summary

This breakthrough in SPARC is enabled by the Solaris 11 OS.

It is a secure, integrated, open platform engineered for large-scale enterprise cloud environments with unique optimization for Oracle databases, middleware and application deployments.  Security is easily set up and enabled by default, with single-step patching of the OS running on the logical domain, hosting immutable zones that allow compliance to be maintained easily.

You can create complete application software stacks, lock them securely, deploy them in a cloud and update them in a single step.

Oracle Solaris 11 combines unique management options with powerful application driven software-defined networking for agile deployment of cloud infrastructure.

More here, including full hardware specification, summarized below.

Specifications

PROCESSOR

Thirty-two core, 5.0 GHz SPARC M8 processor

Up to 256 threads per processor (up to 8 threads per core)

Eight Data Analytics Accelerator units per processor, each supporting four concurrent in-memory analytics engines with decompression

Thirty-two on-chip encryption instruction accelerators (one per core) with direct non-privileged support for 16 industry-standard cryptographic algorithms: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-3, SHA-384, and SHA-512

Thirty-two floating-point units and thirty-two Oracle Numbers units per processor (one per core)

One random number generator (one per processor)

CACHE PER PROCESSOR

Level 1: 32 KB instruction and 16 KB data per core

Level 2: 256 KB L2 I$ per four cores, 128 KB L2 D$ per core

Level 3: 64 MB L3$ on chip

SYSTEM CONFIGURATION

SPARC T8-2 servers are always configured with two SPARC M8 processors; not expandable

MEMORY

Sixteen dual inline memory module (DIMM) slots per processor supporting half and fully populated memory configurations using 16, 32, or 64 GB DDR4 DIMMs

2 TB maximum memory configuration with 64 GB DIMMs

INTERFACES

Network: Four 10 GbE (100 Mb/sec, 1 Gb/sec, 10 Gb/sec) ports, full duplex only, auto-negotiating

Disks and internal storage: Two SAS-3 controllers providing hardware RAID 0, 1, and 1E/10 (ZFS file system provides higher levels of RAID)

Expansion bus: Eight low-profile PCIe 3.0 (four x8 and four x16) slots

Ports: Four external USB (two front USB 2.0 and two rear USB 3.0), one RJ45 serial management port, console 100Mb/1Gb network port, and two VGA ports (one front, one rear)

MASS STORAGE AND MEDIA

Internal storage: Up to six 600 GB or 1,200 GB 2.5-inch SAS-3 drives

Optional internal storage may be installed within the standard drive bays

800 GB solid-state drives (SSDs), maximum of six; 6.4 TB NVMe drives, maximum of four

Oracle/Solaris 11 Virtualization on M5-32 and M6-32 Servers

This concise post is intended to provide a terminology and concepts reference for the Oracle M5-32 and M6-32 Servers, whose Domain Configurable Units (DCUs) are divided into one to four Physical Domains: a single isolated DCU forms a “Bounded” Physical Domain, while two to four combined DCUs form a “Non-bounded” Physical Domain.  The combined or “non-bounded” DCUs are connected via the Scalability Switch Boards in order to combine their resources into a single Physical Domain.  Each Physical Domain can be further divided into 192/384 Logical Domains on M5-32 or M6-32 Servers by using “Oracle VM Server for SPARC” software.  Each Logical Domain runs its own instance of the Oracle Solaris 11 operating system, which can run thousands of Zones.  Each zone is a means of isolating applications running on the same Solaris 11 operating system instance, providing a controlled environment through which you can allocate the exact resources an application requires.  More on Zones in a separate, complementary post.  This post covers the server hardware layer through to the zone layer in the technology stack (illustrated below).

Oracle M5-32 and M6-32 Servers

DCUs provide the building blocks of Physical Domains.

A Physical Domain operates as a server with full hardware isolation from the other physical domains.

DCUs can be combined or divided into 1-4 physical domains to suit business application requirements.

Each Physical Domain can be restarted without affecting other Physical Domains in the M5-32 / M6-32 Server.

An initial hardware purchase of a minimum of 8 processors can be configured into 1 or 2 Physical Domains and the remainder purchased later for expansion.

A maximum of 32 processors and 32TB memory per M5/M6 Server is possible.

Scalability Switchboards

The physically separate DCUs can be joined together to make a single Physical Domain that spans multiple Domain Configurable Units.  The communications are serviced by the Scalability Switch Boards.

A “Bounded” Physical Domain is one whereby a single DCU is allocated to a single Physical Domain and is therefore not connected to the Scalability Switch Boards, isolating it from the other DCUs.

A Bounded Physical Domain can operate on 2 processors, whereas non-bounded require a minimum of 4.

A single M5/M6 server can be a mix of Bounded and Non-bounded (combined) Physical Domains.

Supported Virtualization Software (LDOMs and Solaris Zones)

LDOMs

Oracle VM Server for SPARC is installed and supports the creation of 192/384 Logical Domains on M5-32/M6-32 Servers respectively.

Each LDOM can be configured and rebooted independently of the others.

Each LDOM runs its own instance of the Oracle Solaris 11 operating system.

Solaris Zones

Each instance of the Solaris 11 Operating System that comes pre-installed on each Logical Domain running Oracle VM Server supports Solaris Zones.

Each Zone contains a controlled environment through which you can allocate the exact operating system resources that an application needs.

Zones are ultimately used to isolate applications running on the same instance of Solaris 11 in the same Logical Domain so that they don’t interfere with each other, in terms of pre-allocated resource maximums and also files written to the underlying OS file system.

Solaris 11 supports thousands of zones on any given LDOM.

Links/Further Reading

M5-32 and M6-32 Server Documentation

Best Practices Whitepapers

M5 Documentation

M6 Documentation

Oracle Virtualization Products and Solutions

Is that VMWare task still running?

So, you’ve told VMware to create a fairly large disk out of one of the datastores, and it’s been ticking along nicely, then bam, an IO latency spike on your SAN has caused VMware to throw an error.  You can see your .vmdk file on the datastore, but the VM can’t see it, and you can’t delete it.  You’re left wondering whether “it’s doing anything or not” with no apparent way to tell in vSphere.  Grr.  You need information about VMware tasks.

ssh to the ESXi host

List all tasks running on host for all VMs

vim-cmd vimsvc/task_list

Obtain vmid of your VM

vim-cmd vmsvc/getallvms

Make a note of vmid

vim-cmd vmsvc/get.tasklist <vmid>

Make a note of the task identifier – this is the number on the end of the task, i.e. ..sometask-3360

View task information

vim-cmd vimsvc/task_info <task_identifier>

Look for the state=”running” field.
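Pulling that together (the vmid of 42 and the grep pattern are made-up examples):

vim-cmd vmsvc/getallvms | grep -i myvm                        # note the Vmid in the first column (say 42)
vim-cmd vmsvc/get.tasklist 42                                 # list the task identifiers for that VM
vim-cmd vimsvc/task_info <task_identifier> | grep -i state    # look for state="running"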

My blog posts are intended to be as concise as possible, since they are only intended to serve as a quick reminder of how I once did so-and-so.  Official instructions for this particular topic are available from VMware’s knowledge base here…

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013003

Anti-virus on VNX CIFS Servers

To scan viruses on your Windows File Servers using local or block (SAN) storage is easy – you just install an AV agent on the Windows Server and voila.  But what if your Windows File Server is replaced by an emc VNX CIFS Server?

The VNX uses an optional agent called CAVA (Common Anti-virus Agent) that enables a filter driver on the CIFS Server that sends  the file off to a third party AV server for scanning.  If a virus signature is found, the VNX subsequently deletes the file.

Here’s everything you need to set it up…   (Note that versions described below may change over time).

emc CAVA for VNX Installation, Configuration and Administration

Create a Windows Server, preferably two (or a couple of VMs), and add them to the domain.

Download VNX Common Event Enabler from here (291MB)…

You’ll need to register an account on support.emc.com if you don’t already have one (Powerlink account).

https://download.emc.com/downloads/DL48037_Common-Event-Enabler-6.3.1-for-Windows.iso

Install VNX Common Event Enabler 6.3.1 (includes CAVA) and a 3rd party AV product of your choice.
emc_VEE_Pack_x64_6.3.1.exe

You will also need to install <vnx nas version>_VNXFileCifsMgmt.exe which sadly is only available on CD2 of the Tools Pack that came with your VNX.  If you’ve subsequently upgraded the NAS to a more recent version, you’ll need to obtain the latest software from EMC.  I was able to download the elusive software from a link sent to me by EMC support, even though I couldn’t find it or search for it on Powerlink.  The links below may work for you, or they may not.  Try them.

https://support.emc.com/search/?text=”cifs%20tools”&facetResource=ST

or try this one…

https://support.emc.com/search/?text=Dl48750%20DL32448

Start, Administrative Tools, Celerra Management,
Expand Data Mover Management (you’ll need to point it at the IP address of your CIFS interface)
Expand Anti-virus
Set file masks (don’t use *.*) and exclude files that don’t harbor viruses.  Configure the CAVA CIFS Server name to exactly match that on the VNX CIFS Server (it may need to be in caps!), along with the IP addresses of the CAVA AV Servers.  An example viruschecker.conf is shown below.  How you get this into your viruschecker.conf is your problem.  Personally, I’d take the easy option of using the GUI, then manually edit the viruschecker.conf file using vi to fix any problems, remove square brackets and stuff.  To edit the viruschecker.conf file manually on the datamover over ssh, log on as nasadmin, su to root and use these commands…

server_file server_2 -get viruschecker.conf viruschecker.conf

vi viruschecker.conf (and tidy it up)

server_file server_2 -put viruschecker.conf viruschecker.conf

CIFSserver=globalcifsserver  -Note that this CIFS Server must reside on physical DM, not your CIFS Server on VDM
Addr=<IP addresses of AV engines separated by colons> eg 10.1.1.1:10.1.1.2
shutdown=viruschecking

excl=*.dwl:*.edb:*.fmb:*.fmt:*.fmx:*.frm:*.inp:*.ldb:*.ldf:*.mad:*.maf:*.mam:*.maq:*.mar:*.mat:*.mda:*.mdb:*.mde:*.mdf:*.mdn:*.mdw:*.mdz:*.ndf:*.ora:*.orc:*.ost:*.pst:*.sc:*.sqc:*.sql:*.sqr:*.stm:*.tar:*.tmp:*.zip:????????:*RECYCLER*

masks=*.386:*.ace:*.acm:*.acv:*.acx:*.add:*.ade:*.adp:*.adt:*.app:*.asd:*.asp:*.asx:*.avb:*.ax:*.ax?:*.bas:*.bat:*.bin:*.bo?:*.btm:*.cbt:*.cdr:*.cer:*.cfm:*.chm:*.cla:*.class:*.cmd:*.cnv:*.com:*.cpl:*.cpy:*.crt:*.csc:*.csh:*.css:*.dat:*.dbx:*.der:*.dev:*.dl?:*.dll:*.do?:*.do??:*.doc:*.docx:*.dot:*.drv:*.dvb:*.dwg:*.eml:*.exe:*.fon:*.fxp:*.gadget:*.gms:*.gvb:*.hlp:*.hta:*.htm:*.html:*.htt:*.htw:*.htx:*.im?:*.inf:*.ini:*.ins:*.ins:*.isp:*.its:*.js:*.js?:*.jse:*.jtd:*.lgp:*.lib:*.lnk:*.lnk:*.mad:*.maf:*.mag:*.mam:*.maq:*.mar:*.mas:*.mat:*.mau:*.mav:*.maw:*.mb?:*.mda:*.mdb:*.mde:*.mdt:*.mdw:*.mdz:*.mht:*.mhtm:*.mhtml:*.mod:*.mp?:*.mpd:*.mpp:*.mpt:*.mrc:*.ms?:*.msc:*.msg:*.msh:*.msh1:*.ksh:*.msh1xml:*.msh2:*.msh2xml:*.mshxml:*.msi:*.mso:*.msp:*.mst:*.nch:*.nws:*.obd:*.obj:*.obz:*.ocx:*.oft:*.olb:*.ole:*.ops:*.otm:*.ov?:*.pcd:*.pcd:*.pci:*.pdb:*.pdf:*.pdr:*.php:*.pif:*.pl:*.plg:*.pm:*.pnf:*.pnp:*.pot:*.pot:*.pp?:*.pp??:*.ppa:*.pps:*.pps:*.ppt:*.prc:*.prf:*.prg:*.ps1:*.ps1xml:*.ps2:*.ps2xml:*.psc2:*.pwz:*.qlb:*.qpw:*.reg:*.rtf:*.sbf:*.scf:*.sco:*.scr:*.sct:*.sh:*.shb:*.shs:*.sht:*.shtml:*.shw:*.sis:*.smm:*.swf:*.sys:*.td0:*.tlb:*.tmp:*.tsk:*.tsp:*.tt6:*.url:*.vb:*.vb?:*.vba:*.vbe:*.vbs:*.vbx:*.vom:*.vs?:*.vsd:*.vsmacros:*.vss:*.vst:*.vsw:*.vwp:*.vxd:*.vxe:*.wbk:*.wbt:*.wiz:*.wk?:*.wml:*.wms:*.wpc:*.wpd:*.ws:*.ws?:*.wsc:*.wsf:*.wsh:*.xl?:*.xl??:*.xla:*.xls:*.xlt:*.xlw:*.xml:*.xnk:*.xtp

Create a service account in the domain and check the user rights

Create a local group viruscheckers on the CIFS Server using the local users and groups snap-in, and add your service account in.

Make your service account a local admin on the CAVA Servers and double check that the debug programs right in group policy has local administrators in it (windows default setting) or put the cava service account in it.  This is needed for the CAVA service to query the OS on the VM to determine the AV engine.

GPO_name\Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment        Debug Programs

Restart the EMC CAVA service on the CAVA VMs using this service account – note: it’ll get assigned Log On As A Service rights automatically.

If you need to re-add rights to the CAVA service account in group policy for any reason (they’ve been stripped out in an update), then you’ll need to also restart the CAVA Service on the VM before the CAVA Agent on the Datamover will re-recognise the AV engine.

In the EMC Celerra Management snap-in

Expand User Rights Assignment
Expand EMC Virus Check
Add
Select the service account in the Domain to give virus checking right to, Add, OK, OK

PuTTY/SSH to VNX Control Station
Login as nasadmin
server_viruschk server_2
You should see ONLINE, plus details of file masks and AV server used.

If you get Unknown AV Engine or Third Party AV engine, even though you’re using McAfee or Sophos or one of the other supported AV engines, then something is up – HP ProtectTools can get in the way of the DM authenticating to the CAVA VMs.  I’m using McAfee and although mcshield.exe is a known AV engine and it’s running, it didn’t pick it up because the password was getting scrambled by ProtectTools.  Check that the AV policy being applied to the AV engine includes Network Drives.  It may not.  Until you solve this problem, change shutdown=viruschecking in your viruschecker.conf to shutdown=no to prevent it from stopping all the time.  Use the snap-in to adjust this setting.  Also make sure your viruschecker.conf is pointing at a global CIFS server permanently resident on the physical datamover and not your CIFS server on a virtual data mover that’s actually sharing your filesystems.

server_viruschk server_2 -audit
Should see details of viruses caught. This can be tested using EICAR test virus and dropping the file into the CIFS Share on the CIFS Server.
The file should get automatically deleted by your anti-virus software.

Reboot everything once it’s all set up (CAVA VMs).  A reboot can cure most problems.

Common Commands via the CLI

Replace server_x with the data mover you are accessing eg server_2

server_viruschk server_x   Shows if virus checking is running and the scanning rules
server_viruschk server_x -audit   Shows CAVA scanning stats and scan queue.  Very useful to see if the CAVA queue is blocked
server_log server_x   To see if there are any errors on the data movers
server_setup server_x -P viruschk -o start=64   Start the virus checker service on the data mover
server_setup server_x -P viruschk -o stop   Stop the virus checker service on the data mover
server_viruschk server_x -fsscan fs1 -create   Starts a virus scanning job on a file system
server_viruschk server_x -fsscan fs1 -delete   Stops a virus scanning job on a file system
server_viruschk server_x -fsscan fs1 -list   Show the scanning status

Debugging CAVA

You can set debug logging on the data mover

.server_config server_2 "param viruschk Traces=0x00000004" #turns on debug for AV in the server_log
.server_config server_2 "param viruschk Traces=0x00000000" #turns off debug for AV in the server_log

server_log server_x To see if there are any errors logged on the data movers.

Networker Cheatsheet

Here is a handy cheatsheet in troubleshooting failing backups and recoveries using emc’s Networker all taken from real-world experience (and regularly updated).

If it’s helped you out of a pinch, and is worth a dollar, then please consider donating to help maintain this useful blog.


Is backup server running?

nsrwatch -s backupserver        -Gives a console version of the NMC monitoring screen

Check the daemon.raw is being written to…

cp /nsr/logs/daemon.raw ~/copyofdaemon.raw

nsr_render_log -l ~/copyofdaemon.raw > ~/copyofdaemon.log

tail -10 ~/copyofdaemon.log

You may find mminfo and nsradmin commands are unsuccessful.  The media database may be unavailable and/or you may receive a “program not registered” error that usually implies the NetWorker daemons/services are not running on the server/client.  This can also occur during busy times such as clone groups running (even though this busy-ness is not reflected in the load averages on the backup server).

Client config.

Can you ping the client / resolve the hostname or telnet to 7937?

Are the static routes configured (if necessary)?

Can the client resolve the hostnames for the backup interfaces?  Does it have connectivity to them?

Does the backup server appear in the /nsr/res/servers file?

Can you run a save -d3 -s /etc on the client?

From the backup server (CLI)…

nsradmin -p 390113 -s client

Note:  If the name field is incorrect according to nsradmin (this happens when machines are re-commissioned without being rebuilt) then you need to stop nsrexecd, rename the /nsr/nsrladb folder to /nsr/nsrladb.old, restart nsrexecd, and most importantly, delete and recreate the client on the NetWorker backup server, before retrying a savegrp -vc client_name group_name.

Also check that all interface names are in the servers file for all interfaces on all backup servers and storage nodes likely to back the client up.

Can you probe the client?

savegrp -pvc client groupname

savegrp -D2 -pc client groupname (more verbose)

Bulk import of clients

Instead of adding clients manually one at a time in the NMC, you can perform an initial bulk import.

nsradmin -i bulk-import-file

where the bulk-import-file contains many lines like this

create type: NSR Client;name:w2k8r2;comment:SOME COMMENT;aliases:w2k8r2,w2k8r2-b,w2k8r2.cyberfella.co.uk;browse policy:Six Weeks;retention policy:Six Weeks;group:zzmb-Realign-1;server network interface:backupsvrb1;storage nodes:storagenode1b1;

Use Excel to form a large CSV, then use Notepad++ to remove the commas.  Be aware there is a comma in the aliases field, so use an alternative character in Excel to represent it, then replace it with a comma once all the other commas have been removed from the CSV.  A scripted alternative is sketched below.
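As an alternative to the Excel/Notepad++ shuffle, something like this awk one-liner would build the same bulk-import-file from a simple CSV.  The clients.csv layout, policies, group and storage node names here are assumptions taken from the example line above – adjust to suit:

# clients.csv columns assumed: name,comment,aliases(semicolon-separated),group
awk -F, '{ gsub(";", ",", $3); printf "create type: NSR Client;name:%s;comment:%s;aliases:%s;browse policy:Six Weeks;retention policy:Six Weeks;group:%s;server network interface:backupsvrb1;storage nodes:storagenode1b1;\n", $1, $2, $3, $4 }' clients.csv > bulk-import-file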

Add user to admin list on bu server

nsraddadmin -u "user=username,host=*"

where username is the username minus the domain name prefix (not necessary).

Reset NMC Password (Windows)

The default administrator password is administrator.  If that doesn’t work, check to see that the GST service is started using a local system account (it is by default), then in Computer Management, Properties, Advanced Properties, create a System Environment Variable; GST_RESET_PW=1

Stop and start the GST Service and attempt to logon to the NMC using the default username and password pair above.

When done, set GST_RESET_PW=<null>

Starting a Backup / Group from the command line

On the backup server itself:  savegrp -D5 -G <group_name>

Ignore the index save sets if you are just testing a group by adding  -I

Just backing up the :index savesets in a group: savegrp -O -G <group_name>

On a client: save -s <backup_server_backupnic_name> <path>

Reporting with mminfo

List names of all clients backed up over the last 2 weeks (list all clients)

mminfo -q "savetime>2 weeks ago" -r 'client' | sort | uniq

mminfo -q 'client=client-name,level=full' -r 'client,savetime,ssid,name,totalsize'

in a script with a variable, use double quotes so that the variable gets evaluated, and to sort on the american date column…

mminfo -q client=${clientname},level=full -r 'client,savetime,ssid,level,volume' | sort -k 2.7,2.10n -k 2.1,2.5n -k 2.4,2.5n

mminfo -ot -c client -q "savetime>2 weeks ago"

mminfo -r "ssid,name,totalsize,savetime(16),volume" -q "client=client_name,savetime >10/01/2012,savetime <10/16/2012"

List the last full backup ssids for subsequent use with the recover command (unix clients)

mminfo -q 'client=server1,level=full' -r 'client,savetime,ssid'

Is the client configured properly in the NMC? (see diagram above for hints on what to check in what tabs)

How many files were backed up in each saveset (useful for counting files on a NetApp which is slow using the find command at host level)

sudo mminfo -ot -q 'client=mynetappfiler,level=full,savetime<7 days ago' -r 'name,nfiles'

name                         nfiles

/my_big_volume          894084

You should probably make use of the ssflags option in the mminfo report too, which adds an extra column regarding the status of the saveset displaying one or more of the following characters CvrENiRPKIFk with the common fields shown in bold below along with their meanings.

C Continued, v valid, r purged, E eligible for recycling, N NDMP generated, i incomplete, R raw, P snapshot, K cover, I in progress, F finished, k checkpoint restart enabled.
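For example, adding ssflags to one of the report formats above:

mminfo -q "savetime>2 weeks ago" -r "client,name,savetime,ssflags"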

Check Client Index

nsrck -L7 clientname

Backing up Virtual Machines using Networker,VCentre and VADP

To back up virtual machine disk files on VMFS volumes at the VMware level (as opposed to the individual file-level backups of the individual VMs), NetWorker can interface with the vCenter servers to discover what VMs reside on the ESXi clusters managed by them, and their locations on the shared VMFS LUN.  For this to work, the shared LUNs also need to be presented/visible to the VADP Proxy (a Windows server with the NetWorker client and/or server running as a storage node) in the FC switch fabric zone config.

The communication occurs as shown in blue.  i.e.

The backup server starts the backup group containing the VADP clients.

The VADP proxy asks vCenter which physical ESXi host has the VM, and where its files reside on the shared storage LUNs.

The VADP proxy / NetWorker storage node then tells the ESXi host to maintain a snapshot of the VM while the .vmdk files are locked for backup.

The .vmdk files are written to the storage device (in my example, a Data Domain dedupe device).

When the backup is complete, the client index is updated on the backup server, the changes logged by the snapshot are applied to the now unlocked .vmdk, and then the snapshot is deleted on the ESXi host.

Configuring Networker for VADP Backups via a VADP Proxy Storage Node

The VADP Proxy is just a storage node with fibre connectivity to the SAN and access to the ESXi DataStore LUNs.

In Networker, right click Virtualisation, Enable Auto Discovery

VADP-enable

Complete the fields, but notice there is an Advanced tab.  This is to be completed as follows…  not necessarily like you’d expect…

vadp-advanced

Note that the Command Host is the name of the VADP Proxy, NOT the name of the Virtual Center Server.

Finally, Run Auto Discovery.  A map of the infrastructure should build in the Networker GUI

vadp-gui

Ensure the vCenter, proxy and NetWorker servers all have network comms and can resolve each other’s names.

You should now be ready to configure a VADP client.

Configuring a VADP client (Checklist)

GENERAL TAB

vadp-client-general

IDENTITY
COMMENT
application_name – VADP
VIRTUALIZATION
VIRTUAL CLIENT
(TICK)
PHYSICAL HOST
client_name
BACKUP
DIRECTIVE
VCB DIRECTIVE
SAVE SET
*FULL*
SCHEDULE
Daily Full

APPS AND MODULES TAB

vadp-client-appsmods

BACKUP
BACKUP COMMAND
nsrvadp_save -D9
APPLICATION INFORMATION
VADP_HYPERVISOR=fqdn_of_vcenter (hostname in caps)
VADP_VM_NAME=hostname_of_vm (in caps)
VADP_TRANSPORT_MODE=san
DEDUPLICATION
Data Domain Backup
PROXY BACKUP
VMWare
hostname_of_vadp_proxy:hostname_of_vcenter.fqdn(VADP)

GLOBALS 1 OF 2 TAB
ALIASES
hostname
        hostname.fqdn
        hostname_backup
        hostname_backup.fqdn
        ip_front
        ip_back

GLOBALS 2 OF 2 TAB
REMOTE ACCESS
user=svc_vvadpb,host=hostname_vadp_proxy
        user=SYSTEM,host=hostname_vadp_proxy
        *@*

OWNER NOTIFICATION
  /bin/mail -s "client completion : hostname_client" nwmonmail

Recovery using recover on the backup client

sudo recover -s backup_server_backup_interface_name

Once in recover, you can cd into any directory irrespective of permissions on the file system.

Redirected Client Recovery using the command line of the backup server.

Initiate the recover program on the backup server…
sudo recover -s busvr_interface -c client_name -iR -R client_name

or use…  -iN (No Overwrite / Discard)
-iY (Overwrite)

-iR (Rename ~ )

Using recover> console

Navigate around the index of recoverable files just like a UNIX filesystem

Recover>    ls    pwd cd\

Change Browsetime
Recover>    changetime yesterday
1 Nov 2012 11:30:00 PM GMT

Show versions of a folder or filename backed up
Recover>      versions     (defaults to current folder)
Recover>    versions myfile

Add a file to be recovered to the “list” of files to be recovered
Recover>    add
Recover>     add myfile

List the marked files in the “list” to be recovered
Recover>    list

Show the names of the volumes where the data resides
Recover>    volumes

Relocate recovered data to another folder
Recover>    relocate /nsr/tmp/myrecoveredfiles

Recover>  relocate "E:\\Recovered_Files"     (for Redirected Windows Client Recovery from Linux Svr)

View the folder where the recovered files will be recovered to
Recover>    destination

Start Recovery
Recover>    recover

SQL Server Recovery (database copy) on a SQL Cluster

First, rdc to cluster name and run command prompt as admin on cluster name (not cluster node)
nsrsqlrc -s <bkp-server-name> -d MSSQL:CopyOfMyDatabase -A <sql cluster name> -C MyDatabase_Data=R:\MSSQL10_50.MSSQLSERvER\MSSQL\Data\CopyOfMyDatabase.mdf,MyDatabase_log=R:\MSSQL_10_50\MSSQLSERVER\MSSQL\Data\CopyOfMyDatabase.ldf MSSQL:MyDatabase

Delete the NSR Peer Information of the NetWorker Server on the client/storage node.

Please follow the steps given below to delete the NSR peer information on NetWorker Server and on the Client.

1. At NetWorker server command line, go to the location /nsr/res

2. Type the command:

nsradmin -p nsrexec
print type:nsr peer information; name:client_name
delete
y

Delete the NSR Peer Information for the client/storage node from the NetWorker Server.

Specify the name of the client/storage node in the place of client_name.

1. At the client/storage node command line, go to the location /nsr/res

2. Type the command:

nsradmin -p nsrexec
print type:nsr peer information
delete

y

VADP Recovery using command line

Prereqs to a successful VADP restore are that the virtual machine be removed from the Inventory in vCenter (right click the VM, Remove from Inventory), and the folder containing the virtual machine’s files in the VMware datastore be renamed or removed.  If the VM still exists in VMware or in the datastore, VADP will not recover it.

Log onto the backup server over ssh and obtain the save set ID for your VADP “FULLVM” backup.

mminfo -avot -q "name=FULLVM,level=full"

Make a note of the SSID for the vm/backup client (or copy it to the cut/paste buffer)

e.g. 1021210946

Log onto the VADP Proxy (which has SAN connectivity over fibre necessary to recover the files back to the datastore using the san VADP recover mode)

recover.exe -S 1021210946 -o VADP:host=VC_Svr;VADP:transmode=san

Note that if you want to recover a VM back to a different vCenter, Datastore, ESX host and/or a different resource pool, you can do that from the recover command too, rather than waiting to do it using the vSphere client.  This can be used if your VM still exists in VMware and you don’t want to overwrite it.  You can additionally specify VADP:host=  VADP:datacenter=  VADP:resourcepool=  VADP:hostsystem= and VADP:datastore= fields in the recover command, separated by semicolons and no spaces.

I’ve found that whilst the minimal command above may work in some environments, others demand a far more detailed recover.exe command with all VADP parameters set before it’ll communicate with the VC.  A working example is shown below (with each VADP parameter separated onto a new line for readability – you’ll need to put it onto a single line, and remove any spaces between each parameter).

recover.exe -S 131958294 -o

VADP:host=vc.fqdn;

VADP:transmode=san;

VADP:datacenter=vmware-datacenter-name;

VADP:hostsystem=esxihost.fqdn;

VADP:displayname=VM_DISPLAYNAME;

VADP:datastore="config=VM_DataStore#Hard disk 2=VM_DataStore_LUN_Name#Hard disk 1=VM_DataStore_LUN_Name";

VADP:user=mydomain\vadp_user;

VADP:password=vadp_password

Creating new DataDomain Devices in Networker

In Networker Administrator App from NMC Console, Click Devices button at the top.
Right click Devices in the Left hand pane, New Device Wizard (shown)

Select Data Domain, Next, Next

 Use an existing data domain system
Choose a data domain system in the same physical location to your backup server!
Enter the Data Domain OST username and password

Browse and Select
Create a New Folder in sequence, e.g. D25, tick it.

Highlight the automatically generated Device Name, Copy to clipboard (CTRL-C), Next

Untick Configure Media Pools (label device afterwards using Paste from previous step), Next

Select Storage Node to correspond with device locality from “Use an existing storage node”, Next

Agree to the default SNMP info (unless reconfiguration for custom monitoring environment is required), Next

Configure, Finish

Select new device (unlabelled, Volume name blank), right click, Label

Paste Device Name in clipboard buffer (CTRL-V)
Select Pool to add the Device into, OK.

Slow backups of large amounts of data to DataDomain deduplication device

If you have ridiculously slow backups of large amounts of data, check in the NetWorker NMC to see the name of the storage node (Globals2 tab of the client configuration), then connect to the DataDomain and look under the Data Management, DD Boost screen for “Clients”, of which your storage node will be one.  Check how many CPUs and how much memory it has.  e.g. Guess which one is the slow one (below)

Then SSH to the storage node and check what processes are consuming the most CPU and Memory (below)
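On a Linux storage node, something as simple as this will show the spec and the top consumers (generic commands, nothing NetWorker-specific):

nproc                              # number of CPUs
free -m                            # memory in MB
ps aux --sort=-%cpu | head -10     # top CPU-consuming processes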

In this example (above), despite dedicating a storage node to backing up a single large application’s data, the fact that it only has 4 CPUs and is scanning every file that DD Boost is attempting to deduplicate means that a huge bottleneck is introduced.  This is a typical situation whereby decommissioned equipment has been re-purposed.

Networker Server

ssh to the NetWorker server and issue the nsrwatch command.  It’s a command-line equivalent to connecting to the Enterprise app in the NMC and looking at the monitoring screen.  Useful if you can’t connect to the NMC.

Blank / Empty Monitoring Console

If your NMC is displaying a blank monitoring console, try this before restarting the NMC…

Tick or Un-tick and Re-tick Archive Requests.

monitoring-refresh

Tape Jukebox Operations

ps -ef | grep nsrjb     -Maybe necessary to kill off any pending nsrjb processes before new ones will work.

nsrjb -C | grep <volume>    -Identify the slot that contains the tape (volume)

nsrjb -w -S <slot>      -Withdraw the tape in slot <slot>

nsrjb -d       -Deposit all tapes in the cap/load port into empty slots in the jukebox/library.

Note:  If you are removing and replacing tapes you should take note of what pools the removed tapes belong to, and allocate new blank tapes deposited into the library to the same pools, to eliminate the impact of backups running out of tapes.

Exchange Backups

The application options of the backup client (an Exchange server in DAG1) would be as follows

NSR_SNAP_TYPE=vss

NSR_ALT_PATH=C:\temp

NSR_CHECK_JET_ERRORS=none

NSR_EXCH2010_BACKUP=passive

NSR_EXCH_CHECK=no

NSR_EXCH2010_DAG=GB-DAG1

NSR_EXCH_RETAIN_SNAPSHOTS=no

NSR_DEVICE_INTERFACE=DATA_DOMAIN

NSR_DIRECT_ACCESS=no

Adding a NAS filesystem to backup (using NDMP)

Some pre-reqs on the VNX need to be satisfied before NDMP backups will work.  This is explained here

General tab

general-tab

The exported fs name can be determined by logging onto the VNX as nasadmin and issuing the following command

server_mountpoint server_2 -list

Apps and Modules tab

apps_modules_tab

Application Options that have worked in testing NDMP Backups.

Leave Data Domain unticked in NetWorker 8.x and ensure you’ve selected a device pool other than Default, or NetWorker may just sit waiting for a tape while you’re wondering why NDMP backups aren’t starting!

HIST=y
UPDATE=y
DIRECT=y
DSA=y
SNAPSURE=y
#OPTIONS=NT
#NSR_DIRECT_ACCESS=NO
#NSR_DEVICE_INTERFACE=DATA_DOMAIN

Backup Command: nsrndmp_save -s backup_svr -c nas_name -M -T vbb -P storage_node_bu_interface (or don’t use -P if the Backup Server acts as the SN).

To back up an NDMP client to a non-NDMP device, use the -M option.

The value for the NDMP backup type depends on the type of NDMP host. For example, NetApp, EMC, and Procom all support dump, so the value for the Backup Command attribute is:

nsrndmp_save -T dump

Globals 1 tab

globals1

Globals2 tab

globals2

List full paths of VNX filesystems required for configuring NDMP save client on Networker (run on VNX via SSH)

server_mount server_2

e.g. /root_vdm_2/CYBERFELLA_Test_FS

Important:  If the filesystem being backed up contains more than 5 million files, set the timeout attribute to zero in the backup group’s properties.

Command line equivalent to the NMC’s Monitoring screen

nsrwatch

Command line equivalent to the NMC’s Alerts pane

printf "show pending\nprint type:nsr\n" | /usr/sbin/nsradmin -i-

Resetting Data Domain Devices

Running this in one go if you’ve not done it before is not advised.  Break it up into individual commands (separated here by pipes) and ensure the output is what you’d expect, then re-join commands accordingly so you’re certain you’re getting the result you want.  This worked in practice though.  It will only reset Read Only (.RO) devices so it won’t kill backups, but will potentially kill recoveries or clones if they are in progress.

nsr_render_log -lacedhmpty -S "1 hour ago" /nsr/logs/daemon.raw | grep -i critical | grep RO | awk '{print $10}' | while read eachline; do nsrmm | grep $eachline | cut -d, -f1 | awk '{print $7}'; done | while read eachdevice; do nsrmm -HH -v -y -f "${eachdevice}"; done
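Broken up into its stages with a pause for sanity checking (the /tmp filenames are just scratch files I’ve made up for the example):

# 1. pull the devices flagged critical in the last hour (read-only .RO devices only)
nsr_render_log -lacedhmpty -S "1 hour ago" /nsr/logs/daemon.raw | grep -i critical | grep RO | awk '{print $10}' > /tmp/ro_alerts

# 2. map each alert back to its device name via nsrmm
while read eachline; do nsrmm | grep "$eachline" | cut -d, -f1 | awk '{print $7}'; done < /tmp/ro_alerts > /tmp/ro_devices

# 3. eyeball /tmp/ro_devices, then reset each device
while read eachdevice; do nsrmm -HH -v -y -f "${eachdevice}"; done < /tmp/ro_devices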

Identify OS of backup clients via CLI

The NMC will tell you what the client OS is, but it won’t elaborate and tell you what type, e.g. Solaris, not Solaris 11, or Linux, not Linux el6.  Also, as useful as the NMC is, it continually drives me mad how you can’t export the information on the screen to Excel.  (If someone figures this out, leave a comment below.)

So, here’s how I got what I wanted using the good ol’ CLI on the backup server.  Luckily for me the backup server is Linux.
Run the following command on the NetWorker server, logging the putty terminal output to a file:

nsradmin
. type: nsr client
show client OS type
show name
show os type
p

This should get you a list of client names and what OS they’re running according to NetWorker in your putty.log file.  Copy and paste the list into a new file called mylist.  Extract just the Solaris hosts…

grep -i -B1 solaris mylist | grep name | cut -d: -f2 | cut -d\; -f1 > mysolarislist

sed 's/^ *//' mysolarislist | grep -v \\-bkp > solarislist

You’ll now have a nice clean list of solaris networker client hostnames.  You can remove any backup interface names by using

grep -v b$

to remove all lines ending in b.

One liner…

grep -i -B1 solaris mylist | grep name | cut -d: -f2 | cut -d\; -f1 | sed 's/^ *//' | grep -v \\-bkp | grep -v b$ | sort | uniq > solarislist

Now this script will use that list of hostnames to ssh to them and retrieve more OS detail with the uname -a command.  Note that if SSH keys aren’t set up, you’ll need to enter your password each time a new SSH session is established.  This isn’t as arduous as it sounds.  Use PuTTY right-click to paste the password each time, reducing effort to a single mouse click.

#!/bin/bash

cat solarislist | while read eachhost; do
echo "Processing ${eachhost}"
ssh -n -l cyberfella -o StrictHostKeyChecking=no ${eachhost} 'uname -a' >> solaris_os_ver 2>&1
done

This generates a file solaris_os_ver that you can just grep for ^SunOS and end up with a list of all the networker clients and the full details of the OS on them.

grep ^SunOS solaris_os_ver | awk '{print $1 $3 $2}'
