Renaming a vSwitch in VMware ESXi

Your vSwitches, visible in the vSphere Client, are allocated names: vSwitch0, vSwitch1 and so on.  To create dvSwitches (Distributed vSwitches), you need to point the vSphere Client at a VirtualCenter Server rather than directly at an ESX host, in order to access the enterprise features enabled there.

Going back to plain old vSwitches though, the names need to match across hosts if you have VMotion VMkernel ports contained inside them, and if they don’t, VMotion won’t work.

You soon realise that you can’t rename a vSwitch from within the vSphere Client either – oh no!  Deleting it and recreating it may be a problem too if there are VMs living inside an internal Virtual Machines network that cannot be VMotioned away to another host.

The good news is that you can fix this scenario using the “unsupported” console on the ESX host.

At the ESX console, log in, hit Alt-F1, then type unsupported and hit Enter.  You won’t see the word “unsupported” appear as you type it, but upon hitting Enter you’ll be prompted for the root password.  Type it in and hit Enter.

You’ll be presented with a Linuxesque command prompt.  If you don’t do vi, go and find someone who does, or you’re about to break stuff.

cd /etc/vmware

vi esx.conf

Search for “name” by hitting Esc, then typing /name and Enter, and keep hitting n (next) until you find the incorrectly named vSwitch.  Change the word by hitting Esc, then cw, typing the correct name, then hitting Esc again.

/net/vswitch/child[0001]/name = "vSwitch4"

If you’re happy the name has been changed correctly in esx.conf, hit Esc, type :wq! and hit Enter to write the changes back to disk and quit vi.
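If you’d rather not drive vi by hand, the same edit can be scripted with sed; a rough sketch, assuming the entry looks like the one above – the old and new vSwitch names here are just examples, and you should keep a backup of esx.conf either way:

```shell
CONF=/etc/vmware/esx.conf
cp "$CONF" "$CONF.bak"                      # keep a backup before touching it
sed -i 's/"vSwitch4"/"vSwitch1"/' "$CONF"   # example: rename vSwitch4 to vSwitch1
grep '/net/vswitch/child' "$CONF"           # eyeball the result before rebooting
```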

Back at the Linux prompt, type clear to clear the screen, then type exit and hit Enter to log out of the console.

Alt-F2 will close the “Unsupported Console” returning you back to the black and yellow ESX Console.

Esc to log out, then finally F11 to restart the host.

When the ESX host restarts, you can reconnect using vSphere Client and the vSwitch will now have the correct name.



Troubleshooting Openfiler (missing NFS shares)

I came home on Friday evening to find my DLNA server wasn’t available :(.  It’s not the scenario I needed after an intense few days squeezing five days’ worth of work into a four-day week due to the Easter bank holiday weekend, plus the three-hour drive home.

Firstly, my DLNA server is simply Serviio running on a Xubuntu VM which mounts an NFS share containing my media files.

The virtual infrastructure in my lab that underpins it is a two-node ESXi cluster, with a third node running Openfiler to provide the shared storage to ESXi.  This includes a RAID 0 (not recommended, I might add) iSCSI target for maximum IO within a constrained home budget, plus a 1TB USB HDD containing an NFS datastore where I store my ISOs and VM backups, so as to save space on the relatively expensive, high-performance iSCSI target intended for the VMs’ disk files, which are also thinly provisioned to save further space.  The Openfiler NAS also has a second 1TB USB HDD containing a second NFS Media Store share, mounted by the Serviio/Xubuntu VM already mentioned (as well as any other machine on the network).  The network is an 8 port, 1Gb/s managed switch with two VLANs and two networks: one which joins the rest of the LAN, and one which carries just the VMotion and iSCSI traffic.


So, like I said, my Serviio DLNA server was unavailable and some troubleshooting was in order.

My first reaction was that something was wrong in VMware land, but this turned out not to be the case.  However, the storage configuration tab revealed that the NFS datastores were not available, and df -h on my workstation confirmed it, so almost immediately my attention switched from VMware to Openfiler.

Now, I won’t go into it too much here, but I’m torn on Openfiler.  The trouble is that most folks would only ever interface with the web-based GUI, and they’d quickly come unstuck: whether or not you’ve run conary updateall to install all the latest updates, certain changes made in the GUI don’t seem to get written back.  I had to perform all my LVM configuration manually at the command line as root, not via the web GUI as openfiler.  I’ve yet to investigate this any further as it’s now working OK for me, but my guess would be a permissions issue.

I connected to the Openfiler web interface and could see that the shared folders were missing, so the NFS shares were not being shared.  More importantly, it also implied that the logical volumes containing the filesystems exported via NFS were not mounted.  df -h on Openfiler’s command line confirmed this.

To check that Openfiler could see the hard drives at all, I issued the command fdisk -l, but because the USB HDDs are LVM physical volumes they have gpt partition tables on them rather than msdos ones, which fdisk does not support – though it is kind enough to recommend using GNU Parted instead.  Despite the recommendation, I used lshw > /tmp/allhardware and just used vi to go looking for the hard drive information.  The USB HDDs are Western Digital, so I typed /WD to find them amongst the reams of hardware information, and find them I did.  Great, so the OS could see the disks, but they weren’t mounted.  I quickly checked /etc/fstab and sure enough the devices were in there, but mount -a wasn’t fixing the problem.

Remember I mentioned that the drives had a gpt partition table, and that they were LVM physical volumes?  Well therein lies the problem.  You can’t mount a filesystem on a logical volume if the volume group that it is a part of is not activated.  Had my volume groups deactivated?  Yes, they had.
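You can confirm the LVM state for yourself before reactivating anything; a quick sketch of the standard LVM reporting commands (the volume group names are from my setup, yours will differ):

```shell
pvs      # are the USB disks visible as LVM physical volumes?
vgs      # are vg_nfs and vg_vmware listed at all?
lvscan   # each logical volume shows as ACTIVE or inactive here
```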

vgchange -ay /dev/vg_nfs

vgchange -ay /dev/vg_vmware

Now that my volume groups were active, mount -a should work, confirmed by df -h showing that the /dev/mapper/vg_vmware-lv_vmware and /dev/mapper/vg_nfs-lv_nfs block devices were now mounted at /mnt/vg_vmware/lv_vmware and /mnt/vg_nfs/lv_nfs respectively.  exportfs -a should re-share the NFS shares provided the details were still in /etc/exports, which they were.  Going back to the Openfiler web interface, the shares tab now revealed the folders and the mount points needed by any NFS clients in order to mount them.  Since the mountpoint details were already in /etc/fstab on my workstation, mount -a re-mounted them into /nfs/nfsds and /nfs/nfsms, and ls -al showed that the files were all there.
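For reference, the whole recovery boils down to a handful of commands on the Openfiler box – volume group names as per my setup, and showmount added here just as a sanity check:

```shell
vgchange -ay vg_nfs      # reactivate the volume groups
vgchange -ay vg_vmware
mount -a                 # mount everything listed in /etc/fstab
exportfs -a              # re-export everything in /etc/exports
showmount -e localhost   # confirm the NFS exports are visible again
```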

An rdesktop session to my VirtualCenter server, a mount -a in the Xubuntu terminal to remount the share on the DLNA server, a re-run of Serviio, and that’s it.

So that’s how I diagnosed what was wrong and how I fixed it.  Now I just need to investigate the system logs on Openfiler to see why the volume groups deactivated in the first place.  After four months of continuous uptime without issue, I must admit it came as a surprise.



Upgrading VMware ESXi hosts

So your ESXi environment has a few virtual machines running, and their OSes are all kept up to date, but what about bringing the ESXi host itself up to date?  This is the quickest and easiest way I’ve found of getting the job done.

Download the .zip package from VMware for your ESXi version.  This will need an internet connection.


If you don’t already have it installed, you can download and install vSphere client by typing the name or IP address of your ESXi host into your web browser.  This will not need an internet connection.

You’ll also need the vSphere CLI, which will need to be downloaded from VMware.  This will need an internet connection.

Should you have any installation issues, you may want to download the .NET Redistributable Package from Microsoft and pre-install that before attempting to install the VMware products.

Once you have the vSphere Client and vSphere CLI installed and the .zip package ready:

Connect to the VirtualCenter Server / ESXi host and shut down or VMotion any running virtual machines.

Place the ESXi host into Maintenance Mode.

Open vSphere CLI.

cd "C:\Program Files\VMware\VMware vSphere CLI\bin"

perl vihostupdate.pl --server <esx_host_ip> --username root --bundle <path_to_zip_bundle> --install

Enter the root password when prompted.
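By way of a worked example – the host IP and bundle filename below are hypothetical, so adjust to suit your environment – the update and a follow-up query to list what’s installed might look like this:

```shell
cd "C:\Program Files\VMware\VMware vSphere CLI\bin"
perl vihostupdate.pl --server 192.168.1.20 --username root --bundle C:\temp\esxi-upgrade-bundle.zip --install
perl vihostupdate.pl --server 192.168.1.20 --username root --query    # list the installed bulletins afterwards
```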

It’ll go quiet for a while, but you can see that something is happening in the vSphere Client: the job will be “In Progress” for around two or three minutes on modern hardware with 1Gb/s network connectivity.  Only do one host upgrade at a time, to prevent IO errors occurring which will halt the upgrade and leave locked files in /var/update/cache/, requiring a restart of the host to clear and costing you time.  This is especially true if you are connected at only 100Mb/s over the network.

When you see the words “Installation Complete” in the vSphere CLI terminal, the upgrade part is complete.  Leave the host in Maintenance Mode for now and reboot it from the vSphere Client.

When the host is back up, log on again using the vSphere Client and take it out of Maintenance Mode.

That’s it.  Power up the VMs and VMotion them back from the other hosts in the cluster, or just let DRS take care of it, depending on your environment.

Repeat for each host in the cluster.