I came home on Friday evening to find my DLNA server wasn’t available :(. It’s not the scenario I needed after an intense few days squeezing five days’ worth of work into a four-day week due to the Easter bank holiday weekend, plus the three-hour drive home.
First, some background: my DLNA server is simply Serviio running on a Xubuntu VM, which mounts an NFS share containing my media files.
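For the curious, that mount is nothing more exotic than a regular NFS entry in the VM’s /etc/fstab, along these lines (the hostname and paths here are illustrative rather than my exact config):

# Openfiler media share mounted over NFS - illustrative hostname and paths
openfiler:/mnt/vg_nfs/lv_nfs  /media/nfsms  nfs  defaults  0  0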
The virtual infrastructure in my lab that underpins it is a two-node ESXi cluster, with a third node running Openfiler to provide the shared storage to ESXi. This includes a RAID 0 iSCSI target (not recommended, I might add) for maximum IO within a constrained home budget, and a 1TB USB HDD containing an NFS datastore where I store my ISOs and VM backups, so as to save space on the relatively expensive, high-performance iSCSI target intended for the VMs’ disk files, which are also thinly provisioned to save further space. The Openfiler NAS also has a second 1TB USB HDD containing a second NFS Media Store share, mounted by the Serviio/Xubuntu VM already mentioned (as well as any other machine on the network). The network is an 8-port, 1Gb/s managed switch with two VLANs and two networks: one joining the rest of the LAN, and one carrying just vMotion and iSCSI traffic.
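On the Openfiler side, those two NFS shares boil down to a couple of lines in /etc/exports, roughly like the following. This is a sketch rather than my actual file (Openfiler normally manages it for you), and the subnet, options, and the mapping of volumes to shares are illustrative:

# /etc/exports - illustrative; Openfiler generates the real entries
/mnt/vg_vmware/lv_vmware  192.168.0.0/24(rw,no_root_squash)   # ISOs and VM backups
/mnt/vg_nfs/lv_nfs        192.168.0.0/24(rw,no_root_squash)   # media share for Serviio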
So, like I said, my Serviio DLNA server was unavailable and some troubleshooting was in order.
My first reaction was that something was wrong in VMware land, but this turned out not to be the case. However, the storage configuration tab revealed that the NFS datastores were not available, and df -h on my workstation confirmed it, so almost immediately my attention switched from VMware to Openfiler.
Now, I won’t go into it too much here, but I’m torn on Openfiler. The trouble is that most folks will only ever interface with the web-based GUI, and they’d quickly come unstuck: whether or not you run conary updateall to install all the latest updates, certain changes made there don’t seem to get written back. I had to perform all my LVM configuration manually at the command line as root, rather than via the web GUI, which runs as the openfiler user. I’ve yet to investigate this any further as it’s now working OK for me, but my guess would be a permissions issue.
I connected to the Openfiler web interface and could see that the shared folders (shown below) were missing, so the NFS shares were not being shared. More importantly, it also implied that the logical volumes containing the filesystems exported via NFS were not mounted; df -h on Openfiler’s command line confirmed this.
In order to check that Openfiler could see the hard drives at all, I issued the command fdisk -l, but because the USB HDDs are LVM physical volumes they have GPT partition tables on them, not msdos; fdisk does not support GPT, but is kind enough to recommend GNU Parted instead. Despite the recommendation, I used lshw > /tmp/allhardware and just used vi to go looking for the hard drive information. The USB HDDs are Western Digital, so I just :/WD to find them amongst the reams of hardware information, and find them I did. Great, so the OS could see the disks, but they weren’t mounted. I quickly checked /etc/fstab and sure enough the devices were in there, but mount -a wasn’t fixing the problem.
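For anyone retracing the steps, that disk check boils down to a few commands (parted being what fdisk itself suggests, and the lshw class filter being a shortcut I could have used instead of searching in vi):

fdisk -l                   # complains about GPT and points you at GNU Parted
parted -l                  # lists GPT-partitioned disks properly
lshw -short -class disk    # skips the reams of output and shows just the disks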
Remember I mentioned that the drives had GPT partition tables, and that they were LVM physical volumes? Well, therein lay the problem. You can’t mount a filesystem on a logical volume if the volume group it belongs to is not activated. Had my volume groups deactivated? Yes, they had.
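A quick way to confirm the state (a check I’d suggest, rather than the exact commands I ran at the time) is lvscan, with vgs for the volume group view:

lvscan    # shows each logical volume as ACTIVE or inactive
vgs       # summarises the volume groups the LVs belong to

Reactivating them is then a single vgchange per volume group: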
vgchange -ay /dev/vg_nfs
vgchange -ay /dev/vg_vmware
Now that my volume groups were active, mount -a worked: df -h confirmed that the /dev/mapper/vg_vmware-lv_vmware and /dev/mapper/vg_nfs-lv_nfs block storage devices were mounted at /mnt/vg_vmware/lv_vmware and /mnt/vg_nfs/lv_nfs respectively. exportfs -a then re-shared the NFS exports, provided the details were still in /etc/exports, which they were. Going back to the Openfiler web interface, the shares tab now revealed the folders shown in blue (above), along with the mount points any NFS client needs in order to mount them. Since those details were already in /etc/fstab on my workstation, mount -a re-mounted them at /nfs/nfsds and /nfs/nfsms, and ls -al showed that the files were all there.
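Strung together, the rest of the recovery on the Openfiler side looked like this (exportfs -v is just a double-check; the mountpoints are specific to my box):

mount -a       # mount everything in /etc/fstab, now the volume groups are active
df -h          # confirm lv_nfs and lv_vmware are mounted under /mnt
exportfs -a    # re-export the shares defined in /etc/exports
exportfs -v    # verify what is actually being exported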
An rdesktop session to my VirtualCenter server, a mount -a in the Xubuntu terminal to remount the media share on the DLNA server, a re-run of serviio.sh, and that was it.
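For completeness, the client-side fix was as small as it sounds; note that the serviio.sh location below assumes Serviio’s default extracted layout, which may well differ from yours:

sudo mount -a              # remount the NFS media share inside the Xubuntu VM
cd /opt/serviio/bin        # assumed install path - adjust to wherever you unpacked Serviio
./serviio.sh &             # start the DLNA server back up in the background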
So that’s how I diagnosed what was wrong and how I fixed it. Now I just need to investigate the system logs on Openfiler to see why the volume groups deactivated in the first place; after four months of continuous uptime without issue, I must admit it came as a surprise.
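When I do get round to that, the trawl will probably start with something like this (log locations as per a stock Openfiler install; no guarantees the culprit shows up there):

grep -iE 'lvm|vgchange|usb' /var/log/messages    # any USB disconnects or LVM events?
dmesg | grep -i usb                              # the kernel’s recent view of the USB HDDs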