• Software: ESXi 6.5, Ubuntu 17.04, ZFS On Linux (ZOL)

I recently came by a couple of 3 TB WD Blues and naturally decided it was time to build a storage array. Given the minuscule size of my mini-ITX not-quite-a-server, I didn’t have space for a hardware RAID controller, so I elected to go with a software-based approach I’d always been interested in: ZFS. If you don’t know about ZFS, the Arch Wiki, as usual, has a good, practical introduction.

My server was already hosting a virtualized Windows environment under ESXi, so running ZFS on bare metal wasn’t an option, but I still wanted access to SMART data in Ubuntu and minimal hypervisor overhead on disk I/O. So, raw device mapping (RDM) to the rescue. Note that VMware does not support RDM on local SATA disks like these; it’s generally used to map LUNs directly to guests, but luckily it also works for this case.

There are two kinds of raw device mapping available in ESXi: physical and virtual. Virtual RDM gives the guest limited SCSI access to the physical disk, which appears in the guest OS as a “VMware-something-something” drive, whereas physical RDM gives nearly full access, and the disk appears to the guest as the actual hardware. Physical RDM also passes through the drive’s SMART data, which I made use of with smartmontools.


  1. SSH into your ESXi host and list the disk devices:

    $ ls -l /vmfs/devices/disks

    Identify the vml names of the disks you want to use for RDM. They will appear as links beginning with vml.0 that point to the hardware name (which in my case began with t10.ATA_____WDC_).

  2. Change to the vmfs volume and directory where you want to create the RDM files – for me, this was /vmfs/volumes/<datastore_name>/RDM.

  3. Create the RDM file using the vmkfstools utility:

    $ vmkfstools -z /vmfs/devices/disks/<vml_name> <RDM_filename.vmdk>

    You can name the RDM file whatever you want, but I used the hard drives’ serial numbers. The -z switch is used to make this a physical RDM (a virtual RDM would use the -r switch).

  4. Add the RDM files to the guest VM using the ‘Add existing disk’ option in the GUI. Reboot and the guest should detect the disks as though they were physically attached to the machine.
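The ESXi-side steps above can be sketched as a small dry-run helper that prints a vmkfstools command for each vml disk link rather than executing anything, so you can review before committing. The DISKDIR/RDMDIR defaults and the naming of the .vmdk files after the vml identifier (rather than the drive serial, as I did) are assumptions for illustration:

    #!/bin/sh
    # Dry-run sketch: print the vmkfstools command for every vml.* disk link.
    # Nothing is executed against the disks -- review the output, then run
    # the lines you actually want. Paths are hypothetical defaults.
    DISKDIR="${DISKDIR:-/vmfs/devices/disks}"
    RDMDIR="${RDMDIR:-/vmfs/volumes/datastore1/RDM}"

    print_rdm_cmds() {
        for vml in "$DISKDIR"/vml.*; do
            [ -e "$vml" ] || continue
            name=$(basename "$vml")
            # -z = physical RDM; swap in -r here for a virtual RDM
            echo "vmkfstools -z $vml $RDMDIR/$name.vmdk"
        done
    }

    print_rdm_cmds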

Then, on the Ubuntu machine that’ll be hosting your ZFS array:

  1. Install the zfsutils-linux package (sudo apt install zfsutils-linux).
  2. Create the zpool. I used the disk IDs found in /dev/disk/by-id, as recommended by the ZOL developers:

    $ zpool create tank mirror scsi-1ATA_WDC_WD30EZRZ-00Z5HB0_WD-WCC4N5RPFZJS scsi-1ATA_WDC_WD30EZRZ-00Z5HB0_WD-WCC4N6HRPFJY
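Once the pool exists, a few sanity checks are worth running. The commands below assume the tank mirror created above; the /dev/sda device name in the smartctl line is a placeholder for whichever device node your RDM disk actually appears as:

    # Confirm the pool is ONLINE and both mirror members are healthy
    zpool status tank

    # Kick off a scrub to verify checksums across the whole mirror
    zpool scrub tank

    # Physical RDM passes SMART through; /dev/sda is a placeholder --
    # check lsblk for your real device names
    smartctl -a /dev/sda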

And now you have a ZFS array running within VMware! My setup has been rock solid so far, but we’ll see whether zpool scrubs turn up any problems down the road.