ESXi as a Desktop with VMDirectPath I/O

Introduction
Since VMware released ESXi, the entire world has had access to an enterprise-grade virtualisation product at no cost (provided you have the correct hardware to support it). ESXi is the basis of VMware's flagship product vSphere and is constantly updated with new features, many of which are available in standalone ESXi (i.e. sans vSphere).

One of these, introduced with ESXi 4.0, is VMDirectPath I/O, a technology which allows virtual machines to access real hardware. To do this it uses the Intel VT-d or AMD IOMMU technologies, which have been present on most mid- to high-end hardware for over three years.

Its primary purpose was to give guests the highest possible speed (and lowest latency) to peripherals such as 10Gb NICs and storage adapters. However, in practice the vast majority of devices can be passed through this way.

Why?
Since this feature was introduced, co-workers had implanted the idea in my head that it would be a neat way to provide high-speed virtual machines on desktops. Sure, there are other products out there, and most are pretty good.

VMware's own Workstation is the obvious choice if you're looking to spend money, and Oracle's VirtualBox is a good alternative if you're not. Linux KVM support is interesting, as are Solaris Zones if you want the Oracle route, and FreeBSD jails and similar work well if you just want one operating system and one kernel.

No matter which system you're using, you have a "full fat" operating system under the virtualisation technology; ESXi, on the other hand, is extremely lightweight.

I'm lucky that in my work I'm free to run my own desktop computer (not true for the entire organisation; this is usually decided by those who would support you, which in my case is myself). Our team tends to be primarily Linux, although the majority of us run a Windows virtual machine for Outlook and other non-portable applications. I'm a fan of FreeBSD for servers and try to use it when possible.

I use Ubuntu as my distribution of choice, which is great when doing purely *nix type work. One thing I've always seemed to encounter with Linux is a poorly performing I/O scheduler; perhaps it's not targeted at the interactive desktop audience, but it never quite seems to do the job. Ironically, I/O scheduling has never really caused me an issue with Windows 7.

I also like to reinstall regularly with a recent version of the operating system I'm using, a nice way to clean house once in a while. At the moment I put off updating or upgrading the base operating system because I'll have to restart the Windows 7 and FreeBSD guests.

The Goal
Simple: pass the graphics card, sound card and USB inputs through to a guest machine running on an ESXi host to provide a desktop experience as close to a native installation as possible. It's obvious that there's going to be a performance hit with ESXi in place: the previously native OS now has to go through ESXi to perform any operation.

Hardware
My test machine is:
* Asus P6T
* Intel Core i7 920 2.6GHz running at 3.0GHz
* 6GB of RAM
* C-Audio USB Sound Card
* Intel PRO/1000 GT
* ATI Radeon HD 2400 Pro (from a Dell OptiPlex 755) w/ 256MB RAM

Graphics Card Choice
From what I've experienced and read online, you will need an ATI Radeon-based graphics card; nVidia cards will not work. From what I can gather this is something to do with the way the nVidia drivers address and control bits of memory, which is incompatible with ESXi (maybe even with VT-d/IOMMU itself).

I've personally had it working with the HD 2400 Pro; reports suggest the HD 5670, HD 3450 and HD 6850 also work.

Setting It Up - ESXi
Installing ESXi is a fairly simple process; if you've never done it before, be aware that it will wipe the destination disks you give it. If you have any disks with valuable data on them, either back them up or simply unplug them.

ESXi is not compatible with several features of the P6T, the most important being its onboard gigabit Realtek NIC. Remember, ESXi is an enterprise product and Realtek chips tend to be fairly consumer-grade; you may be able to find a driver pack to support it, but I haven't looked. Intel and Broadcom NICs are likely to work, as they are found on many server boards. I've always used the Intel card in this machine anyway, as it's the better NIC. ESXi does recognise the SATA controller and USB ports happily.
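If you're unsure whether your NIC has been picked up, the ESXi console (Tech Support Mode) can list the physical NICs the host has claimed. A quick sketch; vmnic0 and friends are ESXi's own names for whatever it finds:

  # list the physical NICs ESXi has a driver for, with link state and speed
  esxcfg-nics -l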

Remember that with ESXi you are always going to need somewhere to run the vSphere Client to manage it. Once everything is up you may be able to live with running the client inside a guest to manage the other guests; however, for the initial set up, and whenever you need to alter your "desktop" guest, you're going to need it on another machine.

Once (or before) ESXi is installed you need to go into the BIOS of your computer and ensure that the required extension (VT-d or IOMMU) is enabled; without this ESXi will not be able to pass hardware through. If your platform supports it and it is enabled, you should now be able to go into the "Configuration -> Advanced Settings" section of the vSphere Client.

Clicking "Edit" will provide you with a list of PCI/PCIe devices ESXi believes it can provide guests. You need to select the device which represents your graphics card. In my case it appears as "ATI Technologies Inc Optiplex 755" for one of my cards, then you'll need to attempt to match your USB controllers on the BUS to your physical USB ports. In my case I have two USB 2.0 controllers amongst six other USB 1.0 controllers, the difference of which is documented in the motherboard manual.

Unfortunately you will need to reboot the ESXi host in order to activate the passthrough configuration. I'm not aware of whether selecting all devices as possible passthrough options will cause any issues; I specifically took the time to make only those required available.

Setting It Up - Guest
Now install your new desktop operating system in a guest, as per normal, using the vSphere Client as the interface to it. At this point it may be worth adding a "USB Controller": this will allow you to optionally pass individual USB devices to your VM without giving it a whole physical USB controller (it does, however, increase your dependence on the vSphere Client).
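For reference, the virtual USB controller ends up as a couple of lines in the guest's .vmx file; this is only a sketch of what I'd expect the vSphere Client to write (the ehci line being the USB 2.0 controller), so treat the exact keys as illustrative rather than gospel:

  usb.present = "TRUE"
  ehci.present = "TRUE"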

Once it's up and running, feel free to update it, etc. Install VMware Tools from ESXi so that the guest can be monitored and issued commands as needed.
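On the Ubuntu guest, installing the Tools is the usual tarball routine once you've picked "Install/Upgrade VMware Tools" in the vSphere Client (which attaches the Tools ISO as a virtual CD). A sketch; the version number in the tarball name will differ:

  sudo mount /dev/cdrom /mnt
  tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp
  cd /tmp/vmware-tools-distrib
  sudo ./vmware-install.pl    # accepting the defaults is fine to start with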

Setting It Up - Adding Graphics Card
Shut down the guest, edit the virtual machine and add a PCI Device; initially select only your graphics card and boot the VM. If the VM doesn't start then you may need to reduce the memory allocation; there is an issue I haven't yet quite understood to do with memory reservations within ESXi.
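From what I've since read, a VM with a VMDirectPath device needs its guest memory fully reserved, which may well be what's going on here. The reservation can be set on the VM's Resources tab, and it boils down to entries like these in the .vmx file; a sketch with illustrative values, so don't take the exact keys as gospel:

  memsize = "4096"
  sched.mem.min = "4096"
  pciPassthru0.present = "TRUE"

Some passthrough write-ups also mention a pciHole.start entry for guests that refuse to see the card's memory, though I can't vouch for it.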

Hopefully your guest will boot, ignoring the new graphics card; the fun task, no matter which OS, is to get it to ignore the standard SVGA adapter VMware emulates in favour of the real card. After the reboot in the following steps you will need to continue using the mouse and keyboard through the vSphere Client; just wiggle the mouse and you'll get it.

Windows:

1. Install the ATI Catalyst drivers.
2. Reboot.
3. With luck, on boot your screens will expand the desktop automatically; they may also stay blank.
4. Use the standard Windows display settings to extend your desktop to the new screens, and optionally disable the SVGA adapter.

Ubuntu:
1. Install the ATI drivers (fglrx) from the Proprietary Drivers screen under Preferences.
2. Run "aticonfig --initial" (this overwrites the Xorg config to use the ati card over the SVGA card).
3. Reboot.
4. Use the ATI tools to configure Xorg and your Window Manager as you wish.
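To confirm that the passed-through Radeon is the card actually being driven, a couple of quick checks from a terminal are worth running. A sketch; the exact device strings will depend on your card:

  lspci | grep -i -e vga -e display    # the ATI card should appear alongside VMware's SVGA adapter
  fglrxinfo                            # once fglrx is active, should report the ATI card as the OpenGL renderer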

Setting It Up - USB
There are two choices to make here: either you pass a USB controller directly through to the guest, or you use VMware's USB device passthrough functionality.

The former means identifying which controller serves which physical ports, assuming you can do this easily (worst case, you add each controller to your VM one by one and plug something into each port to see which respond).
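One way to do the detective work is from a Linux environment that can already see the controller (the Ubuntu guest once a controller is passed through, or a live CD booted on the bare metal beforehand); move a device from port to port and watch which bus it appears on. A sketch:

  lsusb -t        # shows the bus/port topology and which driver (ehci_hcd/uhci_hcd) owns each bus
  dmesg | tail    # after plugging something in, shows which bus/controller just enumerated it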

The latter will mean that you are reliant on the vSphere Client in order to add new USB devices to your computer; just plugging in a flash drive isn't as easy as it used to be.

I'm currently having to use both: for some reason I've so far been unable to get the USB sound card to come online through the passed-through USB ports, so I have VMware passing that individual USB device through instead. This isn't a staggering issue, as the USB sound card doesn't change often.

Setting It Up - Adding to "Virtual Machine Startup/Shutdown"
Back in vSphere you'll want to make your desktop VM the very first VM to come online after the host has been rebooted. This is relatively easy and is done through "Configuration -> Virtual Machine Startup/Shutdown" on the host.
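For reference, the same startup ordering can be inspected from the ESXi console as well. A sketch; running "vim-cmd hostsvc/autostartmanager" on its own lists the full set of sub-commands, including the one for updating an entry:

  vim-cmd vmsvc/getallvms                              # note the numeric ID of the desktop VM
  vim-cmd hostsvc/autostartmanager/get_autostartseq    # show the current startup ordering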

Result
Well, this article is being written in a Windows 7 64-bit guest with the graphics card and USB peripherals passed through. It has watched Jurassic Park with no skew between audio and video, and is currently playing internet radio without a glitch (I never got Spotify to do that in Workstation).

The vSphere Client sucks, which is an issue. It's written in J# and is slow; it takes patience to use. However, weighing that against how often you'll actually need it, it's liveable.

No matter what you do with ESXi, you will not see the machine boot on your real monitors; the VM BIOS only seems to be capable of outputting via the fake graphics card. If your machine hangs, you'll need to look in the vSphere Client.

Along the same lines, if ESXi itself crashes after the "desktop" VM has started you will not see any output from it; it is possible to get ESXi to send debug output via the serial interface, which may be an option for troubleshooting.