VMworld Day 2

Yesterday afternoon, the Solutions Exchange area opened. I’m not sure I’ve ever seen more people and more vendors/booths at a single event. Walking around, it seems that everyone hires professional emcees (masters of ceremony) to work their booths. The magicians and gimmicks are operating at a high level. The Blue Cat booth is doing the same thing as last year, but better. One vendor has a phone-booth-type setup where you can get in and grab cash blown around by a fan. Crazy! I was a little tired, so I hung out with the good folks at Veeam for a while at their booth.

This morning is the big keynote – the room is huge and people are filing in steadily. The keynote started with a video about what the cloud is, and immediately fired a joke at Oracle. We are all dumb terminals in the cloud! The cloud is a collective of vast resources, and catching pizza Matrix-style. We were just informed that there are 17,021 registered attendees at this year’s conference!

CMO Rick Jackson opened the keynote and described the hybrid cloud being used this year, since there is no data center on site as there has been in the past. He also spoke about the value of the VMware User Group and promoted joining one. Rick then moved into this year’s slogan, “Virtual Roads. Actual Clouds,” and spoke about the phases of virtualization: IT Production, Business Production, and IT as a Service.

Paul Maritz, VMware President and CEO, took the stage next. Thankfully, he moved quickly into theory instead of practice, and it seems he only gets 20 minutes to speak. He discussed infrastructure as a means to an end and asked whether old apps on new infrastructure are enough. Going forward, there will be a new focus on how applications are built so that they better support a virtualized environment. For example, application platforms are now popping up everywhere—iPhone, iPad, Droid, Blackberry—so applications must be able to adjust and adapt. He brought up “the new stack”: new infrastructure, applications, and end-user access.

Steve Herrod followed and picked up where Paul left off. Steve is a little more energetic than Paul, but I still find myself wishing Carl Eschenbach would speak. Steve highlighted some features of vSphere 4.1, starting with “elastic resource scheduling” and why we would need additional vMotion capabilities.
Steve announced the VMware acquisition of Integrien to help “manage the virtual giant.” It looked cool in the screenshots, complete with traffic-light red/yellow/green graphs and granularity down to individual VMs, but I wonder where this leaves partners like Veeam. IT as a Service (ITaaS) should move toward a service catalog, similar to an app store, so you can get what you want, when you want it, and then pay only for what is used.

Now – REDWOOD, otherwise known as VMware vCloud Director, is available today. Also announced were vShield Endpoint, vShield App, and vShield Edge to offload virtual machine security. Endpoint is the anti-virus portion of it, and it seems cool. Symantec Endpoint Protection’s initial release made me really leery of the word “endpoint,” so I hope this is better. We then saw a quick demo of vCloud Director and moved into vFabric. Cloud portability is key! The next big announcement is that View 4.5 is coming in a few weeks. We’ve been waiting for this one! Local mode is the new offline desktop.




Stay tuned for the next installment…

Dan Anderson is my hero! (VMworld update)


If you’re at VMworld 2010, and you haven’t been to session ALT2004 – Building the VMworld Lab Cloud Infrastructure, I recommend you put that one on your schedule.  Dan Anderson (VMware Principal Architect) is a great presenter, and hearing him describe in detail how he architected and built this year’s lab environment is mind-boggling.

This guy is basically doing something that might take us a year or two, in only a few months.  There’s no load testing, and no way to know if it’s all going to work well until we all get in the labs and hammer on the environment.  Talk about flying by the seat of one’s pants!

Here’s a quick summary of what’s in the lab environment:

  • 329 TB of storage from EMC and NetApp
  • 352 Servers (HP Blades and Cisco UCS)
  • 736 CPU sockets with 3,072 cores
  • 7.5 THz of total CPU
  • 14.6 TB RAM
  • 480 thin clients
  • 4,000+ VMs deployed every hour
  • 12 Lab Manager Instances
  • 4 vCenters per site

This is all spread across 3 sites:

  • Verizon – Ashburn, VA
  • Terremark – Miami, FL
  • Moscone – San Francisco

Each site can run the entire lab environment if necessary, so Hurricane Earl won’t be spoiling our fun!
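Just for fun, here are a few derived figures from the numbers above — a rough back-of-the-envelope sketch in Python. (It assumes decimal units, e.g. 1 TB = 1,000 GB, which may not match how the session did its math.)

```python
# Back-of-the-envelope math on the published VMworld 2010 lab numbers.
sockets = 736
cores = 3072
total_hz = 7.5e12          # 7.5 THz of aggregate CPU
ram_tb = 14.6
servers = 352
vms_per_hour = 4000        # "4,000+" -- treat as a floor

cores_per_socket = cores / sockets
ghz_per_core = total_hz / cores / 1e9
gb_ram_per_server = ram_tb * 1000 / servers       # decimal TB -> GB
vms_per_server_per_hour = vms_per_hour / servers

print(f"{cores_per_socket:.2f} cores/socket")            # ~4.17
print(f"{ghz_per_core:.2f} GHz/core")                    # ~2.44
print(f"{gb_ram_per_server:.1f} GB RAM/server")          # ~41.5
print(f"{vms_per_server_per_hour:.1f} VMs/server/hour")  # ~11.4
```

Roughly 41 GB of RAM per server and eleven VM deployments per server per hour, sustained — not bad for an environment stood up in a few months.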

Taking note of the lessons learned during Dan’s experience will help all of us, even if we’re not building at that speed or scale.  I encourage you to go to his session.  Here are the remaining opportunities to see this one:

Tuesday 11:00 AM Moscone West Room 2014
Wednesday 03:00 PM Moscone West Room 2014
Thursday 12:00 PM Moscone West Room 2014

Now back to your regularly scheduled long line.

VMworld 2010 Sunday/Monday update


Well, my first VMworld experience started with a bang!  After some seemingly too-long flights and an interesting shuttle ride from the airport to the hotel, I made it!  I dropped off my stuff at the hotel and went to register.  Thankfully, by 5:30 there was no one in line and the process was very quick.  I went back to the hotel, got connected with Scott Sauer and John Blessing (@ssauer and @vTrooper), and we headed over to the Thirsty Bear for the VMunderground party, WuPaaS.  I can’t say enough how great this event was, so thanks to all the sponsors, Sean Clark (@vSeanClark), and Theron Conrey (@theronconrey).  I had a chance to finally meet a lot of the VMware/virtualization twitterverse I’ve been talking to for the past year, which was really cool.  After the Warm-Up party, we moved to the Chieftain and continued chatting.

This morning, I grabbed some food in Moscone South and met up with Tommy Trogden (@StorageTexan) and Peter Selin (@pjselin) and headed to an informative session about View performance tuning and View Planner.  We tried to get into a storage session but it closed early because it was full.  We headed to an early lunch and then I spent some time in the blogger lounge.

So far, I am really impressed with everything, except for the lines.  What can you expect though, with 16,000+ people in attendance?

More to come….


Life as a VMware Virtual Machine




I was talking with a local customer the other day who was inquiring about the differences between Microsoft virtualization (Hyper-V) and VMware virtualization solutions.  This customer was hung up on putting the two vendors into a cage match and making them go at each other to see who won.  I used to work in the end-user IT environment, and I know people who think this is a smart approach.  Competition is great for end users because it drives innovation and keeps costs in check.  But let’s look at the technology rather than the cage and understand what’s under the hood.  Just to set the record straight: this is not a slam-Microsoft-Hyper-V blog post.  I don’t get wrapped up in battle-of-the-hypervisors conversations; if you want to go with a competing solution, have at it.  We will be talking again down the road eventually.

This individual understood the basics of virtualization but wasn’t clear on how the hypervisor works in conjunction with the hardware.  I point my customers to quality blogs when they have questions or need instructions on how to configure or stand something up.  I figured I would do a write-up to help shed some light on how virtual machines work, how they are handled by the hypervisor, and how they stay lean and mean.

Hypervisor types

First, let’s be clear: VMware, Microsoft, and Citrix did not come up with the concept of virtualization.  The idea of abstraction has been around for over 50 years, and it was first mastered by the smart people at IBM on some old iron in the late ’60s.  VMware did launch the first x86-based virtualization software in May of 1999, which has since changed the open-systems world greatly.

Currently there are two types of hypervisors:

Type 1 – A native or bare-metal hypervisor runs directly on the host’s hardware to manage and monitor the guest operating systems.  Because it has direct access to the hardware and doesn’t go through a host operating system, it runs more efficiently than a hosted (Type 2) model.

Type 2 – A hosted hypervisor runs within a conventional operating system.  It does not have direct access to the hardware and thus traditionally has more overhead than a Type 1 hypervisor.

Examples of Type 1 hypervisors include VMware ESX, VMware ESXi, Microsoft Hyper-V, and Citrix XenServer.  Examples of Type 2 hypervisors include VMware Workstation, VMware Server, and Microsoft Virtual Server.  Type 1 hypervisors run more efficiently because they are designed specifically to handle virtual workloads.  They also don’t have a host operating system to share, schedule, and contend for resources with.
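For reference, that taxonomy can be captured in a tiny lookup — a sketch using only the products named in this post, classified as described here (not an exhaustive or authoritative list):

```python
# Hypervisor taxonomy from the examples above.
# Type 1 = native/bare metal; Type 2 = hosted inside a conventional OS.
HYPERVISOR_TYPE = {
    "VMware ESX": 1,
    "VMware ESXi": 1,
    "Microsoft Hyper-V": 1,
    "Citrix XenServer": 1,
    "VMware Workstation": 2,
    "VMware Server": 2,
    "Microsoft Virtual Server": 2,
}

def is_bare_metal(product: str) -> bool:
    """True if the product runs directly on the hardware (Type 1)."""
    return HYPERVISOR_TYPE[product] == 1

print(is_bare_metal("VMware ESXi"))         # True
print(is_bare_metal("VMware Workstation"))  # False
```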

The VMware Architecture

Two of the most important components of the VMware Type 1 hypervisor are the VMkernel and the VMM.  The VMkernel is the actual VMware ESX hypervisor that we all know and love.  It is responsible for interacting with the physical server hardware that you install vSphere onto.  Sounds pretty simple in concept, right?  It’s not.  I disagree with people who think the hypervisor is a commodity technology, because there are some very special things that VMware does differently.  Unlike some other virtualization vendors in the marketplace, VMware implements a hardware compatibility list (HCL) to ensure you will be running a supported configuration.  That means when you install the product, VMware has already QA’d the configuration, eliminating the guesswork of building a supported, stable environment.

The VMkernel doesn’t actually run the virtual machines; it invokes yet another layer of protection called the virtual machine monitor (VMM).  This “thin candy shell” (a Tommy Boy reference) is the special sauce that takes various communications from the VMkernel and translates them for the actual virtual machine, and vice versa.  I put together a diagram here to help illustrate where the thin candy shell resides in the virtualization stack:



The VMM presents the virtualized CPU, memory, network, and storage to the guest operating system that is hosted on the hypervisor, or VMkernel.  It also provides each virtual machine with its own personalized, custom-built BIOS!  The VMM detects and understands the hardware type that the hypervisor is running on.  It examines the advanced CPU functionality and then adapts (monitor mode) to pass those benefits along to the guest.

The VMM handles three different types of virtualization: software, hardware, and paravirtualization.  Software virtualization we already discussed above.  Hardware virtualization leverages technologies from our x86-based server vendors such as Intel and AMD.  Intel offers advanced processor virtualization features such as Intel VT-x, while AMD offers its own solution called AMD-V.  Hardware virtualization helps offset the overhead of virtualization by handling privileged-instruction traps in hardware, reducing the need for binary translation (BT).  Paravirtualization is the concept of reducing virtualization overhead by having the host and guest work in conjunction with each other.  A classic example of this approach is pvSCSI; if you want more detailed information, check out my write-up over here.
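On Linux, you can check whether a host CPU advertises those hardware-assist extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. Here is a minimal sketch — the flag names are real, but note the file path is Linux-specific, and inside a guest the flags may be masked by the hypervisor:

```python
def virt_extensions(cpuinfo_text: str) -> set:
    """Return the hardware virtualization flags found in cpuinfo text:
    'vmx' for Intel VT-x, 'svm' for AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme ... vmx ..." -> split off the flag list
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            found = virt_extensions(f.read())
        print(found or "no hardware virtualization extensions advertised")
    except FileNotFoundError:
        print("/proc/cpuinfo not available (non-Linux host)")
```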

As with all great ideas, there are trade-offs: the use of the VMM adds a layer of “overhead” to the virtual machine.  A translation has to take place to create this isolated, secured environment.  VMware’s goal is to lower this overhead to help drive efficiencies (CPU and memory) and help you consolidate more with less.  Here is a diagram that helps illustrate the concept (BTW, I have heard 4.1 has taken this down to 1–3% overhead):



Hopefully this helped shed some light on how VMware’s Type 1 hypervisor works and how it interacts with the virtual machines it is designed to support.  Remember that the VMkernel is responsible for working in conjunction with the hardware layer, and the VMM is responsible for translating that information to the virtual machine.  Overhead is a byproduct of this translation, but leveraging hardware assists and using VMware will help drive this overhead toward complete transparency.



Is VMware the Novell of Virtualization?


I think VMware is currently well positioned to become the Novell of virtualization.  Consider the similarities:  technical superiority, “career-safe” choice, large vested corporate user base, complex pricing models, arbitrary price increases in a captive market, Microsoft buying market share with free and nearly-free product.  http://www.networkworld.com/community/tech-debate-microsoft-vmware

Recently, the above comment was made by Guy Chapman on an interesting Network World debate over who has the better hypervisor, Microsoft or VMware.  With respect to Guy, I’d like to touch on the reasons I think this is not (currently) a fair comparison.

First off, a bit of background on Novell for the younger readers.  Novell developed the first network operating system back in the ’80s.  By 1990, any business that needed networking was using Novell NetWare.  Within a decade, Microsoft’s massive marketing machine had relegated Novell to the background in the corporate world.  MS continued gaining market share with NT and the BackOffice suite of server products.  Y2K gave larger customers a reason to upgrade their old stuff, and most of Novell’s larger customers began moving toward Microsoft, which was cheaper than Novell and GUI-based — a combination that attracted enterprise customers.

You have to hand it to Microsoft.  NT was a solid product, priced right, and with the introduction of Active Directory in 2000, Novell faded fast.

That was then.  Let’s take a look at Microsoft’s track record of innovation over the past decade. What has Microsoft brought to market in the past several years?  Zune?  SCOM?  Windows Mobile?  Bing?  Each one of these is nothing more than a re-hash of a product that someone else brought to market first.  Were any of them better than their predecessors?  The market says no.

In contrast, VMware over the past decade has brought incredible innovations to the market faster even than master innovators like Apple.  Who would have dreamed 10 years ago that one could vMotion a server onto different hardware, and even different storage?  The feature differences between 4.0 and 4.1 releases of vSphere alone are more innovative than anything I have seen out of Redmond for many years.

Microsoft is quite good at commoditizing technologies that have been developed and incubated by others.  I suspect this is what Guy was referring to when he made the Novell comment.  I think the difference here though is the sheer speed at which VMware is innovating.  Microsoft is at least a couple years behind vSphere 4.0, and with new features like VAAI, SIOC, and NIOC, they might have to add another year just to catch 4.1.  Microsoft is capable of throwing amazing amounts of cash at the problem, as evidenced by the current pricing structure of Hyper-V.  But is it enough?

I submit that if VMware continues innovating at their current pace, Microsoft will never catch them, unless there is a massive shake-up.  With the current MS management team and company culture, it is impossible.  Even if there were a tectonic shift this week in Microsoft’s management, and the culture change started next week, you’re still looking at a decade before they can innovate like VMware can today.  Microsoft is a behemoth of a company, and culture shifts are exponentially harder to pull off as a company grows larger.

Rest assured, if VMware is indeed the Novell of virtualization, they’re going to enjoy a couple more decades at the top before disappearing into irrelevancy.