VCDX, VMworld2009, RoR … and a Fractured Sternum


Given my recent inactivity here and on Twitter, I feel the need to post some updates.  So, where have I been and what have I been doing for the past few weeks?  I'll start with the most recent drama, which should give you a good laugh.

Fractured Sternum

A few months ago, my wife and I decided to buy our four-year-old son an outdoor playset.  Our local Costco had the "Rainbow All-American Double Decker Playset" (pictured left) on sale, so we decided to buy it.  That was back in March.  Nearing the end of June, do you know where the playset is?  Still boxed up in our garage.  Some Dad I am.

Anyway, on Monday I decided I was sick and tired of all the clutter in our garage.  Plus I was determined to get that playset built before the end of June.  So I decided to rearrange the garage and get the playset boxes ready to be moved to the back yard.

Before I continue, let’s take a quick look at the weight and dimension of these boxes …

Shipping Box Dimensions:

  • Box-1: 14 1/2” L x 11 1/4” W x 9” H: Approximately 40-lbs

  • Box-2: 22 1/2” L x 11 1/4” W x 9” H: Approximately 40-lbs

  • Box-3: 106” L x 24” W x 7” H: Approximately 210-lbs

  • Box-4: 106” L x 24” W x 7” H: Approximately 240-lbs

  • Box-5: 106” L x 24” W x 7” H: Approximately 195-lbs

  • Box-6: 106” L x 24” W x 7” H: Approximately 200-lbs

  • Slide: 115 1/2” L x 24 3/4” W x 16 3/4” H: Approximately 40-lbs


Well, as I was trying to be the big, bad super dad and move Box #4 on my own … I had a little accident.  That's right, I have a fractured sternum because a playset box fell on my chest!  How embarrassing.  My friends now affectionately call me "Crash," and my wife will no longer allow me to go into the garage without first showing her that my helmet is securely fastened.

The good doctor from the ER gave me some Vicodin for the pain, which has been very helpful.  But the side effect is that Vicodin makes me loopy, making it difficult to write.


If you’re still reading then you’re probably questioning my level of intelligence (and I wouldn’t entirely blame you :)  So I figured I would try to redeem myself with an update on my VCDX progress.  Even though I have not yet posted all of my VCDX study notes, I actually took the VCDX admin exam a few weeks ago.  And I just found out last week that I passed!  Woooohoooo!  Now I’ve got to start preparing for the next test, the VCDX Design Exam.



Looks like I'll be speaking at VMworld2009.  So if you're planning to attend this year's event, be sure to say "hello."  Just look for my shaved noggin' wandering the halls (or the guy with the shiny helmet, hehehe).  Even better, as I'll be the speaker of session DV3567, you can certainly find me at the following breakout session …

Session ID: DV3567

Don't throw that PC away! How to convert old PCs to Thin Clients using a thin Linux OS and VMware View Open Client.

More and more, companies are looking for additional ways to cut costs through virtualization. And it isn't long before IT teams start exploring the possibility of a Virtual Desktop Infrastructure. But with desktops outnumbering servers by a factor of 10:1 (or more), converting users to a virtual desktop can be technically challenging and a significant upfront expense. A potential solution to this problem is to convert existing PCs into Thin Clients, extending the life of the hardware and easing the transition into a VDI. This session will show IT professionals various ways to convert older PCs into Thin Clients, capable of connecting to a VMware VM hosted on ESX via the VMware View Manager.

RoR (Ruby on Rails) and other next generation frameworks

I like to think of myself as an amateur developer (though, even amateur developers might have a thing or two to say about that!! :)  I began programming in Perl about 10 years ago and since then I’ve dabbled in a number of different languages, like C++, Java and Ruby.  

About two years ago I was introduced to Ruby on Rails and since then, most of my development work has been with RoR.  Thus far, however, I haven’t posted anything on this blog about RoR.  Why?  Two reasons.  The apps I’ve written to date have absolutely nothing to do with VMware.  And second, like I said, I’m an amateur.  Anyone looking for RoR help and advice can probably find better info on actual RoR blogs.

But I’ve decided that this is about to change.  Most recently I’ve been working on a little RoR front end that will “drive” vSphere via SOAP.  So I certainly find that work relevant here.  Plus, if you think about it, Rails provides a level of abstraction and therefore, by definition, can be called a type of virtualization. 

So if you’re an RoR developer (or any other kind of next generation framework, for that matter), please let me know.  I’m interested in reading your blog, checking out your applications, sharing code, chatting about issues / concerns / challenges, etc.  Just post a comment or email me at asweemer [at] gmail [dot] com.

The E.T.D.F. Series – Setting up the Network and Dedicated Remote Access (Part 3)

OK, we are *almost* done getting our network set up properly for VDI, but we've got a few more things to do.  Specifically, we need to address:

  1. Handling our external, Internet facing, dynamic IP address.
  2. DHCP and DNS
  3. External VPN access

Dynamic External IP

Most ISPs (though not all) provide a single, dynamic IP address for consumer-grade service (i.e. home use).  But when we're trying to connect to our virtual desktops from somewhere out on the public Internet, how do we know which IP address to connect to?

Generally speaking, connecting to an IP address is bad practice because it’s inflexible.  Instead, we should connect to a Fully Qualified Domain Name (FQDN).  OK fine, so we’ll set up a DNS entry and use a FQDN to connect to our desktops.  But what happens when our dynamic IP address changes and the DNS entry is still mapped to the old IP address?

What we need is an external Dynamic DNS (aka DDNS) service which will allow us to programmatically update our IP address whenever it changes.  There are a number of both free and paid-for DNS providers out there that can deliver DDNS services.  Personally, I use EditDNS.  They have a ton of functionality and they've been rock solid for the past few years I've been using their services, so I'm quite happy with them.

Now, many home-use routers these days have the capability to update a DDNS provider.  But in my experience, the functionality is somewhat limited.  What if, for example, I want some hostnames to be dynamic entries and another to be a static entry, pointing to my blog server hosted somewhere else?  In reality, I've got about 20 FQDNs that I need to be dynamically updated and about 100 that I want static.  So instead, I created a script that will:

  1. Query my external IP address (via a free web tool that echoes back your public IP).
  2. Compare the result of the query with the IP obtained from the previous query.
  3. If the IP is the same, or contains something other than an IP (e.g. an HTTP error), the script exits.
  4. If the IP is different, the script updates my DDNS entries via the EditDNS API, then updates a log file documenting the change, and finally adds the new IP to the last line of a file called previous_ips.

If you’d like to use the script I wrote, you’ll first need to do the following:

  1. If you don't already have one, set up an account with EditDNS and make sure you have properly configured the domain name(s) you own.
  2. Verify your Linux distro has lynx (a command line, text-only web client).
  3. Verify your Linux distro has curl (a tool to transfer data using HTTP).
  4. Create a directory (anywhere you have rwx access is fine) for the script and its files to live.
  5. In this directory, create a text file for the script (I'll call it here) and paste the content (below) into it.
  6. Replace XXXXXXXX with your EditDNS password.
  7. Make the script executable (chmod +x /path/to/editdns/
  8. Create another text file called records and, one per line, enter the FQDNs of the DDNS entries you want updated (e.g.
  9. Add the script to your crontab to run at regular intervals (e.g. mine runs every five minutes and the entry in cron looks like this:    */5 * * * * cd /usr/local/editdns && ./

And here's a copy of the actual script …


#!/bin/bash

LYNX=`which lynx`
TIME=`date`
# replace with whatever plain-text IP-echo service you use
CIP=`curl -s | awk --re-interval '$1 ~ /^([0-9]{1,3}\.){3}[0-9]{1,3}$/ {print}'`
PIP=`tail -1 ./previous_ips`

if [ "$CIP" != "$PIP" ] && [ -n "$CIP" ]; then
  cat ./records | while read FQDN; do
    # update each record via the EditDNS API (check their docs for the exact URL; this one is an assumption)
    $LYNX -dump "$CIP&r=$FQDN&p=XXXXXXXX" > /dev/null
    echo "IP Change!  New IP is $CIP.  $FQDN was updated at $TIME." >> ./editdns.log
  done
  echo $CIP >> ./previous_ips
exit 0

If you have any issues or questions, feel free to email me.  Also, keep in mind, this is a quick and dirty script that accomplishes what I want it to accomplish.  Feel free to make it more robust (e.g. error handling or better logging) to suit your needs.
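Incidentally, the guard in step 3 comes down to that dotted-quad regular expression: anything that isn't a plain IPv4 address produces no output, so $CIP ends up empty and the update is skipped.  Here is the same pattern exercised with grep -E, purely for illustration:

```shell
# a plain IPv4 address matches the dotted-quad pattern and is printed
echo "" | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
# an HTML error page does not match, so nothing is printed (and $CIP would stay empty)
echo "<html>503 Service Unavailable</html>" | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || true
```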


DHCP and DNS

It's quite possible you'll want to skip this section and opt for setting up DHCP and DNS via Microsoft's built-in DHCP and DNS services that come out of the box with their server products.  To properly set up VMware View, we'll need to set up Active Directory anyway, and quite frankly, it's far easier to set up a Microsoft server with DHCP and DNS than it is to set up a Linux server.  So feel free to skip this section and leverage Microsoft for these services.  If, however, you're a glutton for punishment then by all means, read on.

Let’s first start with DNS.  Here too we need Dynamic DNS because as we’re handing out IP addresses via DHCP, we want our DNS server to properly reflect current information as IP addresses change.  So, if you don’t already have bind9 (the DNS server), go ahead and install it (sudo apt-get install bind9 should work on Ubuntu / Debian distros).

The default configuration for bind9 is to act as a caching server, so the first thing we need to do is configure our DNS to forward all unknown DNS requests to another DNS server.  These should be provided to you by your ISP.  Edit the forwarders {} section of your named.conf.options file (usually located in /etc/bind/) to look like this …

asweemer@cincylab-rtr1:/etc/bind$ more named.conf.options
options {
        directory "/var/cache/bind";

        forwarders {
      ;   // placeholder; your ISP's first DNS server
      ;   // placeholder; your ISP's second DNS server

        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };

Obviously, you'll need to change the forwarder addresses to the DNS server IPs given to you by your ISP.

Next, we need to modify our master named.conf to allow dynamic updates to DNS.  Add the following entry to the bottom of your named.conf file.

controls {
        inet allow {;;;; } keys { "rndc-key"; };

This tells the DNS server to allow updates from the IP addresses listed between the {} (substitute your own here).  Notice the first three IP addresses are local IP addresses.  The fourth IP address is a slave DNS server, which I have yet to set up.  The rndc-key is the default key generated during installation of bind9 and it's used to authorize the updating of DNS records.  If you're using Ubuntu, then you'll likely find the key in the file /etc/bind/rndc.key …

asweemer@cincylab-rtr1:/etc/bind$ sudo cat rndc.key
key "rndc-key" {
        algorithm hmac-md5;
        secret "QZ5jOmcr/OW3nzksR5q0Hw==";
Note the file is a text file named rndc.key, and the actual key is called rndc-key located within the text file.
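Once the key is in place, you can test dynamic updates by hand with nsupdate (part of the BIND utilities) before ever involving DHCP.  The domain and server address below are placeholders; substitute your own:

```
asweemer@cincylab-rtr1:~$ sudo nsupdate -k /etc/bind/rndc.key
> server
> update add 3600 A
> send
> quit
```

Then query the record back with dig to confirm the update took.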

OK, next we need to define our zones in the named.conf.local file.  For each domain you’re using (probably just one), you’ll need two entries:  one for the domain and one for the reverse lookup of the domain.  I have two domains I’ll be updating, so my named.conf.local file looks like this …

asweemer@cincylab-rtr1:/etc/bind$ cat named.conf.local
// Do any local configuration here

include "/etc/bind/rndc.key";

// the domain names and file names below are placeholders; substitute your own

zone "" {
        type master;
        file "/etc/bind/zones/";
        allow-update { key "rndc-key"; };
        allow-transfer { 10.10.7/24; };

zone "" {
        type master;
        file "/etc/bind/zones/";
        allow-update { key "rndc-key"; };
        allow-transfer { 10.10.7/24; };

zone "" {
        type master;
        file "/etc/bind/zones/";
        allow-update { key "rndc-key"; };
        allow-transfer { 192.168.9/24; };

zone "" {
        type master;
        file "/etc/bind/zones/";
        allow-update { key "rndc-key"; };
        allow-transfer { 192.168.9/24; };

A couple points to note here:

  • I created a subdirectory called “zones” under /etc/bind/ where I put all my zone files.  This isn’t the default location, and in addition, this isn’t necessary as the zone files can be located anywhere you’d like.  But be aware the configuration file above reflects the location of my files.
  • Notice the include "/etc/bind/rndc.key" on the first line and the allow-update directive within each zone definition?  This should be self-explanatory at this point.
  • The allow-transfer directive within each zone definition explicitly limits zone transfers (copy) to the IP(s) defined.  This is an important security feature since, by default, DNS allows transfers to anyone, and the info contained within a DNS zone file can really give hackers visibility into your network.
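An easy way to confirm the allow-transfer restriction is working is to request a full zone transfer with dig from both an allowed and a disallowed host (domain and server address here are placeholders):

```
# from a host inside the allowed subnet, this dumps the entire zone
dig axfr @
# from anywhere else, bind refuses and dig reports "Transfer failed."
```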

Now we need to create the zone files we just defined above, which will contain our actual DNS records.  Here is the zone file for the forward domain (the placeholder in my examples) …

asweemer@cincylab-rtr1:/etc/bind/zones$ cat
$TTL 3600       ; 1 hour
; the names and addresses below are placeholders; substitute your own
@                       IN SOA (
                                2009060514 ; serial
                                86400      ; refresh (1 day)
                                86400      ; retry (1 day)
                                2419200    ; expire (4 weeks)
                                3600 )     ; minimum (1 hour)
                        NS      master
                        MX      10      mail
                        MX      20      mail-spool
computer-1              A
                        TXT     "317bf41a2c5b70fd9ca4e283d364dcddd5"
computer-2              A
                        TXT     "00cf6242f693ebbf1d545159548e44ab81"
computer-3              A
                        TXT     "31a0cb7e096a96c63dc998d2db3be6e450"
mail                    A
mail-spool              A
master                  A
www                     CNAME   master

A couple important things to point out here:

  • The entries for computer-1, 2 and 3 are dynamic entries that were generated by the DHCP server.  The TXT record that follows each of these entries is a unique identifier, also generated by the DHCP server, which is used to ensure it won't overwrite existing DNS records that were generated by another process/server.
  • You’ll obviously need to change the domain names and IP addresses to match your environment.
  • If you haven't worked with bind9 before, this file probably looks pretty cryptic to you.  If so, I would recommend reading up on the SOA record (defined in the first part of the file).  The balance of the file (i.e. the record definitions) is pretty straightforward.

The reverse zone (for the 10.10.7/24 network in my examples) should look like this …

asweemer@cincylab-rtr1:/etc/bind/zones$ more
$TTL    1h
; placeholder names again; substitute your own
@       IN      SOA (
                        2009060501      ; serial
                        1d              ; refresh
                        1d              ; retry
                        4w              ; expire
                        1h )            ; minimum
        IN      NS
        IN      NS
25      IN      PTR
26      IN      PTR

I mixed it up just a bit in this file to point out a few different ways to configure a zone file.  In this file, notice the following differences:

  • The $ORIGIN directive sets the domain name to be appended to any unqualified records.  If the $ORIGIN directive doesn’t exist (as it doesn’t in the first config file), then it is implicitly defined by the zone name.
  • The time variables can be defined with d (day), w (week), h (hour), etc.

That's about it for DNS.  Once you've got your bind9 server configured, restart it (sudo /etc/init.d/bind9 restart).  And of course, be sure to test your configuration by using the standard DNS tools (e.g. dig, nslookup).  If you get errors, pay careful attention to your local syslog file (probably located at /var/log/syslog) as that's where DNS and DHCP errors are typically written.
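A couple of the checks I typically run, using placeholder names from the examples above.  named-checkconf and named-checkzone validate the syntax before a restart, and dig exercises the running server:

```
asweemer@cincylab-rtr1:~$ named-checkconf
asweemer@cincylab-rtr1:~$ named-checkzone /etc/bind/zones/
asweemer@cincylab-rtr1:~$ dig @localhost +short
asweemer@cincylab-rtr1:~$ dig @localhost -x +short
```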

OK, next up is configuring our DHCP server.  And once again, this post is starting to get way too long, so it looks like I'll need a fourth and (hopefully) final post for this section.

View 3.1 HID Filtering

With the release of View 3.1 we received some more flexibility with presenting/hiding Human Interface Devices (think foot pedals for a transcriptionist, some types of barcode scanners, etc).

HID devices are filtered out by default; it would be a bad thing, for example, if your local mouse were redirected to the remote desktop.  So to enable a specific device to be passed through, we need to do a few things:

1:  First we need to determine the VID/PID of the HID device.  There are two ways to determine this:

a:  Debug Logs:  Go to C:\Documents and Settings\All Users\Application Data\VMware\VDM\logs.  Search through the log for "Devices".  This will contain the information for all the devices available before filtering takes place.

b:  Windows Device Manager:  Open Windows Device Manager and find the HID device you are interested in.  Right-click -> Properties on the object, go to the Details tab, and you will see a drop-down.  Choose Device Instance ID from the drop-down and the VID/PID value will be displayed.  Usually this looks something like USB\VID_xxxx&PID_xxxx\…

2:  Now that we know the VID/PID we can go to the client and create the appropriate registry keys to tell the View Client to pass that particular HID device through:

a:  Go to HKLM\Software\VMware, Inc.\VMware VDM\USB\

b:  Create a new Multi-String Value named AllowHardwareIDs

c:  Set the value data to the VID_xxxx&PID_xxxx you documented earlier

d:  Restart the client and things should work upon the next connection
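If you need to push this to several clients, the same change can be scripted with reg.exe.  The VID/PID below is a made-up example; substitute the value you pulled from the logs or Device Manager:

```
C:\>reg add "HKLM\Software\VMware, Inc.\VMware VDM\USB" /v AllowHardwareIDs /t REG_MULTI_SZ /d "VID_1234&PID_5678"
```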

Special Thanks to Pete Barber for this info!

The E.T.D.F. Series — Setting up the Network and Dedicated Remote Access (Part 2)

I got up early to get some work done before driving down to KY to work with a customer on their VDI pilot.  As I was preparing for the meeting, I thought to myself, "wow, with my studies for the VCDX admin exam and the recent launch of vSphere, I haven't done a whole lot of VDI lately."  And then BAM!  It hit me like a ton of bricks.  I haven't completed the E.T.D.F. series I started almost six months ago!  If this blog has done anything for me, it has made me painfully aware of my numerous character flaws. <sniffle><tear><sniffle>

Anyway, not that anyone is following along anymore, and purely in the interest of self improvement, I’m determined to finish what I’ve started (both for this series and my VCDX study notes series).  So without further delay, here is the second part of the networking section (here is part one).  And for your convenience, here again is the Visio diagram of my lab.

Router / Firewall (cincylab-rtr1)

In my environment, the vast majority of the relevant network configuration is on cincylab-rtr1, which is really an old Gateway PC I had lying around with a single 2.2GHz processor, 1GB of RAM, and a single NIC.  I installed Ubuntu server 8.04.1 (kernel 2.6.24-19-server) on it and made it the gateway between my lab and the DMZ (aka, my home network).

The first thing I needed to do was get the basic networking on the server set up.  I have three networks in my house …

  1. An external DMZ, VLAN 192 (aka, my home network)
  2. An internal “production” network, VLAN 10.  I put the word production in bunny ears because nothing is *really* production … it’s all just a lab.  But I try to protect this network a little more than the next.
  3. An internal “lab” network.  This is where I can really have fun!

Configuring the Interface(s)
The server only has one NIC and I was too lazy (and cheap) to go buy a new one.  But it’s a 1GigE card, which is plenty for my environment.  And it’s a snap to configure …

root@cincylab-rtr1:/etc/network# more interfaces

# addresses below are examples; substitute your own
auto lo
iface lo inet loopback

auto vlan10
auto vlan192
auto vlan10:1

iface vlan192 inet static
        mtu 1500
        vlan_raw_device eth1

iface vlan10 inet static
        mtu 1500
        vlan_raw_device eth1

iface vlan10:1 inet static
        mtu 1500
        vlan_raw_device eth1

Turn on Routing

Now that the interfaces are configured, we need to turn on routing.  In Linux, this can be accomplished a couple different ways.  The easiest, IMHO, is to simply edit the /etc/sysctl.conf file and set net.ipv4.ip_forward=1.  You could also add echo 1 > /proc/sys/net/ipv4/ip_forward to your /etc/rc.local file.  Either way should turn on IPv4 routing on your server.
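Either way, you can confirm the kernel actually has forwarding enabled by reading the flag back (no root needed to read it):

```shell
# prints 1 when IPv4 forwarding is on, 0 when it is off
cat /proc/sys/net/ipv4/ip_forward
```

If you went the sysctl.conf route, sudo sysctl -p will apply the change without a reboot.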

Configure NAT and PAT (Port Address Translation)

Once routing is turned on, we need to set up Network Address Translation and Port Address Translation.  This needs to be done for two reasons.

  1. My lab networks need outside access to the Internet and they have private IP addresses.
  2. My server has a single IP address in the DMZ, which needs to serve as the gateway IP for multiple internal IPs and TCP/UDP ports.  As an example, I want all traffic arriving on the DMZ address (, a placeholder, like all the addresses in this section), TCP port 8080, to be forwarded to the internal IP, port 80.  And more specifically, here's what I want available to the outside world …
    • –>
    • –>
    • –>
    • –>

OK, so how do we do this?  We need to configure iptables.  An iptables tutorial is out of scope for this post, but there are plenty of good ones freely available online if you'd like to learn more.

To set up iptables, I’ve created a file called fw_rules in /usr/local/bin and made it executable (chmod +x /usr/local/bin/fw_rules).  Here is what the file looks like.

root@cincylab-rtr1:/usr/local/bin# cat fw_rules
#!/bin/sh

# flush any existing rules in the nat and filter tables
iptables -t nat -F
iptables -t filter -F

# forward (DNAT) outside connections arriving in the DMZ to internal servers
# (all addresses are placeholders; substitute your own)
iptables -t nat -A PREROUTING -p tcp -i vlan192 -d --dport 8080 -j DNAT --to-destination
iptables -t nat -A PREROUTING -p tcp -i vlan192 -d --dport 8181 -j DNAT --to-destination
iptables -t nat -A PREROUTING -p tcp -i vlan192 -d --dport 8282 -j DNAT --to-destination
iptables -t nat -A PREROUTING -p tcp -i vlan192 -d --dport 8383 -j DNAT --to-destination

# masquerade outbound traffic from the lab networks behind the DMZ address
iptables -t nat -A POSTROUTING -o vlan192 -j MASQUERADE

iptables -t filter -P INPUT ACCEPT
iptables -t filter -P OUTPUT ACCEPT
iptables -t filter -P FORWARD ACCEPT


To make these changes persistent across reboots, you’ll need to add /usr/local/bin/fw_rules to your /etc/rc.local file.

Now, for all you Linux experts out there looking at this file, you're probably saying "uh, that's a pretty insecure firewall you've got there!"  And you'd be right 🙂  Remember, this is merely an internal firewall/router which is protected by a much more secure, Internet-facing Cisco ASA (thanks again to the local Cisco team!!).  It's also this Cisco that forwards outside connections on specific ports (not 8080, 8181, and 8282, for additional security) to this server's DMZ IP address.  And because of this, my goal for this server isn't to protect, but to separate my lab networks from my home network, and proxy the connections between them.

What’s Next?
We’ve configured our networks, turned on routing and configured NAT / PAT on our server.  What next?  Three things:

  1. Because my external IP is dynamic, we need to set up a script that will periodically check to see if our external IP has changed and, if so, update our dynamic DNS service.
  2. Configure DHCP and DNS.
  3. Set up external VPN access.

Step three is actually optional because when we’re done, we’ll be able to tunnel via SSL to our desktop.  And from our desktop, we’ll have full access to the local LAN.  But sometimes full remote access via VPN is nice without being forced to first “hop” to another desktop.  So, I’ll include my VPN configuration as well.

But for now, it looks like I'm going to need a part three of this "setting up the network and dedicated remote access" section, because I need to get on the road down to KY.  But if you're interested, look for part three later today.  I'm almost done with it and will try to finish it during my lunch break.

The Virtualization Capability of Your Processor is Already in Use.

I haven’t had much exposure to KVM yet, so over the weekend I decided to check it out on my laptop.  After playing around with it a bit, I needed to power on an instance of VMware Workstation, and the following error popped up on my screen …

The virtualization capability of your processor is already in use.  Disable any other running hypervisors before running VMware Workstation.

Well that makes sense.  So I uninstalled KVM and the issue was resolved … or so I thought.  This morning as I powered on another instance of VMware Workstation, I got the same error again.  Hmmmm.  That was a bit more confusing because, to my knowledge, KVM was completely removed from my system.  Again, so I thought.  But a quick look at the currently loaded kernel modules revealed both the kvm_intel and the kvm modules.

As it turns out, when you remove KVM via apt-get (meaning, this *could* be a debian / ubuntu issue, not sure if other package managers do the same thing), it doesn’t actually completely remove itself.  The kvm and kvm_intel modules not only remain, but they continue to get loaded upon startup.  When I removed the modules, my VMware Workstation powered on without issue.

So then, that voice inside of my head — the one I should NEVER listen to — said “I wonder what happens when you load the KVM modules after you’ve powered on your VM?”  I *knew* it could only lead to bad things, but I couldn’t help myself.  Guess what?  Not only did VMware Workstation completely freeze, but now I can’t power on my VM, no matter what I try.   Grrrr.  I swear, someday I’ll be a news headline that reads … An eyewitness confirms his last words were, “I wonder what this button does?”

Anyway, if you get this error, check for the kvm modules (lsmod | grep kvm should do the trick).  Removing the modules will fix the issue.
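For reference, here's the sequence on my Ubuntu laptop; the blacklist file path can vary by distro:

```
# check whether the kvm modules are loaded
$ lsmod | grep kvm
# unload them (kvm_intel first, since it depends on kvm)
$ sudo rmmod kvm_intel kvm
# optionally keep them from loading again at boot
$ echo "blacklist kvm" | sudo tee -a /etc/modprobe.d/blacklist
$ echo "blacklist kvm_intel" | sudo tee -a /etc/modprobe.d/blacklist
```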