Back The F:\ Up!


I am sure it comes as no surprise to any of our readers that virtualization is not the exclusive full-time focus for most of us.  Most of us have a breadth of responsibility spanning gobs of infrastructure layers in our respective organizations.  One common pain point that most of us have is backups.

For many companies, backup is an afterthought.  It doesn’t contribute to the profitability of the company.  It doesn’t help you make more widgets in the same amount of time.  The result is often a backup system that gets neglected when it comes to budgets and spending.  Most of the time, even though we know the importance of backups, we’re okay with it taking a back seat.  After all, who wants to goof around with tape drives when there are cool new blades and SSD storage to play with?

It was this frame of mind that I found myself in on Tuesday of this week.  I had signed up for W. Curtis Preston’s Backup Central Live a while back on Stephen Foskett’s recommendation.  I knew it would be decent, as I had used Preston’s site for a long time as a valuable resource to help deal with those dreaded backup problems.  But when Monday came, I found myself wondering why the heck I signed up for this seminar.  I had so much work to do this week, and most of it was fun SAN and VMware planning and design stuff.  I didn’t have time for baaaaackups… Grrrr.

In the end, my boss was pumped about the seminar.  I knew I couldn’t back out without getting grief, so reluctantly, I made the 1.5-hour drive to Cary, NC for a full day of backups.  I knew Curtis would be a great speaker with good insight.  I have heard him many times on Infosmack, and I know from his blog posts that he knows his stuff.  I just wasn’t looking forward to a full day of vendor pitches between the valuable information.

Ultimately, I was impressed with the event, and it was far from a waste of time.  Even the vendor presentations were decent, and they kept to a reasonable time limit, so the pace was perfect.  I’ll give you a quick rundown of what I learned at this event.

Oftentimes we feel alone in our backup struggles.  At the seminar, there was wireless polling during the presentations, so we had real-time answers to our questions.  That alone was a fantastic change; I prefer it to raising my hand 48 times during a session.  From the polling data, I learned that I am not alone.  Many share in my misery.

  • 49% of attendees still do backups DIRECT TO TAPE.

So while we in that 49% think that no one hears our screams, at least now we know that we’re not the only ones screaming.  I think we all know that tape is not a suitable target for server backups.  The problem only gets worse as tape drives get faster.  Disk, at least as a staging area, is now a necessity for reliable backup to tape.

That said, Preston points out that tape is a long way from being displaced from the datacenter.  Tape is still 50x cheaper than disk, and more reliable for long-term data storage.  One fact I found enlightening was that hard disks are not designed or tested to store data long term while powered off.  This is something I had never thought about, and only a couple of companies, like ProStor, are trying to solve this problem.  Even if we solve for the reliability difference, it will likely be decades before we see a significant degree of cost parity (if ever).

A speaker from Cambridge Computer Services talked about new cool ways people are using tape as part of a tiered strategy for primary data.  Some are even using tape as a mirror for their primary storage.  Of course this requires a gateway appliance with plenty of cache, and good software, but the savings are real.

Another crucial area we touched on was archival, especially as it relates to electronic discovery (ED).  Almost NO ONE is doing this.  The vast majority are using their primary backup software and methodology for archival.  This is an expensive mistake if you are ever called upon to do discovery.  In addition to my own experience with ED, Preston tells a story of a client who spent millions to satisfy a single discovery request.

Apparently a single user’s e-mail for the past three years was requested.  As they were only doing normal Exchange backups, that meant restoring 156 different weekly Exchange backups, and then fishing for this guy’s mail.  It took an army of consultants working three shifts MONTHS to do this.  Since we live in a litigious world these days, it might be a good idea to get your ED and archival in order.  One product that was recommended at the seminar was Index Engines.  I haven’t had time to look at it yet, but it sounds brilliant!

One interesting statistic we saw in the polling data was that the majority of attendees had an overblown opinion of themselves when it comes to their own backup environments.  The majority said their backups ran well.  Preston’s experience tells quite a different story.  The scary part of this is that people don’t know that their backups suck.  They find out when it’s too late.

The most valuable part of this seminar was the discussion time at the end.  There were many interesting discussions around cloud backups, the AWS outage, and snapshots.  This tied together everything we had learned during the day.

There isn’t space in a single blog post to cover all the material from a full-day seminar, but I hope I’ve given you enough to help you decide to check this event out when it comes your way.  I have to hand it to the Backup Central Live crew for taking a topic that most people hate and turning it into a valuable day of learning.

VMware Newsletter for April 2011


Download the newsletter: VMware Newsletter April 2011

Welcome back!  I hope you found our first newsletter helpful in some way, shape, or form.  The newsletter seems to be getting larger and larger, which is a great thing.  It might soon qualify as a magazine rather than a newsletter.

We got some good feedback, so we are going to keep going with this for a while.  Please let us know via the comments section if you are enjoying it, would like to see different content, or just want to say hello.


Improving VMware Performance and Operations (Memory Bottlenecks?)

When talking about VMware virtualization bottlenecks, 9 out of 10 customers answer that their number one bottleneck is memory. Notice how I said bottleneck, not problem. This relates to capacity planning: trying to understand and right-size the environment so you can gauge when you need to order more physical infrastructure. Their number one problem is storage, which is quite a different story altogether, and I won’t be covering storage in this article (this time). Since memory is such a common point of discussion with my customers, I thought I would dig a little deeper on this topic and share some information around utilization and what it all means.


My customers typically track their utilization in the most common place in vSphere one might expect to find this information: the DRS Resource Distribution graph at the cluster level.



From the image displayed above, one might think that I am close to memory capacity and should look at ordering more hardware for my cluster. While in a general sense it might not be a bad idea to begin planning for growth, let’s take a closer look at what we are seeing.  Notice the blue informational icon telling us that the displayed information is based on memory consumption. Let’s mouse over the chart to get some more granular information and see what it means.




You can see in the above image that my Virtual Center VM is “consuming” ~4 GB of memory, but in reality the active memory being used is sitting at ~700 MB. DRS entitlement is a measurement that calculates the load or demand on the vSphere host/cluster over time, and then projects an average entitlement number for planning purposes. You can use the DRS entitlement numbers as a general planning/forecasting figure, but in this case you still have some capacity within the cluster.
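The consumed-versus-active gap above is easy to reason about once the numbers are in hand. Here is a minimal sketch (not VMware’s API) that assumes you have already exported per-VM consumed and active memory figures from vCenter into plain tuples; the sample values are illustrative, loosely echoing the ~4 GB consumed / ~700 MB active example in the text.

```python
# Flag VMs whose consumed memory far exceeds their active working set --
# candidates for right-sizing. Input: (vm_name, consumed_mb, active_mb) tuples
# you have exported yourself; this is not a vSphere API call.

def reclaim_candidates(vms, ratio=2.0):
    """Return names of VMs consuming more than `ratio` times their active memory."""
    return [name for name, consumed_mb, active_mb in vms
            if active_mb > 0 and consumed_mb / active_mb > ratio]

sample = [
    ("vcenter-vm", 4096, 700),   # consuming ~4 GB, only ~700 MB active
    ("web01",      2048, 1800),  # consumed roughly matches active
]
print(reclaim_candidates(sample))  # ['vcenter-vm']
```

The 2x threshold is an arbitrary starting point; tune it to your own comfort level before reclaiming anything.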

Now I wouldn’t be doing my job if I didn’t make you aware of an easier way to track this information by using software rather than brain power. For those of you that haven’t seen Capacity IQ yet, I would highly encourage you to evaluate the product. Capacity IQ was built for this specific reason, to help you understand when you will need to start thinking about more hardware. It can also help you run your environment more efficiently. There are some great reports that help you identify which virtual machines are not using the resources that were allocated to them.  Take them back!

Coming from an end-user position to a VMware systems engineer role, I can tell you that as your environment begins to grow, capacity management and planning become critical. I evaluated Capacity IQ when I was still on the customer side, and did a write-up if you are interested in my thoughts on the product.

Limiting Processors and Memory To Windows For POCs

As customers transition from phase 1 into phase 2 of their virtualization journey, they begin virtualizing business-critical applications. As they move into this phase, they often perform a proof of concept (POC) to understand how their application performs on a physical versus a virtual platform. Customers often ask for guidance on conducting a POC, and we talk to them about the importance of an apples-to-apples analysis. What I mean by this is making sure the physical server and the virtual machine are configured identically (or as identically as possible). One area where we often find differences is the number of processors a physical server has versus the number of virtual CPUs (vCPUs) you can assign to a virtual machine.  Using the Microsoft System Configuration utility, we can bring these two into alignment.

In our example, we will look at how to take a server that has 8 processors and 32 GB of RAM and configure this server to access 4 processors and 16 GB of RAM.

Below is a screenshot of the System Properties screen, accessible by clicking START > right-clicking COMPUTER > Properties.

From this screenshot we see the system has 8 processors and 32 GB of RAM.


Windows Task Manager, accessed by right-clicking the taskbar, displays the same information.


Since we have determined the baseline for this analysis will be 4 processors and 16 GB of RAM, we will move on to configuring this server using the System Configuration utility.

First, click START > RUN > and type msconfig


This will open the System Configuration dialog box, and in this box click the Boot tab. On the Boot tab, click the Advanced options… button.


On the BOOT Advanced Options dialog box, check the box next to Number of processors, and then use the drop-down to select the number of processors you want Windows to be able to access. In this case we selected 4.

Next, check the box next to Maximum memory and enter the amount of memory (in MB) you want Windows to access. In this case we entered 17,408 (17 × 1024), since we want the OS to have 16 GB of usable memory.
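The arithmetic behind that value can be sketched in a few lines. The 1 GB of headroom is my reading of the 17,408 figure in the text (the extra gigabyte presumably absorbs memory the OS/hardware reserves), not a documented rule, so adjust it to what your own system reports.

```python
# Compute the MB value to type into msconfig's "Maximum memory" field.
# headroom_gb is an assumption inferred from the 17,408 example above.

def msconfig_max_memory_mb(target_gb, headroom_gb=1):
    """MB value that should leave roughly target_gb usable to Windows."""
    return (target_gb + headroom_gb) * 1024

print(msconfig_max_memory_mb(16))  # 17408, the value entered above
```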


Once satisfied with the configuration, click OK to close the BOOT Advanced Options dialog box, click OK to close the System Configuration dialog box, and then click Restart to apply the configuration changes you just made.


After the system restarts, log in and open Computer Properties by clicking START > right-clicking COMPUTER > Properties.

As you can see from the screenshot below, the system now has 4 processors and 16 GB of usable RAM.


Windows Task Manager, accessed by right-clicking the taskbar, displays the same information.


To remove these settings, open the System Configuration dialog box by clicking START > RUN > typing msconfig.

Next, on the Boot tab, click the Advanced options… button.

Then uncheck the boxes next to Number of processors and Maximum memory.


Click OK on the System Configuration dialog box and then click Restart so your changes are applied. When the system reboots, it will return to its original configuration.
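For scripted POC setups, the same limits can, to my understanding, be applied from an elevated command prompt with bcdedit, which edits the same boot configuration data msconfig does. Treat this as a sketch: `numproc` mirrors the Number of processors checkbox, and I believe `truncatememory` (which takes a byte value) underlies the Maximum memory field; verify both against the bcdedit documentation before relying on them.

```shell
:: Limit Windows to 4 processors and ~17 GB of addressable memory
:: (18,253,611,008 bytes = 17,408 MB, matching the msconfig value above)
bcdedit /set {current} numproc 4
bcdedit /set {current} truncatememory 18253611008

:: Revert to the full hardware configuration
bcdedit /deletevalue {current} numproc
bcdedit /deletevalue {current} truncatememory
```

A reboot is still required after either change, just as with msconfig.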