VCDX Admin Exam Notes – Section 1.2

Last week I was in San Francisco with most (maybe all) of the VMware field technical folks for a three-day technical summit.  One of the evenings we had an awards ceremony and dinner.  And guess what?  The first eight VCDX certifications ever to be awarded were announced.

Now, VMware is a pretty big company, so I didn’t recognize seven of the eight names.  But I definitely recognized one of them.  Well, that is, I should say I recognized his name.  Having never officially met him face to face, I couldn’t pick him out of a crowd of two.  You might know him as the rock star blogger from Yellow Bricks.  Congratulations Duncan Epping!  I believe he said he is VCDX number seven and the first VCDX in Europe.  Very cool.  And well deserved, for sure.  🙂

I’m a little behind Duncan.  I’m scheduled to take the Admin Exam later this month, which is the first of two exams.  Then I’ll have to present and defend a successful design and deployment before a jury … er, I mean, panel of my peers.

Anyway, here are my notes for section 1.2 of the VMware Enterprise Administration Exam Blueprint v3.5. (Section 1.1 can be found here).  Everything in Blue is a direct cut and paste from the exam blueprint.

Objective 1.2 – Implement and manage complex data security and replication


Describe methods to secure access to virtual disks and related storage devices

  • Distributed Lock Handling

    In the graphic below, notice how each ESX server sees and has access to the same LUN? This is achieved via VMFS, a clustered file system which leverages distributed file locking to allow multiple hosts to access the same storage.  When a Virtual Machine is powered on, VMFS places a lock on its files, ensuring no other ESX server can access them.
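The locking behavior above can be sketched as a toy model: power-on takes locks on the VM's files, and any other host that tries to grab them is refused.  This is purely illustrative (the real on-disk VMFS lock protocol is far more involved), and the host and file names are made up.

```python
# Toy model of VMFS distributed file locking (illustrative only; the real
# on-disk lock protocol is far more involved). Names are hypothetical.

class Vmfs:
    def __init__(self):
        self.locks = {}  # file path -> host currently holding the lock

    def power_on(self, host, vm_files):
        """Acquire locks on a VM's files; refuse if another host holds any."""
        for f in vm_files:
            owner = self.locks.get(f)
            if owner is not None and owner != host:
                raise RuntimeError(f"{f} is locked by {owner}")
        for f in vm_files:
            self.locks[f] = host

    def power_off(self, host, vm_files):
        """Release only the locks this host actually holds."""
        for f in vm_files:
            if self.locks.get(f) == host:
                del self.locks[f]

datastore = Vmfs()
datastore.power_on("esx1", ["vm1.vmx", "vm1.vmdk"])

# A second host attempting to power on the same VM is refused:
try:
    datastore.power_on("esx2", ["vm1.vmx", "vm1.vmdk"])
except RuntimeError as e:
    print(e)  # vm1.vmx is locked by esx1
```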

Identify tools and steps necessary to manage replicated VMFS volumes

  • Resignaturing
    First, there’s a really good article on VMFS resignaturing by Duncan (go figure).  Also, Chad Sakac over at Virtual Geek has a great article too.  I’m not going to reinvent the wheel, so make sure you read their posts.  You’ll need to understand this.  For the exam, you’ll certainly need to know the following …

    The following is from the Fibre Channel SAN Configuration Guide:

    EnableResignature=0, DisallowSnapshotLUN=1 (default)
    In this state:

      • You cannot bring snapshots or replicas of VMFS volumes made by the array into the ESX Server host, regardless of whether or not the ESX Server has access to the original LUN.
      • LUNs formatted with VMFS must have the same ID for each ESX Server host.

    EnableResignature=1 (DisallowSnapshotLUN is not relevant)
    In this state, you can safely bring snapshots or replicas of VMFS volumes into the same servers as the original, and they are automatically resignatured.

    EnableResignature=0, DisallowSnapshotLUN=0
    This is similar to ESX 2.x behavior.  In this state, the ESX Server assumes it sees only one replica or snapshot of a given LUN and never tries to resignature.  This is ideal in a DR scenario where you are bringing a replica of a LUN to a new cluster of ESX Servers, possibly at another site that does not have access to the source LUN.  In such a case, the ESX Server uses the replica as if it were the original.

Do not use this setting if you are bringing snapshots or replicas of a LUN into a server
with access to the original LUN. This can have destructive results including:

  • If you create snapshots of a VMFS volume one or more times and dynamically
    bring one or more of those snapshots into an ESX Server, only the first copy is
    usable. The usable copy is most likely the primary copy. After reboot, it is
    impossible to determine which volume (the source or one of the snapshots) is
    usable. This nondeterministic behavior is dangerous.
  • If you create a snapshot of a spanned VMFS volume, an ESX Server host might
    reassemble the volume from fragments that belong to different snapshots. This can
    corrupt your file system.
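The three states above boil down to a small decision table.  Here's a sketch of that logic distilled from the Fibre Channel SAN Configuration Guide text quoted above (the function name is mine, not VMware's):

```python
# Sketch of how the two LVM advanced settings decide what happens when a
# LUN that looks like a snapshot/replica is presented to an ESX host.
# Distilled from the Fibre Channel SAN Configuration Guide; illustrative only.

def snapshot_lun_action(enable_resignature, disallow_snapshot_lun):
    if enable_resignature == 1:
        # DisallowSnapshotLUN is not relevant; the volume is resignatured.
        return "resignature"
    if disallow_snapshot_lun == 1:
        # Default: snapshots/replicas are not brought into the host.
        return "ignore"
    # ESX 2.x-style: treat the replica as if it were the original.
    return "mount as original"

print(snapshot_lun_action(0, 1))  # ignore (the default)
print(snapshot_lun_action(1, 0))  # resignature
print(snapshot_lun_action(0, 0))  # mount as original
```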

Skills and Abilities

Configure storage network segmentation

  • FC Zoning
    Zoning provides access control in the SAN: devices in a zone are visible only to other members of that zone.  It is commonly used to do things like group ESX servers into production/test/dev, increase security, and decrease traffic, among other things.
    Storage segmentation for IP storage can be accomplished in one of two ways: VLANs or physical segmentation (i.e. separate layer 2 switches for storage).
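The zoning rule is simple enough to sketch: two ports can see each other only if they share at least one zone.  The zone names and WWPNs below are entirely made up for illustration.

```python
# Minimal sketch of FC zoning semantics: an initiator can see a target
# only if both WWPNs are members of a common zone. All values are made up.

zones = {
    "prod_zone": {"10:00:00:00:c9:11:11:11", "50:06:01:60:aa:aa:aa:aa"},
    "test_zone": {"10:00:00:00:c9:22:22:22", "50:06:01:60:bb:bb:bb:bb"},
}

def can_see(wwpn_a, wwpn_b):
    """True if the two ports share membership in at least one zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

# The prod HBA sees the prod array port, but not the test array port:
print(can_see("10:00:00:00:c9:11:11:11", "50:06:01:60:aa:aa:aa:aa"))  # True
print(can_see("10:00:00:00:c9:11:11:11", "50:06:01:60:bb:bb:bb:bb"))  # False
```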

Configure LUN masking

The Disk.MaskLUNs parameter should be used when you want to mask specific LUNs from your ESX host.  This is a useful option when you don’t want your ESX server to access a particular LUN, but are unwilling (or unable) to configure your FC switch.

To configure LUN masking in the VI Client go to Configuration –> Advanced Settings for the host you want to configure. You’ll find the Disk.MaskLUNs parameter under the section Disk.  It looks like this in my VI Client.


Enter a value in the following format … <adapter>:<target>:<comma separated LUN range list>. Be sure to rescan when you’re done and verify the mask has been properly applied.
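To make the format concrete, here's a hypothetical parser for a single entry of the <adapter>:<target>:<LUN range list> form described above.  The example value "vmhba1:0:3,5,7-9" is made up; use whatever adapter, target, and LUN numbers apply to your environment.

```python
# Hypothetical parser for a single Disk.MaskLUNs-style entry, e.g.
# "vmhba1:0:3,5,7-9" -> mask LUNs 3, 5, 7, 8 and 9 on vmhba1 target 0.
# Illustrative only; the entry value here is made up.

def parse_mask_entry(entry):
    adapter, target, lun_list = entry.split(":")
    luns = set()
    for part in lun_list.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            luns.update(range(int(lo), int(hi) + 1))  # expand a range like 7-9
        else:
            luns.add(int(part))
    return adapter, int(target), luns

print(parse_mask_entry("vmhba1:0:3,5,7-9"))
# ('vmhba1', 0, {3, 5, 7, 8, 9})
```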

Use esxcfg-advcfg
This one’s easy.  Just use the man page (type “man esxcfg-advcfg” at the command prompt).  It’ll tell you everything you need to know 🙂

Set Resignaturing and Snapshot LUN options
So, following along with the man page above, here is a cut and paste from my server …

[asweemer@cincylab-esx3 config]$ su -
[root@cincylab-esx3 root]# esxcfg-advcfg -s 0 /LVM/EnableResignature
Value of EnableResignature is 0
[root@cincylab-esx3 root]# esxcfg-advcfg -s 1 /LVM/EnableResignature
Value of EnableResignature is 1
[root@cincylab-esx3 root]#
[root@cincylab-esx3 root]# esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLun
Value of DisallowSnapshotLun is 0
[root@cincylab-esx3 root]# esxcfg-advcfg -s 1 /LVM/DisallowSnapshotLun
Value of DisallowSnapshotLun is 1
[root@cincylab-esx3 root]#

Manage RDMs in a replicated environment
RDMs can be created via the CLI with the following command …

vmkfstools -r /vmfs/devices/disks/vmhbaX:Y:Z:0 my-vm.vmdk

By default, the RDM will be created in Virtual Compatibility Mode.  But should you need and/or prefer Physical Compatibility Mode, use the -z flag with vmkfstools instead, or change it after the fact by editing the VMDK descriptor file and setting the createType value to vmfsPassthroughRawDeviceMap.

Use proc nodes to identify driver configuration and options
The proc filesystem is a pseudo filesystem — it’s not “real.”  It consumes no storage space and is used to access process and kernel information.  You’ll find quite a bit of valuable data and configuration options in the many subdirectories of /proc/vmware/config.  Here’s a quick example from my ESX server …

[asweemer@cincylab-esx3 LVM]$ pwd
/proc/vmware/config/LVM
[asweemer@cincylab-esx3 LVM]$ ls
DisallowSnapshotLun  EnableResignature
[asweemer@cincylab-esx3 LVM]$ cat EnableResignature
EnableResignature (Enable Volume Resignaturing) [0-1: default = 0]: 0
[asweemer@cincylab-esx3 LVM]$

Use esxcfg-module

Just like esxcfg-advcfg, use the man page.

