VCDX Admin Exam Notes — Section 1.1

I finally got a chance to sit down and reformat some of my notes for the VCDX Admin Exam.  Below are my notes for Section 1.1 of the VMware Enterprise Administration Exam Blueprint v3.5.  Everything in blue is a direct cut and paste from the exam blueprint.

Oh, and thanks to VirtualizationTeam (Blog) for the Disqus comment letting me know that Peter van den Bosch has a more recent version of his VMware Enterprise Administration Exam Study Guide 3.5.


Section 1 – Storage

Objective 1.1 – Create and Administer VMFS datastores using advanced techniques.


Describe how to identify iSCSI, Fibre Channel, SATA and NFS configurations using CLI commands and log entries

Here are a few command line examples that I believe would work well …

1)  esxcfg-mpath -l
This command produces the following output on my server:


[root@cincylab-esx3 root]# esxcfg-mpath -l

Disk vmhba0:0:0 /dev/sdb (152627MB) has 1 paths and policy of Fixed

Local 0:31.2 vmhba0:0:0 On active preferred

Disk vmhba32:0:0 /dev/sda (152627MB) has 1 paths and policy of Fixed

Local 0:31.2 vmhba32:0:0 On active preferred

Disk vmhba35:0:0 /dev/sdc (923172MB) has 1 paths and policy of Fixed

iScsi sw<-> vmhba35:0:0 On active preferred

2)  esxcfg-info -s

The -s flag narrows the scope of the output to just storage and disk related info.  But even with the narrowed scope, this command produces far too much output to display here.  You’ll likely want to pipe the output into grep, or at a minimum through more or less, to find what you’re looking for.
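As a toy illustration of that grep pipeline: the function below stands in for esxcfg-info -s (the sample lines are invented, not real output), but the filtering pattern is the same one you’d use on a live host.

```shell
# Hypothetical stand-in for `esxcfg-info -s` -- a few invented sample lines.
sample_info() {
  cat <<'EOF'
      |----Console Device............../dev/sdb
      |----Vendor.....................ATA
      |----Model......................ST3160812AS
      |----Queue Depth................31
EOF
}

# Narrow the flood of output to just the lines you care about.
sample_info | grep -i 'vendor\|model'
```

On a real ESX host you’d simply replace `sample_info` with `esxcfg-info -s`.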

3)  cat /var/log/vmkernel | grep vmhba | tail -10

This will search the vmkernel log file and display the last 10 lines containing the text vmhba.  If you want more (or fewer) lines, change the -10 to whatever suits your needs.

I found this one particularly useful when you’ve enabled the software iSCSI initiator at the command line but don’t yet know what number has been assigned to the vmhba (e.g. vmhba35).
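To pull just the adapter number out of a log line, grep -o can extract the match itself. The log line below is a made-up sample standing in for a real /var/log/vmkernel entry:

```shell
# Invented sample log line (real vmkernel entries will differ in format).
logline='vmkernel: iSCSI: registered adapter vmhba35'

# -o prints only the matching text, not the whole line.
echo "$logline" | grep -o 'vmhba[0-9]*'
```

Against the live log, the equivalent would be something like `grep -o 'vmhba[0-9]*' /var/log/vmkernel | tail -1`.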

4)  esxcfg-vmhbadevs -m  and  ls -lah /vmfs/volumes

The command esxcfg-vmhbadevs -m will show the mapping between vmhba numbers, device files and their UUIDs.  If you’d like a quick and easy way to see which UUIDs map to which human-readable names, you can follow that up with an ls -lah /vmfs/volumes.  The two commands back to back produce the following output on my server:

[root@cincylab-esx3 root]# esxcfg-vmhbadevs -m
vmhba35:0:0:1   /dev/sdc1                        4986310d-6525e5e6-ebbd-00237d0681e7
vmhba0:0:0:3    /dev/sdb3                        49e115fb-3e22358c-c10a-00237d0681e7
vmhba32:0:0:1   /dev/sda1                        4985c53e-e7b1904f-5042-00237d0681e7


[root@cincylab-esx3 root]# ls -lah /vmfs/volumes/
total 10M
drwxr-xr-x    1 root     root          512 Apr 20 23:07 .
drwxrwxrwt    1 root     root          512 Apr 11 18:12 ..
drwxr-xr-t    1 root     root         1.2K Feb  1 21:34 4985c53e-e7b1904f-5042-00237d0681e7
drwxr-xr-t    1 root     root         3.7K Apr 14 14:49 4986310d-6525e5e6-ebbd-00237d0681e7
drwxr-xr-t    1 root     root          980 Apr 11 18:13 49e115fb-3e22358c-c10a-00237d0681e7
lrwxr-xr-x    1 root     root           35 Apr 20 23:07 cincylab-esx3:storage1 -> 4985c53e-e7b1904f-5042-00237d0681e7
lrwxr-xr-x    1 root     root           35 Apr 20 23:07 cincylab-esx3:storage2 -> 49e115fb-3e22358c-c10a-00237d0681e7
lrwxr-xr-x    1 root     root           35 Apr 20 23:07 vol1 -> 4986310d-6525e5e6-ebbd-00237d0681e7
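The friendly names in that listing are just symlinks, so resolving a name to its UUID is a matter of reading the link. As a sketch, the loop below does exactly that; a scratch directory (with one of the UUIDs from above) stands in for /vmfs/volumes so it can be shown without an ESX host:

```shell
# Scratch directory standing in for /vmfs/volumes.
vols=$(mktemp -d)
mkdir "$vols/4985c53e-e7b1904f-5042-00237d0681e7"
ln -s "$vols/4985c53e-e7b1904f-5042-00237d0681e7" "$vols/cincylab-esx3:storage1"

# Print "friendly-name -> UUID" for every symlink in the volumes directory.
for link in "$vols"/*; do
  if [ -L "$link" ]; then
    printf '%s -> %s\n' "$(basename "$link")" "$(basename "$(readlink "$link")")"
  fi
done

rm -rf "$vols"
```

On a real host, point the loop at /vmfs/volumes instead of the scratch directory.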

5)  vmkiscsi-ls

This one only applies to iSCSI storage, of course, and produces the following output on my server:

[root@cincylab-esx3 root]# vmkiscsi-ls

        SFNet iSCSI Driver Version … 3.6.3 (27-Jun-2005 )
TARGET NAME             :
TARGET ALIAS            :
HOST NO                 : 4
BUS NO                  : 0
TARGET ID               : 0
SESSION STATUS          : ESTABLISHED AT Sun Apr 12 11:35:09 2009
NO. OF PORTALS          : 1
PORTAL ADDRESS 1        :,1
SESSION ID              : ISID 00023d000001 TSIH 1400

Describe the VMFS file system

There are many subsections here; before digging into each one, check out the following three links …

The simple definition of metadata is “data about data.”  All file systems handle metadata differently.  VMFS uses metadata, stored in a special area of each volume, to manage all the files, directories (in VMFS-3 only), and attributes of the volume.  VMFS is a clustered file system, meaning more than one ESX server can access the same file system at the same time.  Therefore, an update to the metadata requires locking the LUN with a SCSI reservation.

Multi-access and locking
The following was taken from Advanced VMFS Configuration and Troubleshooting.

  Distributed Lock handling by VMFS3

  • Done in-band
  • Hosts mount a VMFS3 volume
  • Hosts’ ids posted to heartbeat region
  • Heartbeat records are updated at regular intervals by hosts
  • Host X locks a file; the lock is associated with its ID
  • If Host X dies or loses access to the volume, the file lock becomes stale
  • Host Z attempts to lock the same file, which is still locked
  • Host Z checks the heartbeat record of Host X (~5 times)
  • If Host X’s heartbeat record is not updated, Host Z will age the lock
  • All other hosts yield to Host Z and do not attempt to lock the file
  • The lock is broken and Host Z acquires it
  • The journal is replayed by Host Z
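The aging decision in the list above boils down to comparing the owner’s last heartbeat timestamp against a window of missed updates. This is an illustrative sketch only, not actual VMFS code, and all the numbers are invented:

```shell
# Returns success (0) when the owning host has missed $4 consecutive
# heartbeat updates -- i.e. the lock can be aged and broken.
heartbeat_stale() {
  now=$1; last_hb=$2; interval=$3; retries=$4
  [ $(( now - last_hb )) -ge $(( interval * retries )) ]
}

# Owner last heartbeated at t=80; now t=100; interval 3s, 5 retries.
if heartbeat_stale 100 80 3 5; then
  echo "owner silent: age the lock, break it, replay the journal"
else
  echo "owner alive: back off and retry"
fi
```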

Extents are logical extensions of a file system.  They are typically used to grow a volume beyond the size limit of a single LUN.  Essentially, an extent is the “joining” of two or more LUNs into a single, logical VMFS volume.

Tree structure and files
The VMFS partition is mounted to the directory with the corresponding UUID found in /vmfs/volumes.  The human-readable name of the volume is merely a symbolic link to that directory.  By default, all VMs are given a directory at the root of the partition.  So, for example, a VM with the name AaronSweemer would have the directory /vmfs/volumes/<UUID>/AaronSweemer.  In this directory you will find all files specific and relevant to that VM.  This is only the default behavior, as some (not all) of these files can be configured to reside elsewhere.

Here is a table of common files found on the VMFS file system. 

Extension   Usage
.dsk        VM disk file
.vmdk       VM disk file
.hlog       VMotion log file
.vswp       Virtual swap file
.vmss       VM suspend file
.vmtd       VM template disk file
.vmtx       VM template configuration file
.REDO       Files used when a VM is in REDO mode
.vmx        VM configuration file
.log        VM log file
.nvram      Nonvolatile RAM

From Wikipedia …

A journaling file system is a file system that logs changes to a journal (usually a circular log in a dedicated area) before committing them to the main file system. Such file systems are less likely to become corrupted in the event of power failure or system crash.

Explain the process used to align VMFS partitions 

The following procedure was found in VMware Enterprise Administration Exam study guide 3.5 (page 5) and Advanced VMFS Configuration and Troubleshooting (slide 36).

Aligned partitions start at sector 128. If the Start value is 63 (the default), the partition is
not aligned. If you choose not to use the VI Client and create partitions with
vmkfstools, or if you want to align the default installation partition before use, take
the following steps to use fdisk to align a partition manually from the ESX Server
service console:
1. Enter fdisk /dev/sd<x>, where <x> is the device suffix.
2. Determine whether any VMware VMFS partitions already exist. VMware VMFS
partitions are identified by a partition system ID of fb. Type d to delete
these partitions.
Note: This destroys all data currently residing on the VMware VMFS partitions you delete.
3. Ensure you back up this data first if you need it.
4. Type n to create a new partition.
5. Type p to create a primary partition.
6. Type 1 to create partition No. 1.
Select the defaults to use the complete disk.
7. Type t to set the partition’s system ID.
8. Type fb to set the partition system ID to fb (VMware VMFS volume).
9. Type x to go into expert mode.
10. Type b to adjust the starting block number.
11. Type 1 to choose partition 1.
12. Type 128 to set it to 128 (the array’s stripe element size).
13. Type w to write label and partition information to disk.
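Once the partition is written, you can verify alignment non-interactively by checking the Start column. The here-doc below is an invented sample standing in for `fdisk -lu /dev/sd<x>` output; the awk/test logic is the part that carries over:

```shell
# Invented stand-in for `fdisk -lu /dev/sd<x>` output.
fdisk_sample() {
  cat <<'EOF'
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1             128  1890775151   945387512   fb  Unknown
EOF
}

# Pull the Start sector of the partition line and check it.
start=$(fdisk_sample | awk '/^\/dev\//{print $2}')
if [ "$start" -eq 63 ]; then
  echo "start=$start: not aligned"
else
  echo "start=$start: aligned"
fi
```

On a live host, replace `fdisk_sample` with `fdisk -lu /dev/sd<x>`.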

Explain the use cases for round-robin load balancing

Multipathing is typically used for failover: if one storage path becomes unavailable, the host can fail over to an alternate path.  However, multipathing can also be used in a round-robin fashion to load balance across paths and achieve better utilization of the HBAs.  There are a couple of configurable options that specify when an ESX server switches paths.  From the Round-Robin Load Balancing technical note …

When to switch – Specify that the ESX Server host should attempt a path switch after a specified number of I/O blocks have been issued on a path or after a specified number of read or write commands have been issued on a path. If another path exists that meets the specified path policy for the target, the active path to the target is switched to the new path. The --custom-max-commands and --custom-max-blocks options specify when to switch.

Which target to use – Specify that the next path should be on the preferred target, the most recently used target, or any target. The --custom-target-policy option specifies which target to use.

Which HBA to use – Specify that the next path should be on the preferred HBA, the most recently used HBA, the HBA with the minimum outstanding I/O requests, or any HBA. The --custom-HBA-policy option specifies which HBA to use.
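To make the “when to switch” behavior concrete, here is a toy simulation in plain shell (not an ESX command) of what --custom-max-commands describes: after N commands on the active path, rotate to the other one. The path names are invented:

```shell
# Switch paths after this many commands (what --custom-max-commands sets).
max_cmds=3
current="vmhba1:0:1"; other="vmhba2:0:1"   # invented path names
issued=0
switches=0

# Issue seven I/O commands and watch the active path rotate.
for io in 1 2 3 4 5 6 7; do
  issued=$((issued + 1))
  if [ "$issued" -ge "$max_cmds" ]; then
    tmp=$current; current=$other; other=$tmp   # rotate to the next path
    issued=0
    switches=$((switches + 1))
    echo "I/O $io: switched active path to $current"
  fi
done
```

With max_cmds=3 and seven commands, the path switches twice (after the 3rd and 6th commands), which is the round-robin spreading of load the technical note describes.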

Skills and Abilities

Perform advanced multi-pathing configuration

  • Configure multi-pathing policy
  • Configure round-robin behavior using command-line tools
  • Manage active and inactive paths

Setting the Path Switching Policy
You can set the path-switching policy for failover and for load balancing by using the esxcfg-mpath command.

You can set the path switching policy on a per-LUN basis by using the esxcfg-mpath command’s --policy custom option. If you specify --policy custom, you must also specify one of the custom policy options. Because the path switching policy is set on a per-LUN basis, you must always specify the LUN using the --lun option.


If you set the --custom-max-blocks and --custom-max-commands options, the system attempts to switch paths as soon as one of the limits is reached.

If you set the target or the HBA policy to preferred, the system chooses the target or the HBA of the preferred path when possible. If a preferred policy is set on an active/passive SAN array, and the preferred target is not on the active SP (Storage Processor), the system does not select the preferred target but a target on the active SP.

Path switching is not performed if there is an outstanding SCSI reservation on the target or if a path failover is underway; it is delayed until no reservations or failovers are pending and an I/O request is issued.


 Configure and use NPIV HBAs

<<I don’t have NPIV in my lab.  Need to revisit this section>>

Manage VMFS file systems using command-line tools

The command-line tool you’ll use for managing VMFS file systems is vmkfstools.  It’s a very powerful tool with many options available, so I suggest you read the man page.  The following examples (taken from the online documentation) are certainly not exhaustive, just a quick sample of what the tool can do.

Example for Creating a VMFS File System
vmkfstools -C vmfs3 -b 1m -S my_vmfs /vmfs/devices/disks/vmhba1:3:0:1

Example for Extending a VMFS-3 Volume
vmkfstools -Z /vmfs/devices/disks/vmhba0:1:2:1 /vmfs/devices/disks/vmhba1:3:0:1

Upgrading a VMFS-2 to VMFS-3
-T (--tovmfs3) -x (--upgradetype) [zeroedthick|eagerzeroedthick|thin]

Example for Creating a Virtual Disk
vmkfstools -c 2048m /vmfs/volumes/myVMFS/rh6.2.vmdk

Example for Cloning a Virtual Disk
vmkfstools -i /vmfs/volumes/templates/gold-master.vmdk /vmfs/volumes/myVMFS/myOS.vmdk


 Configure NFS datastores using command-line tools

Assuming your NAS is configured properly, this is pretty easy.  The following command will mount an NFS datastore on an ESX host …

esxcfg-nas -a -o -s /nfs/share NAS

In this example, the -a flag adds the datastore, the NAS host’s address follows the -o flag, and the exported share follows the -s flag; NAS is the datastore label.  Upon successfully adding the datastore, the NFS mount will be found at /vmfs/volumes/NAS

The following command will remove the datastore:

esxcfg-nas -d NAS

Configure iSCSI hardware and software initiators using command-line tools

I don’t know if I’ve seen an official, formal example of how to do this (though I’m sure it exists somewhere).  So, here’s how I do it …

Step 1:  Add the portgroup to vSwitch0 
esxcfg-vswitch --add-pg=VMkernel vSwitch0

Step 2:  Add the IP to the VMkernel portgroup
esxcfg-vmknic -a -i -n VMkernel

Step 3:  Enable iSCSI
esxcfg-swiscsi -e

Step 4:  Add the target
vmkiscsi-tool -D -a vmhba34

Step 5:  Rescan the HBA
esxcfg-rescan vmhba34


That’s it for section 1.1 … time to go reformat my notes for section 1.2!

