VMConAWS is evolving on a weekly (maybe even daily) basis. So, not unexpectedly, one of the foundational components of VMC, NSX, has also undergone transformational change. The original release of VMConAWS was underpinned by NSX-v, a tried and true SDN solution. However, VMware recognized that NSX needed to support more than just vSphere, and that it also needed to meet the demands of web-scale environments like AWS, so NSX-T was born and is now the default for VMConAWS. Along with NSX-T, VMware expanded the connectivity options for VMConAWS.
With an NSX-v based VMC SDDC, connectivity to the Management Gateway (MGW) and the Compute Gateway (CGW) had to run across separate connections. In that scenario, only ESXi management, cold migration/network file copy (NFC), and live migration (vMotion) traffic are supported across a Direct Connect; all other traffic, such as management appliance traffic (e.g. vCenter, HCX Cloud Manager, etc.) and compute/workload traffic, is carried over a VPN connected to the public side of the SDDC VPC.
Now, with the NSX-T based VMC SDDC, all traffic to the Management Gateway (MGW) and the Compute Gateway (CGW) can be passed across a Direct Connect.
When switching to a Direct Connect only design, there are a few things to consider. For example, when you access the SDDC vCenter to configure features like Hybrid Linked Mode (HLM), or when you access the HCX Cloud Manager after it is deployed (both default to using their public interface for management access when they are deployed), you will likely need to reconfigure which interface they use for communication with your on-premises data center.
Changing the configuration for the SDDC vCenter so that it resolves to its private interface (vs. its public interface) is fairly easy: go into the “Settings” section of the VMC Console and change the vCenter FQDN to use the private IP that is assigned.
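As a quick sanity check after making that change, you can confirm the vCenter FQDN now resolves to a private (RFC 1918) address rather than a public one. The sketch below uses Python's standard `socket` and `ipaddress` modules; the FQDN shown is a hypothetical placeholder for your own SDDC's vCenter name.

```python
import ipaddress
import socket

def is_private(ip: str) -> bool:
    """True if the address falls in private (e.g. RFC 1918) space."""
    return ipaddress.ip_address(ip).is_private

# Hypothetical SDDC vCenter FQDN -- substitute your own.
VCENTER_FQDN = "vcenter.sddc-12-34-56-78.vmwarevmc.com"

try:
    resolved = socket.gethostbyname(VCENTER_FQDN)
    print(f"{VCENTER_FQDN} -> {resolved} (private: {is_private(resolved)})")
except socket.gaierror:
    print(f"Could not resolve {VCENTER_FQDN}; check DNS over the Direct Connect")

# The helper itself is easy to spot-check:
print(is_private("10.2.32.10"))   # a private management-network style address
print(is_private("54.183.0.10"))  # a public AWS-style address
```

If the FQDN still resolves to a public address after the Settings change, your resolver is likely caching the old record.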
But HCX is a little more involved. Unlike the SDDC vCenter, the HCX management console doesn’t have an entry under “Settings,” so when you reach the point in the HCX deployment where you are instructed to “click the Open HCX button” under the “SDDCs” tab, you may not be able to connect to it (e.g. you’ll get a “connection timed out” message in your browser). The most likely cause is that the link defaults to using the public interface for connecting to the HCX Console.
A temporary workaround is to go into the VMC SDDC vCenter (not the VMC Console), find the private IP for the hcx_cloud_manager VM under Hosts and Clusters, and use that to connect to the HCX Cloud Console in your web browser (the URL should be something like https://<vm_local_ip>/hybridity/ui/services-1.0/hcx-cloud/index.html#/login). Assuming routing between your on-premises data center and the VMC SDDC is otherwise correct, you should be able to successfully access the HCX Cloud Management Console.
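Once you have the private IP from vCenter, the console URL can be assembled directly. A trivial sketch (the IP below is a made-up example, not one of the SDDC's real addresses):

```python
def hcx_console_url(private_ip: str) -> str:
    """Build the HCX Cloud Console login URL for a given appliance IP."""
    return (f"https://{private_ip}"
            "/hybridity/ui/services-1.0/hcx-cloud/index.html#/login")

# Hypothetical private IP pulled from the hcx_cloud_manager VM in vCenter.
print(hcx_console_url("10.2.32.15"))
```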
Note: if you’re using a VPN that has split tunnel disabled to do all of the configuration (like I do), additional routing and name resolution configuration may need to be done to pass traffic from the SDDC back to the data center (through the AWS Direct Connect) and then through the VPN.
I’ll break the tasks for getting everything configured and working into two sections. The first section covers what is provided in the VMware documentation, along with some clarification on several of the configuration steps. The second section will outline some additional steps that will need to be taken to configure the Edge Firewall (a.k.a. the Gateway Firewall) that are not documented as of the time this was written.
Configuration Steps (Section 1)
Follow the configuration steps listed in the Configuring HCX for Direct Connect Private Virtual Interfaces section of the VMware NSX Hybrid Connect User Guide here
The documentation shows the procedure for re-configuring the HCX Interconnect as follows:
- Log in to the VMC Console at vmware.com.
- On the Add-ons tab of your SDDC, click “OPEN HYBRID CLOUD EXTENSION” on the Hybrid Cloud Extension card.
- Navigate to the SDDCs tab and click OPEN HCX.
- Enter the firstname.lastname@example.org user and credentials and click LOG IN. (Note: In the current release, this procedure requires a VMware Support account, in the upcoming release, the cloud administrator will be able to perform this operation).
- Navigate to the Interconnect Configuration section of the Administration tab and click Edit.
- Locate the Network Profile with Type: Internet and click the X to delete it.
- Create a Network Profile:
  - Select the “Distributed Portgroup” Network Type.
  - Select the “Direct Connect Network” Network Type.
  - Enter the private IP address ranges reserved for HCX.
  - Enter the Prefix Length and the Gateway IP address.
  - Click Next and then click Finish.
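Before handing a private range to the wizard (or to VMC Support), it's worth double-checking the prefix length, gateway, and usable pool it implies. The sketch below uses Python's `ipaddress` module; the 10.2.40.0/28 range is purely an example, and treating the first usable address as the gateway is an assumed convention, not something the wizard enforces.

```python
import ipaddress

# Hypothetical private range reserved for the HCX network profile.
hcx_range = ipaddress.ip_network("10.2.40.0/28")

prefix_length = hcx_range.prefixlen
usable = list(hcx_range.hosts())  # excludes network and broadcast addresses
gateway = usable[0]               # assumption: first usable IP is the gateway

print(f"Prefix length: {prefix_length}")
print(f"Gateway:       {gateway}")
print(f"Usable IPs:    {usable[1]} - {usable[-1]} ({len(usable) - 1} left for HCX)")
```

A /28 like this yields 14 usable addresses, which is a comfortable floor for the HCX interconnect appliances plus a gateway; size your real range to your own deployment.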
The first issue you will probably need to address comes at step 3 of the configuration tasks, where you are instructed to “Open HCX.” In my case, because I connect to the VMC SDDC through a VPN that has split tunnel disabled, I wasn’t able to connect to the HCX Console via its public interface to complete the configuration. There are other scenarios where you may not be able to access the HCX Console either, such as when you can’t reach the Internet from where you are connected to your on-premises data center.
As I mentioned earlier, you can use the Private Interface IP to access the HCX Cloud Console, but ideally, you will want that to work as intended when you click the “Open HCX” button in the VMware Hybrid Cloud Extension UI.
In order to access the HCX Cloud Console on its Private interface, some changes need to be made to force HCX to pass all traffic across the Direct Connect (including console traffic). This is where things get a little confusing.
Based on the instructions in the VMware documentation (as shown above), the procedure makes sense until you reach steps 4–8, where you end up in a bit of a chicken-and-egg situation. For HCX to send all traffic over the Direct Connect, you need to reconfigure the Interconnect interface, which requires you to log in to the HCX Cloud Console as email@example.com … “Wait … neither customers nor partners are given firstname.lastname@example.org credentials.” And then, when it says “this procedure requires a VMware Support account,” does that mean that if I am listed as a support contact for my company’s VMware account, I should be able to do this? The answer to that question is “No.”
What all of that actually means is that you will need to contact VMware’s VMC Support team to make the change, at least until that capability is made available to the cloudadmin account. The easiest (and fastest) way to do this is through the online chat support in the VMC Console. When you’re ready to proceed with step 4 (above), make sure you have already removed any HCX Interconnects that were previously deployed for the specific VMC SDDC you’re working with, because those components will need to be re-deployed once you get to the on-premises portion of the HCX deployment. VMware Support will also ask you to confirm that any previously deployed HCX Interconnects for this SDDC have been removed and will then ask for the CIDR range you want to use with HCX.
Once you have provided VMC Support with the pertinent information, they will make the changes on the back end, which usually takes 2–4 hours.
But wait … there’s more!
In addition to the changes VMC Support needs to make to have HCX Cloud use the Direct Connect, there are some other changes that will need to be made in the Gateway Firewall.
Configuration Steps (Section 2)
In order for the HCX Enterprise Manager (that you will deploy in your on-premises data center) and the HCX Provider to communicate with the HCX Cloud Manager, there are some Edge Firewall rules that will need to be configured.
First, you will need to create entries in the Networking & Security > Inventory > Groups > Management Groups section for the HCX activation and update servers (connect.hcx.vmware.com and hybridity-depot.vmware.com). These will be used in the rules that allow the HCX Cloud Manager to communicate with the mothership (shown below).
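Before creating the group entries, it can help to confirm that the two provider FQDNs actually resolve from wherever the rules will take effect. A minimal sketch; the resolution step is network-dependent, so it is wrapped in a try/except:

```python
import socket

# The HCX activation and update servers named in the group entries above.
HCX_PROVIDER_FQDNS = [
    "connect.hcx.vmware.com",      # activation
    "hybridity-depot.vmware.com",  # updates
]

for fqdn in HCX_PROVIDER_FQDNS:
    try:
        print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")
    except socket.gaierror:
        print(f"{fqdn} did not resolve; the group entries may need attention")
```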
Second, you will want to create a rule in the SDDC Gateway Firewall that allows the HCX Cloud Manager to communicate on TCP 443 with the HCX provider sites you just created for activation and updates. (Note: the image shows “ANY” as the service, but 443 is all that is needed.)
Third, you will create a rule in the Management Gateway Firewall that allows inbound access from your on-premises networks (the ones where the HCX Enterprise components will reside and any that will be stretched by HCX to the VMC SDDC).
Next, you will create a Gateway Firewall rule that allows inbound access to the HCX Cloud Manager from external (in this case ANY) on port TCP-443.
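Taken together, these rules boil down to a short allow-list. The sketch below models that intent as data with a tiny matcher, just to make the traffic flows explicit; the rule names and group names are hypothetical, not the identifiers NSX-T uses.

```python
# Hypothetical rule table mirroring the Gateway Firewall entries described above.
RULES = [
    {"name": "HCX Mgr egress to provider", "src": "hcx_cloud_manager",
     "dst": "hcx_provider_sites", "port": 443},
    {"name": "On-prem to HCX Mgr",         "src": "onprem_networks",
     "dst": "hcx_cloud_manager", "port": 443},
    {"name": "External to HCX Mgr",        "src": "ANY",
     "dst": "hcx_cloud_manager", "port": 443},
]

def allowed(src: str, dst: str, port: int) -> bool:
    """Return True if any rule permits the (src, dst, port) tuple."""
    return any(
        r["src"] in (src, "ANY") and r["dst"] == dst and r["port"] == port
        for r in RULES
    )

print(allowed("onprem_networks", "hcx_cloud_manager", 443))  # permitted
print(allowed("onprem_networks", "hcx_cloud_manager", 22))   # no matching rule
```

Everything is TCP 443; if a flow other than these three is needed, it has to be added explicitly, since the Gateway Firewall denies by default.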
Once these steps have been completed, you should be able to access the HCX Cloud Management Console using the OPEN HCX button on the SDDCs tab and all traffic for HCX (except its communication with the provider) should be sent across the Direct Connect between your on-premises data center and the VMC SDDC.
One final note: particularly if your VMC SDDC has been configured to use on-premises DNS servers, make sure all of the other solutions deployed to support on-premises-to-VMC failover, DR, or migration (e.g. SRM, vR, etc.) are configured to use the private IP address of the corresponding components hosted in the VMC SDDC.