
Policy Routing with HCX Mobility Optimized Networking (MON)

**First and foremost, thank you Michael Kolos for the assist on this blog!!!**

If there is anything I’ve learned over the past four years with VMware Cloud on AWS (this also applies to other hyperscalers), it’s that spinning up the SDDC is easy. Connecting on-premises and other cloud providers together is the hard part. To simplify these connections, VMware engineered a little bit of magic with VMware HCX™. In summary, VMware HCX™ is an application mobility platform designed to simplify application migration, workload rebalancing, and business continuity across datacenters and clouds. As part of an HCX deployment, a few appliances are deployed: HCX Manager, WAN Interconnect (IX), WAN Optimization (WAN-OPT), Network Extension (L2C), and a Proxy Host. More details can be found here. For this post, I am going to focus on enabling MON via the HCX Network Extension and why you need to understand policy routing. I’ve run into this a few times with customers. Hopefully, this helps!!

So what is Mobility Optimized Networking?
Mobility Optimized Networking improves routed connectivity patterns for multi-segment applications and virtual machines with inter-VLAN dependencies as those virtual machines are migrated into the cloud. Without MON, HCX Network Extension expands the on-premises broadcast domain to the cloud SDDC while the first hop routing function remains at the source. The network “tromboning” effect is observed when virtual machines connected to different extended segments communicate. More information can be found here.

""

Scenario: A customer has a Layer 2 network stretched via HCX L2C. To allow their teams to deploy and manage workloads on VMC under a secondary domain, they want to add a Windows Domain Controller as an Identity Source in the VMC SDDC vCenter. This DC is deployed on the same VLAN as the stretched network. Firewall rules have been validated per my previous blog, which details the rules needed for the domain controller to communicate with vCenter. Yet when trying to add the identity source, an error appears. So what gives?!!!!

Here’s my list of assumptions:

  • A source VM on an HCX L2E network with MON enabled, with the VM set to use the local (MON) gateway – not the source (on-prem) L3 gateway.
  • Default policy routes (RFC1918) in place.
  • The VM is trying to reach the vCenter of the local SDDC (i.e. the one where the MON gateway is, not the source L3 gateway).

In this scenario, MON will only optimize the path for other VMs on MON-enabled networks whose gateway is also the MON gateway, or for routed segments in the same SDDC (connected to the same T1 router, aka the Compute Gateway). The SDDC’s management CIDR (which vCenter is part of) is not connected to that T1; vCenter sits behind the Management Gateway. As a result, there is no matching T1 route. Without this route, the traffic decision falls through to the policy routes. Since vCenter uses a private IP, which matches the default policy routes for MON, the traffic is sent back over the L2E to the source L3 gateway. Depending on the routing configuration, this path may or may not work.

To resolve this, we need to modify the default policy routes. Depending on the desired path (i.e. which destinations are reachable on the on-prem side), we can update them to match. The easiest way is to add a DENY entry for the SDDC’s management CIDR to the Policy Routes.

In my lab, the SDDC uses 10.62.0.0/16 as the management CIDR; here is how you add the Policy Route. Go to HCX Manager via the vCenter plug-in or the HCX Manager URL and select Network Extension > Advanced > Policy Routes.

Enter in the SDDC Management CIDR as a deny rule.
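
To make the path decision concrete, below is a minimal sketch in Python (my own illustrative model using only the standard library, not actual HCX or NSX code) of how a MON-enabled segment chooses a path: T1 routes are checked first, then the policy routes, where an ALLOW match sends traffic back to the on-prem gateway over the L2E and a DENY match keeps it in the cloud. The CIDRs mirror my lab.

```python
# Illustrative model only -- not HCX/NSX source code. CIDRs mirror my lab.
from ipaddress import ip_address, ip_network

# Policy routes after the fix: a DENY entry for the SDDC management CIDR
# sits alongside the default RFC1918 ALLOW entries.
POLICY_ROUTES = [
    ("10.62.0.0/16", "DENY"),    # SDDC management CIDR -> stay in the cloud
    ("10.0.0.0/8", "ALLOW"),     # default RFC1918 -> send to on-prem gateway
    ("172.16.0.0/12", "ALLOW"),
    ("192.168.0.0/16", "ALLOW"),
]

def mon_path(dst_ip, t1_routes):
    """Rough model of where MON forwards traffic destined for dst_ip."""
    dst = ip_address(dst_ip)
    # 1. Destinations routed on the SDDC's T1 (Compute Gateway) stay local.
    if any(dst in ip_network(cidr) for cidr in t1_routes):
        return "local T1 (Compute Gateway)"
    # 2. No T1 route: fall through to the policy routes, longest prefix wins.
    hits = [(ip_network(cidr), action)
            for cidr, action in POLICY_ROUTES if dst in ip_network(cidr)]
    if hits and max(hits, key=lambda h: h[0].prefixlen)[1] == "ALLOW":
        return "source L3 gateway over the L2E (trombone)"
    return "cloud-side routing (stays in the SDDC)"

# vCenter lives in the management CIDR behind the MGW, so there is no T1 route:
print(mon_path("10.62.0.10", t1_routes=["192.168.100.0/24"]))
# With the DENY entry -> 'cloud-side routing (stays in the SDDC)'.
# Without it, the 10.0.0.0/8 ALLOW would send the traffic back on-prem.
```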

Once this change is made, the Domain Controller is able to communicate with vCenter and can be added as an identity source.


Native AWS VPC Connecting to VMware Cloud on AWS (Part 1) – Native VPC Connectivity via VPN

With the release of VMware Cloud on AWS 1.12, we delivered additional connectivity options with the addition of Transit Connect (a VMware-managed AWS Transit Gateway). Over the past few weeks, I have been working on a project with specific success criteria as well as challenges that ruled out certain connectivity options (Transit Connect and AWS Direct Connect were out of scope), so I thought, what better way to show how we got around the issue than to walk through how we did it? The plan is for this to be a multi-part blog series detailing how to set up and test a route-based VPN, attach to native AWS EC2 workloads via the ENI in the VMC VPC, and leverage Transit Connect, which makes connectivity easier to configure, manage, and maintain. The diagram below shows the overall architecture for those already leveraging AWS native services via Direct Connect and a Transit Gateway, but who can only connect to VMC via IPsec VPN for certain reasons (shout out to fellow VMC Architect Will Lin for the assist!). While Direct Connect provides much better speeds and feeds, you may still be able to accomplish what you need depending on your application requirements.

STEP 1: To get things started, we need to either identify or create a VPC that we want to communicate with the VMware Cloud on AWS SDDC. If you already have a VPC configured, skip to Step 2. To create a new VPC, log into your AWS account and go to Services > VPC > Create VPC. Just as when creating your VMC SDDC, it is paramount that you plan your CIDR range appropriately so you can assign subnets correctly based on what you are trying to accomplish. ** Make sure you are selecting the correct AWS Region based on your requirements! **

Next, create a subnet that you will assign to the VPC. This is why understanding the CIDR ranges is important: your subnets must fall within the VPC CIDR and cannot overlap one another, so keep that in mind as you build things out.

Creating a subnet assigned to the AWS VPC
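
If you prefer to script these steps, here is a minimal boto3 sketch of the same VPC and subnet creation (the region and CIDRs are example values from my lab, so adjust them to your own plan):

```python
# Sketch of Step 1 with boto3; the region and CIDRs are example values.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # pick the correct region!

# Create the VPC; its CIDR must not overlap your SDDC or on-prem ranges.
vpc = ec2.create_vpc(CidrBlock="172.31.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Create a subnet inside the VPC CIDR (subnets must fall within it and
# must not overlap one another).
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="172.31.1.0/24")
print(vpc_id, subnet["Subnet"]["SubnetId"])
```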

Step 2: Deploy an AWS Transit Gateway (TGW). For reference, an AWS Transit Gateway connects VPCs and on-premises networks through a central hub. Keep in mind the default ASN for the AWS TGW is 64512. If you are going to use an existing TGW, get the correct ASN from your networking team. For the sake of this blog, I set up the TGW with all the default settings.
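
The TGW creation can be scripted the same way; a short boto3 sketch keeping the defaults as above (the description string is just an example):

```python
# Sketch of Step 2 with boto3, keeping the default settings.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
tgw = ec2.create_transit_gateway(
    Description="VMC lab TGW",           # example description
    Options={"AmazonSideAsn": 64512},    # the default ASN, shown explicitly
)
print(tgw["TransitGateway"]["TransitGatewayId"])
```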

Step 3: Create a Customer Gateway (CGW). A CGW is a resource that you create in AWS that represents the customer gateway device in your on-premises network, or VMC in this situation. 

In order to configure the CGW, you need to enter the VPN Public IP listed in the Networking & Security section of the SDDC console.
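
Scripted, the CGW creation looks like the sketch below (the public IP is a placeholder for your SDDC’s VPN Public IP, and the ASN is an example value; use the SDDC’s actual BGP local ASN):

```python
# Sketch of Step 3 with boto3; PublicIp and BgpAsn are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,              # example: use your SDDC's BGP local ASN
    PublicIp="203.0.113.10",   # placeholder: the SDDC's VPN Public IP
    Type="ipsec.1",
)
print(cgw["CustomerGateway"]["CustomerGatewayId"])
```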

Step 4: With your transit and customer gateways configured, it’s time to create the VPN connection from the AWS side. Go to Services > VPC > Site-to-Site VPN Connections > Create VPN Connection. Name the VPN connection, select “Transit Gateway” as the gateway type, and add your CGW. Note that you can also create a new CGW here if you didn’t do so in the previous step. We want to set the routing options to dynamic so we can take advantage of BGP. My personal preference is to define the inside tunnel CIDR, which per AWS documentation needs to be a /30 within the 169.254.0.0/16 range. I also created a basic pre-shared key rather than have AWS generate one randomly.

AWS VPN Configuration
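
Here is the equivalent boto3 sketch, mirroring the console choices above: dynamic (BGP) routing, a self-defined /30 inside CIDR, and a basic pre-shared key (all IDs and secrets below are placeholders):

```python
# Sketch of Step 4 with boto3; gateway IDs, CIDR, and PSK are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",  # from Step 3
    TransitGatewayId="tgw-0123456789abcdef0",   # from Step 2
    Options={
        "StaticRoutesOnly": False,  # dynamic routing so we get BGP
        "TunnelOptions": [{
            "TunnelInsideCidr": "169.254.100.0/30",  # /30 in 169.254.0.0/16
            "PreSharedKey": "MySuperSecretPSK1",     # placeholder PSK
        }],
    },
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```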

Step 5: Download the VPN configuration as a generic file. This is where you can validate the configuration and use it for configuring the VPN from the VMware Cloud SDDC Console.
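
You can also pull the same values from the API instead of the downloaded file; a hedged sketch is below (the XML element names match the generic customer gateway configuration I have seen, but verify them against your own download):

```python
# Sketch: extract the Step 6 values from the VPN connection's configuration.
# The XML element names are based on the generic config I have seen -- verify
# against your own file before relying on them.
import xml.etree.ElementTree as ET
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
resp = ec2.describe_vpn_connections(VpnConnectionIds=["vpn-0123456789abcdef0"])
cfg = resp["VpnConnections"][0]["CustomerGatewayConfiguration"]  # XML string

root = ET.fromstring(cfg)
for tunnel in root.iter("ipsec_tunnel"):
    # vpn_gateway = the AWS side; customer_gateway = the VMC SDDC side.
    vgw_out = tunnel.find("./vpn_gateway/tunnel_outside_address/ip_address").text
    vgw_in = tunnel.find("./vpn_gateway/tunnel_inside_address/ip_address").text
    cgw_in = tunnel.find("./customer_gateway/tunnel_inside_address/ip_address").text
    asn = tunnel.find("./vpn_gateway/bgp/asn").text
    print(f"Remote Public IP: {vgw_out}, BGP Remote IP: {vgw_in}, "
          f"BGP Local IP: {cgw_in}, Remote ASN: {asn}")
```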

Step 6: Configure the VPN on the SDDC. Go to the VMC SDDC Console > Networking & Security > VPN > Route Based > Add VPN. Here you want to take the Virtual Private Gateway Outside IP and enter it into the Remote Public IP field. Take the Customer Gateway Inside IP address and enter it as the BGP Local IP. Next, take the Virtual Private Gateway Inside IP and enter it into the BGP Remote IP field. All that’s left is to enter the BGP ASN that you configured earlier as part of the TGW creation; it should also be listed in the config file you downloaded. If all the values are correct, you should see the VPN tunnel and BGP come up in a matter of minutes! If the VPN comes up and BGP does not, check your IPs and ASNs. Additional help can also be found here!!!!

You now have a working VPN from AWS to VMC!!! While the tunnel is up, there is more work to do before you can fully test traffic. In Part 2 I will cover routing tables and SDDC gateway rules to enable two-way communication. Stay tuned!!!!

Day 2 VMware Cloud on AWS SDDC Scale Up…in Four Clicks or Keystrokes!!!!!

As customers continue to build their cloud strategy with a combination of VMware products and services, one thing has been heard loud and clear: ”Make Day 2 operations easy!” As customers move to and increase their footprint in VMWonAWS, the SDDC’s demand for management resources will increase. While the VMC Sizer is a great tool to help understand the recommended size of a new SDDC, there will be times when SDDC growth is too big for the management VMs to handle after the SDDC is deployed…kind of like the time Chief Brody realized he was going to need something much larger to catch a Great White shark.

When an SDDC is created, two resource pools are created: one named “Compute-ResourcePool” and one named “Mgmt-ResourcePool”. The Mgmt-ResourcePool (MRP) is VMware-managed and by default comprises vCenter, 2 NSX Edges, and 3 NSX Managers. In order to ensure uptime and performance, all resources in the MRP have reservations assigned so these appliances always have what they need.

For more information, Product Manager Vish Kalsi wrote a quick blog on choosing the correct SDDC deployment. In short, medium management appliances require 34 vCPU and 116 GB of memory to run vCenter, NSX Manager, and other management appliances; large management appliances require 68 vCPU and 240 GB of memory. Large SDDCs are ideal for a higher density of workloads and support enhanced network throughput on the NSX Edge appliance. VMware recommends large-sized deployments for SDDCs with more than 30 hosts or 3000 VMs, or when resources (CPU or memory) are oversubscribed in the management cluster.

Previously, a VMware support ticket needed to be opened in order to convert a regular (aka medium) SDDC to large. This method was obviously not preferred by most, as it is the opposite of a self-service cloud operating model. However, beginning with VMWonAWS 1.10, you can now upsize your SDDC to large with just a few clicks…or keystrokes!!!

Start by logging into the Cloud Services Portal, select your SDDC, and go to Settings > SDDC > Management Appliance. You will see your SDDC as well as the “Upsize” option listed, as seen below.

Upsize Option within the Cloud Services Portal

The only thing left to do is accept the addition of hosts if necessary and understand that you can never go back to a regular-size SDDC. Once Upsize is selected, the process takes about 2 hours to complete and you will lose connectivity during that time, so it is recommended to do this during a maintenance window.

Once complete, the management appliances will be reflected as “Large”.

Once in vCenter, you will see that the NSX Edges have gone from 4 CPU x 8 GB RAM to 8 CPU x 32 GB RAM, and vCenter has gone from 8 CPU x 28 GB RAM to 16 CPU x 37 GB RAM (only 12 of the 16 CPUs are reserved in this configuration). You can check the before and after in the VM summary as seen below.

Regular SDDC vCenter
Large SDDC vCenter

Now that the SDDC has been upsized, it’s on to bigger and better things for your VMWonAWS SDDC!

Effectively Planning VMware Cloud on AWS SDDC Upgrades

One of the questions I am often asked is: now that I am using VMware Cloud on AWS, how do I go about managing my SDDC lifecycle? The answer…VMware has you covered! As of March 2020, we have made some significant enhancements to the Notification Gateway (NGW) that give you several options for receiving updates from VMware Cloud Services regarding maintenance activities such as certificate replacements and SDDC upgrades to new releases. While the NGW can be leveraged in several different areas, my preferred integrations are with Slack and Microsoft Teams. Setting up these integrations is fairly straightforward; look no further than William Lam’s blog for details.
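
If you go the Slack route, a quick way to sanity-check your incoming-webhook URL before pointing the NGW at it is a one-off POST (the URL below is a placeholder; use the one Slack generates for your channel):

```python
# Quick sanity check of a Slack incoming webhook; the URL is a placeholder.
import requests

webhook_url = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"
resp = requests.post(webhook_url, json={"text": "NGW webhook test message"})
resp.raise_for_status()  # Slack returns HTTP 200 with body "ok" on success
```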

Even if you have webhook integrations set up, you will still get a notification email similar to the image below letting you know when your SDDC is scheduled for an upgrade.

Notification email from Notification Gateway detailing each phase of the SDDC upgrade.

It is imperative that you take note of the dates and times your SDDC is scheduled for each phase; the times are all in UTC, so do your time zone conversions accordingly. When you log in to your SDDC console and go to the Maintenance tab, you will see each phase listed along with recommendations for each phase.
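
Since everything is published in UTC, a quick standard-library conversion helps avoid planning mistakes (the timestamp and time zone below are just examples; requires Python 3.9+ for zoneinfo):

```python
# Convert a UTC maintenance window start to local time (example values).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

phase1_utc = datetime(2020, 3, 17, 14, 0, tzinfo=timezone.utc)
print(phase1_utc.astimezone(ZoneInfo("America/New_York")))
# -> 2020-03-17 10:00:00-04:00, so plan the outage window accordingly.
```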

Each phase of the SDDC upgrade is highlighted below, along with details about SDDC accessibility during the upgrade. For detailed information, read my associate Tom Twyman’s blog and the SDDC upgrade notes found here. We continue to improve upgrade processes in the background, so check back often!! There are additional considerations when integrating with HCX, Site Recovery, and Horizon, so be sure to understand the impacts listed in the release notes!! Keep in mind that during Phase 1 your vCenter certificate will be updated, and the NSX certificate will be updated during Phase 3. If you have other products and services that depend on vCenter, you will need to take the proper steps to accept the new certs.

While there are time estimates for each phase, mileage may vary during the upgrade. To make things a bit easier for you, I have included a simple Excel spreadsheet to help you plan your SDDC upgrade.

After going through several customer upgrades over the past two years, my top 5 things to do are:

  1. Don’t forget about certificate validation afterwards!
  2. Plan your outages around each phase, and it’s best to be conservative: allot the full estimated time.
  3. Set up integrations with the NGW. While emails are nice, it has been my observation that people get too many emails these days and these notifications are often ignored. Pick a delivery method that will get your attention!
  4. Read the release notes as well as upgrade notes before your scheduled upgrade.
  5. Don’t panic! For some, giving VMware the keys to the car (SDDC) is unnerving, and they want to watch and be involved. Remember, this is a service; we have you covered. Sit back and relax!

VMware Cloud on AWS Connection Options

Happy New Year!!! This is going to be an exciting year for VMware Cloud on AWS, and I wanted to kick off 2018 by highlighting the ways in which you can connect into and out of VMware Cloud on AWS.

First of all, at a very high level, VMware Cloud on AWS runs an optimized VMware Cloud Foundation stack on dedicated, elastic, bare-metal infrastructure inside Amazon’s data centers. For security purposes, the VMware Cloud on AWS SDDC is bifurcated into the components that manage the SDDC itself (ESXi, vSAN, NSX, and vCenter) and the compute workloads.

Here’s a simple explanation of how you can setup the connectivity framework.

The first thing you need to set up is a connection to the management components of the SDDC. You will first need to create a management VPN and choose a set range of IP addresses to be used by management components such as the ESXi hosts and vCenter. This range takes the form of a simple CIDR block; we recommend using a /20 CIDR block for management purposes. After you connect the management portion of the SDDC, you will then need to set up an IPsec VPN between your on-prem data center and the management components. This VPN can run over the Internet or AWS Direct Connect (DX). After this connection is established, you can then build firewall rules in the VMware Cloud on AWS console. With these rules you can control access to vCenter from your on-prem data center.

VMCMgtVPN

There is an optional connection you can set up if you need access to your vCenter Server directly from the Internet. A public IP is automatically provided during the provisioning process. It is important to note that all access to this IP is restricted. To provide access, you will need to configure firewall rules in the VMware Cloud on AWS console to allow this type of direct Internet access.

PublicAccess

The second VPN you will need to set up is between your compute workloads and your on-premises data center. Several logical networks are required to provide the IP addresses for the workloads you plan on migrating to or building in VMware Cloud on AWS. This VPN secures these workloads and allows them to connect back to your on-prem data center. It can be an IPsec VPN or an L2VPN. The L2VPN’s advantage is that you can stretch a single L3 IP space from on-prem to the cloud; it is also required for live migrations. This VPN can go over the Internet or AWS DX. You can again create firewall rules as needed to access on-prem workloads.

ComputeVPN

The next connection is between your SDDC workloads and your Amazon VPC. This is automatically configured and built during the SDDC provisioning process. Once you select the Amazon VPC subnet that will be associated with your VMware Cloud on AWS SDDC, an elastic network interface (ENI) is created, allowing traffic to flow between both environments. To control security, you will need to configure AWS IAM policies as well as firewall rules on the VMware Cloud on AWS side to allow access between the two. Lastly, you will likely need to give direct public Internet access to some of your SDDC workloads. To make these accessible from the Internet, you will need to leverage AWS elastic IPs along with NAT and firewall configurations to allow this type of access.

ENI

That’s it! Now you are ready to leverage your SDDC on VMware Cloud on AWS!

Also, here’s a video that covers the content discussed above.

-SL