Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install Part 3: NSX-T Edges, Segments, Tier-0 Routing

In this section, we will configure NSX-T: setting up the Transport Nodes and Edge VMs, and configuring the Layer 2 Segments, Tier-0 uplinks and routing required in preparation for vSphere with Kubernetes.

Step 0 – Prerequisites. As this guide is broken down into multiple sections and this section focuses mostly on the NSX-T Manager, it would be good to ensure the following are configured first; this prevents switching back and forth between vCenter and NSX-T Manager. In a customer or production environment, different teams will also likely manage different components, e.g. the Systems / VI admins managing vCenter and the Network team managing NSX-T. It is sometimes hard to get both teams online at the same time, so it pays to be clear on who needs to configure what.

— Systems / VI Admin Team —

1) The VDS is created and the MTU has been increased to at least 1600; MTU 9000 is recommended. This MTU size has to match the switch port configuration. (A verification sketch for both items follows the screenshots below.)

Screen Shot 2020-04-27 at 10.33.30 AM

2) The portgroups that will be used for the Edge VMs are created. This is where it gets tricky: depending on the switch port configuration, you create a portgroup either tagged with a VLAN ID or trunked, and that tagging (or lack of it) has to match the switch port configuration. A trunk configuration is recommended. In my installation, since I am going to validate a single-physical-NIC setup, the switch port configuration has to be trunk.

Screen Shot 2020-04-27 at 10.51.19 AM

Screen Shot 2020-04-27 at 10.50.59 AM

This shows that the required portgroups are configured.

Screen Shot 2020-04-27 at 10.53.59 AM
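If you want to double-check both prerequisites without clicking through the vSphere Client, here is a minimal verification sketch using pyVmomi. The vCenter hostname and credentials are placeholders for my lab; it prints each VDS's MTU and the VLAN spec type of each portgroup (a TrunkVlanSpec indicates a trunk portgroup).

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()       # lab only: skip cert validation
si = SmartConnect(host="vcenter.lab.local",  # placeholder vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    # MTU must be at least 1600 for Geneve; 9000 is recommended.
    print(f"{dvs.name}: MTU {dvs.config.maxMtu}")
    for pg in dvs.portgroup:
        vlan = pg.config.defaultPortConfig.vlan
        # A TrunkVlanSpec here means the portgroup is trunked.
        print(f"  {pg.name}: {type(vlan).__name__}")
Disconnect(si)
```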

— Network Team —

1) Ensure the switch is configured with the right MTU and that routing is configured. The following shows VLAN116 and VLAN120; these two VLANs are used for the Geneve Overlay TEPs.

Screen Shot 2020-04-27 at 11.18.10 AM

2) The uplink VLAN, with static routes configured for the Ingress / Egress subnets.

Screen Shot 2020-04-27 at 11.21.56 AM

Step 1 – Add License to NSX-T Manager.

Log in to NSX-T Manager.

Screen Shot 2020-04-27 at 8.46.32 AM

System -> Licenses (under Settings) -> ADD

Screen Shot 2020-04-27 at 9.47.24 AM


Step 2 – Add vCenter as Compute Manager.

Start by adding the vCenter as Compute Manager. System -> Fabric -> Compute Managers

Screen Shot 2020-04-27 at 8.49.59 AM

This shows that the Compute Manager is added successfully.

Screen Shot 2020-04-27 at 8.51.57 AM
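For those who prefer to automate this step, the same registration can be done against the NSX-T Manager REST API (POST /api/v1/fabric/compute-managers). This is only a sketch: the hostnames and credentials are placeholders, and the vCenter SHA-256 thumbprint must come from your own environment.

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

body = {
    "display_name": "vcenter.lab.local",
    "server": "vcenter.lab.local",        # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        "thumbprint": "<vcenter-sha256-thumbprint>",  # from your environment
    },
}
r = s.post(f"{NSX}/api/v1/fabric/compute-managers", json=body)
r.raise_for_status()
print(r.json()["id"])                     # ID of the new compute manager
```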


Step 3 – Create Uplink Profiles for ESXi Transport Nodes.

System -> Fabric -> Profiles -> Add

Screen Shot 2020-04-27 at 9.15.08 AM

Configure Teaming & Active Uplinks and Transport VLAN.

Screen Shot 2020-04-27 at 8.55.41 AM

Step 4 – Create Uplink Profiles for Edge VM Transport Nodes.

System -> Fabric -> Profiles -> Add

Screen Shot 2020-04-27 at 9.16.37 AM

Configure Teaming & Active Uplinks, Transport VLAN and MTU.

Screen Shot 2020-04-27 at 9.20.28 AM

It will look like the following, with the two new uplink profiles created successfully.

Screen Shot 2020-04-27 at 6.17.54 PM
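As a scripted alternative, here is a hedged sketch of creating the Edge uplink profile via the Manager API (POST /api/v1/host-switch-profiles). The profile name, transport VLAN, MTU and uplink name below are placeholder values; adjust them to match your design.

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

body = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",    # placeholder profile name
    "mtu": 9000,
    "transport_vlan": 120,                    # Edge TEP VLAN (placeholder)
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
    },
}
r = s.post(f"{NSX}/api/v1/host-switch-profiles", json=body)
r.raise_for_status()
```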


Step 5 – Add IP Pools. As my Edge VMs are running in the same cluster as the Compute Cluster, I use a two-VLAN approach as a workaround. **You can read more on this topic in the first part of this blog series. Therefore, instead of one IP pool for the TEPs, there is a need for two IP Pools.

Networking -> IP Address Pools -> Add IP Address Pool

Screen Shot 2020-04-27 at 10.07.24 AM

Under Subnets, click on Set. **Ensure the Gateway IP is configured, as we require routing between the ESXi hosts' TEP overlay network and the Edge TEP overlay network. Click ADD.

Screen Shot 2020-04-27 at 6.22.47 PM

Click Apply.

Screen Shot 2020-04-27 at 6.25.11 PM

Add IP Pool for Edge VMs.  Networking -> IP Address Pools -> Add IP Address Pool

Screen Shot 2020-04-27 at 10.11.11 AM

**Ensure the Gateway IP is configured, as we require routing between the Edge TEP overlay network and the ESXi hosts' TEP overlay network. Click Add.

Screen Shot 2020-04-27 at 6.20.18 PM

Click Apply. 

Screen Shot 2020-04-27 at 6.27.04 PM

This is how it looks once the two IP Pools for the TEPs are configured.

Screen Shot 2020-04-27 at 10.15.45 AM
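The two TEP pools can also be created declaratively with the Policy API; PATCH calls are idempotent, so the sketch below can be re-run safely. The pool names, CIDRs, gateways and allocation ranges are placeholders, not the exact values from my lab.

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

# Placeholder pool definitions: one pool per TEP VLAN.
pools = {
    "esxi-tep-pool": {"cidr": "10.116.0.0/24", "gateway_ip": "10.116.0.1",
                      "start": "10.116.0.10", "end": "10.116.0.50"},
    "edge-tep-pool": {"cidr": "10.120.0.0/24", "gateway_ip": "10.120.0.1",
                      "start": "10.120.0.10", "end": "10.120.0.50"},
}
for name, p in pools.items():
    s.patch(f"{NSX}/policy/api/v1/infra/ip-pools/{name}",
            json={"display_name": name}).raise_for_status()
    subnet = {
        "resource_type": "IpAddressPoolStaticSubnet",
        "cidr": p["cidr"],
        "gateway_ip": p["gateway_ip"],  # required: the TEP networks must route
        "allocation_ranges": [{"start": p["start"], "end": p["end"]}],
    }
    s.patch(f"{NSX}/policy/api/v1/infra/ip-pools/{name}/ip-subnets/{name}-sub",
            json=subnet).raise_for_status()
```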


Step 6 – Create Transport Node Profiles for the ESXi Hosts.

System -> Fabric -> Transport Node Profiles -> Add

Screen Shot 2020-04-27 at 9.35.25 AM

Configure the Transport Zone and Uplink Profile.

Screen Shot 2020-04-27 at 10.25.34 AM


Step 7 – Enable the ESXi Hosts as Transport Nodes

System -> Fabric -> Nodes -> Host Transport Nodes. Select the Compute Cluster and then Configure NSX.

Screen Shot 2020-04-27 at 11.10.03 AM

Select the Transport Node Profile we created in Step 6.

Screen Shot 2020-04-27 at 11.12.28 AM

The following shows the installation is progressing.

Screen Shot 2020-04-27 at 11.15.13 AM

This shows the hosts are successfully installed with NSX.

Screen Shot 2020-04-27 at 11.31.48 AM
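Instead of watching the UI, you can also poll the host preparation status via the Manager API. A small sketch, assuming the standard /api/v1/transport-nodes endpoints:

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

# List every transport node and print its realized configuration state.
nodes = s.get(f"{NSX}/api/v1/transport-nodes").json()["results"]
for node in nodes:
    state = s.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state").json()
    print(f"{node['display_name']}: {state.get('state')}")  # expect: success
```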


Step 8 – Deploying Edges

System -> Fabric -> Nodes -> Edge Transport Nodes. Click on ADD Edge VM.

Screen Shot 2020-04-27 at 5.53.55 PM

Fill in the name and host name, and choose the size. For vSphere with Kubernetes, you will need to deploy at minimum a Large Edge VM appliance.

Screen Shot 2020-04-27 at 5.55.31 PM

Fill in the credentials.

Screen Shot 2020-04-27 at 5.57.23 PM

Select the Compute Manager, Cluster and Datastore.

Screen Shot 2020-04-27 at 5.58.20 PM

Configure the Management IP and choose the Portgroup for the Management Interface.

Screen Shot 2020-04-27 at 6.00.57 PM

Select the Transport Zone, Uplink Profile, IP Assignment and the IP Pool for the Edge VM's TEPs. I usually keep the naming convention of nvds1 for the overlay transport zone and nvds2 for the VLAN transport zone.

Screen Shot 2020-04-27 at 6.02.09 PM

Select the VDS Portgroup – VDS01-Trunk – which we created as a prerequisite in Step 0.

Screen Shot 2020-04-27 at 6.04.34 PM

It will look like the following, with all the required fields selected and filled in.

Screen Shot 2020-04-27 at 6.06.34 PM

Add another switch by clicking on ADD SWITCH. **You might need to scroll up the page to see the button.

Screen Shot 2020-04-27 at 6.09.01 PM

Again, select the Transport Zone, Uplink Profile and the VDS portgroup that will be used for the uplinks. As before, I keep the naming convention of nvds1 for the overlay transport zone and nvds2 for the VLAN transport zone.

Screen Shot 2020-04-27 at 6.11.05 PM

Repeat the above steps for Edge VM02. It will look like the following once both Edge VMs are deployed successfully.

Screen Shot 2020-04-27 at 6.14.40 PM


Step 9 – Configure the Edge Cluster

System -> Fabric -> Nodes -> Edge Clusters -> ADD. Select the two edge nodes that we just created in the previous step.

Screen Shot 2020-04-28 at 9.04.19 AM

The following shows the Edge Cluster – EC01 has been successfully created.

Screen Shot 2020-04-28 at 9.04.52 AM

Step 10 – Create the Segment required for Tier-0 Uplinks.

Networking -> Segments -> ADD Segment. Fill in the Segment Name, Transport Zone and Subnets.

Screen Shot 2020-04-28 at 9.12.33 AM
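The equivalent Policy API call is a single PATCH against /policy/api/v1/infra/segments. A sketch, assuming a VLAN-backed segment on the VLAN transport zone; the VLAN ID and transport zone ID below are placeholders:

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

body = {
    "display_name": "Seg-T0-Uplink1",
    "vlan_ids": ["149"],                  # placeholder uplink VLAN
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/<vlan-tz-id>",  # placeholder ID
}
r = s.patch(f"{NSX}/policy/api/v1/infra/segments/Seg-T0-Uplink1", json=body)
r.raise_for_status()
```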


Step 11 – Configure the Tier-0 Gateway.

Networking -> Tier-0 Gateway -> ADD Gateway.

Screen Shot 2020-04-28 at 9.15.12 AM

Select Yes when asked whether you wish to continue configuring this Tier-0 Gateway.

Screen Shot 2020-04-28 at 9.16.31 AM

Click on Set under Interfaces.

Screen Shot 2020-04-28 at 9.19.00 AM

Click on Add Interface.

Name: T0-Uplink1-Int

Type: External

IP Address/Mask: 10.149.1.5/24

Connect To (Segment): Seg-T0-Uplink1

Edge Node: sun05-nsxtedgevm01

Screen Shot 2020-04-28 at 9.20.26 AM

Click on Add Interface for the 2nd Edge VM.

Name: T0-Uplink2-Int

Type: External

IP Address/Mask: 10.149.1.6/24

Connect To (Segment): Seg-T0-Uplink1

Edge Node: sun05-nsxtedgevm02

Screen Shot 2020-04-28 at 9.23.54 AM

The following shows both interfaces for the Tier-0 Gateway are created correctly.

Screen Shot 2020-04-28 at 9.24.48 AM
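If you script this step, each uplink interface is an object under the Tier-0's locale-services in the Policy API. A sketch for the first interface, assuming the Tier-0 ID is T0-GW and its locale-services ID is default (both placeholders; the edge-node path must also be looked up in your environment):

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

body = {
    "display_name": "T0-Uplink1-Int",
    "type": "EXTERNAL",
    "subnets": [{"ip_addresses": ["10.149.1.5"], "prefix_len": 24}],
    "segment_path": "/infra/segments/Seg-T0-Uplink1",
    # Placeholder edge-node path; resolve the IDs via the API or UI.
    "edge_path": "/infra/sites/default/enforcement-points/default"
                 "/edge-clusters/<ec01-id>/edge-nodes/<edgevm01-id>",
}
r = s.patch(f"{NSX}/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default"
            f"/interfaces/T0-Uplink1-Int", json=body)
r.raise_for_status()
```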

Click on Set under HA VIP Configuration.

Screen Shot 2020-04-28 at 9.25.30 AM

Click on ADD HA VIP CONFIGURATION.

IP Address / Mask: 10.149.1.4/24

Interface: T0-Uplink1-Int, T0-Uplink2-Int

Screen Shot 2020-04-28 at 9.27.38 AM

The following shows the HA VIP Configuration has been successfully created.

Screen Shot 2020-04-28 at 9.28.00 AM
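For reference, the HA VIP lives on the same locale-services object in the Policy API (the ha_vip_configs field). A hedged sketch, reusing the placeholder IDs from the previous sketch:

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

if_base = "/infra/tier-0s/T0-GW/locale-services/default/interfaces"
body = {
    "ha_vip_configs": [{
        "enabled": True,
        "vip_subnets": [{"ip_addresses": ["10.149.1.4"], "prefix_len": 24}],
        "external_interface_paths": [f"{if_base}/T0-Uplink1-Int",
                                     f"{if_base}/T0-Uplink2-Int"],
    }]
}
r = s.patch(f"{NSX}/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default",
            json=body)
r.raise_for_status()
```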

To ensure that the Tier-0 Gateway uplink is configured correctly, we shall log in to the next-hop device, in my case a Nexus 3K, to do a ping test.

I first ping the switch's own interface, i.e. 10.149.1.1, followed by the HA VIP configured on the Tier-0 Gateway.

Screen Shot 2020-04-28 at 9.29.45 AM

Lastly, we need to configure a default route out so that the containers can communicate with IP addresses outside the NSX-T domain.

Under Routing, click on Set under Static Routes. **If you are using BGP, this step will differ.

Screen Shot 2020-04-28 at 9.35.58 AM

Click on ADD STATIC ROUTE.

Name: Default

Network: 0.0.0.0/0

Screen Shot 2020-04-28 at 9.38.51 AM

Click on Set Next Hop, then on ADD NEXT HOP.

IP Address: 10.149.1.1

Screen Shot 2020-04-28 at 9.37.53 AM
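The same default route can be pushed with one Policy API PATCH. A sketch, again assuming the placeholder Tier-0 ID T0-GW:

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder NSX Manager FQDN
s = requests.Session()
s.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
s.verify = False                          # lab only: self-signed cert

body = {
    "display_name": "Default",
    "network": "0.0.0.0/0",                      # match-all default route
    "next_hops": [{"ip_address": "10.149.1.1",   # upstream N3K interface
                   "admin_distance": 1}],
}
r = s.patch(f"{NSX}/policy/api/v1/infra/tier-0s/T0-GW/static-routes/default",
            json=body)
r.raise_for_status()
```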

Once the static route has been added, one way to test is from outside the NSX-T domain. In my case, I have a jumphost outside the NSX-T domain whose gateway also points to the N3K. I did a ping test from the jumphost to the Tier-0 Gateway VIP. If the ping test succeeds, it means the static route we added to the Tier-0 Gateway is configured correctly.

Screen Shot 2020-04-28 at 9.41.53 AM
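If you want to script that check from the jumphost, a small sketch (assuming a Linux jumphost and the addresses above):

```python
import subprocess

# Ping the switch interface first, then the Tier-0 HA VIP, like the manual test.
targets = {"N3K interface": "10.149.1.1", "Tier-0 HA VIP": "10.149.1.4"}
for name, ip in targets.items():
    ok = subprocess.run(["ping", "-c", "3", "-W", "2", ip],
                        capture_output=True).returncode == 0
    print(f"{name} ({ip}): {'reachable' if ok else 'unreachable'}")
```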

Step 12 – Validate that NSX-T has been successfully set up for vSphere with Kubernetes.

With all the configuration on NSX-T, the vSphere VDS and the physical network set up, it is now time to go back to Workload Management to see whether we are ready to deploy Workload Management clusters. And YES we can! NSX-T is now detected by Workload Management!

Screen Shot 2020-04-28 at 9.48.55 AM

Now we are done with NSX-T and the related networking configurations. Give yourself a pat here! A lot of the questions related to vSphere with Kubernetes during the Beta, whether from customers or internally, were due to networking and NSX-related issues. Next, we will start configuring the things required for Workload Management, such as storage policies, the Content Library, etc., mostly on the vCenter side.




Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install

Part 1: Overview, Design, Network Topology, Hardware Used

Part 2: ESXi, vCenter, VDS Config and NSX-T Manager

Part 3: NSX-T Edges, Segments, Tier-0 Routing

Part 4: Supervisor Cluster, Content Library, TKG Clusters

Part 5: Testing, Demo Apps