Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install Part 2: ESXi, vCenter, VDS Config and NSX-T Manager

In this article, we will prepare the vSphere and NSX components required for vSphere with Kubernetes.

 

Step 1 – Install ESXi 7.0 onto the hosts. I found it very convenient to simply place the ISO file on a NAS and then mount it using the virtual CD-ROM feature of the Supermicro IPMI. This is so much easier compared to creating a bootable USB.

Screen Shot 2020-04-23 at 12.40.20 PM

Installing ESXi using the Supermicro Remote Control iKVM/HTML5 Client.

 

Screen Shot 2020-04-24 at 9.07.26 AM

 

 

Select the Virtual CDROM during the host boot-up process.

Screen Shot 2020-04-24 at 9.14.32 AM

Since I previously had beta versions of ESXi 7.0 installed, I could have chosen the Upgrade option. I did upgrade successfully through this method during the beta testing period, and it saved me from having to reconfigure IP addresses, hostnames, etc. every time. Since this installation is moving to the GA version, I think it is worthwhile to do a fresh install, so I picked the Install option, which overwrites the existing installation. Most readers of this article who are doing a POC will likely go for the fresh install option as well.

Screen Shot 2020-04-24 at 9.03.43 AM

vSphere ESXi 7.0 successfully installed. I repeated the above steps for the rest of the ESXi hosts.

Screen Shot 2020-04-24 at 11.12.30 AM

I will skip explaining the steps of configuring the ESXi hosts for hostname, IP addresses, VLANs and enabling SSH for troubleshooting purposes, as they are pretty much self-explanatory. I did record the process, though, and will probably put up a video to show it.

 

Step 2 – Install the vCenter Server Appliance. I typically prefer to install vCenter using the CLI installer (vcsa-deploy). As I use a Windows Server jumphost, I simply mounted the vCenter ISO as a CD-ROM drive and triggered the installation script straight from the jumphost.

You can find the CLI installation scripts in the vCenter Install ISO.

Screen Shot 2020-04-24 at 9.40.58 AM

I use the embedded_vCSA_on_VC.json template.

Screen Shot 2020-04-24 at 10.02.57 AM
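For reference, a filled-in template looks roughly like the sketch below. The _on_VC and _on_ESXi variants differ slightly in the deployment-target section ("vc" versus "esxi"); this sketch follows the _on_ESXi variant referenced by the script further down, and every value shown is a placeholder rather than my actual configuration.

{
  "__version": "2.13.0",
  "new_vcsa": {
    "esxi": {
      "hostname": "esxi-host.example.com",
      "username": "root",
      "password": "<esxi-root-password>",
      "deployment_network": "VM Network",
      "datastore": "vsanDatastore"
    },
    "appliance": {
      "thin_disk_mode": true,
      "deployment_option": "small",
      "name": "sun05-vcenter02"
    },
    "network": {
      "ip_family": "ipv4",
      "mode": "static",
      "system_name": "vcenter02.example.com",
      "ip": "<vcenter-ip>",
      "prefix": "24",
      "gateway": "<gateway-ip>",
      "dns_servers": ["<dns-ip>"]
    },
    "os": {
      "password": "<appliance-root-password>",
      "ntp_servers": "<ntp-ip>",
      "ssh_enable": true
    },
    "sso": {
      "password": "<sso-password>",
      "domain_name": "vsphere.local"
    }
  },
  "ceip": {
    "settings": { "ceip_enabled": false }
  }
}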

Install-vCenter02.bat file:

E:\vcsa-cli-installer\win32\vcsa-deploy.exe install Z:\Scripts\SUN05-vcenter02-embedded_vCSA_on_ESXi.json --accept-eula --no-ssl-certificate-verification

Pause

Screen Shot 2020-04-24 at 9.33.38 AM

This completes the installation of vCenter.

Screen Shot 2020-04-24 at 11.01.10 AM

 

Step 3 – Configure the virtual DC and clusters, add the hosts to the clusters and create the VDS. Again, I typically configure the above-mentioned items through a script, and I use PowerCLI. There are other automation tools, such as Ansible or Terraform, that you could consider.

Screen Shot 2020-04-24 at 12.44.10 PM
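The script itself is not reproduced here, but it boils down to a handful of PowerCLI cmdlets along these lines (the datacenter, cluster and host names below are placeholders, not necessarily my exact values):

# Connect to the freshly deployed vCenter
Connect-VIServer -Server sun05-vcenter02.example.com -User administrator@vsphere.local

# Create the virtual datacenter and the cluster
$dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "SUN05-DC"
$cluster = New-Cluster -Location $dc -Name "SUN05-Cluster01"

# Add each freshly installed ESXi host to the cluster
"esxi01.example.com","esxi02.example.com","esxi03.example.com","esxi04.example.com" | ForEach-Object {
    Add-VMHost -Name $_ -Location $cluster -User root -Password "<esxi-root-password>" -Force
}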

This is how it looks after all the hosts have been added to the cluster.

Screen Shot 2020-04-24 at 12.32.27 PM

Next is to enable DRS and vSphere HA on the vSphere cluster. These will be checked when you enable vSphere with Kubernetes, specifically when enabling Workload Management.

Screen Shot 2020-04-24 at 1.53.50 PM
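If you prefer to script this step as well, the PowerCLI equivalent is roughly a one-liner (the cluster name is a placeholder):

# Enable DRS (fully automated) and vSphere HA on the cluster
Set-Cluster -Cluster "SUN05-Cluster01" -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -HAEnabled:$true -Confirm:$false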

 

I also created the VDS and port groups using PowerCLI. Although two VDSes are created, only one will be used; in this case, the SUN05-VDS01-Mgmt VDS.

Screen Shot 2020-04-24 at 12.44.40 PM
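The relevant part of that script is essentially the following sketch (only two of the port groups are shown, and the datacenter name is a placeholder):

# Create the management VDS in the datacenter
$dc  = Get-Datacenter -Name "SUN05-DC"
$vds = New-VDSwitch -Name "SUN05-VDS01-Mgmt" -Location $dc

# Create the VLAN-backed port groups
New-VDPortgroup -VDSwitch $vds -Name "VDS01-VLAN116-Mgmt" -VlanId 116
New-VDPortgroup -VDSwitch $vds -Name "VDS01-VLAN115-IaaS" -VlanId 115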

After running the PowerCLI script, you can see that the VDS and the related port groups were created in vCenter.

Screen Shot 2020-04-24 at 12.40.04 PM

Edit the settings on the VDS switch, changing the number of uplinks to 2 and enabling Network I/O Control (NIOC).

Screen Shot 2020-04-24 at 12.47.50 PM
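The same two changes can also be made from PowerCLI. A rough equivalent is shown below; note that, as far as I know, there is no dedicated cmdlet parameter for NIOC, so it is toggled through the API object of the switch:

$vds = Get-VDSwitch -Name "SUN05-VDS01-Mgmt"

# Set the number of uplinks to 2
Set-VDSwitch -VDSwitch $vds -NumUplinkPorts 2

# Enable Network I/O Control via the vSphere API object
$vds.ExtensionData.EnableNetworkResourceManagement($true)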

 

Add the hosts to the VDS and configure the uplinks. Right-click on SUN05-VDS01-Mgmt and click Add and Manage Hosts.

Screen Shot 2020-04-24 at 12.45.57 PM

Select Add hosts

Screen Shot 2020-04-24 at 12.46.26 PM


Select all the new hosts

Screen Shot 2020-04-24 at 12.46.50 PM

Click Assign uplink on the NIC that is currently used by vSwitch0.

Screen Shot 2020-04-24 at 12.47.23 PM

Assign it to uplink1. Since the vmnics on all my hosts are homogeneous, a convenient way to apply this configuration to all the hosts is to tick "Apply this uplink assignment to the rest of the hosts".

Screen Shot 2020-04-24 at 12.48.45 PM

Review the configurations…

Screen Shot 2020-04-24 at 12.49.12 PM

We are also going to migrate the management VMkernel interface from the standard vSwitch (vSwitch0) to the VDS.

Screen Shot 2020-04-24 at 12.49.40 PM

Select the VDS port group; in my case it is VDS01-VLAN116-Mgmt, which has a VLAN ID of 116.

Screen Shot 2020-04-24 at 12.50.05 PM

Review settings again…

Screen Shot 2020-04-24 at 12.50.20 PM

The following shows that both the vmnic and the management VMkernel interface were successfully migrated from the standard switch.

Screen Shot 2020-04-24 at 12.50.59 PM
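The whole Add and Manage Hosts workflow above can also be scripted. Here is a hedged PowerCLI sketch for a single host that migrates vmnic0 and the management vmk0 in one step (the host name is a placeholder; the port group name is the one used in my lab):

$vds    = Get-VDSwitch -Name "SUN05-VDS01-Mgmt"
$vmhost = Get-VMHost -Name "esxi01.example.com"

# Add the host to the VDS
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Migrate the physical uplink and the management VMkernel adapter together,
# landing vmk0 on the VDS01-VLAN116-Mgmt port group
$vmnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0
$vmk0  = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name vmk0
$pg    = Get-VDPortgroup -VDSwitch $vds -Name "VDS01-VLAN116-Mgmt"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic -VMHostVirtualNic $vmk0 -VirtualNicPortgroup $pg -Confirm:$false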

 

Next, add the rest of the VMkernel interfaces, such as vMotion, IP storage and vSAN, again using PowerCLI.

Screen Shot 2020-04-24 at 12.43.42 PM
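For reference, creating these VMkernel interfaces in PowerCLI boils down to New-VMHostNetworkAdapter calls along these lines (the vMotion and vSAN port group names, IPs and subnet masks below are illustrative placeholders, not my exact values):

$vds    = Get-VDSwitch -Name "SUN05-VDS01-Mgmt"
$vmhost = Get-VMHost -Name "esxi01.example.com"

# vMotion VMkernel interface
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup "VDS01-VLAN117-vMotion" -IP 10.117.1.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

# vSAN VMkernel interface
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup "VDS01-VLAN118-vSAN" -IP 10.118.1.11 -SubnetMask 255.255.255.0 -VsanTrafficEnabled:$true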

The following shows that all the VMkernel interfaces were created successfully.

Screen Shot 2020-04-24 at 1.51.45 PM

Let's try enabling Workload Management.

Screen Shot 2020-04-24 at 1.55.56 PM

The Workload Management wizard does a good job of checking whether you have the relevant prerequisites in place before it can be enabled. You can see that we now need NSX-T to be installed before proceeding further.

Screen Shot 2020-04-24 at 1.53.02 PM

 

Step 4 – Install the NSX-T Manager. I also typically prefer to install the NSX-T Manager from the CLI using ovftool.

Install-nsxtmanager02.bat file:

C:\Users\Administrator\Downloads\ovftool\win32\ovftool --name=sun05-nsxtmanager02 --X:injectOvfEnv --X:logFile=ovftool.log --allowExtraConfig --datastore=vsanDatastore --network="VDS01-VLAN115-IaaS" --acceptAllEulas --noSSLVerify --diskMode=thin --prop:nsx_ip_0=10.115.1.26 --prop:nsx_netmask_0=255.255.255.0 --prop:nsx_gateway_0=10.115.1.1 --prop:nsx_dns1_0=192.168.1.10 --prop:nsx_domain_0=acepod.com --prop:nsx_ntp_0=192.168.1.10 --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=True --prop:nsx_passwd_0=VMware1!VMware1! --prop:nsx_cli_passwd_0=VMware1!VMware1! --prop:nsx_hostname=sun05-nsxtmanager02 "Z:\Downloads\VMware\vSphere\7.0-GA\nsx-unified-appliance-3.0.0.0.0.15946739.ova" vi://administrator@vsphere.local@sun01-vcenter00.acepod.com/?ip=192.168.1.202

Pause

Screen Shot 2020-04-24 at 11.27.41 AM

Once complete, we can try logging in to the NSX-T Manager UI.

Screen Shot 2020-04-24 at 2.01.28 PM

 

If you see this page after logging in, the NSX-T Manager appliance was installed successfully.

Screen Shot 2020-04-24 at 2.02.02 PM

Now that we are done installing and setting up vCenter and the NSX-T Manager, next we will configure the virtual networking: NSX-T segments, Tier-0 routers, uplinks and network routing.

 

 

Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install

Part 1: Overview, Design, Network Topology, Hardware Used

Part 2: ESXi, vCenter, VDS Config and NSX-T Manager

Part 3: NSX-T Edges, Segments, Tier-0 Routing

Part 4: Supervisor Cluster, Content Library, TKG Clusters

Part 5: Testing, Demo Apps