Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install Part 4: Supervisor Cluster, Content Library, TKG Clusters

In this section, we will enable Workload Management, create the Supervisor Cluster, set up the Content Library, and create TKG Clusters, also known as Guest Clusters.

Step 1 – Create the VM Storage Policies.

Log in to vCenter if you have not already. Go to Menu -> Datastores and click on the datastore you would like to use for vSphere with Kubernetes.

Under Tags, click on Assign.


Click on ADD TAG.


Give a name to the tag. Then click on Create New Category.

Name: pp-storage


Category Name: pp. Leave the rest of the settings unchanged.


This is how it looks when the Tag and Tag Category have been created successfully. Select the Tag you just created and click on Assign.


Menu -> Policies and Profiles -> VM Storage Policies. Click Create.


Name: pp-storage-policy


Select Enable tag based placement rules.


Select the Tag category and Tags you created in previous steps.


It will show the datastore that you previously tagged.


On the Review and finish page, check the settings and click Finish.


Step 2 – Enable Workload Management and Create the Supervisor Cluster.

Menu -> Workload Management. Click on Enable.


Select the Compatible Cluster.


Cluster Settings: select the Control Plane Size. I chose Tiny since this is a lab/testing setup.


Configure Networking for the Control Plane and Worker Nodes.

Management Network

Network: VDS01-VLAN115-IaaS

Starting IP Address: 10.115.1.201

Subnet Mask: 255.255.255.0

Gateway: 10.115.1.1

DNS Server: 192.168.1.10

NTP Server: 207.148.72.47 (As my lab has Internet access, I use a public NTP server.) NTP is very important: if NTP is not working correctly, you will typically see authentication errors in the wcpsvc logs.
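
A quick way to sanity-check time sync before enabling Workload Management (a sketch, assuming SSH access to the ESXi hosts):

# Confirm NTP is enabled and pointing at the right server
esxcli system ntp get
# Compare clocks across ESXi, vCenter, and NSX Manager; they should agree within a second or two
date -u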


Workload Network

vSphere Distributed Switch: SUN05-VDS01-MGMT

Edge Cluster: EC01

DNS Server: 192.168.1.10

Pod CIDRs: 10.244.0.0/21 (Default)

Server CIDRs: 10.96.0.0/24 (Default)

Ingress CIDRs: 10.30.10.0/24

Egress CIDRs: 10.30.20.0/24
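
A quick note on the last two ranges: the Ingress CIDR supplies the virtual IPs that the NSX-T load balancers hand out, so the Supervisor Cluster API server gets an address from this range (10.30.10.1 in this lab, which we log in to in Steps 4 and 6), while the Egress CIDR is used for SNAT when pods reach external networks.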


Storage


Review and Confirm


Once that is done, you will see the Supervisor Cluster control plane VMs being deployed.


Go make yourself a cup of coffee and come back in about 25 minutes. :)


Once the cluster is up, you can review the network configuration from the Workload Management screen.


Step 3 – Create Namespace, Set up Permissions and Storage

Click on Create Namespace. Select the cluster where you want to create the namespace and give the namespace a name. By the way, just a note here: don't confuse this namespace with a Kubernetes namespace. The way I think about this namespace construct in vSphere with Kubernetes is as a project or an application, and this project or application can comprise containers as well as VMs.

Name: demo-app-01


This shows that the namespace has been created successfully.


Click on Add Permissions.


Give permissions to Administrator@vsphere.local with the edit role.


Add Storage Policies.


This is how it looks with Permissions and Storage Policies configured successfully.


Step 4 – Test Pod VM

Log in to the Supervisor Cluster.
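
The login command (the same one used again for Step 6):

kubectl vsphere login --server=10.30.10.1 --insecure-skip-tls-verify --vsphere-username=administrator@vsphere.local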


kubectl config use-context demo-app-01
  
kubectl get nodes


Start a Pod VM.
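
As a minimal sketch, you can start a single nginx Pod VM in this namespace (the pod name and image here are just placeholders):

kubectl run nginx-test --image=nginx --restart=Never
kubectl get pods -o wide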


Pod VM running.

Step 5 – Enabling Content Library

vCenter -> Menu -> Content Libraries -> Create.

Name: Kubernetes


Subscription URL: https://wp-content.vmware.com/v2/latest/lib.json
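
This is VMware's hosted subscription library; it contains the Tanzu Kubernetes release OVA images that the TKG cluster nodes are built from.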


Select the storage where you want to store the OVA images.


Ready to Complete.


This is how it looks when the images have been downloaded successfully.


Step 6 – Create TKG Clusters

Create the following yaml file.

nano create-tanzu-k8s-cluster01.yaml

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster               # name of cluster
  namespace: demo-app-01
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall   # vmclass to be used for master(s)
      storageClass: pp-storage-policy
    workers:
      count: 2
      class: best-effort-xsmall   # vmclass to be used for worker(s)
      storageClass: pp-storage-policy
  distribution:
    version: v1.16.8
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]

Log in to the Supervisor Cluster.

kubectl vsphere login --server=10.30.10.1 --insecure-skip-tls-verify --vsphere-username=administrator@vsphere.local
  
kubectl config use-context demo-app-01
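
Before applying the yaml, it is worth verifying that the VM class and Kubernetes version it references are available in the Supervisor Cluster (a quick check using the object kinds exposed by vSphere with Kubernetes):

kubectl get virtualmachineclasses
kubectl get virtualmachineimages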


Apply the yaml file.
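
That is:

kubectl apply -f create-tanzu-k8s-cluster01.yaml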


You can now see the TKG Cluster VMs being deployed.
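
You can also watch the rollout from the CLI; the cluster object reports its provisioning phase (tanzukubernetescluster is usually abbreviated tkc):

kubectl get tanzukubernetescluster tkg-cluster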


Log in to the TKG Cluster.

kubectl vsphere login --server=10.30.10.1 --vsphere-username=administrator@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name=tkg-cluster --tanzu-kubernetes-cluster-namespace=demo-app-01


TKG Cluster deployed successfully.

kubectl config use-context tkg-cluster
kubectl cluster-info
kubectl get nodes
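
With the counts from the yaml above (one control plane node and two workers), kubectl get nodes should list three nodes in total.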


Now that we are done creating TKG Clusters, we will deploy some demo apps next.

Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install

Part 1: Overview, Design, Network Topology, Hardware Used

Part 2: ESXi, vCenter, VDS Config and NSX-T Manager

Part 3: NSX-T Edges, Segments, Tier-0 Routing

Part 4: Supervisor Cluster, Content Library, TKG Clusters

Part 5: Testing, Demo Apps