Monthly Archives: December 2017

Screen Shot 2017-12-24 at 3.47.57 PM

NSX-T 2.1 GA Installation using ovftool

Update on 23 Dec 2017
NSX-T 2.1 is now GA. Since I'm going to re-do the whole process, I thought I might as well take new screenshots.

You might be wondering why I would want to use ovftool to install the NSX-T appliances. The reason is that my management host is not managed by a vCenter, and the deployment failed when I tried the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

As you can see from the screenshot above, I only have hosts for the EdgeComp cluster. Since I do not have additional hosts for a management cluster, I will be using an existing standalone management host.

While reading the NSX-T Installation Guide, I realised it mentions an alternative method, the OVF Tool, for installing the NSX Manager. I reckon this would be useful for automated installs. The other reason is that the NSX-T architecture moves away from the dependency on vCenter; NSX-T can be deployed in a 100% non-vSphere environment, for example on KVM.

Preparing for Installation

These are the files I will be using for the NSX-T Installation.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7395503.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7395493.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7395503.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, which I put into a batch file. Later I may incorporate it into the PowerShell script I use to deploy the vSphere part.

Screen Shot 2017-12-24 at 2.21.54 PM

You can find the script here.

The ESXi host I'm using is 6.0U2 and it does not accept the OVF properties, so I had no choice but to deploy through vCenter to the EdgeComp hosts instead.
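For reference, this is a sketch of the kind of ovftool invocation I mean, based on the example in the NSX-T Installation Guide. All names, IPs and passwords here are placeholders for my lab; adjust them for yours.

```shell
# Deploy the NSX Manager OVA with ovftool (lab placeholders throughout).
# The nsx_* OVF properties are the ones documented for the unified appliance.
ovftool \
  --name=nsx-manager --X:injectOvfEnv --allowExtraConfig \
  --datastore=DATASTORE01 --network="VM Network" \
  --diskMode=thin --powerOn --acceptAllEulas --noSSLVerify \
  --deploymentOption=small \
  --prop:nsx_role=nsx-manager \
  --prop:nsx_ip_0=10.136.1.102 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=10.136.1.1 \
  --prop:nsx_dns1_0=10.136.1.1 \
  --prop:nsx_domain_0=lab.local \
  --prop:nsx_ntp_0=10.136.1.1 \
  --prop:nsx_isSSHEnabled=True \
  --prop:nsx_allowSSHRootLogin=True \
  --prop:"nsx_passwd_0=VMware1!" \
  --prop:"nsx_cli_passwd_0=VMware1!" \
  --prop:nsx_hostname=nsx-manager \
  nsx-unified-appliance-2.1.0.0.0.7395503.ova \
  "vi://administrator@vsphere.local:password@vcenter.lab.local/Datacenter/host/EdgeComp"
```

Since my standalone 6.0U2 host rejects the OVF properties, the `vi://` target points at vCenter and the EdgeComp cluster, which is exactly the workaround described above.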

Screen Shot 2017-12-17 at 9.05.44 PM

Finally able to login to the NSX Manager console.

Screen Shot 2017-12-17 at 9.06.50 PM

Trying to login to the web console of the NSX Manager.

Screen Shot 2017-12-17 at 9.10.36 PM

Awesome! Able to login and dashboard is up!

Screen Shot 2017-12-17 at 9.11.33 PM

Alright, so next up are the NSX-T Controllers.

Screen Shot 2017-12-17 at 9.23.16 PM

Screen Shot 2017-12-17 at 9.29.42 PM

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output for use in later commands.
    Screen Shot 2017-12-17 at 9.31.42 PM
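As an aside, the same thumbprint can be computed remotely with openssl instead of logging onto the Manager CLI. This is a hypothetical helper of my own, assuming the Manager answers TLS on 443; NSX-T expects the SHA-256 fingerprint as 64 lowercase hex characters with no colons.

```shell
# Compute the NSX Manager API thumbprint remotely (assumption: TLS on 443).
# openssl prints "SHA256 Fingerprint=AB:CD:..."; strip the label and colons
# and lowercase it to match what "get certificate api thumbprint" shows.
nsx_thumbprint() {
  echo | openssl s_client -connect "$1:443" 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256 \
    | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'
}
# usage: nsx_thumbprint 10.136.1.102
```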

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint f24e53ef5c440d40354c2e722ed456def0d0ceed2459fad85803ad732ab8e82b

    Screen Shot 2017-12-17 at 9.51.04 PM

  5. Repeat this procedure for all three controllers

Screen Shot 2017-12-17 at 10.21.13 PM

Screen Shot 2017-12-17 at 10.22.13 PM

Initialise the Controller Cluster

To configure the Controller cluster, we need to log onto one of the Controllers and initialise the cluster. This can be any of the Controllers, but initialising makes that Controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-17 at 10.25.18 PM

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, is in the majority, and can connect to the ZooKeeper server (a distributed configuration service).

Screen Shot 2017-12-17 at 10.27.10 PM

Notice in the web interface that the node has a Cluster Status of “Up”

Screen Shot 2017-12-17 at 10.28.39 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the “Compute Manager” construct to add a vCenter Server and then prepare the hosts automatically, or you can add the hosts manually. You can refer to Sam's blog posts, where he prepares the hosts manually as a learning exercise. Since my purpose is to get the deployment up quickly for PKS/PCF, I'm going to use the automatic method via the “Compute Manager”.

1. Login to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.

Screen Shot 2017-12-18 at 2.03.50 AM

4. Put in the details for the vCenter.

Screen Shot 2017-12-18 at 2.05.55 AM

Success!
Screen Shot 2017-12-18 at 2.07.11 AM

5. Go into Nodes under Fabric.

6. Change “Managed by” from “Standalone” to the name of the compute manager you just added.
Screen Shot 2017-12-18 at 2.09.44 AM

7. Select the Cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” disabled, as I have not created the Transport Zone yet.
Screen Shot 2017-12-18 at 2.12.07 AM

You will see “NSX Install In Progress”.
Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.
Screen Shot 2017-12-18 at 2.16.34 AM

After some troubleshooting, I realised the host had multiple IP addresses. I removed all of them except the management IP address, and the host preparation then went on smoothly.

Screen Shot 2017-12-23 at 3.40.23 PM

Screen Shot 2017-12-23 at 3.40.11 PM

Screen Shot 2017-12-23 at 3.39.39 PM

Host preparation was successful. However, while I was in the middle of writing this blog post, NSX-T 2.1 went GA. Although the build numbers are very similar, I decided to reinstall with the GA version. So much for the host preparation; I will uninstall and re-do everything.

Screen Shot 2017-12-23 at 10.16.29 PM

 

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs


NSX-T 2.1 Installation using ovftool (GA ver)

Update on 23 Dec 2017
NSX-T 2.1 is now GA. I was using a pre-GA version before. Since I'm going to reinstall using the GA version, I thought I might as well take the screenshots again.

You might be wondering why I would want to use ovftool to install the NSX-T appliances. The reason is that my management host is not managed by a vCenter, and the deployment failed when I tried the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

As you can see from the screenshot above, I only have hosts for the EdgeComp cluster. Since I do not have additional hosts for a management cluster, I will be using an existing standalone management host.

While reading the NSX-T Installation Guide, I realised it mentions an alternative method, the OVF Tool, for installing the NSX Manager. I reckon this would be useful for automated installs. The other reason is that the NSX-T architecture moves away from the dependency on vCenter; NSX-T can be deployed in a 100% non-vSphere environment, for example on KVM.

Preparing for Installation

These are the files I will be using for the NSX-T Installation.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7395503.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7395493.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7395503.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, which I put into a batch file. Later I may incorporate it into the PowerShell script I use to deploy the vSphere part.

Screen Shot 2017-12-17 at 7.53.54 PM

You can find the batch script here.

The ESXi host I'm using is 6.0U2 and it does not accept the OVF properties, so I had no choice but to deploy through vCenter to the EdgeComp hosts instead.

Screen Shot 2017-12-24 at 2.21.54 PM

Finally able to login to the NSX Manager console.

Screen Shot 2017-12-24 at 2.28.19 PM

Trying to log in to the web console of the NSX Manager.

Screen Shot 2017-12-24 at 3.47.57 PM

Awesome! Able to login and dashboard is up!

Screen Shot 2017-12-24 at 3.49.29 PM

The dashboard. Nothing to report at the moment.


Alright, so next up are the NSX-T Controllers.

Screen Shot 2017-12-24 at 3.54.16 PM

NSX-T Controllers booted up.
Screen Shot 2017-12-24 at 3.55.44 PM

 

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output for use in later commands.
    Screen Shot 2017-12-24 at 11.39.44 PM

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
    Screen Shot 2017-12-24 at 11.41.10 PM
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint 77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28

    Screen Shot 2017-12-24 at 11.45.57 PM

  5. Repeat this procedure for all three controllers. *For my lab, I will deploy only one controller.
    Screen Shot 2017-12-24 at 11.49.03 PM
    Screen Shot 2017-12-24 at 11.49.19 PM
    Screen Shot 2017-12-24 at 11.50.10 PM

Initialise the Controller Cluster

To configure the Controller cluster, we need to log onto one of the Controllers and initialise the cluster. This can be any of the Controllers, but initialising makes that Controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-24 at 11.52.30 PM

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, is in the majority, and can connect to the ZooKeeper server (a distributed configuration service).
Screen Shot 2017-12-24 at 11.53.16 PM

Notice in the web interface that the node has a Cluster Status of “Up”
Screen Shot 2017-12-24 at 11.53.58 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the “Compute Manager” construct to add a vCenter Server and then prepare the hosts automatically, or you can add the hosts manually. You can refer to Sam's blog posts, where he prepares the hosts manually as a learning exercise. Since my purpose is to get the deployment up quickly for PKS/PCF, I'm going to use the automatic method via the “Compute Manager”.

1. Login to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.
Screen Shot 2017-12-25 at 12.08.39 AM

4. Put in the details for the vCenter.
Screen Shot 2017-12-25 at 12.09.56 AM

Success!
Screen Shot 2017-12-25 at 12.12.11 AM

5. Go into Nodes under Fabric.

6. Change “Managed by” from “Standalone” to the name of the compute manager you just added.
Screen Shot 2017-12-25 at 12.17.13 AM

7. As you may notice above, there are multiple IP addresses listed, and this will cause problems during installation. Click on each host and remove all the IP addresses except the host's management IP address.

8. Select the hosts on which you would like to install NSX.
Screen Shot 2017-12-25 at 12.15.48 AM

9. Select the Cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” disabled, as I have not created the Transport Zone yet.
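Registering the vCenter as a Compute Manager (steps 1 to 4 above) can also be done through the NSX-T REST API rather than the UI. This is an untested sketch; the host names, credentials and the vCenter thumbprint are placeholders.

```shell
# POST a Compute Manager registration to the NSX Manager API.
# All values below are lab placeholders; supply your vCenter's real
# SHA-256 thumbprint in the credential block.
curl -k -u 'admin:VMware1!' \
  -H 'Content-Type: application/json' \
  -X POST https://10.136.1.102/api/v1/fabric/compute-managers \
  -d '{
    "server": "vcenter.lab.local",
    "origin_type": "vCenter",
    "credential": {
      "credential_type": "UsernamePasswordLoginCredential",
      "username": "administrator@vsphere.local",
      "password": "VMware1!",
      "thumbprint": "<vCenter SHA-256 thumbprint>"
    }
  }'
```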

 

You will see “NSX Install In Progress”.
Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.
Screen Shot 2017-12-18 at 2.16.34 AM

After some troubleshooting, I realised the host had multiple IP addresses. I removed all of them except the management IP address, and the host preparation then went on smoothly.

Screen Shot 2017-12-23 at 3.40.23 PM

 

Screen Shot 2017-12-28 at 4.26.22 PM

 

Yeah! Host preparation is successful!

Screen Shot 2017-12-28 at 3.48.24 PM

Deploying a VM Edge Node

Following the instructions from Install NSX Edge on ESXi Using the Command-Line OVF Tool, we can deploy NSX Edges using ovftool.

Screen Shot 2017-12-28 at 5.11.56 PM
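A sketch of the Edge invocation, in the spirit of the command from that guide. Names, IPs and passwords are placeholders for my lab.

```shell
# Deploy an NSX Edge VM with ovftool (lab placeholders throughout).
# The nsx_* properties mirror those used for the Manager appliance.
ovftool \
  --name=nsx-edge-01 --X:injectOvfEnv --allowExtraConfig \
  --datastore=DATASTORE01 --network="VM Network" \
  --diskMode=thin --acceptAllEulas --noSSLVerify \
  --deploymentOption=medium \
  --prop:nsx_ip_0=10.136.1.120 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=10.136.1.1 \
  --prop:nsx_dns1_0=10.136.1.1 \
  --prop:nsx_ntp_0=10.136.1.1 \
  --prop:nsx_isSSHEnabled=True \
  --prop:"nsx_passwd_0=VMware1!" \
  --prop:"nsx_cli_passwd_0=VMware1!" \
  --prop:nsx_hostname=nsx-edge-01 \
  nsx-edge-2.1.0.0.0.7395503.ova \
  "vi://administrator@vsphere.local:password@vcenter.lab.local/Datacenter/host/EdgeComp"
```

I left `--powerOn` out here since the next step is to power on the VM Edge Node manually.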

Once the OVF deployment has completed, power on the VM Edge Node.

Join NSX Edges with the management plane

If you enabled SSH (as I did), you can connect to the newly deployed Edge on its management IP address. If not, you should be able to use the console to configure it. Once on the console/SSH, authenticate as the admin user with the password you specified at deploy time.

Screen Shot 2017-12-28 at 5.25.44 PM

Validate the management IP address using “get interface eth0”.
Screen Shot 2017-12-28 at 5.15.26 PM

Retrieve the Manager API thumbprint using “get certificate api thumbprint” from the NSX Manager console/SSH, or using the web interface
Screen Shot 2017-12-28 at 5.28.58 PM

Join the VM Edge Node to the management plane using the following command:

join management-plane <NSX Manager> username <NSX Manager admin> thumbprint <NSX-Manager’s-thumbprint>

join management-plane 10.136.1.102 username admin thumbprint 77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28

You will be prompted for the password of the NSX admin user, and the node will be registered.
Screen Shot 2017-12-28 at 5.19.03 PM

You can validate the Edge has joined the Management plane using the command “get managers”.
Screen Shot 2017-12-28 at 5.19.30 PM

Below you can see that in the NSX Manager console, under Fabric > Nodes > Edges, I have added two Edge VMs; the deployment is up and connected to the manager, but the Transport Nodes are not configured yet. That will be the next post!

Screen Shot 2017-12-28 at 5.25.17 PM Screen Shot 2017-12-28 at 5.28.19 PM

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs

NSX Test App Container Based

In my last post, I shared the NSX Test App. Nowadays everyone talks about containers, Docker and Kubernetes, so I thought: why not create the NSX Test App as a container as well? This could save people the time of downloading the NSX Test App PHP script and setting up the Apache and PHP environment.

These are the steps I took.

 

docker run -it -p 81:80 --name web1 nimmis/apache-php5 /bin/bash
cd /var/www/html
wget https://raw.githubusercontent.com/vincenthanjs/nsxtestapp/master/nsxtest.php
/etc/init.d/apache2 start
exit

docker commit -m "Added NSX Test App Web" -a "Vincent" web1 vincenthan/nsx-test-web:latest
docker login
docker push vincenthan/nsx-test-web

(Note: fetch the script via raw.githubusercontent.com; the github.com blob URL returns an HTML page rather than the raw PHP file.)
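The same image can be built reproducibly from a Dockerfile instead of the interactive run-and-commit flow above. This is a sketch; the raw.githubusercontent.com URL is my assumption for fetching the script directly, and the build/push commands are shown as comments.

```shell
# Generate a Dockerfile equivalent to the manual commit flow above.
mkdir -p nsx-test-web
cat > nsx-test-web/Dockerfile <<'EOF'
FROM nimmis/apache-php5
ADD https://raw.githubusercontent.com/vincenthanjs/nsxtestapp/master/nsxtest.php /var/www/html/nsxtest.php
EOF
# then build and publish:
# docker build -t vincenthan/nsx-test-web:latest nsx-test-web
# docker push vincenthan/nsx-test-web:latest
```

This way, anyone can rebuild the image without needing my committed container.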

Screen Shot 2017-12-19 at 8.39.40 PM

 

On Docker Hub, I can see the entry for the container.

Screen Shot 2017-12-19 at 8.38.58 PM

 

OK, let's see if it works.

docker run -it -p 82:80 --name web2 vincenthan/nsx-test-web /bin/bash
/etc/init.d/apache2 start

Screen Shot 2017-12-19 at 9.40.18 PM

 

Great! It's working.

Next, I will containerise the database server as well. Stay tuned!

 

NSX Test App

A few of you have watched my YouTube videos, such as VIC and SRM with NSX, and have requested the NSX Test App that I used in my demos. So I thought, why not share the simple PHP web application I wrote with the community? I have also written a Lucky Draw app, which I use at large audience events for a little engagement with the audience. I may share that with the community next time.

The NSX Test App is basically a single PHP file. You can download it from https://github.com/vincenthanjs/nsxtestapp/blob/master/nsxtest.php. Once downloaded, you can serve it with, for example, Apache+PHP or a LAMP stack; it is usually placed in the /var/www folder.
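For a bare Ubuntu VM, the setup can be as simple as the sketch below. The package names assume a recent Ubuntu, and the raw.githubusercontent.com URL is my assumption for fetching the script directly.

```shell
# Minimal Apache+PHP setup to serve nsxtest.php (Ubuntu package names assumed).
sudo apt-get update
sudo apt-get install -y apache2 php libapache2-mod-php php-mysql
sudo wget -O /var/www/html/nsxtest.php \
  https://raw.githubusercontent.com/vincenthanjs/nsxtestapp/master/nsxtest.php
sudo systemctl restart apache2
# then browse to http://<server>/nsxtest.php
```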

If you open up the PHP file, you will see the server name, which is the IP address of the DB VM. I will export the DB into an OVA.

Screen Shot 2017-12-19 at 9.28.02 PM

You can either use the database server in the LAMP stack or use another VM as the database. In my demos, I usually use a separate VM as the DB server to mimic a real-world scenario, like a 2- or 3-tier application. To save you the trouble, I exported the whole DB as an OVA so you can easily import it.

This is the DB OVA. You can download it from https://www.dropbox.com/s/ou17rui2pxjco54/DB-NSX_Test_App-v1.ova?dl=0

It will look like this if you get it working.

Screen Shot 2017-12-19 at 9.30.30 PM

 

You can watch the VIC with NSX video here, which utilises the web application.

NSX-T 2.1 Installation using ovftool

You might be wondering why I would want to use ovftool to install the NSX-T appliances. The reason is that my management host is not managed by a vCenter, and the deployment failed when I tried the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

As you can see from the screenshot above, I only have hosts for the EdgeComp cluster. Since I do not have additional hosts for a management cluster, I will be using an existing standalone management host.

While reading the NSX-T Installation Guide, I realised it mentions an alternative method, the OVF Tool, for installing the NSX Manager. I reckon this would be useful for automated installs. The other reason is that the NSX-T architecture moves away from the dependency on vCenter; NSX-T can be deployed in a 100% non-vSphere environment, for example on KVM.

Preparing for Installation

These are the files I will be using for the NSX-T installation. Please note I'm using the pre-GA build here, i.e. 7374156, as at the time of writing/setting up I am not sure which build will be the GA build. I believe that even once GA, it will be roughly the same experience. I will update this post once NSX-T 2.1 is GA.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7374161.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7374156.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7374178.ova

Update on 23 Dec 2017
NSX-T 2.1 is now GA. These are the GA files; the build numbers do not differ much. I will leave the screenshots and steps unchanged unless something breaks while going through the installation.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7395503.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7395503.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7395503.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, which I put into a batch file. Later I may incorporate it into the PowerShell script I use to deploy the vSphere part.

Screen Shot 2017-12-17 at 7.53.54 PM

You can find the script here.

The ESXi host I'm using is 6.0U2 and it does not accept the OVF properties, so I had no choice but to deploy through vCenter to the EdgeComp hosts instead.

Screen Shot 2017-12-17 at 9.05.44 PM

 

Finally able to login to the NSX Manager console.

Screen Shot 2017-12-17 at 9.06.50 PM

 

Trying to log in to the web console of the NSX Manager.

Screen Shot 2017-12-17 at 9.10.36 PM

Awesome! Able to login and dashboard is up!

Screen Shot 2017-12-17 at 9.11.33 PM

Alright, so next up are the NSX-T Controllers.

Screen Shot 2017-12-17 at 9.23.16 PM

Screen Shot 2017-12-17 at 9.29.42 PM
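The controller appliances can be deployed with ovftool in the same way as the Manager. Below is a sketch of a loop for three controllers; names, IPs and passwords are placeholders for my lab.

```shell
# Deploy three NSX Controllers with ovftool (lab placeholders throughout).
for i in 1 2 3; do
  ovftool \
    --name="nsx-controller-0${i}" --X:injectOvfEnv --allowExtraConfig \
    --datastore=DATASTORE01 --network="VM Network" \
    --diskMode=thin --powerOn --acceptAllEulas --noSSLVerify \
    --prop:nsx_ip_0="10.136.1.11${i}" \
    --prop:nsx_netmask_0=255.255.255.0 \
    --prop:nsx_gateway_0=10.136.1.1 \
    --prop:nsx_dns1_0=10.136.1.1 \
    --prop:nsx_isSSHEnabled=True \
    --prop:"nsx_passwd_0=VMware1!" \
    --prop:"nsx_cli_passwd_0=VMware1!" \
    --prop:nsx_hostname="nsx-controller-0${i}" \
    nsx-controller-2.1.0.0.0.7374156.ova \
    "vi://administrator@vsphere.local:password@vcenter.lab.local/Datacenter/host/EdgeComp"
done
```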

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output for use in later commands.
    Screen Shot 2017-12-17 at 9.31.42 PM

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint f24e53ef5c440d40354c2e722ed456def0d0ceed2459fad85803ad732ab8e82b

    Screen Shot 2017-12-17 at 9.51.04 PM

  5. Repeat this procedure for all three controllers

Screen Shot 2017-12-17 at 10.21.13 PM

 

Screen Shot 2017-12-17 at 10.22.13 PM

Initialise the Controller Cluster

To configure the Controller cluster, we need to log onto one of the Controllers and initialise the cluster. This can be any of the Controllers, but initialising makes that Controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-17 at 10.25.18 PM

 

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, is in the majority, and can connect to the ZooKeeper server (a distributed configuration service).

Screen Shot 2017-12-17 at 10.27.10 PM

Notice in the web interface that the node has a Cluster Status of “Up”

Screen Shot 2017-12-17 at 10.28.39 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the “Compute Manager” construct to add a vCenter Server and then prepare the hosts automatically, or you can add the hosts manually. You can refer to Sam's blog posts, where he prepares the hosts manually as a learning exercise. Since my purpose is to get the deployment up quickly for PKS/PCF, I'm going to use the automatic method via the “Compute Manager”.

1. Login to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.

Screen Shot 2017-12-18 at 2.03.50 AM

4. Put in the details for the vCenter.

Screen Shot 2017-12-18 at 2.05.55 AM

Success!
Screen Shot 2017-12-18 at 2.07.11 AM

5. Go into Nodes under Fabric.

6. Change “Managed by” from “Standalone” to the name of the compute manager you just added.
Screen Shot 2017-12-18 at 2.09.44 AM

7. Select the Cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” disabled, as I have not created the Transport Zone yet.
Screen Shot 2017-12-18 at 2.12.07 AM

You will see “NSX Install In Progress”.
Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.
Screen Shot 2017-12-18 at 2.16.34 AM

After some troubleshooting, I realised the host had multiple IP addresses. I removed all of them except the management IP address, and the host preparation then went on smoothly.

Screen Shot 2017-12-23 at 3.40.23 PM

Screen Shot 2017-12-23 at 3.40.11 PM

 

Screen Shot 2017-12-23 at 3.39.39 PM

Host preparation was successful. However, while I was in the middle of writing this blog post, NSX-T 2.1 went GA. Although the build numbers are very similar, I decided to reinstall with the GA version. So much for the host preparation; I will uninstall and re-do everything.

Screen Shot 2017-12-23 at 10.16.29 PM

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs

Screen Shot 2017-12-17 at 2.43.30 PM

NSX-T 2.1 for PKS and PCF 2.0 Lab

From the VMware PKS architecture slide from VMworld 2017 US, you can see that NSX provides the network & security services for BOSH. To be more precise, this is going to be NSX-T. In the following few posts, I will cover setting up the vSphere lab and preparing the hosts for NSX-T, ready for PKS.

 

Introducing NSX-T 2.1 with Pivotal Integration

Screen Shot 2017-12-17 at 4.00.10 PM

https://blogs.vmware.com/networkvirtualization/2017/12/nsx-t-2-1.html/

As you might have guessed by now, the version of NSX-T I will be using in the lab is 2.1, which supports PCF 2.0 and PKS; in particular, I want to understand the CNI plugin.

Stay tuned. In the next few posts, I will cover the installation of vSphere, NSX-T, PCF and PKS.

 

pks1

My First KUBO Deployment – PKS

Pivotal Container Service (PKS) was announced at VMworld 2017 US. It's not GA yet, but through the VMworld CNA sessions I learnt that it's going to use BOSH to spin up Kubernetes clusters, thus the name KUBO (Kubernetes on BOSH). Through my googling, I found that my colleague Simon from Ireland had the same idea and did a fantastic job detailing the installation steps required to get KUBO up and running.

 

My First KUBO Deployment Screenshots

Screen Shot 2017-11-13 at 7.50.27 AM

 

Below you can see the Kubernetes cluster spun up by BOSH. I had to scale down some of the nodes due to the limited amount of memory and storage resources I had.

 

Screen Shot 2017-11-14 at 10.08.40 PM

 

Below shows the vSphere resources, almost consuming everything on my 64GB RAM, 1TB storage host.

Screen Shot 2017-12-17 at 2.20.29 PM

 

The screenshot below shows the amount of storage those k8s nodes consume.

Screen Shot 2017-12-17 at 2.29.43 PM

 

This is the worker node; you can identify it from the Custom Attributes. 121.2GB of storage used. Ouch!

 

Screen Shot 2017-12-17 at 2.33.41 PM

 

That's about it. Next, I will be setting up NSX-T for PKS, following the very good guide by Simon Guyennet: https://blog.inkubate.io/deploy-kubernetes-on-vsphere-with-kubo/