NSX-T 2.1 Installation using ovftool (GA ver)

Update on 23 Dec 2017

NSX-T 2.1 went GA. I was using a pre-GA version before. Since I'm going to reinstall using the GA version, I thought I might as well take the screenshots again.

You might be wondering why I would want to use ovftool to install the NSX-T appliances. This is because my management host is not managed by a vCenter, and the deployment failed when using the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

You can see from the screenshot above that I only have hosts for the EdgeComp Cluster. As I do not have additional hosts for the management cluster, I will be using an existing standalone management host.

While reading the NSX-T Installation Guide, I realised it mentions an alternative method, i.e. using the OVF Tool to install the NSX Manager. I reckon this would be useful for automated installs. The other reason is that the NSX-T architecture is moving away from the dependency on vCenter; NSX-T can be deployed in a 100% non-vSphere environment, for example KVM.

Preparing for Installation

These are the files I will be using for the NSX-T Installation.

1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7395503.ova

2) NSX Controllers – nsx-controller-2.1.0.0.0.7395493.ova

3) NSX Edges – nsx-edge-2.1.0.0.0.7395503.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, which I put into a batch file. Maybe later I will incorporate it into the PowerShell script I use to deploy the vSphere part.

Screen Shot 2017-12-17 at 7.53.54 PM

You can find the batch script here.

The ESXi host I'm using is 6.0U2 and it does not take in the OVF properties. So I had no choice but to deploy to the vCenter instead, onto the EdgeComp hosts.
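For reference, here is a minimal sketch of the kind of ovftool command I used, following the pattern in the NSX-T Installation Guide. The datastore, portgroup, netmask/gateway/DNS/NTP/domain and password values are placeholders for my lab (the Manager IP 10.136.1.102 is the one used later in this post), and it is written bash-style here, whereas my batch file uses ^ for line continuation. You can list the OVF properties your OVA actually supports by running ovftool against the OVA with no target.

```
# Deploy the NSX Manager OVA to a vCenter-managed host (values in <> are placeholders)
ovftool --name=nsx-manager --X:injectOvfEnv --X:logFile=ovftool.log \
  --allowExtraConfig --datastore=<datastore> --network=<mgmt-portgroup> \
  --noSSLVerify --diskMode=thin --powerOn --deploymentOption=medium \
  --prop:nsx_role=nsx-manager \
  --prop:nsx_hostname=nsx-manager \
  --prop:nsx_ip_0=10.136.1.102 \
  --prop:nsx_netmask_0=<netmask> \
  --prop:nsx_gateway_0=<gateway> \
  --prop:nsx_dns1_0=<dns-server> \
  --prop:nsx_domain_0=<domain> \
  --prop:nsx_ntp_0=<ntp-server> \
  --prop:nsx_isSSHEnabled=True \
  --prop:nsx_allowSSHRootLogin=True \
  --prop:nsx_passwd_0=<root-password> \
  --prop:nsx_cli_passwd_0=<admin-password> \
  nsx-unified-appliance-2.1.0.0.0.7395503.ova \
  vi://administrator@vsphere.local:<password>@<vcenter>/<datacenter>/host/<EdgeComp-cluster>
```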

Screen Shot 2017-12-24 at 2.21.54 PM

Finally, I am able to log in to the NSX Manager console.

Screen Shot 2017-12-24 at 2.28.19 PM

Trying to log in to the web console of the NSX Manager.

Screen Shot 2017-12-24 at 3.47.57 PM

Awesome! I am able to log in and the dashboard is up!

Screen Shot 2017-12-24 at 3.49.29 PM

The dashboard. Nothing to report at the moment.


Alright, so next up will be the NSX-T Controllers.
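The controllers can be pushed out the same way. Below is a hedged sketch of the ovftool command for the controller OVA; it uses the same properties as the Manager minus nsx_role, and the controller IP, datastore, portgroup and passwords are placeholders for my lab.

```
# Deploy an NSX Controller OVA (values in <> are placeholders)
ovftool --name=nsx-controller-1 --X:injectOvfEnv --allowExtraConfig \
  --datastore=<datastore> --network=<mgmt-portgroup> --noSSLVerify --diskMode=thin --powerOn \
  --prop:nsx_hostname=nsx-controller-1 \
  --prop:nsx_ip_0=<controller-ip> --prop:nsx_netmask_0=<netmask> --prop:nsx_gateway_0=<gateway> \
  --prop:nsx_dns1_0=<dns-server> --prop:nsx_domain_0=<domain> --prop:nsx_ntp_0=<ntp-server> \
  --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=True \
  --prop:nsx_passwd_0=<root-password> --prop:nsx_cli_passwd_0=<admin-password> \
  nsx-controller-2.1.0.0.0.7395493.ova \
  vi://administrator@vsphere.local:<password>@<vcenter>/<datacenter>/host/<EdgeComp-cluster>
```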

Screen Shot 2017-12-24 at 3.54.16 PM

NSX-T Controllers booted up.

Screen Shot 2017-12-24 at 3.55.44 PM

 

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output to use in later commands.

    Screen Shot 2017-12-24 at 11.39.44 PM
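For reference, the interaction looks like this on the Manager CLI (the prompt is a placeholder; the thumbprint shown is the one from my lab, which is reused in the join commands below):

```
nsx-manager> get certificate api thumbprint
77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28
```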

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <nsx-manager-ip> username admin thumbprint <api-thumbprint>” to register the controller with the NSX Manager.

    Screen Shot 2017-12-24 at 11.41.10 PM

  3. Enter the admin password when prompted.
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”.

join management-plane 10.136.1.102 username admin thumbprint 77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28

    Screen Shot 2017-12-24 at 11.45.57 PM

  5. Repeat this procedure for all three controllers. *For my lab, I will deploy only one controller.

    Screen Shot 2017-12-24 at 11.49.03 PM
    Screen Shot 2017-12-24 at 11.49.19 PM
    Screen Shot 2017-12-24 at 11.50.10 PM

Initialise the Controller Cluster

To configure the Controller cluster, we need to log on to one of the Controllers and initialise the cluster. It can be any of the Controllers, but doing so will make that controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret.
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-24 at 11.52.30 PM
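Put together, the controller-side CLI sequence looks roughly like this (a sketch only; the prompt and secret are placeholders, and you can confirm the exact argument syntax on your build with the CLI's built-in help):

```
# configure the shared secret used by all controllers in the cluster
nsx-controller-1> set control-cluster security-model shared-secret secret <shared-secret>
# promote this controller to be the cluster master
nsx-controller-1> initialize control-cluster
# confirm the node reports itself as master and in majority
nsx-controller-1> get control-cluster status verbose
```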

Validate the status of the node using the "get control-cluster status verbose" command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, is in the majority, and can connect to the Zookeeper server (a distributed configuration service).

Screen Shot 2017-12-24 at 11.53.16 PM

Notice in the web interface that the node has a Cluster Status of “Up”

Screen Shot 2017-12-24 at 11.53.58 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the "Compute Manager" construct to add a vCenter Server and then prepare the hosts automatically, or you can add the hosts manually. You can refer to Sam's blog posts, as he prepares the hosts manually for his learning exercise. Since my purpose is to quickly get the deployment up for PKS/PCF, I'm going to use the automatic method with the "Compute Manager".

  1. Login to NSX-T Manager.

  2. Select Compute Managers.

  3. Click on Add.

Screen Shot 2017-12-25 at 12.08.39 AM

  4. Put in the details for the vCenter.

Screen Shot 2017-12-25 at 12.09.56 AM

Success!

Screen Shot 2017-12-25 at 12.12.11 AM

  5. Go into Nodes under Fabric.

  6. Change the Managed by from Standalone to the name of the compute manager you just specified.

Screen Shot 2017-12-25 at 12.17.13 AM

  7. If you notice above, there are multiple IP addresses listed, and this will pose problems for the installation. Click on each host and remove all the IP addresses except the management IP address of the host.

  8. Select the hosts on which you would like to install NSX.

Screen Shot 2017-12-25 at 12.15.48 AM

  9. Select the Cluster and click on Configure Cluster. Enable "Automatically Install NSX" and leave "Automatically Create Transport Node" as Disabled, as I have not created the Transport Zones yet.

 

You will see NSX Install In Progress

Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.

Screen Shot 2017-12-18 at 2.16.34 AM

After some troubleshooting, I realised the host had multiple IP addresses. So what I did was to remove all of them except for the management IP address, and the host preparation went on smoothly.

Screen Shot 2017-12-23 at 3.40.23 PM

 

Screen Shot 2017-12-28 at 4.26.22 PM

 

Yeah! Host preparation is successful.

Screen Shot 2017-12-28 at 3.48.24 PM

Deploying a VM Edge Node

Following the instructions from the NSX-T Installation Guide topic "Install NSX Edge on ESXi Using the Command-Line OVF Tool", we can deploy NSX Edges using ovftool.
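The command has the same shape as the one for the Manager, with the Edge's four network mappings added. This is a sketch only; the portgroup names, datastore, netmask/gateway/DNS/NTP/domain and passwords are placeholders for my lab, while the name and management IP match the sun03-edgevm01 Edge used later in this post.

```
# Deploy an NSX Edge VM OVA (values in <> are placeholders)
ovftool --name=sun03-edgevm01 --X:injectOvfEnv --allowExtraConfig \
  --datastore=<datastore> --deploymentOption=medium --noSSLVerify --diskMode=thin --powerOn \
  --net:"Network 0=<mgmt-portgroup>" \
  --net:"Network 1=<overlay-portgroup>" \
  --net:"Network 2=<vlan-uplink-portgroup>" \
  --net:"Network 3=<unused-portgroup>" \
  --prop:nsx_hostname=sun03-edgevm01 \
  --prop:nsx_ip_0=10.136.1.111 --prop:nsx_netmask_0=<netmask> --prop:nsx_gateway_0=<gateway> \
  --prop:nsx_dns1_0=<dns-server> --prop:nsx_domain_0=<domain> --prop:nsx_ntp_0=<ntp-server> \
  --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=True \
  --prop:nsx_passwd_0=<root-password> --prop:nsx_cli_passwd_0=<admin-password> \
  nsx-edge-2.1.0.0.0.7395503.ova \
  vi://administrator@vsphere.local:<password>@<vcenter>/<datacenter>/host/<EdgeComp-cluster>
```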

Screen Shot 2017-12-28 at 5.11.56 PM

Once the OVF deployment has completed, power on the VM Edge Node.

Join NSX Edges with the management plane

If you enabled SSH (as I did) you can connect to the newly deployed Edge on its management IP address. If not, you should be able to use the console to configure it. Once on the console/SSH, authenticate as the admin user with the password you specified at deployment time.

Screen Shot 2017-12-28 at 5.25.44 PM

Validate the management IP address using “get interface eth0”

Screen Shot 2017-12-28 at 5.15.26 PM

Retrieve the Manager API thumbprint using “get certificate api thumbprint” from the NSX Manager console/SSH, or using the web interface

Screen Shot 2017-12-28 at 5.28.58 PM

Join the VM Edge Node to the management plane using the following command:

join management-plane <nsx-manager-ip> username admin thumbprint <api-thumbprint>

join management-plane 10.136.1.102 username admin thumbprint 77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28

 

You will be prompted for the password of the NSX admin user and the node will be registered

Screen Shot 2017-12-28 at 5.19.03 PM

You can validate the Edge has joined the Management plane using the command “get managers”.

Screen Shot 2017-12-28 at 5.19.30 PM

Below you can see that in the NSX Manager console under Fabric > Nodes > Edges I have added two Edge VMs; the deployment is up and connected to the Manager, but the Transport Nodes are not configured yet – that comes next!

Screen Shot 2018-02-13 at 10.21.17 PM

Create Transport Zones & Transport Nodes

Create an IP Pool for the Tunnel Endpoints (TEPs)

Both the hosts and edges will require an IP address for the GENEVE tunnel endpoints, so in order to address these I will create an IP Pool.

Click on Groups, IP Pools, Add New IP Pool.

Name: Pool-TEP

IP Ranges: 10.140.1.11 – 10.140.1.250

Gateway: 10.140.1.1

CIDR: 10.140.1.0/24

Screen Shot 2018-02-14 at 9.05.47 AM

 

This shows that the IP Pool is added successfully.

Screen Shot 2018-02-14 at 9.06.37 AM
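As an aside, the same pool can also be created through the NSX-T Manager REST API, which is handy for automation. This is an untested sketch against the /api/v1/pools/ip-pools endpoint; the Manager address and admin credentials are the ones from my lab.

```
curl -k -u admin:<admin-password> -H "Content-Type: application/json" \
  -X POST https://10.136.1.102/api/v1/pools/ip-pools -d '{
    "display_name": "Pool-TEP",
    "subnets": [{
      "cidr": "10.140.1.0/24",
      "gateway_ip": "10.140.1.1",
      "allocation_ranges": [{ "start": "10.140.1.11", "end": "10.140.1.250" }]
    }]
  }'
```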

Creating an Uplink Profile

Click on Fabric, Profiles, Uplink Profiles and ADD.

Name: uplink-profile-nsx-edge-vm

Teaming Policy: Failover Order

Active Uplinks: uplink-1

Transport VLAN: 0 (My VDS portgroup is already tagged with a VLAN, therefore there is no need to tag here. If you are using a trunk portgroup, then you have to specify the VLAN ID here.)

MTU: 1600 (Default)

Screen Shot 2018-02-14 at 9.15.17 AM

Creating the Transport Zones

In my setup, I will be creating two transport zones: one for VLAN and one for Overlay traffic.

Click on Transport Zones and ADD.

Name: TZ-VLAN

N-VDS Name: N-VDS-STD-VLAN

N-VDS Mode: Standard

Traffic Type: VLAN

Screen Shot 2018-02-14 at 9.20.51 AM

Click on Transport Zones and ADD.

Name: TZ-OVERLAY

N-VDS Name: N-VDS-STD-OVERLAY

N-VDS Mode: Standard

Traffic Type: Overlay

Screen Shot 2018-02-14 at 9.22.58 AM

 

Once done, you should see results similar to this screenshot.

Screen Shot 2018-02-14 at 9.23.26 AM
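For completeness, both transport zones can also be created via the REST API. Again an untested sketch, using the same Manager address and credentials as before:

```
# Overlay transport zone backed by N-VDS-STD-OVERLAY
curl -k -u admin:<admin-password> -H "Content-Type: application/json" \
  -X POST https://10.136.1.102/api/v1/transport-zones -d '{
    "display_name": "TZ-OVERLAY",
    "host_switch_name": "N-VDS-STD-OVERLAY",
    "transport_type": "OVERLAY"
  }'

# VLAN transport zone backed by N-VDS-STD-VLAN
curl -k -u admin:<admin-password> -H "Content-Type: application/json" \
  -X POST https://10.136.1.102/api/v1/transport-zones -d '{
    "display_name": "TZ-VLAN",
    "host_switch_name": "N-VDS-STD-VLAN",
    "transport_type": "VLAN"
  }'
```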


Creating Host Transport Nodes

A Transport Node participates in the GENEVE overlay network as well as VLAN networking; however, for my configuration the Host Transport Nodes will only participate in the overlay.

Click on Fabric, Nodes, Transport Nodes and ADD.

Name: TN-SUN03-ESX153

Node: sun03-esxi153.acepod.com (10.136.1.153) 

Transport Zones: TZ-OVERLAY

Screen Shot 2018-02-14 at 9.36.07 AM

N-VDS Configuration.

Screen Shot 2018-02-14 at 9.42.54 AM

You have to log in to vCenter to check which vmnic is available and not connected, as that will be used for the host switch for overlay networking. For my setup, vmnic3 is not used by any vSwitch and therefore I will use it for my transport node uplink.
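If you prefer the ESXi shell to the vCenter UI, a quick way to spot an unclaimed vmnic is to compare the NIC list against the uplinks already claimed by the existing switches (a generic sketch, run on each host):

```
# list all physical NICs on the host
esxcli network nic list
# list standard vSwitches and the uplinks they have claimed
esxcli network vswitch standard list
# if the host also uses a distributed switch, check its uplinks too
esxcli network vswitch dvs vmware list
```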

Screen Shot 2018-02-14 at 9.40.47 AM

Adding the 2nd host, or the Nth host (depending on how many hosts you want to add as Transport Nodes).

Name: TN-SUN03-ESX154

Node: sun03-esxi154.acepod.com (10.136.1.154) 

Transport Zones: TZ-OVERLAY

Screen Shot 2018-02-14 at 10.04.52 AM

Screen Shot 2018-02-14 at 10.05.31 AM


Screen Shot 2018-02-14 at 10.10.04 AM

Creating Edge Transport Nodes

A Transport Node participates in the GENEVE overlay network as well as the VLAN uplinks and provides transport between the two. The previously configured VM Edge Nodes will be configured as Edge Transport Nodes, using the Uplink Profile and Transport Zones configured above.

Adding the NSX Edge-VM as a Transport Node. The Edge Node will participate in both the VLAN and Overlay Transport Zones.

Name: TN-SUN03-EDGEVM01

Node: sun03-edgevm01 (10.136.1.111)

Transport Zones: TZ-OVERLAY, TZ-VLAN

Screen Shot 2018-02-14 at 10.12.42 AM

 

Screen Shot 2018-02-14 at 10.14.37 AM

Screen Shot 2018-02-14 at 10.14.46 AM

 

 

Screen Shot 2018-02-14 at 10.27.54 AM

Click on ADD N-VDS

Screen Shot 2018-02-14 at 10.29.51 AM

 

Note: I have fp-eth2, which is supposed to be another uplink; however, I only added one N-VDS previously. So if you want the 2nd VLAN uplink, you will need to create another N-VDS switch.

 

 

Screen Shot 2018-02-14 at 10.32.26 AM

Do the same for the 2nd Edge-VM Node.

Misc: About AES-NI

Screen Shot 2017-12-28 at 5.25.17 PM

Previously I had some problems with my Edge-VMs complaining that the physical host does not have AES-NI support. I checked that my Intel CPU does support AES-NI, but after checking the BIOS, the AES-NI feature was disabled. After enabling it, I did not receive this error anymore.
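A quick generic way to double-check that AES-NI is actually exposed (for example from a Linux shell on the VM or on any guest running on that host) is to look for the aes CPU flag:

```
# prints a message depending on whether the aes CPU flag is present
if grep -qw aes /proc/cpuinfo; then echo "AES-NI exposed"; else echo "AES-NI not available"; fi
```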

WhatsApp Image 2018-02-13 at 2.53.50 PM

References

1. Sam's NSX-T Installation Blog Posts

2. VMware NSX-T Installation Docs
