Preparing the NSX-T 3.0 Datacenter Infrastructure

Here are a few notes I have made along the way whilst implementing NSX-T 3.0 for the datacenter.

Implementing NSX-T in vSphere

Here is the preparation workflow for deploying NSX-T in vSphere:
  1. Use the OVF template to deploy NSX Manager
  2. Access NSX UI
  3. Register vCenter Server with NSX Manager
  4. Deploy additional NSX Manager instances to form a cluster
  5. Configure transport zones
  6. Prepare hypervisor hosts as transport nodes
  7. Deploy NSX Edge nodes
  8. Create an NSX Edge Cluster
After that, we can create our segments, Tier-0 and Tier-1 gateways, and so on.
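
As an example of step 3, the vCenter registration can also be driven through the NSX-T Manager REST API. The sketch below is a minimal, untested example using Python and the requests library; the manager address, credentials and thumbprint are placeholders, and the payload should be checked against the NSX-T 3.0 API guide before use.

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "<admin-password>")            # placeholder credentials

# Compute manager definition used to register vCenter Server with NSX Manager
compute_manager = {
    "display_name": "vcenter.lab.local",
    "server": "vcenter.lab.local",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "<vcenter-password>",
        "thumbprint": "<sha-256 thumbprint of the vCenter certificate>",
    },
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/fabric/compute-managers",
    json=compute_manager,
    auth=AUTH,
    verify=False,  # lab only: the default self-signed NSX Manager certificate is untrusted
)
resp.raise_for_status()
print("Compute manager registered with id:", resp.json()["id"])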

Considerations for Deploying NSX Manager

NSX Manager can be installed on ESXi hosts managed by vCenter Server or on standalone ESXi hosts. NSX Manager can also be installed as a virtual machine on KVM hosts.
A virtual IP address can be used across multiple NSX Manager nodes in a cluster.
NSX Manager has its own self-signed certificates, but these can be replaced with third-party certificates.
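
If a cluster virtual IP is used, it can be assigned in the UI or, as a rough sketch, via the Manager API as below (Python/requests, with placeholder address, credentials and VIP; confirm the endpoint against the NSX-T API reference).

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "<admin-password>")            # placeholder

# Assign the shared virtual IP for the NSX Manager cluster
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/cluster/api-virtual-ip"
    "?action=set_virtual_ip&ip_address=192.168.10.50",  # placeholder VIP
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.json())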

NSX Manager Node Sizing

Small – 4 CPU – 16GB memory – 300GB hard disk – only suitable for POC or lab environments
Medium – 6 CPU – 24GB memory – 300GB hard disk – supports up to 64 hosts
Large – 12 CPU – 48GB memory – 300GB hard disk – for large-scale environments

External Network

A physical network or VLAN not managed by NSX-T. You can link your logical network or overlay network to an external network through an NSX Edge.

Transport Zone

  • Collection of transport nodes that defines the maximum span of logical switches
  • Communication between transport nodes occurs over their TEPs (tunnel endpoints)
  • A transport zone defines a collection of transport nodes that can communicate with each other across the physical infrastructure
  • Determines which hosts can participate in a particular network
  • A single transport zone can have all types of transport nodes (ESXi, KVM, bare-metal servers and NSX Edge)
  • A transport zone identifies the type of traffic (VLAN or overlay)
  • A transport zone is associated with an N-VDS name
  • You can configure more than one transport zone
  • A transport zone does not represent a security boundary
  • It simply defines the scope within the datacenter across which networks will stretch
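
As a rough illustration, an overlay and a VLAN transport zone could be created via the Manager API like this (Python/requests sketch; the manager address, zone names and N-VDS name are placeholders):

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "<admin-password>")            # placeholder

# One overlay and one VLAN transport zone, both associated with the same N-VDS name
for name, tz_type in [("tz-overlay", "OVERLAY"), ("tz-vlan", "VLAN")]:
    body = {
        "display_name": name,
        "transport_type": tz_type,
        "host_switch_name": "nvds-1",  # N-VDS name the zone is associated with
    }
    resp = requests.post(
        f"{NSX_MANAGER}/api/v1/transport-zones",
        json=body,
        auth=AUTH,
        verify=False,  # lab only
    )
    resp.raise_for_status()
    print(name, "->", resp.json()["id"])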

Host Transport Node

  • Hypervisor node that has been registered with the NSX-T management plane.
  • A transport node is a node that participates in an NSX-T Data Center overlay or NSX-T Data Center VLAN networking.
  • Can include ESXi, NSX Edge nodes, KVM and physical servers (now including Windows Server 2016)
  • Needs networking to both management and transport traffic
  • Can belong to multiple transport zones
  • A segment can only belong to one transport zone.
  • NSX-T requires transport nodes to perform networking (overlay or VLAN) and security functions. A transport node is responsible for forwarding the data plane traffic originating from VMs, containers or applications running on bare-metal servers. NSX-T supports several types of transport nodes, including hypervisors (ESXi or KVM), bare metal (RHEL, CentOS, Ubuntu) and NSX Edge. Because NSX-T is decoupled from the hypervisor, ESXi and KVM transport nodes can work together, and networks and topologies can extend across both ESXi and KVM environments.

Transport Node Components

Each transport node has a management plane agent (MPA), local control plane (LCP), and N-VDS installed. NSX Manager polls the transport node for configuration, statistics and status using the MPA. The LCP computes the local runtime state for the endpoint based on updates from the central control plane (CCP) and local data plane information. It also pushes stateless configurations to forwarding engines in the data plane and reports the information back to the central control plane. The N-VDS, also known as the host switch, is the primary component in the data plane. It does the switching, overlay encapsulation and decapsulation, firewall enforcement, and routing. The N-VDS is used to attach VMs to NSX-T logical switches and to create logical router uplinks and downlinks. The N-VDS is installed on a transport node once the node has been added to a transport zone, as each transport zone has its own N-VDS.
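
A quick way to see the MPA and LCP at work is to query the transport node status from the Manager API. The sketch below (Python/requests) lists each transport node with its management plane and control plane connectivity; the field names are assumptions based on the NSX-T status API, so verify them against your version.

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "<admin-password>")            # placeholder

# List all transport nodes, then fetch the status of each one
nodes = requests.get(
    f"{NSX_MANAGER}/api/v1/transport-nodes", auth=AUTH, verify=False
).json()["results"]

for node in nodes:
    status = requests.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes/{node['id']}/status",
        auth=AUTH,
        verify=False,  # lab only
    ).json()
    # mpa_connectivity_status / lcp_connectivity_status are assumed field names
    print(node["display_name"],
          status.get("mpa_connectivity_status"),
          status.get("lcp_connectivity_status"))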

Transport Node Profiles

Transport node profiles are useful for consistency, speed up deployments and help avoid manual errors.
A profile contains all of the configuration required to create a transport node.
A transport node profile defines:
  • Transport zones
  • N-VDS or VDS switch configs
  • Uplink profile
  • IP assignment
  • Mapping of physical NICs
  • VMkernel and physical adaptor migrations
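
A transport node profile can also be created through the Manager API. The sketch below (Python/requests) shows how the items listed above fit together in one payload; every ID and name is a placeholder, and the exact host_switch_spec structure should be checked against the NSX-T 3.0 API guide.

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "<admin-password>")            # placeholder

# A profile bundling transport zone, N-VDS name, uplink profile, TEP IP pool and pNIC mapping
profile = {
    "display_name": "tnp-compute-cluster",
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "nvds-1",
            "host_switch_mode": "STANDARD",
            "transport_zone_endpoints": [
                {"transport_zone_id": "<overlay-tz-id>"}
            ],
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-id>"}
            ],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<tep-ip-pool-id>",
            },
            "pnics": [{"device_name": "vmnic1", "uplink_name": "uplink-1"}],
        }],
    },
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/transport-node-profiles",
    json=profile,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print("Transport node profile id:", resp.json()["id"])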

Edge Transport Node

  • The purpose of the NSX Edge is to provide the computational power needed to deliver IP routing and services
  • A pool of compute resources for providing network services (load balancing, NAT, etc.)
  • Edge Node that has been registered with NSX-T management plane. The Edge Transport node hosts the NSX service routers (SR) that are associated with Tier-0 and Tier-1 routers.
  • Edge is commonly deployed in DMZ and multi-tenant cloud environments, where it creates virtual boundaries for each tenant.
  • Maximum of 10 nodes in a cluster
  • Maximum of 16 clusters – could be used for clustering each service.
  • A gateway can only connect to an edge cluster, not to an individual node
  • Provides the north-south connectivity to the external network
  • Gives gateways their SR components, such as NAT services
  • An Edge node is a transport node
  • Available as a DPDK-based VM
  • Offered in t-shirt sizes: small, medium and large
  • The first NIC is used for management
  • The other interfaces must be assigned to the datapath process that creates the overlay or VLAN-based N-VDS
  • The Edge node TEPs must be on a different subnet from the host transport node TEP IP range
SSH is disabled by default; log on to the console CLI to enable it:
  • > start service ssh
  • > set service ssh start-on-boot
  • > get service ssh
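
Once the Edge transport nodes are registered, they can be grouped into an Edge cluster (step 8 of the workflow). The sketch below (Python/requests) is a minimal example of that call; the node IDs are placeholders, and the member structure should be confirmed against the NSX-T API reference.

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "<admin-password>")            # placeholder

# Group two existing Edge transport nodes into a single Edge cluster
edge_cluster = {
    "display_name": "edge-cluster-01",
    "members": [
        {"transport_node_id": "<edge-node-1-id>"},
        {"transport_node_id": "<edge-node-2-id>"},
    ],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/edge-clusters",
    json=edge_cluster,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print("Edge cluster id:", resp.json()["id"])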

Edge Node Profiles

An Edge node profile contains a specific configuration that can be associated with an NSX Edge cluster, e.g. the fabric profile might contain the tunnelling properties used for dead detection.

Logical Switch

N-VDS
The N-VDS, previously known as the host switch, is the software abstraction layer between the servers and the physical network. It is based on the vSphere Distributed Switch.
  • Is a software logical switch providing the forwarding service on a transport node. (Previously known as host switch)
  • N-VDS is created and distributed across hypervisor and NSX Edge transport nodes.
  • Centrally managed by NSX Manager
  • N-VDS configuration is consistent across hypervisor and NSX Edge transport nodes
The N-VDS is managed by NSX Manager and appears in vCenter Server as an opaque network; it can be seen there but not edited.
VDS
ESXi hosts that are managed by vCenter Server can be configured to use a VDS during transport node preparation. The segments from NSX Manager are shown as distributed port groups in vCenter.
The VDS option depends on vCenter Server and relies on the configuration of a VDS version 7 switch on vSphere 7. If using VDS v7, the MTU size must be set to 1600 bytes or greater in vCenter for NSX-T to utilise it.
Both the N-VDS and VDS are mapped to a single transport zone.

IP Address Pools

Each transport node has a tunnel endpoint (TEP). Each TEP requires an IP address, which can be assigned by DHCP or from an IP address pool.
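
A TEP IP pool can be created in the UI or, as a rough sketch, via the Manager API as below (Python/requests; the CIDR, range and gateway are lab placeholders, and host and Edge TEPs would use separate pools as noted in the Edge section above).

import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
AUTH = ("admin", "<admin-password>")            # placeholder

# IP pool used to assign TEP addresses to host transport nodes
ip_pool = {
    "display_name": "tep-pool-hosts",
    "subnets": [{
        "cidr": "172.16.50.0/24",
        "allocation_ranges": [{"start": "172.16.50.10", "end": "172.16.50.100"}],
        "gateway_ip": "172.16.50.1",
    }],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/pools/ip-pools",
    json=ip_pool,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print("IP pool id:", resp.json()["id"])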