NSX-T Logical Switching

Use Cases for Logical Switching

With today's traditional datacentre switching challenges, businesses require multitenant or application segmentation, along with virtual machine mobility over layer 2 across networks and sites. However, such large layer 2 physical networks can become an STP nightmare. Traditional hardware has its own limitations: swapping out ageing hardware or patching firmware requires downtime. Because network hardware can be fragile, businesses do not often upgrade or change the networking components, especially at the core level, to prevent downtime. The attitude is usually "if it's not broken, don't fix it", which brings its own host of problems when the kit is years old.
NSX-T switching benefits include, but are not limited to: flexible scalability across multi-tenant datacentres, the ability to reuse existing hardware, and stretching layer 2 over layer 3 networks.
Logical switching involves several concepts, including segments, segment ports and uplinks, which are defined on the transport nodes. Segments carry traffic over an overlay- or VLAN-backed N-VDS or VDS switch. Segment profiles allow common configurations to be applied.


Traffic from one hypervisor to another travels through a tunnelling (overlay) protocol: the traffic leaves the virtual machine as a layer 2 frame, and the logical switch, seeing that the destination is on a different hypervisor, sends the frame over the tunnel. The overlay protocol used in NSX-T is Geneve, which encapsulates the data packets.
The Geneve encapsulated packets are communicated in the following ways:
  • The source TEP encapsulates the VM's frame in a Geneve header
  • The encapsulated UDP packet is transmitted to the destination TEP over port 6081
  • The destination TEP decapsulates the Geneve header and delivers the original frame to the destination VM.


Geneve is the tunnel encapsulation protocol (the predecessor, NSX for vSphere, used VXLAN). Each broadcast domain is implemented by tunnelling VM-to-VM and VM-to-logical-router traffic. Geneve runs over UDP, adds an 8-byte base header of its own, and uses a 24-bit VNI to identify the segment.
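The base header layout described above can be sketched in a few lines. This is a minimal illustration of the 8-byte Geneve base header per RFC 8926, not NSX-T code; the function name and defaults are my own.

```python
import struct

GENEVE_PORT = 6081    # IANA-assigned UDP port for Geneve
ETH_BRIDGED = 0x6558  # protocol type: transparent Ethernet bridging

def geneve_base_header(vni: int, opt_len_words: int = 0) -> bytes:
    """Pack the 8-byte Geneve base header (RFC 8926).

    vni is the 24-bit Virtual Network Identifier that selects the segment.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    ver_optlen = (0 << 6) | opt_len_words  # version 0, option length in 4-byte words
    flags = 0                              # O (control) and C (critical) bits clear
    # The VNI occupies the upper 24 bits of the final 32-bit word; low 8 bits reserved
    return struct.pack("!BBHI", ver_optlen, flags, ETH_BRIDGED, vni << 8)

header = geneve_base_header(vni=5001)
```

The encapsulated frame then rides inside a UDP datagram to destination port 6081, as listed in the steps above.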

TEP – Tunnel End Point

  • Tunnel endpoints enable hypervisor hosts to participate in an NSX-T network overlay.
  • The NSX-T overlay deploys a layer 2 network over an existing physical network fabric by encapsulating frames inside of packets, and transferring the encapsulated packets over the underlying transport network.
  • The underlying transport network can consist of either L2 or L3 networks. The TEP is the connection point at which encapsulating and decapsulating takes place.

Segments (logical switch)

  • Segments are virtual layer 2 domains. A segment was earlier called a logical switch.
           There are two types of segments in NSX-T Data Center:
    • VLAN-backed segments
    • Overlay-backed segments
  • Overlay segments are identified by VNI numbers, similar to a VLAN tag ID.
A VLAN-backed segment is a layer 2 broadcast domain that is implemented as a traditional VLAN in the physical infrastructure. This means that traffic between two VMs on two different hosts but attached to the same VLAN-backed segment is carried over a VLAN between the two hosts. The resulting constraint is that you must provision an appropriate VLAN in the physical infrastructure for those two VMs to communicate at layer 2 over a VLAN-backed segment.
In an overlay-backed segment, traffic between two VMs on different hosts but attached to the same overlay segment have their layer 2 traffic carried by a tunnel between the hosts. NSX-T Data Center instantiates and maintains this IP tunnel without the need for any segment-specific configuration in the physical infrastructure. As a result, the virtual network infrastructure is decoupled from the physical network infrastructure. That is, you can create segments dynamically without any configuration of the physical network infrastructure.
The default number of MAC addresses learned on an overlay-backed segment is 2048. The default MAC limit per segment can be changed through the API field remote_overlay_mac_limit in MacLearningSpec.
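The MAC-limit change mentioned above is made through the API rather than the UI. The sketch below only builds the request body as a Python dict; the profile URL and the surrounding `MacManagementSwitchingProfile` structure are assumptions to illustrate where the `remote_overlay_mac_limit` field from `MacLearningSpec` sits, so check the API reference for your NSX-T version before using it.

```python
import json

# Hypothetical endpoint path; confirm the exact URL and profile type
# in the NSX-T API reference for your version.
PROFILE_URL = "/api/v1/switching-profiles/<profile-id>"

payload = {
    "resource_type": "MacManagementSwitchingProfile",
    "mac_learning": {  # MacLearningSpec
        "enabled": True,
        # Raise the per-segment remote MAC limit above the 2048 default
        "remote_overlay_mac_limit": 4096,
    },
}

body = json.dumps(payload, indent=2)  # body of the PUT/PATCH request
```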
When creating a segment, you can define a gateway address, the upstream router it connects to, and a DHCP range if required. When creating a logical switch in NSX Manager, you are only creating a layer 2 domain, so you can see that segments are more powerful and advanced than a logical switch. A segment is really an advanced logical switch: a logical switch plus.
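As a sketch of what that segment definition looks like through the Policy API, the body below models the gateway, upstream router and DHCP range as a dict. The display name, Tier-1 gateway path, transport zone path and addresses are all illustrative placeholders, and the exact field set may vary by NSX-T version.

```python
import json

# Illustrative segment definition; substitute your own Tier-1 gateway,
# transport zone path and addressing.
segment = {
    "display_name": "web-segment",
    "connectivity_path": "/infra/tier-1s/t1-gateway",  # upstream router
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/overlay-tz",
    "subnets": [
        {
            "gateway_address": "172.16.10.1/24",             # segment gateway
            "dhcp_ranges": ["172.16.10.100-172.16.10.200"],  # optional DHCP pool
        }
    ],
}

# This body would be PUT to /policy/api/v1/infra/segments/<segment-id>
body = json.dumps(segment, indent=2)
```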

Segment Profiles

Segment profiles provide layer 2 networking configurations for segments and segment ports. There are several types of segment profiles:
  • Spoof Guard
  • IP Discovery – Learns the VM MAC and IP addresses
  • MAC Discovery – Supports MAC learning and MAC address change
  • Segment Security – Provides stateless layer 2 and layer 3 security
  • QoS – Provides high quality and dedicated network performance for traffic
These default profiles are not editable.

Logical Switching Packet Forwarding

The NSX Controller maintains a set of tables that are used to identify data plane component associations and to forward traffic. For networking to work in NSX-T Data Center, each virtual segment must be configured with network flow tables that form broadcast domains.
  • TEP Table – Maintains VNI-to-TEP IP bindings
  • ARP Table – Maintains VM IP-to-MAC mappings
  • MAC Table – Maintains the VM MAC address to TEP IP mapping
When a VM is powered-on within a segment:
TEP Table update
  1. When a VM is powered-on within a segment, the VNI-to-TEP mapping is registered on the transport node in its local TEP table.
  2. Each transport node updates the central control plane about the learned VNI-to-TEP IP mapping.
  3. Central Control Plane maintains the consolidated entries of VNI-to-TEP IP mappings.
  4. The central control plane sends the updated TEP table to all the transport nodes where the VNI is realised.
MAC Table update
  1. The VM MAC-to-TEP IP mapping is registered on the transport nodes in its local MAC address table.
  2. Each transport node updates the central control plane about the learned VM MAC-to-TEP IP mapping.
  3. The central control plane maintains the consolidated entries of VM MAC-to-TEP IP mappings.
  4. The central control plane sends out the updated MAC table to all transport nodes where the VNI is realised.
ARP Table update
  1. The local VM IP-to-MAC mapping is registered on each transport node in its ARP local table.
  2. Each transport node sends known VM IP-to-MAC mappings to the central control plane.
  3. The central control plane updates its ARP table based on the VM IP-to-MAC received from the transport nodes.
  4. The central control plane sends the updated ARP table to all the transport nodes.
The ARP tables in both the central control plane and transport nodes are flushed after 10 minutes.
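The three table-update sequences above all follow the same report-consolidate-push pattern, which can be sketched as a toy model. This is purely illustrative Python (class and method names are my own, not NSX-T code), showing the CCP consolidating per-VNI TEP, MAC and ARP entries reported by transport nodes.

```python
from collections import defaultdict

class CentralControlPlane:
    """Toy model of CCP table consolidation (illustrative, not NSX-T code)."""

    def __init__(self):
        self.tep_table = defaultdict(set)  # VNI -> {TEP IPs}
        self.mac_table = {}                # (VNI, VM MAC) -> TEP IP
        self.arp_table = {}                # (VNI, VM IP)  -> VM MAC

    def report_vm_power_on(self, vni, tep_ip, vm_mac, vm_ip):
        # Steps 1-3 of each sequence: the transport node registers the mapping
        # locally and reports it; the CCP consolidates the entries.
        self.tep_table[vni].add(tep_ip)
        self.mac_table[(vni, vm_mac)] = tep_ip
        self.arp_table[(vni, vm_ip)] = vm_mac

    def tables_for(self, vni):
        # Step 4: the consolidated tables are pushed to every transport node
        # where the VNI is realised.
        return {
            "teps": self.tep_table[vni],
            "macs": {m: t for (v, m), t in self.mac_table.items() if v == vni},
            "arps": {i: m for (v, i), m in self.arp_table.items() if v == vni},
        }

ccp = CentralControlPlane()
ccp.report_vm_power_on(5001, "192.168.50.11", "00:50:56:aa:bb:01", "172.16.10.5")
ccp.report_vm_power_on(5001, "192.168.50.12", "00:50:56:aa:bb:02", "172.16.10.6")
```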

Flooding Traffic

An NSX-T segment behaves like a LAN, providing the capability of flooding traffic to all the devices attached to the segment.
NSX-T does not differentiate between the different kinds of frames replicated to multiple destinations. Broadcast, unknown unicast, or multicast traffic will be flooded in a similar fashion across a segment.
In the overlay model, the replication of a frame to be flooded on a segment is orchestrated by the different NSX-T components.
  • Head-End Replication Mode
    • The frame is flooded to every transport node connected to the segment, sending packets across all available links.
  • Two-tier hierarchical mode (recommended best practice; performs better in terms of physical uplink bandwidth utilisation)
    • Transport nodes are grouped according to the subnet of their TEP IP address. The source transport node floods the frame to the members of its own group and sends a single copy to a proxy TEP in each remote group, which then replicates it to its local members.
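The subnet-based grouping behind two-tier mode can be sketched as follows. This is an illustrative model only (the function name, /24 assumption and proxy-selection rule are mine): the source replicates to every TEP in its own subnet but sends just one copy per remote TEP subnet, which is where the uplink bandwidth saving comes from.

```python
import ipaddress
from collections import defaultdict

def replication_targets(source_tep, teps, prefix=24):
    """Sketch of two-tier hierarchical replication (illustrative).

    The source sends a copy to every TEP in its own subnet, plus a single
    copy to one proxy TEP per remote subnet; each proxy re-floods locally.
    """
    groups = defaultdict(list)
    for tep in teps:
        subnet = ipaddress.ip_network(f"{tep}/{prefix}", strict=False)
        groups[subnet].append(tep)

    src_subnet = ipaddress.ip_network(f"{source_tep}/{prefix}", strict=False)
    targets = [t for t in groups[src_subnet] if t != source_tep]  # local members
    for subnet, members in groups.items():
        if subnet != src_subnet:
            targets.append(members[0])  # one proxy per remote TEP subnet
    return targets

teps = ["10.0.1.11", "10.0.1.12", "10.0.2.21", "10.0.2.22", "10.0.3.31"]
targets = replication_targets("10.0.1.11", teps)
```

With five TEPs across three subnets, the source sends only three copies instead of the four that head-end replication would require.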