Configuring Physical Nexus Switches as VXLAN Layer 2 Gateways

Today, a VLAN can be read as a “villain” in some data centers (pun unashamedly intended). The rate of virtual machine deployment in cloud and scalable server virtualization environments is seriously challenging physical networks in both provisioning speed and sheer capacity.

In my book (“Data Center Virtualization Fundamentals”, Cisco Press, 2013), I explained the principles of Virtual eXtensible LAN (VXLAN): basically a network virtualization technique, focused on hypervisor-based server virtualization environments, that encapsulates Ethernet frames into UDP datagrams. VXLAN is a hot topic today not because of how it encapsulates Ethernet frames over Layer 3 networks, but because of who does it: the hypervisor itself.
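To make the encapsulation idea concrete, here is a minimal Python sketch (my own illustration, not any vendor's code) of the 8-byte VXLAN header that a VTEP prepends to the original Ethernet frame before the outer UDP/IP headers are added. The 24-bit VXLAN Network Identifier (VNI) field is what allows far more segments than the 12-bit VLAN ID:

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    The header carries an 8-bit flags field (0x08 = "VNI present"),
    a 24-bit VNI, and reserved bits. The result would then ride
    inside a normal UDP/IP packet built by the sending VTEP.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field: 0..16777215")
    flags = 0x08 << 24                      # I-flag set, reserved bits zero
    header = struct.pack("!II", flags, vni << 8)  # VNI sits in bits 31..8
    return header + inner_frame

# A 24-bit VNI allows over 16 million segments vs. 4094 usable VLAN IDs:
print(2**24)  # 16777216
```

Everything above the inner frame (outer Ethernet, IP, and UDP headers) is omitted here for brevity; the sending VTEP would supply those with its own addresses and the configured VXLAN UDP port.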

Rather than VLANs, a server virtualization administrator may use VXLANs to provide isolated broadcast domains between VMs, overcoming the following challenges:

  • Defining more than 4094 broadcast domains (you can theoretically provision more than 16M distinct VXLAN segments);
  • Provisioning additional Layer 2 segments without any operations on the physical network;
  • Avoiding MAC address table “explosions” on the physical network due to an extreme number of virtual machines. With VXLANs, only the VXLAN Tunnel End Points (VTEPs) MAC addresses are learned by the physical switches.

The first Cisco product that offered VXLAN was the Nexus 1000V, using the essential (free) license. As I explained in the book, the Nexus 1000V allows several VMs to communicate using IP multicast for BUM (Broadcast, Unknown unicast, or Multicast) traffic, exactly as defined in the first VXLAN draft. However, applications are surely not entirely composed of VMs. Physical servers and network appliances (such as firewalls and load balancers) must also communicate with them, and therefore VXLAN Gateways are needed for this objective.

Also in the book, I explored Layer 3 gateways such as the CSR 1000V and ASA 1000V. These specialized virtual machines can basically route packets from VXLAN-bound VMs to VLANs and vice versa. Nevertheless, a very interesting question may arise from this discussion: how can a VXLAN-bound VM exchange Layer 2 traffic with a physical server (such as a database server) connected to a VLAN? In other words, a Layer 2 Gateway is needed to “weld” a VLAN and a VXLAN together, providing a single broadcast domain across these two network abstractions.
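The “welding” can be pictured as a simple two-way mapping consulted on every frame. The hypothetical Python sketch below (my own illustration, not NX-OS internals) shows the forwarding decision for a gateway mapping VLAN 1500 to VXLAN segment 10000: frames arriving on the access VLAN leave VXLAN-encapsulated, and decapsulated frames are forwarded on the mapped VLAN, so the two abstractions behave as one broadcast domain:

```python
# Hypothetical VLAN-to-VNI mapping table of a Layer 2 Gateway.
VLAN_TO_VNI = {1500: 10000}
VNI_TO_VLAN = {vni: vlan for vlan, vni in VLAN_TO_VNI.items()}

def bridge_from_vlan(vlan: int):
    """A frame received on the access VLAN is sent out VXLAN-encapsulated."""
    vni = VLAN_TO_VNI.get(vlan)
    return ("vxlan", vni) if vni is not None else ("drop", None)

def bridge_from_vxlan(vni: int):
    """A decapsulated frame is forwarded on the mapped VLAN."""
    vlan = VNI_TO_VLAN.get(vni)
    return ("vlan", vlan) if vlan is not None else ("drop", None)

print(bridge_from_vlan(1500))    # ('vxlan', 10000)
print(bridge_from_vxlan(10000))  # ('vlan', 1500)
```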

Last year, Cisco released a virtual service blade for the Nexus 1100 to provide this communication. That gateway's main focus is to leverage the Nexus 1000V's Enhanced VXLAN, which allows unicast-only communication among VTEPs. Because this will be a topic for a future post, I will focus here on the Nexus physical switches that can be used as a Layer 2 Gateway.

Figure 1 represents a very simple topology where a Nexus switch is deployed as a Layer 2 Gateway between a server on VLAN 1500 and a virtual machine on VXLAN 10000.


Figure 1: NEXUS switch as a Layer 2 Gateway

This is the (selected) configuration for the physical switch.


NEXUS# show running-config

[output suppressed]

! Enabling VXLAN with VLAN-based configuration

feature nv overlay

feature vn-segment-vlan-based

[output suppressed]


! Allowing the advertisement of multicast routes

ip pim rp-address group-list


! Changing the default VXLAN UDP port (4789) to the Nexus 1000V original port

vxlan udp port 8472


! “Welding” VLAN 1500 with VXLAN 10000

vlan 1500

vn-segment 10000


! Configuring the virtual L2 interface

interface nve1

no shutdown

source-interface loopback0

member vni 10000 mcast-group


! Configuring the physical L2 interface

interface Ethernet1/1

switchport access vlan 1500

speed 1000


[output suppressed]

! Physical L3 interface

interface loopback0

ip address

ip router ospf 1 area

ip pim sparse-mode


[output suppressed]


As you can see, the configuration somewhat resembles the very famous OTV from the Nexus 7000 and ASR 1000. Because VXLAN is also an Ethernet-over-IP overlay, the configuration requires that both layers are correctly established (interfaces Ethernet1/1 and loopback0, respectively). After that, the VXLAN processes are enabled (feature commands) and the UDP port is changed for compatibility with the Nexus 1000V. Finally, a Network Virtualization Edge (NVE) interface is configured to behave as a VTEP.

A mild curiosity: if you check IANA's service port numbers, UDP port 8472 is reserved for “Overlay Transport”. Later VXLAN drafts changed that to UDP port 4789 to further distance the protocol from its “humble” origins.

After all machines communicate with each other using ARP requests and replies, the Nexus switch from Figure 1 displays the following MAC address table for VLAN 1500.

NEXUS# show mac address-table vlan 1500
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen, + - primary entry using vPC Peer-Link

   VLAN     MAC Address      Type      age   Secure NTFY  Ports/SWID.SSID.LID
* 1500     0050.56b0.5009   dynamic   100    F      F     nve1 /
* 1500     0050.56b0.58d9   dynamic   10     F      F     nve1 /
* 1500     8843.e1c2.b4cc   dynamic   10     F      F     Eth1/1


Meanwhile, the Nexus 1000V shows the following MAC address table for VXLAN segment 10000.

VSM# show mac address-table bridge-domain segment-10000
Bridge-domain: segment-10000
MAC Address       Type      Age   Port      IP Address   Mod
------------------+---------+-----+---------+------------+---
0050.56b0.5009    static    0     Veth11                 3
0050.56b0.58d9    static    0     Veth12                 3
8843.e1c2.b4cc    dynamic   1     Eth3/3                 3

Total MAC Addresses: 3


You can notice that both devices have entries for all three end hosts. The physical switch sees the MACs on VLAN 1500 (welded to VXLAN 10000), while the Nexus 1000V instance has them on VXLAN 10000 (or bridge domain “segment-10000”). Local MAC addresses point to local interfaces, and remote addresses point to the corresponding VTEP (IP addresses).
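The learning behavior behind these tables can be sketched roughly as follows. This is a Python illustration of generic flood-and-learn VTEP behavior, not Cisco's actual implementation, and the VTEP IP address used is hypothetical:

```python
# Rough sketch of how a VTEP populates a MAC table like the one above:
# local MACs point at physical ports, while MACs learned from
# decapsulated VXLAN traffic point at the remote VTEP's IP address.

class VtepMacTable:
    def __init__(self):
        self.table = {}  # mac -> ("port", name) or ("vtep", ip)

    def learn_local(self, mac, port):
        """MAC seen as the source of a frame on a local interface."""
        self.table[mac] = ("port", port)

    def learn_remote(self, mac, vtep_ip):
        """Inner source MAC is associated with the outer source IP (the VTEP)."""
        self.table[mac] = ("vtep", vtep_ip)

    def lookup(self, mac):
        """Unknown unicast would be flooded via the multicast group (BUM)."""
        return self.table.get(mac, ("flood", None))

t = VtepMacTable()
t.learn_local("8843.e1c2.b4cc", "Eth1/1")       # the physical server
t.learn_remote("0050.56b0.5009", "10.0.0.2")    # hypothetical remote VTEP
print(t.lookup("0050.56b0.5009"))  # ('vtep', '10.0.0.2')
print(t.lookup("ffff.ffff.0001"))  # ('flood', None)
```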

For sure, other aspects of this implementation, such as high availability and the possibility of VLAN extension between data centers, must be further discussed. But I hope that, for now, these simple principles are enough food for thought.

Please feel free to add your comments below.

Best regards and stay tuned!


CCIE Data Center!

Hello, everybody! How are you all doing?

I hope my book (“Data Center Virtualization Fundamentals”) was sufficient to cover my absence from the blog. But as shown below, you may agree with me that it was for a good reason.


Those who know me personally can attest that I am a big fan of this certification. Surpassing such a challenge made me really happy, especially considering that the CCIE DC was the hardest of them all, in my humble opinion.

While I will certainly publish a post about my study strategy, much weight was also lifted, freeing me to discuss other subjects in the near future. These are exciting times for Data Center innovation, and my intention is to bring the book's approach to new technologies such as:

  • Software Defined Networking (SDN)
  • Network Functions Virtualization (NFV)
  • Dynamic Fabric Automation (DFA)
  • Application Centric Infrastructure (ACI)
  • … and much more!

Anyway, thanks for your patience. It's nice to be back.

Best regards,


Watch my “Meet the Author” Webinar Recording on Nexus 1000V New Virtual Network Services

Hello All,

You can check the webinar content (and my Brazilian accent) at this URL:

The CiscoPress-sponsored webinar happened on October 15th and its agenda was:

  • Nexus 1000V for Microsoft Hyper-V
  • Enhanced VXLAN
  • Nexus 1000V integration with Imperva SecureSphere WAF and Citrix NetScaler 1000V
  • Nexus 1000V InterCloud

I hope you will enjoy it!