Today, a VLAN can be read as a “villain” in some data centers (pun unashamedly intended). The pace of virtual machine deployment in cloud and scalable server virtualization environments is seriously challenging physical networks in both provisioning speed and sheer capacity.
In my book (“Data Center Virtualization Fundamentals”, CiscoPress 2013), I explained the principles of Virtual eXtensible LAN (VXLAN): basically, a network virtualization technique focused on hypervisor-based server virtualization environments that encapsulates Ethernet frames into UDP segments. VXLAN is a hot topic today not because of how it encapsulates Ethernet frames over Layer 3 networks, but because of who does it: the hypervisor itself.
Rather than VLANs, a server virtualization administrator may use VXLANs to provide isolated broadcast domains between VMs, overcoming the following challenges:
- Defining more than 4094 broadcast domains (you can theoretically provision more than 16M distinct VXLAN segments);
- Provisioning additional Layer 2 segments without any operations on the physical network;
- Avoiding MAC address table “explosions” on the physical network due to an extreme number of virtual machines. With VXLANs, only the VXLAN Tunnel End Point (VTEP) MAC addresses are learned by the physical switches.
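The “more than 16M” figure comes straight from the header layouts: an 802.1Q tag carries a 12-bit VLAN ID, while the VXLAN header carries a 24-bit VXLAN Network Identifier (VNI). A quick sketch of the arithmetic:

```python
# ID space of an 802.1Q VLAN tag vs. a VXLAN Network Identifier (VNI)
vlan_bits = 12   # VLAN ID field in the 802.1Q tag
vni_bits = 24    # VNI field in the VXLAN header

# VLAN IDs 0 and 4095 are reserved, leaving 4094 usable broadcast domains
usable_vlans = 2 ** vlan_bits - 2
vni_segments = 2 ** vni_bits

print(usable_vlans)   # 4094
print(vni_segments)   # 16777216 -- the "more than 16M" segments
```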
The first Cisco product that offered VXLAN was the Nexus 1000V, using the Essential (free) license. As I explained in the book, the Nexus 1000V allows several VMs to communicate using IP multicast for BUM (Broadcast, Unknown unicast, or Multicast) traffic, exactly as defined in the first VXLAN draft. However, applications are surely not entirely composed of VMs. Physical servers and network appliances (such as firewalls and load balancers) must also communicate with them, and therefore VXLAN Gateways are needed for this objective.
Also in the book, I explored Layer 3 gateways, such as the CSR 1000V and ASA 1000V. These specialized virtual machines can basically route VXLAN packets from VMs to VLANs and vice versa. Nevertheless, a very interesting question may arise from this discussion: how can a VXLAN-bound VM exchange Layer 2 traffic with a physical server (such as a database server) connected to a VLAN? In other words, a Layer 2 Gateway is needed to “weld” a VLAN+VXLAN pair, providing a single broadcast domain across these two network abstractions.
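Conceptually, such a Layer 2 Gateway is just a bridge with one leg in the VLAN and one leg in the VXLAN: it learns MAC addresses on both sides and forwards frames between them so the two abstractions behave as one broadcast domain. Here is a deliberately simplified Python sketch of that forwarding decision (all names and the VTEP IP are illustrative, not any Cisco API):

```python
# Toy model of a VLAN/VXLAN Layer 2 Gateway forwarding table.
# A MAC learned on the VLAN side maps to a local port; a MAC learned
# on the VXLAN side maps to the remote VTEP's IP address.

class L2Gateway:
    def __init__(self, vlan_id, vni):
        self.vlan_id = vlan_id
        self.vni = vni
        self.mac_table = {}  # MAC -> ("vlan", port) or ("vxlan", vtep_ip)

    def learn(self, mac, side, where):
        self.mac_table[mac] = (side, where)

    def forward(self, dst_mac):
        """Return the egress decision for a known destination MAC,
        or 'flood' (BUM traffic) when the MAC is unknown."""
        return self.mac_table.get(dst_mac, ("flood", None))

gw = L2Gateway(vlan_id=1500, vni=10000)
gw.learn("8843.e1c2.b4cc", "vlan", "Eth1/1")      # physical server
gw.learn("0050.56b0.5009", "vxlan", "10.0.0.62")  # VM behind a remote VTEP

print(gw.forward("0050.56b0.5009"))  # ('vxlan', '10.0.0.62') -> encapsulate
print(gw.forward("ffff.ffff.ffff"))  # ('flood', None) -> replicate both ways
```

The real gateway, of course, also handles the VXLAN encapsulation itself and the multicast (or unicast) replication of BUM traffic; the point here is only the two-sided MAC table.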
Last year, Cisco released a virtual service blade for the Nexus 1100 to provide this communication. The gateway's main focus is to leverage the Nexus 1000V's Enhanced VXLAN, which allows unicast-only communication among VTEPs. Because that will be a topic for a future post, I will focus here on the Nexus physical switches that can be used as a Layer 2 Gateway.
Figure 1 represents a very simple topology where a Nexus switch is deployed as a Layer 2 Gateway between a server on VLAN 1500 and a virtual machine on VXLAN 10000.
Figure 1: NEXUS switch as a Layer 2 Gateway
This is the (selected) configuration for the physical switch:
NEXUS# show running-config
[output suppressed]
! Enabling VXLAN with VLAN-based configuration
feature nv overlay
feature vn-segment-vlan-based
! Allowing the advertisement of multicast routes
ip pim rp-address 220.127.116.11 group-list 18.104.22.168/32
! Changing the default VXLAN UDP port (4789) to the Nexus 1000V original port
vxlan udp port 8472
! “Welding” VLAN 1500 with VXLAN 10000
vlan 1500
  vn-segment 10000
! Configuring the virtual L2 interface
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10000 mcast-group 22.214.171.124
! Configuring the physical L2 interface
interface Ethernet1/1
  switchport access vlan 1500
! L3 interface used as the VTEP source
interface loopback0
  ip address 126.96.36.199/32
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode
As you can see, the configuration somewhat resembles the very famous OTV configuration from the Nexus 7000 and ASR 1000. Because VXLAN is also an Ethernet-over-IP overlay, both protocols must be correctly established (on interfaces Ethernet1/1 and Loopback0, respectively). Afterwards, the VXLAN processes are enabled (feature commands) and the UDP port is changed for compatibility with the Nexus 1000V. Finally, a Network Virtual Edge (NVE) interface is configured to behave as a VTEP.
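Per frame, what the NVE interface does is conceptually simple: prepend an 8-byte VXLAN header carrying the VNI, then wrap the result in a UDP datagram toward the remote VTEP. A minimal sketch of the header handling as defined in the VXLAN specification (plain Python struct packing, not any Nexus internals):

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid

def vxlan_encap(vni: int, frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.
    Layout: flags (1B), reserved (3B), VNI (3B), reserved (1B)."""
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + frame

def vxlan_decap(packet: bytes):
    """Return (vni, inner_frame) from a VXLAN payload."""
    assert packet[0] & 0x08, "VNI-valid flag not set"
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]

inner = b"\xff" * 6 + b"\x00\x50\x56\xb0\x50\x09" + b"\x08\x06"  # stub frame
pkt = vxlan_encap(10000, inner)
vni, frame = vxlan_decap(pkt)
print(vni)            # 10000
print(frame == inner) # True
```

The outer UDP/IP headers (destination port 8472 in this setup, source and destination VTEP addresses) would be added around this payload by the transport stack.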
A mild curiosity: if you check IANA's service port numbers, UDP port 8472 is reserved for “Overlay Transport”. Later VXLAN drafts changed that to UDP port 4789 to further distance the protocol from its “humble” origins.
After all machines communicate with each other using ARP requests and replies, the Nexus switch in Figure 1 displays the following MAC address table for VLAN 1500:
NEXUS# show mac address-table vlan 1500
Legend:
* – primary entry, G – Gateway MAC, (R) – Routed MAC, O – Overlay MAC
age – seconds since last seen, + – primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
* 1500      0050.56b0.5009   dynamic   100     F      F      nve1/188.8.131.52
* 1500      0050.56b0.58d9   dynamic   10      F      F      nve1/184.108.40.206
* 1500      8843.e1c2.b4cc   dynamic   10      F      F      Eth1/1
Meanwhile, the Nexus 1000V shows the following MAC address table for VXLAN segment 10000:
VSM# show mac address-table bridge-domain segment-10000
Bridge-domain: segment-10000
MAC Address      Type      Age   Port     IP Address       Mod
0050.56b0.5009   static    0     Veth11   0.0.0.0          3
0050.56b0.58d9   static    0     Veth12   0.0.0.0          3
8843.e1c2.b4cc   dynamic   1     Eth3/3   220.127.116.11   3
Total MAC Addresses: 3
You can notice that both devices have entries for all three end hosts. The physical switch sees the MACs on VLAN 1500, while the Nexus 1000V instance sees them on VXLAN 10000 (or bridge domain “segment-10000”). Local MAC addresses point to local interfaces, and remote addresses point to the remote VTEP's IP address.
Of course, other aspects of this implementation, such as high availability and the possibility of VLAN extension between data centers, must be further discussed. But I hope that, for now, these simple principles are enough food for thought.
Please feel free to add your comments below.
Best regards and stay tuned!