Don’t Forget to Check the New Data Center Labs on dCloud

Design versus deployment. White board versus black screen. Pre-sales versus post-sales.

Do these separations make sense any longer? In my humble opinion, IT companies cannot afford these silos today. A field engineer should be aware of good design practices as much as an architect should know how operationally adequate a given solution is. But how can the latter experience these products efficiently?

A while ago I published a blog post about some data center lab resources, and a lot of people thanked me for pointing out dCloud in that article. Since then, this website has become the beachhead for Cisco demonstration solutions and has gathered a respectable set of data center labs, such as:

Cisco Nexus Data Broker 2.0 v1

Cisco Nexus Data Broker (formerly Cisco Monitor Manager) is a software-defined, programmable solution to aggregate copies of network traffic using SPAN or network taps for monitoring and visibility purposes. It is a simple, scalable and cost-effective monitoring solution.

Cisco UCS Director 5.0 v1

This demonstration shows how Cisco UCS Director reduces the complexity of data centers, allowing IT staff to shift from manually managing infrastructure to developing new services for the business.

Cisco Nexus 9000: NX-OS Programmability v1

The Nexus 9000 in standalone mode provides APIs that allow external tools to automatically provision network resources. This demo shows the interaction with NX-API via both custom-made and off-the-shelf applications such as Graphite, Splunk, and Puppet.
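If you are curious about what an NX-API call looks like before trying the lab, here is a minimal Python sketch that builds the JSON body the NX-API "ins_api" endpoint expects. The field names reflect the ins_api message format as I recall it; treat the exact values (and the switch URL in the comment) as assumptions to be checked against the lab documentation.

```python
import json

def build_nxapi_payload(command, msg_type="cli_show"):
    """Build the JSON body for NX-API's /ins endpoint (ins_api format)."""
    return {
        "ins_api": {
            "version": "1.0",
            "type": msg_type,        # cli_show for show commands, cli_conf for config
            "chunk": "0",
            "sid": "1",
            "input": command,
            "output_format": "json",
        }
    }

# In the lab, this payload would be POSTed with basic auth to
# http://<switch>/ins (address and credentials come from the demo guide).
payload = build_nxapi_payload("show version")
print(json.dumps(payload, indent=2))
```

The same structure works for configuration commands by switching the message type and separating commands with " ;" in the input string.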

Cisco UCS Invicta with Citrix VDI v1

Show how the UCS Invicta Appliance allows customers to quickly and efficiently configure IT infrastructure to support I/O-intensive applications with real-time analytics.

Cisco UCS Central 1.1.2a v1

Show how the updated Cisco UCS Central 1.1.2a can provide global definition capabilities for policies and resource pools, which can be flexibly allocated across distributed data centers.

Cisco Intelligent Automation for Cloud 4.0 v1

This demo showcases the powerful capabilities of CIAC 4.0, from onboarding tenants and creating organizations to requesting a vDC and creating a VM from a template. It covers all Administration user levels: CPTA, TTA, OTA and End User.

Cisco Extensible Network Controller 1.5 v1

Show how network operations can create powerful, reusable network service abstractions for business applications through northbound APIs, simply by adding Cisco XNC to a network.

Cisco Unified Computing System 2.2 v1

This demonstration shows the ease and flexibility of building, deploying and managing the data center via Cisco UCS 2.2 (El Capitan).

FlexPod with Microsoft Hyper-V v1

Show how the FlexPod architecture can be leveraged for Microsoft Hyper-V virtualized environments to reduce costs, accelerate time to market, and improve IT efficiency.

Cisco Prime Network Services Controller 3.0 v1

Show how Cisco Prime NSC, combined with the Cisco Nexus 1000V switch, Cisco ASA 1000V Cloud Firewall and Cisco VSG, provides a rapid and scalable deployment through dynamic, template-oriented policy management based on security profiles.

FlexPod with VMware v1

Highlight the FlexPod value proposition in the context of a Virtual Desktop deployment. Focus on features that clearly illustrate the business benefits and unique differentiators of the FlexPod architecture.

Impressive, right?

One last observation: you will need your CCO login to access the labs. And be prepared for some pleasant surprises after you log into the website…

Have fun!


VXLAN is Now RFC 7348

Last August, the Internet Engineering Task Force (IETF) consolidated roughly three years of work into a Request for Comments (RFC) document. Although it can be considered a young virtualization technology, Virtual eXtensible Local Area Network (VXLAN) is already molding how cloud networks function and is quickly becoming a fundamental building block of Software-Defined Networking (SDN).

As I explored in Chapter 15 of my book (“Data Center Virtualization Fundamentals”), VXLAN was first invented to provide Layer 2 communication between Virtual Machines (VMs) over IP networks. Nevertheless, its flexibility also allows it to implement specialized data center fabrics such as Cisco Application Centric Infrastructure (ACI).

The document was written by employees from Cisco Systems, Storvisor, Cumulus Networks, Arista, Broadcom, VMware, Intel and Red Hat and can be downloaded here:

Here are some of its highlights:

  • The RFC has achieved informational status, meaning that it should not be considered a mandatory standard but simply the publication of an experience. IETF’s motto in these cases is “rather document than ignore”, according to RFC 1796 (“Not All RFCs are Standards”), which, funnily enough, is also informational.
  • According to the RFC, VXLAN overcomes the following challenges: STP limitations, the limited VLAN range, multi-tenancy in cloud computing environments, and inadequate table sizes at Top-of-Rack switches.
  • The document focuses on a data plane learning scheme as the control plane for VXLAN, meaning that the association of a MAC address to a VTEP’s IP address is discovered via source MAC address learning.
  • Multicast is used for carrying unknown destination, broadcast, and multicast frames. VTEPs use (*,G) joins for that objective.
  • The RFC does not rule out other control plane options for MAC/VTEP learning, such as Nexus 1000V’s Enhanced VXLAN.
  • To ensure traffic delivery without fragmentation, it is recommended that the MTUs across the physical network be set to a value that accommodates the larger frame size caused by the VXLAN encapsulation.
  • The Internet Assigned Numbers Authority (IANA) assigned the value 4789 to VXLAN’s destination UDP port. It is recommended that the source port be calculated using a hash of the inner Ethernet frame’s headers to leverage entropy on networks that deploy traffic load-balancing methods such as ECMP. Also, the UDP source port should be within the dynamic/private port range of 49152-65535.
  • VXLAN gateways are defined as elements that can forward traffic between VXLAN and non-VXLAN environments.
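To make the encapsulation details above more concrete, here is a short Python sketch of the 8-byte VXLAN header defined by RFC 7348, together with the recommended hash-based UDP source port selection. The specific hash function is my own illustrative choice; the RFC only recommends hashing the inner headers, without mandating an algorithm.

```python
import struct
import zlib

VXLAN_PORT = 4789  # IANA-assigned destination UDP port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: I flag set, 24-bit VNI, reserved bits zero."""
    assert 0 <= vni < 2**24
    flags = 0x08 << 24              # the I flag (0x08) marks a valid VNI
    return struct.pack("!II", flags, vni << 8)

def source_port(inner_frame: bytes) -> int:
    """Hash the inner Ethernet header into the dynamic range 49152-65535,
    giving ECMP networks per-flow entropy as the RFC recommends."""
    return 49152 + (zlib.crc32(inner_frame[:14]) % (65536 - 49152))

hdr = vxlan_header(10000)
print(hdr.hex())                    # 0800000000271000
```

Note how VNI 10000 (0x2710) lands in bytes 5 through 7 of the header, leaving the final reserved byte zero.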

Even under informational status, the publication of VXLAN as an RFC will surely help the convergence of distinct vendors’ research and development. This is exciting news indeed for network engineers, server virtualization admins, and cloud architects.

Have you already configured your first VXLAN?

Best regards,


Configuring Physical Nexus Switches as VXLAN Layer 2 Gateways

Today, a VLAN can be read as a “villain” in some data centers (pun unashamedly intended). The rate of virtual machine deployment in cloud and scalable server virtualization environments is seriously challenging physical networks in both provisioning speed and sheer capacity.

In my book (“Data Center Virtualization Fundamentals”, CiscoPress 2013), I explained the principles of Virtual eXtensible LAN (VXLAN): basically, a network virtualization technique focused on hypervisor-based server virtualization environments that encapsulates Ethernet frames into UDP segments. VXLAN is a hot topic today not because of how it encapsulates Ethernet frames over Layer 3 networks, but because of who does it: the hypervisor itself.

Instead of VLANs, a server virtualization administrator may use VXLANs to provide isolated broadcast domains between VMs, overcoming the following challenges:

  • Defining more than 4094 broadcast domains (you can theoretically provision more than 16M distinct VXLAN segments);
  • Provisioning additional Layer 2 segments without any operations on the physical network;
  • Avoiding MAC address table “explosions” on the physical network due to an extreme number of virtual machines. With VXLANs, only the VXLAN Tunnel End Points (VTEPs) MAC addresses are learned by the physical switches.
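The first bullet is simple arithmetic: the 802.1Q VLAN ID is 12 bits wide, while the VXLAN Network Identifier (VNI) is 24 bits wide. A quick sanity check in Python:

```python
vlan_ids = 2**12 - 2           # 4094 usable VLANs (IDs 0 and 4095 are reserved)
vxlan_vnis = 2**24             # 16,777,216 distinct VXLAN segments

print(vlan_ids)                # 4094
print(vxlan_vnis)              # 16777216
print(vxlan_vnis // vlan_ids)  # 4098 -- roughly four thousand times more segments
```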

The first Cisco product that offered VXLAN was the Nexus 1000V, using its essential (free) license. As I explained in the book, the Nexus 1000V allows several VMs to communicate using IP multicast for BUM (Broadcast, Unknown unicast, or Multicast) traffic, exactly as defined in the first VXLAN draft. However, applications are surely not entirely composed of VMs. Physical servers and network appliances (such as firewalls and load balancers) must also communicate with them, and therefore VXLAN Gateways are needed for this objective.

Also in the book, I explored Layer 3 gateways such as CSR 1000V and ASA 1000V. These specialized virtual machines can basically route VXLAN packets from VMs to VLANs and vice versa. Nevertheless, a very interesting question may arise from this discussion: how can a VXLAN-bound VM exchange Layer 2 traffic with a physical server (such as a database server) connected to a VLAN? In other words, a Layer 2 Gateway is needed to “weld” a VLAN+VXLAN pair, providing a single broadcast domain with these two network abstractions.

Last year, Cisco released a virtual service blade for the Nexus 1100 to provide this communication. That gateway’s main focus is to leverage Nexus 1000V’s Enhanced VXLAN, which allows unicast-only communication among VTEPs. Because this will be a topic for a future post, I will focus here on the Nexus physical switches that can be used as Layer 2 Gateways.

Figure 1 represents a very simple topology where a Nexus switch is deployed as a Layer 2 Gateway between a server on VLAN 1500 and a virtual machine on VXLAN 10000.


Figure 1: NEXUS switch as a Layer 2 Gateway

This is the (select) configuration for the physical switch.


NEXUS# show running-config
[output suppressed]

! Enabling VXLAN with VLAN-based configuration
feature nv overlay
feature vn-segment-vlan-based

[output suppressed]

! Allowing the advertisement of multicast routes
ip pim rp-address group-list

! Changing the default VXLAN UDP port (4789) to the Nexus 1000V original port
vxlan udp port 8472

! “Welding” VLAN 1500 with VXLAN 10000
vlan 1500
  vn-segment 10000

! Configuring the virtual L2 interface
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10000 mcast-group

! Configuring the physical L2 interface
interface Ethernet1/1
  switchport access vlan 1500
  speed 1000

[output suppressed]

! Physical L3 interface
interface loopback0
  ip address
  ip router ospf 1 area
  ip pim sparse-mode

[output suppressed]


As you can see, the configuration somehow resembles the very famous OTV from the Nexus 7000 and ASR 1000. Because VXLAN is also an Ethernet-over-IP overlay, the configuration requires that both Ethernet and IP are correctly established (on interfaces Ethernet1/1 and loopback0, respectively). All VXLAN processes are then enabled (the feature commands), and the UDP port is configured for compatibility. Finally, a Network Virtualization Edge (NVE) interface is configured to behave as a VTEP.

A mild curiosity: if you check IANA’s service port numbers, UDP port 8472 is reserved for “Overlay Transport”. Later VXLAN drafts changed that to UDP port 4789 to further distance the protocol from its “humble” origins.

After the machines communicate with each other using ARP requests and replies, the Nexus switch from Figure 1 displays the following MAC address table for VLAN 1500.

NEXUS# show mac address-table vlan 1500
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen, + - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+-----+--------------------
* 1500     0050.56b0.5009   dynamic   100       F     F     nve1 /
* 1500     0050.56b0.58d9   dynamic   10        F     F     nve1 /
* 1500     8843.e1c2.b4cc   dynamic   10        F     F     Eth1/1


Meanwhile, the Nexus 1000V shows the following MAC address table for VXLAN segment 10000.

VSM# show mac address-table bridge-domain segment-10000
Bridge-domain: segment-10000
MAC Address       Type      Age    Port       IP Address    Mod
-----------------+---------+------+----------+-------------+---
0050.56b0.5009    static    0      Veth11                   3
0050.56b0.58d9    static    0      Veth12                   3
8843.e1c2.b4cc    dynamic   1      Eth3/3                   3

Total MAC Addresses: 3


You can notice that both devices have entries for all three end hosts. The physical switch sees the MACs on VLAN 1500, while the Nexus 1000V instance has them on VXLAN 10000 (or bridge domain “segment-10000”). Local MAC addresses point to local interfaces, and remote addresses point to the remote VTEP.
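The flood-and-learn behavior behind those two tables can be sketched in a few lines of Python. This is a conceptual model, not Nexus code: each VTEP keeps a per-segment table that maps an inner source MAC either to a local port or to the remote VTEP IP seen in the outer header, and floods unknown destinations to the segment’s multicast group. The VTEP IP used below is hypothetical.

```python
class VtepMacTable:
    """Toy flood-and-learn table for one VXLAN segment (conceptual model only)."""

    def __init__(self):
        self.table = {}  # MAC -> local port name, or remote VTEP IP

    def learn_local(self, mac: str, port: str):
        # Classic source MAC learning on a local access port
        self.table[mac] = port

    def learn_remote(self, mac: str, outer_src_ip: str):
        # On decapsulation, associate the inner source MAC with the
        # sending VTEP's IP address (RFC 7348 data-plane learning)
        self.table[mac] = outer_src_ip

    def next_hop(self, mac: str):
        # Known destinations go to the learned port/VTEP; unknown
        # destinations are flooded to the segment's multicast group
        return self.table.get(mac, "flood-to-mcast-group")

seg10000 = VtepMacTable()
seg10000.learn_local("8843.e1c2.b4cc", "Eth1/1")
seg10000.learn_remote("0050.56b0.5009", "10.1.1.1")   # hypothetical VTEP IP
print(seg10000.next_hop("0050.56b0.5009"))            # 10.1.1.1
print(seg10000.next_hop("ffff.ffff.ffff"))            # flood-to-mcast-group
```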

For sure, other aspects of this implementation, such as high availability and the possibility of VLAN extension between data centers, must be further discussed. But I hope that, for now, these simple principles are enough food for thought.

Please feel free to add your comments below.

Best regards and stay tuned!