Although the Cisco Nexus platforms have been around for a while now, they are not commonly deployed outside of the data centre. This blog gives a high-level overview of some of the main differences between Nexus and Catalyst switching.
What is NX-OS?
Cisco Nexus switching platforms are at heart L2/L3 (with an L2 bias) switches when viewed from a Classical Ethernet (CE) perspective. While they continue to support all of the features you would expect from a CE switching platform (VLANs, trunking, VTP, Rapid-PVST, MST, EtherChannel, LACP (802.3ad), PVLANs, UDLD, IGPs, BGP, FHRPs, etc…), the Nexus platform also supports some entirely new features, including Fabric Extenders (FEX), virtual Port Channel (vPC), FabricPath, OTV (Overlay Transport Virtualisation), VDCs (virtual device contexts, N7K), native FC (Fibre Channel) switching, FCoE (Fibre Channel over Ethernet) and FCIP (Fibre Channel over IP), to name but a few.
With these new technologies at their disposal, does Cisco intend for the Nexus switching platforms to eventually replace the Catalyst IOS switching platforms altogether? In my opinion, Nexus and Catalyst IOS switches cover completely different market verticals: Catalyst switches continue to be aimed at campus L2/L3 switching, whereas Nexus is traditionally aimed at data centre switching.
Where do I use NX-OS?
There is the potential to justify a Nexus 5K or 6K in a larger campus network, but certainly not as an Access layer switch (there are some Nexus platforms that could fulfil this role, such as the Nexus 3K and 4K, but in my opinion these are not typical data centre switching platforms).
Although a typical campus network may have a large server room (or small data centre, depending on how you look at it), the design guidelines that apply to a data centre still apply here, albeit on a smaller scale, in terms of FC SANs and blade-based compute power. In a typical campus server room, an N1Kv will probably be a more common sight than, say, a 5K.
Nexus switching and its associated technologies typically bring together the two main components of a data centre: storage and compute power. This is evidenced by the support for traditionally SAN-only FC in its many flavours (Unified Fabric) and, more importantly, by its integration with the various server platforms and specialised CNAs (converged network adapters).
Let’s look at three of the Nexus specific technologies that are roughly analogous to the CE Access layer, Distribution layer (Aggregation) and DCI (Data Centre Interconnect):
vPC forms a multichassis port channel without requiring VSS or StackWise. Whereas VSS and StackWise maintain a single control plane across multiple data planes, vPC peers retain individual control planes, with control plane synchronisation performed by CFS (Cisco Fabric Services).
A vPC provides the following benefits:
- Allows a single device to use a Port Channel across two upstream devices.
- Eliminates Spanning Tree Protocol (STP) blocked ports.
- Provides a loop-free topology.
- Uses all available uplink bandwidth.
- Provides fast convergence if either the link or a device fails.
- Provides link-level resiliency.
- Helps ensure high availability.
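As a rough sketch, a minimal vPC setup on one of the two peers might look like the following (the domain ID, keepalive addresses and port-channel numbers are illustrative only, and a matching configuration is needed on the other peer):

```
feature vpc
! vPC domain and keepalive link between the two peers
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
! Port channel carrying the vPC peer-link
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Downstream port channel towards the dual-attached device
interface port-channel 20
  switchport mode trunk
  vpc 20
```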
FabricPath is the Cisco implementation of MAC routing. It is similar in many ways to TRILL (Transparent Interconnection of Lots of Links) and uses IS-IS (Intermediate System to Intermediate System) to perform routing at L2.
FabricPath has the following benefits:
- FabricPath is extremely simple to configure.
- A single control protocol is used for unicast and multicast forwarding.
- Switches that do not support Cisco FabricPath can still be attached to the Cisco FabricPath fabric in a redundant way without resorting to STP.
- Loop prevention and mitigation are available transparently in the data plane.
- Cisco FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a reverse-path forwarding (RPF) check similar to the one used in PIM (Protocol Independent Multicast).
- Because equal-cost multipath (ECMP) can be used in the data plane, the network can use all the links available between any two devices.
- Frames are forwarded to their destination along the shortest path, calculated by IS-IS L2 routing using the SPF algorithm.
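To illustrate how simple the configuration is, a minimal FabricPath setup on an N7K might look like this (the VLAN and interface numbers are examples only; on the N7K the feature set must also be installed in the VDC first):

```
install feature-set fabricpath
feature-set fabricpath
! VLAN to be carried across the FabricPath fabric
vlan 100
  mode fabricpath
! Core-facing interface participating in FabricPath
interface ethernet 1/1
  switchport mode fabricpath
```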
Designed specifically for DCI as an alternative to VPLS (Virtual Private LAN Service), AToM (Any Transport over MPLS) and the like, OTV transports L2 over L3 (MAC over MPLS over GRE over IP), allowing multipoint-to-multipoint L2 connectivity between data centres by leveraging IP multicast.
The key points of OTV are as follows:
- OTV is supported on a number of platforms, including the Nexus 7000 and the CSR 1000v.
- Is essentially an L2 VPN over IP.
- Has built in enhancements to help scale a DCI.
- Is configured as a client side service and doesn’t have to be provisioned by an SP (Service Provider).
- Designed for services requiring L2 connectivity such as virtual machine workload mobility (vMotion).
- Optimises ARP (Address Resolution Protocol) flooding over the DCI by using proxy ARP/ICMPv6 ND (Neighbour Discovery).
- Is the demarcation point for STP.
- Can carry multiple VLANs without a complex design.
- Can transport both IPv4 and IPv6 broadcast and multicast control plane protocols.
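As an illustration, a minimal OTV overlay on a Nexus 7000 edge device with a multicast-enabled transport might look like this (the site identifier, VLANs, join interface and multicast groups are all placeholder values):

```
feature otv
! Identifies this data centre site and the VLAN used for
! edge-device communication within the site
otv site-identifier 0000.0000.0001
otv site-vlan 99
! Overlay interface carrying the extended VLANs between sites
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown
```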
As you can probably see from this overview of just three core Nexus platform technologies, they are a closer fit for the data centre than for a campus LAN.
Looked at in isolation, the Nexus switching platforms provide a solid core of alternative switching technologies; however, the true benefit of the Nexus platforms can be seen when they are used as part of the UCS (Unified Computing System) ecosystem to form a unified data centre fabric for SAN, LAN and DCI.