Check out the TEC37 Video Podcast: Cisco ACI, VMware NSX, or Both?

Why do we have a battle?

Since 2014, there has been an ongoing debate about the next-generation data center: do we use Cisco Application Centric Infrastructure (ACI) or VMware NSX?

Both architectures have been through many upgrades, and VMware even rearchitected NSX from living within vSphere only to supporting multiple hypervisors, containers and bare-metal workloads. One thing that remains constant: ACI depends on Cisco Nexus 9000 hardware, whereas NSX is independent of the switching architecture it runs on top of.

In many cases, software-defined networking is considered as part of a data center network refresh. A refresh is the perfect time for an organization to look at the challenges it currently has in providing networking services to the business. What functionality does this next-generation data center need to provide? The list of requirements often encompasses automation, multiple data centers, virtualization, segmentation and many other functions.

The problem is that each SDN solution has strengths and weaknesses and rarely checks every box. The vital question then becomes: what is the value of running two SDN architectures to meet the needs of the business?

ACI provides the physical fabric.

If you have a refresh or new data center project in the works, you have to account for a new switching fabric. Today's underlay needs to be simple to manage and to automate. There is no reason to live in a CLI anymore when deploying a data center fabric.

This is where automation is key. Putting an overlay on top of the underlay is where things become more complex: implementing an overlay via CLI is doable, but it tends to be a management nightmare and requires a much higher skill set. Cisco ACI makes this easy!

The ACI fabric itself is pretty easy to set up and discover. That said, don't be fooled into thinking there isn't much going on here. I often equate this to an automatic transmission: in many cases, people choose an automatic because it is simple to drive.

That doesn't mean the technology is simple. Ask any mechanic which they prefer to work on, automatic or manual.

Data center fabrics of the future should be easier to operate, but that doesn't mean there isn't complexity under the covers. With ACI there is still a lot going on underneath: VXLAN, MP-BGP, IS-IS and much more. The APIC controller simply automates most of it for you.
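To make that concrete, here is a minimal sketch of what policy-driven automation looks like against the APIC REST API. The APIC address, credentials and tenant name below are placeholder assumptions; treat this as an illustration of the model, not a production script.

```python
import requests

# Placeholder APIC address and credentials for illustration only.
APIC = "https://apic.example.com"

session = requests.Session()

# Authenticate; the APIC answers with an APIC-cookie token that the
# session automatically reuses on subsequent calls.
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,  # lab-only; use a trusted certificate in production
).raise_for_status()

# One declarative POST creates a tenant; the APIC, not the operator,
# programs the resulting VXLAN/MP-BGP state across the fabric.
session.post(
    f"{APIC}/api/mo/uni.json",
    json={"fvTenant": {"attributes": {"name": "Demo-Tenant"}}},
).raise_for_status()
```

That single POST is the "automatic transmission" at work: the operator declares intent, and the controller handles the per-switch protocol configuration underneath.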

At the end of the day, you need the same number of spine and leaf switches for a Clos fabric whether it runs ACI or traditional VXLAN. The cost delta is typically the APIC controllers, which is relatively minor once sales promotions are taken into consideration.

Another benefit of utilizing ACI for the underlay, beyond the basic fabric, is connecting non-virtualized workloads. NSX-T only supports ESXi and KVM hypervisors, specific Linux bare-metal workloads, Windows Server 2016 bare-metal workloads and specific container platforms. That leaves plenty of workloads for which ACI can provide not only data center connectivity but application policy as well.

This is one of the use cases for running both environments: you may want to implement application policy end to end, but some bare-metal workloads are not supported on NSX-T. ACI fills that gap while still providing a simple data center fabric.
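As a rough sketch of that gap-filling, the example below uses the APIC REST API to place a hypothetical bare-metal database workload into an endpoint group (EPG), where ACI contracts can then apply application policy. The tenant, bridge domain and all names are placeholder assumptions, and the "Demo-Tenant" tenant and "Inventory-BD" bridge domain are assumed to exist already.

```python
import requests

# Placeholder APIC address and credentials; assumes a "Demo-Tenant"
# tenant and an "Inventory-BD" bridge domain were created beforehand.
APIC = "https://apic.example.com"

session = requests.Session()
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,  # lab-only
).raise_for_status()

# An application profile with one EPG for bare-metal database servers.
# Once ports are mapped into the EPG, ACI contracts govern what may
# talk to these hosts, with no hypervisor required.
app_profile = {
    "fvAp": {
        "attributes": {"name": "Inventory"},
        "children": [{
            "fvAEPg": {
                "attributes": {"name": "baremetal-db"},
                "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "Inventory-BD"}}}
                ],
            }
        }],
    }
}

session.post(
    f"{APIC}/api/mo/uni/tn-Demo-Tenant.json", json=app_profile
).raise_for_status()
```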

How and why is NSX utilized with ACI?

Now that we have addressed the underlay, let's discuss how NSX would be integrated. NSX on ACI could be implemented in multiple ways depending on the workload requirements. 

First is a single overlay, in which ACI handles the networking while NSX provides distributed firewall functionality. Until recently, the single overlay design was the most frequently implemented architecture of NSX on ACI.
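As a rough sketch of that division of labor, the example below pushes one distributed firewall rule through the NSX-T Policy API while ACI continues to provide the networking. The manager address, credentials, group paths and policy name are placeholder assumptions, and the "web" and "app" groups are assumed to exist already.

```python
import requests

# Placeholder NSX Manager address and credentials for illustration.
NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")

# One allow rule between two assumed groups. NSX enforces this at
# each workload's vNIC; the packet forwarding itself stays with ACI.
policy = {
    "resource_type": "SecurityPolicy",
    "category": "Application",
    "rules": [{
        "resource_type": "Rule",
        "id": "allow-web-to-app",
        "action": "ALLOW",
        "source_groups": ["/infra/domains/default/groups/web"],
        "destination_groups": ["/infra/domains/default/groups/app"],
        "services": ["/infra/services/HTTPS"],
        "scope": ["ANY"],
    }],
}

requests.patch(
    f"{NSX}/policy/api/v1/infra/domains/default/security-policies/web-to-app",
    json=policy,
    auth=AUTH,
    verify=False,  # lab-only
).raise_for_status()
```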

The other method is a double overlay. ACI provides an overlay and security for workloads not managed by NSX-T, while NSX-T runs its own overlay for networking on top of the ACI fabric, using ACI's overlay as the transport. NSX-T then peers with ACI over BGP or static routes by connecting the NSX Edge to an ACI border leaf. This peering allows the enterprise network to reach NSX-T resources through the ACI fabric.
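Here is a minimal sketch of that peering step, using the NSX-T Policy API to add a BGP neighbor on a Tier-0 gateway that points at an ACI border leaf. The gateway name, locale-services ID, neighbor address and AS number are all placeholder assumptions.

```python
import requests

# Placeholder NSX Manager, Tier-0 gateway ("t0-gw"), locale-services
# ("default"), neighbor address and AS number; adjust for your fabric.
NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")

# Define the ACI border leaf as a BGP neighbor of the Tier-0 gateway
# so NSX-T segments become reachable through the ACI fabric.
neighbor = {
    "neighbor_address": "10.1.1.1",  # ACI border leaf L3Out interface
    "remote_as_num": "65001",        # ACI fabric's BGP AS
}

requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw"
    "/locale-services/default/bgp/neighbors/aci-border-leaf",
    json=neighbor,
    auth=AUTH,
    verify=False,  # lab-only
).raise_for_status()
```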

The double overlay architecture is becoming more popular as platforms such as Pivotal Container Service (PKS), VMware Cloud Foundation and VMware Tanzu rely on NSX-T to provide networking for their cloud services. We cover these design elements in detail during our NSX on ACI Design Workshop.

Operational concerns are real.

Even though running both technologies together is possible, it doesn't come without challenges. One of the biggest is ownership: the most common push-back is over who is going to own NSX. Each environment is different due to technical capabilities, politics, budget and, most importantly, teamwork.

Another challenge is cost. As mentioned earlier, the hardware cost of a Cisco data center fabric is similar with or without ACI in a refresh or greenfield implementation; the additional cost is the NSX component. If the requirements demand NSX functionality that ACI doesn't have, is the incremental cost worth that functionality? If so, you will also have to account for additional training and tools for the teams, because each environment has its own set of tools for troubleshooting, visibility and analytics.

How can WWT assist?

If you are at a crossroads over which architecture is best for you, we would be happy to provide a briefing to help determine whether one or both fit your environment. We'll weigh the pros and cons of each architecture against your requirements to provide guidance. You can also reach out to your account team or contact us directly to schedule time with our experts.

Schedule an NSX on ACI Workshop Briefing