
Data Center Networking Tools And Technologies

Data center networking is going through a major shift aimed at simplifying and automating the provisioning of network resources. We take a look at the next-generation networking technologies and tools from some of the top enterprise vendors.

For most of the last decade, data center networks have been built on a very common design: a two- or three-layer architecture. This design works quite well, which is why it has endured for so long, but change is on the horizon. The recent introduction of software defined networking (SDN), network overlay technologies and network virtualization (NV), together with a general push toward more operationally efficient systems, has lit a fire under many companies, forcing them to evolve their data center networking strategies.

As we dive into data center networking technologies and tools, this article will review some of the main network designs used over the last 15 years, as well as some of the emerging technologies being developed and deployed to evolve those standard implementations. We will also take a look at some of the available tools and the top data center networking vendors.

A Short History Of Data Center Networking

As many a network engineer will attest, the basic design used in most networks (not just data centers) over the last 20 years has been a two- or three-layer architecture known as a Fat Tree. In a Fat Tree, server racks sit at the access/edge layer, which connects into an aggregation/distribution layer, which in turn is interconnected at a core layer.

Figure 1: Fat Tree (Red Links are Optional)

This design was originally developed on the assumption that most traffic would flow from the access layer up to the core layer and back (North-South). The problem in many modern networks is that this assumption no longer holds in the data center. Modern data centers have many different resources (compute, storage, etc.) interconnected with one or more virtualization technologies. This evolution has changed traffic patterns so that far more traffic flows between access-layer devices (East-West) than up through the aggregation and core layers; the result is a host of potential bottlenecks and a network that quickly becomes inefficient.

On top of these traffic changes, the network has been going through an evolutionary shift in how resources are used and managed. Twenty years ago, an organization deploying a web server would buy a physical server (or blade) and install it in its data center. Now this is typically done without any additional hardware by utilizing virtualized compute and storage resources. The problem, until recently, has been that while compute and storage resources evolved to the point where they could be quickly provisioned from a central location, the network resources required to support those changes could take considerably longer to provision (hours or days, not minutes).

Next Generation Data Center Networking

The next generation of data center networking is still being determined, but a few main camps have the momentum; whether they converge or go in completely different directions is the big unknown. One common objective is to reduce the time it takes to provision network resources and to automate as much of the individual Command Line Interface (CLI) provisioning as possible.
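To make the automation goal concrete, here is a minimal sketch of replacing hand-typed CLI work with templated provisioning. The template syntax and variable names are illustrative only, not any particular vendor's CLI:

```python
# Hypothetical sketch: generate identical, reviewable config for many
# switches from one template instead of typing CLI lines on each box.

TEMPLATE = """\
vlan {vlan_id}
 name {vlan_name}
interface {port}
 switchport access vlan {vlan_id}"""

def render_config(vlan_id, vlan_name, port):
    """Fill the template with the values for one provisioning request."""
    return TEMPLATE.format(vlan_id=vlan_id, vlan_name=vlan_name, port=port)

# Provision the same VLAN on two switches with one expression.
configs = {sw: render_config(10, "web", "Ethernet1") for sw in ("sw1", "sw2")}
```

The point is less the template engine than the workflow: a change becomes a function call that can be generated, reviewed and pushed in minutes rather than hours.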

Software Defined Networking (SDN)

The first of these camps revolves around the concept of Software Defined Networking (SDN). SDN is broadly defined, but it basically involves decoupling the control and data planes of networking equipment (typically switches). While protocols like OSPF (Open Shortest Path First), IS-IS (Intermediate System to Intermediate System) and BGP (Border Gateway Protocol) are commonly run inside a modern data center, an SDN implementation removes this routing intelligence (the control plane) from the individual devices and moves it to a central controller. The actual forwarding of data (the data plane) remains the responsibility of each device, but its forwarding tables are controlled and modified from that central location.
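The control/data plane split can be sketched in a few lines. In this hypothetical model (all class and switch names are invented for illustration), a central controller holds the whole topology and computes every switch's forwarding table; the switches themselves do nothing but table lookups:

```python
from collections import deque

class Controller:
    """Control plane: holds the topology, computes routes centrally."""

    def __init__(self, links):
        # links: iterable of (switch_a, switch_b) adjacencies
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

    def compute_tables(self):
        """Return {switch: {destination: next_hop}} via BFS shortest paths."""
        tables = {sw: {} for sw in self.adj}
        for dst in self.adj:
            # BFS outward from dst; the node we came from is the hop toward dst.
            seen, queue = {dst: None}, deque([dst])
            while queue:
                cur = queue.popleft()
                for nbr in self.adj[cur]:
                    if nbr not in seen:
                        seen[nbr] = cur      # nbr reaches dst via cur
                        queue.append(nbr)
            for sw, nxt in seen.items():
                if nxt is not None:
                    tables[sw][dst] = nxt
        return tables

def forward(table, dst):
    """Data plane: a switch only consults the table it was given."""
    return table.get(dst)  # next hop, or None if no route installed

ctrl = Controller([("leaf1", "spine1"), ("leaf2", "spine1")])
tables = ctrl.compute_tables()
```

Swapping routing policy then means changing controller code in one place, rather than reconfiguring OSPF or BGP on every device.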

A number of new terms and protocols have been introduced for SDN. One of the most frequently discussed is OpenFlow, a "Southbound" control protocol used to communicate between the controller and the individual devices; it is one method of controlling the behavior of those devices.
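At its core, OpenFlow programs switches with prioritized match/action flow entries. The sketch below models that idea with plain Python dictionaries; the field names are simplified placeholders, and a real flow-mod carries many more match fields plus a binary wire encoding:

```python
# Simplified model of an OpenFlow-style flow table: each entry matches
# packet fields and specifies an action; highest priority wins.

FLOW_TABLE = [
    {"priority": 200, "match": {"dst_ip": "10.0.0.5"}, "action": "output:2"},
    # Table-miss style entry: empty match is a wildcard, punt to controller.
    {"priority": 100, "match": {}, "action": "controller"},
]

def lookup(packet):
    """Return the action of the highest-priority entry matching the packet."""
    for entry in sorted(FLOW_TABLE, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "drop"
```

The low-priority "send to controller" entry illustrates how a controller can learn about unknown flows and install more specific entries in response.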


The main organization pushing OpenFlow, and Open SDN generally, is the Open Networking Foundation (ONF). The ONF's large membership includes most of the major network vendors, which support the general idea of bringing SDN into the mainstream. The key point is the openness of the protocols developed with and by ONF members; this is likely to be a deciding factor in many organizations' vendor selections. Some vendors support the idea of Open SDN but have also invested in non-open technologies in the interest of retaining their existing customers. Sometimes that bet pays off, when developing in another direction yields a superior implementation, and sometimes it doesn't; that is the fundamental question facing decision makers in today's data centers.

SDN And The Evolution Of Existing Topologies

One of the biggest questions many organizations will need to answer is whether to jump on the SDN bandwagon, and whether SDN even suits their specific environment. In small to medium-sized networks, changing over to SDN may not be worth the cost of implementation in either the short or the long term. The ideas behind SDN really shine on larger networks, especially large- to massive-scale data center networks. A related shift arriving alongside the SDN debate is the question of whether legacy topologies are designed for today's traffic requirements.

In data center networks this really isn't much of a question: traffic patterns within data centers have fundamentally changed from North-South to East-West. At the same time, excess or wasted capacity is becoming less acceptable and is increasingly targeted as a source of potential savings.

With this, there is a slow evolution from the legacy Fat Tree topology to a newer Clos-style topology. Many forward-thinking organizations are already well on their way, completely altering the way their data center devices are interconnected.

Figure 2: Clos Topology

Overall, most organizations will end up with a hybrid of these topologies, depending on their specific requirements and how their current networks are interconnected. Variations of the two topology types allow existing Fat Tree networks to gain some of the Clos topology's advantages without completely altering how things are currently interconnected.
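A quick back-of-envelope sketch shows why the two-tier leaf-spine form of Clos suits East-West traffic: every leaf connects to every spine, so any leaf-to-leaf path is exactly two hops, and capacity scales by adding spines. The numbers below are illustrative, not a sizing recommendation:

```python
# Hypothetical two-tier leaf-spine (Clos) fabric sizing sketch.

def clos_fabric(leaves, spines, uplink_gbps):
    """Return (link count, bisection bandwidth in Gbps, leaf-to-leaf hops)."""
    links = leaves * spines                        # full leaf<->spine mesh
    # Bisection bandwidth: traffic between the two halves of the leaves
    # can use every spine, via each half's uplinks.
    bisection_gbps = (leaves // 2) * spines * uplink_gbps
    hops = 2                                       # leaf -> spine -> leaf
    return links, bisection_gbps, hops

# Example: 8 leaves, 4 spines, 40 Gbps uplinks.
links, bisection, hops = clos_fabric(leaves=8, spines=4, uplink_gbps=40)
```

Contrast this with a Fat Tree, where East-West traffic between distant racks must climb through aggregation and core layers, concentrating load on a few links.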

Overlay Technologies

Some of the technologies that have gained the most traction over the last several years are those able to "overlay" on top of existing networks regardless of their physical or logical design. Common examples include Virtual Extensible LAN (VXLAN), Network Virtualization Using Generic Routing Encapsulation (NVGRE), Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB) and Locator/ID Separation Protocol (LISP). Each has its own advantages and disadvantages, and which is implemented depends largely on the vendor selected. Which one is better than the others is far from a settled argument.
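To give a flavor of how these overlays work, consider VXLAN, which tunnels Layer 2 frames across an existing IP network. Each frame is wrapped in an outer UDP packet (destination port 4789) carrying an 8-byte VXLAN header with a 24-bit VXLAN Network Identifier (VNI), yielding roughly 16 million isolated virtual segments versus 4096 traditional VLANs. This is a minimal sketch of just that header, per RFC 7348:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348) for a given 24-bit VNI."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    flags = 0x08000000            # 'I' bit set: the VNI field is valid
    # Second 32-bit word: VNI in the upper 24 bits, low 8 bits reserved.
    return struct.pack("!II", flags, vni << 8)

# Encapsulating a frame for virtual network 5000, regardless of the
# underlying physical topology.
hdr = vxlan_header(5000)
```

Because the overlay rides ordinary IP/UDP, the underlying network needs no knowledge of the virtual segments at all, which is precisely what makes these technologies deployable on existing designs.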

One thing is for sure: data center networks, along with networking in general, will eventually evolve beyond today's best practices. While data centers are a specific use case, they are also where networking needs to evolve fastest, and with that will come an intense period of technology review. The technologies that best fit the needs of the majority will eventually spread to other parts of the network.

Now that we have an overview of the existing and evolving data center networking technologies, we'll take a closer look at some of the specific networking tools. In the coming weeks we'll review some of the top data center networking solutions from vendors including Arista Networks, Cisco, Dell and HPE.