Eight Key Impacts on Your Data Center LAN Network

  • 27 April 2011

  • Bjarne Munch

  • Research Note G00211994

Data center network planners need to make decisions concerning eight key design impacts on their networks. This research discusses these key impacts, as well as their drivers and solutions.



The ongoing cost and growth pressures on the enterprise data center continue to stress the data center network. Today, network congestion and connectivity architecture are most often cited as the key issues. However, the data center network is undergoing significant and, in some cases, radical changes, and network planners will have to consider and plan for several of them. As we see it, these changes will evolve the network from the traditional tiered-tree topology to a flat, meshed Layer 2 architecture. Opinions still differ on whether Ethernet is destined to be the sole transport technology, and on whether the data and storage networks will converge into a single unified network. However, it is clear that the data center network will evolve into a network fabric supporting the concept of fabric computing. This requires changes in current network design practices, as well as new technologies and new types of switches. The technology and product evolution will be significant and will not, in all cases, be backward-compatible, so a planned road map is required to ensure a smooth evolution.

The changes taking place in the data center network can be traced to four specific projects undertaken by the enterprise: cost reduction, server and storage consolidation (itself driven by cost reduction), server virtualization and changes in application deployment. In response to these four projects, we believe there are eight key network initiatives that network planners must consider (see Figure 1). Because some of the decisions network planners need to make are tied to specific projects, while others are tied to technology maturity, planners can use these project and technology maturity timelines to establish the main timeline for their network road map.

Figure 1. The Eight Main Network Initiatives for the Data Center LAN


Source: Gartner (April 2011)

Of the eight network initiatives, the first five are specific changes that should be done now to solve specific issues, while the last three are related to ongoing changes in the data center that will lead to changing network requirements during at least the next five years. The main, immediate issues faced by data center networks are congestion and scalability within the edge network and cabling design due to server virtualization, but the key aspect that is hampering the data center is the inability to scale cost-effectively. During the next five years, network planners need to focus on enabling virtualization, and during the next five to 10 years, the two main focus areas will be the enterprise strategy for unifying the data and storage networks, and the enterprise strategy for fabric computing.

No. 1: Reduce Network Tiers

Today's data center LAN typically has a three- to five-tier architecture based on edge, aggregation and core layers, originally introduced to enable a scalable and resilient network design. However, the middle tier does nothing but aggregate other switches, making it difficult to scale the network cost-effectively. As discussed in "Minimize LAN Switch Tiers to Reduce Cost and Increase Efficiency," core switches have become highly resilient, with larger switching capacity and higher port counts. This means that network architects can obtain significant savings by reducing the number of tiers in the network, and the resulting simplification of the aggregation and core network will also improve scalability.
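As a rough way to reason about tier collapse, the sketch below estimates device counts for a collapsed two-tier (edge plus core) design. The port counts and server total are illustrative assumptions, not Gartner figures.

```python
import math

# Illustrative sketch (assumed port counts, not Gartner data): estimate how few
# devices a collapsed two-tier edge/core design needs for a given server count.
def switches_needed(servers, edge_ports=48, uplinks_per_edge=2, core_ports=384):
    # Each edge switch gives up 'uplinks_per_edge' ports for its core uplinks.
    edge = math.ceil(servers / (edge_ports - uplinks_per_edge))
    # At least two core switches, for resilience.
    core = max(2, math.ceil(edge * uplinks_per_edge / core_ports))
    return edge, core

print(switches_needed(2000))  # → (44, 2)
```

With no aggregation tier, the device count is driven only by server-facing port demand, which is what makes the simplified design cheaper to scale.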

No. 2: Virtualize Network Functions

Established network design practices tend to strongly associate each instance of each network function (such as server load balancing, intrusion prevention and firewalling) with a physical device or a layer of devices in the network. This approach leads to a need to purchase a large number of devices and switches. However, as discussed in "Modernize Your Data Center Network by Virtualizing Network Functions," the combination of high-end LAN switches with high-capacity, virtualizable security switches and application delivery controllers (ADCs) makes it possible to design a new data center network architecture, where networking functions are treated as resource pools, and traffic is routed to and from these devices. Delivering network functions from dedicated platforms frees enterprises to use the smallest number of the highest-capacity LAN switches to construct their data center LANs, thus reducing the LAN switch numbers and footprint and improving agility.

No. 3: Improve Cable Management

As enterprises consolidate more servers into one data center, the physical impact of cabling has become a significant inhibitor of scalability and agility. Each server may have eight or more input/output (I/O) interfaces and cables, which can lead to 150 to 300 cables per rack. This is a physical constraint where basic cable management inhibits scalability, and it can also be a significant cause of downtime. Servers and their cabling have traditionally been aggregated via end-of-row switches; however, this design can be simplified by using top-of-rack switches instead (see "New Data Center Cabling Requirements"). Because server cabling can be contained within the rack, it will be easier to adapt cabling as server interfaces evolve from several 1 Gbps adaptors to fewer 10 Gbps adaptors, and it will enable easier migration to data center bridging (DCB) Ethernet. It also provides greater flexibility in the migration toward network fabrics for fabric computing. Network planners should, therefore, view top-of-rack switches as a key component in the network road map, because they enable network agility.
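The cabling arithmetic above can be sketched as follows; the servers-per-rack density and uplink count are assumptions for illustration.

```python
# Illustrative sketch (assumed densities): cables leaving each rack with
# end-of-row aggregation versus a top-of-rack (ToR) switch.
def cables_leaving_rack(servers_per_rack, ifaces_per_server, tor_uplinks=4, use_tor=True):
    in_rack = servers_per_rack * ifaces_per_server
    # With ToR, only the switch uplinks leave the rack; without it, every
    # server cable runs out to an end-of-row switch.
    return tor_uplinks if use_tor else in_rack

print(cables_leaving_rack(30, 8, use_tor=False))  # → 240 (within the 150-300 range cited)
print(cables_leaving_rack(30, 8))                 # → 4
```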

No. 4: Manage Congestion Challenges

Enterprises are facing an increase in aggregate bandwidth needs of about 30% annually, due to increased application and Internet use. However, server virtualization is creating an even larger increase in bandwidth needs from each physical server. Today, the average is approximately 10 virtual machines (VMs) per physical server, which roughly translates into 10 times the network bandwidth per server. As described in "Is Your Data Center Edge Network Ready for Virtualization?", Gartner expects VM density per physical server to continue to increase at least through 2013. Therefore, network architects need to plan for increasing bandwidth density during the next two to four years, which means the network must be designed with ample room for growth at all aggregation points. This will mean dual 10 Gbps connections to each physical server, and the aggregation network should be upgraded to 40 Gbps, and perhaps later to 100 Gbps, depending on how aggressively server administrators deploy and manage virtualized servers.
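To illustrate the planning arithmetic, the sketch below compounds the 30% annual growth figure against an assumed per-server baseline; the 200 Mbps-per-VM figure is an assumption, not from the research.

```python
# Rough per-server bandwidth projection. The per-VM baseline is an assumption;
# the 30% annual growth rate is the figure cited above.
def projected_gbps(base_gbps, annual_growth=0.30, years=4):
    return base_gbps * (1 + annual_growth) ** years

base = 10 * 0.2  # ~10 VMs x assumed 200 Mbps each = 2 Gbps today
print(round(projected_gbps(base), 2))  # → 5.71, comfortably inside dual 10 Gbps uplinks
```

The same compounding applies at every aggregation point, which is why edge growth forces 40 Gbps (and eventually 100 Gbps) uplinks further up the tree.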

No. 5: Network Multiple VMs Within a Server Host

As enterprises place more VMs on their servers, there is a need for switching functionality within the server itself. Hypervisors such as XenServer, Hyper-V and ESX all support virtual switches. However, these switches offer very limited features beyond basic switching, and they are often managed by the server administrator rather than the network team. As discussed in "The Virtual Switch Will Be the Network Manager's Next Headache," the networking needs of these virtual switches will increase during the next two to three years. This means that network planners will need to focus more on ensuring appropriate network functionality in these switches, configuration alignment between physical and virtual switches, and operational integration of network monitoring. This is not just a matter of establishing a virtual LAN for security purposes, but also a critical aspect of ensuring end-to-end performance of critical business applications.
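One concrete alignment check is comparing the VLANs trunked on a physical switch port against those defined on the hypervisor's virtual switch. The sketch below is hypothetical; in practice the two lists would be pulled from the switch and hypervisor management interfaces.

```python
# Hypothetical sketch: flag VLAN drift between a physical access port and the
# virtual switch behind it. The input lists are placeholders for data that
# would come from switch and hypervisor management interfaces.
def vlan_mismatches(physical_vlans, virtual_vlans):
    phys, virt = set(physical_vlans), set(virtual_vlans)
    return {
        "missing_on_vswitch": sorted(phys - virt),   # trunked but unusable by VMs
        "missing_on_physical": sorted(virt - phys),  # VM traffic that will be dropped
    }

print(vlan_mismatches([10, 20, 30], [10, 30, 40]))
# → {'missing_on_vswitch': [20], 'missing_on_physical': [40]}
```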

No. 6: Optimize Traffic Flows Within the Data Center

Network planners need to prepare their data center network for increased levels of new types of internal traffic between servers. As described in "Your Data Center Network Is Heading for Traffic Chaos," there are several causes: virtualization, new applications and new application deployment models will, in a two- to five-year time frame, significantly increase the level of server-to-server traffic. Within the next two to three years, live VM migration will require a significant redesign of the network into a highly meshed Layer 2 topology. To manage the performance of business applications, there will also be a need for increased focus on traffic management in the data center network. The increased use of new applications and deployment models will likewise require integrating application and network monitoring tools, to establish a complete view of all traffic flows in the data center.

No. 7: Consolidate Storage and LAN Networks

A great deal of vendor hype continues to drive discussions around the opportunity to reduce network costs by consolidating the data and storage networks into one, converged core network via Ethernet. Gartner believes that the financial benefits of such a fully unified core network are questionable with current network design, because a unified network is likely to be difficult to scale cost-effectively in the current tiered architecture (see "Use Top-of-Rack Switching for I/O Virtualization and Convergence; the 80/20 Benefits Rule Applies" and "Myth: A Single FCoE Data Center Network = Fewer Ports, Less Complexity and Lower Costs"). For enterprises that intend to consolidate their storage and LAN networks, it is likely to take another three to four years for the technology to mature. Instead of a fully converged data and storage network, Gartner recommends that enterprises focus on unifying and virtualizing server I/O within the rack based on Ethernet via top-of-rack switches. The cost benefit is that, instead of having eight or more I/O interfaces and cables per server, the cable count can be reduced to two 10 Gbps interfaces and cables per server. However, network planners should not just assume that Fibre Channel over Ethernet (FCoE) is their only option, as there are several viable options, such as Fibre Channel over TCP/IP or Small Computer System Interface (SCSI) over TCP/IP (see "Comparing Data and Storage Network Convergence Options").
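The per-server cable reduction translates directly into data-center-wide savings; the fleet size below is an assumption for illustration.

```python
# Illustrative sketch (assumed fleet size): total cable counts before and after
# consolidating server I/O onto two 10 Gbps interfaces per server.
def cable_totals(servers, before_per_server=8, after_per_server=2):
    return servers * before_per_server, servers * after_per_server

before, after = cable_totals(1000)
print(before, after)  # → 8000 2000
```

Because the consolidation happens within the rack at the top-of-rack switch, this saving does not depend on committing to a fully converged core.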

No. 8: Introduce the Network Fabric

Fabric-based infrastructure refers to the concept where physical resources, such as processor cores, network bandwidth and links and storage, are abstracted into pools of resources that are interconnected via a "network fabric" to form a complete system (see "Clearing the Confusion About Fabric-Based Infrastructure: A Taxonomy"). The key benefits are the ability to:

  • Cost-optimize purchasing because resource units can be more granular
  • Use resources more effectively because utilization can adapt to needs

Although Gartner expects that, by year-end 2013, 30% of Global 2000 data centers will be, in part or in whole, fabric-based (up from fewer than 5% today), fabric computing is still only in its adolescence, with five to 10 years before becoming mainstream. Network planners need to prepare for significant technology changes within the network, as well as organizational and operational challenges. The core network will need to support fully meshed Layer 2 topologies, and it must also be nonblocking and low latency. Network planners should expect at least the core to be vendor-proprietary, with potentially only the edge being open-standards-based, although this will depend on how integrated the solution is across the entire system.