Colocation-Based Interconnection Will Serve as the 'Glue' for Advanced Digital Business Applications

Foundational Refreshed: 10 August 2017 | Published: 28 July 2016 | ID: G00308712



Digital business is enabled and enhanced through high-speed, secure, low-latency communication among enterprise assets, cloud resources, and an ecosystem of service providers and peers. Architects and IT leaders must consider carrier-neutral data center interconnection as a digital business enabler.


Key Findings

  • Digital business will require the integration of a wide variety of applications and information sources. Some will be traditional customer-owned and on-premises, while others will involve new data types and sources, as well as third-party data feeds.

  • It is unlikely that enterprises will be able to build rich digital business applications based solely on a traditional on-premises data center, with a limited number of data sources and applications.

  • The ability to integrate multiple applications, data types and data sources in a secure, predictable, low-latency fashion will spell the difference between digital business success and failure.


Recommendations

  • Design and build prospective digital business applications with particular attention to the complexities of connecting many "sources and sinks" (originators and consumers) of information with the appropriate business logic, keeping latency low enough to prevent excessive delay.

  • Deploy applications that can benefit from a data center interconnect fabric model offered in a carrier-neutral facility.

  • Plan for the eventual marriage between nascent data center interconnect fabrics, technologies such as container orchestration systems, and concepts such as workload and application migration, to provide the next generation of dynamic workload scheduling and placement.



Analysis

While there are many definitions and examples of digital business offered by pundits, several points are universal. Digital business will include enterprise-owned assets, whether enterprise-premises-based, colocated or in the cloud. These assets will utilize cloud technology for integration and deployment, and will be optimized in designs where dynamic, high-speed, secure communications reduce the friction between multiple sources and sinks of information. We propose that topology, technology and design all favor building a digital business solution using colocation-based, programmable networking, which we call a "data center interconnect fabric," allowing dynamic interconnection between enterprise peers, cloud providers, communications providers and a growing marketplace of service providers.

Digital Business Defined

Digital business is the creation of new business designs by blurring the digital and physical worlds. The transition from simple digital marketing to digital business occurs as things become actors in transactions, and information about transactions (in fact, information about any measured or reported activities that can provide value to someone somewhere in a value chain) can be systematically gathered and sold for its strategic value. Digital business is closely related to digital infonomics, which assigns economic value to digital information and develops frameworks to manage digital information assets. Digital business, then, is based on the interconnection of enterprises, partners and service providers. To support digital business and related new initiatives, the data center infrastructure must keep up with the changing demands of fragmented applications, many and diverse data sources and sinks, massive data growth, the desire for real-time analytics, and bimodal IT.

Inefficiencies of an On-Premises-Only Model for Digital Business

While many modern data centers may be well-connected with other enterprise data centers and have adequate internet access, they are not likely to have many specialized high-speed circuits to multiple cloud providers, service providers, information and data sources, or a plethora of peers. In essence, we view the enterprise data center as something of an island, necessitating that "bridges," or connections, be built one by one to the partners, technologies and information that a full-featured digital business application will require (see Figure 1 and "The Five Dimensions of Network Designs to Improve Performance and Save Money").

Figure 1. Star-Connected Enterprise Service Access

Source: Gartner (July 2016)

In other words, in an enterprise-located integration model, we must bring external networks and services to the enterprise, bring cloud-based and enterprise-based applications to the enterprise, and, finally, build out connections to the many peer organizations involved in the digital business solution. Where applications integrate multiple databases and data sources, the latency of these communications is likely to stack up across complex transactions, slowing information flow to an unusable crawl. Latency becomes a killer. In this model, the enterprise is also responsible for building out a security framework to ensure a consistent stance and appropriate degrees of access for all the various elements of the solution. While none of these pressures is entirely new, the explosion of sources, locations and datasets has raised the complexity exponentially.
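To make the latency concern concrete, the short sketch below uses purely illustrative figures (assumptions, not measurements) to show how per-hop round trips accumulate when a single transaction chains calls from an on-premises hub out to cloud, SaaS and third-party endpoints.

```python
# Illustrative only: rough per-call round-trip latencies (milliseconds) for a
# transaction that chains lookups across a hub-and-spoke, on-premises design.
# All figures below are assumptions for the sake of the example.
hops_ms = {
    "enterprise app -> IaaS provider A (VPN over internet)": 40,
    "enterprise app -> SaaS CRM lookup": 55,
    "enterprise app -> third-party data feed": 70,
    "enterprise app -> partner B2B gateway": 60,
}

def total_latency(hops, calls_per_hop=3):
    """Sequential round trips accumulate; chatty integrations multiply the cost."""
    return sum(ms * calls_per_hop for ms in hops.values())

if __name__ == "__main__":
    print(f"Estimated network wait per transaction: {total_latency(hops_ms)} ms")
    # ~675 ms of pure network wait before any processing time is counted.
```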

Shortcomings of a Cloud-Only Model for Digital Business

One seeming solution to the inefficiencies of an on-premises model could be to base digital business applications, data and connections to peers all at a single public cloud provider. While this might mitigate the need to bring all these assets to the enterprise, because we bring the enterprise to the cloud instead, it presupposes little use of existing on-premises applications and implies reliance on a single cloud provider. In fact, many of the most valuable assets of enterprises are likely to remain on-premises, and it is extremely unlikely that large enterprises will rely solely on one cloud provider. We expect enterprises to use major cloud providers based on use case, much as they may have used a particular OS or hardware platform for specific use cases in the past. This means not simply multiple SaaS offerings and a single infrastructure as a service (IaaS) provider, but also multiple IaaS providers chosen by use case. We also expect the need for integration between multiple cloud providers, including SaaS providers and cloud-based information sources and services (see Figure 2).

Figure 2. East-West Traffic Between Cloud Providers and the Enterprise

Source: Gartner (July 2016)

Basing the solution solely on cloud-based assets, and concentrating on one cloud provider, complicates integration with existing assets, and particularly with other cloud providers, service providers, and sources and sinks of information. As shown in Figure 2, we expect significant value to be reliant on east-west traffic — that is, the integration between multiple cloud providers and on-premises-based assets. What we are describing is the role of hybrid applications that make use of enterprise-located or colocated assets in conjunction with one or more cloud platforms. With hybrid solutions and a plurality of cloud providers and service providers overwhelmingly likely, establishing a "home base" in a neutral, well-connected location is critical for future-proofing an architecture. Whether this fits into a neat definition of "hybrid cloud" or not, such messy integration will be necessary and common. These needs lay the groundwork for using an interconnected, carrier-neutral data center as the meeting place for this multitude of information sources and partners.

Data Center Interconnect Fabric Defined

Data center interconnection is a model in which discrete assets within a multitenant data center are connected to each other directly (today, usually over fiber) and in a peer-to-peer fashion. These connections may be as simple as fiber-optic cross-connects, but they allow data-center-based assets to connect horizontally to multiple carriers, cloud providers, peers and service providers. When we combine interconnection with high-speed enterprise access to the multitenant data center (for example, Ethernet over fiber), and locate enterprise assets such as compute, storage and, in particular, networking in the multitenant data center, we bring the enterprise and its applications to the network, as opposed to the outdated model of bringing the network to the enterprise (see Figure 3). This creates many opportunities for advanced solutions based on technology as well as topology options.

Figure 3. Data Center Interconnect

Source: Gartner (July 2016)
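As a concrete illustration of consuming interconnection "as a service," the sketch below shows what requesting a virtual cross-connect from a colocation provider's fabric API might look like. The endpoint, field names and port identifiers are assumptions for illustration only, not any specific vendor's API.

```python
# Hypothetical sketch: requesting a virtual cross-connect from a colocation
# provider's interconnect-fabric API. The base URL, payload fields and port
# identifiers are illustrative assumptions, not any vendor's actual interface.
import requests

FABRIC_API = "https://fabric.example-colo.net/v1"   # hypothetical base URL
TOKEN = "replace-with-credential"                    # credential placeholder

def request_virtual_circuit(a_side_port, z_side_provider, bandwidth_mbps):
    """Ask the fabric to connect our colocated port to a named provider."""
    payload = {
        "aSidePort": a_side_port,          # our router port in the facility
        "zSideProvider": z_side_provider,  # e.g., a cloud on-ramp in the same campus
        "bandwidthMbps": bandwidth_mbps,
    }
    resp = requests.post(
        f"{FABRIC_API}/virtual-circuits",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected to include a circuit ID and provisioning state
```

The design point is less the specific call than the fact that provisioning becomes a software operation the enterprise consumes, rather than a physical cabling project it manages.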

Peering in Ecosystems

The concept of data center peering using interconnection is not new. Colocation providers were connecting network providers to one another more than a decade ago to extend carriers' reach, interconnect content delivery networks with communications providers, and so on. Spurred by demand for very-low-latency connectivity for applications such as high-frequency trading (HFT), the concept of using interconnection to build ecosystems of like-minded enterprises and information providers emerged. In HFT, where the speed of communication between partners can determine who wins an individual transaction, and can translate into millions of dollars over a short period, connecting systems directly over fiber had obvious appeal. Where there is a technological advantage, there is demand, and soon financial services ecosystems sprang up in selected colocation centers in key markets, where customers were willing to pay a premium to be as close as possible to the exchanges and to each other (see Figure 4).

Figure 4. Enterprise Peering Example in Financial Services

Source: Gartner (July 2016)

This high-speed, low-latency concept has expanded beyond HFT to other vertical industries where very large files or the need for speed and low latency are factors. Examples include oil and gas exploration data in the energy sector and radiographic images in healthcare. To date, many of these solutions have been based on simple point-to-point fiber-optic connections between the routers of participants. While such connections are not particularly dynamic, requiring physical human intervention to make or break a connection, the model has been quite successful. Another important effect of this interconnection has been, to paraphrase Metcalfe's Law, to increase the value of the ecosystem or local network as each additional partner joins. This leads to demonstrable business value from presence in the respective interconnection centers, making it difficult, if not impossible, for partners to leave.
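To put the Metcalfe's Law point in numbers: the count of potential bilateral interconnects among n participants is n(n-1)/2, so the value of presence grows roughly with the square of the membership. A few illustrative lines:

```python
# Metcalfe's Law, paraphrased: the number of potential bilateral interconnects
# among n ecosystem participants is n*(n-1)/2, so value grows roughly with n^2.
def potential_interconnects(n):
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"{n:>4} participants -> {potential_interconnects(n):>5} potential pairings")
# 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950: each new member raises the value
# of presence for everyone already in the facility.
```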

Peering Ecosystem Model With Programmable Networking (Data Center Interconnect Fabric)

Marrying modern switching technology with the topological benefit of intra-data-center interconnection increases the utility, intelligence and use cases of the interconnection model. A programmable network model uses a software-defined network that can make or break connections much like the fiber-optic connections of the past, but dynamically, either from a command line or, increasingly, through an API. This allows participants to interconnect with peers and service providers based on logic, such as external triggers, thresholds or events. A simplified example might be establishing a connection to a cloud provider's router to spin up additional cloud instances when utilization or performance thresholds dictate, as sketched below. Not surprisingly, the more services, communications providers and cloud providers that are located in such an automated, switch-fabric-enabled data center, the richer the set of solutions that can be built.
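A minimal sketch of that trigger logic follows, under stated assumptions: get_cluster_utilization and provision_circuit are hypothetical placeholders standing in for whatever monitoring stack and interconnect API an enterprise actually uses, and the threshold values are invented for illustration.

```python
# Minimal sketch: when a utilization threshold is crossed, ask a (hypothetical)
# fabric API to establish a virtual circuit toward a cloud provider so extra
# capacity can be attached; tear it down again when demand subsides.
import time

UTILIZATION_THRESHOLD = 0.80   # assumed burst threshold
POLL_INTERVAL_SECONDS = 60

def get_cluster_utilization():
    """Placeholder for a real monitoring query (e.g., averaged CPU load)."""
    raise NotImplementedError

def provision_circuit(provider, bandwidth_mbps):
    """Placeholder for a fabric API call like the one sketched earlier."""
    raise NotImplementedError

def watch_and_burst(provider="cloud-provider-a", bandwidth_mbps=1000):
    circuit_up = False
    while True:
        utilization = get_cluster_utilization()
        if utilization > UTILIZATION_THRESHOLD and not circuit_up:
            provision_circuit(provider, bandwidth_mbps)   # make the connection
            circuit_up = True
        elif utilization < UTILIZATION_THRESHOLD / 2 and circuit_up:
            # break the connection again once demand subsides (teardown elided)
            circuit_up = False
        time.sleep(POLL_INTERVAL_SECONDS)
```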

The Current and Future State of Data Center Interconnection Using Programmable Network Infrastructure

Such switching technologies are not yet universally deployed across colocation providers. CoreSite demonstrated an early product with its Any2 switch capability, and Equinix has provided the capability in its IBX data centers, in conjunction with its multicloud capability. Epsilon has had such a capability in the European and Asia/Pacific markets for some time, and Console offers an "as a service" implementation. What is emerging, however, is the ability to control such switching through an API, enabling a broad range of application possibilities. In Gartner's view of the software-defined data center (SDDC; see "2016 Planning Guide for Data Center Modernization and Consolidation"), we migrate IT services — in this case, networking and security — from hardware provided by a single provider to a more open software model (see Figure 5).

Figure 5. SDDC Components

CMP: cloud management platform; SDC: software-defined compute; SDN: software-defined networking; SDS: software-defined storage

Source: Gartner (July 2016)

As we show in Figure 6, interoperable services (and, of particular interest to interconnection, programmable networking) create fertile ground to serve as the glue in complex digital business architectures. While such systems are complex, their design and implementation by the colocation provider reduces the burden on the enterprise to simply consuming a service. Once we can dynamically connect a range of applications running on cloud platforms as well as colocated assets, and interconnect them via a programmable networking system under the watchful eye of an orchestrator (perhaps including a security service pulling data from a SaaS solution), the value of proximity (i.e., the immediacy of such services) and the performance inherent in a high-speed switching fabric become apparent.

Figure 6. SDDC Evaluation Attributes

Source: Gartner (July 2016)

Future Promise: Data Center OS and Container Orchestration Frameworks Will Provide Coordination, Scheduling and Management of Applications Deployed Across Myriad Distributed Physical Servers and Service Providers

Containers will continue to be broadly adopted in net-new applications and those "born in the cloud," continuing the advances in efficiency and the decoupling of logical from physical workloads begun by virtualization. Containers decouple applications from infrastructure, requiring business application developers and operations professionals to think about their software in an application-service-oriented, rather than server-infrastructure-oriented, way. While virtualization radically changed application hosting and deployment options by allowing workloads to be moved and replicated across physical hosts, containers will take this concept to the next level, enabling more efficient and evolved development models, as well as simpler deployment than virtual machines and their applications. A further phase of deriving value from containers lies in the use of orchestration systems to serve as scheduling and deployment masters. As the market shakes out, and orchestration offerings such as Kubernetes, Apache Mesos or Docker Swarm gain traction, it is likely that data center interconnect fabric-based networking (see "Hype Cycle for Infrastructure Strategies, 2016") will be integrated at the base of the stack to facilitate workload balancing, movement and distribution. In this way, the data center of the future becomes even more software-defined and dynamic.
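The sketch below is speculative and illustrates one way "fabric-aware" placement could work: a scheduler-style function that weighs fabric-measured latency to a workload's data sources against available capacity at each site. All site names, latency figures and capacities are invented for illustration.

```python
# Speculative sketch of "fabric-aware" workload placement: choose the site
# whose fabric-measured latency to the workload's data sources is lowest,
# subject to available capacity. All data values and names are illustrative.
sites = {
    "colo-east":  {"free_cores": 64,  "latency_ms": {"exchange-feed": 1,  "iaas-a": 2}},
    "colo-west":  {"free_cores": 128, "latency_ms": {"exchange-feed": 38, "iaas-a": 3}},
    "on-prem-dc": {"free_cores": 32,  "latency_ms": {"exchange-feed": 22, "iaas-a": 45}},
}

def place_workload(required_cores, data_sources):
    """Pick the site with enough free capacity and the lowest summed latency."""
    candidates = {
        name: sum(info["latency_ms"][src] for src in data_sources)
        for name, info in sites.items()
        if info["free_cores"] >= required_cores
    }
    return min(candidates, key=candidates.get) if candidates else None

print(place_workload(16, ["exchange-feed", "iaas-a"]))   # -> "colo-east"
```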

Applicability of the Data Center Interconnection Model to Digital Business Requirements

Advanced digital business applications are likely to involve significant database use, disparate data sources and multiple cloud providers, with a need for very low latency between systems. This is unlikely to be served by WAN links from the enterprise to individual cloud providers, with switching taking place back on the enterprise's premises. What is needed is very high performance via a programmatic, secure and manageable fabric. Early trials of such technologies between cloud providers and associated service providers have been very promising, but have demonstrated the need for speed. As we add distributed data sources and sinks, and the Internet of Things, a data center interconnection model in conjunction with edge data centers (see "The Edge Manifesto: Digital Business, Rich Media, Latency Sensitivity and the Use of Distributed Data Centers") will be the most likely means of success.

A Contrarian View

There are a number of factors that, if they come to fruition, could inhibit or even prevent the data center interconnection model from becoming successful:

  • The cost of the pipes from enterprises to colocation centers remains prohibitive, blocking the three-tier model at the start (namely, local access).

  • If cloud does not become a major factor in large-enterprise solutions going forward, the need to connect multiple clouds to application logic or disparate data sources will be diminished.

  • Organizations with no physical infrastructure needs may use cloud services connected via a managed service provider for their digital business interconnection.

  • A lack of agreed-on and supported standards and/or interfaces for applications and cloud providers to interact with the fabric would drive fragmentation, with a smaller number of custom-made designs implemented by those with critical needs.

  • A lack of gravity or availability of "peering targets" in the data center would make some centers less likely to gain momentum.

  • A lack of attention to standard interfaces acceptable to cloud and service providers will leave some solutions handicapped.

  • Any inability of the colocation providers to scale their interconnection resources to meet the demands of hundreds of thousands or even millions of users could make the model unusable by the very constituency it most appeals to — very large enterprises.

  • If hurdles to implement security across the fabric and extended to the applications prove too complex, then enterprises may opt for other solutions.

  • Timing could limit market success. While providers and customers are working on solutions today, it may be five years before the average enterprise makes significant use of such services. Whether early adopters can thrive in the interim without more fully monetizing such solutions will be critical. One encouraging sign is that, while enterprises may see such technology as a future capability, there may be enough immediate business from the hyperscale providers to fuel market growth.

Bottom Line

In the near term, simple topology and physical interconnection via fiber optics will provide compelling reasons to use data center interconnect as the integration point, or glue, for digital business. As data center interconnect fabrics become more prevalent, and their APIs standardized and written to, the dynamism and speed of these connections will foster the development of even more useful applications. Finally, as container orchestration systems develop affinity and capabilities in conjunction with data center interconnect fabrics over the next three to five years, the advantages of a data-center-centric model will become overwhelming.