Converged Infrastructure: Utopia or Myopia?
Convergence has become a focal point of data center modernization, but IT leaders must understand both the potential and the limitations of convergence when considering long-term investments.
- Although the market emerged in 2008, convergence is still at an early stage of maturity.
- Convergence maturity and functionality will take at least five to 10 years to evolve.
- Future infrastructure convergence will "flatten" to be inclusive of scale-up and scale-out within fabric-based infrastructures.
- Convergence will require successively more advanced orchestration, parallel business and data perspectives, and hierarchical abstraction layers.
- While embryonic today, software-defined anything (SDx; meaning software-defined at all infrastructure levels) will deliver scale-out plus scale-up enterprise convergence by integrating today's silos of vertical and horizontal planes into a diagonal software-defined one.
IT leaders and architects should:
- Plan for multiconvergence infrastructures to evolve over three to five years.
- Avoid setting expectations around utopian convergence (that is, all problems solved); instead, identify the myopia of lingering silos and address it through a successive, multiphase plan toward higher convergence levels.
- Experiment with multiconverged infrastructure proofs of concept (POCs) that gradually integrate vertical and horizontal architectural schemes and mixed workloads.
- Implement the guidelines in this research over the next five to 10 years to advance multiconvergence and halt its silo effect.
Converged infrastructures have graduated from a fledgling market of a few hundred million dollars in 2010 to one approaching $6 billion entering 2014. This is still a small proportion of overall data center hardware spending of $85 billion, but all the traditional vendors have stepped into the ring, and both legacy and emerging providers are seeking entry, pursuing variations on the convergence theme.
Convergence has multiple dimensions beyond Gartner's standard definition, which subsumes server, network and storage with management software. The following list summarizes some of these dimensions:
- Technologies: x86, extreme low-energy (ELE), reduced instruction set computer (RISC), flash
- Architecture: Vertically integrated and centralized configurations, or horizontally clustered scale-out and distributed, or combinations of both
- Fabric interconnect: Within the rack, or externally connecting a federation of nodes, storage area networks (SANs) and networks
- Connected and converged nodes: Compute nodes, Internet-connected appliances, development/test environments and cloud
- Converged workloads and data: Data and application structures such as transaction-based, Web, publish, structured, unstructured, in-memory, pattern and cognitive
In the next five to 10 years, we expect hybrids will emerge that address broader architectural, data, management and cloud variations. Internal IT architects/integrators must use these variables as the basis to advance multiconvergence and halt its silo effect. Figure 1 provides a simplified model to help position both current and emerging vendor entries.
Source: Gartner (February 2014)
Here are some of the current trends in convergence:
- Today's convergence is largely vertical.
- Many traditional vendors offer combinations of integrated stacks and integrated infrastructures.
- Offerings cover various workloads, such as DBMS, enterprise applications, virtual desktop infrastructure (VDI) and others, both with and without reference architectures.
- Most converged infrastructures comprise a hypervisor for compute resource abstraction, provisioning and management, and can be configured as hard solutions or as cloud foundations — though a hypervisor may not be required.
- Converged infrastructures can have a distributed dimension by enabling multiple configurations to be sized for distributed and remote locations.
- Mixed workloads are increasingly hosted on converged infrastructure in preference to specific, purpose-built solutions.
- Both the compute and the storage functions are addressed as interrelated aspects of the convergence equation.
More-recent vendor entrants (for example, Nutanix and SimpliVity) have placed most of the convergence emphasis on the processing and movement of data, treating compute resources as a given: supplied by the vendor, or viewed as inexpensive, interchangeable processors feeding a storage resource with file management.
Current and next-generation workloads require distributed, performance-scalable, inexpensive and massively pooled nodes. Convergent designs must manage performance, capacity, provisioning, service demands, infrastructure optimization and cloud stacks. These converged infrastructures will be based on fabric-integrated multinode clusters, federated over multiple data center and geographical sites, using software-defined storage and network technologies on both virtual and bare metal infrastructures.
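The placement logic implied by federated, fabric-integrated multinode clusters can be illustrated with a small sketch. The classes, site names and capacity-first placement rule below are all hypothetical, assumed for illustration; a real convergent design would also weigh performance, service demands and cloud-stack constraints.

```python
# Hypothetical model of fabric-integrated clusters federated across sites.
class Cluster:
    def __init__(self, site, free_nodes):
        self.site = site            # data center or geographical site
        self.free_nodes = free_nodes  # pooled nodes currently available

def place_workload(clusters, nodes_needed):
    """Pick the site with the most free capacity that fits the request."""
    candidates = [c for c in clusters if c.free_nodes >= nodes_needed]
    if not candidates:
        return None  # would trigger capacity expansion or queueing
    best = max(candidates, key=lambda c: c.free_nodes)
    best.free_nodes -= nodes_needed  # provision nodes from the pool
    return best.site
```

The same pattern extends naturally to software-defined storage and network resources: each becomes another dimension of the candidate filter rather than a separately managed silo.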
This is only the beginning though. Additional questions need to be addressed, for example:
- How should convergence address management — with a single superconsole or multiple integrated ones through open or exposed APIs?
- Which vendors should deliver the standard enterprise matrix for the others to connect to?
- How do endpoint services at warehouses and retail outlets, in home and work appliances, and in vehicles converge to enable enterprises to operate in near real time across the fabric?
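On the first question, one plausible pattern is a thin orchestration layer that integrates multiple vendor consoles through their exposed APIs rather than replacing them with a single superconsole. The sketch below is illustrative only: the vendor adapters, method names and placement rule are invented for the example.

```python
from abc import ABC, abstractmethod

class InfrastructureAPI(ABC):
    """Common interface an orchestration layer would expect each console to expose."""
    @abstractmethod
    def health(self) -> dict: ...
    @abstractmethod
    def provision(self, workload: str, nodes: int) -> str: ...

class VendorAConsole(InfrastructureAPI):
    # Hypothetical adapter wrapping vendor A's exposed API.
    def health(self):
        return {"vendor": "A", "status": "ok"}
    def provision(self, workload, nodes):
        return f"A:{workload}:{nodes}"

class VendorBConsole(InfrastructureAPI):
    # Hypothetical adapter wrapping vendor B's exposed API.
    def health(self):
        return {"vendor": "B", "status": "ok"}
    def provision(self, workload, nodes):
        return f"B:{workload}:{nodes}"

class SuperConsole:
    """Aggregates multiple integrated consoles behind one management view."""
    def __init__(self, consoles):
        self.consoles = consoles
    def overall_health(self):
        return [c.health() for c in self.consoles]
    def provision_anywhere(self, workload, nodes):
        # Naive placement: first console wins; real logic would weigh capacity.
        return self.consoles[0].provision(workload, nodes)
```

Whether such a layer is open or proprietary is exactly the control-point battle discussed below: the value concentrates in whoever defines the `InfrastructureAPI` contract.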
Convergence will require successively more advanced orchestration, parallel business and data perspectives, and hierarchical abstraction layers. Our model chooses to distinguish this level of convergence as the diagonal plane, evolving from the previous vertical and horizontal ones.
The diagonal of our model offers IT leaders a way to view convergence that transcends today's primarily silo-based convergence and invokes the software-defined era (see Figure 2). It would preferentially (though not mandatorily) feature open APIs, community and vendor open-source collaboration, multivendor converged infrastructure playbooks, and more lock-in disablers. This will not result in total immunity from vendor control points: vendors will still battle for software control points and will shift to greater leverage through productized services.

The economics shift for both IT procurement and vendors. Interchangeable technology based on standard components will be supplied, at potentially lower costs and margins, by outside design manufacturers. But design, integration and support will still require strong vendor relationships, through the value realized by management and orchestration. Valuation shifts up the stack and into services. Inexpensive hardware and improved cost optimization free IT to invest in higher levels of agility, responsiveness and quality. Without hardware asset deflation, the additional software intelligence needed to reduce and prevent overwhelming moving-part complexity would put further stress on strained budgets.
SDC: software-defined compute; SDI: software-defined infrastructure; SDDC: software-defined data center.
Source: Gartner (February 2014)
SDx is a collective term that encapsulates the growing market momentum of infrastructure programmability. The goal of SDx is to abstract conventional, proprietary, vendor-specific hardware and software implementations so that users face less hardware lock-in. This can be achieved through an infrastructure policy framework and through interoperability via APIs (although not necessarily standard APIs). Gartner takes the SDx concept one step further: the future of IT infrastructure will be model-based, with business key performance indicators (KPIs, such as throughput, uptime, response time and input/output operations per second) driving the selection of infrastructure to meet service needs. This, in turn, fosters repeatable engineering and a direct connection between business requirements and infrastructure.
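The model-based idea can be pictured as KPI-driven matching: business KPIs are expressed as a policy, and candidate infrastructure profiles are filtered against it. A minimal sketch, assuming invented profile names and purely illustrative numbers:

```python
# Hypothetical infrastructure profiles; all figures are illustrative only.
PROFILES = [
    {"name": "scale-out-cluster", "iops": 200_000, "uptime": 99.9,  "response_ms": 8},
    {"name": "scale-up-node",     "iops": 80_000,  "uptime": 99.99, "response_ms": 3},
    {"name": "hybrid-fabric",     "iops": 150_000, "uptime": 99.99, "response_ms": 5},
]

def select_infrastructure(kpis, profiles=PROFILES):
    """Return the names of profiles meeting every business KPI in the policy."""
    def meets(p):
        return (p["iops"] >= kpis.get("min_iops", 0)
                and p["uptime"] >= kpis.get("min_uptime", 0.0)
                and p["response_ms"] <= kpis.get("max_response_ms", float("inf")))
    return [p["name"] for p in profiles if meets(p)]
```

For example, a policy demanding at least 100,000 IOPS and 99.95% uptime selects only the hybrid profile above; because the selection is repeatable from the same KPI policy, it gives the direct, auditable link between business requirements and infrastructure that the model-based approach promises.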
Server design will remain an integral part of the holistic picture even though the emphasis appears to shift more toward storage and fabric. As seen in Table 1, many design points at the node, rack and fabric level will remain and influence the progress of server evolution within the convergence framework. Many of the design points are related to horizontal and, eventually, diagonal integration, and some of them are estimated to take five to 10 years until they reach maturity, as defined by reaching the Plateau of Productivity in the Gartner "Hype Cycle for Server Technologies, 2013." With convergence maturity dependent on a multiplicity of hardware and software factors, a best-case and worst-case scenario would be structured as follows:
- Best case (0.3 probability): All of the parts align and progress through their individual maturity cycles in step, producing a harmonious confluence in a shorter time frame (for example, two to five years).
- Worst case (0.7 probability): The parts progress erratically, interdependencies retard growth along the diagonal, and vendors continue to drive individual convergence strategies.
Source: Gartner (February 2014)
Next-generation, Nexus of Forces-type applications will drive infrastructure design. Integrated and converged systems will have to evolve in step with applications and service delivery.