LICENSED FOR DISTRIBUTION

Critical Capabilities for Solid-State Arrays

Published: 17 July 2017 ID: G00316095

Analyst(s):

Summary

Solid-state array features, pricing, scale and density are improving at a rapid pace, while agility and disaggregation benefits are providing long-term value. Gartner has analyzed 19 SSA products across five high-impact use cases to quantify what's important to infrastructure and operations leaders.

Overview

Key Findings

  • Vendors in the solid-state array market use two different architectural and business models: one uses standard solid-state drives to reduce costs, and the other uses commodity components to build proprietary SSDs that increase performance and offload processing.

  • The most-common use cases for SSAs have changed, with server consolidation now first, online transaction processing moving to second place, analytics moving to third and virtual desktop infrastructure in fifth place.

  • The cost of SSAs continues to drop, with the best purchase prices 50% lower than 12 months ago. This decline is expected to continue for the short to medium term.

  • Storage virtualization, orchestration and agility features are now available in the arrays and are integrated closely with the hypervisors, enabling storage provisioning to be performed from the server hypervisor.

  • Customers value the SSA disaggregation benefit of being able to scale capacity and performance separately, while simultaneously providing agility by being able to quickly move application data among servers without causing bottlenecks.

Recommendations

Infrastructure and operations leaders involved with infrastructure modernization should:

  • Deploy SSAs in business- and mission-critical environments, because reliability and performance exceed expectations and are often better than hard-disk drive arrays.

  • Avoid depending on vendors' multimillion IOPS claims, which few customers need; instead, look at response times and low latency as performance indicators.

  • Leverage vendor product guarantees for reliability, performance, upgrades, effective storage capacity or any other key requirement or feature.

  • Move to a solid-state data center in the next five years for primary application workloads.

Strategic Planning Assumption

By 2020, 50% of the traditional, general-purpose storage arrays currently used for low-latency, high-performance workloads will be replaced by solid-state arrays (SSAs).

What You Need to Know

This document was revised on 24 July 2017. For more information, see the Corrections page.

Infrastructure and operations (I&O) leaders are increasing the number of SSAs they purchase and are successfully using SSAs in a wide variety of business-critical enterprise environments. The use cases continually expand, with enterprises using them in non-primary-data workloads, such as backup targets for fast restores, big data and high-performance computing (HPC).

Since 2016, the raw capacity of SSAs has increased significantly, with larger systems offering as much as 8PB of raw storage. Therefore, with deduplication and compression, SSAs can meet most customers' capacity requirements. Although SSAs are between five and 10 times more expensive than disk arrays with equivalent raw capacity, the purchase and ownership cost of SSAs continues to drop, due to vendor discounting to gain market share. During the past 12 months, the price of an SSA — including all features, upgrade guarantees, and support and maintenance — has dropped by 50% (see "There Has Never Been a Better Time to Buy a Storage Array" ).
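The raw-versus-effective-capacity arithmetic behind these comparisons can be sketched as follows; the array price, raw capacity and 4:1 reduction ratio below are illustrative assumptions, not Gartner data:

```python
def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Usable capacity after data reduction (a 4:1 guarantee means reduction_ratio=4)."""
    return raw_tb * reduction_ratio

def cost_per_effective_tb(price: float, raw_tb: float, reduction_ratio: float) -> float:
    """Price per effective TB -- the figure to compare against HDD arrays, not raw $/TB."""
    return price / effective_capacity_tb(raw_tb, reduction_ratio)

# Hypothetical example: a 100TB-raw SSA priced at $500,000 with a 4:1
# data reduction guarantee yields 400TB of effective capacity.
print(effective_capacity_tb(100, 4))           # 400.0
print(cost_per_effective_tb(500_000, 100, 4))  # 1250.0
```

Deduplication-friendly workloads (such as VDI) push the ratio, and therefore the effective price per TB, further in the SSA's favor.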

Customers should choose solutions backed by strong service and support that includes the ability to deliver product enhancements and new features nondisruptively. Price, performance and ease of use are still the leading purchase criteria; however, during qualification, most customers request extensive data services. Moreover, by early 2018, due to the advent of Nonvolatile Memory Express (NVMe) Peripheral Component Interconnect Express (PCIe) solid-state technology, expectations of consistent, sub-500-μs storage input/output (I/O) response times will become commonplace.

Such response times will again become performance differentiators, but only for a short time. However, response times are more important than multimillion input/output operations per second (IOPS) capabilities, which are often the most cited performance metric. Therefore, the new low-latency NVMe — and, externally, the NVMe over Fabric (NVMeoF) — protocol, which will replace Small Computer System Interface (SCSI), will enable SSAs to meet most performance requirements.

Consequently, because storage array performance is limited by its external connections, customers should be cautious when purchasing an SSA that doesn't support 16 Gbps Fibre Channel (FC) or doesn't have NVMeoF, 32 Gbps FC or 25 Gigabit Ethernet on its product roadmap. A balanced design with high internal performance must be complemented by a low-latency, high-bandwidth network connection (see "The Future of Storage Protocols" ). Due to wide-area network (WAN) latency, synchronous replication considerably reduces the performance of SSAs, which is diametrically opposed to their key value. Therefore, synchronous replication is not weighted heavily (see "Slow Storage Replication Requires the Redesign of Disaster Recovery Infrastructures" ).
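As a rough illustration of why front-end connectivity can cap array performance, consider the nominal one-direction throughput of common link types; the per-port figures and the 12.8 GB/s array throughput below are round-number assumptions for illustration only:

```python
import math

# Approximate nominal throughput per port, one direction, in MB/s
# (assumed round figures for illustration, not vendor specifications).
PORT_MBPS = {"16G FC": 1_600, "32G FC": 3_200, "25GbE": 3_125}

def ports_needed(array_mbps: float, link: str) -> int:
    """Minimum number of front-end ports of a given type so that the
    network does not bottleneck the array's internal throughput."""
    return math.ceil(array_mbps / PORT_MBPS[link])

# A hypothetical SSA sustaining ~12.8 GB/s internally needs twice as many
# 16G FC ports as 32G FC ports to expose that performance externally.
print(ports_needed(12_800, "16G FC"))  # 8
print(ports_needed(12_800, "32G FC"))  # 4
```

The same arithmetic explains the roadmap pressure toward 32 Gbps FC and 25GbE: fewer, faster ports keep the external network in balance with rising internal array performance.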

Any SSA vendor that can improve on general-purpose, hard-disk drive (HDD) array reliability and raise performance to submillisecond levels, or by an order of magnitude, has a demonstrable value proposition. Performance benefits are valuable and are often the initial attraction; however, considerations such as reduced administration, increased capacity utilization, and reduced rack space, power and cooling are just as important, but may not be as immediately apparent.

Because Gartner requires significant product adoption to obtain customer references, only products that have been field-validated have been included. (For an analysis of forward-looking statements concerning vendor vision, ability to execute and statements of direction, see the "Magic Quadrant for Solid-State Arrays." )

Product evaluation criteria include the following considerations:

  • Product features considered must have been in general availability by 31 March 2017 to be included in the vendors' product scores.

  • Ratings in this Critical Capabilities document should not be compared with other Critical Capabilities research, because the ratings are relative to the products analyzed in each case, not the ratings in other research.

  • The scoring for the seven critical capabilities and five use cases was derived from analyst research conducted throughout the year. Each vendor responded in detail to a comprehensive, primary research questionnaire administered by the research team. Extensive follow-up interviews were conducted with all participating vendors, and reference checks were conducted with end users.

Analysis

Critical Capabilities Use-Case Graphics

Figure 1. Vendors' Product Scores for the Online Transaction Processing Use Case
Research image courtesy of Gartner, Inc.

Source: Gartner (July 2017)

Figure 2. Vendors' Product Scores for the Server Virtualization Use Case
Research image courtesy of Gartner, Inc.

Source: Gartner (July 2017)

Figure 3. Vendors' Product Scores for the Virtual Desktop Infrastructure Use Case
Research image courtesy of Gartner, Inc.

Source: Gartner (July 2017)

Figure 4. Vendors' Product Scores for the High-Performance Computing Use Case
Research image courtesy of Gartner, Inc.

Source: Gartner (July 2017)

Figure 5. Vendors' Product Scores for the Analytics Use Case
Research image courtesy of Gartner, Inc.

Source: Gartner (July 2017)

Vendors

Dell EMC VMAX All Flash

The VMAX All Flash family consists of the VMAX 250F, 450F, 850F and 950F, which range from 15TB to 5PB in raw capacity. This array is a scale-up/out design, starting with two nodes on the 250F and scaling to eight nodes on the 950F. EMC made the first models, the VMAX 450F and 850F, generally available in February 2016, before the Dell acquisition; the smaller VMAX 250F followed in September 2016, and the new, top-of-range VMAX 950F joined the model range with a general availability date of June 2017. The VMAX All Flash family has selectable in-line hardware compression, but does not offer deduplication. Dell EMC does offer a 4:1 compression guarantee. The minimum requirement of a VMAX All Flash is 10 rack units (RU or U), which is the amount of space that the 250F requires. Although this is still a large amount of rack space, it compares well with the larger 450F and 950F, which require at least 40U. If a smaller footprint is needed, customers can choose a VMAX 250F, which starts with an 11TB v-Brick. Dell EMC often leads with the VMAX All Flash family for mission-critical workloads. It is positioned for online transaction processing (OLTP), server consolidation and high-availability workloads.

The VMAX All Flash exploits some NVMe technology. For example, NVMe is used for "vault solid-state drives (SSDs)," which are the storage for nonvolatile destage cache. However, there are no NVMe connections for standard data SSDs, which use 12 Gbps SAS. The VMAX All Flash has good security features and offers certified data erasure, so that customer data can be erased from the SSDs. Dell EMC does not have published SPC-2 performance numbers for the solid-state VMAX All Flash. Customers moving to SSAs can use the VMAX All Flash replicator and/or standard replication features to perform nondisruptive migrations from hybrid HDD VMAX to VMAX All Flash. Software is priced in "F" and "FX" software bundles, which are included with the initial array purchase, but individual features can still be purchased separately.

Dell EMC Unity All Flash

The Dell EMC Unity All Flash series is the newest SSA from Dell EMC. It became available in May 2016 and consists of four products (the 300F, 400F, 500F and 600F), which scale from 1.1TB to 10PB. New, denser, higher-capacity arrays are expected in 3Q17. Unlike its VMAX All Flash and XtremIO stablemates, which are scale-out systems, the Unity All Flash SSA is a scale-up, dual-controller, unified, multiprotocol (file, block and REST API) system. From an efficiency perspective, the Unity All Flash array also provides more storage density, in terms of TB per rack unit (U), than the VMAX All Flash or XtremIO. Therefore, if the data center footprint is a concern, then the Unity All Flash has significant advantages over other Dell EMC SSAs. All array features are included in the software pricing when purchasing the array. From a data reduction perspective, the Unity All Flash SSAs provide compression, which is user-selectable at the logical unit number (LUN) level.

A software-defined storage (SDS) version of the Unity All Flash array is available as a virtual storage array (VSA), if customers want to use SDS and run Unity within a server, using the server's processing and internal SSD resources. Access to the public cloud is provided via OpenStack Cinder and Manila drivers, and public cloud API support is provided via the Cloud Tiering Appliance (CTA). Extensive backup support is provided, with more than 10 separate backup products supported. Guarantees are provided for lifetime SSD endurance and 4:1 compression; for customers who require replication, the array offers both synchronous and asynchronous replication. Cloud-based storage support and predictive analytics are provided by CloudIQ, which is no-cost software as a service (SaaS).

Dell EMC XtremIO

The scoring in this Critical Capabilities research is based on the currently available XtremIO X1 array, not the XtremIO "X2" product announced in May 2017 (currently unavailable, but expected in 3Q17). The XtremIO SSA is a scale-out system that offers in-line, "always on" compression and deduplication, and all-inclusive software feature licensing. It became available in June 2015 and has been repositioned to run workloads, such as virtual desktop infrastructure (VDI) environments, that are good candidates for deduplication, or that require large numbers of snapshots or copies of data, such as test/development. Copy data capabilities are aided by the extensive features in XtremIO Integrated Copy Data Management (iCDM).

A new version, XtremIO X2, is planned for availability in 3Q17. This new scale-up and scale-out model will scale from 134TB in one X-Brick up to 1.1PB, when scaled out to eight X-Bricks. XtremIO X2 will continue to offer the current Dell EMC RecoverPoint-based asynchronous replication. However, RecoverPoint will be sold as a separately chargeable item with X2 arrays. Dell EMC offers a 4:1 Storage Efficiency Guarantee Program; however, capacity upgrade granularity is still not competitive, and all nodes need to be the same raw capacity in the initial XtremIO X2 release. Similarly, old X1 and new X2 XtremIO controller nodes cannot be mixed and matched together. The capacities of the SSDs in the new X2 XtremIO are 400GB and 1.92TB, and the array will use 12 Gbps SAS as the internal interconnect, not NVMe. However, the changed SSD capacities allow smaller increments for upgrades, because, during XtremIO X2 capacity upgrades, each X-Brick will still be required to have the same raw capacity. SSD capacities cannot be mixed within an X1 XtremIO.

Fujitsu Storage Eternus AF Series

Fujitsu's new AF250 and AF650 SSAs became available in November 2016. All storage features are included in the "All-in flash pack," which is part of the base price of the array. Major improvements are many selectable and flexible configuration options, plus new features, which include in-line deduplication, compression and an increased maximum raw capacity of 737TB for the AF250 and 2.9PB for the AF650. The flexible data reduction features are selectable per volume or LUN; therefore, data that is unsuitable for compression or deduplication can be excluded as required. Similarly, performance can be managed at the volume level, as well as the host level, via quality of service (QoS) features. This is becoming a more important feature, as the size and performance capabilities of SSAs increase.

Backward compatibility and customer investment protection are good. The array controllers can also replicate data synchronously or asynchronously to any other Fujitsu AF or DX disk-based array. The AF models can be part of a highly available storage cluster with other AF arrays or with the previous DX200F Fujitsu SSAs. The array has wide support from several backup and archive vendors, plus cloud API support via the ETERNUS OpenStack VolumeDriver. Controller software upgrades are nondisruptive, taking approximately 10 minutes, and customers can perform these themselves, without requiring scheduled time with vendors. The AF arrays are space-efficient, requiring only 2U and 5U of rack space for AF250 and AF650 configurations providing more than 350TB of raw capacity. Fujitsu provides a five-nines availability guarantee and an SSD rebuild time of less than two hours.

From a security perspective, if customers require data to be deleted from the array, then Fujitsu offers both drive sanitization, as a free-of-charge feature of ETERNUS AF, and certified data erasure, as a chargeable service from Fujitsu Professional Services. Effective capacity guarantees can be agreed on after a customer-specific data structure assessment.

Hitachi Data Systems VSP F Series

The Hitachi Data Systems (HDS) VSP F Series of SSAs consists of the F400, F600, F800 and F1500. In October 2016, HDS expanded this series with a high-end model, the F1500; the range of capacities starts at 7TB and scales to 8PB. HDS continues to build its own custom flash modules (FMDs), which offload compression from the array controller into the FMDs. These FMDs are now offered in larger 7TB and 14TB capacities. The VSP F series' scale-out, unified architecture enjoys widespread interoperability, based on its VSP heritage. Other hallmarks of Hitachi engineering include strong performance and reliability, with the latter supported by a 100% availability guarantee. The VSP F series administration graphical user interface (GUI) is the same as the rest of the VSP series, but is not as intuitive or simple to use as competitors' offerings. If administration simplicity for nonstorage administrators is required, then the Hitachi Automation Director can be purchased as part of the Advanced software package.

Hitachi includes penalty-free compression, which is done in-line at the FMD level, and post-process deduplication for file storage. However, block-based deduplication is unavailable. HDS has understood that low-latency storage requires a low-latency network for balanced, holistic, end-to-end performance. To this end, the F400, F600 and F800 have 32 Gbps FC, but the high-end F1500 does not. The VSP array software features are sold in two packages: Foundation, which is the base software supplied with each array, and Advanced, which contains extra features, such as remote replication. The Advanced package contains all the software that is in the Foundation package. Alternatively, customers can purchase individual additional features on top of the Foundation package, which are charged by capacity.

HDS offers a 100% data availability guarantee and a 2:1 data reduction guarantee. Microcode updates are nondisruptive and can be done in a phased manner, one controller core at a time. Backup support is extensive and object support is provided by an OpenStack REST API on the block-and-file module. Tiering to Amazon is via S3 APIs, and tiering to Azure is also provided. If a customer requires that all the data is securely deleted from an FMD or whole array, HDS can offer a guaranteed data eradication service.

HPE 3PAR StoreServ All-Flash Arrays

The HPE 3PAR StoreServ portfolio has an extensive range of product offerings, starting with the entry-level 8200 series and progressing higher, with more compute resources and memory, to the 8450. The predecessor to the 8000 series, the 7000 series, reached end of life in April 2017. The model range has added the new 9000 series, which became available in June 2017, and ends with the 20000 series, which is capable of scaling to 8PB of raw capacity. All of these products offer low-capacity introductory starter kits and software packages, which can be available with a consumption-based pricing model.

The StoreServ portfolio features a wide range of cost-effective SSD capacities, up to 15.3TB, enabling compelling power and space savings. Given its architecture, the product is able to efficiently use these SSDs to drive an aggressive overall system price. Well-established thin-provisioning capabilities complement selective thin deduplication, which proceeds after the initial zero-block, bit-pattern detection. Compression is relatively unproven (it became generally available in March 2017). The compression uses an Express Scan method that identifies redundant or incompressible data and avoids wasting compute resources. The data efficiency is supported by a Get Thinner Guarantee that states a 75% data reduction ratio.

HPE 3PAR StoreServ has extensive ecosystem support for leading hypervisors and OSs, third-party backup software that now includes Veeam, as well as enhanced support for container and orchestration tools, such as Mesosphere. A common management interface, improved QoS features, and granular performance and health monitoring are simple to use. HPE resiliency is comprehensive, with HPE Persistent Cache, Persistent Port, Peer Persistence, and Asynchronous and Synchronous Remote Copy technologies. This is supported by a reliability track record of six-nines (99.9999%) high availability and a 12-month guarantee, for which HPE requires customers to have at least four nodes and a more-expensive, mission-critical support contract. HPE offers investment protection with its ability to nondisruptively migrate to NVMe PCIe technology, support for storage-class memory (e.g., 3D XPoint) and a three-year technology refresh extension business program.

IBM FlashSystem A9000

The IBM FlashSystem A9000 Series consists of two models: FlashSystem A9000 and FlashSystem A9000R, which is the rack version. FlashSystem A9000 has a maximum raw capacity of 105.6TB, and FlashSystem A9000R can achieve 633.6TB across six fully populated enclosures in a rack. FlashSystem A9000 includes IBM Hyper-Scale Manager and Hyper-Scale Mobility to allow central management and mobility across 144 storage systems. The system has both compression and deduplication. IBM offers a blind 2:1 data reduction guarantee and a 5:1 data reduction guarantee that depends on a workload diagnosis. If these are not met, then IBM will provide additional hardware to make up the promised effective capacity. FlashSystem A9000 is used by customers as a low-latency SSA for high-performance applications on single servers. However, it can also be used as shared, high-performance SAN storage. Conversely, FlashSystem A9000R is positioned for big data and analytics. It does not have the low latency of the A9000; however, the A9000R offers more storage capacity.

The FlashSystem A9000 architecture inherits its software from the XIV; therefore, it has simple, graphical-icon-based storage administration. IBM's hardware is predicated on its own flash module technology, which it has developed internally and optimized with its flash component supplier, Micron. This was done to enhance performance and increase reliability, while using less-expensive, consumer-grade flash technology. However, this product has fallen behind on flash technology and now comes at a premium. Given its high performance and reliability, this product features robust security. It's the only product in IBM's SSA portfolio that offers customers a method to purge data, achieved with IBM's crypto-erase option for encryption keys. Although removing the encryption keys does not physically delete data, it does make the data logically unreadable, which is nearly equivalent to deletion.

IBM DS8880F Data Systems

The DS8880F series became available in January 2017 and is, therefore, the newest SSA in IBM's flash storage portfolio. This family of SSAs includes the DS8884F, DS8886F and DS8888F, which scale from 6.4TB to 1.2PB of raw capacity. The models are positioned for mainframe deployments and business-critical workloads: the DS8884F for ERP/OLTP, the DS8886F as a midrange system and the DS8888F for analytics workloads. Nevertheless, customers do not need to strictly follow these guidelines and can successfully use each system for any application or workload, because the controller processor core count, external connections and storage capacities increase in step with each model. Compression and deduplication features are not provided by the DS8880F series; however, IBM does offer compression to customers by selling the SAN Volume Controller (SVC) product, which is installed as a virtualization layer between the server and the DS8880F SSA.

If the SVC is used, then the customer will have to purchase, administer, support and upgrade the SVC and DS8880F equipment separately, adding complexity and cost. The DS8880F series is architected for high availability, robust microcode, and an extensive suite of synchronous, asynchronous, mirroring and other resiliency features suitable for business-critical workloads and mainframe environments. The DS8880F family provides enterprise-grade flash technology with SSD drives or optimized High-Performance Flash Enclosures. The minimum rack requirements range from 44U to 46U. It is not a small system and, therefore, is unsuitable when floorspace and rackspace are at a premium. Even so, the DS8880F has reduced the rack width by 30%, compared with the previous system generation. Although the DS8880F series does interoperate with and support the major hypervisors, it is mainly sold and positioned for IBM mainframes, IBM Power Systems and for SAN attachment to open-system physical servers.

IBM Storwize All-Flash Series

IBM's Storwize series of SSAs consists of the Storwize V5030F and V7000F, which are positioned primarily for virtualized storage infrastructure consisting of general-purpose database workloads and back-office environments. This product became generally available in September 2016, and the systems can scale up to larger capacities than the A9000 or DS8000 series. These offerings have a maximum of 11.6PB of raw capacity, using cost-effective, industry-standard SSD technology. In addition, the Storwize series can be clustered to reach its maximum architectural capacity limit of 32PB. The Storwize series has extensive API integration and support for hypervisors and the main data protection vendors. In-line deduplication is planned for 4Q17, and compression is already provided in-line.

When the system detects that the data is unsuitable and no savings can be obtained via compression, compression is automatically disabled at the LUN level. IBM guarantees a base 2:1 compression savings ratio for the Storwize series. However, if customers use IBM's "Comprestimator" tool and it estimates a higher compression ratio, such as 5:1, then IBM will guarantee this extra saving. The Storwize series offers both synchronous and asynchronous replication, and also offers HyperSwap capabilities for high-availability, active-active requirements. From a security perspective, the Storwize arrays provide AES/XTS 256 encryption.

Kaminario K2

The Kaminario K2 was updated to the sixth-generation SSA on 8 February 2017, which reinforces an excellent track record of product innovation. The K2 is a flexible design that can be implemented as a single controller-pair array, which can be expanded by scaling up or scaling out. The system scales from 7.4TB to 4PB and provides both compression and deduplication. NVMe PCIe is used in the array to provide low-latency access to an offload engine that performs compression. However, internal NVMe connections to SSDs are not available today, even though the array has been designed for NVMe and will use NVMe SSDs in the future. The sixth-generation K2 also has the latest high-speed external interconnects: 32 Gbps FC and 25GbE. This puts the K2 in a good position for NVMeoF and the upcoming requirement for faster end-to-end storage networks, where data can be shared among many systems, without the restriction of moving data between internal server storage.

Customers who own fifth-generation K2 arrays can mix and match these with new sixth-generation K2 nodes. This makes product transitions and migrations simple for users and provides investment protection. Kaminario offers good upgrade, performance and capacity guarantees within the "Assured Capacity, Availability, Performance, Scale, Maintenance and SSD Life" programs (Kaminario ForeSight). The administration GUI is clear, simple and intuitive, and it can be used by nonstorage administrators to perform simple tasks. Deduplication is selectable, but compression cannot be disabled. The K2 supports asynchronous replication; however, synchronous replication is not available. System security is good, with Advanced Encryption Standard (AES) encryption at the SSD level and key management.

NetApp AFF Series

The NetApp AFF series of arrays has been completely refreshed with a new product line of SSAs. The model series consists of the AFF A200, A300, A700 and the A700s. They have all become available during the past eight months, with the following general availability dates:

  • AFF A300 — October 2016

  • A700 — November 2016

  • A200 — December 2016

  • A700s — February 2017

The arrays scale up from 2.2TB to 7.3PB, but they can be connected in a federated cluster mode in which it's possible to have an 88PB namespace of storage capacity. Solid-state installations of this size are rare; however, NetApp has had a few customers who require an SSA cluster to support up to 10PB in a single image.

Compared with the previous generation of AFF arrays, the new AFF A series is more efficient in rack space, power and cooling requirements. In particular, the smallest array, the A200, requires only 2U of rack space to provide 100TB of capacity. High-density, low-latency storage arrays require low-latency network connections, as more data has to be transferred across fewer connections. NetApp has commensurately upgraded the network connections on the A series to 32 Gbps FC and 40GbE, so that the network scales with array performance. Because the AFF A-series array software is based on and derived from the ONTAP hybrid arrays, the A series provides an extremely broad feature set, including cloud integration, compression and deduplication. The software is provided in bundles, one of which is the Premium Flash Bundle, which includes all features. However, customers can also purchase the Base Bundle, which provides features on a cost-per-capacity model. NetApp offers an effective capacity guarantee and a controller upgrade program.

NetApp EF Series

The NetApp EF Series of SSAs is based on the HDD E-Series of general-purpose arrays (see "Critical Capabilities for General-Purpose, Midrange Storage Arrays" ). It is a scale-up, dual-controller array that does not offer compression or deduplication and only scales to 384TB. Because the EF arrays do not offer data reduction features, NetApp can provide performance, price and configuration transparency with published SPC-1 and SPC-2 benchmarks. These results show competitive response times and IOPS performance for the cost of the benchmarked EF configuration. The arrays are relatively old — the EF560 was made available in December 2014, and the EF550 became available in November 2013.

The EF series is positioned as a simple, "no frills" array oriented toward low-latency or high-bandwidth workloads that do not require data reduction features. Due to the system being based on the equivalent E disk array product, the EF also offers synchronous and asynchronous replication. The EF series is also used as a storage array within NetApp's Converged Infrastructure Flexpod offerings. The EF is part of the Flash Advantage program, which includes a 3X Performance Guarantee. All storage software features are included in the base price of the array, excluding encryption, which is an optional chargeable item. Media security is provided by Full Data Encryption (FDE) for each SSD in the array.

NetApp SF Series

The NetApp SF Series of SSAs is a true scale-out array, which is highly available, because data is dispersed across nodes; therefore, it can survive the failure of any single node. The smallest configuration that can be purchased is four nodes; however, the average customer implementation is 12 nodes. Because the SF Series scales to 100 nodes, the system can scale to multiple petabytes of capacity. Depending on the makeup of the application data, the effective capacity can be much higher, because the SF Series offers both compression and deduplication. These data reduction features are always enabled.

Different capacities and generations of nodes can be mixed together in a cluster, which provides investment protection and simplifies array migration projects. The current generation of nodes consists of the 4.8TB SF4805, 9.6TB SF9605 and 19.2TB SF19210. This generation, specifically the most recent model, the 38.4TB SF38410, will be the last based on internal 12 Gbps SAS interconnects, because NVMe will be used in the future to further reduce latencies and exploit the next generation of high-capacity, low-latency solid-state media. The SF Series supports FC and Internet SCSI (iSCSI), but 90% of customers connect to the SF Series via iSCSI. The SF Series was initially used by service providers and customers for private cloud implementations. However, with improvements in performance, the SF Series is now used to run multiple OLTP workloads, which require fast and consistent response times.
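The capacity arithmetic of a mixed-generation scale-out cluster is simple to sketch; the node counts below describe a hypothetical mix, not a NetApp reference configuration:

```python
# (node model) -> (count in cluster, raw TB per node); a hypothetical mix
# of SF-series generations in one cluster.
cluster = {"SF4805": (4, 4.8), "SF9605": (6, 9.6), "SF19210": (2, 19.2)}

def cluster_raw_tb(cluster: dict) -> float:
    """Total raw capacity of a mixed-generation scale-out cluster."""
    return sum(count * tb_per_node for count, tb_per_node in cluster.values())

print(round(cluster_raw_tb(cluster), 1))  # 115.2
```

Because capacity simply sums across heterogeneous nodes, older nodes can remain in service while newer, denser nodes are added, which is what makes the migration story straightforward.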

QoS features are extensive within the SF Series, enabling customers to guarantee performance-based SLAs. All storage features are included in the base price of the system, but support costs are per node. NetApp offers a software-defined storage (SDS) version of the SF Series, called "Element X," which is oriented to OEM customers.

Nimble Storage AF Series

On 17 April 2017, HPE acquired Nimble Storage and inherited its AF product line, which arrived late to the SSA market on 16 March 2016. The AF Series leverages the common architecture and Cache Accelerated Sequential Layout (CASL) file system of Nimble's hybrid arrays, a design that resonates well with users for its simplicity and competitive pricing. The product is available as a single array or as a scale-out/federated cluster, and provides block storage protocols, such as FC and iSCSI. The CASL file system allows the use of cost-effective, consumer-grade SSD technology and can scale from 5TB to 553TB in a single array and from 23TB to 2.2PB in a four-array cluster. This is supported by selectable deduplication and compression software that further enhances its competitive pricing and effective-capacity reach.

A hallmark of Nimble Storage products is InfoSight Predictive Analytics, a remote monitoring and support tool that provides proactive problem resolution. The monitoring and diagnostics are based on extensive telemetry data that is analyzed to resolve and anticipate upcoming issues, extending beyond storage to cover the hardware infrastructure and application layer. Customer-friendly business programs covering performance, maintenance extensions and future-proofing for next-generation technologies (including NVMe over PCIe) support high customer satisfaction levels.

Pure Storage M-Series

The Pure Storage M-Series of SSAs became available in 2016. It consists of the m20, m50 and m70 models, which scale from 5TB to 512TB. The products have a well-established and proven track record in terms of operational simplicity, support and reliability. Compression and deduplication are always on, and Pure Storage has upgraded its deduplication algorithms to improve effective storage capacity. A new Purity RUN feature became available in 2016; it offers a software container within the array that can run certified third-party software, such as Windows Storage Server or Catalogic.

Pure Storage has always provided a simple, all-inclusive storage software licensing model. Similarly, the Pure Storage "Evergreen" support and maintenance offerings provide trade-in credits and free controller upgrades every three years as part of the "Evergreen Gold Subscription" service. Because the reliability of SSAs is higher than that of hybrid arrays, and many customers plan to keep their storage arrays for more than five years, this support model ensures that customers can move to the latest controllers during the ownership period. Pure Storage provides significant customer investment protection with the M-Series arrays, because new controllers can be installed nondisruptively while the array is up and running. Telemetry concerning the health of the arrays is collected in real time by remote monitoring systems, and analytical processes determine whether specific fixes are required before problems occur. This fingerprinting of customer usage patterns enables Pure Storage to create, distribute and nondisruptively install customized preventive software fixes on its customers' arrays.

The M-Series already exploits NVMe to connect to the NV-RAM controller modules, but the SSDs are still connected via SAS. However, Pure Storage provides an "NVMe Ready Guarantee," because the array can support NVMe connections and drives, so NVMe SSDs can be used in the future. The M-Series supports only the FC and iSCSI block protocols and has low rackspace requirements. If file protocols are required, they must be implemented via Purity RUN software, or a separate Pure Storage FlashBlade must be purchased. Active-active stretch clustering for transparent business continuity across metropolitan regions, synchronous replication and QoS features became available in June 2017.

Pure Storage FlashBlade

The Pure Storage FlashBlade is one of the new entrants into this segment, having become generally available in January 2017. This is a 4U-high, scale-out array consisting of internal blades that contain all the storage controller processing and storage media. The blades, like the whole FlashBlade array, have been designed and engineered by Pure Storage, which provides the low-latency interblade connections and data communications. The array's internal architecture, which exploits blade parallelism, can provide more than 15 GB/sec of bandwidth in a dense, high-capacity storage footprint. Raw capacities for a single blade are 8.8TB and 52.78TB. A single 4U FlashBlade chassis is sold in a minimum configuration of seven blades, giving a minimum raw capacity of slightly more than 61TB. The maximum capacity of the chassis is 15 blades; therefore, with 52.78TB blades, the raw system capacity is just under 792TB. The array provides only file protocol access via Network File System (NFS) and object storage support.
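The minimum and maximum raw-capacity figures follow directly from the blade sizes; a quick sketch of the arithmetic, using the blade capacities quoted in the text:

```python
# Raw-capacity arithmetic for a FlashBlade chassis, using the blade
# sizes quoted in the text (8.8TB and 52.78TB raw per blade).

SMALL_BLADE_TB = 8.8
LARGE_BLADE_TB = 52.78
MIN_BLADES, MAX_BLADES = 7, 15   # minimum and maximum blades per 4U chassis

min_raw_tb = MIN_BLADES * SMALL_BLADE_TB
max_raw_tb = MAX_BLADES * LARGE_BLADE_TB

print(round(min_raw_tb, 1))   # 61.6  -> "slightly more than 61TB raw"
print(round(max_raw_tb, 1))   # 791.7 -> "just under 792TB"
```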

The FlashBlade is positioned for data analytics, technical computing, media and large object workloads. Pure Storage has opened up a new market with this product, and it uses erasure codes, rather than redundant array of independent disks (RAID), to improve resiliency, so that solid-state media failures can be repaired faster than with traditional RAID algorithms. Encryption is implemented using the field-programmable gate array (FPGA) in each blade, which provides always-on, data-at-rest encryption. Remote data erase is also provided by Pure Storage support. All storage software features are included in the base price, and investment protection upgrade schemes similar to the FlashArray//M-Series "Evergreen" program are provided for the FlashBlade. The array provides only compression; deduplication is not available, nor are any QoS features.

Tegile T-Series

The Tegile T-Series is a well-established SSA, with a proven track record of providing extensive features, ease of use and ownership, and reliability. The T-Series SSAs have been upgraded and improved since 2014, and today's models became available in December 2016. The models range from the T4500 series to the T10KHD-8-300, which scale from 6TB to 1.3PB. In addition, Tegile provides a cluster mode that enables eight arrays to be clustered together; in these configurations, scalability increases to a maximum of 10.4PB for all protocols, except Microsoft SMB3.

The arrays are physically and logically space-efficient, with both selectable deduplication and compression, which gives good effective storage capacities, and low physical rack space requirements: some models require only 2U of rackspace to provide 50TB of storage. Broad unified protocol support is provided, with both block and file protocols. The T-Series shows technical leadership, because it is NVMe-ready. The controllers use NVMe (v.1.2, PCIe Gen3), which supports NVMe SSDs, and T4000 arrays can be nondisruptively upgraded to NVMe. NVMe storage shelves are not yet available; when they become available, customers will be able to swap nondisruptively from SAS to NVMe via expansion upgrades. The SSDs within the expansion units do not have to be the same capacity, which enables flexibility and investment protection. Performance and QoS can be selected and managed at the cache, port, volume and LUN level across block and file protocols.

Compression and deduplication are both in-line and individually selectable at the volume level across file and block protocols. The Tegile arrays offer all-inclusive storage licensing, fully featured storage services, instrumentation and scale-out clustering, which provides higher flexibility than most other SSAs. Cloud-based analytics is used to provide problem prediction and trending within the IntelliCare support program. Guaranteed upgrades are provided by the Lifetime Storage program, and effective capacity guarantees are part of the Flash 5 commitment program. Data is encrypted on the SSDs and managed by internal key management. Synchronous replication is not yet supported.

Tintri T5000 Series

Tintri has a proven track record as a simple-to-use SSA for connection to servers that use hypervisors. The T5000 series consists of the T5040, T5060 and T5080, which scale from 6TB to 92TB raw capacity and offer always-on compression and deduplication, as well as asynchronous and synchronous replication. The capacities of the individual arrays are relatively small, but Tintri includes Tintri Global Center, which enables storage administrators to administer as many as 64 arrays from one GUI. Due to Tintri's deep integration with the hypervisors, many of the value propositions and features promised in SDS, server-based products have been available in the T5000 arrays for years. This array is a good example of an SDS array, because an administrator can do all storage provisioning and orchestration from the hypervisor administration tools without having to log onto the physical arrays.

Storage processing, features, monitoring and recovery are disaggregated from the server and transparently offloaded to the array. For customers that require agility and want to move applications between servers without the time-consuming and resource-intensive process of moving the data as well, Tintri provides one of the simplest ways to administer and manage storage within virtualized server environments. Tintri supports only NFS and SMB3 as connection protocols to the server. The software features can be purchased together as the Tintri Software Suite or separately as individual items.

SSD encryption is provided by self-encrypting drives (SEDs), and external key management can be used for encryption key management. Later in 2017, Tintri plans to make available new arrays with more raw capacity.

X-IO ISE 800 Series

X-IO Technologies launched the ISE 800 in March 2015 and has made relatively few updates since its debut, because the product will be superseded by a future fourth-generation product. The array is oriented toward low-latency, high-performance use cases and workloads, and it is backed by a "fast when full" guarantee. The raw capacity of the array is relatively small compared with competitors in this marketplace; it scales from 7.8TB to 64TB of raw storage per 3U node. Because it lacks data reduction features, the effective capacity is relatively low, and it does not have the latest, lower-latency 16 Gbps FC connections. The array provides all standard software features, plus QoS administration, along with enhanced data governance features via its vCenter plug-in support.

Although the device administration GUI is good enough for storage administrators, it is not as intuitive as that of other products, due to a relatively high reliance on text and numeric presentation, rather than graphical icons. This makes the array difficult to manage for nonstorage administrators. Rack density, power usage and heat dissipation are average for its class.

Context

For the first time in decades, SSAs have made storage managers popular with the organization and the IT department. This is because SSAs have addressed traditional, general-purpose array HDD performance constraints by improving storage IOPS and latency performance by one or two orders of magnitude. As the capacity of HDDs and SSDs increases, SSAs offer lower risk, because rebuild times for SSDs take minutes to hours, whereas HDD rebuild times can take days. SSAs have been specifically designed or marketed to exploit the reduced cost and improved performance of NAND, flash-based, solid-state storage. However, although performance is the trigger point for SSA attention, it is no longer the major justification for SSAs. Simpler IT environments, feature licensing and purchasing methods make the overall value proposition more than speeds and feeds.

Customers are initially concerned with latency or response times; however, capacity utilization, media rebuild times, and bandwidth or throughput are also improved by SSAs. The reduced latency has enabled new technologies, such as in-line primary data reduction, deduplication, compression or both. These features were restricted by the mechanical constraints of HDDs. The reduced environmental requirements of SSAs, such as power and cooling, also have incidental and important advantages over general-purpose arrays and other HDD-based storage systems. Due to these benefits, as well as administration GUIs, which have been designed with ease of use in mind, storage administration overhead has been reduced and storage provisioning tasks can now be performed by nonstorage specialists. As a result, less time needs to be spent performing detailed configuration, performance tuning and problem determination tasks.

Product/Service Class Definition

The following descriptions and criteria classify SSA architectures by their externally visible characteristics, rather than vendor claims or other nonproduct criteria that may be influenced by fads in the SSA storage market.

SSA

The SSA category is a subcategory of the broader external controller-based (ECB) storage market. SSAs are scalable, dedicated solutions based solely on solid-state semiconductor technology for data storage that can never be configured with HDD technology. The SSA category is distinct from SSD-only racks in ECB storage arrays. An SSA must be a stand-alone product denoted with a specific name and model number, which typically (but not always) includes an OS and data management software optimized for solid-state technology.

To be considered an SSA, the storage software management layer should enable most, if not all, of the following benefits:

  • High availability

  • Enhanced-capacity efficiency — perhaps through thin provisioning, compression or data deduplication

  • Data management

  • Automated tiering within SSD technologies

  • Perhaps, other advanced software capabilities — such as application-specific and OS-specific acceleration, based on the unique workload requirements of the data type being processed

Scale-Up Architectures
  • Front-end connectivity, internal, and back-end bandwidth are fixed or scale to packaging constraints independent of capacity.

  • Logical volumes, files or objects are fragmented and spread across user-defined collections, such as solid-state pools, groups or RAID sets.

  • Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnected constraints.

Scale-Out Architectures
  • Capacity, performance, throughput and connectivity scale with the number of nodes in the system.

  • Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.

  • Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.

Unified Architectures
  • These can simultaneously support one or more block, file and/or object protocols, such as FC, iSCSI, NFS, SMB (aka CIFS), Fibre Channel over Ethernet (FCoE) and InfiniBand.

  • Gateway and integrated data flow implementations are included.

  • These architectures can be implemented as scale-up or scale-out arrays.

Gateway implementations provision block storage to gateways implementing network-attached storage (NAS) and object storage protocols. Gateway-style implementations run separate NAS and SAN microcode loads on virtualized or physical servers. As a result, they have different thin-provisioning, autotiering, snapshot and remote copy features, which are not interoperable. By contrast, integrated or unified storage implementations use the same primitives independent of the protocol, which enables them to create snapshots that span SAN and NAS storage, and dynamically allocate server cycles, bandwidth and cache — based on QoS algorithms and/or policies.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture's strengths and weaknesses and an understanding of workload requirements (see Table 1).

Table 1.   SSA Architectures

Scale-Up

Strengths:

  • Mature architectures: reliable and cost-competitive

  • Large ecosystems

  • Independently upgradable host connections and back-end capacity

  • May offer shorter recovery point objectives (RPOs) over asynchronous distances

Weaknesses:

  • Performance and bandwidth do not scale with capacity.

  • Limited compute power can make a high impact.

  • Electronics failures and microcode updates may be high-impact events.

Scale-Out

Strengths:

  • IOPS and GB/sec scale with capacity

  • Nondisruptive load balancing

  • Greater fault tolerance than scale-up architectures

  • Use of commodity components

Weaknesses:

  • There are high electronics costs relative to back-end storage costs.

Unified

Strengths:

  • Maximal deployment flexibility

  • Comprehensive storage efficiency features

Weaknesses:

  • Performance may vary by protocol (block versus file).

Source: Gartner (July 2017)

Critical Capabilities Definition

Ecosystem

Ecosystem refers to the ability of the platform to support multiple protocols, OSs, third-party independent software vendor (ISV) applications, APIs and multivendor hypervisors.

Manageability

Manageability refers to the automation, management, monitoring, and reporting related to tools and programs supported by the platform.

The tools and programs can include single-pane management consoles, as well as monitoring and reporting tools designed to assist support personnel in seamlessly managing systems and monitoring system use and efficiency. They can also be used to anticipate and correct system alarms and fault conditions before or soon after they occur.

Multitenancy and Security

This refers to the ability of a storage system to support diverse workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.

Performance

This collective term is often used to describe IOPS, bandwidth (MB/second) and response times (milliseconds per I/O) that are visible to attached servers.

RAS

Reliability, availability and serviceability (RAS) refers to a design philosophy that consistently delivers high availability by building systems with reliable components and "derating" components to increase their mean time between failures (MTBF).

Systems are designed to tolerate marginal components, hardware and microcode designs that minimize the number of critical failure modes in the system, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human errors when troubleshooting the system, and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault-isolation techniques, built-in protection against data corruption, and other techniques (such as snapshots and replication; see Note 2) to meet customers' RPOs and recovery time objectives (RTOs).

Scalability

This refers to the storage system's ability to grow capacity, as well as performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs. (Capacities are total raw capacity and are not usable unless otherwise stated.)

Storage Efficiency

This refers to the ability of the platform to support storage efficiency technologies, such as compression, deduplication and thin provisioning, as well as improve utilization rates, while reducing storage acquisition and ownership costs.

Use Cases

Online Transaction Processing

OLTP is closely affiliated with business-critical applications, such as database management systems (DBMSs).

DBMSs require 24/7 availability and subsecond transaction response times; hence, the greatest emphasis is on performance and RAS features. Manageability and storage efficiency are important, because they enable the storage system to scale with data growth, while staying within budget constraints.

Server Virtualization

This use case encompasses business-critical applications, back-office and batch workloads, and development.

The need to deliver low I/O response times to large numbers of virtual machines (VMs) or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, causes performance and storage efficiency to be heavily weighted, followed closely by RAS.

High-Performance Computing

HPC clusters comprise large numbers of servers and storage arrays, which combine to deliver high computing densities and aggregated throughput.

Commercial HPC environments are characterized by the need for high throughput and parallel read-and-write access to large volumes of data. Performance, scalability and RAS are important considerations for this use case.

Analytics

This use case applies to all analytic applications that are packaged or provide business intelligence (BI) capabilities for a specific domain or business problem.

Analytics does not apply only to storage consumed by big data applications using map/reduce technologies (see definition in "Hype Cycle for Data Science, 2016").

Virtual Desktop Infrastructure

VDI is the practice of hosting a desktop OS within a VM running on a centralized server.

VDI is a variation on the client/server computing model, which is sometimes referred to as server-based computing (SBC). Performance and storage efficiency (in-line data reduction) features are heavily weighted for this use case, for which SSAs are emerging as popular alternatives. The performance weighting was reduced by 5%, and manageability was increased by 5%, because manageability has become a relatively greater concern and priority in this use case than performance.

Vendors Added and Dropped

Added

None

Dropped

None

Inclusion Criteria

  • The vendor's SSA product must be a self-contained, solid-state-only system that has a dedicated model name and model number.

  • The solid-state-only system must be sold initially with 100% solid-state technology, and cannot be reconfigured, expanded or upgraded at any future point in time with any form of HDDs within expansion trays (whether via a vendor special upgrade, specific customer customization or vendor product exclusion process) into a hybrid or general-purpose SSD and HDD storage array.

  • The vendor's SSA product must have been in general availability by 31 March 2016, and the specific SSA product model must have more than $10 million in SSA product revenue during the past 12 months.

  • The vendor's SSA product with the highest revenue will be included in the Critical Capabilities research. When a vendor has more than one SSA with equal revenue, Gartner reserves the right to choose the product to include, based on Gartner client interest.

  • Just a bunch of SSDs (JBOS) or just a bunch of flash (JBOF) will not be included in the Critical Capabilities research, which covers arrays. Therefore, only SSAs with high-level data services, such as thin provisioning, data reduction features, replication and snapshots, will be included.

  • The vendor must sell its product as a stand-alone, without the requirement to bundle it with other vendors' storage products to enable that product to be implemented in production.

  • Vendors must be able to provide Gartner with at least five references that we can successfully interview. At least one reference must be provided from each geographic market — the Asia/Pacific (APAC) region, EMEA and North America — or from the two regions in which the vendor has a presence.

  • The vendor must provide enterprise-class support and maintenance services, and offer 24/7 customer support (including phone support). This can be provided via other service organizations or channel partners.

  • The company must have established a notable market presence, as demonstrated by the terabytes sold, the number of clients or significant revenue.

  • The product and a service capability must be available in at least two of the following three markets — the APAC region, EMEA and North America — by either direct or channel sales.

The SSAs evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, forecast growth rates and asset management strategies.

Although this SSA Critical Capabilities research represents vendors whose dedicated systems meet our inclusion criteria, ultimately, it is the application workload that governs which solutions should be considered, regardless of the criteria involved. Some vendors may still warrant investigation based on application workload needs for their SSD-only offerings. The following providers and products were considered for this research, but did not meet the inclusion criteria, despite offering SSD-only configuration options to existing products:

  • American Megatrends

  • DDN Storage

  • Dell (Compellent Technologies)

  • Huawei

  • Pivot3

  • Nexsan

  • Nimbus Data

  • Oracle FS1

Table 2.   Weighting for Critical Capabilities in Use Cases

Critical Capabilities          OLTP    Server Virt.    HPC    Analytics    VDI

Performance                     30%        20%         42%       25%       25%
Storage Efficiency              15%        20%          5%       15%       30%
RAS                             20%        15%         15%       20%       15%
Scalability                      8%        10%         15%       18%        4%
Ecosystem                        7%        10%          3%        5%        8%
Multitenancy and Security        5%         5%         10%        6%        5%
Manageability                   15%        20%         10%       11%       13%

Total                          100%       100%        100%      100%      100%

OLTP = Online Transaction Processing; Server Virt. = Server Virtualization; HPC = High-Performance Computing; VDI = Virtual Desktop Infrastructure

As of July 2017

Source: Gartner (July 2017)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighed in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

Each product or service that meets our inclusion criteria has been evaluated on several critical capabilities on a scale from 1.0 (lowest ranking) to 5.0 (highest ranking). The ratings are listed in Table 3.

Table 3.   Product/Service Rating on Critical Capabilities

Product                                Perf.   Stor. Eff.   RAS   Scal.   Eco.   MT&S   Mgmt.

Dell EMC Unity All Flash                3.6       2.0       3.4    3.8    3.7    3.4     3.7
Dell EMC VMAX All Flash                 3.7       2.0       3.8    3.6    4.0    3.7     3.6
Dell EMC XtremIO                        3.6       4.0       3.5    3.4    3.7    3.2     3.6
Fujitsu Storage Eternus AF Series       3.7       4.5       3.6    3.4    3.6    3.3     3.7
Hitachi Data Systems VSP F Series       3.9       3.0       3.9    4.0    4.0    3.6     3.4
HPE 3PAR StoreServ All-Flash Arrays     3.7       4.0       3.7    4.1    3.8    3.6     3.7
IBM DS8880F Data Systems                3.7       1.5       3.6    3.4    3.7    3.7     3.4
IBM FlashSystem A9000                   3.8       4.0       3.6    3.6    3.6    3.7     3.8
IBM Storwize All-Flash Series           3.7       3.0       3.6    4.0    3.8    4.0     3.8
Kaminario K2                            4.0       4.2       3.7    3.9    3.6    3.2     3.8
NetApp AFF Series                       3.7       4.0       3.6    4.0    4.1    3.4     3.7
NetApp EF Series                        3.8       1.5       3.4    2.0    2.9    2.8     3.2
NetApp SF Series                        3.6       4.0       4.0    4.0    3.0    3.6     3.8
Nimble Storage AF Series                3.6       4.5       3.6    3.6    3.6    3.5     4.1
Pure Storage FlashBlade                 3.8       2.0       3.4    3.7    3.0    3.0     3.8
Pure Storage M-Series                   3.6       4.5       3.6    3.3    3.6    3.1     4.0
Tegile T-Series                         3.6       4.5       3.6    3.6    4.0    3.5     3.7
Tintri T5000 Series                     3.5       4.5       3.5    3.2    3.3    3.3     4.0
X-IO ISE 800 Series                     3.6       2.1       3.3    3.0    3.3    2.7     3.0

Perf. = Performance; Stor. Eff. = Storage Efficiency; RAS = Reliability, Availability and Serviceability; Scal. = Scalability; Eco. = Ecosystem; MT&S = Multitenancy and Security; Mgmt. = Manageability

As of July 2017

Source: Gartner (July 2017)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.

Table 4.   Product Score in Use Cases

Product                                OLTP   Server Virt.   HPC    Analytics   VDI

Dell EMC Unity All Flash               3.35      3.29       3.51      3.36     3.11
Dell EMC VMAX All Flash                3.46      3.38       3.61      3.45     3.21
Dell EMC XtremIO                       3.61      3.64       3.54      3.59     3.69
Fujitsu Storage Eternus AF Series      3.75      3.79       3.64      3.72     3.89
Hitachi Data Systems VSP F Series      3.69      3.63       3.79      3.72     3.56
HPE 3PAR StoreServ All-Flash Arrays    3.78      3.81       3.77      3.82     3.81
IBM DS8880F Data Systems               3.28      3.16       3.50      3.26     2.97
IBM FlashSystem A9000                  3.76      3.77       3.73      3.74     3.80
IBM Storwize All-Flash Series          3.64      3.62       3.74      3.66     3.52
Kaminario K2                           3.86      3.87       3.84      3.86     3.91
NetApp AFF Series                      3.76      3.80       3.73      3.78     3.80
NetApp EF Series                       3.03      2.84       3.17      2.88     2.78
NetApp SF Series                       3.76      3.76       3.74      3.80     3.77
Nimble Storage AF Series               3.81      3.88       3.69      3.78     3.93
Pure Storage FlashBlade                3.35      3.25       3.53      3.34     3.09
Pure Storage M-Series                  3.75      3.81       3.59      3.70     3.89
Tegile T-Series                        3.77      3.84       3.66      3.76     3.91
Tintri T5000 Series                    3.68      3.74       3.53      3.63     3.83
X-IO ISE 800 Series                    3.11      3.00       3.23      3.07     2.93

OLTP = Online Transaction Processing; Server Virt. = Server Virtualization; HPC = High-Performance Computing; VDI = Virtual Desktop Infrastructure

As of July 2017

Source: Gartner (July 2017)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3 by the weightings shown in Table 2.
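As a worked example of this calculation, the score for one product and use case can be reproduced as a weighted sum. All weights and ratings below are taken from Tables 2 and 3 (Kaminario K2, Online Transaction Processing use case):

```python
# Reproduce a use-case score: multiply each capability rating by its
# use-case weighting and sum. Values are from Tables 2 and 3.

oltp_weights = {
    "Performance": 0.30, "Storage Efficiency": 0.15, "RAS": 0.20,
    "Scalability": 0.08, "Ecosystem": 0.07,
    "Multitenancy and Security": 0.05, "Manageability": 0.15,
}

k2_ratings = {
    "Performance": 4.0, "Storage Efficiency": 4.2, "RAS": 3.7,
    "Scalability": 3.9, "Ecosystem": 3.6,
    "Multitenancy and Security": 3.2, "Manageability": 3.8,
}

score = sum(oltp_weights[c] * k2_ratings[c] for c in oltp_weights)
print(round(score, 2))  # 3.86, matching the OLTP score in Table 4
```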

Evidence

Data has been gathered from client interactions during the past year, vendor briefings and vendor references; detailed questionnaire responses and review calls with all profiled vendors; and detailed reference checks with more than 50 customers.

Note 1
Solid-State Usage in SSAs

Gartner uses the commercial enterprise term "solid-state array" to differentiate SSAs from electromechanical disk drives and to avoid dependence on a particular memory technology. The term "flash" is a consumer market term, and the products analyzed are not targeted at the consumer market. In the future, current NAND circuits may not be used; another type or derivative of semiconductor memory technology — such as 3D NAND, memristors, phase change memory or any other solid-state technology — could be used in SSAs. This makes the term "SSA" more inclusive and gives it longevity. SSA is also more flexible, because it is not tied to a specific solid-state storage media or format.

Note 2
Replication Explanation

Replication distance is a function of two variables:

  • Storage array latency

  • Network latency

SSAs reduce the storage array latency component, so the overall distance can be increased. However, synchronous replication adds milliseconds of delay to the microsecond response time of SSAs, because of the network latency induced by the requirement to acknowledge successful data replication at the remote destination. At a distance of 100 km, this reduces SSD performance by an order of magnitude (from 100 μs to 2.1 ms); therefore, synchronous replication negates the performance improvement and is not an important factor during SSA acquisition.

High-availability applications requiring high-performance and synchronous replication are limited, and the choice is mutually exclusive. Therefore, if performance is more important than replication, asynchronous replication with consistency groups is the recommended technique. From a positive perspective, synchronous replication distance limitations for HDD-based storage arrays can be relaxed.

When an HDD-based array is replaced with an SSA, the SSA's submillisecond latency reduces the storage component of the total delay, allowing an increase in network latency and, therefore, a longer replication distance. For example, if array latency improves by 4 ms (from a good HDD latency of 5 ms to 1 ms or less), a customer can increase synchronous replication distances by 200 km, which adds back approximately 4 ms of network latency, while maintaining the same application performance as the original HDD storage array.
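The arithmetic in this note can be sketched as follows. The propagation delay (roughly 5 μs per km one way in fiber) and the assumption of two network round trips per synchronously replicated write are illustrative values, not part of Gartner's methodology; they reproduce the 100 km example above:

```python
# Sketch of the synchronous-replication latency arithmetic in Note 2.
# Assumptions (illustrative, not from the report): light in fiber
# propagates at about 5 microseconds per km one way, and each
# synchronous write needs two network round trips before completion.

US_PER_KM_ONE_WAY = 5          # fiber propagation delay, microseconds/km
ROUND_TRIPS_PER_WRITE = 2      # assumed protocol round trips per sync write

def sync_write_latency_us(array_latency_us: float, distance_km: float) -> float:
    """Total latency of one synchronously replicated write, in microseconds."""
    network_us = distance_km * US_PER_KM_ONE_WAY * 2 * ROUND_TRIPS_PER_WRITE
    return array_latency_us + network_us

# SSA with 100 us local latency, replicating over 100 km:
print(sync_write_latency_us(100.0, 100.0))   # 2100.0 us, i.e., 2.1 ms

# Replacing a 5 ms HDD array with a 1 ms SSA leaves 4 ms of headroom,
# which buys roughly 200 km of synchronous replication distance:
print(sync_write_latency_us(1000.0, 200.0))  # 5000.0 us, the same 5 ms the HDD array delivered locally
```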

Critical Capabilities Methodology

This methodology requires analysts to identify the critical capabilities for a class of products or services. Each capability is then weighted in terms of its relative importance for specific product or service use cases. Next, products/services are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities for each use case is then calculated for each product/service.

"Critical capabilities" are attributes that differentiate products/services in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.

In defining the product/service category for evaluation, the analyst first identifies the leading uses for the products/services in this market. What needs are end users looking to fulfill when considering products/services in this market? Use cases should match common client deployment scenarios. These distinct client scenarios define the Use Cases.

The analyst then identifies the critical capabilities. These capabilities are generalized groups of features commonly required by this class of products/services. Each capability is assigned a level of importance in fulfilling that particular need; some sets of features are more important than others, depending on the use case being evaluated.

Each vendor’s product or service is evaluated in terms of how well it delivers each capability, on a five-point scale. These ratings are displayed side-by-side for all vendors, allowing easy comparisons between the different sets of features.

Ratings and summary scores range from 1.0 to 5.0:

  • 1 = Poor: most or all defined requirements not achieved

  • 2 = Fair: some requirements not achieved

  • 3 = Good: meets requirements

  • 4 = Excellent: meets or exceeds some requirements

  • 5 = Outstanding: significantly exceeds requirements

To determine an overall score for each product in a given use case, each capability rating is multiplied by that use case's weighting for the capability, and the weighted ratings are summed to produce the product's use-case score.
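The weighted-score calculation can be illustrated with a small sketch. The capability names, ratings and weights below are hypothetical examples, not figures from this research.

```python
# Hypothetical illustration of the Critical Capabilities scoring method:
# per-capability ratings (1.0-5.0) multiplied by the use case's weightings,
# then summed. Capability names and values are invented for illustration.
ratings = {"performance": 4.2, "reliability": 3.8, "scalability": 3.5}
weights = {"performance": 0.5, "reliability": 0.3, "scalability": 0.2}  # sum to 1.0

score = sum(ratings[cap] * weights[cap] for cap in ratings)
print(round(score, 2))  # -> 3.94
```

Because the weights sum to 1.0, the resulting score stays on the same 1.0-to-5.0 scale as the individual capability ratings.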

The critical capabilities Gartner has selected do not represent all capabilities for any product; therefore, they may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making a product/service decision.