Critical Capabilities for General-Purpose, Midrange Storage Arrays

7 March 2014 ID:G00248904
Analyst(s): Stanley Zaffos, Arun Chandrasekaran, Valdis Filks

VIEW SUMMARY

IT leaders can improve agility and SLAs while lowering infrastructure costs by optimally mapping application needs to storage array capabilities. This report quantifies eight critical measures of product attractiveness across six high-impact use cases for 14 midrange storage arrays.

Overview

Key Findings

  • Traditional, dual-controller architectures sold by established vendors will continue to dominate the midrange storage market, even as new scale-up, scale-out, flash and hybrid storage arrays compete for market share.
  • Server virtualization, desktop virtualization, big data analytics and cloud storage are reprioritizing the traditional metrics of product attractiveness.
  • The compression of product differentiation between various vendor offerings and the availability of easy-to-use migration tools are diminishing the strength of vendor lock-ins. Concerns about security and about the migration and conversion costs of moving between competing storage vendors' arrays are declining in importance relative to vendor reputation, performance, reliability and scalability.

Recommendations

  • Take a top-down approach to infrastructure design that begins with identifying and profiling high-impact workloads, setting service-level objectives, quantifying future growth rates, and examining the impact of infrastructure refreshes on existing contracts with service providers.
  • Focus on externally visible measures of product attractiveness and nonproduct considerations, such as IOPS, throughput and response times, scalability, vendor support capabilities, and acquisition and ownership costs, rather than configuration and architectural differences when choosing an optimal storage solution.
  • Build a cross-functional team that includes users, developers, operations, finance, legal and senior management to unmask hidden agendas, and to provide greater foresight into planned, new application deployments and changes in business needs.
  • Conduct a what-if analysis to determine the impacts of changes in organizational data growth rates and changes in planned service lives on the attractiveness of various shortlisted solutions.

What You Need to Know

The tension that has always existed between insatiable storage growth and limited resources has made overdelivering against application needs with respect to availability, performance and data protection a luxury that most IT organizations can no longer afford. The plethora of storage vendors, architectures and product lines confronting users requires them to create a methodology that stack-ranks vendors, storage arrays and bids in their environment if they are to have any hope of building an agile, manageable and cost-effective storage infrastructure.

There are few "bad" storage arrays currently being sold, and none among the 14 arrays we have selected for inclusion in this research. The differences between arrays ranked at the top of the use-case charts and those at the bottom are small and, to a significant extent, reflect differences in design points and ecosystem support. Hence, array differentiation is minimal, and the real challenge of a successful storage infrastructure upgrade is not designing an upgrade that works, but designing one that optimizes agility and minimizes ownership costs. Users who do not need scalability beyond 200TB, or who will not miss specific ecosystem support, are strongly encouraged to consider the arrays ranked lower in the figures shown below, since these may offer benefits in their environments and may be priced more competitively. While optimization does add a layer of complexity to the design of the storage infrastructure upgrade, users should take comfort in the knowledge that choosing a suboptimal solution is likely to have only moderate impacts on deployment and ownership costs, for the following reasons:

  • Product advantages are usually temporary. Gartner refers to this phenomenon as the "compression of product differentiation."
  • Most clients report that differences in management and monitoring tools, as well as ecosystem support between various vendors' offerings, are not enough to change staffing requirements.
  • Storage ownership costs, while growing as a percentage of the total IT spend, still account for less than 10% (7.1% in 2012) of most IT budgets.

Users are also reminded that nonproduct considerations that are not strictly critical capabilities (such as vendor relationships; presales and postsales support capabilities, including training; past experience; and pricing) should weigh significantly in choosing solutions for the high-impact use cases explored in this research: consolidation, online transaction processing (OLTP), server virtualization and virtual desktop infrastructure (VDI), business analytics, and cloud. For more information about vendors whose products are included in this research, see "Magic Quadrant for General-Purpose Disk Arrays."

Analysis

This document was revised on 11 March 2014. The document you are viewing is the corrected version. For more information, see the Corrections page on gartner.com.

Introduction

Even as much of the storage array market is consolidating into one general-purpose market, Gartner appreciates the entrenched usage and appeal of simple labels. Therefore, even though the terms "midrange" and "high end" are no longer always accurate descriptions of array capabilities, user buying behaviors or future market directions, Gartner has chosen to publish separate midrange and high-end Critical Capabilities research documents. Publishing two documents also enabled us to provide analyses of more arrays in a potentially more client-friendly format.

The arrays evaluated in this research include scale-up, scale-out, hybrid and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, forecast growth rates and asset management strategies. Many readers of this research will recognize that midrange arrays exhibiting scale-out characteristics can satisfy the high-end criteria when configured with four or more controllers and multiple disk shelves. Whether the resulting differences in availability are enough to affect infrastructure design and operational procedures will vary by user environment, and will also be influenced by other considerations such as downtime costs, lost opportunity costs and the maturity of the end-user change control procedures (hardware, software, procedures and scripting, for example) that directly affect availability.

Product Class Definition

Architectural Definitions

The following criteria classify storage array architectures by their externally visible characteristics, rather than by vendor claims or other nonproduct criteria.

Scale-Up Architectures
  • Front-end connectivity, internal bandwidth and back-end capacity scale independently of each other.
  • Logical volumes, files or objects are fragmented and spread across user-defined collections of disks, such as disk pools, disk groups or redundant array of independent disks (RAID) sets.
  • Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnect constraints.
Scale-Out Architectures
  • Capacity, performance, throughput and connectivity scale with the number of nodes in the system.
  • Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.
  • Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.
Hybrid Architectures
  • Incorporate solid-state drives (SSDs), flash, hard-disk drives (HDDs), compression and/or deduplication into their basic design.
  • Can be implemented as scale-up or scale-out arrays.
  • Can support one or more block, file and/or object protocols, including Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Network File System (NFS), Server Message Block (SMB, aka Common Internet File System [CIFS]), Fibre Channel over Ethernet (FCoE), InfiniBand and others.

Including compression and deduplication in the initial system design often results in both having a positive impact on system performance and throughput, and in simplified management, attributable at least in part to better instrumentation and more intelligent cache management algorithms that are compression- and deduplication-aware.

Unified Architectures
  • Can simultaneously support multiple block, file, and/or object protocols, including FC, iSCSI, NFS, SMB (aka CIFS), FCoE, InfiniBand and so on.
  • Include gateway and integrated data flow implementations.
  • Can be implemented as scale-up or scale-out arrays.

Gateway-style implementations provision network-attached storage (NAS) and object storage protocols with storage area network (SAN)-attached block storage. Gateway implementations run separate NAS, object and SAN microcode loads on either virtualized or physical servers, and consequently have different thin-provisioning, autotiering, snapshot and remote-copy features that are not interoperable among different protocols. By contrast, integrated implementations use the same thin-provisioning, autotiering, snapshot and remote-copy primitives independent of protocol, and can dynamically allocate controller cycles to protocols on an as-needed or prioritized basis.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture's strengths and weaknesses, as well as an understanding of workload requirements (see Table 1).

Table 1. Strengths and Weaknesses of the Storage Architectures

Scale-up

  Strengths:
  • Mature architectures: reliable, cost-competitive
  • Large ecosystems
  • Host connections and back-end capacity can be upgraded independently
  • May offer shorter recovery point objectives (RPOs) over asynchronous distances

  Weaknesses:
  • Performance and internal bandwidth are fixed, and do not scale with capacity
  • Limited compute power may result in the use of efficiency and data protection features negatively impacting performance
  • Electronics failures and microcode updates may be high-impact events
  • Forklift upgrades

Scale-out

  Strengths:
  • Input/output operations per second (IOPS) and throughput (Gbps) scale with capacity
  • Greater fault tolerance than scale-up architectures
  • Nondisruptive load balancing

  Weaknesses:
  • High electronics costs relative to back-end storage costs

Hybrid

  Strengths:
  • Efficient use of flash
  • Compression and deduplication are performance-neutral to positive
  • Consistent performance experience with minimal tuning
  • Excellent price/performance
  • Low environmental footprint

  Weaknesses:
  • Relatively immature technology
  • Limited ecosystem and protocol support

Unified

  Strengths:
  • Maximal deployment flexibility
  • Comprehensive storage-efficiency features

  Weaknesses:
  • Performance may vary by protocol (block versus NAS)

Source: Gartner (March 2014)

Critical Capabilities Definition

Manageability

This refers to the automation, management, monitoring and reporting tools and programs supported by the platform. These can include single-pane management consoles, and monitoring and reporting tools designed to help support personnel seamlessly manage systems and monitor system usage and efficiency. They can also be used to anticipate and correct system alarms and fault conditions before or soon after they occur.

RAS

Reliability, availability and serviceability (RAS) refers to a design philosophy that consistently delivers high availability by building systems with reliable components, "derating" components to increase their mean time between failures (MTBFs), and designing systems and clocking to tolerate marginal components. RAS also encompasses hardware and microcode designs that minimize the number of critical failure modes in the system, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human errors when troubleshooting the system, and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault isolation techniques, built-in protection against data corruption, and other techniques (such as snapshots and replication) that meet customer RPOs and recovery time objectives (RTOs).

Performance

This is the collective term often used to describe the IOPS, bandwidth (MB/sec) and response times (milliseconds per input/output [I/O]) that are visible to attached servers. In a well-designed system, the many potential performance bottlenecks are encountered at roughly the same time across a variety of common workload profiles, so that no single component limits the system. When comparing systems, users are reminded that performance is more of a scalability enabler than a differentiator in its own right.
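To make the relationship among the three metrics concrete, here is a back-of-the-envelope sketch; the I/O size, IOPS and response-time figures are illustrative assumptions, not measurements of any array in this research:

```python
# Back-of-the-envelope relationships among IOPS, bandwidth and response time.
# All workload numbers are illustrative assumptions.

io_size_kb = 8        # assumed small-block (e.g., OLTP-like) transfer size
iops = 50_000         # assumed sustained I/Os per second

# Bandwidth is IOPS multiplied by the transfer size.
bandwidth_mb_s = iops * io_size_kb / 1024
print(f"bandwidth: {bandwidth_mb_s:.0f} MB/sec")   # ~391 MB/sec

# Little's law: outstanding I/Os = arrival rate x response time.
response_time_s = 0.002   # assumed 2 ms per I/O
outstanding_ios = iops * response_time_s
print(f"concurrency: {outstanding_ios:.0f} outstanding I/Os")  # 100
```

The same IOPS figure implies very different bandwidth depending on transfer size, which is why the analytics use case description below calls out bandwidth specifically.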

Snapshot and Replication

These are data protection features that protect against data corruption problems caused by human and software errors, and technology and site failures, respectively. Snapshots can also address backup window issues and minimize the impact of backups on production workloads.

Scalability

This refers to the ability of the storage system to grow not just capacity, but also performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs.

Ecosystem

This refers to the ability of the platform to integrate with and support third-party independent software vendor (ISV) applications, such as databases, backup/archiving products and management tools, and various hypervisor and desktop virtualization offerings.

Multitenancy and Security

This refers to the ability of a storage system to support a diverse variety of workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.

Storage Efficiency

This refers to the ratio of raw to usable capacity, the efficiency of data protection algorithms, and the ability of the platform to support storage efficiency technologies, such as compression, deduplication, thin provisioning and autotiering, that improve utilization rates while reducing storage acquisition and ownership costs.
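As a minimal sketch of the raw-versus-usable dimension, the following computes usable and effective capacity for a hypothetical configuration; the RAID layout, spare fraction and data reduction ratio are assumptions for illustration only:

```python
# Illustrative raw -> usable -> effective capacity calculation.
# All parameters are hypothetical, not measurements of any evaluated array.

def usable_tb(raw_tb, data_disks, parity_disks, spare_fraction=0.05):
    """Usable capacity after reserving spares and RAID parity.

    A RAID 6 (8+2) layout keeps 8/10 of the non-spare raw capacity.
    """
    raid_efficiency = data_disks / (data_disks + parity_disks)
    return raw_tb * (1 - spare_fraction) * raid_efficiency

def effective_tb(usable, reduction_ratio=1.0):
    """Logical capacity after compression/deduplication (e.g., 1.5:1)."""
    return usable * reduction_ratio

raw = 100.0                                        # TB of raw disk
u = usable_tb(raw, data_disks=8, parity_disks=2)   # 76.0 TB usable
e = effective_tb(u, reduction_ratio=1.5)           # 114.0 TB effective
print(f"usable: {u:.1f} TB, effective: {e:.1f} TB")
```

Thin provisioning adds a further multiplier on the logical capacity that can be provisioned against this effective capacity, at the cost of having to monitor for overcommitment.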

Use Cases

This research evaluates the midrange, general-purpose storage systems supporting the use cases enumerated in Table 2:

Overall

The Overall use case is a generalized usage scenario and is not representative of how specific users will use or deploy a technology or service within their enterprises.

Consolidation

This use case simplifies storage management and disaster recovery solutions, and improves economies of scale by consolidating multiple, potentially dissimilar storage systems into fewer larger storage systems. RAS, performance, scalability, and multitenancy and security are heavily weighted selection criteria because the system becomes a shared resource, which magnifies the effects of outages and performance bottlenecks.

OLTP

This use case is closely affiliated with business-critical applications, such as database management systems (DBMSs), that require 24/7 availability and subsecond transaction response times. Hence, the greatest emphasis is on RAS and performance features, followed by snapshots and replication, which enable rapid recovery from data corruption problems and technology or site failures. Manageability, scalability and storage efficiency are important because they enable the storage system to scale with data growth, while staying within budget constraints.

Server Virtualization and VDI

This use case encompasses business-critical applications, back-office and batch workloads, and development. The need to deliver I/O response times of 2 milliseconds (ms) or less to large numbers of virtual machines or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, heavily weights performance and storage efficiency, followed closely by multitenancy and security. The heavy reliance on SSDs, autotiering, quality of service (QoS) features that prioritize and throttle I/Os, and disaster recovery (DR) solutions that are tightly integrated with virtualization software also make RAS and manageability important criteria.

Analytics

This use case applies not only to storage consumed by big data applications using map/reduce technologies, but also to all analytic applications that are packaged or that provide business intelligence (BI) capabilities for a particular domain or business problem. Performance (more specifically, bandwidth), RAS and snapshot capabilities are critical to success: RAS features to tolerate disk failures; snapshots to facilitate check-pointing long-running applications; and bandwidth to reduce time to insight (see definition in "Hype Cycle for Analytic Applications, 2013").

Cloud

This use case applies to storage arrays used in private, hybrid and public cloud infrastructures, and how they apply to the specific cost, scale, manageability and performance requirements within this use case. Hence, scalability, multitenancy and resiliency are important selection considerations, and are highly weighted.

Table 2. Weighting for Critical Capabilities in Use Cases

| Critical Capabilities | Overall | Consolidation | OLTP | Server Virtualization and VDI | Analytics | Cloud |
|---|---|---|---|---|---|---|
| Manageability | 11.0% | 10.0% | 10.0% | 10.0% | 10.0% | 15.0% |
| RAS | 17.0% | 18.0% | 25.0% | 12.0% | 15.0% | 15.0% |
| Performance | 18.0% | 15.0% | 25.0% | 20.0% | 20.0% | 10.0% |
| Snapshot and Replication | 10.8% | 10.0% | 10.0% | 9.0% | 15.0% | 10.0% |
| Scalability | 13.8% | 15.0% | 10.0% | 9.0% | 15.0% | 20.0% |
| Ecosystem | 5.0% | 5.0% | 5.0% | 5.0% | 5.0% | 5.0% |
| Multitenancy and Security | 12.0% | 15.0% | 5.0% | 15.0% | 10.0% | 15.0% |
| Storage Efficiency | 12.4% | 12.0% | 10.0% | 20.0% | 10.0% | 10.0% |
| Total | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |

As of March 2014

Source: Gartner (March 2014)

Inclusion Criteria

The 14 arrays selected for inclusion in this research are offered by vendors discussed in "Magic Quadrant for General-Purpose Disk Arrays," which includes arrays supporting block and/or file protocols. Here are the criteria that must be met for classification as a midrange storage array:

  • Single electronics failures:
    • Are not single points of failure (SPOFs)
    • Do not result in loss of data integrity or accessibility
    • Can impact more than 25% of the array's performance/throughput
    • Can be visible to the SAN and connected application servers
  • Microcode updates:
    • Can be disruptive
    • Can impact more than 25% of the array's performance/throughput
  • Repair activities and capacity upgrades:
    • Can be disruptive
  • Average selling price of more than $24,999

The criteria for qualification as a high-end array are more severe than those for midrange arrays. For this reason, arrays that satisfy the high-end criteria also satisfy the midrange criteria, but are included in the high-end Critical Capabilities research rather than here.

For the reader's convenience, high-end array criteria are shown below:

  • Single electronics failures:
    • Are invisible to the SAN and connected application servers
    • Impact less than 25% of the array's performance/throughput
  • Microcode updates:
    • Are nondisruptive and can be nondisruptively backed out
    • Impact less than 25% of the array's performance/throughput
  • Repair activities and capacity upgrades:
    • Are invisible to the SAN and connected application servers
    • Impact less than 50% of the array's performance/throughput
  • Dynamic load balancing is supported
  • Local and remote replication are supported
  • Typical high-end disk array average selling prices (ASPs) are greater than $250,000

Critical Capabilities Rating

Each product or service that meets our inclusion criteria has been evaluated on several critical capabilities on a scale from 1.0 (lowest ranking) to 5.0 (highest ranking). Rankings are not adjusted to account for differences in various target market segments. So, for example, a system targeting the small and midsize business (SMB) market that is less costly and less scalable than a system targeting the enterprise market would rank lower on scalability than the larger array, despite the SMB prospect not needing the extra scalability (see Table 3).

Table 3. Product Rating on Critical Capabilities

| Product or Service | Manageability | RAS | Performance | Snapshot and Replication | Scalability | Ecosystem | Multitenancy and Security | Storage Efficiency |
|---|---|---|---|---|---|---|---|---|
| Coraid ZX/SRX | 3.3 | 3.5 | 3.3 | 3.3 | 3.7 | 3.3 | 3.2 | 3.3 |
| Dell Compellent | 4.2 | 3.5 | 3.3 | 3.5 | 3.2 | 3.7 | 2.8 | 3.7 |
| Dot Hill AssuredSAN 4000/PRO 5000 | 3.2 | 3.3 | 3.2 | 3.0 | 2.2 | 3.0 | 2.5 | 3.3 |
| EMC VNX Series | 3.7 | 3.5 | 3.8 | 3.5 | 3.7 | 4.2 | 3.2 | 3.8 |
| Fujitsu Eternus DX500 S3/DX600 S3 | 3.3 | 4.0 | 3.3 | 3.5 | 3.3 | 3.7 | 3.2 | 3.3 |
| HDS HUS 100 Series | 3.3 | 4.2 | 3.5 | 4.0 | 3.3 | 3.3 | 3.3 | 3.5 |
| HP 3PAR | 3.8 | 4.0 | 4.0 | 3.7 | 3.5 | 3.7 | 4.2 | 3.8 |
| Huawei S5000T/S6000T Series | 2.8 | 3.3 | 3.7 | 3.7 | 4.0 | 3.5 | 2.7 | 3.3 |
| IBM V7000 | 3.3 | 3.3 | 3.3 | 3.8 | 3.2 | 3.5 | 3.3 | 3.7 |
| NEC M-Series | 3.0 | 3.5 | 3.5 | 3.3 | 2.8 | 3.0 | 2.5 | 2.8 |
| NetApp FAS/V32x0 | 4.2 | 4.0 | 3.3 | 3.8 | 4.0 | 4.2 | 3.8 | 4.0 |
| Nimble Storage CS-Series | 4.0 | 3.3 | 3.5 | 3.3 | 3.0 | 3.3 | 3.2 | 3.7 |
| Oracle Sun ZFS Storage Appliance | 3.7 | 3.3 | 3.8 | 3.7 | 3.8 | 2.8 | 2.8 | 3.7 |
| X-IO ISE Storage Systems | 3.5 | 4.7 | 4.0 | 2.0 | 3.2 | 3.0 | 2.7 | 2.5 |

As of March 2014

Source: Gartner (March 2014)

Figure 1. Overall Score for Each Vendor's Product Based on the Nonweighted Score for Each Critical Capability

As of March 2014

Source: Gartner (March 2014)

To determine an overall score for each product in the use cases, the ratings in Table 3 are multiplied by the weightings shown in Table 2. These scores are shown in Table 4, which also provides our assessment of the viability of each product.
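The arithmetic is easy to reproduce. This minimal sketch recomputes one cell of Table 4 from the published weights and ratings (values transcribed from Tables 2 and 3; the code itself is illustrative, not Gartner tooling):

```python
# Recompute HP 3PAR's Overall score: sum of (Table 2 weight x Table 3 rating).

overall_weights = {
    "Manageability": 0.110, "RAS": 0.170, "Performance": 0.180,
    "Snapshot and Replication": 0.108, "Scalability": 0.138,
    "Ecosystem": 0.050, "Multitenancy and Security": 0.120,
    "Storage Efficiency": 0.124,
}

hp_3par_ratings = {
    "Manageability": 3.8, "RAS": 4.0, "Performance": 4.0,
    "Snapshot and Replication": 3.7, "Scalability": 3.5,
    "Ecosystem": 3.7, "Multitenancy and Security": 4.2,
    "Storage Efficiency": 3.8,
}

score = sum(w * hp_3par_ratings[c] for c, w in overall_weights.items())
print(round(score, 2))  # 3.86, matching the Overall column of Table 4
```

Repeating the same sum with each use case's weight column reproduces the remaining columns of Table 4.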

Table 4. Product Score in Use Cases

| Product or Service | Overall | Consolidation | OLTP | Server Virtualization and VDI | Analytics | Cloud |
|---|---|---|---|---|---|---|
| Coraid ZX/SRX | 3.38 | 3.38 | 3.39 | 3.35 | 3.38 | 3.40 |
| Dell Compellent | 3.45 | 3.42 | 3.49 | 3.45 | 3.45 | 3.45 |
| Dot Hill AssuredSAN 4000/PRO 5000 | 2.98 | 2.95 | 3.07 | 3.01 | 2.97 | 2.89 |
| EMC VNX Series | 3.64 | 3.62 | 3.67 | 3.65 | 3.65 | 3.62 |
| Fujitsu Eternus DX500 S3/DX600 S3 | 3.45 | 3.45 | 3.51 | 3.41 | 3.45 | 3.43 |
| HDS HUS 100 Series | 3.59 | 3.59 | 3.67 | 3.55 | 3.60 | 3.55 |
| HP 3PAR | 3.86 | 3.87 | 3.88 | 3.88 | 3.85 | 3.84 |
| Huawei S5000T/S6000T Series | 3.39 | 3.38 | 3.44 | 3.35 | 3.45 | 3.37 |
| IBM V7000 | 3.40 | 3.39 | 3.39 | 3.43 | 3.41 | 3.38 |
| NEC M-Series | 3.10 | 3.07 | 3.22 | 3.05 | 3.12 | 3.02 |
| NetApp FAS/V32x0 | 3.86 | 3.88 | 3.83 | 3.84 | 3.84 | 3.92 |
| Nimble Storage CS-Series | 3.41 | 3.39 | 3.43 | 3.45 | 3.40 | 3.39 |
| Oracle Sun ZFS Storage Appliance | 3.51 | 3.48 | 3.55 | 3.50 | 3.54 | 3.49 |
| X-IO ISE Storage Systems | 3.35 | 3.33 | 3.58 | 3.24 | 3.31 | 3.28 |

As of March 2014

Source: Gartner (March 2014)

Product Viability

Product viability is distinct from the critical capability scores for each product. It is our assessment of the vendor's strategy and ability to enhance and support a product throughout its expected life cycle, not an evaluation of the vendor as a whole.

Four major areas are considered: strategy, support, execution and investment. Strategy includes how a vendor's strategy for a particular product fits in relation to the vendor's other product lines, market direction and overall business. Support includes the quality of technical and account support, as well as customer experiences with the product. Execution considers a vendor's structure and processes for sales, marketing, pricing and deal management. Investment considers the vendor's financial health, and the likelihood of the individual business unit responsible for a product to continue investing in it. Each product is rated on a five-point scale, from poor to outstanding, for each of the four areas, and is assigned an overall product viability rating (see Table 5).

Table 5. Product Viability Assessment

| Product or Service | Product Viability |
|---|---|
| Coraid ZX/SRX | Good |
| Dell Compellent | Good |
| Dot Hill AssuredSAN 4000/PRO 5000 | Good |
| EMC VNX Series | Excellent |
| Fujitsu Eternus DX500 S3/DX600 S3 | Good |
| HDS HUS 100 Series | Excellent |
| HP 3PAR | Excellent |
| Huawei S5000T/S6000T Series | Good |
| IBM V7000 | Excellent |
| NEC M-Series | Good |
| NetApp FAS/V32x0 | Excellent |
| Nimble Storage CS-Series | Excellent |
| Oracle Sun ZFS Storage Appliance | Good |
| X-IO ISE Storage Systems | Good |

As of March 2014

Source: Gartner (March 2014)

The weighted capabilities scores for all use cases are displayed as components of the overall score (see Figures 2 through 7).

Figure 2. Vendors' Product Scores for the Overall Use Case

Source: Gartner (March 2014)

Figure 3. Vendors' Product Scores for the Storage Consolidation Use Case

Source: Gartner (March 2014)

Figure 4. Vendors' Product Scores for the OLTP Use Case

Source: Gartner (March 2014)

Figure 5. Vendors' Product Scores for the Server Virtualization and VDI Use Case

Source: Gartner (March 2014)

Figure 6. Vendors' Product Scores for the Analytics Use Case

Source: Gartner (March 2014)

Figure 7. Vendors' Product Scores for the Cloud Use Case

Source: Gartner (March 2014)

Vendors

Coraid ZX/SRX

Coraid's use of the ATA over Ethernet (AoE) protocol lowers connectivity costs, and simplifies the connection to physical and virtualized servers, while delivering high performance. SRX block storage nodes, which can be configured with or without SSDs, depending on the model, connect to servers via an Ethernet switch. The ZX NAS gateway adds CIFS and NFS protocol support, scale-out capabilities and value-added functionality, such as local and remote replication. The VSX block gateway provides scale-out, logical volume management, and local and remote replication capabilities. Building the ZX NAS gateway on Oracle Solaris 11, which includes ZFS, expands third-party support, and minimizes concerns about code quality and support. Built with internal IP, the VSX increases the attractiveness of Coraid's products, but does not directly influence third-party ecosystem support. Offsetting these strengths is Coraid's decision to use different virtualization gateways to provide NAS and block storage functionality, which makes it impossible to create a consistent point in time across block and NAS storage when creating snapshots.

Dell Compellent

Dell's Compellent midrange storage arrays remain the cornerstone of the company's evolving storage portfolio. The Dell Compellent SC8000 is a performance- and functionally competitive array that can integrate with the FS8600 NAS appliance for unified block and file storage capabilities. The product provides ease of use, excellent reporting and the ability to keep connections active, even in the presence of a controller failure. Storage Center array software release 6.3, which is available as a no-charge upgrade for customers under a current support agreement, improves scalability and performance, and adds support for 16Gb FC. Dell also offers specialized "Copilot" support services to reduce service calls, while improving storage management and utilization, as well as customer satisfaction. Dell Compellent is one of the few storage arrays whose customers can retain their software licenses when upgrading the array, thus lowering acquisition costs.

Although Dell can deliver block and file storage capabilities, a number of its established competitors are delivering more seamless, unified or multiprotocol solutions. Dell Compellent was the first to deliver autotiering capabilities, but its data progression engine moves data only once in a 24-hour window, not in real time, which can create bottlenecks in fast-changing, dynamic, multitenant environments.

Dot Hill AssuredSAN 4000/PRO 5000

Dot Hill's AssuredSAN and AssuredSAN Pro series share a common technology base; are positioned at the lower end of the storage array market; and deliver competitive performance with software features (only for the Pro series), such as thin provisioning, autotiering and remote replication. Both arrays' reliability and microcode quality have benefited from Dot Hill's OEM agreements with companies such as HP, Teradata and Tektronix, which have sold its products under their brand names. The autotiering feature uses short monitoring windows to make the array very responsive to changes in workload characteristics. AssuredSAN has extremely competitive pricing and high customer satisfaction levels for products in its range, and its software licensing extends to the entire array, including all attached just-a-bunch-of-disks (JBOD) enclosures. Gartner inquiries reveal that the management GUI isn't easy to use and needs considerable improvement. Dot Hill's efforts to build a strong technology partner ecosystem have been hampered by its past reliance on private-label OEMs, and its inability to rapidly support new APIs under its own label continues to be a challenge.

EMC VNX Series

The latest generation of the VNX storage arrays, launched in September 2013, incorporated a hardware refresh, as well as a firmware update that improved multitasking to exploit the multicore processors within the controllers, improve performance and reduce the overhead of value-added features. This enabled the VNX to scale the performance of the front-end controllers, and to fully exploit the back-end SSDs and HDDs. The VNX benefits from a large ecosystem and tight integration with VMware and RecoverPoint, which provides network-based local (concurrent local copy) and remote (continuous remote replication) replication. For new users, the Unisphere management GUI is still not as modern or easy to use as those of newer array designs, but the differences are small once the learning curve has been scaled. Gartner client feedback verifies that the new VNX system performs very well and is a significant improvement over the previous generation. However, with the ubiquitous use of SSDs in storage arrays and the ability of many new startups to create 100,000-plus IOPS arrays, performance in the general marketplace is no longer a key differentiator in its own right, but a scalability enabler. Customer satisfaction with EMC sales and support is above average.

Fujitsu Eternus DX500 S3/DX600 S3

The Eternus DX500 S3 and DX600 S3, running current firmware, deliver feature and functional equivalence with competitors in the market. The DX500 S3 and DX600 S3 are now unified storage arrays. Fujitsu is conservative, and generally not the first vendor to deliver a function or feature, but it maintains a stable cadence of upgrades that follow a consistent road map. Therefore, customers who value stability and investment protection should consider Fujitsu storage. Gartner client inquiries show that the Fujitsu arrays also have a higher-than-average reputation for reliability. Fujitsu uses the same Eternus SF management software across all DX models; therefore, replication and management are the same across the range, cross-training is minimized, and staff productivity is improved. The common software also makes upgrades between models simple and enables replication across different models, so that, if required, the migration of data between dissimilar Fujitsu Eternus DX storage arrays becomes less disruptive.

HDS HUS 100 Series

Hitachi Data Systems' (HDS') Hitachi Unified Storage (HUS) 100 series is a unified storage array that supports block, file and object capabilities, and is renowned for its solid hardware engineering. HUS has a symmetric, active/active controller architecture, enabling logical unit number (LUN) access through either controller, with equal performance for block-access applications. Additionally, the array will maintain all active host connections through the operating controller in case of a failure. Because the block and file services are provided by physically separate components, albeit tied together via a unified management GUI, a consistent snapshot cannot be created across block and file storage resources. HUS also supports reliable, nondisruptive microcode updates that can be done at a microprocessor core level. HDS is striving to differentiate this product in the midrange segment through superior performance and reliability relative to competing products. Though Hitachi Command Suite offers unified administration of various HDS storage arrays, it needs to improve its ease of use and support for older arrays. More specifically, it needs to provide tighter integration with HUS 100 for block, file and object storage management features. A larger HUS ecosystem would improve HUS usability across more markets.

HP 3PAR

The HP 3PAR StoreServ series is the centerpiece of HP's disk storage strategy, providing a common management and software architecture across the whole product line. The 3PAR architecture now extends from the entry-level two-node 7200 and four-node 7400 and 7450 models, to the eight-node StoreServ 10000 (aka V800), providing midrange users with simple-to-manage, seamless growth into multipetabyte systems. Ongoing hardware and software enhancements are keeping the system competitive with other SAN storage systems in availability, scalability, performance, functionality, ecosystem support and ease of use. The 3PAR 74x0 systems, configured with four or more nodes, have an inherent advantage in usable availability relative to dual-controller architectures, and this advantage has been aided by recent functional enhancements, such as persistent cache and persistent ports. However, 3PAR systems are not yet delivering the same RPOs over asynchronous distances as traditional high-end storage systems because 3PAR asynchronous remote copy still transmits the difference between snaps. Performance and throughput scale linearly as nodes are added to the system, and the fine-grained thin provisioning (16KB chunks) enables users to take full advantage of SSDs and aggressively overcommit storage resources. Offsetting these strengths are the lack of an integrated NAS capability and the lack of data compression and deduplication.

Huawei S5000T/S6000T Series

The Huawei T series is a family of dual-controller storage systems that deliver strong performance and competitive functionality. There are no signs of corner cutting on the printed circuit boards (PCBs), chassis and support equipment. Packaging and cabling layout show attention to detail and serviceability. Microcode updates, repair activities and capacity expansions are nondisruptive. Transparency and openness are provided via Storage Performance Council (SPC) benchmarks, which are used to position the T series against its competitors. A checklist of storage efficiency and data protection features includes clones, snapshots, thin provisioning, autotiering, synchronous and asynchronous remote copy. To improve the usability of asynchronous remote copy, the T series includes consistency group support. A similar checklist of supported software includes Windows, VMware, Hyper-V, KVM and various Linux implementations, including Red Hat and SUSE. Offsetting these strengths is the relative lack of integration with many backup/restore solutions, management tools that are less than intuitive and a limited pool of experienced Huawei storage administrators.

IBM V7000

The IBM Storwize V7000 series is a unified storage array that incorporates technologies from many IBM products, including the System Storage SAN Volume Controller (SVC), General Parallel File System (GPFS) and XIV GUI design. This reuse of technologies provides interoperability with installed SVCs; a reduction in the V7000 learning curve for many IBM customers; and mature thin provisioning, autotiering, snapshot, replication and storage virtualization capabilities. Even though storage virtualization and data migration features are now commonplace within server hypervisors, virtualization is still a clear differentiator, since few storage arrays offer this feature natively, and it can be useful during array migration projects. High performance is achieved by placing SSDs in the SVC-based controllers, but this limits scalability due to physical slot limits within the controllers. Customers seeking to improve their physical infrastructure agility by implementing software-based storage (SBS) can purchase the V7000 software and install it as a virtual host, thus creating a software storage appliance with all the rich data service features of a storage array. Because the value and intelligence of the V7000 depend on the rich SVC software features, IBM's ability to maintain a single SVC software image between the V7000 and the stand-alone SVC product is critical to the competitiveness and survival of the V7000. Offsetting these strengths are the limited back-end scalability of the V7000 (240 internal disks per controller node-pair, plus the capacity of other node-pairs in a V7000 cluster and virtualized storage systems); the limited integration between the NAS gateway built on GPFS and back-end storage, which adds management complexity; and the maintenance of two code trees, one for the SVC and the other for the V7000, which could create interoperability exposures.

NEC M-Series

Although well-known as a storage vendor in its home market of Japan, NEC only started actively marketing its midrange storage products overseas in the past few years. The M Series comes in four models (M100, M300, M500 and M700), with the last two models supporting both FC and iSCSI. The product has simple, all-inclusive software pricing, and includes low-power hardware components to reduce power consumption. The product has high reliability and comprehensive data services, such as autotiering, thin provisioning, snapshots, replication, and support for VMware vSphere API for Array Integration (VAAI) and vSphere API for Storage Awareness (VASA). Customers have indicated that manageability needs to be improved. Autotiering can make tiering decisions only on a daily basis, rather than in real time. The M Series currently supports only block protocols, and doesn't offer unified storage capabilities. Customers requiring NAS capabilities need to use the NV Series as a gateway, which is available only in Japan.

NetApp FAS/V32x0

The NetApp FAS 32x0 series is based on the Data Ontap operating system, which spans the entire FAS and V-Series lines. With the introduction of Clustered Data Ontap, which delivers horizontal, scale-out capability through a global namespace, load balancing and federated management, NetApp has been focusing on migrating customers to the clustered version, which can deliver excellent scalability in capacity, with little compromise on availability and data services. The FAS 32x0 series can scale up to eight node pairs, with a maximum storage capacity of 11PB. With the recent release of Clustered Data Ontap 8.2, NetApp announced the long-awaited SnapVault support, as well as comprehensive support for VMware APIs, and for SMB 3.0 and Offloaded Data Transfer (ODX) from Microsoft. Many of the management tools have been updated to support Clustered Data Ontap, and the road map for storage management in 2014 looks positive, with deeper levels of support and management, as well as better automation of 7-Mode to Clustered Data Ontap migrations. While Clustered Data Ontap is a major software release enabling scale-out architectures from NetApp, it lacks a distributed file system. Moreover, features such as MetroCluster and SnapLock are still unavailable in Clustered Data Ontap 8.2.

Nimble Storage CS-Series

There are three key differentiators that Nimble customers rate as competitive advantages: proactive support via the InfoSight feature, a relatively low purchase cost, and automated tiering between SSDs and HDDs. The InfoSight feature suggests changes to the arrays to create an optimal configuration. Because Nimble's storage tiering is real time, it is closer to a caching method than a true tiering system. Customer experiences are positive, and Nimble provides low-cost storage while simultaneously delivering relatively high performance by placing data on the correct storage media. Another factor that is becoming more important is the value of a community. Nimble has created an information-sharing community of customers via NimbleConnect, which customers can use to swap hints and tips with one another. This type of added value via transparency is quite rare, since it opens up positive and negative information sharing between users. Nimble is being adventurous and bold by taking these steps. If this continues, and Nimble becomes a company renowned for trust and openness, it will have a significant soft-product advantage that cannot be created or emulated overnight. Offsetting these strengths are the lack of multiprotocol support; the lack of deduplication, which could further improve storage efficiency; and a limited, but growing, ecosystem.

Oracle Sun ZFS Storage Appliance

The Sun ZFS Storage Appliance provides all the features required of a storage array, but excels in providing detailed instrumentation, performance and integration with Oracle platforms. Oracle focuses its R&D and sales on its own installed base, and does not prioritize support for other platforms, such as VMware APIs. Even though the ZS3 systems are unified and can provide block storage, Oracle positions these arrays for file and NAS-based storage. Oracle Database customers gain extra performance and storage utilization benefits from the support of columnar compression, which is available only with Oracle Databases attached to Oracle storage arrays. The design of the system is less than 10 years old and is not constrained by obsolescent design decisions, so new features are added quickly. This is apparent in the detailed instrumentation, double-bit error checking and correction, multicore and capacity scaling, SSD exploitation, pooling, snapshots, compression, encryption, and deduplication that have been design objectives since the product's inception. Offsetting these strengths are the present lack of scale-out capabilities and Oracle's reluctance to sell the ZS3's block-level protocol support.

X-IO ISE Storage Systems

ISE and Hyper ISE are dual-controller arrays with the unique ability to repair most HDD failures in situ. ISE arrays are configured with HDDs only, whereas Hyper ISE is configured with a mix of SSDs and HDDs. The ability to vary a disk surface offline, rather than an entire HDD, reduces rebuild times; insulates the user from field engineering (FE) mistakes; and makes it practical to offer a standard, five-year warranty on both offerings. Both ISE and Hyper ISE have earned a reputation for delivering consistently high availability and performance with minimal management attention. This is largely attributable to the building-block approach taken by X-IO, which limits the maximum capacity of any ISE to no more than 40 SSDs and/or HDDs, and to its decision to manage Hyper ISE SSDs as a second-level cache, rather than as a separate tier of storage, using its internally developed Continuous Adaptive Data Placement (CADP) algorithm. Both versions of ISE use the same management tools, have the same rack form factor (3U) and are energy-efficient. Offsetting these strengths is the lack of storage efficiency and data protection features, such as thin provisioning, snapshots and asynchronous replication. The ISE ecosystem is small, and currently limited to VMware, Citrix, Hyper-V, Symantec Storage Foundation and Windows Server.

Note 1
Critical Capabilities Methodology

Scoring for the eight critical capabilities was derived from recent independent Gartner research on the midrange storage market. Each vendor responded in detail to a comprehensive, primary research questionnaire administered by the authors. Extensive follow-up interviews were conducted with all participating vendors, and reference checks were conducted with end users. This provides an objective process for considering the vendors' suitability for your use cases.

Critical capabilities are attributes that differentiate products in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.

This methodology requires analysts to identify the critical capabilities for a class of products. Each capability is then weighted in terms of its relative importance overall, as well as for specific product use cases. Next, products are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities overall, and for each use case, is then calculated for each product.

Ratings and summary scores range from 1.0 to 5.0:

1 = Poor: Most or all defined requirements not achieved

2 = Fair: Some requirements not achieved

3 = Good: Meets requirements

4 = Excellent: Meets or exceeds some requirements

5 = Outstanding: Significantly exceeds requirements

Product viability is distinct from the critical capability scores for each product. It is our assessment of the vendor's strategy, as well as its ability to enhance and support a product over its expected life cycle, not an evaluation of the vendor as a whole. Four major areas are considered: strategy, support, execution and investment. Strategy includes how a vendor's strategy for a particular product fits in relation to its other product lines, its market direction and its business overall. Support includes the quality of technical and account support, as well as customer experiences for that product. Execution considers a vendor's structure and processes for sales, marketing, pricing and deal management. Investment considers the vendor's financial health and the likelihood of the individual business unit responsible for a product to continue investing in it. Each product is rated on a five-point scale, from poor to outstanding, for each of these four areas. It is then assigned an overall product viability rating.

The critical capabilities Gartner has selected do not represent all capabilities for any product and, therefore, may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making an acquisition decision.