LICENSED FOR DISTRIBUTION

Critical Capabilities for Object Storage

Archived | Published: 31 March 2016 | ID: G00271719

Analyst(s):

Summary

Growing investments in Mode 2 IT projects and cost reduction efforts in core enterprise workloads are driving the demand for object storage. Here, we compare 12 object storage products against seven critical capabilities in use cases relevant to infrastructure and operations leaders.

Overview

Key Findings

  • Infrastructure and operations leaders are attracted by the lower total cost of ownership and the scalability of object storage, whereas enterprise developers are attracted to its programmability, cloud portability and productivity improvements.

  • The object storage market has been in a consolidation phase during the past few years, with notable acquisitions by IBM and HGST in 2015.

  • The Amazon Simple Storage Service API has emerged as the de facto standard for data access — a growing number of vendors support the S3 API, with varying degrees of compatibility.

  • The use cases for object storage in the enterprise are evolving beyond archiving, due to increased innovation that focuses on performance, reliability, interoperability and minimalist architectural designs.

Recommendations

  • Choose object storage products as alternatives to block and file storage when you need huge scalable capacity, reduced management overhead and lower cost of ownership.

  • Build on-premises object storage repositories with the hybrid cloud in mind, and evaluate their API support and level of compatibility with dominant public cloud providers for data portability.

  • Select object storage vendors that offer a wide choice of deployment (software-only versus packaged appliances versus managed hosting) and licensing models (perpetual versus subscription).

  • Train developers on best practices related to application design and the operational considerations relevant to an object storage system.

Strategic Planning Assumption

By 2019, more than 30% of the storage capacity in enterprise data centers will be deployed with software-defined storage (SDS) architectures based on x86 hardware systems, up from less than 5% today.

What You Need to Know

Object storage is pervasive as the underlying platform for cloud applications that we consume in our personal lives, such as content streaming, photo sharing and file collaboration services. The degree of awareness and the level of adoption of object storage are less in the enterprise, but they continue to grow. The key drivers for the adoption of object storage in the enterprise are:

  • The explosion in the amount of unstructured data and the resulting need for lower-cost, scalable, self-healing, multitenant platforms for storing petabytes of data.

  • New investments in private clouds and analytics, particularly in industries such as media and entertainment, life sciences, the public sector, and education/research, which demand scalable, cost-effective storage.

  • Growing interest from enterprise developers and DevOps team members looking for agile and programmable infrastructures that can be extended to the public cloud.

Object storage is characterized by access through RESTful interfaces over standard internet protocols, such as HTTP, with granular, object-level security and rich metadata that can be attached to each object. Object storage products are available in a variety of deployment models — virtual appliances, managed hosting, purpose-built hardware appliances or software that can be installed on standard server hardware. These products are capable of huge scale in capacity, and many of the vendors included in this research have production deployments beyond 10PB. They are better suited to workloads that require high bandwidth than to transactional workloads that demand high input/output operations per second (IOPS) and low latency.
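As a minimal sketch of this access model (the endpoint, bucket, key and metadata names below are hypothetical), an object PUT over HTTP with user-assigned metadata carried in x-amz-meta-* headers can be composed with only the Python standard library; a real deployment would also sign the request:

```python
import urllib.request

# Hypothetical S3-compatible endpoint, bucket and key (illustrative only).
endpoint = "https://objects.example.com"
bucket, key = "medical-images", "scan-0042.dcm"
data = b"...binary object payload..."

# S3-style object PUT: content plus user-assigned metadata headers.
req = urllib.request.Request(
    url=f"{endpoint}/{bucket}/{key}",
    data=data,
    method="PUT",
    headers={
        "Content-Type": "application/octet-stream",
        # Rich metadata travels with the object and can later be used
        # for data mining, retention and information management.
        "x-amz-meta-patient-id": "anon-7731",
        "x-amz-meta-retention": "7y",
    },
)
# The request is only constructed here; a real client would sign it
# (e.g., AWS Signature Version 4) and then send it over the network.
print(req.get_method(), req.full_url)
```

The same object is then addressable by a simple GET on the identical URL, which is what makes the model attractive to developers building cloud-portable applications.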

The new generation of object storage products relies mainly on erasure-coding schemes that can improve availability at lower capacity overhead and cost, compared with traditional redundant array of independent disks (RAID) schemes. The growing support for the Amazon Simple Storage Service (S3) API among object storage vendors is stimulating market demand for these products, although the level of compatibility with the S3 API varies widely, and there can still be lock-in due to proprietary methods of managing metadata.
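The erasure-coding idea can be sketched with the simplest possible code, single XOR parity (n data fragments plus one coded fragment). This is an illustrative toy, not any vendor's scheme; production systems use Reed-Solomon-style codes that survive multiple simultaneous fragment losses:

```python
# Toy erasure code: n data fragments + 1 XOR parity fragment (m = 1).
# Losing any single fragment is recoverable by XOR-ing the survivors.

def encode(data: bytes, n: int):
    """Split data into n equal fragments and append one XOR parity fragment."""
    size = -(-len(data) // n)  # ceiling division
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = bytearray(size)
    for f in frags:
        for i, b in enumerate(f):
            parity[i] ^= b
    return frags + [bytes(parity)]

def recover(frags, lost: int):
    """Rebuild the fragment at index `lost` by XOR-ing all surviving fragments."""
    size = len(next(f for i, f in enumerate(frags) if i != lost))
    rebuilt = bytearray(size)
    for i, f in enumerate(frags):
        if i == lost:
            continue
        for j, b in enumerate(f):
            rebuilt[j] ^= b
    return bytes(rebuilt)

data = b"object storage payload!!"   # 24 bytes -> 4 data fragments of 6 bytes
frags = encode(data, 4)              # 4 data fragments + 1 parity fragment
original = frags[2]
frags[2] = None                      # simulate losing one fragment/node
assert recover(frags, 2) == original
```

Note the capacity overhead: five fragments store four fragments' worth of data (25% overhead), versus 200% overhead for three-way replication, which is the efficiency argument made above.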

IT leaders who need highly scalable, self-healing and cost-effective storage platforms for large amounts of unstructured data should evaluate the suitability of object storage platforms. They should use this research as a basis to identify the appropriate products for their use cases.

Analysis

Critical Capabilities Use-Case Graphics

Figure 1. Vendors' Product Scores for the Overall Use Case
Figure 2. Vendors' Product Scores for the Analytics Use Case
Figure 3. Vendors' Product Scores for the Archiving Use Case
Figure 4. Vendors' Product Scores for the Backup Use Case
Figure 5. Vendors' Product Scores for the Cloud Storage Use Case
Figure 6. Vendors' Product Scores for the Content Distribution Use Case

Source: Gartner (March 2016)

Vendors

Caringo Swarm

Based in Austin, Texas, Caringo was established in 2005 and is privately held. Caringo's main offering is Swarm, which can leverage standard hardware and supports the Amazon S3 API and the OpenStack Swift API.

Swarm can be paired with FileFly, introduced in September 2015, which supports life cycle management of file content on NetApp and Windows filers for transition to object storage. Customers value this capability for enabling the archiving use case. Caringo continues to focus on the healthcare vertical, and Swarm has a number of governance features, including write once, read many (WORM), legal hold and retention management at the object level.

Swarm offers erasure coding and replication through its Elastic Content Protection features. There is no support for extending data to other environments, such as tiering to tape or to public cloud storage providers as part of a tiered architecture.

Cloudian HyperStore

Cloudian is a small, venture-backed startup based in San Mateo, California, with roots in Japan. HyperStore is its Amazon S3-compatible object storage product, which was released in 2011.

HyperStore is software designed to run on commodity hardware or virtual machines (VMs), or as Cloudian-designed appliances. HyperStore not only supports S3 as a compatible protocol on the front end; it can also tier objects to Amazon's S3 public cloud storage service on the back end. This gives HyperStore cloud storage gateway properties, reflecting an emerging trend among object storage vendors. HyperStore relies heavily on the Cassandra key/value store for metadata management.

Cloudian claims 100% protocol compatibility with Amazon S3. Although HyperStore is not fully compatible (it does not support features such as bucket notifications), it is closely compatible and supports large portions of the S3 API. Most HyperStore implementations are small, with few deployments exceeding 2PB. Cloudian references indicated that the overall management of HyperStore could be improved, stating that some configuration tasks are highly manual and that its system health monitoring capabilities are not enterprise-ready.

DataDirect Networks WOS

DataDirect Networks (DDN) offers the WOS7000 and WOS 9660 Archive Appliances as object storage platforms. The company is known for its high-performance computing (HPC) storage products, which are used in some of the world's top supercomputers.

WOS is available as a hardware appliance from DDN and as software, with reference architectures from partners such as Dell, HP and Supermicro. The product supports the Amazon S3 API (as a plug-in) and offers Swift API compatibility. Interoperability with the company's EXAScaler and GRIDScaler parallel file system appliances is provided through its WOS Bridge offering. DDN provides large volume capacity and high throughput in terms of seek/read time. These factors, in addition to DDN's heritage in HPC, lead customers toward active archiving and analytics use cases that involve the aggregation of big data.

WOS doesn't implement authentication or encryption through its native REST API; instead, it allows access to any calling application that knows a valid object identifier. WOS assumes that data is adequately secured if object IDs are hard to guess, but this is not sufficient. WOS does not support named keys for objects stored through its native REST interface, which requires applications to persist the object identifier returned to the client in order to perform subsequent operations on files. This is particularly challenging with mobile applications.

EMC Elastic Cloud Storage

Elastic Cloud Storage (ECS) is EMC's newest object storage platform, and it traces back to EMC's 15-year lineage as a developer of object storage products. As such, ECS supports APIs from each EMC object storage product that came before it, including Centera and Atmos. ECS is also compatible with the OpenStack Swift and Amazon S3 protocols.

ECS is used by both public-facing service providers and large enterprises to provide object storage services that support external customers and internal users. This is made possible by ECS's multitenant foundation, which enables billing, metering and monitoring to be measured at a granular level. The ECS architecture has been significantly improved, compared with its EMC object storage predecessors, by employing a scale-out storage architecture and a layered approach to the outbound APIs and internal data services. The result is an elegant design and a more-resilient platform.

EMC has accumulated a significant amount of knowledge related to building scale-out storage platforms. However, maintaining technical debt in the form of APIs from previous products is a trade-off that prevents the company from focusing solely on modern application architectures. As a result, ECS hedges between the old and the new, rather than definitively focusing on the current or the future. Moreover, not all ECS APIs support the same features, such as server-side encryption and governance, so many of the features attributed to the ECS platform are supported only under certain conditions. There's also little interoperability between the APIs, and migration between them requires expensive professional services.

HGST Active Archive System

Amplidata was one of the early object storage startups to embed self-healing erasure code algorithms, architecting them into a grid-based storage system. Founded in 2008, Amplidata was acquired by HGST in 2015. HGST is now a brand of Western Digital.

After the acquisition, HGST relaunched the product as the HGST Active Archive System, a full-rack solution with capacities ranging from 1.2PB to 4.7PB. It supports an S3-compatible API, with distributed erasure coding that protects against site failures. The product uses HelioSeal PMR drives that provide high density and lower power consumption. By combining BitDynamics, a checksum mechanism that prevents silent data corruption, with its stress-tested, high-capacity drives, HGST can potentially deliver high data durability in the Active Archive System.

The Active Archive System lacks several key features, including replication, WORM, and support for file and non-S3 interfaces. Moreover, the lack of a software-only procurement model and limited independent software vendor (ISV) integration can limit its appeal across a broad set of use cases and customer segments. The petabyte-range starting size and high acquisition price of the Active Archive System create a high barrier for organizations that want to test the product at smaller capacities.

Hitachi Data Systems HCP

The Hitachi Data Systems object storage portfolio is a combination of three products:

  • Hitachi Content Platform (HCP)

  • HCP Anywhere — an enterprise file sync-and-share solution

  • Hitachi Data Ingestor (HDI) — used as a cloud on-ramp device

HCP is available as a preconfigured hardware appliance or as a virtual software appliance and in an operating expenditure (opex)-based consumption model from Hitachi's partners or Hitachi Cloud Services. HCP is a mature product with competitive security features, including robust multitenancy and built-in encryption. It offers native WORM support, data destruction and digital signatures to ensure secure information life cycle management. Local erasure coding was recently introduced to reduce capacity overheads for large object workloads. HCP supports remote asynchronous replication, as well as an active-active topology that allows parallel read/writes across configured sites belonging to the same namespace. In addition to its native API, HCP supports the Amazon S3 API and Swift API, and it can enable tiering into public cloud storage from providers such as AWS, Microsoft Azure and Google.

HCP's erasure coding is fixed: it isn't user-configurable, and it can't span multiple locations. Customers that need erasure coding and cloud tiering have the option of paying for the higher-priced active license, or using the lower-priced economy and extended licensing options, all of which still include a cost for managing the data in the public cloud.

Huawei OceanStor UDS

Since acquiring the Huawei-Symantec joint venture, Huawei has been aggressively investing in its storage business. Huawei has a diversified portfolio that spans storage area network (SAN), network-attached storage (NAS) and object storage product lines.

Huawei's object storage product, UDS, delivers high-density storage nodes based on ARM processors for lower energy consumption, with as many as 75 hard-disk drives in a 4U enclosure. The product supports local replication and erasure coding, and objects can be asynchronously replicated to remote sites to mitigate site failures. UDS is based on a decentralized architecture of peer-to-peer nodes in which the metadata is stored with the object, eliminating any single point of failure and enabling seamless scalability. Huawei UDS supports its native API and the S3 API for data access.

The product can be deployed only as a packaged appliance sold by Huawei. It does not offer native encryption, and it doesn't support WORM or other compliance-related features. Although Huawei has achieved robust revenue growth in the Asia/Pacific (APAC) region and Europe, its presence in the U.S. continues to be weak, due to political challenges it has been unable to overcome.

IBM Cleversafe dsNet

Founded in 2004 as a privately held company, Cleversafe was acquired by IBM in November 2015. Cleversafe's Dispersed Storage Network (dsNet) is offered in various deployment models, including a physical or virtual appliance (through VMware), and certain components can be deployed as a Docker container or Linux OS daemon.

Cleversafe dsNet is cited for its scalability and ease of management. The management interface organizes features into functional tabs that enhance productivity. Erasure coding is applied to data upon ingest, and the product is architected in a highly distributed manner. Cleversafe dsNet is aligned with a number of popular backup, archiving and cloud gateway ISVs to support associated use cases. Security is a major strength, with many forms of encryption available and special support for audits.

Cleversafe dsNet supports the Amazon S3 and OpenStack Swift APIs; however, native Network File System (NFS) support is lacking. Cleversafe's acquisition by IBM bears watching, because this could affect future development for enterprise customers.

NetApp StorageGRID

Based in Sunnyvale, California, NetApp delivers StorageGRID, which is available as a physical or virtual appliance, with support across the company's storage portfolio. StorageGRID supports Common Internet File System (CIFS) and NFS protocols, as well as Cloud Data Management Interface (CDMI), Swift and S3 APIs.

Security capabilities include native at-rest encryption, strong audit and reporting, and WORM, through the use of the company's Data Ontap SnapLock features. The product has good ISV support for backup and archiving use cases, as well as effective tiered storage, with support for disk, solid-state drive, tape and cloud options. NetApp has a fair number of PB-scale object storage customers.

Object versioning is a work in progress and an area that needs improvement: StorageGRID manages updated objects as new objects, which require separate retention policies and management. Beyond its virtual appliance, NetApp offers limited options for software-based deployments, due to the absence of reference architectures with third-party server OEMs.

Red Hat Ceph Storage

Red Hat Ceph Storage is an open-source storage project that is commercially distributed and supported by Red Hat and others. Red Hat acquired Inktank, the primary code contributor and support organization behind the Ceph Storage project.

Ceph Storage is "block on object" storage software that runs on commodity hardware. The internals of Ceph Storage are an object store, and the block storage portions are built on that base. Most Ceph Storage implementations are primarily for block storage service; however, its largest deployments (in terms of the overall amount of data) involve its use as an object store. The open-source and community aspects of Ceph Storage act as either positive or negative attributes, depending on the preference of the particular customer. Some users are attracted to the community aspects of Ceph Storage and the openness of its development, whereas others prefer the lower risk associated with commercial, closed-source products. Customers attracted to the community nature of Ceph Storage indicate that the ability to communicate with other Ceph Storage users and view a list of outstanding bugs are important aspects in their decisions about using the product.

Ceph Storage has basic multitenant capabilities and relies on native file system encryption, rather than server-side encryption. CephFS, the Ceph Storage file system, still needs to mature and is not ready for production. References indicated difficulty in managing, diagnosing and troubleshooting the Ceph Storage cluster when it is unhealthy, leading to longer problem resolution times.

Scality Ring

Scality develops object and scale-out file storage software with R&D in France and sales and marketing based in Silicon Valley. Scality's Ring is deployed on commodity hardware; however, it's often resold by HP and Dell on their own brand of servers.

Scality is unique, compared with most other object storage vendors, in that many of its clients use Ring for its scale-out NAS capabilities. Applications and users need not integrate with a REST API to take advantage of Scality's resilience characteristics; they can simply use NFS or Server Message Block (SMB) to communicate with Ring and to get the benefits of its distributed storage back end. Ring supports configurable erasure coding and replication in a peer-to-peer architecture that provides efficiency at scale, with no single point of failure. Scality has deeper support for OpenStack than most other object storage vendors, in that it supports Cinder, Swift and Glance. Some of Scality's largest customers use Ring as the back end for email workloads, reflecting the lineage of Scality's founders in that market.

Scality's S3 compatibility is immature, compared with other object storage vendors, and it does not support significant aspects of the S3 protocol, such as object versioning, bucket versioning, life cycle policies, server-side encryption, cross-site replication and event notification. Scality's native REST interface uses no security mechanisms for data-in-motion and relies on customers to secure the communication. Customer references reported that Scality's management and reporting capabilities are in need of improvement.

SwiftStack Object Storage

SwiftStack's product is built on OpenStack Swift, an open-source project available under the Apache License 2.0.

Enterprise customers can procure a software subscription from SwiftStack, which provides management, monitoring and runtime tools that ease deployment and operational challenges. SwiftStack is the primary contributor of software code to the OpenStack Swift project. It provides an out-of-band controller to deploy, integrate and manage storage nodes, which can be deployed on compatible hardware and prebuilt reference architectures from OEM partners, such as Cisco and Supermicro. It also provides an optional NFS/SMB gateway. The product is highly scalable, with support for multiregion replication and erasure coding. The Swift API is gaining support with a wide range of ISVs, and SwiftStack also offers an S3-compatible interface in addition to the native Swift interface.

The product lacks support for native encryption capabilities. End-user organizations need to work closely with the OEMs to optimize and tune the hardware to extract optimal performance from OpenStack Swift, particularly for small file/object environments.

Context

The first generation of object storage, in the early 2000s, manifested as content-addressed storage (CAS). During the late 2000s, the second phase of object storage shifted the product focus to cloud use, with a development emphasis on cost-effective cloud storage infrastructure, erasure codes for storage-efficient protection and better WAN support. Cloud providers, such as Amazon, Google and Microsoft, built their own storage infrastructures with object interfaces to offer them as on-demand cloud storage services. The success of object storage services in the cloud has had a significant effect on the on-premises vendor ecosystem and common access standards.

Key vendors that offer an object storage product include Caringo, Cloudian, DDN, EMC, HGST, Hitachi Data Systems, Huawei, IBM, NetApp, Quantum, Red Hat, Scality, SwiftStack and Tarmin. The market has been consolidating in the past few years, with NetApp and Red Hat having made acquisitions to enter this market segment. In 2015, HGST acquired Amplidata and IBM acquired Cleversafe, one of the early pioneers in the object storage space.

The number of open-source options is also on the rise. The following is a list of key open-source object storage projects:

  • OpenStack Swift is the most prominent open-source object store in the market, with availability from the Apache community, several OpenStack distributions and specialized commercial vendors, such as SwiftStack.

  • Ceph is an open-source project supported by Red Hat, SUSE, Intel and DreamHost, among others; it's a unified storage architecture with block and object access.

  • Minio is an emerging vendor that provides an S3-compatible, lightweight, open-source object store to cater to the needs of individual and enterprise developers.

  • OpenIO is an open-source project that was incubated at Atos Origin, with commercial support now being provided by Vade Retro, a French company.

More vendors are starting to offer annual, subscription-based, all-inclusive, software-licensing models on top of the legacy perpetual-licensing models to attract customers to deploy their products.

Product/Service Class Definition

Object storage refers to devices and software that house data in structures called "objects," and serve hosts via protocols (such as HTTP) and APIs (such as REST, SOAP, Amazon Simple Storage Service [Amazon S3], OpenStack Swift and CDMI). Conceptually, objects are similar to files, in that they are composed of content and metadata. In general, objects support richer metadata than file storage by enabling users or applications to assign attributes to objects that can be used for administrative purposes, data mining and information management.

Critical Capabilities Definition

Object storage products often outscore traditional block and file storage products in capacity scalability, security/multitenancy, total cost of ownership (TCO) and manageability, although they tend to lag in performance, interoperability and efficiency. Given the nascent state of the market, several features that clients expect in a traditional NAS system may be absent or less developed in object storage products, due to design considerations or product immaturity. Clients need to understand these trade-offs during the procurement process.

Enterprises should consider the following seven critical capabilities when deploying object storage products, evaluating candidates across all capability areas.

Capacity

The ability of the product to support growth in capacity in a nearly linear manner. It examines object storage capacity scalability limitations in theoretical and real-world configurations, such as maximum theoretical capacity, maximum object size and maximum production deployment.

Storage Efficiency

The ability of a product to support storage efficiency technologies, such as compression, single-instance storage/deduplication, tiering, erasure coding and massive array of idle disks (MAID) to reduce TCO.

Interoperability

The ability of the product to support multiple networking topologies, third-party ISV applications, public cloud APIs and various deployment models.

Manageability

The automation, management, monitoring, and reporting tools and programs supported by the product. In addition, ease of setup and configuration and metadata management capabilities were considered.

These tools and programs can include single-pane management consoles, monitoring systems and reporting tools designed to help personnel seamlessly manage systems, monitor system usage and efficiencies, and anticipate and correct system alarms and fault conditions before or soon after they occur.

Performance

The per-node and aggregated throughput for reads and writes that can be delivered by the cluster in real-world configurations.

Resilience

The platform capabilities for providing high system availability and uptime. Options include high tolerance for simultaneous disk and/or node failures, fault isolation techniques, built-in protection against data corruption, and data protection techniques, such as erasure coding and replication.

Features are designed to meet users' recovery point objectives (RPOs) and recovery time objectives (RTOs). There are several methods for data protection in today's object storage products. RAID is becoming less popular, due to huge capacity overheads and long rebuild times. The simplest way to protect data is replication, which stores multiple copies of the data locally or in a distributed manner. A more innovative data protection scheme is erasure coding, which breaks data into "n" data fragments plus "m" additional coded fragments spread across n+m nodes, so that an object survives the loss of any m fragments; this gives clients configurable choices, depending on their cost and data protection requirements. Enterprises often combine erasure coding and replication, because the former performs well with large files, whereas the latter works well when there are large numbers of small files. WAN costs and performance considerations in distributed environments are also factors.
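The capacity trade-off between these protection schemes can be made concrete with a quick calculation; the parameters below (3-way replication, a 10+4 erasure code) are illustrative choices, not drawn from any product in this research:

```python
def physical_capacity(raw_tb: float, scheme: str) -> float:
    """Return the physical capacity (TB) needed to store raw_tb of user data."""
    if scheme == "replication-3x":   # three full copies of every object
        return raw_tb * 3
    if scheme == "ec-10+4":          # n = 10 data + m = 4 coded fragments
        return raw_tb * (10 + 4) / 10
    raise ValueError(f"unknown scheme: {scheme}")

raw = 1000.0  # 1PB of user data
print(physical_capacity(raw, "replication-3x"))  # 3000.0 TB (200% overhead)
print(physical_capacity(raw, "ec-10+4"))         # 1400.0 TB (40% overhead)
# Both tolerate multiple failures: 3x replication survives the loss of two
# copies; the 10+4 code survives the loss of any 4 of an object's fragments.
```

This is why erasure coding dominates for large objects, while replication remains attractive for small objects, where per-fragment overhead and rebuild traffic would otherwise dominate.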

Security and Multitenancy

The native security features embedded in the platform that provide granular access control, enable encryption of information, deliver robust multitenancy, offer data immutability and ensure compliance with regulatory requirements.

Use Cases

Analytics

This applies to storage consumed by big data analytics applications and packaged business intelligence (BI) applications for domain or business problems.

Performance (more specifically, bandwidth), resilience and scalability are critical to success. Important features include tolerance of disk/node failures, versioning to facilitate checkpointing of long-running applications, and bandwidth to reduce time to insight.

Archiving

Archiving is the earliest enterprise use case for object storage products and has been in use for more than a decade. Products provide cost-effective, scalable, long-term data preservation.

For this use case, object storage products are used to store immutable data for years and decades. Features such as WORM, legal hold and object-level versioning increase the attractiveness of object storage as an archiving target in terms of ease of access, affordability and data immutability. Security, resilience, interoperability and manageability (e.g., indexing and metadata management features) are important selection considerations, and are heavily weighted.

Backup

Infrastructure and operations leaders have used object storage products as backup targets for years, because they provide added scalability for large backup datasets.

Object storage is important for meeting increasing demands for disk-based backup at lower cost. Resilience, storage efficiency, performance and interoperability with a variety of backup ISVs are important selection considerations, and are heavily weighted.

Cloud Storage

This is the most prominent use case for object storage products. Most popular consumer and enterprise public clouds are built on an object storage foundation.

This use case refers broadly to service-provider-built public and private clouds and enterprise-built public, community, hybrid and private clouds, where the access is through REST/HTTP. This is different from VM storage (which is likely to be block storage) that providers or enterprises build for high-performance applications, such as databases.

Resilience, capacity, performance and security are the most important considerations in the choice of products, and are heavily weighted.

Content Distribution

This refers to distributed delivery of content for users across multiple locations to enhance collaboration — i.e., the mobile and social aspects of the Nexus of Forces.

Intelligent and predictive content placement that is served via optimal network routes with high availability, high performance and robust data integrity are key considerations. Performance, resilience, interoperability and manageability are critical selection considerations, and are heavily weighted.

Overall

This refers to the general use case.

Vendors Added and Dropped

Added

Cloudian: In the 2014 release of this Critical Capabilities research, Cloudian did not yet meet Gartner's inclusion criteria, because the company did not have a sufficient number of reference customers across all of the outlined use cases. However, the company now meets the criteria, along with the other requirements for inclusion.

Huawei: In the 2014 release of this Critical Capabilities research, Huawei did not yet meet our inclusion criteria, because the company did not have at least 10 customers with 300TB or more in production. However, the company now meets the criteria, along with the other requirements for inclusion.

Red Hat: In the 2014 release of this Critical Capabilities research, Red Hat did not yet meet our inclusion criteria, because the company did not have a sufficient number of object storage customers across all use cases. However, the company now meets the criteria, along with the other requirements for inclusion.

Dropped

No vendors were dropped since the previous Critical Capabilities research.

Inclusion Criteria

The products covered in this research include object storage hardware or software offerings that are available for purchase and deployment as stand-alone products.

The object storage system needs to meet the following criteria:

  • There is a publicly defined API for data access through a RESTful interface.

  • The vendor owns the object storage software intellectual property.

  • There is support for horizontal scaling of capacity and throughput through independent node additions.

  • At least 10 production customers have deployed 300TB or more of storage based on the platform.

  • The product must have been deployed across all of the use cases outlined in this research.

  • The product must be installed in at least two major geographic regions worldwide.

  • The product must have been in general availability for at least 12 months prior to the publication of this research.

Table 1. Weighting for Critical Capabilities in Use Cases

| Critical Capabilities | Analytics | Archiving | Backup | Content Distribution | Cloud Storage | Overall |
|---|---|---|---|---|---|---|
| Capacity | 18% | 12% | 12% | 6% | 20% | 13% |
| Storage Efficiency | 5% | 10% | 20% | 8% | 5% | 8% |
| Interoperability | 15% | 15% | 15% | 15% | 7% | 12% |
| Manageability | 10% | 15% | 8% | 15% | 10% | 13% |
| Performance | 20% | 8% | 15% | 25% | 16% | 16% |
| Resilience | 18% | 18% | 25% | 21% | 25% | 22% |
| Security and Multitenancy | 14% | 22% | 5% | 10% | 17% | 16% |
| Total | 100% | 100% | 100% | 100% | 100% | 100% |

As of March 2016

Source: Gartner (March 2016)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighted in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

Each of the products/services has been evaluated on the critical capabilities on a scale of 1 to 5; a score of 1 = Poor (most or all defined requirements are not achieved), while 5 = Outstanding (significantly exceeds requirements).

Table 2. Product/Service Rating on Critical Capabilities

| Product | Capacity | Storage Efficiency | Interoperability | Manageability | Performance | Resilience | Security and Multitenancy |
|---|---|---|---|---|---|---|---|
| Caringo Swarm | 3.5 | 2.9 | 3.5 | 3.5 | 3.9 | 3.8 | 3.9 |
| Cloudian HyperStore | 3.9 | 3.5 | 3.9 | 3.9 | 4.1 | 4.1 | 4.1 |
| DataDirect Networks WOS | 4.1 | 3.8 | 3.7 | 3.7 | 4.1 | 3.7 | 2.9 |
| EMC Elastic Cloud Storage | 4.0 | 3.4 | 4.0 | 4.1 | 4.1 | 3.9 | 4.2 |
| HGST Active Archive System | 3.6 | 3.0 | 2.9 | 4.0 | 3.6 | 3.5 | 2.4 |
| Hitachi Data Systems HCP | 3.7 | 4.2 | 4.1 | 4.0 | 3.8 | 4.1 | 4.1 |
| Huawei OceanStor UDS | 3.2 | 2.6 | 2.6 | 3.1 | 3.4 | 3.4 | 2.3 |
| IBM Cleversafe dsNet | 4.6 | 3.5 | 3.6 | 4.4 | 3.8 | 4.2 | 4.1 |
| NetApp StorageGRID | 3.5 | 3.3 | 3.5 | 3.3 | 3.0 | 3.5 | 3.9 |
| Red Hat Ceph Storage | 3.7 | 2.4 | 3.3 | 3.1 | 3.2 | 3.7 | 2.5 |
| Scality Ring | 4.1 | 3.6 | 3.8 | 3.9 | 4.2 | 4.1 | 4.1 |
| SwiftStack Object Storage | 3.9 | 2.6 | 3.5 | 3.9 | 3.0 | 4.0 | 3.7 |

As of March 2016

Source: Gartner (March 2016)

Table 3 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.

Table 3. Product Score in Use Cases

| Product | Analytics | Archiving | Backup | Content Distribution | Cloud Storage | Overall |
|---|---|---|---|---|---|---|
| Caringo Swarm | 3.66 | 3.61 | 3.54 | 3.66 | 3.68 | 3.65 |
| Cloudian HyperStore | 3.98 | 3.96 | 3.91 | 3.98 | 4.00 | 3.98 |
| DataDirect Networks WOS | 3.75 | 3.61 | 3.79 | 3.75 | 3.71 | 3.70 |
| EMC Elastic Cloud Storage | 4.01 | 3.99 | 3.89 | 3.99 | 4.01 | 3.99 |
| HGST Active Archive System | 3.32 | 3.21 | 3.32 | 3.37 | 3.33 | 3.31 |
| Hitachi Data Systems HCP | 3.96 | 4.02 | 4.02 | 3.99 | 3.97 | 4.00 |
| Huawei OceanStor UDS | 3.02 | 2.89 | 3.02 | 3.05 | 3.05 | 3.00 |
| IBM Cleversafe dsNet | 4.07 | 4.06 | 3.97 | 4.00 | 4.14 | 4.07 |
| NetApp StorageGRID | 3.43 | 3.50 | 3.39 | 3.37 | 3.46 | 3.44 |
| Red Hat Ceph Storage | 3.25 | 3.12 | 3.20 | 3.20 | 3.26 | 3.20 |
| Scality Ring | 4.03 | 3.98 | 3.95 | 4.01 | 4.05 | 4.01 |
| SwiftStack Object Storage | 3.59 | 3.61 | 3.46 | 3.51 | 3.65 | 3.59 |

As of March 2016

Source: Gartner (March 2016)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 2 by the weightings shown in Table 1.
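The weighted-sum calculation can be reproduced directly from the published figures. The sketch below recomputes one cell of Table 3 (Caringo Swarm's Analytics score) from the Table 1 weightings and Table 2 ratings; the variable names are illustrative, not part of the published methodology.

```python
# Table 1 weightings for the Analytics use case, as fractions of 1
analytics_weights = {
    "Capacity": 0.18,
    "Storage Efficiency": 0.05,
    "Interoperability": 0.15,
    "Manageability": 0.10,
    "Performance": 0.20,
    "Resilience": 0.18,
    "Security and Multitenancy": 0.14,
}

# Table 2 critical-capability ratings for Caringo Swarm
caringo_ratings = {
    "Capacity": 3.5,
    "Storage Efficiency": 2.9,
    "Interoperability": 3.5,
    "Manageability": 3.5,
    "Performance": 3.9,
    "Resilience": 3.8,
    "Security and Multitenancy": 3.9,
}

# Use-case score = sum of (weighting x rating) over the critical capabilities
score = sum(w * caringo_ratings[cap] for cap, w in analytics_weights.items())
print(round(score, 2))  # 3.66, matching the Analytics row of Table 3
```

The same weighted sum, applied per use case and per product, generates every cell of Table 3.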

Evidence

Scoring for the seven critical capabilities was derived from Gartner research on the object storage market. Each vendor responded in detail to a comprehensive primary-research questionnaire administered by Gartner analysts. Extensive follow-up interviews were conducted with all participating vendors, and reference checks were conducted with end users. This provides an objective process for assessing the vendors' suitability for your use cases.

Critical Capabilities Methodology

This methodology requires analysts to identify the critical capabilities for a class of products or services. Each capability is then weighted in terms of its relative importance for specific product or service use cases. Next, products/services are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities for each use case is then calculated for each product/service.

"Critical capabilities" are attributes that differentiate products/services in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.

In defining the product/service category for evaluation, the analyst first identifies the leading uses for the products/services in this market. What needs are end users looking to fulfill when considering products/services in this market? Use cases should match common client deployment scenarios. These distinct client scenarios define the use cases.

The analyst then identifies the critical capabilities. These capabilities are generalized groups of features commonly required by this class of products/services. Each capability is assigned a level of importance in fulfilling that particular need; some sets of features are more important than others, depending on the use case being evaluated.

Each vendor’s product or service is evaluated in terms of how well it delivers each capability, on a five-point scale. These ratings are displayed side-by-side for all vendors, allowing easy comparisons between the different sets of features.

Ratings and summary scores range from 1.0 to 5.0:

1 = Poor or Absent: most or all defined requirements for a capability are not achieved

2 = Fair: some requirements are not achieved

3 = Good: meets requirements

4 = Excellent: meets or exceeds some requirements

5 = Outstanding: significantly exceeds requirements

To determine an overall score for each product in the use cases, the product ratings are multiplied by the weightings to come up with the product score in use cases.

The critical capabilities Gartner has selected do not represent all capabilities for any product; therefore, they may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making a product/service decision.