IBM’s Spectrum Storage: Now a Suite, Not Just a Family

Please note: This guest commentary is by independent IT industry analyst David G. Hill, principal of the Mesabi Group. 

By David G. Hill, Mesabi Group  January 27, 2016

IBM has announced the IBM Spectrum Storage Suite, a single software license that covers all six IBM Spectrum Storage family products. The suite's licensing model is based on the volume (in TBs) of data managed. IBM's goal is to encourage the use and adoption of its software-defined storage (SDS) products not only in traditional SAN storage environments, but also in a newer storage architecture, what the company calls “storage-rich servers,” where the bulk of data growth seems to be occurring. Let's dig deeper into what IBM is doing while also discussing data diversity.

More data diversity leads to more storage architecture diversity

Data diversity is the watchword of the day, and it affects storage architecture because different architectures are often used to house newer sources and types of data. That, in turn, creates a need to manage all of this data efficiently, both in terms of cost controls and storage processes. This is the impetus for the software-defined storage strategy underlying IBM Spectrum Storage. But why is data and storage architecture diversity such a serious subject?

Originally, enterprises mainly generated and stored structured, block-oriented data, such as that produced by online transaction processing (OLTP) applications like ERP. Storage area networks (SANs) arose to accommodate this data because they decouple storage from its server and enable many servers to share the same data pool (such as the one contained in a single storage array). The need to share file-based data (such as documents) also arose, and network-attached storage (NAS) solutions were developed to house that material in shared storage pool environments. The two have now come together under the rubric of unified storage, although the term SAN is still loosely used to cover both.

But in the broader world of data, major transformations continue. Web-based applications (including social media), mobile data, information generated by the Internet of Things, and the wider use of analytics have led to new storage technologies supporting semi-structured and unstructured data. Although this material might be considered file-based, it is more frequently treated as object-based.

These broader data sources are growing much faster than traditional block and file data, and a non-SAN storage architecture is necessary to accommodate them. That architecture is what IBM calls the storage-rich server model. In a sense, this is a return to the past, since it is simply a variant of the old direct-attached storage (DAS) model, in which all the storage a server accessed was physically attached to it in the form of just a bunch of disks (JBOD) arrays.

So why go back? The answer is that many new requirements (such as object storage for capacity-driven applications) do not need the controller-based overhead, switched fabric, and storage cache that SANs supply for OLTP applications. Moreover, the power of servers and the density of storage have increased dramatically since SANs first appeared, so the storage-rich server world is more or less a turbocharged DAS.

This model plays well in the service provider and cloud worlds, and it is becoming more attractive for enterprises to deploy alongside their existing shared storage infrastructure because it offers the opportunity to transition appropriate workloads to a storage-rich server environment. However, although the model avoids the hardware overhead of shared storage, it still has data management requirements (such as data protection) and storage management requirements (such as monitoring and control of the physical storage).

By definition, these solutions qualify as software-defined storage, since the management functions depend on software running on the server itself. And this is what IBM Spectrum Storage brings to the table: a rich, proven portfolio of software-defined storage products that can provide the necessary management capabilities for both traditional shared storage and newer storage-rich server environments.

IBM Spectrum Storage provides for SDS consumption models

When IBM announced the Spectrum Storage family in February 2015, it took a step back and recast six existing products as software-defined storage solutions in the sense that the products were physically decoupled from vendor-specific hardware. In other words, even though there has to be a physical storage device or environment that the software manages, that storage does not have to be IBM’s.

For example, Spectrum Accelerate is derived from the software that manages IBM XIV storage arrays. Users can continue to run Spectrum Accelerate in conjunction with XIV storage hardware in the traditional appliance consumption model, but now have the option of using it for other storage consumption models, including self-built clouds, cloud servers and software that runs on a server. Software running on a server is appropriate for the storage-rich server world where the underlying storage often leverages commodity disks, not IBM solutions.

I'll repeat this for emphasis: these capabilities arise because IBM's Spectrum Storage Suite products are software-based, not hardware-based. IBM has a leg up on making an end-to-end solution sale if a customer's SAN infrastructure already uses its Spectrum Storage family products. But even if not, IBM can argue that it has a more comprehensive portfolio of software-defined storage than any other vendor. And now it can also argue that these products have the advantage of being part of a suite.

IBM Spectrum Storage: Now a Suite, not just a family

The key differentiating feature of the suite designation is a simple new licensing model that delivers unlimited access to all six IBM Spectrum Storage products. That pricing model is based on the total number of TBs of usable physical storage being software-defined. When more than two products are used, IBM asserts that costs are likely to be lower than if the products had been licensed individually.

Why is this important? For organizations entrenched in just SAN storage, the value of the Spectrum Storage Suite will increase only if more than two products are used. However, for organizations adding storage-rich servers to their traditional storage environments, the equation changes significantly.

For example, both storage environments would probably use IBM Spectrum Control (which provides for performance management, storage provisioning, availability monitoring and reporting, etc.). The SAN environment would probably use Spectrum Virtualize, which is based upon SAN Volume Controller (SVC). The storage-rich environment might start off with Spectrum Accelerate, but then move to Spectrum Scale for scale-out file and object storage management. In addition, one or both environments may benefit from IBM Spectrum Protect for data-protection-related activities.
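To make the more-than-two-products claim concrete, here is a minimal back-of-the-envelope sketch in Python. Every capacity and per-TB price in it is a hypothetical placeholder rather than an IBM published figure, and the product mix is simply the one described above; the point is only the shape of the comparison: one capacity-based suite charge versus the sum of separate capacity-based charges per product.

# Hypothetical comparison of suite vs. individual per-TB licensing.
# All prices and capacities are illustrative placeholders, not IBM's.

MANAGED_TB = 500          # usable physical TB being software-defined (assumed)
SUITE_PRICE_PER_TB = 100  # hypothetical suite price per TB

# Hypothetical per-TB prices if each product were licensed separately.
INDIVIDUAL_PRICE_PER_TB = {
    "Spectrum Control": 40,
    "Spectrum Virtualize": 45,
    "Spectrum Accelerate": 35,
    "Spectrum Scale": 50,
    "Spectrum Protect": 40,
}

def suite_cost(tb):
    """One capacity-based charge covers all six products."""
    return tb * SUITE_PRICE_PER_TB

def individual_cost(tb, products):
    """Sum a separate capacity-based charge for each product in use."""
    return sum(tb * INDIVIDUAL_PRICE_PER_TB[p] for p in products)

if __name__ == "__main__":
    in_use = ["Spectrum Control", "Spectrum Virtualize",
              "Spectrum Scale", "Spectrum Protect"]
    print(f"Suite:      ${suite_cost(MANAGED_TB):,}")
    print(f"Individual: ${individual_cost(MANAGED_TB, in_use):,}")
    # With these made-up numbers, four products licensed separately cost
    # more than the single suite license; with only one or two products
    # the comparison would tip the other way.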

But that is not all. For IT, going back to the budget well to ask for more money for an additional software license is an unpleasant process that requires complex financial justification on top of business justification (for example, data protection has business value, but defining that value in terms of ROI is hard). Not only that, but the process requires IT to cool its heels for weeks or months while waiting for financial approval. With IBM's new simplified pricing model, IT can acquire and deploy necessary software in a timelier fashion.

All in all, IBM's approach should lower the barriers to adopting an additional software product, and lower them considerably if the new product does not increase the storage under management (and it is not likely to). Yes, there is a learning curve for IBM's new products, but if they provide real value (such as needed data protection, compression, or some other service), then adoption should be relatively easy. Of course, IBM likes this strategy because it keeps all the software-defined products in its own family, but it also benefits IT with additional, needed data and storage management functionality. So the use of the term suite is appropriate.

Mesabi musings

Data continues to grow rapidly, but much of that growth now comes from non-traditional sources, such as sensor-based data from the Internet of Things. Much of that new data is being housed on what are called storage-rich servers instead of in conventional SAN infrastructures. This new environment badly needs data and storage management tools, since traditional solutions are unable to fully support those data sources.

IBM is now providing a comprehensive set of software-defined storage tools under the rubric of the IBM Spectrum Storage Suite to meet this need. These solutions can be applied to data residing in both SAN infrastructures and storage-rich server environments, enhancing the effectiveness of IT. Moreover, the new simplified licensing policy for IBM's Spectrum Storage Suite should allow enterprise customers to improve both cost and process efficiencies.

© 2016 Mesabi Group. All rights reserved.

About the Mesabi Group

The Mesabi Group (www.mesabigroup.com) helps organizations make their complex storage, storage management, and interrelated IT infrastructure decisions easier by making the choices simpler and clearer to understand.