Server SAN: Demystifying Today's Newest Storage Buzzword
The Server SAN, along with hyperscale and software-defined storage, is the latest buzzword in the enterprise storage space. But is Server SAN a distinct and disruptive technology or just a different way of putting existing technologies together? Here's how to make sense of this new storage trend.
Technologists like to think that the next big, disruptive technology will be based on a never-before-considered concept that changes the way we think of physics. But just as cloud computing is based on a grown-up version of mainframe-based client/server technology, tomorrow's massive storage networks might well be based on a 1980s-era clustering technology developed by the now-defunct Digital Equipment Corp.
Dave Floyer, co-founder and chief technology officer of the research firm Wikibon, defines a Server SAN (storage-area network) as "software-led storage built on commodity servers with directly attached storage." Functionally speaking, the Server SAN is the convergence of software-defined storage, flash memory and the hyperscale service market. Google, Facebook, Amazon, Microsoft and others that build enormously large networks that need to scale up continuously are examples of what Wikibon calls hyperscale.
These hyperscale providers are price-sensitive both to what they must pay for technology and to what their clients will pay for their services, according to Wikibon.
David Vellante, also a co-founder of Wikibon and the firm's CEO, characterizes the Server SAN as a disruptive technology that will change the way IT professionals look at servers and storage for years to come. But whether the Server SAN is a distinct technology or just a different way of putting existing technologies together is an issue that has yet to gain universal agreement.
Server SAN Disrupts the Storage Networking Market
"For decades we've seen storage function shift away from the server toward the SAN," Vellante says. "It's shifting back as multicore processors and flash come into prominence. The best I/O (input/output) is no I/O and the closer things move to the processor the faster they will run. So the Server SAN disrupts a $30 billion storage networking market."
Some of the basic building blocks of storage are setting the market on its ear, notes Jim Bagley, a senior analyst with the storage consultancy SSG-NOW in Austin, TX. Today's high-end commodity drives, such as 6 TB rotating disks and 2 TB SSDs, have intelligence built directly into the drive, allowing data processing to be conducted within the confines of the drive itself, he says. This on-board intelligence further blurs the lines of where data processing actually takes place -- is it on the server's CPU or in the intelligence built into the drives?
Certainly the way networks in general and storage in particular were managed in the past continues to evolve with the technology, says Greg Schulz, founder of the storage consultancy StorageIO and author of Cloud and Virtual Data Storage Networking and The Green and Virtual Data Center. He agrees that the days of companies having both network and storage administrators are, for the most part, history.
Schulz likens Server SANs to the VAXcluster technology developed by Digital in 1984 to expand the processing capabilities of its VAX computer family. (Virtualization was a significant component even 30 years ago; the name VAX stood for Virtual Address eXtension, and Digital's VMS operating system for Virtual Memory System.)
Essentially, a VAXcluster was an interconnected group of servers networked by what was called a star coupler. The coupler was a server acting as a device controller, with each server running its own version of the VMS operating system but making all the storage on all the systems look like one storage environment, as if all the drives were direct-attached storage connected to the same backplane.
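The pooling idea described above can be sketched in a few lines of code. This is a conceptual illustration only, with hypothetical node and drive names; real clustering software (VAXcluster then, Server SAN software now) also handles locking, failover and caching.

```python
# Conceptual sketch: present every node's direct-attached drives
# as one pooled storage environment. All names are hypothetical.

nodes = {
    "node-a": {"dka0": 600, "dka1": 600},   # drive name -> capacity (GB)
    "node-b": {"dkb0": 1200},
    "node-c": {"dkc0": 600, "dkc1": 1200},
}

def pooled_view(cluster):
    """Flatten each node's local drives into a single namespace,
    the way clustered storage software makes remote drives look local."""
    pool = {}
    for node, drives in cluster.items():
        for drive, capacity in drives.items():
            pool[f"{node}/{drive}"] = capacity
    return pool

def total_capacity(cluster):
    """Aggregate capacity across every drive on every node."""
    return sum(sum(drives.values()) for drives in cluster.values())
```

Any server in the pool can then address `node-b/dkb0` as if it were local, which is the illusion both the VAXcluster and today's Server SAN software aim to create.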
Generally speaking, Vellante agrees with the cluster comparison, although he notes that the interconnect today is Ethernet and storage is now priced as a commodity. When Digital developed the VAXcluster, "everything VAX was closed and proprietary and expensive."
Bagley concurs that Digital's clusters ran on proprietary software and the communications protocols were relatively new, while "today the protocols are all grown up" and the technology building blocks are orders of magnitude faster on the same chips.
Today's network administrators are generalists who know something about networks and storage, but also a lot about virtualization, Schulz says. Rather than concerning themselves with the details of storage such as LUNs (logical unit numbers) and other storage intricacies, they see it in terms of virtual servers and Windows storage shares. "They might not know they're doing storage," he says. "It's just an app to them."
What Defines a Server SAN?
As a result, the Server SAN can include any storage technology plugged into a server, be it direct-attached storage (DAS), solid-state drives (SSD), hybrid drives or flash memory. Today's servers are leveraged to do multiple tasks, including storage management, file sharing and communications, so systems administrators need to be generalists with a variety of skills.
Regardless of what one calls their environment, Schulz says, the resulting network is essentially the same. As networks grow, the company needs to invest in infrastructure to solve problems. The question, he says, is whether the investment is made in physical servers (which can then launch multiple virtual servers with DAS), in storage (which is then sliced up among multiple servers using storage management tools), or in management software. Just moving the problems from hardware to software might not ultimately save money; the goal is to find the bottlenecks and address them directly, he notes.
Bagley says that while vendors are working on ways to differentiate their software-defined storage offerings, it's the Application Programming Interface (API) Orchestration Layer where the vendors' "secret sauce" is focused. Daniel Jacobson, director of engineering for the Netflix API, describes the Orchestration Layer this way in a recent post: "An API Orchestration Layer (OL) is an abstraction layer that takes generically-modeled data elements and/or features and prepares them in a more specific way for a targeted developer or application."
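Jacobson's definition can be made concrete with a small sketch: a generically modeled record on the server side is reshaped differently for each target client before it goes over the wire. Everything here, from the field names to the client types, is hypothetical and for illustration only.

```python
# A minimal sketch of an API orchestration layer: one generic data
# model, reshaped per target client. All names are hypothetical.

GENERIC_TITLE = {
    "id": 42,
    "title": "Example Show",
    "synopsis": "A long synopsis of the show ...",
    "artwork": {"small": "s.jpg", "large": "l.jpg"},
}

def orchestrate(item, client):
    """Prepare a generically modeled record for a specific client."""
    if client == "tv":
        # Big screens get large artwork and the full synopsis.
        return {"title": item["title"],
                "art": item["artwork"]["large"],
                "synopsis": item["synopsis"]}
    if client == "mobile":
        # Small screens get a trimmed payload.
        return {"title": item["title"],
                "art": item["artwork"]["small"]}
    raise ValueError(f"unknown client: {client}")
```

The generic model stays stable while each client-specific shape evolves independently, which is why vendors treat this layer as their "secret sauce."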
Today's Orchestration Layer can be compared to the proprietary protocols built into the VAXcluster in that it differentiates one vendor's product from another, Bagley notes.
The real challenge today is managing the storage and computing resources, Schulz says. He likens some of today's evolving storage and virtualization issues to a traditional shell game with a pea under each shell; under one shell is storage, under another is servers, and under the third is the virtualization layer that ties it all together.
"What we're not hearing about is what manages everything," he says. Rather than trying to pick the correct shell and apply management to that pea alone, the player should smash all three shells and then put them back together as one, applying management to the entire infrastructure rather than doing it piecemeal, he says. If that's not possible, then IT should put together a virtual team of engineers with expertise representing each of the peas to manage the environment.