
HGST To Display Phase Change Memory, Persistent Memory Fabric At FMS 2015

By Paul Alcorn - Source: HGST

HGST, a Western Digital company, has its own take on the non-volatile memory of the future. At the Flash Memory Summit 2015 (FMS), the company is demonstrating a PCM-based (Phase Change Memory), RDMA-enabled in-memory compute cluster architecture developed with Mellanox Technologies.

The system presents the enticing prospect of an in-memory architecture that can be deployed into compute clusters with random access latencies below two microseconds (yes, microseconds) for 512B reads, and throughput topping 3.5 GBps for 2KB transfers over the fabric.
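To put those quoted figures in perspective, a quick back-of-the-envelope calculation (illustrative arithmetic only; the transfer sizes and rates are the ones stated above) shows what they imply:

```python
# Illustrative arithmetic based on the quoted fabric numbers.
read_size = 512          # bytes per random read (quoted)
latency = 2e-6           # seconds, quoted upper-bound round-trip latency

bulk_size = 2 * 1024     # bytes per transfer for the throughput figure
throughput = 3.5e9       # bytes per second (3.5 GBps, quoted)

# At 3.5 GBps with 2KB transfers, the fabric sustains roughly:
transfers_per_sec = throughput / bulk_size
print(f"{transfers_per_sec / 1e6:.2f} million 2KB transfers/sec")  # ~1.71 million

# A single outstanding 512B read at 2 us round-trip caps serial access at:
serial_reads_per_sec = 1 / latency
print(f"{serial_reads_per_sec / 1e6:.2f} million serial 512B reads/sec")  # 0.50 million
```

In other words, sustaining the quoted throughput requires keeping multiple requests in flight; the sub-two-microsecond figure is about how quickly any single access completes.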

Last year at the show, HGST demonstrated its PCIe PCM SSD churning out 3 million IOPS, and we spoke with HGST's engineers about the challenges associated with pushing the boundaries with PCM. They indicated the hardware was relatively easy to design, but reaching the full potential of the device was rife with challenges due to the inherent inefficiencies of the software stack.
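The software-stack problem the engineers described is easy to quantify: at 3 million IOPS, the per-I/O time budget is tiny. A rough, illustrative calculation (the 3 million IOPS figure is from the article; the syscall cost is a general order-of-magnitude estimate, not a measurement of this system):

```python
# Amortized software budget per I/O at the demonstrated rate.
iops = 3_000_000
budget_ns = 1e9 / iops
print(f"{budget_ns:.0f} ns per I/O")  # ~333 ns

# A conventional storage path (system call, interrupt, block layer) can
# easily cost on the order of a microsecond per I/O -- several times the
# entire budget -- which is why a leaner custom stack was needed.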

Interestingly, the company originally experimented with the NVMe protocol over the PCIe bus and found it lacking. NVMe was designed from the ground up for future non-volatile memories, not just NAND (though NAND is the most common implementation), so the fact that it couldn't keep up with PCM raises eyebrows.

When NVMe fell short, the company created its own custom stack, which allowed the SSD to reach its full potential of 3 million IOPS (at remarkably low latency). Now the company is working with Mellanox to extend that performance beyond the server with RDMA technology. This is another departure from an NVMe industry darling, NVMe Over Fabrics, which is speeding its way through the standards committees as we speak.

Above, you can see one of the pictures we took of HGST's PCM SSD last year. It will be interesting to see how the hardware side has evolved as well; the platform has surely matured by this point. We will gather more information once we are on the ground.

HGST's and Mellanox's custom persistent memory fabric is designed to provide reliable, scalable, low-power memory with DRAM-like performance. Most importantly, it purportedly requires no changes to the system BIOS or applications, which is important from an ease-of-deployment and qualification standpoint. The fabric utilizes memory mapping of remote PCM devices via RDMA (Remote Direct Memory Access) over existing network infrastructures, such as Ethernet or InfiniBand.
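The access model described above, remote PCM exposed to applications as ordinary mapped memory, can be illustrated locally with `mmap`. This is an analogy only: the temporary file below stands in for a PCM region, and in the real fabric an RDMA-aware driver would service these accesses over the network transparently, which is how the design avoids application changes.

```python
# Local analogy for memory-mapped persistent memory access.
# A temp file stands in for a PCM region; the application touches it with
# plain loads/stores instead of read()/write() storage calls.
import mmap
import os
import tempfile

SIZE = 4096  # one page standing in for a mapped PCM region

fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as region:
    region[0:5] = b"hello"      # byte-level store, no write() syscall per access
    data = bytes(region[0:5])   # byte-level load, no read() syscall per access

os.close(fd)
os.unlink(path)
print(data)  # b'hello'
```

The point of the analogy: because the application only sees mapped memory, the fabric plumbing underneath (Ethernet or InfiniBand RDMA) can change without the application, or the BIOS, knowing.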

HGST's new implementation may present an alternative to the new 3D XPoint persistent memory from IM Flash Technologies (IMFT), the Intel/Micron joint venture. 3D XPoint will require application modifications to be used as an adjunct to system DRAM, but for block storage purposes, IMFT is committed to utilizing the competing NVMe standard -- and one can easily imagine NVMe Over Fabrics will be a big part of IMFT's equation for success in the in-memory processing space.

NVMe has the advantage of being an open, standard interface, while HGST's software stack is proprietary. However, HGST is wise to couple its implementation with RDMA, which enjoys widespread adoption. The battle over which high-speed interconnect will be utilized in clustered architectures is far from over, even for existing technologies, but it is interesting to see the players line up with their own unique implementations for future non-volatile products.

"Mellanox is excited to be working with HGST to drive persistent memory fabrics," said Kevin Deierling, vice president of marketing at Mellanox Technologies. "To truly shake up the economics of the in-memory compute ecosystem will require a combination of networking and storage working together transparently to minimize latency and maximize scalability. With this demonstration, we were able to leverage RDMA over InfiniBand to achieve record-breaking round-trip latencies under two microseconds. In the future, our goal is to support PCM access using both InfiniBand and RDMA over Converged Ethernet (RoCE) to increase the scalability and lower the cost of in-memory applications."

On a side note, HGST recently underwent a branding makeover. (Hopefully in anticipation of MOFCOM regulatory approval of its WD merger?) Check out the new logos and look here.

The new product will be on display at the Flash Memory Summit 2015 in Santa Clara from August 11-12 in the HGST booth.

Paul Alcorn is a Contributing Editor for Tom's IT Pro, covering Storage. Follow him on Twitter and Google+.
