
NVMe Over Fabrics Coming To Market As Mangstor Releases NX6320 AFA

By Paul Alcorn - Source: Tom's IT Pro

Mangstor was one of the pioneers of the NVMe-Over-Fabrics movement as it developed and qualified its NX6320 All-Flash Arrays (AFAs), so it isn't entirely surprising to find the company announcing the general availability of the bleeding-edge AFA at Supercomputing 2015. Mangstor employs its MX6300 NVMe SSD Series, which we tested thoroughly in our product evaluation, as the backbone of the NX6320.

The NVMe specification was designed from the ground up to provide a faster and more efficient interface for all future non-volatile memories, and not specifically NAND. However, its refined and lightweight attributes provide a significant advance over the current storage protocols utilized in tandem with NAND-powered SSDs.

As with any radical breakthrough, it wasn't long before architects envisioned a future that included utilizing NVMe for other applications, and thus the NVMe-Over-Fabrics (NVMeF) movement was born. Traditional storage implementations convert the storage protocol to a networking protocol before data begins its traversal of the associated networking infrastructure. The protocol translation, and the inherent inefficiencies of the network protocol itself, can incur excess latency and other performance penalties.

NVMeF allows the storage to utilize the same lightweight protocol employed for inter-server communication over RDMA networks. This provides the server with access to the data over the network with nearly the same latency as a direct-attached NVMe SSD. The NX6320 is available in 8, 16 and 32 TB capacities, and random read/write performance reaches up to 3.0/2.25 million IOPS. Sequential throughput is equally impressive, at up to 12.0/9.0 GBps read/write.
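As a rough sanity check on those figures (assuming the common 4 KiB random I/O transfer size, which the announcement does not state), the random-read IOPS rating implies an aggregate bandwidth in the same ballpark as the quoted sequential throughput:

```python
# Back-of-the-envelope check: 3.0 million random reads per second
# at an assumed 4 KiB block size (our assumption, not a stated spec).
IOPS = 3.0e6
BLOCK_BYTES = 4096

bandwidth_gbs = IOPS * BLOCK_BYTES / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # ~12.3 GB/s, in line with the 12.0 GBps sequential figure
```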

Caltech physicists are utilizing the NX6320 to simulate how they share large data streams between the Large Hadron Collider (LHC) at CERN and its global web of universities and research labs. The team achieves 2 to 3 million IOPS over a 100 Gbps InfiniBand network with only 100-200 microseconds of latency, which is a 10x improvement over standard flash-based implementations. The increased speed allows researchers to reduce the time for finding, querying, sharing and analyzing LHC's data from 24 hours to four hours.
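Those numbers hang together on the wire, too. If we again assume 4 KiB transfers (an illustrative assumption, not a figure Caltech reported), the 2 to 3 million IOPS range works out to roughly 65 to 98 Gb/s, approaching the capacity of the 100 Gbps link:

```python
# Wire-level bandwidth implied by the reported IOPS range, assuming
# 4 KiB transfers (an illustrative assumption, not a stated figure).
BLOCK_BYTES = 4096
BITS_PER_BYTE = 8

low_gbps = 2.0e6 * BLOCK_BYTES * BITS_PER_BYTE / 1e9   # ~65.5 Gb/s
high_gbps = 3.0e6 * BLOCK_BYTES * BITS_PER_BYTE / 1e9  # ~98.3 Gb/s
print(f"{low_gbps:.1f} - {high_gbps:.1f} Gb/s on a 100 Gbps link")
```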

Mangstor has a large presence at Supercomputing 2015, and its hardware is on display in the Caltech, Caringo, Magma, Mellanox and Open Grid Computing booths. Many of the heavyweight vendors are working on NVMe-Over-Fabrics products, which indicates the transition will be quick. Mangstor scored the notable distinction of being the first vendor to market with a shipping NVMeF product, which is a significant win for the scrappy start-up.

Paul Alcorn is a Contributing Editor for Tom's IT Pro, covering Storage. Follow him on Twitter and Google+.