One Stop Systems Displays OCP-Compliant All-Flash Storage, Up To 8M IOPS
We expected to see more datacenter M.2 SSD implementations at the Flash Memory Summit, particularly in the OCP (Open Compute Project) space, and FMS delivered. Currently, there are very few datacenter-class M.2 offerings available on the open market, such as the Intel M.2 DC S3500 SSD we recently evaluated. However, the M.2 ecosystem is a powder keg waiting to explode as architects exploit the inherent advantages of M.2 SSDs.
A visit to the One Stop Systems booth confirmed our suspicions that M.2 will be out in full force over the coming year. Above, we can see an impressive array of 61 PCIe 3.0 x8 slots in the 2 OpenU Flash Storage Array (OCP-FSA). Each M.2 carrier board in the demo system only carries one M.2 SSD, but the system accepts HH-HL (Half-Height, Half-Length) boards with up to four M.2 SSDs per carrier, which enables up to 262 TB in the slim 2 OpenU OCP-compliant chassis. Three PLX PEX 8796 switches manage PCIe connections from the cables to the M.2 canisters.
The tray that houses the carrier cards slides out to provide easy access for servicing the PCIe hot-swappable flash modules, and we observed the swing arm on the rear of the chassis that allows the drawer to slide out while the system is still operational. The system can connect up to three clients via externally-cabled PCIe 3.0 x16 connections with 128 Gbps of raw bandwidth each (roughly 16 GBps after encoding overhead, utilizing the PCIe cards we cover below).
Heat is always a concern in dense designs (especially considering that each slot supports up to 50W of power), so the system has three 143 CFM fans to keep the flash cool. The chassis delivers up to 3780W through the OCP rack power delivery system, and it provides IPMI v2.0-compliant monitoring functionality.

The One Stop Systems FSA 200 is going to be hard to beat at the performance end of the spectrum. The 3U chassis packs in up to 200 TB of screaming-fast flash by supporting up to 32 HL-FH (Half-Length, Full-Height) x16 single-slot boards. Alternatively, it can host up to 64 single-slot PCIe SSDs (or M.2 holsters) in its x8 (or greater) slots.
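The 2 OpenU array's power figures check out on the back of an envelope: the 61-slot count, 50W-per-slot limit, and 3780W delivery figure all come from the specs above, and the leftover headroom covers fans, switches, and management logic.

```python
# Rough power-budget check for the 2 OpenU OCP-FSA (figures quoted above).
SLOTS = 61               # PCIe 3.0 x8 slots in the 2 OpenU chassis
WATTS_PER_SLOT = 50      # maximum draw each slot supports
CHASSIS_BUDGET_W = 3780  # power delivered via the OCP rack power system

# Worst case: every slot pulling its full 50W allowance.
worst_case_slot_draw = SLOTS * WATTS_PER_SLOT       # 3050 W
headroom = CHASSIS_BUDGET_W - worst_case_slot_draw  # left for fans, PLX switches, BMC

print(f"Worst-case slot draw: {worst_case_slot_draw} W")
print(f"Headroom: {headroom} W")
```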
The FSA 200 provides up to 8 million IOPS via eight canisters populated with eight PCIe slots apiece, for a total of 64 PCIe slots. Each canister arranges its slots in two ranks of four. Two slots in each canister offer x16 connectivity, and the remaining six offer x8 connections. A PLX PEX 8796 PCIe 3.0 switch handles PCIe switching for each canister.
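The canister arithmetic above adds up neatly. The slot counts and the 8M IOPS figure are from the article; the per-slot IOPS number is a simple average rather than a measured result, and the x16-uplink inference is our reading of the lane budget, since the PEX 8796 is a 96-lane part.

```python
# Slot, lane, and IOPS arithmetic for the FSA 200 (slot counts from the article).
CANISTERS = 8
X16_SLOTS_PER_CANISTER = 2
X8_SLOTS_PER_CANISTER = 6

total_slots = CANISTERS * (X16_SLOTS_PER_CANISTER + X8_SLOTS_PER_CANISTER)  # 64

# Downstream lanes consumed by one canister's slots.
downstream_lanes = X16_SLOTS_PER_CANISTER * 16 + X8_SLOTS_PER_CANISTER * 8  # 80

# The PEX 8796 is a 96-lane switch, which leaves exactly 16 lanes after
# fanning out 80 downstream lanes -- enough for one x16 uplink per canister.
uplink_lanes = 96 - downstream_lanes

total_iops = 8_000_000
avg_iops_per_slot = total_iops // total_slots  # average, not a measured spec
```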
It takes a firehose to get this kind of performance out of the box and to the client machines. The FSA 200 utilizes four PCIe 3.0 x16 interfaces to connect to clients. The display above illustrates a one-cable connection from the FSA 200 (top) to a client server (bottom). The system offers an N+1 redundant 1,200W hot-swappable power supply configuration. These systems are also utilized in military mobile command center applications, so reliability and durability are top priorities. There are also two smaller systems (FSA 50, FSA 25) available.
Both systems connect via 8.0 GT/s, 32-lane PCIe 3.0 PLX PEX8733 switches. The PCIe x16 cards come in an HH-HL form factor and support cable lengths from 0.5m (1.64') up to 2.0m (6.56'). Utilizing PCIe as the interconnect between the storage chassis and the clients is arguably the best approach, as layering a networking interface on top would add untenable amounts of latency to these very expensive architectures.
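For context on what an externally-cabled PCIe 3.0 link actually delivers, the numbers work out as follows. These are standard PCIe 3.0 figures (8 GT/s per lane with 128b/130b encoding), not vendor-supplied measurements, and they ignore protocol overhead from TLP headers and flow control.

```python
# PCIe 3.0 bandwidth math: 8 GT/s per lane, 128b/130b line coding.
GT_PER_S = 8.0
ENCODING = 128 / 130  # usable fraction after 128b/130b encoding

def link_gbps_raw(lanes: int) -> float:
    """Raw signaling rate of a link in Gbps."""
    return GT_PER_S * lanes

def link_gbytes_per_s(lanes: int) -> float:
    """Post-encoding bandwidth in GB/s, before packet/protocol overhead."""
    return GT_PER_S * lanes * ENCODING / 8

# An x16 cable link: 128 Gbps raw, ~15.75 GB/s usable -- so the FSA 200's
# four x16 client links total roughly 63 GB/s of aggregate fabric bandwidth.
print(link_gbps_raw(16), round(link_gbytes_per_s(16), 2))
```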
The company also offers testing and validation enclosures for storage vendors. The 4U Flash Storage Array Test System is a direct-attached test system that allows insertion and removal of 32 NVMe SSDs or PCIe cards. The enclosure can additionally test U.2 (SFF-8639) SSDs, and the demo chassis is partially populated with U.2 holsters. Each PCIe slot has independent power control (on/off), enabling use in fluid test/validation environments. The company also offers PCIe riser cards for power measurement. The chassis provides 2,400W of redundant power and has a hinged top to facilitate accessibility.
It is not surprising to find OCP-compliant designs forging ahead in pursuit of the maximum density and efficiency we can extract from today's flash products. The move to M.2 in the OCP segment is hardly surprising, and we expect other companies to also exploit the new dense SSDs in distributed arrays.