Conclusion

Testing the One Stop Systems Flash Storage Array 200 was, frankly, one of the most exciting evaluations we have undertaken. The 32 SanDisk 3.2TB Fusion ioMemory SX300s provided a beefy 102.4TB of storage capacity, and the FSA 200 can house up to 204.8TB of flash when populated with 6.4TB models. We also deployed 32 of the cutting-edge Intel DC P3700 NVMe SSDs to provide a high-performance comparison point.

Judging the FSA 200 against competing devices is impossible; there simply are no comparable systems at our disposal. We also considered various ways of comparing the SSD scalability of the FSA 200 to other setups, such as server motherboards, but a standard motherboard does not offer enough PCIe slots for a meaningful comparison. Our mission is simply to determine whether the chassis lives up to its billing.

The FSA 200 adheres to the increasingly popular JBOF (Just a Bunch Of Flash) philosophy of allowing users to provision and share a central flash resource among multiple servers, then apply their own data services as needed. We tested the FSA 200 with one large volume to avoid additional host processing overhead during our tests, but most users will carve multiple storage volumes out of the large repository, as sketched below. A single FSA 200 supports up to four servers, and most users will outfit the hosts with high-throughput networking adapters, such as 25, 40, 50 or 100GbE Ethernet or Gen 6 Fibre Channel. These configurations can host all-flash volumes for other servers, thus magnifying the impact of a single FSA 200 appliance.
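To make that provisioning model concrete, here is a minimal sketch of carving per-host volumes out of a pooled set of NVMe SSDs with Linux LVM. The device paths, pool name and four-way split are illustrative assumptions on our part, not the FSA 200's actual tooling.

```python
# Hypothetical JBOF provisioning sketch using Linux LVM via subprocess.
# Device paths, pool name and volume split are illustrative assumptions.
import subprocess

NVME_DEVICES = [f"/dev/nvme{i}n1" for i in range(32)]  # assumed device names
HOSTS = ["host-a", "host-b", "host-c", "host-d"]       # up to four attached servers

def run(cmd):
    """Echo and execute a command, raising on failure."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Pool every SSD into a single volume group...
run(["pvcreate", *NVME_DEVICES])
run(["vgcreate", "fsa_pool", *NVME_DEVICES])

# ...then carve one logical volume per attached host. Each host layers its
# own data services (file system, replication, RAID) on top of its slice.
for host in HOSTS:
    run(["lvcreate", "-l", "25%VG", "-n", f"vol_{host}", "fsa_pool"])
```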

The Intel DC P3700 SSDs offered the most performance during testing, but the SanDisk Fusion ioMemory SX300 also performed well in our tests. The sixteen Intel DC P3700s provided the peak measurement of 3,055,736 IOPS, and we were only employing half of the FSA 200 during the test. The Intel SSDs stepped up to the plate and knocked 13,629MB/s of sequential write throughput out of the park, and the SanDisk Fusion ioMemory SSDs responded with their own home run-worthy 13,186MB/s of read throughput.

The test results indicate that users can reach over 6 million IOPS and 26GB/s of throughput from the full complement of SSDs with multiple hosts. There are SSDs on the market today, and many more on the way, that offer more performance than both of the models we utilized in our tests. As the ecosystem continues to mature, users will likely achieve better performance with newer SSD models, particularly in less-intense workloads, where scaling improves.
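For readers who want to trace that projection, it is a simple doubling of the half-chassis measurements above; the linear-scaling assumption across both containers is ours, not a measured result.

```python
# Back-of-the-envelope projection from the half-chassis results, assuming the
# second container scales linearly (our assumption, not a measured result).
half_iops = 3_055_736       # peak IOPS, sixteen Intel DC P3700s, one container
half_read_mbps = 13_186     # SanDisk peak sequential read, MB/s
half_write_mbps = 13_629    # Intel peak sequential write, MB/s

full_iops = 2 * half_iops                    # ~6.1 million IOPS
full_read_gbs = 2 * half_read_mbps / 1000    # ~26.4 GB/s
full_write_gbs = 2 * half_write_mbps / 1000  # ~27.3 GB/s

print(f"Projected IOPS:  {full_iops:,}")
print(f"Projected read:  {full_read_gbs:.1f} GB/s")
print(f"Projected write: {full_write_gbs:.1f} GB/s")
```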

There are host processing requirements that can hamper software RAID performance, among other limitations, but ascertaining their impact on the overall scalability of the underlying SSDs is difficult outside of targeted tests in each unique environment. Years of testing SSDs and HDDs in RAID on various motherboards, HBAs and RAID controllers have shown that RAID scalability issues can stem from multiple sources, and sometimes several at once.

RAID arrays, generally, are only as fast as the slowest I/O operation, so storage devices with excessive performance variation can suffer poor scaling in large implementations. There are also host and software considerations that can muddy the picture. We were not able to achieve the native performance of the flash, due in part to the restriction of the single PCIe 3.0 x16 connection from each container, but other limiting factors were likely in play.
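Both effects lend themselves to quick sanity checks. The sketch below first simulates the slowest-member effect with an assumed per-device latency distribution (the 100µs mean and 30 percent jitter are illustrative, not measured FSA 200 figures), then estimates the bandwidth ceiling of a PCIe 3.0 x16 link; the 15 percent protocol-overhead factor is likewise an assumption.

```python
# Minimal sketch of two scaling limits: stripes wait on their slowest member,
# and each container funnels through one PCIe 3.0 x16 link. The latency
# distribution and overhead factor below are assumptions, not measurements.
import random

def stripe_latency_us(n_devices, mean_us=100.0, jitter=0.3, trials=10_000):
    """Average stripe completion time: the max of n per-device latencies."""
    return sum(
        max(random.uniform(mean_us * (1 - jitter), mean_us * (1 + jitter))
            for _ in range(n_devices))
        for _ in range(trials)
    ) / trials

baseline = stripe_latency_us(1)
for n in (1, 4, 16, 32):
    lat = stripe_latency_us(n)
    print(f"{n:2d} devices: {lat:6.1f} us per stripe "
          f"({lat / baseline:.2f}x a single device)")

# PCIe 3.0 x16 ceiling: 16 lanes * 8 GT/s * 128b/130b encoding / 8 bits.
raw_gbs = 16 * 8 * (128 / 130) / 8        # ~15.75 GB/s raw payload bandwidth
practical_gbs = raw_gbs * 0.85            # assume ~15% packet/protocol overhead
print(f"x16 link: {raw_gbs:.2f} GB/s raw, ~{practical_gbs:.1f} GB/s practical")
# The 13.2-13.6 GB/s we measured per container sits near this ceiling.
```

Under these assumptions, a 32-wide stripe averages roughly 1.3x the latency of a single device, which is exactly the kind of erosion that caps IOPS scaling in wide arrays, while the sequential numbers land close to the practical limit of the x16 host link.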

The FSA 200 provides a tremendous amount of density in a surprisingly small chassis, and we found it to be reliable and capable of an astounding amount of performance for a chassis built upon a PCIe switching architecture. Airflow is a critical requirement for such a dense chassis, which supports up to 75W per slot. The FSA 200 pushed an incredible amount of air through the chassis via its rows of spinning fans, which should eliminate any possibility of thermal throttling in high-performance configurations.

One Stop Systems (OSS) has a history of developing PCIe-based expansion products that dates back to 2005. The company's wide range of products includes a bevy of GPU High Density Compute Accelerators (HDCA), custom flash memory cards, PCIe expansion-optimized servers, and PCIe cards, cables, switches and backplanes. OSS also has several Open Compute Project (OCP) compliant storage systems, such as the 61-slot OCP-FSA appliance, which can house up to 244 M.2 SSDs in one slim 2U chassis.

The full range of One Stop Systems' PCIe-based products is indicative of the wealth of PCIe engineering experience it has amassed during its decade in the industry. From the battlefield to the data center, One Stop Systems designed the Flash Storage Array 200 to offer the most throughput and highest density possible, while still allowing users to custom-select their PCIe SSD of choice.

MORE: Best Enterprise SSDs
MORE: How We Test Enterprise SSDs
MORE: Latest Enterprise Storage News
MORE: All Storage Content

Paul Alcorn is a Contributing Editor for Tom's IT Pro, covering Storage. Follow him on Twitter and Google+.