Four PCIe x16 connectors line the bottom of the FSA 200 (top) and connect to the hosts (bottom), which accept two cables each. The high-quality Amphenol PCIe 3.0 x16 cables are available in 0.5, 1, 2, 3, 5 and 7m lengths, and feature a retractor that locks the cable into place.
The FSA 200 provides an IPMI-based system management monitor, accessible through either a command-line interface or a web GUI, that handles resource tracking, monitoring, control and alarming. One Stop Systems offers a base configuration that supports the full IPMI command set along with SNMP and RMCP interfaces.
The rear of the unit features dual hot-swappable 1U 1200-Watt power supplies. There are a number of power supply options available, and the power supplies both connect to the same bus bar. The PSUs operate in 1+1 redundant mode, and a fully loaded system typically draws up to 900W of power.
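The 1+1 redundancy claim is easy to sanity-check from the figures above: with a maximum draw of 900W, either 1200W supply can carry the entire system on its own. A minimal sketch (variable names are ours; the wattages are from the review):

```python
# Figures from the review: dual 1200W PSUs in 1+1 redundant mode,
# with a fully loaded system drawing up to 900W.
PSU_CAPACITY_W = 1200
MAX_SYSTEM_DRAW_W = 900

# In 1+1 redundancy, either supply alone must be able to carry the full load.
survives_psu_failure = MAX_SYSTEM_DRAW_W <= PSU_CAPACITY_W
headroom_w = PSU_CAPACITY_W - MAX_SYSTEM_DRAW_W

print(survives_psu_failure)  # True: one PSU can carry the whole system
print(headroom_w)            # 300W of headroom on the surviving supply
```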
The rear of the FSA 200 features four redundant 80x36mm fans that provide 143 CFM of airflow apiece. The rear fans are hot-swappable and support control, monitoring and alarm features; four screws located near the PCIe connections allow for easy serviceability. The powerful fans make no attempt at silence, which is not a concern in the desired applications.
Both of the OSS Expansion Optimized Servers feature dual Intel Xeon E5-2667 v3 CPUs at 3.20GHz and 512GB of system RAM. The dual 8-core CPUs provided enough horsepower to propel our load generator, and the copious RAM allocation was required to satisfy the SanDisk Fusion ioMemory SX300 SSDs, each of which requires between 7.3 and 57.8GB of host RAM.
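As a back-of-envelope check on that allocation, the aggregate host-RAM demand of a 16-drive SX300 array spans a wide range depending on where each drive falls within SanDisk's per-SSD requirement. A quick sketch using only the figures quoted above (variable names are ours):

```python
# Per-SSD host RAM requirement quoted for the SX300
RAM_PER_SSD_GB_MIN = 7.3
RAM_PER_SSD_GB_MAX = 57.8
SSD_COUNT = 16

low_gb = RAM_PER_SSD_GB_MIN * SSD_COUNT    # 116.8 GB aggregate, best case
high_gb = RAM_PER_SSD_GB_MAX * SSD_COUNT   # 924.8 GB aggregate, worst case

print(f"{low_gb:.1f} - {high_gb:.1f} GB aggregate RAM demand")
```

Where within that range a given array lands depends on drive capacity and how each drive is formatted and used (our assumption about why the quoted per-SSD range is so wide).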
The One Stop Systems PCIe x16 Gen 3 Switch-based Cable Adapters operate at up to 128Gb/s of PCIe 3.0 speed in a slim HHHL (Half-Height Half-Length) form factor. The switch-based board draws a miserly 17W of power under full load and slots into a standard x16 PCIe slot, though it will support x4 and x8 connections as well. The card supports PCIe 3.0 x16 cables and employs a 32-lane PCIe 3.0 PLX PEX8733 switch.
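The 128Gb/s figure can be sanity-checked from standard PCIe 3.0 signaling parameters (8 GT/s per lane and 128b/130b line encoding, which are PCIe 3.0 constants rather than numbers quoted in the review):

```python
GT_PER_S = 8.0          # PCIe 3.0 raw signaling rate per lane
ENCODING = 128 / 130    # 128b/130b line-encoding efficiency
LANES = 16

raw_gbps = GT_PER_S * LANES                   # 128 Gb/s raw (the quoted figure)
usable_gbps = GT_PER_S * ENCODING * LANES     # ~126 Gb/s after encoding overhead

print(raw_gbps)                # 128.0
print(round(usable_gbps, 1))   # 126.0
```

The quoted 128Gb/s corresponds to the raw x16 signaling rate; usable bandwidth after encoding overhead is slightly lower.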
The Switch-based Cable Adapters do not require drivers and operate in a simple bus pass-through mode. The card appears in the Windows Device Manager as a Base System Device, or as a PLX PCI bridge when using `lspci` in Linux.
The system supports from one to four host servers, and users can access the SSDs in 15 different configurations (graphic).
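The review does not spell out how the 15 configurations are counted; one plausible reading (our assumption, not stated by OSS) is that it covers every nonempty combination of attached host ports:

```python
from itertools import combinations

# Hypothetical labels for the four host ports on the FSA 200
HOST_PORTS = ["host1", "host2", "host3", "host4"]

# Every nonempty combination of attached hosts:
# C(4,1) + C(4,2) + C(4,3) + C(4,4) = 4 + 6 + 4 + 1
configs = [c for r in range(1, len(HOST_PORTS) + 1)
           for c in combinations(HOST_PORTS, r)]

print(len(configs))  # 15
```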
CPU Usage And Efficiency
SanDisk Fusion ioMemory cards utilize host CPU resources for SSD management tasks, and NVMe is an efficient low-latency interconnect designed specifically for non-volatile memories, so the CPU usage metrics we recorded are not surprising. The Fusion cards require more CPU power during the tests, and their comparatively lower IOPS performance leads to fewer IOPS generated per percentage of CPU utilization (less efficient).
The Fusion SX300s offer half the efficiency of the NVMe SSDs during the fairly straightforward random read and write operations, but that gap widens significantly when we move into the demanding mixed workloads. During the OLTP workload at 128 OIO, the 16-SSD SX300 array provides 31 IOPS per percent of CPU utilization, compared to 75 from the 16-SSD Intel array.
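The efficiency metric described above (IOPS generated per percentage point of CPU utilization) reduces to a simple ratio. A small illustration, where the helper function is our own and the 31 and 75 figures are the OLTP results quoted in this section:

```python
def iops_per_cpu_pct(iops: float, cpu_utilization_pct: float) -> float:
    """Efficiency: IOPS generated per percentage point of CPU utilization."""
    return iops / cpu_utilization_pct

# Efficiency figures from the OLTP workload at 128 OIO (16-SSD arrays)
SX300_EFFICIENCY = 31
INTEL_NVME_EFFICIENCY = 75

advantage = INTEL_NVME_EFFICIENCY / SX300_EFFICIENCY
print(f"{advantage:.1f}x")  # 2.4x NVMe efficiency advantage under OLTP
```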