Next generation enterprise storage arrays leverage flash to offer high performance and efficiency. Tintri goes a step beyond by focusing on usability and creating ridiculously simple storage.
Traditional storage systems have focused on complex layouts of SSDs and spinning disks, using a hierarchy of physical containers -- RAID groups, storage groups, and aggregates -- to build out logical containers like storage pools or volumes. Each logical pool generally provides performance or capacity to workloads by way of a RAID scheme such as mirroring, striping, or parity protection, forcing administrators to make hard choices about where virtual machine disks are placed.
Tintri, an NFS attached hybrid flash storage platform, boldly wipes the slate clean on these ideas by instead creating one large, high performance pool of storage in its Tintri VMstore appliance nodes. Each VMstore model is made up of SSDs for performance and high capacity disk drives, hence my labeling it a hybrid flash array, although 99 percent of the actual read and write IO is serviced directly from the flash devices.
The usable storage configuration and presentation is handled by the VMstore itself, which takes care of all the inner workings behind the SSDs and disks in real time. This makes a lot of sense -- modern virtual workload IO profiles are extremely complex and difficult, if not impossible, to completely plan for in any meaningful manner. Next generation storage arrays such as Tintri should consider solid intelligence around how the internal NVRAM, memory, SSDs, and disks are used -- without the need for administrative intervention -- to be table stakes.
| Specification | | | |
|---|---|---|---|
| Raw capacity | 49.32 TB (9x 480 GB SSD + 15x 3 TB HDD) | 26.4 TB (8x 300 GB SSD + 8x 3 TB HDD) | 19.44 TB (6x 240 GB SSD + 18x 1 TB HDD) |
| Usable capacity | 33.5 TB | 13.5 TB | 13.5 TB |
| Management ports | Included: 2x 1GbE (RJ-45) | Included: 2x 1GbE (RJ-45) | Included: 2x 1GbE (RJ-45) |
| Data ports | Included: 2x 10GbE (SFP+ or 10GBASE-T) | Option 1: 2x 10GbE (10GBASE-SR LC fibre or SFP+ direct attach copper); Option 2: 2x 10GbE (RJ-45) | Included: 2x 1GbE (RJ-45); Optional: 2x 10GbE (SFP+ or 10GBASE-T) |
| Replication ports | Included: 2x 1GbE (RJ-45); Optional: 2x 1GbE (SFP) | Optional: 2x 1GbE (RJ-45); Optional: 2x 1GbE (SFP) | Optional: 2x 1GbE (RJ-45); Optional: 2x 1GbE (SFP) |
| Software functionality | Ethernet failover, link aggregation, VLAN tagging, IP aliasing, LACP | Ethernet failover, link aggregation, VLAN tagging, IP aliasing, LACP | Ethernet failover, link aggregation, VLAN tagging, IP aliasing, LACP |
| Height | 7" (178 mm), 4U rackmount | 5.2" (132 mm), 3U rackmount | 7" (178 mm), 4U rackmount |
| Width | 19" (483 mm) | 19" (483 mm) | 19" (483 mm) |
| Depth | 28.5" (724 mm) | 25.5" (648 mm) | 28.5" (724 mm) |
| Weight | 108 lbs (49 kg) | 85 lbs (39 kg) | 106 lbs (48.1 kg) |
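The raw capacity figures above follow directly from the drive counts: raw TB = (SSD count x SSD size) + (HDD count x HDD size). A quick sanity check, sketched in Python:

```python
# Sanity check of the raw-capacity figures in the spec table:
# raw TB = (SSD count x SSD GB / 1000) + (HDD count x HDD TB).
def raw_tb(ssd_count, ssd_gb, hdd_count, hdd_tb):
    """Raw capacity in TB for a mixed SSD/HDD shelf."""
    return round(ssd_count * ssd_gb / 1000 + hdd_count * hdd_tb, 2)

print(raw_tb(9, 480, 15, 3))   # 49.32 TB
print(raw_tb(8, 300, 8, 3))    # 26.4 TB
print(raw_tb(6, 240, 18, 1))   # 19.44 TB
```

Note the gap between raw and usable capacity, which reflects the flash tier, RAID overhead, and spares being hidden from the administrator entirely.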
Once the VMstore has been installed into the data center and cabled to management and data networks, an administrator walks through a very simple wizard to give the Tintri appliance a personality. This includes providing the data and management IP addresses, the vCenter server(s) that will be consuming storage from the appliance, a list of NFS access control list (ACL) entries, and where to send alerts.
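The wizard's inputs boil down to a handful of settings. As a sketch (all values here are hypothetical examples, not Tintri defaults):

```python
# Hypothetical illustration of the information the setup wizard collects.
# None of these values are real defaults; substitute your own environment.
vmstore_config = {
    "management_ip": "192.168.1.20",            # appliance management address
    "data_ip": "192.168.10.50",                 # address serving NFS traffic
    "vcenter_servers": ["vcenter01.lab.local"], # vCenter(s) consuming storage
    "nfs_acl": ["192.168.10.0/24"],             # hosts allowed to mount the share
    "alert_email": "storage-alerts@lab.local",  # where alerts are sent
}
print(sorted(vmstore_config))
```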
At this point, the VMstore is now fully operational and ready to serve IO without the need to ask questions about the disk configuration. As shown below, a single NFS datastore is made available at the data IP address using the default /tintri folder share.
Because there's virtually no emphasis on storage configuration and layout, the operational effort to transition the VMstore into production is less than an hour. A vSphere administrator can mount the NFS datastore on all ESXi hosts that need the storage using any method they're comfortable with: the vSphere Client, the vSphere Web Client, or even a PowerCLI script. Once the datastore has been presented to the VMware vSphere hosts, VMs can immediately be created on, or migrated onto, the Tintri appliance.
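For admins who prefer scripting the mount, a small helper can emit the `esxcli storage nfs add` command for each host. This is a minimal sketch: the data IP, datastore name, and ESXi host names are hypothetical, and it only prints the commands rather than running them.

```python
# Sketch: generate one ESXi NFS mount command per host for the default
# /tintri export. All addresses and names below are hypothetical examples.
def mount_commands(data_ip, datastore, hosts, share="/tintri"):
    """Return an `esxcli storage nfs add` invocation for each ESXi host."""
    return [
        (f"ssh root@{host} esxcli storage nfs add "
         f"--host={data_ip} --share={share} --volume-name={datastore}")
        for host in hosts
    ]

for cmd in mount_commands("192.168.10.50", "Tintri01",
                          ["esx01.lab.local", "esx02.lab.local"]):
    print(cmd)
```

The same result could be achieved with PowerCLI's `New-Datastore -Nfs` cmdlet against each host; the point is that mounting is the only storage provisioning step left.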