Mellanox Launches New MetroX Networking Appliance

By Kevin Parrish - Source: Mellanox

This week Mellanox introduced the MetroX TX6100, a long-haul solution that enables InfiniBand and Ethernet RDMA connectivity between data centers. The new MetroX model not only enables rapid disaster recovery, but also improves utilization of remote storage and compute infrastructures across long distances and multiple geographic sites.

"Commonly used for internal data center long reach connectivity, or between close data center compute and storage infrastructures, the MetroX series extends Mellanox InfiniBand solutions from a single-location data center network to campus and metro data centers at a distance of up to 10 kilometers," the company said.

The MetroX TX6100 is provided as a 19-inch, rack-mountable 1U system with six long-haul (40 Gbps) ports, six downlink (56 Gbps) VPI ports, two virtual lanes for QoS applications, redundant power supplies and fan drawers. The device supports up to 240 Gbps of aggregated long-haul data, and is compliant with IBTA 1.2.1 and 1.3 as well as Mellanox LR4 QSFP+ 40 Gbps transceivers.

The device also provides two Gigabit Ethernet ports, a USB port, and an RS232 port over DB9. For fabric management, the device has an on-board subnet manager (SM) for fabrics of up to 648 nodes, along with the Unified Fabric Manager (UFM) Agent. Connectors and cabling consist of QSFP connectors, passive copper or active fiber cables, and fiber media adapters.

"MetroX latency is 200ns, and each kilometer of transmission across fiber glass adds 5 microseconds of latency. MetroX downlink ports support Mellanox’s VPI technology, which enables any standard networking, storage or management protocol to seamlessly operate over any converged network," the company said.

Mellanox said that Purdue University has already deployed the MetroX TX6100 over six kilometers to connect computation clusters to storage facilities. The university can now access its remotely located supercomputers and allocate assets across limited data center space more flexibly. That means higher facility utilization without increased construction or building retrofit costs.

"A common problem facing data-driven researchers is the time cost of moving their data between systems, from machines in one facility to the next, which can slow their computations and delay their results," said Mike Shuey, HPC systems manager at Purdue University. "Mellanox's MetroX solution lets us unify systems across campus, and maintain the high-speed access our researchers need for intricate simulations -- regardless of the physical location of their work."

For more information about the new MetroX TX6100, head here. You can also check out a data sheet that provides the full hardware specs in this PDF file.

Kevin Parrish is a contributing editor and writer for Tom's Hardware, Tom's Games and Tom's Guide. He's also a graphic artist, CAD operator and network administrator.

See here for all of Kevin's Tom's IT Pro articles.