Blade architectures now come in a wide variety of sizes and designs.
HP, which dominates the blade market, has produced everything from the ProLiant BL10e, a 3U enclosure capable of holding 20 server blades, to the BladeSystem c7000, a 10U enclosure able to fit 16 half-height blades. However, nearly all blade motherboards are proprietary (ATXBLADE and Build-a-Blade are two intriguing exceptions), and their complex backplane connectivity drives up solution costs.
In our prior article, we examined the Supermicro 1U Twin. In the 4U space, the company recently released its Fat Twin platform as a compelling alternative to conventional blade options (including those sold by Supermicro). The Fat Twin packs eight motherboard sleds into a 4U enclosure. Each board can hold up to two Xeon processors, effectively placing four CPUs in each 1U of rack space. This is not a blade-type architecture, though; there is no unifying compute backplane. Rather, it is a collection of essentially ½U servers sharing a 4U enclosure. The question then becomes why this approach is any better or worse than a competing 4P 1U server configuration.
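The density figure is easy to verify with back-of-the-envelope arithmetic. The numbers below come straight from the description above; the snippet is purely illustrative:

```python
# Density check for the Fat Twin figures described above.
sleds_per_enclosure = 8    # motherboard sleds in one 4U Fat Twin enclosure
sockets_per_sled = 2       # up to two Xeon processors per board
enclosure_height_u = 4     # rack units occupied by the enclosure

sockets_per_u = sleds_per_enclosure * sockets_per_sled / enclosure_height_u
print(sockets_per_u)  # -> 4.0, i.e. four CPUs per 1U of rack space
```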
“Most customers are cost-sensitive,” answers Supermicro CEO Charles Liang. “They look at performance per dollar, right? The Fat Twin is pretty much an optimized DP [2P] system, and dual-processor is pretty much the sweet spot architecture for performance-per-dollar. Four-way and eight-way is larger capacity, but the cost is way higher.”
One of the chief reasons to consider the Fat Twin, as well as competing blade-esque products, is power efficiency. Server boards draw from a shared power supply and cooling architecture, which delivers greater economies of scale than standalone servers can. According to Liang, the company’s Twin architecture saves 2% to 3% through its shared architecture, roughly the same amount again through superior circuit design and componentry, and another 1.5% or so through its 95%-efficient power supplies.
All told, this adds up to significant savings.
“With each node we save 7% to 10% power,” says Liang. “That equates to $100 to $150 per node in four years in thermal energy cost savings. Also, because the system is running cooler and more efficiently, that increases its reliability.”
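Liang’s dollar figure can be sanity-checked with a rough estimate. The per-node wattage and electricity price below are illustrative assumptions (they do not come from Supermicro), but under them a 7% to 10% power reduction lands squarely in the quoted $100-to-$150 range:

```python
# Rough sanity check of the quoted $100-$150 per-node, four-year savings.
# ASSUMPTIONS (not from the article): a 2P node averages ~420 W under load,
# and electricity costs ~$0.10 per kWh.
node_watts = 420.0           # assumed average draw per node
price_per_kwh = 0.10         # assumed electricity price, USD
hours = 24 * 365 * 4         # four years of continuous operation

for saving in (0.07, 0.10):  # Liang's 7%-10% power savings
    kwh_saved = node_watts * saving * hours / 1000.0
    print(f"{saving:.0%}: ${kwh_saved * price_per_kwh:,.0f}")
# -> 7%: $103
# -> 10%: $147
```

Different wattage or utility-rate assumptions shift the totals, but the claim holds up for typical dual-processor nodes.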
Blades still have a place in ultra-dense compute applications, such as HPC clusters and oil and gas exploration. But for moderate- to high-density compute needs, the Fat Twin (which even features PCI slots on its boards) and its ilk may prove to be amply capable and far more cost-effective.
4U in the Organization
If a company is large enough to need servers, it likely needs a 4U.
In fact, the 4U tower may provide an optimal transition design for small companies making the leap from disjointed desktops into a more cohesive, server-centric operation. The addition of racks can be deferred until needed. When that time comes, companies will have a clearer sense of what types of server tasks are needed within (and outside of) the enterprise.
A 4U server offers some of the greatest flexibility available, owing to its ability to accept full-height expansion cards. The same barebones 4U shell can be optimized for database work, transaction processing, or even graphical analysis and modeling; only the cards need change. In this way, 4Us may represent one of the safest infrastructure investments for rapidly evolving companies that need power but want to remain ready for whichever way the future might lead them.