Internal Cloud Bandwidth Issues: A Vendor's Perspective

Enterprises can benefit from viewing matters surrounding cloud computing from all perspectives, including that of network equipment vendors.

When the National Institute of Standards and Technology (NIST) released the 16th and final draft of its definition of cloud computing last year, it noted five essential characteristics of the technology:

  • on-demand self-service,
  • resource pooling,
  • rapid elasticity or expansion,
  • measured service,
  • broad network access.

While NIST didn’t explicitly state as much, where broad network access is concerned, internal bandwidth issues certainly factor in. It’s not just the link between the business and the cloud service provider that matters; it’s also about the available bandwidth within the organization and everyone’s ability to reach out beyond the firewall at optimal speeds. For the sake of assessing present and future internal bandwidth requirements, avoiding deployment pitfalls, and gaining other valuable insight, enterprises can benefit from viewing matters from all perspectives, including that of network equipment vendors.

Assessing Needs The Vendor Way

Typically, vendors’ advice to enterprises for assessing present and future internal bandwidth needs regarding cloud computing comes down to a) the needs of the applications the enterprise has in place and b) end users’ perceptions of what constitutes acceptable performance. Sam Barnett, directing analyst, data center and cloud, at Infonetics Research, says vendors such as Cisco typically attempt to understand what users are trying to do and within what timeframe.

Bob Laliberte, senior analyst at Enterprise Strategy Group, meanwhile, states that assessing bandwidth requirements usually occurs via a services engagement that takes a 360-degree view of all stakeholders, business units, and IT and that accounts for future applications, employee growth, cloud strategy, and other factors. Monitoring tools able to identify current bandwidth use, orphaned applications, etc., are also helpful.
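To make that concrete, here is a minimal Python sketch (the link names and utilization samples are invented) of the kind of baseline such monitoring tools produce: average and 95th-percentile utilization per link, the figures a services engagement would typically start from.

# Hypothetical sketch: summarize per-link utilization samples (in Mbps)
# into the average and 95th-percentile figures a bandwidth assessment
# usually begins with. Link names and sample data are illustrative.
from statistics import mean, quantiles

samples_mbps = {
    "core-uplink": [620, 710, 680, 940, 1210, 870, 760],
    "wan-edge": [180, 220, 260, 310, 295, 240, 205],
    "dc-interconnect": [1500, 1720, 1680, 2100, 1950, 1810, 1760],
}

def summarize(samples):
    """Return (average, 95th percentile) for a list of utilization samples."""
    p95 = quantiles(samples, n=100)[94]  # 95th percentile
    return mean(samples), p95

for link, data in samples_mbps.items():
    avg, p95 = summarize(data)
    print(f"{link:16s} avg {avg:7.0f} Mbps   p95 {p95:7.0f} Mbps")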

Ultimately, it’s likely that vendors will urge enterprises to anticipate that future total bandwidth requirements may be significantly larger than initially anticipated, says Bernard Golden, CEO at HyperStratus. Enterprises should create headroom in their network infrastructures for adding future capabilities and throughput. In other words, he advises, “Don’t buy the biggest of a small thing.” Don’t deploy the best 1 Gigabit Ethernet (GbE) infrastructure available when even an average 10 GbE solution is really what’s needed.
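Golden’s headroom advice is, at bottom, simple arithmetic. The illustrative sketch below, using assumed demand and growth figures, projects peak traffic forward over a planning horizon and checks whether a 1 GbE or 10 GbE link still leaves comfortable room.

# Illustrative headroom check (assumed figures): project peak demand
# forward at a compound annual growth rate and compare it against
# candidate link speeds over the planning horizon.
def projected_peak(current_mbps, annual_growth, years):
    return current_mbps * (1 + annual_growth) ** years

current_peak_mbps = 600       # assumed current busy-hour peak
growth = 0.40                 # assumed 40% annual traffic growth
horizon_years = 3

need = projected_peak(current_peak_mbps, growth, horizon_years)
print(f"Projected peak after {horizon_years} years: {need:.0f} Mbps")

for name, capacity in [("1 GbE", 1000), ("10 GbE", 10000)]:
    headroom = capacity - need
    verdict = "OK" if headroom > 0.2 * capacity else "too tight"
    print(f"{name:7s} leaves {headroom:6.0f} Mbps headroom -> {verdict}")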

One example of when enlisting a bandwidth specialist might prove worthwhile, says Michael Shonholz, CDW telecom services manager, is working with just one or a few hardware manufacturers or VARs on a voice solution. Coordinating resources and reducing the number of partners the enterprise needs to project-manage can lead to shortened install times, streamlined communication, and a more scalable infrastructure, he says. Further, discussing the current and future applications planned to operate over the network can yield an appropriately sized infrastructure. Discussing needs for up to 36 months out can also raise questions about disaster recovery and access diversity that could expand bandwidth requirements. The longer the time horizon, the more likely IT will be to include all of the necessary bandwidth considerations in its planning.

Although Golden sees a risk in planning bandwidth needs much beyond 24 months, given the rapidly changing nature of computing, increased adoption of BYOD, and more devices generating traffic, 36-month horizons for anticipating bandwidth growth are common. Shonholz says there are network designs and options available that provide flexibility and room for growth, as well as providers that allow allocated bandwidth to be adjusted virtually on the fly. Such designs, however, require pre-planning with a specialist.

Barnett agrees with a three-year approach for intra-data center bandwidth needs, but says inter-data center requirements can change daily. Thus, an enterprise with a dynamic business environment that can forge a year-long bandwidth horizon is doing well. “Lots of carriers are focused on rapid service delivery/provisioning to address inter-data center bandwidth requirements,” Barnett says. “This is one of the reasons SDN [software-defined networking] is so appealing to the carrier community.” The SDN layer allows network admins to dictate network services without tying those services to specific network interfaces.
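To illustrate the decoupling Barnett describes, consider the toy sketch below. It is not any particular controller’s API; it simply declares a service in terms of sites and bandwidth and defers the binding to physical ports until provisioning time.

# Toy sketch of the SDN idea described above: a "service" is declared
# in terms of endpoints and bandwidth, not specific interfaces; a
# hypothetical controller maps it onto whichever ports are free.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    site_a: str
    site_b: str
    bandwidth_mbps: int

# Hypothetical inventory of free ports per site; not a real controller API.
free_ports = {"dc-east": ["eth1/1", "eth1/2"], "dc-west": ["eth2/1"]}

def provision(svc: Service):
    """Bind the abstract service to concrete ports at provisioning time."""
    port_a = free_ports[svc.site_a].pop()
    port_b = free_ports[svc.site_b].pop()
    return {"service": svc.name, "a": (svc.site_a, port_a),
            "b": (svc.site_b, port_b), "mbps": svc.bandwidth_mbps}

print(provision(Service("replication", "dc-east", "dc-west", 2000)))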

Where edge infrastructure is concerned, remedies for equipment-related performance problems can include installing a WAN optimizer or application delivery controller at the network’s edge, Golden says. Elsewhere, Laliberte says ESG’s research indicates organizations are actively moving to 10 GbE. Primary reasons include current or anticipated data center traffic, 10 GbE-related costs decreasing to acceptable levels, a current or future implementation of a private cloud in the data center, current or planned use of new application types, and vendors more actively selling 10 GbE technologies.

At the very least, when anticipating internal bandwidth needs for cloud usage, enterprises should have a solid understanding of how long significant network infrastructure upgrades will take and plan ahead of that curve, something that may require touches of both science and art. The science, Laliberte says, is looking to the past to determine bandwidth needs based on seasonal spikes or project launches. The art is forecasting how much bandwidth an as-yet-undetermined future event will require.
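As a rough illustration of the “science” half, the sketch below (all figures assumed) scales last year’s monthly peaks by an expected growth rate and adds an explicit buffer for the “art” of events that cannot yet be forecast.

# Minimal sketch of the "science" side: scale last year's monthly peak
# utilization (assumed figures, in Mbps) by an expected growth factor,
# then add an explicit allowance for events you can't yet predict.
# All numbers here are illustrative.
monthly_peaks_mbps = [420, 400, 450, 470, 460, 500,
                      480, 470, 520, 610, 900, 870]  # seasonal spike late in the year

growth_factor = 1.3          # assumed 30% year-over-year growth
unknown_event_buffer = 1.15  # buffer for as-yet-undetermined events

forecast = [p * growth_factor * unknown_event_buffer for p in monthly_peaks_mbps]
print(f"Size for roughly {max(forecast):.0f} Mbps at next year's seasonal peak")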

Anticipate Deployment Obstacles

Once planning is done, it’s time to deploy the network resources needed for the projected bandwidth for your cloud setup.

Unfortunately, deployment is fraught with its own pitfalls. From the viewpoint of a vendor onsite for deployment, communication breakdowns are among the most common obstacles to surface. “These issues can often happen during the handoff of the technology that is being delivered, as that process involves a lot of back and forth and extreme coordination of resources,” Shonholz says. “The consequences could be a delay in implementing the new solution, outages, or increased costs due to double billing of network assets.”

Leveraging an unbiased third party or trusted solution provider is one way to ensure fluid communication. Shonholz says most telecom providers are very process-oriented, which requires specific technical expertise to manage potential choke points. CDW, for example, can provide an overarching project manager to oversee implementation and any escalation through the lifetime of a telecom contract. Acquiring such management helps move orders through the deployment process and avoids communication pitfalls.

Though myriad issues can arise, deploying network equipment isn’t really rocket science, Laliberte says. Typically, the biggest issues stem from a lack of appropriate maintenance windows in which to switch over infrastructure. If things go wrong and the enterprise must fail back to old infrastructure, the typical consequence is that installation gets delayed.

Therefore, Laliberte says, ensure that those installing the network are properly trained and certified, check references, and establish penalties, if required, for noncompliance with stated installation dates (assuming the client company didn’t cause the delays).

Is NaaS The Answer?

A new and intriguing technology that may appeal to enterprises with bandwidth needs is Network-as-a-Service (NaaS).

Mobsource, for example, bills itself as a cloud marketplace where purchasers and sellers can virtually congregate to trade bandwidth. For enterprises, the allure of a solution like Mobsource is easily acquiring on-demand bandwidth “at a moment’s notice” and only paying for the bandwidth it uses.

Barnett says the carrier pool is a great concept, but the question remains as to whether it works as advertised. Shonholz and Laliberte both know of few customers currently using NaaS solutions. While interesting, Shonholz says NaaS solutions are likely only available for major data centers where significant carrier bandwidth and numerous carrier options are available. Ultimately, Laliberte notes, NaaS usage “really depends on what the use case is and what kind of SLAs and support are available.”

Potentially, NaaS could prove a good source for migration projects “to tack up a line and then take it down.”
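Whether NaaS pencils out is largely a back-of-the-envelope exercise. The sketch below, using placeholder rates, compares a dedicated circuit against on-demand bandwidth purchased only for the hours a migration actually needs.

# Back-of-the-envelope comparison (placeholder rates) of a fixed circuit
# versus on-demand bandwidth for a short-lived need such as a migration.
fixed_monthly_cost = 4000      # assumed cost of a dedicated circuit
on_demand_hourly_cost = 25     # assumed on-demand rate
hours_needed_per_month = 60    # e.g., a few migration windows

fixed_annual = fixed_monthly_cost * 12
on_demand_annual = on_demand_hourly_cost * hours_needed_per_month * 12

print(f"Fixed circuit:  ${fixed_annual:,}/yr")
print(f"On-demand NaaS: ${on_demand_annual:,}/yr")
print("On-demand wins" if on_demand_annual < fixed_annual else "Fixed wins")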

If companies have done their assessments well and weighed the findings against ROI analyses, the need (or lack thereof) for NaaS should be more evident. Most vendors are more than willing to help with this step, but they may not supply a complete or objective answer regarding NaaS. A third-party consultant may be of more use here, and good consultants should also be able to assist with guidance throughout deployment. Whether with or without a consultant onboard, though, keep vendor support close at hand during bandwidth deployment.

Strong vendor assistance will make all the difference between a quick trip into the cloud and seeing bandwidth deployments get rained on.