A Crash Course in Network Performance Monitoring: A Manager’s Guide

By Dan Sullivan

Networks are the backbone of IT service delivery, so when the network experiences problems, application performance can suffer.

IT managers responsible for meeting service level agreements and business managers who depend on applications to run their operations both have a stake in ensuring networks operate efficiently. Network performance does not have to be a black box for everyone but networking experts, and this article will help non-networking professionals understand the tools and techniques available for diagnosing network issues.

Four ways to monitor networks are:

  • simple network management protocol (SNMP),
  • protocol analysis,
  • distributed device analysis, and
  • traffic flow analysis.

SNMP is a protocol for collecting information from devices and for issuing basic commands to them. Routers, servers, printers, and other network devices run software known as an agent. SNMP agents are responsible for collecting information about a device, such as measures of network traffic. Devices running SNMP agents are known as managed devices. The other type of entity in the SNMP architecture is the network management system (NMS), a centralized repository for information collected from managed devices.
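A common use of the counters an SNMP agent exposes is computing link utilization between two polls. The sketch below assumes the raw `ifInOctets` counter values have already been retrieved from a managed device by some SNMP library; the function name and numbers are illustrative, not a real API.

```python
# Sketch: derive interface utilization from two polls of the standard
# SNMP ifInOctets counter (bytes received on an interface). Assumes the
# raw counter values were already fetched via an SNMP library; this
# helper is illustrative and ignores counter wraparound.

def utilization_percent(octets_t1, octets_t2, interval_s, if_speed_bps):
    """Percent utilization of a link between two counter samples.

    ifInOctets counts bytes, so multiply the delta by 8 to get bits.
    """
    delta_bits = (octets_t2 - octets_t1) * 8
    return 100.0 * delta_bits / (interval_s * if_speed_bps)

# Example: 750,000 octets received over a 60 s poll on a 100 Mbps link.
print(utilization_percent(1_000_000, 1_750_000, 60, 100_000_000))  # → 0.1
```

An NMS performs this kind of calculation continuously across many managed devices, which is what turns raw counters into trend and capacity data.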

SNMP can provide useful information about devices on a network. For example, a router receiving unusually large volumes of traffic may indicate a problem with the way traffic is directed through the network, or that another router has failed and traffic is being rerouted. This example highlights one of the limitations of SNMP: it is useful for understanding events on a single device, but it is less helpful when you need to consider traffic flows.
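The rerouting example above can be approximated with a simple comparison across devices: poll each router's traffic counters and flag any device far above the group average. This is a minimal sketch; the threshold factor and router names are assumptions for illustration.

```python
# Sketch: flag routers whose polled traffic deviates sharply from the
# group mean -- the kind of signal that suggests traffic is being
# rerouted after another device's failure. Threshold is an assumption.

def flag_outliers(octets_by_router, factor=2.0):
    """Return routers whose traffic exceeds `factor` times the mean."""
    mean = sum(octets_by_router.values()) / len(octets_by_router)
    return [name for name, octets in octets_by_router.items()
            if octets > factor * mean]

samples = {"rtr-a": 1_200, "rtr-b": 1_350, "rtr-c": 9_800, "rtr-d": 1_100}
print(flag_outliers(samples))  # → ['rtr-c']
```

Note that this still tells you only *which device* is busy, not *which conversations* are responsible, which is exactly the gap protocol and flow analysis fill.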

When you need to move beyond a single device to monitor traffic between devices, you can use protocol analyzers. These are tools for capturing information about packets transmitted between devices. Protocol analyzers are useful for collecting detailed information on network utilization, traffic patterns, and the network protocols in use.

Consider the case of a developer trying to identify the root cause of an application performance issue. If the developer suspects that poor application performance is due to a network problem, they could use a protocol analyzer to collect data about traffic between the application server and a client device. This would include data about the time it takes to send a request to the server and receive a response, known as the round trip time (RTT). RTT is independent of application behavior and depends only on network factors, such as congestion and packet loss.
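Conceptually, computing RTT from a capture means pairing each request's timestamp with the timestamp of its response. The sketch below uses bare `(timestamp, direction)` tuples as stand-ins for packet records; a real capture exported from a protocol analyzer would carry many more fields and would match requests to responses by sequence numbers rather than simple ordering.

```python
# Sketch: estimate round-trip times from packet-capture timestamps by
# pairing each request with the next response. Packet records here are
# simplified (timestamp_seconds, direction) tuples for illustration.

def round_trip_times(packets):
    """Return RTTs by pairing each 'req' timestamp with the next 'resp'."""
    rtts, pending = [], None
    for ts, direction in packets:
        if direction == "req":
            pending = ts
        elif direction == "resp" and pending is not None:
            rtts.append(ts - pending)
            pending = None
    return rtts

capture = [(0.000, "req"), (0.042, "resp"), (1.000, "req"), (1.051, "resp")]
print(round_trip_times(capture))
```

A consistently high or highly variable RTT in such a list points the developer at the network rather than at application code.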

Depending on where protocol analyzers are deployed, they may capture only a subset of the data transmitted over the network. This stems from network designs that optimize network traffic. Network engineers can still get a comprehensive view of network traffic with protocol analyzers by using them at switches or by installing monitoring devices (known as network taps) on each network segment. This kind of distributed device analysis can help create a comprehensive view of network traffic and performance, but there are disadvantages.

Protocol analyzers use network packets as the basic data structure, and working with a large number of logically related packets can be challenging. (One way to deal with this is to use the traffic flow analysis techniques described below.) As networks grow in complexity, the number of distributed analysis devices may need to grow as well, which can lead to additional administration and maintenance overhead.
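The idea behind traffic flow analysis is to collapse logically related packets into a single record, conventionally keyed by the 5-tuple of source address, destination address, source port, destination port, and protocol. This is a minimal sketch assuming packets have already been decoded into dictionaries; the field names are illustrative.

```python
# Sketch: aggregate individual packets into flows keyed by the classic
# 5-tuple (src addr, dst addr, src port, dst port, protocol) -- the
# basic idea behind traffic flow analysis. Packet dicts are assumed to
# come from some earlier capture/decode step.

from collections import defaultdict

def aggregate_flows(packets):
    """Sum packet counts and bytes per 5-tuple flow key."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["length"]
    return dict(flows)

pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 51000, "dport": 443,
     "proto": "tcp", "length": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 51000, "dport": 443,
     "proto": "tcp", "length": 900},
]
flows = aggregate_flows(pkts)
print(flows)
```

Two packets from the same conversation become one flow record, which is why flow data scales to questions ("who is talking to whom, and how much?") that raw packet lists answer only awkwardly.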

Dan Sullivan is an author, systems architect, and consultant with over 20 years of IT experience, with engagements in systems architecture, enterprise security, advanced analytics, and business intelligence. He has worked in a broad range of industries, including financial services, manufacturing, pharmaceuticals, software development, government, retail, gas and oil production, power generation, life sciences, and education. Dan has written 16 books and numerous articles and white papers on topics ranging from data warehousing, cloud computing, and advanced analytics to security management, collaboration, and text mining.

See here for all of Dan's Tom's IT Pro articles.


