This article describes how to interpret Network Path Monitoring results and charts.
Network Path Monitoring Symbols
|Source node||The node from which Path Monitoring is initiated. This can be a Sinefa probe in-network or a Cloud Probe public location.|
|Node||A node used to connect the source and destination. This hop has associated performance information when selected.|
|Node with packet loss||The thicker the red circle, the higher the packet loss at that node.|
|Node belongs to destination ASN||This is a node that belongs to the destination network, e.g. if testing to Office 365, you may see multiple hops that belong to Microsoft before reaching the destination node.|
|Node belongs to destination ASN with packet loss||Node in a destination network exhibiting packet loss.|
|Destination node||Can be a single IP address or multiple IP addresses, depending on DNS resolution.|
|Unreachable destination node||If the destination node is unreachable, it is shown with a dashed border. Path Monitoring never received a response from the destination; however, it is included for clarity.|
|Unknown node||No details can be collected from this node, as it never responds to path-tracing packets.|
|A series of unknown nodes||3 unknown nodes collapsed into a series to reduce clutter.|
|Paths - the connections between nodes.||Each path has its respective performance details, e.g. a path connecting a node in the US and a node in the UK may have a delay of around 150ms. Over time, the bolder path highlights route preference.|
Node Information Panel
|IP||IP address of the node|
|ASN||Autonomous System Number||Used to identify which provider the hop belongs to (in the screenshot, the hop belongs to Microsoft).|
|Country||Physical location of the hop||IP geolocation is used to determine the hop's physical location.|
|Average Response||Response time is the amount of time each network hop takes to return a response to a request. It is affected by factors such as network bandwidth, number of users, number and type of requests submitted, and average processing time.||Average response time, in milliseconds, over the time frame selected in the pull-down time window.|
|Average Jitter||Calculated by taking the median absolute deviation (MAD) of the delays for each sample period. MAD is best described as the median deviation from the median. As with delay, MAD is not as affected by large positive outliers, so it provides a good representation of "typical" network jitter and better reflects how applications are impacted on the network.||Average jitter, in milliseconds, over the time frame selected in the pull-down time window.|
|Average Response Rate||A percentage indicating how often this node responded to test packets. A response rate below 100% does not necessarily mean the node is dropping packets; in some cases, routers rate-limit responses to path traces.||Average response rate, as a percentage, over the time frame selected in the pull-down time window.|
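The MAD-based jitter calculation described above can be sketched in a few lines of Python; the delay samples below are hypothetical, and the function name is illustrative rather than part of any Sinefa API:

```python
import statistics

def mad_jitter(delays_ms):
    """Median absolute deviation (MAD) of one sample period's delays.

    MAD is the median of each sample's absolute deviation from the
    median delay, so a large positive outlier barely moves it.
    """
    median_delay = statistics.median(delays_ms)
    return statistics.median(abs(d - median_delay) for d in delays_ms)

# Hypothetical delays (ms) for one sample period, with one outlier.
delays = [20.1, 20.4, 19.8, 21.0, 20.2, 95.0]
jitter = mad_jitter(delays)  # the 95 ms spike has little effect
```

Note how the 95 ms outlier would dominate a mean-based calculation, while the MAD stays close to the "typical" variation of the other samples.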
View and Filtering
A lot of data is being presented, which can be overwhelming.
The view allows you to collapse multiple hops on either side. This is especially handy on traces with many nodes, when you want to focus on particular sections.
With search, you can track down nodes matching certain parameters, e.g. show only nodes in the US or nodes with a specific ASN. Search is auto-populated with the components found in the trace, so you can easily select what you need.
Quickly filter on IP, Country, ASN or Name.
Visualizing multiple tests
When running the same trace from multiple Sinefa probes, the default display collapses the majority of nodes on the destination side; the first few nodes on the source side are displayed. This enables faster screen rendering and lets you select the trace of interest.
Select the trace of interest by clicking the box detailing the destination nodes, which unpacks the trace and allows you to change the View to see more nodes. Select the source node and click Filter by XYZ if you want to visualize only this trace.
Use the end-to-end chart to look back in time and drill down to points of interest for different metrics. When zooming in, only the traces for that time period will be displayed.
|NQS||Network Quality Scoring|
Network Quality is measured between two points on a network. You can monitor Network Quality either between two Sinefa probes or between a Sinefa probe and a Network Object.
Once two Probes are deployed and subscribed, Network Quality measurements begin automatically between them. If you only have one Probe set up in your Account, or you want to test against a public Probe, you can set up a Public Probe.
|Availability||Is calculated by measuring, for each sample period, how long the period spent in runs of 10 or more seconds of consecutive loss events. For each sample period, the available time and unavailable time are recorded, so over long periods of time, availability measurements are extremely accurate.|
|Delay||(also known as latency) Is measured by calculating the time it takes to receive each packet back, minus any time spent by the operating system. It's important to account for OS processing time (such as the time it takes for the OS to schedule the request, or the time taken to serialize the packet). Not doing so can cause inaccurate results, particularly on low-delay links. Over a sample period, the median value is calculated, and this is what's reported. Why median and not mean? Typically, a set of delay samples is skewed to the right. If there are outliers, they are usually high, as opposed to low. These outliers can push averages up since they are rarely uniform, so the median better represents "typical" network delay, which in turn better reflects how applications get impacted.|
|Jitter||Is calculated by taking the median absolute deviation of the delays for each sample period. Median absolute deviation (or MAD for short) is best described as the median deviation from the median. As with delay, MAD is not as affected by large positive outliers, so it provides a good representation of "typical" network jitter. Again, this better reflects how applications are impacted on the network.|
|Loss||Is calculated by counting the number of lost packets (after a time-out) and dividing that by the total number of packets sent for each sample period. Lost packets do not count towards delay or jitter calculations.|
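The delay, loss, and availability definitions above can be sketched as follows. This is a minimal illustration, assuming per-packet round-trip times (with None marking packets lost after time-out) and per-second loss flags for the availability check; all names and sample values are hypothetical:

```python
import statistics

def delay_and_loss(results_ms):
    """Median delay (ms) and loss (%) for one sample period.

    results_ms: round-trip time per packet, or None for a packet
    lost after time-out. Lost packets are excluded from the delay
    calculation, per the definitions above; median is used rather
    than mean so high outliers don't skew the result.
    """
    received = [r for r in results_ms if r is not None]
    delay = statistics.median(received)
    loss = 100.0 * (len(results_ms) - len(received)) / len(results_ms)
    return delay, loss

def availability_pct(loss_seconds, threshold=10):
    """Percentage of a sample period counted as available.

    loss_seconds: per-second booleans, True if that second saw loss.
    Only runs of `threshold` (10) or more consecutive loss seconds
    count as unavailable time; shorter loss bursts stay available.
    """
    unavailable = run = 0
    for lossy in loss_seconds + [False]:  # sentinel flushes the final run
        if lossy:
            run += 1
        else:
            if run >= threshold:
                unavailable += run
            run = 0
    return 100.0 * (len(loss_seconds) - unavailable) / len(loss_seconds)
```

For example, a 60-second period containing a single 12-second outage yields 80% availability, whereas nine scattered one-second loss events would still count as fully available, since no run reaches the 10-second threshold.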