Quality metrics measured by Sinefa include delay, jitter, loss and availability. These are measured by sending an empty 28-byte UDP packet to the Sinefa Instance on the other side of the WAN, which immediately sends it back. UDP is used in preference to other protocols because it produces results that most closely represent the actual network. ICMP, for example, can produce misleading results because routers along the path are often configured to treat it differently.
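The probe mechanism can be illustrated with a minimal sketch (this is not Sinefa's actual protocol or implementation; the loopback reflector simply stands in for the remote Instance):

```python
# Minimal sketch of a UDP round-trip probe. A reflector echoes each empty
# datagram straight back, and the sender times the round trip.
import socket
import threading
import time

def reflector(sock):
    """Echo every datagram back to its sender, until the socket is closed."""
    while True:
        try:
            data, addr = sock.recvfrom(64)
        except OSError:   # socket closed -> stop the thread
            return
        sock.sendto(data, addr)

# A reflector on loopback stands in for the Sinefa Instance across the WAN.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=reflector, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)   # probes that exceed this are counted as lost

rtts = []
for _ in range(5):
    start = time.perf_counter()            # timestamp just before send
    client.sendto(b"", server.getsockname())
    client.recvfrom(64)                    # blocks until the echo returns
    rtts.append((time.perf_counter() - start) * 1000.0)  # round trip in ms

print([round(r, 3) for r in rtts])
server.close()
client.close()
```

A real measurement would also subtract OS processing time from each sample, as described below.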
Delay (also known as latency) is measured by calculating the time it takes to receive each packet back, minus any time spent by the operating system. It's important to account for OS processing time, such as the time it takes for the OS to schedule the request or to serialize the packet; not doing so can produce inaccurate results, particularly on low-delay links. Over each sample period, the median value is calculated, and this is what's reported. Why the median and not the mean? A set of delay samples is typically skewed to the right: when there are outliers, they are usually high rather than low. Because these outliers pull the mean upward, the median better represents "typical" network delay, which in turn better reflects how applications are impacted.
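The effect is easy to see with a hypothetical one-minute sample that contains a couple of queueing spikes:

```python
import statistics

# Hypothetical round-trip delays in ms for one sample period, right-skewed
# by two high outliers (e.g. momentary queueing on a router).
delays_ms = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 85.0, 120.0]

mean_ms = statistics.mean(delays_ms)      # pulled up by the two outliers
median_ms = statistics.median(delays_ms)  # the "typical" delay

print(mean_ms, median_ms)   # mean > 33 ms, median = 10.25 ms
```

Most packets saw roughly 10 ms of delay, which is what the median reports; the mean suggests a link three times slower than applications actually experience.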
Jitter is calculated by taking the median absolute deviation of the delays within each sample period. Median absolute deviation (or MAD for short) is best described as the median deviation from the median. As with delay, MAD is not as affected by large positive outliers, so it provides a good representation of "typical" network jitter. Again, this better reflects how applications are impacted on the network.
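MAD follows directly from that description: take the median, then the median of each sample's absolute distance from it. A sketch, reusing the hypothetical delay sample from above:

```python
import statistics

def median_absolute_deviation(samples):
    """MAD: the median of each sample's absolute deviation from the median."""
    med = statistics.median(samples)
    return statistics.median(abs(x - med) for x in samples)

# Same hypothetical sample period as before: steady ~10 ms with two spikes.
delays_ms = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 85.0, 120.0]

print(median_absolute_deviation(delays_ms))   # ~0.2 ms
```

Despite the two large spikes, the MAD reports jitter of about 0.2 ms, reflecting the tightly clustered delays that most packets actually saw.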
Loss is calculated by counting the number of lost packets (those that hit the time-out) and dividing by the total number of packets sent in each sample period. Lost packets are excluded from the delay and jitter calculations.
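In code form, this amounts to tracking which probes timed out and keeping them out of the delay statistics. A sketch, using `None` as a hypothetical marker for a timed-out probe:

```python
# One entry per probe in the sample period: round-trip time in ms,
# or None for a probe that hit the time-out.
rtts = [10.1, 10.3, None, 10.2, None, 10.0]

lost = sum(1 for r in rtts if r is None)
loss_pct = 100.0 * lost / len(rtts)

# Only successful probes feed the delay and jitter calculations.
valid = [r for r in rtts if r is not None]

print(loss_pct, len(valid))   # 2 of 6 lost -> ~33.3% loss, 4 valid samples
```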
Availability is calculated by measuring, for each sample period, the duration of any stretches of 10 or more consecutive seconds of loss. The available time and unavailable time in each sample period are both recorded, so over long periods of time, availability measurements are extremely accurate.
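One way to sketch this rule (an assumption about the mechanics, not Sinefa's actual code) is to flag each second of the period as lossy or not, then count only seconds that belong to a run of 10 or more:

```python
def unavailable_seconds(loss_flags, threshold=10):
    """Count seconds that belong to runs of `threshold`+ consecutive loss seconds.

    loss_flags: one bool per second of the sample period (True = probes lost).
    """
    total = 0
    run = 0
    for lossy in loss_flags:
        if lossy:
            run += 1
        else:
            if run >= threshold:   # run just ended; was it long enough?
                total += run
            run = 0
    if run >= threshold:           # handle a run that reaches the period's end
        total += run
    return total

# 60-second period with a 12-second outage and a brief 3-second blip:
flags = [False] * 10 + [True] * 12 + [False] * 20 + [True] * 3 + [False] * 15
down = unavailable_seconds(flags)
print(down, len(flags) - down)   # 12 seconds unavailable, 48 available
```

Note that the 3-second blip counts as loss but not as unavailability, since it never reaches the 10-second threshold.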
By default, 1 packet is sent per second and the sample period is 1 minute, so there are 60 samples per sample period. This default configuration consumes less than 5MB of data per link, per day.
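The data-usage figure is easy to verify with back-of-the-envelope arithmetic, assuming the quoted 28 bytes covers the IP and UDP headers (20 + 8) on an empty payload and that each probe crosses the link twice, out and back:

```python
# Back-of-the-envelope check of the default configuration's data usage.
packet_bytes = 28                 # assumed: 20-byte IP header + 8-byte UDP header
packets_per_second = 1
seconds_per_day = 24 * 60 * 60    # 86,400

# Each probe crosses the link twice: once out, once reflected back.
bytes_per_day = packet_bytes * packets_per_second * seconds_per_day * 2

print(bytes_per_day / 1e6)   # ~4.84 MB -> under 5MB per link, per day
```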