North American Network Operators Group


Re: Calculating Jitter

  • From: Marshall Eubanks
  • Date: Fri Jun 10 09:22:15 2005

On Fri, 10 Jun 2005 03:53:09 -0600
 Jeff Murri <[email protected]> wrote:
> 
> I'm hoping here that this post isn't out of line with the scope of the 
> NANOG list, of which I've been a long time lurker.  If so, please just 
> ignore me. 
> 

Hello Jeff; 

These are both moving averages; the question is the memory of the moving average.
The RTP version has a specific finite memory, while the one you describe has an infinite memory.

The statistical trouble with infinite-memory moving-average estimates is that they
eventually converge to a fixed value (after a million samples, say, even a
large change in jitter will take a long time to produce even a small change in the average), and,
if the underlying process is not stationary, they need not converge to anything like
the correct current value. The RTCP protocol uses a finite-memory estimator that has
enough memory to smooth out statistical fluctuations somewhat, but which
responds to real changes in the underlying jitter fairly rapidly, and which is comparable
across implementations. Yes, this means that older data is ignored (that's the finite-memory part),
but its intended use is to estimate what's happening in the network now, not last week. (I
have had RTCP sessions up for months; even a solid day of high jitter would hardly budge a
total average over months.)
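
To make that concrete, here is a small C sketch of the RFC 1889-style update
J = J + (|D(i-1,i)| - J)/16, run over made-up delay differences (the variable
names and numbers are purely illustrative, not from any particular implementation):

#include <math.h>
#include <stdio.h>

/* Finite-memory (exponentially weighted) jitter estimator in the style of
 * RFC 1889: old data decays geometrically with gain 1/16, so the estimate
 * tracks the current network state rather than ancient history.
 * delay_diff stands for D(i-1,i), the difference between successive
 * one-way delay measurements. */
static double update_jitter(double jitter, double delay_diff)
{
    return jitter + (fabs(delay_diff) - jitter) / 16.0;
}

int main(void)
{
    double jitter = 0.0;

    /* Hypothetical delay differences (ms): low jitter, then a step up. */
    for (int i = 0; i < 200; i++) {
        double d = (i < 100) ? 2.0 : 20.0;
        jitter = update_jitter(jitter, d);
        if (i == 99 || i == 149 || i == 199)
            printf("sample %3d: jitter = %.2f ms\n", i, jitter);
    }
    /* About fifty samples after the change the estimate has nearly reached
     * the new level; an all-time average over the whole history would move
     * far more slowly. */
    return 0;
}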

If what you want is the jitter averaged over some long period of time (say so you can say that
the average jitter on your network was X msec in 2005), then what you want is indeed

 Jsum = Jsum+|D(i-1,i)|
 J = Jsum / (sample count - 1)

(assuming that the sample count is the number of delay measurements, not the number of delay
differences). Note that that is the same as 

J[i] = J[i-1] * ((i-1)/i) + |D(i-1,i)| * (1/i)

assuming that the first sample is i = 0 and J[0] is finite; this shows clearly
how new data gets down-weighted as time goes on and i increases.
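
For what it's worth, here is a small C sketch (again with made-up numbers)
checking that the running-sum form and the recursive down-weighted form agree;
i here counts the delay differences, so the divisor n corresponds to your
"sample count - 1":

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Made-up delay differences |D(i-1,i)| in milliseconds. */
    const double d[] = {2.0, 3.5, 1.0, 4.0, 2.5, 3.0};
    const int n = sizeof d / sizeof d[0];   /* number of delay differences */

    /* Form 1: accumulate Jsum, divide by the number of differences at the end. */
    double jsum = 0.0;
    for (int i = 0; i < n; i++)
        jsum += fabs(d[i]);
    double j_sum_form = jsum / n;

    /* Form 2: recursive update J[i] = J[i-1]*((i-1)/i) + |D(i-1,i)|*(1/i),
     * with i counting the differences from 1 to n. */
    double j_recursive = 0.0;
    for (int i = 1; i <= n; i++)
        j_recursive = j_recursive * ((double)(i - 1) / i)
                      + fabs(d[i - 1]) * (1.0 / i);

    printf("sum form: %.4f   recursive form: %.4f\n", j_sum_form, j_recursive);
    return 0;
}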

Regards
Marshall

> We're trying to calculate Jitter of a variable (non-limited) size data 
> set.  One Jitter formula that we see cited occasionally (and is in RFC 
> 1889 - I believe iPerf uses this formula for its Jitter #'s) looks 
> something like this:
> 
> J = J+(|D(i-1,i)|-J)/16
> 
> The problem with this formula is that it works best on small sample 
> sets, and it also favors more recent samples.  As the sample size grows, 
> the jitter of early samples seems to get factored down to basic "noise", 
> and isn't really well represented in the overall Jitter number.
> 
> We're trying to find a viable formula for showing a general Jitter 
> "average" over a period of time.  One possibility here is just to 
> iterate all samples like this:
> 
> Jsum = Jsum+|D(i-1,i)|
> 
> and then calculating the jitter like this:
> 
> J = Jsum / (sample count - 1)
> 
> The sample count could be anywhere from 2 to 1 million (or more).  This 
> formula does seem to represent early samples in the "Jitter" number just 
> as strongly as later samples, but seems like it might be a bit simplistic.
> 
> Does anyone have any feedback on this alternate way of calculating 
> Jitter, or any better ways to do this?
> 
> Thanks in advance for any input.
> 
> Jeff Murri
> Nessoft, LLC
> [email protected]
> www.nessoft.com
>