count,SORT
Pop one element from the stack. This is the count of items to be sorted. The top count of the remaining elements are then sorted from the smallest to the largest, in place on the stack.
4,3,22.1,1,4,SORT -> 1,3,4,22.1
count,REV
Pop one element from the stack. This is the count of items to be reversed. The top count of the remaining elements are then reversed in place on the stack.
v1,v2,v3,3,REV -> v3,v2,v1
Example: CDEF:x=v1,v2,v3,v4,v5,v6,6,SORT,POP,5,REV,POP,+,+,+,4,/
will compute the average of the values v1 to v6 after removing the smallest and largest.
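For illustration (with made-up values), suppose v1 to v6 are 5,1,9,3,7,2: 6,SORT leaves 1,2,3,5,7,9 on the stack, POP drops the largest value (9), 5,REV turns the rest into 7,5,3,2,1, POP drops the smallest value (1), and +,+,+,4,/ yields (7+5+3+2)/4 = 4.25.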
count,AVG
Pop one element (count) from the stack. Now pop count elements and build the average, ignoring all UNKNOWN values in the process.
Example: CDEF:x=a,b,c,d,4,AVG
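A small worked example, assuming (as the description above suggests) that UNKNOWN values are excluded from both the sum and the divisor: 1,2,UNKN,5,4,AVG -> 2.667, i.e. (1+2+5)/3.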
count,SMIN and count,SMAX
Pop one element (count) from the stack. Now pop count elements and push the minimum/maximum back onto the stack.
Example: CDEF:x=a,b,c,d,4,SMIN
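For instance: 4,2,7,3,SMIN -> 2 and 4,2,7,3,SMAX -> 7 (the three values 4, 2 and 7 are popped and replaced by their minimum or maximum).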
count,MEDIAN
Pop one element (count) from the stack. Now pop count elements and find their median, ignoring all UNKNOWN values in the process. If there is an even number of non-UNKNOWN values, the average of the middle two is pushed onto the stack.
Example: CDEF:x=a,b,c,d,4,MEDIAN
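For instance: 1,7,3,9,4,MEDIAN -> 5, since the four values sort to 1,3,7,9 and the average of the two middle values is (3+7)/2 = 5.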
count,STDEV
Pop one element (count) from the stack. Now pop count elements and calculate the standard deviation over these values (ignoring any NAN values). Push the result back onto the stack.
Example: CDEF:x=a,b,c,d,4,STDEV
percent,count,PERCENT
Pop two elements (count,percent) from the stack. Now pop count elements and order them by size (-INF sorts as the smallest value, INF as the largest, and NaN is larger than -INF but smaller than anything else). Pick the element from the ordered list such that percent of the elements are equal to it or smaller. Push the result back onto the stack.
Example: CDEF:x=a,b,c,d,95,4,PERCENT
count,TREND, count,TRENDNAN
Create a "sliding window" average of another data series.
Usage: CDEF:smoothed=x,1800,TREND
This will create a half-hour (1800 second) sliding window average of x. The average is essentially computed as shown here:
 +---!---!---!---!---!---!---!---!--->
                                   now
                delay   t0
           <------------->
                    delay   t1
               <------------->
                        delay   t2
                   <------------->
Value at sample (t0) will be the average between (t0-delay) and (t0)
Value at sample (t1) will be the average between (t1-delay) and (t1)
Value at sample (t2) will be the average between (t2-delay) and (t2)
TRENDNAN is, in contrast to TREND, NAN-safe. If you use TREND and one source value is NAN, the complete sliding window is affected. The TRENDNAN operation ignores all NAN values in a sliding window and computes the average of the remaining values.
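TRENDNAN takes the same arguments as TREND, so the usage line above works unchanged, for example:
Usage: CDEF:smoothed=x,1800,TRENDNAN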
PREDICT, PREDICTSIGMA, PREDICTPERC
Create a "sliding window" average/sigma/percentil of another data series, that also shifts the data series by given amounts of time as well
Usage - explicitly stating the shifts:
CDEF:predict=<shift n>,...,<shift 1>,n,<window>,x,PREDICT
CDEF:sigma=<shift n>,...,<shift 1>,n,<window>,x,PREDICTSIGMA
CDEF:perc=<shift n>,...,<shift 1>,n,<window>,<percentile>,x,PREDICTPERC
Usage - shifts defined as a base shift and the number of times it is applied:
CDEF:predict=<shift multiplier>,-n,<window>,x,PREDICT
CDEF:sigma=<shift multiplier>,-n,<window>,x,PREDICTSIGMA
CDEF:perc=<shift multiplier>,-n,<window>,<percentile>,x,PREDICTPERC
Example: CDEF:predict=172800,86400,2,1800,x,PREDICT
This will create a half-hour (1800 second) sliding window average of x, taken one day (86400 seconds) and two days (172800 seconds) back in time. The average is essentially computed as shown here:
 +---!---!---!---!---!---!---!---!---!---!---!---!---!---!---!---!---!--->
                                                                        now
                                             shift 1            t0
                                         <----------------------->
                             window
                         <--------------->
                     shift 2
                 <----------------------------------------------->
     window
 <--------------->
                                                 shift 1            t1
                                             <----------------------->
                                 window
                             <--------------->
                         shift 2
                     <----------------------------------------------->
         window
     <--------------->
Value at sample (t0) will be the average between (t0-shift1-window) and (t0-shift1)
and between (t0-shift2-window) and (t0-shift2)
Value at sample (t1) will be the average between (t1-shift1-window) and (t1-shift1)
and between (t1-shift2-window) and (t1-shift2)
The function is NAN-safe by design. This also allows extrapolation into the future (say, a few days); you may need to define the data series with the optional start= parameter so that the source data series has enough data to provide a prediction at the beginning of the graph as well.
The percentile can be between [-100:+100]. Positive percentiles interpolate between values, while negative percentiles take the closest value.
Example: you run 7 shifts with a window of 1800 seconds. Assuming that the RRD file has a step size of 300 seconds, the percentile calculation is based on at most 42 distinct values (fewer if some are NAN). In the best case this gives a step of about 2.4 percent between values, so asking for the 99th percentile means looking at the 41.59th value. As there are only integers, that is either the 41st or the 42nd value.
With a positive percentile, a linear interpolation between the two values is done to get the effective value.
A negative percentile returns the closest value distance-wise - in the above case the 42nd value, which effectively returns the 100th percentile, i.e. the maximum of the previous 7 days in the window.
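As a sketch of the difference (variable names here are purely illustrative), using the 7-day setup from the example above: CDEF:p99=86400,-7,1800,99,value,PREDICTPERC interpolates to the 99th percentile, while CDEF:max7=86400,-7,1800,-100,value,PREDICTPERC takes the value closest to the 100th percentile, i.e. the maximum seen in the shifted windows.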
Here is an example that creates a 10-day graph, showing the prediction 3 days into the future with its uncertainty band (defined here as avg +/- 3*sigma). It also marks the points where the actual value leaves the prediction band.
rrdtool graph image.png --imgformat=PNG \
--start=-7days --end=+3days --width=1000 --height=200 --alt-autoscale-max \
DEF:value=value.rrd:value:AVERAGE:start=-14days \
LINE1:value#ff0000:value \
CDEF:predict=86400,-7,1800,value,PREDICT \
CDEF:sigma=86400,-7,1800,value,PREDICTSIGMA \
CDEF:upper=predict,sigma,3,*,+ \
CDEF:lower=predict,sigma,3,*,- \
LINE1:predict#00ff00:prediction \
LINE1:upper#0000ff:upper\ certainty\ limit \
LINE1:lower#0000ff:lower\ certainty\ limit \
CDEF:exceeds=value,UN,0,value,lower,upper,LIMIT,UN,IF \
TICK:exceeds#aa000080:1 \
CDEF:perc95=86400,-7,1800,95,value,PREDICTPERC \
LINE1:perc95#ffff00:95th_percentile
Note: Experience has shown that a factor between 3 and 5 to scale sigma is a good discriminator to detect abnormal behavior. This obviously depends also on the type of data and how "noisy" the data series is.
Also note the explicit use of start= in the DEF - this is necessary to load all the required data (even if it is not displayed).
This prediction can only be used for short term extrapolations - say a few days into the future.
DUP, POP, EXC
Duplicate the top element, remove the top element, or exchange the two top elements, respectively.
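For example: a,b,c,DUP -> a,b,c,c; a,b,c,POP -> a,b; a,b,c,EXC -> a,c,b.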
DEPTH
Pushes the current depth of the stack (before the push) onto the stack.
a,b,DEPTH -> a,b,2
n,COPY
Push a copy of the top n elements onto the stack.
a,b,c,d,2,COPY -> a,b,c,d,c,d
n,INDEX
Push a copy of the nth element, counting from the top of the stack, onto the stack.
a,b,c,d,3,INDEX -> a,b,c,d,b
n,m,ROLL
Rotate the top n elements of the stack by m places.
a,b,c,d,3,1,ROLL -> a,d,b,c
a,b,c,d,3,-1,ROLL -> a,c,d,b
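A rotation by m is the single-step rotation applied m times, so, following the pattern of the examples above, a,b,c,d,4,2,ROLL -> c,d,a,b.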