Hi,

I'm benchmarking the Mahalanobis distance to see how the accuracy and execution time change as the sample size increases. As far as I understand the algorithm, the execution time should grow linearly with the sample size: estimating the covariance and computing the per-sample distances each cost O(n*d^2) for n samples in d dimensions, and only the matrix inversion (which doesn't depend on n) is extra. The weird thing is that the time does grow linearly up to (and including) 199 samples, but then suddenly drops at 200 samples. I've attached a graph to illustrate this.
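For reference, here is a minimal sketch of the kind of benchmark I mean (simplified, on synthetic data, assuming a NumPy/SciPy-style implementation; not my exact code):

    import time
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    rng = np.random.default_rng(0)
    d = 5  # number of features, held fixed

    for n in range(50, 301, 50):
        X = rng.normal(size=(n, d))                  # synthetic sample of size n
        start = time.perf_counter()
        mu = X.mean(axis=0)                          # sample mean
        vi = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance; inversion cost is independent of n
        dists = [mahalanobis(x, mu, vi) for x in X]  # O(d^2) per sample, so linear in n overall
        print(f"n={n:4d}  time={(time.perf_counter() - start) * 1e3:.2f} ms")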

I'm using it for outlier detection. Only the execution time shows the anomaly at 200 samples; the accuracy keeps increasing smoothly, with no sudden change at that point.
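In case it's relevant, the outlier rule I have in mind is the usual chi-squared cutoff on the squared Mahalanobis distance; a sketch (the alpha level here is just a placeholder, my actual threshold may differ):

    import numpy as np
    from scipy.stats import chi2
    from scipy.spatial.distance import mahalanobis

    def flag_outliers(X, alpha=0.01):
        """Flag rows whose squared Mahalanobis distance exceeds the
        chi-squared quantile at level 1 - alpha (df = n_features)."""
        mu = X.mean(axis=0)
        vi = np.linalg.inv(np.cov(X, rowvar=False))
        d2 = np.array([mahalanobis(x, mu, vi) ** 2 for x in X])
        return d2 > chi2.ppf(1 - alpha, df=X.shape[1])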

Does anyone know why this happens?

Chris