Hi again,

Sorry about the confusion. In that sentence, 1 sample = 1 observation. What I've basically done is execute the code a couple of million times (since I'm working with music files) to make sure that the average execution time is as accurate as possible.
So my original graph shows the average time it took to execute the code with 1 observation in the data set, then 2 observations, then 3, and so on up to 250 observations (with the jump occurring between 199 and 200 observations).

I've attached a small example program (with a makefile) that performs the test. I'm using gettimeofday, which will probably only work under Linux. It might take a couple of seconds to execute. I would appreciate it if someone could run it (sorry Allin, this is probably the wrong mailing list again for posting C code) and verify that I'm not the only one seeing this time jump.
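In case it helps to see the structure without opening the attachment, here is a rough sketch of the kind of timing loop I mean (this is not the attached program itself; do_work is just a stand-in for the real windowed computation, and the repetition count here is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Placeholder for the code being benchmarked; the real test
   operates on a window of n observations. */
static double do_work(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        s += x[i] * x[i];
    }
    return s;
}

int main(void)
{
    const int reps = 100000;    /* repetitions to average over (sketch value) */
    const int maxobs = 250;     /* largest window size tested */
    double *x = malloc(maxobs * sizeof *x);
    volatile double sink = 0.0; /* keep the compiler from dropping the work */

    for (int i = 0; i < maxobs; i++) {
        x[i] = (double) rand() / RAND_MAX;
    }

    for (int n = 1; n <= maxobs; n++) {
        struct timeval t0, t1;

        gettimeofday(&t0, NULL);
        for (int r = 0; r < reps; r++) {
            sink += do_work(x, n);
        }
        gettimeofday(&t1, NULL);

        /* total elapsed time in microseconds for 'reps' runs */
        long usec = (t1.tv_sec - t0.tv_sec) * 1000000L +
                    (t1.tv_usec - t0.tv_usec);
        printf("%d number of observations: %ld\n", n, usec);
    }

    free(x);
    return 0;
}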

Here is the relevant part of my output:

197 number of observations: 784814
198 number of observations: 760379
199 number of observations: 759822
200 number of observations: 598327
201 number of observations: 602174
202 number of observations: 604390
203 number of observations: 607213

Chris

On 2014/04/15 04:55 PM, Riccardo (Jack) Lucchetti wrote:
On Tue, 15 Apr 2014, GOO Creations wrote:

I've used 8 different datasets with 30-40 million samples each. Every single window over every single dataset gave the exact same time jump between 199 and 200 observations.

Sorry, _now_ I'm officially confused. Could you please clarify what you mean by "samples" and "observations"?

-------------------------------------------------------
  Riccardo (Jack) Lucchetti
  Dipartimento di Scienze Economiche e Sociali (DiSES)

  Università Politecnica delle Marche
  (formerly known as Università di Ancona)

  r.lucchetti@univpm.it
  http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------

