Hi gretl-listers,
I have a problem with the running time of a simulation of mine. IMO the
time needed should grow linearly with the number of simulation runs, but
it turns out that it is instead the "marginal cost" of an extra 100 runs
that grows linearly: each additional 100 runs takes about 4 or 5 seconds
more than the previously added 100 runs (see below for the numbers), so
the total time grows quadratically.
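As a quick sanity check on the numbers below (in Python, just arithmetic on the reported times): the first differences are the cost of each extra 100-run block, and the second differences are roughly constant, which is the signature of quadratic growth.

```python
# Elapsed times (seconds) reported below, for 100..1000 runs in steps of 100.
elapsed = [5.53, 15.09, 29.18, 47.70, 70.75, 98.17, 130.09, 168.83, 209.55, 257.09]

# First differences: time taken by each additional 100-run block.
diffs = [b - a for a, b in zip(elapsed, elapsed[1:])]

# Second differences: how much more each block costs than the previous one.
second = [b - a for a, b in zip(diffs, diffs[1:])]

print([round(d, 1) for d in diffs])   # roughly 10, 14, 19, 23, ... (linear)
print([round(d, 1) for d in second])  # roughly constant, ~4-5 s per block
```

A constant second difference of c per block of 100 runs means total time ~ (c/2) * (n/100)^2, so 20000 runs would be on the order of 400 times slower per-block than the first 100 -- consistent with the overnight run never finishing.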
The script essentially simulates data and estimates VECMs. For 200 runs,
data simulation takes about a third of the total time (5s) and the
estimation close to the rest (9s; 1s is remaining overhead). Sorting the
results with gretl's quantile() function is not significantly costly. I
don't think RAM limitations are the problem -- it's not that much data,
actually; it's just CPU intensive.
One guess is that the huge script text output may be responsible for
part of the problem -- here it would help if a VECM could be estimated
without any output, as I suggested in a previous email (vecm with a
--silent option).
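For what it's worth, here is a hypothetical sketch (in Python, and purely an assumption about the mechanism, not a claim about gretl's internals) of why appending each run's output to one ever-growing buffer would produce exactly this pattern: if every append copies the whole buffer accumulated so far, the total copying work is quadratic in the number of runs.

```python
def copied_chars(n_chunks, chunk_len=1000):
    """Total characters copied if each of n_chunks appends copies the
    whole buffer built so far: chunk_len * (1 + 2 + ... + n), i.e.
    quadratic in n_chunks."""
    return sum(i * chunk_len for i in range(1, n_chunks + 1))

# Doubling the number of output chunks roughly quadruples the work:
print(copied_chars(100) / copied_chars(50))  # close to 4
```

If something like this is going on, suppressing (or redirecting) the per-run output should make the marginal cost flat again.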
But that is just one guess, and I don't really know what's happening.
Any hints or remarks? I'm not posting the code because it's quite complex.
thanks,
sven
100: Elapsed time: 5.530000
(Diff: 10)
200: Elapsed time: 15.090000
(Diff: 14)
300: Elapsed time: 29.180000
(Diff: 19)
400: Elapsed time: 47.700000
(Diff: 23)
500: Elapsed time: 70.750000
(Diff: 27)
600: Elapsed time: 98.170000
(Diff: 32)
700: Elapsed time: 130.090000
(Diff: 39 -- gretl not always the only active task)
800: Elapsed time: 168.830000 (35810 lines in script output, 1.6MB)
(Diff: 41)
900: Elapsed time: 209.550000
(Diff: 47)
1000: Elapsed time: 257.090000 (51110 lines in script output, 2.3MB)
20000: ran overnight, no result, gretl still at 100% CPU, killed it.