Thanks for both pieces of good advice; fortunately I have been following them all along, namely, declare the matrices beforehand and, when slicing, slice by column.
However, neither answers my question, so I ran an experiment
based on Jack's example:
<hansl>
# N must be set beforehand; the value used was not shown, e.g.:
N = 1000000

# case 1: one N x 3 matrix, three element writes per iteration
X = zeros(N, 3)
set stopwatch
loop i = 1..N
    X[i,1] = sqrt(i)
    X[i,2] = sqrt(i)
    X[i,3] = sqrt(i)
endloop
t1 = $stopwatch

# case 2: three N x 1 matrices, one element write each per iteration
X1 = zeros(N, 1)
X2 = zeros(N, 1)
X3 = zeros(N, 1)
set stopwatch
loop i = 1..N
    X1[i,1] = sqrt(i)
    X2[i,1] = sqrt(i)
    X3[i,1] = sqrt(i)
endloop
t2 = $stopwatch
</hansl>
and I got
<output>
t1 = 0.641354
t2 = 0.653886
</output>
So, perhaps contrary to first a priori impressions, it appears
that splitting the storage into separate one-column matrices, if anything,
slows gretl down a bit.
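For completeness, since in my Monte Carlo each replication delivers all the estimates at once, a third case worth timing would assign a whole row of the pre-allocated matrix per iteration. A minimal sketch, not run here; it assumes N is defined as above, with a 1x3 row standing in for the vector of per-replication estimates:
<hansl>
# case 3: one full-row assignment per iteration (hypothetical variant)
X = zeros(N, 3)
set stopwatch
loop i = 1..N
    X[i,] = sqrt(i) * ones(1, 3)   # single 1x3 row write instead of three element writes
endloop
t3 = $stopwatch
print t3
</hansl>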
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
scholar: https://g.co/kgs/BqH2YU
On Fri, 23 Apr 2021, Sven Schreiber wrote:
On 23.04.2021 at 12:34, Alecos Papadopoulos wrote:
These estimates will be stored in matrices. Does it make any
difference in computational speed if I use one matrix of [replications
/times/ 20] dimension, or 20 matrices of [replications /times/ 1]
dimension?
Not directly an answer to your question (sorry), but perhaps still
relevant: in my experience, when creating or filling a large matrix in a
loop, it's a lot faster in gretl if the matrix of the final dimension is
pre-initialized and then the rows' or columns' values are reassigned,
instead of recursively stacking (concatenating) new rows or columns onto a
small initial matrix.
This is very good advice: especially when matrices are large, the computational impact of memory allocation can be noticeable; consider for example this script:
<hansl>
set verbose off
N = 10000

# method 1: grow the matrix by row concatenation
set stopwatch
X = {}
loop i = 1 .. N
    X = X | sqrt(i)
endloop
t1 = $stopwatch

# method 2: pre-allocate the full matrix, then assign elements
set stopwatch
X = zeros(N, 1)
loop i = 1 .. N
    X[i] = sqrt(i)
endloop
t2 = $stopwatch

print t1 t2
</hansl>
On my system, the output is
<output>
t1 = 0.17871549
t2 = 0.0025337400
</output>
As for your original question: the most efficient format depends in many cases on the nature of your problem (see below).
Also the gretl gurus told me that it's faster to work (with hansl
scripting) on entire columns instead of entire rows, due to the internal
memory layout of a gretl matrix. (I hope I got this right and didn't mix
it up.)
That's absolutely correct: to be more specific, the operation of matrix slicing takes much less CPU if it involves columns instead of rows, on both sides of the assignment operator. For example,
<hansl>
set verbose off
K = 1000
h = round(K/10)
N = 1000
X = mnormal(K, K)
y = zeros(K, K)
# do some arbitrary slicing

# method 1: slice by column
set stopwatch
loop i = 1 .. N
    # random set of h column indices; the first argument of mrandgen
    # is the distribution code (i = discrete uniform), not the loop index
    sel = mrandgen(i, 1, K, 1, h)
    y[,sel] = X[,sel]
endloop
t1 = $stopwatch

# method 2: slice by row
set stopwatch
loop i = 1 .. N
    # random set of h row indices (rows run 1..K)
    sel = mrandgen(i, 1, K, 1, h)
    y[sel,] = X[sel,]
endloop
t2 = $stopwatch

print t1 t2
</hansl>
gives, on my system
<output>
t1 = 0.091952580
t2 = 0.82560153
</output>
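To isolate the read side of that claim (the right-hand side of the assignment only), here is a minimal sketch reusing K, N and X from the script above; it merely extracts a single column versus a single row per iteration, and no timings are claimed:
<hansl>
# read-only comparison: extract one column vs one row per iteration
set stopwatch
loop i = 1 .. N
    col = X[, 1 + (i % K)]    # single-column extraction
endloop
t3 = $stopwatch

set stopwatch
loop i = 1 .. N
    row = X[1 + (i % K), ]    # single-row extraction
endloop
t4 = $stopwatch

print t3 t4
</hansl>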
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti@univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------