On Fri, 23 Apr 2021, Alecos Papadopoulos wrote:
> But the crucial additional point is that we have some gains if we go
> down to N x 1 matrices, i.e. vectors, so that we can skip the column
> index.

I guess it ultimately depends on the nature of your problem: memory
allocation has a computational cost, and so does matrix slicing. You
have to find the best tradeoff between the two. Additionally, you should
factor in the cost of managing several matrices; for example, the "$i"
construct in loops is rather time-consuming, because string operations are
involved. Perhaps you could consider matrix arrays as well.
For example, try playing with different values for N and K in the
following script:
<hansl>
set verbose off
N = 1000
K = 100
H = 100   # number of timing repetitions

# 1: one N x K matrix, each column assigned in turn
set stopwatch
loop H
    X = zeros(N, K)
    loop i = 1 .. K
        X[,i] = 1
    endloop
endloop
t1 = $stopwatch

# 2: K separate N x 1 matrices, addressed via the "$i" string construct
set stopwatch
loop H
    loop i = 1 .. K
        matrix X$i = zeros(N, 1)
        X$i = 1
    endloop
endloop
t2 = $stopwatch

# 3: an array of K matrices
set stopwatch
loop H
    matrices XX = array(K)
    loop i = 1 .. K
        XX[i] = zeros(N, 1)
        XX[i] = 1
    endloop
endloop
t3 = $stopwatch

print t1 t2 t3
</hansl>

Having said all this: in most cases, if the problem is not absolutely
trivial, the main factor in determining the speed of your code is likely
to be the computational complexity of the main computation, and the way
you store things away would only play a marginal role. In an MCMC
algorithm such as Metropolis-Hastings, most of your CPU time will
probably be spent generating random numbers and computing the likelihood
at the various points you visit along the Markov chain, no matter how
you organise the data.
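
To make that concrete, here is a minimal sketch (not from the original
thread) of a random-walk Metropolis sampler for a standard normal target;
the target density, the proposal scale tau and the number of draws R are
purely illustrative assumptions. Essentially all the work in the loop body
is random-number generation plus the evaluation of the log density ratio,
while the way the draws are stored hardly matters.

<hansl>
set verbose off
# illustrative random-walk Metropolis sketch, standard normal target
R = 10000      # number of draws (arbitrary)
tau = 0.5      # proposal standard deviation (arbitrary)
matrix chain = zeros(R, 1)
x = 0
acc = 0
loop r = 1 .. R
    # proposal: random-number generation
    cand = x + tau * randgen1(z, 0, 1)
    # log ratio of target densities: the "likelihood" work
    logratio = -0.5*cand^2 + 0.5*x^2
    if log(randgen1(u, 0, 1)) < logratio
        x = cand
        acc += 1
    endif
    chain[r] = x
endloop
printf "acceptance rate = %g\n", acc/R
</hansl>

Swapping the single pre-allocated vector chain for any of the three
storage schemes above should make very little difference to the total
running time.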
-------------------------------------------------------
Riccardo (Jack) Lucchetti
Dipartimento di Scienze Economiche e Sociali (DiSES)
Università Politecnica delle Marche
(formerly known as Università di Ancona)
r.lucchetti(a)univpm.it
http://www2.econ.univpm.it/servizi/hpp/lucchetti
-------------------------------------------------------