I am writing a Metropolis-Hastings algorithm for Bayesian estimation of a model. At least for the moment, there are 10 parameters to be estimated in this way, which means that from each replication I want to store 20 values, so as to retrieve both the "proposal" sequence and the "accepted" sequence of estimates. Typically the algorithm will run for at least 100,000 replications, and possibly up to 500,000.

These estimates will be stored in matrices. Does it make any difference in computational speed whether I use one matrix of dimension [replications x 20], or 20 matrices of dimension [replications x 1]?
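To make the two layouts concrete, here is a minimal sketch in Python/NumPy (not necessarily the software I will end up using; the standard-normal target, random-walk proposal and function names are placeholders, not my actual model). The first variant preallocates one [replications x 20] matrix, the second uses 20 separate [replications x 1] vectors:

import numpy as np

def mh_one_matrix(n_rep=100_000, n_par=10, seed=0):
    # One preallocated (n_rep x 2*n_par) matrix: columns 0..n_par-1 hold the
    # proposals, columns n_par..2*n_par-1 hold the accepted values.
    rng = np.random.default_rng(seed)
    store = np.empty((n_rep, 2 * n_par))
    current = np.zeros(n_par)
    for r in range(n_rep):
        proposal = current + 0.5 * rng.standard_normal(n_par)
        # toy standard-normal target with a symmetric proposal
        log_ratio = -0.5 * (proposal @ proposal - current @ current)
        if np.log(rng.uniform()) < log_ratio:
            current = proposal
        store[r, :n_par] = proposal
        store[r, n_par:] = current
    return store

def mh_many_vectors(n_rep=100_000, n_par=10, seed=0):
    # Same loop, but with 2*n_par separate (n_rep x 1) vectors instead of
    # one wide matrix.
    rng = np.random.default_rng(seed)
    props = [np.empty(n_rep) for _ in range(n_par)]
    accepted = [np.empty(n_rep) for _ in range(n_par)]
    current = np.zeros(n_par)
    for r in range(n_rep):
        proposal = current + 0.5 * rng.standard_normal(n_par)
        log_ratio = -0.5 * (proposal @ proposal - current @ current)
        if np.log(rng.uniform()) < log_ratio:
            current = proposal
        for j in range(n_par):
            props[j][r] = proposal[j]
            accepted[j][r] = current[j]
    return props, accepted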

I know that matrices become a nightmare for software when both dimensions grow large, but what happens when one dimension is large while the other is small?

Does it pay to make the small dimension as small as possible?

PS: I know that one could also (or alternatively) break the large dimension into pieces by, I guess, an IF statement inside the loop that depends on the loop index value, but I wanted to know whether anything can be improved by keeping the one dimension large while splitting up the other.
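For the record, a sketch of the kind of chunking I have in mind (again Python/NumPy, and again the target, proposal and names are only illustrative): a fixed-size buffer is reused and set aside whenever the loop index hits a chunk boundary.

import numpy as np

def mh_chunked(n_rep=500_000, n_par=10, chunk_size=50_000, seed=0):
    # Reuse one small (chunk_size x 2*n_par) buffer and set the full chunk
    # aside whenever the loop index reaches a chunk boundary (this assumes
    # n_rep is a multiple of chunk_size; the chunks could equally well be
    # written to disk instead of collected in a list).
    rng = np.random.default_rng(seed)
    buffer = np.empty((chunk_size, 2 * n_par))
    chunks = []
    current = np.zeros(n_par)
    for r in range(n_rep):
        proposal = current + 0.5 * rng.standard_normal(n_par)
        log_ratio = -0.5 * (proposal @ proposal - current @ current)
        if np.log(rng.uniform()) < log_ratio:
            current = proposal
        row = r % chunk_size
        buffer[row, :n_par] = proposal
        buffer[row, n_par:] = current
        if row == chunk_size - 1:
            chunks.append(buffer.copy())
    return np.vstack(chunks)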

-- 
Alecos Papadopoulos PhD
Athens University of Economics and Business
web: alecospapadopoulos.wordpress.com/
scholar: https://g.co/kgs/BqH2YU