On 12.04.19 at 09:07, Sven Schreiber wrote:
On 11.04.2019 at 22:20, Allin Cottrell wrote:
> On Thu, 11 Apr 2019, Artur Tarassow wrote:
>
>> On 11.04.19 at 18:12, Sven Schreiber wrote:
>>> On 11.04.2019 at 17:55, Artur Tarassow wrote:
>>>> Just out of curiosity: Gretl wouldn't compute stuff in parallel
>>>> out of the box, right?
>>>
>>> Hehe, well if "somebody" implemented the cross-array operation natively
>>> and used OpenMP or stuff like that then perhaps it would... otherwise:
>>> not, I guess.
>>
>> Of course, what I really meant (without saying it explicitly) was whether
>> there is some simple way to exploit C to do this. But I guess this is
>> rather a non-trivial issue...
>
> Yes, non-trivial. The thing is, with feval() "anything could happen"
> (that is, any part of the libgretl code could be visited as a result of
> the function call). Getting this right with OpenMP would require that
> libgretl as a whole is thread-safe.
Thanks for the explanation, Allin.
Hm, not sure I understand this point in the given context. What I
thought Artur meant is that for a given array of matrices the same
operation should be applied to every array member. That sounds to me
like a natural parallelization task without race conditions and so on.
OK, I guess one has to rule out that the operation itself accesses
sister elements in the same array, and I admit that ensuring this may
not be trivial.
But say you want to apply cdemean() to every matrix in an N-element
matrices array. In principle it wouldn't be too difficult to distribute
that task to different threads/cores, no?
Yes, that's an example I had in mind. But given that some functions
already seem to support multi-threading natively [at least for big
matrices and certain linear algebra operations, I frequently enjoy
watching all cores fully loaded ;-)], there would be no real added
value in parallelizing this. For more complex tasks, one can
(fortunately) always rely on your 'parallel_specs' package ;-)
Best,
Artur