On Wed, 11 Jan 2017, Sven Schreiber wrote:
> On 11.01.2017 at 07:51, Sven Schreiber wrote:
>> On 10.01.2017 at 21:27, Allin Cottrell wrote:
>>> you could try "launch", see
>>> http://lists.wfu.edu/pipermail/gretl-devel/2017-January/007250.html
>>
>> After you said launch is meant for GUI programs I was trying to avoid
>> it. Time to reconsider!
> OK, attached is a version where I switched over from "! cmd /c" + output
> redirection to using launch with an outfile command embedded in the temp
> files.
> This is now really working in parallel: I verified with a more
> CPU-intensive task that 2 of the 4 CPU cores are actually used, instead
> of only one as before.
Thanks.
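For the record, here's roughly what I imagine that looks like -- an
untested sketch, with the filenames, the placeholder "task" and the
gretlcli invocation all just illustrative (and assuming gretlcli is in
the PATH):

  # write one temp script per worker; each script redirects its own
  # output via "outfile", then goes to a separate gretlcli process
  scalar N = 4
  loop i=1..N
    string scriptname = sprintf("hansltemp%d.inp", i)
    outfile @scriptname --write
      printf "outfile hanslout%d.txt --write\n", i
      printf "# ... per-worker task goes here ...\n"
      printf "outfile --close\n"
    outfile --close
    # launch returns right away, so the N processes run concurrently
    launch gretlcli -b @scriptname
  endloop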
>>> I'm attaching a variant of your script which permits comparison with
>>> MPI.
>>
>> Thanks, I will take a look.
> Very interesting that the simple construction:
>
>   mpi
>     runfile = sprintf("hansltemp%d.inp", $mpirank + 1)
>     run @runfile
>   end mpi --np=@Nstr
>
> would work, are you sure? I would never have guessed that it's possible
> without mpiscatter or similar.
Yes, it works: in this case all the info needed by each MPI worker is
in the pre-generated script and the bundle that the script directs
the worker to read. However, this could now be done more cleanly: in
git, mpisend, mpirecv and mpibcast accept bundle arguments.
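For example, the bundle-based version might go something like this --
untested, and the bundle contents and the per-worker "result" are just
placeholders:

  mpi
    bundle b
    if $mpirank == 0
      # rank 0 packs whatever the workers need into a bundle
      b.T = 200
      b.scale = 0.5
    endif
    # with current git the bundle can be broadcast directly
    mpibcast(&b)
    # dummy per-worker result: a 1x2 row vector
    scalar r = $mpirank
    scalar v = b.T * b.scale
    matrix res = { r, v }
    if $mpirank > 0
      mpisend(res, 0)
    else
      # rank 0 collects the workers' rows
      matrix all = res
      scalar nw = $mpisize - 1
      loop j=1..nw
        all = all | mpirecv(j)
      endloop
      print all
    endif
  end mpi --np=4

The same pattern would cover sending per-worker bundles back to rank 0,
now that mpisend/mpirecv take bundles too.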
> (Apart from the fact that this code environment shows again that it's
> quite difficult to explain to users why they sometimes must write N and
> sometimes constructions like @Nstr, but that's a different topic.)
My bad: the @Nstr thing is not required there; plain N works fine.
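That is, with N defined as an ordinary scalar in the script, something
like this should be all that's needed (the printf is just a placeholder):

  scalar N = 4
  mpi
    printf "hello from MPI rank %d\n", $mpirank
  end mpi --np=N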
Allin