On 06.10.2019 16:40, Allin Cottrell wrote:
> On Sun, 6 Oct 2019, Marcin Błażejowski wrote:
>> On 06.10.2019 09:55, Marcin Błażejowski wrote:
>>> Hi,
>>>
>>> I'm facing a problem with MPI on Windows: when I start processing
>>> on more than 3 nodes I get the following message (current 64-bit
>>> git build + current MS MPI 10 library):
>>> ##################
>>> MPI nodes: 4, OMP threads: 1
>>> gretlmpi 2019d-git
>>>
>>> job aborted:
>>> [ranks] message
>>>
>>> [0-1] terminated
>>>
>>> [2] application aborted
>>> aborting MPI_COMM_WORLD (comm=0x44000000), error 1, comm rank 2
>>>
>>> [3] terminated
> Do you have a "hosts" or "machinefile" active, perhaps?
No, I just use 4 local physical cores.
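
(For the record, even a trivial mpi block goes through the same
4-process start-up, so a snippet like the one below should be enough
to reproduce; it is only an illustrative hansl sketch, not the actual
failing script.)

# illustrative sketch only: each rank just reports its identity;
# the process count comes from gretl's MPI settings (4 local cores
# in this case)
mpi
    printf "hello from rank %d of %d\n", $mpirank, $mpisize
end mpi
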
>> The next part of the story: I decided to test the code on Linux +
>> MPICH, so I compiled another gretl instance with binaries in, let's
>> say, '/home/marcin/gretl/2019c/bin', and:
>>
>> 1. Calling the mpi block generated an overall problem: mpiexec was
>> trying to execute gretlmpi from '/usr/local/bin' instead of
>> '/home/marcin/gretl/2019c/bin'.
> That's controlled under /Tools/Preferences/General/MPI: you can
> reset the path to mpiexec there.
Allin, mpiexec (mpiexec.mpich in fact) was found and executed, but it
tried to call '/usr/local/bin/gretlmpi' (which was linked against
OpenMPI) instead of the "local" '/home/marcin/gretl/2019c/bin/gretlmpi'.
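
In case it helps: this is roughly how the linkage can be checked (a
sketch assuming gretl's "allow shell commands" option is enabled;
otherwise the same ldd calls can simply be run from a terminal):

# sketch: show which MPI implementation each gretlmpi binary is
# linked against (requires shell commands to be enabled in gretl)
! ldd /usr/local/bin/gretlmpi | grep -i mpi
! ldd /home/marcin/gretl/2019c/bin/gretlmpi | grep -i mpi
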
Marcin
--
Marcin Błażejowski