Wednesday, 15 July 2015

c++ - MPI_Comm_spawn and MPI_Reduce
I have two programs. A "master" spawns "workers" that perform calculations; I want the master to get the results from the workers and store the sum. I am trying to use MPI_Reduce to collect the results from the workers, and the workers use MPI_Reduce to send them to the master's MPI_Comm. I am not sure if this is correct. Here are my programs:

master:

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    int world_size, universe_size, *universe_sizep, flag;
    int rc, send, recv;
    MPI_Comm everyone;   // intercommunicator

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    if (world_size != 1)
        cout << "Top heavy with management" << endl;

    MPI_Attr_get(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &universe_sizep, &flag);
    if (!flag) {
        cout << "This MPI does not support UNIVERSE_SIZE. How many processes total?";
        cout << "Enter the universe size: ";
        cin >> universe_size;
    } else {
        universe_size = *universe_sizep;
    }
    if (universe_size == 1)
        cout << "No room to start workers" << endl;

    MPI_Comm_spawn("so_worker", MPI_ARGV_NULL, universe_size - 1,
                   MPI_INFO_NULL, 0, MPI_COMM_SELF, &everyone,
                   MPI_ERRCODES_IGNORE);

    send = 0;
    rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, 0, everyone);

    // store the result in recv ...
    // other calculations here
    cout << "From the spawned workers recv: " << recv << endl;

    MPI_Finalize();
    return 0;
}

worker:

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    int rc, send, recv;
    int parent_size, parent_id, my_id, numprocs;
    MPI_Comm parent;   // parent intercommunicator

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    if (parent == MPI_COMM_NULL)
        cout << "No parent!" << endl;

    MPI_Comm_remote_size(parent, &parent_size);
    MPI_Comm_rank(parent, &parent_id);
    //cout << "Parent of size: " << size << endl;
    if (parent_size != 1)
        cout << "Something's wrong with the parent" << endl;

    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    cout << "I'm child process rank " << my_id << " of " << numprocs << endl;
    cout << "The parent process rank " << parent_id << " of " << parent_size << endl;

    // value to send
    send = 7;   // example
    recv = 0;
    rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, parent_id, parent);
    if (rc != MPI_SUCCESS)
        cout << my_id << " failure on MPI_Reduce in worker" << endl;

    MPI_Finalize();
    return 0;
}

I compiled both and executed them like this (mpic++ on OS X):

mpic++ so_worker.cpp -o so_worker
mpic++ so_master.cpp -o so_master
mpirun -n 1 so_master

Is this the right way to run a master that spawns the workers?

In the master I always get 0 from MPI_Reduce. Can I use MPI_Reduce over intercommunicators, or should I use MPI_Send from the workers and MPI_Recv in the master? I'm not sure why it's not working.

Any help would be appreciated. Thanks!

MPI_Comm_get_parent returns the parent intercommunicator that encompasses the original process and all the spawned ones. In this case, calling MPI_Comm_rank(parent, &parent_id) does not return the rank of the parent but rather the rank of the current process in the local group of the intercommunicator:

I'm child process rank 0 of 3
The parent process rank **0** of 1
I'm child process rank 1 of 3
The parent process rank **1** of 1
I'm child process rank 2 of 3
The parent process rank **2** of 1

(Observe how the highlighted values differ - one would expect the rank of the parent process to be the same in every line, shouldn't it?)

That's why the MPI_Reduce() call does not succeed: the worker processes specify different values for the root rank. Since there is only one master process, its rank in the remote group of parent would be 0, and hence all workers should specify 0 as the root in MPI_Reduce:

//
// worker code
//
rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, 0, parent);

This is only half of the problem. The other half is that rooted collective operations (e.g. MPI_Reduce) operate a bit differently with intercommunicators. One first has to decide which of the two groups hosts the root. Once the root group is identified, the root process has to pass MPI_ROOT as the value of root in MPI_Reduce, and all other processes in the root group must pass MPI_PROC_NULL, i.e. they do not take part in the rooted collective operation at all. Since the master code is written so that there is only one process in the master's group, it suffices to change the call to MPI_Reduce in the master code to:

//
// master code
//
rc = MPI_Reduce(&send, &recv, 1, MPI_INT, MPI_SUM, MPI_ROOT, everyone);

Note that the master does not participate in the reduction operation itself, e.g. the value of sendbuf (&send in this case) is irrelevant: the root is not sending data to be reduced - it merely collects the result of the reduction performed over the values from the processes in the remote group.

c++ mpi
