[section:communicators Communicators]

[section:managing Managing communicators]
Communication with Boost.MPI always occurs over a communicator. A
communicator contains a set of processes that can send messages among
themselves and perform collective operations. There can be many
communicators within a single program, each of which contains its own
isolated communication space that acts independently of the other
communicators.

When the MPI environment is initialized, only the "world" communicator
(called `MPI_COMM_WORLD` in the MPI C and Fortran bindings) is
available. The "world" communicator, accessed by default-constructing
a [classref boost::mpi::communicator mpi::communicator]
object, contains all of the MPI processes present when the program
begins execution. Other communicators can then be constructed by
duplicating or building subsets of the "world" communicator. For
instance, in the following program we split the processes into two
groups: one for processes generating data and the other for processes
that will collect the data. (`generate_collect.cpp`)
  #include <boost/mpi.hpp>
  #include <iostream>
  #include <cstdlib>
  #include <boost/serialization/vector.hpp>

  namespace mpi = boost::mpi;

  enum message_tags { msg_data_packet, msg_broadcast_data, msg_finished };

  void generate_data(mpi::communicator local, mpi::communicator world);
  void collect_data(mpi::communicator local, mpi::communicator world);

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    bool is_generator = world.rank() < 2 * world.size() / 3;
    mpi::communicator local = world.split(is_generator? 0 : 1);
    if (is_generator) generate_data(local, world);
    else collect_data(local, world);

    return 0;
  }
When communicators are split in this way, their processes retain
membership in both the original communicator (which is not altered by
the split) and the new communicator. However, the ranks of the
processes may be different from one communicator to the next, because
the rank values within a communicator are always contiguous values
starting at zero. In the example above, the first two thirds of the
processes become "generators" and the remaining processes become
"collectors". The ranks of the "collectors" in the `world`
communicator will be 2/3 `world.size()` and greater, whereas the ranks
of the same collector processes in the `local` communicator will start
at zero. The following excerpt from `collect_data()` (in
`generate_collect.cpp`) illustrates how to manage multiple
communicators:
  mpi::status msg = world.probe();
  if (msg.tag() == msg_data_packet) {
    // Receive the packet of data
    std::vector<int> data;
    world.recv(msg.source(), msg.tag(), data);

    // Tell each of the collectors that we'll be broadcasting some data
    for (int dest = 1; dest < local.size(); ++dest)
      local.send(dest, msg_broadcast_data, msg.source());

    // Broadcast the actual data.
    broadcast(local, data, 0);
  }
The code in this excerpt is executed by the "master" collector, e.g.,
the node with rank 2/3 `world.size()` in the `world` communicator and
rank 0 in the `local` (collector) communicator. It receives a message
from a generator via the `world` communicator, then broadcasts the
message to each of the collectors via the `local` communicator.
For more control in the creation of communicators for subgroups of
processes, the Boost.MPI [classref boost::mpi::group `group`] class provides
facilities to compute the union (`|`), intersection (`&`), and
difference (`-`) of two groups, generate arbitrary subgroups, etc.
[endsect:managing]
[section:cartesian_communicator Cartesian communicator]

A communicator can be organised as a Cartesian grid. Here is a basic example:
  #include <vector>
  #include <iostream>

  #include <boost/mpi/communicator.hpp>
  #include <boost/mpi/collectives.hpp>
  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/cartesian_communicator.hpp>

  #include <boost/test/minimal.hpp>

  namespace mpi = boost::mpi;

  int test_main(int argc, char* argv[])
  {
    mpi::environment  env;
    mpi::communicator world;

    // this example assumes exactly 24 processes (a 2 x 3 x 4 grid)
    if (world.size() != 24) return -1;

    // a 2 x 3 x 4 grid, periodic in every dimension
    mpi::cartesian_dimension dims[] = {{2, true}, {3, true}, {4, true}};
    mpi::cartesian_communicator cart(world, mpi::cartesian_topology(dims));

    // print each process's coordinates, one rank at a time
    for (int r = 0; r < cart.size(); ++r) {
      cart.barrier();
      if (r == cart.rank()) {
        std::vector<int> c = cart.coordinates(r);
        std::cout << "rk :" << r << " coords: "
                  << c[0] << ' ' << c[1] << ' ' << c[2] << '\n';
      }
    }
    return 0;
  }
[endsect:cartesian_communicator]

[endsect:communicators]