[section:point_to_point Point-to-Point communication]

[section:blocking Blocking communication]
As a message passing library, MPI's primary purpose is to route
messages from one process to another, i.e., point-to-point. MPI
contains routines that can send messages, receive messages, and query
whether messages are available. Each message has a source process, a
target process, a tag, and a payload containing arbitrary data. The
source and target processes are the ranks of the sender and receiver
of the message, respectively. Tags are integers that allow the
receiver to distinguish between different messages coming from the
same sender.
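
Querying for available messages is not used in the program below, but
a minimal sketch looks like this (an illustration only, assuming a
[classref boost::mpi::communicator communicator] named `world` as in
the program below): [memberref boost::mpi::communicator::probe
communicator::probe] blocks until a matching message is pending and
returns a [classref boost::mpi::status status] describing its actual
source and tag.

  // Sketch only: wait for any pending message, then receive it.
  // any_source and any_tag are wildcards matching every sender and tag.
  mpi::status s = world.probe(mpi::any_source, mpi::any_tag);
  std::string msg;
  world.recv(s.source(), s.tag(), msg);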

The following program uses two MPI processes to write "Hello, world!"
to the screen (`hello_world.cpp`):

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <string>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    if (world.rank() == 0) {
      world.send(1, 0, std::string("Hello"));
      std::string msg;
      world.recv(1, 1, msg);
      std::cout << msg << "!" << std::endl;
    } else {
      std::string msg;
      world.recv(0, 0, msg);
      std::cout << msg << ", ";
      std::cout.flush();
      world.send(0, 1, std::string("world"));
    }

    return 0;
  }

The first processor (rank 0) passes the message "Hello" to the second
processor (rank 1) using tag 0. The second processor prints the string
it receives, along with a comma, then passes the message "world" back
to processor 0 with a different tag. The first processor then writes
this message with the "!" and exits. All sends are accomplished with
the [memberref boost::mpi::communicator::send communicator::send]
method and all receives use a corresponding
[memberref boost::mpi::communicator::recv communicator::recv] call.
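
These calls are not limited to strings: any type with
Boost.Serialization support can travel as a payload. As a minimal
sketch (not part of `hello_world.cpp`), sending a `std::vector<int>`
only requires including the matching serialization header:

  // Sketch: any serializable type works as a payload.
  // Requires #include <boost/serialization/vector.hpp>.
  std::vector<int> data(10, 42);
  if (world.rank() == 0)
    world.send(1, 0, data);   // send the whole vector under tag 0
  else if (world.rank() == 1)
    world.recv(0, 0, data);   // vector is resized and filled on receipt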

[endsect:blocking]

[section:nonblocking Non-blocking communication]

The default MPI communication operations--`send` and `recv`--may have
to wait until the entire transmission is completed before they can
return. Sometimes this *blocking* behavior has a negative impact on
performance, because the sender could be performing useful computation
while it is waiting for the transmission to occur. More important,
however, are the cases where several communication operations must
occur simultaneously, e.g., a process will both send and receive at
the same time.

Let's revisit our "Hello, world!" program from the previous
[link mpi.tutorial.point_to_point.blocking section].
The core of this program transmits two messages:

  if (world.rank() == 0) {
    world.send(1, 0, std::string("Hello"));
    std::string msg;
    world.recv(1, 1, msg);
    std::cout << msg << "!" << std::endl;
  } else {
    std::string msg;
    world.recv(0, 0, msg);
    std::cout << msg << ", ";
    std::cout.flush();
    world.send(0, 1, std::string("world"));
  }

The first process passes a message to the second process, then
prepares to receive a message. The second process does the send and
receive in the opposite order. However, this sequence of events is
just that--a *sequence*--meaning that there is essentially no
parallelism. We can use non-blocking communication to ensure that the
two messages are transmitted simultaneously
(`hello_world_nonblocking.cpp`):

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <string>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    if (world.rank() == 0) {
      mpi::request reqs[2];
      std::string msg, out_msg = "Hello";
      reqs[0] = world.isend(1, 0, out_msg);
      reqs[1] = world.irecv(1, 1, msg);
      mpi::wait_all(reqs, reqs + 2);
      std::cout << msg << "!" << std::endl;
    } else {
      mpi::request reqs[2];
      std::string msg, out_msg = "world";
      reqs[0] = world.isend(0, 1, out_msg);
      reqs[1] = world.irecv(0, 0, msg);
      mpi::wait_all(reqs, reqs + 2);
      std::cout << msg << ", ";
    }

    return 0;
  }

We have replaced calls to the
[memberref boost::mpi::communicator::send communicator::send] and
[memberref boost::mpi::communicator::recv communicator::recv] members
with similar calls to their non-blocking counterparts,
[memberref boost::mpi::communicator::isend communicator::isend] and
[memberref boost::mpi::communicator::irecv communicator::irecv]. The
prefix *i* indicates that the operations return immediately with a
[classref boost::mpi::request mpi::request] object, which allows one
to query the status of a communication request (see the
[memberref boost::mpi::request::test test] method) or wait until it
has completed (see the [memberref boost::mpi::request::wait wait]
method). Multiple requests can be completed at the same time with the
[funcref boost::mpi::wait_all wait_all] operation.
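
For instance, a process can overlap communication with computation by
polling [memberref boost::mpi::request::test test] instead of blocking
in [memberref boost::mpi::request::wait wait]. The following sketch
illustrates the pattern, with `do_local_work` standing in for
hypothetical application-specific code:

  // Sketch: poll for completion while doing useful work.
  // request::test() returns an engaged optional<status> once done.
  mpi::request req = world.irecv(1, 0, msg);
  while (!req.test())
    do_local_work();   // hypothetical computation overlapped with the receive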

[important Regarding communication completion/progress:
The MPI standard requires users to keep the request
handle for a non-blocking communication, and to call the "wait"
operation (or successfully test for completion) to complete the send
or receive.
Unlike most C MPI implementations, which allow the user to
discard the request for a non-blocking send, Boost.MPI requires the
user to call "wait" or "test", since the request object might contain
temporary buffers that have to be kept until the send is
completed.
Moreover, the MPI standard does not guarantee that the
receive makes any progress before a call to "wait" or "test", although
most C MPI implementations do allow receives to progress before
the call to "wait" or "test".
Boost.MPI, on the other hand, generally
requires "test" or "wait" calls to make progress.
More specifically, Boost.MPI guarantees that calling "test" multiple
times will eventually complete the communication (this is because a
serialized communication is potentially a multi-step operation).]
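
In practice this means the request returned by `isend` must always be
retained and completed; a minimal sketch:

  // Sketch: even for sends, keep the request and complete it explicitly.
  mpi::request req = world.isend(1, 0, out_msg);
  // ... other work; out_msg should remain valid until the send completes ...
  req.wait();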

If you run this program multiple times, you may see some strange
results: namely, some runs will produce:

  Hello, world!

while others will produce:

  world!
  Hello,

or even some garbled version of the letters in "Hello" and
"world". This indicates that there is some parallelism in the program,
because after both messages are (simultaneously) transmitted, both
processes will concurrently execute their print statements. For both
performance and correctness, non-blocking communication operations are
critical to many parallel applications using MPI.

[endsect:nonblocking]

[endsect:point_to_point]