[section:tutorial Tutorial]

A Boost.MPI program consists of many cooperating processes (possibly
running on different computers) that communicate among themselves by
passing messages. Boost.MPI is a library (as is the lower-level MPI),
not a language, so the first step in a Boost.MPI program is to create an
[classref boost::mpi::environment mpi::environment] object
that initializes the MPI environment and enables communication among
the processes. The [classref boost::mpi::environment
mpi::environment] object is initialized with the program arguments
(which it may modify) in your main program. The creation of this
object initializes MPI, and its destruction will finalize MPI. In the
vast majority of Boost.MPI programs, an instance of [classref
boost::mpi::environment mpi::environment] will be declared
in `main` at the very beginning of the program.

[warning
Declaring an [classref boost::mpi::environment mpi::environment] at global scope is undefined behavior.
[footnote According to the MPI standard, initialization must take place at the user's initiative once the main function has been called.]
]
Communication with MPI always occurs over a *communicator*,
which can be created by simply default-constructing an object of type
[classref boost::mpi::communicator mpi::communicator]. This
communicator can then be queried to determine how many processes are
running (the "size" of the communicator) and to give a unique number
to each process, from zero to one less than the size of the
communicator (i.e., the "rank" of the process):
  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <iostream>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;
    std::cout << "I am process " << world.rank() << " of " << world.size()
              << "." << std::endl;
    return 0;
  }
If you run this program with 7 processes, for instance, you will
receive output such as:

[pre
I am process 5 of 7.
I am process 0 of 7.
I am process 1 of 7.
I am process 6 of 7.
I am process 2 of 7.
I am process 4 of 7.
I am process 3 of 7.
]
Of course, the processes can execute in a different order each time,
so the ranks might not be strictly increasing. More interestingly, the
text could come out completely garbled, because one process can start
writing "I am process" before another process has finished writing
"of 7.".
If you are still using an MPI library that supports only MPI 1.1, you
will need to pass the command-line arguments to the environment
constructor, as shown in this example:
  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <iostream>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;
    std::cout << "I am process " << world.rank() << " of " << world.size()
              << "." << std::endl;
    return 0;
  }
[include point_to_point.qbk]
[include collective.qbk]
[include user_data_types.qbk]
[include communicator.qbk]
[include threading.qbk]
[include skeleton_and_content.qbk]
[section:performance_optimizations Performance optimizations]

[section:serialization_optimizations Serialization optimizations]

To obtain optimal performance for small fixed-length data types not containing
any pointers, it is very important to mark them using the type traits of
Boost.MPI and Boost.Serialization.

It was already discussed that fixed-length types containing no pointers can be
marked as MPI data types using the [classref
boost::mpi::is_mpi_datatype `is_mpi_datatype`] trait, e.g.:
  namespace boost { namespace mpi {
    template <>
    struct is_mpi_datatype<gps_position> : mpl::true_ { };
  } }
or the equivalent macro

  BOOST_IS_MPI_DATATYPE(gps_position)
In addition, it can give a substantial performance gain to turn off tracking
and versioning for these types, if no pointers to these types are used, by
using the traits classes or helper macros of Boost.Serialization:

  BOOST_CLASS_TRACKING(gps_position,track_never)
  BOOST_CLASS_IMPLEMENTATION(gps_position,object_serializable)
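
Putting these traits together, the following is a minimal sketch (assuming a
simple `gps_position` struct with three fundamental members, similar to the one
used in the Boost.Serialization documentation) of a small fixed-length type
marked with all of the above optimizations:

  #include <boost/mpi.hpp>
  #include <boost/serialization/tracking.hpp>
  #include <boost/serialization/level.hpp>

  // A hypothetical small, fixed-length type with no pointer members.
  struct gps_position
  {
    int degrees;
    int minutes;
    float seconds;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
      ar & degrees & minutes & seconds;
    }
  };

  // Treat gps_position as an MPI data type and turn off
  // Boost.Serialization tracking and versioning for it.
  BOOST_IS_MPI_DATATYPE(gps_position)
  BOOST_CLASS_TRACKING(gps_position,track_never)
  BOOST_CLASS_IMPLEMENTATION(gps_position,object_serializable)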

[endsect:serialization_optimizations]

[section:homogeneous_machines Homogeneous Machines]

More optimizations are possible on homogeneous machines by avoiding
MPI_Pack/MPI_Unpack calls and using direct bitwise copy instead. This feature
is enabled by default through the macro [macroref BOOST_MPI_HOMOGENEOUS]
defined in the include file `boost/mpi/config.hpp`.
That definition must be consistent when building Boost.MPI and
when building the application.
In addition, all classes need to be marked both as `is_mpi_datatype` and
as `is_bitwise_serializable`, by using the helper macro of Boost.Serialization:

  BOOST_IS_BITWISE_SERIALIZABLE(gps_position)

Usually it is safe to serialize a class for which `is_mpi_datatype` is true
by using a binary copy of the bits. The exceptions are classes for which
some members should be skipped during serialization.
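
Continuing the hypothetical `gps_position` example from the previous section,
the bitwise-serializable marking is simply added alongside the other traits; a
minimal sketch:

  #include <boost/mpi.hpp>
  #include <boost/serialization/is_bitwise_serializable.hpp>

  // gps_position as defined in the previous section: small,
  // fixed-length, and containing no pointers. With
  // BOOST_MPI_HOMOGENEOUS defined consistently when building both
  // Boost.MPI and the application, such types are transferred by
  // direct bitwise copy instead of MPI_Pack/MPI_Unpack.
  BOOST_IS_MPI_DATATYPE(gps_position)
  BOOST_IS_BITWISE_SERIALIZABLE(gps_position)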

[endsect:homogeneous_machines]

[endsect:performance_optimizations]

[endsect:tutorial]