
[section:introduction Introduction]

Boost.MPI is a library for message passing in high-performance
parallel applications. A Boost.MPI program is one or more processes
that can communicate either via sending and receiving individual
messages (point-to-point communication) or by coordinating as a group
(collective communication). Unlike communication in threaded
environments or using a shared-memory library, Boost.MPI processes can
be spread across many different machines, possibly with different
operating systems and underlying architectures.

Boost.MPI is not a completely new parallel programming
library. Rather, it is a C++-friendly interface to the standard
Message Passing Interface (_MPI_), the most popular library interface
for high-performance, distributed computing. MPI defines
a library interface, available from C, Fortran, and C++, for which
there are many _MPI_implementations_. Although there exist C++
bindings for MPI, they offer little functionality over the C
bindings. The Boost.MPI library provides an alternative C++ interface
to MPI that better supports modern C++ development styles, including
complete support for user-defined data types and C++ Standard Library
types, arbitrary function objects for collective algorithms, and the
use of modern C++ library techniques to maintain maximal
efficiency.

At present, Boost.MPI supports the majority of functionality in MPI
1.1. The thin abstractions in Boost.MPI allow one to easily combine it
with calls to the underlying C MPI library. Boost.MPI currently
supports:

* Communicators: Boost.MPI supports the creation,
  destruction, cloning, and splitting of MPI communicators, along with
  manipulation of process groups.
* Point-to-point communication: Boost.MPI supports
  point-to-point communication of primitive and user-defined data
  types with send and receive operations, with blocking and
  non-blocking interfaces.
* Collective communication: Boost.MPI supports collective
  operations such as [funcref boost::mpi::reduce `reduce`]
  and [funcref boost::mpi::gather `gather`] with both
  built-in and user-defined data types and function objects.
* MPI Datatypes: Boost.MPI can build MPI data types for
  user-defined types using the _Serialization_ library.
* Separating structure from content: Boost.MPI can transfer the shape
  (or "skeleton") of complex data structures (lists, maps,
  etc.) and then separately transfer their content. This facility
  optimizes for cases where the data within a large, static data
  structure needs to be transmitted many times.
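
As a sketch of the point-to-point interface listed above, the
following minimal two-process program exchanges a `std::string` using
`communicator::send` and `communicator::recv` (the program name and
launch command are illustrative; it assumes Boost.MPI and an
underlying MPI implementation are installed):

```cpp
#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);  // initializes and finalizes MPI
    mpi::communicator world;           // wraps MPI_COMM_WORLD

    if (world.rank() == 0) {
        // Process 0 sends a std::string to process 1 with message tag 0.
        world.send(1, 0, std::string("Hello"));
    } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, 0, msg);         // blocking receive from process 0
        std::cout << msg << ", world!" << std::endl;
    }
    return 0;
}
```

Compiled against Boost.MPI and launched with two processes (for
example, `mpirun -np 2 ./hello`), process 1 prints `Hello, world!`.
Note that a `std::string` can be sent directly because Boost.MPI
serializes user-defined and Standard Library types automatically.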

Boost.MPI can be accessed either through its native C++ bindings, or
through its alternative, [link mpi.python Python interface].

[endsect:introduction]