This file is a top-level convenience header that includes all of the Boost.MPI library headers. Users concerned about compile time may wish to include only specific headers from the Boost.MPI library.
This header provides an STL-compliant allocator that uses the MPI-2 memory allocation facilities. The allocator class template provides a standard C++ interface to the MPI_Alloc_mem and MPI_Free_mem routines of MPI-2. It is intended to be used with the containers in the Standard Library (std::vector, in particular) in cases where the contents of the container will be transmitted directly via MPI. The allocator is also used internally by the library for character buffers that will be used in the transmission of data.

The allocator class template only performs MPI memory allocation when the underlying MPI implementation is either MPI-2 compliant or is known to provide MPI_Alloc_mem and MPI_Free_mem as extensions. When the MPI memory allocation routines are not available, allocator is brought in directly from namespace std, so that standard allocators are used throughout. The macro BOOST_MPI_HAS_MEMORY_ALLOCATION is defined when the MPI-2 memory allocation facilities are available.

allocator<T> provides the usual allocator member types: size_type (std::size_t) holds the size of objects; difference_type (std::ptrdiff_t) holds the number of elements between two pointers; pointer (T*), const_pointer (const T*), reference (T&), const_reference (const T&), and value_type (T) follow the standard conventions; and the rebind member template retrieves the type of an allocator similar to this allocator but for a different value type, allocator<U>. Its member functions follow the standard allocator interface: address(x) returns the address of object x (overloaded for reference and const_reference); allocate(n, hint = 0) allocates enough memory for n elements of type T and returns a pointer to the newly allocated memory; deallocate(p, n) deallocates memory referred to by the pointer p, which shall have been returned from allocate() and not already freed; max_size() returns the maximum number of elements that can be allocated with allocate(); construct(p, val) constructs a copy of val at the location referenced by p; and destroy(p) destroys the object referenced by p. The class provides a default constructor, a copy constructor, a converting copy constructor from allocator<U> for a different value type, and a destructor.

A specialization allocator<void> is provided; it is useful only for rebinding to another, different value type. Free operators compare two allocators: since MPI allocators have no state, all MPI allocators are equal, so operator== always returns true and operator!= always returns false.
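A minimal sketch of how this allocator is typically combined with a standard container; the buffer size, tag, and two-process layout are illustrative assumptions, not part of the header's documentation:

    #include <boost/mpi.hpp>
    #include <boost/mpi/allocator.hpp>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;   // assumes at least two processes

      // Storage comes from MPI_Alloc_mem when BOOST_MPI_HAS_MEMORY_ALLOCATION
      // is defined; otherwise this is an ordinary standard allocator.
      std::vector<double, mpi::allocator<double> > buffer(100, 1.0);

      if (world.rank() == 0)
        world.send(1, 0, &buffer[0], buffer.size());
      else if (world.rank() == 1)
        world.recv(0, 0, &buffer[0], buffer.size());
      return 0;
    }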
This header defines facilities to support MPI communicators with Cartesian topologies. If known at compile time, the dimension of the implied grid can be statically enforced through the templatized communicator class; otherwise, a non-template, dynamic base class is provided.

cartesian_communicator, derived from boost::mpi::communicator, is an MPI communicator with a Cartesian topology: a communicator whose topology is expressed as a grid. Cartesian communicators have the same functionality as (intra)communicators, but also allow one to query the relationships among processes and the properties of the grid. Its members include: ndims(), which retrieves the number of dimensions of the underlying topology; rank(coords), which returns the rank of the process at the given coordinates, where the size of the coordinate vector must match the communicator's topology; shifted_ranks(dim, disp), which returns, as a std::pair<int, int>, the ranks of the source and destination processes for a shift, where dim is the dimension in which the shift takes place (0 <= dim < ndims()) and disp is the shift displacement, which can be positive (upward) or negative (downward); coordinates(rank), which provides the coordinates of the process with the given rank; topology(topo, coords), which retrieves both the topology of the grid and the coordinates of this process within it; topology(), which retrieves just the topology of the grid; and rank(), which determines the rank of the executing process in the communicator, equivalent to MPI_Comm_rank, returning a value in [0, size()).

One constructor builds a new Boost.MPI Cartesian communicator based on an MPI communicator comm with Cartesian topology, together with a comm_create_kind parameter. comm may be any valid MPI communicator. If comm is MPI_COMM_NULL, an empty communicator (that cannot be used for communication) is created and the kind parameter is ignored. Otherwise, the kind parameter determines how the Boost.MPI communicator will be related to comm: if kind is comm_duplicate, comm is duplicated to create a new communicator, which will be freed when the Boost.MPI communicator (and all copies of it) is destroyed; this option is only permitted if the underlying MPI implementation supports MPI 2.0, since duplication of intercommunicators is not available in MPI 1.x. If kind is comm_take_ownership, ownership of comm is taken, and it will be freed automatically when all of the Boost.MPI communicators go out of scope. If kind is comm_attach, the Boost.MPI communicator references the existing MPI communicator comm but will not free it when the Boost.MPI communicator goes out of scope; this option should only be used when the communicator is managed by the user.

Another constructor creates a new communicator whose topology is described by a given Cartesian grid. Its parameters are: the communicator that the new Cartesian communicator will be based on; the Cartesian dimensions of the new communicator, whose size indicates the number of dimensions (some dimensions may be set to zero, in which case the corresponding dimension value is left to the system); and a reorder flag (default false) stating whether MPI is permitted to re-order the process ranks within the returned communicator to better optimize communication (if false, the rank of each process in the returned communicator matches precisely its rank within the original communicator). The indices of the vertices in the grid are assumed to be the ranks of the processes within the communicator.
There may be fewer vertices in the grid than there are processes in the communicator; in this case, the resulting communicator will be a null communicator. A further constructor creates a new Cartesian communicator whose topology is a subset of an existing Cartesian communicator; it takes the original communicator and an array containing the dimensions to keep from the existing communicator.

cartesian_dimension is a lightweight POD object that specifies the size and periodicity of the grid in a single dimension: an int size (the size of the grid in this dimension, default 0) and a bool periodic flag (whether the grid is periodic in this dimension, default false). It supports serialization via a serialize(Archive&, unsigned int) member.

cartesian_topology, built on std::vector<cartesian_dimension>, describes the topology of a Cartesian grid. It behaves mostly like a sequence of cartesian_dimension, with the notable exception that its size is fixed. It is a lightweight object, so any constructor that could be considered missing can be replaced with a function (a move constructor is provided when supported). Its members include stl(), which exports the topology as an STL sequence (const and non-const variants), and split(dims, periodics), which splits the topology into two sequences of sizes and periodicities; copy and move constructors and assignment operators are provided. Constructors accept: an int, creating an N-dimensional space in which each dimension is initialized as non-periodic with size 0; a std::vector<cartesian_dimension> providing the initial values; a sequence container of dimension specifications; an initializer list or array of cartesian_dimension values, which can be of the form {dim_1, false}, ..., {dim_n, true}; a pair of input ranges, one of dimensions (values must convert to integers) and one of periodicities (values must convert to booleans), which need not be the same size (missing values are completed with zero sizes and assumed non-periodic); and an iterator-based initializer taking a dimension iterator, a periodicity iterator (both may be single-pass), and a count n, using the first n iterated values.

Free functions test two cartesian_dimension values for equality and inequality, pretty-print a cartesian_dimension as (size, periodic) to a std::ostream, test two cartesian_topology objects for equality and inequality, and pretty-print a cartesian_topology. Finally, cartesian_dimensions takes a number of MPI processes and a partially filled sequence of positive or null dimensions (non-zero dimensions are left untouched) and tries to complete the dimension sequence.
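A brief sketch of creating a periodic 2-D grid and querying it; the 2x2 layout (exactly four processes) and the shift direction are illustrative assumptions:

    #include <boost/mpi.hpp>
    #include <boost/mpi/cartesian_communicator.hpp>
    #include <iostream>
    #include <utility>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;   // assumes exactly 4 processes

      // 2x2 grid, periodic in the first dimension only.
      mpi::cartesian_topology topo({ {2, true}, {2, false} });
      mpi::cartesian_communicator grid(world, topo);

      std::vector<int> coords = grid.coordinates(grid.rank());
      // Ranks of our neighbors for a shift of +1 along dimension 0.
      std::pair<int, int> neighbors = grid.shifted_ranks(0, 1);
      std::cout << "rank " << grid.rank() << " at (" << coords[0] << ","
                << coords[1] << ") sends to " << neighbors.second << "\n";
      return 0;
    }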
This header contains the MPI collective operations, which implement various parallel algorithms that require the coordination of all processes within a communicator. The header collectives_fwd.hpp provides forward declarations for each of these operations. To include only specific collective algorithms, use the headers boost/mpi/collectives/algorithm_name.hpp.

all_gather gathers the values stored at every process into vectors of values from each process. It is a collective algorithm that collects the values stored at each process into a vector of values indexed by the process number they came from. The type T of the values may be any type that is serializable or has an associated MPI data type; when T has an associated MPI data type, this routine invokes MPI_Allgather to gather the values. Its parameters are: the communicator over which the all-gather will occur; the value to be transmitted by each process (to gather an array of values, in_values points to the n local values to be transmitted); and a vector or pointer to storage that will be populated with the values from each process, indexed by the process ID number (if it is a vector, it will be resized accordingly). Overloads are provided for pointer output and for the array variants taking n input values.

all_reduce combines the values stored by each process into a single value available to all processes. The values are combined in a user-defined way, specified via a function object. The type T of the values may be any type that is serializable or has an associated MPI data type. One can think of this operation as an all_gather, followed by an std::accumulate() over the gathered values using the operation op. When the type T has an associated MPI data type, this routine invokes MPI_Allreduce to perform the reduction; if possible, built-in MPI operations will be used, and otherwise all_reduce() will create a custom MPI_Op for the call to MPI_Allreduce. Its parameters are: the communicator over which the reduction will occur; the local value to be combined with the local values of every other process (for reducing arrays, in_values is a pointer to the local values to be reduced and n is the number of values to reduce; see reduce for more information); n, indicating the size of the buffers for the array variants; out_value, which will receive the result of the reduction operation (if this parameter is omitted, the outgoing value is instead returned); and op, the binary operation that combines two values of type T and returns a third value of type T. For types T that have associated MPI data types, op will either be translated into an MPI_Op (via MPI_Op_create) or, if possible, mapped directly to a built-in MPI operation; see is_mpi_op in the operations.hpp header for more details on this mapping. For any non-built-in operation, commutativity is determined by the is_commutative trait (also in operations.hpp): users are encouraged to mark commutative operations as such, because it gives the implementation additional latitude to optimize the reduction operation. If the input is wrapped in an inplace_t object, the input and output buffers are combined and the local value is overwritten (a convenience function inplace is provided for the wrapping). If no out_value parameter is supplied, the result of the reduction operation is returned.
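For instance, a global sum can be computed with all_reduce; this minimal sketch assumes each process contributes its own rank:

    #include <boost/mpi.hpp>
    #include <functional>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      int sum = 0;
      // Every process receives the sum of all ranks.
      mpi::all_reduce(world, world.rank(), sum, std::plus<int>());
      std::cout << "rank " << world.rank() << ": sum = " << sum << "\n";
      return 0;
    }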
all_to_all sends data from every process to every other process. It is a collective algorithm that transmits p values from every process to every other process: on process i, the jth value of the in_values vector is sent to process j and placed in the ith position of the out_values vector on process j. The type T of the values may be any type that is serializable or has an associated MPI data type; if n is provided, arrays of n values are transferred from one process to another. When the type T has an associated MPI data type, this routine invokes MPI_Alltoall to scatter the values. Its parameters are: the communicator over which the all-to-all communication will occur; a vector or pointer to storage containing the values to send to each process, indexed by the process ID number; and a vector or pointer to storage that will be updated to contain the values received from other processes, where the jth value in out_values comes from the process with rank j. Overloads are provided for pointer input and output and for the array variants taking n.

broadcast broadcasts a value from a root process to all other processes. It is a collective algorithm that transfers a value from an arbitrary root process to every other process that is part of the given communicator. The broadcast algorithm can transmit any Serializable value, values that have associated MPI data types, packed archives, skeletons, and the content of skeletons; see the send primitive for communicators for a complete list. The type T shall be the same for all processes that are a part of the communicator comm, unless packed archives are being transferred: with packed archives, the root sends a packed_oarchive or packed_skeleton_oarchive whereas the other processes receive a packed_iarchive or packed_skeleton_iarchive, respectively. When the type T has an associated MPI data type, this routine invokes MPI_Bcast to perform the broadcast. Its parameters are: the communicator over which the broadcast will occur; the value (or values, if n is provided) to be transmitted (if the rank of comm is equal to root) or received (if the rank of comm is not equal to root), where, when the value is a skeleton_proxy, only the skeleton of the object is broadcast (in this case, the root builds a skeleton from the object held in the proxy and all of the non-roots reshape the objects held in their proxies based on the skeleton sent from the root); and the rank/process ID of the process that will be transmitting the value. Overloads are provided for arrays (T*, n) and for skeleton_proxy objects.
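A small sketch of broadcasting a serializable value from rank 0; the message content is an illustrative assumption:

    #include <boost/mpi.hpp>
    #include <boost/serialization/string.hpp>
    #include <iostream>
    #include <string>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::string message;
      if (world.rank() == 0)
        message = "hello from the root";
      // After the call, every process holds the root's value.
      mpi::broadcast(world, message, 0);
      std::cout << "rank " << world.rank() << ": " << message << "\n";
      return 0;
    }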
gather gathers the values stored at every process into a vector at the root process. It is a collective algorithm that collects the values stored at each process into a vector of values at the root process, indexed by the process number that the value came from. The type T of the values may be any type that is serializable or has an associated MPI data type; when T has an associated MPI data type, this routine invokes MPI_Gather to gather the values. Its parameters are: the communicator over which the gather will occur; the value to be transmitted by each process (for gathering arrays of values, in_values points to storage for n*comm.size() values); a vector or pointer to storage that will be populated with the values from each process, indexed by the process ID number (if it is a vector, it will be resized accordingly; for non-root processes this parameter may be omitted, and if it is still provided, it is left unchanged); and the process ID number that will collect the values, which must be the same on all processes. Overloads cover single values and arrays, with vector or pointer output, and variants that omit the output parameter on non-root processes.

gatherv is similar to boost::mpi::gather, with the difference that the number of values sent by each non-root process can vary. Its parameters are: the communicator over which the gather will occur; the array of values to be transmitted by each process; a pointer to storage that will be populated with the values from each process (for non-root processes this parameter may be omitted, and if provided it is left unchanged); a vector containing the number of elements each non-root process will send; a vector whose i-th entry specifies the displacement (relative to out_values) at which to place the incoming data at the root process (overloaded versions that omit displs assume the data is to be placed contiguously at the root); and the process ID number that will collect the values, which must be the same on all processes.
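As a sketch of gather, collecting each process's rank at the root:

    #include <boost/mpi.hpp>
    #include <iostream>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::vector<int> ranks;
      mpi::gather(world, world.rank(), ranks, 0);  // filled only at the root
      if (world.rank() == 0)
        for (std::size_t i = 0; i < ranks.size(); ++i)
          std::cout << "process " << i << " sent " << ranks[i] << "\n";
      return 0;
    }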
scatter scatters the values stored at the root to all processes within the communicator. It is a collective algorithm that scatters the values stored in the root process (inside a vector) to all of the processes in the communicator. The vector in_values (significant only at the root) is indexed by the process number to which the corresponding value will be sent. The type T of the values may be any type that is serializable or has an associated MPI data type; when T has an associated MPI data type, this routine invokes MPI_Scatter to scatter the values. Its parameters are: the communicator over which the scatter will occur; a vector or pointer to storage containing the values to send to each process, indexed by the process rank (for non-root processes this parameter may be omitted, and if it is still provided, it is left unchanged); the value received by each process (when scattering an array of values, out_values points to the n values that will be received by each process); and the process ID number that will scatter the values, which must be the same on all processes. Overloads cover single values and arrays, with vector or pointer input, and variants that omit the input parameter on non-root processes.

scatterv is similar to boost::mpi::scatter, with the difference that the number of values stored at the root process does not need to be a multiple of the communicator's size. Its parameters are: the communicator over which the scatter will occur; a vector or pointer to storage containing the values to send to each process, indexed by the process rank (for non-root processes this parameter may be omitted, and if provided it is left unchanged); a vector containing the number of elements each non-root process will receive; a vector whose i-th entry specifies the displacement (relative to in_values) from which to take the outgoing data for process i (overloaded versions that omit displs assume the data is contiguous at the root process); the array of values received by each process; out_size, which for each non-root process contains the size of out_values; and the process ID number that will scatter the values, which must be the same on all processes.
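A sketch of scattering one value to each process from the root; the payload values are illustrative:

    #include <boost/mpi.hpp>
    #include <iostream>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      std::vector<int> payload;
      if (world.rank() == 0) {
        payload.resize(world.size());
        for (int i = 0; i < world.size(); ++i)
          payload[i] = 100 + i;                  // value destined for rank i
      }
      int mine = 0;
      mpi::scatter(world, payload, mine, 0);     // each process gets one value
      std::cout << "rank " << world.rank() << " received " << mine << "\n";
      return 0;
    }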
reduce combines the values stored by each process into a single value at the root. It is a collective algorithm whose values can be combined arbitrarily, specified via a function object. The type T of the values may be any type that is serializable or has an associated MPI data type. One can think of this operation as a gather to the root, followed by an std::accumulate() over the gathered values using the operation op. When the type T has an associated MPI data type, this routine invokes MPI_Reduce to perform the reduction; if possible, built-in MPI operations will be used, and otherwise reduce() will create a custom MPI_Op for the call to MPI_Reduce. Its parameters are: the communicator over which the reduction will occur; the local value to be combined with the local values of every other process (for reducing arrays, in_values contains a pointer to the local values and n is the number of values; reduction occurs independently for each of the n values referenced by in_values, so calling reduce on an array of n values is like calling reduce n separate times, one for each location in in_values and out_values); out_value, which receives the result of the reduction operation, but only at the root (non-root processes may omit this parameter; if they choose to supply it, it is left unchanged; for reducing arrays, out_values contains a pointer to the storage for the output values); op, the binary operation that combines two values of type T into a third value of type T (for types T that have associated MPI data types, op will either be translated into an MPI_Op via MPI_Op_create or, if possible, mapped directly to a built-in MPI operation; see is_mpi_op in the operations.hpp header for more details on this mapping; for any non-built-in operation, commutativity is determined by the is_commutative trait, also in operations.hpp, and users are encouraged to mark commutative operations as such, because it gives the implementation additional latitude to optimize the reduction operation); and the process ID number that will receive the final, combined value, which must be the same on all processes.

scan computes a prefix reduction of values from all processes in the communicator. It is a collective algorithm that combines the values stored by each process with the values of all processes with a smaller rank; the values can be combined arbitrarily, specified via a function object op. The type T of the values may be any type that is serializable or has an associated MPI data type. One can think of this operation as a gather to some process, followed by an std::partial_sum() over the gathered values using the operation op, where the ith process returns the ith value emitted by std::partial_sum(). When the type T has an associated MPI data type, this routine invokes MPI_Scan to perform the reduction; if possible, built-in MPI operations will be used, and otherwise scan() will create a custom MPI_Op for the call to MPI_Scan. Its parameters are: the communicator over which the prefix reduction will occur; the local value to be combined with the local values of other processes (for the array variant, in_values points to the n local values that will be combined); out_value, where, if provided, the ith process receives the value op(in_value[0], op(in_value[1], op(..., in_value[i]) ... )) (for the array variant, out_values points to storage for the n output values; the prefix reduction occurs independently for each of the n values referenced by in_values, so calling scan on an array of n values is like calling scan n separate times, one for each location in in_values and out_values); and op, the binary operation described above for reduce. If no out_value parameter is provided, the result of the prefix reduction is returned.
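As a sketch of scan, a prefix sum over ranks, where process i ends up with the sum of ranks 0 through i:

    #include <boost/mpi.hpp>
    #include <functional>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      int prefix = 0;
      // Process i receives rank(0) + rank(1) + ... + rank(i).
      mpi::scan(world, world.rank(), prefix, std::plus<int>());
      std::cout << "rank " << world.rank() << ": prefix sum = " << prefix << "\n";
      return 0;
    }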
This header provides forward declarations for all of the collective operations contained in the header collectives.hpp.
This header defines the communicator class, which is the basis of all communication within Boost.MPI, and provides point-to-point communication operations.

The communicator class permits communication and synchronization among a set of processes. It abstracts a set of communicating processes in MPI: all of the processes that belong to a certain communicator can determine the size of the communicator, their rank within the communicator, and communicate with any other process in the communicator.

rank() determines the rank of the executing process in a communicator; this routine is equivalent to MPI_Comm_rank and returns a value in [0, size()). size() determines the number of processes in a communicator and is equivalent to MPI_Comm_size. group() constructs a new boost::mpi::group whose members are the processes within this communicator, equivalent to calling MPI_Comm_group.

send(dest, tag, value) sends data to another process. This routine executes a potentially blocking send with tag tag to the process with rank dest; it can be received by the destination process with a matching recv call. Its parameters are: the rank of the remote process to which the data will be sent; the tag that will be associated with this message (tags may be any integer between zero and an implementation-defined upper limit, accessible via environment::max_tag()); and the value that will be transmitted to the receiver, whose type T must meet the criteria below.

The given value must be suitable for transmission over MPI. There are several classes of types that meet these requirements. Types with mappings to MPI data types: if is_mpi_datatype<T> is convertible to mpl::true_, then value will be transmitted using the MPI data type get_mpi_datatype<T>(). All primitive C++ data types that have MPI equivalents, e.g., int, float, char, double, etc., have built-in mappings to MPI data types, and you may turn a Serializable type with fixed structure into an MPI data type by specializing is_mpi_datatype for your type. Serializable types: any type that provides the serialize() functionality required by the Boost.Serialization library can be transmitted and received. Packed archives and skeletons: data that has been packed into an mpi::packed_oarchive, or the skeletons of data that have been packed into an mpi::packed_skeleton_oarchive, can be transmitted, but will be received as mpi::packed_iarchive and mpi::packed_skeleton_iarchive, respectively, to allow the values (or skeletons) to be extracted by the destination process. Content: content associated with a previously-transmitted skeleton can be transmitted by send and received by recv; the receiving process may only receive content into the content of a value that has been constructed with the matching skeleton.

For types that have mappings to an MPI data type (including the content of a type), an invocation of this routine results in a single MPI_Send call. For variable-length data, e.g., serialized types and packed archives, two messages will be sent via MPI_Send: one containing the length of the data and a second containing the data itself. std::vectors of an MPI data type are considered variable-size: their number of elements is unknown and must be transmitted (although the serialization process is skipped).
You can use the array-specialized versions of the communication methods if both sender and receiver know the vector size. Note that the transmission mode for variable-length data is an implementation detail that is subject to change. An overload of send is provided for std::vector<T, A>.

send(dest, tag, proxy) sends the skeleton of an object. This routine executes a potentially blocking send with tag tag to the process with rank dest; it can be received by the destination process with a matching recv call. This variation on send is used when a send of a skeleton is explicitly requested via code such as comm.send(dest, tag, skeleton(object)); its semantics are equivalent to sending a packed_skeleton_oarchive storing the skeleton of the object. Its parameters are the destination rank, the message tag, and the skeleton_proxy containing a reference to the object whose skeleton will be transmitted.

send(dest, tag, values, n) sends an array of values to another process. This routine executes a potentially blocking send of an array of data with tag tag to the process with rank dest; it can be received by the destination process with a matching array recv call. If T is an MPI datatype, an invocation of this routine is mapped to a single call to MPI_Send, using the datatype get_mpi_datatype<T>(). The type T of the values must be mapped to an MPI data type, and the destination process must call receive with at least n elements to correctly receive the message.

send(dest, tag) sends a message to another process without any data. This routine executes a potentially blocking send of a message to another process; the message contains no extra data, and can therefore only be received by a matching call to recv().
recv(source, tag, value) receives data from a remote process, blocking until it receives a message from the process source with the given tag, and returns a status object with information about the received message. The type T of the value must be suitable for transmission over MPI, which includes serializable types, types that can be mapped to MPI data types (including most built-in C++ types), packed MPI archives, skeletons, and content associated with skeletons; see the documentation of send for a complete description. Its parameters are: the process that will be sending data, which is either a process rank within the communicator or the constant any_source, indicating that the message may be received from any process; the tag that matches a particular kind of message sent by the source process, which may be any tag value permitted by send or the constant any_tag, indicating that this receive matches a message with any tag; and value, which will contain the value of the message after a successful receive. The type of this value must match the value transmitted by the sender, unless the sender transmitted a packed archive or skeleton: in these cases, the sender transmits a packed_oarchive or packed_skeleton_oarchive and the destination receives a packed_iarchive or packed_skeleton_iarchive, respectively. An overload is provided for std::vector<T, A>.

Two further overloads, taking a const or non-const skeleton_proxy<T>&, receive a skeleton from a remote process: each blocks until it receives a message from the process source with the given tag containing a skeleton, reshapes the object referenced by the proxy to match the received skeleton, and returns information about the received message.

recv(source, tag, values, n) receives an array of values from a remote process, blocking until the array arrives from the process source with the given tag. The values parameter will contain the values in the message after a successful receive, and the type of these elements must match the type of the elements transmitted by the sender; n is the number of values that can be stored into the values array, which shall not be smaller than the number of elements transmitted by the sender. The routine returns information about the received message and throws std::range_error if the message to be received contains more than n values.

recv(source, tag) receives a message from a remote process without any data, blocking until it receives a matching message from the process source with the given tag, and returns information about the received message.
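A minimal sketch of a blocking point-to-point exchange between ranks 0 and 1; the tag value and message content are illustrative assumptions:

    #include <boost/mpi.hpp>
    #include <boost/serialization/string.hpp>
    #include <iostream>
    #include <string>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;   // assumes at least two processes

      if (world.rank() == 0) {
        world.send(1, 17, std::string("ping"));
      } else if (world.rank() == 1) {
        std::string msg;
        mpi::status s = world.recv(mpi::any_source, 17, msg);
        std::cout << "got \"" << msg << "\" from rank " << s.source() << "\n";
      }
      return 0;
    }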
sendrecv sends a message to a remote process and receives another message from another process, returning information about the received message.

isend(dest, tag, value) sends a message to a remote process without blocking. The isend method is functionally identical to the send method and transmits data in the same way, except that isend will not block while waiting for the data to be transmitted; instead, a request object describing the communication is immediately returned, allowing one to query the status of the communication or wait until it has completed. Its parameters match those of send: the rank of the remote process, the tag (any integer between zero and the limit accessible via environment::max_tag()), and the value to be transmitted, whose type T must meet the aforementioned criteria for transmission. If the value is modified before it is transmitted, the modification may or may not be transmitted.

isend(dest, tag, proxy) sends the skeleton of an object without blocking. This routine is functionally identical to the send method for skeleton_proxy objects, except that it returns a request object immediately rather than blocking; the semantics are equivalent to a non-blocking send of a packed_skeleton_oarchive storing the skeleton of the object.

isend(dest, tag, values, n) sends an array of values to another process without blocking; it is functionally identical to the send method for arrays except that it immediately returns a request object. The type T of the values must be mapped to an MPI data type, and the destination process must call receive with at least n elements to correctly receive the message. An overload is provided for std::vector<T, A>.

isend(dest, tag) sends a message with no data to another process without blocking; it is functionally identical to the no-data send except that it does not block while waiting for the message to be transmitted.
In each case, a request object describing the communication is immediately returned.

irecv(source, tag, value) prepares to receive a message from a remote process. The irecv method is functionally identical to the recv method and receives data in the same way, except that irecv will not block while waiting for data to be transmitted; instead, it immediately returns a request object that allows one to query the status of the receive or wait until it has completed. As with recv, source is either a process rank within the communicator or the constant any_source, tag is any tag value permitted by send or the constant any_tag, and value will contain the value of the message after a successful receive, with the same rules for packed archives and skeletons as recv.

irecv(source, tag, values, n) initiates receipt of an array of values transmitted by process source with the given tag; the type of the elements must match the type transmitted by the sender, and n is the number of values that can be stored into the values array, which shall not be smaller than the number of elements transmitted. An overload is provided for std::vector<T, A>. irecv(source, tag) initiates receipt of a message from process source with the given tag that carries no data. Each of these routines returns a request object that describes the communication.

probe(source = any_source, tag = any_tag) waits until a message is available to be received. This operation waits until a message matching (source, tag) is available to be received, then returns information about the first message that matches the given criteria as a status object. If source is any_source, the message returned may come from any source; if tag is any_tag, the message returned may have any tag. The functionality is equivalent to MPI_Probe. To check whether a message is available without blocking, use iprobe.
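Returning to the non-blocking operations, a sketch of overlapping an isend and an irecv around a ring, then waiting on both requests (the ring pattern is an illustrative assumption):

    #include <boost/mpi.hpp>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      int right = (world.rank() + 1) % world.size();
      int left  = (world.rank() + world.size() - 1) % world.size();

      int outgoing = world.rank(), incoming = -1;
      mpi::request reqs[2];
      reqs[0] = world.isend(right, 0, outgoing);   // returns immediately
      reqs[1] = world.irecv(left, 0, incoming);
      mpi::wait_all(reqs, reqs + 2);               // block until both complete
      std::cout << "rank " << world.rank() << " heard from " << incoming << "\n";
      return 0;
    }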
iprobe(source = any_source, tag = any_tag) determines whether a message is available to be received. This operation determines if a message matching (source, tag) is available to be received; if so, it returns information about that message as an optional<status>, and otherwise it returns immediately with an empty optional. The functionality is equivalent to MPI_Iprobe. To wait until a message is available, use probe.

barrier() waits for all processes within a communicator to reach the barrier. This routine is a collective operation that blocks each process until all processes have entered it, then releases all of the processes "simultaneously". It is equivalent to MPI_Barrier.

The communicator evaluates true in a boolean context if it is valid for communication, i.e., does not represent MPI_COMM_NULL; otherwise it evaluates false. A conversion operator permits the implicit conversion from a Boost.MPI communicator to the associated MPI communicator (MPI_Comm).

split(color) and split(color, key) split the communicator into multiple, disjoint communicators, each of which is based on a particular color. This is a collective operation that returns a new communicator that is a subgroup of this one, containing all of the processes in this communicator that have the same color. All processes with the same color value are placed into the same group; the key value is used to determine the ordering of processes with the same color in the resulting communicator (if omitted, the rank of each process in this communicator determines the ordering of processes in the resulting group).

as_intercommunicator() determines whether the communicator is in fact an intercommunicator and, if so, returns it: the result is an optional containing the intercommunicator, or an empty optional otherwise. as_graph_communicator() determines whether the communicator has a graph topology and, if so, returns the graph_communicator; even though the communicators have different types, they refer to the same underlying communication space and can be used interchangeably for communication. has_graph_topology() determines whether this communicator has a graph topology. as_cartesian_communicator() likewise returns an optional containing the cartesian_communicator if this communicator has a Cartesian topology, and an empty optional otherwise.
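For example, split can carve a communicator into smaller groups; the pairing of ranks here is an illustrative assumption:

    #include <boost/mpi.hpp>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      // Processes with the same color end up in the same sub-communicator.
      int color = world.rank() / 2;          // pairs of ranks per group
      mpi::communicator row = world.split(color);
      std::cout << "world rank " << world.rank()
                << " -> group " << color << ", local rank " << row.rank() << "\n";
      return 0;
    }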
has_cartesian_topology() determines whether this communicator has a Cartesian topology.

abort(errcode) makes a "best attempt" to abort all of the tasks in the group of this communicator. Depending on the underlying MPI implementation, this may either abort the entire program (and possibly return errcode to the environment) or only abort some processes, allowing the others to continue; consult the documentation for your MPI implementation. This is equivalent to a call to MPI_Abort and will not return. The errcode parameter is the error code to return from aborted processes.

The default constructor builds a new Boost.MPI communicator for MPI_COMM_WORLD; it is the equivalent of constructing with (MPI_COMM_WORLD, comm_attach). A second constructor builds a new Boost.MPI communicator based on an MPI communicator comm and a comm_create_kind. comm may be any valid MPI communicator. If comm is MPI_COMM_NULL, an empty communicator (that cannot be used for communication) is created and the kind parameter is ignored. Otherwise, the kind parameter determines how the Boost.MPI communicator will be related to comm: if kind is comm_duplicate, comm is duplicated to create a new communicator, which is freed when the Boost.MPI communicator (and all copies of it) is destroyed; this option is only permitted if comm is a valid MPI intracommunicator or if the underlying MPI implementation supports MPI 2.0 (which supports duplication of intercommunicators). If kind is comm_take_ownership, ownership of comm is taken, and it is freed automatically when all of the Boost.MPI communicators go out of scope; this option must not be used when comm is MPI_COMM_WORLD. If kind is comm_attach, the Boost.MPI communicator references the existing MPI communicator comm but does not free it when the Boost.MPI communicator goes out of scope; this option should only be used when the communicator is managed by the user or the MPI library (e.g., MPI_COMM_WORLD). A third constructor builds a new Boost.MPI communicator based on a subgroup of another MPI communicator: given a communicator comm and a boost::mpi::group subgroup, it constructs a new communicator containing all of the processes from comm that are listed within subgroup, equivalent to MPI_Comm_create. A set of internal overloads of the vector send/receive operations, dispatched on mpl::true_ or mpl::false_, implements the distinction between MPI data types and serialized types.

The comm_create_kind enumeration describes how to adopt a C MPI_Comm into a Boost.MPI communicator; its values determine how a Boost.MPI communicator behaves when constructed with an MPI communicator. comm_duplicate: duplicate the MPI_Comm communicator to create a new communicator (e.g., with MPI_Comm_dup); this new MPI_Comm communicator is automatically freed when the Boost.MPI communicator (and all copies of it) is destroyed. comm_take_ownership: take ownership of the communicator; it is freed automatically when all of the Boost.MPI communicators go out of scope; this option must not be used with MPI_COMM_WORLD.
comm_attach: the Boost.MPI communicator will reference the existing MPI communicator but will not free it when the Boost.MPI communicator goes out of scope; this option should only be used when the communicator is managed by the user or the MPI library (e.g., MPI_COMM_WORLD).

The constant any_source represents "any process": it may be used as the source parameter of receive operations to indicate that a message may be received from any source. The constant any_tag represents "any tag": it may be used as the tag parameter of receive operations to indicate that a send with any tag will be matched by the receive.

operator== determines whether two communicators are identical; it is equivalent to calling MPI_Comm_compare and checking whether the result is MPI_IDENT, and returns true when the two communicators refer to the same underlying MPI communicator. operator!= determines whether two communicators are different: !(comm1 == comm2).
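As an illustration of comm_create_kind, a raw MPI_Comm produced by C-level MPI code can be wrapped without transferring ownership; using MPI_COMM_WORLD as the raw handle here is purely for the sake of the sketch:

    #include <boost/mpi.hpp>
    #include <iostream>
    #include <mpi.h>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);

      MPI_Comm raw = MPI_COMM_WORLD;   // handle owned by the MPI library
      // comm_attach: wrap the existing handle; Boost.MPI will not free it.
      mpi::communicator comm(raw, mpi::comm_attach);
      // comm_duplicate would instead create (and later free) a duplicate.

      if (comm.rank() == 0)
        std::cout << "attached communicator spans " << comm.size()
                  << " processes\n";
      return 0;
    }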
This header provides MPI configuration details that expose the capabilities of the underlying MPI implementation, and provides auto-linking support on Windows.

BOOST_MPI_HOMOGENEOUS: comment out this macro if you are running in a heterogeneous environment. When this flag is enabled, we assume that some simple, POD-like types can be transmitted without paying the cost of portable serialization. Comment it out if your platform is not homogeneous and portable serialization/deserialization must be performed; if you do so, check that your MPI implementation supports that kind of environment.

BOOST_MPI_VERSION: major version of the standard supported by the underlying MPI implementation. If, for some reason, MPI_VERSION is not available, you should probably set this according to your MPI documentation.

BOOST_MPI_HAS_MEMORY_ALLOCATION: determines whether the MPI implementation has support for memory allocation. This macro is defined when the underlying MPI implementation supports the MPI-2 memory allocation routines MPI_Alloc_mem and MPI_Free_mem. When defined, the allocator class template provides Standard Library-compliant access to these memory-allocation routines.

BOOST_MPI_HAS_NOARG_INITIALIZATION: determines whether the MPI implementation supports initialization without command-line arguments. This macro is defined when the underlying implementation supports initialization of MPI without passing along command-line arguments, e.g., MPI_Init(NULL, NULL). When defined, the environment class provides a default constructor. This macro is always defined for MPI-2 implementations.

BOOST_MPI_CALLING_CONVENTION: specifies the calling convention that will be used for callbacks from the underlying C MPI. This is a Windows-specific macro, used internally to state the calling convention of any function that is to be used as a callback from MPI, for example the internally-defined functions that are used in a call to MPI_Op_create. It is likely only to be useful to users who wish to bypass Boost.MPI and register their own callbacks in certain cases, e.g., through MPI_Op_create.

A final macro indicates that MPI_Bcast supports MPI_BOTTOM. Some implementations have a broken MPI_Bcast with respect to MPI_BOTTOM; BullX MPI and LAM seem to be among them, at least for some versions. The test case test_skeleton_and_content in broadcast_test.cpp can be used to detect this.
This header provides the mapping from C++ types to MPI data types.

is_mpi_builtin_datatype<T>, defined as boost::mpl::or_ of is_mpi_integer_datatype<T>, is_mpi_floating_point_datatype<T>, is_mpi_logical_datatype<T>, is_mpi_complex_datatype<T>, and is_mpi_byte_datatype<T>, is a type trait that determines whether there exists a built-in MPI data type for a given C++ type, i.e., whether there is a direct mapping from the C++ type to an MPI type. For instance, the C++ int type maps directly to the MPI type MPI_INT. When there is a direct mapping from the type T to an MPI type, is_mpi_builtin_datatype derives from mpl::true_ and the MPI data type is accessible via get_mpi_datatype. In general, users should not need to specialize this trait; however, if you have an additional C++ type that can map directly onto one of MPI's built-in types, specialize either this trait or one of the traits corresponding to categories of MPI data types (is_mpi_integer_datatype, is_mpi_floating_point_datatype, is_mpi_logical_datatype, is_mpi_complex_datatype, or is_mpi_byte_datatype). is_mpi_builtin_datatype derives mpl::true_ if any of the traits corresponding to MPI data type categories derives mpl::true_.

is_mpi_byte_datatype<T> (default false_) determines whether there exists a built-in byte MPI data type for a given C++ type, i.e., a direct mapping to an MPI data type classified as a byte data type. is_mpi_complex_datatype<T> (default false_) does the same for MPI data types classified as complex data types. See is_mpi_builtin_datatype for general information about built-in MPI data types.

is_mpi_datatype<T>, which defaults to boost::mpi::is_mpi_builtin_datatype<T>, is a type trait that determines whether a C++ type can be mapped to an MPI data type, i.e., whether it is possible to build an MPI data type that represents the C++ data type. When this is the case, is_mpi_datatype derives mpl::true_ and the MPI data type is accessible via get_mpi_datatype. For any C++ type that maps to a built-in MPI data type (see is_mpi_builtin_datatype), is_mpi_datatype is trivially true. However, any POD ("Plain Old Data") type containing types that themselves can be represented by MPI data types can itself be represented as an MPI data type. For instance, a point3d class containing three double values can be represented as an MPI data type. To do so, first make the data type Serializable (using the Boost.Serialization library); then specialize the is_mpi_datatype trait for the point type so that it derives mpl::true_:

    namespace boost { namespace mpi {
      template<> struct is_mpi_datatype<point> : public mpl::true_ { };
    } }

is_mpi_floating_point_datatype<T> (default false_) determines whether there exists a built-in floating point MPI data type for a given C++ type, and is_mpi_integer_datatype<T> (default false_) does the same for integer data types. See is_mpi_builtin_datatype for general information about built-in MPI data types.
false_Type trait that determines if there exists a built-in logical MPI data type for a given C++ type. This type trait determines when there is a direct mapping from a C++ type to an MPI data type that is classified as a logical data type. See is_mpi_builtin_datatype for general information about built-in MPI data types. MPI_Datatypeconst T &for an optimized call, a constructed object of the type should be passed; otherwise, an object will be default-constructed.Returns an MPI data type for a C++ type. The function creates an MPI data type for the given object x. The first time it is called for a class T, the MPI data type is created and cached. Subsequent calls for objects of the same type T return the cached MPI data type. The type T must allow creation of an MPI data type. That is, it must be Serializable and is_mpi_datatype<T> must derive mpl::true_.For fundamental MPI types, a copy of the MPI data type of the MPI library is returned.Note that since the data types are cached, the caller should never call MPI_Type_free() for the MPI data type returned by this call. The MPI data type corresponding to type T.
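To make the point3d example above concrete, the following minimal sketch (point3d and its members are illustrative assumptions) makes a POD type Serializable, specializes is_mpi_datatype, and retrieves the cached MPI data type via get_mpi_datatype:

  #include <boost/mpi/datatype.hpp>
  #include <boost/serialization/serialization.hpp>

  // Hypothetical POD type used for illustration.
  struct point3d { double x, y, z; };

  // Make point3d Serializable (non-intrusive form).
  template<typename Archive>
  void serialize(Archive& ar, point3d& p, const unsigned int /*version*/)
  {
    ar & p.x & p.y & p.z;
  }

  // Mark point3d as representable by an MPI data type.
  namespace boost { namespace mpi {
    template<> struct is_mpi_datatype<point3d> : mpl::true_ { };
  } }

  // The first call builds and caches the MPI type; later calls reuse it.
  // Do not call MPI_Type_free on the returned data type.
  MPI_Datatype point_type()
  {
    return boost::mpi::get_mpi_datatype(point3d());
  }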
This header provides forward declarations for the contents of the header datatype.hpp. It is expected to be used primarily by user-defined C++ classes that need to specialize is_mpi_datatype. a dummy data type giving MPI_PACKED as its MPI_Datatype MPI_Datatype
This header provides the environment class, which provides routines to initialize, finalize, and query the status of the Boost MPI environment. noncopyableInitialize, finalize, and query the MPI environment. The environment class is used to initialize, finalize, and query the MPI environment. It will typically be used in the main() function of a program, which will create a single instance of environment initialized with the arguments passed to the program:int main(int argc, char* argv[]) { mpi::environment env(argc, argv); } The instance of environment will initialize MPI (by calling MPI_Init) in its constructor and finalize MPI (by calling MPI_Finalize for normal termination or MPI_Abort for an uncaught exception) in its destructor.The use of environment is not mandatory. Users may choose to invoke MPI_Init and MPI_Finalize manually. In this case, no environment object is needed. If one is created, however, it will do nothing on either construction or destruction. booltrueWhen true, this object will abort the program if it is destructed due to an uncaught exception. Initialize the MPI environment.If the MPI environment has not already been initialized, initializes MPI with a call to MPI_Init. Since this constructor does not take command-line arguments (argc and argv), it is only available when the underlying MPI implementation supports calling MPI_Init with NULL arguments, indicated by the macro BOOST_MPI_HAS_NOARG_INITIALIZATION. threading::levelthe required level of threading support.booltrueWhen true, this object will abort the program if it is destructed due to an uncaught exception. Initialize the MPI environment.If the MPI environment has not already been initialized, initializes MPI with a call to MPI_Init_thread. Since this constructor does not take command-line arguments (argc and argv), it is only available when the underlying MPI implementation supports calling MPI_Init with NULL arguments, indicated by the macro BOOST_MPI_HAS_NOARG_INITIALIZATION. int &The number of arguments provided in argv, as passed into the program's main function.char **&The array of argument strings passed to the program via main.booltrueWhen true, this object will abort the program if it is destructed due to an uncaught exception. Initialize the MPI environment.If the MPI environment has not already been initialized, initializes MPI with a call to MPI_Init. int &The number of arguments provided in argv, as passed into the program's main function.char **&The array of argument strings passed to the program via main.threading::levelthe required level of threading supportbooltrueWhen true, this object will abort the program if it is destructed due to an uncaught exception. Initialize the MPI environment.If the MPI environment has not already been initialized, initializes MPI with a call to MPI_Init_thread. Shuts down the MPI environment.If this environment object was used to initialize the MPI environment, and the MPI environment has not already been shut down (finalized), this destructor will shut down the MPI environment. Under normal circumstances, this only involves invoking MPI_Finalize. However, if destruction is the result of an uncaught exception and the abort_on_exception parameter of the constructor had the value true, this destructor will invoke MPI_Abort with MPI_COMM_WORLD to abort the entire MPI program with a result code of -1. voidintThe error code to return to the environment. Abort all MPI processes.Aborts all MPI processes and returns to the environment. 
The precise behavior will be defined by the underlying MPI implementation. This is equivalent to a call to MPI_Abort with MPI_COMM_WORLD. Will not return. boolDetermine if the MPI environment has already been initialized.This routine is equivalent to a call to MPI_Initialized. true if the MPI environment has been initialized. boolDetermine if the MPI environment has already been finalized.The routine is equivalent to a call to MPI_Finalized. true if the MPI environment has been finalized. intRetrieves the maximum tag value.Returns the maximum value that may be used for the tag parameter of send/receive operations. This value will be somewhat smaller than the value of MPI_TAG_UB, because the Boost.MPI implementation reserves some tags for collective operations. the maximum tag value. intThe tag value used for collective operations.Returns the reserved tag value used by the Boost.MPI implementation for collective operations. Although users are not permitted to use this tag to send or receive messages, it may be useful when monitoring communication patterns. the tag value used for collective operations. optional< int >Retrieves the rank of the host process, if one exists.If there is a host process, this routine returns the rank of that process. Otherwise, it returns an empty optional<int>. MPI does not define the meaning of a "host" process: consult the documentation for the MPI implementation. This routine examines the MPI_HOST attribute of MPI_COMM_WORLD. The rank of the host process, if one exists. optional< int >Retrieves the rank of a process that can perform input/output.This routine returns the rank of a process that can perform input/output via the standard C and C++ I/O facilities. If every process can perform I/O using the standard facilities, this routine will return any_source; if no process can perform I/O, this routine will return no value (an empty optional). This routine examines the MPI_IO attribute of MPI_COMM_WORLD. the rank of the process that can perform I/O, any_source if every process can perform I/O, or no value if no process can perform I/O. std::stringRetrieve the name of this processor.This routine returns the name of this processor. The actual form of the name is unspecified, but may be documented by the underlying MPI implementation. This routine is implemented as a call to MPI_Get_processor_name. the name of this processor. threading::levelQuery the current level of thread support. boolAre we in the main thread? std::pair< int, int >MPI version. Returns a pair with the version and sub-version number. = MPI_THREAD_SINGLEOnly one thread will execute. = MPI_THREAD_FUNNELEDOnly the main thread will make MPI calls.The process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread). = MPI_THREAD_SERIALIZEDOnly one thread at a time will make MPI calls.The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized"). = MPI_THREAD_MULTIPLEMultiple threads may make MPI calls.Multiple threads may call MPI, with no restrictions. Specifies the supported threading level. Based on section 8.7.3 of the MPI-2 standard. std::ostream &std::ostream &levelFormatted output for threading level. std::istream &std::istream &level &Formatted input for threading level.
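A minimal sketch of threaded initialization together with the query routines described above; the requested threading level and the output format are illustrative choices:

  #include <boost/mpi/environment.hpp>
  #include <iostream>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    // Request funneled threading: only the main thread makes MPI calls.
    mpi::environment env(argc, argv, mpi::threading::funneled);

    std::cout << "processor: "    << mpi::environment::processor_name()
              << ", thread level: " << mpi::environment::thread_level()
              << ", main thread: "  << mpi::environment::is_main_thread()
              << std::endl;
    return 0;
  }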
This header provides exception classes that report MPI errors to the user and macros that translate MPI error codes into Boost.MPI exceptions. exceptionCatch-all exception class for MPI errors. Instances of this class will be thrown when an MPI error occurs. MPI failures that trigger these exceptions may or may not be recoverable, depending on the underlying MPI implementation. Consult the documentation for your MPI implementation to determine the effect of MPI errors. const char *A description of the error that occurred. const char *Retrieve the name of the MPI routine that reported the error. intRetrieve the result code returned from the MPI routine that reported the error. intReturns the MPI error class associated with the error that triggered this exception. const char *The MPI routine in which the error occurred. This should be a pointer to a string constant: it will not be copied.intThe result code returned from the MPI routine that aborted with an error. Build a new exception. Call the MPI routine MPIFunc with arguments Args (surrounded by parentheses). If the result is not MPI_SUCCESS, use boost::throw_exception to throw an exception or abort, depending on BOOST_NO_EXCEPTIONS.
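A sketch of catching this exception class around a point-to-point call; the destination, tag, and payload are illustrative:

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <string>
  namespace mpi = boost::mpi;

  void try_send(mpi::communicator& comm)
  {
    try {
      comm.send(1, 0, std::string("hello"));  // destination and tag are arbitrary
    } catch (mpi::exception& e) {
      std::cerr << e.routine_name()           // MPI routine that failed
                << " failed with code " << e.result_code()
                << ": " << e.what() << std::endl;
    }
  }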
This header defines facilities to support MPI communicators with graph topologies, using the graph interface defined by the Boost Graph Library. One can construct a communicator whose topology is described by any graph meeting the requirements of the Boost Graph Library's graph concepts. Likewise, any communicator that has a graph topology can be viewed as a graph by the Boost Graph Library, permitting one to use the BGL's graph algorithms on the process topology. mpi::graph_communicatorTraits structure that allows a communicator with graph topology to be viewed as a graph by the Boost Graph Library. The specialization of graph_traits for an MPI communicator allows a communicator with graph topology to be viewed as a graph. An MPI communicator with graph topology meets the requirements of the Graph, Incidence Graph, Adjacency Graph, Vertex List Graph, and Edge List Graph concepts from the Boost Graph Library. int std::pair< int, int > directed_tag disallow_parallel_edge_tag unspecified int unspecified counting_iterator< int > int unspecified int vertex_descriptorReturns a vertex descriptor that can never refer to any valid vertex. boost::mpi::communicatorAn MPI communicator with a graph topology. A graph_communicator is a communicator whose topology is expressed as a graph. Graph communicators have the same functionality as (intra)communicators, but also allow one to query the relationships among processes. Those relationships are expressed via a graph, using the interface defined by the Boost Graph Library. The graph_communicator class meets the requirements of the BGL Graph, Incidence Graph, Adjacency Graph, Vertex List Graph, and Edge List Graph concepts. const MPI_Comm &comm_create_kindBuild a new Boost.MPI graph communicator based on the MPI communicator comm with graph topology.comm may be any valid MPI communicator. If comm is MPI_COMM_NULL, an empty communicator (that cannot be used for communication) is created and the kind parameter is ignored. Otherwise, the kind parameter determines how the Boost.MPI communicator will be related to comm: If kind is comm_duplicate, duplicate comm to create a new communicator. This new communicator will be freed when the Boost.MPI communicator (and all copies of it) is destroyed. This option is only permitted if the underlying MPI implementation supports MPI 2.0; duplication of intercommunicators is not available in MPI 1.x. If kind is comm_take_ownership, take ownership of comm. It will be freed automatically when all of the Boost.MPI communicators go out of scope. If kind is comm_attach, this Boost.MPI communicator will reference the existing MPI communicator comm but will not free comm when the Boost.MPI communicator goes out of scope. This option should only be used when the communicator is managed by the user. const communicator &The communicator that the new, graph communicator will be based on.const Graph &Any type that meets the requirements of the Incidence Graph and Vertex List Graph concepts from the Boost Graph Library. The structure of this graph will become the topology of the communicator that is returned.boolfalseWhether MPI is permitted to re-order the process ranks within the returned communicator, to better optimize communication. If false, the ranks of each process in the returned communicator will match precisely the rank of that process within the original communicator. Create a new communicator whose topology is described by the given graph. 
The indices of the vertices in the graph will be assumed to be the ranks of the processes within the communicator. There may be fewer vertices in the graph than there are processes in the communicator; in this case, the resulting communicator will be a NULL communicator. const communicator &The communicator that the new, graph communicator will be based on. The ranks in rank refer to the processes in this communicator.const Graph &Any type that meets the requirements of the Incidence Graph and Vertex List Graph concepts from the Boost Graph Library. The structure of this graph will become the topology of the communicator that is returned.RankMapThis map translates vertices in the graph into ranks within the current communicator. It must be a Readable Property Map (see the Boost Property Map library) whose key type is the vertex type of the graph and whose value type is int.boolfalseWhether MPI is permitted to re-order the process ranks within the returned communicator, to better optimize communication. If false, the ranks of each process in the returned communicator will match precisely the rank of that process within the original communicator. Create a new communicator whose topology is described by the given graph. The rank map (rank) gives the mapping from vertices in the graph to ranks within the communicator. There may be fewer vertices in the graph than there are processes in the communicator; in this case, the resulting communicator will be a NULL communicator. intconst std::pair< int, int > &const graph_communicator &Returns the source vertex from an edge in the graph topology of a communicator. intconst std::pair< int, int > &const graph_communicator &Returns the target vertex from an edge in the graph topology of a communicator. unspecifiedintconst graph_communicator &Returns an iterator range containing all of the edges outgoing from the given vertex in a graph topology of a communicator. intintconst graph_communicator &Returns the out-degree of a vertex in the graph topology of a communicator. unspecifiedintconst graph_communicator &Returns an iterator range containing all of the neighbors of the given vertex in the communicator's graph topology. std::pair< counting_iterator< int >, counting_iterator< int > >const graph_communicator &Returns an iterator range that contains all of the vertices with the communicator's graph topology, i.e., all of the process ranks in the communicator. intconst graph_communicator &Returns the number of vertices within the graph topology of the communicator, i.e., the number of processes in the communicator. unspecifiedconst graph_communicator &Returns an iterator range that contains all of the edges with the communicator's graph topology. intconst graph_communicator &Returns the number of edges in the communicator's graph topology. identity_property_mapvertex_index_tconst graph_communicator &Returns a property map that maps from vertices in a communicator's graph topology to their index values. Since the vertices are ranks in the communicator, the returned property map is the identity property map. intvertex_index_tconst graph_communicator &intReturns the index of a vertex in the communicator's graph topology. Since the vertices are ranks in the communicator, this is the identity function.
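A sketch of building a graph_communicator from a Boost Graph Library adjacency_list; the ring-shaped graph is an illustrative assumption:

  #include <boost/mpi.hpp>
  #include <boost/mpi/graph_communicator.hpp>
  #include <boost/graph/adjacency_list.hpp>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    // A directed ring: vertex i is connected to vertex (i + 1) % size.
    typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                  boost::directedS> Graph;
    Graph ring(world.size());
    for (int i = 0; i < world.size(); ++i)
      add_edge(i, (i + 1) % world.size(), ring);

    // Vertex indices are taken as the ranks of the processes.
    mpi::graph_communicator graph_comm(world, ring, false);
    return 0;
  }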
This header defines the group class, which allows one to manipulate and query groups of processes. A group is a representation of a subset of the processes within a communicator. The group class allows one to create arbitrary subsets of the processes within a communicator. One can compute the union, intersection, or difference of two groups, or create new groups by specifically including or excluding certain processes. Given a group, one can create a new communicator containing only the processes in that group. optional< int >Determine the rank of the calling process in the group. This routine is equivalent to MPI_Group_rank. The rank of the calling process in the group, which will be a value in [0, size()). If the calling process is not in the group, returns an empty value. intDetermine the number of processes in the group. This routine is equivalent to MPI_Group_size. The number of processes in the group. OutputIteratorInputIteratorBeginning of the iterator range of ranks in the current group.InputIteratorPast the end of the iterator range of ranks in the current group.const group &The group that we are translating ranks to.OutputIteratorThe output iterator to which the translated ranks will be written.Translates the ranks from one group into the ranks of the same processes in another group. This routine translates each of the integer rank values in the iterator range [first, last) from the current group into rank values of the corresponding processes in to_group. The corresponding rank values are written via the output iterator out. When there is no correspondence between a rank in the current group and a rank in to_group, the value MPI_UNDEFINED is written to the output iterator. the output iterator, which points one step past the last rank written. boolDetermines whether the group is non-empty. True if the group is not empty, false if it is empty. MPI_GroupRetrieves the underlying MPI_Group associated with this group. The MPI_Group handle manipulated by this object. If this object represents the empty group, returns MPI_GROUP_EMPTY. groupInputIteratorInputIteratorCreates a new group including a subset of the processes in the current group. This routine creates a new group which includes only those processes in the current group that are listed in the integer iterator range [first, last). Equivalent to MPI_Group_incl.first The beginning of the iterator range of ranks to include.last Past the end of the iterator range of ranks to include. A new group containing those processes with ranks [first, last) in the current group. groupInputIteratorInputIteratorCreates a new group from all of the processes in the current group, excluding a specific subset of the processes. This routine creates a new group which includes all of the processes in the current group except those whose ranks are listed in the integer iterator range [first, last). Equivalent to MPI_Group_excl.first The beginning of the iterator range of ranks to exclude.last Past the end of the iterator range of ranks to exclude. A new group containing all of the processes in the current group except those processes with ranks [first, last) in the current group. Constructs an empty group. const MPI_Group &The MPI_Group used to construct this group.boolWhether the group should adopt the MPI_Group. When true, the group object (or one of its copies) will free the group (via MPI_Group_free) when the last copy is destroyed. Otherwise, the user is responsible for calling MPI_Group_free. Constructs a group from an MPI_Group. 
This routine allows one to construct a Boost.MPI group from a C MPI_Group. The group object can (optionally) adopt the MPI_Group, after which point the group object becomes responsible for freeing the MPI_Group when the last copy of group disappears. BOOST_MPI_DECL boolconst group &const group &Determines whether two process groups are identical. Equivalent to calling MPI_Group_compare and checking whether the result is MPI_IDENT. True when the two process groups contain the same processes in the same order. boolconst group &const group &Determines whether two process groups are not identical. Equivalent to calling MPI_Group_compare and checking whether the result is not MPI_IDENT. False when the two process groups contain the same processes in the same order. BOOST_MPI_DECL groupconst group &const group &Computes the union of two process groups. This routine returns a new group that contains all processes that are either in group g1 or in group g2 (or both). The processes that are in g1 will be first in the resulting group, followed by the processes from g2 (but not also in g1). Equivalent to MPI_Group_union. BOOST_MPI_DECL groupconst group &const group &Computes the intersection of two process groups. This routine returns a new group that contains all processes that are in group g1 and in group g2, ordered in the same way as g1. Equivalent to MPI_Group_intersection. BOOST_MPI_DECL groupconst group &const group &Computes the difference between two process groups. This routine returns a new group that contains all processes that are in group g1 but not in group g2, ordered in the same way as g1. Equivalent to MPI_Group_difference.
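A sketch of carving a subgroup out of a communicator's group and constructing a communicator from it; the choice of even ranks is illustrative:

  #include <boost/mpi.hpp>
  #include <vector>
  namespace mpi = boost::mpi;

  void even_subcommunicator(const mpi::communicator& world)
  {
    std::vector<int> even;
    for (int r = 0; r < world.size(); r += 2)
      even.push_back(r);

    // Include only the even ranks (equivalent to MPI_Group_incl).
    mpi::group g = world.group().include(even.begin(), even.end());

    // Processes not in g receive a NULL communicator.
    mpi::communicator evens(world, g);
    if (g.rank()) { /* this process belongs to the even group */ }
  }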
This header provides helpers to indicate to MPI collective operations that a buffer can be used both as input and output. Wrapper type to explicitly indicate that input data can be overwritten with an output value. T & T & T *T * T * inplace_t< T >T &the contributing input value; it will be overwritten with the output value where one is expected. If it is a pointer, the number of elements will be provided separately. inplace_t< T * >T *Wrap input data to indicate that it can be overwritten with an output value. The wrapped value or pointer.
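A sketch of the in-place wrapper used with all_reduce; the operand and the reduction operation are illustrative:

  #include <boost/mpi.hpp>
  #include <boost/mpi/collectives.hpp>
  #include <functional>
  namespace mpi = boost::mpi;

  void sum_in_place(const mpi::communicator& world)
  {
    int value = world.rank();
    // The input value is overwritten with the reduced result on every process.
    mpi::all_reduce(world, mpi::inplace(value), std::plus<int>());
    // value now holds 0 + 1 + ... + (size - 1)
  }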
This header defines the intercommunicator class, which permits communication between different process groups. boost::mpi::communicatorCommunication facilities among processes in different groups. The intercommunicator class provides communication facilities among processes from different groups. An intercommunicator is always associated with two process groups: one "local" process group, containing the process that initiates an MPI operation (e.g., the sender in a send operation), and one "remote" process group, containing the process that is the target of the MPI operation.While intercommunicators have essentially the same point-to-point operations as intracommunicators (the latter communicate only within a single process group), all communication with intercommunicators occurs between the processes in the local group and the processes in the remote group; communication within a group must use a different (intra-)communicator. intReturns the size of the local group, i.e., the number of local processes that are part of the group. boost::mpi::groupReturns the local group, containing all of the local processes in this intercommunicator. intReturns the rank of this process within the local group. intReturns the size of the remote group, i.e., the number of processes that are part of the remote group. boost::mpi::groupReturns the remote group, containing all of the remote processes in this intercommunicator. communicatorboolWhether the processes in this group should have higher rank numbers than the processes in the other group. Each of the processes within a particular group shall have the same "high" value.Merge the local and remote groups in this intercommunicator into a new intracommunicator containing the union of the processes in both groups. This method is equivalent to MPI_Intercomm_merge. the new, merged intracommunicator const MPI_Comm &comm_create_kindBuild a new Boost.MPI intercommunicator based on the MPI intercommunicator comm.comm may be any valid MPI intercommunicator. If comm is MPI_COMM_NULL, an empty communicator (that cannot be used for communication) is created and the kind parameter is ignored. Otherwise, the kind parameter determines how the Boost.MPI communicator will be related to comm: If kind is comm_duplicate, duplicate comm to create a new communicator. This new communicator will be freed when the Boost.MPI communicator (and all copies of it) is destroyed. This option is only permitted if the underlying MPI implementation supports MPI 2.0; duplication of intercommunicators is not available in MPI 1.x. If kind is comm_take_ownership, take ownership of comm. It will be freed automatically when all of the Boost.MPI communicators go out of scope. If kind is comm_attach, this Boost.MPI communicator will reference the existing MPI communicator comm but will not free comm when the Boost.MPI communicator goes out of scope. This option should only be used when the communicator is managed by the user. const communicator &The intracommunicator containing all of the processes that will go into the local group.intThe rank within the local intracommunicator that will serve as its leader.const communicator &The intracommunicator containing all of the processes that will go into the remote group.intThe rank within the peer group that will serve as its leader. Constructs a new intercommunicator whose local group is local and whose remote group is peer. The intercommunicator can then be used to communicate between processes in the two groups. 
This constructor is equivalent to a call to MPI_Intercomm_create.
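A sketch of constructing an intercommunicator between two halves of an existing communicator and merging it back into an intracommunicator, assuming at least two processes; the split criterion is illustrative:

  #include <boost/mpi.hpp>
  #include <boost/mpi/intercommunicator.hpp>
  namespace mpi = boost::mpi;

  void split_and_bridge(const mpi::communicator& world)
  {
    bool low = world.rank() < world.size() / 2;
    mpi::communicator local = world.split(low ? 0 : 1);

    // Leaders are rank 0 of each half; the remote leader is named by its
    // rank within world (the peer communicator).
    int remote_leader = low ? world.size() / 2 : 0;
    mpi::intercommunicator bridge(local, 0, world, remote_leader);

    // Merge back into one intracommunicator; the "high" group is assigned
    // the upper rank numbers.
    mpi::communicator merged = bridge.merge(!low);
  }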
This header defines operations for completing non-blocking communication requests. std::pair< status, ForwardIterator >ForwardIteratorThe iterator that denotes the beginning of the sequence of request objects.ForwardIteratorThe iterator that denotes the end of the sequence of request objects. This may not be equal to first.Wait until any non-blocking request has completed. This routine takes in a set of requests stored in the iterator range [first,last) and waits until any of these requests has been completed. It provides functionality equivalent to MPI_Waitany. A pair containing the status object that corresponds to the completed operation and the iterator referencing the completed request. optional< std::pair< status, ForwardIterator > >ForwardIteratorThe iterator that denotes the beginning of the sequence of request objects.ForwardIteratorThe iterator that denotes the end of the sequence of request objects.Test whether any non-blocking request has completed. This routine takes in a set of requests stored in the iterator range [first,last) and tests whether any of these requests has been completed. This routine is similar to wait_any, but will not block waiting for requests to complete. It provides functionality equivalent to MPI_Testany. If any outstanding requests have completed, a pair containing the status object that corresponds to the completed operation and the iterator referencing the completed request. Otherwise, an empty optional<>. OutputIteratorForwardIteratorThe iterator that denotes the beginning of the sequence of request objects.ForwardIteratorThe iterator that denotes the end of the sequence of request objects.OutputIteratorIf provided, an output iterator through which the status of each request will be emitted. The status objects are emitted in the same order as the requests are retrieved from [first,last).voidForwardIteratorForwardIteratorWait until all non-blocking requests have completed. This routine takes in a set of requests stored in the iterator range [first,last) and waits until all of these requests have been completed. It provides functionality equivalent to MPI_Waitall. If an out parameter was provided, the value out after all of the status objects have been emitted. optional< OutputIterator >ForwardIteratorThe iterator that denotes the beginning of the sequence of request objects.ForwardIteratorThe iterator that denotes the end of the sequence of request objects.OutputIteratorIf provided and all requests have been completed, an output iterator through which the status of each request will be emitted. The status objects are emitted in the same order as the requests are retrieved from [first,last).boolForwardIteratorForwardIteratorTests whether all non-blocking requests have completed. This routine takes in a set of requests stored in the iterator range [first,last) and determines whether all of these requests have been completed. However, due to limitations of the underlying MPI implementation, if any of the requests refers to a non-blocking send or receive of a serialized data type, test_all will always return the equivalent of false (i.e., the requests cannot all be finished at this time). This routine performs the same functionality as wait_all, except that this routine will not block. This routine provides functionality equivalent to MPI_Testall. If an out parameter was provided, the value out after all of the status objects have been emitted (if all requests were completed) or an empty optional<>. 
If no out parameter was provided, returns true if all requests have completed or false otherwise. std::pair< OutputIterator, BidirectionalIterator >BidirectionalIteratorThe iterator that denotes the beginning of the sequence of request objects.BidirectionalIteratorThe iterator that denotes the end of the sequence of request objects. This may not be equal to first.OutputIteratorIf provided, the status objects corresponding to completed requests will be emitted through this output iterator.BidirectionalIteratorBidirectionalIteratorBidirectionalIteratorWait until some non-blocking requests have completed. This routine takes in a set of requests stored in the iterator range [first,last) and waits until at least one of the requests has completed. It then completes all of the requests it can, partitioning the input sequence into pending requests followed by completed requests. If an output iterator is provided, status objects will be emitted for each of the completed requests. This routine provides functionality equivalent to MPI_Waitsome. If the out parameter was provided, a pair containing the output iterator out after all of the status objects have been written through it and an iterator referencing the first completed request. If no out parameter was provided, only the iterator referencing the first completed request will be emitted. std::pair< OutputIterator, BidirectionalIterator >BidirectionalIteratorThe iterator that denotes the beginning of the sequence of request objects.BidirectionalIteratorThe iterator that denotes the end of the sequence of request objects. This may not be equal to first.OutputIteratorIf provided, the status objects corresponding to completed requests will be emitted through this output iterator.BidirectionalIteratorBidirectionalIteratorBidirectionalIteratorTest whether some non-blocking requests have completed. This routine takes in a set of requests stored in the iterator range [first,last) and tests to see if any of the requests has completed. It completes all of the requests it can, partitioning the input sequence into pending requests followed by completed requests. If an output iterator is provided, status objects will be emitted for each of the completed requests. This routine is similar to wait_some, but does not wait until any requests have completed. This routine provides functionality equivalent to MPI_Testsome. If the out parameter was provided, a pair containing the output iterator out after all of the status objects have been written through it and an iterator referencing the first completed request. If no out parameter was provided, only the iterator referencing the first completed request will be emitted.
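A sketch of pairing isend/irecv with wait_all; neighbors, tags, and payloads are illustrative:

  #include <boost/mpi.hpp>
  #include <boost/mpi/nonblocking.hpp>
  #include <string>
  namespace mpi = boost::mpi;

  void exchange(const mpi::communicator& world)
  {
    int next = (world.rank() + 1) % world.size();
    int prev = (world.rank() + world.size() - 1) % world.size();

    std::string out = "hello", in;
    mpi::request reqs[2];
    reqs[0] = world.isend(next, 0, out);
    reqs[1] = world.irecv(prev, 0, in);

    mpi::wait_all(reqs, reqs + 2);  // blocks until both requests complete
  }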
This header provides a mapping from function objects to MPI_Op constants used in MPI collective operations. It also provides several new function object types not present in the standard <functional> header that have direct mappings to MPI_Op. Compute the bitwise AND of two integral values. This binary function object computes the bitwise AND of the two values it is given. When used with MPI and a type T that has an associated, built-in MPI data type, translates to MPI_BAND. T T T Tconst T &const T & x & y. Compute the bitwise OR of two integral values. This binary function object computes the bitwise OR of the two values it is given. When used with MPI and a type T that has an associated, built-in MPI data type, translates to MPI_BOR. T T T Tconst T &const T & x | y. Compute the bitwise exclusive OR of two integral values. This binary function object computes the bitwise exclusive OR of the two values it is given. When used with MPI and a type T that has an associated, built-in MPI data type, translates to MPI_BXOR. T T T Tconst T &const T & x ^ y. false_Determine if a function object type is commutative. This trait determines if an operation Op is commutative when applied to values of type T. Parallel operations such as reduce and prefix_sum can be implemented more efficiently with commutative operations. To mark an operation as commutative, users should specialize is_commutative and derive from the class mpl::true_. false_Determine if a function object has an associated MPI_Op. This trait determines if a function object type Op, when used with argument type T, has an associated MPI_Op. If so, is_mpi_op<Op,T> will derive from mpl::true_ and will contain a static member function op that takes no arguments but returns the associated MPI_Op value. For instance, is_mpi_op<std::plus<int>,int>::op() returns MPI_SUM.Users may specialize is_mpi_op for any other class templates that map onto operations that have MPI_Op equivalences, such as bitwise OR, logical AND, or maximum. However, users are encouraged to use the standard function objects in the functional and boost/mpi/operations.hpp headers whenever possible. For function objects that are class templates with a single template parameter, it may be easier to specialize is_builtin_mpi_op. Compute the logical exclusive OR of two integral values. This binary function object computes the logical exclusive OR of the two values it is given. When used with MPI and a type T that has an associated, built-in MPI data type, translates to MPI_LXOR. T T T Tconst T &const T & the logical exclusive OR of x and y. Compute the maximum of two values. This binary function object computes the maximum of the two values it is given. When used with MPI and a type T that has an associated, built-in MPI data type, translates to MPI_MAX. T T T const T &const T &const T & the maximum of x and y. Compute the minimum of two values. This binary function object computes the minimum of the two values it is given. When used with MPI and a type T that has an associated, built-in MPI data type, translates to MPI_MIN. T T T const T &const T &const T & the minimum of x and y.
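A sketch of these function objects in a reduction, together with a commutativity specialization for a hypothetical user-defined operation (my_min is illustrative):

  #include <boost/mpi.hpp>
  #include <boost/mpi/collectives.hpp>
  #include <boost/mpi/operations.hpp>
  #include <boost/mpl/bool.hpp>
  namespace mpi = boost::mpi;

  // Hypothetical user-defined operation, marked commutative so that
  // reductions may be reordered for efficiency.
  struct my_min {
    int operator()(int a, int b) const { return a < b ? a : b; }
  };
  namespace boost { namespace mpi {
    template<> struct is_commutative<my_min, int> : mpl::true_ { };
  } }

  void reductions(const mpi::communicator& world)
  {
    int global_max;
    // maximum<int> maps directly onto MPI_MAX for built-in types.
    mpi::reduce(world, world.rank(), global_max, mpi::maximum<int>(), 0);

    int global_min;
    mpi::reduce(world, world.rank(), global_min, my_min(), 0);
  }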
This header provides the facilities for unpacking Serializable data types from a buffer using MPI_Unpack. The buffers are typically received via MPI, having been packed either via the facilities in packed_oarchive.hpp or MPI_Pack. iprimitiveAn archive that unpacks binary data from an MPI buffer. The packed_iarchive class is an Archiver (as in the Boost.Serialization library) that unpacks binary data from a buffer received via MPI. It can operate on any Serializable data type and will use the MPI_Unpack function of the underlying MPI implementation to perform deserialization. voidT &mpl::false_ voidT &mpl::true_ voidT & voidarchive::class_id_optional_type & voidarchive::class_id_type & voidarchive::version_type & voidarchive::class_id_reference_type & voidarchive::class_name_type & MPI_Comm const &The communicator over which this archive will be received.buffer_type &A user-defined buffer that contains the binary representation of serialized objects.unsigned intboost::archive::no_headerControl the serialization of the data types. Refer to the Boost.Serialization documentation before changing the default flags. int0Construct a packed_iarchive to receive data over the given MPI communicator and with an initial buffer. MPI_Comm const &The communicator over which this archive will be received.std::size_t0unsigned intboost::archive::no_headerControl the serialization of the data types. Refer to the Boost.Serialization documentation before changing the default flags. Construct a packed_iarchive to receive data over the given MPI communicator. packed_iprimitive
This header provides the facilities for packing Serializable data types into a buffer using MPI_Pack. The buffers can then be transmitted via MPI and unpacked either via the facilities in packed_iarchive.hpp or MPI_Unpack. oprimitiveAn archive that packs binary data into an MPI buffer. The packed_oarchive class is an Archiver (as in the Boost.Serialization library) that packs binary data into a buffer for transmission via MPI. It can operate on any Serializable data type and will use the MPI_Pack function of the underlying MPI implementation to perform serialization. voidT const &mpl::false_ voidT const &mpl::true_ voidT const & voidconst archive::class_id_optional_type & voidconst archive::class_name_type & voidconst archive::class_id_type & voidconst archive::version_type & MPI_Comm const &The communicator over which this archive will be sent.buffer_type &A user-defined buffer that will be filled with the binary representation of serialized objects.unsigned intboost::archive::no_headerControl the serialization of the data types. Refer to the Boost.Serialization documentation before changing the default flags.Construct a packed_oarchive for transmission over the given MPI communicator and with an initial buffer. MPI_Comm const &The communicator over which this archive will be sent.unsigned intboost::archive::no_headerControl the serialization of the data types. Refer to the Boost.Serialization documentation before changing the default flags. Construct a packed_oarchive for transmission over the given MPI communicator. packed_oprimitive
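A sketch of the two archives working together: packing on the sender with packed_oarchive and unpacking on the receiver with packed_iarchive; roles, tags, and payloads are illustrative:

  #include <boost/mpi.hpp>
  #include <boost/mpi/packed_oarchive.hpp>
  #include <boost/mpi/packed_iarchive.hpp>
  #include <string>
  namespace mpi = boost::mpi;

  void pack_and_ship(const mpi::communicator& world)
  {
    if (world.rank() == 0) {
      mpi::packed_oarchive oa(world);
      int n = 42; std::string s = "payload";
      oa << n << s;           // serialize into the MPI_Pack'ed buffer
      world.send(1, 0, oa);   // ship the packed archive
    } else if (world.rank() == 1) {
      mpi::packed_iarchive ia(world);
      world.recv(0, 0, ia);   // receive the packed buffer
      int n; std::string s;
      ia >> n >> s;           // unpack in the same order
    }
  }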
This header interacts with the Python bindings for Boost.MPI. The routines in this header can be used to register user-defined and library-defined data types with Boost.MPI for efficient (de-)serialization and separate transmission of skeletons and content. voidconst T &T()A sample value of the type T. This may be used to compute the Python type associated with the C++ type T.PyTypeObject *0The Python type associated with the C++ type T. If not provided, it will be computed from the sample value value. Register the type T for direct serialization within Boost.MPI. The register_serialized function registers a C++ type for direct serialization within Boost.MPI. Direct serialization elides the use of the Python pickle package when serializing Python objects that represent C++ values. Direct serialization can be beneficial both to improve serialization performance (Python pickling can be very inefficient) and to permit serialization for Python-wrapped C++ objects that do not support pickling. voidconst T &T()A sample object of type T that will be used to determine the Python type associated with T, if type is not specified.PyTypeObject *0The Python type associated with the C++ type T. If not provided, it will be computed from the sample value value. Registers a type for use with the skeleton/content mechanism in Python. The skeleton/content mechanism can only be used from Python with C++ types that have previously been registered via a call to this function. Both the sender and the receiver must register the type. It is permitted to call this function multiple times for the same type T, but only one call per process per type is required. The type T must be Serializable.
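Assuming the registration routines carry the signatures sketched above (and that register_skeleton_and_content is the name of the skeleton/content registration function), module-initialization code might register a wrapped type as follows; my_point and its Python wrapping are hypothetical:

  #include <boost/mpi/python.hpp>

  // Hypothetical exported type; assumed to be Serializable and already
  // wrapped for Python (e.g., via Boost.Python).
  struct my_point { double x, y; };

  void register_mpi_types()
  {
    // Register for direct serialization (bypasses Python pickling).
    boost::mpi::python::register_serialized(my_point());
    // Allow my_point to be used with the skeleton/content mechanism.
    boost::mpi::python::register_skeleton_and_content(my_point());
  }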
This header defines the class request, which contains a request for non-blocking communication. A request for a non-blocking send or receive. This structure contains information about a non-blocking send or receive and will be returned from isend or irecv, respectively. status optional< status > void bool optional< MPI_Request & > statusWait until the communication associated with this request has completed, then return a status object describing the communication. optional< status >Determine whether the communication associated with this request has completed successfully. If so, returns the status object describing the communication. Otherwise, returns an empty optional<> to indicate that the communication has not completed yet. Note that once test() returns a status object, the request has completed and wait() should not be called. voidCancel a pending communication, assuming it has not already been completed. optional< MPI_Request & >The trivial MPI request implementing this request, provided it is trivial. Probably irrelevant to most users. boolIs this request potentially pending? voidboost::shared_ptr< void > Constructs a NULL request. requestcommunicator const &intintT const &Send a known number of primitive objects in one MPI request. requestcommunicator const &intintT const *int requestcommunicator const &intintvoid const *std::size_t requestcommunicator const &intintMPI_Datatype requestcommunicator const &intint requestcommunicator const &intintT &Receive a known number of primitive objects in one MPI request. requestcommunicator const &intintT *int requestcommunicator const &intintMPI_Datatype requestcommunicator const &intint requestConstruct a request for simple data of unknown size. requestcommunicator const &intintT &Constructs a request for serialized data. requestcommunicator const &intintT *intConstructs a request for an array of complex data. requestcommunicator const &intintstd::vector< T, A > &Request to receive an array of primitive data. requestcommunicator const &intintstd::vector< T, A > const &Request to send an array of primitive data. handler *
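A sketch of polling a request with test() rather than blocking in wait(); the partner rank and the overlapped work are illustrative:

  #include <boost/mpi.hpp>
  #include <boost/optional.hpp>
  #include <string>
  namespace mpi = boost::mpi;

  void overlap(const mpi::communicator& world, int partner)
  {
    std::string msg;
    mpi::request req = world.irecv(partner, 0, msg);

    boost::optional<mpi::status> done;
    while (!(done = req.test())) {
      /* do useful local work while the message is in flight */
    }
    // Once test() returns a status object, the request has completed;
    // do not call wait() afterwards.
  }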
This header provides facilities that allow the structure of data types (called the "skeleton") to be transmitted and received separately from the content stored in those data types. These facilities are useful when the data in a stable data structure (e.g., a mesh or a graph) will need to be transmitted repeatedly. In this case, transmitting the skeleton only once saves both communication effort (it need not be sent again) and local computation (serialization need only be performed once for the content).
This header contains all of the forward declarations required to transmit skeletons of data structures and the content of data structures separately. To actually transmit skeletons or content, include the header boost/mpi/skeleton_and_content.hpp. const skeleton_proxy< T >T & const contentconst T &
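A sketch of the skeleton/content idiom for a stable structure: the skeleton is sent once, the content repeatedly; the list size, tags, and iteration count are illustrative:

  #include <boost/mpi.hpp>
  #include <boost/mpi/skeleton_and_content.hpp>
  #include <list>
  namespace mpi = boost::mpi;

  void repeated_transmission(const mpi::communicator& world)
  {
    std::list<int> values(100);
    if (world.rank() == 0) {
      world.send(1, 0, mpi::skeleton(values));      // structure, sent once
      for (int i = 0; i < 10; ++i) {
        /* ... update the elements of values in place ... */
        world.send(1, 1, mpi::get_content(values)); // content only
      }
    } else if (world.rank() == 1) {
      world.recv(0, 0, mpi::skeleton(values));      // shape the list first
      mpi::content c = mpi::get_content(values);
      for (int i = 0; i < 10; ++i)
        world.recv(0, 1, c);                        // receive into place
    }
  }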
This header defines the class status, which reports on the results of point-to-point communication. Contains information about a message that has been or can be received. This structure contains status information about messages that have been received (with communicator::recv) or can be received (returned from communicator::probe or communicator::iprobe). It permits access to the source of the message, message tag, error code (rarely used), or the number of elements that have been transmitted. int intRetrieve the source of the message. intRetrieve the message tag. intRetrieve the error code. boolDetermine whether the communication associated with this object has been successfully cancelled. optional< int >Determines the number of elements of type T contained in the message. The type T must have an associated data type, i.e., is_mpi_datatype<T> must derive mpl::true_. In cases where the type T does not match the transmitted type, this routine will return an empty optional<int>. the number of T elements in the message, if it can be determined. MPI_Status &References the underlying MPI_Status const MPI_Status &References the underlying MPI_Status MPI_Status const &
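A sketch of probing for a message and sizing the receive buffer from the resulting status object; the wildcards are illustrative:

  #include <boost/mpi.hpp>
  #include <vector>
  namespace mpi = boost::mpi;

  void probe_then_receive(const mpi::communicator& world)
  {
    // Block until a message is available, without actually receiving it.
    mpi::status s = world.probe(mpi::any_source, mpi::any_tag);

    // Size the buffer from the status object, then receive in place.
    if (boost::optional<int> n = s.count<int>()) {
      std::vector<int> buffer(*n);
      if (*n > 0)
        world.recv(s.source(), s.tag(), &buffer[0], *n);
    }
  }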
This header provides the timer class, which provides access to the MPI timers. A simple timer that provides access to the MPI timing facilities. The timer class is a simple wrapper around the MPI timing facilities that mimics the interface of the Boost Timer library. voidRestart the timer. elapsed() == 0 doubleReturn the amount of time that has elapsed since the last construction or reset, in seconds. doubleReturn an estimate of the maximum possible value of elapsed(). Note that this routine may return too high a value on some systems. doubleReturns the minimum non-zero value that elapsed() may return. This is the resolution of the timer. Initializes the timer elapsed() == 0 boolDetermines whether the elapsed time values are global times or local processor times.
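A sketch of timing a collective with the timer class; the broadcast payload is illustrative:

  #include <boost/mpi.hpp>
  #include <boost/mpi/timer.hpp>
  #include <iostream>
  namespace mpi = boost::mpi;

  void time_broadcast(const mpi::communicator& world)
  {
    int value = world.rank() == 0 ? 42 : 0;

    mpi::timer t;                  // starts timing on construction
    mpi::broadcast(world, value, 0);
    double seconds = t.elapsed();  // seconds since construction/restart

    if (world.rank() == 0)
      std::cout << "broadcast took " << seconds
                << " s (resolution " << t.elapsed_min() << " s)" << std::endl;
  }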