Namespaces

- namespace detail

Classes

- class ShellBoundaryCommPlan
- class SubdomainNeighborhoodSendRecvBuffer: Send and receive buffers for all process-local subdomain boundaries.
Functions

- template<typename GridDataType>
  void pack_send_and_recv_local_subdomain_boundaries(const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &boundary_send_buffers, SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &boundary_recv_buffers)
  Packs, sends, and receives local subdomain boundaries using two sets of buffers.

- template<typename GridDataType>
  void unpack_and_reduce_local_subdomain_boundaries(const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &boundary_recv_buffers, CommunicationReduction reduction = CommunicationReduction::SUM)
  Unpacks and reduces local subdomain boundaries.

- template<typename ScalarType>
  void send_recv(const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar<ScalarType> &grid, const CommunicationReduction reduction = CommunicationReduction::SUM)
  Executes packing, sending, receiving, and unpacking operations for the shell.

- template<typename ScalarType>
  void send_recv(const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar<ScalarType> &grid, SubdomainNeighborhoodSendRecvBuffer<ScalarType> &send_buffers, SubdomainNeighborhoodSendRecvBuffer<ScalarType> &recv_buffers, const CommunicationReduction reduction = CommunicationReduction::SUM)
  Executes packing, sending, receiving, and unpacking operations for the shell.

- template<typename GridDataType>
  void send_recv_with_plan(const ShellBoundaryCommPlan<GridDataType> &plan, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &recv_buffers, CommunicationReduction reduction = CommunicationReduction::SUM)

Variables

- constexpr int MPI_TAG_BOUNDARY_DATA = 100
template<typename GridDataType>
void terra::communication::shell::pack_send_and_recv_local_subdomain_boundaries(
    const grid::shell::DistributedDomain &domain,
    const GridDataType &data,
    SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &boundary_send_buffers,
    SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &boundary_recv_buffers)
Packs, sends, and receives local subdomain boundaries using two sets of buffers.

Performs "additive" communication: nodes at the subdomain interfaces overlap and are reduced using the selected reduction mode during the receiving phase. This is typically required for matrix-free matrix-vector multiplications in a finite element context: nodes shared by elements of two neighboring subdomains receive contributions from both subdomains, and these contributions must be added. In this case, the required reduction mode is CommunicationReduction::SUM.

Communication works as follows: if the sending and receiving subdomains are on the same process, the data is packed directly into the recv buffers; it is, however, not yet written directly from subdomain A to subdomain B. This and further optimizations are possible. This function waits until all recv buffers are filled, but does not unpack them; call unpack_and_reduce_local_subdomain_boundaries() to complete the communication.

The send buffers are only required until this function returns. The recv buffers must be passed to the corresponding unpacking function unpack_and_reduce_local_subdomain_boundaries().
Parameters

- domain: the DistributedDomain that this works on
- data: the data (Kokkos::View) to be communicated
- boundary_send_buffers: SubdomainNeighborhoodSendRecvBuffer instance that serves for sending data; can be reused after this function returns
- boundary_recv_buffers: SubdomainNeighborhoodSendRecvBuffer instance that serves for receiving data; must be passed to unpack_and_reduce_local_subdomain_boundaries()
template<typename ScalarType>
void terra::communication::shell::send_recv(
    const grid::shell::DistributedDomain &domain,
    grid::Grid4DDataScalar<ScalarType> &grid,
    const CommunicationReduction reduction = CommunicationReduction::SUM)
Executes packing, sending, receiving, and unpacking operations for the shell.
Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().
template<typename ScalarType>
void terra::communication::shell::send_recv(
    const grid::shell::DistributedDomain &domain,
    grid::Grid4DDataScalar<ScalarType> &grid,
    SubdomainNeighborhoodSendRecvBuffer<ScalarType> &send_buffers,
    SubdomainNeighborhoodSendRecvBuffer<ScalarType> &recv_buffers,
    const CommunicationReduction reduction = CommunicationReduction::SUM)
Executes packing, sending, receiving, and unpacking operations for the shell.
Send and receive buffers must be passed. This is the preferred way to execute communication since the buffers can be reused.
Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().
template<typename GridDataType>
void terra::communication::shell::send_recv_with_plan(
    const ShellBoundaryCommPlan<GridDataType> &plan,
    const GridDataType &data,
    SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &recv_buffers,
    CommunicationReduction reduction = CommunicationReduction::SUM)
template<typename GridDataType>
void terra::communication::shell::unpack_and_reduce_local_subdomain_boundaries(
    const grid::shell::DistributedDomain &domain,
    const GridDataType &data,
    SubdomainNeighborhoodSendRecvBuffer<typename GridDataType::value_type, grid::grid_data_vec_dim<GridDataType>()> &boundary_recv_buffers,
    CommunicationReduction reduction = CommunicationReduction::SUM)
Unpacks and reduces local subdomain boundaries.
The recv buffers must be the same instances as used during sending in pack_send_and_recv_local_subdomain_boundaries().
See pack_send_and_recv_local_subdomain_boundaries() for more details on how the communication works.
Parameters

- domain: the DistributedDomain that this works on
- data: the data (Kokkos::View) to be communicated
- boundary_recv_buffers: SubdomainNeighborhoodSendRecvBuffer instance that serves for receiving data; must be the same instance that was previously populated by pack_send_and_recv_local_subdomain_boundaries()
- reduction: reduction mode
constexpr int terra::communication::shell::MPI_TAG_BOUNDARY_DATA = 100