terra::communication::shell Namespace Reference

Namespaces

namespace  detail
 

Classes

class  ShellBoundaryCommPlan
 
class  SubdomainNeighborhoodSendRecvBuffer
 Send and receive buffers for all process-local subdomain boundaries.
 

Functions

template<typename GridDataType >
void pack_send_and_recv_local_subdomain_boundaries (const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_send_buffers, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_recv_buffers)
 Packs, sends and recvs local subdomain boundaries using two sets of buffers.
 
template<typename GridDataType >
void unpack_and_reduce_local_subdomain_boundaries (const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_recv_buffers, CommunicationReduction reduction=CommunicationReduction::SUM)
 Unpacks and reduces local subdomain boundaries.
 
template<typename ScalarType >
void send_recv (const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar< ScalarType > &grid, const CommunicationReduction reduction=CommunicationReduction::SUM)
 Executes packing, sending, receiving, and unpacking operations for the shell.
 
template<typename ScalarType >
void send_recv (const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar< ScalarType > &grid, SubdomainNeighborhoodSendRecvBuffer< ScalarType > &send_buffers, SubdomainNeighborhoodSendRecvBuffer< ScalarType > &recv_buffers, const CommunicationReduction reduction=CommunicationReduction::SUM)
 Executes packing, sending, receiving, and unpacking operations for the shell.
 
template<typename GridDataType >
void send_recv_with_plan (const ShellBoundaryCommPlan< GridDataType > &plan, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &recv_buffers, CommunicationReduction reduction=CommunicationReduction::SUM)
 

Variables

constexpr int MPI_TAG_BOUNDARY_DATA = 100
 

Function Documentation

◆ pack_send_and_recv_local_subdomain_boundaries()

template<typename GridDataType >
void terra::communication::shell::pack_send_and_recv_local_subdomain_boundaries ( const grid::shell::DistributedDomain &  domain,
const GridDataType &  data,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  boundary_send_buffers,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  boundary_recv_buffers 
)

Packs, sends and recvs local subdomain boundaries using two sets of buffers.

Communication works like this:

  • data is packed from the boundaries of the grid data structure into send buffers
  • the send buffers are sent via MPI
  • the data is received in receive buffers
  • the receive buffers are unpacked into the grid data structure (rotating the data where necessary)

If the sending and receiving subdomains reside on the same process, the data is packed directly into the recv buffers; it is, however, not yet written directly from subdomain A to subdomain B. This and further optimizations are possible.

Note
Must be complemented with unpack_and_reduce_local_subdomain_boundaries() to complete communication. This function waits until all recv buffers are filled - but does not unpack.

Performs "additive" communication. Nodes at the subdomain interfaces overlap and will be reduced using some reduction mode during the receiving phase. This is typically required for matrix-free matrix-vector multiplications in a finite element context: nodes that are shared by elements of two neighboring subdomains receive contributions from both subdomains that need to be added. In this case, the required reduction mode is CommunicationReduction::SUM.

The send buffers are only required until this function returns. The recv buffers must be passed to the corresponding unpacking function unpack_and_reduce_local_subdomain_boundaries().

Parameters
domain    the DistributedDomain that this function operates on
data    the data (Kokkos::View) to be communicated
boundary_send_buffers    SubdomainNeighborhoodSendRecvBuffer instance used for sending data - can be reused after this function returns
boundary_recv_buffers    SubdomainNeighborhoodSendRecvBuffer instance used for receiving data - must be passed to unpack_and_reduce_local_subdomain_boundaries()
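A split-phase use of this function pair might look as follows (a non-compilable sketch against the documented signatures; the construction of `domain`, `data`, and the buffers is a placeholder, and the exact namespace of CommunicationReduction is assumed):

```cpp
// Sketch only: assumes a suitably constructed domain, grid data, and buffers.
using Buffers = terra::communication::shell::SubdomainNeighborhoodSendRecvBuffer<
    typename GridData::value_type, terra::grid::grid_data_vec_dim<GridData>()>;

Buffers send_buffers = /* allocate once, reuse across iterations */;
Buffers recv_buffers = /* allocate once, reuse across iterations */;

// Phase 1: pack local boundaries, post sends/recvs, wait for all recvs.
terra::communication::shell::pack_send_and_recv_local_subdomain_boundaries(
    domain, data, send_buffers, recv_buffers);

// send_buffers may be reused from here on; recv_buffers must stay untouched.

// Phase 2: unpack the received boundaries and reduce overlapping nodes.
terra::communication::shell::unpack_and_reduce_local_subdomain_boundaries(
    domain, data, recv_buffers, CommunicationReduction::SUM);
```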

◆ send_recv() [1/2]

template<typename ScalarType >
void terra::communication::shell::send_recv ( const grid::shell::DistributedDomain &  domain,
grid::Grid4DDataScalar< ScalarType > &  grid,
const CommunicationReduction  reduction = CommunicationReduction::SUM 
)

Executes packing, sending, receiving, and unpacking operations for the shell.

Note
THIS MAY COME WITH A PERFORMANCE PENALTY. This function (re-)allocates the send and receive buffers on every call. Use it only where performance does not matter (e.g., in tests). Prefer the overloads that accept buffers, so the buffers can be reused across subsequent send-recv calls.

Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().

◆ send_recv() [2/2]

template<typename ScalarType >
void terra::communication::shell::send_recv ( const grid::shell::DistributedDomain &  domain,
grid::Grid4DDataScalar< ScalarType > &  grid,
SubdomainNeighborhoodSendRecvBuffer< ScalarType > &  send_buffers,
SubdomainNeighborhoodSendRecvBuffer< ScalarType > &  recv_buffers,
const CommunicationReduction  reduction = CommunicationReduction::SUM 
)

Executes packing, sending, receiving, and unpacking operations for the shell.

Send and receive buffers must be passed. This is the preferred way to execute communication since the buffers can be reused.

Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().
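In code, the preferred buffered pattern might look like this (sketch only; the construction of `domain`, `grid`, and the buffers, as well as `num_iterations`, are placeholders rather than actual terra API calls):

```cpp
// Allocate the buffers once, outside the hot loop.
terra::communication::shell::SubdomainNeighborhoodSendRecvBuffer<double> send_buffers = /* ... */;
terra::communication::shell::SubdomainNeighborhoodSendRecvBuffer<double> recv_buffers = /* ... */;

for (int iter = 0; iter < num_iterations; ++iter) {
    // ... e.g., a matrix-free operator application writing into `grid` ...

    // Reuses the buffers on every call instead of reallocating them.
    terra::communication::shell::send_recv(
        domain, grid, send_buffers, recv_buffers, CommunicationReduction::SUM);
}
```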

◆ send_recv_with_plan()

template<typename GridDataType >
void terra::communication::shell::send_recv_with_plan ( const ShellBoundaryCommPlan< GridDataType > &  plan,
const GridDataType &  data,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  recv_buffers,
CommunicationReduction  reduction = CommunicationReduction::SUM 
)

◆ unpack_and_reduce_local_subdomain_boundaries()

template<typename GridDataType >
void terra::communication::shell::unpack_and_reduce_local_subdomain_boundaries ( const grid::shell::DistributedDomain &  domain,
const GridDataType &  data,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  boundary_recv_buffers,
CommunicationReduction  reduction = CommunicationReduction::SUM 
)

Unpacks and reduces local subdomain boundaries.

The recv buffers must be the same instances as used during sending in pack_send_and_recv_local_subdomain_boundaries().

See pack_send_and_recv_local_subdomain_boundaries() for more details on how the communication works.

Parameters
domain    the DistributedDomain that this function operates on
data    the data (Kokkos::View) to be communicated
boundary_recv_buffers    SubdomainNeighborhoodSendRecvBuffer instance used for receiving data - must be the same instance previously populated by pack_send_and_recv_local_subdomain_boundaries()
reduction    the reduction mode

Variable Documentation

◆ MPI_TAG_BOUNDARY_DATA

constexpr int terra::communication::shell::MPI_TAG_BOUNDARY_DATA = 100