Namespaces | |
| namespace | detail |
| namespace | fv_detail |
Classes | |
| class | FVGhostLayerBuffers |
| Pre-allocated send/recv buffers and sorted face-pair lists for FV ghost layer comm. More... | |
| class | FVGhostLayerVecBuffers |
| Pre-allocated send/recv buffers for vector-valued FV ghost layer communication. More... | |
| class | Redistribute |
| Move a distributed grid vector from a fine DistributedDomain to a coarse one whose comm is a subset of the fine domain's comm. Used by agglomerated multigrid to collapse the active rank set at the descent into a coarse V-cycle level, and then to broadcast the coarse correction back to the fine rank set on the way up. More... | |
| class | ShellBoundaryCommPlan |
| class | SubdomainNeighborhoodSendRecvBuffer |
| Send and receive buffers for all process-local subdomain boundaries. More... | |
Functions | |
| template<typename GridDataType > | |
| void | pack_send_and_recv_local_subdomain_boundaries (const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_send_buffers, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_recv_buffers) |
| Packs, sends and recvs local subdomain boundaries using two sets of buffers. | |
| template<typename GridDataType > | |
| void | unpack_and_reduce_local_subdomain_boundaries (const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_recv_buffers, CommunicationReduction reduction=CommunicationReduction::SUM) |
| Unpacks and reduces local subdomain boundaries. | |
| template<typename ScalarType > | |
| void | send_recv (const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar< ScalarType > &grid, const CommunicationReduction reduction=CommunicationReduction::SUM) |
| Executes packing, sending, receiving, and unpacking operations for the shell. | |
| template<typename ScalarType > | |
| void | send_recv (const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar< ScalarType > &grid, SubdomainNeighborhoodSendRecvBuffer< ScalarType > &send_buffers, SubdomainNeighborhoodSendRecvBuffer< ScalarType > &recv_buffers, const CommunicationReduction reduction=CommunicationReduction::SUM) |
| Executes packing, sending, receiving, and unpacking operations for the shell. | |
| template<typename GridDataType > | |
| void | send_recv_with_plan (const ShellBoundaryCommPlan< GridDataType > &plan, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &recv_buffers, CommunicationReduction reduction=CommunicationReduction::SUM) |
| template<typename ScalarType > | |
| void | update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataScalar< ScalarType > &data, FVGhostLayerBuffers< ScalarType > &bufs) |
| Fills all ghost layers of a scalar FV field from neighbouring subdomains. | |
| template<typename ScalarType > | |
| void | update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataScalar< ScalarType > &data) |
| Convenience overload — allocates temporary buffers internally. | |
| template<typename ScalarType , int VecDim> | |
| void | update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataVec< ScalarType, VecDim > &data, FVGhostLayerVecBuffers< ScalarType, VecDim > &bufs) |
| Fills all ghost layers of a vector-valued FV field from neighbouring subdomains. | |
| template<typename ScalarType , int VecDim> | |
| void | update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataVec< ScalarType, VecDim > &data) |
| Convenience overload for vector fields — allocates temporary buffers internally. | |
Variables | |
| constexpr int | MPI_TAG_BOUNDARY_DATA = 100 |
| constexpr int | MPI_TAG_FV_GHOST_LAYERS = 200 |
| MPI tag for FV ghost layer messages (distinct from MPI_TAG_BOUNDARY_DATA = 100). | |
| constexpr int | MPI_TAG_FV_VEC_GHOST_LAYERS = 201 |
| MPI tag for vector-valued FV ghost layer messages. | |
| void terra::communication::shell::pack_send_and_recv_local_subdomain_boundaries | ( | const grid::shell::DistributedDomain & | domain, |
| const GridDataType & | data, | ||
| SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > & | boundary_send_buffers, | ||
| SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > & | boundary_recv_buffers | ||
| ) |
Packs, sends and recvs local subdomain boundaries using two sets of buffers.
Communication works like this:
If the sending and receiving subdomains reside on the same process, the data is packed directly into the recv buffers; it is not yet written directly from subdomain A to subdomain B. This and further optimizations are possible.
Call unpack_and_reduce_local_subdomain_boundaries() to complete the communication. This function waits until all recv buffers are filled, but does not unpack them.
Performs "additive" communication: nodes at the subdomain interfaces overlap and are reduced using the selected reduction mode during the receiving phase. This is typically required for matrix-free matrix-vector multiplications in a finite element context: nodes shared by elements of two neighbouring subdomains receive contributions from both subdomains that must be added. In that case, the required reduction mode is CommunicationReduction::SUM.
The send buffers are only required until this function returns. The recv buffers must be passed to the corresponding unpacking function unpack_and_reduce_local_subdomain_boundaries().
| domain | the DistributedDomain that this works on |
| data | the data (Kokkos::View) to be communicated |
| boundary_send_buffers | SubdomainNeighborhoodSendRecvBuffer instance that serves for sending data - can be reused after this function returns |
| boundary_recv_buffers | SubdomainNeighborhoodSendRecvBuffer instance that serves for receiving data - must be passed to unpack_and_reduce_local_subdomain_boundaries() |
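The two-phase pattern described above might look like the following sketch. The buffer constructor arguments, the alias names, and the vector dimension of 1 for scalar data are assumptions for illustration, not taken from the actual terra API:

```cpp
// Hypothetical usage sketch of the two-phase boundary exchange.
using Data    = grid::Grid4DDataScalar<double>;
using Buffers = communication::shell::SubdomainNeighborhoodSendRecvBuffer<double, 1>;

void exchange_boundaries(const grid::shell::DistributedDomain& domain, Data& data)
{
    // Typically allocated once for a domain and reused across iterations.
    Buffers send_buffers(/* domain-dependent setup, assumed */);
    Buffers recv_buffers(/* domain-dependent setup, assumed */);

    // Phase 1: pack boundary data, send/recv, and wait until all
    // recv buffers are filled (no unpacking yet).
    communication::shell::pack_send_and_recv_local_subdomain_boundaries(
        domain, data, send_buffers, recv_buffers);

    // send_buffers may be reused for other purposes from here on.

    // Phase 2: unpack and sum the overlapping interface contributions,
    // as needed for matrix-free FEM operators.
    communication::shell::unpack_and_reduce_local_subdomain_boundaries(
        domain, data, recv_buffers, CommunicationReduction::SUM);
}
```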
| void terra::communication::shell::send_recv | ( | const grid::shell::DistributedDomain & | domain, |
| grid::Grid4DDataScalar< ScalarType > & | grid, | ||
| const CommunicationReduction | reduction = CommunicationReduction::SUM |
| ) |
Executes packing, sending, receiving, and unpacking operations for the shell.
Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().
| void terra::communication::shell::send_recv | ( | const grid::shell::DistributedDomain & | domain, |
| grid::Grid4DDataScalar< ScalarType > & | grid, | ||
| SubdomainNeighborhoodSendRecvBuffer< ScalarType > & | send_buffers, | ||
| SubdomainNeighborhoodSendRecvBuffer< ScalarType > & | recv_buffers, | ||
| const CommunicationReduction | reduction = CommunicationReduction::SUM |
| ) |
Executes packing, sending, receiving, and unpacking operations for the shell.
Send and receive buffers must be passed. This is the preferred way to execute communication since the buffers can be reused.
Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().
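Reusing one pair of buffers across many exchanges, e.g. inside an iterative solver loop, might look like this sketch; the buffer constructor arguments and the surrounding loop are assumptions:

```cpp
// Hypothetical sketch: allocate the buffers once, reuse them every iteration.
communication::shell::SubdomainNeighborhoodSendRecvBuffer<double> send_buffers(
    /* domain-dependent setup, assumed */);
communication::shell::SubdomainNeighborhoodSendRecvBuffer<double> recv_buffers(
    /* domain-dependent setup, assumed */);

for (int iter = 0; iter < max_iters; ++iter) {
    // ... local work on `grid` (smoothing, matvec, ...) ...

    // Complete additive halo exchange without any per-iteration allocation.
    communication::shell::send_recv(domain, grid, send_buffers, recv_buffers,
                                    CommunicationReduction::SUM);
}
```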
| void terra::communication::shell::send_recv_with_plan | ( | const ShellBoundaryCommPlan< GridDataType > & | plan, |
| const GridDataType & | data, | ||
| SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > & | recv_buffers, | ||
| CommunicationReduction | reduction = CommunicationReduction::SUM |
| ) |
| void terra::communication::shell::unpack_and_reduce_local_subdomain_boundaries | ( | const grid::shell::DistributedDomain & | domain, |
| const GridDataType & | data, | ||
| SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > & | boundary_recv_buffers, | ||
| CommunicationReduction | reduction = CommunicationReduction::SUM |
| ) |
Unpacks and reduces local subdomain boundaries.
The recv buffers must be the same instances as used during sending in pack_send_and_recv_local_subdomain_boundaries().
See pack_send_and_recv_local_subdomain_boundaries() for more details on how the communication works.
| domain | the DistributedDomain that this works on |
| data | the data (Kokkos::View) to be communicated |
| boundary_recv_buffers | SubdomainNeighborhoodSendRecvBuffer instance that serves for receiving data - must be the same that was previously populated by pack_send_and_recv_local_subdomain_boundaries() |
| reduction | reduction mode |
| void terra::communication::shell::update_fv_ghost_layers | ( | const grid::shell::DistributedDomain & | domain, |
| const grid::Grid4DDataScalar< ScalarType > & | data | ||
| ) |
Convenience overload — allocates temporary buffers internally.
| void terra::communication::shell::update_fv_ghost_layers | ( | const grid::shell::DistributedDomain & | domain, |
| const grid::Grid4DDataScalar< ScalarType > & | data, | ||
| FVGhostLayerBuffers< ScalarType > & | bufs | ||
| ) |
Fills all ghost layers of a scalar FV field from neighbouring subdomains.
Pre-allocated buffers and sorted face-pair lists from bufs are used; no allocation occurs.
| domain | The distributed domain. |
| data | FV scalar field — ghost cells are written in-place. |
| bufs | Pre-allocated buffers constructed for this domain (constructed once, reused). |
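Typical use constructs the buffers once per domain and reuses them every time step. The following sketch assumes the FVGhostLayerBuffers constructor takes domain-dependent arguments and that `temperature` is a scalar FV field:

```cpp
// Hypothetical sketch: allocation-free ghost layer updates in a time loop.
communication::shell::FVGhostLayerBuffers<double> bufs(
    /* built from domain, constructor arguments assumed */);

for (int step = 0; step < n_steps; ++step) {
    // ... update owned FV cells of `temperature` ...

    // Fill all ghost layers from neighbouring subdomains; no allocation occurs.
    communication::shell::update_fv_ghost_layers(domain, temperature, bufs);
}
```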
| void terra::communication::shell::update_fv_ghost_layers | ( | const grid::shell::DistributedDomain & | domain, |
| const grid::Grid4DDataVec< ScalarType, VecDim > & | data | ||
| ) |
Convenience overload for vector fields — allocates temporary buffers internally.
| void terra::communication::shell::update_fv_ghost_layers | ( | const grid::shell::DistributedDomain & | domain, |
| const grid::Grid4DDataVec< ScalarType, VecDim > & | data, | ||
| FVGhostLayerVecBuffers< ScalarType, VecDim > & | bufs | ||
| ) |
Fills all ghost layers of a vector-valued FV field from neighbouring subdomains.
| domain | The distributed domain. |
| data | FV vector field — ghost cells are written in-place. |
| bufs | Pre-allocated buffers (constructed once, reused). |
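For a vector-valued field the buffer type carries the vector dimension as a template parameter. In this sketch, VecDim = 3 (e.g. a velocity field) is an illustrative choice and the buffer construction is assumed:

```cpp
// Hypothetical sketch: ghost layer update for a 3-component FV field.
grid::Grid4DDataVec<double, 3> velocity = /* ... */;
communication::shell::FVGhostLayerVecBuffers<double, 3> bufs(
    /* built from domain, constructor arguments assumed */);

// Ghost cells of `velocity` are written in-place from the neighbours.
communication::shell::update_fv_ghost_layers(domain, velocity, bufs);
```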
| constexpr int terra::communication::shell::MPI_TAG_BOUNDARY_DATA = 100 |
constexpr
| constexpr int terra::communication::shell::MPI_TAG_FV_GHOST_LAYERS = 200 |
constexpr
MPI tag for FV ghost layer messages (distinct from MPI_TAG_BOUNDARY_DATA = 100).
| constexpr int terra::communication::shell::MPI_TAG_FV_VEC_GHOST_LAYERS = 201 |
constexpr
MPI tag for vector-valued FV ghost layer messages.