terra::communication::shell Namespace Reference

Namespaces

namespace  detail
 
namespace  fv_detail
 

Classes

class  FVGhostLayerBuffers
 Pre-allocated send/recv buffers and sorted face-pair lists for FV ghost layer comm. More...
 
class  FVGhostLayerVecBuffers
 Pre-allocated send/recv buffers for vector-valued FV ghost layer communication. More...
 
class  Redistribute
 Move a distributed grid vector from a fine DistributedDomain to a coarse one whose comm is a subset of the fine domain's comm. Used by agglomerated multigrid to collapse the active rank set at the descent into a coarse V-cycle level, and then to broadcast the coarse correction back to the fine rank set on the way up. More...
 
class  ShellBoundaryCommPlan
 
class  SubdomainNeighborhoodSendRecvBuffer
 Send and receive buffers for all process-local subdomain boundaries. More...
 

Functions

template<typename GridDataType >
void pack_send_and_recv_local_subdomain_boundaries (const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_send_buffers, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_recv_buffers)
 Packs, sends, and receives local subdomain boundaries using two sets of buffers.
 
template<typename GridDataType >
void unpack_and_reduce_local_subdomain_boundaries (const grid::shell::DistributedDomain &domain, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &boundary_recv_buffers, CommunicationReduction reduction=CommunicationReduction::SUM)
 Unpacks and reduces local subdomain boundaries.
 
template<typename ScalarType >
void send_recv (const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar< ScalarType > &grid, const CommunicationReduction reduction=CommunicationReduction::SUM)
 Executes packing, sending, receiving, and unpacking operations for the shell.
 
template<typename ScalarType >
void send_recv (const grid::shell::DistributedDomain &domain, grid::Grid4DDataScalar< ScalarType > &grid, SubdomainNeighborhoodSendRecvBuffer< ScalarType > &send_buffers, SubdomainNeighborhoodSendRecvBuffer< ScalarType > &recv_buffers, const CommunicationReduction reduction=CommunicationReduction::SUM)
 Executes packing, sending, receiving, and unpacking operations for the shell.
 
template<typename GridDataType >
void send_recv_with_plan (const ShellBoundaryCommPlan< GridDataType > &plan, const GridDataType &data, SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &recv_buffers, CommunicationReduction reduction=CommunicationReduction::SUM)
 
template<typename ScalarType >
void update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataScalar< ScalarType > &data, FVGhostLayerBuffers< ScalarType > &bufs)
 Fills all ghost layers of a scalar FV field from neighbouring subdomains.
 
template<typename ScalarType >
void update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataScalar< ScalarType > &data)
 Convenience overload — allocates temporary buffers internally.
 
template<typename ScalarType , int VecDim>
void update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataVec< ScalarType, VecDim > &data, FVGhostLayerVecBuffers< ScalarType, VecDim > &bufs)
 Fills all ghost layers of a vector-valued FV field from neighbouring subdomains.
 
template<typename ScalarType , int VecDim>
void update_fv_ghost_layers (const grid::shell::DistributedDomain &domain, const grid::Grid4DDataVec< ScalarType, VecDim > &data)
 Convenience overload for vector fields — allocates temporary buffers internally.
 

Variables

constexpr int MPI_TAG_BOUNDARY_DATA = 100
 
constexpr int MPI_TAG_FV_GHOST_LAYERS = 200
 MPI tag for FV ghost layer messages (distinct from MPI_TAG_BOUNDARY_DATA = 100).
 
constexpr int MPI_TAG_FV_VEC_GHOST_LAYERS = 201
 MPI tag for vector-valued FV ghost layer messages.
 

Function Documentation

◆ pack_send_and_recv_local_subdomain_boundaries()

template<typename GridDataType >
void terra::communication::shell::pack_send_and_recv_local_subdomain_boundaries ( const grid::shell::DistributedDomain &  domain,
const GridDataType &  data,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  boundary_send_buffers,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  boundary_recv_buffers 
)

Packs, sends, and receives local subdomain boundaries using two sets of buffers.

Communication works like this:

  • data is packed from the boundaries of the grid data structure into send buffers
  • the send buffers are sent via MPI
  • the data is received in receive buffers
  • the receive buffers are unpacked into the grid data structure (and the data is potentially rotated if necessary)

If the sending and receiving subdomains are on the same process, the data is packed directly into the recv buffers instead of being sent via MPI. It is, however, not yet written directly from subdomain A to subdomain B; this and further optimizations are possible.

Note
Must be complemented with unpack_and_reduce_local_subdomain_boundaries() to complete communication. This function waits until all recv buffers are filled - but does not unpack.

Performs "additive" communication. Nodes at the subdomain interfaces overlap and will be reduced using some reduction mode during the receiving phase. This is typically required for matrix-free matrix-vector multiplications in a finite element context: nodes that are shared by elements of two neighboring subdomains receive contributions from both subdomains that need to be added. In this case, the required reduction mode is CommunicationReduction::SUM.

The send buffers are only required until this function returns. The recv buffers must be passed to the corresponding unpacking function unpack_and_reduce_local_subdomain_boundaries().

Parameters
domain  the DistributedDomain that this works on
data  the data (Kokkos::View) to be communicated
boundary_send_buffers  SubdomainNeighborhoodSendRecvBuffer instance that serves for sending data; can be reused after this function returns
boundary_recv_buffers  SubdomainNeighborhoodSendRecvBuffer instance that serves for receiving data; must be passed to unpack_and_reduce_local_subdomain_boundaries()
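Putting the two phases together, a typical call pattern might look like the sketch below. The buffer template arguments for a scalar field (vec dim 1), the buffer constructor taking the domain, and the qualification of CommunicationReduction are assumptions; consult the class documentation for the actual construction interface.

```cpp
// Sketch: two-phase additive boundary communication for a scalar field.
// Per the note above, phase 1 packs, posts sends/recvs, and waits until
// all recv buffers are filled; phase 2 unpacks and sums overlapping
// interface contributions. Splitting the phases lets unpacking be
// deferred past other work, and frees the send buffers early.
using Scalar = double;
// Assumption: scalar fields use vec dim 1 in the buffer template.
using Buffers =
    terra::communication::shell::SubdomainNeighborhoodSendRecvBuffer<Scalar, 1>;

void exchange_boundaries(const terra::grid::shell::DistributedDomain &domain,
                         terra::grid::Grid4DDataScalar<Scalar> &u,
                         Buffers &send_buffers, Buffers &recv_buffers) {
  // Phase 1: pack, send, receive (blocks until recv buffers are filled).
  terra::communication::shell::pack_send_and_recv_local_subdomain_boundaries(
      domain, u, send_buffers, recv_buffers);

  // send_buffers may be reused from here on; other work could go here.

  // Phase 2: unpack recv buffers and reduce shared interface nodes.
  terra::communication::shell::unpack_and_reduce_local_subdomain_boundaries(
      domain, u, recv_buffers,
      terra::communication::CommunicationReduction::SUM);
}
```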

◆ send_recv() [1/2]

template<typename ScalarType >
void terra::communication::shell::send_recv ( const grid::shell::DistributedDomain &  domain,
grid::Grid4DDataScalar< ScalarType > &  grid,
const CommunicationReduction  reduction = CommunicationReduction::SUM 
)

Executes packing, sending, receiving, and unpacking operations for the shell.

Note
THIS MAY COME WITH A PERFORMANCE PENALTY. This function (re-)allocates send and receive buffers on each call, which can be inefficient. Use it only where performance does not matter (e.g., in tests); otherwise, reuse buffers across send-recv calls via the buffer-taking overload of this function.

Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().

◆ send_recv() [2/2]

template<typename ScalarType >
void terra::communication::shell::send_recv ( const grid::shell::DistributedDomain &  domain,
grid::Grid4DDataScalar< ScalarType > &  grid,
SubdomainNeighborhoodSendRecvBuffer< ScalarType > &  send_buffers,
SubdomainNeighborhoodSendRecvBuffer< ScalarType > &  recv_buffers,
const CommunicationReduction  reduction = CommunicationReduction::SUM 
)

Executes packing, sending, receiving, and unpacking operations for the shell.

Send and receive buffers must be passed. This is the preferred way to execute communication since the buffers can be reused.

Essentially just calls pack_send_and_recv_local_subdomain_boundaries() and unpack_and_reduce_local_subdomain_boundaries().
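A buffer-reusing call pattern, e.g. inside an iteration loop, might look like the following sketch. The buffer constructor arguments and the apply_local_operator step are illustrative assumptions, not part of the documented API.

```cpp
// Sketch: preferred buffer-reusing communication in an iterative loop.
using Scalar = double;
using Buffers =
    terra::communication::shell::SubdomainNeighborhoodSendRecvBuffer<Scalar>;

// Construct the buffers once for the given domain (constructor arguments
// are an assumption; see the class documentation), then reuse them for
// every iteration instead of reallocating per call.
Buffers send_buffers(domain);
Buffers recv_buffers(domain);

for (int iter = 0; iter < num_iters; ++iter) {
  apply_local_operator(domain, x, y);  // hypothetical local compute step
  // Pack, send, receive, unpack, and sum interface contributions in one call.
  terra::communication::shell::send_recv(
      domain, y, send_buffers, recv_buffers,
      terra::communication::CommunicationReduction::SUM);
}
```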

◆ send_recv_with_plan()

template<typename GridDataType >
void terra::communication::shell::send_recv_with_plan ( const ShellBoundaryCommPlan< GridDataType > &  plan,
const GridDataType &  data,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  recv_buffers,
CommunicationReduction  reduction = CommunicationReduction::SUM 
)

◆ unpack_and_reduce_local_subdomain_boundaries()

template<typename GridDataType >
void terra::communication::shell::unpack_and_reduce_local_subdomain_boundaries ( const grid::shell::DistributedDomain &  domain,
const GridDataType &  data,
SubdomainNeighborhoodSendRecvBuffer< typename GridDataType::value_type, grid::grid_data_vec_dim< GridDataType >() > &  boundary_recv_buffers,
CommunicationReduction  reduction = CommunicationReduction::SUM 
)

Unpacks and reduces local subdomain boundaries.

The recv buffers must be the same instances as used during sending in pack_send_and_recv_local_subdomain_boundaries().

See pack_send_and_recv_local_subdomain_boundaries() for more details on how the communication works.

Parameters
domain  the DistributedDomain that this works on
data  the data (Kokkos::View) to be communicated
boundary_recv_buffers  SubdomainNeighborhoodSendRecvBuffer instance that serves for receiving data; must be the same that was previously populated by pack_send_and_recv_local_subdomain_boundaries()
reduction  reduction mode

◆ update_fv_ghost_layers() [1/4]

template<typename ScalarType >
void terra::communication::shell::update_fv_ghost_layers ( const grid::shell::DistributedDomain &  domain,
const grid::Grid4DDataScalar< ScalarType > &  data 
)

Convenience overload — allocates temporary buffers internally.

Note
THIS ALLOCATES ON EVERY CALL. Use the overload with FVGhostLayerBuffers for any code that runs repeatedly (iterative solvers, time-stepping loops).

◆ update_fv_ghost_layers() [2/4]

template<typename ScalarType >
void terra::communication::shell::update_fv_ghost_layers ( const grid::shell::DistributedDomain &  domain,
const grid::Grid4DDataScalar< ScalarType > &  data,
FVGhostLayerBuffers< ScalarType > &  bufs 
)

Fills all ghost layers of a scalar FV field from neighbouring subdomains.

Pre-allocated buffers and sorted face-pair lists from bufs are used; no allocation occurs.

Parameters
domain  The distributed domain.
data  FV scalar field — ghost cells are written in-place.
bufs  Pre-allocated buffers for this domain (constructed once, reused).
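In a time-stepping loop, the buffered overload would typically be used as in the sketch below. The FVGhostLayerBuffers constructor taking the domain and the advance_fv_step helper are assumptions for illustration only.

```cpp
// Sketch: allocation-free ghost-layer refresh on the hot path.
using Scalar = double;

// Build the ghost-layer buffers once per domain (constructor arguments
// are an assumption; see the class documentation).
terra::communication::shell::FVGhostLayerBuffers<Scalar> bufs(domain);

for (int step = 0; step < num_steps; ++step) {
  // Refresh ghost cells from neighbouring subdomains; no allocation occurs.
  terra::communication::shell::update_fv_ghost_layers(domain, temperature, bufs);
  advance_fv_step(domain, temperature, dt);  // hypothetical FV update
}
```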

◆ update_fv_ghost_layers() [3/4]

template<typename ScalarType , int VecDim>
void terra::communication::shell::update_fv_ghost_layers ( const grid::shell::DistributedDomain &  domain,
const grid::Grid4DDataVec< ScalarType, VecDim > &  data 
)

Convenience overload for vector fields — allocates temporary buffers internally.

Note
THIS ALLOCATES ON EVERY CALL. Prefer the overload with FVGhostLayerVecBuffers for any code that runs repeatedly.

◆ update_fv_ghost_layers() [4/4]

template<typename ScalarType , int VecDim>
void terra::communication::shell::update_fv_ghost_layers ( const grid::shell::DistributedDomain &  domain,
const grid::Grid4DDataVec< ScalarType, VecDim > &  data,
FVGhostLayerVecBuffers< ScalarType, VecDim > &  bufs 
)

Fills all ghost layers of a vector-valued FV field from neighbouring subdomains.

Parameters
domain  The distributed domain.
data  FV vector field — ghost cells are written in-place.
bufs  Pre-allocated buffers (constructed once, reused).

Variable Documentation

◆ MPI_TAG_BOUNDARY_DATA

constexpr int terra::communication::shell::MPI_TAG_BOUNDARY_DATA = 100

◆ MPI_TAG_FV_GHOST_LAYERS

constexpr int terra::communication::shell::MPI_TAG_FV_GHOST_LAYERS = 200

MPI tag for FV ghost layer messages (distinct from MPI_TAG_BOUNDARY_DATA = 100).

◆ MPI_TAG_FV_VEC_GHOST_LAYERS

constexpr int terra::communication::shell::MPI_TAG_FV_VEC_GHOST_LAYERS = 201

MPI tag for vector-valued FV ghost layer messages.