terra::communication::shell::Redistribute< GridDataType > Class Template Reference

Move a distributed grid vector from a fine DistributedDomain to a coarse one whose comm is a subset of the fine domain's comm. Used by agglomerated multigrid to collapse the active rank set when descending to a coarse V-cycle level, and then to broadcast the coarse correction back to the fine rank set on the way up. More...

#include <redistribute.hpp>

Public Types

using ScalarType = typename GridDataType::value_type
 
using memory_space = typename GridDataType::memory_space
 
using buffer_view = Kokkos::View< ScalarType *, memory_space >
 
using host_buffer_view = Kokkos::View< ScalarType *, Kokkos::HostSpace >
 

Public Member Functions

 Redistribute (const grid::shell::DistributedDomain &domain_fine, const grid::shell::DistributedDomain &domain_coarse, const grid::shell::SubdomainToRankDistributionFunction &subdomain_to_rank_fine, const grid::shell::SubdomainToRankDistributionFunction &subdomain_to_rank_coarse)
 Build a redistribute plan between two distributed domains.
 
bool is_identity () const
 True when the fine and coarse domains have the same comm AND every subdomain has the same owner on both sides. In that case there is nothing to do — the caller can route restriction output directly to the coarse-side buffer and skip calling apply/apply_transpose entirely.
 
void apply (const GridDataType &src_fine, GridDataType &dst_coarse)
 Move data from fine-owned subdomains to coarse-owned subdomains. Collective on the fine comm; every rank in it must call this.
 
void apply_transpose (const GridDataType &src_coarse, GridDataType &dst_fine)
 Move data back from coarse-owned subdomains to fine-owned subdomains. Collective on the fine comm. Used on the way up in a V-cycle after the coarse correction has been computed on the reduced rank set.
 
void pack_ (const GridDataType &src, const buffer_view &buf, const std::vector< Message > &messages, bool use_fine_index) const
 
void unpack_ (GridDataType &dst, const buffer_view &buf, const std::vector< Message > &messages, bool use_fine_index) const
 

Static Public Attributes

static constexpr int VecDim = grid::grid_data_vec_dim< GridDataType >()
 

Detailed Description

template<class GridDataType>
class terra::communication::shell::Redistribute< GridDataType >

Move a distributed grid vector from a fine DistributedDomain to a coarse one whose comm is a subset of the fine domain's comm. Used by agglomerated multigrid to collapse the active rank set when descending to a coarse V-cycle level, and then to broadcast the coarse correction back to the fine rank set on the way up.

Assumptions:

  • Both domains describe the same mesh (same num_global_subdomains, same per-subdomain node layout). Only the owner->rank mapping differs.
  • The coarse domain's comm is a subset of the fine domain's comm. Ranks that are not in the coarse comm get MPI_COMM_NULL for domain_coarse.comm() and own zero subdomains on the coarse side.
  • Data layout: grid data is a 4D/5D Kokkos view indexed by (local_sdr, i, j, k[, c]), with a fixed block size per subdomain. We use the fine domain's layout to determine that block size (same on both sides by the same-mesh assumption).

The class is stateful: it precomputes send/recv counts and displacements once at construction, then reuses them across solves. apply() and apply_transpose() are the hot paths; they pack/Alltoallv/unpack.

Member Typedef Documentation

◆ buffer_view

template<class GridDataType >
using terra::communication::shell::Redistribute< GridDataType >::buffer_view = Kokkos::View< ScalarType*, memory_space >

◆ host_buffer_view

template<class GridDataType >
using terra::communication::shell::Redistribute< GridDataType >::host_buffer_view = Kokkos::View< ScalarType*, Kokkos::HostSpace >

◆ memory_space

template<class GridDataType >
using terra::communication::shell::Redistribute< GridDataType >::memory_space = typename GridDataType::memory_space

◆ ScalarType

template<class GridDataType >
using terra::communication::shell::Redistribute< GridDataType >::ScalarType = typename GridDataType::value_type

Constructor & Destructor Documentation

◆ Redistribute()

template<class GridDataType >
terra::communication::shell::Redistribute< GridDataType >::Redistribute ( const grid::shell::DistributedDomain &  domain_fine,
const grid::shell::DistributedDomain &  domain_coarse,
const grid::shell::SubdomainToRankDistributionFunction &  subdomain_to_rank_fine,
const grid::shell::SubdomainToRankDistributionFunction &  subdomain_to_rank_coarse 
)
inline

Build a redistribute plan between two distributed domains.

Parameters
domain_fine	Source side; holds data before apply() and after apply_transpose().
domain_coarse	Destination side; holds data after apply() and before apply_transpose().
subdomain_to_rank_fine	Fine-side owner function (sub-comm-local ranks on the fine comm).
subdomain_to_rank_coarse	Coarse-side owner function (sub-comm-local ranks on the coarse comm).

Member Function Documentation

◆ apply()

template<class GridDataType >
void terra::communication::shell::Redistribute< GridDataType >::apply ( const GridDataType &  src_fine,
GridDataType &  dst_coarse 
)
inline

Move data from fine-owned subdomains to coarse-owned subdomains. Collective on the fine comm; every rank in it must call this.

◆ apply_transpose()

template<class GridDataType >
void terra::communication::shell::Redistribute< GridDataType >::apply_transpose ( const GridDataType &  src_coarse,
GridDataType &  dst_fine 
)
inline

Move data back from coarse-owned subdomains to fine-owned subdomains. Collective on the fine comm. Used on the way up in a V-cycle after the coarse correction has been computed on the reduced rank set.

◆ is_identity()

template<class GridDataType >
bool terra::communication::shell::Redistribute< GridDataType >::is_identity ( ) const
inline

True when the fine and coarse domains have the same comm AND every subdomain has the same owner on both sides. In that case there is nothing to do — the caller can route restriction output directly to the coarse-side buffer and skip calling apply/apply_transpose entirely.

◆ pack_()

template<class GridDataType >
void terra::communication::shell::Redistribute< GridDataType >::pack_ ( const GridDataType &  src,
const buffer_view &  buf,
const std::vector< Message > &  messages,
bool  use_fine_index 
) const
inline

◆ unpack_()

template<class GridDataType >
void terra::communication::shell::Redistribute< GridDataType >::unpack_ ( GridDataType &  dst,
const buffer_view &  buf,
const std::vector< Message > &  messages,
bool  use_fine_index 
) const
inline

Member Data Documentation

◆ VecDim

template<class GridDataType >
constexpr int terra::communication::shell::Redistribute< GridDataType >::VecDim = grid::grid_data_vec_dim< GridDataType >()
static constexpr

The documentation for this class was generated from the following file:

  • redistribute.hpp