Move a distributed grid vector from a fine DistributedDomain to a coarse one whose comm is a subset of the fine domain's comm. Used by agglomerated multigrid to collapse the active rank set when descending into a coarse V-cycle level, and to broadcast the coarse correction back to the fine rank set on the way up.
|  | Redistribute (const grid::shell::DistributedDomain &domain_fine, const grid::shell::DistributedDomain &domain_coarse, const grid::shell::SubdomainToRankDistributionFunction &subdomain_to_rank_fine, const grid::shell::SubdomainToRankDistributionFunction &subdomain_to_rank_coarse) |
|  | Build a redistribute plan between two distributed domains. |
| bool | is_identity () const |
|  | True when the fine and coarse domains share the same comm and every subdomain has the same owner on both sides. In that case there is nothing to do: the caller can route restriction output directly into the coarse-side buffer and skip apply()/apply_transpose() entirely. |
| void | apply (const GridDataType &src_fine, GridDataType &dst_coarse) |
|  | Move data from fine-owned subdomains to coarse-owned subdomains. Collective on the fine comm; every rank in that comm must call it. |
| void | apply_transpose (const GridDataType &src_coarse, GridDataType &dst_fine) |
|  | Move data back from coarse-owned subdomains to fine-owned subdomains. Collective on the fine comm. Used on the way up in a V-cycle, after the coarse correction has been computed on the reduced rank set. |
| void | pack_ (const GridDataType &src, const buffer_view &buf, const std::vector< Message > &messages, bool use_fine_index) const |
| void | unpack_ (GridDataType &dst, const buffer_view &buf, const std::vector< Message > &messages, bool use_fine_index) const |
template<class GridDataType>
class terra::communication::shell::Redistribute< GridDataType >
Move a distributed grid vector from a fine DistributedDomain to a coarse one whose comm is a subset of the fine domain's comm. Used by agglomerated multigrid to collapse the active rank set when descending into a coarse V-cycle level, and to broadcast the coarse correction back to the fine rank set on the way up.
Assumptions:
- Both domains describe the same mesh (same num_global_subdomains, same per-subdomain node layout); only the owner-to-rank mapping differs.
- The coarse domain's comm is a subset of the fine domain's comm. Ranks that are not in the coarse comm get MPI_COMM_NULL from domain_coarse.comm() and own zero subdomains on the coarse side.
- Data layout: grid data is a 4D/5D Kokkos view indexed by (local_sdr, i, j, k[, c]), with a fixed block size per subdomain. The fine domain's layout determines that block size (identical on both sides by the same-mesh assumption).

The class is stateful: it precomputes send/recv counts and displacements once at construction, then reuses them across solves. apply() and apply_transpose() are the hot paths; each does pack, MPI_Alltoallv, unpack.