XDMF output that simultaneously serves for visualization with software like ParaView and as a simulation checkpoint. More...
#include <xdmf.hpp>
Public Types | |
| enum class | OutputTypeFloat : int { Float32 = 4 , Float64 = 8 } |
| Used to specify the output type when writing floating point data. More... | |
| enum class | OutputTypeInt : int { Int32 = 4 , Int64 = 8 } |
| Used to specify the output type when writing (signed) integer data. More... | |
Public Member Functions | |
| XDMFOutput (const std::string &directory_path, const grid::shell::DistributedDomain &distributed_domain, const grid::Grid3DDataVec< InputGridScalarType, 3 > &coords_shell_device, const grid::Grid2DDataScalar< InputGridScalarType > &coords_radii_device, const OutputTypeFloat output_type_points=OutputTypeFloat::Float32, const OutputTypeInt output_type_connectivity=OutputTypeInt::Int64) | |
| Constructs an XDMFOutput object. | |
| void | set_write_counter (int write_counter) |
| Set the write counter manually. | |
| template<typename InputScalarDataType > | |
| void | add (const grid::Grid4DDataScalar< InputScalarDataType > &data, const OutputTypeFloat output_type=OutputTypeFloat::Float32) |
| Adds a new scalar data grid to be written out. | |
| template<typename InputScalarDataType , int VecDim> | |
| void | add (const grid::Grid4DDataVec< InputScalarDataType, VecDim > &data, const OutputTypeFloat output_type=OutputTypeFloat::Float32) |
| Adds a new vector-valued data grid to be written out. | |
| void | write () |
| Writes a "time step". | |
XDMF output that simultaneously serves for visualization with software like ParaView and as a simulation checkpoint.
Writes simulation data (time series for a constant mesh) and metadata into a directory. The written data simultaneously serves for visualization with tools that can read XDMF files (e.g., ParaView) and can be used to restore the grid data, e.g., to continue a previous simulation (i.e., as a checkpoint). This does not involve data duplication: the data required for XDMF is used directly as the checkpoint.
Currently restricted to the spherical shell mesh data structure. Interprets data as block-structured wedge-element meshes.
The mesh data has to be added upon construction. None, one, or many scalar or vector-valued grids can be added afterward.
Each write() call then writes one .xmf file and one binary file per added data grid.
The first write() call additionally writes (once) the mesh data and the checkpoint metadata.
For time-dependent runs, call write() in, e.g., every time step. The mesh written in the first call is referenced from each .xmf file.
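The workflow above can be sketched as follows. This is a hypothetical usage sketch, not library code: the setup helpers and the variables `distributed_domain`, `coords_shell_device`, `coords_radii_device`, `temperature`, `velocity`, and `advance()` are placeholders; only `XDMFOutput`, `add()`, and `write()` are from the documented interface, so the sketch is not compilable on its own.

```cpp
#include <xdmf.hpp>

void run_simulation(/* ...domain and field setup omitted... */) {
    // Construction: the mesh data is fixed here; nothing is written yet.
    XDMFOutput output("/path/to/output_dir", distributed_domain,
                      coords_shell_device, coords_radii_device);

    // Register data grids once; each must have a unique Kokkos::View label.
    output.add(temperature);   // scalar grid
    output.add(velocity);      // vector-valued grid

    for (int step = 0; step < n_steps; ++step) {
        advance();       // placeholder for the actual time stepping
        output.write();  // one .xmf file + one binary file per registered grid
    }
}
```

The first iteration's write() also emits the mesh binaries and checkpoint metadata; subsequent calls only write the field data and a new .xmf step file.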
The data type actually written can be selected independently of the underlying data type of the allocated data, for the mesh points, the topology, and each data grid; defaults are chosen via default parameters. Concretely, you can write your double-precision fields in single precision without converting them manually. Note that the checkpoints then also have the precision selected here.
All added data grids must have different (Kokkos::View-)labels.
Uses MPI IO for fast parallel output.
To recover a checkpoint, use the function read_xdmf_checkpoint_grid() (or read_xdmf_checkpoint_metadata() to just inspect the structure of the checkpoint).
Note that the checkpoint can only be read using the same domain partitioning (i.e., the 'topology' of the DistributedDomain used when the checkpoint was written must be identical) - BUT the number of MPI processes does not need to match (nor does the distribution of subdomains to ranks need to match). So you can technically (if the amount of main memory permits) read a checkpoint from a large parallel simulation with only one or a few processes (possibly useful for post-processing).
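A hedged restart sketch tying this together with set_write_counter(). The function names read_xdmf_checkpoint_metadata() and read_xdmf_checkpoint_grid() are from this documentation, but their exact signatures are not shown here and are assumptions; all other names are placeholders, so this is an illustrative sketch, not compilable code.

```cpp
#include <xdmf.hpp>

void restart_from_checkpoint(/* ...domain and field setup omitted... */) {
    // Optionally inspect the checkpoint structure without loading field data
    // (assumed signature).
    auto metadata = read_xdmf_checkpoint_metadata("/path/to/output_dir");

    // Restore a grid (assumed signature). The DistributedDomain topology must
    // match the one used when the checkpoint was written; the number of MPI
    // ranks and the subdomain-to-rank distribution may differ.
    read_xdmf_checkpoint_grid("/path/to/output_dir", distributed_domain,
                              temperature);

    // Continue the file numbering after the last restored step so that new
    // .xmf files extend the existing time series instead of overwriting it.
    XDMFOutput output(/* ...same arguments as for a fresh run... */);
    output.add(temperature);
    output.set_write_counter(last_step + 1);
}
```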
The .xmf file for each write() call is written last (after the binary data). Thus, if the corresponding .xmf step has been written, the parallel binary data output should be completed.
All data is written to a single binary file per grid data item and per time step. Each subdomain is written contiguously; the same layout applies to the grid point coordinates.
The subdomain order depends on various factors and is essentially arbitrary. The concrete ordering (as well as the data type) is recorded in the checkpoint metadata (see below).
See terra::grid::shell::SubdomainInfo for how the global subdomain ID is encoded.
Constructs an XDMFOutput object.
All data will be written to the specified directory (it is a good idea to pass an empty directory).
Construction does not write any data yet.
| directory_path | Path to a directory that the data shall be written to. If the directory does not exist, it will be created during the first write() call. If it does already exist, data will be overwritten. |
| distributed_domain | DistributedDomain instance. |
| coords_shell_device | Lateral spherical shell grid coordinates (see subdomain_unit_sphere_single_shell_coords()). |
| coords_radii_device | Spherical shell radii (see subdomain_shell_radii()). |
| output_type_points | Floating point data type to use for mesh coordinate output. |
| output_type_connectivity | Integer data type to use for mesh connectivity output. |
Adds a new scalar data grid to be written out.
Does not write any data to file yet - call write() for writing the next time step.
Adds a new vector-valued data grid to be written out.
Does not write any data to file yet - call write() for writing the next time step.
Set the write counter manually.
This will only affect the step number attached to the file names. The geometry is still written once during the first write() call.
Writes a "time step".
Will write one .xmf file with the current counter as part of the name such that the files can be opened as a time series.
The first write() call after construction will also write the mesh data (binary files) that will be referenced from later .xmf files, as well as checkpoint metadata.
For each added data grid, one binary file is written. The data is copied to the host if required. The write() calls will allocate temporary storage on the host if host and device memory are not shared. Currently, for data grids, some host-side temporary buffers are kept after this method returns (the sizes depend on the type of data added) to avoid frequent reallocation.