programmer's documentation
The "Finite Volume Mesh" (FVM) library is intended to provide mesh and associated field I/O and manipulation services for unstructured finite-volume codes or other tools with similar requirements.
The FVM library is originally intended for unstructured cell-centered finite-volume codes using almost arbitrary polyhedral cells. It may thus handle polygonal faces (convex or not) and polyhedral cells as well as more classical elements such as tetrahedra, prisms, pyramids, and hexahedra, but can currently only handle linear elements. There are also currently no optimizations for structured or block-structured meshes, which are handled as unstructured meshes.
FVM is used to handle post-processing output in external formats (EnSight, CGNS, or MED are currently handled), possibly running in parallel using MPI. In the case of Code_Saturne, this implies reconstructing a nodal connectivity (cells -> vertices) from the faces -> cells connectivity (using an intermediate cells -> faces representation, passed to the FVM API). It is also possible to directly pass a nodal connectivity to FVM.
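As an illustration of the reconstruction mentioned above, the following sketch (hypothetical names, not the FVM API, and restricted to quadrangular faces; the library itself uses index arrays to handle arbitrary polygonal faces) derives the vertices of each cell from a faces -> cells and a faces -> vertices connectivity:

```c
#include <assert.h>

/* Hypothetical sketch (not the FVM API): count the distinct vertices of a
 * cell, given faces -> cells and faces -> vertices connectivities, here
 * with fixed-size quadrangular faces for simplicity. */

#define N_FACES 11

/* Two hexahedra sharing one face: vertices 0..7 and 4..11. */
static const int face_vtx[N_FACES][4] = {
  {0,1,2,3}, {0,1,5,4}, {1,2,6,5}, {2,3,7,6}, {3,0,4,7},
  {4,5,6,7},                       /* interior face shared by both cells */
  {8,9,10,11}, {4,5,9,8}, {5,6,10,9}, {6,7,11,10}, {7,4,8,11}
};

/* For each face, the two adjacent cells (-1 on the boundary). */
static const int face_cells[N_FACES][2] = {
  {0,-1}, {0,-1}, {0,-1}, {0,-1}, {0,-1},
  {0, 1},
  {1,-1}, {1,-1}, {1,-1}, {1,-1}, {1,-1}
};

/* Count the distinct vertices referenced by the faces of cell_id. */
static int cell_vtx_count(int cell_id)
{
  int seen[32] = {0};
  int count = 0;
  for (int f = 0; f < N_FACES; f++) {
    for (int side = 0; side < 2; side++) {
      if (face_cells[f][side] != cell_id)
        continue;
      for (int k = 0; k < 4; k++) {
        int v = face_vtx[f][k];
        if (!seen[v]) { seen[v] = 1; count++; }
      }
    }
  }
  return count;
}
```

Each hexahedron is recovered with its 8 vertices; a real implementation would of course also store the vertex lists, not just their counts.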
So as to limit memory usage and avoid unnecessary copies, the fvm_nodal_t structure associated with a mesh defined by its nodal connectivity is conceived as a "view" on a mesh, and owns as little data as possible. As such, most main structures associated with this representation are defined by two arrays, one private and one shared. For example, an fvm_nodal_t structure has two coordinate arrays:
const cs_coord_t  *vertex_coords;   /* shared array */
cs_coord_t        *_vertex_coords;  /* private array */
If the coordinates are shared with the calling code (and owned by that code), we have _vertex_coords = NULL, and vertex_coords points to the array passed by the calling code. If the coordinates array belongs to FVM (either having been "given" by the calling code, or generated by an operation which invalidates coordinate sharing with the parent mesh, such as mesh extrusion), we have vertex_coords = _vertex_coords, which points to the private array.
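The shared/private pointer-pair pattern described above can be sketched as follows (a minimal illustration with hypothetical names, not the FVM API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of the shared/private pointer-pair ownership pattern:
 * the shared pointer is always valid for reading; the private pointer
 * is non-NULL only when the structure owns the data. */

typedef double coord_t;

typedef struct {
  const coord_t *vertex_coords;   /* shared: always readable */
  coord_t       *_vertex_coords;  /* private: non-NULL only if owned */
  size_t         n;
} mesh_view_t;

/* "Shared" case: the coordinates stay owned by the caller. */
static void mesh_share_coords(mesh_view_t *m, const coord_t *coords, size_t n)
{
  m->_vertex_coords = NULL;      /* nothing owned */
  m->vertex_coords  = coords;    /* view on the caller's array */
  m->n = n;
}

/* "Owned" case: the structure keeps a private copy. */
static void mesh_copy_coords(mesh_view_t *m, const coord_t *coords, size_t n)
{
  m->_vertex_coords = malloc(n * sizeof(coord_t));
  memcpy(m->_vertex_coords, coords, n * sizeof(coord_t));
  m->vertex_coords = m->_vertex_coords;  /* both point to the private array */
  m->n = n;
}

/* Destruction frees private data only; shared data is left untouched. */
static void mesh_destroy(mesh_view_t *m)
{
  free(m->_vertex_coords);
  m->_vertex_coords = NULL;
  m->vertex_coords  = NULL;
}
```

Reading code only ever dereferences the shared pointer; the private pointer exists purely to record ownership for destruction.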
When an fvm_nodal_t object is destroyed, it destroys its private data and frees the corresponding memory, but obviously leaves its shared data untouched.
If an fvm_nodal_t structure B is built from a structure A with which it shares its data, and a second fvm_nodal_t mesh C is a view on B (for example a restriction to a part of the domain that we wish to post-process more frequently), C must be destroyed before B, which must be destroyed before A. FVM does not use reference counting or a similar mechanism, so proper management of object lifecycles is the responsibility of the calling code. In practice, this logic is simple enough that no issues have been encountered with this model so far in the intended uses of the code.
Another associated concept is that of "parent number": if a mesh constitutes a "view" on another, a list of parent entity numbers allows accessing variables associated with the "parent" mesh, without needing to extract or duplicate values at the level of the calling code.
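The indirection this provides can be sketched in a few lines (hypothetical helper, assuming 1-based parent numbers): a field defined on the parent mesh is read through the view's parent numbers, with no copy of the field.

```c
#include <assert.h>

/* Sketch of parent-number indirection: entity i of the view maps to
 * parent entity parent_num[i] (1-based here, by assumption), so parent
 * fields can be read in place. */

static double view_value(const double *parent_field,
                         const int    *parent_num,  /* 1-based */
                         int           i)
{
  return parent_field[parent_num[i] - 1];
}
```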
Note that an FVM structure is global from an MPI point of view, as it may participate in collective parallel operations. Thus, if an FVM mesh represents a subset of a global domain, it may very well be empty for some ranks, but it must still exist on all ranks of the main communicator associated with FVM.
For parallelism, another important concept is that of "global numbering", corresponding to entity numbers in a "serial" or "global" version of an object: two entities (vertices, elements, ...) on different processors having the same global number correspond to the same "absolute" entity. Except where explicitly mentioned, all other data defining an object is based on local definitions and numberings. Parallelism is thus made quite "transparent" to the calling code.
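A tiny example of the idea, with made-up data: each rank stores, for each local entity, its global number; entities on different ranks with the same global number are the same absolute entity, while all connectivity remains local.

```c
#include <assert.h>

/* Hypothetical global numbering for a vertex shared by two subdomains:
 * local index 2 on rank 0 and local index 0 on rank 1 both carry global
 * number 5, so they denote the same "absolute" vertex. */
static const long vtx_gnum_rank0[] = {1, 2, 5};  /* local indices 0..2, rank 0 */
static const long vtx_gnum_rank1[] = {5, 6, 7};  /* local indices 0..2, rank 1 */
```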
In parallel, post-processor output (using EnSight Gold, CGNS, or MED formats) uses the global numbering to serialize output, and generate a unique data set (independent of the number of processors used for the calculation). This operation is done by blocks so as to limit local memory consumption, possibly incurring a slight processing overhead. The aim is to never build a full global array on a single processor, so as to ensure scalability. This is fully implemented at least for the EnSight Gold format.
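The block distribution underlying this serialization can be sketched as follows (a hypothetical helper, not the FVM API): the global entity range is cut into contiguous, nearly equal blocks, one per rank, so no rank ever holds the full global array.

```c
#include <assert.h>

/* Sketch of a block distribution for serialized output: global entity
 * numbers [1..n_g] are divided into contiguous blocks, one per rank. */

typedef struct {
  long first;     /* first global number of the block */
  long past_end;  /* one past the last global number  */
} block_t;

static block_t block_range(long n_g, int n_ranks, int rank)
{
  long base = n_g / n_ranks;
  long rem  = n_g % n_ranks;   /* first rem ranks get one extra entity */
  block_t b;
  b.first    = 1 + rank * base + (rank < rem ? rank : rem);
  b.past_end = b.first + base + (rank < rem ? 1 : 0);
  return b;
}
```

Each rank then sends its locally owned values to the rank owning the matching block, and blocks are written in order.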
As an option, generic polygonal or polyhedral elements may be split into simpler elements (triangles, tetrahedra, and pyramids) for output, this being done on the fly (so as to avoid a complete memory copy). This allows working around possible lack of support for these complex elements in certain tools or formats.
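For the polygonal case, such a split can be as simple as a fan triangulation (sketched below for a convex polygon; handling non-convex faces requires a more careful decomposition):

```c
#include <assert.h>

/* Sketch of on-the-fly splitting of a convex polygonal face into
 * triangles (fan from vertex 0): an n-vertex polygon yields n - 2
 * triangles, without any copy of the full mesh. */

static int fan_triangulate(const int *poly_vtx, int n_vtx,
                           int tri_vtx[][3])
{
  int n_tri = 0;
  for (int i = 1; i + 1 < n_vtx; i++) {
    tri_vtx[n_tri][0] = poly_vtx[0];
    tri_vtx[n_tri][1] = poly_vtx[i];
    tri_vtx[n_tri][2] = poly_vtx[i + 1];
    n_tri++;
  }
  return n_tri;
}
```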