Source of Non-Matching Meshes
Classification of Mesh Coupling Methods
1. Primal: no additional unknowns (master-slave elimination, penalty).
2. Dual: additional unknowns (Lagrange multipliers adjoined, and perhaps a kinematic frame).
3. Primal-dual: begin as primal, end as dual.
Primal Methods for Displacement-Based FEM
Rely on direct interpolation to establish multifreedom constraints (MFCs). MFCs are then applied by one of three techniques:
- primal: master-slave elimination
- primal: penalty
- dual: Lagrange multipliers
1. Direct interpolation (collocation) followed by function adjunction.
2. Shape function least-square matching.
3. Transition elements.
We will cover only direct interpolation with master-slave elimination (DI+MS).
DI+MS Elimination: Structure as Master
2. Monolithically couples fluid & structure. Changes the data structures of the system matrices because the unknown vector is modified.
3. If the master is the finer mesh, prone to spurious modes. Think of the structure boundary motion pictured here: the structure moves but the fluid does not notice.
4. DI+Penalty Function: Difficulties
Generally fails the interface patch test (explained later). Strongly couples fluid & structure.
Less change in the system matrices, but weights must be picked and may need scaling. Again prone to spurious modes if the MFCs are insufficient to prevent them.
Dual Methods: bring additional unknowns
- Global Lagrange multipliers (variational form of mortar)
- Localized Lagrange multipliers
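For reference, a standard sketch (not from the original slides) of the three MFC application techniques named above. Suppose the MFCs are collected into a linear constraint C u = 0 on the discrete system K u = f, and the freedoms are partitioned into masters u_m and slaves u_s with u_s = T u_m:

  u = G u_m,  G = [I; T],  solve (Gᵀ K G) u_m = Gᵀ f          (master-slave elimination, primal)
  solve (K + w Cᵀ C) u = f for a chosen weight w              (penalty, primal)
  solve [K  Cᵀ; C  0] [u; λ] = [f; 0]                          (Lagrange multipliers, dual)

The penalty weight w is the quantity that "may need scaling"; the bordered Lagrange-multiplier system is what brings the additional unknowns λ.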
1. Types of contact algorithms under consideration

Contact algorithms can be classified according to the concept utilized for the description of motion of a continuous medium. In Lagrangian contact algorithms, the nodes move with the velocity of the material medium. In non-Lagrangian contact algorithms, the nodes either are fixed (Eulerian algorithms) or move independently of the material medium (Arbitrary Lagrangian-Eulerian (ALE) algorithms). A characteristic feature of non-Lagrangian algorithms is the occurrence of convective terms in the evolution equations, due to the difference between the velocities of the grid (coordinate system) and the medium. In both cases (Eulerian and Lagrangian), the moving interfaces can be treated explicitly by tracking algorithms, as a set of surface nodes (or markers) and cells, or be defined implicitly by capturing algorithms. Capturing algorithms are based on continuous marker functions that take on certain constant values at the moving boundaries; a minimal example is sketched below. This known classification underlies the systematization of the publications to be reviewed. In the present paper, we consider all types of contact algorithms, irrespective of the types of contacting media, in agreement with the modern tendency to unify the methods of solid mechanics and hydrodynamics. This tendency is driven by the need to construct unified computational models of technological and natural processes.
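As a minimal illustration of the capturing concept (our example, not taken from the reviewed publications): a level-set marker function is zero on the moving interface and has opposite signs in the two media, so the interface is captured implicitly rather than tracked by surface markers.

  import numpy as np

  def phi(x, y, xc=0.0, yc=0.0, r=1.0):
      # Signed-distance marker for a circular interface of radius r:
      # phi = 0 on the interface, phi < 0 inside, phi > 0 outside.
      return np.hypot(x - xc, y - yc) - r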
Algorithm 1 Local untangling of node N.
Begin the untangling (node N, decomposition mode)
  Identify Ball(N), the set of surrounding cells
  Decompose each cell in Ball(N) into tetrahedra using one decomposition pattern (8-tet or 24-tet)
  Apply linear optimization:
    Compute the set of linear constraints for the linear optimization problem
    Set up the linear optimization problem using the basic set of target functions
    Find the respective set of positions on the kernel boundary using the simplex method
    Compute the internal kernel position P_I by averaging the found boundary points
    If P_I is still on the kernel boundary:
      Reapply linear optimization using the extended set of target functions and recompute P_I
      If the recomputed P_I is still on the kernel boundary:
        Skip node N (no successful untangling is possible)
  Move node N to P_I
End.

The solution of a local untangling problem must be a new position of node N such that no concave cells remain in Ball(N). In order to set up the simplex problem properly, a series of linear constraints must be established and a target function must be defined. Each cell in Ball(N) provides a number of non-strict linear constraints (Figure 4.5). In this 2-D illustration, each quadrilateral can be subdivided into two triangles by each of the two diagonals. The four resulting triangles must have positive volumes in order to guarantee cell convexity. The condition of volume positiveness for a triangle can be expressed in terms of the coordinates (x_i, y_i), i = 0, 1, 2, of its vertices as follows:

  (x1 − x0)(y2 − y0) − (x2 − x0)(y1 − y0) > 0,    (4.6)

i.e., twice the signed area of the triangle must be positive.
This expression is linear with respect to any of the vertex coordinates. Node N is referenced by three of the four triangles constituting the quadrilateral; thus, each cell of Ball(N) contributes exactly three linear constraints. In 3-D, the number of constraints per cell depends on the pattern used to decompose the cell into tetrahedra. Each tetrahedron always contributes one constraint: for the 8-tet decomposition (Section 4.2.2) the number of constraints is eight, and for the 24-tet decomposition it is twenty-four.

Since we are interested only in finding any point within the kernel, the target function can be chosen almost arbitrarily. Its gradient must be non-zero; otherwise, no solution will be found. Unfortunately, solutions provided by the simplex method are always located on the kernel boundary, which is unacceptable: at least one cell in the ball then remains in an undetermined, or locked, state between concave and convex. If this situation emerges due to kernel degeneracy, it indicates that no solution exists. In all other cases, when the kernel is not degenerate, a point located strictly within the kernel can always be found. This is achieved by combining solutions resulting from different target functions, as described below.

For the sake of clarity, the combining procedure is explained in a 2-D framework. The following set of target functions is employed:

  f1 = x,  f2 = −x,  f3 = y,  f4 = −y.    (4.8)

Linear optimization of the functions of this set results in four opposite positions on the kernel boundary: with maximum x, minimum x, maximum y and minimum y. These positions are averaged to produce a new point. For a general kernel geometry, it is located strictly within the kernel (see Figure 4.9a). However, it is sometimes possible that the averaged position still lies on the kernel boundary; a few examples of kernel shapes that lead to such a breakdown are illustrated in Figure 4.10. In this case, an extended set of target functions is employed. It includes the four basic functions (4.8) and four additional ones, along the diagonal directions:

  f5 = x + y,  f6 = −(x + y),  f7 = x − y,  f8 = −(x − y).

It is easy to see that using the extended set of target functions guarantees that an internal kernel point will be found, provided that the kernel is not degenerate (Figure 4.9b). For the sake of efficiency, the averaging technique is applied adaptively: the basic set of target functions is used by default, and the extended set is employed only if averaging fails with the basic set. Thus the basic set is used for the majority of kernels, and the extended set only when necessary.

The above procedure propagates naturally to 3-D without any difficulties. A 3-D kernel represents a convex polyhedron; the simplex method and the averaging technique are applied in the same manner as in 2-D. An expression similar to (4.6) can be established for the condition of volume positiveness of a tetrahedron.
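A sketch of the averaging procedure in Python (our illustration, assuming SciPy's simplex-type LP solver; the kernel is encoded, from constraints like (4.6), as the set {p : A p ≤ b}):

  import numpy as np
  from scipy.optimize import linprog

  BASIC = [(-1, 0), (1, 0), (0, -1), (0, 1)]               # max x, min x, max y, min y
  EXTENDED = BASIC + [(-1, -1), (1, 1), (-1, 1), (1, -1)]  # plus the diagonal directions

  def kernel_interior_point(A, b, tol=1e-12):
      # Average LP solutions over several target functions to find a point
      # strictly inside the convex kernel {p : A p <= b}; None if degenerate.
      for objectives in (BASIC, EXTENDED):
          pts = []
          for c in objectives:
              res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
              if not res.success:          # infeasible kernel: no solution exists
                  return None
              pts.append(res.x)
          p = np.mean(pts, axis=0)
          if np.all(A @ p < b - tol):      # strictly interior: done
              return p
      return None                           # averaging failed: skip node N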
Along with the problem of global mesh untangling, the problem of global mesh optimization is too complicated to be solved using implicit methods. Instead, it is decomposed into consecutive optimizations of sub-meshes possessing only one internal node. Such a sub-mesh associated with an internal mesh node N is referred to as Ball(N); Figure 4.3 shows an example of such a ball. The problem of global mesh optimization is solved iteratively: each ball of cells in the mesh is successively optimized in every iteration, and iterations are repeated until specified exit conditions are satisfied.

Finally, the problem of local optimization can be formulated as follows. The kernel, or feasible set, of node N, as defined in Section 4.2.1, is a sub-region of Ball(N) such that all cells in Ball(N) are non-concave with respect to node N if and only if node N is positioned strictly within its kernel (Figure 4.5). The kernel is bounded by straight lines (planes in 3-D) and is always convex. Placing node N at any location inside its kernel guarantees convexity of Ball(N), provided that the kernel exists. However, the quality measure of the cells in Ball(N) naturally varies across Kernel(Ball(N)). Finding the position of node N at which the quality measure reaches its local maximum solves the problem of local optimization. The local optimization problem is solved for each internal non-hanging mesh node during a number of iterations, resulting in global mesh quality improvement.
Algorithm 2 Local optimization of node N.
Begin the optimization (node N, decomposition mode)
  Identify Ball(N), the set of surrounding cells, all of which are convex with respect to N
  Decompose each cell in Ball(N) into tetrahedra using one decomposition pattern (8-tet or 24-tet)
  Apply optimization:
    Compute the nodal functional value for the current position of N
    Find the direction of the functional gradient
    Solve the 1-D minimization problem
    Find the new optimized position P_OPT for node N
  Move node N to P_OPT
End.
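A sketch of the optimization step in Python (our illustration, assuming SciPy; `quality` is a hypothetical callback evaluating the nodal functional over Ball(N), whose exact form is defined separately in this chapter):

  import numpy as np
  from scipy.optimize import minimize_scalar

  def optimize_node_position(p, quality, h=1e-6, max_step=1.0):
      # One gradient-descent update of node N at position p (Algorithm 2).
      # Finite-difference gradient of the nodal functional at p:
      grad = np.array([(quality(p + h * e) - quality(p - h * e)) / (2.0 * h)
                       for e in np.eye(p.size)])
      d = -grad / (np.linalg.norm(grad) + 1e-30)   # steepest-descent direction
      # 1-D minimization along d; max_step should keep N inside its kernel.
      res = minimize_scalar(lambda t: quality(p + t * d),
                            bounds=(0.0, max_step), method="bounded")
      return p + res.x * d                          # the optimized position P_OPT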
4.4 Combined approach

The two approaches described in the previous sections, unstructured hexahedral mesh untangling and optimization, generally do not show their best performance when applied as stand-alone tools. The untangling method is capable of eliminating the majority of concave cells; however, untangling some nodes requires the nodal quality measures of surrounding nodes to first be optimized. The optimization procedure, in turn, is limited by the fact that it can only optimize non-tangled nodes, i.e. a mesh must be free of concave cells in order to be subject to full optimization. Therefore, a methodology combining these techniques into one global quality improvement procedure, sketched below, is proposed herein (see Algorithm 3).
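A minimal sketch of such a combined sweep (our reading in Python form; `untangle_node` and `optimize_node` stand for Algorithms 1 and 2, and the mesh queries are assumed interfaces):

  def improve_quality(mesh, max_sweeps=10):
      # Alternate untangling and optimization over all internal nodes
      # until no node moves (one possible reading of Algorithm 3).
      for _ in range(max_sweeps):
          moved = False
          for node in mesh.internal_nodes():
              if mesh.has_concave_cells(node):        # Ball(N) tangled?
                  moved |= untangle_node(mesh, node)  # Algorithm 1
              else:
                  moved |= optimize_node(mesh, node)  # Algorithm 2
          if not moved:                               # exit condition satisfied
              break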
Another reason why this approach is chosen is that it allows for automated generation of boundary-conforming, fully hexahedral meshes. Grids of this type still lack popularity due to the inherent difficulties of meshing highly complex geometries. However, they potentially offer higher solution accuracy than tetrahedral or hybrid prismatic-tetrahedral meshes when classical numerical methods are used. Besides, hexahedral elements are the best choice for resolving highly sheared flows such as boundary layers. The presented approach has been implemented in the unstructured mesh generation software package HEXPRESS™, developed by NUMECA International s.a. [35, 36, 119]. The approach consists of five main steps.
First, a geometrical model of the domain to be meshed (Figure 3.1a) is converted into a native format which contains both topological and geometrical information about the domain. Following that, a Cartesian background mesh that encompasses the entire model is generated (Figure 3.1b). It is further refined non-conformally via an octree technique until cells sufficiently small to capture the relevant flow details populate the domain. At this point the mesh is not yet boundary-conforming, but the distribution of cell sizes already satisfies the requirements of the problem.

At the next step, all cells of the octree-refined mesh that fall outside the domain or intersect its boundary are removed (Figure 3.1c). The surface of the remaining staircase mesh is projected onto the domain boundaries; all topological items such as ridges and corners are captured, and regular cell layers are inserted. This step is followed by mesh quality improvement (optimization) procedures; Figure 3.1e shows a mesh cross-section revealing optimized cell shapes inside a volume mesh. Optionally, layers of highly stretched cells can be inserted along solid boundaries if viscous effects in the boundary layer are to be captured during simulation (Figure 3.1f).

The following sections present a detailed description of these steps, as well as examples of meshes obtained by means of this method. Conclusions regarding the general performance of the approach bring the chapter to a close.
According to the grid-based approach, the first step of a mesh generation process is the construction of an initial background mesh. For this purpose, a Cartesian box bounding the domain of interest is identified and optionally subdivided into a set of identical cells by refinement along the axes x, y, z; Figure 3.4 illustrates this operation. Alternatively, any hexahedral mesh fully encompassing the domain can be used. The choice of the initial mesh is wide: it can be a structured mesh with orthogonal or non-orthogonal cells, curvilinear or straight, or it can be an unstructured collection of hexahedra. As highlighted by Tchon et al. [119], this mesh should be very easy to generate and should fully encompass the respective computational domain.
The default behaviour of the generator is to start from an initial mesh involving a single hexahedral cell corresponding to the bounding box of the computational domain.
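A minimal sketch of this initial step (our illustration, with assumed inputs): the bounding box of the geometry vertices, optionally subdivided into identical cells along each axis.

  import numpy as np

  def background_grid(points, nx=1, ny=1, nz=1):
      # Grid planes of a Cartesian box bounding `points` (an (m, 3) array),
      # split into nx * ny * nz identical cells; nx = ny = nz = 1 reproduces
      # the default single-cell initial mesh.
      lo, hi = points.min(axis=0), points.max(axis=0)
      return [np.linspace(lo[d], hi[d], n + 1) for d, n in enumerate((nx, ny, nz))]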
At this step, the initial mesh is automatically adapted to particular geometrical features by successive subdivisions of its cells. The goal is to refine the mesh until the necessary nodal density and cell sizing are obtained. Further adaptation of the mesh can optionally be performed during flow simulation, depending on indicators measuring the quality of the obtained solution.

The available refinement patterns are depicted in Figure 3.5. The refinement process typically involves several levels. At each level, mesh cells are either preserved or refined according to one of the patterns in Figure 3.5: all cells intersecting the domain boundary are identified and their sizes are compared to the target cell sizes that must be satisfied. The latter can either be specified by the user or computed from characteristic length scales of the underlying geometry. The difference in refinement levels allowed between neighboring cells is limited to one; this criterion guarantees smooth cell size gradation between regions with different cell sizes.

A cell is subdivided if at least one of the refinement criteria is satisfied (see Figure 3.6 and the sketch below). The first criterion involves target cell sizes specified by the user for each topological surface near which certain cell sizing is required: during the refinement process, cells intersecting the respective surfaces are subdivided until their sizes reach the specified values. The other criterion is the variation of the normals of surface triangles intersecting mesh cells: cells are refined if this variation exceeds a certain threshold value. Additional refinements are forced in narrow gaps present in the domain, so that a certain minimum number of cells is generated across such gaps.

After the refinement process is completed, the final operation of the adaptation procedure is the removal of cells that intersect or fall outside the domain boundary. This process is also referred to as trimming. Intersecting cells are identified and marked first; after that, the marked cells are removed.
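A schematic version of the per-cell refinement test described above (our sketch; `cell`, `surface`, their methods, and the threshold are illustrative stand-ins for the generator's internal queries, not actual HEXPRESS interfaces):

  NORMAL_VARIATION_THRESHOLD = 0.5   # assumed threshold on normal variation (radians)

  def needs_refinement(cell, surface):
      # Return True if at least one refinement criterion is satisfied (Figure 3.6).
      if not cell.intersects(surface):
          return False
      # Criterion 1: cell still larger than the user-specified target size.
      if cell.size() > surface.target_size:
          return True
      # Criterion 2: variation of normals of surface triangles inside the cell.
      if normal_variation(cell, surface) > NORMAL_VARIATION_THRESHOLD:
          return True
      return False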
The optimization algorithm minimizes the functional using a weighted steepest descent technique which locally updates coordinates of nodes in a Jacobi procedure. A sample
The optimization technique is capable of removing overlapping and degenerate cells but is an order of magnitude more expensive than Laplacian smoothing. The development of a new fast and reliable mesh quality improvement approach is addressed in Chapter 4 of this thesis.
Several layers of high-aspect-ratio hexahedral cells can be added to a mesh close to the surface geometry for correct resolution of boundary layers in high-Reynolds-number flow simulations. A method using tangential refinement of cells adjacent to solid boundaries in the normal-to-wall direction is applied (see Figure 3.14). The method is capable of generating layers of highly stretched cells along solid boundaries as well as of satisfying a user-specified first-cell size and stretching ratio. However, such a simplified procedure often produces meshes with excessive cell size discontinuities in the normal-to-wall direction between anisotropic and isotropic regions; Figure 3.15 presents an example of such a mesh. Chapter 5 presents a new, powerful approach for the generation of high-quality meshes for viscous computations developed in the thesis. The new method directly addresses the problem of cell size discontinuities between isotropic and anisotropic regions.
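For illustration, with the first-cell size and stretching ratio as the user inputs (the values here match the DLR-F6 case reported below), the layer thicknesses form a geometric progression:

  first_cell, ratio, n_layers = 1e-3, 1.2, 7       # user-specified parameters
  sizes = [first_cell * ratio**i for i in range(n_layers)]
  total = sum(sizes)                                # thickness of the whole layer block
  # sizes[0] = 0.001, sizes[-1] ≈ 0.003, total ≈ 0.0129 (geometric sum)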
In the first example, the initial mesh consists of thirty-two, twenty-five and eight cells in the three respective Cartesian directions; the total number of cells is 6,400. After three refinements and boundary fitting, the mesh contains 136,155 cells. Buffer insertion increases the cell count to 229,331 cells. Figure 3.16 shows the resulting surface mesh with all topological edges captured successfully. All three refinement criteria (target sizes, distances between opposite walls, and high curvature) were applied in this case.

The second example shows the performance of the generator for an external aerodynamics application. The computational domain around a generic business jet geometry contains forty-eight topological edges and twenty-five topological faces; it consists of the airplane model and the external bounding box. The initial mesh consists of only one cell. Eleven refinements are performed, after which the mesh is snapped onto the configuration.

An unstructured hexahedral mesh is generated around the wing-body-nacelle configuration DLR-F6. The respective computational domain contains fifty topological edges and twenty-two topological faces. The initial mesh contains twelve, five and ten cells in each respective Cartesian direction, i.e. six hundred cells in total. After thirteen refinements are performed, the mesh is successfully projected onto the geometry and all topological entities are captured. The resulting mesh contains 536,206 cells. The next operation, buffer insertion, increases the cell count by approximately thirty percent, resulting in a 710,249-cell mesh. At this point, mesh quality is improved by means of optimization. This is followed by the insertion of layers of highly stretched cells along solid boundaries (i.e., the aircraft surface). The parameters of the layers are: the first-cell size is 10⁻³, the stretching ratio is 1.2, and the number of layers is seven. The resulting viscous mesh contains 1,834,477 cells. Figure 3.19 shows a general view of the surface mesh on the DLR-F6 configuration; a close-up view of the nacelle mesh is depicted in Figure 3.20. This test case represents a reference geometry for the CFD community, and successful mesh generation is an important validation milestone for the mesh generator.

Finally, the last example represents the external geometry around a fragment of a generic

description. It simplifies the potentially cumbersome translation of a CAD model into a native data set of the mesh generator.
Secondly, the mesh generator automatically adapts the initial mesh in such a way that the final one is graded according to typical length scales of the surrounding geometry, with minimal user intervention. Furthermore, unlike other related Cartesian-based methods, the present technique automatically fits the mesh to the geometry and removes degenerate cells and faces. It is also important that the method does not require the existence of a surface mesh with length scales compatible with the physics simulation prior to volume mesh generation; instead, such a mesh is obtained directly as a by-product of the boundary-fitting step. Also, the method implements sophisticated algorithms to preserve the geometric features of the domain of interest. This becomes a clear benefit at the physics simulation stage, as the effects of corners and ridges on simulation results can be significant.

Automatic generation of unstructured meshes sometimes produces meshes with poorly shaped and inverted elements. The case is even worse for hexahedral meshing, because of the high flexibility of a hexahedron to become extremely distorted and the intrinsic difficulties of generating such meshes for complex configurations. The presence of invalid mesh elements may lead to a major breakdown of the simulation algorithm. Therefore, the development of a posteriori quality improvement tools is of prime importance.

Meshes consisting solely of simplicial elements (triangles or tetrahedra) are the most amenable to quality improvement. This is due to the existence of a linear mapping of a reference triangle or tetrahedron onto an arbitrary triangular or tetrahedral element. Poorly shaped cells can be detected by evaluating the Jacobian of the corresponding mapping. The linearity of such mappings is extremely important because the Jacobian of a linear mapping is constant; therefore, evaluating the mapping Jacobian unambiguously answers the question of whether or not a mesh element is inverted, in other words, has a non-positive volume.

However, the situation for computational meshes consisting of non-simplicial elements such as quadrilaterals or hexahedra is different. In particular, no linear mapping exists that transforms a hexahedral element into a reference element. Instead, the mapping between a unit cube and a hexahedral cell is trilinear, and its Jacobian exhibits rather complicated behavior throughout the cell. Also, no convexity relations can be effectively exploited, because the faces of a hexahedron are generally not planar and can be folded even when all eight corners are convex. This makes the task of quality improvement much more complicated for hexahedral meshes as compared to simplicial and even quadrilateral meshes, for which simple convexity relations can easily be established.
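To make the contrast concrete, here is a sketch (our illustration, with an assumed vertex numbering, not the thesis implementation) of the common corner-Jacobian check for a hexahedron. Because the trilinear Jacobian is not constant, positivity at the eight corners is a necessary but, as noted above, not sufficient condition for validity:

  import numpy as np

  # Edge-neighbor triples for the usual 0..7 hexahedron vertex numbering
  # (bottom face 0-1-2-3, top face 4-5-6-7); each triple is ordered so the
  # determinant is positive for the unit cube (right-handed).
  CORNER_NEIGHBORS = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
                      (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

  def corner_jacobians(v):
      # v: (8, 3) array of vertex coordinates; returns the eight corner
      # Jacobians, all of which must be positive for the cell to pass.
      return [np.linalg.det(np.column_stack((v[a] - v[c], v[b] - v[c], v[d] - v[c])))
              for c, (a, b, d) in enumerate(CORNER_NEIGHBORS)]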
An effective method for untangling and optimization of hexahedral unstructured non-conformal meshes is presented in this thesis [34, 75]. The method has been developed as part of the new mesh generator presented in Chapter 3. It is able to untangle invalid (i.e., concave) cells and to optimize valid but poorly shaped cells resulting from the grid generation process; the ultimate goal is to obtain a mesh in which all cells are convex.

The grid generation process employed in the generator is based on an overlay approach and includes several stages. First, an initial non-boundary-conforming mesh is created and refined based on geometry particularities. Cells that fall outside or intersect the domain are removed from the volume mesh. Next, the surface of the resulting staircase mesh is projected onto the domain boundary, and layers of buffer cells are inserted between the volume mesh and the corresponding surface mesh in order to obtain a body-conforming mesh. Concave and poorly shaped cells, which are usually concentrated near the boundary, may appear during the projection step. In the final stage, the new untangling and optimization tools are applied to transform these cells into convex ones and to recover a mesh of high quality. Optionally, layers of high-aspect-ratio cells for viscous flow computations may be inserted. The grid generator is coupled with a new flow solver, which intensively adapts the mesh based on local solution error estimation in order to obtain a mesh optimized for a particular flow solution. Refinement of a concave cell may result in new cells with negative volumes; the latter are unacceptable for reasons of robustness and accuracy of the flow solver. That is why an automatic optimization procedure is necessary: it combines the untangling and optimization techniques with Laplacian smoothing, resulting in an efficient automated tool for improving the quality of unstructured hexahedral meshes.

The upcoming sections are organized as follows. To begin with, the methods for untangling and optimization are considered separately, with the focus on their individual algorithmic details. After that, an approach combining both techniques is described in detail. Finally, the performance of the combined approach is analysed, conclusions about its efficiency are