Turbulent (incompressible) flow around a NACA-4412 profile - adaptive mesh refinement 

Author: Adam Peplinski, Nicolas Offermans and Philipp Schlatter (KTH Mechanics)

All of these cases were designed to expose some of the difficulties encountered in CFD, such as complex geometries and intricate physical interactions, but they share a common denominator: they represent relevant cases that can be scaled to large sizes (i.e. in number of grid points and running time) so as to be of industrial relevance.

In the present blog, we describe the progress that we are making on one of those flagship runs, namely the incompressible flow around an asymmetric wing profile, the NACA 4412 airfoil. While our group has performed similar simulations before, the major innovation from ExaFLOW comes with a novel treatment of the discretisation inside the computational domain: for the first time, we allow the mesh to evolve dynamically depending on the estimated computational error at any given point in space and time. During ExaFLOW, we coupled this so-called adaptive mesh refinement to the highly accurate spectral-element code Nek5000. Special focus has been on the design of the preconditioners necessary to efficiently solve the arising linear systems, the definition of the error indicators (in this case the so-called spectral error indicators), and the overall scalability of our implementation.

Author: Allan S. Nielsen, Ecole polytechnique fédérale de Lausanne (EPFL)

HPC resilience is expected to be a major challenge for future exascale systems. On today's petascale systems, hardware failures that disturb the cluster workflow are already a daily occurrence. The most common current approach to safeguarding applications against the impact of hardware failure is to write all relevant data to the parallel file system at regular intervals. In the event of a failure, one may simply restart the application from the most recent checkpoint rather than recomputing everything. This approach works fairly well for smaller applications using 1-100 nodes, but becomes very inefficient for large-scale computations for various reasons. A major challenge is that the parallel file system is unable to create checkpoints fast enough. It has been suggested that if one were to use the current checkpoint-recover approach on future exascale systems, applications would be unable to make progress, being in a constant state of checkpointing to, or restarting from, the parallel file system.
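To illustrate the basic mechanism being discussed, here is a minimal sketch of checkpoint-restart in Python. This is our own illustrative example, not code from any ExaFLOW application: the function names, the pickle-based format, and the single-file layout are all assumptions made for brevity. The atomic rename mimics what real checkpointing libraries do so that a crash mid-write cannot corrupt the last good checkpoint.

```python
import os
import pickle
import tempfile

def write_checkpoint(path, step, state):
    """Write (step, state) via a temporary file plus an atomic rename,
    so a crash during the write cannot corrupt the previous checkpoint."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump((step, state), f)
    os.replace(tmp, path)  # atomic on POSIX

def read_checkpoint(path):
    """Return (step, state), or (0, None) if no checkpoint exists yet."""
    if not os.path.exists(path):
        return 0, None
    with open(path, "rb") as f:
        return pickle.load(f)

def run(n_steps, interval, path, advance, init_state):
    """Resume from the latest checkpoint if one exists, then advance
    the solution, checkpointing every `interval` steps."""
    step, state = read_checkpoint(path)
    if state is None:
        state = init_state
    while step < n_steps:
        state = advance(state)
        step += 1
        if step % interval == 0:
            write_checkpoint(path, step, state)
    return state
```

If the job is killed and resubmitted, calling `run` again with the same `path` resumes from the most recent checkpoint instead of step 0, which is exactly the recompute-saving behaviour the text describes (and exactly the part that becomes expensive when the checkpoint target is a shared parallel file system).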

Authors: Martin Vymazal, David Moxey, Chris Cantwell, Robert M. Kirby and Spencer Sherwin

High-order methods, combined with unstructured grids, are becoming increasingly popular in application areas such as computational fluid dynamics. They simultaneously provide geometric flexibility and high fidelity in flow solutions, whilst being able to utilise modern computing hardware more effectively than traditional low-order methods. These properties make high-order methods particularly attractive in applications such as large-eddy simulations over complex industrial geometries, which can be used to gain detailed insight into flow physics.

Author: Sebastian Wagner, Automotive Simulation Center Stuttgart e.V.

In our previous blog post we presented the results from strong scalability tests performed with Nektar++ only. As a next step, the performance of Nektar++ was compared with that of the commercial simulation software Fluent. The main difference lies in the numerical methods: Nektar++ uses a spectral/hp element method, whereas Fluent (DES) implements the finite volume method. In Nektar++, the polynomial expansion orders were 3 and 2 for velocity and pressure, respectively.

Table 1: Test conditions for strong scalability test

Parameter        Nektar++   Fluent
Reynolds number  100        6.3×10⁶
Δt               10⁻⁵ s     10⁻⁴ s
Timesteps        5000       500
Physical time    0.05 s     0.05 s
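As a reminder of how strong-scaling results of this kind are typically evaluated, here is a minimal sketch of computing speedup and parallel efficiency relative to the smallest run. The core counts and wall-clock times below are made-up illustrative numbers, not the actual Nektar++ or Fluent measurements:

```python
def strong_scaling(cores, wall_times):
    """Speedup and parallel efficiency relative to the smallest run.
    cores: list of core counts; wall_times: wall-clock times (same order).
    Ideal strong scaling means efficiency stays at 1.0 as cores grow."""
    base_cores, base_time = cores[0], wall_times[0]
    rows = []
    for c, t in zip(cores, wall_times):
        speedup = base_time / t
        efficiency = speedup * base_cores / c
        rows.append((c, t, speedup, efficiency))
    return rows

# Illustrative numbers only, not project measurements:
for c, t, s, e in strong_scaling([24, 48, 96, 192],
                                 [100.0, 52.0, 28.0, 16.0]):
    print(f"{c:4d} cores: {t:6.1f} s  speedup {s:5.2f}  efficiency {e:4.0%}")
```

Since the two codes were run at different Reynolds numbers, time steps, and step counts (Table 1), such per-code efficiency curves are the meaningful basis of comparison, rather than raw wall-clock times.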


Author: Nicolas Offermans, KTH Royal Institute of Technology

In our previous blog posts, we presented our progress on the implementation of h-type mesh refinement capabilities in the spectral element method (SEM) code Nek5000. As the next step, we combine these tools with appropriate error estimators and perform adaptive mesh refinement (AMR) on some test cases. Two methods are considered for estimating the error. The first uses spectral error indicators, which are based on the properties of the SEM. The second uses dual (adjoint-based) error estimators, which require the solution of an adjoint problem. At the moment, only steady, two-dimensional simulations are considered. Nevertheless, the results obtained provide valuable experience ahead of an application to the real-world benchmark cases of the ExaFLOW project. A detailed description of our latest work can be found in [1].
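In rough terms, a spectral error indicator exploits the fact that, for a smooth solution, the magnitudes of the polynomial expansion coefficients on each element decay exponentially; fitting that decay lets one estimate the truncation error without solving any extra problem. The 1D sketch below is our own simplification of this idea (it omits the quadrature-weight factors of the full formulation described in [1], and the exponential-fit window of four modes is an arbitrary choice):

```python
import numpy as np

def spectral_error_indicator(coeffs):
    """Estimate the truncation error of a 1D polynomial expansion from
    the decay of its coefficient magnitudes (simplified sketch of a
    spectral error indicator, without quadrature-weight factors).

    coeffs: spectral coefficients a_0..a_N of the solution on one element.
    Fits |a_k| ~ c * exp(-sigma * k) over the last four modes, then adds
    an estimate of the neglected tail beyond mode N."""
    a = np.abs(np.asarray(coeffs, dtype=float))
    N = len(a) - 1
    k = np.arange(N - 3, N + 1)        # fit window: last four modes
    mag = np.maximum(a[k], 1e-30)      # guard against log(0)
    slope, log_c = np.polyfit(k, np.log(mag), 1)
    sigma = -slope                     # decay rate; positive if converging
    if sigma <= 0:                     # no decay detected: flag for refinement
        return np.inf
    c = np.exp(log_c)
    # tail estimate: integral of (c * exp(-sigma * n)) from N+1 to infinity
    tail = c * np.exp(-sigma * (N + 1)) / sigma
    return np.sqrt(a[N] ** 2 + tail ** 2)
```

In an AMR loop, elements whose indicator exceeds a tolerance are marked for h-refinement; adding modes (or refining) should drive the indicator down for smooth solutions, which is easy to check by evaluating it on a geometrically decaying coefficient sequence of increasing length.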