Author: Allan S. Nielsen, École polytechnique fédérale de Lausanne (EPFL)

HPC resilience is expected to be a major challenge for future Exascale systems. On today's petascale systems, hardware failures that disturb the cluster workflow are already a daily occurrence. The most common approach currently used for safeguarding applications against the impact of hardware failure is to write all relevant data to the parallel file system at regular intervals. In the event of a failure, one simply restarts the application from the most recent checkpoint rather than recomputing everything. This approach works fairly well for smaller applications using 1-100 nodes, but becomes very inefficient for large-scale computations, chiefly because the parallel file system cannot create checkpoints fast enough. It has been suggested that if the current checkpoint-restart approach via the parallel file system were used on future Exascale systems, applications would be unable to make progress at all, being in a constant state of either checkpointing to, or restarting from, the parallel file system.
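To make the mechanics concrete, here is a minimal, self-contained sketch of the checkpoint-restart pattern described above. All names (the checkpoint path, the toy solver state, the interval) are hypothetical illustrations; real HPC codes write application-specific restart files to the parallel file system, typically through MPI-IO or libraries such as HDF5.

```python
import os
import pickle
import tempfile

CHECKPOINT = "state.ckpt"  # hypothetical path; in practice this lives on the parallel file system

def save_checkpoint(state, path=CHECKPOINT):
    # Write atomically: dump to a temporary file, then rename, so a crash
    # mid-write can never corrupt the last good checkpoint.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path=CHECKPOINT):
    # After a failure, resume from the most recent checkpoint if one exists.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return None

def run(n_steps, interval=100):
    # Resume if a checkpoint survives a previous crash, else start fresh.
    state = load_checkpoint() or {"step": 0, "u": 0.0}
    while state["step"] < n_steps:
        state["u"] += 1.0           # stand-in for one solver time step
        state["step"] += 1
        if state["step"] % interval == 0:
            save_checkpoint(state)  # this I/O cost is what the post argues will not scale
    return state
```

The checkpoint interval trades I/O overhead against lost work on failure; at Exascale, the post argues, the write itself becomes the bottleneck.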

Author: Sebastian Wagner, Automotive Simulation Center Stuttgart e.V.

In our previous blog post we presented results from a strong scalability test performed with Nektar++ only. As a next step, the performance of Nektar++ was compared with that of the commercial simulation software Fluent. The major difference between the two codes lies in the numerical methods: Nektar++ uses a spectral/hp element method, whereas Fluent (DES) implements the finite volume method. In Nektar++, the polynomial expansion orders were 3 and 2 for velocity and pressure, respectively.

Table 1: Test conditions for the strong scalability test

Parameter         Nektar++   Fluent
Reynolds number   100        6.3×10⁶
Δt                10⁻⁵ s     10⁻⁴ s
Time steps        5000       500
Physical time     0.05 s     0.05 s
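For context, strong scalability is usually quantified as speedup and parallel efficiency relative to the smallest run. A minimal sketch of those two metrics (the timing numbers below are made up for illustration, not measured Nektar++ or Fluent results):

```python
def strong_scaling(timings):
    """Given {core_count: wall_clock_seconds}, compute speedup
    S(p) = T(p0)/T(p) and efficiency E(p) = S(p) * p0/p, both
    relative to the smallest core count p0 in the data."""
    p0 = min(timings)
    t0 = timings[p0]
    return {p: {"speedup": t0 / t, "efficiency": (t0 / t) * p0 / p}
            for p, t in sorted(timings.items())}

# Hypothetical wall-clock times for three core counts:
results = strong_scaling({24: 1000.0, 48: 520.0, 96: 280.0})
```

In a strong-scaling test the problem size is fixed while the core count grows, so ideal behaviour is efficiency close to 1 at every core count.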


Author: Nicolas Offermans, KTH Royal Institute of Technology

In our previous blog posts, we presented our progress on the implementation of h-type mesh refinement capabilities in the spectral element method (SEM) code Nek5000. As the next step, we combine these tools with appropriate error estimators and perform adaptive mesh refinement (AMR) on some test cases. Two methods are considered for estimating the error. The first uses spectral error indicators, which are based on the properties of the SEM. The second uses dual error estimators, which are based on the solution of an adjoint problem. At the moment, only steady, two-dimensional simulations are considered. Nevertheless, the results obtained provide valuable experience ahead of applying these methods to the real-world benchmarks and test cases of the ExaFLOW project. A detailed description of our latest work can be found in [1].
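As a rough illustration of the first approach, the sketch below estimates the truncation error of a single element from the decay of its spectral coefficients, assuming an exponential decay |a_k| ≈ c·e^(−σk) fitted by least squares (in the spirit of Mavriplis-type indicators). The function name and tolerance are hypothetical; the actual Nek5000 implementation differs in the details.

```python
import math

def spectral_error_indicator(coeffs):
    """Estimate the truncation error of a spectral expansion from the
    decay of its (nonzero) coefficient magnitudes.

    Assumes |a_k| ~ c * exp(-sigma * k): fits sigma and c by least
    squares on log|a_k|, then returns |a_N| plus the extrapolated tail
    sum_{k>N} c * exp(-sigma * k)."""
    ks = list(range(len(coeffs)))
    logs = [math.log(abs(a)) for a in coeffs]
    n = len(ks)
    kbar = sum(ks) / n
    lbar = sum(logs) / n
    # Least-squares slope of log|a_k| vs k gives the decay rate sigma.
    sigma = -sum((k - kbar) * (l - lbar) for k, l in zip(ks, logs)) \
        / sum((k - kbar) ** 2 for k in ks)
    c = math.exp(lbar + sigma * kbar)
    N = len(coeffs) - 1
    if sigma <= 0:
        return float("inf")  # no decay: the element is under-resolved
    # Geometric tail: sum_{k=N+1..inf} c*exp(-sigma*k)
    tail = c * math.exp(-sigma * (N + 1)) / (1 - math.exp(-sigma))
    return abs(coeffs[-1]) + tail
```

Elements whose indicator exceeds a prescribed tolerance would then be marked for h-refinement.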

Authors: Martin Vymazal, David Moxey, Chris Cantwell, Robert M. Kirby and Spencer Sherwin

High-order methods, combined with unstructured grids, are becoming increasingly popular in application areas such as computational fluid dynamics. They simultaneously provide geometric flexibility and high-fidelity flow solutions, whilst being able to utilise modern computing hardware more effectively than traditional low-order methods. These properties make high-order methods particularly attractive for applications such as large-eddy simulations over complex industrial geometries, which can be used to gain detailed insight into flow physics.

Author: Nick Johnson, EPCC

Use cases are the bridge between benchmarks, in which specific elements of a code, such as a solver or an I/O routine, are exercised (often in isolation), and full applications. That is not to say use cases are not full applications. In ExaFLOW, our use cases are specific geometries and specific CFD problems that we use to demonstrate the improvements we have made to our co-design applications: Nektar++, Nek5000 and OpenSBLI. What differentiates them from full applications is that we already know the answer! Like a school kid sneaking a look at the answers in the back of the textbook, we have a good feel for how the model should resolve, what it should look like and, perhaps more importantly, what it shouldn't look like.
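Because the answer is known in advance, validating a use case reduces to comparing the computed solution against the reference within a tolerance. A toy sketch of that check (the field data, function names and tolerance are hypothetical, not part of the ExaFLOW validation machinery):

```python
import math

def relative_l2_error(computed, reference):
    """Relative L2 difference between a computed field and a known reference."""
    num = math.sqrt(sum((c - r) ** 2 for c, r in zip(computed, reference)))
    den = math.sqrt(sum(r ** 2 for r in reference))
    return num / den

def validate(computed, reference, tol=1e-2):
    """A use case 'passes' if it reproduces the known answer within tol."""
    return relative_l2_error(computed, reference) <= tol
```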