Author: Nick Johnson, EPCC

Use cases are the bridge between benchmarks, where specific elements of a code, such as a solver or I/O routine, are exercised (often in isolation), and full applications. However, that’s not to say use cases are not full applications. In ExaFLOW, our use cases are specific geometries, specific CFD problems that we use to demonstrate the improvements we have made to our co-design applications: Nektar++, Nek5000 & OpenSBLI. What differentiates them from full applications is that we already know the answer! Like a school kid sneaking a look at the answers in the back of the textbook, we have a good feel for how the model should resolve, what it should look like, and perhaps more importantly, what it shouldn’t look like.

Authors: Neil D. Sandham, Roderick Johnstone, Christian T. Jacobs

Direct numerical simulation (DNS) of turbulent flow is highly accurate, capable of resolving all turbulence length scales on the numerical grid. However, applications of DNS are limited by the sheer computational cost of the approach, since the number of grid points scales very strongly with the Reynolds number (e.g. Re^(37/14) for aerofoil boundary layer flow [1]).
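To get a feel for how punishing that scaling is, the growth in grid-point count can be sketched with a back-of-the-envelope calculation. This is purely illustrative; the only input from the text above is the Re^(37/14) exponent for aerofoil boundary-layer flow, and the specific Reynolds numbers are arbitrary:

```python
# Illustrative estimate of how DNS grid-point counts grow with Reynolds number,
# using the Re^(37/14) scaling for aerofoil boundary-layer flow cited above.

def grid_point_ratio(re_hi: float, re_lo: float, exponent: float = 37 / 14) -> float:
    """Factor by which the number of grid points grows when Re increases."""
    return (re_hi / re_lo) ** exponent

if __name__ == "__main__":
    # Doubling the Reynolds number multiplies the grid size by roughly 6x.
    print(f"Re x2  -> grid points x{grid_point_ratio(2e5, 1e5):.1f}")
    # A tenfold increase in Re costs over 400x more grid points.
    print(f"Re x10 -> grid points x{grid_point_ratio(1e6, 1e5):.0f}")
```

Since the exponent 37/14 is well above 2, even modest increases in Reynolds number quickly exhaust any fixed compute budget, which is why DNS remains restricted to moderate-Re problems.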

Author: Allan S. Nielsen, Ecole polytechnique fédérale de Lausanne (EPFL)

In the quest towards reaching Exascale computational throughput on potentially Exascale-capable machines, dealing with faulty hardware is expected to be a major challenge. On today's Petascale systems, hardware failures of various forms have already become something of a daily occurrence.

In mid-September, the ExaFLOW consortium gathered with members of the Scientific Advisory Board in the impressive surroundings of the mid-nineteenth century mansion St Leonard's Hall in Edinburgh. This all-hands meeting allowed everyone to be updated on the project’s achievements during the last six months and set the agenda for the final year.

St Leonard's Hall in Edinburgh, formerly St Trinnean's School for Girls

Author: Carlos J. Falconi Delgado, Automotive Simulation Center Stuttgart e.V.

Computational methods that fully resolve the evolution of three-dimensional turbulence, such as LES and DNS, incur a huge computational cost owing to the high temporal and spatial resolution they demand. A code with good scalability enables more efficient computation and helps meet the fast turnaround times (<36 hours) required by the automotive industry. To find the optimal number of processors for massive computations, the computation time was measured with respect to the number of cores, namely a strong scalability test.
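The mechanics of a strong scalability test, fixed problem size, increasing core counts, can be sketched as follows. This is a generic illustration rather than the actual measurement setup: real tests use measured wall-clock times, and here Amdahl's law with an arbitrarily assumed 5% serial fraction stands in for them:

```python
# Sketch of a strong-scaling analysis: the problem size is held fixed while
# the core count grows, and speedup/efficiency are derived from run times.
# Amdahl's law with an assumed 5% serial fraction substitutes for real timings.

def amdahl_time(t1: float, cores: int, serial_fraction: float = 0.05) -> float:
    """Modelled wall-clock time on `cores` cores (assumed serial fraction)."""
    return t1 * (serial_fraction + (1 - serial_fraction) / cores)

def strong_scaling_table(t1: float, core_counts: list[int]) -> None:
    """Print speedup (t1/tp) and parallel efficiency (speedup/cores)."""
    print(f"{'cores':>6} {'time':>8} {'speedup':>8} {'efficiency':>10}")
    for p in core_counts:
        t = amdahl_time(t1, p)
        speedup = t1 / t
        print(f"{p:>6} {t:>8.1f} {speedup:>8.2f} {speedup / p:>10.1%}")

if __name__ == "__main__":
    # A hypothetical 100-hour serial run scaled out to thousands of cores:
    # efficiency falls as the serial fraction dominates, which is exactly
    # the behaviour a strong scalability test is designed to expose.
    strong_scaling_table(t1=100.0, core_counts=[1, 16, 256, 4096])
```

The "optimal" core count in such a test is a judgement call: it is the point where adding cores still shortens the run enough to justify the extra resources, e.g. where efficiency drops below some threshold while the wall-clock time still meets the turnaround target.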