Author: Jean-Eloi W. Lombard, Imperial College London

Building on our February 16th post, we have been working with McLaren towards the first comparison between high-order simulations of the McLaren front-wing, computed with Nektar++ (Cantwell, 2016), and particle image velocimetry (PIV) data.

One of our key goals remains to assess the accuracy of the high-fidelity LES with respect to experimental data. The different elements composing this geometry are presented in Figure 1. As a first step we have simulated the front-wing without the wheel, to focus on the generation of the vortex system on the wing itself. Pegrum (2006) conducted PIV experiments on the same geometry at a ride height of h/c_MP = 0.48, where h is the height of the front-wing measured at the trailing edge of the footplate and c_MP is the chord of the mainplane. The Reynolds number, based on the chord of the mainplane, is the same for both experiment and CFD at Re_c = 2 × 10^5.
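To make the two matching parameters explicit, here is a minimal Python sketch that simply evaluates the ride-height ratio and the chord-based Reynolds number quoted above; all dimensional values in it are hypothetical placeholders chosen only to reproduce those ratios, not figures from the experiment or the CFD.

    # Minimal sketch of the two nondimensional parameters used to match the CFD
    # to Pegrum's (2006) PIV experiment. All dimensional values are hypothetical
    # placeholders, chosen only to reproduce the quoted ratios.

    def ride_height_ratio(h, c_mp):
        """Ride height h (at the trailing edge of the footplate) over mainplane chord."""
        return h / c_mp

    def reynolds_number(u_inf, c_mp, nu):
        """Reynolds number based on the mainplane chord and the freestream velocity."""
        return u_inf * c_mp / nu

    c_mp = 0.25                    # mainplane chord [m] (placeholder)
    h = 0.48 * c_mp                # ride height giving h/c_MP = 0.48
    u_inf = 12.0                   # freestream velocity [m/s] (placeholder)
    nu = u_inf * c_mp / 2.0e5      # kinematic viscosity chosen so Re_c = 2 x 10^5

    print(ride_height_ratio(h, c_mp))        # 0.48
    print(reynolds_number(u_inf, c_mp, nu))  # 200000.0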

Figure 1: McLaren front-wing geometry (Lombard, 2017).

The initial condition for the CFD was a state-of-the-art RANS computation. The flow was then allowed to develop for 5 time units, based on the chord of the mainplane, before the results were averaged over 2 time units.

 

Author: Björn Dick, High Performance Computing Center Stuttgart - HLRS

Besides scalability, resilience and I/O, the energy demand of HPC systems is a further obstacle on the path to exascale computing, and it is therefore also addressed by the ExaFLOW project. The energy consumed by current systems already costs several million euros per year. Furthermore, the infrastructure required to supply such amounts of electrical energy is expensive and not available at many centres today. Last but not least, almost all of the electrical energy is converted into heat, posing challenges for heat dissipation.

Author: Dr. Christian Jacobs, University of Southampton

In simulations of fluid turbulence, small-scale structures must be sufficiently well resolved. A characteristic feature of under-resolved regions of the flow is the appearance of grid-to-grid point oscillations, and such oscillations are often used to decide when and where grid refinement is required. Two new error indicators, which permit the quantification of these features of under-resolution, have recently been developed by SOTON as part of the ExaFLOW project. Both are based on spectral techniques using small-scale Fourier transforms.
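As a rough illustration of the idea, rather than of the SOTON indicators themselves, the following Python sketch flags under-resolution in a one-dimensional window of solution values by measuring how much of the energy of a small local Fourier transform sits in the highest resolved wavenumbers; the function name and the choice of a top-third wavenumber band are illustrative assumptions.

    import numpy as np

    # Sketch of a spectral under-resolution indicator: grid-to-grid point
    # oscillations show up as energy near the Nyquist wavenumber of a local
    # (small-scale) Fourier transform, so the fraction of spectral energy in
    # the highest resolved wavenumbers can flag regions needing refinement.

    def high_wavenumber_fraction(u_window):
        """Fraction of spectral energy in the top third of resolved wavenumbers
        for a 1-D window of solution values."""
        u_hat = np.fft.rfft(u_window - np.mean(u_window))
        energy = np.abs(u_hat) ** 2
        cutoff = (2 * len(energy)) // 3      # start of the high-wavenumber band
        total = energy.sum()
        return energy[cutoff:].sum() / total if total > 0 else 0.0

    # A smooth window scores near zero; a window with grid-to-grid point
    # oscillations returns a substantial fraction and would be flagged.
    x = np.linspace(0.0, 1.0, 32, endpoint=False)
    smooth = np.sin(2.0 * np.pi * x)
    noisy = smooth + 0.3 * (-1.0) ** np.arange(32)   # superimposed 2-point oscillation

    print(high_wavenumber_fraction(smooth))  # ~0
    print(high_wavenumber_fraction(noisy))   # ~0.26 -> flag for refinement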

Authors: Dr. Christian Jacobs, University of Southampton; Niclas Jansson, KTH Royal Institute of Technology

ParCFD mini-symposium: Towards Exascale in High-Order Computational Fluid Dynamics

  • "High-Fidelity Road Car & Full-Aircraft Simulations using OpenFOAM on ARCHER - Perspectives On The Need For Exa-Scale" - N. Ashton
  • "Incorporating complex physics in the Nek5000 code: reactive and multiphase flows" - A. Tomboulides
  • "Towards Resilience at Exascale: Memory-conservative fault tolerance in Nektar++" - C. Cantwell
  • "Future-proofing CFD codes against uncertain HPC architectures: experiences with OpenSBLI" - N.D. Sandham
  • "Towards adaptive mesh refinement for the spectral element solver Nek5000" - A. Peplinski

Author: Dr. Chris Cantwell, Imperial College London

Fluid dynamics is one of the application areas driving advances in supercomputing towards the next great milestone of exascale: achieving 10^18 floating-point operations per second (flops). One of the major challenges of running such simulations is the reliability of the computing hardware on which they execute. With clock speeds having saturated and growth in available flops now achieved primarily through increased parallelism, an exascale machine is likely to consist of far more components than previous systems. Since the reliability of individual components is not improving significantly, the time for which an exascale system can run before a failure occurs (the mean time to interrupt, MTTI) is expected to be on the order of a few minutes. By comparison, the latest petascale (10^15 flops) supercomputers have an MTTI of around 8 hours.
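As a back-of-the-envelope illustration of that gap, if component failures are assumed independent then the system MTTI scales roughly inversely with the number of components; the short Python sketch below uses a purely hypothetical hundred-fold increase in component count to recover the order of magnitude quoted above.

    # Rough MTTI scaling argument: with independent component failures, system
    # MTTI is inversely proportional to the number of components. The ratio
    # below is a hypothetical round number, not a figure for any real machine.

    petascale_mtti_hours = 8.0   # reported MTTI of current petascale systems
    component_ratio = 100        # assumed increase in component count at exascale

    exascale_mtti_minutes = petascale_mtti_hours * 60.0 / component_ratio
    print(exascale_mtti_minutes)  # 4.8 -> "on the order of a few minutes"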