Author: Dr. Chris Cantwell, Imperial College London

Fluid dynamics is one of the application areas driving advances in supercomputing towards the next great milestone of exascale: achieving 10^18 floating-point operations per second (flops). One of the major challenges of running such simulations is the reliability of the computing hardware on which they execute. With clock speeds having saturated, growth in available flops is now achieved primarily through increased parallelism, so an exascale machine is likely to consist of far more components than previous systems. Since the reliability of these components is not improving appreciably, the time for which an exascale system can run before a failure occurs (the mean time to interrupt, MTTI) is expected to be on the order of a few minutes. Indeed, even the latest petascale (10^15 flops) supercomputers have an MTTI of around eight hours.
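To see why the system-level MTTI shrinks as component counts grow, a simple back-of-the-envelope model helps: if component failures are independent and exponentially distributed, the system MTTI is roughly the per-component MTTI divided by the number of components. The sketch below illustrates this scaling; the component counts and the per-component MTTI are illustrative assumptions, not measured figures for any particular machine.

```python
# Back-of-the-envelope MTTI scaling sketch.
# Assumes independent, exponentially distributed component failures,
# so system MTTI ~ component MTTI / number of components.
# All numbers below are illustrative assumptions, not measurements.

def system_mtti_hours(component_mtti_hours: float, n_components: int) -> float:
    """System mean time to interrupt under an independent-failure model."""
    return component_mtti_hours / n_components

component_mtti = 500_000.0      # assumed per-component MTTI (hours)
petascale_parts = 60_000        # illustrative petascale component count
exascale_parts = 5_000_000      # illustrative exascale component count

for label, n in [("petascale", petascale_parts), ("exascale", exascale_parts)]:
    mtti = system_mtti_hours(component_mtti, n)
    print(f"{label}: ~{mtti:.2f} h (~{mtti * 60:.0f} min) between interrupts")
```

With these assumed figures the model reproduces the trend described above: a petascale-sized machine interrupts every few hours, while an exascale-sized one interrupts every few minutes.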

The High-Performance Computing Center Stuttgart (HLRS) will host one of four residencies run in conjunction with the ExaFLOW project. We invite applications from artists and designers with an interest in computer science and technology to join us as part of the VERTIGO Project of the European Commission. The deadline for applications is May 29, 2017 at 10:00 CET.

Author: Dr. Julien Hoessler, McLaren

 

One of our goals here at McLaren, as an industrial partner, is to demonstrate that the algorithms developed within the ExaFLOW consortium can help improve the accuracy and/or throughput of our production CFD simulations on complex geometries. To that end, we provided a demonstration case, referred to as the McLaren front wing, based on the McLaren 17D and representative of one of the major points of interest in Formula 1: the interaction of the vortical structures generated by the front-wing endplate with the front-wheel wake.


Figure 1: McLaren front wing, initial run in Nektar++

Author: Dr. Nick Johnson, EPCC

 

Having just returned from Lausanne, where we held our most recent all-hands meeting, I sat down to write our periodic report. These reports are a good opportunity to step back and see what we've covered, as a work package and as a partner, since our first meeting in Stockholm in October 2015.

I resurrected an old set of slides for comparison and can see that we've covered a fair amount of work in the past 18 months; I even now understand some of the maths! We've worked heavily on energy efficiency, benchmarking codes in depth on a number of systems. We are lucky to have three similar (but not identical) systems from the same vendor, so we can easily exchange measurement tips and libraries. It is also apparent that, despite using well-tuned systems, we see variance between runs of a simulation and have to be careful in designing our experiments.
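Run-to-run variance is easy to underestimate, so it helps to quantify it before drawing conclusions from timing or energy measurements. The snippet below is a minimal sketch of how repeated measurements of the same benchmark might be summarised (mean, standard deviation, coefficient of variation); the sample values are invented for illustration and do not come from our systems.

```python
# Minimal sketch: summarising run-to-run variability of repeated
# benchmark measurements (e.g. runtime in seconds or energy in joules).
# The sample values below are invented for illustration only.
import statistics

runs = [412.3, 418.9, 409.7, 431.2, 415.0]  # hypothetical repeated runtimes (s)

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)              # sample standard deviation
cv = stdev / mean                           # coefficient of variation

print(f"mean   = {mean:.1f} s")
print(f"stdev  = {stdev:.1f} s")
print(f"CV     = {cv:.1%}  (run-to-run variability)")
print(f"median = {statistics.median(runs):.1f} s  (more robust to outliers)")
```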

Author: Patrick Vogler, IAG, University of Stuttgart

 

The steady increase in available computing resources has enabled engineers and scientists to use progressively more complex models to simulate a myriad of fluid flow problems. Yet, while modern high-performance computing (HPC) systems have seen steady growth in computing power, the same trend has not been mirrored by a comparable gain in data transfer rates. Current systems can produce and process large amounts of data quickly, but overall performance is often hampered by how fast a system can transfer and store the computed data. Considering that CFD (computational fluid dynamics) researchers invariably seek to study simulations with ever higher temporal resolution on fine-grained computational grids, the imminent move to exascale performance will only exacerbate this problem. [6]
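One way to see the imbalance is a simple back-of-the-envelope comparison of how long a flow-field snapshot takes to write at a given aggregate bandwidth, with and without compression. The sketch below uses invented numbers for the snapshot size, file-system bandwidth, compression ratio, and compression throughput purely to illustrate the trade-off; none of them describe a specific machine or compression scheme.

```python
# Back-of-the-envelope I/O sketch: time to write one flow-field snapshot
# with and without compression. All figures are illustrative assumptions.

def write_time_s(data_bytes: float, bandwidth_bytes_per_s: float,
                 compression_ratio: float = 1.0,
                 compress_throughput_bytes_per_s: float = float("inf")) -> float:
    """Seconds to (optionally compress and) write data at the given bandwidth."""
    compress_time = data_bytes / compress_throughput_bytes_per_s
    io_time = (data_bytes / compression_ratio) / bandwidth_bytes_per_s
    return compress_time + io_time

TB = 1e12
GBps = 1e9

snapshot = 50 * TB          # assumed size of one time-resolved snapshot
fs_bandwidth = 500 * GBps   # assumed aggregate file-system bandwidth

raw = write_time_s(snapshot, fs_bandwidth)
compressed = write_time_s(snapshot, fs_bandwidth,
                          compression_ratio=4.0,                       # assumed ratio
                          compress_throughput_bytes_per_s=2000 * GBps)  # assumed speed

print(f"uncompressed write: {raw:.0f} s")
print(f"compressed write:   {compressed:.0f} s (4:1 ratio, compression time included)")
```

Even with the compression overhead included, the assumed 4:1 ratio halves the time spent waiting on the file system in this toy example, which is the kind of saving that motivates in-situ data reduction for exascale CFD output.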