Date: June 22, 2017

Location: Frankfurt Marriott Hotel, across from Messe Frankfurt, Hamburger Allee 2, 60486 Frankfurt am Main, Germany

Registration: http://www.isc-hpc.com/registration.html

 

The complex nature of turbulent fluid flows implies that the computational resources needed to accurately model problems of industrial and academic relevance are virtually unbounded. Computational Fluid Dynamics (CFD) is therefore a natural driver for exascale computing, with the potential for substantial societal impact: reduced energy consumption, alternative sources of energy, improved health care, and improved climate models.

Extreme-scale CFD poses several cross-disciplinary challenges, e.g. algorithmic issues in scalable solver design, the handling of extreme-sized data with compression and in-situ analysis, and resilience and energy awareness in both hardware and algorithm design. This wide range of topics makes exascale CFD relevant to a broader HPC audience, extending beyond the traditional fluid dynamics community.

This workshop aims to bring together the CFD community as a whole, from HPC experts to domain scientists, to discuss current and future challenges on the way to exascale fluid dynamics simulations, and to facilitate international collaboration.

 

Agenda:

14:00 - 14:10  Welcome/Intro

14:10 - 14:45  Alexander Heinecke,  Intel

14:45 - 15:20  Ingrid Hotz, LiU

15:20 - 15:55  Evelyn Otero, KTH

16:00 - 16:30  Coffee break!

16:30 - 17:05  Keiji Onishi, RIKEN AICS

17:05 - 17:40  Niclas Jansson, KTH (Wrap-up)

 

Speakers:

Alexander Heinecke,  Intel

Title: Seismic Simulations in the Days of Deep Learning Hardware

Abstract: Deep learning applications are developing into the next killer app and have substantially influenced current hardware platforms, pushing them towards more regular compute units, or even fixed-function units, for dense linear algebra operations. In this talk we will discuss how seismic wave equations can be solved efficiently on modern hardware using FDM and DG-FEM. In both cases we introduce novel implementation schemes which optimize bandwidth utilization in the FDM case (via vector folding) and the efficiency of small sparse matrix products in the DG-FEM case (via concurrent forward runs). As test platform we used the Cori Phase-II supercomputer hosted at NERSC, which comprises more than 9,000 Intel Xeon Phi processors.
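To make the "concurrent forward runs" idea concrete: fusing several independent simulations turns each small sparse matrix-vector product into a sparse-matrix-times-dense-block product, whose inner dimension vectorises naturally. Below is a minimal numpy/scipy sketch of the principle; the sizes, density and values are invented for illustration and are not the kernels from the talk.

    # Concurrent forward runs: one small sparse DG operator applied to
    # several fused simulations at once (toy sizes and values).
    import numpy as np
    from scipy.sparse import random as sparse_random

    rng = np.random.default_rng(0)
    A = sparse_random(56, 56, density=0.15, format="csr", random_state=0)
    n_runs = 16                              # number of fused forward runs

    # One DOF vector per run, stored as the columns of a dense block
    X = rng.standard_normal((56, n_runs))

    # Sparse matrix times dense block: every nonzero of A now touches a
    # contiguous, SIMD-friendly row of 16 values instead of one scalar.
    Y_fused = A @ X

    # Equivalent one-run-at-a-time version, far less vector-friendly:
    Y_loop = np.column_stack([A @ X[:, i] for i in range(n_runs)])
    assert np.allclose(Y_fused, Y_loop)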

 

Keiji Onishi, RIKEN AICS

Title: Encouragement of Accelerating Pre/Post processing towards Exascale CFD

Abstract: Accelerating pre- and post-processing is a hidden task that HPC technology must address for exascale CFD. Where CFD is actually used, the working time spent on pre-processing, i.e. shape modification based on 'dirty' CAD data, is still a bottleneck, and it hinders speeding up the entire CFD design process. This also becomes a problem when performing an optimization loop involving a shape change. In addition, the data size produced by analysis has been increasing year by year, and visualization becomes more difficult as calculations grow in scale, so speeding up post-processing is also a problem. This means that even if the core solver is accelerated, the benefit obtained is reduced unless the whole process can be sped up. Up to now the HPC community has turned its attention to speeding up solvers, but is that enough? Now is the time to face this clear and present problem. In this presentation, we introduce examples of new analysis methods that require no pre-processing and are parallelized for HPC environments, and discuss with examples how important this concept is. In addition, we describe the benefits of in-situ visualization in post-processing and introduce practical examples.
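One way to see how "pre-processing-free" analysis can work: instead of generating a body-fitted mesh from cleaned-up CAD, the surface geometry is embedded directly into a Cartesian grid and cells are simply flagged as solid or fluid. The toy sketch below illustrates that idea using an analytic signed-distance function as a stand-in for real (dirty) CAD data; the actual methods presented in the talk may differ.

    # Immersed-boundary-style cell flagging on a Cartesian grid: solid
    # cells are marked directly from the geometry, with no body-fitted
    # meshing step. A sphere stands in for real CAD data here.
    import numpy as np

    n = 64
    x = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

    # Signed distance to a sphere of radius 0.4 centred at the origin
    sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.4

    solid = sdf < 0.0    # cells inside the body, excluded from the flow
    fluid = ~solid       # cells the flow solver actually updates
    print(f"solid cells: {solid.sum()} of {n**3}")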

 

Ingrid Hotz, Linköping University

Title: Feature-based Analysis and Visualization in Flow Applications

Abstract: The increasing size and complexity of datasets originating from experiments or simulations raise new challenges for data analysis and visualization. Over recent years, much effort has been put into the development of visualization techniques for steady and unsteady flow fields. The resulting tools are widely used for the everyday visual analysis of flow fields. However, even with advanced visualization tools it is often difficult to understand inherent flow structures, since usually only the raw data is displayed. To ease access to complex flows, higher levels of abstraction can play a substantial role. This is the objective of feature-based visualization. In this talk I will focus on the use of topological methods in this context.
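As one concrete example of a topological method, critical points of a 2-D vector field (locations where the velocity vanishes) can be located and classified from the eigenvalues of the local Jacobian. A minimal numpy sketch on an invented analytic saddle field follows; real tools work on measured or simulated data and use more robust, cell-based locators.

    # Critical-point classification for a 2-D vector field: find where
    # u = v = 0 and classify via the eigenvalues of the Jacobian.
    import numpy as np

    n = 101
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    U, V = X, -Y                        # analytic saddle: u = x, v = -y

    # Jacobian entries via finite differences
    dudx, dudy = np.gradient(U, x, x)
    dvdx, dvdy = np.gradient(V, x, x)

    speed = np.hypot(U, V)
    i, j = np.unravel_index(np.argmin(speed), speed.shape)  # crude locator

    J = np.array([[dudx[i, j], dudy[i, j]],
                  [dvdx[i, j], dvdy[i, j]]])
    eig = np.linalg.eigvals(J)

    if np.all(np.abs(eig.imag) < 1e-12) and eig.real.min() < 0.0 < eig.real.max():
        kind = "saddle"
    elif np.all(eig.real > 0.0):
        kind = "source"
    elif np.all(eig.real < 0.0):
        kind = "sink"
    else:
        kind = "centre/focus"
    print(f"critical point near ({x[i]:.2f}, {x[j]:.2f}): {kind}")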

 

Evelyn Otero, KTH Royal Institute of Technology

Title: The effect of lossy data compression in computational fluid dynamics applications

Abstract: The need for large-scale simulations has been growing significantly in e-Science applications, which require more and more data storage and computing resources. Computational fluid dynamics (CFD) is an area where very large amounts of data are produced. In particular, in direct and large-eddy simulations (DNS and LES) a wide range of flow scales is simulated with high fidelity, leading to a large number of degrees of freedom. Thus, storage limitations and slow I/O (input/output) speed are among the main limitations when performing large-scale simulations. In this project we analyze the I/O performance of Nek5000 (a spectral element CFD code) and implement parallel I/O strategies to improve I/O performance and scaling. Lossy data compression can be used to mitigate such shortcomings. In particular, the Discrete Chebyshev Transform (DCT) has been used in the image compression community, as well as in CFD. In the present work we assess the use of the DCT in situations such as data post-processing and vortex identification, as well as restarts from compressed data fields; the latter is relevant in situations where the flow is highly sensitive to initial conditions. In the compression algorithm under consideration, the data is truncated using a priori error estimation, thus allowing total control over the error considered permissible. Note that this is an improvement with respect to previous compression algorithms. Here we illustrate the ability of the data compression algorithm to compress the data at very large scales and on complex grids, with a very good approximation of the total error.
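The core of such a transform-based scheme fits in a few lines: transform, discard coefficients whose combined energy stays below the permitted error, and invert. A minimal 1-D sketch follows, using scipy's DCT as a stand-in for the discrete Chebyshev transform; the toy signal and the threshold rule are illustrative, not the actual Nek5000 implementation.

    # Lossy compression by coefficient truncation with a priori L2
    # error control (orthonormal transform, so Parseval applies).
    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 512)
    f = np.sin(4 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # toy field

    c = dct(f, norm="ortho")        # forward transform

    tol = 0.05                      # permitted absolute L2 error (a priori)
    # Drop the smallest coefficients while their total energy stays
    # below tol**2; by Parseval this bounds the reconstruction error.
    order = np.argsort(np.abs(c))
    cum = np.cumsum(c[order] ** 2)
    keep = np.ones_like(c, dtype=bool)
    keep[order[cum < tol**2]] = False

    f_rec = idct(np.where(keep, c, 0.0), norm="ortho")
    err = np.linalg.norm(f - f_rec)
    print(f"kept {keep.sum()}/{c.size} coefficients, L2 error {err:.4f} <= {tol}")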

 

Niclas Jansson, KTH Royal Institute of Technology

Title: Towards Exascale in High-Order Computational Fluid Dynamics

Abstract: TBD

  

Program committee:

  • Prof. Erwin Laure, KTH Royal Institute of Technology
  • Dr. Philipp Schlatter, KTH Royal Institute of Technology
  • Dr. Niclas Jansson, KTH Royal Institute of Technology
  • Prof. Spencer Sherwin, Imperial College London
  • Dr. David Moxey, Imperial College London
  • Dr. Nick Johnson, The University of Edinburgh, EPCC

 

 

 

 

 

EU technology platforms

  • ETP4HPC - Several members of the consortium are already involved in the European Technology Platform for High Performance Computing (ETP4HPC), helping to shape its research agenda.

 

H2020 projects 

  • Intertwine - ExaFLOW considers using the hybrid programming models developed in Intertwine
  • NextGenIO - New memory and storage concepts relevant to ExaFLOW's I/O and fault-tolerance work are considered
  • SAGE - New memory and storage concepts relevant to ExaFLOW's I/O and fault-tolerance work are considered
  • ESCAPE - ExaFLOW considers applying the accelerator technology developed in ESCAPE to explicit DG methods

 

National projects

  • UK Turbulence Consortium - partners involved: ICL and SOTON
  • Swedish e-Science Research Center, SeRC - collaborations on efficient implementations and exascale technologies are pursued through the partner KTH
  • Linné FLOW Centre - ExaFLOW is connected to the centre through the partner KTH and is active in two research areas: "e-Science" and "Turbulence"; the latter is mainly relevant for the physical interpretation of the ExaFLOW test cases (wings, jet in crossflow)

Download material on "ExaFLOW use cases for Nek5000: incompressible jet in cross-flow and flow around a NACA4412 wing section"

 

This generic flow case of high practical relevance arises when a fluid jet issuing through a wall enters a boundary-layer flow along that wall. As its understanding is important in many real applications, e.g. smoke and pollutant plumes or fuel injection, this flow has been the subject of a number of experimental and numerical studies over the last decades. It is also considered a canonical flow problem with complex, fully three-dimensional dynamics which cannot be investigated under the simplifying assumptions that are commonly applicable to simpler flows. This makes the flow case a perfect tool for testing adaptive numerical methods, flow-stability studies and simulation capabilities, as results have been shown to depend strongly on a faithful resolution of the region close to the nozzle. A further benefit for testing feature-detection and I/O-reduction strategies is that the flow contains a variety of length scales due to boundary-layer turbulence, flow instability and the breakdown of the jet. We consider a circular pipe perpendicular to a flat plate.

 


 

CODES USED: Nek5000, NS3D

Download material on "ExaFLOW use case: Numerical simulation of the rear wake of a sporty vehicle"

 

The automotive use case focuses on the simulation of an unsteady turbulent flow originating from flow separation over the rear part of the Opel Astra GTC. This flow is characterised by the three-dimensional movement of vortical structures appearing near the rear roof spoiler. It is of interest to understand the interaction between the vortical structures and the aerodynamic coefficients in this highly sensitive region, for example the pressure coefficient cp.

In addition to the wind tunnel testing, this vehicle was aerodynamically developed using Computational Fluid Dynamics (CFD). The Reynolds number is Re = 6.3×10^6, using the wheelbase (L = 2.695 m) as the characteristic length and the INLET velocity (140 km/h) as the reference velocity.
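For reference, the quoted value follows directly from the stated quantities; the kinematic viscosity of air used below (about 1.66×10^-5 m²/s) is implied by the numbers rather than stated in the original setup:

    \[
      \mathrm{Re} \;=\; \frac{U_\infty\, L}{\nu}
      \;=\; \frac{(140/3.6)\;\mathrm{m/s} \,\times\, 2.695\;\mathrm{m}}{1.66\times 10^{-5}\;\mathrm{m^2/s}}
      \;\approx\; 6.3\times 10^{6}
    \]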

Figure 1: Computational domain of the complete vehicle model in the wind tunnel.

 

Numerical method

The complete simulation model consists of the entire vehicle geometry (4.468 m long, 1.991 m wide and 1.449 m high) and the virtual wind tunnel (51 m long, 20 m wide and 12 m high). All boundary surfaces of the virtual wind tunnel (INLET, OUTLET, SIDE1, SIDE2, GROUND and TOP) are identified in Figure 1. At the INLET the velocity was set to a constant value of 140 km/h. At the OUTLET a pressure outlet condition was applied. A symmetry condition was used on the TOP as well as on the SIDE1 and SIDE2 surfaces. At the GROUND surface a moving wall condition was applied. All other surfaces of the model were set to the no-slip condition using a non-equilibrium wall function, since the first cell was placed in the log layer, giving typical values of y+ ≈ 30.
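To give a feel for what y+ ≈ 30 means in physical units, the small Python sketch below estimates the corresponding first-cell height from a flat-plate skin-friction correlation; the air properties and the correlation are textbook approximations, not values taken from the actual case setup.

    # Estimate the wall-normal first-cell height for a target y+ value.
    # Air properties and the flat-plate correlation are assumptions.
    import math

    U = 140.0 / 3.6       # reference velocity [m/s] (140 km/h)
    L = 2.695             # characteristic length: wheelbase [m]
    nu = 1.66e-5          # kinematic viscosity of air [m^2/s] (assumed)
    rho = 1.2             # density of air [kg/m^3] (assumed)
    y_plus = 30.0         # target: first cell centred in the log layer

    Re = U * L / nu                   # ~6.3e6, as quoted above
    cf = 0.0576 * Re ** (-0.2)        # flat-plate skin-friction correlation
    tau_w = 0.5 * rho * U**2 * cf     # wall shear stress [Pa]
    u_tau = math.sqrt(tau_w / rho)    # friction velocity [m/s]
    y1 = y_plus * nu / u_tau          # first-cell height [m]

    print(f"Re = {Re:.3g}, u_tau = {u_tau:.2f} m/s, first cell ~ {y1*1e3:.2f} mm")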

Simulation results

The flow field around the vehicle is illustrated in Figure 2. The flow is characterised by a large stagnation region at the front of the vehicle and a smaller one at the windshield. Low velocity values are visible in the airflow through the engine compartment and underbody, as well as in the wake of the vehicle. The locations of the main turbulent structures are identified by the isosurface of the mean total pressure <p> = 0 bar, as illustrated in Figure 3. The isosurface is coloured by the turbulent kinetic energy k and shows that the main structures originate from each wheel (4 structures), each side mirror (2 structures) and from the rear end of the vehicle (1 structure).

Figure 2: Simulation results (RANS) of the complete vehicle model. 2D contour of the velocity magnitude at the middle of the vehicle. 

Figure 3: Simulation results of the complete vehicle model. 3D isosurface of the mean total pressure <p> = 0 bar. 
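As a side note, the mean total pressure visualised in Figure 3 is cheap to evaluate from averaged fields. A minimal numpy sketch follows; the array names, the gauge-pressure convention and the random placeholder data are assumptions, and it neglects the fluctuating contribution to the mean.

    # Mean total pressure from time-averaged fields (placeholder data).
    import numpy as np

    rng = np.random.default_rng(0)
    rho = 1.2                                 # air density [kg/m^3] (assumed)
    # u, v, w: mean velocities; p: mean static gauge pressure [Pa]
    u, v, w = (rng.standard_normal((64, 64, 64)) for _ in range(3))
    p = 10.0 * rng.standard_normal((64, 64, 64))

    p_total = p + 0.5 * rho * (u**2 + v**2 + w**2)  # mean total pressure [Pa]

    # Cells near the <p> = 0 level mark momentum-deficit regions
    # (wakes, separated flow), which is what Figure 3 shows.
    near_iso = np.abs(p_total) < 0.5
    print(f"{near_iso.mean():.1%} of cells lie near the zero level")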

 

CODES USED: Nektar++

Download material on "ExaFLOW use cases: Wing tip vortex, Imperial Front Wing and McLaren front section"

 

Based on the McLaren 17D, this geometry was provided to Imperial College London for Jonathan Pegrum’s PhD thesis, “Experimental study of the vortex system generated by a Formula 1 front wing”. 

Front Wing

Figure 1: Streamlines from a spectral/hp element simulation of the unsteady LES flow past the Imperial Front Wing geometry

It is representative of one of the major points of interest in Formula 1, i.e. the interaction of vortical structures generated by the front wing endplate with the front wheel wake. As such, it exhibits complex flow dynamics with vortex interaction and merging, and these vortices are subjected to strong adverse and favourable pressure gradients as they negotiate the tyre, affecting their stability. Capturing the behaviour of the wheel wake is also a challenge in its own right. Detailed experimental results (PIV and total pressure probe) are available in Pegrum's thesis for validation, and we aim to provide a set of geometries for the different experimental conditions.

 


Figure 2: Experimental setup (left) and laser smoke visualisation behind the endplate (right), courtesy of Jonathan Pegrum

 

CODE USED: Nektar++