The main goal of the project is to address current algorithmic bottlenecks and thereby enable the use of accurate CFD codes for problems of practical engineering interest. The focus will be on several simulation aspects, including:
- accurate error control and adaptive mesh refinement in complex computational domains,
- solver efficiency via mixed discontinuous and continuous Galerkin methods and appropriate optimised preconditioners,
- strategies to ensure fault tolerance and resilience,
- heterogeneous modelling to allow for different solution algorithms in different domain zones,
- parallel input/output for extreme data, employing novel data reduction algorithms,
- energy awareness of high-order methods.
Specifically, we are going to address the following problems:
- In complex flow simulations, a priori knowledge of the flow physics, and of the regions of the domain that contain the dominant flow features, is generally not available, making adaptive techniques crucial for large-scale computational problems. From the perspective of algorithmic development, the work falls into two broad strands: scalable, load-balanced mesh-refinement strategies, and effective error estimators, based on the spectral discretisation within each element, that indicate which regions of the flow domain require additional resolution or coarsening.
- Communication topologies in exascale systems will be inherently heterogeneous and will necessitate new algorithms whose communication patterns align with the underlying network infrastructure. Exascale systems will require hierarchical parallelisation strategies that essentially differentiate between intra- and inter-node parallelism. Achieving good efficiency on both levels while exploiting existing algorithms is a challenge, and we therefore propose a combination of different algorithmic approaches.
- On the next generation of large-scale computing platforms, the number of computing cores will be so large that the probability of hardware faults occurring during a large-scale simulation becomes significant. It is thus essential that algorithms be resilient, allowing the computation to detect faults and recover from them. Ensuring fault tolerance and resilience is a critical component of developing simulation tools suitable for the exascale.
- In complex flow simulations, the physics of the flow can differ drastically in different regions of the domain. Heterogeneous modelling allows the use of different representations of the physics depending on the level of detail required. At the exascale a key challenge to overcome is in maintaining scalable performance when interfacing the models in adjacent regions.
- Due to the deep memory hierarchy of large-scale systems, I/O is becoming one of the key bottlenecks to overcome. The problem is compounded in CFD simulations, which discretise the computational domain with a large number of data points and represent the flow by a collection of scalar and vector fields at these points. Overall this leads to data sets that contain an order of magnitude more data than the mesh degrees of freedom. This “raw” data encodes the flow physics implicitly and with considerable redundancy: multiple data points capture the same physical phenomenon and hold similar or identical values. We propose innovative in-situ data reduction schemes, in conjunction with parallel I/O strategies, to alleviate these problems.
- Finally, independent of the problem domain, the energy consumption of exascale systems is becoming a limiting factor. Hardware solutions will likely have to work in conjunction with energy-efficient and energy-aware algorithms and implementations to maintain energy consumption at an acceptable level.
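To illustrate the spectral error estimators mentioned in the adaptivity bullet above, the sketch below fits an exponential decay rate to one element's modal coefficients and flags slowly decaying spectra as under-resolved. This is only an illustrative model of the idea, not ExaFLOW code; the function names and the threshold `sigma_min` are hypothetical.

```python
import math

def decay_rate(coeffs):
    """Least-squares fit of log|a_k| ~ log(C) - sigma*k over one element's
    modal coefficients; sigma estimates the spectral decay rate."""
    pts = [(k, math.log(abs(a))) for k, a in enumerate(coeffs) if abs(a) > 1e-14]
    n = len(pts)
    mean_k = sum(k for k, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    num = sum((k - mean_k) * (y - mean_y) for k, y in pts)
    den = sum((k - mean_k) ** 2 for k, _ in pts)
    return -num / den  # large positive sigma => rapid exponential decay

def needs_refinement(coeffs, sigma_min=1.0):
    """Flag an element whose coefficients decay too slowly as under-resolved."""
    return decay_rate(coeffs) < sigma_min

# A smooth solution: coefficients decay exponentially -> resolved.
smooth = [math.exp(-2.0 * k) for k in range(8)]
# An under-resolved element: slow algebraic decay -> refine.
rough = [1.0 / (k + 1) for k in range(8)]
```

In a real solver the same test would be applied per element after each adaptation cycle, with coarsening triggered by very fast decay.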
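The hierarchical intra-/inter-node parallelisation described above can be pictured as a two-level reduction. The plain-Python sketch below simulates it without MPI (the name `two_level_allreduce` and the flat `ranks_per_node` mapping are illustrative assumptions): ranks reduce within a node first, and only one leader per node participates in the network-wide step.

```python
def two_level_allreduce(values, ranks_per_node):
    """Simulate a hierarchical sum-allreduce: ranks first reduce within
    their node (cheap shared-memory traffic), then one leader per node
    takes part in the inter-node reduction (expensive network traffic)."""
    # Stage 1: intra-node reduction to each node's leader.
    nodes = [values[i:i + ranks_per_node]
             for i in range(0, len(values), ranks_per_node)]
    node_sums = [sum(node) for node in nodes]
    # Stage 2: inter-node reduction among the node leaders only.
    total = sum(node_sums)
    # Stage 3: leaders broadcast the result back within each node.
    return [total] * len(values)
```

With real MPI the node grouping would come from splitting the communicator by shared-memory locality rather than from a fixed `ranks_per_node`.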
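Checkpoint/restart is one common pattern behind the fault-resilience goal. A minimal toy sketch, assuming a deterministic solver update and a single injected fault (`run_with_restart` and its parameters are hypothetical names, not project APIs):

```python
def run_with_restart(n_steps, checkpoint_every, fail_at=None):
    """Toy time-stepping loop with checkpoint/restart resilience.
    The state advances by a deterministic update; on a (simulated)
    fault the loop rolls back to the last checkpoint instead of
    aborting the whole computation."""
    state, step = 0.0, 0
    checkpoint = (state, step)
    faulted = False
    while step < n_steps:
        if step % checkpoint_every == 0:
            checkpoint = (state, step)      # persist restart data
        if fail_at is not None and step == fail_at and not faulted:
            faulted = True                  # inject one hardware fault
            state, step = checkpoint        # recover from last checkpoint
            continue
        state = state + 0.5 * step          # stand-in for a solver update
        step += 1
    return state
```

Because the update is deterministic, a run that suffers a fault and restarts reproduces the fault-free result, at the cost of recomputing the steps since the last checkpoint.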
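For the in-situ data reduction bullet, one simple scheme combines lossy thresholding of negligible values with lossless entropy coding before the data is written out. The sketch below uses only the Python standard library; the threshold value and function names are illustrative, not the project's actual reduction algorithm.

```python
import struct
import zlib

def compress_field(values, threshold=1e-6):
    """In-situ reduction sketch: zero out negligible values (lossy
    thresholding), then entropy-code the bytes losslessly with zlib."""
    filtered = [v if abs(v) >= threshold else 0.0 for v in values]
    raw = struct.pack(f"{len(filtered)}d", *filtered)
    return zlib.compress(raw)

def decompress_field(blob, n):
    """Recover the (thresholded) field from its compressed bytes."""
    return list(struct.unpack(f"{n}d", zlib.decompress(blob)))

# A field with many near-zero entries compresses far better once thresholded.
field = [1.0 if i % 100 == 0 else 1e-9 for i in range(1000)]
blob = compress_field(field, threshold=1e-6)
restored = decompress_field(blob, len(field))
```

The reconstruction error is bounded by the chosen threshold, which is the trade-off such lossy in-situ schemes must expose to the user.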
The project has the following five main objectives:
- Mesh adaptivity, heterogeneous modelling and resilience
- Strong scaling at exascale using a mixed Continuous Galerkin-Hybridizable Discontinuous Galerkin (CG-HDG) approach
- I/O in ExaFLOW
- Validation and application use cases
- Energy-efficient algorithms