The code is available for download and installation from either a Subversion (SVN) repository or a Git repository. Links to both repositories are given at: https://nek5000.mcs.anl.gov/

The Git repository always mirrors the SVN one. There are no official releases, since the Nek community of users and developers prefers immediate access to their contributions. However, since the software is updated on a constant basis, tags for stable releases as well as for the latest releases are available, so far only in the Git mirror of the code.

The reason for this is that SVN is maintained mainly for senior users who already have their own coding practices, and it will continue to be maintained at Argonne National Laboratory (using the respective accounts at ANL); the Git repository is maintained on GitHub. A similar procedure is followed for the documentation, to which developers and users are free to contribute by editing and adding descriptions of features; these contributions are pushed back to the repository by issuing pull requests, which allow the Nek team to assess whether publication is in order. All information about these procedures is documented on the homepage. KTH maintains a close collaboration with the Nek team at ANL.

The code is run daily through a series of regression tests via Buildbot (to be transferred to Jenkins). The checks range from functional testing and compiler-suite testing to unit testing. So far not all solvers benefit from unit testing, but work is ongoing in this direction. Successful Buildbot runs determine whether a version of the code is deemed stable.

A suite of examples is available with the source code; these illustrate modifications of the geometry as well as of the solvers, and implementations of various routines. Users are encouraged to submit their own example cases to be included in the distribution.

The use cases within ExaFLOW which involve Nek5000 will be packaged as examples and included in the repository for future reference.

The DNS code ns3d is based on the complete Navier-Stokes equations for a compressible fluid, with the assumptions of an ideal gas and the Sutherland law for air. The differential equations are discretized in the streamwise and wall-normal directions with 6th-order compact or 8th-order explicit finite differences. Time integration is performed with a four-step, 4th-order Runge-Kutta scheme. Implicit and explicit filtering in space and time is possible if resolution or convergence problems occur. The code has been continuously optimized for vector and massively parallel computer systems, up to the current Cray XC40 system. Boundary conditions for sub- and supersonic flows can be appropriately specified at the boundaries of the integration domain. Grid transformations are used to cluster grid points in regions of interest, e.g. near a wall or a corner. For parallelization, the domain is split into several subdomains as illustrated in the figure.
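For reference, the Sutherland law mentioned above, together with the ideal-gas assumption, can be written as follows; the Sutherland constant shown is the standard value for air, while the reference values actually used in ns3d are not specified here and may differ:

\[
  \mu(T) \;=\; \mu_{\mathrm{ref}}\left(\frac{T}{T_{\mathrm{ref}}}\right)^{3/2}\frac{T_{\mathrm{ref}}+S}{T+S},
  \qquad S \approx 110.4\ \mathrm{K}\ \text{for air},
  \qquad p = \rho R T .
\]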

                             NS3D code

Illustration of grid lines (black) and subdomains (red). A small step is located at Re_x = 3.3E+05.

Nektar++ is a tensor-product based finite element package designed to allow one to construct both efficient classical low-polynomial-order h-type solvers (where h is the size of the finite element) and higher-order piecewise-polynomial p-type solvers. The framework currently has the following capabilities:

  • Representation of one, two and three-dimensional fields as a collection of piecewise continuous or discontinuous polynomial domains.
  • Segment, plane and volume domains are permissible, as well as domains representing curves and surfaces (dimensionally-embedded domains).
  • Hybrid shaped elements, i.e. triangles and quadrilaterals in 2D, or tetrahedra, prisms and hexahedra in 3D.
  • Both hierarchical and nodal expansion bases.
  • Continuous or discontinuous Galerkin operators.
  • Cross platform support for Linux, Mac OS X and Windows.

Nektar++ comes with a number of solvers and also allows one to construct a variety of new solvers. In this project we will primarily be using the incompressible Navier-Stokes solver.
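To illustrate what a tensor-product evaluation looks like in practice, the following is a minimal, self-contained sketch in Python/NumPy. It does not use the Nektar++ API; the Legendre modal basis, the polynomial order and the sample points are illustrative assumptions. The point is the sum-factorisation step, in which the 1D basis operator is applied one direction at a time:

import numpy as np
from numpy.polynomial.legendre import legval

def basis_matrix(order, points):
    # B[i, p] = P_p(points[i]) for Legendre polynomials P_0..P_order
    B = np.zeros((len(points), order + 1))
    for p in range(order + 1):
        coeffs = np.zeros(p + 1)
        coeffs[p] = 1.0
        B[:, p] = legval(points, coeffs)
    return B

P = 7                                        # polynomial order per direction (assumption)
Q = P + 2                                    # number of sample points per direction
xi = np.cos(np.pi * np.arange(Q) / (Q - 1))  # sample points in [-1, 1] (assumption)

B = basis_matrix(P, xi)                      # (Q, P+1) one-dimensional basis table
u_hat = np.random.rand(P + 1, P + 1)         # expansion coefficients of one quadrilateral element

# Sum factorisation: u(xi1_i, xi2_j) = sum_{p,q} u_hat[p,q] * phi_p(xi1_i) * phi_q(xi2_j),
# evaluated as two successive matrix products rather than a double sum at every point.
u_vals = B @ u_hat @ B.T                     # values on the Q x Q tensor-product grid
print(u_vals.shape)                          # -> (Q, Q)

Applying the 1D operator per direction reduces the cost of evaluating a quadrilateral expansion from O(Q^2 P^2) to O(Q P^2 + Q^2 P) operations, which is what makes higher p-order operators affordable.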

 

 


The SBLI code solves the governing equations of motion for a compressible Newtonian fluid using a high-order discretisation with shock capturing. An entropy-splitting approach is used for the Euler terms, and all spatial discretisations are carried out using a fourth-order central-difference scheme. Time integration is performed using compact-storage Runge-Kutta methods with third- and fourth-order options. Stable high-order boundary schemes are used, along with a Laplacian formulation of the viscous and heat-conduction terms to prevent the odd-even decoupling associated with central schemes.
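For concreteness, the interior stencil of a standard fourth-order central difference on a uniform grid with spacing \(\Delta x\) is

\[
  \left.\frac{\partial f}{\partial x}\right|_{i} \;\approx\; \frac{-f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2}}{12\,\Delta x},
\]

while the entropy-split form of the Euler terms and the stable boundary closures mentioned above modify how this operator is applied to the nonlinear terms and near boundaries; the formula is only the generic interior operator, not the full SBLI discretisation.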

 

                                 SBLI code

 

 

Date: June 22, 2017

Location: Frankfurt Marriott Hotel across Messe Frankfurt, Hamburger Allee 2, 60486 Frankfurt am Main, Germany

Registration: http://www.isc-hpc.com/registration.html

 

The complex nature of turbulent fluid flows implies that the computational resources needed to accurately model problems of industrial and academic relevance are virtually unbounded. Computational Fluid Dynamics (CFD) is therefore a natural driver for exascale computing and has the potential for substantial societal impact, such as reduced energy consumption, alternative sources of energy, improved health care, and improved climate models.

Extreme-scale CFD poses several cross-disciplinary challenges, e.g. algorithmic issues in scalable solver design, the handling of extremely large data through compression and in-situ analysis, and resilience and energy awareness in both hardware and algorithm design. This wide range of topics makes exascale CFD relevant to a wider HPC audience, extending beyond the traditional fluid dynamics community.

This workshop aims at bringing together the CFD community as a whole, from HPC experts to domain scientists, discussing current and future challenges on the path towards exascale fluid dynamics simulations, and facilitating international collaboration.

 

Agenda:

14:00 - 14:10  Welcome/Intro

14:10 - 14:45  Alexander Heinecke,  Intel

14:45 - 15:20  Ingrid Hotz, LiU

15:20 - 15:55  Evelyn Otero, KTH

16:00 - 16:30  Coffee break!

16:30 - 17:05  Keiji Onishi, RIKEN AICS

17:05 - 17:40  Niclas Jansson, KTH  (Wrap-up)

 

Speakers:

Alexander Heinecke,  Intel

Title: Seismic Simulations in the Days of Deep Learning Hardware

Abstract: Deep learning applications are developing into the next killer app and have substantially influenced current hardware platforms towards more regular compute units or even fixed-function units for dense linear algebra operations. In this talk we will discuss how seismic wave equations can be solved efficiently on modern hardware using FDM and DG-FEM. In both cases we introduce novel implementation schemes which optimize bandwidth in the case of FDM (via vector folding) and the efficiency of small sparse matrix products in the case of DG-FEM (via concurrent forward runs). As a test platform we used the Cori Phase-II supercomputer hosted at NERSC, which comprises more than 9,000 Intel Xeon Phi processors.

 

Keiji Onishi, RIKEN AICS

Title: Encouragement of Accelerating Pre/Post processing towards Exascale CFD

Abstract: Acceleration of pre- and post-processing is a hidden task to be solved by HPC technology for exascale CFD. At the sites where CFD is actually used, the working time spent on pre-processing, which relates to shape modification based on 'dirty' CAD data, is still a bottleneck and hinders speeding up the entire CFD design process. This also becomes a problem when performing an optimization loop involving a shape change. In addition, the data size obtained by the analysis has been increasing year by year, and visualization becomes more difficult as the calculations grow in scale; speeding up this post-processing work is also a problem. This means that even if the core solver is accelerated, the benefit obtained will be reduced if the whole process cannot be sped up. Up until now the HPC community has turned its attention to speeding up the solvers, but is that enough? Now is the time to face this clear and present problem. In this presentation, we introduce examples of new analysis methods that do not require pre-processing and are parallelized in an HPC environment, and we discuss how important this concept is by means of examples. In addition, we will describe the benefits of in-situ visualization in post-processing and introduce practical examples.

 

Ingrid Hotz, Linköping University

Title: Feature-based Analysis and Visualization in Flow Applications

Abstract: The increasing size and complexity of datasets originating from experiments or simulations raise new challenges for data analysis and visualization. Over the last years, much effort has been put into the development of visualization techniques for steady and unsteady flow fields. The resulting tools are widely used for the everyday visual analysis of flow fields. However, even with advanced visualization tools it is often difficult to understand inherent flow structures, since usually only the raw data is displayed. To ease access to complex flows, higher levels of abstraction can play a substantial role. This is the objective of feature-based visualization. In this talk I will focus on the use of topological methods in this context.

 

Evelyn Otero, KTH Royal Institute of Technology

Title: The effect of lossy data compression in computational fluid dynamics applications

Abstract: The need for large-scale simulations has been growing significantly in e-Science applications, which require more and more data storage and computing resources. Computational fluid dynamics (CFD) is an area where very large amounts of data are produced. In particular, in direct and large-eddy simulations (DNS and LES) a wide range of flow scales is simulated with high fidelity, leading to a large number of degrees of freedom. Thus, storage limitations and slow I/O (input/output) speed are some of the main limitations when performing large-scale simulations. In this project we analyze the I/O performance of Nek5000 (a spectral element CFD code) and implement parallel I/O strategies to improve I/O performance and scaling. Lossy data compression can be used to mitigate such shortcomings. In particular, the Discrete Chebyshev Transform (DCT) has been used in the image compression community, as well as in CFD. In the present work we assess the use of the DCT in situations such as data post-processing, vortex identification, and restarts from compressed data fields; the latter is relevant in situations where the flow is highly sensitive to the initial conditions. In the compression algorithm under consideration, the data is truncated using a priori error estimation, thus allowing total control over the error considered permissible. Note that this is an improvement with respect to previous compression algorithms. Here we illustrate the ability of the data compression algorithm to compress the data at very large scales and on complex grids, with a very good approximation of the total error.
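As a generic illustration of truncation with a priori error control (not the actual Nek5000/ExaFLOW implementation; the 1D test field, grid size and error budget below are made up), a Chebyshev-transform compression step can be sketched in Python/NumPy as:

import numpy as np

# Minimal 1D illustration of lossy compression by truncating Chebyshev
# coefficients against an a priori error budget. Generic sketch only; the
# field, grid size and tolerance are illustrative assumptions.
N = 64
x = np.cos(np.pi * np.arange(N + 1) / N)        # Chebyshev-Gauss-Lobatto points
f = np.tanh(5.0 * x) + 0.1 * np.sin(20.0 * x)   # sample field at these points

# Discrete Chebyshev transform: point values -> coefficients a_k such that
# f(x_j) = sum_k a_k T_k(x_j) (direct O(N^2) cosine-sum form for clarity).
j = np.arange(N + 1)
C = np.cos(np.pi * np.outer(j, j) / N)          # C[k, j] = cos(pi*k*j/N) = T_k(x_j)
w = np.ones(N + 1); w[0] = w[-1] = 0.5          # end-point weights (1/c_j)
a = (2.0 / N) * (C @ (w * f))
a[0] *= 0.5; a[-1] *= 0.5                       # end-coefficient scaling (1/c_k)

# A priori truncation: |T_k(x)| <= 1 on [-1, 1], so zeroing a set of
# coefficients changes the reconstruction by at most the sum of their
# magnitudes. Drop the smallest coefficients while staying within the budget.
tol = 1e-3
order = np.argsort(np.abs(a))                   # smallest magnitudes first
drop = order[np.cumsum(np.abs(a[order])) <= tol]
a_truncated = a.copy(); a_truncated[drop] = 0.0

# Reconstruct on the grid and verify the error stays within the budget.
f_rec = C.T @ a_truncated                       # f_rec[j] = sum_k a_k T_k(x_j)
print("kept %d of %d coefficients, max error %.2e (budget %.1e)"
      % (a.size - drop.size, a.size, np.max(np.abs(f_rec - f)), tol))

Because the Chebyshev polynomials are bounded by one on [-1, 1], the sum of the magnitudes of the dropped coefficients bounds the pointwise reconstruction error, which is the sense in which the error budget is controlled a priori.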

 

Niclas Jansson, KTH Royal Institute of Technology

Title: Towards Exascale in High-Order Computational Fluid Dynamics

Abstract: TBD

  

Program committee:

  • Prof. Erwin Laure, KTH Royal Institute of Technology
  • Dr. Philipp Schlatter, KTH Royal Institute of Technology
  • Dr. Niclas Jansson, KTH Royal Institute of Technology
  • Prof. Spencer Sherwin, Imperial College London
  • Dr. David Moxey, Imperial College London
  • Dr. Nick Johnson, The University of Edinburgh, EPCC