Computational Infrastructure
 
The multi-physics, multi-fidelity simulation requirements of Stanford's PSAAP center demand a modern, massively parallel computational infrastructure. A key component of this infrastructure will be an accurate and efficient reacting compressible flow solver with a RANS (Reynolds-Averaged Navier-Stokes) treatment of turbulence. In addition, the infrastructure must support large ensemble simulations involving thousands of runs, and provide efficient, parallel tools for analyzing the resulting large volumes of data. Because the characterization of turbulence and its interaction with other aspects of our PSAAP problem remains a critical uncertainty, the infrastructure must also support an LES/DNS solver in which turbulence can be largely or completely resolved and these uncertainties reduced. Finally, development efforts must be cognizant of the current trajectory of computer hardware toward highly heterogeneous processors and deep memory hierarchies.
 
We are pursuing two approaches to support the Center’s simulation requirements:
  1. To support current and near-future requirements on existing platforms, a C++/MPI infrastructure called Mum has been developed and is described below.
  2. For future (or existing non-traditional) architectures, we are developing a domain-specific language (DSL) for mesh-based PDEs called Liszt, also described below.
 
 
Mum
Mum is the C++ computational infrastructure being developed to support parallel, scalable solvers for mesh-based methods (e.g., the finite volume, finite element, finite difference, and discontinuous Galerkin methods). Mum is NOT an acronym, although some want the "um" to stand for "unstructured mesh". Fortunately, Mum also contains support for structured grids. The name came about after several application codes were developed in this new but then-unnamed set of classes. Resisting the temptation to invent fancy acronyms, we named these codes Joe, Ray and Charles. These codes needed a mother… thus Mum.
 
 
Joe
Joe is a state-of-the-art multiphysics RANS/URANS solver. It is based on a cell-centered finite-volume method. Multiple time integration schemes are available. Turbulence modeling options include Spalart-Allmaras, Menter-SST, and k-omega (Wilcox 2006). Different models can be used for mixing or combustion:
  • Single gas with variable properties (temperature-dependent specific heat coefficient)
  • Binary mixing (with constant specific heat coefficients for the individual gases)
  • Mixing (with temperature-dependent specific heat coefficients)
  • Steady flamelet model
  • Flamelet Progress Variable Approach (FPVA) model
  • Flamelet Progress Variable Approach (FPVA) model with coefficients
 
 
Charles
The Charles LES code solves the spatially-filtered compressible Navier-Stokes equations on unstructured grids using a novel control-volume based finite-volume method where the flux is computed at each control volume face using a blend of a non-dissipative central flux and a dissipative upwind flux, i.e.:
F = (1 − α) F_central + α F_upwind
where 0 ≤ α ≤ 1 is a blending parameter. This blending approach is often the basis of implicit approaches to LES, where the blending parameter is chosen as a global constant with a value large enough to provide all the necessary dissipation (and potentially quite a bit more). Charles does not use the implicit LES approach; instead, an explicit sub-grid scale model is used to represent the effect of the unresolved scales. To minimize numerical dissipation relative to implicit LES approaches, α is allowed to vary spatially so that it can be set to zero in regions where the grid quality is good and the scheme based on the central flux is discretely stable and non-dissipative. In regions of less-than-perfect grid quality, however, the central scheme can introduce numerical instabilities that must be prevented from contaminating or destabilizing the solution by locally increasing α. The novel aspect of Charles is its algorithm for computing this locally optimal α using a preprocessing analysis of the Summation-by-Parts (SBP) properties of the nominally central operators. For shock capturing, Charles uses a hybrid central/WENO scheme.
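To illustrate the face-by-face blend, the following minimal C++ sketch applies the formula above to a toy scalar-advection problem on a 1D periodic grid. The data layout, the hand-flagged faces, and the flux expressions are assumptions made for this example only; they do not reflect the actual Charles data structures or flux routines.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int n = 16;          // number of cells on a 1D periodic grid
    const double a = 1.0;      // constant advection speed (a > 0)
    const double pi = 3.14159265358979323846;

    // Cell-centered field and a per-face blending parameter alpha in [0, 1].
    std::vector<double> u(n), alpha(n, 0.0);
    for (int i = 0; i < n; ++i) u[i] = std::sin(2.0 * pi * i / n);
    // Pretend a preprocessing analysis flagged faces 4..7 as needing upwind dissipation.
    for (int f = 4; f <= 7; ++f) alpha[f] = 1.0;

    // Face f sits between cells f-1 (left) and f (right), with periodic wrap-around.
    std::vector<double> flux(n);
    for (int f = 0; f < n; ++f) {
        const double uL = u[(f - 1 + n) % n];
        const double uR = u[f];
        const double fCentral = 0.5 * a * (uL + uR);  // non-dissipative central flux
        const double fUpwind  = a * uL;               // dissipative upwind flux (a > 0)
        flux[f] = (1.0 - alpha[f]) * fCentral + alpha[f] * fUpwind;
    }

    for (int f = 0; f < n; ++f)
        std::printf("face %2d  alpha = %.1f  flux = % .4f\n", f, alpha[f], flux[f]);
    return 0;
}
```
In Charles itself, the per-face value of α is determined by the SBP-based preprocessing analysis described above rather than flagged by hand as in this toy example.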
 
 
 
Ray
The Ray simulation tool is designed to support the setup, execution, and analysis of ensemble simulations. It is built on the Mum infrastructure and leverages the parallel I/O capabilities common to the solvers.
 
 
Liszt
The drive for power efficiency is causing a shift to heterogeneous computer platforms. Future supercomputers will combine features from distributed-memory clusters (MPI), multi-core SMPs (e.g., 32-core, 4-socket shared-memory systems programmed with threads/locks or OpenMP), and many-core GPUs (e.g., Fermi or Larrabee, with separate GPU memory and a SIMT programming model such as CUDA). Examples already exist, including LANL Roadrunner and ORNL Fermi.
 
At present, efficient utilization of this new hardware has required major software rewrites. A critical question is thus:
“Is it possible to write one program and run it on all these machines?”
The Liszt research project is our answer to this question. We believe it is possible using domain-specific languages (DSLs). DSLs exploit domain knowledge to provide the following advantages:
  • Productivity: Separate domain expertise (computational science) from computer science expertise
  • Portability: Run on a wide range of platforms
  • Performance: Super-optimize using a combination of domain knowledge and platform knowledge
  • Innovation: Allow vendors to change architectures and programming models in revolutionary ways
Liszt is a domain-specific language for writing mesh-based programs that solve partial differential equations. Liszt code uses a proper subset of the syntax and typing rules of the Scala programming language. Currently, Liszt programs are written in Scala; a plugin to the Scala compiler translates the code into an intermediate representation used by the Liszt compiler.
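To give a feel for the kind of computation Liszt is designed to express (loops over mesh topology operating on fields attached to mesh elements), the sketch below is written in plain Scala with toy stand-in definitions. The Mesh class and field arrays here are illustrative assumptions only, not the Liszt API; Liszt itself provides mesh, topology, and field abstractions as built-in language constructs, and its compiler is free to choose data layouts and parallelization strategies for the target platform.

```scala
object LisztStyleSketch {
  // A tiny 1D "mesh": cells 0..nCells-1 and the faces between them.
  final case class Mesh(nCells: Int) {
    def cells: Range = 0 until nCells
    def interiorFaces: Range = 1 until nCells // face f separates cells f-1 and f
  }

  def main(args: Array[String]): Unit = {
    val mesh = Mesh(8)

    // A cell-centered field (e.g., temperature) and a face-based flux field.
    val temperature = Array.tabulate(mesh.nCells)(i => i.toDouble * i)
    val flux = Array.fill(mesh.nCells + 1)(0.0)

    // Loop over interior faces: a simple diffusive flux between neighbor cells.
    for (f <- mesh.interiorFaces) {
      flux(f) = temperature(f) - temperature(f - 1)
    }

    // Gather the face fluxes back to each cell (a divergence-like residual).
    val residual = Array.fill(mesh.nCells)(0.0)
    for (c <- mesh.cells) {
      residual(c) = flux(c + 1) - flux(c)
    }

    println(residual.mkString(", "))
  }
}
```
Because such a program is expressed in terms of mesh topology rather than explicit array indexing or message passing, the compiler can in principle map the same source to MPI clusters, shared-memory threads, or GPUs, which is the portability goal described above.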
 
 
People
 
Name                   Email address              Focus
Dr. Frank Ham          fham@stanford.edu          Mum infrastructure and solvers
Dr. Rene Pecnik        renep@stanford.edu         Mum infrastructure and solvers
Dr. Vincent Terrapon   terrapon@stanford.edu      Mum infrastructure and solvers
Prof. Pat Hanrahan     hanrahan@cs.stanford.edu   Liszt DSL
Prof. Alex Aiken       aiken@cs.stanford.edu      Liszt DSL
Prof. Eric Darve       darve@stanford.edu         Liszt DSL
Prof. Juan Alonso      jjalonso@stanford.edu      Liszt DSL
Dr. Frank Ham          fham@stanford.edu          Liszt DSL
Zach Devito            zdevito@stanford.edu       Liszt DSL

 
