Computational Infrastructure 
The multiphysics, multifidelity simulation requirements of Stanford’s PSAAP center require a modern, massively parallel computational infrastructure. A key component of this infrastructure will be an accurate and efficient reacting compressible flow solver with a RANS (Reynolds-Averaged Navier-Stokes) treatment of turbulence. In addition, the infrastructure must support large ensemble simulations involving thousands of runs, and provide efficient, parallel tools for analyzing the resulting mountain of data. The characterization of turbulence and its interaction with other aspects of our PSAAP problem remains a critical uncertainty, so the infrastructure must also support an LES/DNS solver where turbulence can be largely or completely resolved and these uncertainties reduced. Finally, development efforts must be cognizant of the current trajectory of computer hardware towards highly heterogeneous processors and deep memory hierarchies. 
We are pursuing two approaches to support the Center’s simulation requirements:

Mum  
Mum is the C++ computational infrastructure being developed to support parallel, scalable solvers for mesh-based methods (e.g., finite volume method, finite element method, finite difference method, discontinuous Galerkin method). Mum is NOT an acronym, although some want the “um” to stand for “unstructured mesh”. Fortunately, Mum also contains support for structured grids. The name came about after several application codes were developed in this new but then unnamed set of classes. Defying the temptation to develop fancy acronyms, we named these codes Joe, Ray and Charles. These codes needed a mother… thus Mum.  
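Mum’s actual class interfaces are not reproduced here, but the common structural idea behind mesh-based solvers (finite volume, finite element, discontinuous Galerkin) can be sketched in a few lines of C++. All names below (`Mesh`, `assembleResidual`, the `cvofa` connectivity arrays) are illustrative assumptions, not Mum’s API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical, simplified unstructured-mesh storage. This only
// illustrates the face-based connectivity that mesh-based solvers share;
// it is not Mum's actual data layout.
struct Mesh {
    std::size_t ncv;                  // number of control volumes (cells)
    std::vector<int> cvofa0, cvofa1;  // the two cells adjacent to each face
};

// One face-based residual assembly pass: each face contributes its flux
// to the two neighboring cells with opposite signs, so the scheme is
// discretely conservative.
void assembleResidual(const Mesh& mesh,
                      const std::vector<double>& flux,  // one flux per face
                      std::vector<double>& rhs) {       // one residual per cell
    rhs.assign(mesh.ncv, 0.0);
    for (std::size_t ifa = 0; ifa < mesh.cvofa0.size(); ++ifa) {
        rhs[mesh.cvofa0[ifa]] -= flux[ifa];
        rhs[mesh.cvofa1[ifa]] += flux[ifa];
    }
}
```

Because the loop is over faces rather than cells, the same assembly pattern serves structured and unstructured grids alike, which is one reason a single infrastructure can host several discretizations.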
Joe  
Joe is a state-of-the-art multiphysics RANS/URANS solver. It is based on a cell-centered finite-volume method. Multiple time integration schemes are available. Turbulence modeling options include Spalart-Allmaras, Menter SST, and k-omega (Wilcox 2006). Several models are available for mixing and combustion.
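Joe’s internal class hierarchy is not shown on this page, so the following C++ sketch is an assumption about how a RANS solver typically lets users select among the turbulence models listed above; the names `TurbModel`, `KOmega`, and `createTurbModel` are hypothetical:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Illustrative strategy interface for a RANS turbulence closure.
// Joe's real interfaces are not public here; this only shows the
// selection pattern.
struct TurbModel {
    virtual ~TurbModel() = default;
    // Eddy (turbulent) viscosity for one cell from the local state.
    virtual double eddyViscosity(double rho, double k, double omega) const = 0;
};

// Wilcox (2006) k-omega closure: mu_t = rho * k / omega.
struct KOmega : TurbModel {
    double eddyViscosity(double rho, double k, double omega) const override {
        return rho * k / omega;
    }
};

// Spalart-Allmaras and Menter SST would be registered the same way.
std::unique_ptr<TurbModel> createTurbModel(const std::string& name) {
    if (name == "KOMEGA") return std::make_unique<KOmega>();
    throw std::runtime_error("unknown turbulence model: " + name);
}
```

A factory like this keeps the solver core independent of the particular closure, which is what makes it cheap to offer several models side by side.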
Charles  
The Charles LES code solves the spatially filtered compressible Navier-Stokes equations on unstructured grids using a novel control-volume-based finite-volume method, where the flux at each control volume face is computed as a blend of a non-dissipative central flux and a dissipative upwind flux, i.e.:  
F = (1 − α) F_{central} + α F_{upwind}  
where 0 <= α <= 1 is a blending parameter. This blending approach is often the basis of implicit approaches to LES, where the blending parameter is selected as a global constant with a value large enough to provide all the necessary dissipation (and potentially quite a bit more). Charles does not use the implicit LES approach: an explicit subgrid-scale model is used to model the effect of the subgrid scales. To minimize numerical dissipation relative to implicit LES approaches, α is allowed to vary spatially, so it can be set to zero in regions where the grid quality is good and the scheme based on the central flux is discretely stable and non-dissipative. In regions of less-than-perfect grid quality, however, the central scheme can introduce numerical instabilities that must be prevented from contaminating or destabilizing the solution by locally increasing α. The novel aspect of Charles is its algorithm to compute this locally optimal α using a preprocessing analysis of the Summation-by-Parts (SBP) properties of the nominally central operators. For shock capturing, Charles uses a hybrid Central-WENO scheme.  
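The blended flux above is simple to state in code. The sketch below is a minimal C++ illustration, assuming per-face flux and α arrays; the flux functions and α field are placeholders, since Charles computes α in a preprocessing step from the SBP properties of the central operators, which is not reproduced here:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// F = (1 - alpha) * F_central + alpha * F_upwind, with 0 <= alpha <= 1.
// alpha = 0 recovers the non-dissipative central flux; alpha = 1 the
// fully upwind (dissipative) flux.
double blendedFlux(double fluxCentral, double fluxUpwind, double alpha) {
    alpha = std::clamp(alpha, 0.0, 1.0);
    return (1.0 - alpha) * fluxCentral + alpha * fluxUpwind;
}

// Apply the blend over all faces with a spatially varying alpha:
// alpha stays at zero wherever the grid quality permits, so dissipation
// is only added where it is locally needed.
void blendFluxes(const std::vector<double>& central,
                 const std::vector<double>& upwind,
                 const std::vector<double>& alpha,
                 std::vector<double>& flux) {
    flux.resize(central.size());
    for (std::size_t i = 0; i < central.size(); ++i)
        flux[i] = blendedFlux(central[i], upwind[i], alpha[i]);
}
```

The key design point is that α is a per-face field rather than a global constant, which is exactly what distinguishes this approach from a globally blended implicit-LES scheme.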
Ray  
The Ray simulation tool is designed to support the setup, simulation, and analysis of ensemble simulations. It is built on the Mum infrastructure to leverage the common parallel I/O capabilities of the solvers.  
Liszt  
Power efficiency is driving a shift to heterogeneous computing platforms. Future supercomputers will combine features from distributed-memory clusters (MPI), multicore SMPs (e.g., 32-core, 4-socket shared-memory systems programmed with threads/locks or OpenMP), and manycore GPUs (e.g., Fermi or Larrabee, with separate GPU memory and a SIMT programming model such as CUDA). Examples already exist, including LANL Roadrunner and ORNL Fermi.  
At present, efficient utilization of this new hardware has required major software rewrites. A critical question is thus:  
“Is it possible to write one program and run it on all these machines?”  
The Liszt research project is our answer to this question. We believe it is possible using domain-specific languages (DSLs): by exploiting domain knowledge that a general-purpose compiler cannot, a DSL can compile a single high-level program efficiently for each of these platforms.
Liszt is a domain-specific language for writing mesh-based programs that solve partial differential equations. Liszt code is a proper subset of the syntax and typing rules of the Scala programming language. Currently, Liszt programs are written in Scala; a plugin to the Scala compiler translates the Scala code into an intermediate representation used by the Liszt compiler.  
People  
