Computational methods for flow analysis and turbulence
Science and engineering have undergone a major transformation at the research level as well as at the development and technology levels. The modern scientist and engineer spend more
and more time in front of a laptop, a workstation, or a parallel supercomputer and less and
less time in the physical laboratory or in the workshop. The virtual wind tunnel and the
virtual biology lab are not a thing of the future; they are here! The old approach of “cut-and-try” has been replaced by “simulate-and-analyze” in several key technological areas such
as aerospace applications, synthesis of new materials, design of new drugs, chip processing
and microfabrication, etc. The new discipline of nanotechnology will be based primarily on
large-scale computations and numerical experiments. The methods of scientific analysis and
engineering design are changing continuously, affecting both our approach to the phenomena
that we study as well as the range of applications that we address. While there is a lot
of software available to be used almost as a “black-box,” working in new application areas requires a good knowledge of fundamentals and mastery of effective new tools.
In the classical scientific approach, the physical system is first simplified and set in a form
that suggests what type of phenomena and processes may be important, and correspondingly
what experiments are to be conducted. In the absence of governing equations of a known type, the dimensional inter-dependence between physical parameters can guide laboratory experiments in identifying key parametric studies. The database produced in the laboratory is then used to construct a simplified “engineering” model which, after field-test validation, is used in further research, product development, and design, and may possibly lead to new technological applications. This approach has been used almost invariably in every scientific discipline, e.g., engineering, physics, chemistry, biology, etc.
The simulation approach follows a parallel path but with some significant differences.
First, the phase of the physical model analysis is more elaborate: The physical system is
cast in a form governed by a set of partial differential equations, which represent continuum
approximations to microscopic models. Such approximations are not possible for all systems,
and sometimes the microscopic model should be used directly. Second, the laboratory experiment is replaced by simulation, i.e., a numerical experiment based on a discrete model. Such
a model may represent a discrete approximation of the continuum partial differential equations,
or it may simply be a statistical representation of the microscopic model. Finite
difference approximations on a grid are examples of the first case, and Monte Carlo methods
are examples of the second case. In either case, these algorithms have to be converted to
software using an appropriate computer language, debugged, and run on a workstation or a
parallel supercomputer.
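To make the two cases concrete, here is a minimal sketch in Python; the model problems, grid sizes, and sample counts are illustrative assumptions rather than prescriptions from the text. The first fragment advances a finite-difference approximation of the one-dimensional heat equation on a grid, and the second is a Monte Carlo estimate of π.

```python
import math
import random

# Case 1: a discrete approximation of a continuum PDE.
# Explicit finite-difference scheme for the 1D heat equation
# u_t = u_xx on [0, 1] with u = 0 at both ends; grid size,
# time step, and initial condition are illustrative choices.
n = 51
dx = 1.0 / (n - 1)
dt = 0.4 * dx * dx                 # stable, since dt <= 0.5 * dx^2
u = [math.sin(math.pi * i * dx) for i in range(n)]
for step in range(200):
    u = [0.0] + [u[i] + dt / dx**2 * (u[i+1] - 2.0 * u[i] + u[i-1])
                 for i in range(1, n - 1)] + [0.0]

# Case 2: a statistical representation of a microscopic model, here
# reduced to its simplest form: estimate pi from the fraction of
# random points in the unit square inside the quarter circle.
samples = 100_000
hits = sum(random.random()**2 + random.random()**2 <= 1.0
           for _ in range(samples))
print("pi estimate:", 4.0 * hits / samples)
```

Even these toy fragments exhibit the trade-offs discussed below: the grid-based scheme must respect a stability restriction on its time step, while the statistical estimate improves only as the square root of the number of samples.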
The output is usually a large number of files, from a few megabytes to hundreds of gigabytes, and is especially large for simulations of time-dependent phenomena.
To be useful, this numerical database needs to be put into graphical form using various visualization
tools, which may not always be suited for the particular application considered.
Visualization can be especially useful during simulations where interactivity is required, as the grid may be changing or the number of molecules may be increasing.
The simulation approach has already been followed by the majority of researchers across
disciplines in the last few decades. The question is whether this is a new science, and how one could formally obtain such skills. Moreover, does this constitute fundamentally new knowledge, or is it a “mechanical procedure,” an ordinary skill that a chemist, a biologist, or an engineer will acquire easily as part of “training on the job,” without specific formal education? It seems that the time has arrived when we need to reconsider the boundaries between disciplines and reformulate the education of the future simulation scientist, an inter-disciplinary scientist.
Let us re-examine some of the requirements following the various steps in the simulation
approach. The first task is to select the right representation of the physical system by
making consistent assumptions in order to derive the governing equations and the associated
boundary conditions. The conservation laws should be satisfied; the entropy condition should
not be violated; the uncertainty principle should be honored. The second task is to develop
the right algorithmic procedure to discretize the continuum model or represent the dynamics
of the atomistic model. The choices are many, but which algorithm is the most accurate
one, or the simplest one, or the most efficient one? These algorithms do not belong to a
discipline! Finite elements, first developed by the famous mathematician Courant and rediscovered
by civil engineers, have found their way into every engineering discipline, physics,
geology, etc. Molecular dynamics simulations are practiced by chemists, biologists, materials scientists, and others. The third task is to compute efficiently in the ever-changing world of
supercomputing. The efficiency of the computation determines how realistic a problem can be solved, and therefore how useful the results can be to applications. The fourth task is to assess the accuracy of the results in cases where no direct confirmation from physical experiments is possible, such as in nanotechnology, in biosystems, or in astrophysics. Reliability of the predicted numerical answer is an important issue in the simulation approach, as some of the answers may point to new physics, or to false physics contained in the discrete model or induced by the algorithm but not derived from the physical problem. Finally, visualizing the
simulated phenomenon, in most cases in three-dimensional space and in time, by employing
proper computer graphics (a specialty in its own right) completes the full simulation cycle. The remaining steps are similar to those of the classical scientific approach.
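As a small illustration of both the accuracy question of the second task and the verification issue of the fourth, the following sketch (again in Python, with an illustrative test function) compares first-order and second-order finite-difference approximations of a derivative under grid refinement. Observing the expected convergence rates on a problem with a known answer is one standard way to build confidence in a discrete model before it is applied where no reference solution exists.

```python
import math

# Verification by grid refinement: approximate the derivative of
# f(x) = sin(x) at x = 1, where the exact answer cos(1) is known,
# and watch how the error behaves as the spacing h is halved.
f, x, exact = math.sin, 1.0, math.cos(1.0)
for h in (0.1, 0.05, 0.025, 0.0125):
    forward = (f(x + h) - f(x)) / h              # first order:  error ~ h
    central = (f(x + h) - f(x - h)) / (2.0 * h)  # second order: error ~ h^2
    print(f"h={h:7.4f}  forward error={abs(forward - exact):.2e}"
          f"  central error={abs(central - exact):.2e}")
```

Halving h roughly halves the forward-difference error but quarters the central-difference error, the signatures of first- and second-order accuracy, respectively.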
In classical science we are dealing with matter, and therefore atoms, but in simulation we are dealing with information, and therefore bits; so it is atoms versus bits! We should, therefore,
recognize the simulation scientist as a separate scientist, the same way we recognized
just a few decades ago the computer scientist as distinct from the electrical engineer or the applied mathematician. The new scientist is certainly not a computer scientist, although she
should be computer literate in both software and hardware. She is not a physicist, although she needs a sound physics background. She is not an applied mathematician, although she needs expertise in mathematical analysis and approximation theory.
With the rapid and simultaneous advances in software and computer technology, especially
commodity computing, the so-called soupercomputing, every scientist and engineer will
have on her desk an advanced simulation kit of tools consisting of a software library and
multi-processor computers that will make analysis, product development, and design more efficient and cost-effective. But what future scientists and engineers will need, first and
foremost, is a solid inter-disciplinary education.
Scientific computing is the heart of simulation science, and this is the subject of this
book. The emphasis is on a balance between classical and modern elements of numerical
mathematics and of computer science, but we have selected the topics based on broad modeling
concepts encountered in physico-chemical and biological sciences, or even economics.