2008 DOE Summer School in Multiscale Mathematics and High Performance Computing
2008 Summer School Research Talks
Jarek Nieplocha - Laboratory Fellow, Pacific Northwest National Laboratory
Download this presentation (PowerPoint 6.24MB)
Eric Bylaska - Senior Research Scientist, Pacific Northwest National Laboratory
NWChem was developed as part of the EMSL construction project to provide users with the massively parallel and scalable computational chemistry software necessary to tackle large scientific questions. The software continues to be developed to provide new cutting-edge capabilities to address new scientific questions relevant to EMSL, and to ensure that the software will provide the fastest time-to-solution on the growing MSCF supercomputing resources. NWChem is a large and complex code that consists of over 2 million lines of Fortran and C code, and provides many methods to compute the properties of molecular and periodic systems using standard quantum mechanical descriptions of the electronic wavefunction or density. Its classical molecular dynamics capabilities provide for the simulation of macromolecules and solutions, including the computation of free energies using a variety of force fields. The object-oriented programming model enables the various approaches to be combined to perform, for example, mixed quantum-mechanics and molecular-mechanics simulations. NWChem is part of the Molecular Sciences Software Suite (MS3). In addition to NWChem, MS3 includes the Global Arrays Toolkit, which provides an efficient and portable shared-memory programming interface for distributed-memory computers, and the Extensible Computational Chemistry Environment (Ecce), which provides the user with a graphical user interface, scientific visualization tools, and an underlying data management framework. The NWChem software is currently distributed to over 1600 sites worldwide by means of an EMSL User Agreement.
With the emergence of computing platforms that deliver hundreds of teraflops, and with petascale computing platforms on the horizon, computational chemistry is on the verge of entering a new era of modeling. These huge computing resources will enable researchers to tackle scientific problems that are larger and more realistic than ever before, to include more of the complex dynamical behavior of nature, and to start asking new and different scientific questions. The next-generation supercomputing platforms, including those currently being installed at MSCF, NERSC, ARL, and ORNL, consist of tens to hundreds of thousands of processors, a scale that was not envisioned when NWChem was conceived over fifteen years ago.
In this talk, an overview will be given of the challenges, as well as strategies to overcome them, in developing massively parallel algorithms for computational chemistry. Basic instruction on how to use existing terascale/petascale simulations in NWChem will also be given. Finally, a brief tutorial will be given on how to use embeddable scripting languages such as Python (or Lua) to design multifaceted simulations (e.g., AIMD, NEB, parareal) using components of NWChem.
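The parareal scheme named above can be sketched in a few lines. The following serial Python toy (the test problem dy/dt = -y, the Euler propagators, and all step counts are illustrative choices, not NWChem's implementation) shows the coarse/fine correction iteration that makes such simulations parallel in time:

```python
import math

def coarse(y, t0, t1):
    # Cheap coarse propagator G: one explicit Euler step over the subinterval.
    return y + (t1 - t0) * (-y)

def fine(y, t0, t1, substeps=100):
    # Expensive fine propagator F: many small Euler steps.
    dt = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + dt * (-y)
    return y

def parareal(y0, t_grid, iterations=5):
    n = len(t_grid) - 1
    # Initial guess from the coarse propagator alone.
    U = [y0]
    for i in range(n):
        U.append(coarse(U[i], t_grid[i], t_grid[i + 1]))
    for _ in range(iterations):
        # The fine solves on each subinterval are independent of one another,
        # so in a real code they run concurrently (parallel in time).
        F_vals = [fine(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        G_old = [coarse(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        U_new = [y0]
        for i in range(n):
            # Parareal correction: U_{i+1} = G(U_i_new) + F(U_i_old) - G(U_i_old)
            g_new = coarse(U_new[i], t_grid[i], t_grid[i + 1])
            U_new.append(g_new + F_vals[i] - G_old[i])
        U = U_new
    return U

t_grid = [i / 10 for i in range(11)]
U = parareal(1.0, t_grid)
# U[-1] approaches the exact value exp(-1) as the iteration converges.
```

Each iteration makes the solution exact on one more subinterval, so a handful of iterations with the coarse serial sweep recovers the accuracy of the fine propagator.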
Download this presentation (PowerPoint 7.08MB)
Daniel Chavarría - Scientist, Pacific Northwest National Laboratory
This talk will present high-performance multithreaded systems for parallel applications. The talk will initially cover some general high-performance computing concepts regarding system and processor architecture and then focus in more detail on multithreaded systems. Programming environments for multithreaded systems will be presented, along with examples from applications.
Download this presentation (PowerPoint 2.08MB)
Bruce Palmer - Scientist, Pacific Northwest National Laboratory
This talk will discuss the structure and use of the Global Arrays software toolkit. The presentation will discuss the main features of global address space programming models and the properties of one-sided communication. The basic features of the toolkit will be described followed by a discussion of selected advanced topics.
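The one-sided access pattern mentioned above can be mocked in a few lines of single-process Python. The class and method names below loosely echo the Global Arrays C API (NGA_Get, NGA_Acc), but this sketch involves no actual data distribution or communication; it only illustrates the get/compute/accumulate idiom that GA programs follow:

```python
# Mock of the one-sided pattern: a process reads a patch of a globally
# addressable array, computes locally, and accumulates the result back,
# with no matching receive posted by the patch's "owner".

class MockGlobalArray:
    def __init__(self, n):
        # Stands in for memory that GA would spread across processes.
        self.data = [0.0] * n

    def get(self, lo, hi):
        # One-sided read of a patch (cf. NGA_Get).
        return self.data[lo:hi]

    def acc(self, lo, hi, buf):
        # One-sided accumulate into a patch (cf. NGA_Acc).
        for i, v in enumerate(buf):
            self.data[lo + i] += v

ga = MockGlobalArray(8)
patch = ga.get(2, 6)                 # fetch a patch
patch = [p + 1.0 for p in patch]     # compute locally
ga.acc(2, 6, patch)                  # accumulate the result back
```

In the real toolkit the patch may live on a remote node, and the runtime moves the data without interrupting the process that owns it; that is the essence of one-sided communication.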
Download this presentation (PowerPoint 1.54MB)
Don Stewart - Galois, Inc.
We address the tension between software generality and performance in the domain of simulations based on Monte-Carlo methods. We simultaneously achieve generality and high performance by a novel development methodology and software architecture centred around the concept of a specialising simulator generator. Our approach combines and extends methods from functional programming, generative programming, partial evaluation, and runtime code generation. We also show how to generate parallelised simulators.
We evaluated our approach by implementing a simulator for advanced forms of polymerisation kinetics. We achieved unprecedented performance, making Monte-Carlo methods practically useful in an area that was previously dominated by deterministic PDE solvers. This is of high practical relevance, as Monte-Carlo simulations can provide detailed microscopic information that cannot be obtained with deterministic solvers.
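The core idea of a specialising simulator generator, partially evaluating a generic simulator against a model fixed at generation time, can be illustrated in Python with runtime code generation. The reaction model and function names below are invented for illustration; the actual system the abstract describes is built with functional-programming techniques, not Python:

```python
def generic_propensity(counts, reactions):
    # Generic interpreter: looks up every rate from the data structure
    # on each call, paying the cost of generality at runtime.
    return sum(rate * counts[species] for species, rate in reactions)

def specialise(reactions):
    # Partial evaluation: the rates, known at generation time, are folded
    # into generated source as literal constants, which is then compiled.
    body = " + ".join(f"{rate!r} * counts[{species!r}]"
                      for species, rate in reactions)
    src = f"def propensity(counts):\n    return {body}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["propensity"]

# (species index, rate constant) -- illustrative values
reactions = [(0, 0.5), (1, 2.0), (2, 0.1)]
fast = specialise(reactions)
counts = [100, 30, 7]
# The specialised function must agree with the generic interpreter.
```

The generated function has no data-structure traversal or rate lookup left in it, which is why specialisation can recover the performance of hand-written code while keeping the generator fully general.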
Download this presentation (PDF 527KB)
Viral Shah - Senior Research Engineer, Interactive Supercomputing
Star-P is a language-agnostic platform for parallel computing. We will describe parallelization of Matlab and Python codes with Star-P (with a demo if possible). Sequential codes can be parallelized with data-parallelism for very large problems, or task-parallelism for embarrassingly parallel problems. No knowledge of MPI is needed. We will also describe tools for multi-scale simulations in Star-P. We will conclude with our ongoing work on our Knowledge Discovery Toolbox, a toolbox for interactive algorithmic exploration on large networks.
Download this presentation (PDF 3.58MB)
Guang Lin - Scientist, Pacific Northwest National Laboratory
In many microfluidic and biomedical applications there is often a need to accurately model multiscale flow phenomena across several orders of magnitude in spatiotemporal scale. Multiple-scale models in both time and space can overcome this difficulty and provide a unified description of liquid flows from nanoscales to larger scales. We propose a new multiple-particle formalism based upon a hybrid pore-scale model coupling fine fundamental particles, coarsened particles derived from the fine scale, and the continuum scale. An example of the proposed coupled approach would combine atomistic dynamics, Dissipative Particle Dynamics (DPD), and the incompressible Navier-Stokes equations to cover a broad range of spatiotemporal scales, from molecular to mesoscopic to continuum, for fluid flow.
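For concreteness, the conservative part of the standard DPD pair force (the Groot-Warren form) is F_C = a(1 - r/r_c) along the unit separation vector for r < r_c, and zero beyond the cutoff. The following sketch evaluates it for a single pair; the parameter values are illustrative, and the full DPD model adds dissipative and random terms omitted here:

```python
import math

def dpd_conservative(ri, rj, a=25.0, r_c=1.0):
    # Soft repulsive DPD force on particle i due to particle j.
    dx = [ri[k] - rj[k] for k in range(3)]
    r = math.sqrt(sum(d * d for d in dx))
    if r >= r_c or r == 0.0:
        return [0.0, 0.0, 0.0]        # outside cutoff (or coincident)
    mag = a * (1.0 - r / r_c)          # linear decay to zero at r_c
    return [mag * d / r for d in dx]   # directed along the unit separation

f = dpd_conservative([0.5, 0.0, 0.0], [0.0, 0.0, 0.0])
# r = 0.5 gives |F| = 25 * (1 - 0.5) = 12.5 along +x.
```

The softness of this potential (finite force even at r = 0) is what lets DPD take much larger time steps than atomistic dynamics, which is the point of using it as the intermediate scale.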
Download this presentation (PowerPoint 4.98MB)
Dror Givon - Research Associate, Princeton University
In this talk I will present a particle filter construction for a system that exhibits time scale separation. Multiscale integrators that are based on the averaging principle overcome the problem of stiffness and multiple time scales. These integrators also allow the dimensional reduction of the dynamics for each particle during the prediction step. Multiscale integrators rely heavily on the dissection of the right-hand side of the evolutionary equations. As is the case with many systems, the particle filter problem statement assumes only knowledge of the transition probability of the hidden process, which is not as detailed as the evolutionary equations and hence prohibits the use of multiscale integrators. I will explain how we can bypass this obstacle by relying upon the Equation-Free methodology and Coarse Projective Integration to expedite the prediction step. As in the multiscale integrator case, the resulting particle filter is faster and has smaller variance than the particle filter based on the original system. The method is tested on a multiscale stochastic differential equation and on a stochastic simulation algorithm motivated by chemical reactions.
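To fix notation for the prediction/update structure discussed above, here is a minimal bootstrap particle filter for a scalar linear-Gaussian model; the model, parameters, and observations are all illustrative. The multiscale construction replaces the prediction step below (sampling from the full transition kernel) with a coarse projective integrator:

```python
import math
import random

rng = random.Random(0)

def predict(particles, sigma_x=0.5):
    # Prediction step: sample each particle from the transition probability
    # of the hidden process x_{t+1} = 0.9 x_t + noise.
    return [0.9 * x + rng.gauss(0.0, sigma_x) for x in particles]

def update(particles, y, sigma_y=0.5):
    # Update step: weight by the Gaussian observation likelihood y = x + noise,
    # then normalise the weights.
    w = [math.exp(-((y - x) ** 2) / (2 * sigma_y ** 2)) for x in particles]
    total = sum(w)
    return [wi / total for wi in w]

def resample(particles, weights):
    # Multinomial resampling back to equal weights.
    return rng.choices(particles, weights=weights, k=len(particles))

particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
for y in [0.8, 0.7, 0.9, 0.6]:          # synthetic observations
    particles = predict(particles)
    weights = update(particles, y)
    particles = resample(particles, weights)
estimate = sum(particles) / len(particles)
```

The prediction step dominates the cost when the hidden dynamics are stiff, which is exactly where replacing it with a coarse, dimensionally reduced propagation pays off.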
Download this presentation (PDF 2.18MB)
Larry Holder - Professor, Washington State University
The graph, a collection of nodes and links, is a natural representation for many domains consisting of various entities and their inter-relationships. Such domains include social networks, biological networks, computer networks, telecommunication networks, power grids, and the world-wide web. Detecting patterns in these networks is crucial for understanding the domain and predicting the behavior of the entities and relationships in the domain. We will discuss several problems related to graph-based pattern learning along with alternative approaches. While graphs are capable of representing most domains, they present a particular challenge to computing due to their typically massive size and irregular access patterns. Multiscale approaches and unique high-performance computing platforms are needed to address these challenges.
Ralph Showalter - Professor, Oregon State University
We introduce the concept of homogenization of partial differential equations with two classical approaches. First we review the classical method of formal expansions and then we describe the two-scale convergence method. We highlight the similarities of these approaches and develop the remarkable intuitive advantages of the latter to characterize various examples of upscaled problems.
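As a concrete instance of the formal-expansion method mentioned above, consider the textbook one-dimensional model problem with a rapidly oscillating periodic coefficient (a standard example, not taken from the talk itself):

```latex
% Two-scale ansatz with fast variable y = x/\varepsilon:
u_\varepsilon(x) \;=\; u_0(x) \;+\; \varepsilon\, u_1\!\left(x, \tfrac{x}{\varepsilon}\right)
  \;+\; \varepsilon^2\, u_2\!\left(x, \tfrac{x}{\varepsilon}\right) \;+\; \cdots
%
% Substituting into
%   -\frac{d}{dx}\!\left( a\!\left(\tfrac{x}{\varepsilon}\right) \frac{du_\varepsilon}{dx} \right) = f,
% with a(y) 1-periodic, and matching powers of \varepsilon yields the
% homogenized problem  -a^{*} u_0'' = f  with the harmonic-mean coefficient
a^{*} \;=\; \left( \int_0^1 \frac{dy}{a(y)} \right)^{-1}.
```

Note that the effective coefficient is the harmonic mean of a, not the arithmetic mean; recovering this kind of non-obvious averaging rule is precisely what the two-scale machinery delivers.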
Download this presentation (PDF 623KB)
Malgorzata Peszynska - Associate Professor, Oregon State University
Computational modeling of flow, transport, and other coupled phenomena in the subsurface is important for optimization of oil and gas recovery, management of environmental remediation, storage and cleanup of hazardous materials such as nuclear waste, and carbon sequestration. The multiple spatial and temporal scales involved make it difficult to solve the associated computational problems directly with high resolution and thus require adaptive modeling approaches for grids, models, and couplings. Furthermore, not enough data is available for some of the model parameters or, on the other hand, data comes at a resolution which cannot be managed by current models even with high performance computing resources. In the talk we give an overview of various techniques used to combat this tyranny of scales; we focus first on numerical realizations of continuum models at the meso- and macro-scale, and then we briefly touch on emerging discrete approaches at the micro- and pore scale(s) and on the couplings between continuum and discrete models.
Download this presentation (PDF 9.28MB)