MOVES Seminar 27 Jan, 2011, 10:00

Many-Core Parallel Programming with GPGPUs

 

A few years ago, the steady increase of CPU clock rates came to an end, and
multi- and many-core designs turned out to be the future of microprocessor
development, a future that is nowadays already available in off-the-shelf
consumer computers. However, to make full use of their capabilities,
programs must be developed with parallelism in mind. After recalling some
general basics of parallel programming, for both distributed and shared
memory, we will take a closer look at CUDA, NVIDIA's framework for
programming general-purpose graphics processing units (GPGPUs), and the
corresponding hardware. Today, GPGPUs are among the best examples of
massively parallel many-core processors, in contrast to multi-core CPUs,
which still offer a much smaller number of cores.

 

To demonstrate the applicability of GPGPU acceleration to scientific
computing, we develop a program that helps to reconstruct genetic networks
from data obtained in perturbation experiments. The underlying concept of
transitive reduction is implemented as an adapted parallel version of the
Floyd-Warshall algorithm. By comparing the run times of the CUDA
implementation for different input sizes with those of a sequential CPU
program, we evaluate the actual speed-up with respect to wall-clock time as
well as the scalability of our approach.
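As a minimal illustration of the underlying idea, the following sequential Python sketch computes a transitive reduction via a Floyd-Warshall-style closure. The function names and the boolean adjacency-matrix encoding are our own choices, and the sketch assumes an acyclic graph; the talk's CUDA version adapts the same triple-loop pattern, where for each fixed k the two inner loops are data-parallel and map naturally to a GPU thread grid.

```python
def transitive_closure(adj):
    """Floyd-Warshall-style boolean closure: reach[i][j] is 1
    iff there is a path from i to j in the input graph."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        # For fixed k, the (i, j) updates are independent of each
        # other: this is the part a CUDA kernel would parallelize.
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1
    return reach

def transitive_reduction(adj):
    """Drop every edge (i, j) that is implied by an indirect path
    i -> k -> j. Correct for acyclic graphs (DAGs)."""
    n = len(adj)
    reach = transitive_closure(adj)
    red = [row[:] for row in adj]
    for i in range(n):
        for j in range(n):
            if red[i][j] and any(
                k != i and k != j and reach[i][k] and reach[k][j]
                for k in range(n)
            ):
                red[i][j] = 0  # edge is redundant
    return red
```

For example, on a three-node chain with a shortcut (edges 0-&gt;1, 1-&gt;2, and 0-&gt;2), the reduction removes the redundant edge 0-&gt;2, since node 2 is already reachable from node 0 via node 1.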

 

As a second application domain, some aspects of stochastic model checking
and their relation to systems biology will be discussed.

 

 

This is joint work with Dragan Bosnacki (TU/e), Anton Wijs (TU/e), Willem
Ligtenberg (formerly TU/e), and Joost-Pieter Katoen (RWTH & UT); it is
partially funded by the NWO project "Efficient Multi-Core Model Checking".
Additionally, there was some collaboration with the Scientific Computing
group around Martin Bücker (RWTH).