Home

Welcome to the Multi-Core Programming Homepage at the Department of Informatics, UiB.

Projects


Cell Programming with Algebraic and Generative Abstractions

The SAGA project has explored the use of high-level abstractions and algebraic software methodologies for scientific computing.

As part of SAGA, the Sophus C++ library has been built with abstractions suitable for PDE mathematics. The high level of modularity in the Sophus code makes it easy to factor out code suitable for parallelization. Parallel versions of Sophus applications have been built simply by replacing one or two modules dealing with the underlying mesh data structures.

While previous work on the Sophus infrastructure has focused on large supercomputers, we are now moving to simpler, off-the-shelf hardware, like the AMD64 architecture, the Cell processor in the PlayStation 3, and programmable graphics processors, like Nvidia's GPUs with the CUDA infrastructure.

A preliminary attempt at porting Sophus to the Cell processor has been successful: the SeisMod application was ported in around 20 hours (including the time needed to learn Cell programming), and a very simple parallel version runs 2-3 times faster than the sequential version.

DDA-embeddings for Multi-Core Programming

Also under the wings of SAGA, the Grid-DDA project investigates the possibility of arbitrary depth, nested parallel programming concepts based on multi-level Data Dependency Algebras (DDAs), from microprocessors to e-grids.

Previous research on DDAs - in the framework of the Sapphire project - provided a theory of how to program parallel machines in which the utilization of the parallel computer's internal network topology is fully programmable as an independent aspect of the computation.

As such, the run-time parallel distribution and global communication pattern of a hardware layout, whether a parallel computer, a highly parallel graphics processing unit (GPU), a many-core CPU, or a chip, can be defined by a separate data type, a space-time DDA.

The data dependency structure of a computation is also defined by a DDA, and the algorithm for solving a problem is given by a recursive function on this DDA.

The embedding of a computation into the underlying hardware then becomes a task of finding an efficient mapping of the DDA of the computation into the space-time DDA of the hardware layout. This allows full control of the computation and explicit handling of the underlying hardware resource at a very high abstraction level.

A prototype compiler was built to provide a simple way to generate parallel code from high-level DDA descriptions for high-performance computing architectures using the MPI message-passing library. We are now planning to enhance this compiler to generate parallel code for other architectures as well, e.g., NVIDIA's CUDA and the Cell. This also promises an easy way to test the efficiency of different embeddings, since an embedding can be reformulated at a high level and new parallel code generated by Sapphire.

Activities 


This semester, within a regular departmental course (inf329), we are taking a closer look at different Programming Models for Non-Traditional Architectures. The reading list is now available.

People


Past Events


