01.12.1999
Planetary Terascale Project Wins SC'99 High Performance Computing Award

The Planetary Terascale Project has won the SC'99 High Performance Computing Award for the most challenging applications.

At the SC'99 "HPC Games", an intercontinental team of computational scientists and networking and systems specialists from Stuttgart (Germany), Manchester (UK), Pittsburgh (USA), and Tsukuba (Japan) was awarded the top prize for the most challenging scientific applications, executed live across the planet from the convention center in Portland, Oregon.

A molecular dynamics simulation with over two million particles ran concurrently on a Hitachi SR8000 at ETL (Tsukuba) and on Cray T3Es at the Pittsburgh Supercomputing Center, CSAR (Manchester), and HLRS (Stuttgart). This "Ter(r)acomputer", spanning more than 10,000 miles, had a total peak performance of 2.2 TFlops.

A second application demonstrated was the flow solver URANUS. A simulation of the X-38, the crew-rescue vehicle of the International Space Station, with 3.6 million cells ran on 1536 T3E processors and was accompanied by a visualization of the flow around the vehicle in a collaborative session with the European Networking Demonstration booth.

The third application, analyzing radio astronomy data in search of pulsars, was contributed by Dr. Stephen Pickles of Manchester Computing. For this application, sufficient bandwidth between the different computers is crucial. Between the three T3E systems, a system of networks consisting of JANET and Teleglobe (UK), DFN (Germany), and Abilene and vBNS (USA) delivered sustained bandwidths in excess of 1 megabit per second.

The Manchester application adapts to actual bandwidth conditions by varying the amount of work it assigns to each machine. The molecular dynamics and fluid dynamics applications are optimized to mask latency by overlapping communication and computation.
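The latency-hiding pattern can be sketched in a few lines of MPI code. The following fragment is illustrative only, not the project's actual code; the array sizes and periodic neighbor topology are assumptions. It starts a non-blocking halo exchange, computes on interior data while the wide-area transfer is in flight, and only then waits for the incoming boundary data:

    /* Minimal sketch of latency hiding by overlapping communication
     * and computation. Sizes and neighbor topology are illustrative. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1024                       /* local slab size (assumed) */

    int main(int argc, char **argv)
    {
        double interior[N], halo_send[N], halo_recv[N];
        MPI_Request reqs[2];
        int rank, size, left, right;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        left  = (rank - 1 + size) % size; /* periodic ring (assumed) */
        right = (rank + 1) % size;

        for (int i = 0; i < N; i++) {
            interior[i]  = rank;
            halo_send[i] = rank;
        }

        /* Start the halo exchange first, so the (slow, wide-area)
         * transfer proceeds while local work is being done. */
        MPI_Irecv(halo_recv, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(halo_send, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Compute on interior data that does not depend on the halo. */
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += interior[i] * interior[i];

        /* Only now wait for the halo; the network latency has been
         * masked by the interior computation above. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        /* ... boundary computation using halo_recv would follow ... */
        printf("rank %d: interior sum %.1f, halo[0] = %.1f\n",
               rank, sum, halo_recv[0]);

        MPI_Finalize();
        return 0;
    }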

Message passing between the heterogeneous machines comprising the Ter(r)acomputer is handled by PACX-MPI, a library developed at HLRS (Stuttgart). It implements a large subset of the MPI-1 standard, so most application codes that use MPI can be "grid-enabled" immediately. The work is supported by the European project METODIS.
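Because PACX-MPI presents the standard MPI interface, an existing MPI-1 code needs no source changes to span the coupled machines; it is simply relinked against PACX-MPI instead of the vendor MPI. The following minimal ring-passing program (illustrative, not one of the award-winning applications) is the kind of unmodified MPI-1 code this makes possible:

    /* Plain MPI-1 ring exchange: passes a token once around all ranks.
     * Under PACX-MPI the same source runs unchanged across machines. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, token;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {                  /* ring needs two or more ranks */
            MPI_Finalize();
            return 0;
        }

        if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD, &status);
            printf("token returned to rank 0 after visiting %d ranks\n", size);
        } else {
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }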
