
Visualization in parallel manycore environments (VisPME)

Foreseeable changes in hardware architecture in the domain of high performance computing allow for a considerable increase in the complexity of scientific simulations. In the medical sciences, novel techniques such as diffusion-weighted magnetic resonance imaging lead to ever-increasing amounts of data. Simulations of problems in engineering or medical science typically result in transient data sets on three-dimensional, often unstructured grids. Interactive visualization methods are an essential means of gaining insight into these data sets.

The objective of this project is the development of a flexible, highly parallel, and scalable interactive visualization environment for data processing and analysis that meets the requirements of the different application domains while making full use of the available computing infrastructure.

Resources available for postprocessing and visualization

The available infrastructure for visualization offers increasing potential for parallelization. Single workstations are being replaced by visualization clusters that provide adequate performance for postprocessing and visualization of very large data sets. These clusters comprise multiple nodes connected by a high-bandwidth, low-latency network such as InfiniBand. The nodes themselves are parallel systems, composed of multiple processors each containing multiple cores. These cores have access to shared memory, often with non-uniform latency and bandwidth.
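The node-level, shared-memory layer of such a hybrid scheme can be sketched as follows. This is a minimal toy model, not project code: the function names and the min/max reduction are illustrative assumptions, and the distributed layer across nodes (e.g. via MPI) is only indicated in a comment.

```python
from concurrent.futures import ThreadPoolExecutor

def block_minmax(block):
    """Per-block reduction, standing in for a real postprocessing kernel."""
    return (min(block), max(block))

def node_local_range(data, num_threads=4, block_size=16):
    """Shared-memory layer of a hybrid scheme: the dataset partition held
    by one node is split into blocks that a thread pool reduces in
    parallel. Across nodes, a distributed layer (e.g. MPI) would combine
    the per-node results in the same fashion."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        partial = list(pool.map(block_minmax, blocks))
    # Combine the per-block partial results into a node-local result.
    return (min(lo for lo, _ in partial), max(hi for _, hi in partial))
```

The same block decomposition that drives the threading here can serve as the unit of distribution between nodes, which is why data management and algorithms have to be designed together.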

Each of these nodes contains one or more graphics cards that can be used for rendering. Modern graphics cards can also carry out highly parallel computations such as the postprocessing of scientific data sets. However, the memory available on a graphics card is typically much smaller than the main memory of the compute node, so additional latency is introduced by copying data between system memory and graphics memory. Such a visualization cluster is typically connected to one or more HPC systems by a comparatively low-bandwidth, high-latency network. To make this infrastructure available for interactive visualization, users must be able to access the cluster remotely from their workstations.
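The consequence of the small device memory is that data sets usually have to be streamed through the graphics card in chunks rather than copied wholesale. The following sketch models that out-of-core pattern in plain Python; the capacity, the kernel, and all names are hypothetical stand-ins for real host-to-device transfers and GPU kernels.

```python
def process_out_of_core(data, device_capacity, kernel):
    """Process a dataset larger than (hypothetical) device memory by
    streaming chunks that fit into the device buffer one at a time."""
    results = []
    for start in range(0, len(data), device_capacity):
        chunk = data[start:start + device_capacity]  # models host -> device copy
        results.extend(kernel(chunk))                # models device-side compute
        # Each loop iteration also models the device -> host copy of results.
    return results

# Example: doubling each value in "device-sized" chunks of 4 elements.
out = process_out_of_core(list(range(10)), 4, lambda c: [2 * x for x in c])
```

In a real implementation the per-chunk copies are the latency source the text describes, and they are typically overlapped with computation (e.g. via asynchronous transfers) rather than executed strictly in sequence as in this sketch.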

Requirements for postprocessing/visualization software

For maximum utilization of this visualization infrastructure, a visualization environment based on the data-flow paradigm has to meet several requirements:

  1. The data management has to be adapted to the distributed memory model with hierarchical memory access.
  2. The algorithms have to be optimized for hybrid distributed/shared-memory architectures and integrated with the data management.
  3. The algorithms have to be adapted to make use of GPUs and other accelerators such as FPGAs.
  4. The rendering has to be able to utilize several GPUs: on the one hand to drive immersive environments such as CAVEs, on the other hand to achieve remote rendering with low latency and high detail.
  5. Due to the complexity of the postprocessing, automatic scheduling of the required processes has to be provided. To make optimal use of the available resources and to keep the latency of user interaction tolerable, the scheduling algorithms have to take into account load balancing, a given decomposition of the data, and the amount of communication between the compute nodes.
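The load-balancing aspect of such scheduling can be illustrated with a classic greedy heuristic. This is a simplified sketch under stated assumptions, not the project's scheduler: module names and cost estimates are invented, and a real scheduler would additionally weigh the data decomposition and inter-node communication volume mentioned above.

```python
import heapq

def schedule(modules, num_nodes):
    """Greedy longest-processing-time scheduling: assign each data-flow
    module (name, estimated cost) to the currently least-loaded node.
    Costs would in practice come from profiling or performance models."""
    heap = [(0.0, node) for node in range(num_nodes)]  # (current load, node id)
    heapq.heapify(heap)
    assignment = {}
    for name, cost in sorted(modules, key=lambda m: -m[1]):  # heaviest first
        load, node = heapq.heappop(heap)
        assignment[name] = node
        heapq.heappush(heap, (load + cost, node))
    return assignment

# Hypothetical pipeline stages with estimated costs.
modules = [("read", 2.0), ("isosurface", 8.0), ("streamlines", 5.0), ("render", 3.0)]
plan = schedule(modules, 2)
```

Placing the heaviest modules first keeps the maximum node load close to the optimum, which is what bounds the latency of an interactive update cycle.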

The main objectives of this project are:

  1. to create a flexible framework for parallel visualization that is applicable across the different fields
  2. to develop scheduling strategies that can reorder the processing of data and dynamically adapt the accuracy of algorithms and the resolution of the data
  3. to make important visualization algorithms for data sets from different application areas available in distributed many-core environments
  4. to ensure the applicability of the developed concepts and the usability of the visualization environment with the help of users from different application domains.
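The dynamic adaptation of resolution named in objective 2 is commonly realized as progressive refinement: show a coarse result quickly, then refine while the user is idle. The sketch below models this with simple stride-based downsampling; the function names and stride levels are illustrative assumptions, not the project's design.

```python
def downsample(data, stride):
    """Reduced-resolution view of a dataset: keep every stride-th sample."""
    return data[::stride]

def progressive_levels(data, strides=(8, 4, 2, 1)):
    """Yield ever finer resolutions, coarse first, so a first image can be
    produced with low latency and refined as resources become available."""
    for stride in strides:
        yield stride, downsample(data, stride)

data = list(range(64))
for stride, level in progressive_levels(data):
    pass  # a renderer would draw each level as soon as it is available
```

A scheduler can cancel the remaining levels whenever the user interacts again, trading accuracy for responsiveness exactly as the objective describes.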


Partners

Regionales Rechenzentrum der Universität zu Köln, Rechen- und Kommunikationszentrum der RWTH Aachen, Max-Planck-Institut für neurologische Forschung, RECOM Services GmbH, NVIDIA GmbH, Aachen Institute for Advanced Study in Computational Engineering Science

For more information, please contact Florian Niebling.