Microsoft Parallel Visualization
A project: Parallel Visualization
In our cooperation with Microsoft, we continue the development of our visualization software. COVISE, our current visualization software, does not yet support a parallel pipeline. Its already existing successor (YAC - Yet Another Covise) shall be brought to maturity. YAC's core parts already exist, and its design allows for parallel visualization.
The work is split into two main areas. One part focuses on GPGPUs and simulations, the other on parallel visualization. In addition, we provide visualization support to other HPC institutes: we will help them set up visualization facilities at their sites and integrate their simulation codes into our visualization software.
The Windows HPC platform provides the computational power and memory capacity that enable users to solve large computational problems. This leads to datasets that are too large to be visualized on a single node, and analyzing such datasets becomes increasingly challenging.
As a future research plan, we would like to focus on the visualization of large datasets using parallel visualization on new multi-/manycore architectures and GPGPUs.
As scientific datasets grow rapidly, it is not enough to concentrate on the parallelization of simulation codes. The interactive evaluation of huge datasets can no longer be accomplished on a single CPU and GPU due to the limited memory and computational power of a single node.
The idea is to leverage current and future multicore/manycore chips as well as computation on the GPU. Parallelizing the visualization algorithms addresses the processing-speed issues we currently have with large datasets, while the memory limit can only be overcome by distributed parallel visualization on a compute cluster.
The remaining issue in the visualization of large datasets is the rendering speed and the size of the graphical objects. This can be addressed by parallelizing the rendering on a cluster equipped with graphics boards.
Project description / Work packages
Simulations can be started from within the visualization software using the HPC Pack’s job scheduler. We integrate the whole engineering process chain, from geometry definition and the definition of boundary conditions over meshing and partitioning to simulation and visualization, into one consistent environment, so that cumbersome interfaces between separate tools cease to exist. The complex numerics of the simulation shall be hidden (while all expert parameters can still be changed if desired), so that engineers can focus on their actual problem. As an example, we have coupled the widespread commercial simulation code ANSYS CFX with our visualization software, so that interactive simulations can be realised with CFX. For details, see the application example Interactive Airflow and Climate Simulation in HLRS Computing Room.
The goal is to adapt the core parts of our data-flow oriented modular visualization software to allow the parallel processing of simulation data. This includes distributed data management and data processing, scheduling and load balancing. Distributed data objects must be made available on all participating computing resources.
Very good scalability of the modules that implement postprocessing algorithms on parallel computing resources is important.
The data flow management has to be enhanced: to increase the data flow rate, data parts that have already been computed shall be handed over to the next module, even if that module cannot yet be executed on the complete dataset.
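Such pipelined hand-over of finished data parts can be illustrated with a small sketch. This is a hypothetical, simplified model and not the COVISE/YAC API; the module names (read_blocks, scale, collect) are made up for illustration. Each stage consumes a part as soon as its predecessor yields it, instead of waiting for the complete dataset:

```python
# Sketch of a pipelined data-flow execution model (hypothetical, not the
# COVISE/YAC API): each module consumes finished data parts from its
# predecessor as soon as they are available.

def read_blocks(n_blocks):
    """Source module: yields data parts (here: lists of numbers) one at a time."""
    for i in range(n_blocks):
        yield list(range(i * 4, i * 4 + 4))

def scale(blocks, factor):
    """Filter module: processes each part as soon as the upstream yields it."""
    for block in blocks:
        yield [x * factor for x in block]

def collect(blocks):
    """Sink module: gathers the processed parts (e.g. for rendering)."""
    return [b for b in blocks]

# Wiring the pipeline: parts stream through; no stage waits for the full dataset.
result = collect(scale(read_blocks(3), 2))
```

In a distributed setting, each generator would correspond to a module running on its own computing resource, with the yields replaced by messages carrying distributed data objects.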
Parallelization of algorithms
Visualization algorithms like the generation of cutting surfaces, domain surfaces, isosurfaces and particle traces have to be parallelized. Furthermore, virtual reality support for parallel datasets has to be added.
Interactive particle tracing is an important instrument for the exploration of flow fields. As engineers need to trace a large number of particles to gain meaningful insight into their dataset, particle tracing is computationally expensive.
We will try to increase the interactivity of particle tracing by adding GPGPU support to the particle tracer module.
The low response times required for interactive particle tracing can only be achieved if the particle traces are computed massively in parallel on the cores of graphics hardware (GPUs) via numeric integration. Due to the limited amount of GPU memory, this cannot generally be done on the complete geometry; an adequate partitioning of the data and a resampling of the computational grid are necessary. As a simple resampling of complex datasets (which in the majority of cases are available as unstructured grids) to coarse Cartesian grids, as is commonly done today, is unacceptable in many fields of science, we will explore advanced parallel approaches that take advantage of manycore architectures.
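The numeric integration behind such traces can be sketched as follows. This is a minimal illustration, not the actual tracer module: it integrates many particles in lockstep with classical Runge-Kutta (RK4) through an assumed analytic rotational field v(x, y) = (-y, x); on a GPU, each particle would map to one thread, and the field would be interpolated from the (partitioned, resampled) grid instead:

```python
import numpy as np

# Sketch of massively parallel particle tracing by numeric integration
# (illustrative only). Velocity field: an analytic 2D rotation, v(x, y) = (-y, x),
# standing in for interpolation on a real computational grid.

def velocity(p):
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

def trace(p0, dt, steps):
    """Advance all particle positions in lockstep with classical RK4."""
    p = p0.copy()
    path = [p.copy()]
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)          # shape: (steps + 1, particles, 2)

seeds = np.array([[1.0, 0.0], [2.0, 0.0]])   # seed points of the traces
paths = trace(seeds, dt=0.1, steps=100)
```

For this rotational field the exact trajectories are circles, so the distance of each particle from the origin is conserved, which gives a simple correctness check for the integrator.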
The rendering part of the visualization pipeline has to be parallelized as well. Interactive visualization is only possible with rendering frame rates above 12-15 Hz. To achieve this with huge datasets, parallel rendering strategies have to be applied.
There are two different approaches to parallel rendering: either the image space is partitioned (sort-first), or the data is partitioned with respect to the rendered objects (sort-last). In the case of sort-last rendering, the rendering of the data is ideally performed on the same parallel computing resources as the postprocessing. The images rendered on the different compute nodes are then transferred to the display node, where they are combined into a single image and displayed on the screen.
Sort-first rendering is of advantage in the case of fill-rate-limited rendering or when the communication bandwidth between render and visualization nodes is limited. As a replication of the data is necessary in this case, it might be profitable to use hybrid approaches. We will implement a rendering strategy that chooses the best mechanism according to the application.
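The combination step of sort-last rendering can be sketched as a per-pixel depth test over the nodes' buffers. This is a simplified, hypothetical illustration (scalar "colors", tiny 2x2 buffers), not our actual compositing code: every render node draws its own data partition into a full-size color and depth buffer, and the display node keeps, per pixel, the fragment closest to the viewer:

```python
import numpy as np

# Sketch of sort-last depth compositing (illustrative, simplified):
# per pixel, keep the color from the node with the smallest depth value.

def depth_composite(colors, depths):
    """colors: list of (H, W) color buffers; depths: matching depth buffers."""
    colors = np.stack(colors)             # shape (nodes, H, W)
    depths = np.stack(depths)
    nearest = np.argmin(depths, axis=0)   # winning node per pixel
    rows, cols = np.indices(nearest.shape)
    return colors[nearest, rows, cols]

# Two 2x2 buffers: node 0 wins wherever its depth is smaller.
c0 = np.array([[1, 1], [1, 1]]); d0 = np.array([[0.2, 0.9], [0.9, 0.2]])
c1 = np.array([[2, 2], [2, 2]]); d1 = np.array([[0.5, 0.5], [0.5, 0.5]])
image = depth_composite([c0, c1], [d0, d1])
```

In practice the compositing itself is parallelized (e.g. in a tree or binary-swap pattern) so that the display node does not become the bottleneck.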
As our visualization software is a widespread tool in academia and industry, it is important to provide support for its users. We will help them with problems regarding their own visualization installations as well as with visualizing their datasets and implementing new algorithms in COVISE. See Topologically correct Isosurfaces in COVISE for details.