
Dr.-Ing. Huan Zhou

Research Scientist

Dr. Huan Zhou earned her PhD in the HLRS Department of Scalable Programming Models and Tools, where her research focused on the design of an efficient, data-locality-aware, and portable runtime system (written in C) for higher-level PGAS models. The runtime system provides easy-to-use interfaces for the user and is implemented internally on top of the MPI communication layer.

After completing her PhD with honors in 2016, Dr. Zhou turned her research toward performance analysis and optimization with the aid of performance tools. She joined the HLRS Department of Numerical Methods and Libraries in 2018.

In addition to the research fields described above, she currently applies parallelization concepts to solving real-world problems on HPC systems, harnessing multi-threaded (i.e., OpenMP), MPI, or PGAS programming models.



2021

  • 1. Zhou, N., Georgiou, Y., Pospieszny, M., Zhong, L., Zhou, H., Niethammer, C., Pejak, B., Marko, O., Hoppe, D.: Container orchestration on HPC systems through Kubernetes. Journal of Cloud Computing. 10, 1–14 (2021).
  • 2. Zhou, H., Niethammer, C., Azcué, M.H.: Usage Experiences of Performance Tools for Modern C++ Code Analysis and Optimization. Tools for High Performance Computing 2018/2019. pp. 103–121. Springer (2021).

2020

  • 1. Zhou, H., Gracia, J., Zhou, N., Schneider, R.: Collectives in hybrid MPI+MPI code: Design, practice and performance. Parallel Computing. 102669 (2020).
  • 2. Herrerías Azcué, M., Capdevila, H., Zhou, H., Hammer, A.: Simulation of Large PV Plants Using a Continuous Radiance Distribution Model and Cell-Resolution Mismatch Calculation. European Photovoltaic Solar Energy Conference and Exhibition. pp. 1311–1316 (2020).

2019

  • 1. Zhou, H., Gracia, J., Schneider, R.: MPI Collectives for Multi-core Clusters: Optimized Performance of the Hybrid MPI+MPI Parallel Codes. Proceedings of the 48th International Conference on Parallel Processing: Workshops. pp. 1–10. ACM (2019).

2017

  • 1. Zhou, H., Gracia, J.: Application Productivity and Performance Evaluation of Transparent Locality-aware One-sided Communication Primitives. International Journal of Networking and Computing. 7, 136–153 (2017).

2016

  • 1. Zhou, H., Gracia, J.: Asynchronous Progress Design for an MPI-Based PGAS One-Sided Communication System. ICPADS. pp. 999–1006. IEEE (2016).
  • 2. Zhou, H., Gracia, J.: Towards Performance Portability through Locality-Awareness for Applications Using One-Sided Communication Primitives. CANDAR. pp. 536–542. IEEE (2016).

2015

  • 1. Zhou, H., Marjanovic, V., Niethammer, C., Gracia, J.: A Bandwidth-saving Optimization for MPI Broadcast Collective Operation. Proceedings of the International Conference on Parallel Processing Workshops, ICPPW. Beijing, China (2015).
  • 2. Niethammer, C., Khabi, D., Zhou, H., Marjanovic, V., Gracia, J.: Impact of Late-Arrivals on MPI Collective Operations. INFOCOMP 2015. Brussels, Belgium (2015).
  • 3. Zhou, H., Idrees, K., Gracia, J.: Leveraging MPI-3 Shared-Memory Extensions for Efficient PGAS Runtime Systems. In: Träff, J.L., Hunold, S., and Versaci, F. (eds.) Euro-Par. pp. 373–384. Springer (2015).

2014

  • 1. Zhou, H., Mhedheb, Y., Idrees, K., Glass, C.W., Gracia, J., Fürlinger, K.: DART-MPI: An MPI-based Implementation of a PGAS Runtime System. In: Malony, A.D. and Hammond, J.R. (eds.) PGAS. pp. 3:1–3:11. ACM (2014).
  • 2. Fürlinger, K., Glass, C.W., Gracia, J., Knüpfer, A., Tao, J., Hünich, D., Idrees, K., Maiterth, M., Mhedheb, Y., Zhou, H.: DASH: Data Structures and Algorithms with Support for Hierarchical Locality. Euro-Par 2014: Parallel Processing Workshops - Euro-Par 2014 International Workshops, Porto, Portugal, August 25-26, 2014, Revised Selected Papers, Part II. pp. 542–552 (2014).