
HPC - Big Data Convergence: Managing the Diversity of Application Profiles on HPC Facilities

Valentin Honoré 1 
1 TADAAM - Topology-Aware System-Scale Data Management for High-Performance Computing
LaBRI - Laboratoire Bordelais de Recherche en Informatique, Inria Bordeaux - Sud-Ouest
Abstract: Numerical simulations are complex programs that allow scientists to solve, simulate and model complex phenomena. High Performance Computing (HPC) is the domain in which these complex and heavy computations are performed on large-scale computers, also called supercomputers. Nowadays, most scientific fields need supercomputers to undertake their research, as is the case in cosmology, physics, biology and chemistry. Recently, we have observed a convergence between Big Data/Machine Learning and HPC. Applications coming from these emerging fields (for example, those using Deep Learning frameworks) are becoming highly compute-intensive. Hence, HPC facilities have emerged as an appropriate solution to run such applications. From the large variety of existing applications has risen a necessity shared by all supercomputers: they must be generic and compatible with all kinds of applications. Computing nodes themselves come in a wide variety, ranging from CPUs to GPUs, with specific nodes designed to perform dedicated computations. Each category of node is designed to perform very fast operations of a given type (for example, vector or matrix computations).

Supercomputers are used in a competitive environment: multiple users simultaneously connect and request sets of computing resources to run their applications. This competition for resources is managed by the machine itself via a specific program called the scheduler, which reviews, assigns and maps the different user requests. Each user asks for (that is, pays for the use of) access to the resources of the supercomputer in order to run their application, and is granted access to some resources for a limited amount of time. This means that users need to estimate how many compute nodes they want to request and for how long, which is often difficult to decide. In this thesis, we provide solutions and strategies to tackle these issues.
We propose mathematical models, scheduling algorithms, and resource partitioning strategies in order to optimize high-throughput applications running on supercomputers. In this work, we focus on two types of applications in the context of the HPC/Big Data convergence: data-intensive and irregular (or stochastic) applications.

Data-intensive applications represent typical HPC frameworks. These applications are made up of two main components. The first one is called the simulation, a very compute-intensive code that generates a tremendous amount of data by simulating a physical or biological phenomenon. The second component is called the analytics, during which sub-routines post-process the simulation output to extract, generate and save the final result of the application. We propose to optimize these applications by designing automatic resource partitioning and scheduling strategies for both of their components. To do so, we use the well-known in situ paradigm, which consists in scheduling both components together in order to avoid the huge cost of saving all simulation data on disks. We propose automatic resource partitioning models and scheduling heuristics to improve the overall performance of in situ applications.

Stochastic applications are applications whose execution time depends on their input, whereas in usual data-intensive applications the makespans of the simulation and the analytics are not affected by such parameters. Stochastic jobs originate from Big Data or Machine Learning workloads, whose performance is highly dependent on the characteristics of the input data. These applications have recently appeared on HPC platforms. However, the uncertainty of their execution time remains a strong limitation when using supercomputers. Indeed, users need to estimate how long their job will run on the machine, and enter this estimate as their first reservation value.
But if the job does not complete successfully within this first reservation, the user has to resubmit it, this time requesting a longer reservation.
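The resubmission mechanism described above can be sketched as follows. This is a minimal illustration, not the strategy proposed in the thesis: the doubling sequence of reservation values, and the assumption that every reservation is charged for its full requested length whether or not the job fits in it, are hypothetical choices made only for the example.

```python
def cost_of_run(exec_time, reservations):
    """Total time paid when a job of (a priori unknown) length exec_time
    is pushed through an increasing sequence of reservation requests.

    Illustrative assumption: each reservation is charged in full,
    whether or not the job completes within it.
    """
    paid = 0.0
    for t in reservations:
        paid += t
        if exec_time <= t:
            return paid  # the job finally fits: done
    raise ValueError("job did not fit in any reservation")


def doubling_sequence(first, n):
    """A simple hypothetical strategy: double the request after each failure."""
    return [first * 2 ** i for i in range(n)]


seq = doubling_sequence(1.0, 6)  # 1, 2, 4, 8, 16, 32 hours
print(cost_of_run(0.5, seq))     # fits in the first reservation: pays 1.0
print(cost_of_run(5.0, seq))     # fails at 1, 2 and 4; fits at 8: pays 15.0
```

Choosing a good sequence of reservation values for a given runtime distribution, so as to minimize the expected total cost, is precisely the kind of optimization problem addressed in the thesis.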

Cited literature: 172 references
Contributor: ABES STAR
Submitted on: Friday, November 13, 2020 - 1:42:08 PM
Last modification on: Saturday, June 25, 2022 - 10:43:02 AM
Long-term archiving on: Sunday, February 14, 2021 - 6:56:52 PM


Version validated by the jury (STAR)


  • HAL Id : tel-03003808, version 1



Valentin Honoré. HPC - Big Data Convergence: Managing the Diversity of Application Profiles on HPC Facilities. Data Structures and Algorithms [cs.DS]. Université de Bordeaux, 2020. English. ⟨NNT : 2020BORD0145⟩. ⟨tel-03003808⟩


