These html pages are based on the PhD thesis "Cluster-Based Parallelization of Simulations on Dynamically Adaptive Grids and Dynamic Resource Management" by Martin Schreiber.

8.3 Invasive resource manager

The content and structure of this section are related to our work [SRNB13b], which is currently under review. The resource manager (RM) is executed by a separate process which runs in the background on one thread without pinning. The task of the resource manager is to optimize the resource distribution, based on the information provided by the applications via constraints. Such constraints can be, e.g., scalability graphs, workload and range constraints, see Sec. 8.2.1. For the sake of clarity, Table 8.1 gives an overview of the symbols used in this and the upcoming section.




R       Number of system-wide available computing resources
N       Number of concurrently running processes
A⃗      List of running applications or MPI processes
ϵ       Placeholder for "no application"
C⃗      State of resource assignments to applications
D⃗      Optimal resource distribution assigning D⃗i cores to application A⃗i
P⃗i     Optimization information (e.g. scalability graphs) for application i
        Optimization targets (throughput, energy, etc.) for each application
        Number of resources currently assigned to application i
        List of free resources
        Workload for application i
        Throughput for c cores
        Scalability graph for application i

Table 8.1: (source: [SRNB13b]) Overview of the symbols which are used in the data structures of the resource manager.


The RM aims at optimizing the core-to-application assignment stored in the vector C⃗. Each entry represents the association of one of the R = |C⃗| physical cores with an application: the application id is stored in C⃗i if core i is assigned to that application. If no application is assigned to a core, ϵ is used as a placeholder.
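As an illustration of this data structure, the following sketch models C⃗ as a plain array with a sentinel value for ϵ. The names (EPSILON, assign_core, cores_of) are illustrative, not the thesis implementation:

```python
# Sketch of the core-to-application assignment vector C.
# EPSILON stands in for the "no application" placeholder.
EPSILON = None

def make_assignment(R):
    """Create an assignment vector for R physical cores, all unassigned."""
    return [EPSILON] * R

def assign_core(C, core_id, app_id):
    """Assign a free core to an application; refuse a resource collision."""
    if C[core_id] is not EPSILON:
        raise ValueError("core %d is already assigned" % core_id)
    C[core_id] = app_id

def cores_of(C, app_id):
    """All cores currently assigned to app_id."""
    return [i for i, a in enumerate(C) if a == app_id]

C = make_assignment(8)   # R = 8 cores, all set to epsilon
assign_core(C, 0, "A1")
assign_core(C, 1, "A1")
assign_core(C, 2, "A2")
```

Storing one owner per core makes a resource collision impossible by construction: a core must be released before it can be handed to another application.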

Scheduling information

Here, we describe our algorithm which optimizes the resource distribution based on the constraints provided by the applications. Again, let R be the number of system-wide available compute resources. Further, let N be the number of concurrently running applications, ϵ a marker for a resource not assigned to any application, and A⃗ a list of identifiers of the concurrently running applications, with |A⃗| = N. We then distinguish between two kinds of management data inside the RM: per-application and system-wide data.

Per-application data: For each application A⃗i, there is a P⃗i storing the currently specified constraints which were previously sent to the RM via a (non-)blocking invade. The RM uses these constraints for optimizations, depending on the desired optimization targets which are discussed in Section 8.4.
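A minimal sketch of such a per-application record, assuming a small set of constraint fields; the field names are assumptions for illustration, not the data layout of [SRNB13b]:

```python
from dataclasses import dataclass, field

# Illustrative per-application constraint record P_i.
@dataclass
class AppConstraints:
    min_cores: int = 1                  # lower bound of a range constraint
    max_cores: int = 1 << 30            # upper bound of a range constraint
    workload: float = 0.0               # workload reported by the application
    scalability: list = field(default_factory=list)  # throughput per core count

# P maps each running application to its most recently invaded constraints.
P = {
    "A1": AppConstraints(min_cores=2, max_cores=8, workload=1000.0),
    "A2": AppConstraints(scalability=[1.0, 1.9, 2.7, 3.2]),
}
```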

System-wide data: The system-wide management data is defined by the current resource assignment C⃗ and an optimization target. Such optimization targets request, e.g., a maximization of the application throughput or, for future applications, a minimization of the energy consumption. Then,

C⃗ ∈ ({ϵ} ∪ A⃗)^R

is the current state of the resource assignment. This assigns each compute resource uniquely either to an application a ∈ A⃗ or to none (ϵ). An optimization target is then given, e.g., by the optimal resource distribution

D⃗ ∈ {0, 1, ..., R}^N.

Here, each entry D⃗i stores the number of cores which are assigned to the i-th application A⃗i.

We further demand

   ∑_{i=1..N} D⃗i ≤ R

to avoid oversubscription, i.e. assigning more resources than are available on the system. Resource collisions themselves are avoided by assigning the resources via the vector C⃗: each core can be assigned to only a single application. The cores currently assigned to an application are additionally stored in a list, so that they can be released without a search operation on C⃗.
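The feasibility condition on a candidate distribution D⃗ can be sketched as follows; is_feasible is an illustrative helper, not the RM's actual code:

```python
def is_feasible(D, R):
    """Reject target distributions that would oversubscribe the R cores."""
    return all(d >= 0 for d in D) and sum(D) <= R

# With R = 8 system-wide cores:
# is_feasible([3, 2, 2], 8) -> True   (7 of 8 cores used)
# is_feasible([5, 4], 8)    -> False  (9 cores demanded, oversubscription)
```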

Optimization loop

A loop inside the RM successively optimizes the resource distribution: based on the constraints, the current resource distribution C⃗ is driven towards the optimal target resource distribution D⃗. The optimization loop can be separated into three parts:
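A single iteration of such a loop can be sketched as follows. The equal-split target merely stands in for the constraint-based computation of D⃗, and all names are illustrative assumptions:

```python
EPSILON = None  # placeholder for "no application"

def optimization_step(C, apps):
    """One illustrative RM iteration: derive a target distribution D
    (here simply an equal split among the running applications, standing
    in for the constraint-based optimization) and move the assignment
    vector C toward it. Assumes every assigned id in C appears in apps."""
    R = len(C)
    D = {a: R // len(apps) for a in apps}              # target cores per app
    counts = {a: sum(1 for c in C if c == a) for a in apps}
    # Release cores from applications above their target.
    for i, a in enumerate(C):
        if a is not EPSILON and counts[a] > D[a]:
            C[i] = EPSILON
            counts[a] -= 1
    # Hand free cores to applications below their target.
    for i, a in enumerate(C):
        if a is EPSILON:
            for b in apps:
                if counts[b] < D[b]:
                    C[i] = b
                    counts[b] += 1
                    break
    return D

C = ["A1"] * 6 + [EPSILON] * 2     # A1 currently holds 6 of 8 cores
optimization_step(C, ["A1", "A2"])  # rebalance toward D = (4, 4)
```

Note that cores are first released and only then reassigned, so the invariant of at most one owner per core holds throughout the iteration.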