Meaningful measurement

What is a meaningful measurement of the performance of a scheduler?  I am embarking on the next paper, which is due in two weeks, and I am unsure about what I am actually going to write.  Well, I know what I want to write, but I am not sure how to present it.

This is the status of where I am right now…

Alea + GridSim have now successfully been morphed into a new framework called dSim.  Why dSim?  Not really sure, but it had a good ring to it, and it is potentially a commercially viable name.  Anyhow, dSim is capable of creating a dynamic load, creating a dynamic set of resources, scheduling appropriately, and measuring the results.  All of this can be done via configuration files and parameters.

One of the results I am collecting is the makespan for each task, and as a result, for the entire job.  I don't really care about the job itself, but rather the makespan of a given task.

I am collecting data that is like the following:

Client Name | Task ID | Submission Time | Finished Time | Makespan
Client 1    | 1034    | 3000            | 11000         | 8000

For a given run, I have thousands of these rows, one row entry per task, which makes sense.  The question then is what to do with this data?

Statistical analysis of the data would give me the mean, median, and standard deviation (STD).  But is that enough?  For example, I may have the following:

Client 1:
mean: 33000
STD: 18500

Client 2:
mean: 44000
STD: 9000

Obviously, that’s good info, but is it enough?
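To make the comparison concrete, here is a minimal Python sketch of the kind of per-client analysis I have in mind; the row values and the function name are made up for illustration:

```python
import statistics

# Hypothetical per-task rows: (client, task_id, submission_time, finished_time)
rows = [
    ("Client 1", 1034, 3000, 11000),
    ("Client 1", 1035, 3200, 40000),
    ("Client 2", 2001, 3000, 50000),
    ("Client 2", 2002, 3100, 41000),
]

def makespan_stats(rows):
    """Group per-task makespans (finished - submission) by client and summarize."""
    by_client = {}
    for client, _task_id, submitted, finished in rows:
        by_client.setdefault(client, []).append(finished - submitted)
    return {
        client: {
            "mean": statistics.mean(ms),
            "median": statistics.median(ms),
            "std": statistics.pstdev(ms),  # population standard deviation
        }
        for client, ms in by_client.items()
    }
```

Percentiles (say, the 95th) could be added the same way, and they may capture tail behavior, such as a few badly starved tasks, better than mean and STD alone.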

The work can potentially be extended to a cloud environment, but for now, the focus is HPC environments.

Published paper in SCPE

Hi everyone,

As promised, I am attaching my first published paper:

FAIRNESS OF TASK SCHEDULING IN HIGH PERFORMANCE COMPUTING ENVIRONMENTS

You can download it here: Fairness-sedighi


Abstract:

We claim that the current scheduling systems for high performance computing environments are unable to fairly distribute resources among the users, and as such, are unable to maximize the overall user satisfaction. We demonstrate that a user can game the system to cause a temporal starvation to the other users of the system, even though all users will eventually finish their job in the shared-computing environment. Undesired and unfair delays in the current fair-shared schedulers impede these schedulers from wide deployments to users with diverse usage profiles.


Source:

http://www.scpe.org/index.php/scpe/article/view/1020


Art Sedighi

Scheduling Architecture

As I work my way through the Alea framework to continue my research, I realized that Alea does not actually implement a scheduling algorithm, at least not a fair-share one.

Schedulers are difficult to write, but I have had the fortune to develop a few over the years.  I wanted to document the internal architecture of a “good” scheduler.  Frankly, it does not matter what type of scheduling algorithm you implement.  As long as you follow the approach below, you will be OK.

Schedulers need to have 3 distinct stages:

– Pre-processing of tasks

– Scheduling of tasks

– Dispatching of tasks

Most people lump the first two together.  That’s a mistake!

– Step 1: preprocessing:

This is where you apply the desired policy to the incoming tasks.  If you are interested in an FCFS policy, you make sure that you queue up your tasks in an FCFS manner, by putting all the tasks in a queue in order of arrival.  If you are interested in SJF (shortest job first), you sort the incoming tasks so that the shortest jobs are first in the queue.  Maybe you have more than one queue; it does not matter.  The point here is that in preprocessing you are not actually scheduling anything, but rather enforcing a policy or set of desired policies on the incoming tasks.
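As a sketch (the Task fields and policy names here are my own illustration), the preprocessing stage is little more than a sort:

```python
from collections import namedtuple

# Illustrative task record; est_runtime would come from user hints or history.
Task = namedtuple("Task", ["task_id", "arrival", "est_runtime"])

def preprocess(tasks, policy="FCFS"):
    """Stage 1: order the incoming queue per policy; nothing is scheduled here."""
    if policy == "FCFS":
        return sorted(tasks, key=lambda t: t.arrival)      # order of arrival
    if policy == "SJF":
        return sorted(tasks, key=lambda t: t.est_runtime)  # shortest job first
    raise ValueError(f"unknown policy: {policy}")
```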

– Step 2: scheduling:

After your tasks are sorted based on some policy, you are ready to schedule these tasks.  Scheduling is supply vs. demand.
The most common scheduling policy is fairshare.  I have many posts about fairshare, but the gist of it is that you are given a set of resources based on your “fair share”, which is determined by a “you deserve more because you need more” mentality.  If A has 10x as many tasks as B, it will get 10x the resources.
There are other policies, like highest priority first, etc.
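A minimal sketch of the proportional fair-share split described above (the function name, and the assumption that resources are divisible, are mine):

```python
def fair_share(demand, total_resources):
    """Split resources proportionally to each user's demand (e.g., pending tasks)."""
    total_demand = sum(demand.values())
    return {user: total_resources * d / total_demand for user, d in demand.items()}
```

With a demand of 100 tasks for A and 10 for B, A ends up with 10x the resources of B, matching the example above.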

– Step 3: dispatching:

After you have decided which tasks should get executed first, you now need to dispatch the tasks.  Most scheduling systems use gang scheduling, in which a number of tasks are dispatched together.  The reason is efficiency and practicality.  You don’t make a scheduling decision on one task, but rather on a set of tasks.  I am classifying gang scheduling under dispatching, as it does not make a scheduling decision.  Many papers (ref needed) simply claim that gang scheduling is a scheduling policy of its own.  The dispatcher essentially goes through the set of tasks assigned for execution and sends those tasks to the nodes.  Once the set is dispatched, it moves on to the next set.
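A sketch of the dispatch stage under these assumptions (the round-robin node assignment is an illustrative choice, not a claim about any particular scheduler):

```python
from itertools import cycle

def dispatch(scheduled, node_names, batch_size):
    """Stage 3: walk the already-scheduled tasks one batch at a time and record
    which node each task is sent to; no scheduling decision is made here."""
    assignments = []
    nodes = cycle(node_names)
    for i in range(0, len(scheduled), batch_size):
        for task in scheduled[i:i + batch_size]:
            assignments.append((task, next(nodes)))  # stand-in for the actual send
    return assignments
```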

There are optimizations that can be done at every step, but that’s where the innovation comes in.  Also, the steps are not fully discrete, as they depend on each other.  You *can* and should make them as decoupled as possible, but that’s just a general software development rule of thumb!

Art Sedighi

Policy as it pertains to high-performance systems

I was recently asked to think about how high-performance systems deal with policies.

Two clarifications are required here:

  1. What are high-performance systems in the context of Grid, HPC and scheduling?
  2. What are the policies that a typical high-performance system deals with or, in other words, sets?

In the context of high-performance schedulers, a high-performance system is the scenario where we are dealing with a large number of tasks (potentially millions of tasks) that are fairly short in duration, and the total job is complete only once all the tasks have been completed.

What is “short” in our context?  I can easily say that short is on the order of milliseconds or even seconds, but more quantitatively, I will assume that a task is short in duration iff:

  1. Scheduling overhead directly impacts the speedup factor (i.e., the time it takes to schedule the task cannot be neglected)
  2. The runtime of a given task is significantly shorter (two orders of magnitude) than the overall runtime of the given job.
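These two criteria can be expressed as a simple predicate; the 1% overhead cutoff is my own assumption, since the text only says the overhead “cannot be neglected”:

```python
def is_short_task(task_runtime, sched_overhead, job_runtime):
    """A task is 'short' iff (1) scheduling overhead is non-negligible relative
    to the task, and (2) the task is at least two orders of magnitude shorter
    than the overall job.  The 1% cutoff for 'non-negligible' is assumed."""
    overhead_matters = sched_overhead / task_runtime >= 0.01
    much_shorter = task_runtime * 100 <= job_runtime
    return overhead_matters and much_shorter
```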

The bigger question becomes what these policies actually are and why would they be of importance?

The following is a subset of policies that we could be referring to:

  1. Sharing policy
  2. Fair-share policy pertaining to scheduling
  3. (others – TBD)

In a sharing policy, a client can allow some or all of its resources to be shared (given out) to other clients that may need them.  This obviously carries the risk that the resources are not immediately available when the original owner needs them back.  At the same time, if one waits before lending out resources, there could be a high degree of unutilized resources.


The fair-share policy in scheduling is probably the most implicitly set policy in shared environments.  The users get their “fair share” of the available resources based on some preset fair-share policy.  A user may assist or hint the policy with priorities, for example, but generally speaking, the policy is set and agreed to by all the users.

My research focuses on the fair-share policy and how it affects users – and to an extent, resources.  Users agree to the fair-share policy with the assumption that what the scheduler does is “fair”.

Furthermore, users interact with the system unaware of how the fair-share scheduling policy is affecting their runtime.  A side effect of a fair-share scheduler is that timing severely affects the outcome.  Since no historical perspective is kept to aid the scheduler in enforcing such a policy, some users end up temporarily starved.


Art Sedighi

Congestion Games

The concept of Congestion Games is a very interesting one as it pertains to scheduling.  Rosenthal (1973) coined the term and defined a congestion game as a scenario where the payoff of a given player depends on the resource it ends up on, and on how busy that resource is, based on how many other players are also on that resource.  Rosenthal, however, focused on identical machines and same-size jobs.

A scheduler may schedule a number of tasks on a given machine if the number of pending tasks is greater than the total number of resources.  As the number of tasks scheduled on a given machine increases, they all compete for the same resources (CPU, memory, etc.), and therefore the utility perceived by a given client could change based on how “congested” a node gets.
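As an illustrative model (equal sharing of a node is my assumption, not Rosenthal’s exact payoff function), the perceived utility might fall off with node congestion like this:

```python
def perceived_utility(base_payoff, tasks_on_node):
    """Congestion-game flavor: a client's payoff from a node degrades with the
    number of co-located tasks; equal sharing is an illustrative choice."""
    return base_payoff / tasks_on_node
```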

The fact of the matter is that most schedulers schedule one task per CPU, thus avoiding congestion.  I do agree that there could be other processes running on that machine as part of the operating system (for example, a time-sync daemon runs on a box every few minutes), but in a dedicated-node scenario these processes consume very minimal resources and can be ignored.

For unrelated machines, where the infrastructure is heterogeneous, Nir Andelman argues in Strong Price of Anarchy (2006) that congestion does not take place, since the load of a given task is different on different machines.  I strongly disagree with this sentiment.  Based on one of my previous postings, Utility of Master-worker Systems, we must start thinking about the utility of the node itself.  A node (CPU) desires a high utility as well, and that utility is to be idle.  Based on this, unrelated machines cannot be treated any differently than related machines.  A system of nodes breaks down into its component nodes, with each node desiring a level of utility that matches the utility of all the other nodes.

In short, there is an intrinsic commonality between related and unrelated machines.  This commonality is the fact that each CPU can achieve the same level of utility by doing the same thing (taking on the same strategy, in GT lingo), and that is to finish the task it was given as soon as possible.

Furthermore, we need to look at scheduling a job on a CPU as two separate and sometimes conflicting games:

– Macro-level: a game played by the job submitters trying to minimize their runtime (makespan)

– Micro-level: a game played by the job and the CPU, where the CPU wishes to be idle and the job wishes to be completed.


Reference:

N. Andelman, M. Feldman, and Y. Mansour, “Strong Price of Anarchy,” 2006


The Grid Problem

Determining whether a problem/game has a Nash equilibrium is not an easy problem.  In fact, it does not fit the classical NP framework; computing a Nash equilibrium is complete for the class PPAD.  In this posting, I will classify the Grid Problem as follows:

Grid is a shared infrastructure with a limited number of resources.  The limit could be high, but it must be less than what two players (users of the Grid) are able to consume.  We want to look at the Grid Problem and see if there exists a configuration where all the users/players are satisfied.  In short, we want to see if there exists a Nash equilibrium point for the Grid Problem.

Let’s take a step back and look at the classes of games that exist.  I want to do this to level-set and clarify my position about where various infrastructure types fit into the different classes of games.

We have the following:

  1. Strategic Games (a.k.a Games in Normal Form)
  2. Extensive Games with perfect information
  3. Extensive Games without perfect information
  4. Coalitional Games

Strategic games are rather interesting in that both players (or all N players) make their decisions in advance and stick to those decisions for the duration of the game.  In extensive games, however, players can modify their strategy between moves.

Let’s look at a Grid infrastructure:

A. Resources are limited
B. Jobs are submitted to the Queue to be processed
C. Queues are most likely FIFO
D. A player wants to get results as soon as possible, without regard to the other players

Based on the above properties, we can deduce the following:

  • Property A makes this game a zero-sum game.
  • Property B makes this game an imperfect-information game.
  • Properties C + D make this game a non-cooperative, strictly competitive game.

In a strategic game, each player chooses his plan of action once and for all, independently of the other players.  Each plan is made based on a set of preferences, but those preferences *must* be selfish, as we are dealing with a zero-sum game.

If this problem can be reduced to the Matching Pennies problem, we can deduce that the Grid Problem does not have a pure-strategy Nash equilibrium.
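Recall that Matching Pennies has no pure-strategy equilibrium; a brute-force check over its four pure profiles makes that concrete:

```python
# Matching Pennies payoffs: the row player wins (+1) on a match, loses (-1) otherwise.
PAYOFFS = {
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}

def pure_nash_equilibria(payoffs, moves=("H", "T")):
    """Return the profiles where neither player gains by deviating unilaterally."""
    eq = []
    for r in moves:
        for c in moves:
            u_r, u_c = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in moves)
            col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in moves)
            if row_ok and col_ok:
                eq.append((r, c))
    return eq
```

Running `pure_nash_equilibria(PAYOFFS)` yields an empty list: in every pure profile, one of the two players can profit by switching.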


Art Sedighi