The concept of congestion games is an interesting one as it pertains to scheduling. Rosenthal (1973) coined the term, defining a congestion game as a scenario in which a player's payoff depends on the resource it ends up on and on how busy that resource is, i.e., how many other players are also using it. Rosenthal, however, focused on identical machines and same-size jobs.
A scheduler may place multiple tasks on a single machine if the number of pending tasks is greater than the total number of resources. As the number of tasks scheduled on a given machine increases, they all compete for the same resources (CPU, memory, etc.), and therefore the utility perceived by a given client can change based on how "congested" a node gets.
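This dependence of a player's cost on how many others share its resource can be sketched in a few lines. The following is a minimal, illustrative model (the player and machine names are hypothetical, not from the post) of Rosenthal-style congestion on identical machines with same-size jobs, where each player's delay is simply the number of players on its machine:

```python
from collections import Counter

def player_delays(assignment):
    """assignment maps player -> machine; each player's delay is the
    number of players sharing its machine (identical machines, unit jobs)."""
    load = Counter(assignment.values())  # players per machine
    return {p: load[m] for p, m in assignment.items()}

# Three players on two machines: the shared machine is more "congested".
delays = player_delays({"p1": "m1", "p2": "m1", "p3": "m2"})
print(delays)  # {'p1': 2, 'p2': 2, 'p3': 1}
```

Here p3, alone on its machine, perceives a lower delay than p1 and p2, who congest the same node.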
In practice, most schedulers schedule one task per CPU and so avoid congestion. I do agree that there could be other processes running on that machine as part of the operating system (for example, a time-sync daemon that wakes every few minutes), but in a dedicated-node scenario these processes consume minimal resources and can be ignored.
For unrelated machines, where the infrastructure is heterogeneous, Nir Andelman et al. argue in Strong Price of Anarchy (2006) that congestion does not take place, since the load of a given task differs across machines. I strongly disagree with this sentiment. Based on one of my previous posts, Utility of Master-worker Systems, we must start thinking about the utility of the node itself. A node (CPU) desires high utility as well, and that utility is to be idle. On this view, unrelated machines cannot be treated any differently than related machines. A system of nodes breaks down into its component nodes, with each node desiring a level of utility that matches the utility of all the other nodes.
In short, there is an intrinsic commonality between related and unrelated machines: each CPU can achieve the same level of utility by doing the same thing (adopting the same strategy, in game-theory lingo), namely finishing the task it was given as soon as possible.
Furthermore, we need to look at scheduling a job on a CPU as two separate and sometimes conflicting games:
– Macro-level: a game played by the job submitters, each trying to minimize the completion time of their own jobs (with the makespan as the overall measure).
– Micro-level: a game played by the job and the CPU, where the CPU wishes to be idle and the job wishes to be completed.
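The two levels above can be made concrete with a small sketch. Assuming unrelated machines where job j takes time[j][m] on machine m (all names and numbers below are illustrative), the macro-level cost is the makespan over the machines, while each machine's micro-level "disutility" is its own busy time, so an idle CPU has disutility zero:

```python
def machine_loads(times, assignment):
    """times[j][m] is the processing time of job j on machine m;
    assignment maps job -> machine. Returns each machine's busy time."""
    loads = {}
    for job, machine in assignment.items():
        loads[machine] = loads.get(machine, 0) + times[job][machine]
    return loads

# Unrelated machines: the same job costs different amounts per machine.
times = {"j1": {"m1": 2, "m2": 5}, "j2": {"m1": 4, "m2": 1}}
loads = machine_loads(times, {"j1": "m1", "j2": "m2"})
makespan = max(loads.values())  # macro-level: the submitters' concern
print(loads, makespan)          # micro-level: each machine's busy time
```

Note how each job lands on the machine where it is cheapest, which simultaneously serves the macro-level goal (a small makespan) and the micro-level one (each CPU returns to idle as soon as possible).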
Reference:
N. Andelman, M. Feldman, Y. Mansour. Strong Price of Anarchy. 2006.