game theory vs. graph theory in scheduling

Most scheduling systems (resource managers, HPC schedulers, etc.) use some sort of fair-share algorithm, where the portion of the shared resource being scheduled (divided up) depends on a very simple ratio of “the user who ‘deserves’ more gets more”.  The amount one deserves usually depends on the number of tasks pending completion.  Essentially, the scheduler takes a snapshot of the queue sizes and determines that, in order to drain the queues faster, the larger queue must get the larger portion of the available resources.
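To make that concrete, here is a minimal sketch of such a snapshot-based split (my own illustration in Python, not the code of any particular scheduler): each user’s grant is simply their fraction of the total pending work, applied to the machines available right now.

```python
def fair_share(queue_sizes, total_slots):
    """Split total_slots in proportion to a snapshot of queue sizes.

    queue_sizes: dict of user -> number of pending tasks
    Returns a dict of user -> whole machines granted this pass.
    """
    total_pending = sum(queue_sizes.values())
    if total_pending == 0:
        return {user: 0 for user in queue_sizes}
    # Proportional share of this pass, truncated to whole machines.
    return {
        user: int(total_slots * pending / total_pending)
        for user, pending in queue_sizes.items()
    }
```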

Without loss of generality, assume that we have two users, ‘A’ and ‘B’.  ‘A’ has a queue size of 10 and ‘B’ has a queue size of 90.  ‘A’ ends up with 10% of the resources and ‘B’ ends up with 90%.  If only 8 machines are available at a time, there is a good chance that ‘A’ is starved: 10% of 8 machines is 0.8, which truncates to zero whole machines for ‘A’ in any given pass.
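Plugging those numbers into the snapshot split above (again, just an illustrative sketch) shows the starvation directly:

```python
# 8 machines; snapshot of queues: A has 10 pending tasks, B has 90.
slots, queues = 8, {"A": 10, "B": 90}
total = sum(queues.values())
grants = {user: int(slots * pending / total) for user, pending in queues.items()}
print(grants)  # {'A': 0, 'B': 7} -- A's 0.8-machine share truncates to nothing
```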

Scheduling systems follow ‘a’ graph algorithm to make a determination; whether implicitly or explicitly, a graph algorithm is used somewhere.  Graphs are essentially decision trees, and depending on the number of levels and/or the fan-out, they can be complicated or as simple as a binary decision tree of “if not this way, then it must be the other way”.  That is why a two-node scheduling system is deterministic, while anything above two nodes falls into the nondeterministic realm (the exception being a 3-node system).
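One way to picture that tree (my own reading, assuming a greedy largest-queue-first rule purely for illustration): each free machine is one level of the tree, the fan-out at each level is the set of users with pending work, and with two users every node reduces to the binary “if not A, then B” choice. Note that the walk below consults only the current snapshot; no branch remembers how earlier branches were taken.

```python
def assign_machines(queue_sizes, free_machines):
    """Walk a decision tree one level per free machine.

    Fan-out at each level = users with pending work; the branch taken
    is the largest queue.  Only the current snapshot is consulted.
    """
    pending = dict(queue_sizes)
    assignments = []
    for _ in range(free_machines):
        candidates = [user for user, q in pending.items() if q > 0]
        if not candidates:
            break
        chosen = max(candidates, key=lambda user: pending[user])  # the branch taken
        assignments.append(chosen)
        pending[chosen] -= 1
    return assignments

print(assign_machines({"A": 10, "B": 90}, 8))
# ['B', 'B', 'B', 'B', 'B', 'B', 'B', 'B'] -- every branch goes B's way
```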

Graph algorithms do not consider history (how did I get here?).  That is the main reason why most fair-share algorithms are not fair.

Games, by contrast, depend on the history of events or strategies.  There is an implicit history built into each strategy that allows one to reconstruct all the previous steps.  As such, one’s current state is not the only state required to make a decision about the next state.

If, instead of using a fair-share algorithm, schedulers treated each transaction as a game, and each event as a strategy within that game, schedulers would actually be fair.
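Here is a minimal sketch of that idea, using a hypothetical HistoryAwareScheduler of my own devising (it borrows the flavor of deficit-style allocation; it is not the author’s algorithm or any formal game-theoretic construction).  Each scheduling round is a move in a repeated game, and each decision depends on the accumulated history of what each player was owed versus what they actually received.

```python
class HistoryAwareScheduler:
    """Toy repeated-game allocator: every round's decision depends on
    the accumulated history of grants, not just the current snapshot."""

    def __init__(self):
        self.credit = {}  # user -> share owed so far minus machines received

    def allocate(self, queue_sizes, total_slots):
        total_pending = sum(queue_sizes.values())
        if total_pending == 0:
            return {user: 0 for user in queue_sizes}
        # Every round, each user earns credit equal to their ideal
        # (real-valued) share of this round's machines.
        for user, pending in queue_sizes.items():
            ideal = total_slots * pending / total_pending
            self.credit[user] = self.credit.get(user, 0.0) + ideal

        grants = {user: 0 for user in queue_sizes}
        for _ in range(total_slots):
            eligible = [u for u in queue_sizes if grants[u] < queue_sizes[u]]
            if not eligible:
                break
            # The next machine goes to whoever history says is owed the most.
            chosen = max(eligible, key=lambda u: self.credit[u])
            grants[chosen] += 1
            self.credit[chosen] -= 1.0
        return grants


sched = HistoryAwareScheduler()
print(sched.allocate({"A": 10, "B": 90}, 8))  # {'A': 1, 'B': 7}
print(sched.allocate({"A": 10, "B": 90}, 8))  # {'A': 1, 'B': 7}
```

Unlike the snapshot split, which hands ‘A’ zero machines every pass, the carried credit means ‘A’ collects its fractional 0.8-machine share over time (one machine in most rounds, zero in the occasional round), so no player is starved indefinitely.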

Art Sedighi