Utility of master-worker systems

For the smallest possible master-worker system, there are two players: one master and one worker.  The master is the user or client that needs the worker to get something done, and the worker is the actual processor that does the work.

Let’s break each of these into their various states:

Master:

1. Work pending

2. Work submitted for completion

3. Work completed and results returned

Worker:

1. Working

2. Work completed

3. Idle and results returned

Based on our earlier discussion around utility, we can further assign an arbitrary utility to each of these states.  We can then model the utility as a function of workload for the two participants.

Each of these states can be assigned a utility.  When the processor is working, it has the lowest utility of -1, and when it is idle, it has the highest utility of +1.  For a client, when there is work pending, the client is at its lowest utility, and when all the work has been completed, it is at its highest utility.

There is an intrinsic notion of time built into this model.  Imagine, if you will, that a given client goes from its lowest utility to its highest utility as more work is submitted and completed.  This is obviously time dependent.

The same is true for the worker; a given CPU is at its lowest when it is working but as it completes the work given to it, it will move towards a higher utility.
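The state utilities above can be written down directly.  A minimal sketch follows; the specific numeric values (-1, 0, +1) and the linear interpolation between the extremes are illustrative assumptions, not a claim about the exact shape of the utility curve:

```python
# Illustrative utilities for each state (values are assumptions that match
# the -1..+1 range described above).
MASTER_UTILITY = {
    "work_pending":   -1.0,
    "work_submitted":  0.0,
    "work_completed": +1.0,
}

WORKER_UTILITY = {
    "working":        -1.0,
    "work_completed":  0.0,
    "idle":           +1.0,
}

def master_utility(fraction_completed: float) -> float:
    """Utility as a function of workload: rises from -1 (all work pending)
    to +1 (all work completed) as work is finished over time."""
    return -1.0 + 2.0 * fraction_completed

print(master_utility(0.0))  # all work pending -> -1.0
print(master_utility(1.0))  # all work done    -> 1.0
```

The same linear sketch applies to the worker, with the sign of the trend reversed: the worker starts at -1 while working and moves toward +1 as it empties its queue and returns to idle.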

 

Art Sedighi

From the worker (CPU) perspective

In a distributed/Grid system, the worker is the “end” processor that does the actual work.  It is the CPU that calculates the average of two numbers; it is the CPU that executes the business logic, etc.

I call this processor the “end” processor, because there could be many intermediate nodes/processors that route the work to the end node. In a graph-based architecture, the leaf node is the node that does the actual work.  All the other nodes route the work to where it belongs.  In a Grid/HPC environment, the scheduler sits in the middle and routes the jobs to the appropriate end node.  We will ignore this middle portion for the time being and focus on the end nodes.

Anyhow, this end node is the node that does the work. Its tendency, however, is to sit idle and not do anything.  In other words, a processor wants to be idle.  From an entropy perspective, “order” is when a processor is executing proper code, and “disorder” is when the processor is idle.

Do not focus on the fact that “we” as users want the processor to be busy all the time. The tendency of the processor is to sit idle.  The processor aims to finish the work as fast as possible and sit idle.  Another way of looking at this is that a processor, upon receiving a job, is in an ordered state, and its tendency is towards disorder.  When a processor is idle, it is in that state.

We as users, however, want the processor to be utilized 100% of the time.  That’s what we want.  We will get to this conflict of interest in later postings.

From a macro level, it all makes perfect sense now: faster clock speeds, newer technology, etc., allow the processor to reach its preferred state faster:

Crossing the finish line is the only goal that both the master and the worker have in common

Art Sedighi

Perfect Information

In a “perfect game”, there is perfect information.  What this means is that all the players are aware of the current state of the game and are fully aware of their options.  Chess is an example of such a game.

There are very few real-life scenarios that follow this pattern.  More commonly, neither all the information nor the full state of the system is available to all the players/users.

Under the most basic scenario – known as the Normal Game – there are ‘n’ players, each of whom has perfect information, is aware of the pay-off function, and is striving to win.  The pay-off function, however, depends on how the other participants play the game and on their strategies.
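A minimal sketch of what “the pay-off depends on the other participants” means in practice: a two-player game written as a pay-off table.  The strategy names and numbers below are made up for illustration; the point is only that a player’s pay-off for a fixed strategy changes with the opponent’s choice:

```python
# A tiny normal-form game: each entry maps a strategy profile to the
# (payoff_A, payoff_B) pair.  Names and values are illustrative assumptions.
payoff = {
    ("aggressive", "aggressive"): (1, 1),
    ("aggressive", "cautious"):   (4, 0),
    ("cautious",   "aggressive"): (0, 4),
    ("cautious",   "cautious"):   (3, 3),
}

# Player A plays "cautious" in both cases, yet A's payoff differs,
# because it depends on what B does:
print(payoff[("cautious", "cautious")][0])    # 3
print(payoff[("cautious", "aggressive")][0])  # 0
```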

 

Back to utility

Wikipedia defines utility as:

“utility is a measure of satisfaction, referring to the total satisfaction received by a consumer from consuming a good or service” (REF: http://en.wikipedia.org/wiki/Utility)

(Kuhn 1953) further explains that for each player, there is a linear utility function over all the possible outcomes of a given game.  So if a game is depicted using a tree, and each leaf node is one of the possible outcomes of that game, there exists a utility function that is defined over these outcomes. It is important to mention here that the utility function defines and explains all the possible outcomes, not just one.
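As a sketch of that last point: if the game is a tree, the utility function must assign a value to every leaf (every possible outcome), not just the one that actually occurs.  The tree shape and values below are assumptions for illustration:

```python
# A toy game tree: internal nodes map to their children; leaves are outcomes.
game_tree = {
    "root": ["L", "R"],
    "L":    ["LL", "LR"],
    "R":    ["RL", "RR"],
}

# The utility function covers EVERY terminal outcome of the game.
leaf_utility = {"LL": 2.0, "LR": -1.0, "RL": 0.5, "RR": 3.0}

def leaves(tree, node="root"):
    """Collect all leaf nodes (possible outcomes) of the game tree."""
    children = tree.get(node)
    if not children:
        return [node]
    out = []
    for child in children:
        out.extend(leaves(tree, child))
    return out

# Every possible outcome has a defined utility -- not just one of them.
print(sorted(leaves(game_tree)))  # ['LL', 'LR', 'RL', 'RR']
```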

This will be very useful to our research, as we aim to maximize a utility function – which is itself a function of the current state of the system (functional analysis).

 

Game theory vs. graph theory in scheduling

Most scheduling systems, resource managers, HPC schedulers, etc., use some sort of fair-share algorithm, where the portion of the shared resource being scheduled (divided up) depends on a very simple ratio analysis of “the person who ‘deserves’ more, gets more”.  The amount that one deserves usually depends on the number of tasks pending completion.  Essentially, the scheduler takes a snapshot of the queue sizes and determines that, in order to drain the queues faster, the larger queue must get a larger portion of the available resources.

Without loss of generality, assume that we have two users, ‘A’ and ‘B’.  ‘A’ has a queue size of 10 and ‘B’ has a queue size of 90.  ‘A’ ends up with 10% of the resources and ‘B’ ends up with 90% of the resources.  If you have only 8 machines available at a time, there is a good chance that ‘A’ is starved.
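The starvation is easy to see in a few lines.  A naive ratio split that truncates to whole machines (one plausible implementation choice, assumed here for illustration) leaves ‘A’ with nothing:

```python
# Fair-share snapshot from the example above: A has 10 queued tasks,
# B has 90, and only 8 machines are free at a time.
queues = {"A": 10, "B": 90}
machines = 8

total = sum(queues.values())

# Naive proportional split, truncated to whole machines.
allocation = {user: int(machines * size / total) for user, size in queues.items()}

print(allocation)  # {'A': 0, 'B': 7} -- A is starved under the ratio split
```

A gets 10% of 8 machines, i.e. 0.8 of a machine, which truncates to zero; the snapshot-based ratio never gives A a whole machine.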

Scheduling systems follow ‘a’ graph algorithm to make a determination.  Whether implicitly or explicitly, a graph algorithm is somehow used.  Graphs are essentially decision trees, and based on the number of levels and/or the fan-out, they could be either complicated or a simple binary decision tree of “if not this way, then it must be the other way”.  That is why a two-node scheduling system is deterministic, and anything above a two-node system falls under the nondeterministic realm (the exception being a 3-node system).

Graph algorithms do not consider history — how did I get here?  That is the main reason why most fair-share algorithms are not fair.

Games, by contrast, depend on the history of events or strategies.  There is an implicit history built into each strategy that allows one to determine all the previous steps.  As such, one’s current state is not the only state required to make a decision about the next state.

As opposed to using a fair-share algorithm, if schedulers treated each transaction as a game, and each event as a strategy of that game, schedulers would actually be fair.

Art Sedighi

 

Games — all about Utilities

It has been a while…

 

A game’s outcome is measured in its utility; i.e., user 1’s utility vs. user 2’s utility.  Utility is the basis for classifying decision making when it comes to games.  This goes back to the early days of game theory, and the term was put forth (coined, really) by von Neumann and Morgenstern.  “Utility” is a very overused term, but it has a specific meaning in this context.  Others have also contributed to this concept; one can refer to Savage (1954) for a great history of the term.

As with anything else, one must consider the risk vs. reward for a given action.  For a game, this notion is taken one step further, as a given strategy or action may fall under one of the following classifications:

– Certainty – or – Certain outcome: each action is known to invariably lead to a specific and set outcome.  “You know exactly what you will receive if you employ strategy S1.”  This class of decision making is very popular, it turns out; much of formal theory in economics, psychology, and management can be classified here.

– Uncertainty – or – Uncertain outcome: a player’s strategy could lead to one of many outcomes, but the distribution of these outcomes is not known beforehand.  “Strategy S2 could lead to outcomes x, y, and z – but it is not known whether the probability of each is the same.”

– Risk: each action leads to one of many outcomes, *and* the probability of occurrence of each outcome is known.  A coin toss could lead to a reward of $10 if it comes up heads, and $5 if it comes up tails. 
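The “risk” class is the one where we can actually compute an expected value, because the probabilities are known.  A minimal sketch using the coin-toss example above (a fair coin is assumed):

```python
# "Risk": every outcome's probability is known, so the expected value
# of the action is computable.
outcomes = {"heads": 10.0, "tails": 5.0}   # reward in dollars
prob     = {"heads": 0.5,  "tails": 0.5}   # fair coin assumed

expected_value = sum(prob[o] * outcomes[o] for o in outcomes)
print(expected_value)  # 7.5
```

Under “uncertainty”, by contrast, the `prob` table simply does not exist, so no such computation is possible.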

 

Utility relates to the classes of games that fall under uncertainty and risk.  von Neumann and Morgenstern claimed that a person’s affinity towards risk, and their behavior or actions in a given game, relate to the utility of the expected value.  More specifically, if one is able to have (and continue to have) a preference between two outcomes, then one is guided entirely by the utility of the expected outcome.  In other words, utility can be measured when one is “acting in accord with his true taste”.

If the payouts for a game are $5 and $10 – according to the level of risk of each action – one person might be risk averse and continue to “bet” on $5, whereas the other person might be willing to take a risk for a $10 outcome.  Person 1’s action leads us to believe that his utility for $5 is higher than the second person’s utility for $10.  In other words, person 1 might not be financially capable of risking $5, whereas the second person is more willing to give up $5 for the chance of winning $10.
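One way to make person 1’s risk aversion concrete is with a concave utility function.  The square-root utility below is a common textbook choice, an assumption for illustration, not something from the discussion above:

```python
import math

def u(wealth: float) -> float:
    # A concave utility function models risk aversion: each extra dollar
    # adds less utility than the previous one.
    return math.sqrt(wealth)

sure_thing = u(5.0)                     # utility of taking $5 for certain
gamble = 0.5 * u(10.0) + 0.5 * u(0.0)   # 50/50 shot at $10 or nothing

print(sure_thing > gamble)  # True: the risk-averse player prefers the sure $5
```

Even though the gamble’s expected *dollar* value ($5) equals the sure thing, its expected *utility* is lower for a concave u, which is exactly why person 1 keeps betting on the certain $5.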

We will come back to this term over and over again throughout.

Art Sedighi

 

 

 

Game Theory with Graphs

As you can imagine, you can represent pretty much anything that has any type of flow as a graph.  It might not be a nice DAG, but it is a graph that allows different states.

A game can obviously be represented as a graph as well.  There are states, and based on a decision or strategy, one moves from one state to the next.  One thing that is different about a game is that two graphs are NOT the same if the way one arrived at a given node differs between the two graphs.

For example, if it took one game 20 moves to get to a node ‘z’, and another game 3 moves, then these two games are said to be unequal, even though every move from ‘z’ onward is the same for both games.

This is a very important concept of game theory: history matters.  How can a fair-share methodology be accurate if history does not play a role?  How can I make a decision solely based on my current queue size without taking history into account?
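The “20 moves vs. 3 moves to reach ‘z’” example can be sketched by making the history part of the state itself, so that two states at the same node compare as unequal:

```python
from collections import namedtuple

# A game state is the current node PLUS the path taken to reach it.
GameState = namedtuple("GameState", ["node", "history"])

game1 = GameState("z", tuple(f"move{i}" for i in range(20)))  # 20 moves to z
game2 = GameState("z", ("move0", "move1", "move2"))           # 3 moves to z

print(game1.node == game2.node)  # True: both games sit at node 'z'...
print(game1 == game2)            # False: ...but they are unequal games
```

A pure graph algorithm would see only `node` and treat the two as identical; a game-theoretic view keeps `history` and distinguishes them.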

 

Art Sedighi

 

Fair-share

I’ve been thinking a lot about fair-share: is something fair, and how can one make it fair?

Fair-share, especially when it comes to computing, is figured out based on the load on the decision maker.  If I (a router, for example) want to give fair access to a shared network, I look at my incoming load and try to give the most to the person with the largest queue.  Relatively speaking, all the queues get the same (fair-share) access, but not if you are the “little guy” who submits packets at a lower rate than others.

There are other mechanisms that try to take care of the little guy first and then go to the big users. In job scheduling, there is an algorithm for this called Shortest Job First.  You service the little guys and empty those queues first; then you are off to the longer-running jobs.  The concept makes sense, right? You don’t want a short job to be stuck behind a job that runs for days.
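Shortest Job First reduces to a one-line sort once job runtimes are known (the job names and runtimes below are made up for illustration):

```python
# Shortest Job First: run the short jobs first so they are not stuck
# behind a days-long job.  Runtimes are in hours (illustrative values).
jobs = [("render", 72.0), ("avg_two_numbers", 0.1), ("report", 2.0)]

sjf_order = sorted(jobs, key=lambda job: job[1])  # ascending runtime

print([name for name, _ in sjf_order])
# ['avg_two_numbers', 'report', 'render']
```

The catch, of course, is the same one this post is circling: whether that ordering is *fair* to the user who submitted the 72-hour job depends on history, not just on this snapshot.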

So, what is fair?  How does one get one’s fair share of a system?

Art Sedighi

Selfish Computing

There are many papers written on this topic, and there is even a book on a similar topic called Selfish Routing and the Price of Anarchy.

I am not going to cover what the research is working on, but rather a simple scenario that pertains to systems.

Imagine a system that does not provide any feedback to its users or requesters.  In other words, there is no congestion control built in.  What this basically means is that the system is used and utilized on a first-come, first-served (FCFS) basis.  The first user that gets on the system can use up all the resources and not allow anyone else to use the system.

Oddly enough, this scenario is not too far-fetched; consider a computer virus, for example.  If your computer is infected with a virus, it takes over and you are unable to do anything else.

Consider an HPC environment.  A user gets up early in the morning and submits a job that takes over the environment.  If there are no controls in place, no one other than the first user can use the system.  The system becomes “available” when the first user logs off or is finished.

Selfish computing is everywhere.

Art Sedighi

The Prisoner’s Dilemma

This is the typical scenario used to explain, in simple terms, the concept of Game Theory, and it goes something like this:

Two prisoners are on trial for a crime, and each one faces a jail sentence (or not) based on the options given to them:

– confess

– do not confess

Simple, but to the point.

If neither says a word, there is not enough evidence to convict either of the crime, and they each get a sentence of, say, two years.  If one of them confesses, that prisoner gets a reduced sentence of 6 months, while the other gets 5 years.  If they both confess, they each get a slight break and a sentence of 4 years.

The jointly optimal strategy is obviously for both not to say a word.  The selfish strategy is for one to confess and hope that the other one will not do the same.
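Using the sentences from the story above, the whole dilemma fits in one small table (years in jail; lower is better for the prisoner):

```python
# Prisoner's dilemma with the sentences described above.
# Key: (prisoner1, prisoner2) strategies; value: (years1, years2).
sentence = {
    ("silent",  "silent"):  (2.0, 2.0),
    ("confess", "silent"):  (0.5, 5.0),
    ("silent",  "confess"): (5.0, 0.5),
    ("confess", "confess"): (4.0, 4.0),
}

# The jointly optimal profile minimizes total jail time: both stay silent.
best = min(sentence, key=lambda profile: sum(sentence[profile]))
print(best)  # ('silent', 'silent')
```

Yet each prisoner, reasoning selfishly, notices that confessing improves his own sentence whatever the other does (0.5 < 2 and 4 < 5), which drags both toward the worse (confess, confess) outcome.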

This might seem like a very simple example, but it demonstrates the complexities of dealing with a system with multiple users, each aiming for a selfish strategy.

Art Sedighi