Monday, February 14, 2011

Being Rational

Hello again.  Now that finals and break are over I can return to my haphazard updating schedule.  No, I don't know what happened to January either.

Today's post isn't about a particular game, but rather an assumption that is important to understanding game theory.  When a game theoretic model gets described, we assume that the players are rational actors.  However, rationality in this sense might be different from a more conventional understanding of the word.  The most common understanding (or at least the first one that comes to mind) is of a person forming conclusions logically and acting accordingly.  We might also assume that a rational person would make good choices, or at least sensible ones.  Having seen a few people walking outside in the Rochester winter in shorts, it's obvious that this is not how all people behave.  So what's an economist/political scientist to do?

To answer that question, let's take a minor digression into utility functions (my favorite functions!).  A utility function is an assignment of utility values to alternatives.  Please contain your enthusiasm, it gets better.  Anyway, a utility value is just a number, and we know what alternatives are.  If you have to pick between beer, quiche, or a fight, you might assign 1 to beer, 2 to quiche, and 3 to a fight.  That's a utility function.  It doesn't exactly matter what number you assign to what, so long as bigger numbers mean better alternatives.  Neither do the relative magnitudes of utility values matter.  For example, you might assign -1,000,000 to beer, 3.4 to quiche, and 500 to a fight.  This would be no different from the other function.  Technically, utility functions of this form are called ordinal utility functions.  If this seems strange, have patience!  I'll try and explain EVEN MORE.
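If it helps to see this concretely, here's a little Python sketch (my own toy code, nothing standard) showing that those two very different-looking utility functions encode exactly the same ranking:

# Two utility functions over the alternatives from above.
# The numbers differ wildly, but both rank: fight > quiche > beer.
u1 = {"beer": 1, "quiche": 2, "fight": 3}
u2 = {"beer": -1_000_000, "quiche": 3.4, "fight": 500}

def ranking(u):
    # Order the alternatives from most preferred to least preferred.
    return sorted(u, key=u.get, reverse=True)

print(ranking(u1))  # ['fight', 'quiche', 'beer']
print(ranking(u2))  # ['fight', 'quiche', 'beer'] -- the same ordinal ranking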

An ordinal utility function therefore allows us to represent a ranking among alternatives.  In game theory, this is critically important.  In fact, the game theoretic assumption of rationality simply requires that a person be able to rank the alternatives consistently (completely and transitively) and then choose the one they rank highest.  Utility functions, then, are devices that represent those preferences so a game can be made solvable.  So, looking back at the person outside in shorts, it may be that they prefer shorts to pants no matter what.  This could be perfectly rational behavior, theoretically speaking, since we don't make assumptions about the quality of the ranking a person creates.

Maybe this should actually be categorized under introductory material.  But this assumes that I am able to create comprehensible organization schemes, a contradiction.

Wednesday, December 15, 2010

When Bad Things Happen to Good Choices Part 2

Last time on When Bad Things Happen to Good Choices, you and your friends were unable to decide on a restaurant for lunch.  While I was not privy to the events that followed, I assume that some sort of altercation ensued.  Fists were raised and harsh words spoken.  One of you may have pulled a knife, someone else wielded a blunderbuss, and before long, the situation escalated into global thermonuclear war.  How you survived to continue reading my blog is beyond me, but I am always happy to speculate.  Perhaps you foresaw the coming doom and prepared a number of clone bodies to house your intellect in the event of the nuclear apocalypse.  You know, just in case.  Note:  This is all hypothetical, but I will swear to my dying day that it is substantially more interesting than the game of rock, paper, scissors you actually resorted to.  (Further note:  Arms races are interesting games.)

At any rate, your paranoia appears to have paid off and we can return to the matter at hand.

By now it should be clear that if we want to avoid CATASTROPHIC SOCIAL PREFERENCE CYCLES, we have a bit of work to do.  We could add an extra person, whose preferences would break the cycle, or we could invent ever more complicated choice rules.  Unless you are entirely repulsed by all other possible people, I recommend the former.

Up until now, I've been assuming that your preferences over restaurants are based on some arbitrary system such as "how much I like x type of food".  This might not be the case!  Perhaps you prefer restaurants based on their proximity to your home, office, or bunker.  If that is so, we can represent the restaurants as points on a line, or points in some space.  For the moment, let's consider points on a line.  Suppose you and your friends live in a one street town (I have seen many of these, skeptics).  The restaurants are arrayed as follows:

          a          b          c          

Now if you recall, 1 preferred a to b to c, 2 preferred b to c to a, and 3 preferred c to a to b.  This was fine when we didn't give your preferences much structure.  But if all you really care about is distance (and who wants to walk very far in the Rochester winter anyway), we can see that 3 is up to something fishy.  In particular, there is no point on that line that 3 could occupy such that c is closer than a and a is closer than b.  Shame on you, 3.  Preferences consistent with this interpretation (i.e. distance from someone's "ideal point" is all that matters) are called single-peaked.  As it happens, unidimensional preference orders like this are much more resistant to cycles: with an odd number of voters whose preferences are all single-peaked, majority rule can't cycle, and the ideal point of the median voter can't be beaten.
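If you'd like to check my accusation against 3, here's a rough Python sketch (my own toy code, with made-up coordinates for a, b, and c) that hunts for an ideal point consistent with each person's ranking:

# Made-up positions for the restaurants on the one-street town's line.
positions = {"a": 1.0, "b": 2.0, "c": 3.0}

def has_consistent_ideal_point(order, positions, step=0.01):
    # True if some ideal point x exists such that ranking the restaurants
    # by distance from x reproduces this preference order exactly.
    lo = min(positions.values()) - 5
    hi = max(positions.values()) + 5
    x = lo
    while x <= hi:
        by_distance = sorted(positions, key=lambda r: abs(positions[r] - x))
        if by_distance == list(order):
            return True
        x += step
    return False

print(has_consistent_ideal_point(["a", "b", "c"], positions))  # True  (1 checks out)
print(has_consistent_ideal_point(["b", "c", "a"], positions))  # True  (so does 2)
print(has_consistent_ideal_point(["c", "a", "b"], positions))  # False (3 is up to something)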

As an aside, political views and voting are often represented along a unidimensional spectrum where each end represents an ideological extreme.  This is quite helpful to people who want to model voting behavior, as it lets us make some useful assumptions about how voters will act.  I'm always happy to talk about political theory in detail (that is, after all, what I'm trained in), but for now I'll save those observations for another day. (Unless somebody asks.  My threshold value is quite low, so it wouldn't take more than one request.  Beware.)  

ANYWAY.  As you may have guessed, it isn't always sensible to collapse choices to one dimension.  When we get into choice problems in multiple dimensions, however, some undesirable results rear their ugly heads.  I won't quote Plott's or McKelvey's theorems, but suffice it to say we would be in for a meeting with our old nemesis, cycles.

On that note, we'll return next time to our regularly scheduled game theory programming.

Sunday, December 12, 2010

When Bad Things Happen to Good Choices Part 1

At the present moment, I'm taking a break from my social choice theory homework to bring you a blog post.  I have justified this discontinuity in my studies by constructing the following quasi-ramble on the subject of social choice theory.  As such, I have helpfully filed this post under "Not Game Theory" for those who might abjure such digressions.

I am not, by nature, particularly socially inclined.  Some of this, however, is not my fault!  Deciding what a group ought to do, as it turns out, is very difficult.  If you are not yet convinced, allow me to present the following seemingly benign situation.

You and two of your friends wish to eat lunch together at a restaurant.  While there are many restaurants available, you have narrowed your field down to three, which happen to be named a, b, and c.  (Incidentally, you and your friends are named 1, 2, and 3.  These are fine names with long and glorious histories.)  Unfortunately, you are not members of the hive mind, so each of you has a different opinion of which restaurant to patronize.  Here are your preferences (higher up is better).



 1    2    3
 a    b    c
 b    c    a
 c    a    b

Now, there are quite a few ways that the three of you could potentially decide.  You, 1, would probably like it if you could have your way.  If you were a dictator, that is exactly how it would be.  When dictatorship is the method of social choice, someone is the dictator and their individual preferences become the social preferences.  This is no time for delusions of grandeur however, as your friends are willing to resist your tyranny by any means necessary and you cannot overpower them both.

Far more reasonable, or at least less likely to result in a violent uprising, is majority rule.  This should be quite familiar.  Something is majority preferred to something else if more than half of the people in the group prefer it to that other thing.  Here, a is majority preferred to b, since both you and 3 rank a above b.  An alternative is optimal under majority rule if nothing else is majority preferred to it.  This is pretty simple.  So simple in fact that you and your friends decide to adopt majority rule to determine where you go for lunch.  Unfortunately, you are now paralyzed by indecision.  Oh no!

Your indecision happens to be the product of a phenomenon called Condorcet's Paradox.  Notice that a is majority preferred to b (I'll write this statement in the form aP*b from now on), bP*c, and cP*a.  Nothing is optimal, and your social preference forms a cycle.  This is not a good thing.  
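If you'd rather make a computer confirm the stalemate, here's a minimal Python sketch (my own encoding of the table above) that works out the pairwise majority relation:

from itertools import permutations

# Everyone's ranking, best restaurant first (same table as above).
profile = {
    1: ["a", "b", "c"],
    2: ["b", "c", "a"],
    3: ["c", "a", "b"],
}

def majority_prefers(x, y, profile):
    # True if more than half of the group ranks x above y.
    votes = sum(1 for ranking in profile.values() if ranking.index(x) < ranking.index(y))
    return votes > len(profile) / 2

for x, y in permutations("abc", 2):
    if majority_prefers(x, y, profile):
        print(x, "P*", y)
# Prints: a P* b, b P* c, c P* a -- a cycle, so nothing is optimal.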

However, it is not yet time to resort to dictatorship.  Other options remain.  

Another common method is plurality rule.  Plurality rule counts the number of times an alternative appears at the top of individuals' lists, and ranks alternatives by their resulting score.  All three restaurants appear at the top once.  You are now hungrier, but no closer to a solution.
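The plurality count is just as easy to sketch (again, my own toy code, reusing the same profile):

from collections import Counter

# Same profile as before: each person's ranking, best first.
profile = {
    1: ["a", "b", "c"],
    2: ["b", "c", "a"],
    3: ["c", "a", "b"],
}

# Score each restaurant by how many lists it tops.
scores = Counter(ranking[0] for ranking in profile.values())
print(scores)  # Counter({'a': 1, 'b': 1, 'c': 1}) -- a three-way tie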

Unfortunately for you, your particular profile of preferences makes an effective social choice quite difficult.  Given enough time, it would be possible to formulate an effective strategy, but lunch hour is only so long.  Under the circumstances, perhaps it would be better to solicit somebody else's advice (I am always more than willing to offer an opinion).  

We'll look at some less indecisive situations in Part 2.  For anybody who still remains unconvinced that group decisions are difficult, I might suggest that you have thus far been quite lucky, or are a dictator.  Either way, you read the rest of my post.  Mission accomplished.

Thursday, December 2, 2010

Trust Falls

I started reading Cowbirds in Love at the behest of one of my friends/nemeses.  When I arrived at this comic, I was largely nonplussed -- until I read the alt text, that is.  I hope I get extra credit for what follows.


1 is trustworthy
                    P2
              Fall      Don't
P1  Catch     1,1       0,0
    Don't     -1,-1     0,0

1 is untrustworthy
                    P2
              Fall      Don't
P1  Catch     -1,1      0,0
    Don't     1,-1      0,0

I would like to note that the utility values in this game are derived from the expressions on the stick figures' faces.

This time, Player 1 (the one catching) might be one of two sorts of people:  Trustworthy or untrustworthy.  Player 2 (the one falling) would like to fall and be caught, because this is the objective of the game.  She does not know which type Player 1 is, but she thinks there's a p chance that Player 1 is trustworthy.  She also knows that trustworthy people always catch, and untrustworthy people never catch (perhaps untrustworthy people aren't entirely untrustworthy after all).  Nobody gets anything if Player 2 doesn't fall.  It is called "trust falls" after all.

Through the power of algebra, I present the following utility comparisons.
Player 2 should fall if: p + (1-p)(-1) > 0, or if p > 1/2
Abracadabra, or something like that.

So if Player 2 thinks Player 1 is very likely to not be a jerk, she should fall.  Otherwise, she should glare at him sternly.
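For anyone who wants the arithmetic spelled out, here's a small Python sketch (my own encoding of the payoffs above, with p as the lone input):

def expected_utility_of_falling(p):
    # Player 2's expected payoff from falling, where p is the chance
    # that Player 1 is trustworthy (and therefore catches).
    caught = 1    # payoff to Player 2 when a trustworthy Player 1 catches her
    dropped = -1  # payoff to Player 2 when an untrustworthy Player 1 doesn't
    return p * caught + (1 - p) * dropped

def should_fall(p):
    # Not falling is worth 0 no matter what, so fall only if falling beats that.
    return expected_utility_of_falling(p) > 0

print(should_fall(0.6))  # True  -- p > 1/2, so fall
print(should_fall(0.4))  # False -- glare sternly instead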

There are actually a few different ways to approach this problem as a game representation.  If anybody's interested, I'll spend the next post discussing those.

Monday, November 29, 2010

Walking is Difficult

Hello everybody!

For this very first post, I thought I'd provide an example of a situation that might warrant some game theoretic analysis.  I often walk places, which is a thing that quite a few people seem to do.  This means that sometimes other people end up walking straight toward me.  I don't want to bump into them, and they don't want to bump into me.  The following game is an example of that situation.

                    P2
              Turn      Straight
P1  Turn      1,1       1,2
    Straight  2,1       0,0


Here it doesn't really matter whether I'm Player 1 or Player 2, as the two players' preferences are symmetric.  Neither person knows what the other will choose, but each does know which outcomes they prefer.

When we look at the pure strategy equilibria of this game, they occur where the players play different strategies.  Put another way, if you pick Turn, I should pick Straight.  If you pick Straight, I should pick Turn.  To anybody who's faced this situation, this result is fairly obvious.
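If you'd like to verify this by brute force, here's a short Python sketch (my own encoding of the payoff table above) that checks every pair of moves for profitable unilateral deviations:

# Payoffs as (Player 1, Player 2), indexed by (P1's move, P2's move).
payoffs = {
    ("Turn", "Turn"): (1, 1),
    ("Turn", "Straight"): (1, 2),
    ("Straight", "Turn"): (2, 1),
    ("Straight", "Straight"): (0, 0),
}
moves = ["Turn", "Straight"]

def is_equilibrium(m1, m2):
    # Neither player can do strictly better by unilaterally switching moves.
    u1, u2 = payoffs[(m1, m2)]
    best1 = all(payoffs[(alt, m2)][0] <= u1 for alt in moves)
    best2 = all(payoffs[(m1, alt)][1] <= u2 for alt in moves)
    return best1 and best2

print([pair for pair in payoffs if is_equilibrium(*pair)])
# [('Turn', 'Straight'), ('Straight', 'Turn')] -- the two pure strategy equilibria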

However, determining who ought to turn is a rather more difficult endeavor.  Without knowing what you're inclined to do, I might decide to go straight, since I know that you know that turning is better than a collision.  If we both think this, we'll shortly perform a rather awkward sort of dance.

You might be inclined to ask:  Michael, even if I know this, what should I do when I find myself in this situation?

Clearly the best thing to do is for you to always turn.  That way, I'll never have to worry about running into anyone.