
An investigation of strategic decision making in Swedish and German companies based on Game Theory

©1998, diploma thesis, 161 pages

Summary

Abstract:
Game theory was established by the mathematician John von Neumann (1903 to 1957) and the economist Oskar Morgenstern (1902 to 1977), who in 1944 published a work very well known among game theorists, called Theory of Games and Economic Behavior. However, in his book Spieltheorie und ökonomische (Bei)spiele, Werner Güth does not regard game theory as an exclusively economic discipline, although fundamental concepts of game theory have been inspired by economic questions and have been developed by economists.
Game theory has numerous applications in the areas of economic theory, operations research, statistical decision theory, marketing, political and military science, insurance mathematics, sociology and psychology.
The aim of the dissertation is to give a general overview of game theory and, in particular, to answer the following questions by analysing the Swedish and German replies to the questionnaire:
1. Do strategic decision-makers of large companies know about game theory, and do they use it as a strategic tool?
2. What percentage of managers are able to give correct answers when they are confronted with certain game situations?
3. Are there any links between the characteristics of the managers and their ability to give correct answers to the game situations?
4. Is it possible to find any differences between German and Swedish managers regarding 1, 2 and 3?
The dissertation does not cover all parts of game theory; only those aspects the authors consider most important in connection with economics will be discussed.

Table of Contents:
1. INTRODUCTION
2. THEORETICAL FRAMEWORK
2.1 FUNDAMENTAL DEFINITIONS
2.2 ECONOMIC RELEVANCE
2.3 FORMAL REPRESENTATION OF GAMES
2.4 SOLUTION CONCEPTS
2.5 STRATEGIC MOVES
2.6 HISTORICAL OVERVIEW OF GAME THEORY
3. THE PRISONERS' DILEMMA (PD)
3.1 THE STORY
3.2 COOPERATION
3.3 THE REPEATED PD
4. EXAMPLES FOR INTERESTING ECONOMIC GAMES
4.1 MARKET ENTRY GAME
4.2 COVER STORY WAR
4.3 THE OPEC GAME
4.4 CRAZY EDDIE
4.5 FOOTBALL LEAGUE
4.6 TECHNOLOGY RACE
5. THE QUESTIONNAIRE
5.1 THE INTERVIEWED PERSONS
5.2 EXPLANATION OF THE QUESTIONNAIRE
6. EVALUATION OF THE QUESTIONNAIRE
6.1 THE EVALUATED COUNTRIES
6.2 THE EVALUATION OF THE QUESTIONNAIRE
7. REVIEW

Excerpt

Table of contents

1 Introduction

2 Theoretical framework
2.1 Fundamental definitions
2.1.1 What is a game?
2.1.2 What is a strategy?
2.1.3 What are strategic decisions?
2.1.4 What is Game Theory?
2.2 Economic relevance
2.3 Formal representation of games
2.3.1 Extensive form/strategic form
2.3.1.1 Extensive form (game tree)
2.3.1.2 Strategic form (or normal form)
2.3.2 Perfect/Imperfect information
2.3.3 Games with complete and incomplete information
2.3.4 Cooperative/Non-cooperative Game Theory
2.3.5 Pure strategy/mixed strategy
2.3.6 Repeated Games
2.4 Solution concepts
2.4.1 Maximin-concept
2.4.2 Dominant strategy equilibrium concept
2.4.3 Iterated dominance equilibrium concept
2.4.4 Nash equilibrium
2.4.5 (Subgame) Perfect equilibrium
2.4.6 Conclusion
2.5 Strategic moves
2.6 Historical overview of Game Theory

3 The Prisoner’s Dilemma (PD)
3.1 The story
3.2 Cooperation
3.3 The repeated PD

4 Examples for interesting economic games
4.1 Market entry game
4.2 Cover story war
4.2.1 Version “two dominant strategies from the beginning“
4.2.2 Version “only one dominant strategy from the beginning“
4.3 The OPEC game
4.4 Crazy Eddie
4.5 Football League
4.6 Technology race

5 The Questionnaire
5.1 The interviewed persons
5.2 Explanation of the questionnaire
5.2.1 Part A: Explanation of the opening part
5.2.2 Part B: Explanation of the game situations
5.2.2.1 Game situation 1
5.2.2.1.1 Variant “sequential“
5.2.2.1.2 Variant “simultaneous“
5.2.2.2 Game situation 2
5.2.2.2.1 Variant “PD played once“
Strategies
Payoffs
5.2.2.2.2 Variant “PD played once with cooperation“
5.2.2.2.3 Variant “PD played infinitely often“
5.2.3 Part C: Explanation of the final part

6 Evaluation of the questionnaire
6.1 The evaluated countries
6.1.1 Sweden
6.1.2 Germany
6.1.3 Formal comparison
6.2 The evaluation of the questionnaire
6.2.1 Evaluation of the opening part (Part A)
6.2.1.1 Knowledge on game theory
6.2.1.2 Sources of knowledge on GT
6.2.1.3 Terms linked with game theory
6.2.1.4 Use of game theory
6.2.1.5 Use of alternative strategic instruments
6.2.2 Evaluation of the game situations (Part B)
6.2.2.1 Performance regarding the totality of questions
6.2.2.2 Performance regarding the five different questions
6.2.2.3 Performance regarding the individual number of correct answers
6.2.3 Evaluation of the final part (Part C)
6.2.3.1 Sex, Age and Education
6.2.3.2 Interviewed departments and positions
6.2.3.3 Risk attitude and time taken
6.2.4 Links between characteristics and performance
6.2.4.1 Links between previous knowledge and performance
6.2.4.1.1 The totality of questions
6.2.4.1.2 The five different questions
6.2.4.1.3 The individual number of correct answers
6.2.4.2 Links between source of knowledge and performance
6.2.4.3 Links between connected terms and performance
6.2.4.4 Links between use of GT and performance
6.2.4.5 Links between use of other strategic instruments and performance
6.2.4.6 Links between sex/age and performance
6.2.4.7 Links between education and performance
6.2.4.8 Links between department/position and performance
6.2.4.9 Links between time taken and performance
6.2.5 Excursion: Comparison of risk attitudes
6.2.6 Significant matters
6.2.6.1 Managers with excellent performance
6.2.6.2 Managers with unsatisfactory performance
6.2.6.3 Differentiation of industries
6.2.7 Final remarks

7 Review

Literature

Appendices (A – O)

List of illustrations

Illustration 1: Game 1 - Extensive form of a sequential game

Illustration 2: Game 2 - Extensive form of a simultaneous game

Illustration 3: Game 3 - “Matching pennies”

Illustration 4: Game 4 - Game with outside options

Illustration 5: Game 1 – Extensive form

Illustration 6: Game 1 – Folded up game tree

Illustration 7: The options for strategic moves

Illustration 8: Profit amounts for Newcleaners for every possible outcome

Illustration 9: Profit amounts for Newcleaners and Fastcleaners

Illustration 10: Extensive form of “Technology Race”

Illustration 11: Extensive form of Variant “sequential“ of game situation 1

Illustration 12: Folded up game tree of variant “sequential” of game situation 1

Illustration 13: Extensive form of variant “simultaneous“ of game situation 1

Illustration 14: Extensive form of the economical PD

Illustration 15: Knowledge on Game Theory

Illustration 16: Source of knowledge on GT

Illustration 17: Terms linked with GT

Illustration 18: Use of Game Theory

Illustration 19: Use of alternative strategic instruments

Illustration 20: Performance regarding totality of questions (in %)

Illustration 21: Performance regarding the five different questions

Illustration 22: Performance regarding the individual number of correct answers

Illustration 23: Age of the interviewees

Illustration 24: Education of the interviewees

Illustration 25: Risk attitude of the interviewees

Illustration 26: Time taken for filling in the questionnaire

Illustration 27: Links between previous knowledge and performance (totality of questions)

Illustration 28: Links between previous knowledge and performance (game situation 1)

Illustration 29: Links between previous knowledge and performance (game situation 2)

Illustration 30: Links between previous knowledge on GT and performance of the interviewed managers

Illustration 31: Links between source of knowledge on GT and performance

Illustration 32: Links between age and performance

Illustration 33: Links between education and performance

Illustration 34: Links between time taken and performance

Illustration 35: Links between the risk attitude resulting from the explanation of the answer to question B 1.2 and the risk attitude resulting from the preferred investment (question C 6)

List of tables

Table 1: Game 1 - Strategic form

Table 2: Game 2 - Strategic form

Table 3: Game 5 - Maximin-solution for general games

Table 4: Game 6 - Dominant strategy equilibrium

Table 5: Game 7 - Dominant strategy equilibrium

Table 6: Game 8 – four dominant strategy equilibria

Table 7: Game 9 - Dominant strategy equilibrium

Table 8: Game 10a - Game without dominant strategy

Table 9: Game 10b - iterated dominance equilibrium

Table 10: Game 11 - Iterated dominance equilibrium

Table 11: Game 5 – Nash equilibrium

Table 12: Game 2 – Strategic form

Table 13: Game 12 – “Battle of the Sexes”

Table 14: Game 1 - Strategic form

Table 15: Game 1 - Strategic form

Table 16: Strategic form of the PD

Table 17 : Strategic form of a general PD type game

Table 18: Payoffs (market shares) for Time and Newsweek – Version 1

Table 19: Payoffs (market shares) for Time and Newsweek – Version 2

Table 20: The OPEC game

Table 21: Viewer figures for the USFL and the NFL

Table 22: Payoffs in the technology race for Japan and the USA

Table 23: Strategic form of game situation 1, variant “sequential”

Table 24: Strategic form of game situation 1, variant “simultaneous”

Table 25: PD payoff table given to the interviewees

Table 26: Calculation of the PD figures given to the interviewees

Table 27: Strategic form of the economical PD

Table 28: Possible kind of investments given to the interviewees and the respective meaning

Table 29: Comparison Sweden – Germany

Table 30: Overview on solutions of the game situations

Table 31: Possible kind of investments given to the interviewees and the respective meaning and risk factor

Table 32: Meaning of strong and weak refutation and confirmation

Table 33: Correct answers for question B 1.2 split up into risk attitudes

Table 34: Comparison of industries

Table 35: Investigated hypotheses and their results

List of abbreviations

illustration not visible in this excerpt

1 Introduction

Game theory was established by the mathematician John von Neumann (1903 to 1957) and the economist Oskar Morgenstern (1902 to 1977), who in 1944 published a work very well known among game theorists, called Theory of Games and Economic Behavior. However, in his book Spieltheorie und ökonomische (Bei)spiele, Werner Güth does not regard game theory as an exclusively economic discipline, although fundamental concepts of game theory have been inspired by economic questions and have been developed by economists. (Werner Güth, 1992, page 1 - he refers to Harsanyi 1967/68 and Selten 1965)

Game theory has numerous applications in the areas of economic theory, operations research, statistical decision theory, marketing, political and military science, insurance mathematics, sociology and psychology. (Bühlmann/Loeffel/Nievergelt, 1975, page 155)

The aim of the dissertation is to give a general overview of game theory and, in particular, to answer the following questions by analysing the Swedish and German replies to the questionnaire:

1. Do strategic decision-makers of large companies know about game theory, and do they use it as a strategic tool?
2. What percentage of managers are able to give correct answers when they are confronted with certain game situations?
3. Are there any links between the characteristics of the managers and their ability to give correct answers to the game situations?
4. Is it possible to find any differences between German and Swedish managers regarding 1, 2 and 3?

The dissertation does not cover all parts of game theory; only those aspects the authors consider most important in connection with economics will be discussed.

2 Theoretical framework

2.1 Fundamental definitions

2.1.1 What is a game?

In game theory, a game is understood as a set of rules describing the permitted actions of the parties involved in a competition. These rules have to lay down precisely what each player is allowed to do in every possible situation, when the game is finished, and who has won (or lost) which amount at that stage (translated faithfully from Thomas Fernandez, 1997, internet).

2.1.2 What is a strategy?

A strategy is a complete plan for how to play a game. “Complete” means in this context that the plan must specify what the player would do in any possible decision situation. Consequently, a strategy can be viewed as an instruction to a referee on how the player will move when play reaches a node at which he/she has to move (Eichberger, 1993, page 17).

2.1.3 What are strategic decisions?

Although there are many books discussing strategic decisions alone, the authors would like to keep the definitions as simple as possible. Therefore the following simplified definition by Alan Gilpin is used. According to Gilpin, strategic decisions are “policy forming, goal-setting activities, which provide structure and direction to an organisation” (Gilpin, 1977, page 212).

2.1.4 What is Game Theory?

Game theory analyses strategic decision situations, where

a) the result of the game depends on the decisions of more than one decision-maker, so that an individual is not able to determine the result of the game independently of the choices of the others;
b) every decision-maker is aware of this interdependence;
c) every decision-maker presumes that all the others are aware of the interdependence, too;
d) everybody takes a), b) and c) into consideration when making decisions.
Because of these four properties, conflicts of interest and/or coordination problems are the characteristics of strategic decision situations (translated faithfully from Holler/Illing, 1993, page 1).

In the literature, game theory is often separated from decision theory. In contrast to game theory, decision theory has only one decision-maker. In other words, decision theory can be understood as a special case of game theory for one-person games (translated faithfully from Markus Wendel, 1996, page 6).

2.2 Economic relevance

The economic relevance should be illustrated by some extracts of books and articles about game theory, which have been published during the last couple of years:

Many economic questions have the characteristics mentioned under 2.1.4. Game theory offers an abstract, formal tool which can be used for analysing these questions. Conversely, the formulation of economic problems over the last couple of years has made a substantial contribution to the further development and refinement of game-theoretical concepts (translated faithfully from Holler/Illing, 1993, page 1).

“There are at least 2 reasons for the growing importance of game theory in economics: Game theory provides a unifying framework for economic analysis in many fields, and it structures the process of modelling economic behaviour. ... Game theory enables economists to approach problems that thirty years ago seemed to be beyond formal modelling. Strategies provide game theory with a concept for modelling behaviour that takes informational as well as dynamic characteristics of economic situations into account. In this sense, game theory is more than an extension of existing economic thinking because it offers guidelines for the modelling of economic problems.” (Eichberger, 1993, preface, page xi and xii)

The supposition that political and economic behaviour is driven by motives similar to those at work when playing poker or gambling in a casino may seem displeasing. But 50 years of theoretical work have emphasised not only the seriousness but also the explanatory power of game theory for cooperative and competitive behaviour. In addition, game theory has pointed out the extent to which conventional concepts fail to capture important aspects of strategic decisions. (translated faithfully from “Spektrum der Wissenschaft”, December 1994, page 25)

According to Bewley, 1985, and Tirole, 1988, game theory is very common within microeconomics (translated faithfully from Werner Güth, 1992, page 2). Because of the normative orientation of traditional microeconomics, a modern introduction to microeconomics requires game-theoretical basics, provided that it is not confined to special questions from the beginning and does not evade the central problem of strategic interaction on markets (same book, same page).

“The key link between neoclassical economics and game theory was and is rationality. Neoclassical economics is based on the assumption that human beings are absolutely rational in their economic choices. Specifically, the assumption is that each person maximises her or his rewards (profits, incomes, or subjective benefits) in the circumstances that she or he faces. Firstly, it narrows the range of possibilities somewhat. In other words, absolutely rational behaviour is more predictable than irrational behaviour. Secondly, it provides a criterion for evaluation of the efficiency of an economic system.” (Dr. Roger A. McCain, 1997, internet)

2.3 Formal representation of games

To give a general overview of game theory and to make the contents of the questionnaire understandable, this chapter introduces some fundamental features of game theory.

2.3.1 Extensive form/strategic form

There are two ways to represent a game: the extensive form and the strategic form. The following sections describe both forms in more detail.

2.3.1.1 Extensive form (game tree)

“The extensive form is the most explicit description of a game. It notes the sequence of moves, all possible states of information, and the choices at different stages for all players of the game.” (Eichberger, 1993, page 2)

A game tree represents the extensive form. The figures beneath the game tree represent the payoffs. Both players try to maximise their payoff: every player tries to reach the point where his/her payoff has the highest value (it makes no difference to a player whether the other players' payoffs are just as high or even higher). This coincides with the aim of companies to maximise their profit. For example, company A prefers a payoff of 3 Mio $ for itself and a payoff of 3 Mio $ for its competitor to a payoff of 2 Mio $ for itself and 1 Mio $ for its competitor, assuming the game ends at that stage.[1]

illustration not visible in this excerpt

Illustration 1: Game 1 - Extensive form of a sequential game (Tirole, 1995, page 946)

Player 1 has to move first (L for “Left” or R for “Right”). Consequently, player 2 is able to observe the move of player 1 before he/she has to decide between l (“left”) and r (“right”). These games are often called “sequential games“. (For details see Holler/Illing, 1993, page 14 and Tirole, 1995, page 946/947.) Chess is an example of this kind of game.

In contrast to game 1, game 2 describes a situation in which the players have to decide simultaneously. This fact is represented by the oval around the two nodes of player 2. Player 2 is no longer able to wait for the decision of player 1; he/she has only two possibilities, either to go l or to go r. These games are often called “one-shot games“ or “simultaneous games”.

illustration not visible in this excerpt

Illustration 2: Game 2 - Extensive form of a simultaneous game (Tirole, 1995, page 947)

The very well-known game “Matching pennies” (game 3) is another example of this kind of game: two players, player 1 and player 2, each put a coin on the table but keep their moves hidden from each other. Player 1 puts his/her coin down first, then player 2 does the same. Finally, they reveal to each other the sides of the coins lying face up on the table. If the sides match, player 1 wins a dollar from player 2; if the sides do not match, player 2 wins a dollar.

illustration not visible in this excerpt

Illustration 3: Game 3 - “Matching pennies”

Note: H (heads) and T (tails) are the choices of player 1, h and t are the choices of player 2.

In contrast to games 1 and 2, game 3 represents a two-player zero-sum game, i.e. what one player gains the other loses. This type of game will not be analysed in more detail, because economic situations usually do not have zero-sum character. “Though economic problems usually are not described by a zero-sum game, these kinds of games were very popular among game theorists, probably because they can easily be analysed mathematically.” (Holler/Illing, 1993, page 59)

2.3.1.2 Strategic form (or normal form)

The strategic form (or normal form) is a more abstract representation of a game. Here, one notes all possible strategies of each agent together with the payoffs that result from the agents' strategy choices. The normal form concentrates on the strategic aspects of a game, but neglects its dynamic structure.

illustration not visible in this excerpt

Table 1: Game 1 - Strategic form (according to Tirole, 1995, page 949)

Table 1 describes the normal form of game 1. Player 1 has two strategies, L or R. Player 2 has four strategies, each written as a pair of letters: the first letter describes the reaction to player 1 playing L, the second letter the reaction to player 1 playing R. For example, if player 2 plays the strategy (l, l), he/she reacts with l to L and with l to R; if he/she plays (r, r), he/she reacts with r to L and with r to R, and so on.
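As a small illustration of how a strategic form is built from an extensive form, the following Python sketch stores a two-player game as a payoff dictionary and derives player 2's four contingent strategies for a sequential game of the kind described above. The payoff numbers are hypothetical placeholders, not the values of Tirole's game 1, which are not visible in this excerpt.

```python
from itertools import product

# Player 1's moves and player 2's possible reactions (as in game 1 above).
P1_MOVES = ["L", "R"]
P2_MOVES = ["l", "r"]

# In the strategic form, a strategy of player 2 is a complete plan:
# one reaction to L and one reaction to R, written as a pair of letters.
p2_strategies = list(product(P2_MOVES, repeat=2))  # [('l','l'), ('l','r'), ('r','l'), ('r','r')]

# Hypothetical payoffs of the underlying extensive form:
# payoffs[(move of player 1, reaction of player 2)] = (payoff of player 1, payoff of player 2)
payoffs = {
    ("L", "l"): (2, 0),
    ("L", "r"): (1, -1),
    ("R", "l"): (0, 1),
    ("R", "r"): (3, 2),
}

def strategic_form_cell(move1, strat2):
    """Outcome if player 1 plays move1 and player 2 follows the complete plan strat2."""
    reaction = strat2[0] if move1 == "L" else strat2[1]
    return payoffs[(move1, reaction)]

# Print the strategic form: rows = player 1's moves, columns = player 2's plans.
for move1 in P1_MOVES:
    print(move1, [strategic_form_cell(move1, s2) for s2 in p2_strategies])
```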

As explained above, in game 2 player 2 has only two strategies. He/she can choose to go left or to go right, because he/she is not able to observe the other player's move but has to decide simultaneously (table 2).

illustration not visible in this excerpt

Table 2: Game 2 - Strategic form (according to Tirole, 1995, page 949)

2.3.2 Perfect/Imperfect information

“Games in which each player knows exactly what has happened in previous moves are called games with perfect information. Games in which there is some uncertainty about previous moves are called games with imperfect information.” (Eichberger, 1993, page 16). Consequently, game 1 is a game with perfect information, while games 2 and 3 represent games with imperfect information.

This distinction is important for the solution of a game: games with perfect information need not have the same solution as games with imperfect information. The questionnaire will investigate both types of games. Chapter 5.2.2.1 will also discuss the different solutions of these two kinds of games.

2.3.3 Games with complete and incomplete information

“A game is a game with complete information if each element of the game is common knowledge. Otherwise it is a game with incomplete information.” (Eichberger, 1993, page 17) “The concept of common knowledge states the assumption that, to any degree of mutual understanding, all players of a game are completely informed about some aspects of the game.” (Eichberger, 1993, page 16)

Werner Güth defines incomplete information as a situation in which “the rules of the game are not generally known” (faithfully translated from Werner Güth, 1992, page 129).

Consequently, a game is a game with incomplete information if not every player knows everything about the game. Game 4 describes a game in which player 2 has an outside option and player 1 does not know whether player 2 has this outside option or not. In this game player 2 is able to observe the move of player 1.

illustration not visible in this excerpt

Illustration 4: Game 4 - Game with outside options (according to Güth, 1992, page 130)

The parameter restrictions of this game are as follows: 1 > x > c > y > 0.

The options r2 and r4 of player 2 represent his/her outside options. The incomplete information about the rules of the game consists in the fact that player 1 does not know exactly whether player 2 really has these two outside options, and player 1 chooses his/her strategy depending on this. If player 1 supposes that there is no outside option for player 2, he/she will choose R, because then player 2 only has the choice between y (l) and 0 (r3) and will certainly choose the y-alternative (l); in this case player 1 gets 1-y. If, on the other hand, player 1 supposes that player 2 does have the outside options, he/she will choose L: had he/she chosen R, player 2 would have taken the outside option r4, so that player 1 would get 0 instead of the 1-x obtained by choosing L.

Further examples for games with incomplete information are games, in which the number of players or the payoffs of one or more players are not common knowledge.

This treatise will only investigate games with complete information.

2.3.4 Cooperative/Non-cooperative Game Theory

“A game is cooperative, if commitments - agreements, promises and threats - are fully binding and enforceable. It is non-cooperative if commitments are not enforceable.” (Paul Walker, 1995, internet - he refers to Harsanyi, 1966)

The differentiation between cooperative and non-cooperative game theory sometimes causes confusion, as there are games, like the Prisoner's Dilemma (chapter 3), which allow cooperation. These games are nevertheless not cooperative games, as there is no institution enforcing commitments.

The definition of Holler/Illing is a little clearer: if the players are able to make binding agreements, the game is called a cooperative game. This presupposes not only that communication is possible, but also that an agreement can be enforced from outside (e.g. by a third party) (faithfully translated from Holler/Illing, 1993, page 6).

Within the bounds of the questionnaire, cooperative games will not be investigated in more detail. The authors are of the opinion that cooperative games, because of the outside institution enforcing agreements, are not very realistic in practice. However, a game which allows communication, namely the Prisoner's Dilemma, is part of the questionnaire.

2.3.5 Pure strategy/mixed strategy

“Pure strategies give a complete plan of action for the game in which a player chooses an action at every stage the player has to move.” (Eichberger, 1993, page 36)

“Mixed strategies are random choices of pure strategies where the player controls the randomisation.” (Eichberger, 1993, page 36)

Take game 1 as an example: player 1 plays a pure strategy if he/she decides either to play L or to play R. He/she plays a mixed strategy if he/she plays L with a certain probability x and R with probability 1 - x, where x has to be in the interval [0, 1]. Consequently, a pure strategy is a special case of a mixed strategy with x = 0 or x = 1 (faithfully translated from Tirole, 1995, page 947, 948).
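A mixed strategy of this kind can be illustrated in a few lines of Python: with probability x the pure strategy L is chosen, otherwise R. The probability value and the fixed random seed are arbitrary choices for the example.

```python
import random

def mixed_strategy(x, rng=random.Random(0)):
    """Play L with probability x and R with probability 1 - x (0 <= x <= 1)."""
    return "L" if rng.random() < x else "R"

# x = 1 (or x = 0) reduces the mixed strategy to the pure strategy L (or R).
print([mixed_strategy(0.3) for _ in range(10)])
```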

Mixed strategies will not be investigated in more detail, as they do not have great relevance for economic practice. Dixit/Nalebuff give an explanation for this: why are there so few real-world examples of companies playing mixed strategies? In business it is very difficult to accept leaving results to chance; one prefers to have the results under control. This is especially true when something goes wrong, which sometimes happens if one randomises moves. (faithfully translated from Dixit/Nalebuff, 1997, page 186/187)

2.3.6 Repeated Games

Game theory differentiates between games that are played only once and repeated games. Repeated games can be played finitely many times or infinitely many times. The reason for this differentiation is the fact that the outcomes of infinitely repeated games sometimes differ from those of games that are played only once or finitely many times. This phenomenon is captured by the Folk theorem and will be discussed in connection with the Prisoner's Dilemma (chapter 3.3).

The questionnaire investigates some games, which are played only once and one game that is played infinitely many times.

2.4 Solution concepts

“Strategic uncertainty about the behaviour of the opponents is an essential characteristic of game situations. The solution of a game depends on the players' expectations about the strategy choices of the other players. The main difference between alternative solution concepts lies in how this expectation formation is modelled. Consequently, a single generally accepted theory of it cannot exist.“ (faithfully translated from Holler/Illing, 1993, page 57)

2.4.1 Maximin-concept

The maximin concept makes sense in particular if, in a two-person game, one player's gain is the other player's loss, i.e. in two-player zero-sum games (faithfully translated from Holler/Illing, 1993, page 58/59).

For the reasons mentioned under 2.3.1.1, the questionnaire only includes general (non-zero-sum) games, for which the maximin concept is not a very suitable tool. Nevertheless, the authors would like to describe it and show how it differs from concepts better suited to general games.

“The logic of the maximin approach is in considering the worst case. Each player considers the worst that could result from each of his/her strategies and then simply chooses the strategy that yields the best “worst outcome“.“ (Eichberger, 1993, page 43)

illustration not visible in this excerpt

Table 3: Game 5 - Maximin-solution for general games (according to Holler/Illing, 1993, page 58)

If player 1 plays his/her first strategy, the worst that can happen to him/her is a payoff of 0. The worst payoff of his/her second strategy is also 0, and that of his/her third strategy is 1. Consequently, if he/she is playing a maximin strategy, he/she will play his/her third strategy. If the same considerations are made for player 2, player 2 will also play his/her third strategy. Consequently, every player will get a payoff of 1. The same game will be discussed again under 2.4.4 (Nash equilibrium), where it will be shown that both players could have obtained a much higher payoff by playing other strategies. As mentioned above, the maximin concept is not the best concept for solving general (non-zero-sum) games; only very risk-averse players would play a maximin strategy in this game.
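The maximin reasoning just described is mechanical and can be sketched in a few lines of Python. The 3x3 payoff matrices below are hypothetical stand-ins, since the actual figures of game 5 are not visible in this excerpt; they are constructed so that, as in the text, each strategy's worst payoffs are 0, 0 and 1. The logic is the same: take the worst payoff of each strategy, then pick the strategy whose worst payoff is best.

```python
# Hypothetical payoff matrices: A[i][j] is player 1's payoff and B[i][j] player 2's payoff
# when player 1 plays his/her strategy i (row) and player 2 plays strategy j (column).
A = [[4, 0, 0],
     [0, 4, 0],
     [1, 1, 1]]
B = [[4, 0, 1],
     [0, 4, 1],
     [0, 0, 1]]

def maximin_choice(payoff_rows):
    """Index of the strategy whose worst payoff is highest."""
    worst = [min(row) for row in payoff_rows]
    return max(range(len(worst)), key=lambda i: worst[i])

# Player 1 looks at the rows of A; player 2 at the columns of B (so B is transposed).
p1 = maximin_choice(A)
p2 = maximin_choice([list(col) for col in zip(*B)])
print("maximin strategies:", p1, p2)                      # here: the third strategy for both
print("resulting payoffs:", A[p1][p2], B[p1][p2])          # 1 and 1
```

With these (invented) numbers the maximin players end up with a payoff of 1 each, although a strategy combination with a payoff of 4 for each player exists, which is the point made in the text.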

2.4.2 Dominant strategy equilibrium concept

“If a player has a strategy better than any other, independent of the strategic choices of the opponents, then it appears reasonable to assume a player will choose this strategy. Such dominance arguments are intuitive and often applied in economics.“ (Eichberger, 1993, page 63)

“Better than any other” in this context means that the different payoffs of a certain strategy are always better, or at least not worse, than the respective payoffs of any other strategy. Game 6 illustrates what a dominant strategy is.

illustration not visible in this excerpt

Table 4: Game 6 - Dominant strategy equilibrium

The dominant strategy of player 1 is the strategy that always yields the higher payoff (4 rather than 2, and 3 rather than 1). Player 2 follows his/her dominant strategy by playing the strategy that yields 6 rather than 5 and 8 rather than 7. This strategy combination is called a “dominant strategy equilibrium“.
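The dominance check is easy to automate. The following Python sketch reuses the payoff figures quoted for game 6 (player 1: 4 vs 2 and 3 vs 1; player 2: 6 vs 5 and 8 vs 7); their arrangement in the matrices is an assumption, since the table itself is not visible in this excerpt.

```python
A = [[4, 3],
     [2, 1]]   # player 1's payoffs (rows = player 1's strategies)
B = [[6, 5],
     [8, 7]]   # player 2's payoffs (columns = player 2's strategies)

def dominant_row(payoffs):
    """Index of a strategy of the row player that is never worse than any other, or None."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    for r in range(n_rows):
        if all(all(payoffs[r][c] >= payoffs[other][c] for c in range(n_cols))
               for other in range(n_rows) if other != r):
            return r
    return None

# Player 2's strategies are the columns, so B is transposed before the check.
B_t = [list(col) for col in zip(*B)]
print("dominant strategy of player 1:", dominant_row(A))    # the strategy with payoffs 4 and 3
print("dominant strategy of player 2:", dominant_row(B_t))  # the strategy with payoffs 6 and 8
```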

There are also games, where not every payoff of a strategy is better than the payoffs of all other strategies, but at least not worse than the payoffs of the other possible outcomes. Game 7 gives an example of this kind of games.

illustration not visible in this excerpt

Table 5: Game 7 - Dominant strategy equilibrium

In this case, the dominant strategy of player 1 is the strategy whose payoffs are 4 rather than 3 and 4, which is not worse than 4. The dominant strategy of player 2 yields 2 rather than 1 and 2, which is not worse than 2. Consequently, the dominant strategy equilibrium of this game yields a payoff of 4 for player 1 and a payoff of 2 for player 2.

Not every game has just one dominant strategy equilibrium, as the following example shows.

illustration not visible in this excerpt

Table 6: Game 8 – four dominant strategy equilibria (according to Moulin, 1986, page 65)

In this game, every strategy is a dominant strategy and every strategy combination is a dominant strategy equilibrium, although one particular strategy combination would be the best for both players. “This example demonstrates that individually rational behaviour may result in sub-optimal outcomes.“ (Eichberger, 1993, page 67). Chapter 3, which discusses the Prisoner's Dilemma, will show again that rational behaviour does not always result in optimal outcomes for the players.

If every player possesses a dominant strategy, the opponent's payoffs do not have to be taken into account. Sometimes, however, one of the players possesses a dominant strategy and the other one does not, as the following example demonstrates.

illustration not visible in this excerpt

Table 7: Game 9 - Dominant strategy equilibrium

In this game player 2 does not have a dominant strategy: if player 2 assumes that player 1 plays one of his/her strategies, player 2 is better off with one reply; if player 2 assumes that player 1 plays the other strategy, he/she should prefer the other reply. Choosing the best strategy is therefore no longer independent of the opponent's strategy. Player 2 should realise, however, that player 1 possesses a dominant strategy and assume that player 1 will play it. For player 2 it is then no longer a question which strategy to prefer: he/she only has the choice between a payoff of 4 and a payoff of 9 and will therefore play the strategy yielding 9. This strategy combination is the dominant strategy equilibrium of the game.

The questionnaire includes a game with the same properties: one player possesses a dominant strategy and, given this, the other player obtains a dominant strategy as well (game situation 1, variant “sequential“).

As there is sometimes confusion about what a dominant strategy is, Dixit/Nalebuff give two examples of what a dominant strategy is NOT:

1. In 1981 Leonard Silk, a respected American economics journalist, wrote about a congressional debate: “Mr. Reagan has sensed that the Republicans have what game theorists call a ‘dominant strategy’ - one that makes a player better off than his opponent, no matter what strategy his opponent uses.“ (quoted from “Game Theory: Reagan's Move“, New York Times, April 15, 1981, page D2). This definition is not correct. The dominance in ‘dominant strategy’ is a dominance of a player's strategies over his/her other strategies, not of his/her strategies over the opponent's strategies. A dominant strategy is one that makes a player better off than he/she would be if he/she used any other strategy, no matter what strategy the opponent uses.

2. A second misperception is that the worst possible outcome of a dominant strategy is better than the best possible outcome of another strategy. Sometimes, but not always, that is true. For example, it is true for the dominant strategy of Time in the cover story war (chapter 4.2), but it is not true for the dominant strategies of the players in the OPEC game (chapter 4.3) (Dixit/Nalebuff, 1991, pages 64-65).

2.4.3 Iterated dominance equilibrium concept

In chapter 2.4.2, games with one or more dominant strategy equilibria were explained. But most games do not possess a dominant strategy at all, as the following example illustrates.

illustration not visible in this excerpt

Table 8: Game 10a - Game without dominant strategy (according to Eichberger, 1993, page 67)

For player 1, there is no dominant strategy: his/her best response changes with the strategy chosen by player 2. The same holds for player 2: his/her best response depends on which strategy player 1 chooses.

Nevertheless, a solution for this game can be found, as in some games there are strategies which are always unfavourable for a player. These dominated strategies can be removed from the payoff table to make it smaller and clearer: one of player 1's strategies is dominated by two of his/her other strategies, and two of player 2's strategies are dominated by other strategies of his/hers as well.

When all dominated strategies are removed from the table, the new strategic situation looks as follows:

illustration not visible in this excerpt

Table 9: Game 10b - iterated dominance equilibrium (according to Eichberger, 1993, page 73)

The process of eliminating dominated strategies can now be continued: in the reduced game, a further strategy of player 1 and a further strategy of player 2 are dominated.[2] The game is thereby reduced to a single strategy combination. This result is called the iterated dominance equilibrium (or sophisticated equilibrium).[3]

Note that for eliminating dominated strategies, the opponent's payoffs have to be taken into account. It is true that player 1 does not need the opponent's payoffs to eliminate his/her own dominated strategy, but without knowing something about the opponent's payoffs, and therefore without the ability to eliminate strategies of the opponent as well, he/she would not be able to eliminate further strategies of his/her own.
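The iterated elimination just described can be sketched as a small Python routine that alternately removes strictly dominated rows (player 1) and columns (player 2). The 3x3 matrices are invented for illustration, since the payoffs of games 10a/10b are not visible in this excerpt; with them the eliminations leave a single strategy pair.

```python
def eliminate_dominated(A, B):
    """Iteratively remove strictly dominated rows (player 1) and columns (player 2).

    A[i][j] and B[i][j] are the players' payoffs when row i meets column j;
    the function returns the indices of the surviving strategies of both players.
    """
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        # Remove rows of player 1 that are strictly worse than another surviving row.
        for r in rows[:]:
            if any(all(A[o][c] > A[r][c] for c in cols) for o in rows if o != r):
                rows.remove(r)
                changed = True
        # Remove columns of player 2 that are strictly worse than another surviving column.
        for c in cols[:]:
            if any(all(B[r][o] > B[r][c] for r in rows) for o in cols if o != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Invented example: the iterated elimination reduces the game to one strategy pair.
A = [[4, 3, 2],
     [3, 2, 4],
     [1, 1, 1]]
B = [[1, 3, 2],
     [2, 4, 3],
     [1, 1, 5]]
print(eliminate_dominated(A, B))   # ([0], [1])
```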

The following example describes a game where not every player is able to eliminate dominated strategies from the beginning.

illustration not visible in this excerpt

Table 10: Game 11 - Iterated dominance equilibrium

For player 1, there is no dominated strategy. But there is a dominated strategy for player 2, which a rational player 2 would never choose. Assuming player 1 knows the payoffs of player 2, and therefore knows that player 2 will never choose this strategy, player 1 is now able to choose: in the reduced game one of his/her strategies is dominated by the other, so he/she will play the dominating strategy, and player 2 will choose his/her best reply to it.

The concept of eliminating dominated strategies does work in this game, but has to be questioned in many other games, as the reason for not playing a dominated strategy may disappear during the iterated elimination of dominated strategies[4].

2.4.4 Nash equilibrium

The Nash equilibrium is a more general solution concept than either the iterated dominance equilibrium or the dominant strategy equilibrium. “There are more games with a Nash equilibrium than there are games with a dominant strategy equilibrium or an iterated dominance equilibrium.“ (Eichberger, 1993, page 87)

Game 5 will now be analysed again.

illustration not visible in this excerpt

Table 11: Game 5 – Nash equilibrium (according to Holler/Illing, 1993, page 58)

In this game, neither player 1 nor player 2 possesses a dominant strategy. Also, this game does not possess any dominated strategies.

The concept of the Nash equilibrium (for which John Nash received the Nobel Prize in Economics in 1994), however, leads to a rational and better solution for this game: “A Nash equilibrium is a strategy combination in which each player plays a best response to the opponents' behaviour“. (Eichberger, 1993, page 84)

The easiest way to find this combination is to investigate every possible outcome. For every player it has to be checked whether it makes sense for him/her, given the strategy of the opponent, to deviate from his/her strategy. Only if it makes sense for none of the players to deviate is the investigated strategy combination a Nash equilibrium.

Most strategy combinations of game 5 are not Nash equilibria: if, for example, player 1 assumes that player 2 is playing his/her part of such a combination, it makes no sense for player 1 to stay with his/her own part, because he/she can improve his/her payoff by deviating. The situation is different for one particular strategy combination: there, neither player 1 nor player 2 is able to improve his/her payoff by deviating from his/her strategy, assuming the opponent is playing the corresponding strategy. That combination is therefore a Nash equilibrium, and it gives both players a considerably higher payoff than the maximin solution of chapter 2.4.1. John Nash (1951) proved that every finite game (assuming mixed strategies are allowed) possesses at least one Nash equilibrium.
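A brute-force check of exactly this kind, enumerating every pure strategy combination and testing whether either player could gain by deviating, can be sketched in Python. The 2x2 payoffs below are invented; with them the sketch finds two pure-strategy equilibria, which also illustrates the multiplicity problem discussed next (mixed-strategy equilibria are not considered here).

```python
from itertools import product

def pure_nash_equilibria(A, B):
    """All pure-strategy Nash equilibria of a bimatrix game.

    A[i][j] is player 1's payoff and B[i][j] player 2's payoff when
    player 1 plays row i and player 2 plays column j.
    """
    equilibria = []
    rows, cols = range(len(A)), range(len(A[0]))
    for i, j in product(rows, cols):
        best_for_1 = all(A[i][j] >= A[k][j] for k in rows)   # player 1 cannot gain by deviating
        best_for_2 = all(B[i][j] >= B[i][k] for k in cols)   # player 2 cannot gain by deviating
        if best_for_1 and best_for_2:
            equilibria.append((i, j))
    return equilibria

# Invented coordination-type game with two pure Nash equilibria, (0, 0) and (1, 1).
A = [[3, 0],
     [0, 1]]
B = [[2, 0],
     [0, 2]]
print(pure_nash_equilibria(A, B))   # [(0, 0), (1, 1)]
```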

The problem with the Nash concept is that some games possess more than one Nash equilibrium. Game 2, already mentioned in chapter 2.3.1, gives an example of this kind of game.

illustration not visible in this excerpt

Table 12: Game 2 – Strategic form (according to Tirole, 1995, page 947)

If this game is investigated by the Nash approach, it can be ascertained that there are two Nash equilibria, namely (L, l) and (R, r). The problem is that there are now two possible solutions, and it is not possible to predict which one will be played and whether one of the Nash solutions will be played at all. This game is, in slightly changed form, part of the questionnaire and will be discussed again in chapter 5.2.2.1.2.

A very well-known game with multiple Nash equilibria is the “Battle of the Sexes“:

illustration not visible in this excerpt

Table 13: Game 12 – “Battle of the Sexes” (according to Eichberger, 1993, page 83)

The story is that a couple wants to spend an evening together, but the husband prefers seeing a boxing match while the wife prefers going to the theatre. Nevertheless, they would rather spend the evening together than go to one of these events separately. Player 1 is the wife (Theatre), player 2 is the husband (Boxing match). There is no dominant strategy for either player: if the wife goes to the theatre, the best response of the husband is to go to the theatre as well; but if the wife goes to the boxing match, he is better off going to the boxing match too. The same holds true for the wife.

There is also no iterated dominance equilibrium in this game. If, on the other hand, the players follow the safety-first strategy concept (maximin), the husband would go to the boxing match and the wife would go to the theatre. But this strategy combination leads to one of the worst possible outcomes (0.5, 0.5) and, in addition, neither player would be playing the best response to the opponent's strategy.

When the “Battle of the Sexes“ is investigated by the Nash approach, it can be discovered that there are two Nash equilibria: (T, T) and (B, B). If one player goes to the theatre, the best choice for the other is to follow; if one player goes to the boxing match, the other does best to do the same. Of course, the husband prefers the equilibrium (B, B) and the wife the equilibrium (T, T). Again, there is the problem that there are two possible solutions, and it is not possible to predict which one will be played and whether one of the Nash solutions will be played at all.

The next example concerns a sequential game, which can be solved by the dominant strategy approach. Again, the Nash approach finds a number of solutions. The investigated game (game 1, strategic form) is also part of the questionnaire in slightly changed form (chapter 5.2.2.1.1) and has already been mentioned in chapter 2.3.1.2.

illustration not visible in this excerpt

Table 14: Game 1 - Strategic form (according to Tirole, 1995, page 946)

Regarding player 2 in this game, three of his/her four strategies are dominated by the strategy that reacts with l to L and with r to R. Consequently, one can assume that player 2 plays this strategy.[5] If player 1 predicts this rational behaviour of player 2, he/she decides to play the strategy that then gives him/her a payoff of 3. The resulting strategy combination is the solution of this game. This solution is also a Nash equilibrium, but the Nash approach again allows more than this solution, namely two further equilibria. The Nash equilibrium as a solution concept is therefore only suitable if all decisions have to be made simultaneously; for games in which one player is able to wait for the opponent's decision, it is too weak (faithfully translated from Tirole, 1995, page 956).

All these different solution concepts might seem confusing. Therefore chapter 2.4.6 gives a kind of procedure for solving games.[6]

2.4.5 (Subgame) Perfect equilibrium

The (subgame) perfect equilibrium concept describes a refinement of the Nash equilibrium concept for games in which the players are able to observe the opponent's moves (games with perfect information). “The basic idea of the concept is choosing those Nash equilibria which are not linked with implausible threats.“ (Tirole, 1995, page 957). Reinhard Selten developed this concept in 1965 and was awarded the Nobel Prize in Economics in 1994 for it.

The payoff table of game 1 shows that two of the three Nash equilibria are linked with implausible threats. An implausible threat in this context means that a player plays a strategy that does not imply an optimal decision at every stage of the game.

illustration not visible in this excerpt

Table 15: Game 1 - Strategic form (according to Tirole, 1995, page 949)

illustration not visible in this excerpt

Illustration 5: Game 1 – Extensive form (according to Tirole, 1995, page 946)

If player 2 plays the strategy that reacts with l to both moves of player 1, his/her reaction is optimal if player 1 plays L, because by playing l he/she gets 0 (instead of -1 by playing r). But if player 1 plays R, a player 2 following this strategy would still react with l. This is not an optimal decision, as he/she could get a higher payoff by playing r in this case. Consequently, the Nash equilibrium containing this strategy is not a (subgame) perfect equilibrium, as it is linked with the implausible threat of player 2 to play l if player 1 plays R.

The same problem occurs if a strategy under which player 2 reacts with r to L is investigated in more detail: by playing r when player 1 plays L, player 2 takes a decision that is not optimal.

Only the strategy under which player 2 reacts with l to L and with r to R is not linked with implausible threats; with it, the behaviour of player 2 is optimal at every possible stage. Therefore the strategy combination consisting of this strategy and player 1's best reply to it is the only (subgame) perfect equilibrium in this game.

There is also an easier way to find the perfect equilibrium in this game. This concept is called backward induction or Kuhn's algorithm (faithfully translated from Tirole, 1995, page 958 - he refers to Kuhn, 1953).

To find the perfect equilibrium, this concept works from the end to the beginning of the game tree and eliminates all the non-optimal reactions of player 2 to the actions of player 1. The game tree can then be “folded up“, as the following picture shows. The folded-up game tree represents a decision problem of only one player (player 1).

illustration not visible in this excerpt

Illustration 6: Game 1 – Folded up game tree (according to Tirole, 1995, page 958)

The concept of eliminating dominated strategies would have led to the same result in this game. However, the two concepts do not always lead to the same result.[7] The questionnaire includes one game in which both concepts can be used; in that game, they again lead to the same result.
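Backward induction on a small game tree can also be sketched as a recursive Python function. The tree below is a hypothetical two-stage sequential game of the kind discussed above (the payoffs of game 1 itself are not visible in this excerpt): at each node the player to move picks the branch that maximises his/her own payoff, working from the end of the tree back to the beginning.

```python
# A node is either a terminal payoff pair (payoff of player 1, payoff of player 2)
# or a pair (player, {move: subtree}) for the player who has to move.
# The payoffs are hypothetical and only illustrate the folding-up of a game tree.
tree = ("P1", {
    "L": ("P2", {"l": (2, 0), "r": (1, -1)}),
    "R": ("P2", {"l": (0, 1), "r": (3, 2)}),
})

def backward_induction(node):
    """Return the payoff pair reached under subgame perfect play of the subtree."""
    if isinstance(node[1], dict):               # a decision node: (player, branches)
        player, branches = node
        index = 0 if player == "P1" else 1      # the mover maximises his/her own payoff
        results = {move: backward_induction(sub) for move, sub in branches.items()}
        best_move = max(results, key=lambda m: results[m][index])
        print(player, "chooses", best_move, "->", results[best_move])
        return results[best_move]
    return node                                  # terminal node: nothing left to decide

print("subgame perfect payoffs:", backward_induction(tree))
```

In this hypothetical tree, player 2 would answer L with l and R with r, and player 1, anticipating this, chooses R; eliminating player 2's non-optimal reactions is exactly the folding-up of the game tree described in the text.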

2.4.6 Conclusion

As announced before, the authors would finally like to give a kind of “procedure“ for solving games. This procedure does not include the maximin concept, as that concept is only suitable for zero-sum games and zero-sum games will not be investigated by the survey. All the other concepts, as suggested in Dixit/Nalebuff's “Thinking Strategically“ (Dixit/Nalebuff, 1991, page 85/86), should be used as follows:

- In sequential-move games there is a linear chain of thinking: if I do this, my rival can do that, then I can respond by doing this, etc. This kind of game can be solved by backward induction. That means: look forward, and reason backward.
- For simultaneous-move games, there is a circle of reasoning: I think that he/she thinks that I think, etc. One must see through the opponent's action even though one cannot see it when making one's own move. If a dominant strategy exists in this kind of game, one should use it. If one does not have a dominant strategy, but the opponent does, then one should expect him/her to use it and choose the best response accordingly.
- If none of the players has a dominant strategy, one should try to find out whether there are any dominated strategies and eliminate these strategies from consideration. This procedure has to be continued successively. If any dominant strategies appear in the smaller games during elimination, they should be chosen. Even if the procedure does not lead to an outcome, it reduces the size of the game.
- If there are neither dominant nor dominated strategies, or it is not possible to simplify a game any further, one should try to find a Nash equilibrium. As mentioned before, a Nash equilibrium is a strategy pair in which each player's action is the best response to the opponent's action. If there is a unique Nash equilibrium, there are many reasons why all players should choose it. If there is more than one Nash equilibrium, a commonly understood rule or convention for choosing one of them is necessary. For example, as mentioned before (chapter 2.4.4), the simultaneous game 2 possesses two Nash equilibria, (L, l) and (R, r). The convention or rule in this game could be that the Nash equilibrium guaranteeing the higher payoff for both players should always be chosen; in other words, the equilibrium in which both players' payoffs are better than in the other one should be played (a small sketch of this comparison follows below).
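The convention mentioned in the last point, choosing the equilibrium that is better for both players, amounts to a Pareto comparison of the equilibria and can be sketched in a few lines of Python. The equilibrium labels and payoffs used here are invented examples, not the actual values of game 2.

```python
def pareto_best(equilibria):
    """Among (label, (payoff_1, payoff_2)) pairs, return a label that is at least as
    good as every other entry for both players, or None if no such equilibrium exists."""
    for label, (p1, p2) in equilibria:
        if all(p1 >= q1 and p2 >= q2 for _, (q1, q2) in equilibria):
            return label
    return None

# Invented payoffs for two Nash equilibria of a simultaneous game:
equilibria = [("(L, l)", (3, 2)), ("(R, r)", (1, 1))]
print(pareto_best(equilibria))   # "(L, l)": better for both players
```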

2.5 Strategic moves

“We must organise a merciless fight. The enemy must not lay hands on a single loaf of bread, on a single litre of fuel. Collective farmers must drive their livestock away and remove their grain. What cannot be removed must be destroyed. Bridges and roads must be dynamited. Forests and depots must be burned down. Intolerable conditions must be created for the enemy.“ - Joseph Stalin, proclaiming the Soviets' “scorched earth“ defence against the Nazis, July 3, 1941.

Stalin's campaign lives on today in the battlefields of corporate control. When Western Pacific tried to take over the US publisher Houghton Mifflin, the publisher responded by threatening to empty its stable of well-known authors. The economist John Kenneth Galbraith, the writer Archibald MacLeish, the historian Arthur Schlesinger and a number of other authors threatened to find new publishers if Houghton Mifflin were acquired (Dixit/Nalebuff, 1991, page 119). “When Western Pacific Chairman Howard (Mickey) Newman got the first few letters from authors, he thought it was a big laugh. When he began getting more letters, he began to realise, 'I am going to buy this company and I ain't going to have nothing.'” (Institutional Investor, June 1979)

The scorched-earth defence is an example of what game theory calls strategic moves.[8] It shows that leaving options open is no longer always preferable: a lack of freedom can have strategic value, as it changes other players' expectations about one's own future responses. This can sometimes be an advantage.

illustration not visible in this excerpt

Illustration 7: The options for strategic moves (according to Dixit/Nalebuff, 1997, page 125)

An unconditional move (initiative) gives a strategic advantage to a player who is able to take the initiative and move first. A threat is a response rule that punishes others who fail to cooperate with you: a compelling threat is designed to induce someone to take an action, a deterrent threat is designed to prevent someone from taking an action. A promise is also a response rule, one that offers to reward someone who cooperates with you: a compelling promise is designed to induce someone to take a favourable action, a deterrent promise is designed to prevent someone from taking an unfavourable action. For further explanations and examples of promises and threats see Dixit/Nalebuff, 1997, page 122/123. An example of an unconditional move will be discussed in chapter 4.6.

Strategic moves contain two elements: the planned course of action and the commitment that makes this course credible. Credibility will not be explained in more detail in this treatise; for further details on possibilities for making actions credible, see Dixit/Nalebuff, 1997, pages 139 – 165.

2.6 Historical overview of Game Theory

Although most historical overviews of game theory start with 1944 (the publication of J. von Neumann's and O. Morgenstern's famous book “Theory of Games and Economic Behavior”), its origins can be traced back to the middle of the 19th century.

1838 Publication of Augustin Cournot’s Researches into the Mathematical Principles of the Theory of Wealth. In chapter 7, On the Competition of Producers, Cournot discusses the special case of duopoly and utilises a solution concept that is a restricted version of the Nash equilibrium.

1928 John von Neumann proved the maximin theorem in his article “Zur Theorie der Gesellschaftsspiele“. This paper also introduced the extensive form of a game.

1944 “Theory of Games and Economic Behavior“ by John von Neumann and Oskar Morgenstern is published.

1950 In January 1950 Melvin Dresher and Merrill Flood carried out, at the Rand Corporation, the experiment which introduced the game now known as the Prisoner's Dilemma. The famous story associated with this game, a two-person dilemma, is due to A. W. Tucker. Howard Raiffa independently conducted unpublished experiments with the Prisoner's Dilemma.

1950-1953 In four papers between 1950 and 1953 John Nash made seminal contributions to non-cooperative game theory. In two papers - Equilibrium Points in N-Person Games (1950) and Non-cooperative Games (1951) - Nash proved the existence of a strategic equilibrium for non-cooperative games, the Nash equilibrium.

1952 The first textbook on game theory was John Charles C. McKinsey, “Introduction to the Theory of Games “

1952 Merrill Flood's report on the 1950 Dresher/Flood experiments appears (Rand Corporation research memorandum, Some Experimental Games, RM-789, June).

1953 Extensive form games allow the modeller to specify the exact order in which players have to make their decisions and to formulate the assumptions about the information possessed by the players at all stages of the game. H. W. Kuhn's paper “Extensive Games and the Problem of Information” includes the formulation of extensive form games which is currently used, as well as some basic theorems pertaining to this class of games.

Late 1950s Near the end of this decade came the first studies of repeated games. The main result to appear at this time was the Folk Theorem. It states that the equilibrium outcomes of an infinitely repeated game coincide with the feasible and strongly individually rational outcomes of the one-shot game on which it is based. Authorship of the theorem is obscure. (faithfully translated from Holler/Illing, 1993, page 148)

1965 Reinhard Selten, “Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit“. In this article Selten introduced the idea of refinements of the Nash equilibrium with the concept of (subgame) perfect equilibria.

1966 Infinitely repeated games were born in a paper by R.J. Aumann and M. Maschler, “Game-Theoretic Aspects of Gradual Disarmament“.

1966 In the paper “A General Theory of Rational Behavior in Game Situations“ John Harsanyi gave the, now, most commonly used definition to distinguish between cooperative and non-cooperative games.

1972 “International Journal of Game Theory“ was founded by Oskar Morgenstern.

1976 An event is common knowledge among a set of agents if all know it, all know that they all know it, and so on ad infinitum. It was not until its formalisation in Robert Aumann's "Agreeing to Disagree" that game theorists and economists came to fully appreciate its importance.

1987 Publication of “The Evolution of Cooperation“ by Robert Axelrod

1994 The Central Bank of Sweden Prize in Economic Science in Memory of Alfred Nobel was awarded to John Nash, John C. Harsanyi and Reinhard Selten for their contributions to game theory.

1995 In January 1995 A. W. Tucker, who invented the famous Prisoner's Dilemma story of the two criminals, died at the age of 89.

This history ends in 1995 because, for papers published afterwards, it is too soon to tell what long-term effect any of them will have on game theory. According to Paul Walker, the book "The Theory of Learning in Games" by Drew Fudenberg and David Levine could perhaps become a new milestone in game theory. Paul Walker is the author of a history of game theory, parts of which were used for the history in this treatise.

3 The Prisoner’s Dilemma (PD)

The most famous game in game theory is the so-called Prisoner's Dilemma. For this reason the PD plays an important role in this treatise, and an economic version of this game is an integral part of the questionnaire.

In 1950 M. Dresher and M. Flood carried out an experiment which introduced this type of game. A. W. Tucker invented the name "Prisoner's Dilemma" later in 1950.

3.1 The story

The famous story associated with this game, a two-person dilemma, is due to A. W. Tucker. Tucker created the PD to illustrate the difficulty of analysing certain kinds of games. His simple explanation has given rise to a vast body of literature in different subjects, first of all in game theory.

Tucker described the story like this: Two burglars are arrested separately by the police. Both have to choose either to confess (and implicate the other) or not. If neither of them confesses, both will serve only one year, because of lack of proof. If both confess and implicate each other, both will be imprisoned for 10 years. However, if one of them confesses and implicates the other, and the other one does not confess, the burglar who has collaborated with the police will go free and the other one will get the maximum penalty of 20 years (Dr. Roger A. McCain, 1997, internet).

The possible strategies for both are "confess" (C) or "not confess" (NC). Their payoffs (actually penalties) are the sentences served. The "payoff" table of this game in strategic form looks as follows:

illustration not visible in this excerpt

Table 16: Strategic form of the PD

As the payoff table shows, the only equilibrium pair is (C, C), because both burglars have a dominant strategy, which tells them to confess. In addition, this equilibrium pair is a Nash equilibrium because neither prisoner has an incentive to change his strategy if he knows that the other one confesses. This equilibrium pair results in poor payoffs for both players, namely (-10, -10), and therefore this game is called a Prisoner's Dilemma. They could both benefit from choosing (NC, NC), because then they would achieve (-1, -1) years in prison, which is obviously much better for both of them. But not confessing may be dangerous for an imprisoned person who cannot communicate with his/her accomplice, because he/she might then be put in prison for 20 years.
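
This equilibrium reasoning can also be checked mechanically. The following Python snippet is a minimal sketch, not part of the original treatise; it encodes the payoffs of the story above (years in prison as negative numbers) and tests every strategy pair for the Nash property of having no profitable unilateral deviation.

# Payoffs (player 1, player 2) in years of prison, taken from the story above.
payoffs = {
    ("C", "C"): (-10, -10),
    ("C", "NC"): (0, -20),
    ("NC", "C"): (-20, 0),
    ("NC", "NC"): (-1, -1),
}
strategies = ("C", "NC")

def is_nash(s1, s2):
    # A pair is a Nash equilibrium if neither player gains by deviating unilaterally.
    u1, u2 = payoffs[(s1, s2)]
    best_reply_1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies)
    best_reply_2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies)
    return best_reply_1 and best_reply_2

for s1 in strategies:
    for s2 in strategies:
        if is_nash(s1, s2):
            print("Nash equilibrium:", (s1, s2), "payoffs:", payoffs[(s1, s2)])
# Prints only (C, C) with payoffs (-10, -10), matching the dilemma described above.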

3.2 Cooperation

In game theory the PD, as a non-zero-sum game, is often used to analyse cooperation. During the seventies and eighties many game theorists worked hard on a "solution" of the dilemma, for example Fudenberg/Maskin, 1986.

They tried to find conditions under which "cooperation" in PD type games becomes an equilibrium. One might assume that if the prisoners in the story above were able to communicate, they would agree not to confess. But it can easily be seen that even if the prisoners could communicate and promise each other not to confess, the equilibrium pair would not change, as there is an incentive to deviate from the agreement. Consequently, if the burglars promise each other not to confess, both would act irrationally if they kept their promise, since by confessing - assuming the other one keeps the promise - they go free instead of going to jail for a year.

The problem with the PD is that if both players are purely rational, they will never cooperate. Rational decision-making in this case means that each player makes the decision which is best for himself, whatever the other player chooses. Assuming the other one confesses, it is rational to confess as well, because then you will be put in jail for only 10 years instead of 20. Supposing the other one cooperates (= does not confess), again the best strategy is to confess, because then you will go free instead of being imprisoned for one year. The problem is that if both prisoners act rationally, both will decide not to cooperate, and again neither of them will gain anything. The dilemma arises again.

Only if the prisoners were able to commit themselves credibly to playing cooperative strategies, for example by a written contract, could a different outcome be expected[9].

3.3 The repeated PD

In a repeated game each player has to consider what to do in each round and how to react to the opponent's previous actions. Naturally it is hard to imagine that two prisoners will get into the above-mentioned situation more than once in their lives. Nevertheless, repeated PD type games must be analysed, because they are fundamental to many relevant economic conflicts, e.g. GATT negotiations and OPEC, as will be explained in chapter 4.3.

Generally a PD type game is characterised by the following strategic form, which also fits the above-mentioned story with the two criminals. Notice that the payoffs in this table do not represent concrete figures but should be understood as rankings with 1 = best and 4 = worst. Also note that this time NC stands for "not cooperative" and C for "cooperative".[10]

illustration not visible in this excerpt

Table 17 : Strategic form of a general PD type game

Like most kinds of games, PD type games can be repeated finitely many times or infinitely many times. PD type games played finitely many times will not be explained in more detail, as the equilibrium pair is the same as in the game played only once; therefore the finitely repeated PD is not an integral part of the questionnaire. The reason for this result is so-called backward induction. For further explanation see Reinhard Selten/John C. Harsanyi, 1992, page 195.

Much more interesting are PD type games played infinitely many times. One could object to analysing infinitely repeated games by saying that all real-life games can be played only finitely many times. Since we are all going to die someday, and certainly sooner than an infinite game will ever end, one could think that infinitely repeated games have no relevance at all for real-life games. However, the term "infinite repetitions" should not be interpreted literally. Here are four reasons for analysing infinitely repeated games nevertheless, especially in an economic context:

- Even though you are going to die one day, your company will probably survive you. As a corporation is not a person, there is no reason why it cannot live, and therefore play, forever. The best examples of infinitely repeated games involve companies.
- Even if you know that you are going to die one day, you will probably not act as if you will die tomorrow. Most people act as if they are going to live forever, and that is all it takes for a game played infinitely many times: two or more players who act as if they will keep on playing forever.
- It is sufficient to assume that decision-makers do not know when the game ends. From a strategic point of view an uncertain time horizon is equivalent to an infinite time horizon.
- As will be shown in the following section, only infinitely repeated games can create new, credible equilibrium payoff possibilities for one-shot games with a single equilibrium like the PD.

For PD type games played infinitely many times the strategic situation changes drastically and new equilibrium pairs become possible. The reason is that it is now possible to punish the opponent when he/she deviates from playing cooperatively. This result is known as the Folk Theorem (faithfully translated from Holler/Illing, 1993, page 24).

Players can achieve cooperation in all rounds by playing so-called trigger strategies. A trigger strategy, in the case of repeated PD type games, tells us to cooperate in the first round and to cooperate in every following round if - and only if - the opponent cooperated in all previous rounds as well. As soon as the opponent deviates from cooperation, the trigger strategy tells us not to cooperate anymore for the rest of the game. In other words, the trigger strategy tells the "prisoner" to confess (not to cooperate) for the whole remaining game from the moment of the first deviation on. Consequently, the original Nash equilibrium pair (-10, -10) will be played from then on, which again is not desirable for either player. That means that if player one chooses not to cooperate in the first round, he/she runs the risk that the opponent will punish him/her by not playing cooperatively for the rest of the game.

It is easy to see that the additional profit of one single deviation will never pay unless the profit from this unique deviation is valued higher than the total of the reduced future payoffs. This would only be the case if the discount factor for the future payoffs is very low. The following explanation might be helpful to understand this statement:

Infinite payoff streams in economics are evaluated by a discount factor (d) between 0 and 1. As economic agents prefer to be paid as soon as possible, payments at different dates have to be valued differently. The discount factor can be interpreted as an (individual) measure of time preference expressing the "impatience" of the individual. A high preference for getting the payoff as soon as possible is indicated by a low discount factor (close to 0). For example, an infinite payoff stream of $100 per period is evaluated as follows:

PV(100) = 100 + 100d + 100d² + 100d³ + … = 100 / (1 − d)

PV(100) is called the present value of the infinite payoff stream of 100. In other words, only if the decision maker is very impatient and prefers to get a higher payoff just once (namely today) rather than a higher payoff every time the game is played in the future can a single deviation pay. That means deviation is only profitable if d is small enough, which usually is not a realistic premise. Therefore, if both players in an infinitely repeated PD type game behave according to the trigger strategy, they will both end up playing cooperatively all the time. As this is a stable situation, cooperation is the new equilibrium of the infinitely repeated PD type game.
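
To make this argument concrete, the following Python sketch (not part of the original treatise; the per-round numbers are taken from the prisoners' payoffs above) compares the present value of cooperating forever (-1 per round) with that of a single deviation (0 today, followed by the punishment payoff of -10 per round once the opponent's trigger strategy takes effect), for several values of the discount factor d.

def pv_cooperate_forever(d):
    # -1 - d - d**2 - ... = -1 / (1 - d)
    return -1 / (1 - d)

def pv_deviate_once(d):
    # 0 today, then -10 in every future round: 0 + d * (-10) / (1 - d)
    return d * (-10) / (1 - d)

for d in (0.05, 0.1, 0.5, 0.9):
    coop, dev = pv_cooperate_forever(d), pv_deviate_once(d)
    verdict = "deviation pays" if dev > coop else "cooperation pays"
    print(f"d = {d:.2f}: cooperate = {coop:.2f}, deviate = {dev:.2f} -> {verdict}")
# With these numbers a single deviation only pays for d < 0.1, i.e. for a very impatient player.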

Essentially, that is the way the dilemma situation in PD type games has been solved. One must be aware that trigger strategies are, of course, not the only way to punish a player who deviates from playing cooperatively. One alternative method is the so-called Tit-for-Tat strategy, which is also very famous and leads to the same new equilibrium as the trigger strategy. The Tit-for-Tat strategy tells you to start playing cooperatively in the first round. If your opponent played cooperatively in this round as well, the Tit-for-Tat strategy tells you to keep on playing cooperatively. Only if your opponent deviates does the strategy advise you to stop playing cooperatively as well. But in contrast to the trigger strategy, the Tit-for-Tat strategy tells you to return to cooperative play if your opponent returned to playing cooperatively in the previous round. Consequently, a player can gain an advantage for one round at most by not playing cooperatively. As mentioned before, if both players follow the Tit-for-Tat strategy, they will again end up playing cooperatively for the whole game.
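
The difference between the two punishment rules can be illustrated with a small simulation. The following Python sketch is not part of the original treatise; it uses the prisoners' sentences from the story above as per-round payoffs (here D stands for defect, i.e. not cooperate) and lets Tit-for-Tat and the trigger strategy each play ten rounds against a hypothetical opponent who deviates exactly once and then returns to cooperation.

# Per-round payoffs (player A, player B): both cooperate -> -1 each, both defect -> -10 each,
# a lone defector gets 0 while the cooperator gets -20.
ROUND_PAYOFF = {("C", "C"): (-1, -1), ("C", "D"): (-20, 0),
                ("D", "C"): (0, -20), ("D", "D"): (-10, -10)}

def tit_for_tat(own_history, opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opp_history else opp_history[-1]

def trigger(own_history, opp_history):
    # Cooperate until the opponent has defected once, then defect forever.
    return "D" if "D" in opp_history else "C"

def deviate_once(own_history, opp_history):
    # Hypothetical opponent: defects in the third round only, otherwise cooperates.
    return "D" if len(own_history) == 2 else "C"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = ROUND_PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        total_a += pay_a
        total_b += pay_b
    return "".join(hist_a), "".join(hist_b), total_a, total_b

print(play(tit_for_tat, deviate_once))  # Tit-for-Tat punishes for one round, then cooperation resumes
print(play(trigger, deviate_once))      # the trigger strategy never cooperates again after the deviation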

The power of Tit-for-Tat was confirmed by an experiment by Robert Axelrod, a political scientist at the University of Michigan. He carried out a tournament of two-person PD type games. Game theorists from around the world took part in this tournament and sent their strategies in the form of computer programs to R. Axelrod. It should be assumed that the programs were developed for games played infinitely many times. The programs played against each other in pairs in a PD game repeated 150 times (as an approximation of infinity). The programs were then ranked by the sum of their scores. The winner was the program of a mathematics professor whose winning strategy was "Tit-for-Tat". This shows that the Tit-for-Tat strategy is not only theory but is also able to beat other strategies in real experiments (faithfully translated from Holler/Illing, 1993, page 167 - they refer to Axelrod, 1987).

Tit-for-Tat is thus another way (besides the trigger strategy) in which the new equilibrium of an infinitely repeated PD type game can be achieved. As neither player has an incentive to deviate from this new equilibrium, it is a Nash equilibrium of the infinitely repeated game.

4 Examples for interesting economic games

A lot of books about game theory have already been published. Nevertheless, it is very difficult to find books that not only discuss the theoretical and mathematical aspects of game theory but also illustrate some games with a real story around them.

The book Thinking Strategically by Dixit and Nalebuff (1991) is one of the few books illustrating some very interesting games. That is why some economic games from this book are described in more detail here.

4.1 Market entry game

This sequential game is an example of backward induction and shows how different assumptions about the opponent's payoffs lead to different decisions.

Assume that in a certain country a company called "Fastcleaners" dominates the market for vacuum cleaners. Now a new vacuum cleaner company called "Newcleaners" has to decide whether or not to enter the vacuum cleaner market. If Newcleaners enters, Fastcleaners has two possible reactions: either to accept the new company and therefore accept a lower market share, or to fight a price war.

illustration not visible in this excerpt

Illustration 8: Profit amounts for Newcleaners for every possible outcome (according to Dixit/Nalebuff, 1997, page 40)

As Newcleaners does not know anything about the payoffs of Fastcleaners, the only way to solve the problem is given by decision theory. A probability is assigned to every possible reaction of Fastcleaners. Assuming each reaction of Fastcleaners has a probability of 50 %, one can calculate the expected profit of entering: 0.5 x $100,000 + 0.5 x (-$200,000) = -$50,000. As this is equivalent to an expected loss of $50,000, an analyst would recommend staying out of the market.
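
A minimal Python sketch of this decision-theoretic calculation follows (not part of the original treatise; the payoff figures are those stated above, and the 50 % probability is the assumption made in the text, which can be varied):

# Payoffs for Newcleaners: +$100,000 if Fastcleaners accepts the entry,
# -$200,000 if Fastcleaners fights a price war, $0 if Newcleaners stays out.
p_accept = 0.5  # assumed probability that Fastcleaners accepts the new competitor
expected_entry_profit = p_accept * 100_000 + (1 - p_accept) * (-200_000)
print(f"Expected profit of entering: ${expected_entry_profit:,.0f}")
print("Recommendation:", "enter" if expected_entry_profit > 0 else "stay out")
# With the 50/50 assumption the expected profit is -$50,000, so the analyst recommends staying out.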

[...]


[1] In this context, it has to be mentioned that the payoffs already include additional effects like interest, prestige improvement, etc.

[2] Now in this smaller game it is also correct to say s and s are dominant strategies.

[3] These expressions are only valid if the final game possesses only strategy combinations in which each player gets the same payoff from all his/her remaining strategies, or if the game ends with a unique strategy combination (like game10b).

[4] Examples and further explanation: Eichberger, 1993, pages 79/80.

[5] As s dominates all the other strategies of player 2 it is a dominant strategy at the same time.

[6] For games with incomplete information there is an additional Nash solution concept. It is called Bayes-Nash equilibrium. This concept will not be discussed in more detail, because - as mentioned before - games with incomplete information are not investigated by the questionnaire (for Bayes-Nash equilibrium see Eichberger, 1993, chapter 5).

[7] for details: Tirole, 1995, page 958

[8] The terminology, and much of the analysis, was pioneered by Thomas Schelling, 1960.

[9] As mentioned before, some methods for making such commitments credible are explained in chapter 6 of Dixit/Nalebuff's book "Thinking Strategically", 1991, pages 142 – 167.

[10] In the game before, C stood for confess and NC for not confess.
