How Does Game Theory Affect Blockchain Networks?

Game Theory is the study of strategic decision making.

Game Theory is a core principle of blockchain systems. If you’ve ever wondered how thousands of individual Bitcoin nodes, run by strangers, manage to work together, Game Theory is your answer.

Game Theory incentivizes people who don’t know each other to work together successfully.

How to find Nash Equilibrium in a 2X2 payoff matrix

This video goes over the strategies and rules of thumb that help figure out where the Nash equilibrium will occur in a 2×2 payoff matrix. Generally, you identify the dominant strategy for each player and then check whether a single cell ends up being the choice of both players.
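As a minimal sketch of that procedure (the payoff values below form an invented prisoner’s dilemma, not a game from the video), we can simply check every cell of the matrix for profitable deviations:

```python
# A minimal sketch of finding pure-strategy Nash equilibria in a 2x2 game.
# payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def pure_equilibria(payoffs, actions):
    """Return the cells where neither player can gain by deviating."""
    equilibria = []
    for r in actions:
        for c in actions:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in actions)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in actions)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_equilibria(payoffs, actions))  # [('D', 'D')]: defection dominates here
```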

Game Theory 14: Network Game Theory

NETWORK GAME THEORY

Continuing our discussion of evolutionary game theory, in this video we will discuss network games.

The workings of evolution are typically told as a story of competition and the classical conception of the survival of the fittest.

But in reality, evolution is as much about cooperation as competition. A unicellular organism may have survived the course of history largely based upon its capacity to fight for resources with other unicellular organisms.

But the cells in multicellular organisms have survived based upon their capacity to cooperate. They form part of large systems of coordination and they are selected for based upon their capacity to interoperate with other elements within large networks that contribute to the workings of the whole organism.

Likewise, in a ghetto full of gangsters, it may be your capacity to look out for your own skin that will enable you to get ahead. But at the other end of town, where people earn their living as part of large complex organizations, it is primarily your capacity to interoperate with others and form part of these large organizations that determines your payoff.

You form part of a large cooperative organization which is really what is supporting you and determining your payoff. In such an event one needs to be able to interoperate with others effectively, to be of value to the organization, and thus succeed in the overall game.

The idea is that evolution creates networks of cooperation that are able to capture resources more effectively because of their coordinated effort.

People’s capacity to survive within such systems is then based upon their capacity for cooperation, instead of competition, as it might be if they were outside of these networks of cooperation, in the jungle so to speak.

Thus what we do, our choice of strategy and the payoff for cooperation or defection in the real world, depends hugely on the context outside of the immediate game and this context can be understood as a network of agents interacting.

When we form part of networks of coordination and cooperation our payoffs come to depend largely on what others around us are doing.

I may want to buy a certain computer operating system, but the payoff will depend on which operating system my colleagues are using. Likewise, people will want to learn a new language only if the other people around them also speak that language.

SPATIAL DISTRIBUTION

A key factor in the evolution of cooperation is spatial distribution. If you can get cooperators to cluster together in a social space, cooperation can evolve.

Research conducted by Christakis and Fowler has shown that our experience of the world depends greatly on where we find ourselves within the social networks around us. Their studies have found that networks influence a surprising variety of lifestyle and health factors, such as obesity, smoking cessation, and even happiness.

The experiment they conducted took place in Tanzania with the Hadza people, one of the last remaining populations of hunter-gatherers on the planet whose lifestyle predates the invention of agriculture. They designed experiments to measure social ties and social cooperation within the communities.

To identify the social networks existing within the communities, they first asked adults to identify individuals they would prefer to live with in their next encampment. Second, they gave each adult three straws containing honey and told them they could give these straws as gifts to anyone in their camp.

This generated 1,263 campmate ties and 426 gift ties.

In a separate activity, the researchers measured levels of cooperation by giving the Hadza additional honey straws that they could either keep for themselves or donate to the group.

When the networks were mapped and analyzed, the researchers found that cooperators and non-cooperators formed distinct clusters within the overall network. When they looked at individual traits alongside the ties people formed, they found clearly that cooperators clustered together, becoming friends with other cooperators.

The study’s findings describe elements of social network structures that may have been present early in human history, suggesting how our ancestors may have formed ties with both kin and non-kin based on shared attributes, including the tendency to cooperate.

According to the paper, social networks likely contributed to the evolution of cooperation.

MODEL

The emerging combination of network theory and game theory offers us an approach to looking at such situations. The idea is that there are different individuals making decisions and they are on a network and people care about the actions of their neighbors.

As an example, we can think of an individual, Kate, choosing whether to go to university or not, and this action will depend upon how many of her friends are choosing to go to university also.

So the payoff for the individual will depend on how much she likes the idea of going to university as an individual, but also on how many of her friends choose to go and on how many friends she has.

So in this networked game the individual might have a threshold: say Kate will only go to university if at least two of her friends are also going, and her friends all have the same threshold.

This is an example of a strategic complements game, meaning that the more of one’s neighbors take an action, the more attractive taking it becomes for oneself.
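A minimal sketch of this threshold game as a best-response cascade; the friendship graph, initial adopters, and threshold below are invented for illustration:

```python
# A minimal sketch of a strategic-complements threshold game on a network.
# Each agent adopts (goes to university) once at least `threshold`
# of their friends have adopted. The friendship graph is an invented example.
friends = {
    "Kate": ["Amy", "Ben", "Cal"],
    "Amy":  ["Kate", "Ben"],
    "Ben":  ["Kate", "Amy", "Cal"],
    "Cal":  ["Kate", "Ben"],
}
threshold = 2
adopters = {"Amy", "Ben"}       # initial seed of adopters

changed = True
while changed:                  # iterate best responses to a fixed point
    changed = False
    for person, circle in friends.items():
        if person not in adopters and sum(f in adopters for f in circle) >= threshold:
            adopters.add(person)
            changed = True

print(sorted(adopters))  # ['Amy', 'Ben', 'Cal', 'Kate']: adoption cascades
```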

But we can also have the inverse, what are called games of strategic substitution, where the more of my neighbors that take the action the less attractive it is for me.

As an example, we might take Billy, who is thinking of buying a car. Billy is also part of a social network of friends, and if one of his friends has a car he can take rides with that friend and has no great need to purchase a car himself. If we assume the same is true for his friends, we can use a social network model of the game to find the equilibrium state. Billy’s payoff then looks like a ranking: one of his friends having a car is best, having to buy one himself comes next, and worst of all is no one having a car.

An agent is only willing to take action 1 (buying the car) if no one they are connected to is taking that action. A network like this is in equilibrium when every player connected to a player taking action 1 does not take that action themselves.
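And a matching sketch for the strategic-substitutes case: a profile of choices is an equilibrium when every car owner has no car-owning neighbor and every non-owner has at least one. The network here is again invented for illustration:

```python
# A minimal sketch of an equilibrium check in a strategic-substitutes game.
# `buyers` is the set of agents taking action 1 (buying a car).
friends = {
    "Billy": ["Dee", "Eva"],
    "Dee":   ["Billy", "Eva"],
    "Eva":   ["Billy", "Dee", "Finn"],
    "Finn":  ["Eva"],
}

def is_equilibrium(friends, buyers):
    for person, circle in friends.items():
        neighbor_buys = any(f in buyers for f in circle)
        if person in buyers and neighbor_buys:
            return False   # a buyer would rather free-ride on a neighbor's car
        if person not in buyers and not neighbor_buys:
            return False   # an agent with no access to a car would rather buy one
    return True

print(is_equilibrium(friends, {"Eva"}))    # True: everyone reaches a car
print(is_equilibrium(friends, {"Billy"}))  # False: Finn is left without access
```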

Our world is a complex place, especially when dealing with social interaction where people are embedded within a given social, cultural, economic and physical environment, all of which is affecting the choices they make. The combination of network theory and game theory takes us into this world of complex games which is much more representative of many real-world situations, but still very much at the forefront of research.

This video has hopefully given you a sense of how network game theory can help us look outside the box of standard games, to see how other factors in the environment may be influencing a game and how we can potentially incorporate those factors through the application of network modeling.

Game Theory 13: Replicator Dynamics

REPLICATOR DYNAMICS

The Replicator equation is the first and most important game dynamic studied in connection with evolutionary game theory. The replicator equation and other deterministic game dynamics have become essential tools over the past 40 years in applying evolutionary game theory to behavioral models in the biological and social sciences.

REPLICATOR EQUATION

These models show the growth rate of the proportion of agents using a certain strategy. As we will illustrate, this growth rate is equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole.
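In standard notation, where $x_i$ is the share of the population using strategy $i$ and $f_i(x)$ is that strategy’s payoff, the replicator equation is usually written as:

$$\dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right), \qquad \bar{f}(x) = \sum_j x_j f_j(x)$$

so a strategy’s share grows exactly when its payoff exceeds the population average.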

MODEL

There are three primary elements to a replicator model:

Firstly, we have a set of agent types, each of which represents a particular strategy, and each strategy has an associated payoff that captures how well it is doing.

There is also a parameter associated with how many of each type there are in the overall population – each type represents a certain percentage of the overall population.

Now in deciding what they might do, people may adopt two approaches.

  • They may simply copy what other people are doing. In this case, the likelihood of an agent adopting a given strategy is proportional to that strategy’s existing share of the population. So if lots of people are following some strategy, the agent is more likely to adopt it over another strategy that few are following.
  • Alternatively, the agent might be more discerning, looking to see which of other people’s strategies is doing well and adopting the most successful one, the one with the highest payoff.

The replicator dynamic model tries to balance these two potential approaches that agents might adopt and thereby, hopefully, gives us a more realistic model than one in which agents rely on either approach alone.

Given these rules, the replicator model is one way of trying to capture the dynamic of this evolutionary game, to see which strategies become more prevalent over time or how the percentage mix of strategies changes.

In a rational model, people simply adopt whichever strategy they see doing best among those present. But equally, people may simply copy what others are doing: if 10% are using strategy 1, 50% strategy 2, and 40% strategy 3, then the agent is most likely to adopt strategy 2 due to its prevalence.

So the weight that captures how likely an agent is to adopt a certain strategy in the next round of the game is a function of that strategy’s current share of the population multiplied by its payoff.

If we want to think about this in a more intuitive way, we might imagine a bag of balls, where each ball represents a strategy that will be played in the game. If a strategy has a better payoff it is a bigger ball, and you are more likely to pick that bigger ball.

Equally, if there are more agents using that strategy in the population, there will be more balls in the bag representing that strategy, meaning again you will be more likely to choose it. The replicator model is simply computing which balls will get selected and thus what strategies will become more prevalent.
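A minimal discrete-time version of this computation, using an invented 2×2 payoff matrix in which the second strategy dominates:

```python
import numpy as np

# A minimal discrete-time replicator dynamic. A[i][j] is the payoff to
# strategy i against strategy j; the values are an invented example.
A = np.array([[2.0, 0.0],      # strategy 0
              [3.0, 1.0]])     # strategy 1 dominates strategy 0 here

x = np.array([0.9, 0.1])       # initial population shares

for _ in range(50):
    payoffs = A @ x            # average payoff of each strategy
    average = x @ payoffs      # population-average payoff
    x = x * payoffs / average  # growth ~ prevalence times relative payoff

print(x.round(3))  # [0. 1.]: the dominant strategy takes over the population
```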

One thing to note though is that the theory typically assumes large homogeneous populations with random interactions. The replicator equation differs from other equations used to model replication in that it allows the fitness function to incorporate the distribution of the population types rather than setting the fitness of a particular type constant. This important property allows the replicator equation to capture the essence of selection. But unlike other models, the replicator equation does not incorporate mutation and so is not able to innovate new types or pure strategies.

FISHER’S FUNDAMENTAL THEOREM

An interesting corollary to this is what is called Fisher’s Fundamental Theorem, which is a model that tries to capture the role that variation plays in adaptation. The basic intuition is that a higher variation in the population will give it greater capacity to evolve optimal strategies given the environment.

Thus given a population of agents trying to adapt to their environment, the rate of adaptation of a population is proportional to the variation of types within that population. Fisher’s Fundamental Theorem then works to incorporate this additional important parameter, of the degree of variation among the population, so as to better model the overall process of strategy evolution.
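In its simplest textbook form, the theorem states that the rate of increase of a population’s mean fitness $\bar{f}$ equals the variance of fitness across the population:

$$\frac{d\bar{f}}{dt} = \operatorname{Var}(f)$$

The more variation there is for selection to act on, the faster the population adapts; a population of identical types has zero variance and cannot adapt at all.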

GAMES

Static game-theoretic solution concepts, such as Nash equilibrium, play a central role in predicting the evolutionary outcomes of game dynamics.

Conversely, game dynamics that arise naturally in analyzing behavioral evolution lead to a more thorough understanding of issues connected to the static concept of equilibrium. That is, both the classical and evolutionary approaches to game theory benefit through this interplay between them.

Replicator Dynamic models have become a primary method for studying evolutionary dynamics in social, economic, and ecological games.

Game Theory 12: Evolutionarily Stable Strategies

COOPERATIVE STRUCTURES

As we saw in the previous chapter on evolutionary games, when everyone was playing a random strategy it was best to play a Tit for Tat strategy. When everyone was playing a Tit for Tat strategy, it was best to play Generous Tit for Tat. When people were playing this, it was then best to play an unconditional cooperative strategy. Once the game was in this state, it was then best to play a defecting strategy, thus creating a cycle. This illustrates clearly the dynamic nature of the success of strategies within games.

Because evolutionary games are dynamic, meaning that agents’ strategies change over time, what is best for one agent to do often depends on what others are doing.

It is legitimate for us to then ask, are there any strategies within a given game that are stable and resistant to invasion?

In studying evolutionary games, one thing that biologists and others have been particularly interested in is this idea of evolutionary stability: games that lead to stable solutions or points of stasis for contending strategies.

Just as equilibrium is the central idea within static noncooperative games, the central idea in dynamic games is that of evolutionarily stable strategies, as those that will endure over time.

As an example, we can think about a population of seals that go out fishing every day. Hunting for fish consumes energy, so some seals may adopt a strategy of simply stealing fish from those who have done the fishing. If the whole population is fishing, then a mutant born with a defector strategy of stealing will do well for itself, because there is plenty of fishing happening. This successful defector strategy can then reproduce, creating more defectors, at which point we might say that the defecting strategy is superior and will dominate. But of course, over time a tragedy of the commons emerges, as not enough seals go out fishing. Stealing then becomes a less viable strategy, to the point where the stealers die out and those who go fishing do well again.

Thus the defector strategy is unstable, and likewise, the fishing strategy may also be unstable. What may be stable in this evolutionary game is some combination of both.

EVOLUTIONARILY STABLE STRATEGY

The Evolutionarily Stable Strategy is similar to the Nash Equilibrium of classical Game Theory, with a number of additions.

Nash Equilibrium is a game equilibrium where it is not rational for any player to deviate from their present strategy.

An evolutionarily stable strategy here is a state of game dynamics where, in a very large population of competitors, another mutant strategy cannot successfully enter the population to disturb the existing dynamic.

Indeed, in the modern view, equilibrium should be thought of as the limiting outcome of an unspecified learning, or evolutionary process, that unfolds over time. In this view, equilibrium is the end of the story of how strategic thinking, competition, optimization, and learning work, not the beginning or middle of a one-shot game.

Therefore, a successful stable strategy must have at least two characteristics.

  1. It must be effective against competitors when it is rare, so that it can enter the previous competing population and grow.
  2. It must also be successful later, when it has grown to a high proportion of the population, so that it can defend itself.

This, in turn, means that the strategy must be successful when it contends with others exactly like itself. A stable strategy in an evolutionary game does not have to be unbeatable, it only has to be uninvadable and thus stable over time.

A stable strategy is a strategy that, when everyone is doing it, no new mutant could arise which would do better, and thus we can expect a degree of stability.
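Formally, in Maynard Smith’s definition, with $E(A, B)$ denoting the payoff to playing strategy $A$ against strategy $B$, a strategy $S$ is evolutionarily stable if for every mutant strategy $T \neq S$:

$$E(S, S) > E(T, S) \quad \text{or} \quad \left( E(S, S) = E(T, S) \ \text{and} \ E(S, T) > E(T, T) \right)$$

Either the resident does strictly better against itself than the mutant does against it, or, if they tie, the resident does strictly better against the mutant than the mutant does against itself.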

UNSTABLE CYCLING

Of course, we don’t always get stable strategies emerging within evolutionary games. One of the simplest examples of this is the game Rock, Paper, Scissors.

The best strategy is to play a mixed random game, where one plays any of the three strategies one-third of the time.

However, in biology many creatures are incapable of mixed behavior: they exhibit only one pure strategy. If the game is played only with the pure Rock, Paper, and Scissors strategies, the evolutionary game is dynamically unstable. Rock mutants can take over an all-Scissors population, then Paper mutants can take over the all-Rock population, then Scissors mutants can take over the all-Paper population, and so on.
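We can watch this instability with the same kind of discrete replicator update as in the earlier sketch; the payoff matrix uses the standard win/tie/loss values, shifted to stay positive, which is an assumption of this sketch:

```python
import numpy as np

# Rock, Paper, Scissors payoffs: win = 2, tie = 1, loss = 0.
A = np.array([[1, 0, 2],     # Rock     vs (Rock, Paper, Scissors)
              [2, 1, 0],     # Paper    vs (Rock, Paper, Scissors)
              [0, 2, 1]])    # Scissors vs (Rock, Paper, Scissors)

x = np.array([0.5, 0.3, 0.2])   # start away from the mixed (1/3, 1/3, 1/3) point

for step in range(200):
    p = A @ x
    x = x * p / (x @ p)          # discrete replicator update
    if step % 50 == 0:
        print(step, x.round(3))  # the shares keep cycling rather than settling
```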

Using experimental economic methods, scientists have used the Rock, Paper, Scissors game to test human social evolutionary dynamical behaviors in the laboratory. The social cyclic behaviors, predicted by evolutionary game theory, have been observed in various lab experiments.

Likewise, this has been recorded within ecosystems, most notably in a particular type of lizard that can take three different forms, creating three different strategies: one aggressive, another unaggressive, and the third somewhat prudent. The overall situation corresponds to the Rock, Paper, Scissors game, creating a six-year population cycle as new mutants enter and become dominant before another strategy invades, and so on.

Game Theory 11: Evolutionary Game Theory

EVOLUTIONARY GAME THEORY

Classical game theory was developed during the mid 20th century primarily for application in economics and political science. But in the 1970s a number of biologists started to recognize how similar the games being studied were to the interaction between animals within ecosystems. Game theory then quickly became a hot topic in biology as they started to find it relevant to all sorts of animal and microbial interactions from the feeding of bats to the territorial defense of stickleback fish.

Originally evolutionary game theory was simply the application of game theory to evolving populations in biology. Asking how cooperative systems could have evolved over time from various strategies that biological creatures might have adopted. However, the development of evolutionary game theory has produced a theory which holds great promise as a general theory of games.

More recently, evolutionary game theory has become of increasing interest to economists, sociologists, anthropologists, and social scientists in general, as well as philosophers. In this video we will talk about this more general application of evolutionary game theory.

Whereas the game theory that we have been talking about so far has been focused on static strategies, that is to say, strategies that do not change over time, evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change. Here we are asking how strategies evolve over time and which kind of dynamic strategies are most successful in this evolutionary process.

EVOLUTION

One of the interesting differences between evolutionary game theory and standard game theory is that the evolutionary version does not require players to act rationally.

When we talk about biological cells or ants, we know that they do not sit in front of a payoff matrix and ask themselves what the best payoff is; in evolutionary game theory, natural selection does this for us.

So if we have a group of cooperators and defectors who randomly meet each other, the average payoff of the defectors is higher than that of the cooperators, and therefore they reproduce better. Payoffs in evolutionary biology correspond to reproductive success. So after some time, evolution will have favored defectors to the point where all of the cooperators are extinct.

The basic logic is that, for something to survive the course of time, it must be an optimal strategy or else any other strategy that is more effective will eventually come to dominate the population.

Traditionally, the story of evolution is told as one of competition, and there is certainly plenty of this. But there is also mutualism, where organisms and people manage to work together cooperatively and survive in the face of defectors. Many research papers have been written on this topic of how cooperation could evolve in the face of such an evolutionary dynamic.

The general question of interest in evolutionary game theory is how patterns of cooperation evolve, and what the optimal strategies are in a game that evolves over time.

The basic mechanism that underlies the evolution of cooperation is the interdependency between acts over time.

In a single shot game, it makes sense to always defect, but with repeated interaction, cooperation becomes greatly more viable. If the game is repeated, it is no longer the case that strict defection is the best option.

If the prisoner’s dilemma situation is repeated it allows non-cooperation to be punished more, and cooperation to be rewarded more, than the single-shot version of the problem would suggest. We can understand this better by looking at a number of experiments that were done to investigate this dynamic.

EXPERIMENTS

The political scientist Robert Axelrod in the late seventies conducted a number of highly influential computer experiments asking what makes a good strategy for playing a repeated Prisoner’s Dilemma. Axelrod invited various researchers to submit computer algorithms to a competition to see which would fare best against each other. Computer models of the evolution of cooperation showed that indiscriminate cooperators almost always end up losing against defectors, who accept helpful acts from others but do not reciprocate. People who are indiscriminately cooperative and helpful all of the time end up getting taken advantage of by others. However, a population of pure defectors also loses out on the possible rewards of cooperation that would give everyone higher payoffs.

Many strategies have been tested; the best competitive strategies are general cooperation with a reserved retaliatory response if necessary.

The most famous and one of the most successful of these is Tit for Tat. Tit for Tat is a very simple algorithm with just three rules: I start with cooperation; if you cooperated, I will cooperate; if you defected, I will defect. Computer tournaments in which different strategies were pitted against each other showed Tit for Tat to be the most successful strategy in social dilemmas.
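A minimal sketch of such a repeated game, assuming the conventional prisoner’s dilemma payoff values (the numbers and helper functions here are illustrative, not Axelrod’s actual tournament code):

```python
# A minimal sketch of an iterated prisoner's dilemma between two strategies.
# "C" = cooperate, "D" = defect; payoffs are the conventional PD values.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): locked-in mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): TFT loses only the first round
```

Against itself, Tit for Tat locks into mutual cooperation; against an unconditional defector it loses only the first round and then defends itself.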

Tit for Tat is a common strategy in real-world social dilemmas because it is nice but firm: it makes cooperation a possibility but is also quick to reprimand. It is a strategy that can be found naturally in everything from international trade policies to people borrowing and lending money. And in repeated interactions, cooperation can emerge when people adopt a Tit for Tat strategy.

To go beyond Tit for Tat, researchers started to use computers to simulate the process of evolution. Instead of people submitting solutions the computer itself generated mutations and selected from them with the researchers recording and analyzing the results.

From these experiments, they found that when players play randomly, the winners are those who always defect. But once everyone has come to play defect strategies, if a few players adopt Tit for Tat, a small cluster can form whose members get a good payoff among themselves.

Evolutionary selection can then start to favor them and they do not get exploited by all the defectors because they immediately switch to defect in retaliation.

But the Tit for Tat strategy did not last long in this setting, as a new solution emerged in this context: a more forgiving mutant of Tit for Tat called Generous Tit for Tat.

Generous Tit for Tat is an algorithm that starts with cooperation and reciprocates cooperation from others, but when the other player defects it only retaliates with some probability, using randomness to enable the quality of forgiveness. It cooperates when others do, but when they defect there is still some probability that it will continue to cooperate. Because this is a random decision, it is not possible for others to predict when it will forgive.
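In code, only the retaliation rule changes relative to the Tit for Tat sketch above; the forgiveness probability of 0.3 is an arbitrary illustration:

```python
import random

def generous_tit_for_tat(my_history, their_history, forgiveness=0.3):
    if not their_history or their_history[-1] == "C":
        return "C"                 # reciprocate cooperation
    # After a defection, forgive with some probability; otherwise retaliate.
    return "C" if random.random() < forgiveness else "D"
```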

It turns out that this forgiving strategy is optimal in environments where there is some degree of noise in communications, as is characteristic of real-world environments.

In the real world, we often do not know for certain if our partner cheated or if someone really meant to say what they said, and these errors have to be compensated for by some degree of forgiveness. In a world of errors in action and perception, such a strategy can be a Nash equilibrium and evolutionarily stable. The more beneficial cooperation is, the more forgiving Generous Tit for Tat can be, while still resisting invasion by defectors.

The extraordinary thing that now happens is that once everyone has moved towards playing Generous Tit for Tat, cooperation becomes a much stronger attractor and at this stage, players can now play an unconditional cooperative strategy without having any disadvantage.

In a world of Generous Tit for Tat, there is no longer a need for any other actions and thus unconditional cooperators survive. In order for a strategy to be evolutionarily stable, it must have the property that if almost every member of the population follows it, no mutants can successfully invade – where a mutant is an individual who adopts a novel strategy.

In many situations, cooperation is favored and it even benefits an individual to forgive an occasional defection, but cooperative societies are always unstable because mutants inclined to defect can upset any balance. And this is the downfall of the cooperative strategy. What happens next is somewhat predictable. In a world where everyone is cooperating, unconditional defection is an optimal strategy once it takes hold.

Thus we can see a dynamic cyclical process, as higher forms of cooperation arise and then collapse. In many ways then this reflects what we see in the real world of economies and empires rising and falling as institutional structures for cooperation are formed, mature and eventually decline.

INDIRECT RECIPROCITY

These experiments describe the evolution of systems of cooperation through direct interaction, and many of our interactions are indeed repeated with people we have interacted with before, letting us build up an understanding of their capacity for reciprocity. However, in large societies we have to interact with many people we have never interacted with before, and it may be only a one-off interaction.

Experiments have shown that people help those who have helped others and have shown reciprocity in the past and that this form of indirect reciprocity has a higher payoff in the end. Reputation systems are what allow for the evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient. The idea is that you interact with others and that interaction is seen and people note whether you acted cooperatively or non-cooperatively. That information is then circulated so that others learn about your behavior. Direct reciprocity is where I help you and you help me, indirect reciprocity is where I help you and then somebody helps me because I now have a reputation for cooperating.

The result is the formation of reputation: when you cooperate it helps your reputation, and when you defect it reduces it. That reputation then follows you around and is used as the basis for your interactions with others.

Thus reputation forms a system for the evolution of cooperation in larger societies, where people may interact frequently with people they do not know personally. Because of various reputation systems, they are able to identify those who are cooperative and enter into mutually beneficial reciprocal relations.
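A minimal sketch of this mechanism, in the spirit of the standing/image-scoring models from the literature; the population size, payoff values, and update rule here are illustrative assumptions, not taken from any specific study:

```python
import random

# A minimal sketch of indirect reciprocity through reputation.
# Discriminators help only recipients in good standing; defectors never help.
# Helping costs the donor 1 and gives the recipient 3 (illustrative values).
N = 20
is_defector = [i < 5 for i in range(N)]   # 5 defectors, 15 discriminators
good_standing = [True] * N
payoff = [0.0] * N

for _ in range(5000):
    donor, recipient = random.sample(range(N), 2)   # one-off encounter
    helps = (not is_defector[donor]) and good_standing[recipient]
    if helps:
        payoff[donor] -= 1
        payoff[recipient] += 3
        good_standing[donor] = True
    elif good_standing[recipient]:
        good_standing[donor] = False   # refusing a deserving recipient is noted
    # refusing someone in bad standing is justified and costs no reputation

defectors = [p for p, d in zip(payoff, is_defector) if d]
helpers = [p for p, d in zip(payoff, is_defector) if not d]
print(sum(defectors) / 5, sum(helpers) / 15)   # helpers end up far ahead
```

Because reputations follow agents into encounters with strangers, cooperators end up trading benefits among themselves while defectors are quickly identified and excluded.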

The more sophisticated and secure these reputation systems, the greater the capacity for cooperative organizations. We can create large systems wherein we know who to cooperate with and thus can be cooperative ourselves, potentially creating a successful community.

But of course, as society gets bigger we have to form more complex institutions to enable functional reputation systems. In this way we have gone from small communities, where local gossip sufficed to know everyone’s capacity for cooperation, to large modern industrial societies, where centralized organizations vouch for people’s reputations, to today’s burgeoning global reputation systems based on information technology and mediated through the internet.

Research shows that cooperators create better opportunities for themselves than non-cooperators: they are selectively preferred as collaborative partners, romantic partners, and group leaders. This only occurs, however, when people’s social dilemma choices are seen and recorded by others in some way.

However, this kind of indirect reciprocity is cognitively complex; no other creature has mastered it to even a fraction of the degree that humans have. Games of indirect reciprocity lead to the evolution of social intelligence, ever more sophisticated means of communication, and the social and cultural institutions that are characteristic of human civilization.

The basic problem of the evolution of cooperation is thus that nice guys get taken advantage of, and thus there must be some form of supporting structure to enable cooperation.

More than any other primate species, humans have overcome this problem through a variety of mechanisms, such as reciprocating cooperative acts, forming reputations of others and the self as cooperators, and caring about these reputations.

We create prosocial norms about good behavior that everyone in the group will enforce on others through disapproval, if not punishment, and will enforce on themselves through feelings of guilt and shame. All of which form the fabric of our sociocultural institutions that enable advanced forms of cooperation.

Game Theory 10: Cooperative Structures

COOPERATIVE STRUCTURES

Throughout this section of the course, we have been talking about cooperation and different aspects of the social dilemma. In this video, we will look at various approaches that have been identified for fostering the cooperation required to overcome this core constraint.

Our capacity to solve the social dilemma in various ways is a defining factor in the strength of individual relationships, social organizations, economies, and society at large and is thus a topic that is of great interest to many.

The depletion of natural resources, pollution, and intercultural conflict can all be characterized as examples of social dilemmas.

Social dilemmas are challenging because acting in one’s immediate self-interest is tempting to everyone involved, even though everybody benefits from acting in the longer-term collective interest. Thus some form of cooperative institutional infrastructure is required to enable the cooperation required for sustained success.

The empirical fact that subjects in most societies contribute anything in the simple public goods game that we looked at previously is a challenge for game theory to explain via motives of pure self-interest. But as we have noted, one of the defining features of human beings is their extraordinarily high level of cooperative behavior. Cooperation is a massive resource for advancing individual and group capabilities, and over the course of thousands of years we have evolved complex networks for collaboration and cooperation, which we can call institutions of various forms.

These institutional structures help us to solve the many different forms of the tragedy of the commons that we encounter within large societies.

As we have touched upon previously, the central issue of the tragedy of the commons is externalities. That is to say, the actions that the individual takes have costs that the person does not fully bear, as they are externalized to the overall organization. If there are too many negative externalities and not enough positive externalities, the organization will degrade over time. The central issue in solving the tragedy of the commons is then reconnecting the costs of the individual’s actions on the whole with the costs that they pay. When the individual always pays the full cost of their actions, there is no social dilemma and we have a self-sustaining organization.

This may sound simple in the abstract, but in practice, it is not simple at all, and this is one reason why we have such a complex array of economic and social institutions. How we approach doing this though, depends on the degree of interconnectivity and interdependence between the players in the game.

INTERDEPENDENCE

When there is low interconnectivity, there will likely be low interdependence, which means a high probability of negative correlations between actors: one actor can gain while another loses.

When actors are independent then they can do things that affect the other without that effect returning to themselves.

For example, if I live in Germany and pollute the atmosphere so that there is acid rain in Sweden, as long as I never go to Sweden then what happens there does not affect me too much and this negative correlation can exist.

Now if we turn up the interconnectivity and interdependence, this changes the dynamic. Say I have business partners in Sweden and happen to go on holiday there also. Due to this interconnectivity and interdependence there is a much greater possibility for a positive correlation between my experience and what happens in Sweden. This interconnectivity and interdependence means that I increasingly have to factor my negative externalities into my cost-benefit equation.

The central importance of interdependence as a parameter in cooperation can be simply seen in the way that people cooperate more with those that they are closely connected to; more than with those that are of a different group, culture or society that they are not connected with.

Thus, how we go about solving the social dilemma depends on the degree of interconnectivity and interdependence within the dynamic. At a low level, cooperative structures have to be imposed through regulation, while at a high level this is no longer necessary, as the interconnectivity and interdependence can be used to create self-sustaining cooperative organizations. This is illustrated by how different cooperative structures have evolved within society: those within small, closely interdependent groups like the family, and those that have formed for the larger society composed of many groups that are more independent.

This is to a large extent what has happened as we have gone from small, pre-modern societies to large modern societies. As the scale of the social systems we are engaged in has increased, the interconnectivity and interdependence between any two random members has decreased, because they are farther apart in the network. This has disintegrated traditional cooperative institutions that were based on local interactions and interdependencies.

In the absence of tools for interconnecting everyone within a large national society, we have had to create the formal centralized regulatory institutions of the nation state.

And of course, with the rise of information technology and globalization, this is once again changing as we create social interdependencies that span the entire planet.

REGULATION

The most manifest and obvious form for enabling cooperation is regulation and rules that are imposed on the social system by a third party to ensure behavior that is of benefit to the group. The aspect of cooperation examined in many experimental games is cooperation that occurs when people follow rules limiting the exercise of their self-interested motivations.

People might want to take from a shop without paying, but are required to abide by the law; they may want to fish in a lake, but limit what they catch to the quantity specified in a permit.

They buy a fuel-efficient car because of regulation taxing the sale of inefficient cars.

In all of these situations, people are refraining from engaging in behavior that would give them immediate benefit but is against the welfare of the group.

Regulation involves limiting undesirable behavior.

This method for enabling cooperation through regulation and rule adherence is deeply intuitive to us and often the default assumption as to how we might achieve cooperation.

The central aim of regulation is to connect the individual’s externalities with the costs and benefits they pay by imposing extra costs on them for certain negative externalities, while providing them with subsidies and payments for certain activity that generates positive externalities.

This form of solution, ensuring cooperation through an external third party that imposes sanctions or rewards, can be very effective in situations of independence between members.

For example, this would be a good solution to the prisoner’s dilemma, where the members cannot communicate with each other and are otherwise independent. By forming a third party that can impose sanctions on them, we can change the payoffs in the game to enable cooperative outcomes.
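As a toy illustration (the payoff numbers and the size of the fine are invented), a third-party fine on defection can flip the equilibrium found by the same pure-equilibrium check used in the earlier 2×2 sketch:

```python
# A toy illustration: a third-party fine on defection changes the equilibrium
# of a prisoner's dilemma. Payoff numbers and fine size are invented.
def pd_payoffs(fine=0):
    return {("C", "C"): (3, 3),
            ("C", "D"): (0, 5 - fine),
            ("D", "C"): (5 - fine, 0),
            ("D", "D"): (1 - fine, 1 - fine)}

def pure_equilibria(payoffs, actions=("C", "D")):
    eq = []
    for r in actions:
        for c in actions:
            if all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in actions) and \
               all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in actions):
                eq.append((r, c))
    return eq

print(pure_equilibria(pd_payoffs(fine=0)))  # [('D', 'D')]: defection dominates
print(pure_equilibria(pd_payoffs(fine=3)))  # [('C', 'C')]: cooperation is now stable
```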

Although the regulatory approach is simple and straightforward, the development and maintenance of this external organization carries overhead costs. It is also prone to corruption and has other limitations.

Studies have been conducted into how successfully a leader or authority can be established to manage a social dilemma. Experimental studies on commons dilemmas show that over-harvesting groups are more willing to appoint a leader to look after the common resource.

There is a preference for a democratically elected prototypical leader with limited power especially when people’s group ties are strong.

When ties are weak, groups prefer a stronger leader with a coercive power base.

The question remains whether authorities can be trusted in governing social dilemmas, and field research shows that legitimacy and fair procedures are extremely important to citizens’ willingness to accept an authority.

Furthermore, the formal governance structures of a police force, army, and judicial system will fail to operate unless people are willing to pay taxes to support them. This raises the question of whether people are willing to contribute to these institutions.

Experimental research suggests that particularly low-trust individuals are willing to invest money in punishment systems.

The political economist Elinor Ostrom won a Nobel Prize for her studies of various communities around the world and how they managed to develop diverse institutional arrangements for managing natural resources, thus avoiding ecosystem collapse. She illustrated how common resources can be managed successfully by the people who use them rather than by governments or private companies. In an interview about this centralized regulatory approach, she had this to say: “for some simple situations that theory works and we should keep it for the right situation, but there are so many other rich solutions.”

INTERDEPENDENCE

When interconnectivity between members within a game increases, so typically does interdependence and this changes the nature of the game.

Externalities are things that we can place outside our domain of value and interest, but interconnectivity reduces our capacity to do this.

One good example of this is the warning labels on the side of cigarette packets that make you aware of the negative externalities of smoking on your body. They are trying to connect you with the negative externality you are creating, so that you recognize your interdependence and factor it into the equation under which you make your decision to smoke.

Thus we can see that an externality is not necessarily something far away; it is simply whatever you exclude from your value system, so that reducing it does not reduce your payoff. But connectivity takes this barrier down, requiring us to recognize the value of the other entity and factor it into our decisions. This connectivity can be of many different kinds.

Communication is a form of connection that can enable positive interdependence and there is a robust finding in the social dilemma literature that cooperation increases when people are given a chance to talk to each other.

Cooperation generally declines when group size increases. In larger groups, people often feel less responsible for the common good, as they are more removed from it and the other people with whom they share it.

Thus we can see what is really at the core of the social dilemma is the question of what people value and how far that value system extends.

Wherever we stop seeing something as part of us or our group, that is where negative externalities accumulate and start to give us the social dilemma. However, by building further connections so that people recognize their interdependence with what they previously saw as external, they will start to factor it into the value system under which they are making their choices and reduce their negative externalities. From this perspective, the issue is really one of value and externalities.

Connectivity can change that equation, working to internalize the externalities. Connectivity though is just an enabling infrastructure, one still has to build the channels of communication and structures that enable positive interdependence.

Building systems of cooperation in such a context means enabling ongoing interaction with identifiable others, some knowledge of previous behavior, reputation records that are durable, searchable, and accessible, feedback mechanisms, transparency, and so on. These are all means of fostering positive interdependence once interconnectivity is present, and through them self-regulating and sustainable systems of cooperation can be formed.

If we think back to the public goods game: when the amounts contributed are not hidden, players tend to contribute significantly more. This is simply creating a transparent system where there is feedback.

As another example, we could think of eBay. eBay is really a huge social dilemma game: you would not send money before receiving the item, nor would the other party send the item before receiving the money, so why has eBay succeeded? Not because eBay will throw you into jail if you don’t play nice, but because of the communication, transparency, and feedback mechanisms that build positive interdependence.

This interconnectivity that builds positive interdependence operates not just in space but also in time. Probably the single biggest difference in the prisoner’s dilemma is whether the game being played is one-off or recurring. And it is to this topic of games that play out over time that we will turn in the next section.

Game Theory 9: Public Goods Games

The game-theoretic version of the social dilemma is called the public goods game. Public goods games are usually employed to model the behavior of groups of individuals working toward a common goal. The public goods game has the same properties as the prisoner’s dilemma, but it describes a public good or resource from which all may benefit, regardless of whether or not they contributed to it.
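A minimal sketch of one round of a linear public goods game; the group size, endowment, and multiplier below are illustrative assumptions rather than standard values:

```python
# A minimal sketch of one round of a linear public goods game.
# Each player starts with an endowment; the common pot is multiplied
# and split equally among all players, contributors and free riders alike.
def public_goods_round(contributions, endowment=10, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

print(public_goods_round([10, 10, 10, 10]))  # [16.0, 16.0, 16.0, 16.0]
print(public_goods_round([10, 10, 10, 0]))   # [12.0, 12.0, 12.0, 22.0]
```

Each contributed unit returns only 1.6 / 4 = 0.4 to the contributor, so withholding is always individually tempting, yet the group does best when everyone contributes: exactly the dilemma structure described above.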