Game Theory 6: Pareto Optimality

PARETO OPTIMALITY

In non-cooperative game theory, the focus is on the agents in the game and strategies that optimize their payoffs, which results in some form of equilibrium.

As we saw in the prisoner’s dilemma game, the issue is that what turns out to be the equilibrium can be suboptimal for all the agents taken as a whole. One way of defining what we mean by suboptimal for all is the idea of Pareto optimality.

Named after Vilfredo Pareto, Pareto optimality is a measure of efficiency.

Whereas the Nash equilibrium is a solution concept for non-cooperative games, Pareto optimality answers a more specific question: is there another outcome that makes someone better off without making anyone worse off?

Pareto optimality is a notion of efficiency or optimality for all the members involved.

An outcome of a game is Pareto optimal if there is no other outcome that makes every player at least as well off and at least one player strictly better off. That is to say, a Pareto optimal outcome cannot be improved upon without hurting at least one player.

To illustrate this let’s take the game called the stag hunt, wherein two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by themselves, but a hare is worth less than a stag.

In the stag hunt, there is a single outcome that is Pareto efficient, which is that they both hunt stags. With this outcome, both players receive a payoff of three, which is each player’s largest possible payoff for the game. In this case, we cannot switch to any other outcome and make at least one party better off without making anyone worse off. The stag option is here the only Pareto optimal outcome.
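This reasoning can be checked mechanically. The sketch below encodes the stag hunt with illustrative payoff numbers (3 each for a joint stag hunt, 1 for hunting hare, 0 for hunting stag alone; only the ordering matters) and filters out every outcome that some other outcome Pareto-dominates:

```python
# Stag hunt payoff matrix: (row action, column action) -> (row payoff, column payoff).
# The exact numbers are an illustrative assumption.
STAG_HUNT = {
    ("stag", "stag"): (3, 3),
    ("stag", "hare"): (0, 1),
    ("hare", "stag"): (1, 0),
    ("hare", "hare"): (1, 1),
}

def pareto_optimal(game):
    """Return the outcomes that no other outcome Pareto-dominates."""
    def dominates(a, b):
        # a dominates b: everyone at least as well off, someone strictly better off.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [outcome for outcome, payoff in game.items()
            if not any(dominates(other, payoff) for other in game.values())]
```

Running `pareto_optimal(STAG_HUNT)` leaves only the joint stag hunt, matching the argument above.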

One of the features of a Nash equilibrium is that in general, it does not correspond to a socially optimal outcome. That is, for a given game it is possible for all the players to improve their payoffs by collectively agreeing to choose a strategy different from the Nash equilibrium. The reason for this is that some players may choose to deviate from the agreed-upon cooperative strategy after it is made in order to improve their payoffs further at the expense of the group. A Pareto optimal equilibrium describes a social optimum in the sense that no individual player can improve their payoff without making at least one other player worse off. Pareto optimality is not a solution concept, but it can be an important attribute in determining what solution the players should play, or learn to play over time.

This is the interesting thing about the prisoner’s dilemma: all outcomes are Pareto optimal except for the unique equilibrium, which is for both to defect. This strong contrast between Pareto optimality and Nash equilibrium is what makes the prisoner’s dilemma a central object of study in game theory. The fact that the efficient outcomes are precisely those that do not occur in equilibrium makes it a classic illustration of the core dynamic between cooperation and competition.
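The same kind of brute-force check confirms this. Using the conventional illustrative prisoner’s dilemma payoffs (temptation 5, reward 3, punishment 1, sucker 0, all assumed for the example), the sketch below computes both the Pareto optimal outcomes and the Nash equilibria, showing that the unique equilibrium is the one outcome that is not Pareto optimal:

```python
# Prisoner's dilemma with conventional illustrative payoffs.
PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
ACTIONS = ["cooperate", "defect"]

def pareto_optimal(game):
    """Outcomes not Pareto-dominated by any other outcome."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return {o for o, pay in game.items()
            if not any(dominates(other, pay) for other in game.values())}

def nash_equilibria(game, actions):
    """Profiles where neither player gains by a unilateral deviation."""
    return {(a, b) for (a, b) in game
            if all(game[(a2, b)][0] <= game[(a, b)][0] for a2 in actions)
            and all(game[(a, b2)][1] <= game[(a, b)][1] for b2 in actions)}
```

The Pareto optimal set is every outcome except mutual defection, while mutual defection is the only equilibrium.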

This is a good segue into the next section of the book, where we will be talking about the dynamics of cooperation, looking specifically at overall outcomes and trying to optimize them instead of just individual payoffs.

Quantum Computing Explained in 10 Minutes

This video may (or may not) help someone understand quantum computing without a pre-existing grasp of it. At a minimum, however, it offers itself as a baby-step introduction to a path of greater appreciation, should one continue taking more steps.

A fundamental concept is that quantum computing isn’t just a more powerful version of the computers we use today; it’s something else entirely, based on emerging scientific understanding — and more than a bit of uncertainty.

The concept that a non-binary foundation for computational architecture even exists may be one of the first challenges to embrace on a path towards greater understanding.

Recognizing that the development of actual quantum computers is still in its infancy may also help establish a flexible enough personal framework for building greater comprehension.

Whatever one’s level of understanding, Shohini Ghose does posit how quantum computing holds the potential to transform medicine, create unbreakable encryption and even teleport information.

Game Theory 5: Solution Concept

SOLUTION CONCEPT

As we talked about in the last video, the central aim in non-cooperative game theory is to find the optimal strategy for agents to play within a game and to predict the outcomes of the game by finding points of equilibrium.

This equilibrium is called the Nash Equilibrium and is considered the best option given the absence of frameworks to support cooperation.

This is what we call a solution concept.

In game theory, a solution concept is a model or rule for predicting how a game will be played. These predictions are called “solutions”, and describe which strategies will be adopted by players and, therefore, the results of the game.

The most commonly used solution concepts are equilibrium concepts.

Here we look for a set of choices, one for each player, such that each person’s strategy is best for them when all others are playing their stipulated best responses.

In other words, each picks their best response to what the others do.

In game theory, the term best response refers to the strategy (or strategies) which produce the most favorable outcome for a player, taking other players’ strategies as given: knowing what the others are going to do, you choose the strategy that serves you best.

DOMINANT STRATEGY

Sometimes one person’s best choice is the same no matter what the others do. This is called a “dominant strategy” for that player. Hence, a strategy is dominant if it is always better than any other strategy, for any profile of other players’ actions.

A strategy is termed strictly dominant if, regardless of what any other players do, the strategy earns a player a strictly higher payoff than any other. If a player has a strictly dominant strategy then they will always play it in equilibrium.

A strategy is weakly dominant if, regardless of what any other players do, the strategy earns a player a payoff at least as high as any other strategy.

If there are better strategies to take within a game, then there must also be worse strategies, and we call these worse strategies dominated. A strategy is dominated if there is some other choice the agent can make that will always yield a better payoff.

When the game is non-cooperative and players are assumed to be rational, strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. Thus the search for an equilibrium typically begins by looking for dominant strategies and eliminating dominated ones.

For example, in a single iteration of the prisoner’s dilemma game, cooperation is strictly dominated by defection for both players, because either player is always better off playing defect, regardless of what their opponent does. In searching for the equilibrium of this game, we would simply look at each cell and ask whether there is a better option for the player. If so, then the cell is dominated and we should not choose it. Once we have done this for both players, we can identify a corresponding cell, or number of cells, that is optimal for each, giving us the equilibrium or possibly a number of different equilibria.
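This elimination procedure is easy to sketch in code. Using the conventional prisoner’s dilemma payoffs as an assumed example, the functions below test whether one action strictly dominates another and filter out dominated actions for a player:

```python
# Prisoner's dilemma with conventional illustrative payoffs.
PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}
ACTIONS = ["cooperate", "defect"]

def strictly_dominates(game, a, b, opponent_actions, player=0):
    """True if action a earns `player` (0=row, 1=column) strictly more
    than action b against every possible opponent action."""
    def payoff(mine, theirs):
        profile = (mine, theirs) if player == 0 else (theirs, mine)
        return game[profile][player]
    return all(payoff(a, c) > payoff(b, c) for c in opponent_actions)

def undominated(game, actions, player=0):
    """Actions that survive eliminating strictly dominated strategies."""
    return [b for b in actions
            if not any(strictly_dominates(game, a, b, actions, player)
                       for a in actions if a != b)]
```

For both players, only `"defect"` survives, which is why the search for the equilibrium ends at mutual defection.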

MINIMAX/MAXIMIN

In games of conflict and competition, we are often interested in knowing what is the strategy that one can play that will reduce one’s exposure to some negative event.

For example, this might be a scenario of war, where we have a number of different options as to the route along which we will send our food supply to our troops. Along any of these routes, there is the possibility that they will get bombed. We would then try to choose the option that will minimize the amount of damage that might possibly be caused to the convoy. This is captured in the term minimax. Minimax is a decision rule for minimizing the possible loss for a worst case scenario. The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the agent’s actions.

A minimax strategy is commonly chosen when a player cannot rely on the other party to keep any agreement, or when the other party has an interest in the player receiving the minimum payoff, such as in a zero-sum game.

Calculating the minimax value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions – the one that gives the player the smallest value. Then, we determine which action the player can take in order to make sure that this smallest value is the largest possible.
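The worst-case calculation just described can be written out directly. The routes and numbers below are purely hypothetical, echoing the convoy example: each entry is the amount of supplies that gets through under one of the adversary’s possible actions.

```python
# Hypothetical convoy example: for each of our routes, the supplies that
# get through under each thing the adversary might do (numbers invented).
ROUTES = {
    "north":  [4, 2, 7],
    "middle": [6, 1, 5],
    "south":  [3, 3, 3],
}

def maximin(payoffs):
    """For each action take the worst case, then pick the action whose
    worst case is best: the player's security level."""
    worst = {action: min(outcomes) for action, outcomes in payoffs.items()}
    best_action = max(worst, key=worst.get)
    return best_action, worst[best_action]
```

Here the "south" route guarantees 3 units no matter what the adversary does, which beats the worst cases of the other routes.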

A maximin strategy is one where the player maximizes the minimum payoff they can guarantee: for each option they consider the worst outcome it could yield, and then choose the option whose worst outcome is best. This “best of the worst” rule is the mirror image of minimax: minimax minimizes the maximum possible loss, maximin maximizes the minimum possible gain, and in zero-sum games the two coincide. The overly optimistic counterpart, sometimes called maximax, simply chooses the option with the best possible outcome and is generally seen as naive, in that it assumes a highly favorable environment for decision making. In contrast, the minimax and maximin strategies are more realistic in that they take account of the worst case scenario and prepare for that eventuality.

Cryptoasset Regulation & Tokenization

Stephen McKeon, Associate Professor at the University of Oregon, speaks with Greg LaBlance of UC Berkeley’s Haas School of Business about the complexity of US regulation and overlapping jurisdictions with regard to asset tokenization.

Specifically, he touches upon the progress of tokenized securities, each of which represents a digital wrapper around an underlying security. He outlines how blockchain and tokenization will make things easier for regulators, because they will reduce trading friction and automate compliance.

Game Theory 4: Non-Cooperative Games

NON-COOPERATIVE GAMES

In studying the dynamics of cooperation and competition between actors, understanding the structure of the game that is being played is central to understanding the system of interest.

In game theory, a primary distinction is made between those game structures that are cooperative and those that are non-cooperative.

As we will see the fundamental dynamics surrounding the whole game are altered as we go from games whose structure is innately competitive to those games where cooperation is the default position.

A cooperative game is one wherein the agents are able to resort to some institution or third party in order to enable cooperation and optimal results for all.

A game is noncooperative if players cannot form the structures required to enable cooperation.

For example, we might think about two people wishing to make a commercial transaction online. Given two anonymous people interacting without some institution to enable cooperation, there is no reason for either to think that the other will carry through with the transaction as promised.

The seller is incentivised to take the money and not send the item while the buyer is likewise incentivised to take the product without sending the money. In the absence of some cooperative structure that would enable each party to trust the other and thus cooperate, the game would naturally gravitate towards defection and the potentially valuable transaction would not take place.

Thus we can see how, in the absence of cooperative mechanisms, each player may follow the course that renders them the best payoff without regard for what the other does or what is optimal for the overall system, and this can result in suboptimal outcomes for all.

In non-cooperative games, each agent in the game is assumed to act in their self-interest, and this self-interested agent is the primary unit of analysis within noncooperative games because there is no cooperative structure.

This is in contrast to cooperative game theory that treats groups or subgroups of agents as the unit of analysis and assumes they can achieve certain outcomes among themselves through binding cooperative agreements.

Game theory historically has been very much focused on non-cooperative games and trying to find optimal strategies within such a context. This is likely because non-cooperative games are very much amenable to our standard mathematical framework and thus offer nice closed form solutions.

But it is important to note that the real world is made up of situations that are sometimes cooperative, sometimes non-cooperative, and often involve elements of both.

As previously mentioned, non-cooperative games arise due to a number of factors. Firstly the game may be inherently zero-sum, meaning what one wins the other loses and thus there is an inherent dynamic of competition.

Many sports games are specifically designed to be zero-sum in their structure, so as to create a dynamic of competition. In such a case there is only one prize, and if someone else gets it, you don’t. There is no incentive for cooperation and every incentive for competition and thus the best option is for the actor to focus on maximizing their payoff irrespective of all else.

This is called a strictly competitive game. A strictly competitive game is a game in which the interests of each player are diametrically opposed.

Likewise, a game may be non-cooperative due to the incapacity to create cooperative structures. Most people, when engaged in a game, will wish not only to optimize their own payoff but to optimize the overall outcome as well.

In general, people do not like the idea of waste or of unfairness and we typically search for some optimal solution given both our own interests and some consideration for the overall organization.

The real world of social interaction is full of all sorts of informal social and cultural institutions designed to enable trust, cooperation and optimal outcomes for all.

Almost as soon as two people start to interact they will start to look for commonalities and shared interests that enable them to develop trust and cooperation.

Thus, non-cooperative games are typically those where the actors cannot interact and form the trust required for cooperation. Indeed, there are certain games that we construct where we specifically want competition, and we achieve that by not allowing the players to cooperate, such as in a competitive market.

Lastly non-cooperative games can be a product of an incapacity to enforce binding contracts. If there is a third party involved to ensure optimal outcomes for the overall organization through sanctions and incentives, this can form a solid basis for cooperation – in the way that a government does by enforcing laws.

This is famously captured in Thomas Hobbes’ conception of the state of nature. Where he pondered “What was life like before civil society?” He went on to write “during the time men live without a common power to keep them all in awe, they are in that condition which is called war, and such a war as is of every man against every man.”

In this state, every person has a natural right or liberty to do anything one thinks necessary for preserving one’s own life.

Hobbes’ ideas illustrate vividly how in the absence of a third party to enforce cooperation, competition can prevail.

EQUILIBRIUM ANALYSIS

Non-cooperative games create a specific dynamic within a game, where we are taking the individual and their payoff as the basic unit of analysis. In such a circumstance we do not need to consider what is best for all if given some form of cooperation because this is not possible within the context.

We are solely interested in how the individuals will act.

The question is how they should act to optimize their own payoff and, given the assumption that both are performing this optimization, what will be a stable solution to the game.

Given these assumptions, both players should search for a strategy that optimizes their payoff, and where those strategies interact we should find a stable outcome that we can predict will occur.

This stable outcome is what we call an equilibrium.

Where equilibrium, in the general sense, means a state in which opposing forces are balanced, thus creating a point of stability and stasis.

When we see a ball at the bottom of a bowl, it is in a state of equilibrium, because if we put it anywhere else in the bowl the force of gravity would pull it back to this static point. The same is true for the actors in a non-cooperative game: because they are both trying to optimize their payoff, they will both naturally gravitate towards the strategy that gives them the highest payoff.

But because their payoff is dependent on what strategy the other chooses and because they can not depend upon cooperation between them, they have to choose the best strategy assuming that the other will work to optimize their payoff without cooperating.

This point of equilibrium in a game is called the Nash equilibrium after the famous mathematician John Nash.

In game theory, the Nash equilibrium is a solution concept of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only his or her own strategy. If each player has chosen a strategy and no player can benefit by changing strategies while the other players keep theirs unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium.
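This definition reduces to a direct check over strategy profiles. A minimal sketch, reusing the stag hunt with assumed payoff numbers, keeps every pure-strategy profile from which no player can profitably deviate alone; the stag hunt has two such equilibria:

```python
# Stag hunt with illustrative payoffs (only the ordering matters).
STAG_HUNT = {
    ("stag", "stag"): (3, 3),
    ("stag", "hare"): (0, 1),
    ("hare", "stag"): (1, 0),
    ("hare", "hare"): (1, 1),
}
ACTIONS = ["stag", "hare"]

def is_nash(game, profile, actions):
    """True if no player can benefit by changing only their own strategy."""
    a, b = profile
    u1, u2 = game[profile]
    return (all(game[(a2, b)][0] <= u1 for a2 in actions)      # row player
            and all(game[(a, b2)][1] <= u2 for b2 in actions))  # column player

equilibria = [p for p in STAG_HUNT if is_nash(STAG_HUNT, p, ACTIONS)]
```

Both hunting stag and both hunting hare are stable: in each case, a unilateral switch only lowers the switcher's payoff.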

The Nash equilibrium is one of the foundational concepts in game theory.

The basic intuition of the Nash equilibrium is in predicting what others will do given their self-interest only and then choosing your optimal strategy given that assumption.

Nash equilibrium is a point where all players are doing their best given the absence of cooperation. It is an outcome that no one would want to change unilaterally in the absence of some effective overall structure for coordination.

PRISONER’S DILEMMA

Nash equilibrium is best illustrated through the prisoner’s dilemma game.

The prisoner’s dilemma game is a classic two player game that is often used to present the concept of Nash equilibrium in a payoff matrix form.

Conceive of two prisoners detained in separate cells, interrogated simultaneously and offered deals in the form of lighter jail sentences for betraying the other criminal. They have the option to “cooperate” with the other prisoner by not telling on them, or “defect” by betraying the other.

However, if both players defect, then they will both serve a longer sentence than if neither said anything. Lower jail sentences are here interpreted as higher payoffs.

The prisoner’s dilemma has a simple payoff matrix, but the maximum reward for each player is obtained only when the players’ decisions are different. Each player improves their own situation by switching from “cooperating” to “defecting”, regardless of what the other player does. The prisoner’s dilemma thus has a single Nash equilibrium: both players choosing to defect.
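Applying the unilateral-deviation check to the prisoner’s dilemma confirms this; the payoff numbers below are the conventional illustrative ones, assumed for the example:

```python
# Prisoner's dilemma payoffs: higher numbers mean lighter sentences.
PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}
ACTIONS = ["cooperate", "defect"]

def nash_equilibria(game, actions):
    """Profiles where neither player gains from a unilateral switch."""
    return [(a, b) for (a, b) in game
            if all(game[(a2, b)][0] <= game[(a, b)][0] for a2 in actions)
            and all(game[(a, b2)][1] <= game[(a, b)][1] for b2 in actions)]
```

The only profile that survives is mutual defection, even though mutual cooperation would pay both players more.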

What has long made this an interesting case to study is the fact that this scenario is globally inferior to “both cooperating”. That is, both players would be better off if they both chose to “cooperate” instead of both choosing to defect. However, each player could improve their own situation by breaking the mutual cooperation, no matter how the other player changes their decision.

PREDICTION

The central aim of non-cooperative game theory, then, is to predict people’s actions within a game by finding the Nash equilibria and assuming the players will adopt them because they are their best options.

It is then legitimate to ask whether equilibrium analysis gives us any predictive capacity over what happens in the real world. Often the outcome of experiments is not the equilibrium predicted by the theory, mainly because people do not reason through the game in a fully logically consistent fashion.

Equilibrium is a point where everyone has figured out what everyone else will do, thus behaviorally it often does not predict what people will do the first time they play the game.

Equilibrium should more be interpreted as what will happen over a number of iterations within a non-cooperative game, as players come to better understand the game and how to reason through it.

Just as it takes time for a ball placed in a bowl to settle, it takes time to arrive at an equilibrium, and this is what is seen in game experiments: play tends towards the equilibrium over time.

For example, in one well-known game, people are asked to choose a number between 0 and 100, with the winner being the person whose guess is closest to 2/3 of the average of all the numbers proposed.

So everyone is being asked to guess a bit below the average number proposed.

In this game, only a small percentage choose the equilibrium point, which is zero, and because the other players did not reason all the way to the equilibrium, those who chose it turned out to be wrong.

In many ways, then, choosing this equilibrium as a prediction of what will happen is not a good option, and this clearly diverges dramatically from what the theory tells us.

However, over time, as the game is iterated, the numbers chosen by people do move towards the equilibrium. Thus the equilibrium tells us something about where the system tends to settle, but not very much about how it will behave in the first iteration of the game.
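This convergence can be simulated under the simplifying assumption that in each round every player best-responds to the previous round’s average guess:

```python
def iterate_guesses(start_average=50.0, rounds=12):
    """Each round every player guesses 2/3 of the last round's average,
    so the average itself shrinks by a factor of 2/3 each round,
    heading toward the Nash equilibrium at zero."""
    averages = [start_average]
    for _ in range(rounds):
        averages.append(averages[-1] * 2 / 3)
    return averages

history = iterate_guesses()
```

The sequence is strictly decreasing and approaches zero without ever reaching it in finite time, mirroring how repeated play drifts toward, but rarely starts at, the equilibrium.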

AI: Life in the Age of Intelligent Machines

The future is artificial intelligence (AI).

In the coming 50 years all people will be impacted by AI.

AI is based upon computing over massive amounts of data. In fact, that is its basic purpose: making sense of massive amounts of data and complex systems.

AI is just algorithms being executed in clever ways.

It’s difficult to predict all the ways AI will make life better in the future.

But the downside includes the amount of human labor that will be replaced.

How we will manage the disruption needs to be figured out.

Furthermore, the systems themselves are prone to compromise.

The more we become reliant upon AI, the more vulnerable we become when those systems are attacked or disrupted.

We need to think about the ethical deployment of technology.

It’s impossible to predict all the ramifications of AI.

Token Economics 23: Plug and Play Enterprise

THE PLUG-N-PLAY ENTERPRISE

A central concern of economics is the question of how people work together within some form of enterprise and then redistribute the value created by that collective effort in a way that is optimal for the entire organization.

An enterprise is a structured project or organization designed to achieve valued ends.

What defines a business, enterprise or company is a business model. For something to be considered a business, there must be some coherent business model which defines how the organization creates value, exchanges it and generates revenue, and thus achieves its objectives.

A business model can emerge wherever there is the opportunity to create, exchange and capture value. If we discover a new source of minerals under the ground that people need, then we can build a business model on top of it by extracting the resource, exchanging it and capturing some revenue from that value stream.

This business model is realized through the construction of a business or enterprise. Enterprises then operate on top of some value stream, intercepting, transforming, exchanging and retaining value.

These enterprises enable the specialization and division of labor within the economy and thus the production of complex products and services.

Previously, one typically had to be inside one of these formal, structured organizations to be productive in this way. But the proliferation of connectivity and the reduction of transaction costs now taking place are bringing about a deep structural transformation in the economy: from closed organizations defined by their boundaries to open networks defined by their protocols. This offers new ways to unlock and harness the assets and creative potential of people around the world within new, larger and more complex networked organizations.

PLATFORMS

With the rise of the internet has come a new way of structuring the division of labor within the economy, through on-demand networks or what have come to be called platforms.

Platforms are information networks that enable two-sided markets, allowing producers and consumers to connect and exchange value. Web platforms like Alibaba, Amazon, Google and Facebook have already risen to the top of market capitalization within the space of just a decade or so, replacing the corporations of industrial capitalism.

These platforms differ from the traditional organization in that they are designed to be dynamic and event-driven: providers and consumers can couple with or decouple from the network on demand instead of having fixed roles, like Uber drivers or Airbnb hosts.

They are modular: tasks and service provisioning are broken down into small modules that can be easily produced and consumed, like on-demand videos on YouTube or blog posts.

They are scalable: a seller on Alibaba can easily and rapidly go from a few hundred dollars in sales to a few million.

They are based around interactions and the exchange of value in real-time instead of fixed structures and procedures. Much of the platform’s operations are automated through software running on centralized servers.

The advent of blockchain technology will, over time, extend these trends into the world of fully automated and autonomous networked platforms. On a more technical level, this will create a new architecture for our enterprises and entire economies. This new design paradigm is best captured in the term service-oriented architecture.

SOA

Service Oriented Architecture (SOA) is an approach to distributed systems architecture that employs:

  • loosely coupled services
  • standard interfaces
  • and protocols

to deliver seamless cross-platform integration.

It is used to integrate widely divergent components by providing them with a common interface and set of protocols through which they can communicate within what is called a service bus.

Over the past few decades, service-oriented architecture has arisen as a new systems architecture paradigm within I.T. as a response to having to build software systems adapted to distributed and heterogeneous environments that the Internet has made more prevalent.

There are many definitions for SOA, but essentially it is an architectural approach to creating systems built from autonomous services that are aggregated through a network.

SOA supports the integration of various services through defined protocols and procedures to enable the construction of composite functions that draw from many different components to achieve their goals. It requires the unbundling of monolithic systems and the conversion of the individual components into services that are then made available to be reconfigured for different applications.
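As a toy illustration of the idea, the sketch below models services as independent handlers registered on a shared bus and invoked only by name. The bus API, service names and fee logic are invented for this example and do not correspond to any real framework:

```python
from typing import Callable, Dict

class ServiceBus:
    """A minimal service bus: services are registered under a name and
    invoked only through that name, never through a direct reference."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, handler: Callable[..., object]) -> None:
        self._services[name] = handler

    def call(self, name: str, **kwargs) -> object:
        # Callers depend only on the service's name and contract,
        # not on the implementation behind it (loose coupling).
        return self._services[name](**kwargs)

bus = ServiceBus()
# Two illustrative services composed through the bus.
bus.register("pricing", lambda amount: amount + amount // 10)  # add a 10% fee
bus.register("invoice", lambda amount: f"Total due: {bus.call('pricing', amount=amount)}")
```

Because "invoice" reaches "pricing" only through the bus, either service can be swapped out without touching the other, which is the point of the loose coupling that SOA aims for.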

Over the course of the latter half of the 20th-century enterprises consolidated their IT infrastructure within Enterprise Resource Planning systems (ERP) behind firewalls.

Over the past decade or so those IT systems have started to migrate to the cloud, but now they will be moving increasingly to this distributed cloud of these next-generation blockchain networks.

Today’s enterprises face new challenges: having to collaborate across large networks and foster innovation within their organizations, while information technology greatly accelerates the pace of change and reduces the barriers to entry, making shorter and shorter product life cycles the norm.

These enterprises have to respond to fast-changing environments by becoming more agile and the most advanced and forward-looking of these enterprises are already moving towards a platform model to achieve this.

EVENT-DRIVEN ENTERPRISE

The enterprise of tomorrow is unlikely to be based on the static structures of today. Instead, it will be built on event-driven networks, as we go from the push model of industrial production to the pull model of the services economy.

Service-oriented blockchain based networks will use advanced analytics to pull together resources when and where needed on demand.

The enterprise of tomorrow will be more like an ever-evolving swarm rather than a structured machine, with value being created in micro-interactions dynamically within networks of peers; some large, some small.

Enabling this rapid coupling and decoupling from blockchain networks – of people, resources, and technology – when and where needed will require plug-n-play, API-like interfaces.

With the confluence of the services economy, blockchain, and analytics, for the first time we can actually identify what people are contributing to an enterprise and what economic value they are creating, and begin to reward people in real time.

The enterprise will need to be inherently designed to be able to plug in any capacity to the network as required. The most successful of these networks will be those that are able to harness the efforts of the many, along multiple dimensions, in a frictionless automated fashion. When we start to combine these capabilities we start to see a new and very different architecture to the enterprise and economies.

Game Theory 3: Elements of Games

ELEMENTS OF GAMES

Games in game theory involve a number of central elements which we can identify as players, strategies, and payoffs. In this chapter we are going to zoom in to better understand each of these different elements to a game, talking first about the players and rationality, then strategies and payoffs.

PLAYERS

As we touched upon in a previous video, agents are abstract models of individuals or organizations which have agency. Agency means the capacity of actors to make choices and to act independently on those choices to affect the state of their environment, which they do in order to improve their state within that environment.

In order to act and make choices, agents need a value system and need some set of rules under which to make their choices so as to improve their state with respect to their value system.

A big idea here is that of rationality, and we have to be careful how we define it. A dictionary definition of rationality would read something like “based on or in accordance with reason or logic”. Rationality simply means acting according to a consistent set of rules that are based upon some value system which provides the reason for acting.

To act rationally is to have some value system and to act in accordance with that value system.

When a for-profit business tries to sell more products, it is acting in a rational fashion, because it is acting under a set of rules to generate more of what it values.

When a person who values their community does community work, they are acting rationally. Because their actions are in accordance with their value system and thus they have a reason for acting in that fashion.

Standard game theory makes a number of quite strong assumptions about the agents involved in games. A central assumption of classical game theory is that players act according to an idealized form of rationality, what is sometimes called hyperrationality.

A player is rational in this sense if it consistently acts to improve its payoff without the possibility of making mistakes, has full knowledge of the other players and the actions available to them, and has an infinite capacity to calculate all possible outcomes a priori in an attempt to find the best one. If a game involves only rational agents, each of whom believes all other agents to be rational, then theoretical results offer accurate predictions of the game’s outcomes.

Agents have a single conception of value, i.e. all value is reduced to a single homogeneous form called utility. Preferences and value are well defined.

Rational agents have unlimited rationality, the idea of omniscience, i.e. they know all relevant information when making a choice, and they can compute this information and all of its consequences. Within this model, agents have perfect information, and any uncertainty can be reduced to some probability distribution. The agent’s behavior can then be seen as an optimization algorithm over their set of possibilities.

Game theory is a young field of study—less than a century old. In that time, it has made remarkable advances, but it remains far from complete.

Traditional game theory assumes that the players of games are hyperrational, meaning that they act in full accordance with their own desires given their knowledge and beliefs. This assumption does not always appear to be a reasonable one. In certain situations, the predictions of game theory and the observed behavior of real people differ dramatically.

People in the real world operate according to a multiplicity of motives. Some of the time people are in a situation where they are simply trying to optimize a single metric, but more often they are not: they are embedded within a context where they are trying to optimize according to a number of different metrics.

The fact that people aren’t always optimizing according to a single metric is illustrated by the many games in which people don’t choose the actions that would give them the greatest payoff within that single value system.

The best empirical examples of this come from the dictator game. The dictator game is a very simple game where one person is given a sum of money, say 100 dollars. This person plays the role of “the dictator” and is then told that they must offer some amount of that money to the second participant, even if that amount is zero. Whatever amount the dictator offers must be accepted; the second participant, should they find the amount unsatisfactory, cannot punish the dictator in any way.

Standard economic theory assumes that all individuals act solely out of self-interest. Under this assumption, the predicted result of the dictator game is that the “dictator should keep 100% of the cake, and give nothing to the other player.” This effectively assigns a value of zero to what the dictator shares with the second player.
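The rules above are simple enough to write down directly. The following is a minimal sketch of the dictator game as described, where the 100-dollar endowment and the function name are illustrative assumptions, not part of any standard library:

```python
def dictator_game(endowment: int, offer: int) -> tuple[int, int]:
    """Return (dictator_payoff, recipient_payoff) for a given offer.

    The recipient must accept any offer, so the payoffs follow
    directly from the split: there is no strategic response.
    """
    if not 0 <= offer <= endowment:
        raise ValueError("offer must be between 0 and the endowment")
    return endowment - offer, offer

# A purely self-interested dictator offers nothing:
print(dictator_game(100, 0))   # (100, 0)
# A dictator offering 20 still keeps 80:
print(dictator_game(100, 20))  # (80, 20)
```

The point of the sketch is that nothing in the payoff structure itself punishes a zero offer; any non-zero offer must come from values outside the monetary payoff.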

The actual results of this game, however, differ sharply from the predicted results. With a “standard” dictator game setup, “only 40% of the experimental subjects playing the role of dictator keep the whole sum.” In research by Robert Forsythe et al., the average amount given under these standard conditions was found to be around 20% of the allocated money.

In any case, in the majority of these game trials, the dictator assigns the second player a non-zero amount.

The obvious reason for this is that the dictator is not simply trying to optimize according to a single monetary value, as a strict conception of rationality would posit, but is acting rationally to optimize according to a number of different value systems.

They want the money, yes, but they are also optimizing according to cultural and social capital that motivates them to act in accordance with some conception of fairness. It is out of the interaction of these different value systems that we get the empirical results.

What agents value can be simple or it can be complex.

A financial algorithm is a form of agent that acts according to some set of rules designed to create a financial profit; this is an example of a very simple value system.

In contrast, what a human being values is typically many things. People value social capital, that is to say, their relationships with other people and their roles within social groups. They care about cultural capital: how they perceive themselves and how others perceive them. They care about financial capital and natural capital, and they often care about their natural environment to a greater or lesser extent.

Likewise, the set of instructions or rules can be based on some simple linear cause-and-effect model, what may be called an algorithm, or on a much more complex model, what may be called a schema.

Thus when we say that someone is acting rationally and maximizing their value payoff, this can be in many different contexts. A person helps an old lady onto the bus not because they are going to get paid for it, but because they gain some sense of being a decent person, and that sense is itself a payoff.

Thus it is not the concept of rationality or that people try to optimize their payoff that needs to be revised. It is the narrow definition of rationality as optimizing according to a single metric that needs to be expanded within many contexts that involve social interaction.

The classical conception of strict rationality based upon a single metric will apply in certain circumstances. It will be relevant to many games in ecology, where creatures have a simple conception of value maximization.

Likewise, it will often be relevant to computer algorithms and software systems and sometimes relevant for socioeconomic interactions, or at least partially relevant.

As the influential biologist John Maynard Smith wrote in the preface to the book Evolution and the Theory of Games, “paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behavior for which it was originally designed.”

If we want an empirically accurate theory of games between more complex agents it will need to be expansive in its conception of value and rationality to include the more complex set of value systems and reasoning processes that are engendered in such games. We have spent quite a bit of time talking about this idea of rationality as it is a major unresolved flaw within standard game theory, one that is important to be aware of.

GAME STRATEGIES

Strategy is the choice of one’s actions.

In game theory, a player’s strategy is any of the options they can choose in a setting where the outcome depends on the actions of others. A strategy, in the practical sense, is then a complete algorithm for playing the game, telling a player what to do in every possible situation throughout the game.

For example, the game might be a business entering a new market and trying to gain market share against other players. This will not just happen overnight; they will have to take a series of actions that are all coordinated towards their desired end result. They might first have to organize production processes and logistics, then advertising, then pricing, and so on. Each of these actions we would call a move in the game, and the overall strategy consists of a set of moves.

A player’s strategy set defines what strategies are available for them to play. For instance, in a single game of rock-paper-scissors, each player has the finite strategy set of rock, paper, scissors.
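A finite strategy set like this can be written down exhaustively. Here is a small sketch of rock-paper-scissors, with an illustrative payoff function for player A (the names `STRATEGIES`, `BEATS`, and `payoff` are assumptions for the example, not standard terminology):

```python
# The finite strategy set available to each player.
STRATEGIES = ("rock", "paper", "scissors")

# Which action each action defeats.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a: str, b: str) -> int:
    """Payoff to player A: +1 for a win, 0 for a draw, -1 for a loss."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

print(payoff("rock", "scissors"))  # 1
print(payoff("rock", "paper"))     # -1
```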

Likewise, a player’s strategy set can be infinite. For example, when choosing how much to offer for an item in a process of bartering, the set of possible offers is potentially infinite, since an offer could be made in any increment.

PURE / MIXED STRATEGY

In some games, there will not be one primary strategy that an agent always chooses; in many circumstances, they may have a number of options and choose between them with some given probability. This will often be the case when they don’t want the other player to know in advance which move they will take.

For example, in smuggling goods across the Vietnamese-Chinese border, the smugglers have many different points of entry available to them and the police have many different points that they could secure. In such a case, neither side wants to always choose the same location; each wants some degree of randomness in the strategy they choose.

This gives us a distinction in games between those with strategies that one will always play and those that one will play only with a given probability. This distinction is captured in the terms mixed and pure strategy.

Pure strategies are ones which do not involve randomness and tell us what to do in every situation. A pure strategy provides a complete definition of how a player will play a game. In particular, it determines the move a player will make for any situation they may face.

Strategies that are not pure—that depend on an element of chance—are called “mixed strategies.” In mixed strategies, you have a number of different options and you ascribe a probability to the likelihood of playing them. As such we can think about a mixed strategy as a probability distribution over the actions players have available to them.
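The idea of a mixed strategy as a probability distribution over actions can be sketched directly. In the example below, the particular probabilities (50/30/20 over rock-paper-scissors) are illustrative assumptions; sampling uses Python’s standard `random` module:

```python
import random

# A mixed strategy: a probability distribution over available actions.
mixed_strategy = {"rock": 0.5, "paper": 0.3, "scissors": 0.2}

def sample_action(strategy: dict[str, float], rng: random.Random) -> str:
    """Draw one action according to the strategy's probabilities."""
    actions, probs = zip(*strategy.items())
    return rng.choices(actions, weights=probs, k=1)[0]

# Repeated play reveals the distribution: over many rounds the
# observed frequencies approach the assigned probabilities.
rng = random.Random(0)
counts = {a: 0 for a in mixed_strategy}
for _ in range(10_000):
    counts[sample_action(mixed_strategy, rng)] += 1
print(counts)  # roughly 5000 / 3000 / 2000
```

A pure strategy is then just the degenerate case where one action has probability 1 and all others have probability 0.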

PAYOFFS

For every strategy taken within a game, there is a payoff associated with that strategy.

A player’s payoff defines how much they like the outcome of the game.

The payoffs for a particular player reflect what that player cares about, not what another player thinks they should care about. Payoffs must reflect the actual preferences of the players, not preferences anyone else ascribes to them.

Game theorists often describe payoffs in terms of utility: the general happiness a player gets from a given outcome. Payoffs can represent any type of value, but they capture only the factors that are incorporated into the model. Thus we have to be careful in asking what the agents really value.

Payoffs are then essentially numbers which represent the motivations of players. In general, the payoffs for different players cannot be directly compared, because they are to a certain extent subjective.

Payoffs may have numerical values associated with them or they may simply be a set of ranking preferences. If the payoff scale is only a ranking, the payoffs are called “ordinal payoffs.” For example, we might say that Kate likes apples more than oranges and oranges more than grapes.

However if the scale measures how much a player prefers one option to another, the payoffs are called “cardinal payoffs.” So if the game was simply one for money then we could ascribe a value to each payoff, that would be the quantity of money gained.

In many games all that matters is the ordinal payoffs, all we need to know is which options they prefer without actually knowing how much they prefer them. This is useful because in reality people don’t really go around ascribing specific values to how much they like things, but they do think about whether they prefer one thing or another. Kate may know that she likes apples more than oranges but she would probably laugh if you asked her to put values on how much more she likes them.
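The distinction between ordinal and cardinal payoffs can be made concrete with Kate’s preferences from the text. The specific numbers below are illustrative assumptions; only their ordering (for ordinal) or their magnitudes (for cardinal) carry meaning:

```python
# Ordinal payoffs: only the ranking matters (apples > oranges > grapes).
ordinal = {"apples": 3, "oranges": 2, "grapes": 1}

# Cardinal payoffs: magnitudes matter, e.g. dollar values (illustrative).
cardinal = {"apples": 5.0, "oranges": 4.5, "grapes": 1.0}

def preferred(payoffs: dict[str, float], a: str, b: str) -> str:
    """Return whichever of the two options the player prefers."""
    return a if payoffs[a] > payoffs[b] else b

# Both scales answer ranking questions the same way...
print(preferred(ordinal, "apples", "grapes"))   # apples
print(preferred(cardinal, "apples", "grapes"))  # apples
# ...but only cardinal payoffs say *how much* more is preferred.
print(cardinal["apples"] - cardinal["oranges"])  # 0.5
```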

In the next section, we start to play some games, looking at how to solve games, how we find the best strategies and talk about the important idea of equilibrium.
