A Mathematical Analysis of the Truel
Ariadna Alpizar, Cameron McKenzie, Justin Schuetz
We can look at the truel in a few ways: computationally (including computer simulation and game-theoretic approaches), logically (using the concepts behind game theory but few of its computational tools), or purely analytically, by analyzing the attributes of the truel problem itself. We will delve into some of the analytic aspects and, along the way, look at an important computational method we can use to study it.
What is a truel?
A truel, as described in The Truel (Kilgour and Brams, p. 315), is much like a duel, but with three people instead of two. Consider how we would model a duel mathematically: we can take a probabilistic approach and determine, for example, the expected number of shots a player would have to take in order to hit his target and the probability, contingent on the other duelist's accuracy, of surviving to that point. We can consider it game-theoretically, as a sort of prisoner's dilemma or a classic standoff, and so on. Adding more players to the duel, making it a truel or nuel, not only increases the complexity of these analyses but also introduces a problem not normally seen in them: unlike a duel, the game actually changes as it progresses. In a duel, once a player is eliminated, the game is over; in a truel or nuel, when a player is eliminated the game not only continues but is fundamentally changed, making analysis far more complex. We will introduce some tools to negate this effect later and show how we use them to analyze this mathematically interesting topic.
The truel as a probabilistic set of events:
There are a couple of good reasons to start with this method. The first is that it requires only a fairly elementary level of mathematics: one has to know conditional probability and the law of total probability. Furthermore, our ultimate goal is to find a way to determine the overall probability of survival over either a finite or an infinite number of rounds, and if we are able to find a general expression, say a convergent series, we can do exactly that.
We can consider the truel using basic probability rules and set theory as follows: treat the death of a particular player in a particular round as an event, and define $D_{a,b}$ to be the event that player $b$ dies in round $a$. Consider a simplified truel in which every player has the same probability $p$ of hitting (and killing) his target, so $p_A = p_B = p_C = p$, and in which each player has a fixed target: A shoots at B, B shoots at C, and C shoots at A, in that order, until one of the players dies and the situation devolves into a duel. By fixing the probabilities and the targets, one would think we would have a simple case of solving for total probabilities. Let $q = 1 - p$ be the probability that a shooter misses, and let's look at the progression of probabilities if players keep surviving:
$$P(D_{1,B}) = p$$
$$P(D_{1,C} \mid \overline{D_{1,B}}) = p$$
$$P(D_{1,A} \mid \overline{D_{1,B}} \cap \overline{D_{1,C}}) = p$$
And so on. This is fairly clear—since the only probability that matters at any point is the probability that the player shooting hits his target, we don’t need to worry about previous probabilities. However, if we want to see how the players fare after the first round, we will need to calculate total probabilities.
Then:
$$P(\overline{D_{1,B}}) = q$$
$$P(\overline{D_{1,C}}) = P(\overline{D_{1,C}} \cap D_{1,B}) + P(\overline{D_{1,C}} \cap \overline{D_{1,B}}) = p + q^2$$
$$P(\overline{D_{1,A}}) = P(\overline{D_{1,A}} \cap D_{1,B} \cap D_{1,C}) + P(\overline{D_{1,A}} \cap D_{1,B} \cap \overline{D_{1,C}}) + P(\overline{D_{1,A}} \cap \overline{D_{1,B}} \cap D_{1,C}) + P(\overline{D_{1,A}} \cap \overline{D_{1,B}} \cap \overline{D_{1,C}})$$
$$= 0 + pq + qp + q^3 = q^3 + 2pq$$
We can see that to complete even the first round using this method we must calculate the probabilities of several unions of events, one of which was logically invalid (player B cannot die and still have shot C). Mathematically, the complexity of these intersections doubles with each shot fired: to find the probability of A surviving the first round we had $2^2 = 4$ permutations to consider, and the next time we calculate A's chance of survival we have to consider $2^5 = 32$ permutations, not all of which are logically valid. Doing this even long enough to find a pattern is mathematically and logically difficult. If you write out the chains required for more than two rounds, you will see why: if player A dies in round 2, for instance, that invalidates any permutation that includes him being alive in subsequent rounds, which in turn alters the probabilities of, say, player C surviving in round 5. And this is with a common probability of hitting a target! It is very clear this method is not suitable for what we are trying to do.
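As a quick sanity check of the round-1 expressions above, take a common accuracy of $p = q = \tfrac{1}{2}$. Then

$$P(\overline{D_{1,B}}) = \tfrac{1}{2}, \qquad P(\overline{D_{1,C}}) = \tfrac{1}{2} + \tfrac{1}{4} = \tfrac{3}{4}, \qquad P(\overline{D_{1,A}}) = \tfrac{1}{8} + 2 \cdot \tfrac{1}{4} = \tfrac{5}{8}.$$

With these fixed targets, B, who is shot at first, is the most likely to die in round 1, while C benefits because his attacker may already be dead before getting to shoot.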
Truels as modeled by Markov Chains
First we should describe what a Markov chain is. We will do this by example, modeling first a duel and then the truel as one.
The main problem in our previous example was that we had to consider multiple previous contingencies to determine a total probability for the death or survival of a player at any particular point. We tried this because we had one advantage: once we knew what state the game was in (who was still alive and who was dead), finding the probability of one player hitting another was easy. Furthermore, the state of the game at any time t was the only information we needed to find the probabilities of the game changing to any other state at time t + 1.
A Markov chain is designed around exactly this property: it only considers the probability of leaving one state and transitioning to another. Let's consider a simple Markov chain for a duel, a fairly trivial sequence of events:
First we need to define our states. Let each state be denoted by an integer and represented by an ordered binary pair, where each digit symbolizes whether a player is alive (which we'll arbitrarily denote 0) or dead (1). Then the state where player A is alive and player B is dead will be (0, 1). We have a transition matrix which shows the probabilities of moving from one state to another; in this form we don't worry about previous events as the duel progresses, only about the transition probabilities. Let $p_n$ denote the probability that player $n$ shoots and hits his target, and likewise let $q_n = 1 - p_n$ denote the probability that player $n$ shoots and misses. For the sake of space, let's designate the states:
1 = (0, 0)
2 = (0, 1)
3 = (1, 0)
4 = (1, 1)
Since we are looking at each state's probability of transitioning to another state (or possibly the same one), we'll put this in matrix form. Let T be the transition matrix for this game, where $T_{ij}$ is the probability of transitioning from state $i$ to state $j$.
Let’s assume that the duelists fire, for all purposes, simultaneously each round. Now we can model our duel with the following transition matrix:
|T |1 |2 |3 |4 |
|1 |$q_A q_B$ |$p_A q_B$ |$q_A p_B$ |$p_A p_B$ |
|2 |0 |1 |0 |0 |
|3 |0 |0 |1 |0 |
|4 |0 |0 |0 |1 |
We can see that there is a zero column vector in the lower-left quadrant of the transition matrix and the identity in the lower-right quadrant. This is because we are looking at an absorbing Markov chain: a Markov chain in which there are states that can be moved into but not out of. In this case, whenever all but one player is dead the game ends; a shooter will not defeat his opponent or opponents only to then kill himself. Likewise, if all of the other players die, the game ends and we do not move from that state to any other. This is reflected in the transition matrix: states 2, 3, and 4 cannot be moved out of, so once we reach one of them there is probability one that we remain there in every subsequent step. When we arrange the transition matrix so that the transient states (which may be moved out of) are at the top and left and the absorbing states (which may not be moved out of) are at the bottom and right, we find the following pattern:
| |Transient |Absorbing |
|Transient |Q |R |
|Absorbing |0 |I |
This is an absorbing Markov chain in canonical form, and it has some properties which will be useful to us later. Now that we've seen the basic structure of an absorbing Markov chain, let's take a look at the truel in particular.
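The duel matrix above is already in this canonical form: state 1 is the only transient state, and states 2, 3, and 4 are absorbing, so its blocks are

$$Q = \begin{pmatrix} q_A q_B \end{pmatrix}, \qquad R = \begin{pmatrix} p_A q_B & q_A p_B & p_A p_B \end{pmatrix},$$

with a $3 \times 1$ zero block and the $3 \times 3$ identity beneath them. Q holds the probability that the duel continues for another round, while R holds the probabilities of the three ways it can end.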
First we have to define our states. Analogously to the duel, we’ll use a binary triplet to represent the players being alive or dead where 1 denotes a player is dead and 0 denotes that a player is still alive. Then all of the possible states of a truel with unlimited bullets are:
1 = (0, 0, 0)
2 = (0, 0, 1)
3 = (0, 1, 0)
4 = (1, 0, 0)
5 = (0, 1, 1)
6 = (1, 0, 1)
7 = (1, 1, 0)
8 = (1, 1, 1)
We should note that depending on the rules we apply to the truel, not all of these states may be possible. For instance, if players shoot sequentially, as they did in the previous example, we can never reach state 8, where all players are dead. We’ll see how this won’t change our outcomes later.
With the states defined we need to determine how our players will act; their strategies will determine the probabilities of transitioning from one state to another. Let's take the rational strategy described in Kilgour and Brams, where A is more accurate than B, both are more accurate than C, and each player shoots at his most accurate opponent. That is, A shoots at B, and both B and C shoot at A.
|T |1 |2 |3 |4 |5 |6 |7 |8 |
|1 |$q_Aq_Bq_C$ |0 |$p_Aq_Bq_C$ |$q_A(1-q_Bq_C)$ |0 |0 |$p_A(1-q_Bq_C)$ |0 |
|2 |0 |$q_Aq_B$ |0 |0 |$p_Aq_B$ |$q_Ap_B$ |0 |$p_Ap_B$ |
|3 |0 |0 |$q_Aq_C$ |0 |$p_Aq_C$ |0 |$q_Ap_C$ |$p_Ap_C$ |
|4 |0 |0 |0 |$q_Bq_C$ |0 |$p_Bq_C$ |$q_Bp_C$ |$p_Bp_C$ |
|5 |0 |0 |0 |0 |1 |0 |0 |0 |
|6 |0 |0 |0 |0 |0 |1 |0 |0 |
|7 |0 |0 |0 |0 |0 |0 |1 |0 |
|8 |0 |0 |0 |0 |0 |0 |0 |1 |
We notice a couple of obvious patterns. The first is that we have two simple submatrices on the bottom of our transition matrix. This isn’t by accident, and we’ll explain why soon. The second is that rows 2, 3, and 4 have the same basic pattern, but with different players and different columns in the matrix. While this is not planned, it is a natural consequence of our game and this will also be explained.
Whenever we are presented with a transition matrix, we can find the probabilities of transitioning from an initial state to any other state in n steps by taking T to the nth power. That is, if we wanted to use this matrix to find out what happens in 6 turns, we would compute $T^6$ and look across row 1 (which is our initial state in this contest). But what if we want to look at the same probabilities across infinitely many rounds? To do this we will need to look at some properties of this matrix.
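To make this concrete, here is a minimal Java sketch (not the program from the appendices) that builds the simultaneous-fire transition matrix above for some assumed accuracies, raises it to the sixth power by repeated multiplication, and prints row 1:

import java.util.Arrays;

public class TruelMatrixPower {

    // Multiply two square matrices.
    static double[][] multiply(double[][] x, double[][] y) {
        int n = x.length;
        double[][] z = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    z[i][j] += x[i][k] * y[k][j];
        return z;
    }

    public static void main(String[] args) {
        double pA = 0.8, pB = 0.6, pC = 0.4;           // assumed accuracies, A > B > C
        double qA = 1 - pA, qB = 1 - pB, qC = 1 - pC;  // miss probabilities

        // States 1..8 as numbered in the text; strategy: A shoots B, B and C shoot A.
        double[][] T = new double[8][8];
        T[0][0] = qA * qB * qC;                        // everyone misses
        T[0][2] = pA * qB * qC;                        // only B dies
        T[0][3] = qA * (1 - qB * qC);                  // only A dies
        T[0][6] = pA * (1 - qB * qC);                  // A and B both die
        // One player dead: the two remaining players duel each other.
        T[1][1] = qA * qB; T[1][4] = pA * qB; T[1][5] = qA * pB; T[1][7] = pA * pB; // C dead
        T[2][2] = qA * qC; T[2][4] = pA * qC; T[2][6] = qA * pC; T[2][7] = pA * pC; // B dead
        T[3][3] = qB * qC; T[3][5] = pB * qC; T[3][6] = qB * pC; T[3][7] = pB * pC; // A dead
        for (int s = 4; s < 8; s++) T[s][s] = 1.0;     // absorbing states 5..8

        double[][] power = T;
        for (int step = 1; step < 6; step++) power = multiply(power, T); // T^6
        System.out.println(Arrays.toString(power[0])); // state probabilities after 6 rounds
    }
}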
As we stated before, we have four submatrices in this transition matrix. Two are of interest to us: Q and R. The Q submatrix contains the probabilities that we move from a transient state to another transient state. In contrast, R contains the probabilities that we move into an absorbing state. It can be proven that if we take the series $N = I + Q + Q^2 + Q^3 + \cdots = (I - Q)^{-1}$, then the matrix $N$ has the property that its cell $(i, j)$ contains the expected number of times the process enters state $j$, over the whole (possibly infinite) run, given that it starts in state $i$. These values are finite because it can also be proven that in an absorbing Markov chain $Q^n \to 0$ as $n \to \infty$. This matrix $N$ is called the fundamental matrix. Furthermore, we can define a matrix $B = NR$ which gives the probabilities that the process is absorbed into state $j$, given that it begins in state $i$. This is a very useful tool in calculating truels; as we saw above, we cannot use conditional probability to carry the calculation effectively across an unbounded number of rounds.
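For the duel this machinery can be carried out by hand. Using the blocks identified earlier,

$$N = (I - Q)^{-1} = \frac{1}{1 - q_A q_B}, \qquad B = NR = \frac{1}{1 - q_A q_B}\begin{pmatrix} p_A q_B & q_A p_B & p_A p_B \end{pmatrix}.$$

Starting from state 1, the expected number of rounds with both duelists standing is $1/(1 - q_A q_B)$, and the probabilities of being absorbed into states 2, 3, and 4 (A alone survives, B alone survives, both die) are $p_A q_B$, $q_A p_B$, and $p_A p_B$, each divided by $1 - q_A q_B$. Since $p_A q_B + q_A p_B + p_A p_B = 1 - q_A q_B$, these absorption probabilities sum to 1, as they must.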
Flexibility of Markov Chains
We also considered a situation other than the case where players have unlimited bullets. Let there be three players, A, B, and C, with accuracies in decreasing order, and let each player have a different quantity of bullets, in increasing order. We considered how to model this and determined that there are two ways. The first is to expand the state space of the transition matrices to include the number of bullets each player has remaining. However, this increases the size of the transition matrices dramatically: even adding one bullet requires us to replicate every state that already exists. For example, limiting only one player's bullets requires that we multiply the number of states we have in the unlimited case by the number of bullets allowed that player. The solution we settled on was to create more than one transition matrix, representing three stages of the game: the first when all three players have bullets remaining, the second when the first player has run out of bullets, and the third when two players have run out. These matrices are then multiplied by one another in the appropriate order to yield our results, keeping the same strategy as in the unlimited case, that is, each player shoots at his most accurate opponent.

How complex do these transition matrices become as games grow? We can count the maximum number of nontrivial cells, leaving the zero and identity blocks out of the count, for the truel and for the 4-uel:
|The truel |(A, B, C) |
|8 |The number of states (the length of the first row of the transition matrix) |
|+ $\binom{3}{2} \cdot 2^2 = 12$ |The number of duels that can result, times the number of states in each duel |
|= 20 |The maximum number of nontrivial cells |

|The 4-uel | |
|16 |The number of states |
|+ $\binom{4}{3} \cdot 2^3 = 32$ |The number of truels that can result when one player dies, times the number of states in a truel |
|+ $\binom{4}{3}\binom{3}{2} \cdot 2^2 = 48$ |The number of truels that can result from the 4-uel times the number of duels that can result from each truel, times the number of states in a duel |
|= 96 |Maximum nontrivial cells |
With these generalizations in mind, we can come up with a general expression for the maximum number of nonzero cells in Q and R. We note that each term is a product of binomials of the form $\binom{n}{n-1}\binom{n-1}{n-2}\cdots\binom{j+1}{j}$, where $j$ is the number of players in the subgame (such as a duel or truel) represented by that term. This product simplifies to $\frac{n!}{j!}$. Furthermore, each of these terms is multiplied by the number of states in the subgame, $2^j$. Putting these together and summing over the subgames yields the total $\sum_{j=2}^{n} \frac{n!}{j!}\,2^j$. When we take the $n!$ out of the summation, we have $n! \sum_{j=2}^{n} \frac{2^j}{j!}$. The summation part of this expression is a truncation of the exponential series $e^x = \sum_{j=0}^{\infty} \frac{x^j}{j!}$ at $x = 2$, so as $n$ grows it approaches $e^2 - 3 \approx 4.39$. This gives us a fairly simple way of approximating the maximum number of nontrivial cells in a transition matrix: some experimentation shows that by about $n = 5$ the sum is already close to its limit, so the count is approximately $(e^2 - 3)\,n!$. This means that as $n$ gets larger, the complexity of our transition matrices grows factorially.
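As a quick check of this count, the following short Java sketch (not part of the paper's appendices) computes the exact sum and compares it with the factorial approximation:

public class CellCount {

    static long factorial(int k) {
        long f = 1;
        for (int i = 2; i <= k; i++) f *= i;
        return f;
    }

    // Maximum number of nontrivial cells in Q and R for an n-player game:
    // the sum over subgames of j players of (n!/j!) * 2^j.
    static long maxNontrivialCells(int n) {
        long total = 0;
        for (int j = 2; j <= n; j++)
            total += (factorial(n) / factorial(j)) * (1L << j);
        return total;
    }

    public static void main(String[] args) {
        double limit = Math.exp(2) - 3; // e^2 - 3, the limit of the truncated sum
        for (int n = 2; n <= 10; n++) {
            long exact = maxNontrivialCells(n);
            double approx = limit * factorial(n);
            System.out.printf("n = %2d: exact = %9d, (e^2 - 3) n! = %11.1f%n", n, exact, approx);
        }
    }
}

For n = 3 and n = 4 this reproduces the 20 and 96 computed above.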
We can see that in a duel, each player has only one trivial option: to shoot the other player. In a truel, this decision becomes more complicated only in that it is no longer trivial; each player must decide whom to shoot first, after which his decision is again trivial. We can expand this to the case of a nuel. Consider when we have 4 players, A, B, C, and D, and we wish to enumerate the choices player D is presented with. He can shoot A first, B second, and C third; B first, C second, and A third; and so on. He must choose one of the permutations of the remaining players, of which there are $(n-1)!$. However, when we construct a payoff table we must consider each player's possible choices; that is, we must consider every combination of the players' strategies, giving an $n$-dimensional payoff array. Since each player has $(n-1)!$ ways to prioritize his targets, a payoff table for a nuel has $\big((n-1)!\big)^n$ entries. To put this in perspective, for a 4-person nuel this number, 1,296, is still computationally reasonable if one can find a way to generate the transition matrices automatically. However, it grows incredibly quickly: even at modest numbers of players such as 5 (7,962,624 entries), 6 (2,985,984,000,000 entries), and 7 (100,306,130,042,880,000,000 entries!) we find that brute-force computation is not possible, especially once we remember that the transition matrices themselves add to the complexity of the overall problem. There are some patterns, however, that suggest it may be possible to speed up computation.
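A few lines of Java (again, only an illustrative sketch) reproduce these payoff-table sizes using BigInteger arithmetic:

import java.math.BigInteger;

public class PayoffTableSize {

    static BigInteger factorial(int k) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= k; i++) f = f.multiply(BigInteger.valueOf(i));
        return f;
    }

    public static void main(String[] args) {
        // Each of the n players has (n-1)! target orderings,
        // so the payoff table has ((n-1)!)^n entries.
        for (int n = 3; n <= 7; n++) {
            BigInteger entries = factorial(n - 1).pow(n);
            System.out.println(n + " players: " + entries + " entries");
        }
    }
}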
Patterns in transition matrices
There are some distinct patterns in our Q submatrix which persist no matter how many players we have in the game. As we mentioned above, there is a limit to the complexity of a transition matrix; in finding that limit, we took advantage of the fact that any nuel breaks down into a predictable number of $(n-1)$-uels. We can find the positions of these nontrivial cells by considering the logic behind our game: we can only transition from state $i$ to state $j$ if every player that is dead in state $i$ is still dead in state $j$. In terms of the binary representation of our states, let $s_i(n)$ denote the binary digit for player $n$ in state $i$ (1 if dead, 0 if alive); then cell $(i, j)$ of the transition matrix is necessarily 0 if $s_i(n) = 1$ and $s_j(n) = 0$ for some player $n$. We wrote a MATLAB program, included in Appendix B, which finds the nontrivial cells in Q and R based on this rule. Let's look at the Q matrices for the truel and the 4-uel:
|Q (truel) |1 |2 |3 |4 |
|1 |X |X |X |X |
|2 |0 |X |0 |0 |
|3 |0 |0 |X |0 |
|4 |0 |0 |0 |X |
|Q (4-uel) |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |
|1 |X |X |X |X |X |X |X |X |X |X |X |
|2 |0 |X |0 |0 |0 |X |X |X |0 |0 |0 |
|3 |0 |0 |X |0 |0 |X |0 |0 |X |X |0 |
|4 |0 |0 |0 |X |0 |0 |X |0 |X |0 |X |
|5 |0 |0 |0 |0 |X |0 |0 |X |0 |X |X |
|6 |0 |0 |0 |0 |0 |X |0 |0 |0 |0 |0 |
|7 |0 |0 |0 |0 |0 |0 |X |0 |0 |0 |0 |
|8 |0 |0 |0 |0 |0 |0 |0 |X |0 |0 |0 |
|9 |0 |0 |0 |0 |0 |0 |0 |0 |X |0 |0 |
|10 |0 |0 |0 |0 |0 |0 |0 |0 |0 |X |0 |
|11 |0 |0 |0 |0 |0 |0 |0 |0 |0 |0 |X |
We can see a few patterns here. The first is that these matrices are upper triangular. The second is that we have a number of diagonal matrices embedded in them: for instance, cell (1,1) is a (trivially) diagonal matrix in both Q matrices, as is the submatrix formed by the corners (2,2) and (4,4). Along the diagonal of Q we can see several of these diagonal blocks. It is a subject for further study to determine whether these patterns can be predicted and, if so, how they may aid in calculations.
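The same rule is easy to express in code; the sketch below is an illustrative Java translation of the idea behind the MATLAB program in Appendix B (its bit-mask ordering of the states differs from the numbering used in the tables above). It prints an X wherever a transition is logically possible:

public class NontrivialCells {

    public static void main(String[] args) {
        int n = 3;                 // number of players (3 = truel)
        int states = 1 << n;       // each state is a bit mask; a set bit means that player is dead

        for (int i = 0; i < states; i++) {
            StringBuilder row = new StringBuilder();
            for (int j = 0; j < states; j++) {
                // Possible only if every player dead in state i is still dead in state j.
                boolean possible = (i & ~j) == 0;
                // States with one or zero players left are absorbing and can only stay put.
                if (Integer.bitCount(i) >= n - 1) possible = (i == j);
                row.append(possible ? " X" : " 0");
            }
            System.out.println(row);
        }
    }
}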
Game theoretic discussion of the truel
Our ultimate motivation for this work on calculating truels and nuels is to arrive at a game-theoretic framework for analyzing the situation. While the complexity of nuels increases quickly, we can still draw some conclusions in this area. First, we need to define some terms and discuss how they are pertinent.
A strategy, or strategy function to be more precise, maps every game state to a choice. In other words, a player's strategy function dictates which "move" the player will make at every stage of the game. In the case of a nuel, a player's strategy determines whom that player will shoot, or even whether the player shoots at all. Many games consist of only a single game state with multiple strategies. The game "Rock, Paper, Scissors" is one such game: each player chooses one strategy (play rock, play paper, or play scissors) and the game concludes with a clear victor or a draw.
Nash equilibrium is an extremely important concept in game theory. A Nash equilibrium is an outcome in which no player can earn a better payoff by changing his strategy, assuming that no other player changes theirs. This does not mean that a group of players cannot do better by shifting strategies simultaneously, only that a single player cannot gain by "defecting" from the equilibrium.
This is a very important concept, as games of perfect information will always conclude at an equilibrium, assuming rational play by all players. A game of perfect information is one in which you are aware of all previous choices made by players, and there is no simultaneous choosing. A simple example is the game "Tic-Tac-Toe." With both sides playing an optimal strategy, the game will end in a draw. Neither player can achieve a better outcome by changing their strategy. The only way to do any better is if your opponent plays suboptimally.
In games of imperfect information (often involving simultaneous decisions), one does not always end up at a pure strategy equilibrium. In the game of "Rock, Paper, Scissors" there is no pure strategy equilibrium: for any given outcome, one player could have done better by choosing differently.
To find equilibria in such a game, one must consider mixed strategies. A mixed strategy is one in which a player randomly decides between available pure strategies. In this example, the pure strategies are rock, paper, and scissors. An effective mixed strategy is to randomly choose them with equal probability. In fact, if both players use this mixed strategy, we have a mixed strategy equilibrium. If your opponent has an equal chance of picking rock, paper, or scissors, it matters not what you pick. Whatever your choice, you have an equal chance of winning, losing, or drawing.
One important theorem of game theory is that every finite game has at least one Nash equilibrium. As we have just seen, this equilibrium sometimes requires the use of mixed strategies rather than pure strategies. But it is important to note that every such game, even those with more than two players or strategies, will have at least one equilibrium.
The first situation we wanted to look at was a simple simultaneous nuel. In this scenario, every player has a constant probability of hitting a target. In a single round, every player fires simultaneously, eliminating his target in one shot if he hits. This process continues as long as two or more players are left standing. It is possible for no player to survive, due to simultaneous firing: for example, if every player chooses a different target and all of them hit, then every player dies simultaneously. When we consider the outcomes of this game, we express the player payoffs as the probability of survival. Every player is obviously interested in maximizing his chance of survival, so this is an effective way to model the scenario in game theory.
We created a program in Java to simulate nuels, which we have included in Appendix A. The program is designed to be as flexible as possible, allowing the user to define each player's probability of hitting, as well as a strategy which determines how he chooses his target.
The program works by initializing a list of players, each of which has a set of accuracies and strategies available to it. Supporting a range of accuracies and strategies makes it easy to test the simulation under different circumstances and compare the results.
The Player object contains methods to deal with how players select their targets, what happens to them when they get shot, and to handle how to reassign their accuracies or strategies when the simulation calls for it. The only one of these of much interest is the method to select a target. The player has a specific Strategy assigned to them. The "fire" method is given a list of available targets, and the Strategy decides which player on the list is the ideal target. We've written strategies for players to fire at a random target, to fire at the most accurate opponent, or to fire at themselves (primarily for testing purposes).
The program driver initializes the game and runs a set number of games using each set of available player strategies, and prints the results for each. The simulation works by calling each player in sequence, giving them the list of available targets, and having them fire on the target. Every player is given a chance to shoot, and then players which were hit are removed from the game. This process is repeated until zero or one players remain, the result is tallied, and the game reset.
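To give a feel for the structure without reproducing the whole program, here is a condensed, self-contained sketch of the kind of loop described above (it is not the code in Appendix A): every surviving player fires simultaneously at his most accurate surviving opponent, hits are applied at the end of the round, and survivors are tallied over many games.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class NuelSketch {

    public static void main(String[] args) {
        double[] accuracy = {0.8, 0.6, 0.4};   // assumed accuracies for players A, B, C
        int games = 1_000_000;
        Random rand = new Random();
        int[] wins = new int[accuracy.length];
        int nobody = 0;

        for (int g = 0; g < games; g++) {
            List<Integer> alive = new ArrayList<>();
            for (int i = 0; i < accuracy.length; i++) alive.add(i);

            while (alive.size() > 1) {
                List<Integer> hit = new ArrayList<>();
                for (int shooter : alive) {
                    // Target the most accurate opponent still standing.
                    int target = -1;
                    for (int other : alive)
                        if (other != shooter && (target == -1 || accuracy[other] > accuracy[target]))
                            target = other;
                    if (rand.nextDouble() < accuracy[shooter]) hit.add(target);
                }
                alive.removeAll(hit);  // simultaneous fire: the dead are removed after everyone shoots
            }
            if (alive.isEmpty()) nobody++; else wins[alive.get(0)]++;
        }

        for (int i = 0; i < wins.length; i++)
            System.out.printf("Player %c survives in %.4f of games%n",
                    (char) ('A' + i), (double) wins[i] / games);
        System.out.printf("No survivors in %.4f of games%n", (double) nobody / games);
    }
}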
The simulation has been an extremely flexible tool, and very useful for getting a quick look at the results of particular game scenarios. While it is usually possible to solve for the players' expected payoffs under given game criteria, doing so is often not simple and the general solutions are never tidy; adjusting the simulation to handle a different scenario is often a much easier process. Of course, using the simulation has its disadvantage: we can never get an exact answer, only a close approximation.
The source code for the Java simulation is available in an appendix.
We used the Java simulation to try every possible combination of strategies and see which were equilibria. Which outcomes were equilibria depended on the accuracies assigned to each player. However, we were able to observe two distinct categories.
On one hand, we had games in which there was a single equilibrium. This equilibrium was always the one in which every player shot at his most accurate opponent. On the other hand, there were cases in which there were two equilibria. In these cases, the equilibria corresponded to the two combinations in which every player chose a different target. Under no circumstances did we observe an equilibrium other than these three, but the simulation lacks the rigor and precision to prove this fact.
To prove that these three are the only possible pure strategy equilibria, it is useful to write the strategies as triplets of a, b, and c. These letters correspond to the players, in decreasing order of accuracy: A is the best shooter and C is the worst. Each term indicates whom the corresponding player is targeting; (b,a,a), for example, indicates that A is shooting at B, B is shooting at A, and C is shooting at A. There are eight possible combinations of these triplets (assuming a player never shoots at himself).
In order to determine that some of these can never be equilibria, consider the following situation. Suppose that Adam, Beth, and Carl are in a truel and that Adam and Beth are both choosing to shoot at Carl. Suppose that it will take n shots for Carl to successfully hit the first time, and an additional m shots for him to hit again. Carl must therefore endure n shots from one of his opponents, and n+m shots from his other opponent in order to survive the game. It is clearly advantageous for him to minimize the number of shots taken from the more accurate opponent, so it is always best for Carl to shoot at whichever opponent is more accurate.
The situation described eliminates 3 of the outcomes as possible equilibria. (c,a,a) can never be an equilibrium because A will always do better by eliminating B first rather than C. The same logic can be applied to eliminate (b,c,b) and (c,c,b) as possible equilibria.
To eliminate the remaining outcomes as possible equilibria, consider this scenario. Adam and Beth have this time chosen to shoot at each other. Carl, wanting to maximize his survival rate, must decide whom to shoot at. There are three ways in which Carl can win. Adam and Beth can die at the same time, which is an automatic win for Carl and his favorite outcome. The other two ways involve one of the two dying, and then Carl dueling the other to claim his win. It is clear that Carl would rather end up in a duel against the less accurate of the two. Let us suppose that Adam is more accurate than Beth. The three outcomes, in order of Carl's preference, are: Adam and Beth both die, Adam dies, Beth dies.
It is more likely for both to die if Carl shoots at the more accurate of the two (he "assists" the weaker of the two, so to speak, making a simultaneous death more likely). It is also clearly more likely for Adam to die before Beth if Carl shoots at Adam. By choosing to shoot at Adam, Carl maximizes his chances of ending up in either of his two favorite outcomes and minimizes the chance of ending up in the worst outcome, where he has to duel Adam.
This scenario eliminates the remaining two candidate outcomes. In (b,a,b), the third player makes an irrational choice by shooting at B rather than A. The same can be said of (c,c,a), where the second player is irrational.
This leaves only three possible equilibria, (b,a,a), (b,c,a), and (c,a,b), all of which have been observed in specific situations.
References
Kilgour, D. Marc, and Steven J. Brams. "The Truel." Mathematics Magazine, vol. 70, no. 5, December 1997, p. 315.
Appendix A
import java.util.Random;
import java.util.ArrayList;
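// Strategy implementation: target the most accurate available opponent.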
public class AccOpp implements Strategy
{
Random rand;
public AccOpp(Random r)
{
rand = r;
}
public Shooter target(ArrayList list, Shooter self) {
ArrayList targets = new ArrayList();
ArrayList targets2 = new ArrayList();
for (int i=0;i ................