Mathematical analysis of a good Paranoia game

Drew Fisher


This page explains some of the mathematics behind generating "good" Assassins/Paranoia games. The ideal game will be resilient to inactive players and encourage players to play actively through game structure rather than external rules or tedious umpire-directed rebalancings.

Previous work

Daniel Glasscock wrote a fantastic paper that gives good background on producing fair games and avoiding degeneracy. If you're not familiar with the game itself yet, you should read either the Wikipedia explanation, his paper, or both.


Before explaining how to choose multiple targets well, I feel obligated to explain why the single-target circulant game is insufficient. While it guarantees equivalency, non-degeneracy, no duplicate targets, and maximum distance to self, it is in fact quite a terrible game to play. If any player hides, goes out of town, or simply bides their time, the whole game gets tied up around that edge. The ideal winning strategy is to avoid attacking anyone until everyone except your target has been killed. Because this structure encourages dull play, the game often will not end within a prespecified length of time. Adding kill requirements or timeouts can force the game to proceed, but doing so adds management overhead and can reduce the sense of fairness or skill involved.

While previous work considered equivalence and non-degeneracy to be key design features of a game, I took the approach of dropping the "provably non-degenerate" requirement in favor of optimizing self- and duplicate-target distances. This appears to be "good enough" for most games: none of the games I have run since has turned out to be degenerate or needed additional supervision or game-manipulation once started.

As previously discussed, we seek to maximize distance to self and distance to duplicate targets. In the case of either a self-target or a duplicate target, the player obtaining it has one fewer valid target. In the self-target case, however, that player also has one fewer person who can attack him or her. As a result, to keep the game more exciting, it can be considered more important to have a large self-target distance than a large duplicate-target distance.

A possible game-generation tactic is as follows:

  1. Pick the number of targets each player will have; call it t.
  2. For each possible list L of t unique "target numbers" (as described in Glasscock's paper), compute its distance list D, where Di is the minimum number of kills needed to obtain the target at offset i (equivalently, the fewest elements of L, with repetition, summing to i mod n), and from D the score SL = M / D0, where M = sum(Di) - D0.
  3. Use the list of target numbers L that has the minimum SL. If multiple such lists have the same score, select one at random.
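The tactic above can be sketched in Python. Two details here are assumptions on my part: Di is taken to be the fewest elements of L, with repetition, summing to i mod n (a reading that reproduces both distance lists in the worked example below), and the candidate lists are simply all t-subsets of {1, ..., n-1}, a stand-in for the candidate sets described in Glasscock's paper.

```python
from itertools import combinations
from math import inf

def distances(L, n):
    """D[i] = fewest elements of L, with repetition, summing to i mod n,
    i.e. the minimum number of kills before a player can hold the target
    at offset i.  D[0] counts the first *positive* return to offset 0.
    Unreachable offsets are left as None."""
    D = [None] * n
    frontier = {0}                      # offsets reachable in exactly k kills
    for k in range(1, n + 1):           # a minimal sum needs at most n terms
        frontier = {(r + l) % n for r in frontier for l in L}
        for r in frontier:
            if D[r] is None:
                D[r] = k
    return D

def score(L, n):
    """SL = M / D0 = (sum(Di) - D0) / D0; lower is better."""
    D = distances(L, n)
    if any(d is None for d in D):
        return inf                      # some offset unreachable: reject L
    return (sum(D) - D[0]) / D[0]

def best_lists(n, t):
    """All t-element lists of target numbers achieving the minimum score."""
    candidates = [list(c) for c in combinations(range(1, n), t)]
    best = min(score(L, n) for L in candidates)
    return [L for L in candidates if score(L, n) == best]
```

With n = 10 and t = 3, score([1, 4, 7], 10) evaluates to 4.5 and score([1, 3, 5], 10) to 8.5, matching the worked example below.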

We can note some interesting properties of this mechanic. First, for a given t, an L with a smaller associated D0 clearly has a shorter distance to self. Second, a list LX has fewer duplicates early in the kill cycle than LY if it has a smaller M. Thus, we can minimize duplicate targets, particularly those early in the game, by minimizing M = sum(Di) - D0, and we can maximize distance to self by seeking an L with a large D0.

Unfortunately, these tend to be opposing goals, so I designed S to take both into account: dividing M by D0 rewards a large self-distance while penalizing early duplicates.

Let's demonstrate this technique with an example. Suppose we choose t=3, and n=10.
If L=[1, 3, 5], then D= [2, 1, 2, 1, 2, 1, 2, 3, 2, 3], and SL = (17 / 2) = 8.5.
If L=[1, 4, 7], then D= [4, 1, 2, 3, 1, 2, 3, 1, 2, 3], and SL = (18 / 4) = 4.5.
Thus, L = [1, 4, 7] is a better game than L = [1, 3, 5]: despite its slightly worse (larger) M, its much larger self-distance D0 gives it the lower score.
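If Di is read as the fewest target numbers from L, with repetition, summing to i mod n (an interpretation that reproduces both distance lists above), the example numbers can be verified by brute force:

```python
from itertools import product

def D_i(L, n, i):
    """Fewest elements of L (repetition allowed) summing to i mod n."""
    for k in range(1, n + 1):
        if any(sum(combo) % n == i for combo in product(L, repeat=k)):
            return k

for L in ([1, 3, 5], [1, 4, 7]):
    D = [D_i(L, 10, i) for i in range(10)]
    print(L, D, (sum(D) - D[0]) / D[0])   # scores 8.5 and 4.5
```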

This algorithm tends to produce distance lists D where D0 ≥ Di for all i (as in the example above, where D0 = 4 exceeds every other Di), which I suspect (but have not proven) reduces the risk of degeneracy, as compared to lists with higher SL.

Future Work

Several of the ideas that I claim produce better games have not been rigorously proven to do so, only suggested. A mathematical proof of this algorithm's optimality (or lack thereof) for circulant games would be nice to have.

Since degeneracy was explicitly ignored in the creation of this scoring algorithm, it would be nice to have some idea of how important it is, and the likelihood of it arising over the course of a game. Robert Mitchell Burke may be investigating the probability of degeneracy using a random kill pattern, given this game-creation algorithm.

This algorithm assumes that one has a good choice of a value for t, the number of targets each player has (and has to watch out for!). Choosing t too large makes the game dull, as a kill is too likely to yield targets one already has. Choosing t too small can make the game take far longer, as fewer attack opportunities exist for players. A technique for choosing a good value of t would likely be a valuable contribution.

This work also only considers circulant games, which have the unfortunate pattern of guaranteeing duplicate targets fairly early in the game. It would be interesting to see further work on non-circulant vertex-transitive games: whether they can exist, whether they can be as fair as circulant games, and whether they can achieve greater self- and duplicate-distances than circulant games can produce.