This page explains some of the mathematics behind generating "good" Assassins/Paranoia games. The ideal game will be resilient to inactive players and encourage players to play actively through game structure rather than external rules or tedious umpire-directed rebalancings.

Daniel Glasscock wrote a fantastic paper that gives good background on producing fair games and avoiding degeneracy. If you're not familiar with the game itself yet, you should read either the Wikipedia explanation, his paper, or both.

I feel obligated to explain why the single-target circulant game is insufficient before explaining how to choose multiple targets well. Although it guarantees equivalency, non-degeneracy, no duplicate targets, and maximum distance to self, it is in fact quite a terrible game to play. If any player hides, goes out of town, or simply bides his or her time, then the whole game gets tied up around that edge. The ideal winning strategy for this game is to avoid attacking anyone until everyone else has been killed except your target. Because this game structure encourages dull play, it often will not end within a prespecified length of time. Adding kill requirements or timeouts can force the game to proceed, but adds management overhead and can reduce the sense of fairness or skill involved.

While previous work considered equivalence and non-degeneracy to be key design features of a game, I took the approach of dropping the "provably non-degenerate" requirement in favor of optimizing self- and duplicate-distances. At first glance this appears to be "good enough" for most games: none of the games I have run since has turned out to be degenerate, nor needed any additional supervision or game-manipulation once started.

As previously discussed, we seek to maximize distance to self and distance to duplicate targets. In the case of either a self-target or a duplicate target, the player obtaining the self or duplicate target has one less valid target. In the self-target case, however, that player also has one less person that can attack him or her. As a result, to keep the game more exciting, it can be considered more important to have large self-target distance than large duplicate-target distance.

A possible game-generation tactic is as follows:

- Pick the number of targets each player will have; call it *t*.
- For each possible list L of *t* unique "target numbers", as described in Glasscock's paper, calculate a score S_{L} in the following manner:
    - Without loss of generality, number the players from 0 to *n*-1, where *n* is the number of players in this game.
    - Calculate the minimum distance from player 0 to each other player. Note that this differs from the definition given in Glasscock's paper: we define the minimum distance from one player to another as the minimum number of kills required to kill that player, or, in graph-theoretic terms, the weight of the shortest path from player 0 to player i when every edge is given a weight of 1. This can be found via a breadth-first-search style expansion. Let D be the list in which D_{i} is the minimum distance from player 0 to player i.
    - The *distance sum of D*, M, is defined as the sum of the elements of D minus the first item in the list (or, equivalently, the sum of the distances from player 0 to all other players).
    - The score S_{L} of a list *L* is M / D_{0}.
- Use the list of target numbers L that has the minimum S_{L}. If multiple such lists have the same score, select one at random.
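The steps above can be sketched in Python. This is a rough sketch, not a definitive implementation: the function names (`bfs_distances`, `score`, `best_target_list`) are my own, and I assume the circulant structure from Glasscock's paper, where player *i* targets (*i* + *l*) mod *n* for each target number *l* in *L*.

```python
from collections import deque
from itertools import combinations
import random

def bfs_distances(n, targets):
    """D[i] = minimum number of kills for player 0 to reach player i.

    The search starts at player 0's targets (distance 1), so D[0] is only
    assigned when a kill chain cycles back to player 0 -- i.e. D[0] is the
    self-distance. Unreachable players get None.
    """
    dist = {}
    queue = deque((t % n, 1) for t in targets)
    while queue:
        player, d = queue.popleft()
        if player in dist:
            continue
        dist[player] = d
        queue.extend(((player + t) % n, d + 1) for t in targets)
    return [dist.get(i) for i in range(n)]

def score(n, targets):
    """S_L = M / D_0, where M is the sum of distances to all other players."""
    D = bfs_distances(n, targets)
    if None in D:
        return float('inf')  # some player is unreachable: not a usable game
    M = sum(D) - D[0]
    return M / D[0]

def best_target_list(n, t):
    """Pick a minimum-score list of t unique target numbers, at random on ties."""
    scored = [(score(n, L), L) for L in combinations(range(1, n), t)]
    best = min(s for s, _ in scored)
    return random.choice([L for s, L in scored if s == best])
```

Note that the tie-breaking `random.choice` matches the last step of the procedure, and lists whose target numbers cannot reach every player (for example, all-even offsets when *n* is even) are scored as infinitely bad rather than raising an error.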

We can note some interesting properties of this mechanic. First, for a given *t*, it is clear that an L with a smaller associated D_{0} has a shorter distance to self. Second, a list L_{X} has fewer duplicates early in the kill cycle than L_{Y} if it has a smaller M. Thus, we can minimize duplicate targets, particularly those early in the game, by minimizing M = sum(D_{i}) - D_{0}, and we can maximize distance to self by seeking an L with a large D_{0}.

Unfortunately, these tend to be opposing goals, so I designed S to take into account both goals.

Let's demonstrate this technique with an example. Suppose we choose *t*=3, and *n*=10.

If *L* = [1, 3, 5], then D = [2, 1, 2, 1, 2, 1, 2, 3, 2, 3], and S_{L} = 17 / 2 = 8.5.

If *L* = [1, 4, 7], then D = [4, 1, 2, 3, 1, 2, 3, 1, 2, 3], and S_{L} = 18 / 4 = 4.5.

Thus, *L* = [1, 4, 7] is a better game than *L* = [1, 3, 5]: despite having a worse (larger) distance sum M, its much larger self-distance D_{0} gives it a lower score.
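For concreteness, here is the target structure these lists induce. In a circulant game (per Glasscock's paper), player *i* targets (*i* + *l*) mod *n* for each target number *l* in *L*; the variable names below are mine.

```python
# In a circulant game, player i targets (i + l) % n for each target number l in L.
n, L = 10, [1, 4, 7]
targets_of = {i: [(i + l) % n for l in L] for i in range(n)}

print(targets_of[0])  # [1, 4, 7]
print(targets_of[9])  # [0, 3, 6] -- the target numbering wraps around
```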

This algorithm tends to produce distance lists D where D_{0} ≤ D_{i}, which I suspect (but have not proven) reduces the risk of degeneracy, as compared to lists with higher S_{L}.

Several of the ideas that I claim produce better games have not been rigorously proven to do so, but merely suggested. A mathematical proof of this algorithm's optimality (or lack thereof) for circulant games would be nice to have.

Since degeneracy was explicitly ignored in the creation of this scoring algorithm, it would be nice to have some idea of how important it is, and the likelihood of it arising over the course of a game. Robert Mitchell Burke may be investigating the probability of degeneracy using a random kill pattern, given this game-creation algorithm.

This algorithm assumes that one has a good choice of a value for *t*, the number of targets each player has (and has to watch out for!). Choosing *t* too large makes the game dull, as a kill is too likely to yield targets one already has. Choosing *t* too small can make the game take far longer, as fewer attack opportunities exist for players. A technique for choosing a good value of *t* would likely be a valuable contribution.

This work also only considers circulant games, which have the unfortunate pattern of guaranteeing duplicate targets fairly early in the game. It would be interesting to see further work on non-circulant vertex-transitive games: whether they can exist, whether they can be as fair as circulant games, and whether they can achieve greater self- and duplicate-distances than circulant games can produce.