Reputation systems supporting negative attestation are very tricky

The published algorithm PageRank computes the stationary (limiting) probability distribution over a Markov-chain abstraction of the WWW, and is the first published attack-resistant trust metric, inspiring Levien's later work on the Advogato network-flow-based trust metric. For PageRank to be defined, the underlying Markov transition matrix needs to be ergodic; a small smoothing probability distribution $E$, mixed in with weight $\alpha$, is used to ensure that this is the case, although a poor choice of $E$ (such as the uniform distribution suggested in the original paper) can make the trust metric as a whole vulnerable to spam.
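For concreteness, here is a minimal sketch of the smoothed PageRank iteration in modern NumPy (outside the Python 2 transcript below); the function and variable names are mine, and $\alpha = 0.15$ is just the conventional smoothing weight, not necessarily the paper's exact value:

```python
import numpy as np

def pagerank(A, alpha=0.15, iterations=100):
    """Power-iterate the smoothed PageRank chain.

    A is a column-stochastic link matrix: A[i, j] is the probability of
    following a link from page j to page i.  alpha is the weight given to
    the smoothing distribution E (here uniform, as in the original paper),
    which makes the combined chain ergodic.
    """
    n = A.shape[0]
    E = np.ones(n) / n           # the uniform smoothing distribution
    v = E.copy()
    for _ in range(iterations):
        v = (1 - alpha) * A.dot(v) + alpha * E
    return v

# A tiny three-page web: pages 0 and 1 link to each other and to page 2,
# while page 2 links only to itself, so it soaks up most of the rank.
A = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 1.0]])
print(pagerank(A))
```

Without the $\alpha E$ term, all the probability would end up trapped at the self-linking page; the smoothing keeps every page's rank strictly positive.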

One of the surprising characteristics of both PageRank and the Advogato metric is that they have no capability for negative attestation — you can choose to raise someone else's PageRank by linking to them, but there is no action you can take to lower it. In human social networks, an analogous phenomenon results in something known as the broken stair effect, where an actor who does net harm to the network by seriously harming one victim after another can nevertheless maintain a positive reputation by befriending a much larger number of people.

("Negative attestation" is a term from the literature on cryptographic authenticity verification systems. I would like a better term, but I'm not familiar with one.)

For application to human reputation systems rather than WWW search results, we might consider this a drawback, because it is much easier for someone to harm us in a permanent way than to improve our lives in a permanent way. When consulting a reputation system, then, we are much more concerned about whether a person might do us great harm, for example by raping us, assassinating our character, or looting our house, than about whether they are likely to do us great good, because the harm done by a single rape, defamation, or fraud can be much greater than the plausible positive effects on our life from interacting with one person. So at an individual level, we have strong incentives to want reputation systems to support negative attestation.

Unfortunately, evidence from human society to date suggests that negative attestation can make reputation systems vulnerable to a variety of pathologies.

One common problem is that, if attestations are non-anonymous, participants have a perverse incentive not to negatively attest, because they may suffer baseless negative attestations in return. In individual human social dynamics, this commonly results in powerful and influential participants getting away with murder — sometimes literally — while participants with less robust reputations often suffer major repercussions for minor infractions of etiquette and are left without recourse for even major damages inflicted upon them. This perverse incentive is what allows broken stairs to continue to exist pervasively in human society.

Another problem that commonly manifests at a larger scale is tribalism and war: depending on which social group you happen to be born into, the reputation system may tell you that Communists and people who associate with them are evil, while capitalists are honorable captains of industry, or the reverse, through a stable membrane of negative attestations separating the two social groups and punishing anyone who attempts to bridge the membrane. The problem is that negative attestations in this case prevent the equilibrium of the calculations of the reputation system from being unique; the ergodicity present in almost all Markov chains is lost.
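The membrane behaves like a disconnected Markov chain: even without any negative weights, a reducible transition matrix already loses the uniqueness of its equilibrium. A toy sketch (my own example, in modern NumPy):

```python
import numpy as np

# Two isolated communities: a block-diagonal, column-stochastic transition
# matrix.  Each block is ergodic on its own, but the chain as a whole is
# reducible, so the stationary distribution is not unique: there is one
# equilibrium per community, plus every mixture of the two.
t = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

a = np.array([0.5, 0.5, 0.0, 0.0])   # equilibrium concentrated in group A
b = np.array([0.0, 0.0, 0.5, 0.5])   # equilibrium concentrated in group B
print(np.allclose(t.dot(a), a), np.allclose(t.dot(b), b))  # both are fixpoints
```

Which "truth" the reputation system converges to depends entirely on which side of the membrane you start on, exactly as in the Communist/capitalist example.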

Human societies have developed a broad range of institutions for managing these problems of negative attestation, including money, courts, laws, prisons, police, formal and informal honor codes, diplomacy, journalism, and arguably capitalism. (Capitalism provides ambitious young psychopaths with the option of pursuing power more effectively by manufacturing inexpensive hand soap than by raising an army to conquer territory.) Each of these institutions comes in many different varieties in different cultures; the restorative justice practiced by North American Native circles of justice is radically different from the retributive justice meted out by courts of law in Roman-derived societies; both are radically different from the Romani kris and its ideals of kintala and reconciliation; and anarchist or pacifist ideals of transformative justice are different from all of these. All of these systems have their flaws. There is no reason to think the negative-attestation problem is easy.

These problems seem to crop up not only in real human social dynamics in meatspace, but also in mathematical or cybernetic reputation systems that attempt to provide us with tractable abstractions of those dynamics, limiting the damage that can be done by bad actors. We must be careful to ensure that we do not write social software that worsens the problem instead of making it better.

Within the abstraction of fixpoints of linear algebra, the corresponding statement is that the Perron–Frobenius theorem doesn't guarantee the existence of a unique largest eigenvalue for matrices with negative elements.

Scott Aaronson wrote an essay in 2014 about attempting to expand eigenvector-based reputation systems to moral philosophy in general called Eigenmorality, covering questions like this:

[S]hould we compute your “total morality” by simply summing over your interactions with everyone else in your community? If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?

And this, by commenter "MadRocketSci", summarizing something Aaronson said:

[I]t's a little ludicrous to award “morality” to whatever group of in-group cooperators/out-group defectors happens to be largest. That seems to be a general feature of this sort of algorithm, as stated.

That is, in the presence of deep divisions in a society, Aaronson's concept of eigenmorality condemns the minority groups struggling against the majority. This is perhaps a good illustration of how easily an attempt to formalize reputation systems with negative attestation can go badly wrong. I show examples of this outcome of Aaronson's framework below.

He also linked to related work by Tyler Singer-Clark on morality metrics on Iterated Prisoner's Dilemma Players (source code), by Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina on trust metrics for P2P networks, and by Rebecca Newberger Goldstein on moral philosophy in a fictional context.

Kamvar et al. base their entire EigenTrust computation on a "normalized local trust value" in $[0, 1]$, truncating all negative values to 0, so they do not attempt to handle negative attestation, even though the explicit goal of their research is to "identif[y] malicious peers and isolate[] them from the network."
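A sketch of just that normalization step as I read their paper (the function name and the column-oriented convention, matching the matrices below, are mine):

```python
import numpy as np

def eigentrust_normalize(S):
    """Sketch of Kamvar et al.'s normalization: truncate negative local
    trust values to 0, then scale each peer's column of outgoing
    attestations to sum to 1, so every entry lands in [0, 1]."""
    C = np.maximum(S, 0.0)      # negative attestations are simply discarded
    sums = C.sum(axis=0)
    sums[sums == 0] = 1.0       # leave all-zero columns alone
    return C / sums

# Column j holds peer j's ratings.  Peer 0 rates peer 0 highly and peer 1
# negatively; the negative rating vanishes entirely after normalization.
S = np.array([[ 2.0, 1.0],
              [-3.0, 1.0]])
print(eigentrust_normalize(S))
```

So a peer who has been attested against by everyone ends up indistinguishable, in the normalized matrix, from a peer nobody has rated at all.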

Here are a couple of thought experiments to explore the space around PageRank.

Basic PageRank

Here we have the transition matrix of an ergodic Markov chain and we come up with its stationary distribution. First, let's generate a random square matrix.

In [1]:
%pylab inline
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline].
For more information, type 'help(pylab)'.
In [2]:
n = 6
pre_t = rand(n, n)
print pre_t
[[ 0.56360638  0.71989101  0.18933134  0.67289365  0.28786193  0.88952121]
 [ 0.13994219  0.55680073  0.73279774  0.62714198  0.61473341  0.7501166 ]
 [ 0.0158959   0.96530473  0.50464747  0.01555037  0.59879578  0.4861724 ]
 [ 0.51265266  0.16894211  0.14278131  0.30172311  0.41873684  0.65900758]
 [ 0.42481032  0.32220287  0.24385906  0.90260515  0.9379906   0.7970652 ]
 [ 0.65195494  0.54374264  0.70458721  0.30520581  0.22706343  0.55002057]]

Trouble is, that's not a valid Markov-chain transition matrix. Each column of such a matrix needs to add up to 1 (i.e. its $L^1$ norm should be 1). But we can normalize it.

In [3]:
col_sums = sum(pre_t, axis=0)
print col_sums
[ 2.3088624   3.2768841   2.51800413  2.82512006  3.08518197  4.13190355]
In [4]:
t = pre_t / col_sums
print t
[[ 0.24410566  0.21968766  0.07519103  0.23818232  0.09330468  0.21528121]
 [ 0.06061088  0.16991774  0.29102325  0.22198773  0.19925353  0.18154262]
 [ 0.00688473  0.29458006  0.20041566  0.00550432  0.19408767  0.11766306]
 [ 0.2220369   0.05155572  0.05670416  0.1068001   0.13572517  0.15949249]
 [ 0.18399118  0.09832599  0.09684617  0.31949267  0.30403088  0.19290508]
 [ 0.28237064  0.16593283  0.27981972  0.10803286  0.07359807  0.13311554]]

Now each column of the matrix totals to 1, and all the items are between 0 and 1:

In [5]:
sum(t, axis=0), t.min(), t.max()
Out[5]:
(array([ 1.,  1.,  1.,  1.,  1.,  1.]),
 0.0055043202965486624,
 0.31949266802821036)

So if we take some arbitrary probability-density vector of the right length and multiply it through this matrix a few times, it should eventually converge on a fixpoint which is the first eigenvector of the matrix. If we start with all the probability at index 0, after one iteration, we have the first column of the matrix:

In [6]:
v = zeros(n)
v[0] = 1
print t.dot(v)
[ 0.24410566  0.06061088  0.00688473  0.2220369   0.18399118  0.28237064]

So let's generalize to arbitrary numbers of multiplications.

In [7]:
def ndot(m, n, v):
    for i in range(n):
        v = m.dot(v)
    return v

print ndot(t, 1, v)
[ 0.24410566  0.06061088  0.00688473  0.2220369   0.18399118  0.28237064]
In [8]:
print ndot(t, 2, v)
[ 0.2042623   0.16431061  0.09107234  0.15143749  0.23288857  0.15602869]
In [9]:
print ndot(t, 10, v)
[ 0.17994214  0.18187854  0.14284492  0.12512731  0.1982469   0.17196018]
In [10]:
print array([ndot(t, k, v) for k in (0, 1, 10, 20, 100, 101, 102)])
[[ 1.          0.          0.          0.          0.          0.        ]
 [ 0.24410566  0.06061088  0.00688473  0.2220369   0.18399118  0.28237064]
 [ 0.17994214  0.18187854  0.14284492  0.12512731  0.1982469   0.17196018]
 [ 0.17994229  0.18187821  0.14284428  0.1251277   0.19824781  0.1719597 ]
 [ 0.17994229  0.18187821  0.14284428  0.1251277   0.19824781  0.1719597 ]
 [ 0.17994229  0.18187821  0.14284428  0.1251277   0.19824781  0.1719597 ]
 [ 0.17994229  0.18187821  0.14284428  0.1251277   0.19824781  0.1719597 ]]
In [11]:
sum(ndot(t, 100, v))
Out[11]:
0.99999999999999989

So we can see that this procedure converges relatively quickly (in about 20 iterations to the display precision) on a vector which is still a valid probability distribution, i.e. its values are all between 0 and 1 and its $L^1$ norm is 1, except for rounding error. It's clearly an eigenvector of the matrix with eigenvalue 1, since it's a fixpoint of the matrix.

There are a couple of obvious questions, though:

  1. Does this result depend on the vector we started with, or would we get other results (maybe a different eigenvector, maybe something that isn't even an eigenvector) for other starting points?
  2. Is it the first eigenvector of the matrix? What are the other eigenvectors, if any? (Eigenvectors can only be missing if the matrix is defective, i.e. not diagonalizable, and since we generated it randomly, it almost surely isn't.)
  3. Even if any vector would get us to the same place on this matrix, is there anything special about this matrix that makes that true?

The first question is easy to answer; we can multiply the matrix by itself to see what it would do to any possible vector:

In [12]:
print ndot(t, 100, t)
[[ 0.17994229  0.17994229  0.17994229  0.17994229  0.17994229  0.17994229]
 [ 0.18187821  0.18187821  0.18187821  0.18187821  0.18187821  0.18187821]
 [ 0.14284428  0.14284428  0.14284428  0.14284428  0.14284428  0.14284428]
 [ 0.1251277   0.1251277   0.1251277   0.1251277   0.1251277   0.1251277 ]
 [ 0.19824781  0.19824781  0.19824781  0.19824781  0.19824781  0.19824781]
 [ 0.1719597   0.1719597   0.1719597   0.1719597   0.1719597   0.1719597 ]]

Since every column of this resulting matrix is the same, it doesn't matter what weighted sum we make; as long as the column weights in our starting vector add up to 1, we'll always get the same result.

What about the eigenvector decomposition? Numpy has a built-in function for computing the eigenvectors and eigenvalues of a matrix:

In [13]:
vals, vecs = linalg.eig(t)
print vals; print vecs
[ 1.00000000+0.j          0.22216486+0.11487711j  0.22216486-0.11487711j
 -0.09032211+0.07987558j -0.09032211-0.07987558j -0.10529993+0.j        ]
[[ 0.43592099+0.j         -0.00479288-0.27196655j -0.00479288+0.27196655j
  -0.39007165+0.13749651j -0.39007165-0.13749651j  0.34037043+0.j        ]
 [ 0.44061087+0.j          0.16155063+0.19536318j  0.16155063-0.19536318j
  -0.16654090-0.32726554j -0.16654090+0.32726554j -0.02804966+0.j        ]
 [ 0.34604885+0.j          0.31392215+0.35501013j  0.31392215-0.35501013j
  -0.07648123+0.30990069j -0.07648123-0.30990069j  0.32464453+0.j        ]
 [ 0.30312937+0.j         -0.20419945-0.18602353j -0.20419945+0.18602353j
   0.09486851-0.11966344j  0.09486851+0.11966344j  0.18824981+0.j        ]
 [ 0.48026722+0.j         -0.64158253+0.j         -0.64158253+0.j
  -0.19134887-0.00046822j -0.19134887+0.00046822j  0.03574450+0.j        ]
 [ 0.41658269+0.j          0.37510208-0.09238323j  0.37510208+0.09238323j
   0.72957413+0.j          0.72957413+0.j         -0.86095961+0.j        ]]

The columns of the second array here are the eigenvectors corresponding to the eigenvalues in the first array. The first eigenvector — the one whose eigenvalue has the largest absolute value — is indeed the one with eigenvalue 1, matching the fixpoint we found by iteration. But the column looks like it's all screwed up! That's because linalg.eig normalizes eigenvectors to a unit $L^2$ norm, not a unit $L^1$ norm, since that's what people usually want. (The overall sign is arbitrary, too; it could just as well have come out with every component negated.) So let's see what it looks like if we renormalize it in $L^1$:

In [14]:
eig0 = vecs[:,0]
print eig0; print eig0/sum(eig0)
[ 0.43592099+0.j  0.44061087+0.j  0.34604885+0.j  0.30312937+0.j
  0.48026722+0.j  0.41658269+0.j]
[ 0.17994229+0.j  0.18187821+0.j  0.14284428+0.j  0.12512770+0.j
  0.19824781+0.j  0.17195970+0.j]

Sure enough, that's our eigenvector!

Note also that all the other eigenvectors are of mixed sign or outright complex — there's no way to scale any of them into a uniformly non-negative real vector like we did with eig0. This is true in general of matrices like these, as one of the consequences of the Perron–Frobenius theorem mentioned earlier.

The third question is the most interesting. This matrix represents an ergodic Markov process, but it's easy to construct Markov transition matrices which aren't ergodic (and which thus violate the Perron–Frobenius positivity criterion). The simplest example is a deterministic periodic alternation between two states:

In [15]:
pt = array([[0, 1],
            [1, 0]])
print array([ndot(pt, k, [1, 0]) for k in range(8)])
[[1 0]
 [0 1]
 [1 0]
 [0 1]
 [1 0]
 [0 1]
 [1 0]
 [0 1]]

But you can have periodicity without determinism; for example:

In [16]:
bpt = array([[0, .1,  0,.9],
             [.1, 0, .9, 0],
             [0, .9,  0,.1],
             [.9, 0, .1, 0]])
print array([ndot(bpt, k, [1, 0, 0, 0]) for k in range(8) + [100, 101]])
[[ 1.         0.         0.         0.       ]
 [ 0.         0.1        0.         0.9      ]
 [ 0.82       0.         0.18       0.       ]
 [ 0.         0.244      0.         0.756    ]
 [ 0.7048     0.         0.2952     0.       ]
 [ 0.         0.33616    0.         0.66384  ]
 [ 0.631072   0.         0.368928   0.       ]
 [ 0.         0.3951424  0.         0.6048576]
 [ 0.5        0.         0.5        0.       ]
 [ 0.         0.5        0.         0.5      ]]

You can see that this Markov chain evolves not toward a single stationary distribution but toward an alternating pair. But such periodicity is very fragile; the slightest deviation from a strict separation between the groups of states will eventually disrupt it. Below you can see that after a hundred thousand iterations, the oscillations have mostly converged.

In [17]:
bt = array([[0, .1,  0,  .9],
            [.1, 0, .9, 0],
            [0, .9,  0, 0.0999],
            [.9, 0, .1, 0.0001]])
print array([ndot(bt, k, [1, 0, 0, 0])
             for k in range(8)
             + [100, 101, 100000, 100001]])
[[  1.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   1.00000000e-01   0.00000000e+00   9.00000000e-01]
 [  8.20000000e-01   0.00000000e+00   1.79910000e-01   9.00000000e-05]
 [  8.10000000e-05   2.43919000e-01   8.99100000e-06   7.55991009e-01]
 [  7.04783808e-01   1.61919000e-05   2.95050602e-01   1.49398201e-04]
 [  1.36077571e-04   3.36023922e-01   2.94975903e-05   6.63810502e-01]
 [  6.31031844e-01   4.01555883e-05   3.68736199e-01   1.91800623e-04]
 [  1.76636120e-04   3.94965764e-01   5.53009117e-05   6.04802299e-01]
 [  4.98767224e-01   1.23277650e-03   4.98628372e-01   1.37162778e-03]
 [  1.35774265e-03   4.98642257e-01   1.24652447e-03   4.98753476e-01]
 [  2.51740490e-01   2.48259510e-01   2.51615362e-01   2.48384638e-01]
 [  2.48372125e-01   2.51627875e-01   2.48247184e-01   2.51752816e-01]]
In [18]:
vals2, vecs2 = linalg.eig(bt)
print vals2; print vecs2
[-0.99995 -0.8      1.       0.80005]
[[-0.50001389  0.50014064  0.50011252  0.49998611]
 [ 0.50001389  0.49985933  0.49988745 -0.49998611]
 [-0.49998611 -0.49989059  0.49986245 -0.50001389]
 [ 0.49998611 -0.50010938  0.50013752  0.50001389]]

Here we can see that the third-produced eigenvector is the one with the highest eigenvalue (and thus the "first eigenvector" in the usual ordering) and that it corresponds to a nearly even distribution of probability across the items.

Markov's theorem tells us that the transition matrix of an arbitrary ergodic Markov chain has a largest eigenvalue, that this eigenvalue is 1, and that the corresponding eigenvector is the limiting distribution of the chain. The Perron–Frobenius theorem goes further and tells us that this is a general property of real all-positive square matrices (up to normalization): there will be a unique maximum eigenvalue, the associated eigenvector can be chosen real and all-positive, and no other eigenvector can be (because every other eigenvector must be orthogonal not to the first eigenvector itself, but to the all-positive left eigenvector belonging to the maximum eigenvalue, which forces mixed signs). Perhaps surprisingly, you may end up with complex numbers among the other eigenvalues and eigenvectors. Consider the eigenvalue decomposition of this almost-periodic transition matrix:

In [19]:
vals2, vecs2= linalg.eig([[0.1, 0.1, 0.1, 0.7],
                          [0.7, 0.1, 0.1, 0.1],
                          [0.1, 0.7, 0.1, 0.1],
                          [0.1, 0.1, 0.7, 0.1]])
print vals2; print vecs2
[  1.00000000e+00+0.j   -6.00000000e-01+0.j    3.98715349e-17+0.6j
   3.98715349e-17-0.6j]
[[  5.00000000e-01 +0.00000000e+00j   5.00000000e-01 +0.00000000e+00j
    8.14100306e-17 -5.00000000e-01j   8.14100306e-17 +5.00000000e-01j]
 [  5.00000000e-01 +0.00000000e+00j  -5.00000000e-01 +0.00000000e+00j
   -5.00000000e-01 -7.94923480e-17j  -5.00000000e-01 +7.94923480e-17j]
 [  5.00000000e-01 +0.00000000e+00j   5.00000000e-01 +0.00000000e+00j
   -2.17097933e-16 +5.00000000e-01j  -2.17097933e-16 -5.00000000e-01j]
 [  5.00000000e-01 +0.00000000e+00j  -5.00000000e-01 +0.00000000e+00j
    5.00000000e-01 +0.00000000e+00j   5.00000000e-01 +0.00000000e+00j]]

Here the stationary-distribution eigenvector has eigenvalue $1$, while the others have eigenvalues $-0.6$ and $\pm 0.6 i$. This first eigenvector is, as you'd expect from looking at the matrix, just a uniform distribution:

In [20]:
print vecs2[:,0]
[ 0.5+0.j  0.5+0.j  0.5+0.j  0.5+0.j]
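As a quick numerical check of why the non-Perron eigenvectors can never be scaled all-positive (my own aside, in modern NumPy): for a column-stochastic matrix, the left eigenvector for eigenvalue 1 is the all-ones vector, so every right eigenvector for any other eigenvalue must have entries summing to zero.

```python
import numpy as np

# Build a random all-positive column-stochastic matrix, like the one at the
# start of the notebook.
rng = np.random.default_rng(0)
pre = rng.random((6, 6))
t = pre / pre.sum(axis=0)

# Each column of t sums to 1, so ones @ t == ones: the all-ones vector is
# the left eigenvector for eigenvalue 1.  Any right eigenvector v with
# eigenvalue lam != 1 satisfies sum(v) == lam * sum(v), forcing
# sum(v) == 0, hence mixed signs (or complex entries).
vals, vecs = np.linalg.eig(t)
for lam, v in zip(vals, vecs.T):
    if abs(lam - 1.0) > 1e-9:
        print(abs(v.sum()))   # effectively zero for every non-Perron eigenvector
```
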

Going beyond aperiodic non-negative matrices: eigenjesus is feasible, but eigenmoses is inconsistent

But Perron–Frobenius doesn't tell us anything about what happens once elements in the matrix can go negative. Maybe we won't have a unique maximum eigenvalue (although this will almost never happen if the matrix is random), or maybe we will, but with multiple fixpoints corresponding to different eigenvectors.

Indeed, if we start considering the previous examples as reputation systems with negative attestation, all the real eigenvectors are fixpoints of the matrix if we renormalize them after each iteration — there are almost always as many possible outcomes of the linear reputation system, in some sense, as there are nyms being assigned reputations!
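To see that claim concretely, here is a sketch (my own toy example, symmetric so that all the eigenvectors are real): each eigenvector is a fixpoint of the multiply-then-renormalize iteration, up to a sign flip when its eigenvalue is negative.

```python
import numpy as np

def step(m, v):
    """One iteration of multiply-then-renormalize to unit L2 norm."""
    v = m.dot(v)
    return v / np.linalg.norm(v)

# A symmetric signed matrix, so np.linalg.eig returns real, unit-norm
# eigenvectors.  Feeding each one through the renormalized iteration gives
# it straight back, negated when its eigenvalue is negative.
m = np.array([[2.0, 1.0],
              [1.0, -1.0]])
vals, vecs = np.linalg.eig(m)
for lam, v in zip(vals, vecs.T):
    print(np.allclose(step(m, v), np.sign(lam) * v))   # True for each eigenvector
```

So the renormalized iteration has as many fixed directions as the matrix has real eigenvectors, and which one you land on depends on where you start.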

But what happens if we allow negative-weight edges in our graph? One question then is how to normalize the columns of our weight matrix — if we normalize in the $L^1$ norm as before, we have the perhaps perverse outcome that you obtain more power to help your friends by bashing on your enemies. Instead, let's try normalizing in the $L^2$ norm, since the interpretation as probabilities doesn't make any sense with the negative numbers anyway.

Here's a social graph of three people. Two of them love each other, the third loves only themself and is hated by both. This third person hates the first person and is indifferent toward the second.

In [21]:
st_denorm = array([[ 1,  1, -1],
                   [ 1,  1,  0],
                   [-1, -1,  1.]])

def normcols(m):
    return m / (m**2).sum(axis=0)**0.5

st = normcols(st_denorm)
print st
[[ 0.57735027  0.57735027 -0.70710678]
 [ 0.57735027  0.57735027  0.        ]
 [-0.57735027 -0.57735027  0.70710678]]

We would expect that if we begin with positive reputation in the first group, it should spread to both members and generate a negative reputation for the outsider:

In [22]:
print array([ndot(st, k, [1, 0, 0]) for k in range(10)])
[[  1.           0.           0.        ]
 [  0.57735027   0.57735027  -0.57735027]
 [  1.07491496   0.66666667  -1.07491496]
 [  1.76558227   1.00550262  -1.76558227]
 [  2.84834181   1.59988661  -2.84834181]
 [  4.58226768   2.56818587  -4.58226768]
 [  7.36846884   4.12831629  -7.36846884]
 [ 11.84796627   6.63767199 -11.84796627]
 [ 19.05046551  10.67268822 -19.05046551]
 [ 30.63138416  17.16067081 -30.63138416]]

A couple of interesting features here already. First, our attempt to normalize the columns in $L^2$ didn't really help; we still have exponentially growing reputation magnitude values, which will prevent us from reaching a fixpoint even if we do approach an eigenvector. Second, notice that the first person is getting a big reputation boost from the fact that the third person hates them.

Before we get to changing the normalization, let's see what happens if we do the same thing, but starting from the premise that the third person is okay.

In [23]:
print array([ndot(st, k, [0, 0, 1]) for k in range(10)])
[[  0.           0.           1.        ]
 [ -0.70710678   0.           0.70710678]
 [ -0.90824829  -0.40824829   0.90824829]
 [ -1.40230818  -0.76007966   1.40230818]
 [ -2.24003682  -1.2484552    2.24003682]
 [ -3.59802704  -2.01408181   3.59802704]
 [ -5.78434187  -3.24015255   5.78434187]
 [ -9.30044164  -5.21029428   9.30044164]
 [-14.95418264  -8.37777729  14.95418264]
 [-24.0449173  -13.47071335  24.0449173 ]]

As expected, we're now exponentially zooming off in a different direction, one where the first two people appear more and more evil, while the third person is better and better — both because they love themself, but also for being hated by the other two.

Instead of trying to normalize each column, let's renormalize the reputation vector after each iteration instead. But let's do it in $L^2$, because doing it in $L^1$ could give us division by zero, or worse, division by almost zero.

In [24]:
def ndotn(m, n, v):
    for i in range(n):
        v = m.dot(v)
        v /= v.T.dot(v)  # note: divides by the *squared* L2 norm; the direction is unaffected
    return v

print array([ndotn(m, k, [0, 0, 1])
             for m in st, st_denorm
             for k in range(10) + [100, 101, 102, 1001, 10000]])
[[ 0.          0.          1.        ]
 [-0.70710678  0.          0.70710678]
 [-0.5        -0.22474487  0.5       ]
 [-0.56472654 -0.30609331  0.56472654]
 [-0.47975637 -0.26738593  0.47975637]
 [-0.56095655 -0.31400886  0.56095655]
 [-0.47924709 -0.26845469  0.47924709]
 [-0.56086242 -0.31420639  0.56086242]
 [-0.47923439 -0.26848134  0.47923439]
 [-0.56086007 -0.31421132  0.56086007]
 [-0.47923407 -0.26848202  0.47923407]
 [-0.56086001 -0.31421145  0.56086001]
 [-0.47923407 -0.26848202  0.47923407]
 [-0.56086001 -0.31421145  0.56086001]
 [-0.47923407 -0.26848202  0.47923407]
 [ 0.          0.          1.        ]
 [-0.5         0.          0.5       ]
 [-0.44444444 -0.22222222  0.44444444]
 [-0.38135593 -0.22881356  0.38135593]
 [-0.42399116 -0.26091763  0.42399116]
 [-0.37866857 -0.23388353  0.37866857]
 [-0.42355207 -0.26174566  0.42355207]
 [-0.3786113  -0.23399153  0.3786113 ]
 [-0.42354273 -0.26176329  0.42354273]
 [-0.37861008 -0.23399383  0.37861008]
 [-0.42354252 -0.26176368  0.42354252]
 [-0.37861005 -0.23399388  0.37861005]
 [-0.42354252 -0.26176368  0.42354252]
 [-0.37861005 -0.23399388  0.37861005]
 [-0.42354252 -0.26176368  0.42354252]]

So, we still preserve this behavior that if you start believing the third person is okay, then you keep thinking the other two are baddies, but there are a couple of other anomalies there. One is that under this renormalizing procedure, our signed transition array gives us different results according to whether we've normalized its columns or not. The other is that we aren't finding an eigenvector; we're oscillating back and forth between two states, although they're only a little different.

What do the actual eigenvectors of this matrix look like?

In [25]:
svals, svecs = linalg.eig(st)
print svals; print svecs
[  1.60790686e+00  -2.67640061e-17   2.53900459e-01]
[[  6.57402760e-01   7.07106781e-01  -4.39114577e-01]
 [  3.68297737e-01  -7.07106781e-01   7.83809146e-01]
 [ -6.57402760e-01  -1.54274026e-16   4.39114577e-01]]

So our first eigenvector looks pretty much like we'd expect: the third person, because they're the hated minority, is computed to have a negative reputation, exactly as negative as the positive reputation of the minority-hated person in the majority; the majority person the minority doesn't care about is computed to have a positive reputation, because the first person loves them, but less so.

The associated eigenvalue tells us how we could renormalize the entire transition matrix to get something that doesn't blow up when you iterate it.

In [26]:
st_n = st / svals.max()
print st_n
[[ 0.35906947  0.35906947 -0.4397685 ]
 [ 0.35906947  0.35906947  0.        ]
 [-0.35906947 -0.35906947  0.4397685 ]]
In [27]:
minority_report = array([ndot(st_n, k, [0, 0, 1]) for k in range(8) + [100, 101]])
print minority_report
[[ 0.          0.          1.        ]
 [-0.4397685   0.          0.4397685 ]
 [-0.35130377 -0.15790744  0.35130377]
 [-0.33733453 -0.1828422   0.33733453]
 [-0.33512869 -0.18677959  0.33512869]
 [-0.33478037 -0.18740133  0.33478037]
 [-0.33472537 -0.18749951  0.33472537]
 [-0.33471668 -0.18751501  0.33471668]
 [-0.33471505 -0.18751792  0.33471505]
 [-0.33471505 -0.18751792  0.33471505]]

So that somewhat unjustifiable normalization of the state transition matrix does indeed get us to a stable equilibrium, which presumably is the principal eigenvector — although this time it's on the negative side. Indeed, if we divide by the eigenvector we get a constant:

In [28]:
first_eigenvector = svecs[:,0]
minority_report[-1] / first_eigenvector
Out[28]:
array([-0.50914762, -0.50914762, -0.50914762])

And of course if we start with a positive reputation for the majority group, we end up computing opposite reputations.

In [29]:
print array([ndot(st_n, k, [1, 0, 0]) for k in range(8) + [100, 101]])
[[ 1.          0.          0.        ]
 [ 0.35906947  0.35906947 -0.35906947]
 [ 0.41576922  0.25786177 -0.41576922]
 [ 0.42472253  0.24188032 -0.42472253]
 [ 0.42613632  0.23935673 -0.42613632]
 [ 0.42635957  0.23895824 -0.42635957]
 [ 0.42639482  0.23889532 -0.42639482]
 [ 0.42640039  0.23888538 -0.42640039]
 [ 0.42640143  0.23888352 -0.42640143]
 [ 0.42640143  0.23888352 -0.42640143]]
In [30]:
ndot(st_n, 100, [1, 0, 0]) / first_eigenvector
Out[30]:
array([ 0.64861522,  0.64861522,  0.64861522])

In the Markov-chain version of the system, we had the constraint that the distribution vector was a valid probability distribution, which implies that it summed to 1, so only one multiple of the principal eigenvector was permitted. But in this version of the system, any possible multiple of that eigenvector seems equally valid. For example, this one, which comes down on the side of the majority, but more lightly:

In [31]:
print array([ndot(st_n, k, [0, 1, 1]) for k in range(8) + [100, 101]])
[[ 0.          1.          1.        ]
 [-0.08069902  0.35906947  0.08069902]
 [ 0.06446544  0.09995433 -0.06446544]
 [ 0.08738799  0.05903812 -0.08738799]
 [ 0.09100763  0.05257715 -0.09100763]
 [ 0.0915792   0.05155691 -0.0915792 ]
 [ 0.09166946  0.05139581 -0.09166946]
 [ 0.09168371  0.05137037 -0.09168371]
 [ 0.09168638  0.0513656  -0.09168638]
 [ 0.09168638  0.0513656  -0.09168638]]

And this one, where the two groups' reputations cancel out 99.999%:

In [32]:
print ndot(st_n, 100, [0, .785, 1])
[  1.00734560e-05   5.64346740e-06  -1.00734560e-05]

Even if we limit ourselves to $L^2$-normalized eigenvectors, there are still two candidate versions of the first eigenvector — the sign of an eigenvector is arbitrary, and remember that the iteration above converged to a negative multiple of the eigenvector rather than a positive one. In the Markov-chain interpretation, there was a principled way to resolve the ambiguity, but in this one there doesn't seem to be. The eigenvalue computation algorithm provided by Numpy happened to give us the version of the eigenvector that favors the majority, but its negation would have been equally valid.

It happens that none of the three eigenvectors in this case can be scaled to be all positive. An all-positive eigenvector could appear by chance — one of the three eigenvalues here is zero up to rounding error, so its eigenvector is essentially arbitrary — but the chance of that happening for a larger matrix is probably very small, order $\frac{N}{2^N}$ for $N$ participants. And the fact that none of the three is all positive shows that picking out the all-positive eigenvector isn't a valid way to cut the knot.
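A rough Monte Carlo check of that estimate (my own, not from the text):

```python
import numpy as np

def has_all_positive_eigenvector(m, tol=1e-9):
    """True if some eigenvector of m can be scaled to be real and all-positive."""
    vals, vecs = np.linalg.eig(m)
    for v in vecs.T:
        if np.abs(v.imag).max() < tol:      # skip complex eigenvectors
            r = v.real
            if (r > tol).all() or (r < -tol).all():
                return True
    return False

# Estimate how often a random signed matrix (entries uniform in [-1, 1])
# has any all-positive eigenvector, for a few matrix sizes.  The observed
# frequency falls off as the matrix gets bigger.
rng = np.random.default_rng(42)
for n in (2, 4, 6):
    hits = sum(has_all_positive_eigenvector(rng.uniform(-1.0, 1.0, (n, n)))
               for _ in range(1000))
    print(n, hits / 1000.0)
```
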

What happens if our oppressed minority turns the other cheek and makes nice with their oppressors, Dalai-Lama-style? Our attempt at normalization of the transition-Lama matrix rotates us entirely off the real line, because there are no nonzero real eigenvalues!

In [33]:
tl = array([[1, 1, 1],
            [1, 1, 0],
            [-1, -1, 1]])
tlvals, tlvecs = linalg.eig(tl)
tl_n = tl / tlvals.max()
print tl_n
[[ 0.5-0.28867513j  0.5-0.28867513j  0.5-0.28867513j]
 [ 0.5-0.28867513j  0.5-0.28867513j  0.0+0.j        ]
 [-0.5+0.28867513j -0.5+0.28867513j  0.5-0.28867513j]]
In [34]:
print tlvals; print tlvecs
[  1.50000000e+00+0.8660254j   1.50000000e+00-0.8660254j
  -3.83543104e-17+0.j       ]
[[ -2.69370030e-16+0.4472136j  -2.69370030e-16-0.4472136j
   -7.07106781e-01+0.j       ]
 [  3.87298335e-01+0.2236068j   3.87298335e-01-0.2236068j
    7.07106781e-01+0.j       ]
 [ -7.74596669e-01+0.j         -7.74596669e-01+0.j          7.13716803e-17+0.j       ]]

I think this shows that Singer-Clark's "eigenmoses rating" does not deterministically produce a real-valued set of morality ratings, and on some simple examples, it deterministically produces a complex-valued set of morality ratings, for which he proposes no interpretation.

Furthermore, I do not see a principled way that these problems can be overcome in the framework of an egalitarian reputation metric.