Full Non-Indexical Anthropics are Fine
Or: How I Learned to Stop Worrying and Love Time-Inconsistency
The Full Non-Indexical (FNI) approach to anthropics is, imo, the best approach to anthropic reasoning. However, it has a seemingly fatal flaw which I’m going to try to argue isn’t a flaw at all - namely, that it leads agents to become time-inconsistent. I’m going to give a very brief rundown of FNI, but I intend this to be a defence of FNI against a specific objection rather than an overarching argument for the position. I’ll explore a little of what I like about FNI, but if you already don’t buy into it for other reasons this may not do much for you.
Brief FNI Primer
Essentially, the core idea here is that approaching anthropic problems using indexicals (“I, here, now” etc.) always results in muddy confusions and questions that are, under the hood, ill-posed. Classical theories of anthropics seem to approach problems by trying to locate “yourself” among a bunch of possible observers in various different ways. In contrast, FNI tries to take a more “objective” approach of conditioning on the entirety of the event as evidence, leaving out any indexical concepts. For example, instead of conditioning on “I am seeing a coin land Tails”, which seems to have all sorts of murky asterisks (what is this degree of freedom of where “you” slot into the possible observers? Could “I” have been one of the other possible observers? What facts change in the world if “I” am a different observer? What’s the “I-ness” that can get shuffled around? Man, didn’t Hume sort this all out ages ago…), FNI conditions on something more like “There exists a conscious observer making the following observation: [input literally every detail here]”. This seems like a more rigorous and clean way of framing the data and hey, if there isn’t anything dodgy going on with this indexical lingo, the results should agree. If not…
FNI has a lot going for it. As well as, I think, being more rigorous and coherent, it generally gives more sensible answers in edge-cases than the leading theories (SIA and SSA). In particular, it easily avoids the weird paradoxical implications these theories have, such as the presumptuous philosopher problem. For example, a common objection to SIA is that one can get extremely confident that one exists in a massive/infinite universe just from the “armchair” - since there are overwhelmingly more agents having your exact subjective experience in massive/infinite tiled universes, it seems overwhelmingly likely that you’re in one of those rather than somehow finding yourself in a universe with only one conscious observer. No telescopes needed! By contrast, FNI updates purely on the existence of your subjective experiences, since that’s the ground-truth you have: larger universes are more likely only insofar as they make it more likely that your exact observation exists at all, so an infinite tiled universe isn’t preferred over a finite one that definitely contains a copy of you (the likelihood is just 1 in both cases).
What I particularly like about this approach is that it avoids the presumptuous philosopher problem whilst maintaining a sensible degree of preference for larger worlds: a larger universe often does in fact make your specific observation more likely to occur, but since we’re concerned with the likelihood of that observation existing rather than with “counting all possible observers”, this preference gets bounded in a more sensible way (a quick sketch of this is below). This, in my experience, is a common theme with FNI - there’s a core of an intuition that makes sense, which SIA or SSA take to some confounding logical conclusion, whilst FNI maintains a less bewildering middle-ground. SSA has its own presumptuous problems which I think FNI deals with in a sensible middle-ground way, but I’m going to skip over them for brevity (and so I don’t have to think about what a “reference class” is).
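To make that “bounded” claim concrete, here’s a minimal sketch with entirely made-up numbers (the per-region match probability p_match and the universe sizes are illustrative assumptions, not anything from the anthropics literature). If each of N independent regions has a tiny chance p of containing an observer making your exact observation, the FNI likelihood of your evidence is 1 - (1 - p)^N, which grows with N but saturates at 1:

```python
# A minimal sketch of how FNI bounds the preference for larger worlds.
# Assume each of N causally independent regions has a small probability
# p_match of containing an observer making your exact observation.
# The numbers below are purely illustrative.

def fni_likelihood(num_regions: int, p_match: float) -> float:
    """P(there exists at least one observer with your exact observation)."""
    return 1 - (1 - p_match) ** num_regions

p_match = 1e-6  # illustrative chance per region of producing your observation

for n in [10**5, 10**6, 10**7, 10**9]:
    print(f"N = {n:>13,}: likelihood = {fni_likelihood(n, p_match):.4f}")

# Larger universes do raise the likelihood of your evidence, but it
# saturates at 1, so the update in favour of bigger worlds is bounded -
# unlike SIA-style observer-counting, where the weight given to a world
# keeps growing with the number of observers in it.
```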
The Actual Problem
However, FNI seems on the face of it to have its own weird implication, one which is arguably more unacceptable than the presumptuous-philosopher-style issues plaguing the more popular proposals - namely, it seems to be time-inconsistent: in some cases, an agent’s rational expectation for their future credence in an event is different from their current (rational) credence. This seems to violate some pretty sacrosanct things, and in theory opens the agent up to all sorts of nasty money-pumps. Stuart Armstrong has written up a great illustration of this, which I’ll briefly run through:
Coin toss with random number generation
God is going to flip a fair coin. If it’s Heads, they’ll create one copy of you in a room; if Tails, 100 identical copies of you in identical rooms. God will then also display a random number between 1 and 100 on the wall of each room; the numbers are drawn uniformly and independently for each room.
The wrinkle of these extra random numbers seems irrelevant, and indeed the classic theories deliver unchanged verdicts. Let’s consider how our FNI agent behaves in this example. Upon waking up in a room, the agent surmises that the likelihood of the evidence (the existence of this specific conscious experience) is equal under both hypotheses - namely, it’s 1. Our evidence is just the existence of an observer in a room like this, which we know to be guaranteed under either hypothesis. However, suppose WLOG that the agent later sees the number 23 on the wall. The likelihood of the evidence under Heads is just 1/100, whereas the likelihood of the existence of this event under Tails is:
1 - (99/100)^100 ≈ 1 - 1/e ≈ 0.63
In other words, since the evidence being conditioned on is now the existence of an observer seeing this number, the fact that there are more numbers being drawn in the Tails case does matter for the likelihoods - the likelihood ratio is roughly 63:1 in favour of Tails. The issue is of course that the above argument clearly holds for any number seen, and therefore the agent, before seeing the number, knows that whatever it observes will update it towards existing in a Tails world. But since it expects its future credence in “The coin landed Tails” to be higher than it is now, it should rationally assign that higher probability now. Contradiction.
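As a sanity check on those numbers, here’s a quick calculation of the FNI update (the only assumptions are the ones in the setup above: 100 rooms under Tails and uniform, independent number draws):

```python
# FNI update in the numbered-rooms case. The evidence conditioned on is
# "there exists a conscious observer seeing the number k", for whichever
# number k actually gets seen.

N = 100  # numbers 1..100, and 100 rooms under Tails

p_evidence_heads = 1 / N                 # one room, one draw, must show k
p_evidence_tails = 1 - (1 - 1 / N) ** N  # at least one of 100 draws shows k

prior_tails = 0.5
posterior_tails = (p_evidence_tails * prior_tails) / (
    p_evidence_tails * prior_tails + p_evidence_heads * (1 - prior_tails)
)

print(f"P(evidence | Heads) = {p_evidence_heads:.3f}")  # 0.010
print(f"P(evidence | Tails) = {p_evidence_tails:.3f}")  # ~0.634
print(f"P(Tails | evidence) = {posterior_tails:.3f}")   # ~0.984

# The calculation is identical for every number k, which is exactly why
# the agent knows in advance that it will update towards Tails.
```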
This does seem like a pretty troubling issue on the face of it. In particular, violations of time-consistency leave agents open to money-pumping - if you’ll predictably change your probability in the future, then you leave yourself open to rationally taking bets now which you will predictably pay money to get out of later. I think critics of FNI probably see this as much more of an issue than things like the presumptuous philosopher, which loosely speaking “merely” seem to lead agents to acting rationally with strange-seeming priors, rather than acting irrationally outright.
FNI Response
I don’t believe the failure of time-consistency is actually an issue. I agree that the agent predictably goes through the credence changes as described; I just think that this is a totally fine phenomenon, as I’ll argue below. The reason, I think, is quite subtle, but it boils down to “expectations of future credences for an event don’t have to be time-consistent if there is a dependence between the outcome of the event and whether or not the credence is defined/there is an observer to update on the outcome”. To start with though, I want to give a pretty simple toy example where I think everyone agrees we should be time-inconsistent, in the sense that your expectation for your future credence is rationally different from what your credence is now. The actual reasons why time-inconsistency is ok in this toy case are - I think - subtly different to why it’s ok in the anthropic case, but I hope it’s a useful intuition pump nonetheless:
Coin Toss with a Killer
You’re being held captive by a crazy murderer. They tell you that they’re going to flip a fair coin to decide your fate. They’re going to flip the coin, look at it privately, and then murder you if it’s Tails. Then if you survive, they’re going to ask you what your probability of Heads is.
It seems clear that your credence in Heads at the start is 0.5, and your expectation for your future credence in Heads (should you have one) is 1. Moreover, this does not rationally require you to update your present credence. I know that future versions of me will in expectation be (rationally) overwhelmingly confident in Heads without this affecting my present assessment of the coin-toss chances whatsoever. This example probably seems trivial, but the reason that it’s fine for our credence to predictably change is that the law undergirding why time-consistency matters - the Law of Total Expectation (LTE) - doesn’t apply. More specifically, LTE requires both random variables to be defined over the same space - in other words, “How the coin landed” and “My credence in how the coin landed” need to be defined on exactly the same outcomes. But on a reasonable interpretation of how we define these things, “My credence in how the coin landed” is simply not defined in the scenarios in which I no longer exist. And since whether or not that variable is defined is not independent of the outcome of the other random variable (how the coin landed), it’s totally unproblematic that the conditional expectation is different.
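If it helps, here’s a small simulation of the killer case (my own sketch - the “credence” recorded for a survivor is hard-coded to 1, since surviving implies Heads). It shows the unconditional probability of Heads and the expected future credence coming apart, precisely because the future credence only exists on the half of the outcome space where the agent survives:

```python
# Coin-toss-with-a-killer: the variable "my credence in Heads after the
# reveal" is only defined in worlds where I survive, and survival is
# perfectly correlated with the coin. So E[future credence] = 1 can
# coexist with P(Heads) = 0.5 without violating the Law of Total
# Expectation, which requires both variables to live on the same outcomes.

import random

random.seed(0)
trials = 100_000

heads_count = 0
surviving_credences = []  # only populated in worlds where I'm alive

for _ in range(trials):
    heads = random.random() < 0.5
    heads_count += heads
    if heads:  # Tails means I'm dead; no credence exists in those worlds
        surviving_credences.append(1.0)  # a survivor is certain of Heads

print(f"P(Heads)                     ~ {heads_count / trials:.3f}")
print(f"E[future credence | defined] ~ {sum(surviving_credences) / len(surviving_credences):.3f}")
```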
With this in mind, let’s revisit the original case - suppose I, as an FNI proponent, have just woken up in my room and assign credence 0.5 to Heads. We know that in a moment I will see a number and change my credence to overwhelmingly favour Tails. How do we formalise my supposed failing here? I think the natural response is to say something like:
Whatever value the “What number I see” variable takes, I will update towards Tails. Moreover, “What number I see” is going to take some value! I’m already sitting here in the room waiting, so I know I’m going to see something. We can’t appeal to it being undefined in some cases.
The issue here is that the FNI proponent is just going to respond that reasoning about a “What number I see” variable is smuggling in exactly the indexical fuzziness that they outright reject. On the FNI view, there are a bunch of events happening, and we can update on the occurrence/non-occurrence of any specific event, but asking which conscious experience “you’re” having - or which of the possible observers you are - is ill-posed. We can be uncertain about what number shows up on a given specific wall, but not about which conscious experience we “slot into”. What we can do on the FNI view is take a well-defined event such as “There exists a conscious observer looking at the number 23” and update on that. And for any such binary variable, the FNI agent obviously respects the laws of probability - if such an event occurs, it’s evidence for Tails, and if it does not occur, it’s correspondingly evidence for Heads, with the expected posterior averaging back to the prior. But crucially, by the set-up, no observer can ever condition on this event not occurring. So in a way the problem is very analogous to the coin-toss case above, except that rather than the asymmetry coming from you being dead in some outcomes and therefore unable to update, it comes from the fact that the agents can update on the occurrence of certain events, but not on their non-occurrence. With respect to any given variable “There is a conscious observer looking at number X”, the expected value of the updated credence for the coin matches the prior, but by definition there is only an observer around to update on the outcome of this variable in the cases where it resolves positively, and therefore there is a dependence between the outcome of the variable and whether the agent’s credences are defined.
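To see that nothing probabilistically illegal is going on, here’s a quick check (again a sketch of mine, using the same numbers as before) that, for the well-defined, non-indexical event E = “there exists a conscious observer looking at the number 23”, the posterior averaged over E occurring and not occurring is exactly the prior - it’s just that no observer is ever in a position to condition on the non-occurrence branch:

```python
# The FNI agent's credences about the well-defined event E behave
# perfectly conventionally: conditioning on E favours Tails, conditioning
# on not-E favours Heads, and the law of total probability holds.

N = 100
prior_tails = 0.5

p_E_heads = 1 / N                 # P(E | Heads)
p_E_tails = 1 - (1 - 1 / N) ** N  # P(E | Tails)

p_E = p_E_heads * (1 - prior_tails) + p_E_tails * prior_tails

post_tails_given_E = p_E_tails * prior_tails / p_E
post_tails_given_not_E = (1 - p_E_tails) * prior_tails / (1 - p_E)

print(f"P(Tails | E)     = {post_tails_given_E:.3f}")      # ~0.984, favours Tails
print(f"P(Tails | not E) = {post_tails_given_not_E:.3f}")  # ~0.270, favours Heads

total = post_tails_given_E * p_E + post_tails_given_not_E * (1 - p_E)
print(f"Averaged posterior = {total:.3f}")                 # 0.500, the prior
```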
Taking a step back, what does this mean? To recap, the FNI proponent totally agrees that, upon waking up, their credence in Heads is 0.5, and that they expect their future credence to be much lower. But I think they should reject that this is anything problematic - time-consistency of credences can clearly be broken in cases where there is a dependence between the outcome of a variable and whether or not those credences are defined. We see this in the coin-toss-killer case, and the same thing is occurring in the anthropic case, albeit in a more confusing way. These cases break time-consistency because of a selection bias - observers only exist to receive evidence bearing on a hypothesis in cases where that evidence favours one hypothesis.
Doesn’t FNI still get money-pumped?
I honestly found the above a bit confusing for a while, so at the risk of sounding incredibly disingenuous, I think it’s the kind of argument that warrants mulling over a few times. Setting that aside, I think the obvious objection here is “Ok, well you’ve explained why time-inconsistency is expected and accepted by FNI, but doesn’t this still end up with agents getting money-pumped? If ultimately your theory is just galaxy-braining why getting money-pumped is to be expected, then it still sounds like it sucks”. To which I think the answer is no. There is a whole separate can of worms here around Bayesianism and the relationship between credences and acceptable betting odds, but I think the bottom line is that time-inconsistent agents will avoid money-pumping bets in these scenarios for the same reason they’re time-inconsistent in the first place: the space of possibilities where the outcome of the bet is defined and the space of outcomes where the observer is around to update and receive payoffs come apart.
For example, an obvious money-pump for an agent who assigns credence 0.5 to an outcome and will predictably assign credence ~1 in the future is to buy shares in that outcome from them at an implied probability of 0.51, and sell them back at 0.99 in the future. The agent presumably thinks both arms are good deals, despite predictably giving away money for free. Except, in the coin-toss-killer case, the agent will refuse the first arm: the only scenarios in which selling the Heads share cheaply pays off are Tails scenarios, and in those they’ll be too dead to enjoy the money. They expect to only be around to care about money in the scenarios in which the initial arm of the money-pump is bad, and so they’ll refuse (a sketch of this is below). The money-pump response for the anthropic case is similar, albeit a bit more subtle. We offer the first arm of the trade (buying shares of Tails from the agent at 0.51) upon the agent waking up, and then sell them back once the agent has seen the number and believes they exist in a Tails world. I think in this case the FNI agent still refuses the first arm, because although some future events do make Tails less likely (e.g. “There exists no conscious observer observing 23”), these can never be observed. If the only possible observables that settle a bet are ones which settle it in a particular direction, you shouldn’t take the bet even if you think the stated odds are fair!
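Here’s a rough sketch of that refusal in the killer case, with made-up prices (the 0.51 offer for the first arm is the same illustrative number as above). Selling the “pays 1 if Heads” share looks fair at credence 0.5, but restricted to the worlds where the agent is alive to spend any money, it’s a guaranteed loss:

```python
# Why the first arm of the money-pump gets refused in the killer case.
# The payoff forgone by selling the Heads share only ever matters in
# worlds where the agent is alive, and the agent is alive only if the
# coin landed Heads - in which case the share was worth 1.

price = 0.51  # bookie's offer for the "pays 1 if Heads" share

# Naive evaluation at the current credence of 0.5: selling looks good.
naive_value_of_selling = price - 0.5 * 1.0
print(f"Naive value of selling:     {naive_value_of_selling:+.2f}")

# Evaluation restricted to worlds where the agent is around to care
# about money (i.e. Heads): the share sold was worth 1 there.
value_of_selling_if_alive = price - 1.0
print(f"Value of selling, if alive: {value_of_selling_if_alive:+.2f}")

# The sale is bad in every world the agent will ever be around to care
# about, so the money-pump never gets off the ground.
```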
So I think the FNI agent avoids money-pumping despite being time-inconsistent, although it requires them to turn down bets which - on the face of it - are good in light of their FNI-yielded credences. There’s a lot to unpack here around how betting odds come apart from credences, and what it means for an agent to have credence X but rationally be required to act as though they have credence Y qua bets, but I think that’s a whole other essay.
