I have been reading through the Effective Altruism Handbook, and what do you know, there’s a post by Eliezer Yudkowsky right there in chapter 1. It’s not that particular post I have an issue with, but at the end of it, it links to another one, Circular Altruism, with the encouragement to “shut up and multiply”. In it Yudkowsky lays out a case for relying on a utilitarian calculus rather than intuition or emotion when it comes to ethical quandaries. I have issues with both of its central thought experiments, but let’s deal with the second one first, because it’s the one that really caught my eye.
Yudkowsky lays out the second scenario thus:
Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years? I originally asked this question with a vastly larger number - an incomprehensible mathematical magnitude - but a googolplex works fine for this illustration.
Yudkowsky argues that the math shows that, contrary to intuition, it is worse for a googolplex people to get dust specks in their eyes than for one person to be tortured for 50 years. If there were such a thing as utility, this would be true, as the negative utility of getting a dust speck in your eye would have to be very tiny indeed not to balloon into a massive number when multiplied by a googolplex. But this is not actually the case.
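To make the scale of that multiplication concrete, here is a minimal Python sketch of the calculation being disputed, done in base-10 logarithms because a googolplex is far too large to store directly; the per-speck disutility is a made-up placeholder, not a number anyone has actually proposed:

```python
import math

# A googolplex is 10**(10**100); work in base-10 logarithms, since the
# number itself cannot be stored as an ordinary int or float in practice.
log10_googolplex = 1e100  # log10(googolplex) = 10**100

# Hypothetical, made-up disutility of a single dust speck (any tiny positive number).
speck_disutility = 1e-12
log10_speck = math.log10(speck_disutility)  # -12.0

# Total disutility = speck_disutility * googolplex; in log10 terms the
# tiny per-speck term is completely swamped by the 10**100.
log10_total = log10_speck + log10_googolplex
print(log10_total)  # 1e+100, i.e. the total is itself astronomically large
```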
Before we get into the nitty-gritty, here is a very simple modification of the thought experiment that clarifies what I mean:
Instead of trying to solve it in your armchair, put it to a vote. Do you really think a googolplex people would vote to have someone tortured for 50 years to avoid getting a dust speck in their eye? Surely these people themselves are the best judges of the negative utility they are about to experience.
What does utility mean? We pretend there is a number you can assign to any experience: a negative one for negative experiences and a positive one for positive experiences. The concept posits a continuum between the two, and that when you add up enough negative utility, you get something that could fairly be called “suffering”.
Naively, it can seem like adding up enough discomfort nets you suffering. But think of actual examples of ordinary sensory input becoming pain through increased intensity: overeating, listening to loud music, looking at the sun. The pain starts because the body is actually being damaged (yes, you can rupture your stomach by overeating), not because the stimulus made it past some threshold and became pain. Pain is created through a different kind of input entirely, not through the addition of things that are not pain. It’s not like cranking the volume on a radio; it’s like tuning into a different channel, or better yet, like having the ordinary programming interrupted by an emergency broadcast.
Because of this, no amount of dust specks adds up to suffering. In a twist of serendipity, while I was writing this essay I actually did get a dust speck in my eye, which was lucky, since it had been a while and I couldn’t remember how it felt. It was mildly uncomfortable for about five seconds, then it was gone. It was so mild that I don’t really remember it all that well, despite it happening only a few days ago.
A googolplex people getting a dust speck in their eye would be nothing; it would create no suffering in the world, particularly when you consider the additional issue that there is no underlying being that experiences what it’s like to get a googolplex dust specks in a googolplex eyes. The simplistic utilitarian arithmetic proposed here makes it sound like it would create some kind of Mount Everest of suffering, which the people voting on it would simply know not to be true. Hell, based on my experience, were I such a being[1], I would choose to take a dust speck in each of those eyes over torturing someone for 50 years. I think it would be mildly unpleasant - a being with that many eyes would have to be capable of handling a lot of sensory input anyway - and since no damage would be incurred (nor would any nerve get pinched), there would be no pain.
Even further, if I had to choose between the dust-specks-in-my-googolplex-eyes and being tortured myself for 50 years, it’s obvious that the dust specks are the sane choice, because 50 years of torture is actually suffering, unlike the dust specks. Non-pain does not add up to pain, no matter the amount. And even if it somehow did add up, it’s one and done, unlike being tortured for 50 years. A ripple in a pond versus a tsunami. Which reveals another way utility doesn’t work: it seems to be entirely insensitive to time, at least as presented here.[2]
The other thought experiment, the one that opens the essay, is this:
Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
Save 400 lives, with certainty.
Save 500 lives, with 90% probability; save no lives, 10% probability.
Most people choose option 1. Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1. (Lives saved don’t diminish in marginal utility, so this is an appropriate calculation.)
“What!” you cry, incensed. “How can you gamble with human lives? How can you think about numbers when so much is at stake? What if that 10% probability strikes, and everyone dies? So much for your damned logic! You’re following your rationality off a cliff!”
Ah, but here’s the interesting thing. If you present the options this way:
100 people die, with certainty.
90% chance no one dies; 10% chance 500 people die.
Then a majority choose option 2. Even though it’s the same gamble. You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
I actually went with option 2 without calculating anything, as 90% odds are pretty damn good. And I don’t think calculating the expected value is necessary to figure out that it’s the same gamble: it’s just that the first framing doesn’t scan as “500 people die with 10% probability”, even though that is what it’s saying, while the second framing says it outright.
But those are not the real problems I have. The real problem is that you can’t use expected value alone to make such decisions. Consider this scenario instead:
Save 1 million lives, with certainty.
Save 10 billion lives, with 1% odds.
Using EV alone means you pick option 2, under which, 99% of the time, no one gets saved at all. EV doesn’t really price in risk all that well.
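To spell out the arithmetic, here is a minimal Python sketch of the two options above, comparing them on expected value alone and on the chance that nobody is saved. The numbers and the helper names (`expected_value`, `prob_nobody_saved`) are just my own illustration of the scenario:

```python
# Option 1: save 1 million lives with certainty.
# Option 2: save 10 billion lives with 1% probability, nobody otherwise.
option_1 = [(1.0, 1_000_000)]
option_2 = [(0.01, 10_000_000_000), (0.99, 0)]

def expected_value(outcomes):
    """Expected number of lives saved, given (probability, lives) pairs."""
    return sum(p * lives for p, lives in outcomes)

def prob_nobody_saved(outcomes):
    """Probability that the outcome saves zero lives."""
    return sum(p for p, lives in outcomes if lives == 0)

print(expected_value(option_1))     # 1,000,000
print(expected_value(option_2))     # 100,000,000 -- EV says take the gamble
print(prob_nobody_saved(option_2))  # 0.99 -- but 99% of the time no one is saved
```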
But don’t take it from me: here’s Investopedia on the expected return of a portfolio, which is a more complex measure similar to expected value:
To make investment decisions solely on expected return calculations can be quite naïve and dangerous. Before making any investment decisions, one should always review the risk characteristics of investment opportunities to determine if the investments align with their portfolio goals.
Since expected value alone is not good enough for a portfolio, why is it being sold here as the solution in life-and-death situations?
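Here is a rough sketch of the kind of risk check the Investopedia passage is gesturing at, applied to the two rescue options above instead of a portfolio: expected value again, plus the standard deviation of lives saved as a crude stand-in for the “risk characteristics”. The numbers are the ones from my scenario, not anything from the original essay:

```python
import math

# (probability, lives saved) for the two options from the scenario above.
certain_option = [(1.0, 1_000_000)]
gamble_option = [(0.01, 10_000_000_000), (0.99, 0)]

def mean_and_std(outcomes):
    """Expected value and standard deviation of lives saved."""
    mean = sum(p * x for p, x in outcomes)
    variance = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean, math.sqrt(variance)

for name, option in [("certain option", certain_option), ("gamble", gamble_option)]:
    mean, std = mean_and_std(option)
    print(f"{name}: expected {mean:.3g} lives saved, std dev {std:.3g}")

# The gamble has 100x the expected value, but its standard deviation (~1e9)
# is about ten times that expected value: the typical outcome is nothing at all.
```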
Circular Altruism isn’t really doing rationality. It’s more like it showcases a conceit that goes as follows:
Numbers are rational.
I am using numbers.
Therefore, this is the rational solution. QED.
It’s a cargo cult of reason, not the actual thing.
So it’s very grating that the essay is presented as a showcase of the superiority of reason over emotion, when in fact reason alone is enough to prevail against these views.
I figured it was worth writing this since Circular Altruism is one click away from chapter one of the EA handbook, linked to with the admonition to ‘shut up and multiply’. But multiplication is entirely too simple to be used as is in ethical conundrums. I expect ethical math to be a goddamn mess on the order of quantum mechanics, because there are so many features to take into account before you get anything representing reality.
Does this mean anything for EA?
EA is ultimately much bigger than these thought experiments and even utilitarianism. I don’t know if a utilitarian calculus was somehow involved in the production of the rather mundane list of most pressing world problems, for example. I’m guessing not, since there wasn’t anything really counterintuitive there, and surely the point of relying on math should be to discover counterintuitive truths.
There are, of course, concepts similar to utility that are actually in use, namely the disability-adjusted life-year (DALY) and the quality-adjusted life-year (QALY), but both are definitely narrower than the concept of utility.
So perhaps the main use of this essay should be to knock out the link to Circular Altruism, or to write and then link to another piece that explores the same issue without relying on specious arguments. If it ticked me off enough to write this whole thing, it might turn other people off EA entirely.[3]
But what about circular altruism?
It’s true, I didn’t address the central point of the essay: that if you rely on emotion, you can end up with circular preferences. At least in the particular scenario Yudkowsky offers to showcase this, I think I would pick both choices just to make the point that it doesn’t matter, because dust specks are irrelevant. In other, more subtle scenarios, I don’t know what the math would look like. Consider:
Torture someone for 10 years.
Torture two people for 1 year.
Even before you actually calculate anything, it seems you should pick option 2. But consider this angle on it:
Destroy 1 person.
Destroy 2 people.
Because really, I’m not sure anyone comes back from being tortured for a year. This is possibly just a restatement of the initial problem.
Anyway, it’s all how-many-angels-can-dance-on-the-head-of-a-pin: these ethical conundrums don’t correspond to any real world scenarios, which instead are of the kind where dollars are traded for life and health.
And we should count our blessings that this is so.
[1] Like the Islamic archangel of death Azrael, said to have one eye for every human alive. Whenever someone dies, a corresponding eye closes, though I don’t remember whether the death makes the eye close or he kills someone by closing the eye.
[2] Can math be brought to bear on this problem at all? Let’s see. At a first pass, I can think of calculating the average disutility incurred as well as the variation in disutility, if any, and then not combining the two numbers, as that would be like collapsing a cube into a square and still calling it a cube. But surely any utilitarian calculus should take into account the number of people affected, and a googolplex is so large that it doesn’t matter what equation you plug it into; it’s going to swamp all the other factors (unless you use it as a denominator or in a negative exponential, but that wouldn’t make any sense here).
If you try to use math to arrive at the correct decision, it seems you are forced into the illogical conclusion that a googolplex people can do *nothing* to prevent a guy from being tortured for 50 years:
1. Someone is going to be tortured for 50 years unless a googolplex people do *something*.
2. Even if the *something* is that everyone lifts a finger, it would result in absolutely colossal costs (e.g. the caloric consumption of a googolplex people all lifting a finger is inconceivable).
3. Therefore, a googolplex people can't do anything to save someone from being tortured for 50 years.
But this points the way out: practically any number multiplied by a googolplex is massive, but not if you consider it instead as a percent of total resources. The total caloric expenditure of a googolplex people all lifting their fingers is massive, but not as a percent of total calories available to the googolplex people.
Similarly with the dust specks: the aggregate discomfort (which, as we’ve said, is not commensurate with pain, but let’s roll with it) is inconceivable, but not as a percentage of the total discomfort people can handle, assuming that’s something you can pin a number on. Play the googolplex off against itself, basically.
Which suggests that the final answer would be this: if you can expect a random stranger A to incur cost X to avoid bad outcome Y for random stranger B, then arbitrarily many random strangers can be expected to personally bear that same cost X, since X is clearly small as a percentage of the resources available to each individual.
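As a sanity check on that ‘play the googolplex off against itself’ move, here is a toy Python calculation with made-up numbers (calories standing in for ‘resources’ purely for illustration), showing that the aggregate cost explodes with the number of people while the cost as a fraction of the group’s resources stays put:

```python
# Hypothetical, made-up numbers purely for illustration.
calories_to_lift_a_finger = 0.02    # kcal burned by one finger lift (rough guess)
daily_calories_per_person = 2000.0  # kcal available to one person per day

def aggregate_cost(n_people):
    """Total calories burned if n_people each lift a finger once."""
    return n_people * calories_to_lift_a_finger

def cost_as_fraction_of_resources(n_people):
    """The same cost, expressed as a fraction of the group's daily calories."""
    return aggregate_cost(n_people) / (n_people * daily_calories_per_person)

for n in (1, 10**6, 10**12):
    print(n, aggregate_cost(n), cost_as_fraction_of_resources(n))
# The aggregate grows without bound as n does, but the fraction stays at
# 0.02 / 2000 = 1e-05 for any n, a googolplex included (the n cancels out).
```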
Let’s see how this does on a thornier version of the problem: the price is not dust specks, it’s that everyone gets a pinky finger lopped off with no anesthesia. Even playing God-Emperor here (as the thought experiment asks us to), I wouldn’t impose this top-down, but I think I still go with the fingers. It’s one and done, the permanent inconvenience is not too great, and I think random strangers could be talked into it (I would be one of them, and as God-Emperor, I would clearly have to lead by example).
[3] Assuming the SBF fiasco has not done that already.
In The Problem of Pain, C. S. Lewis points out that it's easy for the brain to get overwhelmed when thinking about lots of people suffering a lot, as we imagine the pain multiplying out in a linear way. But pain doesn't stack up like this. Really, when contemplating the question of "how much can human beings suffer", the only thing you need to take into consideration is the maximum amount that *one* person is capable of suffering. Of course, this is still a huge amount - but the point is that consciousness is what it is. It's not lots of little things in a pile.
Only marginally relevant to what you're discussing here, but always stuck with me.
Thanks for this!!!
It’s strange reading about EA for me, since I also read the logic as erroneous, but I appreciate the good being done - which is weirdly similar to how I feel about proselytizing religious charities.