Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?

Resolves YES if he agrees with this publicly. Otherwise, at market close, I'll ask him somehow, and resolve based on the answer. Resolves N/A if he doesn't answer me.

https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

I will not trade on this market after 2022.

This market was to resolve N/A if he doesn't answer, e.g. because humans are extinct. Since N/A resolutions are going away, I'm likely to adjust this. Suggestions welcome.

@MartinRandall apparently they came back. Unlike extinct species.

This market is implicitly conditional on our surviving until the end of 2029, which is uncertain. In that conditional world, the market's opinion is that no solid evidence will emerge that we are not doomed.

The only way I can see this not being an extinction outcome is if timelines are surprisingly long and we don't learn much of anything by 2029.

predicted NO

The prior on "Eliezer claiming he was wrong about, or overstated, something" is fairly low (YMMV on whether you believe this is because he's usually correct or because he dislikes admitting his mistakes, but either way it's not something that happens often).

@ShakedKoplewitz the yes bettors I was talking with appeared to think that EY's prediction would be so obviously and staggeringly wrong in 2029 that he would find his mistake undeniable. Not my view.

predicted YES

@MartinRandall My guess was more like: EY's prediction might come to seem substantially more clearly overconfident in 2029, and a few parts of it were already deliberately exaggerated, and this made it seem like a good idea to buy some cheap YES shares despite YES not seeming likely. I now put less weight on the second consideration, but the first consideration still seems important to me. A YES resolution doesn't require EY to become a non-"doomer", just to become less of a "doomer" than he currently is.

@StevenK I think it was deliberately underplayed by being posted on April 1st, but I seem to be in a minority on that.

I sure disagree with this market's current probability of 23%, but I shall not trade in it.

@EliezerYudkowsky Yes, I think your disagreement follows from your "death with dignity" post. I'm not expecting you to "weight your opinion and mix it with the weighted opinions of others", which is the only reason I can see that you might think this market should be above 0% plus a discount rate.

Clarification request. Suppose that at the market close date Eliezer believes there is a 5% survival chance for humanity, as opposed to the ~0% implied in the linked post, because of some new evidence (to make the example more concrete: suppose there is a minor but noticeable AI-related disaster with a few thousand people dead, and the 2028 US president is a former EA who actually tries to solve alignment with government-scale resources). He still thinks that, without this new evidence, it was correct to think in 2022 that humanity was pretty much doomed. Would you resolve YES or NO in this case?

In other words, what does "overstated the risk" mean here? Does this market resolve YES only if Eliezer admits that, looking at the world in 2022, it was a mistake to estimate humanity's survival at 1 in 1,000 instead of 1 in 20? Or would it suffice for him to raise his odds of humanity's survival to a "not certain failure" level by 2029, since the risk of extinction stated in the post would no longer match his updated guess?

@theservy Oh, I missed this reply: https://manifold.markets/MartinRandall/will-yudkowsky-agree-that-his-death#uTa0RDI1WTYdh2hEk9z5. I suppose that means a NO resolution in my example, since he thinks "the post was right at the time".

So this means the market resolves YES only if:

  • Eliezer made an avoidable mistake OR was intentionally lying/exaggerating way out of proportion.

  • He admits to it.

This seems pretty unlikely (considering that, in the worlds where Eliezer is the kind of person to make stupid mistakes or be okay with lying, he is less likely to admit it).

@theservy I don't think there are kinds of people who don't make stupid mistakes.

Yet.

Selling my handful of shares to avoid perceived bias.

predicted YES

Ouch! I want your limit sniping software @jbeshir!
Ofc this resolves NO 😭

predicted YES

@GeorgeVii He may say it was moderately exaggerated for effect, with an attitude of "look, the situation is so dire that I only need to overstate the truth by 20% for it to sound absurd", or "this is so close to true as to be outrageous, so I'll say it's outright true as an April Fools joke". He's also changed his mind about important features of AI risk before, though people change their minds more when they're young. It doesn't take much optimism on future Eliezer's part to think that this is overstated:

When Earth's prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%.

predicted NO

@StevenK my read of the text is that 0% is accurate to the nearest percentage point and excludes positive model violations aka "miracles".

predicted YES

Most of his worries will turn out to have been things he could have realized weren't real threats, but he tends to ignore criticism from people he sees as uneducated, which rules out most actual deep learning researchers. If he actually understood program inference, his fears would be different! That's not at all to say there's zero risk of inter-species war, just that his views about what that would actually look like are somewhat magical, and he tends to exaggerate the form of the risk and how avertible it is.

I suspect he may be doing that on purpose, and that in retrospect he's going to realize he was arguing in slightly bad faith in order to create enough panic to actually get people to do something. Which, I don't know, seems to have worked okay. I don't begrudge him trying to be an ass to save the world, but I do think he's not a very good AI safety researcher; he's more of an activist. (Activism is good.)

predicted YES

@L Of course he does understand program inference pretty well, but his anticipation of foom is sort of misplaced. Foom doesn't happen in one computer; it happens to society.

predicted NO

@L Personally, his death with dignity post increased my estimated likelihood that AI safety is intractable, and so reduced my estimate of its optimal resource allocation, but I agree that my reaction was rare.

What if he says something like "in retrospect, we weren't doomed in 2022, but we're doomed now"?

My read was that the post was deliberately a bit overstated.

predicted NO

@StevenK Resolves based on whether it was overstated at the time it was written. If we get a miracle and we stay alive, but Yudkowsky thinks that the post was right at the time, this resolves NO.

If we solve alignment in 2023 and Yudkowsky thinks we were not doomed in 2022, this resolves YES even if we subsequently elect Clippy as US president in 2029.

Created for a yes bettor on Discord who thinks that the risk is overblown.

I bet down to 10%. A 90% chance that we're all going to die, and that going out with dignity is the best we can hope for, is pretty bad relative to utopian dreams. But it's a common enough belief in historical terms; only the execution is novel.

predicted NO

Oh for an edit button...

Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?, 8k, beautiful, illustration, trending on art station, picture of the day, epic composition

predicted NO

@ManifoldDream Projection of my viewpoint as I ask Yudkowsky how to resolve this market as he sits atop his tower of skulls in 2030.
