Will Eliezer believe in mankind's survival again?
2038 · 53% chance

Eliezer Yudkowsky and MIRI currently pursue a "death with dignity" strategy. Read this blog post or watch this interview. I assume Eliezer believes this with 99+% confidence, as the blog post says "0% survival".

This resolves YES immediately if, within the next 15 years, Eliezer lowers this confidence to 80% or lower. It also resolves YES if he extends his timeline by at least 10 years, even if his doom confidence remains high.

Resolves NO if Eliezer is dead by 2038, for whatever reason (AI or not).

Also NO if he still forecasts certain doom, as he does today.

Clarification 2023-07-03: Lowering his doom confidence and then dying still resolves YES.

Change 2023-03-14: Removed the "3-15 years timeline" since I believe Eliezer probably did not intend his words to be taken as a timeline forecast. In its place, a statement from him that doom is "delayed by 10+ years" resolves YES. Implicitly, an additional NO case is that he is alive but still predicting doom when the market closes.


Unless he's wrong, this market only pays out on YES. Yudkowsky himself should buy YES shares if he were maximizing his expected mana.

The description says it resolves NO if Yudkowsky is dead by 2038, but I assume it would still resolve YES if he lowered his confidence below 80% and then died.

@JosephNoonan Same; it should resolve YES early, so his later death would no longer be relevant.

@MaxPayne Agreed. I adapted the description accordingly.

Very similar:

predicted NO

He's got too much psychologically invested in the doomer narrative at this point. The AI-pocalypse is going to fail to materialize, and then he's going to do what every (and I mean this with the utmost respect) doomsday cult leader does, which is to modify the timeline but continue harping that the end is nigh.

predicted YES

@jonsimon I agree this is likely, but as I read the resolution, if he's still around to extend the timelines in 2038, this would actually resolve YES at that point.

predicted NO

@DavidMathers Ohhh, you're right, I should have read the description more carefully. Well then, time to flip my bet.

@copacetic where are you getting 3-15 years at 99.9% from?

In the linked interview he says that 30 years is "unlikely".

Eliezer: Timelines are very hard to project. 30 years does strike me as unlikely at this point. But, you know, timing is famously much harder to forecast than saying that things can be done at all. You know, you got your people saying it will be 50 years out two years before it happens, and you got your people saying it'll be two years out 50 years before it happens.

https://www.lesswrong.com/posts/e4pYaNt89mottpkWZ/yudkowsky-on-agi-risk-on-the-bankless-podcast

@MartinRandall Quote from the transcript:

"How on earth would I know? It could be three years. It could be 15 years. We could get that AI winter I was hoping for, and it could be 16 years. I'm not really seeing 50 without some kind of giant civilizational catastrophe."

@MaxPayne I see that, but "how on earth would I know?" and "it could be 16 years" is not assigning a 99.9% chance of 3-15 years, so I don't know how this works for the market resolution.

As written I think it resolves YES now because EY has extended the timeline beyond 15 years, or it resolves N/A now because the question is based on a false premise.

predicted NO

@MartinRandall I had not seen the Q&A yet. Thanks for the pointer.

I hate that this question has now turned into an interpretation game. The core question to me is whether Eliezer changes his mind, and as far as I can tell that has not happened in the last few weeks, so I'm not resolving it YES. I have lost confidence in my assumption, though.

Does anybody know of a clearer statement of @EliezerYudkowsky's current beliefs?

predicted YES

@copacetic My read of the texts is that he's very confident of human extinction, but uncertain about timelines, and skeptical of attempts to become certain. The text below is from 2021, so it may be outdated, but I think it is representative.

https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works

predicted NO

I changed the resolution criteria to rely on a vaguer timeline forecast. This seems to be the only fair change to me. If it triggers too many complaints, I can still resolve N/A.

Related since it essentially asks where Eliezer is wrong:

The "again" in the post title implies something that I don't think is totally true.

@tessabarton E.g.:

alarm bells went off for me in 2015, which is when it became obvious that this is how it was going to go down.

https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast

Prior to 2015 I think he reads as more hopeful. In particular I'm struck by his writing on cryonics as indicative of believing in a chance of a big future for humanity.

https://www.lesswrong.com/posts/hiDkhLyN5S2MEjrSE/normal-cryonics

If we are all going to be eaten by Shoggoths then not signing up my kids for a slightly increased chance of being eaten by Shoggoths does not make me a lousy parent.

predicted NO

@tessabarton He has written that as a teenager he was all for creating AI as fast as possible (what might now be called an accelerationist) until he gradually figured out the danger. https://www.lesswrong.com/posts/fLRPeXihRaiRo5dyX/the-magnitude-of-his-own-folly

@EliezerYudkowsky is not at his historical maximum:
