Scott Alexander publishes a "Death with Dignity" style post before 2030
107 · Ṁ40k · 2029 · 19% chance

In April 2022, Eliezer Yudkowsky published a Less Wrong post entitled "MIRI announces new 'Death With Dignity' strategy." The post declared, in a kidding-not-kidding style, that humanity would not solve the alignment problem and would soon be extinguished by unfriendly AI.

This option will resolve as "yes" if Scott Alexander publishes a post in a similar vein, namely:

  • The post declares that humanity is doomed, and that doom will be coming quickly.

  • The post is written either earnestly, or in a style that leaves the reader uncomfortably unsure whether Alexander is earnest.

  • Themes of doom and futility should be a central conceit of the post. (A short parenthetical in an otherwise unrelated post would not suffice.)

  • For the purpose of this bet, a "post" includes literary media outside of the blogosphere, for example an oral address or a chapter in a nonfiction book.

  • Off-the-cuff comments, such as a reply to a reader in the comments section of his blog, will not count as a "post."

  • Fictional work by Scott Alexander might fulfill this option, if the work as a whole makes readers suspect Alexander is an AI doomer. But if the overall arc of the story is anti-doomer, then I will not count it as a doomer post.

  • Added 12/5: If Alexander tries to put some Pollyanna spin on world doom, then the option will still resolve as 'yes'. Examples of Pollyanna spin would include "Yes, we are all going to die next year (p > .95), but think about how sweet that .05 probability where we survive this whole mess is going to be" or "Yes, we're all going to die soon, but isn't it cool that we get to see the most significant event in human history?"
    The point of this option is to predict whether Alexander will predict that we all die, not whether Alexander will try to cheer us up.


Given his recent post, Why I Am Not (As Much Of) A Doomer (As Some People) [https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer], where he states his p(doom) is around 33%, I'd say a "Death With Dignity" style post is very unlikely.


@rockenots At least for the next couple years.


@rockenots That addendum is doing a lot of work.

Right now I think going out in a neat way, being killed by a product of our own genius and intellectual progress – rather than a product of our pettiness and mutual hatreds – is the best we can hope for. And I think this is attainable! I think that we, as a nation and as a species, can make it happen.


Note that this was written in 2017 and isn't specifically about AI (previous sentence: "I want this country to survive long enough to be killed by something awesome, like AI or some kind of genetically engineered superplague."), but yeah the resolution criteria don't seem to mention either of those.


@KatjaGrace It seems to fail the "doom will be coming quickly" and "central conceit of the post" requirements, anyway, so I agree it shouldn't cause a YES resolution.


Maybe ask something like "will at least one more big figure from the rationalist/rat-adjacent community publish a 'death with dignity'-style openly doomerist post before the end of this decade", where "big figure" could be anyone from gwern to Nick Bostrom to Scott Aaronson.


What is the probability threshold? If he says humanity is doomed with 90% probability, does that count? What about 50%, or 10%, or 1%?


Would something with the style of The Hour I First Believed (https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/) count as "unsure if Scott is earnest", or would it need to be more earnest than that?


@Multicore At the end of the post there is a parenthetical explaining that everything above the parenthetical is not quite what he actually believes. So if Scott were to write a blog post with a "Here's all the reasons we are doomed" and then follow it up with an "actually I don't believe any of this stuff", then I would not count it as earnest.
...unless the disclaimer itself were written in such a way that we would doubt whether the disclaimer was earnest. For example, in EY's "Death with Dignity" post, he acknowledges that the post was made on April 1st, and then says "Only you can decide whether to live in one mental world or the other." This leaves the reader unsure whether the disclaimer itself is sincere; i.e., Eliezer is just kidding about whether he is "just kidding."

When in doubt, condense it to this rule: "If, after reading the entire post, the average reader would strongly suspect (p > .6) that Scott Alexander believes the world is doomed, then the option will resolve as 'yes.'"

I think Scott Alexander is more similar to Scott Aaronson than to Eliezer Yudkowsky. And I think Scott Aaronson has made it very clear he won't be writing a "we're doomed" post: https://scottaaronson.blog/?p=6823

So I don't think Scott Alexander will either.

(Also, I have very high confidence that Scott Alexander would only write such a post if we were somehow actually, unambiguously doomed. In which case mana is useless and so predicting YES here can't meaningfully pay out.)

@dreev If YES could somehow pay out, what probability would you trade at?


@WilliamEhlhardt Good question. I'm still agonizing over what my own probability of AI doom this decade actually is. All I've decided so far is that it's somewhere below 10% -- which isn't saying much, because anywhere near that upper bound would be a massive and terrifying probability for an existential risk.

Also, I kind of don't like this market's operationalization of the question and may just cash out of it. So, free money notice, especially for those trading for expected mana maximization rather than expressing personal probabilities.
