If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023?
36% chance

In 2100, I will distribute a poll among philosophers, superforecasters, AI professionals, superintelligences, and whatever other entities could reasonably be called "experts" to determine what probability they believe was correct to assign to the following statement in 2023:

Before 2075, humanity will be extinct or have suffered a fate worse than extinction, due to the effects of smarter-than-human artificial intelligence.

("Humanity" includes any non-biological successors that biological humans voluntarily turned into.)


If you are dead in 2100, how should this market resolve?

@MartinRandall Whoever is in charge of market resolution should replicate the experiment as closely as they can to how I would have performed it, and resolve based on the results.

Regardless of the actual probability of the counterfactual, the hindsight bias will be enormous.

Especially since, if AGI indeed implies doom, then no doom implies no AGI; the poll taking place means no doom, so only lowly, biased humans will answer this poll.
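A minimal sketch of the selection effect being described, using hypothetical notation (D = doom before 2075, S = the poll is conducted in 2100); the probabilities are illustrative, not figures from the market:

$$
P(D \mid S) \;=\; \frac{P(S \mid D)\,P(D)}{P(S)} \;\approx\; 0 \quad \text{when } P(S \mid D) \approx 0,
$$

so anyone around to answer the poll necessarily observes a world in which doom did not occur, no matter how probable doom actually was in 2023.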

@CamillePerrin I don't think it's widely accepted that it's impossible to have a non-genocidal AI. Also, I think humans have estimated the risk of the Cuban Missile Crisis and similar events without enormous hindsight bias.


@CamillePerrin What does "lowly, biased human" mean?

@jameso Humans are not very good reasoners, compared to the ideal.

https://en.wikipedia.org/wiki/List_of_cognitive_biases
