In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
73 traders · Ṁ3638 · closes 2051 · 72% chance


How would this resolve if they agree that it was a net positive for the AI risk community to be as vocal as it was, because it helped in the development of useful regulations and interpretability tools, but disagree that the world would literally have been destroyed if not for them?

@jonsimon Neither matters. What this market cares about is "was the probability they placed on the world being destroyed by AI justified by the evidence they had at the time?"

@IsaacKing Whose probability/concern needs to be justified? Laypeople? Computer scientists? Computer scientists who responded to the AI Impact survey? Existential safety advocates / the AI existential risk community? Eliezer Yudkowsky?

I mainly ask because I think the probabilities of, say, extinction would range from something like 5% (maybe laypeople and computer scientists) to 50% (average existential safety advocate) to >99.9% (Yudkowsky).

predicted YES

I wonder if a question like, "What probability of AI existential risk will experts in 2050 think was justified for a well-informed observer in 2023?" would be analytically cleaner, though it may be more confusing for the average Manifold user.

Implying society or experts will exist
