
What will be the median p(doom) of AI researchers after AGI is reached?
Above 5%: 84%
Above 10%: 67%
Above 20%: 25%
Above 50%: 9%
Above 80%: 5%
AGI defined as an AI that is better at AI research than the average human AI researcher not using AI.
p(doom) defined as human extinction or outcomes that are similarly bad.
In Katja Grace's 2022 survey, median values were 5% for "extremely bad outcome (e.g., human extinction)" and 5-10% for human extinction.
All answers that are true resolve Yes.
This question is managed and resolved by Manifold.
Related questions
Will we get AGI before 2047? (85% chance)
What will be the average P(doom) of AI researchers in 2025? (20% chance)
How much will AI advances impact EA research effectiveness, by 2030?
ML researchers' median probability of existential risk from AI (20)
Will AGI retaliate on AI doomers in a way that makes AI doomers regret it? (3% chance)
Will a Nobel prize be awarded for the invention of AGI before 2050? (26% chance)
Doom if AGI by 2040? (45% chance)
The probability of extremely good AGI outcomes (e.g., rapid human flourishing) will be >24% in next AI experts survey (54% chance)
The probability of "extremely bad outcomes e.g., human extinction" from AGI will be >5% in next survey of AI experts (75% chance)
Will we reach "weak AGI" by the end of 2025? (27% chance)