If we survive general artificial intelligence before 2100, what will be the reason?
We don't build AGI: 26%
We build AGI, but it is not powerful enough to kill us: 32%
We build AGI, it is powerful enough to kill us, but it doesn't try: 29%
Other: 13%

What about, "We build AGI, it is powerful enough to kill us, but it's controlled/overseen, so even in small places where it might try to gain power, it can't do so?" (The Control agenda, for one thing)


Note that it would arguably be "controlled" by other AGI-like systems.

Good question.
I think it should go under "is not powerful enough to kill us".
The fact that we are controlling and overseeing it would be a particular reason it can't kill us.

Not "powerful enough" should be understood as "not powerful enough in the context where it is", and not "not powerful enough if it was completely free, or if we didn't become cyborgs, or…"

If AGI doesn't try to kill us, how will you determine whether it is powerful enough to have done so?

Good question; it would be quite hard to determine except in the extreme cases.
I hadn't really thought about how to resolve the market.
I admit it isn't great.
Let's say it will be decided by what the experts think when the time comes, and if there is disagreement between them, it will resolve to the % of experts who think it was powerful enough.
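As a hypothetical illustration of that rule: if, when the market closes, 7 out of 10 surveyed experts think the AGI was powerful enough to kill us but didn't try, then that answer would resolve at 70% and "is not powerful enough to kill us" at 30%.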

Do you have another idea?
