Conditional on no existential catastrophe, will there be a superintelligence by 2050?
78% chance · 106 traders · Ṁ23k volume · closes 2050

My true belief is more like: 80% we're dead, <1% YES, 19% NO. I don't think it's a good investment to put a lot of money on NO, even with loans from Manifold.

@Joern What do you expect to be the cause of our extinction before then?

predictedNO

For random people passing by, this is free money on NO. Isaac's definition of superintelligence is not the normal definition; it is literal magic.

predictedYES

@DavidBolin Wait, where's the definition?

predictedNO

@wadimiusz It's in the market linked below. It's the standard definition of superintelligence used by AI researchers and philosophers.

predictedYES

@IsaacKing I see.

A superintelligence is any intelligent system that is far more intelligent than any human who existed prior to 2023. It approaches the theoretical maximum intelligence that can be obtained given the amount of computing power it has.

As some examples, a superintelligence running on the world's largest supercomputer in 2023 and connected to the internet should be able to:

  • Get a perfect score on any test designed for humans where such a score is theoretically achievable.

  • Solve any mathematical problem that we know to be in principle solvable with the amount of computing power it has available.

  • Pass as any human online after being given a chance to talk to them.

  • Consistently beat humans in all computer games. (Except trivial examples like "test for humanness and the human player wins", "flip a coin to determine the winner", etc.)

  • Design and deploy a complicated website such as a Facebook clone from scratch in under a minute.

  • Answer any scientific question more accurately than any human.

I feel like a lot hangs on "the amount of computing power it has" in that definition. We know what superintelligence looks like with unlimited compute: Solomonoff induction. But we don't know what it looks like under a given compute constraint. I'm not sure that, if something we commonsensically perceive as a superintelligence emerges, and we're somehow still alive, we'd have any way to determine that it's approaching the theoretical limit "given that amount of compute".
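For reference, here's the unbounded ideal being invoked, written out in its standard textbook form (this is my gloss, not part of the market's definition):

```latex
% Solomonoff's universal prior M over binary strings x, defined with
% respect to a fixed universal prefix Turing machine U. The sum ranges
% over all programs p whose output begins with x ("x*" denotes x
% followed by any continuation):
\[
  M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
\]
% Prediction is then just conditioning:
\[
  M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
\]
% M is uncomputable (evaluating it would require solving the halting
% problem), which is the point: the ideal is only well defined once the
% compute constraint is dropped entirely.
```

So "approaches the theoretical maximum given its compute" appeals to a resource-bounded analogue of this, and we don't have an agreed-upon definition of that, let alone a test for it.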

predictedYES

@wadimiusz And I think I overlooked a simpler problem: how do we know it's capable or incapable of, e.g., creating a Facebook clone in under a minute, unless we can tell it to do that and be sure it will try its best when told? What if we can't tell it to do things, despite somehow getting it not to kill us?

sold Ṁ13 YES

@wadimiusz Sold my YES because of the science criterion; how is that supposed to be judged when scientists regularly disagree?

As I said, this is a definite NO, given Isaac's definition.

This resolves NO, given that definition.

predictedNO

Come help me define "superintelligence" more rigorously here:
