@wadimiusz It's in the market linked below. It's the standard definition of superintelligence used by AI researchers and philosophers.
@IsaacKing I see.
A superintelligence is any intelligent system that is far more intelligent than any human who existed prior to 2023. It approaches the theoretical maximum intelligence obtainable given the amount of computing power it has.
As some examples, a superintelligence running on the world's largest supercomputer in 2023 and connected to the internet should be able to:
- Get a perfect score on any test designed for humans where such a score is theoretically achievable.
- Solve any mathematical problem that we know to be in principle solvable with the amount of computing power it has available.
- Pass as any human online after being given a chance to talk to them.
- Consistently beat humans in all computer games. (Except trivial examples like "test for humanness and the human player wins", "flip a coin to determine the winner", etc.)
- Design and deploy a complicated website such as a Facebook clone from scratch in under a minute.
- Answer any scientific question more accurately than any human.
I feel like a lot hangs on "the amount of computing power it has" in that definition. We know what superintelligence looks like with unlimited compute: Solomonoff induction. But we don't know what it looks like under a given compute constraint. I'm not sure that, if something we commonsensically perceive as superintelligence emerges, and we're somehow still alive, we'd have any way to determine that it's approaching the limits "given that amount of compute".
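For context on the unlimited-compute benchmark mentioned above: Solomonoff induction predicts the next symbol of a sequence by weighting every program that could have generated the observed data, with shorter programs weighted exponentially more. A standard sketch (notation is the usual one from the algorithmic information theory literature, not from this thread):

```latex
% Solomonoff prior: total weight of programs p that make a
% universal prefix machine U output a string beginning with x
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

% Prediction of the next symbol given the sequence so far
P(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```

The sum ranges over all programs, so computing $M(x)$ requires running (in the limit) every program, which is why this ideal predictor is incomputable and only meaningful as the "unlimited compute" endpoint; what the optimal bounded approximation looks like at any fixed compute budget is exactly the open question.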
@wadimiusz And I think I overlooked a simpler problem. How do we know it's capable or incapable of, e.g., creating a Facebook clone in under a minute, unless we can tell it to do that and be sure it will try its best when told? What if we can't get it to do things, despite somehow getting it to not kill us?
@wadimiusz Sold my YES because of the science criterion: how is "answer any scientific question more accurately than any human" supposed to be verified when scientists regularly disagree?