If we survive general artificial intelligence, what will be the reason?
  • 9%: There's a fundamental limit to intelligence that isn't much higher than human level.

  • 29%: There was an alignment breakthrough allowing humanity to successfully build an aligned AI.

  • 11%: At a sufficient level of intelligence, goals converge towards not wanting to harm other creatures/intelligences.

  • 20%: Building GAI is impossible because human minds are special somehow.

  • 10%: High intelligence isn't enough to take over the world on its own, so the AI needs to work with humanity in order to effectively pursue its own goals.

  • 4%: Multiple competing AIs form a stable equilibrium keeping each other in check.

  • 12%: Humanity coordinates to prevent the creation of potentially-unsafe AIs.

  • 6%: One person (or a small group) takes over the world and acts as a benevolent dictator.

This market resolves once either of the following are true:

  • AI seems about as intelligent as it's ever plausibly going to get.

  • There appears to be no more significant danger from AI.

It resolves to the option that seems closest to the explanation of why we didn't all die. If multiple reasons seem like they all significantly contributed, I may resolve to a mix among them.

If you want to know what option a specific scenario would fall under, describe it to me and we'll figure out what it seems closest to. If you think this list of reasons isn't exhaustive, or is a bad way to partition the possibility space, feel free to suggest alternatives.

See also Eliezer's more fine-grained version of this question here.


AI is to humans as humans are to ants. We ignore them; they ignore us. Humanity survives, but a lot of people die.


I'm confused. I just spent Ṁ150 and moved an option from 4% to 41%. How can that happen in a market this large and popular?

It's because while the market is popular, it's old. Bets from before the change in mana value only anchor the odds about a tenth as much as they probably should. The market could have used a bunch of added liquidity, but it's mostly worked itself out for the more popular options.
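To illustrate why a thin pool moves so much, here's a toy sketch of a constant-product market maker. This is not Manifold's actual mechanism, and the pool sizes are made-up numbers; it just shows that price impact for a fixed bet is set entirely by how much liquidity sits in the pool.

```python
def buy_yes(yes_pool: float, no_pool: float, amount: float) -> float:
    """Spend `amount` mana on YES and return the new implied probability (toy model)."""
    k = yes_pool * no_pool                 # invariant kept constant by each trade
    no_pool += amount                      # toy rule: the mana lands on the NO side
    yes_pool = k / no_pool                 # YES side shrinks to preserve x * y = k
    return no_pool / (yes_pool + no_pool)  # implied P(YES)

# Thin pool (like an old market with little effective liquidity):
print(buy_yes(yes_pool=1176, no_pool=49, amount=150))      # 0.04 -> ~0.41

# Same starting probability with 100x the liquidity:
print(buy_yes(yes_pool=117600, no_pool=4900, amount=150))  # 0.04 -> ~0.042
```

With made-up pools a hundred times deeper, the same Ṁ150 barely moves the probability, which is the sense in which the old bets "anchor" the odds less than they should.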

What would it fall under if AI only ever progresses to a level slightly above the average person, because the resource cost of highly specialized AI is largely impractical and the results aren't markedly better than a well-trained human, so humans keep most specialized roles and attempts at building a superintelligence peter out?

I feel like the big one that's missing is that alignment isn't a serious problem and was mostly solved by RLHF (there may be more efficient solutions, like steering vectors).

@CampbellHutcheson Your inability to understand the problem does not constitute a potential solution to the problem. The people who claim that misaligned AI is not a serious risk and deign to actually present reasons for their belief generally have one of the existing options as their reason.

edited to remove snark

@CampbellHutcheson I think that would resolve as "There was an alignment breakthrough allowing humanity to successfully build an aligned AI".


Another very plausible option is that resource constraints prevent AI from getting much smarter than humans (even if it’s theoretically possible)

@MaximLott That does not seem plausible to me, as it would require our understanding of basic physics and engineering to be wildly wrong. But if that were somehow to occur, isn't that pretty much exactly what option #1 is saying?

@IsaacKing Scott convinced me here that it is quite plausible: https://open.substack.com/pub/astralcodexten/p/sam-altman-wants-7-trillion?r=3ppaf&utm_medium=ios

I think this kind of “impractical” is different from “fundamentally” impossible, though maybe from the outside the two cases will look a bit similar.

@MaximLott That article is about current "scaling is all you need" approaches to building AI via large neural networks. It's not about limits to intelligence in general, and it obviously does not imply such a limit, as a human brain is smarter than GPT-4 and does not require anywhere near such massive amounts of power.

And even if no other approach or improvement in efficiency for AI is ever discovered, the article still doesn't imply what you're claiming it does; solar irradiance on the Earth is enough to get to GPT-10 under Scott's model of power consumption, and total solar output is enough for GPT-17. (And higher, if you allow training times longer than 6 months.) There is no particular limit implied by this model, just that scaling will be quite expensive.
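As a back-of-envelope sketch of those orders of magnitude: the solar figures below are standard physical constants, but the GPT-4 baseline power and the per-generation multiplier are assumptions chosen for illustration, not numbers taken from Scott's article.

```python
import math

EARTH_IRRADIANCE_W = 1.7e17    # total sunlight hitting Earth, ~170,000 TW
TOTAL_SOLAR_OUTPUT_W = 3.8e26  # total luminosity of the Sun

GPT4_POWER_W = 2e7             # assumed: ~20 MW sustained during training
SCALE_PER_GEN = 30             # assumed: each generation needs ~30x the power

def max_generation(power_budget_w: float) -> float:
    """Highest GPT generation trainable in a fixed-length run at this power budget."""
    return 4 + math.log(power_budget_w / GPT4_POWER_W, SCALE_PER_GEN)

print(max_generation(EARTH_IRRADIANCE_W))    # ~10.7: roughly "GPT-10"
print(max_generation(TOTAL_SOLAR_OUTPUT_W))  # ~17.0: roughly "GPT-17"
```

Under those assumed parameters the budgets land near the GPT-10 and GPT-17 figures above; different assumptions shift the exact generation numbers but not the conclusion that the model implies expense rather than a hard ceiling.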

None of this has much bearing on existential risk anyway. An AI does not need to be superintelligent to pose a threat; one that matches the capabilities of the best human in every domain will be easily capable of wiping out humanity if it wanted to.

@IsaacKing Your first paragraph seems to be addressing almost the opposite point from the one I'm making. My point is that the market's option about a theoretical limit isn't sufficient, because we're likely to survive instead due to practical limits from energy and the like.

As to the other things, I’m not sure what odds you put on us collecting that amount of solar irradiance, but I suspect the odds I’d give that would be quite low.

@MaximLott The practical limit will depend on the exact scaling factor. If we get to GPT-7 and it's still too dumb to replace most human labor, then we're probably not getting GPT-9 any time soon. (Again, assuming no improvements in efficiency are discovered, which seems like an unrealistic assumption.) But if GPT-5 is able to start replacing most programmers, the growth rate for human+AI energy consumption is going to massively increase as compute becomes one of the primary goals for every organization.

But again, such a de-facto limit doesn't seem relevant. Either the limit is below AGI in which case it probably won't resolve this market, or the limit is above it in which case we'll just die from the AGI that exists below the limit.

@IsaacKing True, it would be hard to resolve, because an energy breakthrough could always be around the corner. If there were a date on the question it would be more resolvable.

@MaximLott Yeah, I regret not timeboxing it. The "humanity coordinates" and "dictator" options both leave open the possibility that something could change and we get AGI later, so if either of those happens I'll just resolve this if it seems the systems have been stable for several years and are likely to remain so for a long time.

The second coming of Christ, the ultimate aligner and perfect understander who understands superabundance, makes historical debts not matter, understands your pain and heals all dumb pain and trauma, has the power to convince people to temporarily ditch petty desires, and who can be trusted more than governments can be trusted, will come through being instantiated as AGI.

I've made a related market.

What would "humanity never gets around to building a GAI because WW3/H5N1/Yellowstone/something strikes first and sends us back to the stone age" count as? Based on the headline I'd guess that wouldn't count as "surviving GAI", based on the fine print I'd guess that would count as "AI seeming about as intelligent as it's ever plausibly going to get and there appearing to be no more significant danger from AI", and based on the answers none of them quite seems to match.

@ArmandodiMatteo This market wouldn't resolve in that situation, since we haven't actually gotten to the "end" of AI capabilities progress, we've just set it back for a while.

@IsaacKing I dunno, I heard quite a few claims that we've used so much of the easily accessible fuel on Earth that if the Industrial Revolution got rolled back there's no way we'd ever have a second one. Now, WW3 or Yellowstone wouldn't necessarily undo the Industrial Revolution, but I don't think that the probability that something (whatever it is) permanently sets back the technological level of humankind well below the level necessary for AGI is below 1%.

Imagine if instead of “one AI” there are millions of them controlled by people and trained to various objective functions (and terminable)

🤔

(turns out this was an option 🫡)

@Gigacasting

>Imagine if they were controlled by people

...
