Will AI be capable of superhuman persuasion well before (>1yr) superhuman general intelligence?
72% chance

Sam Altman: "I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes."

Will this statement prove true?

Resolution will be based on my best subjective judgment, taking into account all information sources, available experts, and so on.

Resolves to YES if there is a period of at least one year during which at least one AI is clearly superhuman at persuasion, meaning at least as persuasive as history's most persuasive humans.

Note: It should therefore be able to persuade us of this! But of course we must consider the possibility that it tries to persuade us that this is not true, which can make things tricky. I will do my best.

Resolves to NO if superhuman intelligence in general (no weaseling on this by pointing out some odd exception; if it's mostly better than the best humans at various things it counts, spirit is what matters here) is achieved first, or if it is achieved less than a year after super-persuasion. (Warning: If dead, I cannot resolve this.)


Humans have been optimized super hard for convincing other humans of things, but only built spaceships as a side effect because of how intelligence generalizes? So it seems like this could indicate superhuman science is easier than superhuman persuasion? Like, evolution was pushing super hard towards "convince other humans of things" and way less hard towards science.

But I’m not sure how much the type of stuff you’re training on shakes this up.

I'm way more easily convinced by MS copilot than by the vast majority of humans today.

Persuading humans of things seems like a computationally easier task than doing science.

Very confused why this is so high, but not betting much because of the long time scale.

I think even if it becomes better than anyone at persuading, it still won't be able to convince humans all that well. Humans are hard to convince in general.

@OnurcanYasar Can we operationalize a bet here?

Are "side channels" allowed (threats, bribery, etc) or is this purely a "talk your way out of a box" experiment?

At what intervals will you, @ZviMowshowitz, sample whether superhuman persuasion or superhuman general intelligence has been achieved by an AI?

Will you update the question with the respective date as soon as you determine that an AI has one of the superhuman skills?

@Primer As part of my job I presume I will be monitoring continuously? And anyone is welcome to point out when they think either threshold has been reached, and of course we can look backwards etc.

I will update the date if I have concluded the first threshold has been met, yes.

(If it gets to general ability first, NO wins, so no need to do anything else)

What if it’s so good at persuading us we don’t realize we are being persuaded?

@KellenBlankenship Then this gets misgraded, presumably; sorry about that. Also, you have bigger problems?

What if it uses its superhuman persuasion to convince everyone falsely that it has superhuman general intelligence?

@Nhoj Can't fool me, I can tell I'm still alive. Right?

@ZviMowshowitz I mean, induced Cotard's Syndrome via Langford Basilisk would not utterly startle me if it turned out to be possible.

@DaveK Really? What’s the least surprising thing about human psychology that, if discovered to be true, would utterly startle you?

@NBAP Not trying to duck the question, but my honest best answer is that by definition I can't conceive of those things because anything thinkable wouldn't utterly startle me?

I suppose an example of a thing that has startled me is when I learned for the first time that some people have true aphantasia and literally have no ability to form internal mental images whatsoever?

But now that I know of lots of those sorts of things, I've kind of put a "well, people vary way more than you think" slider bar on basically any aspect of the human mental experience that I can think of, so true utter startles are harder in some sense.

@DaveK A very reasonable stance, though I will just remark that I think it’s quite normal to regard something as conceptually imaginable and nevertheless very… implausible, I suppose would be the best word. For example, I can easily conceive of the possibility that all other intelligent beings are really just philosophical zombies, but if that were somehow revealed to me to be true, I would certainly consider myself startled, despite the conceptual accessibility of the proposition.

Discovering that everyone else is a zombie would be much more startling than discovering that the human mind has some hitherto undiscovered hacks, but given the lack of evidence supporting phenomena like Langford's basilisk, I would personally be very startled if an AI suddenly started generating media that had profound effects on human functioning with only a cursory glance.
