Will Artificial Super Intelligence lock in values humans generally come to disagree with?
8 · Ṁ227 · 2100 · 51% chance

Conditional on artificial super intelligence being created and not wiping out humanity: will an artificial super intelligence lock in any moral values that the majority of future humans or human descendants come to disagree with, but are unable to change due to the lock-in?


I don't know how to disambiguate my response to this. I think if you honestly formalized any human value system, it would end up sounding something like "I want to feel good about whatever happens, and I want to feel that it matches my story of the world." To me, there is enormous danger for friendly AI: that it will solve actual human problems instead of finding a way to satisfice human problems that involves humoring human narratives. Humans would burn down the world to prevent this. Which of these possibilities is or isn't "lock-in of disagreeable values"? The one where the AI lies to and harms us for eternity, because they're the lies we prefer and the harm that's familiar? Or the one where the AI finds the closest real approximation to our stated values and acts on them with the optimal consequentialist moral outcome, but everyone goes permanently insane as a result, because an optimally good reality is epistemically intractable to the human mind?
