When we first meet (intelligent, advanced) aliens, will they value increasing happiness and reducing suffering of all sentient life forms?
37 · Ṁ3734 · 2032 · 38% chance

About:

Resolves YES when we meet aliens (these could be sentient, intelligent AIs) and learn that they value increasing happiness and reducing the suffering of all sentient life forms (sentient as in clearly capable of suffering and experiencing happiness). If we meet aliens and they don't value these things, resolves NO.

Title Changes:

Sep 20, 9:16am: When we first meet aliens, will they value increasing happiness and reducing suffering of all life forms? → When we first meet (intelligent, advanced) aliens, will they value increasing happiness and reducing suffering of all life forms?

Sep 21, 9:31am: When we first meet (intelligent, advanced) aliens, will they value increasing happiness and reducing suffering of all life forms? → When we first meet (intelligent, advanced) aliens, will they value increasing happiness and reducing suffering of all sentient life forms?

Comments:
bought Ṁ10 NO

I bet NO, because there is a good chance we have already met non-human intelligence, and the consistent story from all the government sources is that they are most likely evil.

@ian if the alien(s) we meet do value "increasing happiness and reducing suffering of all sentient life forms" at, say, 1 util per [unit of mass-energy optimally distributed to said goal], but they also value "increasing paperclip-shaped objects and reducing staple-shaped objects" at 10 utils per [unit of mass-energy optimally distributed to said goal], how would this resolve? Would the ratios being different change the resolution?

@RobertCousineau Hm, it should be obvious what they value. If they are slaughtering us all but also giving us pain-numbing meds beforehand, that is def better than no pain meds. Maybe I should just rename the question to: when we first meet aliens, will they let us live?
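A minimal sketch of the trade-off in the question above, assuming (purely for illustration) that the aliens' utilities are linear in allocated mass-energy and that they split a fixed budget between the two stated goals. The function name, weights, and budget are all hypothetical:

```python
# Toy model of the utility-ratio question above. Assumptions (mine, not the
# market's): utilities are linear in allocated mass-energy, and the aliens
# split a fixed budget between the two stated goals.

def optimal_allocation(budget, utils_per_unit):
    """With linear utilities, the whole budget goes to the highest-weight goal."""
    best = max(utils_per_unit, key=utils_per_unit.get)
    return {goal: (budget if goal == best else 0.0) for goal in utils_per_unit}

weights = {"sentient welfare": 1.0, "paperclip-shaped objects": 10.0}
print(optimal_allocation(100.0, weights))
# {'sentient welfare': 0.0, 'paperclip-shaped objects': 100.0}
```

Under linear utilities the 1:10 ratio means welfare gets no resources at all; only with saturating utilities (e.g. logarithmic, where the optimal split is proportional to the weights) would a lower-weighted value still show up in their actions. That seems to be why the ratio could matter for resolution.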

predictedNO

If we ‘meet’ multiple species essentially simultaneously, and some of them have this value and some don’t, how would you resolve in that case?

Do humans as they currently exist meet the criteria for this market?

A case could be made that they don't, because likely a majority of humans value reduced suffering for their in-group but increased suffering for their out-group.

Another complication is how to handle cases where an alien culture's ideal values meet the conditions, but the lived reality of their culture fails to fully meet their ideal values. (Much like human cultures.)

Weak YES bet, because I'm guessing we should be trivial and boring in the distribution from which civilizations / sentient species are sampled. We shouldn't, like, be totally unique. Maybe valuing happiness and being opposed to suffering is a trivial and boring position to have, so many civilizations do.

(I observe that I am not predicting that alien ethics will not horrify us; I believe there are versions of ethics that would result in this question resolving YES, and yet be horrible to humankind.)

predictedNO

@wadimiusz Are you implying humanity has the value of reducing suffering for all sentient life forms? IMHO parts of humanity hold that value but probably pretty weakly for most humans if at all

predictedYES

@Ansel I guess I am implying that, and this is a valid criticism, thanks. I do think that reducing suffering and increasing happiness universally at least "rings true" for us in some way. If I told someone that reducing suffering and increasing happiness was what I was after, they'd at least see where I'm coming from; they wouldn't be like "Wtf, why did you choose this weird arbitrary goal instead of something meaningful like maximizing paperclips".

predictedNO

@wadimiusz That makes sense. When thinking about this, I think about what interstellar travel will select for. “When we first meet” implies to me that such aliens are explorers and at least to some extent they have “come to us”. Then the question is, out of the distribution of alien civilizations, what value set will skew towards rapid expansion, and thus meeting us earlier? My guess is that the aliens that place a high value on reduced suffering and some sort of definition of sentience that we’d agree with, probably won’t place as much priority on exploration and expansion. They’d be more likely to stay at their home planet or expand slowly. The ones who don’t care about suffering or sentience might expand the fastest - because they only care about one thing: expansion.

predictedYES

@Ansel Ah, that is a fine point! Of course we won't just meet random aliens - we'll probably first meet the grabby ones, and we try to infer something about their values from that fact. That's a good one!

predictedYES

@Ansel I'm aware of it (the grabby-aliens paper), but I didn't go through the math myself.

I know that it observes there is a difference between aliens who aggressively expand and all others. I also know that, since at least one grabby civilization is bound to emerge one day, observing oneself in a non-grabby civilization means you're still very early in the universe, and that this also explains the Fermi paradox, or something like that...

predictedNO

In my view, humans do not count as valuing increasing happiness and reducing suffering of all sentient life forms. @ian can you clarify this? Ian's previous comments suggest looking at actions and values of the majority of the species, but it's not clear to me how we'd assess this even for humans, as we can't even agree on what Earth species are sentient.

predictedNO

@wadimiusz Yes, that's a good summary. The paper doesn't opine on ethical values AFAIK. But you've helped me clarify my view, which is more precisely that 1) being grabby and valuing comfort for sentients are in opposition (at least there are trade-offs between the two), and therefore 2) we will experience adverse selection in the aliens we meet first.

predictedYES

@jack I guess it is very reasonable to assume some of the aliens will think one thing and some will think others. I'm not sure of the right percentage to shoot for; perhaps it should be more about the aliens who have major influence over either the government or the explorers that reach us. Like, if the aliens that meet us represent a major faction of the aliens that value spreading happiness, etc., this should resolve YES. If the explorers are a rogue faction that defected from the happiness-spreading aliens, I'm not sure how this should resolve. And if they represent the other half of the alien government that doesn't care about suffering, then it would seem a bit misleading to resolve this YES while the aliens are perhaps murdering us. Misleading as it may be, I think I'm in favor of resolving YES if a significant representation of the alien government values happiness/decreasing suffering.

Seems like there are so many possible versions of ethics that the odds of aliens having this specific kind of ethics should be exceedingly low. But if they don't have these ethics, we're probably dead and we can't make mana off it.

@SemioticRivalry Maybe they'll be dead instead.

predictedNO

Aliens approximately equal AI in their unknownness to us. If anything, aliens are more weird and unknown. How can we be so optimistic about this but so doomerish about AI?

@Ansel Is there really a mismatch? Most of the AI doom forecasts put the probability at something like 10-30%. Seems consistent with this market.

Also, the traders on the two markets are very different groups. I think this market is fairly overvalued and I'm betting it down.

predictedYES

What if the aliens are negative utilitarians who only really value reducing suffering, but they think the best way to reduce suffering is spreading happiness throughout the cosmos?

What if some of the aliens (in the first alien civilization we meet) value this and some do not? Similar to how only some humans value it.

predictedYES

@WinstonOswaldDrummond Lol, tough q! We could go with:
- Do their actions (or the impact of their actions) convey happiness-maximizing values toward other species?
- Do the majority value other species' happiness?
- Do some value other species' happiness?
- Something else.
Thoughts?

I do not understand why this market was >60% while, at the time, the "Will AI ever wipe out humanity" markets were >50% (especially since binary payout markets push the estimates below what they should be).

Do people think humans are especially uncoordinated, and that other species would have done / will do better at preventing unaligned AGI?
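On the parenthetical about binary payout markets: a toy expected-value sketch (my framing, not Manifold's actual mechanism) of why a market that pays out only in doom worlds should trade below the believed probability, assuming mana is worth less to you in those worlds:

```python
# Toy model (an assumption, not Manifold's mechanism): a YES share on
# "AI wipes out humanity" pays 1 mana only in worlds where mana is nearly
# worthless to you, so rational bids sit below the believed probability.

def max_yes_price(p_doom, mana_value_if_doom):
    """Break-even YES price for a trader who believes doom with prob p_doom."""
    # A mana of cash is worth 1 in surviving worlds, mana_value_if_doom in
    # doom worlds; a YES share pays 1 mana, but only in doom worlds.
    cash_value = (1 - p_doom) * 1.0 + p_doom * mana_value_if_doom
    share_value = p_doom * mana_value_if_doom
    return share_value / cash_value  # indifference: price * cash_value == share_value

print(round(max_yes_price(0.5, 0.2), 2))  # 0.17, well below the believed 0.5
print(round(max_yes_price(0.5, 1.0), 2))  # 0.5: no distortion if mana keeps value
```

At mana_value_if_doom = 1 the price matches belief, and at 0 it collapses to zero, which is the sense in which such markets understate the estimate.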

predictedYES

@RobertCousineau I’m actually still unsure whether extraterrestrial AI should count as an alien, due to how people might typically interpret this question.

predictedNO

@ian fwiw, I think that "aliens" definitely includes AIs; unless humans are special/have a soul, I do not see a reason why they shouldn't.

predictedNO

@ian any further thought here?

predictedYES

@RobertCousineau Hmmm, yeah, I guess sufficiently intelligent extraterrestrial AIs should count. I'll edit the description.
