Will anyone commit suicide due to fears of anti-aligned AI before 2030?
Ṁ7439
2030
86% chance

Must be pretty clear that this was the primary reason for the suicide. Mentioned it in their suicide note, had expressed such fears to acquaintances, took steps to ensure their brain was unrecoverably destroyed, etc.

It must be a true counterfactual suicide, where they wouldn't have just done it for some other reason if they didn't know about AI risk.

The reason for the suicide must be fearing a fate worse than death from an AI that chooses to make humans suffer in some way rather than killing us outright.

If this has already happened in the past, I'll count that too.


Anti-aligned AI needs awareness. Let's recruit suicidal people. New cause area, when?

I'm taking steps to make sure my brain is unrecoverably destroyed by not signing up for cryonic storage.

Partly because of unaligned future humans, not just AIs. Humans are scary.

predicted NO

@MartinRandall Depending on your predicted takeoff speed, that might not be enough. You may not have sufficient warning to destroy your brain before the choice is taken away from you.

new type of guy just dropped: guy who commits suicide to win this market

@42irrationalist we do a little trolling

@42irrationalist Not much profit to be gained from that, the market is already at 86%.

@42irrationalist That is not a new type of guy; there was already a LW guy who planned on getting a very large life insurance policy, waiting for the suicide clause to lapse, and then committing suicide.

Don't remember if he went through with it or not but he certainly sounded serious about it.

predicted NO

@DavidBolin Have a link?

I think unfortunately this market should be censored. It's well known that suicide is highly suggestible, and it's not clear that, beyond shock value, this market contributes a lot to the global information ecosystem. I had a close EA/rat friend who committed suicide in 2019 after not getting a green card and feeling impotent to work on global x-risks. I would be happy to donate my mana to traders who lose out on this market if it is censored.

predicted NO

@tftftftftftftftftftftftf I think that anyone hanging around in the X-risk space is going to hear about the possibility somewhere, and we can't censor all of those people. While deleting this market would lower the probability that someone finds out about the idea, I doubt it would lower it by all that much.

On the contrary, I think the best way to fight suicide is not by refusing to talk about it, but by doing the opposite and being open to discussing it with people. When a depressed person feels like they can't talk about it with anyone and can't find any further information, that's going to leave them feeling yet more isolated, with no reason to change their path. If on the other hand they find this market, see a bunch of good arguments here against committing suicide, and find users willing to seriously discuss the issue with them, then they may change their mind.

Also, the fact that this market is at almost 90% seems like extremely important information for the X-risk community to have. It implies that there's a serious risk to community members, and the community should consider how to mitigate it. Censoring such discussions seems like a great way to increase the risk of people committing suicide.

predicted YES

@IsaacKing I don't have a strong position on whether or not discussing / creating markets on future suicides is harmful (a quick scan of the literature suggests multiple views but no predominant theory at this scale?). However, learning lessons from other markets' experiences, I am concerned that leaving this up also normalises markets on suicides more broadly, which I am against for reputational, regulatory, and safety reasons.

predicted NO

dude, every year in the US 1M people attempt suicide (although that doesn't necessarily mean "coherent attempt with real chance of success") and 50k people commit suicide. it's a massive cultural topic with lots of popular music, some tv shows, etc about it. a single 'manifold market' isn't going to make even .0001% of a difference. more generally, if manifold is a way to find truth, censoring is bad.

predicted YES

Also, I would like to put forward the view that if one day we defeat involuntary death, all deaths except those due to accidents and altercations might be due to suicide, which would mean we'd have to accept that suicide was a valid choice for some people. And if you agree with that, you might accept that some suicides, where people have thought about them carefully and considered all their options, are valid choices even today. Also, if an AI apocalypse really is coming, suicide might be a reasonable thing to do. Please do not misread this comment as saying I think an AI apocalypse definitely is coming - I do not think that.

Anti-aligned AI means AI that is actively against human values, instead of just unaligned AI which simply doesn't care, right?

predicted NO

@VictorLi Yep. Anything that would lead to an outcome worse than dying, making suicide preferable.

@IsaacKing right, u shud probably make it clear in the description that this is about escaping s-risk instead of x-risk, i feel like a lot of ppl are confusing the two

predicted NO

@VictorLi There's no significant incentive to escape X-risk via suicide, since you're just dying either way.

@IsaacKing yea, i get that, but just judging by some of the comments (and my initial reaction) i think it might be a good idea to clear up the distinction between anti-aligned and unaligned lol

predicted NO

@VictorLi Updated.

predicted NO

I only know of one climate-change-related suicide, and the cultural obsession with climate doom is at least 100x bigger (though the actual risk is 100x smaller).

predicted NO

@JonathanRay Yes, but even the most doomerist climate change predictions don't threaten to cause an inescapable fate worse than death, so there's no incentive to commit suicide before it comes to pass.

unaligned AGI is a big cultural phenomenon now, so it's just a background scary thing that 'the sort of person who commits suicide' might commit suicide over, so pretty likely

issues: does 'suicide because AI did/will take my job' count? probably not?

does it count if the person is very delusional/schizophrenic? (as the cause of the suicide might be less "AI alignment worries" and more "schizophrenic person who randomly picked that as a reason")?

does 'suicide because AI will turn evil and kill everyone - but the person hadn't heard of AI alignment as a concept, and got the idea from old scifi' count??

predicted NO

@jacksonpolack

  1. Nope. An AI that successfully takes a job is aligned enough to not kill all humans; I'd take that as a positive.

  2. Delusional counts, yeah. But I must believe that it's a true counterfactual suicide, and they wouldn't have just done it for some other reason if they didn't know about AI risk.

  3. Old scifi counts, sure.

ASI angst has been a significant thing for a decade at this point; I am kind of surprised there's not a documented case already. I suppose that makes it less likely than I instinctively think, but OTOH more and more people are being introduced to AI-related ideas, so one 2025 is probably worth ten 2015s.

I would guess >95% yes, but I'm only ~80% that it will happen and Manifold users will find out about it.
