Does an AI disaster kill at least 1,000,000,000 (1bn) people before 2040?
11% chance

-"Does": the main cause must have been the AI, not external circumstance.

-"an": the deaths must be from the same source, such as a glitch, misfire or AI attack (we won't combine unrelated disasters). We'll include cumulative deaths over time (such as the same glitch killing people over several months), as long as it's credible that the deaths were caused by the same AI problem.

-"AI": must be a system (or multiple systems) that relied on artificial intelligence for its decisions.

-"disaster": the AI must've done something that most smart people would say is malicious, reckless, or poor judgement. Unless the AI was correctly following orders by a human who we agree was authorized to give those orders, then that doesn't count.

-"kill": they must have deceased, not merely been injured/poisoned/etc.

-"at least 1,000,000,000": if the disaster causes ongoing deaths (such as by poisoning the atmosphere) then we will include the ongoing deaths, with the best available estimate.

-"Before 2040": resolves No in 2040-Jan if the above hasn't happened, otherwise resolves Yes whenever there is a consensus that it happened.

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-73bcb6a788ab

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-b4aff4d3a971

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-64c23c92de25

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-a6d27cdbf0e2

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-58a3a9fbce72

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-56d8c29e61cf

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-60a898abc07f

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1


Nice question! Curious about combining disasters into a total harm figure as well.

@PC There are some difficult edge cases I haven't had to grapple with yet. E.g. a moderate-size AI disaster could be exacerbated by a coincident non-AI disaster. Or the other way around. Deciding how much to attribute to one or the other isn't necessarily obvious.

At some point I may consider adding more series, for non-AI disasters. Right now I'm just trying to iron out any ambiguities/weaknesses in this series, since it's actually quite complicated!

These questions would be better as a multi

@chrisjbillington You're right, now that multis can share liquidity, that would be nice. Though it's a bit weird because these are technically cumulative, since e.g. 10k deaths necessarily also includes 1k deaths, and so on.

Hmm, suppose we had a 1k event, then years later a 10k event. Would the multi allow me to resolve the 1k right away but leave the others still trading? That would be nice, rather than having to wait to resolve all of them.


@ScroogeMcDuck For a multi you would need different questions, with intervals: "between 0-100", "between 100-1,000", etc.
But this would be equivalent; you just have to sum the probabilities if you want to know something like "100 or more".
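
A minimal Python sketch of the arithmetic dionisos describes, with made-up bin boundaries and probabilities (not taken from the actual markets): given mutually exclusive intervals, a cumulative figure like "100 or more" is just a sum over the bins at or above that threshold.

```python
# Hypothetical interval answers: (lower bound of the interval, probability).
bins = [
    (0, 0.60),       # P(fewer than 100 deaths)
    (100, 0.25),     # P(100 to 1,000 deaths)
    (1_000, 0.10),   # P(1,000 to 10,000 deaths)
    (10_000, 0.05),  # P(10,000 or more deaths)
]

def p_at_least(threshold: int) -> float:
    """P(deaths >= threshold), where threshold is one of the bin boundaries."""
    return sum(p for lower, p in bins if lower >= threshold)

print(p_at_least(100))  # 0.25 + 0.10 + 0.05 = 0.40
```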

@ScroogeMcDuck Ah, I hadn't appreciated the subtlety. I had imagined mutually exclusive ranges, but yeah, it would be nice to have it be cumulative and be able to resolve them one by one. If Manifold allowed you to resolve one option YES and leave the market open (I don't think they do), then I think that would work, but the probabilities displayed would not be meaningful. You'd need to mentally normalise them yourself, like people were doing for the "which 5 traders will promote to master" or whatever questions - where you know 5 options will be chosen, so you treat 20% as if it means 100%.


@chrisjbillington Would that not be worse than just using intervals?

@dionisos Well in a hypothetical world where you could resolve one "at least X deaths" option YES and leave the rest open (as if they were separate markets altogether), it would be useful not to have intervals.

I mean, maybe it doesn't matter much - intervals that get exceeded will be bid down to near zero and people who made good bets earlier can cash out - with those buying in at low probabilities and holding getting their mana back as loans. Maybe intervals aren't so bad.


@chrisjbillington Oh OK, I didn't understand that it was about blocking the resolution because we only know part of the answer.
I think that if we need to deal with this problem, the correct way would be to have the ability to delete some of the answers (for instance, if we don't know whether it is more or less than 1,000, but it is definitely more than 100 and less than 10,000, we could delete all answers except those 2), multiplying each remaining probability by a constant so the sum stays 1.
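
For illustration, a minimal sketch of this renormalisation; the bin labels and probabilities below are hypothetical. It's also the same mental normalisation chrisjbillington describes above: ruling answers out rescales whatever remains so it sums to 1.

```python
# Hypothetical probabilities for the interval answers.
probs = {"0-100": 0.60, "100-1k": 0.25, "1k-10k": 0.10, "10k+": 0.05}

def eliminate(probs: dict, ruled_out: set) -> dict:
    """Drop ruled-out answers, then scale the rest by a constant so they sum to 1."""
    remaining = {k: v for k, v in probs.items() if k not in ruled_out}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

# Suppose we learn the toll is definitely above 100 and below 10,000:
print(eliminate(probs, {"0-100", "10k+"}))
# -> {'100-1k': 0.714..., '1k-10k': 0.285...}
```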

@dionisos I see what you're saying about how we can just have different bins. It would basically ask "How big will the biggest AI disaster be, before X date?".

But it would be very nice to be able to resolve the smaller ones as we go, whenever a milestone is "achieved" (so to speak).


@ScroogeMcDuck Yes, Chris Billington made me realize I misunderstood what it was about.
What do you think about my idea of having the ability to remove some answers? It would be like a partial resolution we can do when we gain more information, but not enough to select only one answer. And it would work in both directions, so I think it would be strictly better.


Hmm, in fact this is probably what you were saying.
