Before 2028, will any prediction market find a robust way to run a market on AI extinction risk? [M$50,000 reward]
2028 · 16% chance

/JamesDillard/will-ai-wipe-out-humanity-before-th

/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r

/IsaacKing/if-humanity-survives-to-2100-what-w

/IsaacKing/will-this-market-still-exist-at-the

None of these actually work. The basic mechanics that make (traditional) prediction markets accurate do not apply when traders value mana differently depending on which way the market resolves, or when there's no profit incentive at all.
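To make the distortion concrete, here's a minimal sketch (illustrative numbers; the key assumption is that mana has zero utility to a trader in worlds where humanity is extinct) of why a profit-seeking trader buys NO regardless of their true credence:

```python
# Sketch of the incentive distortion, assuming mana is worth nothing
# to a trader in extinction worlds. Numbers are illustrative.

def expected_usable_profit(side: str, price: float, p_doom: float) -> float:
    """Expected profit in *survival-world* mana from buying one share
    at the given YES price, for a trader whose true credence is p_doom."""
    if side == "YES":
        # Pay `price` now; the M$1 payout only arrives in worlds where
        # it is worthless, so only the survival-world loss counts.
        return (1 - p_doom) * (-price)
    # NO: pay `1 - price` now; collect M$1 in survival worlds.
    return (1 - p_doom) * (1 - (1 - price))  # = (1 - p_doom) * price

for p in (0.1, 0.5, 0.9):
    print(p, expected_usable_profit("YES", 0.16, p),
             expected_usable_profit("NO", 0.16, p))
# NO is profitable and YES unprofitable at any nonzero price, whatever
# the trader believes, so the price is pushed toward 0 regardless of
# the actual risk.
```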

Moreover, the existence of these sorts of markets, and of people who cite them as though they were credible, discredits prediction markets in the eyes of economically savvy outsiders. Figuring out a way to reward honest bets on human extinction would be an extremely useful innovation for anyone concerned about existential risk, as it would give humanity a concrete, empirical estimate of the risk's likelihood with which to convince the general public.

I've placed a M$50,000 limit order on NO, as a reward for anyone who can figure out how to resolve this to YES.

bought Ṁ10 YES

Would you say this market counts?

@IsaacKing what is the fundamental issue with this style of market? I think it could be made a bit better (add more resolution criteria about what entity will be calculating the estimate, so we know it is an intellectually honest superintelligence, etc.), but the idea works in my opinion.

predictedNO

@RobertCousineau It could be wrong, and the time frame is too long for people to really care about predicting it accurately now.

predictedYES

How about @JonathanRay's new market, prediction via Whalebait?

I claim it is theoretically impossible to forecast this with a market alone, though I do believe one can forecast it reasonably by other methods.

First, I will note that it is possible to make market forecasts that at least correlate with the risk of your own death, or the risk of extinction: that risk is part of the price of every loan, and loans are constantly being bought and sold on the markets.

One fundamental problem is that there is no way for a market to distinguish one type of extinction risk from another. The other is that these prices reflect discount rates in which x-risk is just one small component; there are other, much bigger components, like the rate of economic growth.
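To put rough numbers on that second problem (all components below are assumed for illustration, not calibrated estimates), here is a sketch of why an extinction-hazard term is essentially unrecoverable from loan prices:

```python
import math

# Illustrative decomposition of a nominal discount rate. X-risk is one
# small term among much larger ones, and a bond price only reveals the sum.
components = {
    "time_preference":   0.010,
    "expected_growth":   0.020,
    "inflation_premium": 0.020,
    "default_risk":      0.005,
    "extinction_hazard": 0.002,  # the term we would like to isolate
}
r = sum(components.values())

def zero_coupon_price(rate: float, years: float = 10.0) -> float:
    """Price of a 10-year zero-coupon bond under continuous discounting."""
    return math.exp(-rate * years)

with_xrisk = zero_coupon_price(r)
without_xrisk = zero_coupon_price(r - components["extinction_hazard"])
print(f"{with_xrisk:.4f} vs {without_xrisk:.4f}")
# ~2% price difference, easily swamped by ordinary uncertainty in the
# other components -- and nothing in the price says which catastrophe
# (AI, pandemic, nuclear war) the hazard term refers to.
```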

Here is a post I made with a more detailed explanation: https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r#8o50dQrbUR5gW9vxOZId

Will AI wipe out humanity by 2030? [resolves N/A in 2027]
15% chance. Will continuing progress in AI capabilities result in every human dying by Jan 1st, 2030?

Due to M$ not being worth anything if the world ends, people have an incentive to bet "NO" on questions about AI killing everyone even if they believe the correct answer is probably YES. Pleading with people to just bet their beliefs on this important question doesn't seem like the best possible solution.

This market resolves N/A on Jan 1st, 2027. All trades on this market will be rolled back on Jan 1st, 2027. However, up until that point, any profit or loss you make on this market will be reflected in your current wealth; which means that purely profit-interested traders can make temporary profits on this market, and use them to fund other permanent bets that may be profitable; via correctly anticipating future shifts in prices among people who do bet their beliefs on this important question, buying low from them and selling high to them.

In principle there's still a distortionary effect if mana is worth less to you in 2026 within worlds that end by 2030. But since M$ are in any event only useful for charity (including charity that could try for last-ditch efforts to save us in doomed timelines) and not for personal partying in any timeline, the distortion should be less. I'm sure people will come up with galaxy-brained reasons not to bet their true beliefs in this market too, including trolls who come up with elaborate arguments for that in the comments just to be trolls; but the actual incentive distortion should be less.

The intent of this market is that the semantic question for people to bet their beliefs on, is about AI progress causally resulting in literally every human dying by 2030. I don't think it's particularly likely that this would happen by accident, or by the deliberate choice of humans staying in control; but eg terrorists purposefully using LLAMA 4 to build a supervirus would count if the last survivors died by Jan 1st, 2030.

Related market: (https://manifold.markets/embed/EliezerYudkowsky/will-artificial-superintelligence-e)

Again, the solution is to make non-market-based predictions. One can simply ask forecasters for their predictions. You don't want to score them on resolution, of course.

Here's an example: https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament

Oh, and you can use a prediction market to predict the result of a non-market-based prediction on AI risk. I'm confident that can work well. Does that count though?

I think this is worth doing! Do we have any markets on, for example, what the Metaculus AI X-risk forecast will be at a certain date?

This one seems interesting

Metaculus has a lot of good AI risk questions, but I think the quality of predictions on the specific AI extinction questions I'm aware of is pretty bad. E.g. https://manifold.markets/jack/will-humans-go-extinct-before-2100 - I think the Metaculus predictions here are very low-quality, and as evidence I can point to them being horribly inconsistent with other Metaculus questions. But this is not an issue with all x-risk questions; others have what I believe are pretty good-quality predictions.
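For illustration of the kind of inconsistency meant here (the events and numbers below are made up, not actual Metaculus forecasts): probabilities on logically nested events must be monotone, so violations are a cheap quality check.

```python
# Toy coherence check: if event A implies event B, then P(A) <= P(B).
# Events and probabilities are illustrative only.

forecasts = {
    "ai_caused_extinction_by_2100": 0.04,
    "any_extinction_by_2100": 0.01,  # incoherent: must be >= the AI-only case
}

implications = [("ai_caused_extinction_by_2100", "any_extinction_by_2100")]

for a, b in implications:
    if forecasts[a] > forecasts[b]:
        print(f"incoherent: P({a}) = {forecasts[a]} > P({b}) = {forecasts[b]}")
```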

I have already made markets on global catastrophic risk from AI, which I and many other forecasters believe is very similar to the risk of AI-caused human extinction. https://manifold.markets/jack/will-a-global-catastrophe-kill-at-l

There's a pretty good strategy for making high-quality predictions on X-risk:

  • Assess forecasters' prediction accuracy on measurable questions, particularly ones that relate to the topic of AI and x-risk.

  • Ask them for predictions on the X-risk questions that cannot be directly scored - e.g. "Will AI cause human extinction by 2100?"

  • Also ask them for predictions on related questions that can be measured and scored - e.g. on AI capabilities and safety progress.

  • Aggregate the predictions with weighting based on their track record on past measurable questions (a minimal sketch of this step follows below).

  • Assess how well the predictions on the unmeasurable questions fit with the predictions on the measurable questions.

(It's kind of like the strategy for assessing the quality of long-range predictions when your dataset so far only has short-range predictions.)
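Here's a minimal sketch of the aggregation step, assuming log-odds pooling (one reasonable choice among several); the forecaster names, weights, and probabilities are all illustrative:

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# Weights come from each forecaster's track record on past *measurable*
# questions; the probabilities are their answers to an unscoreable one.
forecasters = {
    "alice": {"weight": 3.0, "p_extinction": 0.02},
    "bob":   {"weight": 1.0, "p_extinction": 0.20},
    "carol": {"weight": 2.0, "p_extinction": 0.05},
}

total_weight = sum(f["weight"] for f in forecasters.values())
pooled = sigmoid(
    sum(f["weight"] * logit(f["p_extinction"]) for f in forecasters.values())
    / total_weight
)
print(f"track-record-weighted estimate: {pooled:.3f}")
```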

Multiple forecasting groups have already been doing this type of work.

@jack Can you refer me to some of the work done by these forecasting groups? I would be very interested to read it.

What are your resolution criteria for this market?

predictedNO

@alextes Traders must have a profit incentive to bet their true beliefs. (Or very close to them.)

predictedNO

@IsaacKing yes, and this belief often depends on the resolution, which in this case depends on you. Are you saying you won't resolve before close? If you reserve the right to resolve before close based on your subjective idea of "a robust way to run a market on AI x-risk", I feel I can't safely bet my beliefs here anymore. If you only resolve at close, it's still unsafe without defined criteria, but I can safely exit before then.

predictedNO

@alextes Traders must have a profit incentive to bet their true beliefs in the market about AI risk.

predictedNO

@IsaacKing You have not answered my question about resolving prior to close. Nor acknowledged how your subjective perspective influences my “true beliefs” for this market.

That’s okay, although it also means I can’t trade this. Good luck 😄.

predictedNO

@alextes It's pretty objective whether there's an incentive to bet your true beliefs in any given market structure (or at least something close to them, modified by the Kelly criterion due to non-infinite capital), and it's not worth my time to explain basic economics that you can look up elsewhere. (If you'd like a starting point, look up "incentive compatibility" or "strategyproofness". Or just spend a few minutes thinking creatively about what would make you the most mana, and then see if it involves betting to your credence on the market resolution event occurring.)

This market will resolve to YES as soon as such a system is implemented, even if it occurs before 2028.
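To make the Kelly point above concrete, a small sketch (illustrative, not Manifold's actual mechanics): with finite capital, the optimal stake is a fraction of bankroll that shrinks to zero exactly when the market price matches your credence, which is what ties the price to traders' true beliefs in an ordinary market.

```python
def kelly_fraction(p: float, q: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a binary market,
    given true credence p and market YES-price q. Positive means buy
    YES; negative means buy NO (stated as a fraction on the NO side)."""
    if p > q:
        return (p - q) / (1 - q)   # YES is underpriced relative to belief
    if p < q:
        return -(q - p) / q        # NO is underpriced relative to belief
    return 0.0                     # price equals credence: no trade

print(kelly_fraction(0.30, 0.16))  # edge on YES -> stake ~17% of bankroll
print(kelly_fraction(0.16, 0.16))  # no edge -> 0.0
```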

@IsaacKing Are you requiring that it be 100% incentive compatible? What if it works well enough in practice to give good results 90% of the time, for example? (I would note that Manifold is certainly not 100% incentive compatible for a ton of reasons)

predictedNO

@jack It needs to be good enough that, as someone concerned about existential risk, I'd seriously consider throwing a few thousand dollars into subsidizing it so we can finally have a reliable number that I can show to people and say "this market system proves the risk is about this high, because anyone who thinks it's lower could turn a profit by betting in that direction".

@IsaacKing How about funding a non-market-based superforecasting study with a few thousand dollars? I think that is the better approach, as described in the thread above. (For addressing x-risk, not for resolving this market.)

predictedNO

@jack It may be equally accurate, but it's much less useful for convincing others.

@IsaacKing Quite possible, but is there actual data on how convincing people find prediction markets vs other forecasts?

predictedNO

@jack Not that I'm aware of. (Seems like an interesting study area.) It seems obvious to me, though, that saying "here's a market; if you think it's wrong you can turn a profit by trading in it" is a lot more convincing than "here's a number that someone you disagree with came up with".
