Will OpenAI, Anthropic, or Google DeepMind suffer a significant security incident by the end of 2024?
Ṁ50k volume · 17% chance

This question will resolve YES if any of the following are reported:

  • An unauthorized actor breaches an AI lab's network security.

    • For example, if an AI lab's model weights are exfiltrated.

  • A capability improvement belonging to an AI company is shared without authorization.

    • For example, if an engineer is publicly accused of sharing secrets with another company.


A data breach involving customer data, like the ChatGPT bugs in 2023, will not trigger a YES resolution.

This market will resolve NO if, by Jan 1, 2025, there exist no public reports of a significant incident.

This is a near-identical market to Rob Wiblin's 2023 market.

opened a Ṁ3,000 NO order at 40%

Between this market being "near-identical" to one that resolved NO for 2023, the only incident in question being from early 2023, and that incident having been kept secret because it wasn't impactful enough (i.e., it didn't involve the sort of things used as examples in the description), I think this market was overvalued and is likely to be mod-resolved NO (absent new news) since the creator's account was deleted.

sold Ṁ465 NO

A hacker accessing extensive company-internal chat logs was recently reported on by the NYT: https://archive.ph/7K69b#selection-645.129-645.133

This occurred before the creation of this market, but I'd assume the market should cover anything first reported on during the question's duration (otherwise no market would cover such events).

Important question! I've curated it on https://theaidigest.org/timeline; it'd be nice to see more questions on lab infosec and harms from breaches.

Partial weight stealing attacks (through API queries) do not count?
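For context, "partial weight stealing" via API queries usually refers to the kind of logit-extraction attack described in public research, where many full next-token logit vectors are collected and their low-rank structure reveals the model's hidden dimension and, up to a linear transform, its final projection layer. A minimal sketch under that assumption, using a hypothetical `query_logits` helper that returns the full logit vector as a numpy array (real APIs generally don't expose this):

```python
# Hypothetical sketch of a "partial weight stealing" attack via API queries,
# in the spirit of logit-extraction attacks described in public research.
# `query_logits` is an assumed stand-in for an API call returning the full
# next-token logit vector as a numpy array; real APIs generally restrict this.
import numpy as np

def estimate_hidden_dim(query_logits, prompts, tol=1e-3):
    # Stack one logit vector per prompt into an (n_prompts, vocab_size) matrix.
    Q = np.stack([query_logits(p) for p in prompts])
    # Logits are roughly hidden_state @ W_out.T, so Q has rank at most the
    # model's hidden dimension. Counting singular values above a noise
    # threshold estimates that dimension; the top right-singular vectors
    # recover the output projection up to an unknown linear transform.
    s = np.linalg.svd(Q, compute_uv=False)
    return int((s > tol * s[0]).sum())

if __name__ == "__main__":
    # Toy demo against a fake "API" backed by a random linear output layer.
    rng = np.random.default_rng(0)
    hidden, vocab = 64, 1000
    W_out = rng.normal(size=(vocab, hidden))

    def fake_query(_prompt):
        return W_out @ rng.normal(size=hidden)

    print(estimate_hidden_dim(fake_query, prompts=range(200)))  # prints 64
```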

predicted YES

What if the organization is breached but no AI models/weights, etc. are compromised? Stolen financial info or employee data, perhaps?
