By 2026, will it be standard practice to sandbox SOTA LLMs?
2026 · 29% chance · Ṁ557

Resolves YES if most SOTA models are released within some kind of not-completely-trivial sandbox

Example scenarios:

  • If most models have direct read/write access to the internet, resolves NO

  • If DeepMind sandboxes all of their models but Brain and OpenAI don't (assuming those are the three groups releasing SOTA LLMs), resolves NO

  • Resolves NO if there's general agreement that it should be standard practice but isn't

  • Resolves YES if companies are sandboxing but the sandboxes have notable security flaws.

  • Resolves YES if models have some kind of restricted read access to the internet

    Oct 4, 12:26pm: Clarification: Resolves YES if models are deployed in sandboxes with some intentional gaps, e.g. a chatbot that has read/write access to some specific communication channel but is otherwise treated as untrusted code.
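To make the clarification concrete: one shape a "sandbox with intentional gaps" could take is a deployment where the model's tools can only reach an explicitly allowlisted communication channel and everything else is refused. This is a minimal sketch, assuming Python; the host name and function name are hypothetical, not anything a lab actually ships:

```python
import urllib.request
from urllib.parse import urlparse

# Hypothetical allowlist: the one communication channel deliberately left open.
ALLOWED_HOSTS = {"chat.example.com"}

def restricted_fetch(url: str, timeout: float = 5.0) -> bytes:
    """Fetch a URL only if its host is allowlisted; refuse everything else."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"blocked: {host!r} is not an allowlisted channel")
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read()
```

Under the clarification above, a deployment along these lines (read/write on one specific channel, everything else denied, model output otherwise treated as untrusted) would count toward YES even if the sandbox itself has flaws.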


Would this resolve YES if applied to today's standards? I.e., models running on a server, reachable via API but without unrestricted access to the internet.

@LauroLangoscodiLangosco I would require that the models also be sandboxed off from the systems they're running on, which is generally not the case for current models.

predicted YES

@vluzko Today's language models are not allowed to execute code, are run with fixed memory requirements, etc. If that's not sandboxing, what is?

(Example: if we gave LLMs read/write access to a terminal, I'd consider that not-sandboxed; see the sketch below.)

Seems to me like you probably want something stronger than just "some kind of not-completely-trivial sandbox".
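For the terminal example above, the distinction being drawn is roughly between piping model output straight into a shell and running it as untrusted code under constraints. A rough sketch of the constrained side, in Python with a hypothetical model_output string; real isolation would still need container- or OS-level sandboxing on top of this:

```python
import subprocess
import sys
import tempfile

def run_untrusted(model_output: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run model-generated Python in a separate process with a time limit,
    an empty environment, and a throwaway working directory.

    This alone does NOT block network or filesystem access the way a real
    sandbox (container, seccomp, gVisor, etc.) would; it only illustrates
    treating the output as untrusted rather than handing it a shell.
    """
    with tempfile.TemporaryDirectory() as workdir:
        return subprocess.run(
            [sys.executable, "-c", model_output],  # no shell, no inherited session
            cwd=workdir,
            env={},              # no inherited credentials or API tokens
            capture_output=True,
            timeout=timeout,
        )
```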
