Will it be revealed by 2030 that Bing Sydney's release was partially a way to promote AI safety?
42 traders · Ṁ2796 · closes 2031 · 5% chance

Microsoft Bing gained some spooky behavior after generative AI was added. Will it be revealed by the end of 2030 that at least one person involved in (1) Sydney's development or (2) the decision to release Sydney was influenced by the fact that Sydney's behavior would help promote the concept of AI safety?


This is Microsoft we're talking about; they can barely manage to maintain a working volume slider.

"That said, at this point, instead of just saying stop, I would say we should speed up the work that needs to be done to create these alignments. We did not launch Sydney with GPT-4 the first day I saw it, because we had to do a lot of work to build a safety harness. But we also knew we couldn't do all the alignment in the lab. To align an AI model with the world, you have to align it in the world and not in some simulation."

Quoting Satya Nadella, from https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai/

(I won't count this toward resolution, since they wanted to learn more about safety while my criterion is promoting safety, but I thought it was relevant to share.)

Also see https://arstechnica.com/tech-policy/2023/06/report-microsoft-launched-bing-chatbot-despite-openai-warning-it-wasnt-ready/

predicted NO

@ChristopherKing This really seems like evidence against them having some vaguely principled pro-safety position.
