Will there be a disaster caused by open source developers doing unsafe things with AI by 2028?
61% chance
"Disaster" is to be interpreted as something framed as such by the media, unless the outcome is merely embarrassment or something that causes offense.
This question is managed and resolved by Manifold.
Say that somebody uses an open-source AI to facilitate something that the media consensus defines as a disaster. Would that count as "developers doing unsafe things," in that the disaster wouldn't have been possible without the developer releasing the model? Or does the developer need to be more directly involved?
Examples: Bob uses a jailbroken Llama to work out how to make and use a nerve agent.
Alice agentizes Llama and uses it to automatically scam a thousand elderly people.
Related questions
By the end of 2024, will there be a major security vulnerability reported to be caused by AI generated code?
60% chance
Will open-source AI win (through 2025)?
33% chance
Will there be an anti-AI terrorist incident by 2028?
61% chance
Will there be a massive catastrophe caused by AI before 2030?
44% chance
Will OpenAI, Anthropic, or Google DeepMind suffer a significant security incident by the end of 2024?
17% chance
Will open-source AI win? (through 2028)
35% chance
Will there be another blatant demonstration of AI risks, comparable to Bing Chat, by 2024?
30% chance
Will OpenAI have AI-related IP stolen before 2026?
52% chance
Will OpenAI be involved in a major scandal before 2025?
35% chance
By 2029, will there be a public "rogue AI" incident?
89% chance