Will any model get above human level on the Simple Bench benchmark before September 1st, 2025?
55% chance
This question is managed and resolved by Manifold.
The human baseline is now 83.7%. It is unfortunate that the old baseline is in the name, but I will resolve this YES if any model exceeds the human baseline published on https://simple-bench.com.
Is it true that this benchmark can be anything, and can be changed at any point? There are no hashes, no large sample of problems, no error bars, no evaluation code, and no specifics on what a model can or cannot use. How do we know what the true performance is, other than what the author says?
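For context on what "hashes" would add here: a benchmark author can commit to a hidden test set by publishing a cryptographic digest of each problem at launch, so third parties can later verify the problems were not changed after the fact. Below is a minimal illustrative sketch in Python; the problem strings and function names are hypothetical, and simple-bench publishes no such commitments as far as this page shows.

```python
# Hypothetical sketch: committing to a hidden test set via per-problem SHA-256 hashes.
import hashlib
import json

def commitment(problems: list[str]) -> list[str]:
    """Return the SHA-256 hex digest of each problem statement."""
    return [hashlib.sha256(p.encode("utf-8")).hexdigest() for p in problems]

def verify(problems: list[str], published_hashes: list[str]) -> bool:
    """Check that revealed problems match the hashes published at launch."""
    return commitment(problems) == published_hashes

if __name__ == "__main__":
    hidden_set = ["Example problem 1 ...", "Example problem 2 ..."]  # placeholder problems
    published = commitment(hidden_set)   # digests released when the benchmark launches
    print(json.dumps(published, indent=2))
    print(verify(hidden_set, published))  # True only if nothing was altered
```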
Description of the benchmark here: https://simple-bench.com/about.html
I have made some irrational bets to subsidize the market, as I cannot be bothered to figure out the correct way to do this.
Related questions
Will an AI achieve >85% performance on the FrontierMath benchmark before 2028? (61% chance)
Will OpenAI models achieve ≥90% on SimpleBench by the end of 2025? (40% chance)
80% on SWE-Bench Verified by Jan 1 2025 (10% chance)
Will there be a model that has a 75% win rate against the latest iteration of GPT-4 as of January 1st, 2025? (62% chance)
Will an AI achieve >85% performance on the FrontierMath benchmark before 2027? (31% chance)
Will models be able to do the work of an AI researcher/engineer before 2027? (40% chance)
What will be the best score on the SWE-Bench (unassisted) benchmark before 2025? (39% chance)
Will simple-bench scores be reported in a major AI lab paper or blog post by the end of 2024? (52% chance)
Will an AI model outperform 95% of Manifold users on accuracy before 2026? (56% chance)
Will an AI score over 10% on FrontierMath Benchmark in 2025? (74% chance)