
Will Aidan McLau's claim that very large models are "refusing instruction tuning" be validated by 2030?
42% chance
https://x.com/aidan_mclau/status/1859444783850156258
According to Aidan McLau, the reason very large models are not being released is that these models are resisting instruction tuning. Resolves YES if a current or former AI researcher at Google, OpenAI, Anthropic, or Meta validates this claim, or if it is confirmed independently by research.
This question is managed and resolved by Manifold.
Related questions
"Large models aren’t more capable in the long run if we can iterate faster on small models" within two years
13% chance
AI: Will someone train a $10T model by 2100?
59% chance
AI: Will someone train a $1T model by 2030?
19% chance
By March 14, 2025, will there be an AI model with over 10 trillion parameters?
11% chance
AI: Will someone train a $1T model by 2080?
62% chance
AI: Will someone train a $1T model by 2050?
81% chance
AI: Will someone train a $100B model by 2050?
80% chance
"Large models aren’t more capable in the long run if we can iterate faster on small models" within five years
8% chance
AI: Will someone train a $10B model by 2050?
87% chance
AI: Will someone train a $10B model by 2030?
83% chance