
How much FLOP will be used to train the best language model with freely available weights on July 1, 2025?
Ṁ1237 · Jul 2
<1e24          2%
[1e24, 3e24)   2%
[3e24, 1e25)  12%
[1e25, 3e25)  15%
[3e25, 1e26)  33%
[3e26, 1e27)  25%
[1e27, 3e27)   4%
[3e27, 1e28)  1.9%
[1e28, 3e28)  1.6%
[3e28, 1e29)  1.2%
>=1e29        1.2%
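For context on how the buckets above map to model scale, training compute for dense transformer language models is commonly approximated with the 6ND rule (roughly 6 FLOP per parameter per training token, covering the forward and backward passes). A minimal sketch, with illustrative parameter and token counts that are assumptions, not a claim about any particular model:

```python
def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate dense-transformer training compute with the
    standard ~6 * N * D rule (forward + backward pass FLOP,
    ignoring attention and other overheads)."""
    return 6 * n_params * n_tokens

# Illustrative: a 70B-parameter model trained on 15T tokens
flop = training_flop(70e9, 15e12)
print(f"{flop:.1e}")  # 6.3e+24, which falls in the [3e24, 1e25) bucket
```

Under this estimate, the leading buckets ([1e25, 3e25) and [3e25, 1e26)) correspond to runs a few times to a few tens of times larger than that illustrative 70B/15T configuration.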
This question is managed and resolved by Manifold.
Related questions
Will an AI model use more than 1e28 FLOPS in training before 2026? (9% chance)
At least one of the most powerful neural nets at end of 2026 will be trained using 10^26 FLOPs (97% chance)
At least one of the most powerful neural nets at end of 2030 will be trained using 10^26 FLOPs (97% chance)
Will a flagship (>60T training bytes) open-weights LLM from Meta which doesn't use a tokenizer be released in 2025? (29% chance)
End of pre-training era for language models: Will an LM fine-tune for more FLOPs than it is pre-trained for, before 2026 (44% chance)
Will a machine learning training run exceed 10^26 FLOP in China before 2026? (52% chance)
Will the largest machine learning training run (in FLOP) as of the end of 2025 be in the United States? (89% chance)
Will a lab train a >=1e26 FLOP state space model before the end of 2025? (15% chance)
Will a machine learning training run exceed 10^25 FLOP in China before 2027? (82% chance)
Will a machine learning training run exceed 10^25 FLOP in China before 2025? (77% chance)