
an LLM as capable as GPT-4 runs on one 4090 by March 2025
Ṁ845 · Mar 2
35% chance
e.g. WinoGrande >= 87.5%
This question is managed and resolved by Manifold.
Does it count if I can run the LLM with CPU RAM offloading, the way Ollama does automatically? (It would be very slow, but it would work.)
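For context, this is roughly what CPU RAM offloading looks like with Hugging Face Transformers plus Accelerate: layers that fit in the 4090's 24 GiB of VRAM stay on the GPU, and the rest spill into system RAM. A minimal sketch follows; the model id and memory limits are placeholder assumptions, not part of this market's resolution criteria.

```python
# Minimal sketch of CPU RAM offloading with Transformers + Accelerate.
# The model id and memory caps are placeholders (assumptions), chosen only
# to illustrate the technique the comment above describes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-70b-model"  # hypothetical model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                        # let Accelerate split layers across devices
    max_memory={0: "24GiB", "cpu": "64GiB"},  # cap GPU 0 at 24 GiB, spill the rest to CPU RAM
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Generation still runs, but any layer resident in CPU RAM has to be paged across PCIe on every forward pass, which is why the commenter expects it to be very slow.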
Related questions
an LLM as capable as GPT-4 runs on one 3090 by March 2025
33% chance
China will make an LLM approximately as good as or better than GPT-4 before 2025
89% chance
Will it be possible to run an LLM of GPT-4 (or higher) capability on a portable device by 2027?
47% chance
Will there be an open-source LLM as good as GPT-4 by June 2024?
12% chance
Will there be an LLM (as good as GPT-4) that was trained with 1/100th the energy consumed to train GPT-4, by 2026?
82% chance
Will xAI develop a more capable LLM than GPT-5 by 2026?
50% chance
Which next-gen frontier LLMs will be released before GPT-5? (2025)
Will an open-source LLM beat or match GPT-4 by the end of 2024?
83% chance
When will an open-source LLM be released with a better performance than GPT-4?
Will any open-source LLM with <20 billion parameters outperform GPT-4 on most language benchmarks by the end of 2024?
13% chance