By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025?
182 traders · Ṁ42k · closes 2027 · 26% chance

Formalization of: https://twitter.com/robbensinger/status/1725286678401835210.

'Hit a wall' means, roughly as I understand it, that the rate of advancement of underlying capabilities has slowed greatly compared to the pace of 2021-2023.

Resolution will be my evaluation of consensus opinion. The best available form of polling will be used if needed. In that case, I will ask about the term 'hit a wall' while noting how I understand its meaning.

Note that this measures gains in the underlying capabilities of the AI systems. It does NOT refer to their economic impacts, which may still be accelerating.

See here for house rules: https://www.lesswrong.com/posts/ge3Jf5Hnon8wq4xqT/zvi-s-manifold-markets-house-rules


Just as we are at the cusp of AI solving IMO, we are starting to wonder if deep learning will hit a wall (for the Nth time).

Scaling pre-training may be hitting diminishing returns, but DL is broader than pre-training, and the idea of 'scaling up' itself isn't limited to pre-training either, a point reiterated by Sutskever himself recently.

We don't even need to figure out the next paradigm for 2025 onwards. As Noam Brown put it, o1 is the GPT-2 of test-time-compute scaling.

Strongly betting against this.

Looks like he's trying to retcon his "deep learning is hitting a wall" statement into "pure LLM scaling will slow down near the data wall".

Funny how some people are so desperate to convince others that they make good predictions, but won't create an account on Manifold. They just don't want other people to verify and point out their bullshit.

@ZviMowshowitz Can you elaborate on the resolution criteria? If LLMs are still advancing, and each new model is more capable than the last, but it's taking a lot more work and more money each time with only relatively small gains, this resolves YES, right? What's the threshold for "a lot more work and money"?

Seems pretty clear that the current methods are not going to lead to anything amazingly better.

Possible there will be new methods, but there is no certainty of that and probably not a 70% chance.

@DavidBolin I dunno. I mostly use AI for broadly known facts or generalizations, but I have to google for minutiae and specific details. If an AI can be trained to the point where its effective knowledge base is as good as the first page or two of Google results, along with a corresponding increase in other capabilities (such as longer context windows), which seems plausible, I would consider that to be "amazingly better". There are limits, but those limits are pretty darn high. On the other hand, I predict that the improvement rate is likely to slow down soon.

predicted NO

@DavidBolin Hmm, if it keeps steadily improving, it won't hit a wall.

bought Ṁ100 YES from 31% to 36%

@DavidBolin Manifold gives it 50%.

predicted YES

@AlexandreK "'Hit a wall' means roughly as I understand it that rate of advancement of underlying capabilities has slowed greatly compared to the pace of 2021-2023."

It is perfectly clear that this definition is consistent with steady improvement, if that improvement happens more slowly.

predicted NO

@DavidBolin To me, that would just be the law of diminishing returns, which I wouldn't call hitting a wall. But I guess that's how the question is defined, granted, so maybe.

predicted YES

@AlexandreK I agree that is not what most people mean by "hit a wall." However, Zvi defined it explicitly.

My understanding is that the best LLMs have already been trained on what is considered the "best" information available on the internet (e.g. Wikipedia). There is still room for improvement: tons of "mediocre" or "low-quality" information (e.g. Twitter, TikTok) remains on which the models could be trained to glean more nuggets of valuable knowledge. But at some point there will be diminishing returns that cannot be overcome simply by throwing more data and GPUs at the problem (and GPUs are also getting more expensive, so it's harder to throw more of them into a training run), so my guess is that the slowdown will happen before EOY 2025.

Whether or not this counts as a "wall" (i.e. will the rate of progress be considered "greatly slowed" or merely "somewhat slowed") remains to be seen. It's also conceivable that other avenues of advancement (e.g. algorithmic improvements or AI self-improvement) will be found in the same time period. But my bet is that deep learning will hit a wall.

