This question resolves YES if GPT-4 was trained on enough data to roughly match the prescriptions of the best scaling laws known at the time of its training. Currently, that means the Chinchilla scaling laws. By "roughly," I mean it can be off by up to 20%. That is, if GPT-4 is 100B parameters, for which the (currently known) optimal scaling laws prescribe roughly 2T tokens, GPT-4 would need to have been trained on ~1.6T to ~2.4T tokens for this question to resolve positively.
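As a sketch of how that band would be computed (assuming the commonly cited Chinchilla heuristic of ~20 training tokens per parameter; the 100B figure is the same hypothetical as above):

```python
# Minimal sketch of the resolution band, assuming the commonly cited
# Chinchilla heuristic of ~20 training tokens per parameter.
CHINCHILLA_TOKENS_PER_PARAM = 20   # approximate ratio from Hoffmann et al. (2022)
TOLERANCE = 0.20                   # "roughly" = within 20%

def resolution_band(n_params: float) -> tuple[float, float, float]:
    """Return (prescribed_tokens, lower_bound, upper_bound) for a given model size."""
    prescribed = n_params * CHINCHILLA_TOKENS_PER_PARAM
    return prescribed, prescribed * (1 - TOLERANCE), prescribed * (1 + TOLERANCE)

prescribed, lo, hi = resolution_band(100e9)   # the hypothetical 100B-parameter case
print(f"prescribed ~{prescribed/1e12:.1f}T tokens, resolves YES between "
      f"~{lo/1e12:.1f}T and ~{hi/1e12:.1f}T")
# prescribed ~2.0T tokens, resolves YES between ~1.6T and ~2.4T
```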
lol, GPT-4 just used better scaling laws than Chinchilla. And since it's still the most powerful model a year after it was trained, I'm guessing it was trained with the best-known scaling laws, which were known only inside OpenAI!
If GPT-4 turns out to be an MoE (mixture of experts), would this question resolve according to the parameter count of each expert rather than all experts combined?
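The distinction matters a lot for the prescribed token count. A toy example, with all numbers hypothetical since GPT-4's architecture isn't public:

```python
# Toy illustration of per-expert vs. combined parameter counts in an MoE.
# All numbers are hypothetical; GPT-4's actual architecture is not public.
n_experts = 8
params_per_expert = 110e9   # hypothetical expert (feed-forward) parameters
shared_params = 55e9        # hypothetical shared attention/embedding parameters

active_params = shared_params + params_per_expert              # used per token with top-1 routing
total_params = shared_params + n_experts * params_per_expert   # stored in the checkpoint

print(f"active: ~{active_params/1e9:.0f}B, combined: ~{total_params/1e9:.0f}B")
# Whether resolution uses ~165B or ~935B changes the prescribed token count several-fold.
```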
Will this market be extended until the answer is known? I have a suspicion they'll be publishing more details next year, including the parameter count that is relevant for this market.
In code we trust, algorithms reign
Machine learns with minimal pain
With GPT-4 we'll surely see
The pinnacle of AI mastery
I feel like the scaling charts in the paper are basically confirmation here. Still, I'm waiting for official confirmation.
If GPT-4 is intended to be used a lot, then the majority of its cost will be in run-time, not train-time. Dunno.
To expand on my comment: we can see from https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ that smaller models keep improving as they are trained for longer. Given that inference costs are high for OpenAI, it makes sense for them to train to minimize inference + training costs rather than training costs only, which means a smaller-than-Chinchilla-optimal model is best.
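To make that argument concrete, here's a rough sketch in Python. It uses the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^alpha + B/D^beta, with the standard approximations of ~6·N·D training FLOPs and ~2·N FLOPs per inference token. The training budget and lifetime inference-token count are made-up assumptions, so only the qualitative conclusion matters: the inference-aware optimum is smaller than the compute-optimal one.

```python
import numpy as np

# Chinchilla parametric loss fit (Hoffmann et al., 2022, Approach 3).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N, D):
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

C_train = 1e24    # assumed training FLOP budget (illustrative only)
D_infer = 5e12    # assumed lifetime inference tokens (pure guess)

# 1) Compute-optimal N: spend the whole training budget, C ~ 6*N*D.
Ns = np.logspace(9, 12, 500)          # 1B .. 1T parameters
losses = loss(Ns, C_train / (6 * Ns))
target = losses.min()                 # best loss reachable with this budget
N_compute_optimal = Ns[np.argmin(losses)]

# 2) Inference-aware N: reach that same loss, but minimize
#    training FLOPs (6*N*D) plus inference FLOPs (2*N per served token).
total = np.full_like(Ns, np.inf)
reachable = (E + A / Ns**alpha) < target   # can this N ever hit the target loss?
D_needed = (B / (target - E - A / Ns[reachable]**alpha)) ** (1 / beta)
total[reachable] = 6 * Ns[reachable] * D_needed + 2 * Ns[reachable] * D_infer
N_inference_aware = Ns[np.argmin(total)]

print(f"compute-optimal: ~{N_compute_optimal/1e9:.0f}B params")
print(f"inference-aware: ~{N_inference_aware/1e9:.0f}B params (smaller, trained on more tokens)")
```

Under these assumptions the inference-aware model comes out smaller than the compute-optimal one and is trained on more tokens, which is the LLaMA-style argument above.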
How does this resolve if GPT-4 training does better per FLOP than Chinchilla scaling?
@Lauro It doesn't matter whether it does better or worse per FLOP than Chinchilla scaling, as long as it is trained roughly compute-optimally according to the scaling laws known at the time. If a much-better-than-Chinchilla scaling law is discovered, then it could very well be that GPT-4 is trained more compute-optimally than Chinchilla would prescribe, yet doesn't abide by the known optimal scaling laws.
@BionicD0LPH1N got it!
Does "known" mean "publicly known" here?
If the better scaling law is discovered by OpenAI and used to train GPT-4, does that count as YES (because that's the new best-known law) or NO (because the scaling is better than the best publicly known at the time of training)?
Basically I'm interested in the interaction with this market https://manifold.markets/Lauro/will-gpt4-improve-on-the-chinchilla
@BionicD0LPH1N There is this article (not necessarily 100% reliable): https://www.datacamp.com/blog/what-we-know-gpt4