
Currently, most hardware used to train a neural network is the same hardware used to run inference with that network.
However, training a network has different requirements from performing inference. Some differences between training and inference already exist (for example, dropout is disabled at inference time and weights are often quantized for deployment), and these differences generally improve the performance or speed of the NN.
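To make the dropout/quantization point concrete, here is a minimal PyTorch sketch (purely illustrative, not part of the resolution criteria; the toy model and shapes are made up) showing how the same model already behaves differently between training and inference:

```python
import torch
import torch.nn as nn

# Toy model, just for illustration.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(16, 4))
x = torch.randn(1, 16)

model.train()                 # training mode: dropout randomly zeroes activations
y_train = model(x)

model.eval()                  # inference mode: dropout is a no-op
with torch.no_grad():
    y_infer = model(x)

# An inference-only optimisation: quantize the linear layers' weights to int8
# for cheaper, faster serving (the trained float32 model is left untouched).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
with torch.no_grad():
    y_quant = quantized(x)
```

The question is whether this kind of training/inference divergence will extend all the way down to the hardware.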
This market resolves YES if, before 1 Jan 2030, the hardware used for training a model is "significantly" different from the hardware used to perform inference with the same model. I will not bet and will resolve this based on my judgement (examples are given below). The market resolves NO if the hardware is basically the same.
For these purposes, "the hardware used" refers to the common practice for production NN models. If this information is kept secret, or it's unclear whether using different hardware is common practice in the industry, the market will resolve N/A.
Examples of YES resolutions:
- Inference is done on hardware that was explicitly designed primarily for NN inference
- The hardware used for inference is significantly different from that used for training, and this difference was chosen because it provides some benefit
- Documentation for popular ML libraries recommends using different hardware for production NN models
Okay wow, I didn't think this would jump to 94% within hours of making the question. Would love to hear from the YES bettors (cc @Dentosal @sam Konstantin Kozlovtsev) about why they're so confident. Is the timeline just very long? Is it the rate of progress? Is it the huge amount of money that's likely to be poured into the industry in the near future? Is there already specific hardware that meets my criteria?
@BoydKane I'm not very knowledgeable on ML stuff, but this seems to already be starting to be the case, e.g. AWS Trainium/Inferentia?
@BoydKane We already have an example of a radically different hardware architecture for inference. Analog AI accelerators are, I believe, commercially available, if not popular.
What happens if it's a combination of hardware, where inference and training use a common piece of hardware, but one or both of them use unique hardware as well?
@BoydKane This is already the case. Apple runs neural networks on iPhones, and those networks were of course not trained on the phones.
Also: 2030 is a long way off.