Compute efficiency trend: 44x less compute to reach AlexNet-level ImageNet performance since 2012
AI Impact Summary
New analysis shows the compute required to train a neural net to AlexNet-level performance on ImageNet has fallen 44x since 2012, roughly a 2x reduction every 16 months. For high-investment AI tasks, algorithmic progress now outpaces Moore's Law, meaning model development grows cheaper and faster than hardware scaling alone would predict. For engineering teams, this shifts the value proposition toward research, data, and architecture improvements as the path to efficiency gains within existing cloud and on-prem budgets.
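As a sanity check on the arithmetic, the sketch below derives the implied halving time from the quoted 44x figure and compares it against a Moore's Law baseline. The 2012 to 2019 measurement window and the 24-month Moore's Law doubling period are assumptions for illustration; the summary itself only states "since 2012".

```python
# Back-of-the-envelope check: a 44x efficiency gain over an assumed
# 2012-2019 window implies compute halving roughly every 16 months.
import math

GAIN = 44.0           # total reduction in training compute (from the summary)
SPAN_MONTHS = 7 * 12  # assumed 2012 -> 2019 measurement window

doublings = math.log2(GAIN)               # ~5.46 halvings of compute
halving_months = SPAN_MONTHS / doublings  # ~15.4 months per halving

def relative_compute(months_elapsed: float) -> float:
    """Training compute needed, relative to the 2012 baseline."""
    return 0.5 ** (months_elapsed / halving_months)

print(f"implied halving time: {halving_months:.1f} months")
print(f"compute after 7 years: {relative_compute(SPAN_MONTHS):.4f}x baseline")

# For comparison, hardware improving on a Moore's Law cadence (assumed
# doubling every 24 months) would yield only ~11x over the same window:
print(f"Moore's Law gain over 7 years: {2 ** (SPAN_MONTHS / 24):.1f}x")
```

Running this reproduces the headline numbers: ~15.4 months per halving (consistent with the quoted "roughly every 16 months") versus an ~11x gain from hardware alone, which is the gap the summary attributes to algorithmic progress.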
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium