Hugging Face and AMD partner to optimize transformers on ROCm platforms (MI2xx/MI3xx, Navi3x, Ryzen, EPYC, Alveo V70)
AI Impact Summary
Hugging Face and AMD have formalized a collaboration to accelerate transformer workloads on AMD's ROCm platform, spanning both CPUs and GPUs. The effort targets Instinct MI2xx/MI3xx GPUs, Radeon Navi3x GPUs, Ryzen and EPYC CPUs, and the Alveo V70 accelerator, with integration across PyTorch, TensorFlow, and ONNX Runtime, plus a plan to ship an AMD-focused Optimum library. Early testing claims substantial speedups over a competitor (on MI250: 1.2x for BERT-Large, 1.4x for GPT2-Large), implying meaningful reductions in training and inference costs. Technical teams should plan for ROCm-based validation, potential model and library migrations, and alignment with the AMD-supported stack as the collaboration progresses.
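To put the claimed speedups in context, a minimal sketch of the arithmetic: a speedup factor s means the same job finishes in 1/s of the time, i.e. a (1 - 1/s) reduction in runtime and, at a fixed hourly rate, in cost. The helper name below is illustrative, not part of any announced tooling; only the 1.2x and 1.4x figures come from the announcement.

```python
def runtime_reduction(speedup: float) -> float:
    """Fractional runtime (and, at fixed hourly cost, cost) reduction
    implied by a speedup factor: 1 - 1/s."""
    return 1.0 - 1.0 / speedup

# Early-test figures reported in the announcement (MI250):
for model, speedup in [("BERT-Large", 1.2), ("GPT2-Large", 1.4)]:
    print(f"{model}: {speedup}x -> {runtime_reduction(speedup):.0%} less runtime")
# BERT-Large: 1.2x -> 17% less runtime
# GPT2-Large: 1.4x -> 29% less runtime
```

So even the more modest 1.2x figure translates into roughly a sixth less compute time per run, which compounds across repeated training and inference jobs.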
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info