In-browser ML-powered web games using Transformers.js and MobileViT-small
AI Impact Summary
This post demonstrates a full in-browser ML workflow: fine-tuning a MobileViT-small classifier on the Quick, Draw! dataset, exporting it to ONNX with Optimum, and running inference in the browser with Transformers.js on ONNX Runtime. The result is real-time performance (roughly 60 predictions per second) from a lightweight client-side model, so the game runs without server round-trips. For engineering teams, it serves as a blueprint for shipping client-side ML features: convert a PyTorch model to ONNX, publish it to the Hugging Face Hub, and wire it into a React/Vite app, using web workers to keep the UI responsive. The approach does constrain device memory and compute (the model footprint is about 20 MB), so plan for mobile and low-end devices as well as a model update path.
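One part of the "web workers to keep the UI responsive" wiring can be sketched as a latest-frame-wins dispatcher: while one prediction is in flight, newer canvas frames overwrite the pending slot rather than queueing, so inference never falls behind the drawing. This is a minimal sketch of that pattern; the class name and the `classify` callback (which would post the frame to the worker in a real app) are illustrative, not from the post.

```javascript
// Latest-frame-wins dispatcher: while the async classifier is busy,
// newer frames replace the single pending slot instead of queueing,
// so stale frames are dropped and predictions stay real-time.
class LatestFrameDispatcher {
  constructor(classify) {
    this.classify = classify; // async fn: frame -> prediction (e.g. worker round-trip)
    this.pending = null;      // most recent frame waiting to run
    this.busy = false;        // true while a prediction is in flight
    this.lastResult = null;   // most recent prediction
  }

  // Called on every new canvas frame (e.g. on pointermove).
  submit(frame) {
    this.pending = frame;     // overwrite any stale pending frame
    if (!this.busy) this.drain();
  }

  // Run predictions back-to-back until no frame is pending.
  async drain() {
    this.busy = true;
    while (this.pending !== null) {
      const frame = this.pending;
      this.pending = null;
      this.lastResult = await this.classify(frame);
    }
    this.busy = false;
  }
}
```

In a React/Vite app, `classify` would typically `postMessage` the frame to a web worker running the Transformers.js pipeline and resolve on the worker's reply, keeping model inference entirely off the main thread.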
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info