Protect AI Guardian expands Hugging Face detection with four new modules (ARV-100, JOBLIB-101, TF-200, LMAFL-300)
AI Impact Summary
Hugging Face and Protect AI have expanded Guardian's threat coverage with four new detection modules (PAIT-ARV-100, PAIT-JOBLIB-101, PAIT-TF-200, PAIT-LMAFL-300), targeting archive-slip writes at load time, suspicious joblib execution, TensorFlow SavedModel backdoors, and Llama file execution risks. Inline alerts on Hugging Face model pages and InsightsDB vulnerability reports give developers and security teams better visibility, enabling more informed model selection and risk mitigation. The scale of the program (4.47M model versions scanned across 1.41M repositories, 352k unsafe findings, and 226M requests in 30 days) indicates broad coverage and a rapid feedback loop against evolving model security threats.
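The joblib risk the modules target stems from the format itself: joblib artifacts wrap Python pickle streams, and pickle's `__reduce__` hook lets a crafted object invoke an arbitrary callable when the file is merely loaded. The sketch below (not Guardian's actual implementation; the `SUSPICIOUS` module list and function names are illustrative) shows how a scanner can walk the pickle opcode stream statically and flag dangerous imports without ever unpickling:

```python
import pickle
import pickletools

# Illustrative malicious object: unpickling it would call os.system.
class Payload:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

blob = pickle.dumps(Payload())

# Modules whose appearance inside a pickle stream is a red flag
# (illustrative list, not Guardian's actual ruleset).
SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins"}

def suspicious_globals(data: bytes) -> list[str]:
    """Statically list flagged module.attr references in a pickle stream."""
    found, strings = [], []
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # Older-protocol form: arg is "module name" in one string
            module, name = arg.split(" ", 1)
            if module in SUSPICIOUS:
                found.append(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+: module and attr were pushed as strings earlier
            module, name = strings[-2], strings[-1]
            if module in SUSPICIOUS:
                found.append(f"{module}.{name}")
        if isinstance(arg, str):
            strings.append(arg)
    return found

# os.system pickles under its real module ("posix" on Unix, "nt" on Windows),
# so the scan reports e.g. posix.system without executing anything.
print(suspicious_globals(blob))
```

Because the scan never constructs objects, it is safe to run on untrusted model files; load-time alerts on model pages can then surface these findings before a developer calls `joblib.load`.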
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info