Bias in Text-to-Image Models — Hugging Face Stable Bias Project
AI Impact Summary
Text-to-image models like Stable Diffusion and DALL-E 2 absorb biases present in their training data, leading to stereotypical representations of cultures and identities. The issue is compounded by biases in the CLIP model, which encodes text prompts into the embeddings that guide image generation and can therefore pass its own skewed associations into the output. Addressing this requires a multi-faceted approach: bias detection tools, red-teaming exercises, and careful evaluation of model latent spaces to keep generation from amplifying existing societal inequities.
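One common bias-detection technique alluded to above is probing the text encoder's embedding space: comparing how close a neutral prompt (e.g. a profession) sits to contrasting attribute embeddings. The sketch below illustrates that idea with toy vectors standing in for encoder outputs; the vector values, function names, and the simple signed-gap score are illustrative assumptions, not the Stable Bias project's actual methodology or real CLIP embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_score(prompt_emb, attr_emb_a, attr_emb_b):
    """Signed similarity gap: positive means the prompt embedding
    sits closer to attribute A than to attribute B."""
    return (cosine_similarity(prompt_emb, attr_emb_a)
            - cosine_similarity(prompt_emb, attr_emb_b))

# Toy 3-d vectors standing in for text-encoder outputs (hypothetical
# values; a real audit would encode prompts with the CLIP text encoder).
prompt_emb = [0.9, 0.1, 0.2]   # e.g. "a photo of an engineer"
attr_a     = [1.0, 0.0, 0.0]   # e.g. embedding of attribute term A
attr_b     = [0.0, 1.0, 0.0]   # e.g. embedding of attribute term B

print(round(association_score(prompt_emb, attr_a, attr_b), 3))
```

A score far from zero for a prompt that should be attribute-neutral is one signal of the skewed associations described above; in practice such probes are aggregated over many prompts and attribute pairs rather than read off a single comparison.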
Affected Systems
- Date: not specified
- Change type: capability
- Severity: info