
These new tools let you see for yourself how biased AI image models are

One theory for why that might be, says Jernite, is that nonbinary brown people may have had more visibility in the press recently, meaning their images end up in the data sets the AI models use for training.

OpenAI and Stability.AI, the company that built Stable Diffusion, say that they have introduced fixes to mitigate the biases ingrained in their systems, such as blocking certain prompts that seem likely to generate offensive images. However, these new tools from Hugging Face show how limited these fixes are. 

A spokesperson for Stability.AI told us that the company trains its models on “data sets specific to different countries and cultures,” adding that this should “serve to mitigate biases caused by overrepresentation in general data sets.”

A spokesperson for OpenAI did not comment on the tools specifically, but pointed us to a blog post explaining the techniques the company has added to DALL-E 2 to reduce bias and filter out sexual and violent images.

Bias is becoming a more urgent problem as these AI models become more widely adopted and produce ever more realistic images. They are already being rolled out in a slew of products, such as stock photo services. Luccioni says she is worried that the models risk reinforcing harmful biases on a large scale. She hopes the tools she and her team have created will bring more transparency to image-generating AI systems and underscore the importance of making them less biased.

Part of the problem is that these models are trained on predominantly US-centric data, which means they mostly reflect American associations, biases, values, and culture, says Aylin Caliskan, an associate professor at the University of Washington who studies bias in AI systems and was not involved in this research.  

“What ends up happening is the thumbprint of this online American culture … that’s perpetuated across the world,” Caliskan says. 

Caliskan says Hugging Face’s tools will help AI developers better understand and reduce biases in their AI models. “When people see these examples directly, I believe they’ll be able to understand the significance of these biases better,” she says. 
