
The Algorithm: AI-generated art raises tricky questions about ethics, copyright, and security

Thanks to his distinctive style, the Polish digital artist Greg Rutkowski has become one of the most popular names to invoke in prompts for Stable Diffusion, the new open-source AI art generator that launched late last month. His name has been used as a prompt around 93,000 times, far more often than the names of some of the world’s most famous artists, like Picasso.

But he’s not happy about it. He thinks it could threaten his livelihood—and he was never given the choice of whether to opt in or out of having his work used this way. 

The story is yet another example of AI developers rushing to roll out something cool without thinking about the humans who will be affected by it. 

Stable Diffusion is free for anyone to use, providing a great resource for AI developers who want to use a powerful model to build products. But because these open-source programs are built by scraping images from the internet, often without permission or proper attribution to artists, they are raising tricky questions about ethics, copyright, and security. 
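
To give a sense of how low that barrier is, here is a minimal sketch of prompting Stable Diffusion through Hugging Face’s diffusers library, which is how many developers run the model. The checkpoint ID and the prompt are illustrative only, and the exact API may vary between library versions.

    # A minimal sketch of prompting Stable Diffusion via the diffusers library.
    # The checkpoint ID and the prompt are illustrative; requires a GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    # Downloads the publicly released weights (several GB) on first run.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # assumed checkpoint ID
        torch_dtype=torch.float16,
    ).to("cuda")

    # Style prompts routinely name living artists, which is exactly the
    # practice Rutkowski objects to.
    prompt = "a castle at sunset, fantasy landscape, in the style of Greg Rutkowski"
    image = pipe(prompt).images[0]
    image.save("castle.png")

A few lines like these are all it takes to borrow an artist’s signature style, with no opt-in, attribution, or payment involved.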

Artists like Rutkowski have had enough. It’s still early days, but a growing coalition of artists is figuring out how to tackle the problem. In the future, we might see the art sector shift toward pay-per-play or subscription models like those used in the film and music industries. If you’re curious and want to learn more, read my story. 

And it’s not just artists: We should all be concerned about what’s included in the training data sets of AI models, especially as these technologies become a more crucial part of the internet’s infrastructure.

In a paper that came out last year, AI researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe analyzed a smaller data set similar to the one used to build Stable Diffusion. Their findings are distressing. Because the data is scraped from the internet, and the internet is a horrible place, the data set is filled with explicit rape images, pornography, malign stereotypes, and racist and ethnic slurs. 

A website called Have I Been Trained lets people search for images used to train the latest batch of popular AI art models. Even innocent search terms get lots of disturbing results. I tried searching the database for my ethnicity, and all I got back was porn. Lots of porn. It’s a depressing thought that the only thing the AI seems to associate with the word “Asian” is naked East Asian women. 
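
Have I Been Trained is, in essence, a search interface over the openly published LAION image-text metadata behind models like Stable Diffusion. For readers who want to poke at the data themselves, here is a hedged sketch of a similar keyword search with pandas; the file name is hypothetical, and the column names (URL, TEXT, NSFW) follow LAION’s published metadata format but should be treated as assumptions.

    # A rough sketch of searching LAION-style training metadata for a term,
    # similar in spirit to Have I Been Trained. The parquet file name is
    # hypothetical; URL/TEXT/NSFW follow LAION's documented columns.
    import pandas as pd

    df = pd.read_parquet("laion_metadata_part_00000.parquet")

    term = "asian"
    hits = df[df["TEXT"].str.contains(term, case=False, na=False)]

    print(f"{len(hits)} captions contain {term!r}")
    # LAION ships an automated NSFW tag per sample; it is noisy, but even
    # a crude tally hints at the skew described above.
    print(hits["NSFW"].value_counts())
    print(hits[["URL", "TEXT"]].head())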

Not everyone sees this as a problem for the AI sector to fix. Emad Mostaque, the founder of Stability.AI, which built Stable Diffusion, said on Twitter that he considers the ethics debate around these models “paternalistic silliness that doesn’t trust people or society.” 

But there’s a big safety question. Free open-source models like Stable Diffusion and the large language model BLOOM give malicious actors tools to generate harmful content at scale with minimal resources, argues Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group.

The sheer scale of the havoc these systems enable will undercut the effectiveness of traditional controls, such as limiting how many images people can generate or blocking dodgy content, Gupta says. Think deepfakes or disinformation on steroids. When a powerful AI system “gets into the wild,” Gupta says, “that can cause real trauma … for example, by creating objectionable content in [someone’s] likeness.” 

We can’t put the cat back in the bag, so we really ought to be thinking about how to deal with these AI models in the wild, Gupta says. This includes monitoring how the AI systems are used after they have been launched, and thinking about controls that “can minimize harms even in worst-case scenarios.” 
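
To make those controls concrete, here is a minimal, hypothetical sketch of the guardrails a hosted image service might wrap around a generator: a per-user rate cap, a crude prompt blocklist, and an audit log for the kind of post-launch monitoring Gupta describes. Every name in it is invented for illustration, and Gupta’s point stands that at sufficient scale such measures get overwhelmed; real deployments would need trained content classifiers rather than keyword lists.

    # Hypothetical post-launch controls around an image generator:
    # per-user rate limiting, a crude prompt blocklist, and an audit log.
    # Keyword lists are far too blunt in practice; this is a sketch only.
    import logging
    import time
    from collections import defaultdict

    logging.basicConfig(filename="generation_audit.log", level=logging.INFO)

    RATE_LIMIT = 50                    # max images per user per hour (illustrative)
    WINDOW_SECONDS = 3600
    BLOCKLIST = {"deepfake", "nude"}   # toy stand-in for a real classifier

    _recent = defaultdict(list)  # user_id -> timestamps of recent requests

    def guarded_generate(user_id, prompt, generate_fn):
        """Wrap a generate_fn(prompt) -> image call with basic controls."""
        now = time.time()
        _recent[user_id] = [t for t in _recent[user_id] if now - t < WINDOW_SECONDS]

        if len(_recent[user_id]) >= RATE_LIMIT:
            logging.warning("rate limit hit: user=%s", user_id)
            raise PermissionError("rate limit exceeded")

        if any(word in prompt.lower() for word in BLOCKLIST):
            logging.warning("blocked prompt: user=%s prompt=%r", user_id, prompt)
            raise ValueError("prompt rejected by content policy")

        _recent[user_id].append(now)
        # The audit trail is what enables monitoring after launch.
        logging.info("generate: user=%s prompt=%r", user_id, prompt)
        return generate_fn(prompt)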

Deeper Learning

There’s no Tiananmen Square in the new Chinese image-making AI


