
The Download: trapped by grief algorithms, and image AI privacy issues

—Tate Ryan-Mosley, senior tech policy reporter

I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever might be coming. That included my father’s throat cancer.

From the Google app on my iPhone, I searched the stages of grief, along with books and academic research about loss, and I intentionally and unintentionally consumed other people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials.

Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself from what the algorithms were serving me. I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? Read the full story.

AI models spit out photos of real people and copyrighted images

The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted work by artists, according to new research. 

How they did it: Researchers prompted Stable Diffusion and Google’s Imagen many times with captions for images, such as a person’s name. Then they analyzed whether any of the generated images matched originals in the models’ training data. The group managed to extract more than 100 replicas of images from the AI’s training set.

Why it matters: The finding could strengthen the case for artists who are currently suing AI companies for copyright violations, and could threaten the privacy of the people whose images appear in the training data. It could also have implications for startups wanting to use generative AI models in health care, as it shows that these systems risk leaking sensitive private information. Read the full story.
