The Download: catching bad content, and farming from space

Big Tech is surprisingly bad at catching, labeling, and removing harmful content. In theory, recent advances in AI should improve our ability to do so. In practice, AI isn’t very good at interpreting nuance and context, and most automated content moderation systems were trained on English-language data, which means they perform poorly in other languages.

The recent emergence of generative AI and large language models like ChatGPT means that content moderation is likely to become even harder. 

Whether generative AI ends up being more harmful or helpful to the online information sphere largely hinges on one thing: our ability to detect and label AI-generated content. Read the full story.

—Tate Ryan-Mosley

Tate’s story is from The Technocrat, her weekly newsletter giving you the inside track on all things power in Silicon Valley. Sign up to receive it in your inbox every Friday.

If you’re interested in generative AI, why not check out:

+ How to spot AI-generated text. The internet is increasingly awash with text written by AI software. We need new tools to detect it. Read the full story.

+ The inside story of how ChatGPT was built from the people who made it. Read our exclusive conversations with the key players behind the AI cultural phenomenon.

+ Google is throwing generative AI at everything. But experts say that releasing these models into the wild before fixing their flaws could prove extremely risky for the company. Read the full story.
