Large language models still struggle with context, which means they probably won’t be able to interpret the nuance of posts and images as well as human moderators. Scalability and specificity across different cultures also raise questions. “Do you deploy one model for any particular type of niche? Do you do it by country? Do you do it by community?… It’s not a one-size-fits-all problem,” says DiResta.
New tools for new tech
Whether generative AI ends up being more harmful or helpful to the online information sphere may, to a large extent, depend on whether tech companies can come up with good, widely adopted tools to tell us whether content is AI-generated or not.
That’s quite a technical challenge, and DiResta tells me that the detection of synthetic media is likely to be a high priority. This includes methods like digital watermarking, which embeds a hidden, persistent signal in a piece of content to flag that it was made by artificial intelligence. Automated tools for detecting posts generated or manipulated by AI are appealing because, unlike watermarking, they don’t require the creator of the AI-generated content to proactively label it as such. That said, the current tools that attempt this have not been particularly good at identifying machine-made content.
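To make the watermarking idea concrete, one approach researchers have proposed for text is a statistical “green list” watermark: the generator biases each word choice toward a pseudorandom subset of the vocabulary derived from the previous word, and a detector checks how often that bias shows up. Below is a minimal toy sketch of the idea in Python; the vocabulary, the stand-in “model,” and all names are invented for illustration and are not any company’s actual scheme.

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = [f"w{i}" for i in range(1000)]

def green_list(prev_token: str) -> set:
    """Deterministically pick half the vocabulary as 'green', seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(n: int, seed: int = 0) -> list:
    """Toy 'model' that always samples the next token from its predecessor's green list."""
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    for _ in range(n - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that fall in their predecessor's green list."""
    hits = sum(1 for a, b in zip(tokens, tokens[1:]) if b in green_list(a))
    return hits / (len(tokens) - 1)

watermarked = generate_watermarked(200)

rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(200)]  # 'human' text: no green bias
```

Watermarked text scores near 1.0 on the detector, while unwatermarked text hovers around the chance level of 0.5 — which is why a watermark is easy to check for but only exists if the generator chose to embed it in the first place.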
Some companies have even proposed cryptographic signatures that securely log information about how a piece of content originated, but, like watermarking, this relies on creators voluntarily disclosing that their content is AI-generated.
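As a rough illustration of the signing idea, the sketch below builds a provenance “manifest” in Python: hash the content, attach metadata about how it was made, and sign the result so later edits are detectable. This is a simplification, not a real scheme — it uses a shared-secret HMAC as a stand-in, whereas real provenance standards such as C2PA use public-key signatures so anyone can verify, and every name here is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the AI generator (illustration only --
# real provenance systems sign with a private key, not a shared secret).
SIGNING_KEY = b"demo-key-not-secure"

def sign_content(content: bytes, generator: str) -> dict:
    """Build a signed provenance manifest: a content hash plus origin metadata."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check the signature AND that the content still matches the logged hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...generated image bytes..."
manifest = sign_content(image, "example-model-v1")
```

An untouched file verifies against its manifest; changing even one byte breaks the check. But the check only exists if the generator signed the content at creation time — which is the voluntary-disclosure problem in a nutshell.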
The newest version of the European Union’s AI Act, which was proposed just this week, requires companies that use generative AI to inform users when content is indeed machine-generated. We’re likely to hear much more about these sorts of emerging tools in the coming months as demand for transparency around AI-generated content increases.
What else I’m reading
The EU could be on the verge of banning facial recognition in public places, as well as predictive policing algorithms. If it goes through, this ban would be a major achievement for the movement against facial recognition, which has lost momentum in the US in recent months.
On Tuesday, Sam Altman, the CEO of OpenAI, will testify to the US Congress as part of a hearing about AI oversight following a bipartisan dinner the evening before. I’m looking forward to seeing how fluent US lawmakers are in artificial intelligence and whether anything tangible comes out of the meeting, but my expectations aren’t sky high.
Last weekend, Chinese police arrested a man for using ChatGPT to spread fake news. China banned ChatGPT in February as part of a slate of stricter laws around the use of generative AI. This appears to be the first resulting arrest.
What I learned this week
Misinformation is a big problem for society, but there seems to be a smaller audience for it than you might imagine. Researchers from the Oxford Internet Institute examined over 200,000 Telegram posts and found that although misinformation crops up a lot, most users don’t seem to go on to share it.
In their paper, they conclude that “contrary to popular received wisdom, the audience for misinformation is not a general one, but a small and active community of users.” Telegram is relatively unmoderated, but the research suggests there may be an organic, demand-driven dynamic that keeps bad information somewhat in check.