
How an undercover content moderator polices the metaverse

Meta won’t say how many content moderators it employs or contracts in Horizon Worlds, or whether the company intends to increase that number with the new age policy. But the change puts a spotlight on those tasked with enforcement in these new online spaces—people like Yekkanti—and how they go about their jobs.   

Yekkanti has worked as a moderator and training manager in virtual reality since 2020, coming to the job after doing traditional moderation work on text and images. He is employed by WebPurify, a company that provides content moderation services to internet companies such as Microsoft and Play Lab, and works with a team based in India. His work is mostly done on mainstream platforms, including those owned by Meta, although WebPurify declined to confirm which ones specifically, citing client confidentiality agreements.

A longtime internet enthusiast, Yekkanti says he loves putting on a VR headset, meeting people from all over the world, and giving advice to metaverse creators about how to improve their games and “worlds.”

He is part of a new class of workers who protect safety in the metaverse as private security agents, interacting with the avatars of very real people to suss out virtual-reality misbehavior. He does not publicly disclose his moderator status. Instead, he works more or less undercover, presenting as an average user so he can better witness violations.

Because traditional moderation tools, such as AI-enabled filters on certain words, don’t translate well to real-time immersive environments, human moderators like Yekkanti are the primary way to ensure safety in the digital world, and the work is getting more important every day.

The metaverse’s safety problem

The metaverse’s safety problem is complex and opaque. Journalists have reported instances of abusive comments, scamming, sexual assaults, and even a kidnapping orchestrated through Meta’s Oculus. The biggest immersive platforms, like Roblox and Meta’s Horizon Worlds, keep their statistics about bad behavior very hush-hush, but Yekkanti says he encounters reportable transgressions every day. 

Meta declined to comment on the record, but did send a list of tools and policies it has in place. A spokesperson for Roblox says the company has “a team of thousands of moderators who monitor for inappropriate content 24/7 and investigate reports submitted by our community” and also uses machine learning to review text, images, and audio. 

To deal with safety issues, tech companies have turned to volunteers and employees like Meta’s community guides, undercover moderators like Yekkanti, and—increasingly—platform features that allow users to manage their own safety, like a personal boundary line that keeps other users from getting too close. 
