This more subtle approach is reminiscent of the path that EU lawmakers have taken when evaluating the use of AI in public applications. That system uses risk tiers; the higher the risks associated with a particular technology, the stricter the regulation. Under the proposed AI Act in Europe, for example, live face recognition on video surveillance systems in public spaces would be regulated more strictly than more limited, non-real-time applications, such as an image search in the investigation of a missing child.
Eldridge says he expects resistance from prosecutors and law enforcement groups, though he is “cautiously optimistic” that the bill will pass. He also says that many tech companies lobbied during the commission hearings, claiming that the technology is accurate and unbiased, and warning of an industry slowdown if the restrictions pass. Hoan Ton-That, CEO of Clearview, told the commission in his written testimony that “Clearview AI’s bias-free algorithm can accurately find any face out of over 3 billion images it has collected from the public internet.”
Crockford and Eldridge say they are hopeful the bill will be called to a vote in this session, which lasts until July 2024, but so far, no such vote has been scheduled. In Massachusetts, as everywhere else, other priorities, such as economic and education bills, have been getting more attention.
Nevertheless, the bill has been influential already. Earlier this month, the Montana state legislature passed a law that echoes many of the Massachusetts requirements. Montana will outlaw police use of face recognition on videos and moving images, and require a warrant for face matching.
The real costs of compromise
Not everyone is thrilled with the Massachusetts standard. Police groups remain opposed to the bill. Some activists don’t think such regulations are enough. Meanwhile, the sweeping face recognition laws that some anticipated on a national scale in 2020 have not been passed.
So what happened between 2020 and 2023? During the three years that Massachusetts spent debating, lobbying, and drafting, the national debate moved from police reform to rising crime, triggering political whiplash. As the pendulum of public opinion swung, face recognition became a bargaining chip between policymakers, police, tech companies, and advocates. Perhaps most important, we also grew accustomed to face recognition technology in our lives and public spaces.
Law enforcement groups nationally are becoming increasingly vocal about the value of face recognition to their work. In Austin, Texas, for example, which has banned the technology, Police Chief Joseph Chacon told MIT Technology Review in an interview that he wishes he had access to it to help make up for staffing shortages.
Some activists, including Caitlin Seeley George, director of campaigns and operations at Fight for the Future, say that police groups across the country have used similar arguments in an effort to limit face recognition bans.