
AI’s inequality problem

In the US, for instance, during much of the 20th century the various regions of the country were—in the language of economists—“converging,” and financial disparities decreased. Then, in the 1980s, came the onslaught of digital technologies, and the trend reversed itself. Automation wiped out many manufacturing and retail jobs. New, well-paying tech jobs were clustered in a few cities.

According to the Brookings Institution, just eight American cities, including San Francisco, San Jose, Boston, and Seattle, accounted for roughly 38% of all US tech jobs by 2019. New AI technologies are particularly concentrated: Brookings’s Mark Muro and Sifan Liu estimate that just 15 cities account for two-thirds of the AI assets and capabilities in the United States (San Francisco and San Jose alone account for about one-quarter).

The dominance of a few cities in the invention and commercialization of AI means that geographical disparities in wealth will continue to soar. Not only will this foster political and social unrest, but it could, as Coyle suggests, hold back the sorts of AI technologies needed for regional economies to grow. 

Part of the solution could lie in somehow loosening the stranglehold that Big Tech has on defining the AI agenda. That will likely take increased federal funding for research independent of the tech giants. Muro and others have suggested hefty federal funding to help create US regional innovation centers, for example. 

A more immediate response is to broaden our digital imaginations to conceive of AI technologies that don’t simply replace jobs but expand opportunities in the sectors that different parts of the country care most about, like health care, education, and manufacturing. 

Changing minds

The fondness that AI and robotics researchers have for replicating the capabilities of humans often means trying to get a machine to do a task that’s easy for people but daunting for the technology. Making a bed, for example, or an espresso. Or driving a car. Seeing an autonomous car navigate a city’s streets or a robot act as a barista is amazing. But too often, the people who develop and deploy these technologies don’t give much thought to the potential impact on jobs and labor markets.

Anton Korinek, an economist at the University of Virginia and a Rubenstein Fellow at Brookings, says the tens of billions of dollars that have gone into building autonomous cars will inevitably have a negative effect on labor markets once such vehicles are deployed, taking the jobs of countless drivers. What if, he asks, those billions had been invested in AI tools that would be more likely to expand labor opportunities? 

When applying for funding at places like the US National Science Foundation and the National Institutes of Health, Korinek explains, “no one asks, ‘How will it affect labor markets?’”


Katya Klinova, a policy expert at the Partnership on AI in San Francisco, is working on ways to get AI scientists to rethink the ways they measure success. “When you look at AI research, and you look at the benchmarks that are used pretty much universally, they’re all tied to matching or comparing to human performance,” she says. That is, AI scientists grade their programs in, say, image recognition against how well a person can identify an object. 

Such benchmarks have driven the direction of the research, Klinova says. “It’s no surprise that what has come out is automation and more powerful automation,” she adds. “Benchmarks are super important to AI developers—especially for young scientists, who are entering en masse into AI and asking, ‘What should I work on?’” 

But benchmarks for the performance of human-machine collaborations are lacking, says Klinova, though she has begun working to help create some. Collaborating with Korinek, she and her team at the Partnership on AI are also writing a user guide for AI developers who have no background in economics to help them understand how workers might be affected by the research they are doing.

“It’s about changing the narrative away from one where AI innovators are given a blank ticket to disrupt and then it’s up to the society and government to deal with it,” says Klinova. Every AI firm has some kind of answer about AI bias and ethics, she says, “but they’re still not there for labor impacts.”

The pandemic has accelerated the digital transition. Businesses have understandably turned to automation to replace workers. But the pandemic has also pointed to the potential of digital technologies to expand our abilities. They’ve given us research tools to help create new vaccines and provided a viable way for many to work from home. 

As AI inevitably expands its impact, it will be worth watching to see whether this leads to even greater damage to good jobs—and more inequality. “I’m optimistic we can steer the technology in the right way,” says Brynjolfsson. But, he adds, that will mean making deliberate choices about the technologies we create and invest in.


Reviewed

“The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence”
Erik Brynjolfsson
Daedalus, Spring 2022

“The wrong kind of AI? Artificial intelligence and the future of labour demand”
Daron Acemoglu and Pascual Restrepo
Cambridge Journal of Regions, Economy and Society, March 2020

Cogs and Monsters: What Economics Is, and What It Should Be
Diane Coyle
Princeton University Press, 2021
