The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.
We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data & Society, an organization that studies the social implications of technology.
What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?
It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterous.” Aidan Gomez, CEO of AI firm Cohere, said it was “an absurd use of our time.”
Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is co-founder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious. It’s really exciting and stimulating to be afraid.”
“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”
An old fear
Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over.