The Air Force just took a big step towards killer robots

A U.S. Air Force experiment has alarmed observers who fear that the U.S. and other militaries are moving rapidly toward designing and testing “killer robots.”

In a training exercise on Dec. 14 at Beale Air Force Base, near Marysville, Calif., the Air Force installed A.I. software on a U-2 spy plane and let it autonomously control the aircraft’s radar and sensors as part of what the military said was “a reconnaissance mission during a simulated missile strike.”

While a human pilot flew the U-2, the A.I., which the Air Force named ARTUµ, had final authority over how to use the radar and other sensors, Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, said in an article for Popular Mechanics in which he described the experiment.

“With no pilot override, ARTUµ made final calls on devoting the radar to missile hunting versus self-protection,” Roper wrote. “The fact ARTUµ was in command was less about any particular mission than how completely our military must embrace AI to maintain the battlefield decision advantage.”

But giving an A.I. system the final word is a dangerous and disturbing development, said Noel Sharkey, a professor emeritus of A.I. and robotics at the University of Sheffield, in England, who is also a spokesperson for the group Stop Killer Robots. The organization, made up of computer scientists, arms control experts, and human rights activists, argues that lethal autonomous weapons systems could go awry and kill civilians in addition to making war more likely by reducing the human cost of combat.

The United Nations has held talks aimed at possibly limiting the use of autonomous weapons, but those talks have bogged down, with the U.S., the U.K., China, and Russia all opposed to any ban.

“There are a lot of red flags here,” Sharkey told Fortune about the Air Force test. While the Air Force framed the demonstration as being about reconnaissance, he noted, in the training exercise that reconnaissance helped select targets for a missile strike.

From there, it is only a small step to allowing the software to direct lethal action, Sharkey said.

He also criticized the Air Force for talking about “the need to move at machine speed” on the battlefield. He said “machine speed” renders meaningless any effort to give humans oversight of what the A.I. system is doing.

The A.I. software was deliberately designed without a manual override “to provoke thought and learning in the test environment,” Air Force spokesman Josh Benedetti told the Washington Post. Benedetti seemed to be suggesting that the Air Force wanted to prompt a discussion about what the limits of automation should be.

Sharkey said Benedetti’s comment was disingenuous and an ominous sign that the U.S. military was moving toward a fully autonomous aircraft—like a drone—that would fly, select targets, and fire weapons all on its own. Other branches of the U.S. military are also researching autonomous weapons.

Roper wrote that the Air Force wasn’t yet ready to create fully autonomous aircraft because today’s A.I. systems are too easy for an adversary to trick into making an inaccurate decision. Human pilots, he said, provide an extra level of assurance.

ARTUµ was built using an algorithm called MuZero that was created by DeepMind, the London-based A.I. company owned by Google parent Alphabet, and made publicly available last year. MuZero was designed to teach itself how to play single-player or two-player games without knowing the rules in advance. DeepMind showed that MuZero could learn to play chess, Go, the Japanese strategy game Shogi, and many early Atari video games at superhuman levels.
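MuZero’s key idea is that the algorithm never sees a game’s actual rules: it learns its own internal model of how the game behaves and plans by rolling that learned model forward. The Python sketch below is purely illustrative of that structure; the three “networks” are random linear maps, the dimensions are arbitrary, and the greedy rollout stands in for MuZero’s Monte Carlo tree search. None of it comes from DeepMind’s published code or the Air Force.

```python
# A minimal structural sketch of the MuZero idea (not DeepMind's code):
# the agent learns three functions and plans inside its own learned model.
# Here the "networks" are untrained random linear maps, shown only to
# make the data flow concrete. All names and sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, STATE_DIM, N_ACTIONS = 8, 16, 4

W_h = rng.normal(size=(STATE_DIM, OBS_DIM))             # representation h
W_g = rng.normal(size=(STATE_DIM, STATE_DIM + N_ACTIONS))  # dynamics g
w_r = rng.normal(size=STATE_DIM + N_ACTIONS)            # reward head of g
W_p = rng.normal(size=(N_ACTIONS, STATE_DIM))           # policy head of f
w_v = rng.normal(size=STATE_DIM)                        # value head of f

def represent(obs):
    """h: raw observation -> hidden state."""
    return np.tanh(W_h @ obs)

def dynamics(state, action):
    """g: (hidden state, action) -> (next hidden state, predicted reward)."""
    x = np.concatenate([state, np.eye(N_ACTIONS)[action]])
    return np.tanh(W_g @ x), float(w_r @ x)

def predict(state):
    """f: hidden state -> (action probabilities, value estimate)."""
    logits = W_p @ state
    policy = np.exp(logits) / np.exp(logits).sum()
    return policy, float(w_v @ state)

def plan(obs, depth=3):
    """Score each first action by unrolling the *learned* model, a crude
    stand-in for MuZero's Monte Carlo tree search."""
    root = represent(obs)
    scores = []
    for first_action in range(N_ACTIONS):
        state, total, action = root, 0.0, first_action
        for _ in range(depth):
            state, reward = dynamics(state, action)
            total += reward
            policy, _ = predict(state)
            action = int(np.argmax(policy))  # follow the learned policy
        _, value = predict(state)
        scores.append(total + value)
    return int(np.argmax(scores))

print("chosen action:", plan(rng.normal(size=OBS_DIM)))
```

In DeepMind’s version, all three functions are trained jointly from self-play so that plans made inside the learned model track real outcomes; the sketch shows only the architecture’s shape, not the training.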

In this case, the Air Force took MuZero and trained it to play a game that involved operating the U-2’s radar, with points scored for finding enemy targets and points deducted if the U-2 itself was shot down in the simulation, Roper wrote.
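Roper’s description amounts to what A.I. researchers call reward design: the radar task is recast as a game whose score the algorithm learns to maximize. The actual scoring rules used at Beale have not been published, so the sketch below is hypothetical, with invented point values, showing only the general shape such a reward function might take.

```python
# Hypothetical reward function in the spirit of Roper's description.
# The real scoring used in the exercise is not public; every value
# and parameter name here is invented for illustration.
def radar_reward(found_enemy_target: bool, u2_shot_down: bool) -> float:
    reward = 0.0
    if found_enemy_target:
        reward += 1.0    # points for each target the radar locates
    if u2_shot_down:
        reward -= 10.0   # heavy penalty if the simulated U-2 is lost
    return reward
```

A MuZero-style agent would then learn, over many simulated missions, radar-tasking behavior that maximizes this cumulative score, without ever being told the simulator’s rules.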

In the past, DeepMind has said it wouldn’t work on offensive military applications, and a company spokeswoman told Fortune it played no role in helping the U.S. Air Force create ARTUµ, nor had it licensed the technology to the Air Force. She said DeepMind was unaware of the Air Force project until reading press accounts of it last week.

DeepMind as a company, and its cofounders as individuals, are among the 247 entities and 3,253 people that have signed a pledge, promoted by the Boston-based Future of Life Institute, against developing lethal autonomous weapons. Demis Hassabis, DeepMind’s cofounder and chief executive, also signed an open letter from A.I. and robotics researchers calling for a UN ban on such weapons.

DeepMind said it had no comment on the Air Force’s A.I. experiment.

Some other A.I. researchers and policy experts who are concerned about A.I.’s risks have previously questioned whether computer scientists should refrain from publishing details about powerful A.I. algorithms that may have military uses or could be misused to spread disinformation.

OpenAI, a San Francisco research company that was founded partly over concerns that DeepMind had been too secretive about some of its A.I. research, has talked about restricting publication of some of its own research if it believes it could be misused in dangerous ways. But when it restricted access to a large language model, called GPT-2, in 2019, the company was criticized by other A.I. researchers for being either alarmist or orchestrating a marketing stunt to generate “this A.I. is too dangerous to make public” headlines.

“We seek to be thoughtful and responsible about what we publish and why,” DeepMind said in response to questions from Fortune. It said a team within the company reviewed internal research proposals to “assess potential downstream impacts and collaboratively develop recommendations to maximize the likelihood of positive outcomes while minimizing the potential for harm.”
