The purpose of the guidelines is to make sure that tech contractors stick to the DoD’s existing ethical principles for AI, says Goodman. The DoD announced these principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the US military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Lab.
Yet some critics question whether the work promises any meaningful reform.
During the study, the board consulted a range of experts, including vocal critics of the military’s use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.
Whittaker, who is now faculty director at New York University’s AI Now Institute, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. “She was never meaningfully consulted,” says Holsworth. “Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a given outcome has broad buy-in from relevant stakeholders.”
If the DoD does not have broad buy-in, can its guidelines still help to build trust? “There are going to be people who will never be satisfied by any set of ethics guidelines that the DoD produces because they find the idea paradoxical,” says Goodman. “It’s important to be realistic about what guidelines can and can’t do.”
For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that regulations governing such tech are decided higher up the chain. The aim of the guidelines is to make it easier to build AI that meets those regulations. And part of that process is to make explicit any concerns that third-party developers have. “A valid application of these guidelines is to decide not to pursue a particular system,” says Jared Dunnmon at the Defense Innovation Unit, who coauthored them. “You can decide it’s not a good idea.”