The messy morality of letting AI make life-and-death decisions

By the 2000s, an algorithm had been developed in the US to identify recipients for donated kidneys. But some people were unhappy with how it had been designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm allocated kidneys in a way that maximized years of life saved, an approach that, Grawe and other patients argued, favored younger, wealthier, and whiter patients.

Such bias in algorithms is common. What’s less common is for the designers of those algorithms to agree that there is a problem. After years of consultation with laypeople like Grawe, the designers found a less biased way to maximize the number of life-years saved, in part by considering a candidate’s overall health in addition to age. One key change was that kidneys from the majority of donors, who are often people who have died young, would no longer be matched only to recipients in the same age bracket. Some of those kidneys could now go to older people if they were otherwise healthy. As with Scribner’s committee, the algorithm still wouldn’t make decisions that everyone would agree with. But the process by which it was developed is harder to fault.
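
To make the design change concrete, here is a minimal, hypothetical sketch in Python. The five-year age brackets, the 80-year life expectancy, and the health_score measure are all invented for illustration; the real US allocation system relies on far more clinical detail than this.

from dataclasses import dataclass

@dataclass
class Candidate:
    age: int             # candidate's age in years
    health_score: float  # 0.0 (very poor) to 1.0 (excellent); hypothetical composite

def same_age_bracket(donor_age: int, candidate_age: int, width: int = 5) -> bool:
    """Old rule (simplified): a kidney is offered only to candidates
    in the same age bracket as the donor."""
    return donor_age // width == candidate_age // width

def expected_life_years(c: Candidate, life_expectancy: int = 80) -> float:
    """Revised rule (simplified): rank candidates by estimated
    post-transplant life-years, scaling remaining life expectancy
    by overall health rather than gating on age alone."""
    return max(life_expectancy - c.age, 0) * c.health_score

# Under the revised scoring, a healthy 68-year-old can outrank a sicker 40-year-old:
older = Candidate(age=68, health_score=0.9)    # 12 * 0.9 = 10.8 life-years
younger = Candidate(age=40, health_score=0.2)  # 40 * 0.2 =  8.0 life-years
assert expected_life_years(older) > expected_life_years(younger)

The point of the sketch is the shift in structure: the old rule is a hard eligibility filter keyed to age alone, while the revised rule is a continuous score in which health can offset age.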

Nitschke, too, is asking hard questions. 

A former doctor who burned his medical license after a years-long legal dispute with the Australian Medical Board, Nitschke has the distinction of being the first person to legally administer a voluntary lethal injection to another human. In the nine months between July 1996, when the Northern Territory of Australia brought in a law that legalized euthanasia, and March 1997, when Australia’s federal government overturned it, Nitschke helped four of his patients to kill themselves.

The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: “If I were to keep a pet animal in the same condition I am in, I would be prosecuted.”  

Nitschke wanted to support his patients’ decisions. Even so, he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. “I didn’t want to sit there and give the injection,” he says. “If you want it, you press the button.”

The machine wasn’t much to look at: it was essentially a laptop hooked up to a syringe. But it achieved its purpose. The Sarco is an iteration of that original device, which the Science Museum in London later acquired. Nitschke hopes an algorithm that can carry out a psychiatric assessment will be the next step.

But there’s a good chance those hopes will be dashed. Creating a program that can assess someone’s mental health is an unsolved problem—and a controversial one. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. “You can get a dozen different answers from a dozen different psychiatrists,” he says. In other words, there is no common ground on which an algorithm could even be built. 
