
What would it be like to be a conscious AI? We might never know.

Others have argued that it would be immoral to turn off or delete a conscious machine: as our robot Robert feared, this would be akin to ending a life. There are related scenarios, too. Would it be ethical to retrain a conscious machine if it meant deleting its memories? Could we copy that AI without harming its sense of self? What if consciousness turned out to be useful during training, when subjective experience helped the AI learn, but was a hindrance when running a trained model? Would it be okay to switch consciousness on and off? 

This only scratches the surface of the ethical problems. Many researchers, including Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal.

If we decided that conscious machines had rights, would they also have responsibilities? Could an AI be expected to behave ethically itself, and would we punish it if it didn’t? These questions push into yet more thorny territory, raising problems about free will and the nature of choice. Animals have conscious experiences and we allow them certain rights, but they do not have responsibilities. Still, these boundaries shift over time. With conscious machines, we can expect entirely new boundaries to be drawn.

It’s possible that one day there could be as many forms of consciousness as there are types of AI. But we will never know what it is like to be these machines, any more than we know what it is like to be an octopus or a bat or even another person. There may be forms of consciousness we don’t recognize for what they are because they are so radically different from what we are used to.

Faced with such possibilities, we will have to learn to live with uncertainty.

And we may decide that we’re happier with zombies. As Dennett has argued, we want our AIs to be tools, not colleagues. “You can turn them off, you can tear them apart, the same way you can with an automobile,” he says. “And that’s the way we should keep it.”

Will Douglas Heaven is a senior editor for AI at MIT Technology Review.
