Rise of the Machines: New Book Applies Christian Ethics to the Future of AI

Once viewed as the stuff of science fiction, artificial intelligence (AI) is steadily making inroads into our everyday lives—from our social media feeds to digital assistants like Siri and Alexa. As helpful as AI is for many aspects of our lives, it also raises a number of challenging moral and spiritual questions. Facial recognition can be used to locate fugitive criminals, but also to suppress political dissidents. Various apps and platforms can anticipate our preferences, but also harvest data that invades our privacy. Technology can speed healing, but many are hoping to use it to enhance natural human abilities or eliminate “undesirable” emotions.

In his recent book 2084: Artificial Intelligence and the Future of Humanity, Oxford professor emeritus John Lennox surveys the current and future landscape of AI and addresses these and related issues. Lennox is a mathematician who has spoken internationally on the philosophy of science, written books addressing the limits of science, and debated high-profile atheists Richard Dawkins and Christopher Hitchens. In the new book, he acknowledges the many benefits AI can offer, but he also critiques the worldview that lies behind many secular visions of AI that seek to transform humans into gods and create utopias through technology.

Christopher Reese spoke with Lennox about his book and how Christians should think about a number of issues related to this rapidly accelerating technology, including “upgrading” humans, whether computers can become conscious, and how Christians should weigh the pros and cons of AI.

Many negative scenarios involving AI have played out in popular movies. In your opinion, are these the kinds of outcomes that we should be concerned about?

We’re nowhere near these negative scenarios yet in the opinion of the top thinkers in this area. But there’s enough going on in artificial intelligence that actually works at the moment to give us huge ethical concern.

There are two main strands in artificial intelligence. There’s narrow AI, which is very successful in certain areas though raising deep problems in others. This is simply a powerful computer working on huge databases, and it has a programmed algorithm which looks for particular patterns. Let’s suppose we have a database of a million X-rays of lung diseases labeled by the best doctors in the world, and then you get an X-ray at your local hospital and an algorithm compares yours with the database in just a few seconds and comes up with a diagnosis. So, that’s a very positive thing.

But then you move on to the more questionable things—today the main one has to do with facial recognition. There again, you’ve got a huge database of millions of photographs of faces labeled with names and all kinds of information. You can immediately see that a police force would find that useful in checking for terrorists and criminals. But it can be used for suppressing people and manipulating and controlling them. In China today, there’s every evidence of extreme surveillance techniques being used to subdue the Uyghur minority. That has raised ethical questions all around the world.

The Big Brother of 1984 is not some future scenario; we’re already there. But it’s not 2084. That’s where the second strand comes in: artificial general intelligence, where we develop a superintelligence that controls the world. That’s sci-fi stuff.

C. S. Lewis worried that technological advances might lead to the “abolition of man.” Can you elaborate on what he meant by that?

One reason I wrote the book was my familiarity with C. S. Lewis. In the 1940s, he wrote two books, The Abolition of Man and That Hideous Strength, which is a science fiction book. His concern was that if human beings ever managed to do this kind of thing—remaking humanity through technology—the result wouldn’t be human at all. It would be an artifact.