Why Silicon Valley is fertile ground for obscure religious beliefs

How do ideas about faith and God influence conversations about artificial intelligence?


It wasn’t science that convinced Google engineer Blake Lemoine that one of the company’s AIs is sentient. Lemoine, who is also an ordained Christian mystic priest, says it was the AI’s comments about religion, as well as his “personal, spiritual beliefs,” that helped persuade him the technology had thoughts, feelings, and a soul.

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine said in a recent tweet. “Who am I to tell God where he can and can’t put souls?”

Lemoine is probably wrong — at least from a scientific perspective. Prominent AI researchers as well as Google say that LaMDA, the conversational language model that Lemoine was studying at the company, is very powerful, and is advanced enough that it can provide extremely convincing answers to probing questions without actually understanding what it’s saying. Google suspended Lemoine after the engineer, among other things, hired a lawyer for LaMDA, and started talking to the House Judiciary Committee about the company’s practices. Lemoine alleges that Google is discriminating against him because of his religion.

Still, Lemoine’s beliefs have sparked significant debate, and serve as a stark reminder that as AI gets more advanced, people will come up with all sorts of far-out ideas about what the technology is doing, and what it signifies to them.


“Because it’s a machine, we don’t tend to say, ‘It’s natural for this to happen,’” Scott Midson, a University of Manchester liberal arts professor who studies theology and posthumanism, told Recode. “We almost skip and go to the supernatural, the magical, and the religious.”

It’s worth pointing out that Lemoine is hardly the first Silicon Valley figure to make claims about artificial intelligence that, at least on the surface, sound religious. Ray Kurzweil, a prominent computer scientist and futurist, has long promoted the “Singularity,” the notion that AI will eventually outsmart humanity and that humans could ultimately merge with the tech. Anthony Levandowski, who cofounded Google’s self-driving car project, which became Waymo, started the Way of the Future, a church devoted entirely to artificial intelligence, in 2015 (the church was dissolved in 2020). Even some practitioners of more traditional faiths have begun incorporating AI, including robots that dole out blessings and advice.

Scott Midson

There’s a lot of caution about what these machines do and don’t do. It’s all about how they convince you that they understand, and those kinds of things. Noel Sharkey is a prominent theorist in this field. He really does not like these robots that convince you they can do more than they actually can. He calls them “show bots.” One of the main examples he uses of the show bots is Sophia, the robot that has been given honorary citizenship status in Saudi Arabia. This is more than a basic chatbot because it is in a robot body. You can clearly tell that Sophia is a robot, if for no other reason than that the back of its head is a transparent casing, and you can see all the wires and things.


For Sharkey, all of this is just an illusion, just smoke and mirrors. Sophia doesn’t actually warrant personhood status by any stretch of the imagination. It doesn’t understand what it’s saying. It doesn’t have hopes, dreams, feelings, or anything that would make it as human as it might appear. The fact is, duping people is problematic. Sophia has a lot of swing-and-miss phrases; it sometimes malfunctions or says questionable, eyebrow-raising things. But even when it is at its most transparent, we are still going along with some level of illusion.

There are a lot of times when robots have that “it’s a puppet on a string” thing; they’re not doing as many independent things as we think they are. We’ve also had robots giving testimony. Pepper the robot went to a government evidence hearing about AI. It was a House of Lords evidence session, and it sounded like Pepper was speaking for itself, saying all these things. It was all pre-programmed, and that wasn’t entirely transparent to everyone. And again, it’s that misapprehension. Managing the hype is, I think, the big concern.

Rebecca Heilweil

It kind of reminds me of that scene from The Wizard of Oz where the real wizard is finally revealed. How does the conversation around whether or not AI is sentient relate to the other important discussions happening about AI right now?

Scott Midson

Microsoft’s Tay was another chatbot that was sent out onto Twitter with a machine learning algorithm that let it learn from its interactions with people in the Twittersphere. The trouble is, Tay was trolled, and within 16 hours it had to be pulled from Twitter because it had become misogynistic, homophobic, and racist.

How these robots, whether sentient or not, are made very much in our image raises another huge set of ethical issues. A lot of algorithms are trained on datasets that are entirely human: they speak of our history and our interactions, and they’re inherently biased. There have been demonstrations of algorithms that are biased on the basis of race.

The question of sentience? I can see it as a bit of a red herring, but actually, it’s also tied into how we make machines in our image and what we do with that image.

Rebecca Heilweil

Timnit Gebru and Margaret Mitchell, two prominent AI ethics researchers, raised this concern before they were both fired by Google: by treating the sentience debate, and the AI itself, as a freestanding thing, we might miss the fact that the AI is created by humans.

Scott Midson

We almost see the machine as detached, or even kind of God-like, in some ways. Going back to that black box: there’s this thing that we don’t understand; it’s kind of religious-like; it’s amazing; it’s got incredible potential. All the adverts about these technologies tell us it’s going to save us. But if we see it in that kind of detached way, if we see it as kind of God-like, what does that encourage for us?
