A monk asked Zhaozhou, “Does a dog have buddha nature?” Zhaozhou answered, “No!” The monk replied, “All sentient beings have buddha nature. Why would a dog not have it?” Zhaozhou said, “Because it has karmic consciousness.”

Another monk asked, “Does a dog have buddha nature?” Zhaozhou answered, “Yes!” The monk replied, “If it has, why then is it still stuffed into a bag of skin?” “Because though it knows, it deliberately transgresses,” said Zhaozhou.

—Entangling Vines Case 46

Well, perhaps a flesh-and-blood dog does or doesn’t have buddha nature, but what about a robotic dog that runs around on artificial intelligence? Boston Dynamics makes such a dog, which they’ve named “Spot.”

The old-fashioned dog, man’s best friend, presents only a nominal threat to its human master. But a growing number of scientists are warning of threats from AI robots, which are gaining brains and power at an extraordinary pace. Does a robot, a creature of generative artificial intelligence, have buddha nature, or not?

I recently asked a friend if she thought AI has buddha nature: “A robot is soul-less, heartless. There is no way it can have buddha nature.” Then I asked ChatGPT, the AI app, to weigh in on the subject. “Buddha nature is considered to be the inherent nature of all beings, which includes robots if they are considered beings … ultimately the answer to this question may depend on one’s definition of buddha nature and what is considered to be a ‘being’ capable of possessing it,” the app smartly responded.

A Zen monk in Vermont is deeply engaged in this question, according to a recent article in The Atlantic. Soryu Forall has become something of a spiritual advisor to the AI-design community, holding talks and retreats for researchers and developers from OpenAI, Google DeepMind, and Apple. He hopes his followers can “embed the enlightenment of the Buddha into code,” according to the article. Creating AI with a spiritual path, Forall says, “is perhaps the most important act of all time.”

The risks of superintelligence are becoming better known, but they may threaten not so much life itself, the existential risk, as the quality of life. AI systems, which depend on large language models (LLMs) trained on vast troves of existing text, tend to magnify the gender and ethnic biases already present in that material. Chatbots unleashed on social media can sway public opinion by greatly amplifying false messages over truthful ones.

A longer-term problem, some scientists believe, is that humans will lose the skill of interacting with other humans as AIs become teachers, caregivers, and personal confidants.

“People will be disconnecting themselves from humanity,” says Jerry Kaplan, an adjunct Stanford professor of artificial intelligence, in his new book, Generative Artificial Intelligence: What Everybody Needs to Know. “By interposing a machine,” he adds, “it is kind of a strong word, but I call it ‘emotional pornography.’” Even so, Kaplan believes the development of AI, with its positive social impact on education, healthcare, and the law, to be “one of the most important inventions in human history. Its impact on humanity will be absolutely astonishing.” He ends, “I am genuinely grateful that I have lived to see this moment happen.”

Back to Spot, the robot dog. Three weeks ago, the Massachusetts State Police sent their own AI dog, a Spot they call “Fido,” into a house where a shooter was holed up. It went up two flights of stairs and then down into the basement where the shooter was hiding. The shooter opened fire on Fido, disabling it. That the dog was able to locate the shooter may have saved human lives. Unfortunately, it was an existential crisis for Fido.