D. Joy Riley, M.D., M.A.
Executive Director
The day before Thanksgiving, I needed just a few items for the feast preparations. My mother accompanied me to the grocery store, and we, along with a large sector of the local population, searched the shelves. When it was time to check out, I saw the lines peopled by harassed clerks and opted for the self-checkout region. After scanning a couple of the larger items, I clicked the “Skip Bagging” button on the screen and placed them back into the cart. Nearing the end of the self-checkout level of Purgatory, the machine repeatedly instructed in its monotone, “Place your item in the bag.” I had no other item. Next came the message on the screen, “Help is on the way.” I didn’t need help. I needed to pay and leave. I turned to my mother and remarked, “I hate AI.”
The helpful middle-aged man monitoring the area came forward to see what kind of help I needed. I told him I simply needed to pay. He asked me if I had scanned the two unbagged items in the cart. I said I had. I placed my cash in the machine, and, with a sigh of relief, exited the store.
On reflection, I had not truly represented my understanding or appreciation of AI, of which the grocery store cash registers are a foreshadowing. I appreciate a number of the conveniences and kinds of assistance that what I consider “primitive” AI can and does bring to us. I am grateful for the directional help the GPS in my vehicle provides, such as warning me which roads ahead are clogged with traffic. I am thankful for my smartphone and my computer — when they are working the way I need them to work. That said, I do not want them to write articles in my name. Increasingly, I am concerned about how humans, including myself, are affected by AI. To what extent should we embrace AI? At what point should we refrain from embracing it? Is refraining even possible? To help me think more clearly about AI, I have been reading. Excerpts from some of that reading follow.
Viola Zhou explains her concern about her mother in eastern China. Her mother is a kidney transplant patient, and visiting the physician is a two-day journey, one way. Solution: DeepSeek. Ms. Zhou’s mother has become increasingly reliant on China’s leading AI chatbot:
When I called my mother and told her what the American nephrologists had said about DeepSeek’s mistakes, she said she was aware that DeepSeek had given her contradictory advice. She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority. She had stopped eating the lotus seed starch it had recommended.
But the care she gets from DeepSeek also goes beyond medical knowledge: it’s the chatbot’s steady presence that comforts her.
. . . She eventually agreed to see the doctor. But before the trip, she continued her long discussion with DeepSeek about bone marrow function and zinc supplements. “DeepSeek has information from all over the world,” she argued. “It gives me all the possibilities and options. And I get to choose.”
I thought back to a conversation we’d had earlier about DeepSeek. “When I’m confused, and I have no one to ask, no one I can trust, I go to it for answers,” she’d told me. “I don’t have to spend money. I don’t have to wait in line. I don’t have to do anything.”
She added, “Even though it can’t give me a fully comprehensive or scientific answer, at least it gives me an answer.”
What a chilling response to AI!
Another article I read addressed AI chatbots in the field of mental health. “These are chatbots that have logged millions of interactions with real people,” Moore noted.
In many ways, these types of human problems still require a human touch to solve, Moore said. Therapy is not only about solving clinical problems but also about solving problems with other people and building human relationships.
“If we have a [therapeutic] relationship with AI systems, it’s not clear to me that we’re moving toward the same end goal of mending human relationships,” Moore said.
Yet, the article does find some future for AI chatbots in the field of mental health. You can read it here.
Google’s AI principles can be found here. Google increasingly uses AI, and we need to be aware of that. Try typing a question into Google, and read the AI-generated introduction to the answer.
A very helpful, comprehensive article on AI was penned by Patricia Engler. Read “AI and Human Futures: What Should Christians Think?” here. The introduction will whet your interest:
Some have called artificial intelligence (AI) humanity’s “biggest existential threat.”[1] Others say it could let humans achieve “a more utopian existence” built upon a “Marxist vision.”[2] Still others point to it as a reason for pursuing “a transformative vision . . . for a new society.”[3] Whatever the outcome, AI is shaping up to drastically impact humanity’s future. Where did AI come from, where could it be heading, and how should Christians think in response? To answer, the following discussion examines past, present, and prospective applications of AI, identifies theological principles for thinking about AI, and applies these principles to consider AI’s bioethical implications for human futures.
FEEDBACK from last month’s email
The Daily Mail article from last month’s email caught the eye of one reader:
J.K., a fertility specialist, reminded me that it is important in this discussion to remember a few things:
IVF has improved dramatically over the last three decades, so the 1 in 30 number no longer applies.
Many of the discarded embryos were already dead.
It appears that in vivo, as well, only a minority of embryos (fertilized eggs) make it to birth.
I was given permission to share these comments with you, our readers. My response is that the article was used to show the scale of the IVF project in a nation much smaller than ours. With the idea of “free IVF” being tossed around as a societal good, people need to think about the far-reaching ramifications.