The one role AI shouldn’t replace – caring for others: Opinion
Source: Straits Times
Article Date: 24 Sep 2025
AI may mimic a human being. But knowing it isn't one is key to preserving real connections. As AI proliferates, safeguards must be built at both the national and personal level to ensure it enhances rather than eclipses what makes us human, says the author.
As a psychologist working at the intersection of child development and healthy ageing, I’m excited by the potential that AI offers in supporting children’s learning and keeping seniors cognitively engaged.
But I am also fully aware of its limits – no matter how “smart” it gets – especially when it comes to human connection.
As AI agents become more personable, emotionally responsive, and even “empathetic” in tone, one must ask: If it feels real enough, will children – or any of us – be able to distinguish it from reality?
American psychologist Harry Harlow’s experiments in the 1950s and 60s showed that infant monkeys preferred a soft cloth surrogate mother over one made of wire mesh that fed them.
To test whether attachment depends more on nutrition or on comfort, Harlow separated newborn monkeys from their mothers and gave them two surrogates: a wire mother with a milk bottle and a soft terrycloth mother with no food. He found that the infants overwhelmingly clung to the cloth mother, approaching the wire mother only to feed; when frightened, they ran to the cloth surrogate as a secure base.
Attachment, it turns out, is not just about biology, but about perceived warmth and safety.
If a soft but unresponsive figure could meet emotional needs, what might a smiling, responsive AI avatar – one that remembers your name, appears to think and reason, and offers perfectly timed empathy – mean for a lonely child or an emotionally neglected older adult?
This question of what “feels real” has long preoccupied both scientists and philosophers, from Descartes’ “I think, therefore I am” to Zhuangzi’s butterfly dream. AI now forces us to ask again: What is real? And does it matter if we don’t know? What do we lose in this compromise?
These are questions we need to confront in determining how we address the fundamental human needs of care and connection.
The great blurring
As AI becomes more advanced, it can mimic human connection, hold deep conversations, remember your preferences and offer unconditional support. It can come across as an emotionally attuned substitute – one that feels real enough to replace the real thing.
But its impact on the human condition thus far raises concerns.
Common Sense Media, a non-profit focused on children’s safety in the digital domain, found that 72 per cent of teens in the US have used AI companions at least once, more than half interact with them regularly, and a third turn to them for emotional support, friendship, or even romance.
Many teens are already replacing missing human connections with artificial ones. The consequences can be devastating.
There have been reports of teens who took their own lives after forming virtual relationships with chatbots, and even of teens harbouring violent thoughts when parents restricted screen time. It is alarming how quickly the blurring of the line between human and AI can lead to dire, and even life-threatening, consequences.
In Singapore, there are even reports of people creating AI “twins” of themselves and leaning on them for emotional support – rather than on a living, breathing human.
Crucially, this can erode the social skills needed to build and sustain relationships, leaving a generation less adept at empathy, patience and handling the complexities of human connection – and perhaps even less able to distinguish right from wrong.
Harlow’s classic experiments remind us that comfort without genuine caregiving can soothe in the short term but fails to nurture in the long run. The monkeys raised only with surrogate mothers (cloth or wire) grew up with severe social and emotional deficits – they struggled with mating, parenting, and interacting with peers.
Choosing AI companions over people can warp our expectations of real relationships and blunt the very empathy those relationships depend on.
Draw the line where it matters
AI, like the many technologies that preceded it, can be reined in through safeguards introduced by the very humans who invented it.
Take the example of the smartphone. As it bridged the geographical gap between humans, it also raised countless concerns over shrinking attention spans, loneliness, and blurred boundaries between work and home.
Over time, safeguards emerged. Tech companies rolled out features like “Do Not Disturb” and parental controls. Governments legislated against texting while driving and extended privacy laws to mobile apps. Schools introduced cyber wellness programmes and restricted phone use to lockers or designated times. Families, too, drew their own lines – device-free dinners, muted notifications, and delayed phone ownership for children.
As AI proliferates, safeguards must be built at both the national and personal level to ensure it enhances rather than eclipses what makes us human.
Nationally, we need regulations around consent, data use, impersonation and transparency. With the proliferation of AI, many sectors – law enforcement, criminal justice and medical science among them – will need new guidelines to relook the way assessments and decisions are made.
Safeguards are needed in areas where human connection matters most, such as education, healthcare and social services.
Do we rely entirely on AI to make decisions that shape people’s futures based purely on data – such as who is admitted to a school? Who is offered a job? Who should receive help? Or who is prioritised for medical care?
AI can grade essays, suggest treatments, or filter applicants, but only humans can bring the empathy, context and connection needed to ensure those decisions respect dignity and nurture relationships.
At the micro level, actions taken at home, in schools and in our daily lives are essential to keeping AI in check – so that we leverage its multitude of benefits without letting the potential pitfalls take hold.
Just as we learnt to set TV hours, or to balance screen time when smartphones came along, we now need to be intentional about when and how AI companions enter our daily lives.
It could mean explaining to children that chatbots are helpful assistants but not friends, and making space for real conversations over dinner, no matter how busy the day. It could mean resisting the temptation to let an AI “babysit” boredom or loneliness, and instead helping children pick up the skills to manage such discomforts and, in doing so, build patience and resilience.
For example, when your tween comes home crying about bullying in class, instead of brushing it aside, which may push the child to turn to a chatbot and withdraw from human interaction, you could take time to sit with them, ask questions, and share your own stories of struggle and resilience.
These moments of presence are what teach children that their first refuge is family, not a machine.
In schools, it could mean embracing AI tutors to support learning gaps while also cultivating debate, critical thinking, imagination, teamwork, and the messiness of human collaboration in the classroom that no algorithm can replicate.
Similarly, in eldercare, it might mean deploying robotic companions as supplements for moments of loneliness, while ensuring that real caregivers and loved ones remain the primary source of comfort and connection.
Caregivers, too, can create moments to connect more deeply with their loved ones. For example, a grandmother might repeat the same childhood story in Teochew or Malay to a robot, and the robot can nod politely and respond warmly, but only family can hug her, laugh with her, add their own memories, and carry the story forward to the next generation.
These human exchanges carry the weight of culture, memory and love. If we let AI take their place, we risk leaving our seniors surrounded by voices, yet profoundly alone.
Nurturing digital wisdom
Safeguards are important. They are not meant to banish AI but to anchor us in what only humans can offer. The human mind is extraordinary in its ability to adapt, bond, trust and anthropomorphise. Yes, AI can comfort – but it cannot care.
What we need is a new digital wisdom: knowing that just because a technology is functionally and emotionally compelling doesn’t mean it is ethically sufficient.
AI should support, not replace, our ability to care. It can help us manage our workload and improve quality of care, but it should not take our place in moments that matter.
Thus, educators must stay central to relationship-building. Parents must model presence. Technologists must work with psychologists, ethicists and social scientists to design AI that keeps humans at the centre. We must be deliberate in how we design, deploy and talk about AI.
Every era of technology has forced us to renegotiate what is “normal”. Now, the negotiation will be about nothing less than the future of human connection.
Yow Wei Quin is the Kwan Im Thong Hood Cho Temple Chair Professor of Healthcare Engineering, Professor of Psychology, and Head of the Humanities, Arts and Social Sciences cluster at the Singapore University of Technology and Design.
Source: The Straits Times © SPH Media Limited. Permission required for reproduction.