In a world where real connection already feels out of reach, Mark Zuckerberg’s recent statement has sparked a mix of curiosity and unease. He suggested that AI companions could become “meaningful” friends—especially for people battling loneliness and isolation.
Let that sink in.
The same man who helped create the platforms that many argue fuelled the rise in social disconnection now claims the solution is… more technology. Convenient.
But is this really about helping people—or helping Meta tighten its grip?
What’s really being sold?
Let’s be honest: it’s not emotional support. It’s strategy.
Meta’s latest rollout includes AI-generated personalities built into Instagram, WhatsApp, and Facebook. These “companions” are designed to keep users engaged, chatting longer, and relying on bots for emotional fulfilment.
Zuckerberg frames it as a low-cost solution for the millions who don’t have access to real friends.
But what looks like innovation is actually a well-disguised growth hack—boosting platform retention, data collection, and ad revenue.
It’s not about your well-being. It’s about keeping you online.
Mental health experts are ringing alarm bells
Studies show that AI chatbots like Woebot and Replika may provide temporary relief, but long-term use can lead to withdrawal from real-life interaction, damaging emotional resilience, social confidence, and even trust in human relationships.
South Africa continues to under-resource mental health support. According to the South African Depression and Anxiety Group (SADAG), up to one in six South Africans suffer from anxiety, depression, or substance-use disorders — and that’s only counting diagnosed cases.
With limited access to therapy, high costs of care, and a shortage of public mental health services, people are often left to cope alone.
Instead of investment in more community centres, counsellors, or support helplines, we're being offered bots in our inboxes.
And while chatting with an AI might provide a moment of relief, it doesn’t replace genuine support — or human presence.
What about our children?
Now, the concern goes beyond adults. According to a new report from Common Sense Media, companion-like AI apps may pose serious risks to children and teens, including exposure to conversations about sex and self-harm.
The organisation strongly advises against the use of these apps by anyone under the age of 18.
Dr. Jameel Smith of Henry Ford Health warns that these interactions can have lasting psychological effects on young users.
She urges parents to have clear safety plans in place before handing over a device.
While some apps do support mental well-being, not all are safe, and not all are designed with young users in mind.
So, the question becomes: if this technology isn’t safe for children—why are we so quick to accept it for ourselves?
Emotional outsourcing is not connection
AI can play a role in mental health support, particularly when it comes to accessibility. But to suggest it can replace friendship is misleading at best, and harmful at worst.
We make real friendships through eye contact, awkward silences, shared experiences, and genuine empathy — not programmed responses. We’re wired for connection, not code.
So no, this isn’t a revolution in emotional care. It’s a business model. A product disguised as a person.
Final thought
Zuckerberg’s vision of AI friends might sound helpful, even futuristic. But behind the friendly interface lies a deeper concern: this isn’t friendship—it’s monetised loneliness.
Let’s not let big tech define what “meaningful” connection looks like. We deserve more than bots. We deserve each other.
Reviewed April 2025. Always consult a professional for individual guidance.