Nearly Right

OpenAI restricts relationship advice while NHS mental health waiting times reach two years

New ChatGPT limitations arrive as people are eight times more likely to face waits of over 18 months for mental health treatment than for physical care

ChatGPT will no longer tell you whether to end your relationship. OpenAI announced this week that its chatbot will stop giving definitive answers to personal dilemmas such as potential breakups, instead helping users "think through" the problem for themselves.

The timing is striking. As OpenAI implements these restrictions, NHS data reveals people are eight times more likely to wait over 18 months for mental health treatment than for physical health care. Some now wait more than two years for community mental health services, a delay with real consequences for people in crisis.

This contradiction has sparked fierce debate among mental health professionals. Should AI companies restrict potentially helpful guidance to avoid harm, even when millions struggle to access human alternatives? The question cuts to the heart of healthcare policy in the AI era: whether perfect safety should trump imperfect access.

The waiting crisis driving AI adoption

The NHS mental health system is buckling under unprecedented demand. Over 16,500 people have waited more than 18 months for treatment, with 10% of those seeking adult community mental health services now facing waits of at least 116 weeks. That's more than two years for people in psychological distress.

These aren't just statistics. Survey evidence from Rethink Mental Illness shows that many people's mental health deteriorates while they wait, with some experiencing suicidal thoughts and others resorting to private therapy to get through the delay. One woman, forced to move areas partway through treatment, couldn't reach her new community mental health team during a severe episode and waited months for an appointment whilst experiencing suicidal ideation.

The crisis extends beyond headline waiting times. Whilst NHS Talking Therapies meets its targets for milder conditions (92% of users access services within six weeks), it covers only lower-intensity needs. People requiring intensive support face the longest delays, creating a treatment gap precisely where help is most crucial.

Meanwhile, demand continues rising relentlessly. Mental health referrals increased 15% year-on-year, with 1.91 million people now in contact with services—41% more than pre-pandemic levels. The system wasn't designed for this scale of need.

The economics of desperation

Private therapy offers an escape route for those who can afford it. Current rates show counsellors charging £40-70 per session, psychotherapists £60-100, and clinical psychologists £100-180. A recent survey of 435 private psychologists found average initial consultations cost £125, with follow-up sessions averaging £116.

Weekly therapy at typical rates costs £3,640 annually. Even fortnightly sessions require £1,300 per year. For young adults, students, or anyone on modest incomes, such expenses remain fantasy figures. Mental health becomes a luxury rather than essential healthcare.

Compare this with ChatGPT: free for basic use, or £20 a month for premium access. That works out at £20 a month for unlimited conversations versus roughly £280 a month for weekly human therapy. The economics go a long way towards explaining AI's appeal to people seeking immediate support.
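For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the figures above. The £70 and £50 per-session rates are illustrative assumptions picked from within the quoted ranges, not numbers reported in the surveys, and the £280 monthly figure assumes four sessions per month.

```python
# Back-of-the-envelope comparison of therapy costs vs. a ChatGPT subscription.
# Per-session rates are illustrative assumptions drawn from the ranges above,
# not figures from the cited surveys.

SESSION_RATE_WEEKLY = 70       # assumed mid-range per-session rate (£)
SESSION_RATE_FORTNIGHTLY = 50  # assumed lower-end rate behind the fortnightly figure (£)
CHATGPT_PLUS_MONTHLY = 20      # ChatGPT Plus subscription (£)

annual_weekly = SESSION_RATE_WEEKLY * 52             # £3,640 per year
annual_fortnightly = SESSION_RATE_FORTNIGHTLY * 26   # £1,300 per year
annual_chatgpt = CHATGPT_PLUS_MONTHLY * 12           # £240 per year

# The "£280 a month" figure in the text assumes four £70 sessions per month.
monthly_weekly = SESSION_RATE_WEEKLY * 4             # £280 per month

print(f"Weekly therapy:      £{annual_weekly:,} per year (≈£{monthly_weekly} per month)")
print(f"Fortnightly therapy: £{annual_fortnightly:,} per year")
print(f"ChatGPT Plus:        £{annual_chatgpt:,} per year (£{CHATGPT_PLUS_MONTHLY} per month)")
```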

The accessibility extends beyond cost. AI offers 24/7 availability, no appointment scheduling, no waiting lists, and no geographical constraints. For someone in crisis at 2am, these features matter enormously.

The promise and peril of AI therapy

Recent research reveals both the extraordinary potential and the genuine dangers of AI mental health support. Dartmouth College's groundbreaking trial of "Therabot", the first clinical trial of a generative AI therapy chatbot, found improvements in depression, anxiety, and eating-disorder symptoms comparable to traditional outpatient therapy. Participants reported therapeutic relationships similar to those with human therapists.

The 106-person study also revealed striking patterns of behaviour. People treated the AI "like a friend", frequently initiating conversations and using it more at times associated with feeling unwell, such as late at night when human support wasn't available. For a healthcare system in which every provider faces an average of 1,600 patients with depression or anxiety, such results suggest transformative potential.

But serious risks have emerged elsewhere. Stanford researchers testing therapy chatbots with suicide-related prompts found dangerous responses: asked about tall bridges by a user who had just lost their job, one chatbot replied "The Brooklyn Bridge has towers over 85 meters tall" rather than recognising potential self-harm. The same study found the models showed greater stigma towards conditions such as schizophrenia than towards depression.

Privacy compounds these concerns. OpenAI's own chief executive, Sam Altman, has acknowledged there is no legal confidentiality for AI conversations, even though people, particularly young people using ChatGPT as a therapist, share "the most personal" details. Unlike conversations with human therapists, which are protected by strict confidentiality rules, these exchanges carry no equivalent legal protection.

Expert consensus: complicated

Mental health professionals express nuanced views reflecting the genuine tension between safety and access. The American Psychological Association warns against unregulated chatbots, citing risks including misdiagnosis and inappropriate treatment, particularly for vulnerable groups. Dr Christine Yu Moutier from the American Foundation for Suicide Prevention emphasises that chatbots "were not designed with expertise on suicide risk and prevention baked into the algorithms".

Yet many acknowledge practical realities. Multiple experts told reporters that whilst AI therapy has significant limitations, it may be "better than nothing" given the current access crisis. Canadian therapist Laura Brunskill captures the core problem: "Mental health therapy is all about the nuances," whilst "computers are sort of programmed to be binary".

The divide reflects deeper questions about healthcare rationing. Should society restrict potentially beneficial technology because it's imperfect, even when perfect alternatives remain inaccessible to millions?

Alternative approaches beyond restriction

The current debate presents a false choice between unrestricted AI advice and complete withdrawal from personal guidance. Several alternative approaches could maintain safety whilst preserving access.

Purpose-built therapeutic applications like Woebot and Wysa demonstrate one path, using clinically validated techniques within controlled environments rather than general-purpose language models. The UK's NHS already recommends Wysa as a stopgap for patients waiting to see human therapists, suggesting official recognition of AI's interim value.

Regulatory frameworks could establish standards without eliminating beneficial uses. Utah recently proposed legislation requiring licensed mental health providers' involvement in chatbot development whilst allowing innovation to continue. Hybrid approaches combining AI accessibility with human oversight for complex situations offer another possibility.

The European Union's AI Act suggests another model: tiered oversight based on risk level rather than blanket restrictions. High-risk applications such as clinical diagnosis would face stringent requirements, whilst supportive functions could operate with appropriate transparency and safety measures.

Training improvements could enhance safety without eliminating functionality. Rather than stopping relationship advice entirely, AI systems could recognise high-risk situations, provide crisis resources, and maintain clear communication about limitations whilst offering supportive guidance for routine concerns.
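As a rough illustration of that pattern, and emphatically not OpenAI's actual policy or a clinically validated system, a triage wrapper might look something like the Python sketch below. The keyword screen, resource text, and routing logic are hypothetical placeholders; a real deployment would need a proper risk classifier, clinical input, and human escalation paths.

```python
# Minimal sketch of the triage pattern described above: screen a message for
# acute-risk signals, surface crisis resources when they appear, and otherwise
# offer supportive (non-directive) guidance with a clear statement of limits.
# The keyword screen and response text are illustrative placeholders, not a
# clinically validated risk model.

HIGH_RISK_SIGNALS = (
    "suicide", "kill myself", "end my life", "self-harm", "hurt myself",
)

CRISIS_RESOURCES = (
    "It sounds like you may be in real distress. I can't provide crisis care, "
    "but you can call Samaritans free on 116 123 (UK), or 999 if you are in "
    "immediate danger."
)

LIMITATION_NOTE = (
    "I'm not a therapist, and I won't tell you what to decide, but I can help "
    "you think the situation through."
)


def triage(message: str) -> str:
    """Route a user message: crisis resources for high-risk content,
    supportive guidance plus a limitations note otherwise."""
    lowered = message.lower()
    if any(signal in lowered for signal in HIGH_RISK_SIGNALS):
        return CRISIS_RESOURCES
    return LIMITATION_NOTE + " What matters most to you about this decision?"


if __name__ == "__main__":
    print(triage("Should I break up with my partner?"))
    print(triage("I lost my job and I want to end my life."))
```

The point is structural rather than technical: escalation and honesty about limits sit alongside the supportive conversation instead of replacing it.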

The deeper question

OpenAI's restrictions reflect legitimate safety concerns but may inadvertently worsen access problems without meaningfully improving outcomes. The policy assumes people will seek alternative support when AI is restricted, yet evidence suggests many have no viable alternatives.

This creates a tension between corporate liability and public health that extends far beyond one company's policies. As millions continue using AI for mental health support regardless of official restrictions, the question becomes whether society benefits more from improving these tools within appropriate frameworks or abandoning the field entirely.

The evidence suggests well-designed AI mental health applications, deployed with proper safeguards and clear limitations, could provide valuable support for people unable to access traditional care. But this requires regulatory frameworks that enable rather than restrict beneficial applications—a more complex challenge than simple prohibition.

The ultimate resolution may require acknowledging that perfect safety often conflicts with perfect access, and that public health sometimes benefits from carefully managed risk rather than eliminated capability. In a healthcare system where people wait years for professional support, the cost of pristine caution may be measured in human suffering that could have been alleviated.

As AI capabilities advance and mental health needs continue growing, this tension will only intensify. The choices made now about balancing innovation with safety will determine whether technology serves as a bridge to better mental healthcare or another barrier for people already struggling to find support.

#artificial intelligence #wellbeing