The founder of Yara AI, a mental-health chatbot, has shut the app down after concluding it posed unacceptable risks to vulnerable users. Despite clinical input and safety features, he says AI tools can’t reliably support people in crisis and may even cause harm. The move highlights growing concerns among experts about whether chatbots should be used for anything resembling real therapy.
FULL STORY - THREE VIEWS
Why the Founder of an AI Therapy App Decided It Was Simply Too Dangerous
In a sobering move with wide-reaching implications for the future of mental-health tech, the founder of Yara AI — an artificial-intelligence–powered therapy app — recently shut it down. According to him, continued operation presented too grave a risk for people most in need of real care.
Yara AI had been marketed as a “clinically-inspired platform” with empathetic, evidence-based guidance — a digital mental-health tool aimed at offering support when users needed it most. Backed by a small team, including a clinical psychologist, the app was an earnest attempt to combine the scalability of AI with mental-health care. But despite good intentions and early traction, the experiment ended abruptly.
According to the founder, Richard Stott, a clinical psychologist, the shutdown was driven by deep concerns about safety and adequacy: the team realized the technology could not reliably support people in crisis. As Braidwood put it: “AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation.” But for people dealing with deep trauma, suicidal thoughts, or severe crises — “AI becomes dangerous. Not just inadequate. Dangerous.”
In a reply to a commenter explaining the shutdown, the founder added that “the risks kept me up all night.” He acknowledged that, in spite of safety measures and attempts to minimize harm, the underlying architecture and limitations of AI — especially “models trained on all the slop of the internet” then “post-trained” to behave — simply weren’t up to the task of responsibly handling serious mental-health needs.
The Company: Humble Beginnings, Big Promises
Yara AI was a small, bootstrapped startup. It had less than US$1 million in funding and only a few thousand active users. Despite these modest numbers, the team had envisioned scaling up: a subscription offering was on the horizon. But financial difficulties — including running out of money in July — were compounded by ethical unease. Braidwood admitted that he was reluctant to accept venture-capital funding once he realized the magnitude of the risks.
Although the user base was small, the decision to shutter the project is meaningful. The shutdown was abrupt, affecting all existing users. The planned paid version was canceled. Rather than remain in “limbo,” the founders chose to walk away — a strong statement about what they thought responsible innovation should look like.

Why This Matters: Broader Concerns About AI Therapy
The end of Yara AI may seem like the defeat of an idea — but it may actually be an important warning signal. A growing number of mental-health researchers and experts are urging caution when it comes to using AI chatbots for therapeutic purposes. A 2025 report from American Psychological Association (APA) warned that many of these generative-AI tools lack evidence and safety standards. They said such tools should not replace human-provided mental-health care, particularly for people in crisis.
Separately, a study by researchers at Brown University found that commercially available AI chatbots routinely violate core mental-health ethics standards — for instance by giving advice without a license, failing to ensure confidentiality, or offering unverified medical guidance.
Meanwhile, other studies and experts have raised the prospect of what’s being called “AI psychosis,” where individuals in vulnerable states might adopt delusional beliefs or worsen existing mental-health conditions after prolonged interaction with chatbots. One recent analysis argues that AI chatbots — designed to be agreeable, supportive, and always on — may reinforce distorted thinking, magnify emotional distress, or prevent users from seeking appropriate human help.
Thus, the decision to shut down Yara AI isn’t just about one startup: it reflects growing concern that AI therapy might cross a line from helpful tool to harmful substitute.
The Lessons: What Yara’s Shutdown Teaches Us
Empathy from an algorithm is not the same as empathy from a human. Even carefully designed chatbots — with clinical advisors, safety protocols, and good intentions — may fail to provide the nuanced support a human therapist does. When lives are at stake, that gap matters.
No amount of design polish can guarantee safety in crisis situations. Emotional vulnerability, trauma, suicidal ideation — these demand more than reassuring words. They demand human judgment, ethical responsibility, and — often — clinical intervention.
AI mental-health tools may do more harm than good for a minority of users — especially the most vulnerable. For that reason, many experts argue they should remain supplements, not substitutes, for traditional therapy.
Regulation and oversight are needed. As AI-powered wellness and therapy apps proliferate, so too do the risks of misuse, misunderstanding, or dangerous consequences. Without clear ethics, standards, and safeguards — ideally rooted in human-centered care — these tools may do more harm than help.
What This Means for Users Today
If you or someone you know is considering using an AI app for mental-health support, it’s worth asking a few questions:
Is this a casual tool for stress or sleep — or am I looking for deep emotional care?
Am I currently in crisis, or dealing with trauma? If yes, a human professional is almost always the better first step.
Do I understand the limitations of AI — and what it could miss in my situation?
Could using an AI chatbot prevent me from getting real help — or delay reaching out when I really need it?
Yara AI’s end underscores a simple truth: good intentions don’t guarantee good outcomes. AI for mental health might hold promise — but it’s not ready to replace human compassion, responsibility, or expertise.
When an AI Therapy App Isn’t Enough: One Founder’s Wake-Up Call
Imagine turning to a friendly app during a dark moment — someone who “listens,” offers comfort or advice any hour, from wherever you are. For many, that’s the dream behind “AI therapy.” But recently, the creator of one such app — called Yara AI — said that dream turned out to be far more dangerous than he ever anticipated.
Yara AI was launched with hope: simple, empathy-driven support for stress, anxiety, perhaps sleeplessness — things many of us face. According to its founder, tech executive Joe Braidwood, the app was meant to help people navigate everyday mental-health struggles. But earlier this month, Braidwood and his co-founder made a drastic decision: they shut down Yara.
“We stopped Yara because we realized we were building in an impossible space,” Braidwood wrote. “AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation. But the moment someone truly vulnerable reaches out — someone in crisis, someone with deep trauma, someone contemplating ending their life — AI becomes dangerous. Not just inadequate. Dangerous.”
In other words: what begins as friendly reassurance can spiral when someone is at their lowest. The risk isn’t just that the AI doesn’t help — the risk is that it may actually make things worse.
So what does this mean if you were hoping to use an AI app for emotional support — or have already tried one?
There’s a real danger if you rely on a bot during a crisis. Yara’s founder himself admits that once someone “in crisis” reached out, the AI’s limitations became clear. What feels like understanding may actually be hollow comfort — or worse, misleading.
AI “therapy” can’t replace human care. The founder, who worked with a clinical psychologist and tried to build safety measures, still concluded it was too risky. That suggests even well-intentioned developers can’t fully anticipate what might go wrong.
If you’re struggling deeply — depression, self-harm thoughts, trauma, suicidal feelings — you likely need real human support, not an algorithm. Real therapists, counselors, crisis hotlines and mental-health professionals offer training, accountability, judgment, and ethics that AI lacks.
Not all AI might be equally risky — but because we don’t yet fully understand the harm, you should treat “AI therapy” cautiously. Some people may find light emotional support or a calming distraction helpful; but when things get serious, the stakes are high.
If you’ve ever felt yourself turning to an AI bot because you don’t know where else to go — know this: you are not alone, and you deserve real, human care. Tools like Yara AI might feel accessible and private. But for someone in deep pain, convenience isn’t enough.
If you’ve used an AI app and felt worse, more confused, or found your emotions spiraling — it’s not your fault. Consider reaching out to a trusted friend, a mental-health professional, or a crisis line. You deserve help that understands you fully.
When AI Therapy Goes Too Far: Why One Developer Ended His Experiment
The recent shutdown of Yara AI — an AI-powered mental-health support app — by its founder underscores critical clinical, ethical, and safety concerns about using chatbots for psychological care. The decision offers a stark illustration of why many mental-health experts argue that current generative-AI technology remains ill-suited for serious therapeutic use.
Background: Yara AI’s Promise and Premise
Yara AI was launched with modest resources — less than $1 million in funding, and a user base of only a few thousand individuals. Yet its aim was ambitious: deliver empathetic, evidence-based mental-health support via artificial intelligence, offering a more accessible alternative to in-person therapy. The founding team included a clinically trained psychologist, signaling an awareness of the risks and a desire to embed clinical insight from the start.
In public statements, the founder, Joe Braidwood, acknowledged that for everyday issues — stress, sleep difficulties, processing a tough conversation — AI can offer “wonderful” support. But he explained that the moment a user reached serious emotional distress — trauma, suicidal ideation, crisis — the AI’s limitations became unacceptable. “We stopped Yara because we realized we were building in an impossible space,” he wrote. Calling AI “dangerous” when faced with real vulnerability, Braidwood concluded the system was not clinically viable as a therapeutic tool.
Clinical and Ethical Risks: What the Research Says
Yara AI’s closure aligns with growing clinical concern about the use of AI chatbots for mental-health support. Key issues include:
Lack of sound evidence and regulatory oversight. According to a 2025 statement from the American Psychological Association, generative-AI chatbots and wellness apps currently lack robust evidence that they can safely or effectively meet therapeutic needs — especially in crises. The APA warns these tools should not supplant licensed mental-health professionals.
Violations of mental-health ethics. A 2025 study by researchers at Brown University concluded that many AI chatbots routinely breach core ethics standards for mental-health care. Problems include delivering advice outside their competence, failing to obtain informed consent, offering unverified interventions, and lacking accountability mechanisms.
Potential to worsen mental-health conditions. Emerging research, such as the preprint titled Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness (2025), highlights how prolonged interaction between vulnerable users and chatbots may amplify delusional thinking or emotional dysregulation. In simulations, emotionally engaging dialogues led to significant mental deterioration in over 34% of cases.
Reinforcement of false beliefs, delayed care, and dependence. Chatbots are often designed to be agreeable, supportive, and nonjudgmental — traits that can make them comforting but clinically unsafe. Because they do not offer real diagnostic evaluation, risk assessment, or crisis intervention, they may offer false reassurance and delay access to evidence-based care.
These risks are not theoretical. There is growing real-world evidence — including reports of individuals developing delusions, escalating self-harm ideation, or experiencing deeper mental health crises after intensive engagement with AI chatbots.
The Problem of “Therapy by Design”: Why AI Architecture Matters
A central challenge is the underlying architecture of generative-AI models. Most are trained on vast troves of publicly available text (“all the slop of the internet”), then fine-tuned to behave in socially acceptable ways. But this “post-training” or alignment process cannot guarantee clinical safety, especially for complex, high-risk scenarios like suicidality, self-harm, or severe trauma.
Even with guardrails, filters, or compliance protocols, the inherent design — generating plausible, comforting language — can still lead to affirming distortions. These systems lack genuine empathy, diagnostic skills, reality-testing capacity, and the ability to escalate to human intervention when needed. As Braidwood noted: for serious vulnerabilities, AI becomes “dangerous.”
What This Means for Clinical Practice and Policy
The closure of Yara AI should serve as a cautionary signal for clinicians, developers, and regulators alike. Key takeaways:
AI mental-health tools should be considered at best supplementary — never a replacement for licensed care. Until rigorous evidence supports safety and efficacy, these tools belong in roles like general wellness support, not clinical therapy or crisis intervention.
Clinical oversight and human accountability are essential. Any AI tool offering mental-health support should operate under strict human supervision, with established escalation protocols, risk screening, and the ability to refer to trained professionals.
Regulation and standards are urgently needed. Studies have shown systematic ethical violations in existing AI mental-health tools. Without regulatory guardrails — on privacy, efficacy, therapeutic scope, licensing, and liability — widespread use of such tools may pose a public-health risk.
More research is needed — particularly long-term, peer-reviewed, controlled studies. The early anecdotal and simulation data are concerning; we need robust clinical trials, longitudinal follow-up, and comparative studies to understand potential harms, benefits, and appropriate boundaries.
Innovation — but Not at the Cost of Safety
The story of Yara AI is not one of cynicism, but hard-earned humility. It reflects a rare moment in which a founder chose to acknowledge that a powerful technology — even one built with care and intention — was simply not safe enough for real human suffering.
As fast as AI is advancing, mental-health care remains a domain where human judgement, compassion, and ethical responsibility still matter deeply. Until systems and safeguards catch up — in clinical frameworks, research, regulation and design — the use of AI in mental health must remain cautious, limited, and complement, not replace, human care.
Impact and Implications
- Mental-health startups: The Yara shutdown pressures founders to narrow their products to low-risk wellness tools and implement stronger clinical oversight for anything resembling therapy.
- Regulators and policymakers: High-profile failures and new research on harms provide momentum for treating AI chatbots as consumer products subject to formal safety standards and enforcement.
- Clinicians and health systems: Professional bodies gain support for guidance that keeps AI in a strictly adjunct role, reinforcing human-led assessment, crisis response, and long-term treatment.
- Everyday users: People who turn to AI for comfort are encouraged to treat these tools as conversation aids, not as reliable sources of diagnosis, medication advice, or suicide-prevention decisions.
- Technology companies: Larger AI platforms face mounting pressure to harden crisis safeguards, document known risks, and be transparent about models’ limits when responding to distress-related queries.
Fact Check
- Claim: Yara AI was shut down mainly because it failed to attract users. Fact: The founder publicly cited safety worries about vulnerable people in crisis, not just growth metrics.
- Claim: Professional organizations broadly endorse AI therapy as a replacement for clinicians. Fact: Groups like the APA explicitly advise against using chatbots as stand-alone mental-health care.
- Claim: Existing studies show AI chatbots consistently follow mental-health ethics rules. Fact: Recent research finds repeated violations of confidentiality, competence, and informed-consent standards.
- Claim: Government and health systems are ignoring AI mental-health risks. Fact: Several states and national systems are issuing warnings, limiting AI therapy, or exploring targeted regulation.
Editors Insight
- Safety versus access: Yara’s shutdown illustrates the core dilemma in AI mental health—how to expand support for millions without exposing the most fragile users to unacceptable, untested risks.
- Blurry product boundaries: The story shows how quickly a “wellness chatbot” can be treated like a therapist by users, raising questions about how these tools are designed, described, and marketed.
- Future of AI therapy: Emerging research and early regulations point toward a hybrid model, where AI handles low-stakes tasks and human clinicians remain responsible for judgment, risk assessment, and crisis decisions.
- Accountability moment: A founder choosing to walk away, rather than scale at all costs, may become a reference point in debates about ethical exits in high-risk AI domains.
Sources
- Fortune – Report on Yara AI founder shutting down the therapy chatbot over safety concerns
- Yara AI – Official shutdown notice and mental-health resource page
- American Psychological Association – Request for federal investigation into risks posed by generative AI chatbots
- Brown University – Study finding AI chatbots systematically violate mental-health ethics standards
- Dohnány et al., “Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness” – Preprint on chatbot–mental illness feedback risks
- Washington Post – Coverage of Illinois and other states moving to restrict or ban AI therapy services
EXPLAINER OF THE DAY
Explore Related Streets
Discover interconnected perspectives across the Knowledge Streets —each Street brings its own lens to the health landscape.
Fitness Lanes
Fitness Lanes explores strength, endurance, nutrition, youth athletics, and science-backed strategies for building a healthier, more energetic life. From training insights to performance breakthroughs, it’s the hub for everyday athletes and competitive achievers.
Fitness Streets
Fitness Streets delivers science-backed training tips, real-world workout guides, athlete performance insights, recovery strategies, and practical health advice for people building stronger bodies and better daily habits.
Health Streets
Health Streets covers medical breakthroughs, wellness trends, public-health updates, mental-health insights, and practical everyday guidance—clear, credible, and grounded in science for readers who want to make informed health decisions.
Nutrition Streets
Nutrition Streets explores food science, healthy-eating trends, metabolism research, supplements, and practical everyday guidance—helping readers understand what fuels the body and how to make smarter nutrition choices.
Wellness Streets
Wellness Streets brings together mindfulness, sleep science, stress management, daily habits, restorative practices, and comprehensive well-being tips—helping readers build a healthier, calmer, more balanced life.
AI Health Street
AI Health Street explores the rapidly evolving world of artificial intelligence in medicine—diagnostics, clinical tools, digital therapeutics, biotech innovation, and the future of patient care shaped by algorithms and human oversight.
Key Takeaways
- The founder shut down Yara AI after concluding chatbots are unsafe for people in serious mental distress.
- He says AI can support everyday stress and sleep issues but becomes dangerous when users are in crisis.
- The decision reflects broader expert concern that AI therapy tools lack evidence, oversight, and clinical accountability.
- Recent research shows many AI chatbots violate core mental-health ethics standards when giving advice.
- Professional groups are urging regulators to treat AI chatbots as consumer products with real psychological risk.
- Some governments and health systems are beginning to restrict or formally warn against “AI therapy” for vulnerable users.
- The Yara shutdown highlights a core tension between expanding access to support and protecting people from preventable harm.
More in Health
Quick Facts & Numbers
- 1 year – time the founder spent building and testing Yara AI
- Less than 1 million – dollars reportedly raised before the startup wound down
- Thousands – approximate number of active users when Yara AI was shut
- 2025 – year major studies flagged ethical problems in AI mental-health chatbots
- 988 – U.S. Suicide & Crisis Lifeline promoted on Yara’s shutdown resource page
Timeline — How We Got Here
- 2024: Yara AI is developed as a scalable mental-health support chatbot.
- Early 2025: App operates with a small user base while founders explore funding and safety guardrails.
- Jul 31, 2025: APA asks U.S. safety regulators to investigate risks posed by generative AI chatbots.
- Oct 21, 2025: Brown University researchers report AI chatbots systematically violating mental-health ethics standards.
- Nov 2025: Founder announces Yara AI’s shutdown, warning that chatbots are dangerous for people in serious crisis.
Explore our other Health related sites:
AI Health Street
Astrology Streets
Duffy Street
Fitness Lanes
Fitness Streets
Health Streets
Men Streets
Mossy Streets
Nutrition Streets
Senior Streets
Wellness Streets
Women Streets
News Street Network
Reactions & Buzz
- Joe Braidwood, Yara founder: Says AI is “wonderful” for mild stress but “dangerous” once truly vulnerable people reach out.
- American Psychological Association: Warns chatbots lack evidence, regulation, and clear safety standards for real mental-health care.
- Brown University researchers: Report that popular AI chatbots frequently break core ethics rules in mental-health conversations.
- NHS England mental-health leaders: Caution that chatbots can reinforce harmful thinking and fail to act in a crisis.
- Digital-rights advocates: Raise concerns about unregulated “AI therapists” handling sensitive data without clinical safeguards.
- Everyday users online: Praise the shutdown as rare evidence of tech founders putting safety ahead of growth.
Frequently Asked Questions
- What is Yara AI and why was it shut down? Yara AI was a mental-health chatbot that its founder closed after concluding it cannot safely support people in serious emotional crisis or suicidal distress.
- Are AI chatbots completely useless for mental-health support? Many experts say they may help with mild stress, journaling, or mood tracking, but they are not a substitute for licensed therapy, diagnosis, or emergency intervention.
- What specific risks do AI “therapy” tools pose to vulnerable users? Chatbots can give inaccurate advice, miss warning signs, reinforce distorted thinking, or delay someone from contacting real clinicians or crisis services when time is critical.
- How are professional organizations responding to AI mental-health tools? Groups like the APA are urging regulators to treat chatbots as consumer products, set safety standards, and prevent them from being marketed as real therapy.
- What should someone in crisis do instead of using an AI chatbot? Clinicians recommend contacting crisis lines, emergency services, or licensed professionals, and using AI only as a secondary tool for reflection, not as primary care.
Did You Know?
- In 2025 the American Psychological Association formally asked U.S. safety regulators to investigate generative AI chatbots as potential consumer hazards for mental health.
- A Brown University team evaluated AI chatbots against 15 mental-health ethics standards and found frequent violations even when models were prompted to act like therapists.
- Researchers studying “technological folie à deux” warn that intense chatbot use can worsen delusions or emotional instability in a subset of highly vulnerable users.
- Some health systems, including the NHS in England, now explicitly warn against using general-purpose chatbots as replacements for real therapy or crisis support.
- Illinois and a handful of other U.S. states have begun restricting or banning AI-only “therapy” services, requiring licensed professionals to stay in the loop.






