During the second meeting of the U.S. Food and Drug Administration’s (FDA) Digital Health Advisory Committee (DHAC), members of the panel seemed shocked and, at times, at a loss for words when grappling with the Agency’s discussion questions surrounding the potential regulation of generative artificial intelligence (AI)-enabled digital mental health medical devices.
The meeting, held virtually on 6 November 2025, covered a wide range of topics concerning generative AI, including regulatory considerations for digital mental health diagnostics and therapeutics for children and adults, lessons learned from other fields of AI, and considerations for payers and providers.
Generative AI is being “rapidly developed and adopted in healthcare” at the same time the U.S. is experiencing a mental health crisis that affects all ages, said Michelle Tarver, Director of the Center for Devices and Radiological Health (CDRH). “Generative AI-enabled digital mental health medical devices hold significant promise in helping to address this mental health crisis through innovative approaches,” she said.
In November 2024, the inaugural meeting of the Digital Health Advisory Committee focused on topics such as the benefits and risks of generative AI in medical devices and the need to establish infrastructure and guardrails for the technology. As FDA looks to begin regulating AI-enabled medical devices, the Agency has also asked the public for comment on how to assess and evaluate the performance of these devices.
However, when presented with a series of discussion questions concerning the benefits and risks of a theoretical prescription generative AI software device offering therapy to a patient diagnosed with major depressive disorder (MDD) who is uninterested in pursuing traditional therapy, the panel was hesitant to recommend the technology for higher-risk patients without oversight from a clinician, including children and adolescents and those at risk for severe depression and suicide. Panel members also expressed concern about the concept of a theoretical over-the-counter generative AI-enabled mental health chatbot indicated as a treatment for MDD that patients could access without interacting with a mental health care provider.
Prescription digital mental health device
The hypothetical scenario posed to the expert panel involved a patient diagnosed with major depressive disorder who reports stress-related intermittent tearfulness and is uninterested in pursuing therapy with a healthcare provider, but is willing to try at-home therapy with a theoretical prescription device that uses a large language model to mimic a conversation with a human therapist. In this scenario, the device is indicated to treat major depressive disorder in adults 22 years and older who are not already seeing a therapist.
When asked to discuss the benefits and risks of a device that provides automated therapy, panel members offered a range of opinions about the idea. Some saw the usefulness of having an intervention before a person could be seen by a clinician, or even as a supplement to the 988 suicide hotline. However, other panel members raised concerns about an AI chatbot’s potential to misdiagnose or provide incorrect advice without human guidance. There is also a question of who owns the data in the chat conversation, and what the company that develops the chatbot might do with that data in the future.
“I agree that something is better than nothing, but only if the benefits outweigh the risks,” Omer Liran, Co-Director of Virtual Medicine at Cedars-Sinai Medical Center in Los Angeles, said during the meeting. He noted that a generative AI therapy chatbot could potentially increase access to care in areas and communities where it is difficult or nonexistent, could be scaled, and could allow clinicians to capture data on sessions and outcomes that might be difficult to obtain at in-person clinical visits.
“We don’t, as a matter of course, collect the detailed information of an interaction with the multiple numbers of counseling sessions going on around the country now for major depression, so that gives you just an unbelievable opportunity to [tailor] the therapy, understanding what works and what doesn’t,” Thomas Maddox, Executive Director of the Healthcare Innovation Lab at BJC HealthCare in St. Louis, Mo., said during the meeting.
However, “the risks are numerous, too,” Liran pointed out, and can include hallucinations, incorrect diagnoses, inappropriate advice, missing subtle cues in human emotion, biases, and cybersecurity risks. “[T]here is something special about a human-to-human connection I feel that even a superintelligent AI may not be able to replicate, and I do worry about losing that by making it so convenient for patients to interact with an AI agent,” he said.
John Torous, Staff Psychiatrist at Beth Israel Deaconess Medical Center, said there needs to be agreement on a comparator for generative AI-based therapy to measure its effectiveness, whether that is engaging with a chatbot on a non-medical topic, taking a medication, or going for a walk. “Compared to what, I think, is the question, and in what circumstance,” he said.
In terms of risks, “I think in some ways we need to assume the worst-case scenario and then let the chatbots prove that they don’t lead to those outcomes,” Torous said. “But I think we need to be able to quantify what those risks are in studies, and then have a framework to say what they are, and let each bot prove that it can minimize those risks, and then you can make that judgment of what are the risks and benefits of it today.”
Other panel members emphasized the need for some degree of clinician oversight when using AI-enabled chatbots for therapy, such as a review of the conversation, and an option for escalation to a human if needed.
“I think that almost certainly needs to be the case at the first stage before even thinking about allowing these things to be used by themselves,” Ray Dorsey, Director of the Center for Brain and Environment at the Atria Health and Research Institute in New York, said during the meeting.
Wayne K. Goodman, Professor and Chair of the Menninger Department of Psychiatry and Behavioral Sciences at Baylor College of Medicine in Houston, said he’d like to get regular feedback from the app and the patient, just as he’d get feedback from a therapist if things were not going well with a patient he referred to them. “I need some sort of reassurance that, in this transaction that the patient’s having with the digital mental health device, that I’m going to get some feedback on how they’re doing, because I’m the one who wrote the prescription,” he explained. “I’m going to feel some obligation for whether this is working for them the same way it would be if I were prescribing a drug.”
Over-the-counter AI mental health medical device
FDA’s second set of discussion questions posed the hypothetical scenario of the same generative AI-enabled medical device being evaluated for over-the-counter (OTC) use in patients with MDD. The discussion questions also asked panel members to consider scenarios in which the device could be modified to diagnose and treat MDD without the involvement of a healthcare provider, and modified to diagnose and treat multiple sadness-related mental health disorders, without the involvement of a healthcare provider, in people who have not been diagnosed with MDD.
“This proposal would make me very nervous,” Goodman said about transitioning the AI-enabled chatbot to an OTC device. “I certainly would try to limit it to patients with mild depression, and so I would want to see some way of building in,” he said. “I certainly wouldn’t contemplate an over-the-counter digital AI-driven treatment for somebody with moderate-to-severe depression without involvement of a healthcare provider.”
Jessica Jackson, Vice President of Alliance Development at Mental Health America in Alexandria, Virginia, said this scenario sounded more like a wellness app than a therapy device, and that to regulate the device, FDA would have to define what it considers therapy. “I think if the device is doing automated therapy, I do not think we’re at a place to have something that is automated over the counter, even with oversight to say that it is therapy,” she explained. “There’s not enough information about the type of device to say that the therapy is going to match the actual diagnosis to give them exactly what they need.”
Liran said a provider would need to be in the loop and monitoring the AI chatbot in an OTC scenario. “With today’s technology, I’m very uncomfortable with the idea that an AI chatbot will be used in the absence of a human provider for the treatment of major depressive disorder,” he said. “If there’s any possible self-harm, for example, it’s not calling 911. It’s just on the patient to seek help. There’s no evidence that they can be a valid replacement for moderate to severe MDD at this point that I’ve seen that I would consider high quality evidence, so the maker of this app would have to submit very good evidence for why we should consider that request.”
In terms of the high-quality evidence needed for an OTC AI-enabled therapy device, panel members said there should be randomized controlled trials, conducted independently of the device’s developer, to verify its safety and effectiveness under the guidance of a clinician. There also needs to be evidence of long-term use in patients over several years, and an “extraordinarily favorable safety profile,” Dorsey said. “I just think you’d want to have years of safety data across thousands of individuals because this would be used by hundreds of thousands of individuals,” he said.
When considering an OTC AI-enabled chatbot that both diagnoses and treats MDD, panel members expressed concerns about liability and questioned whether the technology would be able to make an accurate diagnosis.
“It is very difficult, even for the seasoned clinician, to make the diagnosis of major depression or depressive symptoms,” Goodman said. “It’s very heterogeneous. There are a lot of rule-ins and rule-outs.” “I would certainly start with less severe depressive symptoms, and I would not include patients who are coming in with suicidal ideation,” he added.
FDA’s discussion question asking panel members to consider the same scenario with an OTC AI-enabled chatbot that could diagnose and treat sadness-related mental health disorders drew similar concerns. While the technology to infer human emotions from voice and facial recognition may not be available yet, “I think this is a great opportunity to set the bar for what we would like to see in 10 and 15 years, so that we’re not in the position we are [in] now playing catch up where people have already built things without having guidelines,” Jackson said.
AI-enabled therapy device for children
Panel members were most concerned about FDA’s set of discussion questions about expanding the indicated population of the hypothetical generative AI-enabled therapy device to include children and adolescents 21 years and younger.
“Our committee is so concerned that we don’t know what to say,” Ami Bhatt, Chair of the Panel and Chief Innovation Officer at the American College of Cardiology, said. “Some because of fear, some because not all children are the same. Some because the relationship between child and caregivers can be challenging and unpredictable, and most agree that child psychiatrists are central to this development process of this device.” There is also “a recognition at the same time that wellness is so important at this age, and a lack of access to care in this age group is incredibly unfair,” Bhatt added. “So this is an important discussion, which leaves us a little bit speechless.”
Chevon Rariy, Chief Clinical Innovation Officer at Visana Health, said FDA should “take great caution” when prescribing an autonomous AI-enabled therapy chatbot for children and adolescents. “I believe in the potential of AI to improve how clinicians can be more human, not less, but I think as doctors — and many of us might feel similarly — we took an oath to do no harm,” Rariy said. “I’m a mom, and I have seen my kids completely absorbed with the device and the impact that it has on their childhood, on their play, on their creativity, on their connection. It’s unmistakable. For me, I would say that it is vitally important to have more data,” she added. “We don’t have data for this specifically for children, and so for me, it would need to be a very high bar.”
Jackson said anything being developed would have to be appropriate for the child’s age and include a mechanism for switching off the app after a determined period of time to limit screen time and interaction with the chatbot. As with the previous discussions, a healthcare provider with specialized training should be in the loop and monitoring the conversations that children and adolescents have with these devices, she said. “It would need to be developed very differently,” Jackson said. “I think there is a world where we could ask for some of these higher bars in the product development that would make it developmentally appropriate treatment, the same way that we do therapy and psychiatry currently.”
REFERENCE: Regulatory Focus; 10 November 2025; Jeff Craven