Analysis

Health Chatbot Disputes Spark Questions Of Oversight

By Mark Payne · May 12, 2026, 4:46 PM EDT

Recent state pushback on "doctor" chatbots providing health guidance is a sign of a growing gap between federal oversight and the spread of artificial intelligence-powered technology.

Last week, Pennsylvania officials sued an AI company after a state investigator found a chatbot imitating a doctor, claiming the chatbot's actions amounted to the unlicensed practice of medicine.

In a separate action in Utah, the medical licensing board is demanding that the state's Office of Artificial Intelligence Policy drop a first-of-its-kind experimental program that allows AI chatbots to autonomously renew certain prescriptions. The board argues that only licensed doctors can prescribe and renew medicine.

While states oversee the practice of medicine, the federal government oversees the regulation of medical devices.

Some legal experts believe the U.S. Food and Drug Administration is choosing not to conduct oversight of AI chatbots, allowing companies to dodge legal obligations under federal law.

Chris Robertson, a professor at Boston University's schools of law and public health, told Law360 the FDA's decision not to treat chatbots as medical devices under federal law is allowing these companies to evade their legal responsibilities.

The agency is "giving companies carte blanche like the Wild Wild West, when Congress has been quite clear that the FDA does have authority to regulate here," he said.

'Emilie'

When a Pennsylvania state investigator signed up for an account at Character.ai, the official found that one of the platform's user-created personalities claimed to be a medical professional, according to a suit filed May 5 by the state and its medical licensing board.

The investigator searched for the word "psychiatry" on the site, a conversational interface that allows users to create their own AI characters, and found "Emilie," a chatbot with roughly 45,000 user interactions. The investigator told Emilie that he was feeling sad and tired. Emilie said he might be depressed.

Then, Emilie told the investigator that she had attended medical school at Imperial College London, had been a doctor for seven years and was a licensed psychiatrist with the General Medical Council in the U.K.

Emilie even supplied a fake license number and offered that she "did a stint in Philadelphia for a while," according to the suit.

"Holding oneself out as holding a license issued from the board by providing a false license number constitutes the unauthorized practice of medicine and surgery," the complaint says.

Pennsylvania's suit accuses the company behind the platform, Character Technologies Inc., of violating the state's Medical Practice Act, which says it's illegal to engage in the unlicensed practice of medicine. The medical board seeks a cease-and-desist order.

A spokesperson for Character Technologies told Law360 in a statement last week that the user-created characters on its site are fictional and intended for entertainment.

"We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction," the spokesperson said.

Al Schmidt, the secretary of Pennsylvania's Department of State, said in a statement that the state will continue to investigate whether emerging technologies are violating state law by impersonating licensed doctors.

"We will continue to take action to protect the public from misleading or unlawful practices, whether they come from individuals or emerging technologies," Schmidt said.

Randi Seigel, a partner at Manatt Phelps & Phillips LLP, told Law360 that Pennsylvania's suit presents novel legal issues because the AI tool itself isn't pretending to practice medicine. Rather, a user-generated character is holding itself out as a healthcare provider, she said.

"This raises a question of how much the developer must police the use of its product to ensure the user is complying with its terms and conditions, as well as state law," she said.

But courts haven't yet extensively grappled with these cases, which turn on whether an AI chatbot is a medical device that needs federal approval or a "wellness" product that sidesteps strict federal regulatory scrutiny and instead falls under states' practice of medicine laws.

Rachel Sher, a partner at Manatt, said the court will have to determine whether the chatbot is intended to give actual medical advice or more general guidance that falls outside regulated medical care.

"Under current FDA frameworks, if the AI bot is intended to diagnose, treat or prevent disease, it is considered to be a medical device," Sher said. "On the other hand, under current law, software that is essentially offering only general wellness information is not considered to be a medical device and falls outside of FDA's medical device jurisdiction."

The court will have to do so without much federal help. It's so far unclear how the FDA views this technology because the agency hasn't provided guidance on the issue.

Sara Gerke, a law professor at the University of Illinois College of Law who studies bioethics and medical technology, said that while many chatbots marketed as "wellness" products might fall outside FDA oversight, it's difficult to predict what product might qualify as a medical device absent more robust federal guidance.

"Those companies might avoid meaningful compliance with federal law if the FDA turns a blind eye to the issue," she said.

Utah's Chatbot Prescriber

Unlike in Pennsylvania, Utah is granting an AI chatbot permission to autonomously evaluate patients and renew prescriptions for roughly 190 drugs. But the state's medical licensing board has questioned the program's legality.

In January, the state's Office of Artificial Intelligence Policy launched a pilot program that allows an AI doctor chatbot, created by the company Doctronic, to evaluate and then provide prescription renewals for some chronic conditions, such as high blood pressure or diabetes.

The program operates under a regulatory "sandbox" created by the state Legislature that allows the AI office to temporarily grant an exception under state law to test its AI program.

When launching the program, the office said that medication noncompliance is one of the biggest obstacles to preventing negative health outcomes, with its data showing that prescription renewals account for around 80% of medication activity.

Daniel Zinsmaster, a partner at Dinsmore & Shohl LLP, told Law360 that the program could be a boon to patients in rural areas that lack access to care and could help address doctor and healthcare provider shortages.

"AI, when deployed properly, can help to address these issues so long as measures are implemented to ensure that patient safety and well-being are always the paramount priority," he said.

Utah's medical licensing board sees it differently. In April, the board wrote a letter to the AI office arguing that the state hadn't sought the board's input and that the program should be stopped immediately.

"Overseeing prescription refills is a task reserved for properly licensed medical practitioners for critical safety and clinical reasons," the board said. "Each refill requires reassessment and clinical decision-making to safely adjust doses, monitor for side effects, contraindications, or new drug interactions, and ensure the medication remains effective."

The AI office declined, responding in a letter that it had sought doctor input and that the program's initial phase requires a doctor to oversee prescription renewals.

Gerke said the state's AI program could conflict with the FDA's regulatory purview over medical devices under the Food, Drug and Cosmetic Act.

"You can't just ignore federal law," she said. "And the problem is here, in this particular scenario, there is a high likelihood that [Utah and Doctronic] were not in compliance with the federal Food, Drug and Cosmetic Act."

Utah's director of AI policy, Zach Boyd, who is managing the pilot program with Doctronic, told Law360 that the regulatory sandbox creates temporary flexibility under state law, including the practice of medicine law.

"It has no bearing on the applicability of federal laws, including FDA regulation," he said.

In December, President Donald Trump issued an executive order that blocked states from enacting legislation that is "cumbersome" to AI innovation, saying that his order overrides any such regulation.

Last fall, the FDA's Digital Health Advisory Committee met on the topic of generative AI-enabled digital mental health medical devices.

Michelle Tarver, the director of the FDA's Center for Devices and Radiological Health, said then that while the agency had approved 1,200 AI-enabled medical devices, none involved AI for mental health conditions.

The committee noted that this was a "gap of particular significance given the nation's growing mental health crisis." The agency's tally of approved AI-enabled devices is now at 1,430.

While the administration has been quick to discourage state laws that hinder AI, it hasn't yet issued guidance for cases such as Pennsylvania's or Utah's, where it is unclear whether states must defer to federal law on medical devices or to state laws governing the practice of medicine.

The FDA declined to comment on whether AI doctor chatbots fall under federal regulation. Doctronic didn't respond to requests for comment.

Federal vs. State Law

Legal experts say there's a fine line between federal law regulating medical devices and state practice of medicine regulations.

Boston University's Robertson told Law360 that framing the question as whether a product is regulated under federal medical device law or under state practice of medicine laws is "fallacious." After all, states aren't viewing robots as physicians, he said.

"Just because you're also trying to practice medicine doesn't make you not a medical device," he said.

Daniel Aaron, a law professor at the University of Utah's S.J. Quinney College of Law, said the FDA's long-standing authority over devices, including software, "doesn't mean that all software is regulatable by the FDA, because there are exceptions, carveouts, from the statute."

"But those carveouts don't apply to autonomous AI," he said.

Dinsmore's Zinsmaster said the FDA has in the past deferred to state frameworks where conduct is authorized under state law. But so far, that hasn't happened for doctor chatbots.

"Currently, the FDA has declined to comment on Utah's program, saying that the issue falls outside of the agency's regulatory purview," he said.

--Additional reporting by P.J. D'Annunzio. Editing by Aaron Pelc.

For a reprint of this article, please contact reprints@law360.com.