FDA Commissioner Martin Makary laughed and clapped while discussing a major milestone: The FDA had just completed its first AI-assisted review, dramatically cutting the time needed to complete the work.
"The scientific reviewer loved it," Makary said in an episode of "FDA Direct," the agency's podcast.
It was time, Makary said, to stop talking about AI and start using it in earnest.
The agency says it's ready to go. Last week, Makary announced the launch of "Elsa," a generative AI tool the FDA says is already helping review clinical trial protocols, accelerate scientific reviews and identify targets for facility inspection.
But life sciences attorneys and others say the speedy rollout invites questions about what safeguards the FDA has in place to detect "hallucinations," in which AI systems fabricate facts or ideas that aren't present in their source materials. They also want to know more about how the tool was trained and how employees are using it.
Susan Lee, a Washington, D.C.-based partner in the life sciences group at Goodwin Procter LLP, said it remains to be seen whether the agency's AI tool "is ready for prime time."
The FDA announcement "puts us in pretty novel territory," she said.
Attorneys say they are on the lookout for AI-produced errors in FDA materials and wondering whether they'll even know if the technology influenced an agency decision.
Some are encouraging their clients to use their own technology to stay ahead of the FDA.
"We have been advising our clients that they need to deploy tools on their own — so that they are not caught on their heels," said Lisa Dwyer, a partner at King & Spalding LLP and former deputy chief of staff at the FDA.
Dwyer said her firm is helping clients develop AI tools "for regulatory compliance purposes" as the FDA rapidly starts to employ AI programs.
"We never want our clients to be in situations where FDA has more information about a product (or about the contents of an application) than the clients themselves," she said.
Other attorneys said the quick launch of Elsa is at odds with the FDA's typically cautious, slow-moving approach.
Two weeks before the Trump administration took office in January, the agency issued draft guidance on AI to the life sciences industry that encouraged transparency. The guidance also carried warnings about the potential for bias, and did not suggest an immediate embrace of the technology at the agency.
"AI technology can be very helpful, if used appropriately," said Gail Javitt, a director at Hyman Phelps & McNamara PC, a boutique FDA law firm. But "the rapidity and lack of transparency give me some pause."
The FDA, for its part, said its employees — not Elsa — remain in charge. The tool uses Claude 3.5 Sonnet, an AI model built by Anthropic. It was trained on publicly available data through April 2024, the agency said.
"The FDA does not rely solely on AI; rather, Elsa is a tool designed to support experts who retain full responsibility for reviewing and verifying all information before publication or regulatory decision-making," an FDA spokesperson said in a statement to Law360 Healthcare Authority.
The Race Is On
Makary, who was a surgical oncologist at Johns Hopkins University before joining the FDA, said he made AI an immediate priority upon his confirmation in late March.
"Day One, we said we've got to stop just talking about AI, having conferences and panels and frameworks and consensus documents," Makary said in the May podcast. "We've got to do it. We have to actually do it."
The FDA announced its first AI-assisted scientific review on May 8, the same day the podcast episode aired.
In the announcement, Jinzhong Liu, an official in the agency's Center for Drug Evaluation and Research, said the tool had allowed him "to perform scientific review tasks in minutes that used to take three days."
All of the agency's centers, Makary pledged, would be using the AI tool by June 30.
The agency beat the deadline by four weeks, announcing the launch of Elsa on June 2 and describing it as a large language model-based tool that can help with reading, writing and summarizing.
Jeremy Walsh, the FDA's chief AI officer, called it "the dawn of the AI era" at the agency.
The speed of the rollout stunned outsiders.
Michael Hinckle, managing partner of K&L Gates LLP's Research Triangle Park office in North Carolina, said he and his colleagues never thought the FDA would meet its June 30 deadline, much less beat it. It showed the agency "is willing to really move things along when they think it's really going to help," he said.
Hinckle said he can see the potential of the tool, but hopes the agency hasn't moved too fast.
"As an attorney who's representing people in this regulated industry, the review and the inspections are where the rubber meets the road for us, and there, I think, it certainly has the possibility to make things more efficient," he said.
The FDA's rapid adoption of AI comes as it faces criticism for fast-moving staff cuts that have affected virtually every part of the federal health system.
In March, U.S. Department of Health and Human Services Secretary Robert F. Kennedy Jr. announced plans to lay off 10,000 employees, including 3,500 people at the FDA, as part of a broad restructuring of the agency. He estimated the overhaul would downsize HHS' workforce by 24%.
Anthony Lee, president of the National Treasury Employees Union Chapter 282, which covers 9,000 FDA workers, said the new AI tool is not widely in use at this point. For many employees, he said, training on Elsa began last week.
FDA employees also have plenty of concerns about whether the tool will be used to replace them, whether its use will be mandatory and what guardrails are in place to ensure it isn't biased.
"We just don't have answers to any of those questions," Lee said.
Good For The Goose
FDA officials say there are plenty of places where AI can be used effectively at an agency that approves drugs, regulates their safety and effectiveness, and inspects drug manufacturing plants.
Elsa is already being deployed to accelerate the review of the protocols for human studies, shorten the time needed for scientific evaluations and identify "high-priority inspection targets," according to FDA officials.
Other uses of AI technology include summarizing adverse events for safety assessments, comparing food and drug product labeling, and generating code to help develop databases.
Elsa operates on the high-security GovCloud system designed to protect sensitive government data, offering a "secure platform for FDA employees to access internal documents while ensuring all information remains within the agency," an FDA spokesperson said.
Makary said he hopes the tool allows FDA scientists and experts to focus on their "superpowers" rather than busywork.
"Elsa will help increase employee efficiency by having them focus on what they do best, supporting the public health mission and keeping the American people safe," an FDA spokesperson said.
And the agency has clear guidelines for employees "emphasizing accountability, transparency, and the essential role of expert review," the spokesperson said.
Federal officials have also stressed that the model wasn't trained on the sensitive data submitted by the industries the agency regulates. Instead, it was trained on publicly available data.
But there's still much unknown about the tool.
Joe Franklin, special counsel at Covington & Burling LLP and a former FDA official, said the agency could serve as an example to the industries it regulates by being "transparent about how it tests internal AI uses and mitigates any risks."
Goodwin's Susan Lee said she and others in the industry want to know more about how the tool was trained. It's also unclear what safeguards are in place against AI-generated errors and how employees have been trained to use the tool, she said.
"I think that there's just a lot of additional transparency that the agency could provide to try to increase people's confidence in the tool," Lee said.
The agency's own draft guidance — issued in early January under the Biden administration — suggested regulatory submissions using AI should explain how the model was developed and assess its risks as part of a seven-step process.
Dwyer said the guidance included strong recommendations for evaluating the credibility of AI models.
"We hope and assume that in developing the tools they are deploying, FDA has adhered to the same process," she said.
Emerging Questions
Life sciences attorneys say the tool certainly has the potential to help the agency carry out its complex work and speed up decision-making. But they are also preparing for how they might respond to agency errors rooted in the use of AI.
Hinckle of K&L Gates said AI introduces a new element for attorneys to consider.
"You could wind up having to spend a bunch of time and money addressing an issue with the agency that was really just created artificially and incorrectly by the AI software," he said.
Regardless of what technology the agency uses, the basics remain the same, he said.
The industry should "focus on controlling what we can control," Hinckle said. "Put together the best application you can put together."
Susan Lee said attorneys and their clients should keep in mind that AI may be used to evaluate their submissions and "be ready to ask questions and to challenge decisions if it's appropriate."
Lee is also mulling "emerging legal questions" about whether confidential data could be leaked, whether AI might fail to treat similar applicants equally, and how the tool fits into the agency's legal obligation to make reasoned decisions.
"How much human oversight or intervention does there have to be in order for the agency to have made a decision vs. for Elsa to have made a decision?" she asked.
The agency's use of AI may make it difficult — for both the applicant and the agency — to pinpoint the source of an error if there is one, attorneys said.
The FDA itself has warned about "automation bias" — the tendency to put too much trust in an algorithm — as a risk when physicians use technology to assist in their decision-making, Javitt said. The same risk is foreseeable here.
Of course, humans can make mistakes, too, and sponsors should already be scrutinizing FDA communications.
"Regardless of the methods used by the agency in its review, you need to read everything carefully and make sure you understand FDA feedback," Javitt said. "At the same time, I do think it's important to be aware that AI may have been involved, and if something doesn't make sense, make sure you get clarification."
--Editing by Marygrace Anderson.