The FDA Launches Its Generative-AI Tool, Elsa, Ahead of Schedule

Generative artificial intelligence has found another home in the federal government. On Tuesday, the U.S. Food and Drug Administration announced the early launch of its very own generative-AI tool, which it hopes will improve efficiency.
The FDA's tool, nicknamed Elsa, is designed to assist employees with everything from scientific reviews to basic operations. The agency had originally planned to launch by June 30, so Elsa is well ahead of schedule and under budget, according to an FDA statement.
It's not clear exactly what information Elsa was trained on, but the FDA says it didn't use any "data submitted by regulated industry" in order to protect sensitive research and information. Currently, Elsa houses its information in GovCloud, an Amazon Web Services offering designed to host sensitive government data and regulated workloads.
As a language model, Elsa can help employees with reading, writing, and summarizing. In addition, the FDA said that it can summarize adverse events, generate code for nonclinical applications, and more. Per the agency, Elsa is already being used to “accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.”
In a May press release announcing the completion of the FDA's first AI-assisted scientific review, FDA Commissioner Marty Makary said he was "blown away" by Elsa's capabilities, which "[hold] tremendous promise in accelerating the review time for new therapies." He added, "We need to value our scientists' time and reduce the amount of non-productive busywork that has historically consumed much of the review process."
According to one scientist, Jinzhong Liu, the FDA’s generative AI completed tasks in minutes that would otherwise take several days. In Tuesday’s announcement, FDA Chief AI Officer Jeremy Walsh said, “Today marks the dawn of the AI era at the FDA with the release of Elsa, AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee.”
Generative AI can certainly be a useful tool, but every tool has its drawbacks. With AI specifically, there has been an uptick in stories about hallucinations, which are outright false or misleading claims and statements. Although commonly associated with chatbots like ChatGPT, hallucinations can still pop up in federal AI models, where they can unleash even more chaos.
Per IT Veterans, AI hallucinations typically stem from factors like biases in training data or a lack of fact-checking safeguards built into the model itself. Even with those in place, though, IT Veterans cautions that human oversight is "essential to mitigate the risks and ensure the reliability of AI-integrated federal data streams."
Ideally, the FDA has thoroughly considered these risks and taken measures to prevent mishaps with Elsa's use. But expanding a technology that genuinely requires human oversight is always concerning at a time when federal agencies are in the midst of mass layoffs. At the beginning of April, the FDA laid off 3,500 employees, including scientists and inspection staff (although some layoffs were later reversed).
Time will reveal how Elsa ultimately performs. As the tool matures, the FDA plans to expand its use throughout the agency, including data processing and generative-AI functions to "further support the FDA's mission."
gizmodo