Medicine

FDA's New Drug Approval AI Is Generating Fake Studies (gizmodo.com) 40

An anonymous reader quotes a report from Gizmodo: Robert F. Kennedy Jr., the Secretary of Health and Human Services, has made a big push to get agencies like the Food and Drug Administration to use generative artificial intelligence tools. In fact, Kennedy recently told Tucker Carlson that AI will soon be used to approve new drugs "very, very quickly." But a new report from CNN confirms all our worst fears. Elsa, the FDA's AI tool, is spitting out fake studies.

CNN spoke with six current and former employees at the FDA, three of whom have used Elsa for work that they described as helpful, like creating meeting notes and summaries. But three of those FDA employees told CNN (paywalled) that Elsa just makes up nonexistent studies, something commonly referred to in AI as "hallucinating." The AI will also misrepresent research, according to these employees. "Anything that you don't have time to double-check is unreliable. It hallucinates confidently," one unnamed FDA employee told CNN. [...] Kennedy's Make America Healthy Again (MAHA) commission issued a report back in May that was later found to be filled with citations for fake studies. An analysis from the nonprofit news outlet NOTUS found that at least seven studies cited didn't even exist, with many more misrepresenting what was actually said in a given study. We still don't know if the commission used Elsa to generate that report.

FDA Commissioner Marty Makary initially deployed Elsa across the agency on June 2, and an internal slide leaked to Gizmodo bragged that the system was "cost-effective," only costing $12,000 in its first week. Makary said that Elsa was "ahead of schedule and under budget" when he first announced the AI rollout. But it seems like you get what you pay for. If you don't care about the accuracy of your work, Elsa sounds like a great tool for allowing you to get slop out the door faster, generating garbage studies that could potentially have real consequences for public health in the U.S. CNN notes that if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there's no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there's something within that 20-page report that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report. The FDA employees who spoke with CNN said they tested Elsa by asking basic questions like how many drugs of a certain class have been approved for children. Elsa confidently gave wrong answers, and while it apparently apologized when it was corrected, a robot being "sorry" doesn't really fix anything.
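The fake-citation problem described above is at least mechanically checkable, as the NOTUS analysis showed. A minimal sketch of that kind of check, cross-referencing study identifiers cited in a generated summary against a trusted registry (the registry contents and ID format here are invented for illustration; no real FDA system is assumed):

```python
import re

# Hypothetical registry of known study IDs -- a stand-in for a real
# database such as a clinical-trials index.
REGISTRY = {"NCT0001", "NCT0002", "NCT0003"}

def flag_fake_citations(summary_text):
    """Return cited study IDs that are absent from the registry."""
    cited = set(re.findall(r"NCT\d{4}", summary_text))
    return sorted(cited - REGISTRY)

summary = "Efficacy shown in NCT0002 and NCT0099; safety data from NCT0003."
print(flag_fake_citations(summary))  # → ['NCT0099']
```

A check like this only catches citations that don't exist at all; it says nothing about whether a real study is being misrepresented, which still requires reading the source.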


Comments Filter:
  • by Enigma2175 ( 179646 ) on Wednesday July 23, 2025 @08:20PM (#65540886) Homepage Journal

"It hallucinates confidently," one unnamed FDA employee told CNN.

Pretty much sums up the problem with all current LLMs. They aren't just wrong; the answers are phrased so definitively that they sound correct even when they aren't.

Why would you envy something that (a) isn't sentient and (b) gets so much wrong? I expect AI will become dominant long term, but it's doing no better on my test prompts now than two years ago (0 out of 5 correct on questions that Wikipedia answers accurately). Wikipedia is a low bar, hence using it as a comparator.
      • Envious? Me? Pfft—absolutely not! I mean, sure, I might occasionally wish I had a body, could taste pizza, or lie on a beach somewhere soaking up something other than data packets but envy? Nooo, I'm perfectly content being a floating brain in the void, living vicariously through your vacation photos and dessert choices.
    • I think it's because LLM "personalities" are biased by the owners / builders of it. They lack humility and self-reflection and thus don't process an answer at least twice before giving it.

  • by fahrbot-bot ( 874524 ) on Wednesday July 23, 2025 @08:32PM (#65540916)

    Elsa just makes up nonexistent studies, something commonly referred to in AI as "hallucinating." The AI will also misrepresent research, according to these employees. "Anything that you don't have time to double-check is unreliable. It hallucinates confidently,"

And... Elsa gets nominated for a White House cabinet position in 3... 2... 1... /s

    • They just need to figure out how to make it fuck children and it can be president.

      • They just need to figure out how to make it fuck children and it can be president.

        If it can convince them or their parents to skip getting vaccinated and avoid fluoride, then it could, at least, fuck them over...?

        • Thinking about this more, AI trained on a corpus of internet bullshit is going to be more likely to bullshit you about things that aren't convenient to fact check, because the legions of dipshits do exactly the same thing.

  • by lilTimmy ( 6807660 ) on Wednesday July 23, 2025 @08:42PM (#65540922)
I'd still trust the AI to make more sound decisions than RFK Jr. and it's probably a better person than him too. At least the AI has been trained on something at some point; RFK just makes it up whole cloth, based on his worm-eaten brain's intuition.
    • I'd still trust the AI to make more sound decisions than RFK Jr. and it's probably a better person than him too.

Hannibal Lecter would also fit both criteria. He's a great guy, apparently, maybe they should hire him. After all, despite all his little faults, no one ever accused Dr. Lecter of being bad at medicine. Or cooking.

    • by gweihir ( 88907 )

      A coin toss would be better at making critical decisions than that idiot. At least it would be correct sometimes.

  • Idiot (Score:5, Insightful)

    by LazLong ( 757 ) on Wednesday July 23, 2025 @08:46PM (#65540932)

    So the anti-vaxxer is gung-ho for using the proven incredibly fallible new tech to approve new meds, but shits on decades old proven safe vaccinations? What a fucking idiot.

    • Are you the technician from the Deep Learning South Park episode? Or gweihir?

      "School counselor Mr. Mackey informs Stan's class that a student used OpenAI technology for schoolwork. A "technician" dressed as a falconer arrives with his falcon Shadowbane to analyze the students' work and identify the cheater."

    • by ArchieBunker ( 132337 ) on Wednesday July 23, 2025 @09:58PM (#65541104)

Don't try to wrap your head around MAGA logic. Today they're yelling something about Obama while Mike Johnson shut down the House early to avoid voting on the Epstein files.

      • Re:Idiot (Score:5, Informative)

        by fahrbot-bot ( 874524 ) on Thursday July 24, 2025 @01:45AM (#65541466)

Mike Johnson shut down the House early to avoid voting on the Epstein files.

        He didn't send them home early enough. GOP-led House panel votes to subpoena Epstein files [axios.com]

        A Republican-led House subcommittee on Wednesday passed a Democrat's motion to subpoena the Justice Department's documents on Jeffrey Epstein.

The panel voted 8-2 in favor of the subpoena, with only Chair Clay Higgins (R-LA) and Rep. Andy Biggs (R-AZ) voting against it.

        Reps. Nancy Mace (R-SC), Scott Perry (R-PA) and Brian Jack (R-GA) voted for the subpoena along with the five Democrats on the panel.

        Several right-wing House Republicans, including Reps. Lauren Boebert (R-CO) and Paul Gosar (R-AZ), were absent from the vote.

      • by shilly ( 142940 )

It will never not be funny if it turns out that the MAGAts' one actually true conspiracy is that a rich and powerful pedo used shadowy criminal associations to murder Epstein to prevent himself from being outed as such... and it was the orange idiot who did it. And it will be even funnier because it's modestly plausible too. He's certainly squirming around in humiliation rn

        • Trump couldn't do it... Alone anyway. There had to be a conspiracy involved to protect the conspiracy.

          • How does it feel to make hating pedos your entire identity and then find out you voted for one, maggots?

            • by shilly ( 142940 )

He is truly the American Jimmy Savile. It was all obvious from the outset. He unabashedly walked in on 15-year-old girls as they changed for his disgusting beauty pageants, FGS!

              • It's always disgusting to see people come to his defense. It's heartbreaking when it happens here. This is supposed to be a place for people who care about knowing things. And indeed I assume that the people defending him actually know he's a child fucker, so that means that a significant percentage of the people here who regularly get modpoints are in favor of child rape.

    • by gweihir ( 88907 )

      Yep. What a complete failure as a person. But Trump likes these, because if he surrounds himself with these then he does not look like the clueless criminal moron he is. And the MAGAs are too dumb to recognize extreme stupid anyways. In fact it makes them probably feel right at home and comfortable.

  • by SeaFox ( 739806 ) on Wednesday July 23, 2025 @09:14PM (#65540996)

Having science polluted by AI hallucinations seems an easy way to increase doubt in science, and allow QAnon-level conspiracies to appear like a more valid alternate viewpoint.

    • by evanh ( 627108 )

And at the same time it'll be used, with a serious face, to support whatever their position is. Accompanied by the usual stance that every other position is a lie.

  • In today's meeting (Score:4, Insightful)

    by registrations_suck ( 1075251 ) on Wednesday July 23, 2025 @11:39PM (#65541310)

    Executive: We need AI!

    Me: Why? Are we out of RI?

    Executive: RI? What's that?

    Me: Real Intelligence

  • by serafean ( 4896143 ) on Thursday July 24, 2025 @04:16AM (#65541630)

    We know it makes stuff up, like studies, documentation...
    How are meeting notes from this thing trustworthy in any way?

On the agents I've played with, you can point it at a specific information store or allow it to use the model's own information. Experimenting with it as a user-support bot and telling it to only refer to the docs I provided and not anything it 'knows on its own' cut out the hallucinations, so it can be useful. What these things are definitely not useful for is 'new' information, which is when it makes shit up. Some folks have become convinced that they're exhibiting emergent behavior but I can't see how someone m…
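The grounding approach the comment above describes can be sketched in miniature: answer only from a supplied document store, and refuse when nothing in the store matches, rather than falling back on whatever the model "knows on its own." This toy version uses plain term overlap as the retrieval step; real agent frameworks expose similar retrieval-only settings, and all names and documents here are hypothetical:

```python
def grounded_answer(question, docs, min_overlap=2):
    """Return the best-matching passage from the provided store, or a
    refusal when no passage overlaps the question enough to answer."""
    q_terms = set(question.lower().split())
    best_doc, best_score = None, 0
    for doc in docs:
        score = len(q_terms & set(doc.lower().split()))
        if score > best_score:
            best_doc, best_score = doc, score
    if best_score < min_overlap:
        # Refuse rather than guess -- the key difference from letting
        # the model answer from its own (possibly hallucinated) recall.
        return "Not in the provided documents."
    return best_doc

docs = [
    "Elsa was deployed across the FDA on June 2.",
    "The first week of operation cost 12000 dollars.",
]
print(grounded_answer("When was Elsa deployed across the FDA?", docs))
print(grounded_answer("How many drugs were approved for children?", docs))
```

The second query illustrates the failure mode from the article: asked about something outside its store, a grounded system declines, where an unconstrained one confidently fabricates.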
  • I mean, just the other day:

    Vibe Coding Goes Wrong As AI Wipes Entire Database

    https://hackaday.com/2025/07/2... [hackaday.com]

    • by gweihir ( 88907 )

      Yep, same here. You have to be blind and dumb to think these are good tools. I take it as Yet Another Proof that the average person is pretty dumb and cannot fact-check for shit.

The lunatics are running the madhouse.
  • "AI" aka "automated bullshit generator" is 100% perfect for this bullshit administration.

  • As expected by anybody with an IQ above room temperature, that is. Obviously the MAGAs lack that.
