Biotech AI Science

OpenAI Has Created an AI Model For Longevity Science (technologyreview.com) 33

OpenAI has developed a language model designed for engineering proteins, capable of converting regular cells into stem cells. It marks the company's first venture into biological data and demonstrates AI's potential for unexpected scientific discoveries. An anonymous reader quotes a report from MIT Technology Review: Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own." The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. [...]

OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective -- at least according to some preliminary measures. [...] The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a "small language model" that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the "few-shot" method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since they're built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the proteins were changed. "We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.
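The "few-shot" prompting tactic and the combinatorics mentioned above can be sketched in code. This is a rough, hypothetical illustration only: OpenAI has not published GPT-4b micro's actual interface, and the helper name, prompt format, and toy sequences below are all invented for the sake of the example.

```python
# Hypothetical sketch of a "few-shot" prompt for protein redesign.
# The prompt format and sequences are invented for illustration;
# GPT-4b micro's real interface has not been made public.

def build_few_shot_prompt(examples, target):
    """Pair original sequences with improved variants, then append the
    target sequence for the model to complete."""
    parts = []
    for original, improved in examples:
        parts.append(f"Original: {original}\nImproved: {improved}")
    parts.append(f"Original: {target}\nImproved:")
    return "\n\n".join(parts)

# Toy amino-acid strings standing in for real protein sequences.
examples = [("MKTAYIA", "MRTAYLA"), ("GSHMKLV", "GSHMRLV")]
prompt = build_few_shot_prompt(examples, "MKVLWAA")
print(prompt.splitlines()[-1])  # prints "Improved:" -- awaiting the model's suggestion

# The search space the article alludes to: a protein of length L built
# from 20 possible amino acids has 20**L candidate sequences.
L = 300  # a typical protein length
print(len(str(20 ** L)))  # a 391-digit number -- far beyond what a lab can screen
```

The size of that number is the point: directed evolution in the lab can only sample a vanishing fraction of the sequence space, which is why a model that proposes promising many-residue changes at once is interesting.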

This discussion has been archived. No new comments can be posted.

OpenAI Has Created an AI Model For Longevity Science

Comments Filter:
  • A.K.A when you have a hammer, every problem looks like a nail.

    I'm sure it will be a roaring success.

    • Re:LLM for Longevity (Score:5, Interesting)

      by buck-yar ( 164658 ) on Saturday January 18, 2025 @09:00AM (#65098629)
Their supposed breakthrough looks less impressive the further into the article you go. They claim to have a way to rejuvenate skin cells, but then further in is this FTA:

      But such cell “reprogramming” is not very efficient. It takes several weeks, and less than 1% of cells treated in a lab dish will complete the rejuvenation journey.

      Scientists have found that regression of differentiated cells back into stem cells is a normal body process.

      Conversion of somatic cells to pluripotency by defined factors is a long and complex process that yields embryonic stem cell-like cells that vary in their developmental potential.

      And this dedifferentiation is a crucial step for some cancers.

      Given the fundamental principle that cancer must arise from a cell that has the potential to divide, two major nonexclusive hypotheses of the cellular origin of cancer are that malignancy arises a) from stem cells due to maturation arrest or b) from dedifferentiation of mature cells that retain the ability to proliferate. The role of stem cells in carcinogenesis is clearly demonstrated in teratocarcinomas. The malignant stem cells of teratocarcinomas are derived from normal multipotent stem cells and have the potential to differentiate into normal benign mature tissue. (...) It is now postulated that foci and nodular change reflect adaptive changes to the toxic effects of carcinogens and not "preneoplastic" stages to cancer. The stem cell model predicts that genotoxic chemicals induce mutations in the determined stem cell which may be expressed in its progeny. https://ehp.niehs.nih.gov/doi/... [nih.gov]

      More...

      Almost any differentiated cell can be sent back in time to a pluripotency state by expressing the appropriate transcription factors. The process of somatic reprogramming using Yamanaka factors, many of which are oncogenes, offers a glimpse into how cancer stem cells may originate. https://pmc.ncbi.nlm.nih.gov/a... [nih.gov]

Many oncogenes induce dedifferentiation... this approach by the LLM sounds dangerous. On the surface, the treatment designed by the LLM would seem to have the potential to induce cancer. More research is needed, hopefully by people without a vested financial interest in finding a useful purpose for their product. These authors seem quick to roll out their findings. Sounds like more of this

      When Josiah Zayner watched a biotech CEO drop his pants at a biohacking conference and inject himself with an untested herpes treatment, he realized things had gone off the rails. Zayner is no stranger to stunts in biohacking -- loosely defined as experiments, often on the self, that take place outside of traditional lab spaces. You might say he invented their latest incarnation: He's sterilized his body to "transplant" his entire microbiome in front of a reporter. He's squabbled with the FDA about selling a kit to make glow-in-the-dark beer. He's extensively documented attempts to genetically engineer the color of his skin. And most notoriously, he injected his arm with DNA encoding for CRISPR that could theoretically enhance his muscles -- in between taking swigs of Scotch at a live-streamed event during an October conference. (Experts say -- and even Zayner himself in the live-stream conceded -- it's unlikely to work.) So when Zayner saw Ascendance Biomedical's CEO injecting himself on a live-stream earlier this month, you might say there was an uneasy flicker of recognition.

      Ascendance Bio soon fell apart in almost comical fashion. The company's own biohackers -- who created the treatment but who were not being paid -- revolted and the CEO locked himself in a lab. Even before all that, the company had another man inject himself with an untested HIV treatment on Facebook Live. And just days after the pants-less

    • by Barny ( 103770 )

      Whatever they sold you, don't touch it!

      Bury it in the desert!

      Wear gloves!

      Oblig [xkcd.com]

    • Sam, I would have saved you 180 million USD if you'd asked how to meet the stated goal.

      Tell you what, how about you cut me a check for 1 million USD because this will save even more waste and reach the stated goal:
      .
      U.S. average lifespan is about 10 years less than that of people who live in countries with stricter food purity laws.

      Outlaw the food additives (6 months to phase out, not a year or two, which looks like the FDA trying to prevent change while giving lobbyists time to bribe cancellation of the cha

      • Really? Countries which care about healthcare of their citizens enough to tell corporations to stuff it and institute stricter food purity laws have longer-lived citizens? Fascinating. The only thing that could possibly be causing the longer lifespans is the food purity laws, not the caring about health. Or the other thing.
        • by narcc ( 412956 )

          You can call attention to something without also implying that nothing else matters. See the parent's comment for an example.

  • How come medical researchers "use AI" but programmers are "replaced by AI?"

    • AI writes itself. Duh.

    • Because a human is still needed to perform experiments and validate the results.

      Technically speaking, humans are needed to validate software too, but for some reason they don't want the people who were formerly in charge of writing the software to switch over to testing it for functionality?

      • Because a human is still needed to perform experiments and validate the results.

        Since when? Most experiments have been automated for ages, as has been the "data analysis" that produces "the results".

        • by gweihir ( 88907 )

          Not if you need actual results. Simulators are and will remain rather limited.

          • Automation is not simulation.

            • by gweihir ( 88907 )

              No. And I did not claim it was. Seriously.

    • by gweihir ( 88907 )

No actual programmers are being replaced. The interface to code is just easier, and a lot of really bad code is already written by people who only pretend to be programmers. LLMs can pretend just as well. The requirements in medical research are a bit higher.

    • by narcc ( 412956 )

      It's all about the clicks. Saying that AI is replacing programmers isn't going to stress anyone out. Saying AI is replacing medical researchers absolutely will.

      After all, deep down, we all know it's bullshit ... but we're terrified that other people really take it seriously.

  • by gweihir ( 88907 ) on Saturday January 18, 2025 @03:27AM (#65098413)

All OpenAI is trying to do here is create the impression that LLMs can actually do useful things besides "better crap" and somewhat better search. The whole thing is just more smoke and more mirrors. Also, targeting "longevity science" means they are now going after rich assholes who want to live forever. They must be getting really desperate for more funding.

    • But.... Sam is "confident". Surely that is enough? For a few more billions?

Here is what I saw in my crystal ball: a new free AI product designed to reel in a lot of gullible researchers anxious not to miss the next big thing. Once they are dependent on it, tighten up the TOS and triple the subscription prices ... cha-ching!
      • But.... Sam is "confident". Surely that is enough? For a few more billions?

        He's already done the 'confident we know how to do it' phase. We're now on to the one about how 'our new model might be too dangerous to release', which should peak in a month or so.

        He just goes back and forth between these two headlines.

        • by gweihir ( 88907 )

I find it fascinating how such a simplistic and obvious strategy continues to work. Investors continue to hand over money. Not-so-smart AI fans continue to believe LLMs will eventually get really good. The implosion will be spectacular this time, so at least the quality of the end of this AI hype will have increased.

    • It's blatant, but it feels like we need to keep repeating that. "This chatbot will now solve longevity, and we will get AGI too because we know how. We don't have any proof/publication/anything peer-reviewed whatsoever, but give us money and we'll do all that, pinky promise". Also super-convenient to approach a problem that is not expected to be solved for a long while, to maintain a nice funnel of funding. Moving the goalposts further and further in a way that the funding they'll get won't guarantee a shor
    • by narcc ( 412956 )

      create the impression that LLMs can actually do useful things besides "better crap" and somewhat better search.

      A better search interface, maybe, but dangerously unreliable as a search replacement. Just look at the nonsense Google's AI defecates at the top of their search results.

      • by gweihir ( 88907 )

Indeed. That is why I wrote "somewhat". In fact, it is not even complete search. For example, when I asked ChatGPT about its sources for some statement, it seemed to do a conventional web search, and when I tried it, it regularly failed to find real sources, or what it found was grossly incomplete.

Hence it can be used as a step in searching for something, for example when you do not know the right search term but can describe it. But that is about it. And it is no replacement for actual search. Experim

  • Because if they wanted to solve actual problems for the majority of the population, they would focus on logistics, agriculture, food production and distribution etc. But of course this is straight for some few rich vampires who want to live forever and are willing to shell out serious money for some hope.
  • ... dr Poulsen has been waiting for!
