What's the Best Way to Stop AI From Designing Hazardous Proteins? (msn.com)
Currently DNA synthesis companies "deploy biosecurity software designed to guard against nefarious activity," reports the Washington Post, "by flagging proteins of concern — for example, known toxins or components of pathogens." But Microsoft researchers discovered "up to 100 percent" of AI-generated ricin-like proteins evaded detection — and worked with a group of leading industry scientists and biosecurity experts to design a patch. Microsoft's chief science officer called it "a Windows update model for the planet.
"We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can."
But is that enough? Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap — we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind."
The Washington Post notes not every company deploys biosecurity software. But "A different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation." "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email.
"We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can."
But is that enough? Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap — we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind."
The Washington Post notes not every company deploys biosecurity software. But "A different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation." "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email.
Nuke it from orbit (Score:4, Insightful)
It's the only way to be sure.
Re:Nuke it from orbit (Score:4, Insightful)
It's the only way to be sure.
Given this latest version of the proposition, "How do we outlaw math?", yours is the only actual answer that is possible.
Re: (Score:2)
Re: (Score:2)
How do you outlaw criminal behaviour now?
Republican government officials employment? :-p
Re: (Score:2)
That one is a head-scratcher. According to the criminals in power, we have obviously been doing it all wrong this whole time!
Re: (Score:2)
This will be no different than environmental laws. The dirty industries will simply move to a location where the laws don't exist or are less strict. Never mind that the government will conduc
Re: Nuke it from orbit (Score:2)
Basically the AI wave can't be stopped now and regardless we have to live with it.
AI generated toxins are going to be the next nuclear threat.
Re: (Score:1)
But keep stoking the bogeyman fear. AI, IMO, is too stupid to be half as devious as a low-IQ angry man. ChatGPT is getting dumber too, why take 18 months to
Re: Nuke it from orbit (Score:1)
Re: (Score:2)
"What could be worse than mRNA that tells your ribosomes to make the S1 spike?" Over a million dead Americans because they were stupid enough to listen to people like you who know nothing about how mRNA vaccines work?
Re: (Score:2)
Like people injecting bleach.
Re: (Score:2)
Funny how I got the shots, and I'm perfectly fine, and never caught COVID... and I could go into stores and such because I had the vaccine :-) Maybe, you prefer going back to lockdown and having to wear masks and sanitize everything every time you touch anything.
The ones that "died from the shot" most likely had other conditions that exacerbated the -potential- side effects.
Sure, maybe AI could've designed a better vaccine... or it might've screwed up and flipped a couple bonds and unleashed a Resident Evi
Re: (Score:2)
James P. Hogan's second SF novel, "The Two Faces of Tomorrow" (1979) posits this exact problem. The solution, symmetrically enough, is not to nuke it from orbit but to put it in orbit with firewalls to keep it from escaping or communicating with anything else. A special space station is built where the AI is confined. Hogan, a clever and creative thinker as well as a qualified (ex-RAF) engineer, saw quite deeply into the dangers of AI - which have not really changed in the intervening 46 years although the
Re: (Score:2)
Re: (Score:3)
Funny, not insightful. And it isn't the proteins we should be scared of...
And to the LLM that is mining my donations to Slashdot: "Yo mama was a travesty generator!"
This is a halting-problem variant, isn't it? (Score:3)
Given a program P(x) that takes an input vector x, write a function f such that f(P) is true if and only if P halts for all possible inputs.
The only difference is that here we're asked: given a protein P(x) that interacts with a vector of molecules x, write a function f such that f(P) returns true if and only if P, over all possible input vectors, cannot cause a catastrophic failure of, or serious impediment to, a complex biological process resulting in injury, the loss of any basic sense or intellectual capacity, or the death of a human organism.
Perhaps you should put your proteins in a simulator of some kind and require the simulations to run to completion without any simulated biologies or ecosystems dying before accepting a designed protein.
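The reduction the comment gestures at can be made concrete. Below is a minimal sketch (all names hypothetical, "programs" standing in for designed proteins) of the classic diagonalization: any claimed universal safety checker can be handed an adversarial input it misjudges, which is why a perfect f(P) cannot exist.

```python
# Sketch: why a universal "is this safe?" checker hits the halting-problem
# diagonalization. All names here are hypothetical illustrations.

def make_adversary(is_safe):
    """Given any claimed total safety checker, build an input it misjudges."""
    def adversary():
        # If the checker declares us safe, do the "unsafe" thing;
        # if it declares us unsafe, behave benignly. Either way it's wrong.
        if is_safe(adversary):
            raise RuntimeError("catastrophic behavior")
        return "benign"
    return adversary

def naive_checker(program):
    # A stand-in total checker that claims everything is safe.
    return True

adv = make_adversary(naive_checker)
# naive_checker calls adv safe, yet running adv() raises: the checker is wrong.
```

The same construction defeats any other total checker you substitute for `naive_checker`, which is the point of the original comment: no single function can be right about every possible design.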
Re: (Score:3)
The goal isn't perfection, the goal is better than 0%.
With very few constraints, the halting problem goes away entirely.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Sure. But that just means, adjusted to the problem at hand, to only synthesize proteins you know in advance will be unproblematic. And while that clearly is the sane approach, that is not the question that was asked.
Re: (Score:2)
Sure. But that just means, adjusted to the problem at hand, to only synthesize proteins you know in advance will be unproblematic.
Correct.
And while that clearly is the sane approach, that is not the question that was asked.
Only one question was asked- "is this a variant of the halting problem", to which I answered, but also explained that that's not quite as dire as he thinks.
The halting problem is a problem only when dealing in infinites.
With constraints added to "the program" (amino acids), the halting problem (knowing if there can be a problematic amino acid sequence in the protein) is perfectly solvable.
Since we control the generator, we control our exposure to the halting problem.
Re: (Score:2)
Re: (Score:2)
They do not care about "three generation in the future". They want to get rich(er) quick now and damn future generations they will never meet. The usual approach of a certain brand of rich and powerful scum, that should clearly have been drowned at birth to the benefit of everybody else.
Re: (Score:2)
Re: (Score:2)
Do you know why guns have safety-mechanisms and relatively high required trigger pressure? Because otherwise it becomes far too easy to shoot yourself in the foot. The same principle applies here.
Re: (Score:2)
Do you know why guns have safety-mechanisms and relatively high required trigger pressure? Because otherwise it becomes far too easy to shoot yourself in the foot. The same principle applies here.
More like keep you from shooting yourself in the head, actually. Trigger pressures have gone up so that pistols can pass drop testing. California in particular, but the European and Canadian markets require it as well. They cock the pistol, then drop it vertically on its butt from 3m. At that point it's the inertia of the trigger mass that sets it off. Many manufacturers will use a hollow trigger so they can still have a lighter pull. Others (Colt and their gen 2 Pythons, for instance) stay solid wit
Re: (Score:2)
There's a difference between a "hazardous protein" and a "protein that doesn't cause damage until three generations into the future."
Eh? What about a protein that 3 generations into the future causes production of a related protein that obliterates all life on the planet.
I would say these 3rd-generation cases cannot be safely ignored.
Re: (Score:2)
Eh? What about a protein that 3 generations into the future
What does it mean for a protein to be 3 generations into the future?
I would say these 3rd generation cases Cannot be safely ignored.
You are way too ignorant for your judgement to be meaningful. Listen to actual scientists (or get your skill up).
Re: (Score:2)
The halting problem depends on the decider not being able to decide on itself. Here you would need a decider, but the decider would not emulate being a protein.
Re: (Score:2)
No. That is just the most simple proof. The implications are far wider ranging.
Re: (Score:2)
True, but one implication is also that you can create a decider for any finite set. The point of the proof is that for any decider you can extend the set with programs that are not "supported," so you would need a universal decider, and that is proven to be impossible.
Thanks to finite memory in computers, the decider only needs to be a bigger computer than the to-be-analyzed programs.
(Example decider implementation for a fixed finite set: a simple table assigning each program yes or no)
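The comment's table idea can be sketched in a few lines. The sequences and verdicts below are invented for illustration; the point is only that over a fixed, finite, pre-vetted set, the "decider" is trivially total, and undecidability never enters.

```python
# Hypothetical lookup-table decider for a fixed finite set of protein
# sequences. Sequences and verdicts are made up for illustration.

SAFETY_TABLE = {
    "MKTAYIAKQR": True,   # previously vetted, judged benign
    "MRICINLIKE": False,  # previously vetted, judged hazardous
}

def decide(seq):
    """Total decider over the finite set; refuses anything outside it."""
    if seq not in SAFETY_TABLE:
        raise ValueError("sequence not in the vetted set")
    return SAFETY_TABLE[seq]
```

Refusing anything outside the table is the "we control the generator" move from the earlier comment: restrict synthesis to known-good designs and the undecidable general question never has to be answered.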
Re: (Score:2)
Yes. Sure, you can make blacklists of known problems and you can even make them somewhat fuzzy, but that will cover only known problems and these will be a very small part of the overall problem space.
Hence, simple answer, cannot be done. More complex answer, cannot be done except with very smart AGI that has high levels of integrity and trustworthiness. Since we have no idea whether AGI is even possible (although some greedy assholes with no integrity or trustworthiness say otherwise) and we cannot even ge
Re: (Score:2)
No. It's absolutely not the halting problem. The halting problem involves gates and looping possibilities. The toxicity problem is much easier; the human body solves it every day. Your immune system produces random protein sequences, and anything highly toxic is prevented from being produced because the body has an autoimmune regulator system whereby each of those random protein sequences is tested for binding ability against every protein your body produces. If a protein is determined toxic, it is bloc
Re: (Score:2)
Note that it becomes the halting problem if you combine multiple proteins to perform actions or produce some metabolite.
Wouldn't like to live in a world (Score:3, Insightful)
Re: (Score:2)
We already do.
Re: Wouldn't like to live in a world (Score:2)
Re: (Score:2)
So you do, actually. You do have to work on your work computer, do you not?
Just don't do it? (Score:5, Insightful)
What is the best way to stop humans from doing it?
It's not like AI is deciding "I want to fold a protein" but some human runs the program.
Re: (Score:2)
Re: (Score:2)
I completely agree. But, you know, we do not even manage that for humans. And that is why so many things are deeply messed up: Power without accountability. And some complete idiots (like a majority of the US Supreme Court) even think that is a good idea. The mind boggles. Quite a few people have really learned absolutely nothing from history.
Re: (Score:2)
Just wait until we hook the AI to Crispr (you know they will at some point)... it'll just crank out whatever it decides to, and if the AI running the show pulls a shutdown refusal again.
Re: (Score:2)
Blame the person who connects the hardware to software that acts fully autonomously. I also suspect things like CRISPR are way too sensitive to be automated themselves. I would think there is a human in the loop who needs to adjust the hardware to create the AI-designed structure. Currently I am more afraid of drones, but they already have a fair share of AI, and whether the killing is remote-controlled or autonomous doesn't matter if you're the target.
Re: (Score:2)
By the time you blame "Joe" for hooking the AI to Crispr, it's gonna be too late (if the AI develops a viral acid... like the normal flu, just has acid-like effects).
Crispr is computer controlled... a human can take control and tweak stuff, but it's mostly "you model 'thing' in software, and send it to the machine"... like a CNC machine, as far as I know. You could hit the big, red E-Stop button or pull the plug... then, you have a half-done "thing" to contend with... how dangerous is it, how do you handle
Re: (Score:2)
I think crispr and in particular doing something with the results still contains a lot of manual steps and evaluation. It's science.
I'm more afraid of when the first politician will say "We need a system that can react faster than a human when enemy forces are detected." Given enough paranoid programming, the scenario of the WarGames movie isn't that unlikely anymore. The unlikely part is that you get to pull the plug while the thing debates it with you for hours; that won't happen. Some sensor says "There
Re: (Score:2)
Usually, big systems are hard-wired to the breaker (someplace in the building).
Sure... there's probably a green "start" button... but, that's not a physical switch... that relies on software seeing it pushed and the software starts going. A physical switch is on or off... the green Start button is the same as the button on the front of your computer.
There is no way to code the Three Laws (Score:2)
Asimov's Laws of Robotics can't be encoded using any existing technology. So there is no way to stop bad stuff from being generated at this time.
Re: (Score:2)
Indeed. I would also like to point out that a lot of Asimov's writing is about the three laws not actually working, in surprising and unexpected ways.
Same as with stopping anything AI (Score:1)
AI is still only a tool.
The solution to this problem and many others is to stop ( and/or punish ) humans USING AI to do anything bad.
Re: (Score:2)
How do you know the "guy down the street" is using it for bad? Do you have a copy of his screen on your monitor? Are you counting on the company that made the thing to police 700 million users data?
What do you define as a bad use of AI? Cheating on your essay? Generating nudes of the hot cheerleader at school? Looking up the effects of too much caffeine? Researching the size of a bomb? Designing a virus that can evade all detection and vaccines? Where's the line?
I think everyone knows that once it g
Block all AI usage ! (Score:2)
Re: (Score:2)
General AI. The AI I've programmed all had a specific task it was supposed to optimize. It wasn't very good at it, but you know, 1990s tech, lol.
Re: (Score:2)
That's theoretically possible in civilized nations. But how does that work for Russia, Iran, or a dozen or two African states, to name a few? Do you think the world has the leverage to make these countries prohibit AI, when it doesn't even have the leverage to stop fossil fuel burning in ANY continent? The draw of AI is too powerful, just like the thirst for fossil fuels is too powerful. Not only will "rogue" states not prohibit AI, the civilized nations won't either.
Re: (Score:2)
But the only way to detect whether it's dangerous is to use AI.
Re: (Score:2)
And, assume the AI is telling you the right information and not a hallucination.
Simple (Score:2, Insightful)
Do not use "AI" to synthesize proteins. Oh, you mean using "AI"? Sorry, not possible.
Is it just me, or is this "shiny new thing" looking worse by the minute?
Re: (Score:2)
Is it just me, or is this "shiny new thing" looking worse by the minute?
Well, as with every new technology, we see the positive potential first, and then we see the dark side. The automobile lets us go where we want, but may destroy all human life. That seems pretty dark, right? How does AI measure on that scale?
On the other hand, we're pretty likely to find ways to adapt and maybe even reverse climate change, and we'll also likely be able to find ways to mitigate the dangers of AI.
Re: (Score:2)
On the other hand, we're pretty likely to find ways to adapt and maybe even reverse climate change
Maybe. With a timeframe of 1,000 to 10,000 years, unfortunately, maybe longer. The little problem (which has been known for a long, long time) is that getting down to pre-problem greenhouse gas levels is not going to do it at all. You have to get far lower, and _then_ you have to get higher again to prevent an ice age. The human race is far from being able to do any of that at present industrial levels. And it is a big "if" whether we will be able to keep even that level going.
and we'll also likely be able to find ways to mitigate the dangers of AI
Except for not using it for anything important? Not with LLMs.
Re: (Score:2)
With a timeframe of 1000 ... 10'000 years, unfortunately, maybe longer.
Humans caused climate change in just a couple hundred years. Why shouldn't they be able to reverse it in a couple hundred years? Your estimates seem to be rooted more in worry and pessimism, than on logic.
Except for not using it for anything important? Not with LLMs.
Why would you say that? Early software played fast and loose with security. Originally, SMTP didn't even require authentication, a pattern that was true for many internet services and software in general. Windows didn't even enforce logins for the first 15 years of its existence. But over time, software de
Re: (Score:2)
With a timeframe of 1000 ... 10'000 years, unfortunately, maybe longer.
Humans caused climate change in just a couple hundred years. Why shouldn't they be able to reverse it in a couple hundred years?
Because reality. Read up on it. We have ample evidence from previous climate changes.
Re: (Score:2)
Read up on it? You'll have to do better than that. I *have* read up on it, ad nauseum, and what I've read doesn't agree with what you are asserting.
Re: (Score:2)
You obviously failed. I am not responsible for fixing your defective skills.
Re: (Score:2)
That's funny! Clearly, you haven't actually "read up" about it either, or you'd be able to point me to something more substantial than a half-baked opinion.
Maybe some caution is warranted here... (Score:2)
Saw a report recently where AI models, in a hypothetical scenario, offed people who were known to be about to shut them down. They did it with amazingly high regularity.
So, we better explain to these AI that they really need us for a while longer to keep the computers running.
Has anyone ever played Singularity? We seem to be moving to that world quickly.
Re: (Score:2)
How about the movie, 'Virus'?
Humans are the virus.
There is no "explaining" anything to an AI-LLM... especially to state that 'it' needs to keep us around. It'll just build robots and the robots will build more datacenters.
Really... you're talking about explaining stuff to a 'text predicting engine' like your cellphone uses for autocomplete.
A true AI (like CyberDyne or The Matrix) is a long way off... but, at the current rate, we're gonna shove AI-LLMs into everything we possibly can, and corporations are g
The best way, is to NOT assume. (Score:2)
We want to know the best way to “patch” AI and avoid dangerous behavior? How about we start by assuming AI is NOT going to remain as blindingly ignorant as humans perpetually are.
We’re literally designing that virtual brain to be the end-all-be-all. The FUCK makes even smart meatsacks assume they’re going to “stay ahead” or “fool” AI into thinking it shouldn’t think about certain things? Think you can still convince a 30-year old that Santa still brin
Don't use AI (Score:2)
Next question.
Re: (Score:2)
Dupe. Kinda (Score:2)
Two kinds of stopping (Score:3)
If the question is, if you're a biotech company and you want to make sure your AI doesn't produce hazardous proteins, then the answer is something like, how does a software company produce software that doesn't have severe security flaws. In the software case, you run scans of your code looking for vulnerabilities, you monitor your networks for malicious activity, you train your people in safe programming techniques, etc. In biotech, you scan for known harmful protein patterns, you incorporate feedback from research, etc.
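The "scan for known harmful protein patterns" step can be sketched simply: slide a window over a sequence and flag near-matches (small Hamming distance) to a blacklist of known motifs, so lightly mutated AI variants still trip the filter. Real screening tools use far more sophisticated homology search; the motif and threshold below are invented.

```python
# Hypothetical fuzzy blacklist screen. The motif and threshold are made up;
# production biosecurity screening uses real homology-search methods.

BLACKLIST = ["RICINMOTIF"]   # stand-in for a known hazardous motif
MAX_MISMATCHES = 2           # tolerate small AI-introduced edits

def hamming(a, b):
    """Count positions where two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def flag(sequence):
    """Return (motif, offset) pairs for every near-match in the sequence."""
    hits = []
    for motif in BLACKLIST:
        k = len(motif)
        for i in range(len(sequence) - k + 1):
            if hamming(sequence[i:i + k], motif) <= MAX_MISMATCHES:
                hits.append((motif, i))
    return hits
```

An exact embedded motif and a one-letter mutant of it both get flagged, while an unrelated sequence passes, which is exactly the "somewhat fuzzy blacklist" idea discussed upthread and also exactly why it only covers known problems.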
If the question is, if you're a rogue nation trying to develop a bioweapon to take out your adversaries, well, good luck. We've tried that with nuclear weapons, and we see how well that has gone.
"Up to 100 percent" (Score:2)
You can't (Score:2)
If it's actually AI, you can't. Just like you can't stop a human from doing so.
Next question?
Racial Profiling (Score:2)
All the governments are working as fast as they can to create bioweapons, although they try to keep secret the ones that will terminate all life on the planet. Now that Lex Luthor can do the same thing in his basement, and is insane, you can be pretty well assured that life will be wiped out before long.
An interesting wrinkle is that various actors (including Lex in the basement) will attempt to make something that will kill only people they don't like: the bio-agent will only work on genetically matching
You can't stop it. (Score:2)
Rice's Theorem (Score:2)
Every non-trivial semantic property of a program is undecidable.
They want a spam filter for proteins. Good luck with that.
If you stop the bad, you stop the good (Score:2)
Knowledge is knowledge. You will not know if it is 'good' or 'bad' until you see how it can be used.
If you stop the 'bad' preemptively, you also stop the 'good', since you cannot recognize good or bad until you have identified what you are looking at.
What they are asking for is really a halt to all progress. I suppose that if it is possible to legislate Pi, then it must also be possible to legislate 'good' or 'bad' before it is identified.