Former Google CEO Eric Schmidt Bets AI Will Shake Up Scientific Research
Eric Schmidt is funding a nonprofit that's focused on building an artificial intelligence-powered assistant for the laboratory, with the lofty goal of overhauling the scientific research process. From a report: The nonprofit, Future House, plans to develop AI tools that can analyze and summarize research papers as well as respond to scientific questions using large language models -- the same technology that supports popular AI chatbots. But Future House also intends to go a step further. The "AI scientist," as Future House refers to it, will one day be able to sift through thousands of scientific papers and independently compose hypotheses at greater speed and scale than humans, Chief Executive Officer Sam Rodriques said.
A growing number of businesses and investors are focusing on AI's potential applications in science, including uncovering new medicines and therapies. While Future House aims to make breakthroughs of its own, it believes the scientific process itself can be transformed by having AI generate a hypothesis, conduct experiments and reach conclusions -- even though some existing AI tools have been prone to errors and bias. Rodriques acknowledged the risks of AI being applied in science. "It's not just inaccuracy that you need to worry about," he said. There are also concerns that "people can use them to come up with weapons and things like that." Future House will "have an obligation" to make sure there are safeguards in place, he added.
Shake "up?" (Score:5, Funny)
No, I think you mean "shakedown."
Will AIs replace CEOs? (Score:3)
That would be interesting. Would anyone notice?
Meta studies are pretty complex (Score:2)
Re: (Score:2)
https://www.newscientist.com/article/2330866-deepminds-protein-folding-ai-cracks-biologys-biggest-problem/ [newscientist.com]
This actually sounds good (Score:2)
Re: (Score:2)
But that would work only if you could then ask it how it came to give a particular answer. AFAIK, you can't ask LLMs how they came to give a particular response.
Re: (Score:1)
I'm not sure about the "worms and viruses" part helping anything, but you're right that relying on technology too much will weaken us, and we're already deep into it even without AI. What's different and worse about AI is just that it can move on its own, so unlike cigarettes, the threat won't necessarily stop just because people start to wake up.
Re: (Score:3)
It plays into Schmidt's plan. First, he helped Google surrender our privacy, after fumbling networking at Novell and Sun.
Then he hedged his bet and handily became a citizen of Cyprus so he wouldn't have to pay those pesky taxes. https://www.vox.com/recode/202... [vox.com]
Now he wants you to believe that AI will shake up the scientific method, when AI has no chain of authority and the scientific method is a bust without referential integrity.
Nothing to see here, just another Tech Talking Head spouting foam.
It already has (Score:2)
AI sucks (Score:3)
You'd be mad to do anything important with AI. AI can be used as an assistant to someone knowledgeable in the art, but then so can a search engine and a knowledge base.
And yes, of course I've used ChatGPT, Bard, Midjourney, etc. That's why I am saying AI sucks. They can improve AI over the next decade in certain specific areas such as factory automation and certain types of diagnostics, but beyond that it's diminishing returns and whack-a-mole. If we are talking human-like AGI intelligence, that's a minimum of 50 to 100 years away. Most likely well over 100 years away.
Re: (Score:2)
At this time? Very much so. With the current hype, the "ordinary person" interface of AI has gotten a lot better, but the quality of the answers has gotten worse. As to AGI, there is zero indication from what we currently have that it is even possible. It is not a question of computing power or training data either, or we would at least have a glimmer of AGI today. Instead, we have absolutely nothing.
Re: (Score:2)
You'd be mad to do anything important with AI
That's saying too much. AI has vastly improved things like handwriting recognition, for example.
the adulterer in the room (Score:2)
He was the one who said "Google isn't free, the cost is your information".
"Science" (Score:2)
I'm not sure what part of the scientific method is compatible with inserting a black box complex enough that no one knows exactly what the processing path is.
"You know what would help this two variable problem? About 5,000 more variables that are completely unknowable." - Einstein, probably
Re: (Score:2)
"About 5,000 more variables that are completely unknowable."
They're completely knowable, it's just that there are a lot of them so they're hard to reason about.
Indeed (Score:2)
Finally they'll use AI to fake the results data and graphs, so that it's not as easily found out as it is today.
Rich person is clueless... (Score:2)
Seriously, why do people think that having money and some expertise in one area makes them experts in all others?
One thing I can see happening is making plagiarism, fake results and low-quality research much harder to publish. That would be a good thing. As to the actual work of having insight and turning that into something useful, "AI" will contribute exactly zero. Also, a lot of applied researchers have had insights while dealing with more mundane stuff. I had the core idea for my PhD when thinking about
Can't block bad intent. (Score:2)
"Future House will "have an obligation" to make sure there's safeguards in place," he added.
Nope. Won't work. It doesn't matter how hard they try to stop it, the cat is going to escape the bag. Maybe it's a data breach, maybe it's a new CEO who branches out licensing to questionable partners... but it'll get out. And the ratio of philanthropic uses to nefarious ones is at the very best 1:1.
Not to be unreasonably fatalistic... but fatalism is appropriate here. It's coming. People will design new chemical
Always good to know (Score:2)
That the guys who think they hold the reins of power don't have a fucking clue what they're talking about.
Reproduction Crisis (Score:2)
Former Evil Empire Maker Predicts Obvious.... (Score:1)