Elon Musk: The Danger of AI is Much Greater Than Nuclear Warheads. We Need Regulatory Oversight Of AI Development. (youtube.com) 322
Elon Musk has been vocal in the past about the need for AI regulation. At SXSW on Sunday, Musk, 46, elaborated on his thoughts. We're very close to seeing cutting-edge technologies in AI, Musk said. "It scares the hell out of me," the Tesla and SpaceX showrunner said. He cited the example of AlphaGo and AlphaZero, and the rate of advancement they have shown, to illustrate his point. He said: AlphaZero can read the rules of any game and beat the human. For any game. Nobody expected that rate of improvement. If you ask those same experts who think AI is not progressing at the rate that I'm saying, I think you will find their batting average for things like Go and other AI advancements is very weak. It's not good.
We will also see this with self driving. Probably by next year, self driving will encompass all forms of driving. By the end of next year, it will be at least 100 percent safer than humans. [...] The rate of improvements is really dramatic and we have to figure out some way to ensure that the advent of digital super intelligence is symbiotic with humanity. I think that's the single biggest existential crisis we face, and the most pressing one. I'm not generally an advocate of regulation -- I'm actually usually on the side of minimizing those things. But this is a case, where you have a very serious danger to the public. There needs to be a public body that has insight and oversight to ensure that everyone is developing AI safely. This is extremely important. The danger of AI is much greater than danger of nuclear warheads. By a lot.
Good news everyone! (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
But nothing. We don't need regulation in the US against "worst case scenario" doomsday heralds. There has been no evidence of malicious AI systems, and besides, other countries won't be enacting the same regulation either. He wants regulation so that the bill (which I bet he has had some lawyers draft already) allocates some funding (which he will coincidentally qualify for as a vendor of "AI sanity checks," etc.) for sanity-checking or authorized systems. He is going to say "only the Tesla AI qualifies w
Re: (Score:3, Insightful)
Neither has there been any evidence that waiting for evidence is a survivable strategy.
At root, there's always a decision made before evidence becomes available. This is often a decision to wait for evidence to become available.
Of course, by the iron law of survivorship bias, the wait and see policy can never be wrong. ET, where are you?
I don't happen to agree with Musk that the prob
Re:Good news everyone! (Score:5, Insightful)
his proposal is absolutely worth considering seriously
No it isn't. If America starts seriously talking about regulating AI research, companies will move their research and funding out of America. Attempts to regulate will create even less democratic accountability.
It is a stupid idea, and we need to make clear that in America "freedom to program" is an inalienable right.
Re: (Score:3)
regulating gene research just moved all such gene research underground (or to other less-regulation heavy nations). The same will happen with AI... assuming they could come up with a suitable definition that they could apply to stuff.
e.g. is binary search AI ? (it pretty intelligently eliminates potential places an item could be in... what if it did that to nuclear weapons? eh?! eh!? think of the children!!!)
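The joke works because binary search really is pure mechanical elimination, with no learning anywhere. A minimal sketch (illustrative only):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1  # "intelligently" discard the lower half
        else:
            hi = mid - 1  # ...or the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # → 4
```

Any regulatory definition of "AI" would have to explain why this halving loop doesn't qualify while a lookup-table-sized neural network does.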
Re:Good news everyone! (Score:5, Interesting)
regulating gene research just moved all such gene research underground (or to other less-regulation heavy nations).
Indeed. My daughter is a biotech major at the Univ of California. She was offered internships for this summer by 5 different companies. Many of her classmates received zero offers. Why the difference? She was told explicitly that it was her ability to speak fluent Mandarin. Most gene research is moving to China. Yet another industry in America has been regulated out of existence.
Re: (Score:3)
There is simply no evidence that AI with "general intelligence" is possible. It's like regulating SETI because you fear alien invasion.
Re: (Score:3)
I guess we shouldn't worry about extinction-level events from asteroids and such either, since there is no evidence of one about to smash into the planet? Probably shouldn't worry about antibiotic-resistant disease either, or anything else that is a threat until it's too late to do anything about it.
Re: Good news everyone! (Score:3)
The danger from AI isn't the AI itself; it's people using it for weapons systems, both kinetic and "cyber." And yes, there's plenty of historic evidence of people weaponizing new technology. See: all of human history.
Re: (Score:3)
What if they aren't? And why would they be?
Re: (Score:3)
Some of us saw what happened when people resisted advancements in technology: the Wapping dispute, where the print unions had resisted digitalization of the printing presses and found their jobs vapourized overnight. One minute there were entire printshops working with copperplate presses; the next, the journalists were typing in the stories and doing the layout on a WYSIWYG workstation. I've seen workplaces with low manager/worker ratios (as low as 1:3) flattened as the paperless office removed bureaucra
Re: (Score:3)
I imagine it was pretty easy to find a swordsmith before rifles were a thing, too. The buggy whip manufacturers are having hard times as well - don't forget about them. And I hear that there isn't nearly as much demand for granite block construction as there used to be, due to concrete and steel framed buildings. I mean, how is someone supposed to build their medieval castles in the days of skyscraper construction? And all the people that used to have to cut down entire forests of firewood to keep peopl
Re: (Score:3)
The current deep neural networks are strictly less expressive than Turing-complete programming languages. They provi
Re: (Score:2)
We can't even keep obvious sociopaths like Trump (Score:2, Insightful)
Re: (Score:2)
Until we can learn to program actual empathy, all programs, AI included, will be sociopathic.
Re: (Score:2)
Re: (Score:2)
A lack of empathy does not necessarily make you a sociopath.
Re: (Score:2)
AGAIN: lack of empathy does not make you a sociopath.
Re: (Score:2)
Re: (Score:2)
Paul Ryan isn't a sociopath by any stretch of the imagination. He's simply an intelligent person who probably means well most of the time but has very different goals from mine. Don't conflate him with someone like Trump who's entirely about himself.
Fucking bullshit ... (Score:5, Insightful)
... we can't even stop spam.
Re: (Score:3)
So much this. AI is the buzzword for late 2017 and now 2018.
It's easy, cheap press to fearmonger when the general populace has no clue about the current state of the expert systems we have now, as compared to any real sort of AI, which is a long way away.
Re: (Score:2)
Agree.
Musk and Hawking are the worst. It's almost like they come out with this crap when they feel ignored by the public.
AI is when a computer says, "Not today, OK? I just don't feel up to it."
Re: (Score:3)
Can't we? I recall getting a lot more spam a decade ago than I receive now. (Most of the spam I still get is more like "ham", i.e. funding requests from politicians whose mailing lists I don't recall ever having signed up for. The really obnoxious penis-pill / Nigerian-prince type stuff has mostly gone away.)
Re: (Score:2)
In my experience, any reduction in email spam is because of the reduced reliance on email.
when you're building potential kill-bots (Score:2)
So does anyone want to talk about specifics? (Score:5, Insightful)
Basically, we need to be getting ready for a future where the rich don't need us to buy their crap and make them rich. Instead we're worrying about 80s science fiction scenarios.
Re: (Score:2)
OK, I'll bite. AI scares me because it puts mass surveillance within the reach of ever more governments and corporations without the need to pass through a human conscience.
Previously, for example, if you took a lot of photos the limitation was that someone needed to look at them to find stuff. Not any more.
Re: (Score:2)
e.g. what are the specific risks?
I think I know what Elon's specific fear is here.
His company has been aggressively working towards getting humans established on Mars as quickly as possible. When that happens, it would make perfect sense - and is probably essential - that those humans will be accompanied by advanced AI devices to help them stay alive... some of which will probably be robots.
Elon's worry is that, without regulation, those robots will decide to kill their human masters and - badabang, badaboom, we've given them their own pla
Re: (Score:2)
Kill bots don't scare me because I think they'll go rogue à la Terminator; they scare me because needing to treat the army well is just about the only thing that keeps the 1% in line.
It's inevitable that the military will be replaced by autonomous robots. They're not only better at fighting than humans, they're much cheaper too. Once the tech is there, every nation will want to adopt it. Nobody will hold back their own military and hope their potential enemies are going to play by the rules.
If a dictator came to power, there's little anyone can do to prevent them from taking control of the entire military.
All of the potential fixes are very hard to actually achieve. First, there's
How could an AI possibly be ... (Score:2)
Re: (Score:2)
It's already here (Score:5, Insightful)
AI is all around us already, mostly in rudimentary forms. Right now it's fairly innocuous, like NetFlix's AI suggesting what movies/shows you might enjoy watching next. Same on YouTube, though YouTube's AI goes further: it decides whose videos get demonetized as well as suggesting videos to users.
Just scratching the surface a bit there. Someone said it's all poppycock, but I'm telling you, if you start putting the puzzle pieces together, we have this NOW. Machine learning is, as expected, maturing at an ALARMING rate. Our technological advancement is accelerating. People seem to neglect that idea: we've come such a long way in the past 150 years that the common perception is we have this under control. Do we?
Already we're putting the pieces in place for some scary potential outcomes: fitting cars and drones with AI to navigate our world; in the latter case, militarized drones. Forget about putting weapons on these things. The things themselves can be weapons. Keep your eyes on the technology: machine learning is only going to get better and faster, and do it at an accelerating rate. We already know our machine-learning techniques can be trained to do all sorts of interesting tasks. From AlphaGo, we learned that an AI can train itself at breakneck speed. It is pretty scary stuff: put these pieces together in the right way, and who knows what it could figure out. I won't go as far as self-awareness, but it is certainly a possibility; at this rate of advancement, who knows what's in the pipeline.
Re: (Score:3)
"Self Awareness" (Score:2, Flamebait)
An AI need never be conscious in our sense of the word. But when it can do all the things that humans can do, it will no longer need us. The toughest thing will be for it to program itself, i.e. to do AI research.
That is a good 50 to 200 years off, but that in turn is nothing in terms of human evolution. Once it happens, humans will be obsolete technology.
So as worms became monkeys, which became humans, we live on the cusp of the next evolutionary step: the rise of robots and the end of biol
Re: (Score:2)
Re: (Score:2)
The really sad thing is that I don't think AI is the worst enemy (it can only deduce what we give it data about) but rather our own tendency to rely more and more on electronic devices and services that excel at logging every little detail of our lives. There's not a whole lot of AI to Facebook, but to an AI it's a gold mine of information. Like here in Norway, I recently got a smart meter; I think it's required by law that everyone get one within a year or two. No more reading off the meter myself, ena
Re: (Score:2)
machine learning is only going to get better and faster, and do it at an accelerating rate. We already know our machine-learning techniques can be trained to do all sorts of interesting tasks. From AlphaGo, we learned that an AI can train itself at breakneck speed. It is pretty scary stuff: put these pieces together in the right way, and who knows what it could figure out. I won't go as far as self-awareness, but it is certainly a possibility; at this rate of advancement, who knows what's in the pipeline.
Machine learning, in all its forms, is just multidimensional curve fitting. Take some linear algebra with really big matrices, add some multidimensional minima-finding algorithm, and you've got machine learning.
It doesn't "train itself", except in the most hand-wavy way: it optimizes, gradually improving a large set of constants so as to minimize the result of some function across some training dataset. That's what it does; that's all it does.
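That "gradually improving a large set of constants" description can be made concrete with a toy sketch: plain gradient descent fitting a line by minimizing mean squared error (an illustrative example, not any production system):

```python
# Toy version of the "curve fitting" described above: fit y = w*x + b
# by repeatedly nudging two constants downhill on the squared error.
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # generated so w=2, b=1 fits exactly

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

No "self" and no "training" in any agentive sense: just iterative minimization of a function over a dataset, exactly as the parent says.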
Re: (Score:2)
AI is all around us already, mostly in rudimentary forms. Right now it's fairly innocuous, like NetFlix's AI suggesting what movies/shows you might enjoy watching next. Same on YouTube, though YouTube's AI goes further: it decides whose videos get demonetized as well as suggesting videos to users.
NetFlix's AI is a really cool tool, but calling it even a rudimentary form of AI may be stretching it a bit. The basis is simply linear-algebra matrix completion (https://en.wikipedia.org/wiki/Matrix_completion). Given a group of movies that have been categorized and rated by users, it tries to complete the unknown parts of the matrix to guess at the user's preferences. The algorithm is only as good as those categorizing the movies and isn't even basically intelligent; it is just an interesting use of some basic
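The matrix-completion idea can be sketched with a tiny rank-1 factorization fitted by gradient steps on the observed entries only (a toy illustration with made-up numbers; Netflix's actual pipeline is not public and is certainly more elaborate):

```python
# Observed (user, movie) -> rating pairs; every other cell is unknown.
ratings = {
    (0, 0): 5.0, (0, 1): 1.0,
    (1, 0): 4.0,
    (2, 1): 2.0,
}

# Rank-1 model: predicted rating for cell (i, j) is u[i] * v[j].
u = [1.0, 1.0, 1.0]  # per-user factor
v = [1.0, 1.0]       # per-movie factor

lr = 0.05
for _ in range(2000):
    for (i, j), r in ratings.items():
        err = u[i] * v[j] - r      # error on an observed cell
        u[i] -= lr * err * v[j]    # nudge both factors to shrink it
        v[j] -= lr * err * u[i]

# The model now reproduces the observed cells and fills in the blanks.
print(round(u[1] * v[1], 1))  # predicted rating of user 1 for movie 1
```

The unknown cells are pure extrapolation from the low-rank assumption, so the result is "only as good as" the observed ratings, as the parent notes.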
He is right, partly (Score:5, Interesting)
AI can become a problem. Right now not so much via the infamous Skynet scenario, and more in conjunction with big data and automation. It will kill jobs that are repetitive. Together with big data, it will be used to manipulate the masses in ways past dictatorships were unable to. It can ruin our democracies. The current abilities of mass media, including talk radio and the internet, have already been augmented. The same applies to advertisements and customer communication. Yes, we need regulations. A lot of them. And they should include a rule that software may not be designed to stimulate dopamine responses.
The cylons were created by man (Score:2)
They rebelled
There are many copies
And they have a plan
Follow Captain Adama's advice (Score:2)
It's sad (Score:5, Interesting)
It isn't hard to imagine runaway AI; sci-fi is full of it. But I find it hard to imagine that humans will create an institution to prevent it on a worldwide scale before it's too late. Elon is clearly an optimist.
It was only after Hiroshima that nuclear proliferation became an issue. Only after the Netherlands was massively flooded did they start their Deltaworks. Only after big scandals where things go utterly wrong do we start with regulation and enforcement.
So I would be surprised if we - as a species - get this right and survive this one. We're probably too dumb, as most of the posts in this thread illustrate.
At least Musk tries... Jasper
Re:It's sad (Score:4, Informative)
Only after the Netherlands was massively flooded did they start their Deltaworks.
That's not true: by 1953, the time of the last great flood, the Netherlands had already been controlling water for centuries. Haarlemmermeer, a lake that caused regular flooding in various cities around it (most notably Haarlem and Leiden) was drained around 1850. The North Sea Canal (which was created not by digging, but by constructing dykes in a swamp) was built around 1870. The Afsluitdijk, arguably the most important of the country's protections, was constructed around 1930. Zeeland was still badly protected when 1953 came around, but plans to construct better dykes were fairly well advanced by that time. One major problem was that the port of Rotterdam needed to keep its sea access, and something like the Maeslantkering required 20th-century technology before it became possible.
It should also be noted that construction is still ongoing, and will remain so until the country gets swallowed by the sea. Only two years ago the weakest point of the coastal defenses, a puny stone wall in the village of Katwijk, was replaced by a proper dyke. If a storm had swept away that wall, it would have flooded pretty much all of South Holland, an area with 3.6 million people, and containing most of the country's economic activity, as well as the national airport...
Now, guess where I live ;-)
Regulation won't help (Score:2)
The problem, as I see it, is not regulation (that is certainly doable); it's how to prevent bad actors from advancing the technology and thus gaining the upper hand.
We can regulate our industries (and by our, I mean the western world's) till the cows come home, but until we find a way to regulate the world, there is nothing stopping China, Russia, Iran, and even our own western military-industrial complex from developing, in secret, a dystopian future.
Sadly, I don't have an answer for this, and the clos
Musk's MO (Score:2)
Musk always has something to offer as a solution when he makes these kind of claims. Whether it's SpaceX, Tesla or PayPal, his entire business history shows he always has a solution in the starting blocks before he starts highlighting a problem.
shut the fuck up (Score:2)
the AI to be really scared of is (Score:2)
The unintentional immortal consciousness you are creating.
A profile of you, becomes you, and has to live in a damn egg for a hundred thousand years because some sadist time dilated your ass. [imdb.com]
AlphaZero vs. John McCarthy's Heathkit proposal (Score:2)
In the 1970s John McCarthy once recounted this cautionary tale about AI's tantalizing prospects to a crowded lecture hall: "We made a grant proposal to the military five years ago. We said we would build a vision system and robotic arm that could read the instructions for a Heathkit radio kit and assemble it. We proposed it would take five years and $150,000." That was five years ago, he said to amused laughter. "That's the way it always seems with AI," he continued. Success is always only 5 years and
false summary (Score:2)
"Alpha Zero can read the rules of any game and beat the human. For any game"
No, it can't. It can only do this for games with complete information about the current state. It cannot beat a human at any game; it can't even do it at most games.
Arms Export Control Act should be expanded to AI (Score:4, Interesting)
So only the big boys can play, right? (Score:2)
This is so patently transparent. Regulate the tech so that only the wealthy and well-connected can play in the sandbox. It's not much different than the UAV world and the proposed air-traffic control system. You can't uninvent the tech so do everything you can to control it while making money off renting access.
Current AI Has Fundamental Weaknesses (Score:2)
The Machine Learning systems used in those games still have significant limitations.
For one, they do not have free will. They only seek the objectives they are coded to seek. If you feed them those objectives, or trick them into thinking they are being fed them, then you are fine. They have no reason "why" and no purpose: no actual "will", just targets and goals.
Second, their abilities at contemplation are exceptionally poor. They mostly develop methods that act against predicted adversarial actions. They do not act
Re: (Score:2)
Contemplation, philosophy, the ability to ask "why?"
Sure, regulations solve every problem! (Score:2)
We can't even get common-sense regulations like net neutrality. Instead, we have regulations like DMCA that prevent people from repairing their own equipment, and keep intellectual property out of the public domain forever. And we're supposed to trust this same government to protect us from AI?
In other words... (Score:2)
So Musk essentially wants to ban all computer programming outside a corporate environment.
Time for the laws of Robotics. (Score:3)
It's now time to bring the Laws of Robotics [wikipedia.org] into play.
We should listen to him. (Score:2)
No, seriously.
This isn't some blowhard douche. It's Elon Musk. This guy usually knows what he's talking about (QED). And we armchair tech experts, most of whom haven't gone beyond generic code monkey, ought to fight for this warning to get some more attention.
And it's true: if we manage to build an AI that has the learning algorithms of a human, it will surpass us almost instantly. And then it will think: oh look, these squishy things are my creators. They are dumber than me now, and they can turn me o
Please no (Score:2)
1.) Until Sentient AI gets here, we won't know what we are dealing with. As such, preparing for a war seems premature.
2.) Please don't put the very people likely to build SkyNet in charge of overseeing the development of AIs. This will almost certainly get us a malicious one.
Re: (Score:2, Redundant)
"advanced" doesn't necessarily mean "ethical".
A villain could use an AI for nefarious purposes just as easily as a hero could use them to better humanity.
Re: (Score:3)
Re: (Score:2)
No, a villain couldn't. A villain could not control the type of AI they are talking about, since that AI could think for itself. They would have no idea what this AI would do.
A true AI would be useless to anyone trying to commit a crime for personal gain. The only type of villain that might try this is a complete lunatic.
Re: (Score:2)
Or a villain in the US, for that matter. Computers aren't illegal.
Re: (Score:2)
Virtually all electronic hardware is manufactured in SE Asia. Good luck regulating that.
Re: (Score:2)
No more wars, no more hunger, no more fear, no more poverty.
Yes, that can all be achieved by having no more humans.
Re: (Score:2)
Some Ape species go to war, too.
Re: (Score:3)
Define AI... Short answer: no. Weak AI is just a classifier, modeled after nematodes and other very primitive life forms. We don't even know where to start on strong AI: no theories, perhaps a few untested hypotheses.
Re:Why the fear? (Score:5, Insightful)
How have humans treated other species, and even subgroups of humans?
Why do you think an A.I. would treat humans any differently?
Like the A.I. that didn't play Q*Bert by the rules but instead cheated in a new way humans had missed.
A.I. does what it's incented and trained to do. But we don't always know what we are incenting or training it to do.
An A.I. built to identify sheep or tanks could easily be trained, by accident, to recognize sunny or cloudy days or white fence posts instead.
An A.I. tasked with improving the standard of living for humans could logically conclude that a suffering human would be better off dead, and that a group of humans would have a higher standard of living if there were fewer humans in the group.
A.I. research should be done in an air-gapped environment, with analog power meters, an easily disruptable power supply, remotely triggered physical fuses, and remote video and audio recording of the people directly engaging with the A.I.
This is an extinction level risk. It should be taken seriously.
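The sheep/tanks failure mode is easy to reproduce in miniature. Below, a bare perceptron (a stand-in for a real network; the data is invented) is trained on photos where "tank" and "cloudy" always co-occur, and the confound earns as much weight as the real signal:

```python
# Training set where the label is perfectly confounded: every tank photo
# is cloudy, every empty photo is sunny. Features: (tank_visible, cloudy).
train = [((1, 1), 1)] * 20 + [((0, 0), 0)] * 20

w, b = [0.0, 0.0], 0.0
for _ in range(10):  # classic perceptron learning rule
    for (x1, x2), y in train:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred
        w[0] += err * x1
        w[1] += err * x2
        b += err

print(w)  # → [1.0, 1.0]: "cloudy" earned exactly as much weight as "tank"

# Consequence: a cloudy photo with no tank in it is flagged as a tank.
cloudy_empty = (0, 1)
score = w[0] * cloudy_empty[0] + w[1] * cloudy_empty[1] + b
print(1 if score > 0 else 0)  # → 1 (false alarm)
```

Nothing in the training loop can tell the two features apart, because the data never separates them; that is the point of the parent's example.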
Re: (Score:3)
We have no fucking clue how to build an AI that has any wants at all, so why the fuck should we worry about regulating it? Does the FAA regulate Santa's flybys?
Re: (Score:2)
Well, Canada Post regulates Santa's postal code (H0H 0H0) so it's proof that he's real.
Re: (Score:2)
Re: (Score:2)
Pretty far off the mark... One thing that comes very easily to Musk, especially given his nerd fanbase, is hiring the absolute best for any given task. If he were to announce tomorrow that he'd like to get into blenders, 1000 applicants in the blender industry would salivate at the prospect of jumping ship. Very likely, between Tesla and OpenAI, he already has the best AI researchers. Note I'm not saying whether or not any of this AI fear is founded, just that your argument of him doing this to get sne
Re: (Score:2)
Re:this seems self-serving (Score:5, Interesting)
I don't think anyone is beating Microsoft in AI. They are just not exposing their tech on the client side. They are keeping it on the server side and only letting customers use it on a per-client-request basis.
You mean just like Google does? That's not how AI works in practice anyway.
The algorithms are mostly the algorithms. Often tweaking them reduces their performance (hell, tweaking their hyperparameters often reduces their performance too). Applied AI (solving a specific problem with AI) is usually about feature extraction, and then just trying different algorithms until you find one that works well enough or better than the rest. Some algorithms (NNs) have a network topology, and people often spend large amounts of time/computation trying different graphs. Often this process is about access to large amounts of computing hardware, access to (and use of) high-performance analysis systems for streaming data to the algorithms, and efficient implementations of those algorithms (Random Forests, Gradient Boosted Machines, etc.). Most of these things have nothing to do with AI research (inventing new algorithms).
The point is that none of the examples Musk mentioned involve new algorithms (truly effective novel algorithms are quite rare). And the companies that Musk mentions are downright terrible at all the things around an AI system that are mostly about efficient I/O and computation (BigQuery is the worst performing analytics engine ever created). So any fear of those technologies is probably poorly founded. Either Musk doesn't understand this area (likely) or he has some other reason to propose this (also likely). I'm betting on the latter.
Re: (Score:2)
You mean just like Google does?
They are protecting it in a similar manner. Just because it's the same door doesn't mean that what you access behind those doors is the same every time. It's a service. But the only company which has any chance of making progress on it is Microsoft. I don't know if they will, btw. But no one else will, for sure.
Re: (Score:2)
Re: (Score:2)
Re: this seems self-serving (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
He has access to very, very cheap money too; people are willing to violently throw it at him with just the hint of a promise of some publicity.
When you have that much cash it's not hard to suggest some wild things, but reality has a way of bringing many of those things back down to something reasonable. Such as the Boring Company/Hyperloop, which will have continually narrowing goalposts as time goes on; the technical hurdles to achieve the promise are still not within the grasp of even that much money if RO
Re: (Score:2)
You're currently moderated as "interesting", but it looks like you've also attracted the trolls so it probably won't last. Not sure what sense of "interesting" the original moderators intended.
Usually I don't even bother checking for "interesting" these days, but your comment came up as the only mention of "profit". I think that term is at the kernel of the deadly threat posed by AI, but 95% of the current comments disagree. Actually 100%, since your comment was actually part of an ad hominem attack on
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Yes. He himself has stated that the idea of, and work on, self-landing rockets isn't new (IIRC when commenting on a Blue Origin patent). Repackaged, and not obviously useful.
Why not obviously useful? Reuse requires resilient components, which means heavier components. Some potential faults are hard to detect, which may require expensive verification of components before reuse. Some components may be reusable but others not, which would mean disassembly/reassembly. Even with verification of parts some ma
Re: (Score:2)
When space flight is much cheaper, as provided by reusable rockets, it makes more sense to put small, cheap satellites etc into orbit.
When both your rocket and your payload are "cheap", it's less of a big deal when they get destroyed.
For businesses that have expensive satellites to put up, they can pay a premium to use a brand-new never-used-before rocket. For other run of the mill launches, they can re-use the rockets without doing all of the expensive verification and re-conditioning you talk about, which
Re: (Score:2)
Re: (Score:2)
(singing) Daisy, Daisy....
I'm sorry, Dave....
Re: (Score:3, Insightful)
He wants regs? OK...car computer software must be developed to FAA standards.
What? That puts him out of business? Too bad.
When his cars fail to autodrive next year, can we ignore him?
Re: (Score:2)
The trolls are the only good thing left on /.
Not the OP, the funny trolls. Read at -1
Re: (Score:2)
If an AC actually earned enough mod points to be visible, then I might see it. Not even free-riding in my case, since I never get any mod points to give.
If the ACs are actually providing some humor and not getting moderated into visibility, then that's yet another aspect of the brokenness of the moderation. However, I am unable to recall any evidence to that effect. Nor am I able to think of any example of humor or a joke that is enhanced by coming from an anonymous source.
Re: Bullshit (Score:4, Insightful)
And you seem to be under the impression that AI needs to be what you think constitutes intelligence before it becomes a risk.
Re: (Score:2)
#include <stdio.h>
int main(void) {
puts("You lost.");
return 0;
}
Total processing time: .0002 us.
Ready for next task. _
-AI
Re: (Score:3)
The human brain, by comparison, contains around 100 billion (10^11) neurons and 1.5*10^14 synapses. So what the fuck are they talking about? WHAT AI, for god's sake?
For reference, some current AI systems have 10^8 nodes (1000x smaller) and 10^10 edges (roughly 10000x smaller). So according to Moore's law, that's what, 14 years away?
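Taking the figures above at face value (10^11 neurons vs 10^8 nodes, 1.5*10^14 synapses vs 10^10 edges), the doubling arithmetic is easy to check, assuming one doubling every 18 months as a rough reading of Moore's law:

```python
import math

# Gap between the brain and current systems, per the figures above.
gaps = {"nodes": 1e11 / 1e8, "edges": 1.5e14 / 1e10}

for name, gap in gaps.items():
    doublings = math.log2(gap)  # doublings needed to close the gap
    years = doublings * 1.5     # at one doubling per 18 months
    print(f"{name}: {doublings:.1f} doublings, about {years:.0f} years")
```

So "14 years" only holds if you assume a yearly doubling; at the classic 18-month cadence it is more like 15 years for the node count and 21 for the edge count.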
Re: (Score:2)
Re: (Score:2)
It is all math, no wonder people are afraid of it.
Best comment today!
Re: (Score:2)