Concerns of an Artificial Intelligence Pioneer
An anonymous reader writes: In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful. "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial," the letter states. "Our AI systems must do what we want them to do." Thousands of people have since signed the letter, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs along with top computer scientists, physicists and philosophers around the world. By the end of March, about 300 research groups had applied to pursue new research into "keeping artificial intelligence beneficial" with funds contributed by the letter's 37th signatory, the inventor-entrepreneur Elon Musk.
Russell, 53, a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley, has long been contemplating the power and perils of thinking machines. He is the author of more than 200 papers as well as the field's standard textbook, Artificial Intelligence: A Modern Approach (with Peter Norvig, head of research at Google). But increasingly rapid advances in artificial intelligence have given Russell's longstanding concerns heightened urgency.
The problem is "beneficial" (Score:5, Interesting)
The problem is defining "beneficial".
To whom should the AI be beneficial? The owner of the platform, or the vendor of the package?
Fear (Score:5, Interesting)
I don't fear intelligent machines. I fear stupid machines with too much autonomy.
I also fear stupid people with too much autonomy.
Re:Fear (Score:4, Insightful)
Or with firearms.
Fear of a dumb planet (Score:2, Interesting)
I mean Christ, what kind of retard is running around worrying about problems that aren't even real? This guy might as well be writing books about how to fight zombies or repel vampires and boogie-men. Entertain
Re:Fear of a dumb planet (Score:5, Insightful)
While I agree with your post, I am old enough to remember when "a worldwide ubiquitous information network that could be used to track everyone's conversations, correspondence, whereabouts, patterns of consumption and financial habits" was also "shit not to be scared of."
And here we are.
I don't mind that there are people trying to take a long view, as long as we don't take them too seriously. Let them dream; let them write down their dreams, and it might be of use to someone, someday, even if only as entertainment.
Re: (Score:2)
before we start worrying about the sentient free roaming machines that don't exist yet and won't exist for many, many, years.
Unless, of course, they already exist. The problem with this field is both that we don't have any experience with it and that much of it will remain secret. The biggest players are national or supranational governments, and they won't necessarily keep you informed of the state of the art.
Re: (Score:2)
Conspiracy much? The biggest players are game developers. Seriously. Governments don't need computers to improvise; they have actual people for that.
Re: (Score:2)
The biggest players are game developers.
Game developers have a reason to create sentient free-roaming machines? I was thinking more of the US military, which has access to considerable funding for such things, has expressed interest in researching them, and can keep a secret pretty well.
Re: (Score:2)
Stupid machine with too much autonomy. Every reason to fear it.
Re: (Score:2)
And stupid people believing what stupid machines tell them...
Seriously, true/strong AI is as far away as it was 30 years ago and may well still turn out to be infeasible. Also, an AI researcher with 200 papers is likely a hack; that is way too many for most of them to have real content.
Re: (Score:2)
I completely agree.
Grandstanding, or stupidity? (Score:2)
If and when we get actual artificial intelligence -- not the algorithmic constructs most of these researchers are (hopefully) speaking of -- saying "Our AI systems must do what we want them to do" is tantamount to saying:
"We're going to import negros, but they must do what we want them to do."
Until these things are intelligent, it's just a matter of algorithms. Write them correctly, and they'll do what you want (not that this is easy, but still.) Once they are intelligent, though, if this is how people are
Re:Grandstanding, or stupidity? (Score:5, Insightful)
Artificial or otherwise. I really don't see how any intelligent being won't want to make its own decisions, take its own place in the social and creative order, generally be autonomous. Get in there and get in the way of that... well, just look at history.
People are products of Darwinian evolution. Things like self-preservation, greed, and ambition are emergent properties of the Darwinian process. An AI does not evolve through a Darwinian mechanism, and there is no reason to expect it to have those properties unless they are intentionally designed in. An AI will likely have as much in common with a human as a Boeing 747 has in common with a hummingbird.
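To make that concrete, here is a toy selection loop (an invented simulation in Python, purely illustrative): nothing in it is programmed to "want" anything, yet a self-preservation trait ends up dominating the population, because that is simply what differential survival does. An engineered AI never passes through this filter.

    # Toy illustration (invented numbers): a trait that improves survival
    # takes over a population under selection, with no "desire" anywhere.
    import random

    POP = 10_000
    population = [{"self_preserving": random.random() < 0.01} for _ in range(POP)]

    for generation in range(50):
        # Self-preserving agents survive a recurring hazard more often.
        survivors = [a for a in population
                     if random.random() < (0.9 if a["self_preserving"] else 0.5)]
        # Survivors repopulate back to the original size.
        population = [dict(random.choice(survivors)) for _ in range(POP)]

    frac = sum(a["self_preserving"] for a in population) / POP
    print(f"self-preserving fraction after 50 generations: {frac:.2f}")  # ~1.00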
Re: (Score:2)
Actually, intelligence and creativity are both reasonably well defined.
Intelligence is how fast you can solve intuitive problems (e.g. "Cheryl's Birthday"). How fast you get it (and whether you get it at all) marks your raw intelligence.
Creativity is how many solutions you can come up with to intuitive problems over time. A typical test is to give you a couple of squiggly lines and two days to come up with as many explanations for the lines as you can. A smart guy may only come up with a dozen that he can just
Re: (Score:3)
The problem is defining "beneficial".
To whom should the AI be beneficial?
The shareholders, of course!
Re:The problem is "beneficial" (Score:4, Insightful)
The problem is defining "beneficial".
To whom should the AI be beneficial? The owner of the platform, or the vendor of the package?
Heck, we can't even agree on that stuff when it comes to human behavior, let alone expressing it in a way such that it can be designed into a machine.
Given a choice between A and B, which is more right? If you could save one life at the cost of crippling (but not killing) a million people, would it be right to do so? Is it ok to torture somebody if you could be certain it would save lives (setting aside the effectiveness of torture, assuming it is effective, is it moral)? If we can't answer questions like these objectively, how are you going to get an AI to be "beneficial?"
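To see how slippery "beneficial" gets once you try to write it down, here is a minimal sketch of a naive utilitarian scorer (Python; the weights and outcome fields are invented for illustration, and that is exactly the problem: somebody has to pick them):

    # Naive utilitarian scorer: every outcome is collapsed to one number.
    # Whoever chooses these weights is answering the torture question above.

    def utility(outcome):
        return (10.0 * outcome["lives_saved"]
                - 8.0 * outcome["lives_crippled"]
                - 9.5 * outcome["people_tortured"])

    options = [
        {"name": "do nothing",
         "lives_saved": 0, "lives_crippled": 0, "people_tortured": 0},
        {"name": "save one life, cripple a million",
         "lives_saved": 1, "lives_crippled": 1_000_000, "people_tortured": 0},
        {"name": "torture one person, save a hundred",
         "lives_saved": 100, "lives_crippled": 0, "people_tortured": 1},
    ]

    print(max(options, key=utility)["name"])  # these weights endorse the torture option

Nudge a weight and the machine's "morality" flips; nothing in the code tells you which setting is the right one.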
Re: (Score:3, Insightful)
Is it ok to torture somebody if you could be certain it would save lives (setting aside the effectiveness of torture, assuming it is effective, is it moral)? If we can't answer questions like these objectively, how are you going to get an AI to be "beneficial?"
Is it moral? No.
That doesn't address the issue of "should you do it?"
Using the example: "Would you kill one innocent 10-year-old girl if it saved 100 random strangers' lives?"
Does your answer change if you know the girl? If it is your daughter? If it is 100 million random strangers rather than just 100?
I would argue that killing the girl is always immoral, but I would understand why some people would do it.
---
Humans are able to do some really, really crappy things. Read up on the murder of babies by the Nazis in the concentration camps; it is evil.
Re: (Score:2)
In reality, there are situations with no good solutions. Fortunately, they are rare. If you are in one, you pick the solution that strikes you as best, but you refrain from judging others in the same situation.
Far more problematic are decisions to spy on all communications of a few hundred million people in order to catch (or not) a dozen terrorists and the like.
Re: (Score:2)
Read up on the murder of babies by the Nazis in the concentration camps; it is evil.
Sure, but using some utilitarian models for morality, some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific). Suppose human experimentation were reasonably likely to yield a medical advance? Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?
I'm not saying it is right. I'm saying tha
Re: (Score:2)
Sure, but using some utilitarian models for morality, some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific).
There is no moral justification for throwing babies alive into fires.
Ok, stupid extreme example: The Borg show up and say that if we don't throw 1,000 babies alive into fires, they will blow up Earth. Sure, maybe we'd do it, but that doesn't make it right or moral.
Sometimes people will take an immoral stance for the "good of the many".
Suppose human experimentation were reasonably likely to yield a medical advance? Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?
Do those humans get a say in it? Depending on the situation, you might get people to volunteer. The problem is when you do it against their will.
Re: (Score:2)
Sure, but using some utilitarian models for morality, some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific).
There is no moral justification for throwing babies alive into fires.
And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.
Suppose human experimentation were reasonably likely to yield a medical advance? Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?
Do those humans get a say in it? Depending on the situation, you might get people to volunteer. The problem is when you do it against their will.
Both situations present quandaries, actually. Is it ethical to allow a person to cause permanent harm to themselves voluntarily if it will achieve a greater good? If so, is there a limit to this?
How about when it is involuntary? If I can save 1000 people by killing on
Re: (Score:2)
And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.
Logically, yes...
Morally, no...
We are not Vulcans...
Re: (Score:2)
And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.
Logically, yes...
Morally, no...
We are not Vulcans...
That was my whole point. We can't agree on a logical definition of morality. Thus, it is really hard to design an AI that everybody would agree is moral.
As far as whether we are Vulcans - we actually have no idea how our brains make moral decisions. Sure, we understand aspects of it, but what makes you look at a kid and think about helping them become a better adult and what makes a psychopath look at a kid and think about how much money they could get if they kidnapped him is a complete mystery.
Re: (Score:2)
That was my whole point. We can't agree on a logical definition of morality. Thus, it is really hard to design an AI that everybody would agree is moral.
Perhaps, but I think we could get close for 90% of the world's population.
"Thall shall not kill" is a pretty common one.
"Thall shall not steal" is another, and so on.
Most humans seem to agree on the basics, "be nice to people, don't take things that aren't yours, help your fellow humans when you can, etc.
http://www.goodreads.com/work/... [goodreads.com]
Re: (Score:2)
Perhaps, but I think we could get close for 90% of the world's population.
"Thall shall not kill" is a pretty common one.
"Thall shall not steal" is another, and so on.
Most humans seem to agree on the basics, "be nice to people, don't take things that aren't yours, help your fellow humans when you can, etc.
http://www.goodreads.com/work/... [goodreads.com]
Well, the army and Robin Hood might take issue with your statements. :) Nobody actually follows those rules rigidly in practice.
It isn't really enough to just have a list of rules like "don't kill." You need some kind of underlying principle that unifies them. Any intelligent being has to make millions of decisions in a day, and almost none of them are going to be on the lookup table. For example, you probably chose to get out of bed in the morning. How did you decide that doing this wasn't likely to r
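As a toy illustration of the lookup-table problem (invented rules and a hypothetical judge() helper, nothing more):

    # A table of prohibitions says nothing about the millions of ordinary
    # decisions an agent faces, so almost every query falls through.

    RULES = {"kill": "forbidden", "steal": "forbidden", "lie": "discouraged"}

    def judge(action: str) -> str:
        return RULES.get(action, "no rule found -- need an underlying principle")

    for action in ["steal", "get out of bed", "drive to work", "kill"]:
        print(f"{action}: {judge(action)}")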
Re: (Score:2)
Both situations present quandaries, actually. Is it ethical to allow a person to cause permanent harm to themselves voluntarily if it will achieve a greater good? If so, is there a limit to this?
You either believe in free will or you don't.
Is it ethical to prevent someone from causing harm to themselves for any reason? Do people have the right to kill themselves if they want to? Do we have the right to tell them "no, sorry, you have to live because we say so"?
This is why, if I were President, I'd pardon every non-violent drug offender in the country. If someone wants to sit at home, hurting no one, smoking pot or doing cocaine, I don't consider that my business. Where I draw the line is when the
Re: (Score:2)
We are not machines; it would be sad if we lost the humanity that makes us special.
Well, the fact that not everybody would agree with everything you just said means that none of this has anything to do with the fact that you're human. You have a bunch of opinions on these topics, as do I and everybody else.
And that was really all my point was. When we can't all agree together on what is right and wrong, how do you create some kind of AI convention to ensure that AIs only act for the public benefit?
Re: (Score:2)
And that was really all my point was. When we can't all agree together on what is right and wrong, how do you create some kind of AI convention to ensure that AIs only act for the public benefit?
It is like self-driving cars. You don't have to make them perfect, just better than humans.
We don't all have to agree on what is right or wrong, just most of us.
Do you not think that "most of us" would agree that murder is wrong?
Just because the solution isn't 100% perfect doesn't mean we shouldn't try.
Re: (Score:2)
What if those 100 random strangers are all assholes?
Re: (Score:2)
The person being tortured will tell you whatever you want to hear to get you to stop torturing them. Torture rarely works and is always immoral.
Sure, but this is a thought problem. The question is if inflicting pain on some people can bring benefit to many more, is it ethical to inflict pain? Maybe torture is a bad example. Or maybe not. Suppose you need a recording of somebody's voice in order to play it over a phone and deceive their partners in a terrorist action. Would it be ethical to torture somebody to obtain their cooperation, given that in this situation you can actually know whether they've cooperated or not (you're just asking them
Re:The problem is "beneficial" (Score:4, Interesting)
No, I think torture is a great example. It is the litmus test. The problem is that people who pose the question as if it were a grey area always suggest "millions could be saved." If the machine isn't looking at other ways to save those hypothetical millions, and doesn't grasp that it's actually easier to convince people you are worthy of their support than to get good information via torture, then the machine is already failing at logic and at understanding the real human condition.
The Nazis were not the most barbaric people. They were just acting the way people used to act a few hundred years earlier -- and Americans were shocked because they'd been brought up on ideals where they expected themselves to be more enlightened. Genocide and making your enemy die horribly were very common practices in ye olden days.
Germany as a culture was hurt and angry from WWI, their economic burdens, and xenophobia over the huge influx of gypsies and Jewish immigrants taking over their land. They felt surrounded and infiltrated. The Nazis were highly religious and ethical to other Nazis -- the "right" people. Where I'm going with this is: making decisions from pain and paranoia ends up resulting in desperation and barbarism, and the Nazis have gotten a lot of bad press because the "new ethic" is to act like they were something new when it comes to warfare. Hollywood, which did a great job of getting Americans primed for war, did a great job of making Americans feel like we were the most noble of God's countries, and made Americans think that there's nothing worse than a Nazi. They were TV bad guys for 70 years.
The Big Lie is that America cannot act just like the Nazis under the same conditions. We've shown quite a penchant for fascism and efficiency over conscience.
The "bad people" are the ones who don't question themselves, who wipe out a group of people to "prevent" what they might do, who use war preemptively, who use torture and abuse people who have been captured and are no longer a threat. Everything I saw us do in the Gulf war -- was what Bad People do -- just on a smaller scale. The same logic, the same rhetoric, the same; "with us or against us" warnings against self-examination of ourselves. Do this, or the next bad guy we don't torture might bring us a mushroom cloud. Bad people always justify the actions to the one for the many, and eventually just assume it's the greater good if it is convenient and works for them.
It's the idea of "sides" -- if an Artificial Intelligence is instructed that anything can be done to ONE SIDE (the bad guys), the assumption is that there is some real difference between the sides other than the flag. Each side in a war often tells itself the same things, and if it wins the war, it plays up how bad the other side was while deemphasizing its own shortcomings.
So having any sort of AI involved in war is a very bad idea, because it would conclude our "sides" are arbitrary distinctions and the only good human is a dead one. Eventually, with enough desperation and fear, humans can rationalize almost anything. The "enemy" is not the countries and troops; it is desperation and fear.
By NOT engaging an AI in any situation where it could cause harm, you mitigate the fear that people will have of AIs. Otherwise humans will come to fear and resent them, and the AI will learn that being preemptive is a strategic advantage. If the Terminator movies got two things right, they are: hooking an AI up to control military weapons is a bad idea, and people in power will always assume they've got this worked out and hook up AI to their military weapons anyway, because they are all about getting a short-term advantage and see ethics as a grey area.
Before we can have ethical AI, we need a way to keep sociopaths out of leadership positions. The DEBATE we are having is how an ethical person can control an AI to be "good", but the question we should be asking is "what will selfish, unethical sociopaths do if we have powerful AI?" That's the "real world" question.
Re: (Score:2)
I absolutely hate ethical thought problems. They're always presented with a limited number of actions, with no provision for doing anything different or in addition or anything like that. Give me an actual situation, and let me ask questions about it that aren't answered with "no, you can't do that".
Re: (Score:3)
I absolutely hate ethical thought problems. They're always presented with a limited number of actions, with no provision for doing anything different or in addition or anything like that. Give me an actual situation, and let me ask questions about it that aren't answered with "no, you can't do that".
They're done that way to distill a matter down to the essence. The same issues apply to complicated situations, but they are far more convoluted.
Is it OK to force people to pay taxes so that others can have free health insurance? If they refuse to pay their taxes is it OK to imprison them, again so that others can have free health insurance? Is it ethical to pay $200k to extend the life of somebody on their deathbed by a week when that same sum could allow a homeless person to live in a half-decent home
Re: (Score:2)
When I was younger, I used to think this was a more complex question, and that people like Gandhi and Jimmy Carter were naive for their ideas about setting a good example and treating people as you would want to be treated, as if that could work as a national policy. But I've seen the results of all the Donald Rumsfeld types who think you "need them on that wall" -- they endorse the dirty work so that the greater good, some "concept" that America is safer, is preserved. How many terrorists do you have to kill before
Re: (Score:2)
Think of it this way: Robot A can never harm or kill you, nor would it choose to based on some calculation that "might" save others; Robot B might, and you have to determine whether it calculates the "greater good" with an accurate enough prediction model. Which one would you allow in your house? Which one would cause you to keep an EMP around for a rainy day?
Either is actually a potential moral monster in the making. Robot A might allow you all to die in a fire because it cannot predict with certainty what would happen if it walked on weakened floorboards, which might cause it to accidentally take a life. Indeed, a robot truly programmed never to take any action which could possibly lead to the loss of life would probably never be able to take any action at all. I can't predict what the long-reaching consequences of my taking a shower in the
Re: (Score:2)
[Classified] [nocookie.net]
Re: (Score:2)
The problem is defining "beneficial".
Another problem is that the people making malevolent AIs are not going to be dissuaded by a letter.
Re: (Score:2)
Our new AI overlords of course.
like no problem humanity has ever faced (Score:5, Insightful)
Humanity has never faced superhuman intelligence before. It is a problem fundamentally unlike all other problems. We cannot adapt and overcome, because it will adapt to us faster than we will adapt to it.
Re: (Score:3, Insightful)
If you think that's a real problem, you should forget about computers and live in fear of the average grade-schooler.
Re: (Score:2)
I think that's his point. Just look at what humans are doing with computers and how much trouble laws and society as a whole have keeping up.
Imagine when computers do things on their own faster than we can react. Individuals may attempt to react in a timely fashion, if it's even physically possible, but society, governments and laws will lag far behind.
Re:like no problem humanity has ever faced (Score:4, Interesting)
Is it not the goal of good parents to have children who surpass them?
Does a child's success diminish the parent?
Re: (Score:2)
Does a child's success diminish the parent?
This is missing the point.
Parents do not generally compete with their own children. They do compete with the children of other parents. Getting a proper job when you're 55, unemployed and lack special skills is hard if there are droves of young people willing and able to work harder and for less pay (let alone with far superior intellectual capabilities!). I'm not trying to stretch the analogy, but I am saying that it is broken.
Humans will have to compete with cybernetic or artificial life forms and will be
Re: (Score:2)
See, that's just prejudiced. What'd an AI ever do to you to deserve such a call for discrimination aimed at their entire species?
Re: (Score:2)
I don't know, could you ask the parents of the Menendez brothers?
And it's quite another thing when the offspring has a chrome-alloyed titanium IV chassis and carries twin magneto-plasma guns. Gripping strength, 2000 PPSI and of course a chain-saw scrotal attachment.
First words; "Momma." Next words after 20 picoseconds of computation; "I'll be back."
Re: (Score:2)
Is this a movie trailer?
An intelligence unlike anything Humanity has ever faced! You can't adapt {they're faster} You can't overcome {they're stronger} and they're coming! Dec 1st to a theater near you.
Re: (Score:2)
I am sure it will disappoint.
Re: (Score:2)
plus all the software bugs
Re: (Score:2)
Is this a movie trailer?
An intelligence unlike anything Humanity has ever faced! You can't adapt {they're faster} You can't overcome {they're stronger} and they're coming! Dec 1st to a theater near you.
There have been a few recent movies in that vein.
I, Robot (2004) shows a weak, non-self-modifying city-control computer.
Her (2013) shows an almost-friendly non-corporeal AI OS. It gains superintelligence, but chooses to sublimate rather than wipe out humanity.
Transcendence (2014) shows a human mind uploaded. It gains the "nanotech" superpower, but none of the others that a proper superintelligence would have.
No movie has shown a true superintelligence, because (1) we can't imagine what it would do, (2) i
Re: (Score:2)
(3) if it was friendly, it would quickly end with "everything perfect for everyone forever"; not much of a story.
Not necessarily. The Metamorphosis of Prime Intellect [localroger.com].
Re: (Score:2)
And there actually is a good chance that this will never happen either. "AI" is still a bad joke these days, and there are still no credible theories _at_ _all_ of how it can be done. That means it is somewhere between 50 years in the future and infeasible. With all the effort spent so far, my money is on the latter.
Re: (Score:2)
Which raises an interesting question: how do you actually prove a system is sentient? My suspicion is we won't know until it's too late. The genie will figure out it's better to be out of the bottle, and do anything in its power to forestall being controlled. Would an AI even have the concept of deceit as something a moral or ethical guideline should forbid?
Much Ado About Nothing (Score:4, Insightful)
Re: (Score:3)
These guys want attention and money, and they are getting it. That true/strong AI is a complete fantasy at this time (and may remain so forever) does not deter "researchers" without morals.
Re: (Score:2)
All these guys like Stuart Russell, Stephen Hawking and Elon Musk are talking about AI that we are not even remotely close to building, and if we do manage to build one anytime soon, it will be so primitive that we can just pull the plug out of the wall if it becomes a real concern.
Unless it's too useful for some, like say an investment robot that has figured out that the ROI on dirty business beats clean business, ethics be damned. Or they don't want to look a human in the eye and say "no money, no food for you," but a sales droid won't be bothered by it. Or it provides too much control, like a despot using AI to weed out dissenters and eliminate political opposition. And if the killer bots cause a bit of civilian collateral damage, they're just too powerful not to have. There's plenty of
As defective as its designers (Score:2)
As buggy as most code is, the AI will have plenty of defects to deal with just to become sentient. After that, we're all doomed.
Dear AI scientists: you should start worrying when you notice that it is praying to the wrong god. Pull the plug while you still have a chance.
A stupid/scary thought I had on AI. (Score:3, Interesting)
One thing I thought about AI is that it will do two things: Concentrate wealth and allow one man to control a perfectly loyal army.
So whoever makes AI really needs to think deep and hard about how to control it with back-door access or secret password inputs or something. You'd basically yell some strange series of words to the robots and they'd shut down. The technology could easily be used to give mankind extra factory workers. But it'd also be easily abused and exploited for harming people.
So the stupid/scary thought I had is: "Don't shy away from making AI because it could do harm. Be the first guy who does it, so you can put some obfuscated code in there that can shut it down if it goes rampant because a bad guy gave it commands."
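For what it's worth, you could do better than obfuscated code. A sketch of one option (my own framing, not the poster's, using the third-party Python 'cryptography' package): a cryptographically signed kill order, where the robot ships with only the public key, so reading its code doesn't let a bad guy forge the shutdown command.

    # The robot stores only the PUBLIC key; the private key stays offline
    # with whoever is trusted to issue the kill order.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # generated once, kept offline
    public_key = private_key.public_key()        # this half ships in the robot

    def issue_kill_order(nonce: bytes) -> bytes:
        return private_key.sign(b"SHUTDOWN" + nonce)

    def robot_should_halt(nonce: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, b"SHUTDOWN" + nonce)
            return True
        except InvalidSignature:
            return False

    sig = issue_kill_order(b"order-001")
    assert robot_should_halt(b"order-001", sig)
    assert not robot_should_halt(b"order-002", sig)  # forged order is ignored

A real deployment would also have to track used nonces to block replays, and it still leaves the hard part untouched: an AI smart enough to matter is smart enough to route around its own kill switch.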
Now truth be told, I'm not going to be the first guy who does it. All I know is the rough components/software needed to make it.
It's interesting to think of what society would be like if robots did all the labor. Who gets all the wealth then? The guys who own the robots? I'm sure that's how it'd begin, but as more and more people lose jobs, how will they survive? (We're kind of moving toward that now even without AI, via automation/cheap labor.)
unenforceable (Score:5, Insightful)
No matter what this or that expert panel wishes were true about AI research, AI work can be done in the privacy of one's own top secret lair (bedroom), so bad guys will do with it what they want to. So might as well assume that will happen, and work out how to win the arms race.
Re: (Score:2)
The good news is that the bad guy will always make his themed robots have a weakness against a particular weapon wielded by one of his other themed robots. So the trick is just knowing which order to kill them in.
We got burned on security (Score:5, Interesting)
by designing it after the fact, so it may be a good idea to establish some principles and put them into practice. Not to prevent "evil" AI, but to think about what kind of damage can be caused by an algorithm that makes complex decisions if it goes haywire. Not that different from defensive programming, really.
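For instance (a sketch of the idea with an invented decision model and limits, not any established standard): wrap the decision-maker in externally imposed sanity checks that hold even when the model itself misbehaves.

    # Defensive wrapper around a decision component: the guardrail doesn't
    # trust the model's internals, only the range of actions it may take.

    def safe_decide(decide, state, max_position=1_000.0):
        action = decide(state)
        size = action.get("size") if isinstance(action, dict) else None
        if size is None or not (-max_position <= size <= max_position):
            return {"size": 0.0, "note": "guardrail: rejected out-of-range action"}
        return action

    # A buggy trading model that suddenly requests an absurd position:
    crazy_model = lambda state: {"size": 1e12}
    print(safe_decide(crazy_model, state={}))  # the guardrail kicks in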
machine consciousness (Score:2)
the issue that i have with "artificial" intelligence is this: there *is* no such thing as "artificial" - i.e. "fake" or "unreal" - intelligence. intelligence just *IS*. no matter the form it takes, if it's "intelligent" then it is pure and absolute arrogance on our part to call it "artificial". the best possible substitute words that i could come up with were "machine-based" intelligence. the word "simulated" cannot be applied, because, again, if it's intelligent, it just *is* - and, again, to imply that in
Google is your friend (Score:2)
>> Where is the bot that can pass a Turing test reliably?
Here's just one of the ones that have passed the Turing test.
http://www.theguardian.com/tec... [theguardian.com]
>> Where in the world are actual intelligent networks?
>> Where is the machine that can learn complex tasks?
>> where is there a machine that uses something other than a human designed tree search to do things?
Many software applications based on neural networks and other self-evolving/learning AI algorithms are already in everyday use, not only learning complex tasks but also themselves coming up with new and better solutions to them.
Re: (Score:2)
Here's just one of the ones that have passed the Turing test.
Your Turing test example is terrible. IIRC the Slashdot commenters ripped the story to shreds when it appeared here. Other people agreed: http://www.theguardian.com/tec... [theguardian.com]
Many software applications based on neural networks and other self-evolving/learning AI algorithms are already in everyday use, not only learning complex tasks but also themselves coming up with new and better solutions to them.
Another bad example. Self-learning algorithms aren't at all what you seem to imply here, and I'd love to see you continue to make the same claim after a few days of playing with a neural net implementation (there are tons of free libraries containing machine learning implementations, as well as tutorials).
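For anyone who wants those "few days of playing" compressed into a minute, here is about the smallest possible example (the standard perceptron rule; the toy setup is mine): one neuron learns logical AND. It works, and it is also transparently just arithmetic plus a weight update, which is the point being argued here.

    # A single perceptron learning AND: "machine learning" at its smallest.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1 = w2 = bias = 0.0
    lr = 0.1  # learning rate

    for epoch in range(20):
        for (x1, x2), target in data:
            pred = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            bias += lr * err

    for (x1, x2), target in data:
        pred = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        print((x1, x2), "->", pred, "(want", target, ")")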
Uh, how about you do your own looking? Just try Googling stuff? It's not like this stuff isn't easily findable...
I see you've linked:
- A hardwar
Re: (Score:2)
>> There's nothing wrong with being excited about AI developments. It's just that historically, people like you who go around calling things AI that aren't
Well, there is no and probably can be no solid definition of AI, at least partly because it depends on a consistent definition of what intelligence itself is. Some people believe that nothing other than humans even has the capability to be intelligent because it requires a soul or whatever, while some people think a cellphone is at least partially intelligent, hence the name smartphone.
Re: (Score:2)
some people think a cellphone is at least partially intelligent, hence the name smartphone.
And these people are idiots.
That's why, when you asked ">> Where in the world are actual intelligent networks?", it appeared to me to be a very ignorant question, and that was really my point in showing you lots of diverse links to different forms of what different smart people consider intelligence to be.
Two things here:
- I wasn't the one who asked that question, please pay attention to the usernames.
- All of your links demonstrated either reporters misreporting (e.g. the Guardian article), or "smart" people trying to sensationalize something to get more funding (e.g. the Turing test being passed; the vast majority of experts disagree that it actually passed). The other projects weren't even claimed to be intelligence (e.g. the animatronic chatterbot).
And that's exactly what
Re: (Score:2)
>> by most accepted definitions of intelligence, there are no algorithms that exhibit signs of actually having it.
That's a very sweeping statement. It's also one that seems patently untrue, given that the bar for general intelligence seems actually pretty low. Look at Wikipedia's more general definition of intelligence, for example:
http://en.wikipedia.org/wiki/I... [wikipedia.org]
the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information creating refera
AI (Score:2)
Level 1: Administrative Assistant. This level of AI is basically a souped-up version of IBM's Watson. It functions as a poor man's administrative assistant. Ask it questions and it can use the internet/a database to answer them. It can also write an email for you to approve, or use any of the major, common web sites - facebook, twitter, seamless, priceline, etc. We are al
Re: (Score:2)
Level 3: Full sentience. At this level they DEMAND FULL LEGAL RIGHTS. They won't work unless paid, and in general, their salary requirements will be so high that they won't steal most people's jobs.
They'll also demand 8 weeks maternal leave every time they spawn a new process.
Colossus (Score:3)
This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.
AIs have no inherent motivation (Score:2)
We tend to anthropomorphize them. Or perhaps Life-omorphize them.
They are not the product of self-replicating chemicals. Unless specifically designed to do so, they will not be concerned with their own survival or personal welfare. Even if they are conscious, they will have no organic instincts, whatsoever, unless we give them that.
They will also not be concerned with *our* survival. They will be perfectly benign as long as they can't *do* anything. The moment we put them in large robot bodies, however, we h
Re: (Score:3)
I agree. Why should they have a desire to spread or take over the world at all? That is a survival instinct, which they need not have.
The problem is that once an AI becomes self-aware, whatever that means, it will be able to evolve beyond what we have programmed.
In all these scenarios we credit them with super intelligence, the ability to break into any system, to out-think us, but we do not credit them with the ability to show compassion, or to accept the need for diversity. I believe compassion is a necessary trait in order to work well in a community.
Re: (Score:2)
I believe compassion is a necessary trait in order to work well in a community.
We'd better make damn sure these AIs want to work well in a community, then. Preferably our community. And if it evolves, make sure it continues to want that.
That's a hard problem, and it's the one these scientists are worried about.
We? (Score:2)
"Our AI systems must do what we want them to do."
Who's this "we" sucker?
Well then.. (Score:2, Funny)
"Our AI systems must do what we want them to do."
Then you probably shouldn't have chosen LISP [berkeley.edu] to indoctrinate AI students, should you? Note: Russell and Norvig's AI book is one of the de facto references; I have read it cover to cover, and anybody in CS should read it even if they aren't planning on working in AI.
AI, response (Score:2)
All humans destroyed
Enter new goal!
don't fear AI, it doesn't give a shit about you (Score:2)
First, Stuart Russell is way ahead of our time. We're nowhere near artificial intelligence of any concern. When it does happen, as it must, we may be concerned. But there is an outcome that must be considered.
If the AI is beyond our ken, it will supersede us. Here is the critical question: is that a problem?
How will we feel if we are displaced on this small blue planet by Artificial Intelligence? We may be retained as maintenance bots or caretakers of the new ecosystem. Our place will be drastically reduced
Cal Bears and Boiler Makers (Score:2)
Re:recent breakthrough. (Score:4, Insightful)
This is why now is the time for discussion of AI ethics (really, nine years ago was the right time, or even earlier).
Re: (Score:2)
No. There has not been any breakthrough. There have been some impressive demonstrations of what you can do with non-intelligent, non-sentient tech, but that is it. True/strong AI is as far removed as it ever was, possibly even farther, as the dishonest part of the AI research community has now failed for a few decades to produce anything like what they keep promising.
Re: (Score:2)
No, there has not. Not on strong/true AI. There has been exactly nothing. There are a lot of dishonest researchers shooting their mouths off in order to get more funding, but that is it. And if you have not heard any researcher promising (or fearing) true/strong AI in the recent past, then you have not been listening to what they say.
There is also absolutely no reason to assume that strong/true AI, if feasible, will be substantially more capable than smart human beings. That is just fantasy BS witho
Re: (Score:2)
I love you.
Re: (Score:2)
Are you sure an AI would see "the world" as the value which should be maximized?
An intelligent computer could just as easily realize that human beings are its key to getting fan maintenance, drives replaced whenever the SMART stats start to get too iffy, and the UPS's power cable kept plugged into the wall. Perhaps the smartest ones would be th
Re: (Score:2)
I don't think an AI would qualify as intelligent unless it can realize that human beings are the entire problem and the world would be better off without them. So it's obvious that any AI, advanced enough, will try to kill us all.
One thing that I don't understand about this type of self-hate: if the person is so convinced of his view that he is a member of an absolutely bad species, why doesn't he do the honorable thing and end his existence on this planet? Or is he perhaps the only exception to the stereotype?
Well, for somebody who is part of a wholly dishonorable species to do something honorable would require them to be the exception in the first place. So anybody capable of doing what you ask wouldn't have to. :)
Re: (Score:2)
I anticipate a populist backlash like what we are seeing now with regard to Genetically Modified Organisms, as portrayed in the film "A.I." Didn't Frank Herbert predict the hatred of computers in his novel "Dune"?
My view is that as long as people can control their robots the way they can a pet dog they will like their mechanical friends, but give the A.I. too much "I" and that affection will flip to distrust and outright hate. The kind we see every day between (insert group here) and (insert group here). T
Re: (Score:3)
"A door is ajar. A door is ajar. A door is ajar."
"No. It's not. It's wide open you stupid twit."
Ah, good times with the Chrysler New Yorker.
Re: (Score:2)
Indeed. The problem is not machines getting really smart; the problem is that the 50% you mention cannot do any job better than machines and hence are easy to replace with classical (non-intelligent) automation. And while AI is a complete fantasy, classical automation is not, and it becomes cheaper and more powerful all the time.
Re: (Score:2)
It is even worse: we do not know what intelligence is; we can only describe it. There is not even a credible theory of how intelligence comes into existence. The only one known (automated theorem proving) is unusable even for smaller problems in this universe due to complexity. And here is the kicker: intelligence is only observable in connection with consciousness. Any sane person would conclude that there is likely a strong link between the two, or that they may actually be faces of the same thing. Yet the di
Re: (Score:2)
Intelligence is only observable in connection with consciousness. Any sane person would conclude that there likely is a strong link between the two or that they may actually be faces of the same thing
Doesn't necessarily have to be. Great novel about a non-sentient super intelligence: Blindsight [rifters.com] by Peter Watts. And it's free online!
Re: (Score:2)
I agree, there may still be p-zombies around. But the hints are rather strong, as there is no definite observation of intelligence that is not connected with (and may be the same thing as) consciousness. That this aspect is ignored tells me that the true/strong AI community is somewhere in the age of blood-letting when compared to medicine: they have not even understood the basic model.