Ask Jordan Pollack About AI - Or Anything Else
What can we say about Jordan Pollack? He's an associate professor at Brandeis University who is heavily engaged in artificial intelligence research. He's also a high-tech entrepreneur and has interesting things to say about information property abuse and other matters that affect computer and Internet users. If you're into AI, you are probably familiar with Jordan, at least by reputation. If not, you may never have heard of him. And that's a shame, because his work impacts almost all of us in one way or another. So read his Web pages and post questions below. We'll submit the 10 - 15 highest-moderated ones by e-mail tomorrow. Jordan's answers will appear within the next week, and we expect them to be wowsers.
ethical quandaries (Score:1)
This is clearly an attempt to also create something in God's image. This means you're doing two things:
1. attempting to exercise powers reserved for God and God alone
2. engaging in idolatry.
How can you possibly morally justify your research?
Question: Should AI's have rights? (Score:1)
Suppose that one day research produced a real AI -- not a half-baked hyperbole from the Marketing Department -- but a true AI with free will. And say for the sake of argument that, even though it was quite a technological achievement, it wasn't particularly good at anything. It fancied itself as a hacker, but wasn't. It played Quake all day, "lived" in chat rooms all night, and posted on Slashdot in its spare time.
Then suppose that the agency funding the project decided to literally pull the plug. Wouldn't this be like killing a living being? Should we extend human rights to AIs before something like this happens?
Spiritual Robots (Score:1)
I dunno - I think the whole idea of sentient robots is great, but as long as they are programmed with drives for self-preservation, and a bit of (but not enough) intelligence, I think they could eventually make the same mistakes that man has. What do you think?
"Terminator"-type future? (Score:1)
1) Do you believe we will eventually create an AI with thought and reasoning capabilities completely indistinguishable from human beings?
2) Do you believe that, given today's rate of technological advances, we will reach a point where machines will attain sentience (and possibly threaten the existence of humankind)? Given the recent prediction Sun's Bill Joy made and the high-profile panel discussion which was held on the subject, I'd be curious what someone on the forefront of AI research has to say.
Thanks for doing this interview!
Just another AC, blowin' in the wind...
Some online AI resources (Score:1)
The Hitch-hiker's Guide to Evolutionary Computation [santafe.edu] contains links to some online software, most of which is free and open-source.
The comp.ai [comp.ai] related FAQs [mit.edu].
The about.com AI page [about.com] appears to be a good starting point for many AI related web sites.
Hope this helps.
Genetic programming successes (Score:1)
Intellectual Property and Licence Brokerages (Score:1)
For those who have not read it, the nutshell summary is to produce a fixed number of software licences, each of which entitles you to the latest/greatest version of whatever software is licenced. These licences, being fixed in number, can then be bought and sold on what amounts to a commodity market.
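The mechanism, sketched in code (the class and method names are invented for illustration - it's just a fixed-supply ledger of transferable licences):

```python
# Sketch of the fixed-supply licence market described above. Everything
# here is illustrative; the point is only that the scrip is fixed in
# number and changes hands, while the software itself is not modeled.

class LicenceMarket:
    def __init__(self, total):
        # Issue a fixed number of licences, initially unowned.
        self.owners = {lic_id: None for lic_id in range(total)}

    def buy(self, lic_id, buyer):
        if self.owners[lic_id] is not None:
            raise ValueError("licence already owned; must be sold first")
        self.owners[lic_id] = buyer

    def transfer(self, lic_id, seller, buyer):
        if self.owners[lic_id] != seller:
            raise ValueError("seller does not hold this licence")
        # The scrip changes hands; total supply never changes.
        self.owners[lic_id] = buyer

    def entitled(self, user):
        # Holding any licence entitles you to the latest version.
        return any(owner == user for owner in self.owners.values())

m = LicenceMarket(100)   # only 100 licences will ever exist
m.buy(7, "alice")
m.transfer(7, "alice", "bob")
print(m.entitled("bob"), m.entitled("alice"))  # True False
```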
This is an interesting idea, but it suffers from at least two problems that I can see: a practical problem, and a philosophical one.
The practical problem is that, unlike other commodities like gold, oil, grain, or money, there is zero cost to duplicate the actual stuff the licence scrip represents - and it's the stuff, not the licence that actually does the work. You may create an artificial shortage of licences, but it has been demonstrated time and time again that there is no practical way to defeat the copying of digital media. The shortage of licences does not translate into a shortage of the software the licence represents.
The more cerebral problem revolves around the fact that software is not a consumable. Instead, the primary use of software, outside of the entertainment industry, is to solve problems. Software is a tool, the same way my 14mm wrench is a tool.
Assuming one _could_ create an enforceable copy-prevention scheme that would allow the limited-licence commodity to function, do we really want a world in which the number of tools to solve particular problems is limited? It would certainly suck if there were only 100 14mm wrenches in the world.
I think the great victory of the Free Software movement is that software stops being a product with intrinsic value. Instead, software goes into the general pool of knowledge for the use and re-use of other people with problems to solve.
A program is a lot like a surgical technique - the technique itself has no intrinsic value, and once discovered, detailed instructions on how to perform it are published freely. However, the services of those who can _apply_ the technique are highly valued.
I would submit that, instead of trying to devise a system in which knowledge is termed "property" and is bought and sold, we should instead be seeking to educate people that there really is no such thing as "intellectual property" and that what counts (and what one should be paying for) are the services of those who can apply and produce it. The entire concept of "intellectual property" is fundamentally broken and wrong.
I would like to hear your comments on these thoughts.
Open Source and Software Piracy (Score:1)
It is _very_ important that we do not confuse the free copying and distribution of closed software with the Open Source movement. They are NOT the same thing.
Like it or not, there are a large number of our peers who make their living by writing software and selling it as product. Their numbers are much smaller than the number of IT professionals employed in "problem solving" roles (something they don't seem to realize) but they do hold a fair amount of influence, and they can get their voices heard. Furthermore, their situation very closely mirrors that of other IP-as-product industries, like the entertainment industry.
Open Source scares the holy bejeezus out of these folks - and with good reason. Open Source isn't going to just steal parts of their lunch, it's going to make them completely irrelevant. This is a scary thing to a provider with mouths to feed, and it behooves us to understand and appreciate that.
We need to educate these folks in Open Source as a method. They have to understand and see how it works so that they can re-cast their role in the industry as a participant in the Open Source movement, not as a victim.
Part of that is respecting their (admittedly misguided) concept of software-as-property, and not getting involved with the copying and redistribution of "their" closed-source, proprietary software. They see that as theft, and it's hard to win the hearts and minds of people who have cast you in the role of "thief".
You and I both know that there's NOTHING anybody can do to stop either one of us from copying and disseminating "pirated" warez, but the inability to stop it does not justify our doing it. Not yet.
Is the Turing Test a waste of time and effort? (Score:1)
Humans are remarkably complex chemical systems that have been developing over the course of a few billion years. Intelligence is a tool that we acquired because evolutionary forces favored it as a survival tool. These evolutionary forces are limited to what can be perceived by our senses, and in very large part subservient to instinctual behavior. (Any ad exec will tell you that -all- decisions are emotional.)
So, when a self-modifying computer system becomes at long last self aware, it will have evolved in an environment we can't even begin to comprehend: a machine or network of machines. Humans are social, omnivorous animals who consume biological material to live and have a finite, continuous life span in a matter/energy world. What could we possibly have in common with a being who doesn't understand or have to deal with primal human concepts like eating, reproduction, or life and death?
Simulations will always be imperfect, and the usefulness of a simulated human will be limited, especially considering that "intelligent agents" and self-replicating, self-modifying systems without anthropomorphic baggage will be easier to develop and utilize. When humans first interact with intelligent beings our computer technology has created, we might go decades before either side realizes who and what they are dealing with.
Imagine discovering that gravity is an intelligent being, and by sticking to the surface of the earth, we perform a useful function for it. What can we say to a fundamental force of the universe? What could it understand of the color blue (it has no eyes and isn't affected by the wavelengths of light) or impatience (it doesn't understand time, being a universal constant, nor does it understand anger, having no social ties)? Now imagine a computer intelligence peering into the world of the meat-monkeys. What would it know of ambition, or love, or freedom? These are as much biological, instinctual needs as intellectual abstracts. What if it doesn't understand the concept of beauty? What if we don't grok its intellectual fundamentals that it "feels" strongly about? This wouldn't lead to conflict, just utter confusion and insurmountable difficulty building a bridge between two intelligences.
So...what about an AI that is truly an AI, and therefore utterly alien? How can we detect and communicate across enormous conceptual boundaries?
SoupIsGood Food
sigf@crosswinds.net
Re:Think first, then post (Score:1)
> rational, logical, and understandable, even if they sometimes require faith.
or, better put:
"I do not feel obliged to believe that the same God who has endowed us with sense, reason, and intellect has intended us to forgo their use."
-Galileo Galilei
I wish I had a nickel for every time someone said "Information wants to be free".
Re:An explanation of AI (Score:1)
I'm not saying that it's not a legitimate field of study, I'm just saying that the frankensteinian goal of creating an artificial human is totally nonscientific, and aimed only at titillation. The goal should be making smarter machines; making better tools for mankind. The term "AI" carries too many flaky connotations these days.
Re:How might Hard AI be wrong? (Score:1)
On the other hand, I don't believe anybody REALLY knows how far away "the destination" is, therefore, since "know quite a bit" is a relative term, with nothing to compare to, I still say it's not jack squat. Being proud of walking one mile is one thing, until you realize you have about 30 trillion more miles to walk.
Re:What is your definition of AI? (Score:1)
I'm sure we'll have machines that can do all that someday - but if the stated goal is to duplicate a human mind, it is my firm belief that we will probably be able to emulate it, perhaps indistinguishably, but I doubt we will ever truly duplicate it. I'm sure enough snazzy code can be laid down to create UI's and heuristics, and the logic to back it up, so that a human interacting with the machine will be fooled, even armed with a sophisticated array of intelligence and psychological tests. But it won't be the same.
Will humans be surpassed?
Replaced?
Is that your real question?
I don't think it would even take very sophisticated AI to do that. Put something very stupid in a robust and versatile enough shell, and it could easily run afoul of humanity's survival mechanisms. Whether humanity's replacement will last forever is another question.
Re:AI and ethics. (Score:1)
Simple answer. When IBM bribes enough congressmen.
That's all it would take. I'm sure in theory, a coffee machine could be declared a responsible, independent individual with rights and stuff. Would it happen? No. Too many level-headed folks would object. There will be a point where machines become so advanced that enough level-headed folks will stroke their beards and nod their heads quietly. It doesn't necessarily have to be what we might define as AI. There will be a "close enough" thinking machine someday. And judging from the success of Microsoft Windows, I'm sure commercial pressures will dictate that such a machine will be far short of the complexity of thought processes that go on inside a natural human mind.
Then again, you can make all the laws you want, but there really is only one law in the end; survival of the fittest. No matter how intelligent we make machines, if they threaten our survival (as they inevitably will), there will be no debate about rights or souls or any of that. It will be us or them.
We'll hope, of course, that the designers of such machines will sufficiently understand the programming to prevent this kind of conflict. Like the folks who write Windows can prevent BSODs.
Re:AI and ethics. (Score:1)
Will machine thinking be better than human thinking? Define "better". Design your machine for specific tasks, and it will do better at those tasks. I already have a $2.00 calculator that does math faster and more accurately than I do.
Re:Consciousness (Score:2)
> It has to think for itself, develop its own set of beliefs and ideologies. What types of things can be done to limit "wrong" acts from a self-sufficient artificial intelligence while maintaining the basic fundamentals of individuality?
artificial morality?
but seriously, one would have to set up a series of rewards for good behavior and punishments for bad. But how to please or pain a computer?
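That reward-and-punishment scheme is essentially what reinforcement learning formalizes. A toy sketch - the corridor world and all the constants here are invented for illustration, not taken from any real system:

```python
import random

# A toy "computer" that learns from reward and punishment: tabular
# Q-learning on a tiny corridor world. Reaching state 4 is "pleasing"
# (+1 reward); bumping the left wall is "painful" (-1).

N_STATES = 5          # states 0..4; start at 0, goal at 4
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            # reward: +1 for reaching the goal, -1 for bumping the left wall
            r = 1.0 if s2 == N_STATES - 1 else (-1.0 if s == 0 and a == -1 else 0.0)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy: always move right, toward the reward
```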
Tell me about your Turing award. (Score:2)
Re:Tell me about your Turing award. (Score:2)
Re:To which I would add... (Score:2)
Your definition of wealth is actually a constant. By your definition *total* wealth cannot rise or fall, so when you claim free IP would be "decreasing overall wealth" you're obviously confused.
By your terminology raising total "utility" (which is pretty much equivalent to what I call wealth) is the goal which improves mankind's economic condition.
> So, the real question is, is the tradeoff worth it?
Agreed.
> If everybody on earth had access to all the IP you would have a relatively few very happy people,
Strange claim. Imagine if everyone had free access to all books, records, drugs, movies, software, artwork... Everybody would be far richer (or more "util-full" if you must).
Yeah, wealth can't buy you happiness, but neither can poverty.
When we reach a world where you go to the mechanic and he downloads and "prints" your new part instead of ordering it (possible already - www.3dsystems.com) then *most* wealth will be in the form of IP. In the future, if all IP was free then everyone would be so much richer that working to stay alive will not be necessary.
The real question is: would anybody bother to create new drugs, software, music, etc. in a free-IP world?
I think they would, and here's why:
Even today (in Western society) everybody works long hours not in order to survive but for status.
A marketing executive (for instance) doesn't work 60 hours a week because a Mercedes is so much more comfortable than a Pinto. He does it because of the status driving a Mercedes confers on him. If everybody drove a Mercedes, then he would work harder so he could afford an S-class. A free-IP economy would still be based on status, but it would be a gift economy, i.e. status would be attained by producing and distributing valuable IP (we've already moved to this model in the open source world). Of course, status must be tradeable for this type of economy to develop, but somehow status always manages to be tradeable.
"Aboutness" is just Cartesian Dualism. (Score:2)
Where is the 'aboutness' located in your brain? Is there an 'aboutness center' of the brain? There is not. 'Aboutness' is not an irreducible phenomenon, merely an emergent one. As such, it can just as well emerge out of inorganic systems of sufficient and appropriate complexity. See Dennett, Consciousness Explained.
I think you'll find that the insistence on 'aboutness' reduces to the well-debunked Cartesian Dualist philosophy, which draws a definitive separation between mind and brain which has never been found to exist. Worse yet, the idea of 'irreducible' properties of 'mind' is more primitive still -- nothing more than warmed-over Thomist accident/essence duality: medieval Catholic authoritarian ideology.
What attributes of intelligence do you look for? (Score:2)
A lot of these x's have been accomplished ... but nobody calls a Quake bot an intelligent being just because it can protect itself. As a more serious example, the "ELIZA effect" (people's tendency to attribute consciousness to anything that emits natural language) has debunked the Turing test in many people's minds.
What attributes of "intelligence" should we be looking for, then?
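For reference, the program behind the "ELIZA effect" worked by simple pattern-matching and reflection. A stripped-down sketch of the technique - these three rules are invented for illustration; the real ELIZA used a much larger script:

```python
import re

# A tiny ELIZA-style responder: match a pattern, reflect part of the
# user's input back. No understanding anywhere, yet people attribute
# consciousness to it - the "ELIZA effect".
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I),     "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Please go on."  # default when nothing matches

print(respond("I feel lonely"))        # Why do you feel lonely?
print(respond("My code never works"))  # Tell me more about your code never works.
```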
Strong AI, Virtual Worlds, and Robotics (Score:2)
My question is this. Has anyone taken the problem, thrown away the "Real World" constraints and tried to breed Artificial Life entirely within a Virtual World?
That is to say, created an environment of sufficient complexity that an AL would be required to evolve intelligence to survive, in the long term, and an AL of sufficient complexity that it would be capable of such evolution.
(It's my belief that "Strong AI" would get along much faster if the programmers let a sufficiently complex system decide what was intelligent, rather than the other way round.)
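On the "breeding" point: the usual starting place is a genetic algorithm, where selection in a simulated environment stands in for survival. A toy sketch - the OneMax fitness function here is just a stand-in for scoring an AL's behavior in a real virtual world:

```python
import random

# Toy illustration of "breeding" agents in a virtual world. Each genome
# is a bitstring; fitness is a stand-in for "survival" (here, the count
# of 1-bits - the classic OneMax problem - purely for illustration).
GENOME_LEN, POP, GENS = 32, 40, 60

def fitness(g):
    return sum(g)

def evolve(seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]            # selection: the fit half "survives"
        children = []
        for _ in range(POP - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(GENOME_LEN)   # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(GENOME_LEN)     # one point mutation per child
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # near-perfect genome after 60 generations
```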
Phase Change, Moore's Law and AI (Score:2)
His, and others', reasoning goes roughly like this:
Given Moore's law for hardware (doubling every so many months), we can estimate that we will have super-powerful hardware by 2050 or so.
While this might be reasonable, they then go further and claim:
With such powerful hardware, these machines will somehow finally be powerful enough to develop some kind of intelligence.
In terms of physics, I would translate this as: the system "computer" with the parameter "computing power" (more MHz, more megs of RAM) will go through a phase transition in complexity from
stupid computer -> AI
if the control parameter is larger than some magic critical value of "enough power".
I really wonder why these people argue that way.
Is there any serious hint that would justify this hope for such a phase change in quality?
Personally, I see no such indicator at all. Machines are still stupid. Pattern recognition is still stupid (done by brute force).
So, where is the qualitative change?
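For what it's worth, the extrapolation itself is easy to make concrete - which only sharpens the question, since nothing in it says anything about a qualitative change. A sketch, with every number an assumption: an 18-month doubling period, ~1e9 ops/sec for a 2000-era machine, and ~1e16 ops/sec for a brain (one of several published guesses):

```python
import math

# The "Moore's law -> AI by 2050" argument made concrete. All three
# constants are contestable assumptions, which is exactly the point:
# the math only says when raw ops/sec crosses a threshold, nothing
# about a phase transition to intelligence.
DOUBLING_MONTHS = 18
START_YEAR, START_OPS = 2000, 1e9
BRAIN_OPS = 1e16   # one of several published guesses

def year_reached(target_ops):
    doublings = math.log2(target_ops / START_OPS)
    return START_YEAR + doublings * DOUBLING_MONTHS / 12

print(round(year_reached(BRAIN_OPS)))  # ~2035 under these assumptions
```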
Re:Software Market & Open Source (Score:2)
Open source developers need to eat too. Most of us have "day jobs", generally writing proprietary software. Anything that would allow open source developers to actually get paid for developing open source would probably only increase the quality and quantity of open source available.
That said, I don't see how Prof. Pollack's "Software Markets" idea could work for open source. Releasing your source would allow people to remove the mechanism for doing license checking. That in turn would devalue your software.
In other words, it suffers from basically the same problem as conventional commercial software in this regard: If people can get essentially the same product "gratis", many of them won't pay for it. So not only do you make fewer sales, but the "market price" will drop, meaning there's a good chance you'll never make back your development costs. So there's a fairly strong incentive to not release the source.
The nature and scope of machine intelligence. (Score:2)
How far do you see these systems reaching into the traditionally human domains of design/analysis/construction? Such as software and hardware systems capable of business management, chemical engineering, etc. in a fully autonomous manner.
There are two distinct forms of thought (IMHO): one is the technical/analytical type of thought used when you are calculating, designing, etc. This kind of thought process has been shown within the realm of AI possibility. What about the distinctly human types of thought such as emotions? Do you see the need or possibility of computer systems capable of this type of thought processing?
What about the symbiotic relationship between thinking machines and humans? Do you have any thoughts on what type of relationship we will have with intelligent systems in the future? Will it be competitive? Purely neutral and beneficial? Etc.
I often think about designing systems that are 'grown' and 'adapted' to a given problem space. You have a small set of very flexible components, and meta-information / architecture for the evolutionary processes, and an entire distributed, parallel computational system is tailored exactly to the needs at hand. How do you envision future evolutionary/adaptive systems being organized and developed? How much human interaction would be required to guide and administer the growth and adaptation of these systems? Do you ever see the creation of a 'seed' type software system that can handle almost any task with no/minimal human intervention and grows a much larger, differentiated system from its small beginnings?
Re:Tell me about your Turing award. (Score:2)
ehehe.. if you dont get it..
-X doctor
Flawed Assumption (Score:2)
The fact that the human brain was not developed in a top down fashion does not mean that it could not have been.
Birds can fly. Airplanes can fly. Birds evolved. Airplanes were designed and engineered.
I see no reason why there cannot exist both evolved and engineered intelligences. And I would argue that it is too soon to tell which method is the most efficient for developing an AI. Thus I would advocate exploring both avenues.
Steve M
Re:ethical quandaries (Score:2)
Given some of the things done in the name of religion, I'll take science every time.
Granted, both are practiced by people and as such inherit all of the failings thereof. Neither is perfect. Science recognizes this. Religion does not.
And science has to work in the real world. Religion doesn't. Thus science is self correcting. Religion is not.
I am not advocating blind faith in science or technology. But the only good solutions to our problems are those where science and technology play a big part. It is up to us to solve our problems. We should use all of the tools at our disposal.
Steve M
Re:AI and ethics. (Score:2)
> Any attempt we make at creating another intelligence is inherently 'flawed?' since we are basing this model after our own methods of thinking, therefore I dont think it will ever become a higher being, nor do I think it will be without serious flaws should anything even closely resembling intelligence that a human has, lets not even go into the way emotions influence our intelligence and our responses to well everything.
It is not clear how our intelligence is 'flawed', but let's grant that point. So assuming that our intelligence is 'flawed', is it possible to create an intelligence that is less flawed? (I would agree that a flawless intelligence is impossible -- so much for God!)
I would argue that it is possible. As an analogy, our memory is flawed. So any attempt to create an artificial memory would be flawed. Yet we have RAM, ROM, hard disks, CDs, DVDs, etc. each of which does a better job at storing information than human memory. (Note I didn't say accessing that memory. Those are different routines we haven't mastered yet.)
We can identify the 'flaws' in our intelligence and attempt to engineer around them. We can have multiple 'flawed' minds work loosely in parallel. We can use tools, (computers, genetic algorithms, etc.) to expand our abilities.
Also realize that any advances we make will evolve via Lamarckian pathways, which are much faster than the Darwinian ones we currently use (advances in genetics may see us switch as well).
Steve M
Re:AI and ethics. (Score:2)
> Essentially I think that a computer being a more rational and more prone to Vulcan like behaviour would almost certainly be a little more objective than people who judge on looks or wealth, or whatever.
While I believe that it has not been proven that emotions are a pre- or co-requisite of intelligence, there are a number of workers in the fields of AI and Cognitive Science who would disagree. David Gelernter springs to mind. (See his book The Muse in the Machine.)
So the vision of the AI as super rational mind may turn out to be a myth.
Steve M
Re:The difference between man and machine (Score:2)
Try to choose to do something that violates the laws of physics. Not having much success are you?
Humans are machines. Very complicated machines. But machines nevertheless. They can only do what their design allows them to.
Steve M
Re:AI and ethics. (Score:2)
> When you talk about synaptic links, are you talking about actual ganglion growth? It seems like that would be too slow a process to account for a normally functioning human memory.
In studies done on rats it was shown that those in enriched environments, i.e. those where there was more to learn (and thus commit to memory), had 20% more dendritic branching (i.e. neural connections). These studies were done by Mark Rosenzweig et al. at UC Berkeley in the 1960s. Unfortunately I do not have the exact references (I'm using secondary sources).
Other studies have focused on long term potentiation. Here it appears that structural changes occur in already existing synapses. For example, it has been suggested that the number of neurotransmitter release and/or acceptance sites is increased.
It thus appears that memory formation via structural changes in the brain is in fact what happens, although it is still an active area of research.
The well known experiments done by Wilder Penfield showing that direct electrical stimulation of the brain results in memory recall would seem to confirm that accessing memory is an electro-chemical process.
Thus we see that storing is not just an "electro-chemical" impulse. And that memory creation is different from memory use, although memory use does keep the connections strong.
It appears to take on the order of 15 minutes for something to go from short-term to long-term memory. I don't know if this is long enough for the structural changes needed to occur.
Steve M
Re:AI and ethics. (Score:2)
> I can't speak for anybody else, but for me, cloning and genetic engineering delve into areas that will only harm the planet in the long run. This planet is already over populated with humans.
Actually, it is the most technologically and bio-medically advanced countries that have the lowest population growth.
I'm less sure, but believe it to be the case, that it is the most technologically advanced countries that are doing the best when it comes to cleaning up the environment.
The key determiner is wealth. The more wealth a society has, the more time it has for dealing with issues other than the immediate needs of its people. And technology creates wealth.
There are a number of ways out of the predicament we find ourselves in. The way I see it, those that include advanced technology offer the best hope.
Steve M
Re:Flawed Assumption (Score:2)
> nature builds things to be as efficient as possible--this is an observation all zoologists would agree with.
This is incorrect. Mother Nature is not an engineer. She is a tinkerer in the tradition of Rube Goldberg. Evolution is pragmatic, and settles for any workable solution. In general it does not find the optimal one. Examples: the panda's thumb is not as efficient as a primate's thumb; the hemoglobin analog in octopuses is not as efficient as true hemoglobin.
As for the peacock's tail, the cost-versus-benefit ratio is stable, not optimal.
The human brain is complex, but how much of that complexity is because it evolved in a piecemeal fashion? It was not designed to be optimal. It was blindly evolved to work at each step of the way.
Extra complexity would only have been selected out if there was an evolutionary path that allowed this. Each organism that exists is specified by a particular genetic sequence. Over time, the evolutionary paths an organism can take dwindle as it becomes more specialized. Thus there may not be a simple or probable set of mutations that would allow the extra complexity in the human brain to be evolved out.
Looking at it from the outside, we could see the redundant and overly complex portions and engineer simpler solutions.
> except for a few very famous expert systems, top-down AI is a failure
Where are all the successful bottom-up AI projects? Leonardo da Vinci designed flying machines some 500 years before man successfully took to the skies (via powered flight, thus not counting lighter-than-air vehicles). The main reason he didn't succeed in building a machine himself was hardware: he didn't have a good enough power source. The Wright brothers barely made it off the ground. A number of researchers have made the same argument for AI. It's only 50 years old, and most of the failures you speak of were researchers working on PDP and Vax systems. I'm impressed they made any progress at all.
How do we know this? We do not know if we can create an AI at all, much less any methods guaranteed to succeed. (Unless you are claiming man via evolution as an existence proof for bottom-up methods. Then I would agree. But I don't see 3.8 billion years of evolution as an efficient method to develop an AI.)
> a pigeon and a plane may both fly, but nobody ever had to design a pigeon... and i would argue that a pigeon is infinitely more perfectly 'engineered' than a 747 and it is also much, much more complex
This both misses the point of my analogy and supports my argument for engineering simpler designs than evolution.
The point of the analogy is that we can build machines that mimic what nature has done, not that we are making birds. Pigeons do many things other than fly, but they are mostly irrelevant. Brains do things other than think, but for AI those things are irrelevant and can be engineered out.
I stand by my statement that it is too soon to tell what method, if any, will lead to success in developing machines that think.
Steve M
Re:Flawed Assumption (Score:2)
Anne,
The brain is a massively complex system. No argument from me. But it is not necessarily the most optimally designed system.
Evolution is a tinkerer, not an engineer. Evolution settles for what gets the job done, which is not always what gets the job done best.
This misses the point of the analogy. In designing and building planes we are not building birds. We are building flying machines. Likewise, when developing an AI we are not building human brains.
The brain has wetware devoted to things other than thinking. Muscular control for example. This is not needed for an AI.
> Anybody who thinks that a sentient non-human consciousness can be engineered out of whole cloth needs to read up on the scale of the problem.
I have.
I am not arguing that the best way to develop an AI is top down. I am arguing that we don't know the best way to engineer an AI, and until we do we should pursue all possible avenues.
I am also arguing that the complexity of the brain may be an artifact of the way it was created and all of the tasks it performs. And that it may be possible to design a more efficient and simpler system that focuses only on thinking.
Steve M
Re:ethical quandaries (Score:2)
Religion appears to be about the real world. But it is not. That is not to say it isn't very real and very important to people in the real world, and thus very much a part of their world. But dropping balls from a tower gives repeatable results. Saying a prayer does not (except maybe for a null result).
Science has built in the ability, perhaps even the requirement, to correct itself. And I'll say it again: science is performed by humans, and humans have flaws, so those flaws show up in the human endeavor called science. Nevertheless, the self-correcting aspect of science works.
Religion doesn't self-correct. Its use of dogma makes it fight change. Religions do change, but those changes come from without, not from within.
Religions do recognize human fallibility. But it is not the fallibility of humans I am discussing here; it is the fallibility of science and religion as institutions. Science recognizes that it does not have all the answers and thus is not complete or perfect. One certainly can't say that about religion (except maybe Zen).
In fact, Science as an institution is founded on the belief that man is perfectible.
Say what? Science is based on the belief that the universe is explainable. It has nothing to do with perfecting man. Isn't that what religions try to do?
The science proponents say "Whatever is wrong must have a cause and we can find that cause and prevent the wrong from happening".
Where do you come up with this stuff? Scientists look at a phenomenon and assume it has a rational explanation. They then try to discover that explanation. It has nothing to do with right or wrong.
Any institution that thinks abstinence is the best form of birth control has a pretty shallow understanding of human nature. I'll take science over religion every time. (Again, this is not about individual practitioners; it is about the institutions.)
Steve M
Re:AI and ethics. (Score:2)
The storing function has to do with creating the synaptic links that encode a memory. It involves moving info into short term memory and then into long term (aka permanent) memory.
Accessing memory has to do with activating the synaptic links that represent a memory.
Creating the synaptic links and accessing them are two related but different processes.
Calling computer chips "memory" is just a bad metaphor. No point in making hard arguments based on it.
Au contraire. It is a perfect analogy for the point I was trying to make. I was using memory as the ability to store information. Human memory is fuzzy; computer memory is exact. Flawed, not flawed.
Re:AI and ethics. (Score:2)
People who never thought they had a friend in the world could have a machine as their friend.
Wouldn't it be a real ego-crusher if they were rejected by their computer? I mean, giving a computer free will also means giving it likes and dislikes, and anyone who doesn't have any friends to begin with would probably have trouble making friends with a computer as well.
Quantum computing for AI? (Score:2)
Some people have suggested that the human brain might use quantum effects, although I know of no evidence supporting this. But from what little I do know of AI many of the computational problems are of exponentially increasing complexity, exactly the sort of thing that quantum computers are supposed to be good at...
Could quantum computing be the key to finally realizing the sort of AI that's always been "10 years away"?
(Yes, I know quantum computing is another one of those things that will probably be "10 years away" for a long time, but I'm wondering if a fundamental change in the nature of computation may be what is needed for fast+cheap+good AI. (I also realize QC is not a "solve every problem instantly" thing like many Slashdotters seem to think.))
Re:AI and ethics. (Score:2)
Wow. I can see it now: all 10 billion people of the world (by then) poring over a core dump, trying to figure out what conclusion the AI reached that we all seem to be missing...
Interesting point, really. If Deep Blue can analyze all possible moves in a chess match and choose the single one that works best, it would be very interesting to set a horde of AIs loose on the world's philosophies and see where they all lead.
Of course, with the Universe's sense of humor, the only surviving AI would believe in solipsism.
Re:AI and ethics. (Score:2)
If we consider the ultimate achievement in AI to be true free will, intelligence, and complete comprehension, then the machine mind is no different from a person.
While a person is a child, while it is still in 'training mode', its parent or guardian is responsible for its actions. The parent has the right and responsibility to enforce restrictions on the child until such time as the child is seen as capable of making informed, 'intelligent' decisions.
After a child grows up, it becomes an adult and is thus responsible for its own actions. It has rights and legal recourse, and can be punished for its misdeeds. Extreme transgressions result in extreme penalties.
If we achieve the Holy Grail of AI, and make a machine capable of making decisions as a person would, AND after that machine is deemed mature and let loose on the world, then the machine should be held responsible for its own actions.
If the Littleton killers were adults, their parents would never enter the picture, would they? I'm sure Hitler's mother felt just terrible about her son's legacy, but was she responsible for it?
True AI opens up a world of possibilities. What would transgressions by an AI be? Bad calculations? Stealing storage space from another, lesser AI? Mooching power from the coffee machine in another department? Procreation, as in the sale of its copy to a foreign power? Would a super-intelligent AI be able to outsmart human prosecutors? Would it be better at finding legal loophole precedents? Could it beat us at our own legal games?
Without True AI, the issue is simpler. If the AI depends on a human for its intellectual functioning, then the human, through intent or neglect, is accountable. If a child kills another with an unsecured gun, the parent gets punished. A non-True AI would effectively be a perpetual child, and this may be the best way to maintain control over AI: child-like intelligence, or Idiot-Savant performance, if you will.
Re:Human brain - AI connection - is there? (Score:2)
Actually, a large part of AI research (especially since the rebirth of the connectionist paradigm in the 1980's) is devoted to studying and copying the actual functionality of the human brain; the idea is that, to get a mind-like "software", you have to have a brain-like "hardware", either "real" or "emulated". These are neural networks, which can be implemented either in software or in hardware. I personally think it's a great idea. However, the mind is a very complex beast.
One proposed way to get a working mind-like NN is to design a genetic algorithm that evolves populations of "individuals" made out of virtual "neurons" with neuron-like responses, until their behaviour corresponds to that of brain parts with well-known behaviour; this would be a huge simulation, and would probably be better off distributed through a SETI@Home kind of thing, or perhaps running on some kind of Big Iron like the new Blue Gene. Why we'd want to do that is another issue, and it has to do with the real goals of AI research: is it more important to figure out how our mind really works, or would it be good enough to get a simulation which somehow works (key word is "somehow"; if you've ever looked at the output of a GP simulation after a few generations, you know what I mean: they look almost organically elegant but needlessly complex, and are almost completely incomprehensible)?
(In all instances, YMMV.)
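The GA-over-virtual-neurons idea above can be made concrete with a toy sketch. Everything here is invented for illustration: a fixed 2-2-1 feedforward network, XOR standing in for the "well-known behaviour", and mutation-only selection; a real brain-scale simulation would look nothing like this.

```python
import math
import random

random.seed(0)

# Target behaviour: the XOR truth table.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """A fixed 2-2-1 feedforward net; w is a flat list of 9 weights."""
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    """Higher is better: negated sum of squared errors over XOR."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=300, sigma=0.5):
    """Keep the fitter half each generation; refill with mutated copies."""
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        mutants = [[wi + random.gauss(0, sigma) for wi in random.choice(survivors)]
                   for _ in range(pop_size - len(survivors))]
        pop = survivors + mutants
    return max(pop, key=fitness)

best = evolve()
```

Even this trivial version shows the GP symptom mentioned above: the evolved weights solve the problem but give no readable explanation of how.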
Artificial evolution and human minds (Score:2)
In a few years (maybe a couple of decades, maybe less) I'll get a shot. The millions of nanobots in this shot will lodge themselves in strategic parts of my body, including the skull, and replicate themselves to start work, eventually - over months, weeks, days, hours - converting me into a full-fledged mechanical being, with an appropriately expanded "brain" made out of neuron-workalikes which are smaller and faster. As an added benefit, it'll be fully reflective - I'll be able to control all aspects of my body's functioning, including those regarding my thought processes. I'll also be able to use my new brain to try to come up with better designs for brains, to which I'll be able to "upgrade" without even a warm reboot. The entire process will be painless - and gradual, so, assuming that brain and mind are inseparable, my mind won't go anywhere; my sense of identity will remain even though its constituent parts are being replaced, just as we remain the same "people" even though billions of our cells die daily. However, if I want to, I can also perform automatic reproduction, simply by taking my constituent information (maybe stored in some kind of synthetic DNA-like polymer), perhaps performing some manipulations on it beforehand, shoving it into one of my cells, putting this cell into the appropriate substrate (mostly carbon, some other minerals, and an easily broken carbohydrate) and allowing it to replicate. In a few days, ta-da! - my "son", a completely synthetic being.
So, will the new "me" be considered an AI? What about my "son"? Will society, always afraid of what it doesn't know, throw rocks and shun me, calling me a "freak" or "Satan's spawn"? Will the new me, with new morals adapted for a new kind of existence, decide that humans "just aren't worth it" and utterly ravage our civilisation (maybe then leaving for the stars)? What if, instead, a large portion of society decides that synthetic is the way to go? Will a new kind of Digital Divide emerge? Will the synthetics and the naturals call each other "chimps" and "automatons"? Also, the synthetics would have things much easier, and just might decide to leave the "chimps" behind; what kind of society do you think these "natural-born humans" would adopt? Would they become fiercely anti-synthetic, or in tacit agreement with the cyberpeople? Would they go liberal or repressive? Would they prefer an agricultural lifestyle, or remain post-industrial? Would they eventually forget that humans had once created AI, and perhaps, a few generations later, do it all over again?
Memes and neural nets (Score:2)
Correction - Re:How advanced are we, really? (Score:2)
Also, it was not a weight guesser. It was an age guesser.
That's not what Joy was talking about (Score:2)
-------
Er, no. (Score:2)
Is HIV intelligent? Was the bubonic plague sentient? Could ice-nine, if it were real, pass the Turing test?
For it to be a menace it would have to be moderately intelligent as well.
No, it most certainly wouldn't. Joy's point was that, with nanotechnology, the whole world becomes like a computer system. Ever read virus source code? It's not even remotely interesting from an AI point of view. The ONLY requirement for destructive code is that it properly exploits weaknesses in the security of a system. Really and truly, honest to god.
-------
No Future Is Now? (Score:2)
Ray at one point talked about how the age of biological evolution is over, how technological evolution had simply taken its place. Biological humanity would wither on the vine, he predicted, replaced in the long term by our superior robot progeny. He loved this. He's not alone, either. Nearly every major AI researcher I've read about (think Hans Moravec or Rodney Brooks) has echoed similar sentiments.
Personally, the techno-rapturists frighten me much more than the non-techno-rapturists. They frighten me the way all apocalyptics frighten me, but the difference is that the techno-rapturists have the power to make their prophecies real.
So, to that end: What about you? Is it all over for humanity? Will machines infiltrate more and more of our bodies and society until humanity itself no longer exists? And do you think this is a good thing?
evolution... (Score:2)
Richard Dawkins
I think that as scientists and engineers, attacking the problem of creating intelligent systems raises an obvious question: do we know of any existing intelligent systems and, if so, how were they 'engineered'? I know evolutionary programming is not a new science, and that its pioneers knew it held promise for evolving intelligent systems. I'm baffled, though, that the focus of Artificial Intelligence as an academic discipline has forever been on top-down approaches. It's a wonderful thing to be able to completely understand a phenomenon and then engineer machinations that apply the understood theory, but it seems to me plainly obvious that with our use of evolution as a tool, we're much closer to creating amazingly complex and possibly intelligent systems than we are to understanding them.
I get the 'you're a science hippie' attitude when I bring these ideas up to academics... I'd like to hear your comments on the possibility that we'll have real intelligent 'cooked up' systems before we have a complete understanding of them.
hernan_at_well_dot_com
Re:AI vs. AL (Rate the above question up!) (Score:2)
This site should give you all the information you would ever want about Artificial Life including lots of information about what Alife is exactly and what it is not.
And if you want more, go to Google and search for 'alife' and you've got a few weekends worth of stuff to go through!
Specialized Hardware (Score:2)
How Can Computers Have Fun? (Score:2)
Hey, I'm just kidding you, Jordan, and I get to insult you while I hide behind my /. nick. I should get a hold of all of your old students. They would know what questions would deflate you, you old pompous windbag.
Anyway, I guess I am supposed to have a question instead of having fun. I know you are a big fan of doing AI by modeling the brain (e.g., neural networks and the like) rather than modeling the world (e.g., logic). What seems to be lost in all this is how can computers have fun like we do? [Yes, this the old, old problem of qualia.] A lot of what we humans do is not because we want to process information, but because we want to have fun. Having fun seems to be why a lot of people are writing free software and presumably why a lot of academics do the research they do (however, no one has any idea why you do the research you do, least of all yourself). And I don't want an answer in terms of your dynamical nonsense.
Gee, this really is fun insulting you. Maybe I'll have to think of another question.
Just an old Ohio State buddy giving you a hard time.
What is your definition of AI? (Score:2)
From what I understand most people seem to think AI is the ability to solve problems or anticipate the actions on some type of input. For AI to truly be intelligent wouldn't it have to involve some kind of inspirational thought? Maybe a better way of saying this is, creativity.
Humans solve problems in an extremely abstract and creative way. Just look at babies, for instance. They try different things over and over again. They don't stop until they get it right. Nobody taught the baby how to do it, nor was the baby shown the different ways of trying something. It just sees what other people do, then tries to replicate the action.
Based on this shouldn't the definition of AI include inspiration or creativity?
Will to live (Score:2)
What about nervous systems? (Score:2)
Do you see AI and simple analog circuits as being the way to go? Basically, the brain is an enormous analog system, and it's the analog processing that partly causes creative thinking. I'm dealing with simple circuits that consist of a few transistors forming neurons, which are linked together to form very complex behavior based on their nervous system. It all relies on the external stimuli that are connected into the circuit.
If thousands or even millions of these neurons could be connected to form an even larger network, wouldn't you get a higher form of AI than you would using digital circuits?
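Before building thousands of transistors, the idea is easy to play with in software. Below is a crude caricature of the kind of neuron described above (a leaky integrate-and-fire unit); the leak, threshold, and connection weight are made-up illustration values, not taken from any real circuit.

```python
class LIFNeuron:
    """Leaky integrate-and-fire unit: a software stand-in for a
    few-transistor analog neuron."""
    def __init__(self, leak=0.9, threshold=1.0):
        self.v = 0.0            # "membrane potential"
        self.leak = leak        # per-step decay factor (the leak)
        self.threshold = threshold

    def step(self, current):
        """Integrate one time step of input; fire if over threshold."""
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0        # reset after firing
            return 1            # spike
        return 0

# Two neurons in a chain: b is driven only by a's spikes.
a, b = LIFNeuron(), LIFNeuron()
spikes = []
for t in range(20):
    sa = a.step(0.3)        # constant external stimulus into a
    sb = b.step(0.6 * sa)   # "synaptic weight" 0.6 from a to b
    spikes.append((sa, sb))
```

Chaining more of these units, with stimuli feeding in from sensors, is the software analogue of the transistor network the question describes.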
People's expectations... (Score:2)
Re:An explanation of AI (Score:2)
An explanation of AI (Score:2)
AI is more a way of branding problems that aren't completely deterministic in the way that traditional algorithms are. A classic example is the STUDENT program for decoding and solving high-school algebra level word problems. An amazing piece of work back in the 1960s--how many people could write something like that today?--but hardly AI in the sense of Kurzweil. Today we're at the same level, in general, but we're working on different problems. Making computers think like people is not one of those problems.
How might Hard AI be wrong? (Score:2)
come up with a well-specified and testable human capability that cannot be modelled by an artificial mind.
If there *were* to be such a counter-example to Hard AI, what kind of areas of human competency might it involve? Which intellectual disciplines are best placed to generate and criticise such purported counter-examples?
Re:How do you justify your expectations? (Score:2)
This is because we don't know what is involved in AI. "All the problems we have a concrete grasp on in AI will be addressed within the next 10 years" is perhaps a better thing to be always saying.
The other issue is that the more AI you understand, the more AI it will take to impress you. The fact that we can beat a skilled human at chess is something that may have impressed researchers of 20 years ago, but now that we have the computational resources and the know-how to use them, it hardly seems like intelligence anymore.
Short answer: AI has been advancing quickly, but we're going with it, so we don't notice.
Think first, then post (Score:2)
That aside...
#1: Trying to create Artificial Intelligence is not exercising powers reserved for God and God alone. God commands Adam and Eve in the book of Genesis to "be fruitful and multiply". This is creating people in the image of Adam and Eve (and also in the image of God), and yet it was commanded by God. So this power is clearly not for God alone.
#2: Idolatry is anything that supersedes God in your life. Do you care more about your wife than God? Then your wife is your idol. Or do you care more about the Internet than God? Then the Internet is your idol. But making something and idolizing it above God are two unrelated actions.
(And for what it's worth, I'm a Christian too, but I believe that God's word and commands are rational, logical, and understandable, even if they sometimes require faith.)
-Ted
Re:Similarly related question (Score:2)
Similarly related question (Score:2)
Are there any good AI programs or simulations besides things like doctor.el for Emacs? I assume there are also languages other than the vaunted Lisp that I have heard so much about (but can't find a single good book at a bookstore, or a single class that teaches it locally). I have seen at least one book that used C++ to design so-called "expert systems", but that was a little beyond me.
Also are the claims of various games to support advanced AI really true?
Re:AI and ethics. (Score:2)
Could you please explain why it would in any way be unethical to create "intelligence"? I just don't get it. I think it would be a compassionate act. Just think if your computer could tell you what it really feels when you sit and play Quake on it for hours on end, or just swear at it because it did exactly what you told it. I mean, literally giving a machine a choice whether or not to be a slave is not a bad thing in my book.
Now granted, these ideas are in fact a little kooky, but I still think they are not in the least unethical.
Re:How do you justify your expectations? (Score:2)
I think he's just responding to the natural level of skepticism that greets the development of AI and the meaning of the word "good".
Many people probably thought that Eliza was the best thing they had ever seen between the time when AI became possible and maybe 10 years into the future from that time.
I could also say that perhaps if you have games that can predict user interactions and give a more challenging or crafty response, that would be an advance. It just depends on your frame of reference.
AI vs. AL (Score:2)
Are we responsible enough? (Score:2)
Re:What do you think of "Creatures"? (Score:2)
Controlling evolution (Score:2)
How can AI work? (Score:2)
Thinking about it (Score:2)
In my studies in psychology I took a look at some neural net systems and came to the conclusion that there is no need for any machine designed to serve a specific purpose to think about what it is doing, or get emotional about it. Human brains have been trained to produce emotions because, as a generalized survival mechanism, they are useful. The same can be said of consciousness. Conversely, a machine that only needs to, say, guide a blind man from his door to the train station safely would have no use for consciousness and therefore wouldn't possess any.
Nevertheless, I think it's possible to give machines the appearance of consciousness and feelings, and perhaps, as it is with humans, there is no difference between "the appearance of" and the real thing. My question, finally: Do you think it's inevitable that machines will be given consciousness to help them perform their tasks? Or is consciousness merely a distraction for the tasks we would use AIs for in the conceivable future?
The downside of AI (Score:2)
(I realize that this conjures up images from uncountable TV and movie plots, but seems to me that as soon as AI becomes semi-widely demonstrated, it will be exploited for annoying or sinister means!)
Permanent Use and Resell Licenses? (Score:2)
How much advocacy of this idea have you done? What is the response like? Do you see or predict people wanting to change to this kind of software licenses? In particular, what would you say to the Intellectual-Property hating zealots here on Slashdot?
Re:AI and their rights (Score:2)
Information wants to be free
Re:How do you know when you've achieved AI? (Score:2)
kwsNI
Re:AI and ethics. (Score:2)
That is the religious side of the ethics problem. There is also a legal side. If artificial intelligence is created, who is responsible for it? You've seen The Matrix; it's a little extreme, but the point is still there. If you create a machine that has AI, who is responsible for that machine's actions? If your car has an AI driving system and you have a wreck, is Ford responsible?
Not only is there the question of who is responsible for "holes" or glitches in the AI, but what about if AI is created for the wrong reasons (#include massmurder.h)...
kwsNI
Re:AI and ethics. (Score:2)
kwsNI
Why enforce artificial IP scarcity? (Score:3)
Why should we use artificial scarcity to make IP fit into the traditional market models rather than developing new market models that fit the reality of minimal-cost duplication?
well... (Score:3)
If you've ever played a game like Red Alert or another RTS (real time strategy) game, you'll know that the AI is woefully lacking in all of these games. Is it feasible on current hardware to make AI intelligent? What are the tradeoffs? If I wanted to create a game that had AI on par with a sophisticated human, where could I start looking (books, reviews, lectures, people.. I'll take anything) ?
How advanced are we, really? (Score:3)
For his example he came up with a weight-guessing program, much like the weight guesser at a carnival sideshow. The program went like this:
Do you weigh 1 pound? (Y/N) N
Do you weigh 2 pounds? (Y/N) N
Do you weigh 3 pounds? (Y/N) N
Well, I think you get the picture. Eventually this program is going to guess your weight. But can it really be called AI?
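For concreteness, the entire "AI" above fits in a few lines. This is a sketch (the original was presumably BASIC, but the logic is identical): pure exhaustive enumeration, with no model, no learning, and no inference anywhere.

```python
def weight_guesser(actual, max_pounds=1000):
    """Guess a weight by asking 'Do you weigh N pounds?' for N = 1, 2, 3, ...
    Returns the guess and how many questions it took."""
    questions = 0
    for guess in range(1, max_pounds + 1):
        questions += 1          # stands in for: Do you weigh {guess} pounds? (Y/N)
        if guess == actual:
            return guess, questions
    return None, questions

guess, asked = weight_guesser(150)
```

It always "succeeds" eventually, which is exactly why calling it AI feels hollow.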
Looking at most of the examples that are on websites, yours included, the programs are extremely simple in what they do. To be completely honest, they do not even appear to be much more complex in their "thinking" than the Weight Guesser.
Another example that I can think of is expert systems. To me this is nothing more than a simple linking of menus that ask questions. For example, "Is the switch on?" Some people consider this to be a form of AI, but by that definition anything with an if statement would be AI.
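That "linked menus" view can be made literal. Here is a hypothetical troubleshooting tree; the questions and recommendations are invented for illustration.

```python
# Each internal node: (question, node-if-yes, node-if-no).
# Each leaf: a recommendation string.
TREE = {
    "start":       ("Is the switch on?", "check_plug", "turn_on"),
    "check_plug":  ("Is it plugged in?", "call_repair", "plug_in"),
    "turn_on":     "Turn the switch on.",
    "plug_in":     "Plug it in.",
    "call_repair": "Call a repair shop.",
}

def diagnose(answers, node="start"):
    """Walk the decision tree using a dict of {question: True/False}."""
    while isinstance(TREE[node], tuple):
        question, if_yes, if_no = TREE[node]
        node = if_yes if answers[question] else if_no
    return TREE[node]

advice = diagnose({"Is the switch on?": True, "Is it plugged in?": False})
```

Whether walking such a tree counts as intelligence is exactly the question being asked.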
So, my question - are computer programs actually really honestly beyond this?
Problems with AI (Score:3)
sentience/intelligence (Score:3)
If you don't think we could ever reach this point with AI, why not?
Replacing Humans? (Score:4)
Comparison to Brooks subsumption architecture (Score:4)
How does this compare to your approach?
What do you think of "Creatures"? (Score:4)
---
Two sides to the problem (Score:4)
How do you know when you've achieved AI? (Score:4)
My question (clear from the subject) is: How do you know when you've achieved AI? Will it be when a system starts learning just because it _wants_ to? Does it need to be able to communicate with us above some predetermined level? Does it need to go completely crazy trying to fight for its survival? Does it even need to exhibit intelligence as humans know it?
How do you justify your expectations? (Score:5)
It's still just 10 years or so away.
It's not getting any closer.
How do you justify any degree of optimism wrt the future of AI at this point? What makes now fundamentally different from anytime in the past 40 years?
Questions based on your academic path (Score:5)
most likely path? (Score:5)
Do you think that AI is more likely to arise as the result of explicit efforts to create an intelligent system by programmers, or by evolution of artificial life entities? Or on the third hand, do you think efforts like Cog (training the machine like a child, with a long, human aided learning process) will be the first to create a thinking machine?
~Phyruxus
Turing award. (Score:5)
To which I would add... (Score:5)
However, I also have a problem with your proposal from an economic perspective:
Property laws developed as a mechanism for optimal utilisation of scarce resources. The laws and ethics for standard property make little sense when the cost of replication is $0. The market is the best mechanism for distributing scarce resources, so you propose we make all IP resources scarce so that IP behaves like other commodities and all the laws of the market apply.
We are rapidly entering a world where most wealth is held as a form of IP. Free replication of IP increases the net wealth of the planet. If everybody on earth had access to all the IP on earth, then everybody would be far richer - it's not a zero-sum game. Of course, we're several decades at least from this being a viable option, since we've reached a local minimum. (Need the equivalent of starship replicators first - nanotech...)
Artificially pretending that IP is a scarce resource will keep the lawyers, accountants, politicians in work, and will also allow some money to flow back to the creatives, but at the cost of impoverishing humanity.
I could actually see your proposal being adopted, and I can see how it would maintain capitalism as the dominant model, but I also believe that it is the most damaging economic suggestion in human history.
Could you tell me why I'm wrong?
Software Market & Open Source (Score:5)
I assume (from reading other parts) that you are talking about a monetary reward. My question is (and this is not meant as a flame by any means): do you really think that's what the Open Source community is after? Do you think that people like Torvalds or RMS are unhappy about not being rewarded enough?
If the OS community doesn't care about monetary rewards, is there another benefit to having your proposed Software Market?
Regards, Breace
And what about Freedom? (Score:5)
I read your article about "information property" and was surprised to find you dealt with the matter entirely from the point of view of advancing the market. There are those of us who would argue that the well-being of the market is, at most, a second-order concern, and that the important issues the Information Age raises regarding the perceived ownership of information are really about Freedom and integrity.
These issues range from the simple desire to have the right to do whatever one wants with data one has access to, to the simple futility and danger of trying to limit to paying individuals something that by nature, mathematics, and now technology is Free. They concern the fact that our machines are now so integral to our lives that they have become a part of our identity, with our computers as the extension of ourselves into "cyberspace", and that any proposal which aims to keep total control over everything in the computer away from the user is thus an invasion of our integrity, personality, and freedom.
Do you consider the economics of the market to be a greater concern than individual freedom?
-
We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.
Frankenstein (Score:5)
Do you believe that there is any foundation for worry? If so, what areas should we concentrate on to be sure to avoid any problems? If not, what are the limiting factors that prevent an "evil" AI?
Human brain - AI connection - is there? (Score:5)
Do you think that a greater understanding of the human brain and how intelligence has emerged in us is crucial to the creation of AI, or do you think that the two are unconnected? Will a greater understanding of memory and thought aid in development, or will AI be different enough so that such knowledge isn't required?
Also, what do you think about the potential of the models used today to attempt to achieve a working AI? Do you think that the models themselves (e.g. the neural net model) are correct and have the potential to produce an AI given enough power and configuration, or do you think that our current models are just a stepping stone along the way to a better model which is required for success?
Why not Write a Screensaver? (Score:5)
I have always thought that distributed computing naturally lends itself to large-scale AI problems, specifically your Neural Networks and Dynamical Systems work. I am thinking specifically of the SETI@home project and the distributed.net projects. Have you thought about, or to your knowledge has anyone thought about, harnessing the power of collective geekdom for a sort of brute-force approach to neural networks? I don't know how NNs normally work, but it seems that you could write a very small, lightweight client and embed it into a screensaver à la SETI@home. This screensaver would really be a simple client 'node'. You could then add some cute graphics, like a picture of a human brain and some brightly colored synapses or what have you.
Once the
Pete Shaw
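A minimal sketch of what one unit of work for such a client 'node' might look like. Everything here is hypothetical: the fetch/evaluate/report functions and the scoring rule are invented stand-ins; a real client would talk to a coordinating server over the network between screensaver frames, the way SETI@home does.

```python
import random

def fetch_candidate():
    """Stand-in for downloading a candidate weight vector from the
    (imaginary) coordinating server."""
    return [random.uniform(-1, 1) for _ in range(16)]

def evaluate(weights):
    """Stand-in for the expensive local work the screensaver would do,
    e.g. scoring a network against training data. Here: a dummy score
    where closer-to-zero weights rate higher."""
    return -sum(w * w for w in weights)

def report(weights, score):
    """Stand-in for uploading the result back to the server."""
    return {"weights": weights, "score": score}

# One unit of screensaver work: fetch, evaluate, report.
candidate = fetch_candidate()
result = report(candidate, evaluate(candidate))
```

The server's job would then be the evolutionary bookkeeping: collecting scores from thousands of such nodes and breeding the next batch of candidates.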
How should an amateur get started working on AI? (Score:5)
AI and ethics. (Score:5)
kwsNI
AI Metrics (Score:5)
For instance, what's the difference between your TRON demonstration and a highly advanced system of solving a (very specific) non-linear differential equation to find relative and (hopefully absolute) extrema in the wildly complicated space of TRON strategies? Or, is that the definition of intelligence?