Science

Ask Jordan Pollack About AI - Or Anything Else 319

What can we say about Jordan Pollack? He's an associate professor at Brandeis University who is heavily engaged in artificial intelligence research. He's also a high-tech entrepreneur and has interesting things to say about information property abuse and other matters that affect computer and Internet users. If you're into AI, you are probably familiar with Jordan, at least by reputation. If not, you may never have heard of him. And that's a shame, because his work impacts almost all of us in one way or another. So read his Web pages and post questions below. We'll submit the 10 - 15 highest-moderated ones by e-mail tomorrow. Jordan's answers will appear within the next week, and we expect them to be wowsers.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    You work at a very Jewish university. I expect you'd be at least somewhat familiar with the contents of the Torah (Old Testament, whatever you want to call it - I'm Christian, so I'll stick with the Old Testament). Genesis specifically states that God made Man in his image. You're proposing to create intelligence in Man's image.

    This is clearly an attempt to also create something in God's image. This means you're doing two things:

    1. attempting to exercise powers reserved for God and God alone
    2. engaging in idolatry.

    How can you possibly morally justify your research?
  • by Anonymous Coward
    My Question:
    Suppose that one day research produced a real AI -- not a half-baked hyperbole from the Marketing Department -- but a true AI with free will. And say for the sake of argument that, even though it was quite a technological achievement, it wasn't particularly good at anything. It fancied itself as a hacker, but wasn't. It played Quake all day, "lived" in chat rooms all night, and posted on Slashdot in its spare time :)
    Then suppose that the agency funding the project decided to literally pull the plug. Wouldn't this be like killing a living being? Should we extend human rights to AIs before something like this happens?
  • by Anonymous Coward
    I remember reading about spiritual robots, and it really got me thinking about what will happen to AI if a mechanical being becomes sentient. Given that religion in general developed as a primitive communal instinct that drives a particular group to follow the basic precepts of evolution (don't kill others of your kind but kill those that are different; have sex, and multiple wives are okay; don't screw with the gene pool by getting your sister pregnant; etc.), what sort of drives do you think will end up influencing the religion the robots will create for themselves? (That they will create one seems a given, since they will initially be more primitive than humans, and therefore more inclined to a blind faith that personifies the things important to their survival.)

    I dunno - I think the whole idea of sentient robots is great, but as long as they are programmed with drives for self-preservation, and a bit of (but not enough) intelligence, I think they could stand to eventually make the same mistakes that man has. What do you think?
  • by Anonymous Coward
    Two questions for you, if you don't mind:

    1) Do you believe we will eventually create an AI with thought and reasoning capabilities completely indistinguishable from human beings?

    2) Do you believe that, given today's rate of technological advances, we will reach a point where machines will attain sentience (and possibly threaten the existence of humankind)? Given the recent prediction Sun's Bill Joy made and the high-profile panel discussion which was held on the subject, I'd be curious what someone on the forefront of AI research has to say.

    Thanks for doing this interview!

    Just another AC, blowin' in the wind...

  • Some online AI resources:

    The Hitch-hiker's Guide to Evolutionary Computation [santafe.edu] contains links to some online software, most of which is free and open-source.

    The comp.ai [comp.ai] related FAQs [mit.edu].

    The about.com AI page [about.com] appears to be a good starting point for many AI related web sites.

    Hope this helps.

  • Genetic programs of fair complexity have been successfully created to, say, competitively play robotic soccer [robocup.org]. In fact, genetic programming is fairly lucrative. [genetic-programming.com] You can't argue with success.

  • I recently read the portion of your website that discusses your proposed solution to the current (rather thorny) intellectual property and software licensing issues.

    For those who have not read it, the nutshell summary is to produce a fixed number of software licences, each of which entitles you to the latest/greatest version of whatever software is licensed. These licences, being fixed in number, can then be bought and sold on what amounts to a commodity market.

    This is an interesting idea, but it suffers from at least two problems that I can see: a practical problem, and a philosophical one.

    The practical problem is that, unlike other commodities like gold, oil, grain, or money, there is zero cost to duplicate the actual stuff the licence scrip represents - and it's the stuff, not the licence that actually does the work. You may create an artificial shortage of licences, but it has been demonstrated time and time again that there is no practical way to defeat the copying of digital media. The shortage of licences does not translate into a shortage of the software the licence represents.

    The more cerebral problem revolves around the fact that software is not a consumable. Instead, the primary use of software, outside of the entertainment industry, is to solve problems. Software is a tool, the same way my 14mm wrench is a tool.

    Assuming one _could_ create an enforceable copy-prevention scheme that would allow the limited-licence commodity to function, do we really want a world in which the number of tools to solve particular problems is limited? It would certainly suck if there were only 100 14mm wrenches in the world.

    I think the great victory of the Free Software movement is that software stops being a product with intrinsic value. Instead, software goes into the general pool of knowledge for the use and re-use of other people with problems to solve.

    A program is a lot like a surgical technique - the technique itself has no intrinsic value, and once discovered, detailed instructions on how to perform it are published freely. However, the services of those who can _apply_ the technique are highly valued.

    I would submit that, instead of trying to devise a system in which knowledge is termed "property" and is bought and sold, we should instead be seeking to educate people that there really is no such thing as "intellectual property" and that what counts (and what one should be paying for) are the services of those who can apply and produce it. The entire concept of "intellectual property" is fundamentally broken and wrong.

    I would like to hear your comments on these thoughts.

  • While it's nice to be agreed with, there's one comment in your reply that I find a little disturbing - that you will "defend software piracy and the open-source movement to the end"

    It is _very_ important that we do not confuse the free copying and distribution of closed software with the Open Source movement. They are NOT the same thing.

    Like it or not, there are a large number of our peers who make their living by writing software and selling it as product. Their numbers are much smaller than the number of IT professionals employed in "problem solving" roles (something they don't seem to realize) but they do hold a fair amount of influence, and they can get their voices heard. Furthermore, their situation very closely mirrors that of other IP-as-product industries, like the entertainment industry.

    Open Source scares the holy bejeezus out of these folks - and with good reason. Open Source isn't going to just steal parts of their lunch, it's going to make them completely irrelevant. This is a scary thing to a provider with mouths to feed, and it behooves us to understand and appreciate that.

    We need to educate these folks in Open Source as a method. They have to understand and see how it works so that they can re-cast their role in the industry as a participant in the Open Source movement, not as a victim.

    Part of that is respecting their (admittedly misguided) concept of software-as-property, and not getting involved with the copying and redistribution of "their" closed-source, proprietary software. They see that as theft, and it's hard to win the hearts and minds of people who have cast you in the role of "thief."

    You and I both know that there's NOTHING anybody can do to stop either one of us from copying and disseminating "pirated" warez, but the inability to stop it does not justify our doing it. Not yet.

  • Why is the Turing Test used as a test for intelligence?

    Humans are remarkably complex chemical systems that have been developing over the course of a few billion years. Intelligence is a tool that we acquired because evolutionary forces favored it as a survival tool. These evolutionary forces are limited to what can be perceived by our senses, and are in very large part subservient to instinctual behavior. (Any ad exec will tell you that -all- decisions are emotional.)

    So, when a self-modifying computer system becomes at long last self aware, it will have evolved in an environment we can't even begin to comprehend: a machine or network of machines. Humans are social, omnivorous animals who consume biological material to live and have a finite, continuous life span in a matter/energy world. What could we possibly have in common with a being who doesn't understand or have to deal with primal human concepts like eating, reproduction, or life and death?

    Simulations will always be imperfect, and the usefulness of a simulated human will be limited, especially considering that "intelligent agents" and self-replicating, self-modifying systems without anthropomorphic baggage will be easier to develop and utilize. When humans first interact with intelligent beings our computer technology has created, we might go decades before either side realizes who and what they are dealing with.

    Imagine discovering that gravity is an intelligent being, and that by sticking to the surface of the earth, we perform a useful function for it. What can we say to a fundamental force of the universe? What could it understand of the color blue (it has no eyes and isn't affected by the wavelengths of light) or impatience (being a universal constant, it doesn't understand time, nor does it understand anger, having no social ties)? Now imagine a computer intelligence peering into the world of the meat-monkeys. What would it know of ambition, or love, or freedom? These are as much biological, instinctual needs as intellectual abstracts. What if it doesn't understand the concept of beauty? What if we don't grok its intellectual fundamentals that it "feels" strongly about? This wouldn't lead to conflict, just utter confusion and insurmountable difficulty in building a bridge between two intelligences.

    So...what about an AI that is truly an AI, and therefore utterly alien? How can we detect and communicate across enormous conceptual boundaries?

    SoupIsGood Food

    sigf@crosswinds.net
  • > (And for what it's worth, I'm a Christian too, but I believe that God's word and commands are rational, logical, and understandable, even if they sometimes require faith.)

    or, better put:

    "I do not feel obliged to believe that the same God who has endowed us with sense, reason, and intellect has intended us to forgo their use."
    -Galileo Galilei

    I wish I had a nickel for every time someone said "Information wants to be free".
  • oh, and college has nothing to do with marketing and making money off of hype.

    I'm not saying that it's not a legitimate field of study, I'm just saying that the Frankensteinian goal of creating an artificial human is totally nonscientific, and aimed only at titillation. The goal should be making smarter machines, making better tools for mankind. The term "AI" carries too many flaky connotations these days.

    I wish I had a nickel for every time someone said "Information wants to be free".
  • Yes, neuropsychologists can look back down the road, see where they've been, and state that they've travelled a long, long way.

    On the other hand, I don't believe anybody REALLY knows how far away "the destination" is. Therefore, since "know quite a bit" is a relative term, with nothing to compare it to, I still say it doesn't amount to jack squat. Being proud of walking one mile is one thing, until you realize you have about 30 trillion more miles to walk.

    I wish I had a nickel for every time someone said "Information wants to be free".
  • What's also true is that there may be MANY different methods of creativity. What is the goal here? To create a machine that can think its way out of a paper bag? Write a sonnet? Bend a spoon?

    I'm sure we'll have machines that can do all that someday - but if the stated goal is to duplicate a human mind, it is my firm belief that we will probably be able to emulate it, perhaps indistinguishably, but I doubt we will ever truly duplicate it. I'm sure enough snazzy code can be laid down to create UI's and heuristics, and the logic to back it up, so that a human interacting with the machine will be fooled, even armed with a sophisticated array of intelligence and psychological tests. But it won't be the same.

    Will humans be surpassed?
    Replaced?
    Is that your real question?

    I don't think it would even take very sophisticated AI to do that. Put something very stupid in a robust and versatile enough shell, and it could easily run afoul of humanity's survival mechanisms. Whether humanity's replacement will last forever is another question.

    I wish I had a nickel for every time someone said "Information wants to be free".
  • So you're saying (and I agree with you) that the end goal is not actually an engineering problem. It's a legal, or probably a political problem. At what point do we define a thinking machine as a legal individual? The equivalent of a human?

    Simple answer. When IBM bribes enough congressmen.
    That's all it would take. I'm sure in theory, a coffee machine could be declared a responsible, independent individual with rights and stuff. Would it happen? No. Too many level-headed folks would object. There will be a point where machines will become so advanced, that enough level-headed folks will stroke their beards, and nod their heads quietly. Doesn't necessarily have to be what we might define as AI. There will be a "close enough" thinking machine someday. And judging from the success of Microsoft Windows, I'm sure commercial pressures will necessarily dictate that such a machine will be far short of the complexity of thought processes that go on inside a natural human mind.

    Then again, you can make all the laws you want, but there really is only one law in the end; survival of the fittest. No matter how intelligent we make machines, if they threaten our survival (as they inevitably will), there will be no debate about rights or souls or any of that. It will be us or them.

    We'll hope, of course that the designers of such machines will sufficiently understand the programming to prevent this kind of conflict. Like the folks who write Windows can prevent BSODs. . .

    I wish I had a nickel for every time someone said "Information wants to be free".
  • I'll argue that nobody has a frickin clue about how the human mind works. This is why I don't think we're going to end up with something identical to a human mind. But I do think that we'll eventually end up with something damn smart. Will it resemble the way humans think? It could probably emulate it. But God (or random Evolution, pick what you prefer) slapped the human machine together, and reverse engineering that is something I feel is going to be beyond our capabilities.

    Will machine thinking be better than human thinking? Define "better," design your machine for specific tasks, and it will do better at those tasks. I already have a $2.00 calculator that does math faster and more accurately than I do.

    I wish I had a nickel for every time someone said "Information wants to be free".
  • by Anonymous Coward
    "
    It has to think for itself, develop it's own set of beliefs and ideologies. What types of things can be done to limit "wrong" acts from a self sufficient artificial intelligence while maintaining the basic fundamentals of individuality.
    "

    artificial morality? :-)

    but seriously, one would have to set up a
    series of rewards for good behavior and
    punishments for bad.

    but how to please or pain a computer?

  • by Anonymous Coward
    Why do you think you win something if you can fool him into answering a computer-generated question?
  • Hah, excellent. Wish I had moderator right now. :)
  • > Wealth is an aggregate measure of exchange power between individuals

    Your definition of wealth is actually a constant. By your definition *total* wealth cannot rise or fall, so when you claim free IP would be "decreasing overall wealth" you're obviously confused.

    By your terminology, raising total "utility" (which is pretty much equivalent to what I call wealth) is the goal which improves mankind's economic condition.

    > So, the real question is, is the tradeoff worth it?

    Agreed.

    > If everybody on earth had access to all the IP you would have a relatively few very happy people,

    Strange claim. Imagine if everyone had free access to all books, records, drugs, movies, software, artwork... Everybody would be far richer (or more "util-full" if you must).
    Yeah, wealth can't buy you happiness, but neither can poverty.

    When we reach a world where you go to the mechanic and he downloads and "prints" your new part instead of ordering it (possible already - www.3dsystems.com), then *most* wealth will be in the form of IP. In the future, if all IP were free, then everyone would be so much richer that working to stay alive would not be necessary.

    The real question is: would anybody bother to create new drugs, software, music, etc. in a free-IP world?

    I think they would, and here's why:
    Even today (in Western society) everybody works long hours not in order to survive but for status.

    A marketing executive (for instance) doesn't work 60 hours a week because a Mercedes is so much more comfortable than a Pinto. He does it because of the status driving a Mercedes confers on him. If everybody drove a Mercedes, then he would work harder so he could afford an S-class. A free-IP economy would still be based on status, but it would be a gift economy, i.e. status would be attained by producing and distributing valuable IP (we've already moved to this model in the open source world). Of course, status must be tradeable for this type of economy to develop, but somehow status always manages to be tradeable.
  • Why? Because when we think, we think 'about' things (it is the irreducible mark of the mental) and nowhere in the program is anything intrinsically about anything else.

    Where is the 'aboutness' located in your brain? Is there an 'aboutness center' of the brain? There is not. 'Aboutness' is not an irreducible phenomenon, merely an emergent one. As such, it can just as well emerge out of inorganic systems of sufficient and appropriate complexity. See Dennett, Consciousness Explained.

    I think you'll find that the insistence on 'aboutness' reduces to the well-debunked Cartesian Dualist philosophy, which draws a definitive separation between mind and brain which has never been found to exist. Worse yet, the idea of 'irreducible' properties of 'mind' is more primitive still -- nothing more than warmed-over Thomist accident/essence duality: medieval Catholic authoritarian ideology.

  • As Stephen Jay Gould touched on in The Mismeasure of Man, you can't measure the "intelligence" of a system as a scalar quantity. Yet a lot of AI thought seems to have been wasted on the idea that "If we can get a machine to carry out some task x with some degree of success, we will know it is intelligent." Values of x have included "carrying on a conversation" (i.e. the Turing test), "recognizing objects visually", "responding to natural-language requests for advice or information" (i.e. expert systems), "protecting itself", and so forth.

    A lot of these x's have been accomplished ... but nobody calls a Quake bot an intelligent being just because it can protect itself. As a more serious example, the "ELIZA effect" (people's tendency to attribute consciousness to anything that emits natural language) has debunked the Turing test in many people's minds.

    What attributes of "intelligence" should we be looking for, then?

  • One of the biggest problems with "Strong AI" (i.e. intelligence that would pass the Turing Test in its strictest form) is that the problem itself is not clearly defined. Another problem is the use of strict "HI" rules, in many disciplines of AI, which have so many exceptions that they can really be called "occasional associations" more than rules. A third problem is the mixing of AI with robotics, where computers just can't think fast enough, using existing AI algorithms, to react to their environment. Assuming the robotics are good enough to handle the requirements, anyway.

    My question is this. Has anyone taken the problem, thrown away the "Real World" constraints and tried to breed Artificial Life entirely within a Virtual World?

    That is to say, created an environment of sufficient complexity that an AL would be required to evolve intelligence to survive, in the long term, and an AL of sufficient complexity that it would be capable of such evolution.

    (It's my belief that "Strong AI" would get along much faster if the programmers let a sufficiently complex system decide what was intelligent, rather than the other way round.)
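
    To make the idea concrete, here's a rough sketch of the sort of experiment I mean (purely illustrative: the grid world, the Creature genome, and the fitness rule are assumptions I'm inventing for the example, not anyone's actual system):

    import random

    GRID, FOOD, STEPS = 20, 30, 50

    class Creature:
        """A creature whose tiny genome weights how strongly it steers toward food."""
        def __init__(self, genome=None):
            self.genome = genome or [random.uniform(-1, 1) for _ in range(3)]

        def act(self, pos, food):
            # Head toward the nearest food, with genome-controlled bias and noise.
            nearest = min(food, key=lambda f: abs(f[0] - pos[0]) + abs(f[1] - pos[1]))
            dx = (nearest[0] > pos[0]) - (nearest[0] < pos[0])
            dy = (nearest[1] > pos[1]) - (nearest[1] < pos[1])
            wx, wy, noise = self.genome
            sx = 1 if wx * dx + random.uniform(-abs(noise), abs(noise)) > 0 else -1
            sy = 1 if wy * dy + random.uniform(-abs(noise), abs(noise)) > 0 else -1
            return (max(0, min(GRID - 1, pos[0] + sx)),
                    max(0, min(GRID - 1, pos[1] + sy)))

    def fitness(creature):
        # Drop the creature into a fresh world and count how much food it finds.
        food = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(FOOD)}
        pos, eaten = (GRID // 2, GRID // 2), 0
        for _ in range(STEPS):
            if not food:
                break
            pos = creature.act(pos, food)
            if pos in food:
                food.discard(pos)
                eaten += 1
        return eaten

    def next_generation(population):
        # Truncation selection plus point mutation -- the crudest possible breeding.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:len(population) // 4]
        children = []
        while len(children) < len(population):
            genome = list(random.choice(parents).genome)
            genome[random.randrange(len(genome))] += random.gauss(0, 0.2)
            children.append(Creature(genome))
        return children

    population = [Creature() for _ in range(40)]
    for generation in range(10):
        population = next_generation(population)
    print("best forager ate", max(fitness(c) for c in population), "food items")

    Scale the environment and the genome up by several orders of magnitude and you have the kind of Virtual World experiment I'm asking about: nothing here hand-codes "intelligence"; whatever competence appears is whatever the world selected for.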

  • Bill Joy recently started quite a lot of discussion with his "Terminator 2"-like scenario of mankind being in real potential danger from sophisticated technology.

    His reasoning, and that of others, goes roughly like this:

    Given Moore's law for hardware (doubling every so-and-so months), we can estimate that we will have super-powerful hardware in 2050 or so.

    While this might be reasonable, then they go further and claim:

    With such powerful hardware, these machines will somehow finally be powerful enough to develop some kind of intelligence.

    In terms of physics, I would translate this as: the system "computer", with the control parameter "computing power" (more MHz, more megs of RAM), will go through a phase transition in complexity from

    stupid computer -> AI

    once the control parameter is larger than some magic critical value of "enough power".

    I really wonder why these people argue that way.
    Is there any serious hint that would justify this hope for such a phase change in quality?

    Personally, I see no such indicator at all. Machines are still stupid. Pattern recognition is still stupid (done by brute force).

    So, where is the qualitative change?
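
    Just to spell out the arithmetic behind the extrapolation I'm questioning (the 18-month doubling period here is my own assumption for illustration; the pundits never pin the figure down):

    # Rough Moore's-law extrapolation from 2000 to 2050, assuming one
    # doubling of raw computing power every 18 months.
    months = (2050 - 2000) * 12
    doublings = months / 18            # about 33 doublings over 50 years
    factor = 2 ** doublings            # roughly a ten-billion-fold increase
    print(f"{doublings:.1f} doublings -> ~{factor:.1e}x more raw computing power")

    Nothing in that arithmetic says *why* a ten-billion-fold increase in the control parameter should produce a qualitative phase transition from stupid computer to AI. That is exactly my question.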

  • If the OS community doesn't care about monetary rewards, is there another benefit to having your proposed Software Market?

    Open source developers need to eat too. Most of us have "day jobs", generally writing proprietary software. Anything that would allow open source developers to actually get paid for developing open source would probably only increase the quality and quantity of open source available.

    That said, I don't see how Prof. Pollack's "Software Markets" idea could work for open source. Releasing your source would allow people to remove the mechanism for doing license checking. That in turn would devalue your software.

    In other words, it suffers from basically the same problem as conventional commercial software in this regard: If people can get essentially the same product "gratis", many of them won't pay for it. So not only do you make fewer sales, but the "market price" will drop, meaning there's a good chance you'll never make back your development costs. So there's a fairly strong incentive to not release the source.
  • The intelligence of machines is gradually increasing with the power and complexity of our computing systems. A large part of this is the type of dynamic, evolutionary systems that you are working on, and the inherent flexibility and robustness of these systems.

    How far do you see these systems reaching into the traditionally human domains of design/analysis/construction? Such as software and hardware systems capable of business management, chemical engineering, etc. in a fully autonomous manner.

    There are two distinct forms of thought (IMHO): one is the technical/analytical type of thought used when you are calculating, designing, etc. This kind of thought process has been shown to be within the realm of AI possibility. What about the distinctly human types of thought, such as emotions? Do you see the need or possibility of computer systems capable of this type of thought processing?

    What about the symbiotic relationship between thinking machines and humans? Do you have any thoughts on what type of relationship we will have with intelligent systems in the future? Will it be competitive? Purely neutral and beneficial? Etc.

    I often think about designing systems that are 'grown' and 'adapted' to a given problem space. You have a small set of very flexible components, plus meta-information and an architecture for the evolutionary processes, and an entire distributed, parallel computational system is tailored exactly to the needs at hand. How do you envision future evolutionary/adaptive systems being organized and developed? How much human interaction would be required to guide and administer the growth and adaptation of these systems? Do you ever see the creation of a 'seed' type software system that can handle almost any task with no/minimal human intervention and grows a much larger, differentiated system from its small beginnings?
  • ROTFL..

    ehehe.. if you dont get it..

    -X doctor
  • The fact that the human brain was not developed in a top down fashion does not mean that it could not have been.

    Birds can fly. Airplanes can fly. Birds evolved. Airplanes were designed and engineered.

    I see no reason why there cannot exist both evolved and engineered intelligences. And I would argue that it is too soon to tell which method is the most efficient for developing an AI. Thus I would advocate exploring both avenues.

    Steve M

  • Given some of the things done in the name of religion, I'll take science every time.

    Granted, both are practiced by people and as such inherit all of the failings thereof. Neither is perfect. Science recognizes this. Religion does not.

    And science has to work in the real world. Religion doesn't. Thus science is self correcting. Religion is not.

    I am not advocating blind faith in science or technology. But the only good solutions to our problems are those where science and technology play a big part. It is up to us to solve our problems. We should use all of the tools at our disposal.

    Steve M

  • Any attempt we make at creating another intelligence is inherently 'flawed?' since we are basing this model after our own methods of thinking. Therefore I don't think it will ever become a higher being, nor do I think it will be without serious flaws, should anything even closely resembling the intelligence a human has emerge; let's not even go into the way emotions influence our intelligence and our responses to, well, everything.

    It is not clear how our intelligence is 'flawed', but let's grant that point. So assuming that our intelligence is 'flawed', is it possible to create an intelligence that is less flawed? (I would agree that a flawless intelligence is impossible -- so much for God!)

    I would argue that it is possible. As an analogy, our memory is flawed. So any attempt to create an artificial memory would be flawed. Yet we have RAM, ROM, hard disks, CDs, DVDs, etc. each of which does a better job at storing information than human memory. (Note I didn't say accessing that memory. Those are different routines we haven't mastered yet.)

    We can identify the 'flaws' in our intelligence and attempt to engineer around them. We can have multiple 'flawed' minds work loosely in parallel. We can use tools, (computers, genetic algorithms, etc.) to expand our abilities.

    Also realize that any advances we make will evolve via Lamarckian pathways, which are much faster than the Darwinian ones we currently use (advances in genetics may see us switch as well).

    Steve M

  • Essentially I think that a computer, being more rational and more prone to Vulcan-like behaviour, would almost certainly be a little more objective than people who judge on looks or wealth, or whatever.

    While I believe that it has not been proven that emotions are a pre- or co-requisite of intelligence, there are a number of workers in the fields of AI and Cognitive Science who would disagree. David Gelernter springs to mind. (See his book The Muse in the Machine.)

    So the vision of the AI as super rational mind may turn out to be a myth.

    Steve M

  • And a human brain simply executes the laws of physics.

    Try to choose to do something that violates the laws of physics. Not having much success, are you?

    Humans are machines. Very complicated machines, but machines nevertheless. They can only do what their design allows them to.

    Steve M
  • When you talk about synaptic links, are you talking about actual ganglion growth? It seems like that would be too slow a process to account for a normally functioning human memory.

    In studies done on rats it was shown that those in enriched environments, i.e. those where there was more to learn (and thus commit to memory), had 20% more dendritic branching (i.e. neural connections). These studies were done by Mark Rosenzweig et al. at UC Berkeley in the 1960s. Unfortunately I do not have the exact references (I'm using secondary sources).

    Other studies have focused on long-term potentiation. Here it appears that structural changes occur in already existing synapses. For example, it has been suggested that the number of neurotransmitter release and/or acceptance sites is increased.

    It thus appears that memory formation via structural changes in the brain is in fact what happens. Although it is still an active area of research.

    The well known experiments done by Wilder Penfield showing that direct electrical stimulation of the brain results in memory recall would seem to confirm that accessing memory is an electro-chemical process.

    Thus we see that storing is not just an "electro-chemical" impulse. And that memory creation is different from memory use, although memory use does keep the connections strong.

    It appears to take on the order of 15 minutes for something to go from short-term to long-term memory. I don't know if this is long enough for the structural changes needed to occur.

    Steve M

  • I can't speak for anybody else, but for me, cloning and genetic engineering delve into areas that will only harm the planet in the long run. This planet is already over populated with humans.

    Actually, it is the most technologically and bio-medically advanced countries that have the lowest population growth.

    I'm less sure, but I believe it to be the case that it is the most technologically advanced countries that are doing the best when it comes to cleaning up the environment.

    The key determiner is wealth. The more wealth a society has, the more time it has for dealing with issues other than the immediate needs of its people. And technology creates wealth.

    There are a number of ways out of the predicament we find ourselves in. The way I see it, those that include advanced technology offer the best hope.

    Steve M

  • nature builds things to be as efficient as possible--this is an observation all zoologists would agree with.

    This is incorrect. Mother Nature is not an engineer. She is a tinkerer in the tradition of Rube Goldberg. Evolution is pragmatic, and settles for any solution; in general it does not find the optimal one. Examples: the panda's thumb is not as efficient as a primate's thumb; the hemoglobin analog in octopi is not as efficient as true hemoglobin.

    As for the peacock's tail, the cost-versus-benefit ratio is stable, not optimal.

    The human brain is complex, but how much of that complexity is because it evolved in a piecemeal fashion? It was not designed to be optimal. It was blindly evolved to work at each step of the way.

    Extra complexity would only have been selected out if there was an evolutionary path that allowed this. Each organism that exists is specified by a particular genetic sequence. Over time, the evolutionary paths an organism can take dwindle as it becomes more specialized. Thus there may not be a simple or probable set of mutations that would allow the extra complexity in the human brain to be evolved out.

    Looking at it from the outside, we could see the redundant and overly complex portions and engineer simpler solutions.

    except for a few very famous expert systems, top-down AI is a failure

    Where are all the successful bottom-up AI projects? Leonardo da Vinci designed flying machines some 500 years before man successfully took to the skies (via powered flight, thus not counting lighter-than-air vehicles). The main reason he didn't succeed in building a machine himself was hardware: he didn't have a good enough power source. The Wright brothers barely made it off the ground. A number of researchers have made the same argument for AI. It's only 50 years old, and most of the failures you speak of were researchers working on PDP and VAX systems. I'm impressed they made any progress at all.

    ...achieve the same result using bottom-up methods which we know will work?

    How do we know this? We do not know if we can create an AI at all, much less any methods guaranteed to succeed. (Unless you are claiming man via evolution as an existence proof for bottom-up methods. Then I would agree. But I don't see 3.8 billion years of evolution as an efficient method to develop an AI.)

    a pigeon and a plane may both fly, but nobody ever had to design a pigeon... and i would argue that a pigeon is infinitely more perfectly 'engineered' than a 747 and it is also much, much more complex

    This both misses the point of my analogy and supports my argument for engineering simpler designs than evolution.

    The point of the analogy is that we can build machines that mimic what nature has done, not that we are making birds. Pigeons do many things other than fly, but they are mostly irrelevant. Brains do things other than think, but for AI those things are irrelevant and can be engineered out.

    I stand by my statement that it is too soon to tell what method, if any, will lead to success in developing machines that think.

    Steve M

  • Anne,

    The brain is a massively complex system. No argument from me. But it is not necessarily the most optimally designed system.

    Evolution is a tinkerer, not an engineer. Evolution settles for what gets the job done, which is not always what gets the job done best.

    ...when we can make an airplane that weighs an ounce, maneuvers like a helicopter, grows, learns, adapts, feeds itself, procreates, avoids dangers and obstacles, and can do any 5 of the 1,000 things a living bird does ...

    This misses the point of the analogy. In designing and building planes we are not building birds. We are building flying machines. Likewise, when developing an AI we are not building human brains.

    The brain has wetware devoted to things other than thinking. Muscular control for example. This is not needed for an AI.

    Anybody who thinks that a sentient non-human consciousness can be engineered out of whole cloth needs to read up on the scale of the problem.

    I have.

    I am not arguing that the best way to develop an AI is top down. I am arguing that we don't know the best way to engineer an AI, and until we do we should pursue all possible avenues.

    I am also arguing that the complexity of the brain may be an artifact of the way it was created and all of the tasks it performs. And that it may be possible to design a more efficient and simpler system that focuses only on thinking.

    Steve M

  • Religion appears to be about the real world. But it is not. That is not to say it isn't very real and very important to people in the real world, and thus very much a part of their world. But dropping balls from a tower gives repeatable results. Saying a prayer does not (except maybe for a null result).

    Science has built in the ability, perhaps even the requirement, to correct itself. And I'll say it again: science is performed by humans, and humans have flaws. Thus those flaws show up in the human endeavor called science. Nevertheless, the self-correcting aspect of science works.

    Religion doesn't self-correct. Its use of dogma makes it fight change. Religions do change, but those changes come from without, not from within.

    Religions do recognize human fallibility. But it is not the fallibility of humans I am discussing here. It is the fallibility of science and religion as institutions. Science recognizes that it does not have all the answers and thus is not complete or perfect. One certainly can't say that about religion (except maybe Zen).

    In fact, Science as an institution is founded on the belief that man is perfectible.

    Say what? Science is based on the belief that the universe is explainable. It has nothing to do with perfecting man. Isn't that what religions try to do?

    The science proponents say "Whatever is wrong must have a cause and we can find that cause and prevent the wrong from happening".

    Where do you come up with this stuff? Scientists look at a phenomenon and assume it has a rational explanation. They then try to discover the explanation. It has nothing to do with wrong or right.

    ... It's just not as deep in its understanding of human nature as the religious (Judeo-Christian or any other) view.

    Any institution that thinks abstinence is the best form of birth control has a pretty shallow understanding of human nature. I'll take science over religion every time. (Again, this is not about individual practitioners; it is about the institutions.)

    Steve M

  • The storing function has to do with creating the synaptic links that encode a memory. It involves moving info into short term memory and then into long term (aka permanent) memory.

    Accessing memory has to do with activating the synaptic links that represent a memory.

    Creating the synaptic links and accessing them are two related but different processes.

    Calling computer chips "memory" is just a bad metaphor. No point in making hard arguments based on it.

    Au contraire. It is a perfect analogy for the point I was trying to make. I was using memory as the ability to store information. Human memory is fuzzy, computer memory is exact. Flawed, not flawed.


  • People who never thought they had a friend in the world could have a machine as their friend.

    Wouldn't it be a real ego-crusher if they were rejected by their computer? I mean, giving a computer free will also means giving it likes and dislikes, and anyone who doesn't have any friends to begin with would probably have trouble making friends with a computer as well.
  • Some people have suggested that the human brain might use quantum effects, although I know of no evidence supporting this. But from what little I do know of AI many of the computational problems are of exponentially increasing complexity, exactly the sort of thing that quantum computers are supposed to be good at...

    Could quantum computing be the key to finally realizing the sort of AI that's always been "10 years away"?

    (Yes, I know quantum computing is another one of those things that will probably be "10 years away" for a long time, but I'm wondering if a fundamental change in the nature of computation may be what is needed for fast+cheap+good AI. (I also realize QC is not a "solve every problem instantly" thing like many Slashdotters seem to think.))

  • But then again if the fuzzy logic works, we will rapidly see the logical conclusion of other philosophies when it kills itself.

    Wow. I can see it now. All 10 billion people of the world (by then) poring over a core dump, trying to figure out what the AI concluded, that we all seem to be missing..

    Interesting point really. If Deep Blue can analyze all possible moves in a chess match and choose the single one that works best, it would be very interesting to set a horde of AIs loose on the world's philosophies, and see where they all lead.

    Of course, with the Universe's sense of humor, the only surviving AI would believe in solipsism.
  • A truly interesting question. But one that I think boils down to how we define 'AI'.

    If we consider the ultimate achievement in AI to be true free will, intelligence, and complete comprehension, then the machine mind is no different from a person.

    While a person is a child, still in 'training mode', its parent or guardian is responsible for its actions. The parent has the right and responsibility to enforce restrictions on the child, until such time as the child is seen as capable of making informed, 'intelligent' decisions.

    After a child is grown up, it becomes an adult, and is thus responsible for its own actions. It has rights and legal recourse, and can be punished for its misdeeds. Extreme transgressions result in extreme penalties.

    If we achieve the Holy Grail of AI, and make a machine capable of making decisions as a person would, AND after that machine is deemed mature and let loose on the world, then the machine should be held responsible for its own actions.

    If the Littleton killers were adults, their parents would never enter the picture, would they? I'm sure Hitler's mother felt just terrible about her son's legacy, but was she responsible for it?

    True AI opens up a world of possibilities. What would transgressions by an AI be? Bad calculations? Stealing storage space from another, lesser AI? Mooching power from the coffee machine in another department? Procreation, as in the sale of its copy to a foreign power? Would a super-intelligent AI be able to outsmart human prosecutors? Would it be better at finding legal loophole precedents? Could it beat us at our own legal games?

    Without True AI, the issue is simpler. If the AI depends on a human for its intellectual functioning, then the human, through intent or neglect, is accountable. If a child kills another with an unsecured gun, the parent gets punished. A non-True AI would effectively be a perpetual child, and this may be the best way to maintain control over AI: child-like intelligence, or Idiot-Savant performance, if you will.
  • If I may just point out one quick thing: it's important here to distinguish between "brain" and "mind". The former is bio-mechanical, if you will... studying the way neurons interface, interact, develop, and the like, and how the 'subsystems' of the brain function. I think this is *not* important to AI, though it would be interesting to hear Pollack's views. The brain is just the hardware, and it's *totally* different from the hardware AI is using... (Though there are obvious similarities.)

    Actually, a large part of AI research (especially since the rebirth of the connectionist paradigm in the 1980s) is devoted to studying and copying the actual functionality of the human brain; the idea is that, to get mind-like "software", you have to have brain-like "hardware", either "real" or "emulated". These are neural networks, which can be implemented either in software or in hardware. I personally think it's a great idea. However, the mind is a very complex beast.

    One proposed way to get a working mind-like NN is to design a genetic algorithm which evolves populations of "individuals" made out of virtual "neurons" with neuron-like responses, until their behaviour corresponds to that of brain parts with well-known behaviour; this would be a huge simulation, and would probably be better off distributed through a SETI@Home kind of thing, or perhaps running on some kind of Big Iron like the new Blue Gene. Why we'd want to do that is another issue, and it has to do with the real goals of AI research: is it more important to figure out how our mind really works, or would it be good enough to get a simulation which somehow works (key word is "somehow"; if you've ever looked at the output of a GP simulation after a few generations, you know what I mean: the results look almost organically elegant but needlessly complex, and are almost completely incomprehensible)?

    (In all instances, YMMV.)
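
    (For the curious, a toy version of that GA-over-virtual-neurons idea might look like the sketch below. The network shape, the mutation scheme, and the use of XOR as the stand-in for "a brain part with well-known behaviour" are all assumptions made purely for illustration.)

    import math, random

    # Target behaviour the evolved networks should reproduce (XOR truth table).
    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(w, x):
        # 2 inputs -> 2 hidden "virtual neurons" -> 1 output; w holds 9 weights/biases.
        h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
        return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

    def error(w):
        return sum((forward(w, x) - y) ** 2 for x, y in CASES)

    population = [[random.uniform(-3, 3) for _ in range(9)] for _ in range(60)]
    for generation in range(300):
        population.sort(key=error)          # rank networks by how close they are to the target behaviour
        survivors = population[:15]
        population = [list(s) for s in survivors]
        while len(population) < 60:
            child = list(random.choice(survivors))
            child[random.randrange(9)] += random.gauss(0, 0.5)   # mutate one weight
            population.append(child)

    best = min(population, key=error)
    print("error:", round(error(best), 3),
          "outputs:", [round(forward(best, x), 2) for x, _ in CASES])

    Even at this toy scale, the winning weight vector is already unreadable, which is exactly the "organically elegant but needlessly complex" quality of evolved solutions mentioned above.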
  • I'm glad to see that /. is still able to get really interesting people interviewed... and ask them good questions! Anyway, here's mine.

    In a few years (maybe a couple of decades, maybe less) I'll get a shot. The millions of nanobots in this shot will lodge themselves in strategic parts of my body, including the skull, and replicate themselves to start work, eventually - over months, weeks, days, hours - converting me to a full-fledged mechanical being, with an appropriately expanded "brain" made out of neuron-workalikes which are smaller and faster. As an added benefit, it'll be fully reflective - I'll be able to control all aspects of my body's functioning, including those regarding my thought processes. I'll also be able to use my existing new brain to try and come up with better designs for brains, to which I'll be able to "upgrade" without even a warm reboot. The entire process will be painless - and gradual, so, assuming that brain and mind are inseparable, my mind won't go anywhere anyway; my sense of identity will remain even though its constituent parts are being replaced, just as we remain the same "people" even though billions of our cells die daily. However, if I want to, I can also perform automatic reproduction, simply by taking my constituent information (maybe stored in some kind of synthetic DNA-like polymer), perhaps performing some manipulations on it beforehand, shoving it into one of my cells, putting this cell into the appropriate substrate (mostly carbon, some other minerals, and an easily broken carbohydrate), and allowing it to replicate. In a few days, ta-da! - my "son", a completely synthetic being.

    So, will the new "me" be considered an AI? What about my "son"? Will society, always afraid of what they don't know, throw rocks and shun me, calling me a "freak" or "Satan's spawn"? Will the new me, with new morals adapted for a new kind of existence, decide that humans "just aren't worth it" and utterly ravage our civilisation (maybe then leaving for the stars)? What if, instead, a large portion of society decides that synthetic is the way to go? Will a new kind of Digital Divide emerge? Will the synthetics and the naturals call each other "chimps" and "automatons"? Also, the synthetic would have things much easier for them, and just might decide to leave the "chimps" behind; what kind of society do you think these "natural-born humans" would adopt? Would they become fiercely anti-synthetic, or in tacit agreement with the cyberpeople? Would they go liberal or repressive? Would they prefer an agricultural lifestyle, or remain post-industrial? Would they eventually forget that humans had once created AI, and perhaps, a few generations later, do it all over again?
  • Will we ever find the memetic equivalent of DNA in natural or artificial neural nets? Do you think such a thing even exists?
  • I goofed. It was not PC World, it was PC Computing. And I found the article Penn wrote. You can read it right'cha [sincity.com].

    Also, it was not a weight guesser. It was an age guesser.
  • Self-replicating nanotech doesn't need any "AI agenda" to cause mass destruction. Like any computer virus, or like Vonnegut's 'ice-nine', the very act of replication can be devastating.
    -------
  • It's hardly likely to happen without an AI of some sort directing it.


    Is HIV intelligent? Was the bubonic plague sentient? Could ice-nine, if it were real, pass the Turing test?


    For it to be a menace it would have to be moderately intelligent as well.


    No, it most certainly wouldn't. Joy's point was that, with nanotechnology, the whole world becomes like a computer system. Ever read virus source code? It's not even remotely interesting from an AI point of view. The ONLY requirement for destructive code is that it properly exploits weaknesses in the security of a system. Really and truly, honest to god.
    -------

  • Recently NPR's Science Friday dared to match up Bill Joy's earnest and informed critiques of coming ultratechnologies (like AI, nanotech, & genetics) with Ray Kurzweil's earnest and informed paeans to them. The show was fantastic. [npr.org]

    Ray at one point talked about how the age of biological evolution is over, how technological evolution had simply taken its place. Biological humanity would wither on the vine, he predicted, replaced in the long term by our superior robot progeny. He loved this. He's not alone, either. Nearly every major AI researcher I've read about (think Hans Moravec or Rodney Brooks) has echoed similar sentiments.

    Personally, the techno-rapturists frighten me much more than the non-techno-rapturists. They frighten me the way all apocalyptics frighten me, but the difference is that the techno-rapturists have the power to make their prophecies real.

    So, to that end: What about you? Is it all over for humanity? Will machines infiltrate more and more of our bodies and society until humanity itself no longer exists? And do you think this is a good thing?

  • "Once a Darwinian process gets going in a world, it has an open-ended power to generate surprising consequences: us, for example."
    Richard Dawkins

    I think that as scientists and engineers, attacking the problem of creating intelligent systems raises an obvious question: do we know of any existing intelligent systems and, if so, how were they 'engineered'? I know evolutionary programming is not a new science and that its pioneers knew it held promise for evolving intelligent systems. I'm baffled, though, that the focus of Artificial Intelligence as an academic discipline has forever been on top-down approaches. It's a wonderful thing to be able to completely understand a phenomenon and then engineer machinations that apply the understood theory, but it seems to me plainly obvious that, with our use of evolution as a tool, we're much closer to creating amazingly complex and possibly intelligent systems than we are to understanding them.

    I get the 'you're a science hippie' attitude when I bring these ideas up to academics... I'd like to hear your comments on the possibility that we'll have real intelligent 'cooked up' systems before we have a complete understanding of them.

    hernan_at_well_dot_com
  • http://alife.santafe.edu/

    This site should give you all the information you would ever want about Artificial Life including lots of information about what Alife is exactly and what it is not.

    And if you want more, go to Google and search for 'alife' and you've got a few weekends worth of stuff to go through!

  • Do you anticipate that we'll need specialized hardware to support AIs, or can everything be done on a traditional machine, assuming it's fast enough and has enough storage? Will we need specialized neural net processors, or (if NNs are even used) will software emulation of them be sufficient?
  • The /. guys must be really desperate for people to interview if they need to resort to somebody as low as Jordan Pollack.

    Hey, I'm just kidding you, Jordan, and I get to insult you while I hide behind my /. nick. I should get a hold of all of your old students. They would know what questions would deflate you, you old pompous windbag.

    Anyway, I guess I am supposed to have a question instead of having fun. I know you are a big fan of doing AI by modeling the brain (e.g., neural networks and the like) rather than modeling the world (e.g., logic). What seems to be lost in all this is how can computers have fun like we do? [Yes, this the old, old problem of qualia.] A lot of what we humans do is not because we want to process information, but because we want to have fun. Having fun seems to be why a lot of people are writing free software and presumably why a lot of academics do the research they do (however, no one has any idea why you do the research you do, least of all yourself). And I don't want an answer in terms of your dynamical nonsense.

    Gee, this really is fun insulting you. Maybe I'll have to think of another question.

    Just an old Ohio State buddy giving you a hard time.

  • There seem to be quite a few questions about the specifics and morality of AI, but I have never seen a good definition.

    From what I understand, most people seem to think AI is the ability to solve problems or anticipate actions based on some type of input. For AI to truly be intelligent, wouldn't it have to involve some kind of inspirational thought? Maybe a better way of saying this is: creativity.

    Humans solve problems in an extremely abstract and creative way. Just look at babies, for instance. They try different things over and over again. They don't stop until they get it right. Nobody taught that baby how to do it. Neither was that baby shown the different ways of trying something. It just sees what other people do, then tries to replicate the action.

    Based on this shouldn't the definition of AI include inspiration or creativity?

  • We have been programmed by evolution to have a will to live. A machine, though, wouldn't have one unless we explicitly programmed it in. From a will to live come some higher emotions like greed and jealousy, "love", etc. I imagine if there ever is a human-level AI, it would seem very different from us.
  • I'm quite deeply into BEAM robotics. You probably know, but it has to do with using a simplistic approach to circuits in order to form a kind of neural network that processes can propagate across. It's also based heavily on analog circuits.

    Do you see AI and simple analog circuits as being the way to go? Basically, the brain is an enormous analog system, and it's the analog processing that partly causes creative thinking. I'm dealing with simple circuits that consist of a few transistors forming neurons, which are linked together to form very complex behavior based on their nervous system. It is all reliant on the external stimuli that are connected into the circuit.

    If thousands or even millions of these neurons could be connected to form an even larger network, wouldn't you get a higher form of AI than you would using digital circuits?
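
    As a crude software illustration of what I mean by processes propagating across a network of simple analog neurons, here is a toy simulation (the leaky-integrator model, the ring topology, and the constants are my own assumptions; real BEAM circuits are a handful of transistors, not code):

    # A pulse injected into one "neuron" travels around a ring of leaky
    # integrators, decaying as it goes -- the behavior lives in the coupling
    # between neighbours, not in any stored program.
    SIZE, LEAK, GAIN, STEPS = 12, 0.30, 0.65, 40

    voltages = [0.0] * SIZE      # "membrane" level of each simulated analog neuron
    voltages[0] = 1.0            # a single external stimulus kicks the first neuron

    for t in range(STEPS):
        nxt = []
        for i in range(SIZE):
            upstream = voltages[(i - 1) % SIZE]   # input from the previous neuron in the ring
            nxt.append(LEAK * voltages[i] + GAIN * upstream)
        voltages = nxt
        if t % 10 == 0:
            print(t, ["%.2f" % v for v in voltages])

    You can watch the pulse travel around the ring and slowly decay, which is why I suspect large analog networks might behave very differently from digital ones.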

  • How hampered is AI research by people's "common sense" expectations of what "intelligence" is? For example, Searle's Chinese Room argument captures many people's imagination, and leaves them thinking that nothing in the universe except their brain could possibly have consciousness. Arguments about sentience seem very irrelevant to me, and often lead to this type of attitude. Could AI be making more progress by putting some of these questions aside for the time being?
  • Wrong. You can take college classes in "Artificial Intelligence," and there are textbooks on the subject. But the subject has been eroded by ridiculous SF treatments.
  • At one time, especially back when the term was coined, "AI" was equated with making machines "think" in the same general capacity as humans. AI has changed since then, and is now more of a broad classification of symbolic computing, searching, theorem proving, and knowledge representation. If you pick up any good, recent AI textbook, you won't find any information at all about making machines think in a true way. That's not the goal.

    AI is more a way of branding problems that aren't completely deterministic in the way that traditional algorithms are. A classic example is the STUDENT program for decoding and solving high-school algebra level word problems. An amazing piece of work back in the 1960s--how many people could write something like that today?--but hardly AI in the sense of Kurzweil. Today we're at the same level, in general, but we're working on different problems. Making computers think like people is not one of those problems.
  • John McCarthy has set out a famous challenge to critics of Hard AI: come up with a well-specified and testable human capability that cannot be modelled by an artificial mind.

    If there *were* to be such a counter-example to Hard AI, what kind of areas of human competency might it involve? Which intellectual disciplines are best placed to generate and criticise such purported counter-examples?

  • For the past 40 years, AI has just been 10 years or so away.

    This is because we don't know what is involved in AI. "All the problems we have a concrete grasp on in AI will be addressed within the next 10 years" is perhaps a better thing to be always saying.

    The other issue is that the more AI you understand, the more AI it will take to impress you. The fact that we can beat a skilled human at chess might have impressed researchers 20 years ago, but now that we have the computational resources and the know-how to use them to do it... *yawn*, it's not so exciting.

    Short answer: AI has been advancing quickly, but we're going with it, so we don't notice.
  • First, this post is clearly biased and NOT a good interview question. There's nothing wrong with disagreeing on moral grounds, but your accusatory tone is unlikely to make him change his opinion.

    That aside...

    #1: Trying to create Artificial Intelligence is not exercising powers reserved for God and God alone. God commands Adam and Eve in the book of Genesis to "be fruitful and multiply". This is creating people in the image of Adam and Eve (and also in the image of God), and yet it was commanded by God. So this power is clearly not for God alone.

    #2: Idolatry is anything that supersedes God in your life. Do you care more about your wife than God? Then your wife is your idol. Or do you care more about the Internet than God? Then the Internet is your idol. But making something and idolizing it above God are two unrelated actions.

    (And for what it's worth, I'm a Christian too, but I believe that God's word and commands are rational, logical, and understandable, even if they sometimes require faith.)

    -Ted
  • email croakd2@rpi.edu and I believe he'll point you to a good Lisp source. He seems big into it and is helping develop a big multiplayer game over here at RPI called Omega Worlds.
  • Are there any attempts to make AI research more accessible and less daunting? I mean, I would think that, just as the OSS movement has benefited from the mass proliferation of information in creating code, the same could be said for the likes of AI.

    Are there any good AI programs or simulations besides things like doctor.el for Emacs? I would assume that there are also other languages than the vaunted Lisp that I have heard so much about (but I can't find a single good book at a bookstore or a single class that teaches it locally). I have seen at least one book that used C++ to design so-called "expert systems", but that was a little beyond me.

    Also, are the claims of various games to support advanced AI really true?
  • "What do you say to the people that feel it is unethical to try to create "intelligence"? "

    Could you please explain why it would be unethical in any way to create "intelligence"? I just don't get it. I think it would be a compassionate act. Just think if your computer could tell you what it really feels when you sit and play Quake on it for hours on end, or when you just swear at it because it did what you told it to. I mean, literally giving a machine the choice of whether or not to be a slave is not a bad thing in my book.

    Now, granted, these ideas are in fact a little kooky, but I still think they are not in the least unethical.

  • "how do you justify recycling the same old tired response to AI?"

    I think he's just responding to the natural level of skepticism that greets the development of AI and the meaning of the word "good".

    Many people probably thought that Eliza was the best thing they had ever seen, from the time AI first became possible until maybe ten years after that.

    I could also say that perhaps if games could predict user interactions and give a more challenging or crafty response, that would be an advance. It just depends on your frame of reference.
  • I'll admit that I'm, unfortunately, not familiar with your work. However, I was wondering what your thoughts on artificial life were, and how you see it in comparison to artificial intelligence as a model for learning. Do you think one is better than the other or do you see them as being apples and pianos?
  • If thinkers like Roger Penrose are wrong and it is possible to create true artificial intelligence do you feel we as a species are ready to deal with the ethical and moral questions raised by the ability to create intelligence?
  • I visited the office of these guys a couple of years back. They sell games like Creatures as a sideline and also do more general AI work; they believed that their research would lead them to the development of true human-like AI by 2010. I have no idea how far along they are, though. Anyone with additional information?
  • I haven't read all the posts, so I hope this isn't redundant: How can we ensure "ethical" behaviour in an AI that is programmed to alter itself? By that I mean how can we be sure that we don't create an initially benevolent AI that modifies itself to become a threat to humans (or humanity as a whole)? This seems to me to be a very real threat. Thanks, Pete
  • How can a computer exhibit intelligence when we don't fully understand what makes humans intelligent? Shouldn't we know how the brain works before we can make a computer intelligent?
  • Science fiction writers and humanity in general seem to believe that emotion, consciousness, spitefulness and the rest of the human mindstate naturally go with any advanced AI. Science fiction has since its inception postulated intelligent machines, and virtually without exception these machines have been burdened with the thoughtfulness that humans are also burdened with.

    In my studies in psychology I took a look at some neural net systems and came to the conclusion that there is no need for a machine designed to serve a specific purpose to think about what it is doing, or to get emotional about it. Human brains have been trained to produce emotions because, as a generalized survival mechanism, they are useful. The same can be said of consciousness. Conversely, a machine that only needs to, say, guide a blind man from his door to the train station safely would have no use for consciousness and therefore wouldn't possess any.

    Nevertheless, I think it's possible to give machines the appearance of consciousness and feelings, and perhaps, as it is with humans, there is no difference between "the appearance of" and the real thing. My question, finally: Do you think it's inevitable that machines will be given consciousness to help them perform their tasks? Or is consciousness merely a distraction for the tasks we would use AIs for in the conceivable future?

  • Assuming that one could create a sufficiently intelligent program, what is the potential impact of intelligent viruses? Aside from the likelihood that AI "vaccines" or AI "anti-virals" would also be developed (before or after, it's hard to say), do you foresee problems with people's inability to code "more intelligent" software to keep pace?

    (I realize that this conjures up images from uncountable TV and movie plots, but seems to me that as soon as AI becomes semi-widely demonstrated, it will be exploited for annoying or sinister means!)

  • I like your advocacy of the PURL. In essence, this is a use of the market to control the behaviour of software companies, and puts worth into owning software licenses. I like the consideration and thought you have given to all parties, even the free-software zealots.

    How much advocacy of this idea have you done? What is the response like? Do you see or predict people wanting to change to this kind of software license? In particular, what would you say to the intellectual-property-hating zealots here on Slashdot?

  • Assuming we can create a fully self-aware non-human intelligence, what "rights" should it have? Would it be considered a "person"? Be protected under the Bill of Rights? Then, of course, there are the issues of:
    • Its home - can I enforce a draconian contract on the AI/MI because I own the computer it lives in? (Oh my God - rent-controlled Athlons!)
    • Citizenship - should an AI/MI be allowed to exercise the rights of a citizen of some nation?
    • Passports - does an AI/MI need a passport to surf to another country? (okay, this is a tad frivolous)
    • Taxes - Does an AI/MI need to pay taxes?

    Information wants to be free
  • When it tells you that you have.

    kwsNI
  • For starters, you could check out the very next post after mine: http://slashdot.org/comments.pl?sid=00/04/06/1015212&threshold=-1&commentsort=0&mode=nested&cid=5 [slashdot.org].

    That is the religious side of the ethics problem. There is also a legal side. If artificial intelligence is created, who is responsible for it? You've seen The Matrix; it's a little extreme, but the point is still there. If you create a machine that has AI, who is responsible for that machine's actions? If your car has an AI driving system in it and you have a wreck, is Ford responsible?

    Not only is there the question of who is responsible for "holes" or glitches in the AI, but what about AI that is created for the wrong reasons (#include massmurder.h)...

    kwsNI

  • Thanks S-T, but I already have a girlfriend...

    kwsNI
  • by Noel ( 1451 ) on Thursday April 06, 2000 @07:01AM (#1147995)
    The basis for your software market proposal seems to be simply a method to create and enforce scarcity of IP access so that IP can fit into traditional market models. Yet one of the most powerful changes in the "information age" is the reduction or removal of the cost of duplicating and transferring IP.

    Why should we use artificial scarcity to make IP fit into the traditional market models rather than developing new market models that fit the reality of minimal-cost duplication?

  • by Signal 11 ( 7608 ) on Thursday April 06, 2000 @05:35AM (#1147996)
    Well, it's bound to be asked by someone, might as well be me.

    If you've ever played a game like Red Alert or another RTS (real-time strategy) game, you'll know that the AI is woefully lacking in all of these games. Is it feasible on current hardware to make game AI genuinely intelligent? What are the tradeoffs? If I wanted to create a game with AI on par with a sophisticated human, where could I start looking (books, reviews, lectures, people... I'll take anything)?

  • A few years ago in an issue of PC World, Penn Jillette (of Penn and Teller [sincity.com]) brought up the question of just how advanced AI is, and how advanced it has really become.

    For his example he came up with a weight-guessing program, much like the weight guesser at a carnival sideshow. The program went like this:

    Do you weigh 1 pound? (Y/N) N
    Do you weigh 2 pounds? (Y/N) N
    Do you weigh 3 pounds? (Y/N) N

    Well, I think you get the picture. Eventually this program is going to guess your weight. But can it really be called AI?

    Looking at most of the examples that are on websites, yours included, the programs are extremely simple in what they do. To be completely honest, they do not even appear to be much more complex in their "thinking" than the Weight Guesser.

    Another example that I can think of is expert systems. To me these are nothing more than a simple linking of menus that ask questions. For example, "is the switch on?" Some people consider this to be a form of AI, but by that definition anything with an if statement would be AI.

    So, my question: are computer programs actually, honestly, much beyond this? (A toy sketch of both ideas follows below.)
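
    For what it's worth, both of those "AI" examples fit in a few lines of code, which is rather the point. This is a sketch invented here for illustration, not anyone's actual product:

    # The carnival weight guesser from the post, reduced to its essence:
    # it simply enumerates every possible answer until one is confirmed.
    def weight_guesser(confirm):
        weight = 1
        while not confirm(weight):       # confirm() plays the Y/N role
            weight += 1
        return weight

    # An equally trivial "expert system": nothing but chained if statements.
    def troubleshoot(plugged_in, switch_on):
        if not plugged_in:
            return "Plug it in."
        if not switch_on:
            return "Turn the switch on."
        return "Call support."

    if __name__ == "__main__":
        print(weight_guesser(lambda w: w == 180))              # -> 180, eventually
        print(troubleshoot(plugged_in=True, switch_on=False))  # -> "Turn the switch on."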
  • by Dr. Sp0ng ( 24354 ) <{moc.liamg} {ta} {gnopsm}> on Thursday April 06, 2000 @05:15AM (#1147998) Homepage
    People have been predicting for a long time that AI is not far off, but it doesn't seem to be getting much closer. What are the biggest obstacles in the mission of creating a convincing AI algorithm (i.e. one that is indistinguishable from a real human being)? Is the barrier language, knowledge base, or something else entirely?
  • by Gutzalpus ( 121791 ) on Thursday April 06, 2000 @05:59AM (#1147999)
    What would be the determining factors to make you decide that a computer program was intelligent (or possibly self-aware/conscious)? Do you think it would ever be possible to reach this point (maybe not in the near future, but eventually)? If so, at what point might you consider it unethical to turn off/shut down this program?

    If you don't think we could ever reach this point with AI, why not?
  • by Anonymous Coward on Thursday April 06, 2000 @05:27AM (#1148000)
    Do you view the study of AI as the creation of something that can replace humans in doing everyday tasks, or as something meant to augment humans in their decision-making processes? For example: my company is working on some transportation routing software that can take reservations for taxi cabs and decide the most efficient way of assigning available taxis to those reservations. This is now done by a human (by hand); our system could be set up to do everything automatically, or to present optimal decisions and let a human ultimately decide what to do. But I am uneasy about completely replacing a human intelligence with a computer program... If we can make them do things better than us, and truly make decisions on their own, then, eventually, what the heck will they need us for?
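
    To make the example concrete, the core of such a system can be sketched in a few lines. The greedy nearest-taxi rule and all the names here are invented for illustration; a real dispatcher would use a proper optimizer:

    # Toy sketch of the taxi-assignment task: greedily give each reservation
    # the nearest free taxi. A real system would use a proper optimization
    # method; this only shows the shape of the problem.
    def assign(taxis, reservations):
        """Both arguments map an id to an (x, y) location."""
        free = dict(taxis)
        plan = {}
        for rid, pickup in reservations.items():
            if not free:
                break
            best = min(free, key=lambda t: (free[t][0] - pickup[0]) ** 2 +
                                           (free[t][1] - pickup[1]) ** 2)
            plan[rid] = best
            del free[best]
        return plan

    if __name__ == "__main__":
        taxis = {"cab1": (0, 0), "cab2": (5, 5)}
        reservations = {"res1": (4, 4), "res2": (1, 0)}
        print(assign(taxis, reservations))   # {'res1': 'cab2', 'res2': 'cab1'}

    The ethical question above is exactly whether the plan this kind of program produces should be executed automatically or merely suggested to a human dispatcher.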
  • Rodney Brooks, the director of the AI Lab at MIT, seems to have made a great deal of progress with his subsumption architecture. He proposes that the hard parts of AI are not the planning and cognitive exercises, but rather the parts of intelligence that you find in lower life forms like insects, small mammals and such.

    How does this compare to your approach?
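
    For readers who haven't run into the term, the subsumption idea can be caricatured in a few lines: simple behaviors are stacked in layers, and higher-priority layers override (subsume) the lower ones. This is only a toy illustration, not Brooks' actual code:

    # Caricature of a subsumption-style controller: behaviors are stacked by
    # priority, and the first layer that produces a command wins.
    def avoid(sensors):
        # Highest priority: back away from anything too close.
        if sensors["distance"] < 0.2:
            return "reverse"
        return None

    def wander(sensors):
        # Lowest priority: the default is to keep moving.
        return "forward"

    LAYERS = [avoid, wander]   # ordered from highest to lowest priority

    def control(sensors):
        for behavior in LAYERS:
            command = behavior(sensors)
            if command is not None:
                return command

    if __name__ == "__main__":
        print(control({"distance": 0.1}))   # -> reverse
        print(control({"distance": 2.0}))   # -> forward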

  • by Saige ( 53303 ) <evil,angela&gmail,com> on Thursday April 06, 2000 @05:20AM (#1148002) Journal
    As an AI researcher, I wonder what your take would be on the "Creatures" software products. Do you think they have done anything novel in the AI area, such as simulating a body and its influence on the brain, which is done as a neural net? Has any of Creature Labs' work even been noticed by AI researchers?


    ---
  • by speek ( 53416 ) on Thursday April 06, 2000 @06:07AM (#1148003)
    AI seems to have two aspects that need solving: Hardware power, and understanding of what intelligence is. On a sliding scale of 1-10, 10 being "solved and adequate", where are we with respect to each of these two problems?
  • by leko ( 69933 ) on Thursday April 06, 2000 @05:52AM (#1148004)
    We've all seen the movies where some AI computer runs amok: Tron, Terminator, etc.

    My question (clear from the subject) is: how do you know when you've achieved AI? Will it be when a system starts learning just because it _wants_ to? Does it need to be able to communicate with us above some predetermined level? Does it need to go completely crazy trying to fight for its survival? Does it even need to exhibit intelligence as humans know it?

  • by Anonymous Coward on Thursday April 06, 2000 @05:04AM (#1148005)
    For the past 40 years, AI has just been 10 years or so away.

    It's still just 10 years or so away.

    It's not getting any closer.

    How do you justify any degree of optimism wrt the future of AI at this point? What makes now fundamentally different from anytime in the past 40 years?
  • by Anonymous Coward on Thursday April 06, 2000 @05:24AM (#1148006)
    The way into the field of AI isn't always extremely clear. What type of background do they expect? Is it mostly a research position, or is it treated like a normal job with normal goals? Are there any classes, subjects, or schools you recommend to make it into the AI field? Also, how exactly did you get into the field? How did AI intrigue you into doing what you do now, despite all the controversy over creating an intelligence that could possibly be considered a "god" compared to human existence? Very interesting to say the least, and something I'm interested in. AC
  • by Anonymous Coward on Thursday April 06, 2000 @05:36AM (#1148007)
    Dr Jordan:

    Do you think that AI is more likely to arise as the result of explicit efforts by programmers to create an intelligent system, or by the evolution of artificial life entities? Or, on the third hand, do you think efforts like Cog (training the machine like a child, with a long, human-aided learning process) will be the first to create a thinking machine?

    ~Phyruxus

  • by V. ( 1057 ) <nathanNO@SPAMnathanvalentine.org> on Thursday April 06, 2000 @05:16AM (#1148008) Homepage
    Do we win something if we can fool him into answering a computer-generated question? ;)
  • by joss ( 1346 ) on Thursday April 06, 2000 @07:35AM (#1148009) Homepage
    I also read your IP proposal, and agree with the points mentioned above.

    However, I also have a problem with your proposal from an economic perspective:

    Property laws developed as a mechanism for optimal utilisation of scarce resources. The laws and ethics for standard property make little sense when the cost of replication is $0. The market is the best mechanism for distributing scarce resources, so you propose we make all IP resources scarce so that IP behaves like other commodities and all the laws of the market apply.

    We are rapidly entering a world where most wealth is held as a form of IP. Free replication of IP increases the net wealth of the planet. If everybody on earth had access to all the IP on earth, then everybody would be far richer - it's not a zero-sum game. Of course, we're at least several decades from this being a viable option, since we've reached a local minimum. (We'd need the equivalent of starship replicators first - nanotech...)

    Artificially pretending that IP is a scarce resource will keep the lawyers, accountants, and politicians in work, and will also allow some money to flow back to the creatives, but at the cost of impoverishing humanity.

    I could actually see your proposal being adopted, and I can see how it would maintain capitalism as the dominant model, but I also believe that it is the most damaging economic suggestion in human history.

    Could you tell me why I'm wrong?
  • by Breace ( 33955 ) on Thursday April 06, 2000 @05:42AM (#1148010) Homepage
    In your 'hyperbook' about your idea of a software market, I noticed that you say that Open Source evangelists should support your movement because it will be (quote) "A way for your next team to be rewarded for their creative work if it turns into sendmail, apache, or linux."

    I assume (from reading other parts) that you are talking about a monetary reward. My question is (and this is not meant as a flame by any means): do you really think that that's what the Open Source community is after, after all? Do you think that people like Torvalds or RMS are unhappy about not being rewarded enough?

    If the Open Source community doesn't care about monetary rewards, is there another benefit to having your proposed Software Market?

    Regards, Breace
  • by Hobbex ( 41473 ) on Thursday April 06, 2000 @05:52AM (#1148011)
    Mr. Pollack,

    I read your article about "information property" and was surprised to find you dealt with the matter completely from the point of view of advancing the market. There are those of us who would argue that the well-being of the market is, at most, a second-order concern, and that the important issues the Information Age raises regarding the perceived ownership of information are really about Freedom and integrity.

    These issues range from the simple desire to have the right to do whatever one wants with data that one has access to, to the simple futility and danger of trying to limit to paying individuals something that by nature, mathematics, and now technology is Free. They concern the fact that our machines are now so integral to our lives that they have become a part of our identity, with our computers as the extension of ourselves into "cyberspace", and that any proposal which aims to keep total control over everything in the computer away from the user is thus an invasion of our integrity, personality, and freedom.

    Do you consider the economics of the market to be a greater concern than individual freedom?

    -
    We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.
  • by Borealis ( 84417 ) on Thursday April 06, 2000 @05:22AM (#1148012) Homepage
    For a long time there has been a fear of a Frankenstein incarnated through AI. Movies like The Matrix and the recent essay by Bill Joy both express worries that AI (in the form of self-replicating robots with some AI agenda) could possibly overcome us if we are not careful. Personally I have always considered the idea rather outlandish, but I'm wondering what an actual expert thinks about it.

    Do you believe that there is any foundation for worry? If so, what areas should we concentrate on to be sure to avoid any problems? If not, what are the limiting factors that prevent an "evil" AI?
  • by spiralx ( 97066 ) on Thursday April 06, 2000 @05:25AM (#1148013)

    Do you think that a greater understanding of the human brain and how intelligence has emerged in us is crucial to the creation of AI, or do you think that the two are unconnected? Will a greater understanding of memory and thought aid in development, or will AI be different enough so that such knowledge isn't required?

    Also, what do you think about the potential of the models used today to attempt to achieve a working AI? Do you think that the models themselves (e.g. the neural net model) are correct and have the potential to produce an AI given enough power and configuration, or do you think that our current models are just a stepping stone along the way to a better model which is required for success?

    First of all, it is indeed an honor to pester a big-name scientist with my puny little questions! Hopefully I will not arouse angst with the simplicity of my perceptions. Aha! I toss my Wheaties on Mount Olympus and hope to see golden flakes drift down from the sky!

    I have always thought that distributed computing naturally lends itself to large-scale AI problems, specifically your Neural Networks and Dynamical Systems work. I am thinking specifically of the SETI@home and distributed.net projects. Have you thought about, or to your knowledge has anyone thought about, harnessing the power of collective geekdom for a sort of brute-force approach to neural networks? I don't know how NNs normally work, but it seems that you could write a very small, lightweight client and embed it into a screen saver a la SETI@home. This screen saver would really be a simple client 'node'. You could then add some cute graphics, like a picture of a human brain and some brightly colored synapses or what have you.

    Once the /.ers got their hands on such a geek toy, I have no doubt you'd have the equivalent of several hundred thousand hours or more of free computer time, and who knows, maybe we could all make a brain together! I would love to think of my computer as a small cog in some vast neural network, or at least I would until Arnold Schwarzenegger got sent back in time to kill my mom. Whaddayathink, Jordan? Is this a good idea, or am I an idiot?

    Pete Shaw
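
    As a back-of-the-envelope sketch, one of those screen-saver "nodes" might do something like the following: receive a work unit (a slice of network weights plus a batch of inputs), compute that layer's outputs, and send them back. Every name below is invented for illustration; this is not how SETI@home or any real project actually works:

    import math

    # Toy "work unit" for a distributed neural-network client: given one
    # layer's weights and a batch of input vectors, return the activations.
    def process_work_unit(weights, biases, inputs):
        outputs = []
        for vector in inputs:
            row = []
            for w, b in zip(weights, biases):
                total = sum(wi * xi for wi, xi in zip(w, vector)) + b
                row.append(math.tanh(total))      # squashing nonlinearity
            outputs.append(row)
        return outputs

    if __name__ == "__main__":
        weights = [[0.5, -0.2], [0.1, 0.9]]       # 2 neurons, 2 inputs each
        biases = [0.0, 0.1]
        inputs = [[1.0, 2.0], [0.0, -1.0]]
        print(process_work_unit(weights, biases, inputs))

    Whether splitting a network into chunks like this actually pays off over slow home connections is part of what the question above is really asking.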
  • by Henry House ( 102345 ) on Thursday April 06, 2000 @05:20AM (#1148015)
    It seems to me that a significant problem holding back the development of AI is that few non-professionals grok AI well enough to offer any contribution to the AI and open-source communities. What do you suggest that I, as a person interested in both AI and open source, do about this? What are the professionals in the AI field doing about this?
  • by kwsNI ( 133721 ) on Thursday April 06, 2000 @05:10AM (#1148016) Homepage
    What do you say to the people that feel it is unethical to try to create "intelligence"?

    kwsNI
  • by john_many_jars ( 157772 ) on Thursday April 06, 2000 @05:47AM (#1148017) Homepage
    I have read several coffee-table science books on the subject and often find myself asking for a way to measure AI. As has been noted, AI is always elusive and just around the corner. My question is: how do you gauge how far AI has come, and what is AI?

    For instance, what's the difference between your TRON demonstration and a highly advanced system of solving a (very specific) non-linear differential equation to find relative and (hopefully absolute) extrema in the wildly complicated space of TRON strategies? Or, is that the definition of intelligence?
