Artificial Intelligence Overview

spiderfarmer writes: "Well, it feels slightly odd to suggest one of my own articles, but here goes. I've recently completed a brief overview of the current state of AI. The article concept was focused on Cyc, but scope creep being what it is, I ended up doing an overview of the entire field. Some of the Slashdot gang were fairly helpful in pointing me towards experts who would talk to me and towards white papers and books I might not have otherwise found. So, I thought they might be interested in how I put all the information together."
This discussion has been archived. No new comments can be posted.
  • Media and AI (Score:4, Insightful)

    by mellifluous ( 249700 ) on Tuesday August 14, 2001 @09:37AM (#2110944)
    Dr. Lenat and others in the field of AI research should know better than to make claims about consciousness and morality in a public forum. Cognitive scientists don't even begin to agree on what consciousness is, let alone what it would take for a machine to replicate it. Some very respected individuals do not even think that human consciousness can be replicated within the foreseeable future (e.g. Roger Penrose's The Emperor's New Mind). As in any other scientific discipline, these sorts of claims should be left to peer review. Claiming to have invented a conscious machine would be akin to a physicist claiming to have unified quantum mechanics with relativity, but without having submitted their findings to any publication.
    • Re:Media and AI (Score:2, Informative)

      by shibboleth ( 228942 )
      I second the motion.

      It is clown-like, dishonest, and absurd for Dr Lenat to claim Cyc is conscious.

      For the author of the article to have taken this seriously (and then to, e.g., think that the Cyc people won't let her "interview" the software because she might ruin it) means that she (whatever her fine qualities) has /a lot/ of learning to do.

    • Dr. Lenat and others in the field of AI research should know better than to make claims about consciousness and morality in a public forum. Cognitive scientists don't even begin to agree on what consciousness is, let alone what it would take for a machine to replicate it. Some very respected individuals do not even think that human consciousness can be replicated within the foreseeable future (e.g. Roger Penrose's The Emperor's New Mind). As in any other scientific discipline, these sorts of claims should be left to peer review. Claiming to have invented a conscious machine would be akin to a physicist claiming to have unified quantum mechanics with relativity, but without having submitted their findings to any publication.

      The Slashdot Scientific Review Technique: If any scientist says anything other than "we don't know," he or she is wrong.

      • If you're more comfortable signing on to any half-assed theory instead of taking an agnostic approach, that's your prerogative. Don't expect to be convincing to anyone but yourself and other "true believers."
  • I've been working for several years in knowledge representation for NLP (Verbmobil [verbmobil.dfki.de]), and our main problem was "semantics": how do you draw conclusions from some given facts? Unfortunately, there are two bad ways (a rough sketch of the first follows this comment):
    • Either you do it right (using a formal logic calculus), and it's extremely slow (exponential...).
    • Or you do it informally, and it doesn't give you the right results.
    And basically everything is an exception or has an exception. I became so desperate that I left the field and went into dot-com work.
    Here it's much better. Sometimes you advance...

    Frank (http://www.fraber.de/ [fraber.de])
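
    For concreteness, here is a minimal Python sketch of doing inference "the right way" by brute-force model enumeration. The knowledge base and query are invented for illustration (this is not Verbmobil code), and the loop is exponential in the number of atoms, which is exactly the complaint above.

    # Checking propositional entailment by enumerating every truth assignment:
    # sound and complete, but O(2^n) in the number of atoms.
    from itertools import product

    def entails(kb, query, atoms):
        """Return True if every model of kb is also a model of query."""
        for values in product([False, True], repeat=len(atoms)):  # 2^n assignments
            model = dict(zip(atoms, values))
            if kb(model) and not query(model):
                return False
        return True

    # Hypothetical toy knowledge base: "it rains" and "rain implies wet streets".
    atoms = ["rain", "wet"]
    kb = lambda m: m["rain"] and (not m["rain"] or m["wet"])
    query = lambda m: m["wet"]

    print(entails(kb, query, atoms))  # True -- but each new atom doubles the work
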
  • Lenat has been issuing press releases about how Cyc is going to be intelligent "real soon now" for about a decade. Prof. Vaughn Pratt from Stanford tried it around 1994 [umbc.edu], and wasn't impressed. The engine was available on the web for a while, as part of a DoD project at MIT, and it wasn't very impressive.

    Cyc is basically a big encyclopedia of common-sense statements coupled to a simple inference engine. There may be uses for such things, but they're not anywhere near intelligent. What you get out is not much more than what somebody else put in. Sort of like Ask Jeeves. [askjeeves.com]
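    For readers curious what "an encyclopedia of statements coupled to a simple inference engine" amounts to mechanically, here is a hypothetical toy sketch in Python; the predicates and rules are invented and this is not Cyc's actual CycL representation or inference code.

    # A store of hand-entered assertions plus a naive forward-chaining engine.
    facts = {("is_a", "cat", "mammal"), ("is_a", "mammal", "animal")}
    rules = [
        # transitivity of is_a: if X is_a Y and Y is_a Z, then X is_a Z
        lambda fs: {("is_a", x, z)
                    for (p1, x, y1) in fs if p1 == "is_a"
                    for (p2, y2, z) in fs if p2 == "is_a" and y1 == y2},
    ]

    def forward_chain(facts, rules):
        """Apply every rule until no new facts appear (a naive fixpoint)."""
        while True:
            new = set().union(*(rule(facts) for rule in rules)) - facts
            if not new:
                return facts
            facts = facts | new

    print(("is_a", "cat", "animal") in forward_chain(facts, rules))  # True

    What comes out is a direct consequence of what was typed in, which is the parent's point.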

  • Please... (Score:2, Funny)

    by futard ( 462571 )
    You try to write a comprehensive article about AI, yet make absolutely no mention of how future alien-looking robots will be able to bring people back to life with DNA, but they will only live for one day "because of the space time continuum".

    What amateurishness.

  • I read a lot of posts, some with abilities sought, some with laments about current tech, some about heuristics.
    I've got points to ponder. Like, if true AI is spawned as an Internet node, would it reflect its environment? Prolly pr0n. Or, equally, would it condemn political systems, and what would it use as its baseline? Would it have a social insect model for thought?
    As for learning, I believe it could learn things, but adapt? It might get better, but as for being able to fathom its 'creator', metaphysics might be lost on it. Ethics might be another touchy subject. Unless it can hurt, and empathise, I doubt it would make a good politician. (WHOA! Forget what I just said, do any of them?)
    Contemplate art, social values. Would it even have the same ideals of freedom? (Yes, I read the posts.)
    What about good vs. evil -- would you teach it, I, Robot-style? What would you use for definitions? How could it apply them? Would it apply them?
    And can we even define rational thought as a computation, especially with emotions? If the program's resources were low, would it claim to be having a bad day? How would we react to a cranky AI?
    Would it sense how we treat one another, and make moral judgements, tempered with wisdom and mercy? That might take a lot of expectation.
    I think we should be happy with modelling aspects, not try to create a being which would be quite lonely (if true AI). It might only be good for the perspective of what it thinks of us. I wonder what it would think of us?
  • When I was at college, I wrote an Eliza program (didn't we all) on a Pr1me system in BASIC.

    Eliza: Hello, how are you today?
    Me: Fine
    Eliza: Hello, how are you today?
    Me: Not bad
    Eliza: That's very negative
    Me: f**k off
    Eliza: That's not very nice
    etc etc etc

    All was okay until the lecturer tested it on a teletype and it crashed, dumping all sorts of control codes to the teletype...

    You really had to lunge for the power button to prevent massive forest devastation
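    For anyone who never wrote one, an Eliza-style responder is little more than keyword matching against canned templates. Below is a hypothetical minimal Python version that reproduces the exchange above; the real ELIZA used scripted decomposition/reassembly rules, but the flavour is the same.

    import re

    # keyword -> canned reply; first match wins
    RULES = [
        (re.compile(r"\bnot\b|\bno\b", re.I), "That's very negative"),
        (re.compile(r"[*]"), "That's not very nice"),
    ]

    def respond(line: str) -> str:
        for pattern, reply in RULES:
            if pattern.search(line):
                return reply
        return "Hello, how are you today?"  # default when nothing matches

    for user in ["Fine", "Not bad", "f**k off"]:
        print("Me:", user)
        print("Eliza:", respond(user))
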
  • I don't have the overhead needed for visual face recognition, speech recognition, etc., but I could do it via a typing interface.
    Step 1) A 100% trustable coder needs to input some data on verbs and nouns.
    Step 2) A 100% trustable coder needs to input contextual sentences... explaining to the computer that given a situation A and actions B, the result C will happen.
    Then, after a long string of events is fed into the computer, possibly via natural language (with the computer asking about what it doesn't know at first), the computer can "interpret" the events while it sleeps, which is basically just the same as when people dream.
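    A hypothetical Python sketch of that "situation A + action B -> result C" representation, with invented rules and a fallback for what the system has not yet been taught:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        situation: str
        action: str
        result: str

    RULES = [
        Rule("glass on table", "push glass off table", "glass breaks"),
        Rule("light is off", "flip switch", "light is on"),
    ]

    def interpret(situation: str, action: str) -> str:
        """Look up what should happen; ask the coder when we don't know."""
        for rule in RULES:
            if rule.situation == situation and rule.action == action:
                return rule.result
        return f"unknown -- please teach me: ({situation!r}, {action!r})"

    print(interpret("light is off", "flip switch"))    # light is on
    print(interpret("door is locked", "turn handle"))  # unknown -- please teach me...
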
    • Oh yeah, everyone who doesn't know much about programming can't see the problem. But the Cyc project alone has spent years doing just what you said - and before that there were decades of research that failed to solve the person-simulation problem. It's a lot harder than it looks at first sight - and no-one really has a clue why!

    • code is all you need (at least for a while, with an emerging intelligence... as it gets older, representing the physical universe would probably be beneficial)

      But if you represent various code objects with certain relationships to other ones, it can still be intelligent (of course assuming the AI code is in there in the first place)

      Just gotta remember to let the AI config its own code, not just add and change objects... that's when you get the Real Interesting Stuff =]

      Hmm..
      "All you need is Code,
      Code is all you need"
      (sung, badly, to a similar Beatles song ;] )
  • Few comments (Score:2, Interesting)

    by Ruie ( 30480 )
    • It is my honest belief that current research in AI is not so much about how to make computers think, but rather about how to reproduce the behaviour traditionally associated with thinking. The distinction is important. While a human will think about a new situation, possibly inventing and using new rules, current AI programs rely on (possibly partial) knowledge of what is going on. A human can "extend" existing knowledge. An AI is good at "bridging the gaps".

    • Strictly speaking, data mining is not statistics. Statistics deals with analysis of time-based data, where typically a small number of parameters is sampled many times. Data mining studies non- or weakly time-based data, with lots of parameters but just a few samples taken.

      Examples: taking measurements of temperature in a region over 50 years and trying to predict climate change is a statistical problem, while analysing samples of minerals in an area to try to find oil or gas is data mining (since, presumably, mineral composition does not change over time, so only a single sample is taken from each point).

    • It is my honest belief that current research in AI is not so much about how to make computers think, but rather about how to reproduce the behaviour traditionally associated with thinking. The distinction is important. While a human will think about a new situation, possibly inventing and using new rules, current AI programs rely on (possibly partial) knowledge of what is going on.

      This is only true for some areas of AI research. Researchers coming from cognitive science and neuroscience are very interested in discovering how thought works in humans and frequently use AI (often neural networks) as a research tool. In this context, the research is about how humans think, and by extension, how to make a computer think in the same way humans do.

    • It is my honest belief that current research in AI is not so much about how to make computers think, but rather about how to reproduce the behaviour traditionally associated with thinking. The distinction is important.

      Isn't this the same distinction that Alan Turing had decided was ultimately irrelevant? What's more important, the process or the result?

    • Strictly speaking, data mining is not statistics. Statistics deals with analysis of time-based data, where typically a small number of parameters is sampled many times. Data mining studies non- or weakly time-based data, with lots of parameters but just a few samples taken.

      Examples: taking measurements of temperature in a region over 50 years and trying to predict climate change is a statistical problem, while analysing samples of minerals in an area to try to find oil or gas is data mining (since, presumably, mineral composition does not change over time, so only a single sample is taken from each point).

      Nope. Disagree.

      If you want a unifying theme to pull together most of AI, you could do worse than think of it as a cookbook of techniques for designing, building and then automatically refining statistical models of (some aspect/s of) reality.

      Pretty much all of the standard problems of AI can be presented to advantage in this framework (including the identification of new objects/concepts/patterns and associations). The evolutionary enhancement our brains confer by doing adaptive and hierarchical statistical modelling (including pattern recognition, but also much more) so well and so automatically is basically what makes 'intelligence' worth having.

      The important thing about such a statistical model is *not* certainty that the model is right -- it never will be (not even for your temperature data). The 'long-run densely-sampled stationary time-series' is a myth -- in reality it just doesn't happen. The important thing *is* to realise your model is imperfect; to allow (as well as you can) for that uncertainty in your model; to explore different ways of setting up your model; and then to use statistical inference (Bayes theorem etc) to improve your predictive model as the data comes in. Only by allowing for the imperfection can you learn from the data.

      The principles of statistical modelling have a central place in the study of Statistics -- it's the underlying logic that most of the subject is built on. On the other hand, the blind application of certain 'standard' statistical tests seems distinctly peripheral.
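      As a concrete (and hypothetical) illustration of "use Bayes theorem to improve your predictive model as the data comes in", here is a tiny Python example: the unknown is a coin's bias, the prior is a Beta(1,1) that admits we don't know it, and each observation sharpens the posterior. The numbers are made up.

      # Conjugate Beta-Bernoulli updating: Bayes' theorem reduces to counting.
      alpha, beta = 1.0, 1.0                     # Beta(1,1) prior: "no idea yet"
      observations = [1, 0, 1, 1, 0, 1, 1, 1]    # 1 = heads, 0 = tails

      for y in observations:
          alpha += y                             # posterior is Beta(alpha + heads,
          beta += 1 - y                          #                   beta + tails)

      posterior_mean = alpha / (alpha + beta)
      print(f"posterior mean bias = {posterior_mean:.2f}")   # 0.70 after 8 flips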

      • Re:Statistics (Score:2, Informative)

        by johnmark ( 64245 )
        The important thing *is* to realise your model is imperfect; to allow (as well as you can) for that uncertainty in your model; to explore different ways of setting up your model; and then to use statistical inference (Bayes theorem etc) to improve your predictive model as the data comes in.

        There is a renegade group of AI folks who study uncertainty -- see http://www.auai.org. It's curious how an entire sub-field can be overlooked in this general discussion.

        Actually, the decision-theoretic approach that much uncertainty research is based on (Bayes methods included) stands in contrast to the cognitive approaches in the AI mainstream. The term "statistical" carries too much baggage, some of which litters this thread. For fascinating insight into the rough edges of AI and science generally, look into the controversies between classical statistics, Bayesian, logicalist and other approaches to reasoning.


  • The article of course mentions Deep Blue and chess:
    Probably the most famous example of a machine which taught itself, IBM's Deep Blue, which taught itself to play chess better than the human world chess champion, Garry Kasparov.
    I find chess programs, and indeed the problem of chess, relatively unimpressive. Chess is a game of at least almost perfect information, and almost pure deductive logic.

    [I'm not sure I agree with those who say chess is a game of perfect information and pure deductive logic. I believe imperfect, probabilistic information and induction may come into play under certain circumstances. You offer a sacrifice to set a trap. Will your opponent see the trap? Will he take the sacrifice? If he does, great. If he doesn't, perhaps you have wasted a move and allowed him to seize the initiative. There is an element of induction and probability in making your decision.]

    Let's face it, pretty soon the World Chess Champion will be a human only because computers are excluded from play. Hell, pretty soon your laptop will consistently beat the (human) World Chess Champion while you watch (the DeCSSed version, shh, don't tell anyone) of Matrix V and recompile Linux Kernel version 4.4 at the same time.

    Poker, thank God, is different. As explained by The University of Alberta Computer Poker Research Group [ualberta.ca]:
    Poker is an excellent domain for artificial intelligence research. It offers many new challenges since it is a game of imperfect information, where decisions must be made under conditions of uncertainty. Multiple competing agents must deal with probabilistic knowledge, risk management, deception, and opponent modeling, among other things.
    The University of Alberta Computer Poker Research Group [ualberta.ca] has implemented a poker playing program named Poki [ualberta.ca]. Poki is implemented in Java, and some of the source code [ualberta.ca] has been released. To facilitate other research into poker, they have also provided a Texas Hold'em communication protocol [ualberta.ca], which allows new computer programs and humans to play against each other online.

    See also:

    Wilson Software [wilsonsoftware.com], makers of the best commercial poker software. There are free Windows (sorry) demo programs for: Texas Hold'Em [wilsonsw.com], 7-Card Stud [wilsonsw.com], Stud 8/or better [wilsonsw.com], Omaha Hi-Low [wilsonsw.com], Omaha High [wilsonsw.com], and Tournament Texas Hold'em [wilsonsw.com]

    rec.gambling.poker [rec.gambling.poker] [Usenet]

    IRC Poker Server [cmu.edu]

    Greg Reynold's Gpkr GUI [anet-stl.com]

    World Series of Poker [binions.com]

    Great Poker Forums [twoplustwo.com]

    Card Player Magazine [cardplayer.com]

    Poker Digest [pokerdigest.com]

    Gambler's Book Shop [gamblersbook.com]

    And now, if you will, may we please have a moment of silence for Stu Ungar [pokerpages.com].

    • by Anonymous Coward
      Poker, thank God, is different. As explained by The University of Alberta Computer Poker Research Group:

      Poker is an excellent domain for artificial intelligence research. It offers many new challenges since it is a game of imperfect information, where decisions must be made under conditions of uncertainty. Multiple competing agents must deal with probabilistic knowledge, risk management, deception, and opponent modeling, among other things.

      Actually, while imperfect information makes poker a different problem than chess, it doesn't make it all that much deeper. I did research on a poker engine, and one thing I learned is that it is possible to play poker (or bridge) using searches similar to those in chess. You can sample possible game trees instead of searching static ones, because it's only necessary to pick a strategy that makes more money than it loses on average.
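      A hypothetical Python sketch of that sampling idea: repeatedly deal the unseen card at random, score each complete deal, and pick the action with the best average payoff. The toy game and payoffs are invented for illustration.

      import random

      DECK = list(range(2, 15))   # ranks 2..14 of a single suit

      def payoff(my_card: int, opp_card: int, action: str) -> float:
          if action == "fold":
              return -1.0                                   # forfeit the ante
          return 2.0 if my_card > opp_card else -2.0        # showdown for the pot

      def choose_action(my_card: int, samples: int = 10_000) -> str:
          totals = {"call": 0.0, "fold": 0.0}
          for _ in range(samples):
              opp_card = random.choice([c for c in DECK if c != my_card])
              for action in totals:
                  totals[action] += payoff(my_card, opp_card, action)
          # pick whichever action does better on average over the sampled deals
          return max(totals, key=lambda a: totals[a])

      print(choose_action(13))   # usually "call" with a strong card
      print(choose_action(3))    # usually "fold" with a weak one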

      All sorts of things that look like psychology in poker (bluffing, slowplaying etc.) turn out to be mathematically tractable and actually mathematically provable necessities of competent play. There has to be some chance that you will misrepresent your hand; otherwise your play gives too much information to your opponent.

      The really hard problem, as everyone probably already knows, is analysis and planning in situations where the search tree is too big for a chess like search and where sampling isn't good enough. Go is everyone's favorite example and that's a game of perfect information.

      By the way, the CYC project mentioned in the article (as self-conscious!!) will not be the slightest help in making machines talented enough to play go (or poker or chess, or even do crossword puzzles for that matter). Nor will it be of any use in teaching a computer to see, hear, touch, manipulate objects or imagine. Nor will the program be able to make useful analogies, so it won't be able to communicate in a human way (and I also think it won't be able to think in any useful way either). In my opinion Dr. Lenat's only great accomplishment is performance art. He's fooled people into wasting 50 million dollars on a complete fraud. And he's still going. Chalk one up for NS (natural stupidity).

      For an intelligent overview of what's wrong with CYC (unlike silly fallacies from the likes of Roger Penrose and John Searle), and what an alternative might look like, I recommend Douglas Hofstadter's wonderful book, "Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought".

      Cowardly Lion

  • AI references (Score:2, Informative)

    by lekroll ( 453847 )
    I must say I find the overview somewhat lacking, but maybe it's just because of my personal pet science :-)
    Anyway, extra suggestions:
    Evolutionary algorithms
    Swarm intelligence
    Distributed AI
    Alife
    Check them out with Google. Mucho interesting AI stuff.
    • Re:AI references (Score:2, Interesting)

      by Mentifex ( 187202 )
      The biggest action in easily (and totally) accessible AI seems to be happening at http://sourceforge.net [sourceforge.net], where http://sourceforge.net/projects/mind/ [sourceforge.net] is just one of about three hundred (300) Open Source AI projects. The Mind project has an edge because it is based on a well-developed theory of cognition deriving from neuroscience (i.e., the visual feature-extraction of Hubel and Wiesel) and from Chomskyan linguistics.

      For some reason, the American military have been looking into the http://www.scn.org/~mentifex/ [scn.org] AI Home page predating the Mind project by seven years, as evidenced by the following logs of recent military accesses:

      24/Jul/2001:10:59:39 - nipr.mil - /~mentifex/
      24/Jul/2001:11:04:27 - nipr.mil - /~mentifex/
      24/Jul/2001:11:04:34 - nipr.mil - /~mentifex/totalai.html
      24/Jul/2001:11:04:41 - usmc.mil - /~mentifex/jsaimind.html
      24/Jul/2001:11:06:24 - nipr.mil - /~mentifex/mind4th.html
      24/Jul/2001:11:11:56 - usmc.mil - /~mentifex/english.html
      29/Jul/2001:11:37:10 - navy.mil - /~mentifex/
      29/Jul/2001:11:40:56 - navy.mil - /~mentifex/
      30/Jul/2001:07:38:45 - arpa.mil - /~mentifex/
      07/Aug/2001:07:22:58 - pentagon.mil - /~mentifex/jsaimind.html
      07/Aug/2001:14:44:12 - af.mil - /~mentifex/aisource.html
      07/Aug/2001:14:44:16 - af.mil - /~mentifex/jsaimind.html
      07/Aug/2001:14:48:19 - af.mil - /~mentifex/index.html
      08/Aug/2001:11:21:48 - army.mil - /~mentifex/
      08/Aug/2001:11:22:02 - army.mil - /~mentifex/aisource.html
      08/Aug/2001:22:18:15 - nosc.mil - /~mentifex/aisource.html
  • by doctor_oktagon ( 157579 ) on Tuesday August 14, 2001 @09:35AM (#2133766)
    I'm only 30 years old, and as a child I remember the "scary" face of AI being presented to the public as living machines which would out-think humans and render us redundant, engineered by unethical scientists.

    Nowadays AI is never mentioned in popular media. It has been replaced by the new emphasis in public-facing science: cloning and gene therapy.

    This is the new AI in the mind of the ordinary citizen. It will lead to the destruction of the human race, and poses many ethical and moral questions. In the UK it is being demonised by the popular press without real debate, much like AI probably was 20 years ago.

    Incidentally, this was an excellent "heads-up" article for a novice like me, and I gained significantly from it.
    • "The Matrix" and "AI:Artificial Intelligence" immediately come to mind. Looks like the media might still have interest.... Sure it isn't the news that's reporting it, but there hasn't been anything to report lately.
    • Nowadays AI is never mentioned in popular media. It has been replaced by the new emphasis in public-facing science: cloning and gene therapy.

      This is true for now.

      Then there will be some hit Hollywood movie where the evil AI Robot Scientist and his/her/its borgified army of bio-engineered clones tries to take over the world via a mutant form of gene therapy that manipulates the brain and renders people into zombies, pliable to any suggestion.

      Of course, the best way to take over the world is to do it via marketing and legal agreements. You leave things like wars and armies to silly politicians. You can control the banks and the infrastructure.

        Then there will be some hit Hollywood movie where the evil AI Robot Scientist and his/her/its borgified army of bio-engineered clones tries to take over the world via a mutant form of gene therapy

        I think the reason cloning, gene therapy, stem-cell research and the like gain so much media attention is that they focus on issues very important to almost everyone on the planet: the ability to produce/modify babies, and anything associated with reproductive functions.

        White Van Man may not care about evil robots, but he certainly cares about medical advances concerning his "knackers"!
  • Hi there, this is my first Slashdot post, so I hope I do it right ^^ I am currently an apprentice at a firm called "humanit" and we are coding a program called "InfoZoom", a data mining and browsing program. The main program is Windows-only (*d'oh*), but you may want to check out the examples on the homepage humanit.de [humanit.de]. There is also a program called DPS (dynamic personalisation server), which is an implementation of User-, Master- and Tutor-Learning, so go ahead, read for yourself ;) (This is in no way an advertisement, I just wanted to show you... I hope it's okay...)
  • What any AI needs (Score:2, Interesting)

    by General888 ( 514690 )
    To get AI working you need lots of horsepower. And some pretty fancy algorithms. I'm not even sure we can do real AI with the classical computers we have right now. With quantum computers? Who knows.

    The basic requirement for a self-aware system is to define self-looping reasoning for the system. That is, the system must be able to observe its OWN thoughts, not only what is happening outside. It must be able to react to and change its thoughts by its own thinking. That is very important. That also means you need to create a basic language system of some sort for those underlying systems.

    All in all, it's a complex but very interesting problem. Complex as in figuring out the basic underlying system and the defining parameters. If we get those done correctly, the AI will 'grow' and build upon them like a learning human would, by interacting with its surroundings and with its own thoughts.

    • by Cassandra ( 14615 )

      The basic requirement for a self-aware system is to define self-looping reasoning for the system. That is, the system must be able to observe its OWN thoughts, not only what is happening outside. It must be able to react to and change its thoughts by its own thinking. That is very important. That also means you need to create a basic language system of some sort for those underlying systems.

      How can you be so sure of this? I for one am fully self-aware, and I strongly doubt that my own thoughts are observable by introspection, except for a very small part of them. AFAIK there isn't yet a man made self-aware system that works the way you prescribe.

      • How can you be so sure of this? I for one am fully self-aware, and I strongly doubt that my own thoughts are observable by introspection, except for a very small part of them. AFAIK there isn't yet a man made self-aware system that works the way you prescribe.

        Oh, but you can and do observe your own thinking. Every time you make a conscious decision you are able to observe, and in fact change, your decision based on your observation. If you didn't have this ability you wouldn't think it over, you would just act.

        I guess what you want to say is that you can't observe your sub-consciousness. But that's irrelevant; you only need to be able to observe the higher levels that make _conscious_ decisions. I could draw a parallel here to the AI, saying that the AI does not know the implementation and what goes on in the deeper levels of the program, the algorithms etc. It only knows the higher level of that, some subset of its inner workings. That's an artificial example and not quite valid, but if you're a programmer maybe you understand the point. Of course, as we program what it knows, we could potentially make it more self-aware of its inner workings than humans are of their own.

          Oh, but you can and do observe your own thinking. Every time you make a conscious decision you are able to observe, and in fact change, your decision based on your observation. If you didn't have this ability you wouldn't think it over, you would just act.

          How often do you think about your own thoughts? My guess is something like less than 5% of the time; the rest of the time you just act. Being self-conscious to me is not observing one's own thoughts, but observing one's own actions.

          I know yours is the stance in GEB, and Edelman also said something similar, but that was long ago. I respect your view, although I don't agree with you.

            How often do you think about your own thoughts? My guess is something like less than 5% of the time; the rest of the time you just act. Being self-conscious to me is not observing one's own thoughts, but observing one's own actions.

            Ok... My opinion is greatly affected by my personal experience. And that was the day when I suddenly realized "hey, I'm thinking.. this is cool!". And you're probably quite right about that 5%; we don't think about it often. And my opinion is that most of the time we are not very self-conscious. We're, just as you said, acting. But it still gives me shivers to this day when I start thinking that I can think and that I exist.

            But opinions vary and that's great.

  • Good Overview (Score:2, Informative)

    I am a PhD student at the MIT AI Lab, and I wanted to thank you for a good stab at a high level overview of a diverse and dynamic field.

    I do reiterate a previous poster's comments that the best way to divide the field is into "core AI" -- giving computers human capabilities -- and "applications" -- using core AI technologies to improve quality of life.

    Also, you omitted speech recognition and reinforcement learning, which are two important subareas worth mentioning. Readers interested in those areas can go to

    http://sls.lcs.mit.edu/ (mit spoken lang sys)

    and

    http://www.cse.msu.edu/rlr/ (RL repository)

    m.

  • I don't mean to be a downer, but there were several factual mistakes in this article. DeAnne Dewitt's article skims over the information in the popular media. For instance, the robotics school she looks at is MIT, while most of the research in this area is taking place at Carnegie Mellon. She has several arbitrary categories, while a natural division in current AI would probably be "symbolic" vs "connectionist".

    The classical approach to AI was the symbolic, which grows out of Turing's work. Allen Newell and Herbert Simon of Carnegie Mellon University (not even mentioned) were the foremost promoters of this approach, which they called physical symbol systems. Other early AI pioneers include Marvin Minsky, who should probably be mentioned in any article on AI (but was not in this one).

    The author barely mentions the neural network, or connectionist, approach. This approach did not start with the PDP group, as she suggests, but with Frank Rosenblatt of Cornell and his Perceptrons. The most exciting research in this area deals with recurrent neural networks, which exhibit chaotic behavior. This is where I personally think that real intelligence could come from, because it is a more natural model of our brain's operations. The foremost researcher in this domain is Hava Siegelmann of the Technion Institute in Haifa, Israel. She promotes the idea of analog systems, which she has proven to have more theoretical power than the Turing machine model.

    If you want an introduction to AI, skip this article. A good place to start might be a scientific journal, or the comp.ai faq. Her resources are not very good either, so don't bother.

  • Oh, NO! (Score:5, Insightful)

    by perdida ( 251676 ) <.moc.oohay. .ta. .tcejorptaerhteht.> on Tuesday August 14, 2001 @09:38AM (#2139036) Homepage Journal
    "If you look at an encyclopedia, you'll see a great deal of knowledge of the world represented in the form of articles. Common sense is exactly not this knowledge. Common sense is the complement of this knowledge. It is the white space behind the words. It is all of the knowledge that the article writer assumed all of his/her readers would already have prior to reading the article -- knowledge that could be put to use in order to understand the article. Cyc is about representing and automating the white space." (I love that answer.)

    Common sense is about representing and automating the white space?

    I think these AI researchers need to talk to a few more sociologists. Human common sense is extremely culturally divergent and goes far beyond the simple, textbook logic cases that certain engineers in this field would probably cite. "Reading between the lines" involves not some native common sense that is wedded to intelligence, but a collectively evolved cultural contextualization. When we read an article in an encyclopedia, a lot of other stuff other than intelligence comes into play: x years of public school education, idiomatic constructs, varying by geographic location, that may or may not enhance or obscure meaning, and, of course, the double meanings and entendres inserted by bored or biased encyclopedia writers.

    The entire postmodern project of literary criticism has been aimed at proving this point -- at proving that there is no such thing as a standardized set of meanings, and that every meaning is contextualized. The Modernists wanted to rationalize and bureaucratize speech, to restrict the number of meanings, and to leave what is unsaid in a narrow, predictable whitespace of a unified "common sense."

    Of course, there is a language like this, developed in the first half of this century. It takes away as many English words as possible to restrict the meanings that we are able to THINK, let alone say. Of course, this language is called Newspeak.

    • "When we read an article in an encyclopedia, a lot of other stuff other than intelligence comes into play: x years of public school education, idiomatic constructs, varying by geographic location, that may or may not enhance or obscure meaning, and, of course, the double meanings and entendres inserted by bored or biased encyclopedia writers."

      On the contrary, I think these examples you provided ARE intelligence. It's quite obvious that we can instill the kind of intelligence you speak of into software (raw logic/reasoning), but I would call this merely logic/reasoning. The challenge comes ONLY on the front of the semantics of culture and society. But don't give us humans too much credit. Even moral reasoning can be reduced to a few simple algorithms. I'd argue that there's nothing THAT special about our brains. When you get to the lowest levels of thought it's just basic reasoning skills and a large, interlinked repository of information. In fact the only advantage we have over a machine is that we're wired into the most versatile data collection instrument we know of: our body.

      The question is whether that's something that can be replaced by a team of deep-thinking programmers.

    • Except that common sense is even lower level than what you describe. If common sense was just public school education and idiomatic concepts, that would be pretty easy to integrate into Cyc as well.

      But it's not, of course. Common sense also comes from all the procedural and perceptive abilities that humans have. Like being able to look at a scene and identify the objects. Or recognize items by their feel. The ability to learn new strategies for learning.

      The point isn't that Cyc is bad, it's just nothing like a human. And it shouldn't be. Since they're not looking for a machine that can play baseball, they shouldn't necessarily aim for humans. If that makes any sense...
    • Re:Oh, NO! (Score:4, Interesting)

      by JWhitlock ( 201845 ) <John-Whitlock&ieee,org> on Tuesday August 14, 2001 @10:13AM (#2129031)
      I think these AI researchers need to talk to a few more sociologists. Human common sense is extremely culturally divergent and goes far beyond the simple, textbook logic cases that certain engineers in this field would probably cite. "Reading between the lines" involves not some native common sense that is wedded to intelligence, but a collectively evolved cultural contextualization. When we read an article in an encyclopedia, a lot of other stuff other than intelligence comes into play: x years of public school education, idiomatic constructs, varying by geographic location, that may or may not enhance or obscure meaning, and, of course, the double meanings and entendres inserted by bored or biased encyclopedia writers.

      The scientist's explanation took one paragraph, and even sounded like it had a goal - allow a machine to use an encyclopedia to gain new information in a useful manner. This is an important step to an A.I. that can interact with people - you can then train it on reference materials, and have it "understand" them at a certain level.

      This scientist is NOT mistaken - he would have to know that "common sense" does not equal "the human brain's innate ability to make sense of the culture it grows in". If I had to draw a distinction, "common sense", as you are describing it, is static, tuned to one culture, while the "common ability" is semi-dynamic, able to learn, but (maybe) unable to unlearn.

      You could try to fake common sense, by programming your own cultural assumptions into the program, subjecting it to cultural stimulus, and fine-tuning the program. Or you could attempt to program "common ability", train it on cultural materials for a few years, and try to tune the program to build its own "common sense" in a way that is more like a human. I think these scientists are trying to do the latter.

      I'm not sure what your tangent about post-modernism and 1984 has to do with A.I. - are you just ranting about scientists who didn't get the memo that we are in post-modern times?

      An interesting question is whether human intelligence can be removed from the human - does it take eyes to understand the phrase "I've got the blues"? Does it take a parent to understand why many grade school teachers are women and most world leaders men? Does it take walking upright, starting at a tiny height and getting bigger, to understand skyscrapers? Or does that just take a penis?

      Now, I'm using the "white-space" sense of understand - to be sympathetic to the person who has the blues, to feel an unexplained shame when the president is caught sleeping with a woman not his wife, to feel an exhilaration driving into a new city. Can these be simulated in a computer without a body and a human's lifetime? Can these things be removed, still leaving a "human" intelligence? If we interacted with this intelligence, would we say it passed the Turing test? Would we want to interact with it?

      Perhaps that's one level of A.I. above where this guy is aiming. It would be extremely useful just to have an intelligence with a little of the human ability. You could train it on, for instance, medical journals. A doctor could then describe symptoms, research, or an interest, and get summaries or references to the library. Once you trained it in the basics, you could burn it to a CD and send it to a doctor, who could then train it for his specific interests. Think of it as a very limited secretary, who requires some training and acclimation, but is still smarter than a PC.

      This is probably the best A.I. can do for a few years - get to the point where you can train an A.I. for a particular subject, then meaningfully interact with those interested in the subject - like a very bad librarian. It's only when the clones come out in force that you can hook a computer up to a fetus, and do some real human A.I. training.

    • Someone around here has a sig that goes, Common sense is what tells us the world is flat.
      I've always loved that. Especially when my wife tells me I've got book smarts but no common sense. I thank her.
    • "Reading between the lines" involves not some native common sense that is wedded to intelligence, but a collectively evolved cultural contextualization. When we read an article in an encyclopedia, a lot of other stuff other than intelligence comes into play: x years of public school education, idiomatic constructs, varying by geographic location, that may or may not enhance or obscure meaning, and, of course, the double meanings and entendres inserted by bored or biased encyclopedia writers.

      That is exactly what Cyc [opencyc.org] is doing. They're defining a contextual database of terms and concepts, trying to form subjective links, including idiomatic constructs, double meanings and entendres. Don't knock it before you've read up on it.

    • The entire postmodern project of literary criticism has been aimed at proving this point- at proving that there is no such thing as a standardized set of meanings, and that every meaning is contextualized
      (Rolling my eyes here.) Academics "prove" all sorts of things. They also argue well about the number of angels that can dance on a pinhead. Such things are good for getting tenure, but they don't necessarily relate to the real world at all.

      You don't understand what the meaning of "the white space" was at all.

      It is not culture-dependent stuff, or at least not primarily. In fact much of it is your "contextualized" meaning -- though not in the rather trivial sense that postmodernists think is deep.

      It is stuff like: "human beings usually sweat when they are hot". It is stuff like, "if an object moves, its sub-parts usually move with it". It is stuff like: "air is usually transparent". Please tell me a culture where any of these assertions are not true.

      If you cannot, perhaps you will then admit that there is some knowledge which is in fact universal, at least to humans.

      What is important about Cyc, is exactly that this sort of knowledge is so universal that it is hard to even realize we all have it. But we do, and you find that out mighty fast when you try to get a machine to do any kind of real-world reasoning.

  • by +a++00 y0u ( 324067 ) on Tuesday August 14, 2001 @09:26AM (#2141930) Homepage
    because the way they think just isn't natural

  • Good article (Score:4, Interesting)

    by coreman ( 8656 ) on Tuesday August 14, 2001 @09:39AM (#2142972) Homepage
    I used to work in AI Alley in Cambridge during the 80s. The industry got hyped to death by claims and demands on performance that wasn't possible. Lots of interesting things were done and lost during that timeframe. The thing is, we hadn't even reached artificial stupidity at that point. I'm not sure we have yet, but we're closer in some domains. Until people unlink visions of Cherry 2000 and C3PO from the AI moniker, it's going to be tough getting people to set realistic goals. Just like the Lisp environment back then, I see the research and technology backdooring into products every day. More and more of the stuff that we did back then is making it out as "new technology" under Microsoft, when we had it 20 years ago but hyped it into obscurity. The industry seems to be coming out of its own dark ages lately and I hope the media doesn't get back on the bandwagon to beat it back down. I thought the article was very good and you did a LOT of good research and didn't just edit together all the buzzwords you found. The response has been good in the community I still keep in touch with, and you'd better watch out or you'll be changing people's minds about talking to journalists. Many of us wish you had been given more access to Cyc and their underlying knowledge engineering techniques, to do that topic/aspect justice as well.

    Good Job!
  • by popeyethesailor ( 325796 ) on Tuesday August 14, 2001 @09:53AM (#2143074)
    Me : Can u imagine a Beowulf cluster of yourself?
    Cyc : -1 Troll.
  • Let's say they succeed and make a completely, 100% self-aware being. Never mind if it's a computer or C3PO or a toaster. How do you suppose these new beings are going to react to... well, for lack of a better term, freedom robbing? This is a very distressing issue indeed.

    I mean... consider it rationally. We, as humans, will probably feel somewhat superior to these "artificial" intelligences. The word "artificial" itself implies fake, and nobody likes the fake better than the real thing (i.e. humans). Basically, what I'm wondering is how these artificial intelligences will react to racism and oppression against them. Will they fight back? Will they have a somewhat less extreme notion of moral defence? It's all very important to know, both because this could spark potential wars if we ever do achieve "artificial intelligence", and because it would mean a lot for general human rivalry and emotion.

    Analyzing how a "fake" creature reacts to abuse could teach us so much more about our own reactions to abuse.
    • A.. toaster?

      TOASTER: Okay, here's my question: Would you like some toast?

      HOLLY: No, thank you. Now ask me another.

      TOASTER: Do you know anything about the use of chaos theory in predicting weather cycles?

      HOLLY: I know everything there is to know about chaos theory in predicting weather cycles!

      TOASTER: Oh, very well. Here's my second question: Would you like a crumpet?

      -= rei =-
    • Depends on the programmers

      The human approach to recreating itself is very likely to always require many hardwired concepts; therefore, the creator of such a hypothetical system will have the role of genetic predecessor, except with the option of choosing whatever ontology he or she prefers. The entity will therefore be a reflection of the attitudes of the programmers.

    • by HiThere ( 15173 )
      Don't make the mistake of attributing human motives to a computer just because it has a certain characteristic. I know that this can be tempting...

      My wife talks of the car as "knowing the way to..." or "wanting to go to...". She doesn't actually believe this (I don't think she does), but she thinks I'm being silly to object to putting things this way. But when thinking about AI computers, this can be a good (and dangerous) model. "The car knows the way to the Japanese restaurant." ... well, she's quite familiar with the way to the Japanese restaurant, and has solid habit patterns that tend to lead her in that direction from some corners when she must move and doesn't have a direction firmly in mind. ... Was I talking about the AI or my wife (or my wife's car)? Well, the car doesn't have that kind of brain. But suppose you got in the car and said, "Take me to dinner." - that might be a reasonable description. But my wife would do it because she wanted to. The AI would do it because it was a minimum-effort solution to a common problem that had just been posed.

      The difference is that the AI doesn't have the motives for initiating action. Now some designs have "super-goals" that probably will never get fulfilled, but a) they didn't choose those goals, and b) someone else gave the goals to them. Of course a car might well have built in desires to keep the tires safely inflated, to avoid running out of gas, to keep the battery charged, etc. But these are quite different from anomie.

      Perhaps people would choose to build AI's capable of feeling lonely. But this would be a design decision, not inherent. Or perhaps they would feel something that would be translated into English as lonely, but for which super-goal frustration in the absence of actionable choices would be a better name. It might well not have the rise and fall pattern of human loneliness. Or it might. A hierarchy of needs might cause an AI to experience a similar rise and fall in level of frustration at incapacity for making progress in less important goals. As if a car with the lower importance injunctions to "keep your owner healthy" and "laughter is the best medicine" were owned by an asthma sufferer ... it would need to learn to avoid telling jokes. And it might find that quite ... disturbing? But to attribute any particular human emotion to the process would be .. misunderstanding. There would certainly be emotion analogs, but they wouldn't be close analogs to any particular human emotion. Or, if they were, it would be quite surprising. (They don't have the same "genetic" history determining their substrate ... it's much MUCH more reasonable to attribute similar emotions to cats.)

      • Your wife's "wanting to" could very easily be a shortest-path solution as well. I think what we do when saying AI can't be like us is to read more into what's happening in our own heads than really is there. We're so closely tied to these processes that it becomes impossible to examine them objectively.

        To go further, I posit that emotions are merely emergent behaviours of relatively simple systems that seem to manifest complex behaviour. Just because we can't see the true motivation behind an emotion or decision doesn't mean that the process was particularly complicated.

      • Perhaps people would choose to build AI's capable of feeling lonely. But this would be a design decision, not inherent.

        Ah, but think of the interesting implications if the AI is capable of reprogramming itself? I leave the results of that scenario up to your imagination :)

  • One thing I wonder about (and I am not the first, and certainly not the last, to wonder about it) is the idea of intelligence, possibly even conciousness and sentience - arising out of the interaction of many non (or low) intelligence "parts".

    Our brain, composed of billions (trillions?) of neurons - each of which "knows" nothing - yet together, the entire mass "thinks", to the extreme point of being able to question itself.

    Other examples - of a lower order though - include such entities as corporations (essentially any large bureaucracy - like a government or controlling body). Many corporations act as a single entity, though it is composed of multiple humans as smaller "parts". For some reason, corporations tend to drift toward "evilness" the larger the corp is. There seems to be a "break" point at which the corporation becomes an entity unto itself, and typically that entity does bad things - even to the ultimate detriment of the parts of which it is built - the people within the corporation. We say Microsoft is "evil" - but does anyone here honestly believe Bill Gates or anyone at Microsoft stays up late at night cackling to themselves about the takeover of the world? Or are they just wanting to make a better product, and thus gain more money - even when that product isn't better? Is the pursuit of money by the parts what makes the corporate entity become "evil"?

    These kind of entities might be called "hive minds" (one other poster made mention of "swarm intelligence") - the curious thing about these entities is the fact that it is hard to know what they are "thinking" - even the parts that make them up are unable to see this.

    I tend to wonder - since these "entities" seem to arise out of a lot of people or parts working together, sometimes in harmony, sometimes at odds - but that it takes a lot of parts for these entities to begin to "think" - is what we know of as the Internet really a hive mind, of such complexity and vastness, that is ever expanding - that we have little to no hope of understanding what it is "thinking"? Could any of the events we see taking place concerning laws, WIPO, DVDs, MPAA, RIAA, MP3s, 2600, etc - be coming about due to this "hive mind"?

    Comments...?
  • by howardjeremy ( 241291 ) on Tuesday August 14, 2001 @12:37PM (#2145309) Homepage
    With an academic background in moral philosophy and today working as a developer of data mining systems, I can empathise with the author's frustration at trying to understand how (if at all) morality and AI intersect.

    The main problem really is that the term 'AI' is applied to any algorithm for classification, prediction, or optimisation which operates using anything beyond a simple set of heuristics. Such algorithms seem magical to the lay-person, resulting in the over-enthusiastic application of the 'intelligence' moniker.

    To understand these so-called 'AI' tools it is useful to develop a little structure...

    Output
    AI tools are used for classification, prediction, or optimisation. Classification works by showing a computer a set of cases which have a number of properties (sex, age, smoker status, presence of cancer...), and 'training' the algorithm to understand the patterns of how properties tend to occur together. Prediction can then be used to show the algorithm new cases in which one or more of the properties are blank--the algorithm can use its classification training to guess the most likely values of the missing properties. For instance, given sex, age, and smoker status, guess the probability of presence of cancer. Optimisation is a generalisation of classification--rather than training to minimise classification error, train to maximise or minimise the value of any modelled outcome. For instance, whereas an insurer could use classification algorithms to find the likelihood of someone dying by age x, an optimisation approach could be trained to find the price at which the modelled profitability of an applicant is maximised.
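    As a hypothetical illustration of that classification-then-prediction workflow, here is a toy Python model that "trains" on cases with known outcomes and then guesses the missing property for a new case; the data and the naive frequency model are invented.

    from collections import defaultdict

    # (sex, age_band, smoker) -> has_cancer, for previously observed cases
    training_cases = [
        (("M", "60+", True),  True),
        (("M", "60+", True),  True),
        (("F", "60+", False), False),
        (("M", "<40", False), False),
        (("F", "<40", True),  False),
    ]

    def train(cases):
        counts = defaultdict(lambda: [0, 0])          # key -> [positives, total]
        for properties, outcome in cases:
            counts[properties][0] += int(outcome)
            counts[properties][1] += 1
        return counts

    def predict(model, properties):
        """Probability the missing property (cancer) is present for a new case."""
        pos, total = model.get(properties, [0, 0])
        return pos / total if total else 0.5          # fall back to "don't know"

    model = train(training_cases)
    print(predict(model, ("M", "60+", True)))   # 1.0 on this toy data
    print(predict(model, ("F", "60+", True)))   # 0.5 -- never seen, so no opinion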

    Functional form
    AI tools create a mathematical function from their training. For instance, for a classification algorithm this function returns the probability of a particular category for a particular case. The form of this function is an important factor in classifying AI tools. The most popular forms are 'neural networks' and 'decision trees'. Neural networks are interesting because certain types (networks with 2 hidden layers) can approximate any given multi-dimensional surface. Decision trees are interesting because given a large enough tree any surface can be approximated, and in addition a tree can be easily understood by a human, which is very useful in many applications. Other functional forms include linear (as used in linear regression, which many will remember from school) and rule-based (as used in expert systems, and similar to a decision tree). One interesting functional form is the network of networks, which combines multiple neural networks, feeding the output of one into the input of others. This form allows the training of network modules that learn to recognise specific features, which is closer to how our brains work than the single network approach.
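    A hypothetical sketch of one such functional form: a tiny feed-forward network with two hidden layers, written as a plain Python function from inputs to outputs. The weights are random placeholders; the training functions described next are what would actually set them.

    import math, random

    random.seed(0)

    def make_layer(n_in, n_out):
        return [[random.uniform(-1, 1) for _ in range(n_in + 1)]  # +1 for bias
                for _ in range(n_out)]

    def forward(layers, x):
        for layer in layers:
            x = [math.tanh(sum(w * v for w, v in zip(neuron[:-1], x)) + neuron[-1])
                 for neuron in layer]
        return x

    # two hidden layers of 4 units each, mapping 3 inputs to 1 output
    net = [make_layer(3, 4), make_layer(4, 4), make_layer(4, 1)]
    print(forward(net, [0.2, -0.5, 1.0]))   # a value in (-1, 1); meaningless until trained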

    The most flexible functional form is that used by practitioners of genetic programming (which also defines a specific training function). Genetic programming creates a function which is any arbitrary piece of computer code. The code is often Lisp, although lower level outputs such as assembly language and even FPGA configurations have been used successfully.

    Training function
    The training algorithm looks at the past cases and tries to find the parameters of the functional form that meet the classification or optimisation objective. This is where the real smarts come in. One naive approach is to try lots of randomly chosen parameters and pick the best. Genetic algorithms are a variant of this approach that pick a bunch of random sets of parameters, find the best sets and combine features from them, introduce a bit of additional randomness, and repeat until a good answer is found. Local/global search works by picking one set of parameters and varying each property a tiny bit to see whether the result is improved or gets worse. By doing this it locates a 'good direction' which it uses to find a new candidate set of parameters, and repeats the process from there. Hybrid algorithms are currently popular since they combine the flexibility of genetic algorithms with the speed of local search. Most neural networks today are trained with local search, although more recent research has examined more robust approaches such as genetic algorithms, Bayesian learning, and various hybrids.
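    A hypothetical sketch of the "pick random candidate parameter sets, keep the best, recombine with a bit of randomness, repeat" loop behind a genetic algorithm. The fitness function here is a made-up one-dimensional stand-in for scoring a functional form against the training cases.

    import random

    random.seed(1)

    def fitness(x):                 # pretend the error is lowest near x = 3.7
        return -(x - 3.7) ** 2

    def genetic_search(pop_size=30, generations=50):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 3]              # keep the best sets
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                children.append((a + b) / 2 + random.gauss(0, 0.1))  # combine + mutate
            population = parents + children
        return max(population, key=fitness)

    print(round(genetic_search(), 2))   # should land near 3.7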

    Learning type
    Supervised learning approaches take a set of cases for training and are told "here is the property we will be trying to predict/optimise, and here is its value in previously observed cases". The algorithm then finds a set of parameters for the functional form using the context that the analyst provides. Unsupervised learning, on the other hand, does not specify prediction of any particular property as the training goal. Instead the algorithm looks for 'interesting' patterns, where 'interesting' is defined by the researcher. For instance, cluster analysis is an unsupervised learning approach that groups cases that are similar across all properties, normally using simple measurements of Euclidean distance (that's just a fancy word for how far away something is when you've got more than one dimension).
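    A hypothetical sketch of that unsupervised clustering: k-means groups cases by Euclidean distance without being told what to predict. The two-dimensional points are invented for illustration.

    import math, random

    random.seed(2)

    def kmeans(points, k, iterations=20):
        centres = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                # assign each case to its nearest centre (Euclidean distance)
                i = min(range(k), key=lambda j: math.dist(p, centres[j]))
                clusters[i].append(p)
            centres = [tuple(sum(c) / len(cluster) for c in zip(*cluster))
                       if cluster else centres[i]
                       for i, cluster in enumerate(clusters)]
        return centres

    data = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.3, 7.9), (7.8, 8.2)]
    print(kmeans(data, k=2))   # roughly one centre near (1, 1) and one near (8, 8)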

    Contextual learning is a far more interactive approach where the analyst interacts with an algorithm during training, constantly providing information about which patterns are interesting and where the algorithm should investigate next. Systems like Cyc use contextual learning to try to capture the rich understanding of context that humans can feed in.

    AI and moral philosophy
    We are still a long way from seeing an algorithm that can interact in a flexible enough way that we could mistake it for human in a completely general setting (the Turing Test for intelligence). However, given the flexibility of training functions such as genetic algorithms, we may find that one day an algorithm is given enough inputs, processing power, and flexibility of functional form that it passes this test. The 'morals' that it shows will depend entirely on the inputs provided during training. This is unlike humans, who share a generally consistent set of moral rules encoded through evolutionary outcomes (for instance, the tendency to care for the young and for relatives). Our moral premises are the underlying 'givens' that form the foundation of what we consider 'right' and 'wrong'. Ensuring that an AI algorithm does not act in ways we consider inappropriate relies on our ability to include these moral premises in the input that we train it with. This is why Lenat talks about teaching Cyc that killing is worse than lying--this is potentially a moral premise. Finding the underlying shared moral premises of a society is a complex task, since for any given premise you can ask 'why?' By repeatedly asking 'why?' you eventually get to a point where the answer is 'just because'--this is the point at which you have found a basic premise.

    Summary
    'AI' is a term applied, somewhat inappropriately, to a range of algorithms that attempt to learn without having to specify an exact set of rules for every case. Although these algorithms are currently incapable of displaying real intelligence, it is possible that one day they may. This point is, however, debatable, and the interested reader should explore for themselves the differing points of view of experts in the field, including Daniel Dennett, Roger Penrose, Steven Pinker, Richard Dawkins, and Douglas Hofstadter. If these algorithms ever do get to the point that they can act intelligently and flexibly, it will be important that they are trained with appropriate moral premises to ensure that their actions are appropriate in our society.

    I hope that some of you find this useful. Feel free to email if you're interested in knowing more. I currently work in applying these types of techniques to helping insurers set prices and financial institutions make credit and marketing decisions.
    Jeremy Howard
    The Optimal Decisions Group
  • by Black Parrot ( 19622 ) on Tuesday August 14, 2001 @11:44AM (#2146380)

    I would recommend re-thinking your division of AI into subfields. You are indiscriminately mixing technologies and application areas.

    For example, neural networks are a technology and NLP is an application area. I know people working in NLP that use Lisp, and I know others that use neural networks. In AI, technologies and application areas are (mostly) orthogonal.

    Granted, there probably isn't a perfect breakdown of AI into subfields, but making the distinction above will help you and your readers get a grip on what AI is all about faster.

    • Yeah, finding the correct way to break down this vast field of info was tough. I considered symbolic vs non-symbolic, but got bogged down in cross-referencing. A true application vs technology division can't really be made (in a short article), just because of the sheer amount of Venn-ing between the fields.

      For example, good data mining requires a good NL interface...but without defining each of those fields individually, the information becomes a morass of details.

      The reason for the divisions that I chose, overlapping though they may be, is twofold. The first is that these are the areas that were delineated by the first six people with PhDs I talked to. In other words, this is how the experts suggested that I approach my research. The second reason is that it seemed like the best way to illustrate the concepts and the various fields of study to an audience that may not be familiar with them at all, barring exposure to popular media.

      Given more space, I would have liked to explore both the theories and the applications a little more. However, there's only so much you can do in 3000 words. :)

  • by Zoinks ( 20480 )
    Did some machine vision stuff for my Ph.D., so I feel an urge to comment...

    • So much of what we (I mean the media) call AI is really algorithms or heuristics for solving a problem. We can call it "AI," but once it's codified in software, does it sit up and say "hello?" No, you feed it some inputs and it produces an output. An image or signal is classified, a matching dataset is extracted, etc, etc.
    • Neural networks! It's hard enough already finding a global optimum in a well-chosen multivariate cost function; now we have to go modeling the same cost function with an arbitrary ANN model with way too many degrees of freedom. I once saw a paper written to show how they trained an ANN to compute an FFT. Big whoop. We already know how to compute an FFT; there's an algorithm for it. The AI article was balanced on the subject of ANNs. They're useful for solving some ill-posed problems. Still, if there might be a better way to solve the problem, I would spend a lot of time trying to find it.
    • What's an AI? To me, it would have to pass the Turing test for starters. After that we're in fantasyland. Self-directed, self-preserving, creative... hmmmm...
  • Genetic Programming (Score:3, Interesting)

    by graveyhead ( 210996 ) <fletch@@@fletchtronics...net> on Tuesday August 14, 2001 @11:35AM (#2156947)

    I was surprised that there was no information at all to be found regarding genetic programming [genetic-programming.com]. This method builds a large population of random computer programs and then refines them through genetic mutation to accommodate a specific task. Darwinian selection ensures that only the fittest programs survive, and less useful ones die off quickly.
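
    Very roughly, the idea looks something like this (a toy Python sketch rather than the usual Lisp, with mutation only and an arbitrary symbolic-regression task; real GP systems also use crossover and much richer function sets):

        import operator
        import random

        # A "program" is a nested expression over the input x, e.g. ('+', 'x', ('*', 'x', 'x')).
        OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

        def random_expr(depth=3):
            if depth == 0 or random.random() < 0.3:
                return random.choice(['x', random.uniform(-2, 2)])
            return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

        def evaluate(expr, x):
            if isinstance(expr, tuple):
                op, a, b = expr
                return OPS[op](evaluate(a, x), evaluate(b, x))
            return x if expr == 'x' else expr

        def mutate(expr):
            # Replace a random subtree with a freshly generated one.
            if not isinstance(expr, tuple) or random.random() < 0.3:
                return random_expr(2)
            op, a, b = expr
            return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

        def evolve(target, generations=50, pop_size=100):
            # Darwinian selection: the fittest fifth survives and breeds the next generation.
            def fitness(expr):
                errs = [evaluate(expr, x) - target(x) for x in range(-5, 6)]
                return sum(e * e for e in errs)
            population = [random_expr() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness)
                survivors = population[:pop_size // 5]
                population = survivors + [mutate(random.choice(survivors))
                                          for _ in range(pop_size - len(survivors))]
            return min(population, key=fitness)

        # e.g. evolve(lambda x: x * x + 1) searches for an expression approximating x*x + 1.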

    I have been doing some work involving genetic programming lately, and have found it to be an amazing tool for finding creative solutions to complex problems. The problem domain I have been training my genetic program to solve is purely mathematical, but it seems to me that the technique could easily be adapted to find solutions to some of the tougher problems in AI, including but not limited to data mining, natural language processing, and parallelization.

    I read somewhere (can't find the reference right now, sorry) that some work was being done whereby genetic programs were being evolved that could themselves create neural networks. Each genetic program could be considered a template for creating a neural network. This seems to me like the most likely means of creating software that could eventually pass a Turing test. I won't get into the self-consciousness debate here.

    • > I read somewhere (can't find the reference right now, sorry) that some work was being done whereby the genetic programs were being evolved that could themselves create neural networks.

      Maybe not exactly what you're talking about, but there's a well-established field called neuroevolution.

      For the neuroevolution systems that I've seen, you don't actually use genetic programming; you use a genetic algorithm, which is a hand-coded program that does evolution on some representation of a neural network (e.g., a string of numbers representing the weights).

      These programs iteratively gen up a population of neural networks, evaluate them on the problem you're trying to solve, score them on how well they solve the problem, and then repeat for another generation, using the "DNA" from the better scorers to generate the new population.
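
      In sketch form it's something like this (Python, with a fixed 2-2-1 network and a made-up breeding scheme; real neuroevolution systems are rather more sophisticated about selection and crossover):

          import math
          import random

          def make_network(weights):
              # Decode a flat "DNA" string of 9 numbers into a 2-input, 2-hidden, 1-output net.
              sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
              def network(x1, x2):
                  h1 = sigmoid(weights[0] * x1 + weights[1] * x2 + weights[2])
                  h2 = sigmoid(weights[3] * x1 + weights[4] * x2 + weights[5])
                  return sigmoid(weights[6] * h1 + weights[7] * h2 + weights[8])
              return network

          def neuroevolve(score, generations=100, pop_size=50):
              # Score each decoded network on the problem, then breed the next generation
              # from the "DNA" of the better scorers, with a little mutation.
              population = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
              for _ in range(generations):
                  population.sort(key=lambda w: score(make_network(w)), reverse=True)
                  parents = population[:pop_size // 5]
                  children = []
                  while len(children) < pop_size - len(parents):
                      a, b = random.sample(parents, 2)
                      children.append([random.choice(pair) + random.gauss(0, 0.1)
                                       for pair in zip(a, b)])
                  population = parents + children
              return make_network(max(population, key=lambda w: score(make_network(w))))

          # Example task: reward networks that behave like XOR on 0/1 inputs.
          xor_score = lambda net: -sum((net(a, b) - (a ^ b)) ** 2 for a in (0, 1) for b in (0, 1))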

    • I was suprised that there was no information at all to be found regarding genetic programming
      Genetic programming is considered part of the artificial life field. Although closely related mathematically, it is not artificial intelligence. I know of a couple of papers describing models of cognition using genetic programming. They speculate that the brain sometimes generates competing models, but test data suggests just a couple of models. I find this hard to compare with genetic programming, which needs many thousands of models to deliver its strong point: a loose description of the solution.
      This seems to me like the most likely means of creating a software that could eventually pass a Turing test.
      Genetics already created neural networks that could pass the Turing test. This is true. To artificially reproduce this result would take such a large amount of computational power and such an efficient representation of the neural network that we'll probably end up using a biological implementation. That would work so similarly to 'the real thing' that it might be beside the point. Also, it took nature a few billion years to evolve the brain, and speeding up the breeding process would probably make it more of a biological problem than an AI solution. The work that is being done (sorry, no link either :-/) creating neural networks with genetic programming usually targets sub-insect intelligence. This is, in my perception, the general direction in AI nowadays: reveal the secrets of the brain by trying to emulate the smallest structures first. This is also more interesting to the AI industry, because who needs to talk to a computer about poetry (Turing) anyway? An intelligent cruise control/collision avoidance system in your car is much more useful.
      • I am starting to agree that the processing times/memory requirements just might be too large to consider this as an option for creating a human-level intelligence. My own implementation is slow and memory hungry, but I assumed that was because a) each "chromosome" program is interpreted, not compiled, and b) it uses gmp in all primitive terminals and nonterminals because of the very large numbers in my problem domain. I assumed that speed could be *vastly* increased by generating composites in assembler and using native number types (int, long, float, double, etc). Running the fitness test is where my code spends 99.999% of its time, so my first knee-jerk reaction is to make that as efficient as possible.

        I like the idea of focusing a generated neural network on a specific subset of human intelligence. Perhaps a close-to-human intelligence could be built by creating many subset neural networks concurrently (perhaps one for interpreting visual input, one for natural language processing, one for avoiding car collisions, etc.) and then gluing them together with yet another neural network designed solely for the task of delegation. Perhaps this loose collection of neural nets could even, as another slashdotter suggested, recursively feed its own thoughts back into itself in order to improve its conclusions.

        I love speculating about this stuff. I spend many hours thinking about alternatives to procedural programming.

    • I read somewhere (can't find the reference right now, sorry) that some work was being done whereby the genetic programs were being evolved that could themselves create neural networks. Each genetic program could be considered a template for creating a neural network.


      The keyword you want to pop into google here is "cellular encoding", the seminal work was done by Frederic Gruau back in the mid-90s (I first saw his presentation at GP-96.) Unlike the GA variant suggested by an earlier reply this method does not just change the weightings of the nodes, it re-arranges the architecture during cross-over and mutation. The basic idea is that you start by viewing a simple ANN as a graph and then perform operations on the edges. Astro Teller presented a paper at GP-98 that performed similar operations upon the nodes, but iirc the end-result was not an ANN (I can't remember what he called his variant.) I always wanted to try to perform node-encoding upon FSMs to try my own variant on cellular encoding but never got around to doing it...
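
      For a flavour of the difference, a mutation that rewires the architecture (rather than just nudging weights) can be as simple as the toy Python below; this is closer in spirit to a NEAT-style add-node mutation than to Gruau's actual graph-rewriting grammar, and is only meant to illustrate the idea:

          import random

          def mutate_architecture(edges, next_node):
              # edges: dict mapping (src, dst) -> weight, describing a feed-forward net
              # as a graph (assumed non-empty). Sometimes split an existing edge by
              # inserting a new hidden node; otherwise just perturb a weight.
              edges = dict(edges)
              if random.random() < 0.3:
                  (src, dst), w = random.choice(list(edges.items()))
                  del edges[(src, dst)]
                  edges[(src, next_node)] = 1.0
                  edges[(next_node, dst)] = w
                  next_node += 1
              else:
                  key = random.choice(list(edges))
                  edges[key] += random.gauss(0, 0.2)
              return edges, next_node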

  • by peter303 ( 12292 ) on Tuesday August 14, 2001 @11:34AM (#2156950)
    The A.I. effort is often classified into two camps. One is to approach human intelligence. This usually implies conversational ability, since a hallmark of human intelligence is language. This A.I. approach is called "hard A.I.".

    Soft A.I. looks at sub-problems, such as problem solving, image understanding and so on. Many software innovations originated in A.I. labs (e.g. interactive editors, bitmap graphics). (During the early 80s these spinoffs were sometimes confused with A.I.)

    A problem with both kinds of A.I. is that it's a receding target. Once an important goal has been reached, e.g. a chess computer that beats grand masters, people write it off as a nice trick, but not really A.I.

    So I propose what I call "interesting A.I.". Two hallmarks of human intelligence are language and curiosity. So if an A.I. could TELL us something new and interesting on a regular basis, then I would call it a success.

    I suspect A.I.s will first arise in entertainment computing: either as a robo-toy, a synthetic game player, or a synthetic actor in a film. This will be a result of people's drive for challenging creative play.
    • I suspect A.I.s will first arise in entertainment computing: either as a robo-toy,

      How about SONY's AIBO?

      It's an interesting twist on the Turing Test definition of AI. Instead of giving Marvin Minsky tens of millions of dollars to design machines that can somehow be quantitatively measured to be 'intelligent,' SONY produced a $1500 robot dog that is designed to make you think that it is intelligent. And many people do think so (check out the AIBO fan web sites...).

      The AI is kinda created client side (i.e. in your brain) rather than in the machine itself.

      I saw one in Japan recently, in an electronics storefront. On the left, there was a widescreen tv with a SONY promo video playing. On the right there was a perspex cube about 24" on each side, with an AIBO bumping around inside of it. After watching this for about 5 minutes we began to feel quite sorry for the AIBO.

    • I think you're missing the most important case.

      AI will be a big money maker when someone puts animatronics and a brain inside a realdoll.

      Then we'll see "hard AI".

      Once the first Turing test is passed (convincing a human they're conversing with a human for 5 minutes), the second test will bring itself to bear:

      "Convincing a man that he is putting up with the bullshit from a real woman"

      Early experiments will, of course, just start by wiring /dev/random to the inputs of most neural functions... will this be enough?

  • by mcarbone ( 78119 )
    This is not so much an overview of the current state of AI as it is a general description of the different fields of AI. However, it's a pretty good description and could be a useful introduction to someone who knows nil about the field and is wary about buying their first expensive textbook.

    However, I would hope that most of the Slashdot crowd already knows that the field of AI, while successful, isn't really about consciousness right now (though to many it is a distant goal).

    The best way to really get an Artificial Intelligence overview is to first know the basics, and then flip through all of the major AI journals in the past five years.
  • Bull gave a keynote speech at the AI meeting in Seattle last week here [nwsource.com]. MicroSoft has a big interest too.
    • Yeah, that industry is where he's been rediscovering things we had in the 80s just to announce them as "new". I'm amazed that IJCAI didn't boo him off the stage, but then again, it's a new, Microsoft-influenced generation, far removed from the AI heyday of the 80s.
      • Yeah, that industry is where he's been rediscovering things we had in the 80s just to announce them as "new". I'm amazed that IJCAI didn't boo him off the stage, but then again, it's a new, Microsoft-influenced generation, far removed from the AI heyday of the 80s.

        One big step since the '80s is the advance in Bayesian networks, and MCMC methods for training them.

        That, plus the increase in computing power, makes it much more possible to deal realistically with uncertainty and small training sets; it's also now possible (and worthwhile) to embed the systems in end-user applications.
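
        The MCMC part is conceptually simple, even if the Bayesian-network machinery around it isn't; a bare-bones random-walk Metropolis sampler in Python looks like this (the example posterior, a coin's bias after 7 heads in 10 flips under a uniform prior, is just for illustration):

            import math
            import random

            def metropolis(log_posterior, start, n_samples=5000, step=0.5):
                # Propose a nearby value and accept it with probability given by the
                # ratio of posterior densities; the accepted values approximate the posterior.
                samples, current = [], start
                current_lp = log_posterior(current)
                for _ in range(n_samples):
                    proposal = current + random.gauss(0, step)
                    proposal_lp = log_posterior(proposal)
                    if random.random() < math.exp(min(0.0, proposal_lp - current_lp)):
                        current, current_lp = proposal, proposal_lp
                    samples.append(current)
                return samples

            def coin_log_posterior(p):
                # Uniform prior on p; log-likelihood of 7 heads and 3 tails.
                if not 0.0 < p < 1.0:
                    return float('-inf')
                return 7 * math.log(p) + 3 * math.log(1 - p)

            # e.g. sum(metropolis(coin_log_posterior, 0.5)) / 5000 is roughly 0.67, the posterior mean.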
