Science

Silicon Chip Survival of the Fittest

0b1 writes "A scientist has created a microprocessor that can distinguish between a few words, just by letting it 'mutate': mixing the different designs that worked while eliminating those that didn't. Read the full article if you like." People are doing a lot of this stuff right now; anyone else wonder where it will end up?
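The mutate-and-select loop described there is a plain genetic algorithm. A minimal Python sketch of the idea (the bitstring encoding and the toy fitness function are invented for illustration; Thompson's real fitness test scored how well a physical FPGA discriminated two tones):

    import random

    GENOME_BITS = 64     # stand-in for an FPGA configuration bitstream
    POP_SIZE = 50
    MUTATION_RATE = 0.02

    def fitness(genome):
        # Toy fitness: count of set bits. The real experiment scored how
        # well the configured chip told two input tones apart.
        return sum(genome)

    def mutate(genome):
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        population.sort(key=fitness, reverse=True)   # rank the designs
        survivors = population[:POP_SIZE // 2]       # keep what worked...
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children            # ...eliminate the rest

    print(max(fitness(g) for g in population))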
  • by Anonymous Coward
    My friend, I wouldn't be surprised if your comment was in fact generated by some kind of AI device. You see, I am highly skeptical that the alleged writer of your comment would indeed pass the infamous Turing Test.

    And it is my duty to mention the possibility that your comment was the result of the random keypresses of a chimp placed long ago in front of your computer.

    I know that Windows has a grudge against me. If it didn't, why would it keep crashing? Maybe it's jealous of my linux box.

  • by Anonymous Coward
    ...will these chips work in Kansas?
  • This is even more exciting than the original article, and deserves an article of its own.

Yields as low as 10%?!? No wonder CPU prices are so high, even after all these years of designing and manufacturing them.

    <conspiracymode>
    I doubt Intel would use this, though. It might cut into their profits if they could suddenly make a lot more processors with the same fabs, and make them more reliable at higher clock speeds :)
    </conspiracymode>
  • Perhaps you would have gotten some flames if people had:
    A) believed you knew what the hell you were talking about,
    B) been able to access your sources online, or
    C) known what the hell you were talking about, had you actually explained what those sources said.
  • "no one knows how the human brain works but we all use that"

    That's debatable for some of us. :-)

    As to the documentation aspect, it's not so much knowing HOW it works, it's KNOWING it works. Stuff that no one understands but everyone uses has at least been "shown to work" by extensive testing - and at least someone somewhere had SOME idea how it was originally supposed to work.

    How do you test boundary conditions on an evolved circuit which may have "extra" boundaries inside? Maybe it has developed an accumulator that counts to 33 in analog, skips 34 (it counts 33 twice), and is fine from 35 up - one that just happens to work under test conditions that don't care about #34. Better examples, anyone?
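    A throwaway illustration of that worry (the 33/34 counter is hypothetical, as above): a handful of spot checks can pass while a sweep of the full input range catches the hidden boundary.

    def evolved_counter(n):
        # Hypothetical evolved accumulator: it counts 33 twice, so an
        # input of 34 comes out as 33; everything else happens to be fine.
        return 33 if n == 34 else n

    # Spot checks like the evolution's fitness cases might use - all pass:
    assert all(evolved_counter(n) == n for n in (0, 1, 17, 33, 100))

    # An exhaustive sweep of the input range finds the hidden boundary:
    print([n for n in range(256) if evolved_counter(n) != n])   # [34]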
  • Biological life could be so far outside of its realm of experience that it would just have no way of comprehending us.

    The analog nature of the hardware process described, and its various 'problems' with sensitivities to temperature variation and whatnot, could lead to an awareness of sorts of the outside world, including big, warm, squishy beings that often bring about such drastic changes to its state of being.

    Just a thought!
  • Not all chips are created the same; the same sequence downloaded to different chips (even from the same batch) might not produce the same quality of result. Will this be a problem? Can we even grow every chip this way? Well, as long as we are fixated on the electromechanical, industrial image of computers, rather than a more organic one, then yes, these techniques will bug people.

    for instance, I remember reading something about some genetic algorithm that did some funky sorting or filtering operation VERY well, but the researchers couldn't exactly understand why it worked (it was like a hypercomplicated regexp if I remember correctly). Sure, it could be unwound, but no one could figure out the 'why', and thus write a good analysis of how it would respond for different things.

    And this is what I think scares people about genetic algorithms, etc. We are locked into black-and-white, deterministic systems, and think thusly about so many things. It escapes people that many of the things we use in real life are based on or made by non-deterministic, statistical systems.

    I'm thinking of enzymatic reactions, recombinant-DNA bacteria that make drugs, etc., etc. Heck, even wood and meat, that are essentially very subjectively "graded", etc., even if a machine does it. We are surrounded, immersed in, subjective, non-deterministic systems. But we still want to see things deterministically.

    Even at the very microscopic view of things, quantum mechanics, many scientists find QM disturbing, because it is non-deterministic, but they continue to use it and work with/on it because it seems to work.

    Sure, we probably wouldn't feel safe with a nuclear reactor control system run by some genetically optimized computer control program, and would insist on having people watch over things, even if it could be determined that the computer was 99.999999999% reliable, vs. 85% for human operators. Funny how that is.

  • "So far they've only managed to make their solution very fragile"

    What's to stop them from optimizing across a wide range of (previously destabilizing) ambient conditions?

    "Besides, it's not like you couldn't simulate analog conditions in software. "

    You can, but not well enough to get anything remotely as interesting as what this researcher got. The models used in simulating analog circuitry work "well enough" to design certain types of analog circuitry in the traditional manner. There are other possible methods of circuit design that approach the solution from other directions, and don't require that the designer put on their "traditional analog design methodology blinders".
  • You do realize that this was only the first step to his ultimate goal, don't you?

    A chip that can respond to 'go' and 'stop'? Pshaw!

    He won't be happy until he's got a chip that understands and responds to the phrase, "Go get me a beer!" (That, and a healthy grant...what more could a research scientist ask for?)

    Incidentally, do these chips still work in Kansas? ;)
  • Maybe I am just paranoid from writing too many hardware diagnostics. However, it seems to me that nature's evolution has had millions and billions of years to work out the flaws in its "designs", to catch all the boundary conditions, race conditions, varying inputs, different ambient conditions, and so on. Don't forget the zillions of test units all interacting with each other :-)

    This chip evolution simply can't have had the same level of testing. They don't know the inner workings, and they apparently are using circuits in novel and even unknown and mysterious ways.

    Am I perhaps too paranoid here? How well can these chips be expected to work, especially when connected together? One of the hardest debugging lessons I have yet to adequately learn is to change just one thing at a time. Murphy is my mother's middle name; I wonder how well these mystery circuits will work as they are thrown together into ever bigger piles.

    --
  • For certain philosophical reasons, we've decided that artificial life, like artificial intelligence, as long as there is a 'binary' layer underlying it all, will never succeed - it is worth pursuing and will make some interesting and useful spin-offs and devices, but it will never grow legs, attain self-consciousness, go on a rampage and destroy its creator, write passionate poetry, produce a blockbuster sci-fi flick, seek eternal life, fetch the newspaper for a 'robo-treat' or replicate itself in the wilderness of earth.

    But it sure is danged interesting to try!

    Chuck
  • You're right, I didn't read this part:

    What would happen, Thompson asked, if it were possible to strip away the digital constraints and apply evolution directly to the hardware?

    so that satisfies my old objection.

    Chuck
  • Why do half a dozen people have to point it out every time an article is a repeat? Yes, it's a repeat. Who cares? If you've seen it before, just ignore it. And if you haven't seen it before, then it's not a problem.

    If you must point it out, send email to Rob or Hemos. That way you know they'll actually see the message. But posting a message about it is just a waste of everyone's time. Rob and Hemos don't have time to read every post on every story, but a lot of the rest of us will waste time reading it. We don't need to be impressed by how good your memory is.
  • I remember reading about this in the paper edition of New Scientist a few months ago; that doesn't make it any less interesting, particularly the announcement recently that Micro$oft is funding research into genetic methods for software development. The article I read said that the developers were still having trouble with the system; particularly in that the evolved "program"---actually instructions for an FPGA---would only work on one chip at one temperature. I'd love to see how they're doing now.

    "I want to use software that doesn't suck." - ESR
    "All software that isn't free sucks." - RMS

  • I always imagined an application or OS which would keep an eye (top) on which processes took the most time/resources, spawn a couple of mutated copies to run alongside the original during idle time, and keep the version that was most efficient. This way, the code would self-optimize. Care would have to be taken to make sure the apps produced the same result when given identical information, but once we're sure that's the case... let's have a go at it.

    It would be neat to see if we could set up a version control network that would submit new strains of code up the tree for consideration...so that everybody's machine helps optimize the app in the background.

    Even if this is insanely infeasible... it still is an interesting thought.
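    A toy sketch of that loop (everything below is invented for illustration; a real system would need sandboxing and a far stronger equivalence check than comparing a single output):

    import random
    import time

    def timed_run(workload, params):
        start = time.perf_counter()
        result = workload(params)
        return result, time.perf_counter() - start

    def mutate(params):
        # "Spawn a mutated copy": jitter one tunable parameter.
        p = dict(params)
        key = random.choice(list(p))
        p[key] = max(1, p[key] + random.choice((-10, 10)))
        return p

    def self_optimize(workload, params, idle_rounds=50):
        reference, best_time = timed_run(workload, params)
        for _ in range(idle_rounds):              # "during idle time"
            candidate = mutate(params)
            result, elapsed = timed_run(workload, candidate)
            # Keep only variants that give identical results, faster.
            if result == reference and elapsed < best_time:
                params, best_time = candidate, elapsed
        return params

    # Example: tune the chunk size of a chunked sum.
    data = list(range(100_000))
    work = lambda p: sum(sum(data[i:i + p["chunk"]])
                         for i in range(0, len(data), p["chunk"]))
    print(self_optimize(work, {"chunk": 10}))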
  • life.. I've created silicon based life.. oh no those are implants
  • Actually, that IS real Latin!

    Litigo: sue, go to court, quarrel
  • The newsflash is that the problems with AI are solved: it takes ~18 years to train one, and they get really cranky between 14 and 17 years old.
  • Yeah, I picture it... we evolve thinking chips, they pass testing, and we network them all.
    World War III breaks out, we instruct the missiles to launch, and they send a message back saying "You might be suicidal, but I'm not".

    No missiles launch anywhere...

    Or worse... someone "evolves" a super CPU and it gets implemented in everything.
    For some unknown reason, the more chips are made, the faster and more powerful it gets.
    Then a set of seemingly random computer glitches shows up. A cracker's computer shuts down.
    Viruses malfunction, poorly written software vanishes. Spammers can't send spam.

    "Play nice or I won't run your software"
  • The First NASA/DOD Workshop on Evolvable Hardware [computer.org] - lots of abstracts; full text if you have IEEE membership. Took place 19-21 July, 1999.
  • by Tekmage ( 17375 ) on Friday August 27, 1999 @03:39AM (#1722722) Homepage
    Circuit evolution raises yields on GHz chips [edtn.com] - something of a more recent vintage. :-)
  • Now that's cool. I hope this gets moderated up. You know, if I were Intel/AMD/Motorola/whatever, I would put a great deal of funding into this Japanese lab right now.
  • And this is what I think scares people about genetic algorithms, etc. We are locked into black-and-white, deterministic systems, and think thusly about so many things. It escapes people that many of the things we use in real life are based on or made by non-deterministic, statistical systems

    Unfortunately, having an idea of how the solution works helps to determine whether it's valid or not. For example, there was an experiment where neural networks were trained to identify tanks. After a while the researchers thought the network was working properly, but it failed horribly on some data from outside the training set. Turns out the pictures in the training set with tanks were dark and the pictures without tanks were light. The neural network was actually detecting whether the picture was light or dark, rather than whether there were any tanks in it.

    The problem with genetic algorithms and neural networks is that you're never quite sure whether the solution is doing what you want it to, or whether it has picked up on something else in the training set/test data. Therefore you never know if it will fail miserably when the environment is changed or new data is examined. I think that's why a lot of people aren't very confident in, or trusting of, these types of solutions.
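    The usual first defense is scoring the result on data the training never touched. A minimal holdout sketch (the classifier and examples are generic placeholders, not the tank study's setup; note that a holdout split still won't catch an artifact, like lighting, that pervades the entire collection):

    import random

    def accuracy(model, examples):
        return sum(model(x) == y for x, y in examples) / len(examples)

    def train_and_check(train_fn, examples, holdout_frac=0.3):
        examples = examples[:]
        random.shuffle(examples)
        cut = int(len(examples) * holdout_frac)
        holdout, training = examples[:cut], examples[cut:]
        model = train_fn(training)
        # A large gap between these two numbers is the red flag that
        # the model latched onto something peculiar to the training set.
        return model, accuracy(model, training), accuracy(model, holdout)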

  • How do you test boundary conditions on an evolved circuit which may have "extra" boundaries inside? Maybe it has developed an accumulator that counts to 33 in analog, skips 34 (it counts 33 twice), and is fine from 35 up - one that just happens to work under test conditions that don't care about #34. Better examples, anyone?

    Some researchers were trying to train neural networks to identify tanks in photographs. After the neural network worked extremely well on the training set, they tested it on new data and it failed horribly. Turns out the pictures with tanks in them were dark and the pictures without tanks were light. The neural network had been trained to determine whether the pictures were light or dark and didn't care about the presence or absence of tanks.
  • With that kind of attitude, it'd probably feel the same way about you...

    http://www.whatisthematrix.com/cmp/newFrame.html
  • Genetic algorithms are by now an old (heh) and decently understood technology. Essentially this is nothing but a general-purpose global optimization method. So the guy applied that optimization technique to FPGAs and got something. Big deal. People have done much more interesting things with genetic algorithms.

    Besides, is it just me, or is the whole genetic algorithm thing getting blown out of all proportion by the media, somewhat similar to what happened to neural nets several years ago?

    And the article is quite clueless. It implies that software is too limiting (only 0s and 1s, after all), so playing with FPGAs will open wider horizons. And the researcher speaks of not understanding what's going on like it is a good thing...

    Kaa
  • The point was that they are developing chips that will allow the GA to have access to more analog info.

    And the point of this being..? So far they've only managed to make their solution very fragile.

    Besides, it's not like you couldn't simulate analog conditions in software.

    GAs are not all that well understood. They have been relatively widely applied, but on a theoretical level we don't have much more apparatus than the schema theorem (Holland) at our disposal.

    Well, we are probably talking about different things here. You are talking about understanding in terms of proving theorems. I am talking about understanding in a more practical way -- having an idea of what usually works, what never works, and what has never been tried yet. Compared to 4-5 years ago we understand much more about GAs, their uses and limitations.

    Neural networks have received quite a lot of hype and are generally poorly understood by people who proclaim their utility.

    It has been my personal experience that people who proclaim NNs as a general solution to all problems do not understand them at all.

    However we are just scratching the surface of what can be done with Genetic algorithms and genetic programming.

    On the one hand, yes. On the other hand, they are still nothing more than a general-purpose global optimization technique, not a magic wand. Granted, reasonable global optimizers are very hard to come by, but it's still nothing but search in parameter space.

    Kaa
  • The thing is, without proof of convergence, the solution you find using a GA may very likely be something you hadn't thought of, and even be a better one at that, but not the optimal one!

    One, I think that there is a proof of convergence -- albeit in infinite time :(

    Two, for non-trivial problems you cannot guarantee the global optimum without exhaustive search, which is infeasible most of the time. For the great majority of problems you will settle for something reasonably close to the global optimum, "reasonably" being defined by the specific problem you are trying to solve.

    I'd rather use a more expensive, but guaranteed, method instead. But those are hard to come by too.

    There are no guaranteed global optimizations (in the general case) with the exception of exhaustive search. If you can afford an exhaustive search, by all means run it and get your precise global optimum. Unfortunately, for most real-life problems the exhaustive search is so far out of the realm of feasibility, it's not even funny. How about finding me the global minimum of a nonlinear function of, say, 15 real-valued variables? You use GAs in situations where you don't have much choice -- you either do a local search with all its disadvantages, or pick an imprecise and "semi-random" solution (GA or, say, simulated annealing). Sometimes the local search is the right way to go, sometimes not.

    Kaa
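    To make the 15-variable example concrete, here is a minimal simulated-annealing sketch (the Rastrigin function is a stock multimodal test function, chosen here purely for illustration; nothing below guarantees the global minimum, only a reasonably deep one):

    import math
    import random

    DIM = 15

    def rastrigin(x):
        # Classic multimodal test function: global minimum 0 at the
        # origin, surrounded by a lattice of local minima.
        return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

    def anneal(steps=100_000, temp=10.0, cooling=0.9999):
        x = [random.uniform(-5.12, 5.12) for _ in range(DIM)]
        f = rastrigin(x)
        best = f
        for _ in range(steps):
            cand = [v + random.gauss(0, 0.1) for v in x]
            cf = rastrigin(cand)
            # Always accept improvements; accept some regressions while hot.
            if cf < f or random.random() < math.exp((f - cf) / temp):
                x, f = cand, cf
                best = min(best, f)
            temp *= cooling
        return best

    print(anneal())   # close to 0 with luck; never provably the global optimum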
  • durrrrrrrrrr
  • Hmm, hey guys, why don't we use these new evolved chips in our missile command centers?

    Sure, that sounds great.

    evolved chip: "hehe, they still don't even know how I work. I've passed all their safety 'tests'. hehehe, if they only knew, if they only knew"


    I'm sorry, I couldn't resist. It would be funny though, if we accidentally evolved thinking machines and didn't even know it.
  • Hmm, as always I like that picture of the future better. P.S. I'm generally not a negative future-seer; I was just in a negative mood, I guess :)
  • The humans work in Kansas, don't they? ;)
  • FYI, Thompson and his "Darwin Chip" were the cover story for Discover magazine, June 1998.

    It's the only Discover I've ever saved...

    LL
  • you guys posted this in like '97, when i started reading /.
  • I would just like to add that if an article is good enough to pop up on /. a second time, it's actually good for it to reappear, since many people didn't see it the first time (the one who posted it the second time surely didn't). When an article recurs, it gives the people who missed the first appearance a chance to see it.
    --
    "take the red pill and you stay in wonderland and I'll show you how deep the rabbit hole goes"
  • GAs are not all that well understood. They have been relatively widely applied, but on a theoretical level we don't have much more apparatus than the schema theorem (Holland) at our disposal.

    Well, we are probably talking about different things here. You are talking about understanding in terms of proving theorems. I am talking about understanding in a more practical way -- having an idea of what usually works, what never works, and what has never been tried yet. Compared to 4-5 years ago we understand much more about GAs, their uses and limitations.

    The thing is, without proof of convergence, the solution you find using a GA may very likely be something you hadn't thought of, and even be a better one at that, but not the optimal one!
    And optimisation is exactly where GAs are being used, admittedly very successfully in some applications. But a black box that does some semi-random bit-shifting and then provides a solution, often a different one with each run, is not what I would trust an industrial-scale process to. I'd rather use a more expensive, but guaranteed, method instead. But those are hard to come by too.

  • Anyone else ever actually try to build something and let it evolve? About six years ago I wrote a little DOS program (I can email it to anyone who asks in a few days, but I'm offline at home right now, so I'd have to fetch it on a disk, and it's the weekend in a couple of hours) to try and evolve the behaviour of some little sprites wandering all over the screen.

    It used a decision tree to decide what to do given inputs like what's standing in front and what's to the sides and what have you. They could decide to move forward or turn or attack the square in front. Their 'energy' level was tracked and attacking each other or the 'grass' that grew around randomly replenished it. When they were all dead the last few to die got to spawn the next generation. I was interested to see how hard it would be to evolve some better AI for games.

    They did, quite quickly, evolve what looked like the same algo as the tree I built by hand to test the code (move forward unless there's a wall in front, in which case turn left - oh, and if there's food in front then eat it), but the tree was a mess. Couldn't tell what was going on inside the code. They never really got any farther though.

    What I found interesting was that trying to evolve from my test tree was impossible. My delicately constructed tree was completely screwed as soon as you changed one byte of it - the poor critters just died. The algo that evolved, though, was WAY more robust; upping the mutation rate to crazy levels still left the critters doing something better than standing still in confusion.

    I did start work on a new version that would let the inputs evolve as well. Rather than just seeing to the sides and two squares in front, the viewable squares' locations themselves could evolve. I got distracted and moved onto something else before I ever finished it though. Story of my life.

    Pre.......
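    For anyone who wants to try reproducing that experiment, a bare-bones rendering of such a genome (the tree encoding below is a guess at a similar setup, not the original DOS program):

    import random

    ACTIONS = ("forward", "left", "right", "attack")
    SENSES = ("front", "left_side", "right_side")
    VALUES = ("wall", "food", "critter", "empty")

    def random_tree(depth=4):
        # A genome is a nested (sense, value, yes_branch, no_branch)
        # tree with an action at each leaf.
        if depth == 0 or random.random() < 0.3:
            return random.choice(ACTIONS)
        return (random.choice(SENSES), random.choice(VALUES),
                random_tree(depth - 1), random_tree(depth - 1))

    def decide(tree, senses):
        while isinstance(tree, tuple):
            sense, value, yes, no = tree
            tree = yes if senses[sense] == value else no
        return tree

    def mutate(tree, rate=0.05):
        if not isinstance(tree, tuple):
            return random.choice(ACTIONS) if random.random() < rate else tree
        sense, value, yes, no = tree
        return (sense, value, mutate(yes, rate), mutate(no, rate))

    Wire decide() into a grid world that tracks each critter's energy, and let the last few survivors spawn the next generation through mutate().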
  • I have a suspicion that people will eventually understand the potential value of LISP. This language was perfectly suited to problems you didn't have an algorithm to solve. Its built-in software functionality made it usable for many of the specialized things that we are seeing developed today. Squireson http://www.peorialinux.org
  • Remember though that the lifespan of each 'generation' on a computer is much much much much shorter than real life. I think the article said it took 4800 generations to distinguish between the 2 tones... 4800 generations in humans would take 4800 x 25 years (25 is just an estimate of the span between generations... no clue how long it really takes) or 120,000 years... that's probably way off, but it illustrates the point I think.
  • Perhaps training is a bad word. I mean the process of rejecting the bad algorithms and evolving towards a good algorithm. This means you have to have an idea of what a good algorithm's output should be. At the end of the process, you have an algorithm which you know is good for your test cases, but you don't know how good it is for your non-test data.
  • I have been thinking about doing something like this for years, but I never seem to have time. If someone started an open source project, though, I would like to get involved.

    I can do the GA algorithms, so we need someone to do the network code and someone to put together the web site.

    By the way, take a look at this: on the 17th of July, Roland Olsson posted to comp.ai a message [deja.com] saying he had made a program [chalmers.se] using ADATE [www-ia.hiof.no]. The program searches for a SAT solver. The satisfiability problem (SAT) is one of the fundamental problems in theoretical computer science; if we had a fast algorithm to solve it, we could solve a big class of very interesting problems - for example, factoring big integers and solving the traveling salesman problem.

    You can find more problem instances here, if you are interested.

    So if your computer has some spare time, give it a try.
  • Well, we should make a client independent of the actual problem.

    Something like three parts:

    a network part,
    a search engine part,
    and a problem definition part,

    with a static interface between the parts.

    So to do another problem, you just write another problem definition part. And to try another search algorithm, you write a new search engine part.
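    That static interface could be as small as three abstract classes. A hypothetical Python rendering (all names here are invented):

    from abc import ABC, abstractmethod

    class Problem(ABC):
        """Problem definition part: encodes candidates and scores them."""
        @abstractmethod
        def random_candidate(self): ...
        @abstractmethod
        def fitness(self, candidate) -> float: ...

    class SearchEngine(ABC):
        """Search engine part: any algorithm honoring the same interface."""
        @abstractmethod
        def step(self, problem: Problem, population: list) -> list: ...

    class Network(ABC):
        """Network part: ships the best candidates between clients."""
        @abstractmethod
        def publish(self, candidates: list) -> None: ...
        @abstractmethod
        def fetch(self) -> list: ...

    Swapping in a new problem or a new search algorithm then means subclassing exactly one part.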
  • Is there any sort of "electronic" version of that article?
  • heh..someone just had to say it..

    hehehe
  • Good point. Still, why do we have to know how something works to make it useful? I mean, no-one knows how the human brain works but we all use that..

    There is also a lot of traditional software out there, being used every day, that no-one understands how it works, because it was badly documented (if at all) and/or the people that wrote it have died/forgotten how it works. Early Mac stuff springs to mind.

    What's the difference between computer-evolved obfuscated code and human-generated obfuscated code? We've got plenty of the latter around.
  • http://www.theonion.com/onion3522/robots_are_the_future.html
    ----

    Humor aside, I think this is really cool. Somehow a working design evolved entirely outside the human constraint of an entirely digital world. AI is tough because biological organisms simply don't work on a digital premise, and it is hard to model organic intelligence within the constraints of a digital world. AI, unfortunately, hasn't been able to make much progress recently, but with an article like this, I don't think it is implausible that one day I might shake hands with a C-3PO. Hey, we humans evolved, after all, right? If we can fast-forward the evolution of these things, I think there is hope for emergent intelligence.

    Anyway, there are plenty of practical short-term goals. The first one that popped into my head was: hey! Use this technique on my graphics card and let me play Quake at 1280x1024, 32-bit, 40 frames/sec. Cool. This is cool stuff and I hope it gives AI a boost.
    I wonder what religious zealots think about this evolution though.
  • Actually, I was thinking about the same thing. What if we just "helped" it along? What if we pre-evolved FPGA-writing software? What if we gave it access to some sort of duplication mechanism - could it somehow take advantage of these? What if the evolution program itself was evolved? A sort of supervisor for the evolving?

    BTW, I'm not convinced that some of the more recent blockbuster sci-fi flicks weren't written by a robot:

    super.hype_movie()
    manufacture_merchandise()
    advertise_on_happy_meals()
    sleep(30000)
    Graphics cool = doFX()
    Plot sucks = order_monkeys_to_type()
    sleep(30000)
    showMovie(sucks, cool)
    prepare_for_next(Titanic)
  • I think I like the idea of watching other AI organisms eat M$ retarded organism alive.
  • and it was evolved in hardware?
  • I used to work in the room opposite Thompson when he was doing this research. He's back at COGS [susx.ac.uk] now, working at the Centre for Computational Neuroscience and Robotics [susx.ac.uk], whilst I'm still at SInC [sussexinnovation.co.uk].

    Looks like Thompson's still working on exploiting the non-digital properties of digital devices, if I understand the blurb.

  • If these FPGA computers were to develop an artificial intelligence and form some kind of sentience, what would be the ethical ramifications of selling a self-conscious being? Would it be on par with slavery? Would shutting it down be the equivalent of digital murder? Interesting questions....

    Just a concerned AI...

  • The latest Y-chromosome genetic studies (as well as recent studies of the paleontological record) show conclusively that Darwinian evolution did not occur in higher animals, beyond the species level. You have to have an enormous population and a very short generation cycle (time from birth to sexual maturity) to allow evolution to work. It works in bacteria, viruses, fruit flies, and now, apparently, silicon. But humans, whales, apes--not a chance. Not enough time, not a large enough population. Think what you want about where they came from (seeded by God, extraterrestrials, etc.) but these higher forms have been shown not to have evolved from lower forms. (Oh, boy, I better grab my thermal garb! Flames a-comin'!) References: Michael F. Hammer, "A Recent Common Ancestry for Human Y Chromosomes," Nature 378 (1995), pp. 376-378. L. Simon Whitfield, John E. Sulston, and Peter N. Goodfellow, "Sequence Variation of the Human Y Chromosome," Nature 378 (1995), pp. 379-380.
  • Thanks for a chuckle :) But...

    "Of course, the really smart computers would never believe something as absurd as that"

    Really smart sentient beings don't believe -- they test.

    Jim
  • "Of course, with this technique one is never assured he will find the absolute lowest valley"

    True, and nice visualization for the problem.

    Practically speaking though, you can ensure pretty close to a best solution by starting your search from many random initial positions. The more, the better your chances of hitting the deepest hole.

    Jim
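    A tiny sketch of that multi-start idea (the bumpy objective and the hill-climbing local search are made-up stand-ins):

    import math
    import random

    def f(x):
        # Made-up objective with many local minima.
        return x * x / 50 + 3 * math.cos(x)

    def hill_climb(x, step=0.1, iters=2000):
        for _ in range(iters):
            cand = x + random.uniform(-step, step)
            if f(cand) < f(x):
                x = cand
        return x

    # Many random initial positions; keep the deepest hole found.
    starts = [random.uniform(-100, 100) for _ in range(50)]
    best = min((hill_climb(s) for s in starts), key=f)
    print(best, f(best))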
  • "What's to stop them from optimizing across a wide range of (previously destabilizing) ambient conditions?"

    Nothing. But these algorithms might possibly be more prone to failure under exceptional conditions. The article does explain that changes in environment that wouldn't be expected to perturb digital circuitry (temperature, stray capacitance) caused his chip problems.

    The problem with software reliability is that you can't run through all the situations it will encounter in the field; there are too many. A similar problem occurs in trying to imagine all the environmental ranges required for the evolution of this chip. If there are 10 environmental variables with 10 states each, that's 10**10 different trials per generation. Not likely.

    It's probably the case that sensitivity to the environment can be handled in some way. But I think it's fair to say that it's a concern for genetic algorithms, and it's exacerbated by depending on analog characteristics of circuits designed for digital use.

    Jim
  • "Biological organisms do work on a digital system - what do you call DNA? Sure, it's base four not binary, but it's still a discrete combinatorial system."

    I'm not a biologist, but isn't that a tad oversimplified? Sure, the encoding is digital, but the interaction with proteins is highly analog. This is a little like the FPGA, which is designed with digital states in mind, but can be evolved to utilize analog interactions between cells.

    Jim
  • Yes, the article is old. And yes, perhaps it did appear here in 1997 (I'm a relatively new reader . . . only a year and a half now!). But time and time again, there are clumps of people bitching about it. Well, first off, do YOU remember every single article you read two years ago? I sure as hell don't.

    The article was submitted again, and posted again. Not claiming intent for either party (it could have been a mistake), I see no problem with this. The article is relevant again, given the Kansas Fuckup. Did it ever cross your mind that maybe someone thought to re-submit this article in response to that? Or maybe posted for people who (like me) didn't catch it the first time around.

    Play nice children.
  • So 15 cells seemed to have no logical purpose, but the chip stopped working without those cells...

    This reminds me of the "Magic Switch" story in the Jargon File... a switch on a mainframe had only one wire connected to it, with the other end connected to a ground pin, so it obviously wouldn't work - yet flipping it from the position labeled "More Magic" to the position labeled "Magic" caused the computer to crash.
    --
  • "I wonder what religious zealots think about this evolution though."

    They won't think about it; they don't do the "thought" thing. They will pray (or say they did) about it and do what God (or their better interests) tells them they should do. This will most likely be a condemnation of it.

    I am from Kansas, and no, the chip will not work here (the church SIGs probably won't let it). Or it will work and we just won't be able to tell our children about it in school.
  • I will never feel ethically obligated to keep any piece of electronic equipment operating, even if it is "intelligent."

    Ok. Think for a few minutes about what runs the human body. Hmmm... Electricity... Uhoh... Looks like we are all just a bunch of poorly designed and rather stupid robots. Guess I can unplug you with impunity...

    Kintanon
  • Hmm, has anyone ever read any of the spin-offs of Asimov's 'I, Robot' and 'Robots of Dawn', specifically one called 'Changeling' (I think)? It talked about embedding a positronic brain into an infinitely malleable substance, so that the brain could define what its body was.

    Those things could be extremely interesting if the software could modify the hardware to accomplish its goal. Imagine, Linux decides it needs more CPU cycles, and more RAM, so it gets to work redesigning your motherboard for you. You open the case up a few weeks later to see some kind of shimmery silver blob where your motherboard was, and your PC is running at 25Ghz, with 100TB of RAM! >:) Sounds like fun.

    Kintanon
  • The article is dated 15th November 1997, so it's not exactly news, is it? And I'm almost certain that this same story was posted to Slashdot a couple of months ago.

    HH

  • by SirSlud ( 67381 ) on Friday August 27, 1999 @04:38AM (#1722765) Homepage
    "How acceptable is a safety-critical component of a system if it has been artificially evolved and nobody knows how it works?" he asks. "Will an expert in a white coat give a guarantee? And who can be sued if it fails?"

    This is the funniest thing I have ever read (well, today). Yeah! Who can be sued when it fails? What good is human existence without somebody or some organization to blame things on? Using one of these circuits and then suing the maker if it fails is like drinking until you sustain brain damage, and then suing the beer company. Is there no such thing as personal accountability anymore? Doesn't anyone take responsibility for their own actions? For that matter, why don't they just 'evolve' a circuit that always knows who to sue? (Although, when that breaks down, /then/ who do you sue?)
  • And the article is quite clueless. It implies that software is too limiting (only 0s and 1s, after all), so playing with FPGAs will open wider horizons. And the researcher speaks of not understanding what's going on like it is a good thing...

    I think you missed the cool part of the research- Thompson didn't understand what was going on because what was going on was fundamentally different from how the electrical engineers do things. In the Discover article, they went into more detail on that point. Apparently, some of the designs that the GA evolved contained components that were entirely unconnected to the main circuitry, but that couldn't be taken out without making the chip stop working.

    And the point about software being too limiting: they're talking about efficiency. If you want to solve the problem using software, think about all the gates you have to use! It's not that it can't be done, but if you can do it with 100 gates in hardware, that's probably better than a 10,000+ gate software solution. Particularly when no one has ever been able to do it with less than 1,000 gates using traditional techniques.
  • You can really only do so much with genetic algorithms because while the most successful code/design is often better than anything a human could create, it is also almost totally unreadable by people; if you did an entire project genetically, you'd lose all control of it. Genetic algorithms are only useful for small sections that need to be as efficient as possible (similar to hand-optimizing software in assembly).
  • I wonder how many CPU cycles it takes to run through a single generation? I suppose, it would depend on the scope of the project and the number of permutable aspects in the code. It would certainly be interesting to see how well a distributed.net project would lend itself to producing a large-scale genetically based application.

    It seems that the model could work fairly well because every client would share the same acceptable fitness parameters, and whole squadrons of teams would be able to work on different genetic threads with their most robust solutions mated and re-distributed to all of the clients at the end of every generation or series of generations.

    It would certainly be more interesting than another brute-force project. Does anyone know how distributed.net chooses their projects, or are there any groups already working on distributed GP?

  • I like SAT, but what if it were something equally definable, but with far more parameters and variables to take into account? For instance, what if GP was used to create a distributed Weather Modeling program? We would have an easily definable goal (Predicted Weather minus Actual Weather = 0) but plenty of fuzzy parameters so it could grow into a project 'worthy' of distributed computing.

    What would you think of that? It may be difficult to get the National Weather Center to give up their information, but maybe the SETI Project's success could convince them of the potential value.
  • He is a mentor and friend.

    ----------
    Have FreeBSD questions?
  • Even if you ignore the ethics of it (pretend pulling the plug is ok-e-dok-e), the real question is: what position does evolving/creating systems that "engineers" and humans in general don't understand put the world in? Society has shown us that the human race as a whole (but maybe not all of us) loves systems. It can be governments, schools, churches, what have you - whatever it is that lets us understand the big picture which is beyond our grasp. Well, what happens when the actual functioning of these computers, the pinnacle of human control, falls outside of all our methods of understanding systems? We lose control. Oh well, gotta catch a plane. Oh, and I just watched The Matrix for the 4th time. That'll probably do it.
  • They'll just have to think of some way to explain it to the one-brain-cell types on the school board.

    BTW, I live in Kansas, and am mostly a Republican. A pro-choice, atheist, evilutionist Republican, but one nonetheless...
  • People are doing a lot of this stuff right now; anyone else wonder where it will end up?

    I think Microsoft will get involved early in this development, as they have already funded research on 'how to make Excel better through Darwinism'. I think their next logical step is to create self-conscious programs... that they can enslave. Don't be surprised if they do...

  • yeah! you could watch the paperclip freak out and strangle the dog.
  • Comparing GA chip development with the evolution of life is not an apples-to-apples comparison. Chip development has a goal to work towards (thus something to test against). Life's evolution is goal-less; its existence and proliferation is its point (depending on your philosophical view). Life never has a completed/perfect design, which is what a chip designer is striving for. In fact, each living individual is a unique design itself that may or may not succeed, whereas we would make millions of copies of the best chip design.
    Besides, you can count the evolutionary flaws in nature by counting the extinct species... or is a better comparison counting the unpropagated mutations within every species?
  • The religious zealots aren't all that concerned, after all, the world is going to end in a little over 3 months, and those involved don't seem to be that close to C3PO yet.
