
Artificial Intelligence Is Evolving All By Itself (sciencemag.org)

sciencehabit shares a report from Science Magazine: Artificial intelligence (AI) is evolving -- literally. Researchers have created software that borrows concepts from Darwinian evolution, including "survival of the fittest," to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI. The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.

In each cycle, the program compares the algorithms' performance against hand-designed algorithms. Copies of the top performers are "mutated" by randomly replacing, editing, or deleting some of their code to create slight variations of the best algorithms. These "children" get added to the population, while older programs get culled. The cycle repeats. In a preprint paper published last month on arXiv, the researchers show the approach can stumble on a number of classic machine learning techniques, including neural networks.
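
The loop the summary describes is, in outline, a simple generate-evaluate-mutate cycle. Below is a minimal sketch of that cycle in Python; the population size follows the summary, but everything else (the tournament size, and the bodies of random_program, mutate, and evaluate) is an illustrative placeholder, not the authors' actual AutoML-Zero code.

import random

POPULATION_SIZE = 100    # summary: "a population of 100 candidate algorithms"
TOURNAMENT_SIZE = 10     # assumed; the summary only says copies of the top performers are mutated
GENERATIONS = 10_000

OPS = ['add', 'sub', 'mul', 'div', 'dot', 'relu']   # placeholder instruction set

def random_program():
    """A random sequence of basic mathematical operations (stand-in)."""
    return [random.choice(OPS) for _ in range(random.randint(3, 20))]

def mutate(program):
    """Randomly replace, delete, or insert one instruction."""
    child = list(program)
    i = random.randrange(len(child))
    roll = random.random()
    if roll < 0.4:                       # replace an instruction
        child[i] = random.choice(OPS)
    elif roll < 0.7 and len(child) > 1:  # delete an instruction
        del child[i]
    else:                                # insert a new instruction
        child.insert(i, random.choice(OPS))
    return child

def evaluate(program):
    """Fitness on a small task (e.g. cat-vs-truck accuracy). Placeholder: random score."""
    return random.random()

population = [(prog, evaluate(prog)) for prog in
              (random_program() for _ in range(POPULATION_SIZE))]

for _ in range(GENERATIONS):
    tournament = random.sample(population, TOURNAMENT_SIZE)    # pick a few candidates at random
    parent = max(tournament, key=lambda pair: pair[1])[0]      # copy the best of them
    child = mutate(parent)
    population.append((child, evaluate(child)))                # "children" join the population
    population.pop(0)                                          # the oldest program is culled

best_program, best_fitness = max(population, key=lambda pair: pair[1])

In the actual system, evaluate() trains and tests the candidate on small tasks such as the cat-versus-truck problem, and mutation edits real instructions rather than opcode names.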

  • Is this new? (Score:5, Insightful)

    by Lanthanide ( 4982283 ) on Tuesday April 14, 2020 @05:03AM (#59944650)

    I was taught about 'genetic programming' as described in the summary during my computer science degree 15 years ago.

    What's actually new about this?

    • by lobiusmoop ( 305328 ) on Tuesday April 14, 2020 @05:45AM (#59944738) Homepage

      1985 called, it wants its 'World Of The Future' fluff news article back.

      • 1985 called, it wants its 'World Of The Future' fluff news article back.

        I can has artificial? In the future?

    • Re: (Score:3, Informative)

      by axedog ( 991609 )
      The answer is in the abstract of the paper. Traditional genetic algorithms evolve optimal sets of parameters to a fixed algorithm; the parameters are evolved by the GA, but the underlying algorithm does not change. This method allows exploration of completely new ML algorithms from basic operations.
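
      To make that distinction concrete, here is an illustrative contrast (not from the paper; the variable names and operations are made up) of what the search actually mutates in each case:

      import random

      # Traditional genetic algorithm: the algorithm is fixed, only its parameters evolve.
      fixed_model = lambda x, w: w[0] * x + w[1]            # this function never changes
      genome_ga = [0.5, -1.2]                               # genome = parameter vector
      mutated_ga = [w + random.gauss(0, 0.1) for w in genome_ga]

      # AutoML-Zero-style search: the genome *is* the program, built from basic operations.
      genome_gp = ['s1 = dot(v0, v1)', 's2 = relu(s1)', 's0 = s2 * s3']
      mutated_gp = list(genome_gp)
      mutated_gp[random.randrange(len(mutated_gp))] = 's1 = s1 + s2'   # edit one instruction
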
      • Re:Is this new? (Score:5, Informative)

        by K. S. Kyosuke ( 729550 ) on Tuesday April 14, 2020 @07:01AM (#59944870)
        Right, that's the difference between genetic *algorithms* and genetic *programming*. *Neither* is new. Here's a book on the latter subject from 1992. [amazon.com]
        • This is the correct answer and ought to be pinned right under the article. Note that the 1992 book on the subject is a culmination of a decade of research - it wasn't even new 30 years ago.

      • by nospam007 ( 722110 ) * on Tuesday April 14, 2020 @07:21AM (#59944908)

        "The answer is in the abstract of the paper. "

        Are we now supposed to actually RTFA?

      • An algorithm would not be called a genetic algorithm unless the algorithm itself was genetically generated. i.e. your second case. So, yes, this HAS been done before and the field is called "genetic algorithms".
        • Unless something changed in terminology, a genetic *algorithm* is an algorithm that uses genetic principles for its operation. The desired result does not necessarily have the form of a program in the first place.
      • Many years ago, someone developed a random compiler. The compiler would generate random blocks of machine code, and then test if the block correctly implemented the function being compiled.

        The compiler used incredible amounts of processor time for the shortest functions. It also had a tendency to find short functions that used obscure processor features.
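
        A toy version of that generate-and-test idea, for flavor; the instruction set, target function, and tests below are all made up, and a real system emits actual machine code rather than this little accumulator language:

        import random

        TARGET = lambda x: max(-x, 0)             # the function we want to "compile"
        TESTS = [-7, -1, 0, 3, 12]
        OPS = ['neg', 'max0', 'add_x', 'sub_x']   # invented accumulator instructions

        def run(program, x):
            acc = x
            for op in program:
                if op == 'neg':
                    acc = -acc
                elif op == 'max0':
                    acc = max(acc, 0)
                elif op == 'add_x':
                    acc += x
                elif op == 'sub_x':
                    acc -= x
            return acc

        found = None
        while found is None:
            candidate = [random.choice(OPS) for _ in range(random.randint(1, 4))]
            if all(run(candidate, x) == TARGET(x) for x in TESTS):
                found = candidate             # passes the test suite (and only the test suite)

        print(found)   # typically something equivalent to ['neg', 'max0'], i.e. max(-x, 0)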

        • Superoptimization? [stanford.edu]
          • by cb88 ( 1410145 )
            Yep, and often enough there are good reasons for not using what it finds... since you go to the next revision of the CPU and it's broken.
            • Surely these things can be checked for. This wouldn't prevent at least JIT engines from using good instruction sequences obtained this way.
                Well, the point is a superoptimiser finds an optimal solution for that CPU and microcode...

                So yes, you could check, but it would probably break with the first microcode update you get... still probably an interesting exercise for someone's Master's degree...
      • Re:Is this new? (Score:5, Interesting)

        by Rutulian ( 171771 ) on Tuesday April 14, 2020 @12:00PM (#59945946)

        No, it is an example of genetic programming, but just because genetic algorithms have been described elsewhere for other problems doesn't mean their application to ML algorithms is easy or straightforward. They've attempted to tackle a number of important challenges and the paper is actually quite interesting:

        An early example of a symbolically discovered optimizer is that of Bengio et al. [8], who represent F as a tree: the leaves are the possible inputs to the optimizer (i.e. the x_i above) and the nodes are one of {+, −, ×, ÷}. F is then evolved, making this an example of genetic programming [36]. Our search method is similar to genetic programming but we choose to represent the program as a sequence of instructions—like a programmer would type it—rather than a tree ... Both Bengio et al. [8] and Bello et al. [7] assume the existence of a neural network with a forward pass that computes the activations and a backward pass that provides the weight gradients. Thus, the search process can just focus on discovering how to use these activations and gradients to adjust the network’s weights. In contrast, we do not assume the existence of a network. It must therefore be discovered, as must the weights, gradients and update code.
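
        To make the quoted contrast concrete, here is a toy illustration (registers, inputs, and the example update rule are invented, not taken from either paper) of the same tiny computation written first as a tree and then as a flat instruction sequence over addressed memory:

        # A tree-structured term, Bengio et al. style: nodes from {+, -, x, /}, leaves = inputs.
        tree = ('mul', ('add', 'grad', 'momentum'), 'lr')

        def eval_tree(node, env):
            if isinstance(node, str):                 # leaf: look up an input
                return env[node]
            op, left, right = node
            a, b = eval_tree(left, env), eval_tree(right, env)
            return {'add': a + b, 'sub': a - b, 'mul': a * b, 'div': a / b}[op]

        # The same computation as a sequence of instructions over named memory slots,
        # "like a programmer would type it" (AutoML-Zero style).
        program = [
            ('add', 's1', 'grad', 'momentum'),        # s1 = grad + momentum
            ('mul', 's0', 's1', 'lr'),                # s0 = s1 * lr  (the weight update)
        ]

        def eval_program(program, env):
            mem = dict(env)
            for op, dst, a, b in program:
                x, y = mem[a], mem[b]
                mem[dst] = {'add': x + y, 'sub': x - y, 'mul': x * y, 'div': x / y}[op]
            return mem['s0']

        env = {'grad': 0.3, 'momentum': 0.1, 'lr': 0.01}
        assert eval_tree(tree, env) == eval_program(program, env)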

      • Even so, it has been done before.

        I read the abstract, and I don't see anything really new. If the authors are doing something truly new, it isn't very clear.
      • by cb88 ( 1410145 )
        I mean the distinction is very fragile... see Lisp for instance, where the parameters and the algorithm can all be modified using the same syntax.

        Also, I wouldn't even bother calling a "billion monkeys banging on keyboards simulator" intelligent... That's just basic brute-forcing a problem with perhaps a few heuristics thrown in to improve its chances.
    • > randomly combining mathematical operations

      That reminds me of what I did a couple weeks ago after giving up on my cryptography homework. I was to take a number of inputs related in a certain way (basically a TLS public key and an encryption) and knew I needed to find a formula that turned some of the inputs into a function of the other ones (thus cracking the encryption). After many false starts, I realized I was basically trying mathematical operations at random by that point, hoping to luck upon an interesting output.

      • The answer is no. If it's an academic integrity violation, you're potentially screwed. If not, you're fine. Either way, there's no point in telling your professor. If you really want to discuss it, bring it up as something you thought of one time.

      • Realizing that computers can try random things a lot faster than I can ....

        ~raymorris

        Pseudo-random. Adders arranged as a clock, or a clock arranged by adders (a state), can only generate numbers that are apparently random. Such is explained in the manuals for a Zilog processor I read in 1981.

      • > randomly combining mathematical operations

        That reminds me of what I did a couple weeks ago after giving up on my cryptography homework. I was to take a number of inputs related in a certain way (basically a TLS public key and an encryption) and knew I needed to find a formula that turned some of the inputs into a function of the other ones (thus cracking the encryption). After many false starts, I realized I was basically trying mathematical operations at random by that point, hoping to luck upon an interesting output. Realizing that computers can try random things a lot faster than I can ....

        I made a list of "potentially interesting values" - the inputs, 0, 1, -1, 2, etc. Then I made a list of operations - multiplication, modular exponentiation, bitwise inverse, etc. I set my computer running overnight randomly choosing from the potentially interesting values, randomly choosing mathematical operations to perform on them, and checking to see if the result was another "potentially interesting value". After it ran overnight, I used 'sort | uniq -c | sort -n' to list which randomly generated formulas most often gave results that were in the "potentially interesting" list.

        This week, I have to crack DSA for the case that the signer is using a low-quality random generator. I *think* I can do the math on this one by hand, but if not I'll use my random formula generator again. There are 8 inputs - the public key, the signature, the message, etc. There are only a few potential operations I might need to use, most importantly modular exponentiation. I may have my script randomly choose two of the inputs and raise one to the power of the other, do a few more random operations like addition and multiplication, and see which randomly generated formulas produce interesting results.

        The question is, do I ever tell my professor that rather than figuring out the math myself, I just programmed my computer to try shit at random until it finds a formula that works? Next week's homework, for a different class, is machine learning. I kinda wish I had learned that part first, before taking the encryption course.

        When I asked a similar question in a CS class in `98 I was told that of course a clever programmer could find quicker ways to get many of the answers, but the questions were not presented to the student because of the answers being unknown; they're presented because somebody thought you would benefit from learning some method that can be practiced via these questions.

        So it is irrelevant whether you committed a technical ethical violation or not; you clearly cheated yourself out of receiving the intended practice.
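
        For what it's worth, the random-formula search quoted above fits in a few lines. In this sketch the "interesting values", the modulus, and the operations are all placeholder stand-ins, not real key material:

        import random
        from collections import Counter

        interesting = {'a': 123, 'b': 456, 'zero': 0, 'one': 1, 'two': 2}   # inputs plus small constants
        interesting_set = set(interesting.values())
        MOD = 1009                                    # stand-in modulus

        ops = {
            'mul':    lambda x, y: (x * y) % MOD,
            'modexp': lambda x, y: pow(x, y, MOD),    # modular exponentiation
            'xor':    lambda x, y: x ^ y,
        }

        hits = Counter()
        for _ in range(100_000):
            (na, va), (nb, vb) = random.sample(list(interesting.items()), 2)
            op_name = random.choice(list(ops))
            if ops[op_name](va, vb) in interesting_set:     # formula landed on another interesting value
                hits[f'{op_name}({na}, {nb})'] += 1

        # Same idea as the overnight `sort | uniq -c | sort -n`:
        for formula, count in hits.most_common(10):
            print(count, formula)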

        • That's an interesting perspective, thanks.

          > the student because of the answers being unknown; they're presented because somebody thought you would benefit from learning some method that can be practiced via these questions.

          I'm actually a bit unclear just what we're supposed to be learning from these exercises, because there is no method to be practiced. They are pretty much brainteasers - try different things until you stumble upon the clever trick for this one. Basically riddles. I don't see that I'm l

          • I don't know about that, but I wish I knew how to crack DSA that's using a low-quality random generator.
            • You can certainly read up on it. Start by finding K, based on the characteristics of the PRG. From there, with a message and its signature it's straightforward to calculate the private key, and you own it forever.
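
              Concretely, that last step is just algebra on the DSA signing equation s = k^-1 (h + x*r) mod q: once the nonce (the parent's K) is known, the private key is x = r^-1 (s*k - h) mod q. A minimal sketch, with tiny made-up numbers rather than real DSA parameters:

              def recover_private_key(r: int, s: int, h: int, k: int, q: int) -> int:
                  # Recover the private key x from one (r, s) signature once the nonce k is known.
                  r_inv = pow(r, -1, q)               # modular inverse of r mod q (Python 3.8+)
                  return (r_inv * (s * k - h)) % q

              # Toy self-check with made-up numbers (q prime):
              q, x_true, k, h, r = 10007, 1234, 999, 5678, 4321
              s = (pow(k, -1, q) * (h + x_true * r)) % q          # build a consistent signature value
              assert recover_private_key(r, s, h, k, q) == x_true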

      • That's a sweet class.
    • Re:Is this new? (Score:4, Interesting)

      by Anonymous Coward on Tuesday April 14, 2020 @07:44AM (#59944976)
      Probably nothing. My brother got to work with some masters and PhDs in AI during a summer internship because he has crazy natural talent for these kinds of things. Every time I showed him an article about some "new" AI, he would get back to me a few days later and tell me how that was done back in the 70s; they're just making it deeper or wider because of more compute available. His general stance is that almost no one in AI knows about the research from back in the 70s and they constantly think they discovered something new.
      • by PPH ( 736903 )

        how that was done back in the 70s

        Exactly. I seem to recall a chapter on this stuff in The Handbook of Artificial Intelligence (Barr and Feigenbaum, c. 1981). Of course, computers at that time tended to be mainframes and minis, and slower than my phone, so the problems they tackled tended to be trivially simple.

    • What's actually new about this?

      According to the headline, "Artificial Intelligence" actually exists, which is pretty fucking cool if you ask me, considering it still doesn't.

    • by Kjella ( 173770 )

      I was taught about 'genetic programming' as described in the summary during my computer science degree 15 years ago. What's actually new about this?

      Nothing other than actually getting results. Just like it's super easy to build a neural network if you don't need it to be useful.

      Here's a brief history of meta-programming between a traditional programmer (TP) and meta-programmer (MP):
      TP: We need to find a model and the parameter values for it.
      MP: I can write an algorithm to find the optimal parameter values.
      TP: We need to find a model and the parameters.
      MP: I can write an algorithm to find the hidden parameters.
      TP: Okay, but we still need to find the mod

    • No, it's not new, and genetic programming is at least 25 years old by now. What is new is that these particular guys seem to have gotten better results than genetic programming, which turned out to be basically useless, quite similar to a million monkeys on a million typewriters. But I suspect it's hype.
    • by Shaitan ( 22585 )

      Honestly, outside of computing power what is new about any of the AI and NN material coming out. Some of the frameworks for coordinating and synchronizing distributed networks are pretty interesting but I would also argue those are advances in distributed computing being applied to AI.

      If anything it feels like people have rationalized away a true human or better level deep AI as a goal. Given that we ultimately define intelligence as thinking in a manner similar to ourselves and our highest intelligence is

    • Indeed, genetic programming is not new. Where did they claim to have invented genetic programming? The implementation of old concepts in new problem spaces can still be challenging, and that is exactly what they are describing:

      Existing AutoML search spaces have been constructed to be dense with good solutions ... AutoML-Zero is different: the space is so generic that it ends up being quite sparse. The framework we propose represents ML algorithms as computer programs comprised of three component functions that predict and learn from one example at a time. The instructions in these functions apply basic mathematical operations on a small memory. The operation and memory addresses used by each instruction are free parameters in the search space, as is the size of the component functions. While this reduces expert design, the consequent sparsity means that RS cannot make enough progress; e.g. good algorithms to learn even a trivial task can be as rare as 1 in 10^12 ... Perhaps surprisingly, evolutionary methods can find solutions in the AutoML-Zero search space despite its enormous size and sparsity. By randomly modifying the programs and periodically selecting the best performing ones on given tasks/datasets, we discover reasonable algorithms.

      <emphasis added>
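
      The "three component functions" in that excerpt are small instruction lists (the paper calls them Setup, Predict, and Learn) that share a few scalar and vector memory slots and are run one training example at a time. A rough, purely illustrative sketch of the evaluation harness around them; the memory layout and instruction format here are invented:

      import numpy as np

      def run_instructions(instructions, mem):
          # Each instruction applies a basic mathematical operation to the small memory.
          for op, dst, a, b in instructions:
              if op == 'add':   mem[dst] = mem[a] + mem[b]
              elif op == 'sub': mem[dst] = mem[a] - mem[b]
              elif op == 'mul': mem[dst] = mem[a] * mem[b]
              elif op == 'dot': mem[dst] = float(np.dot(mem[a], mem[b]))

      def evaluate(algorithm, examples):
          # algorithm = {'setup': [...], 'predict': [...], 'learn': [...]}
          mem = {'v0': np.zeros(4), 'v1': np.zeros(4),   # small vector memory
                 's0': 0.0, 's1': 0.0, 's2': 0.0}        # small scalar memory
          run_instructions(algorithm['setup'], mem)      # e.g. initialise "weights"
          loss = 0.0
          for x, label in examples:
              mem['v0'], mem['s1'] = x, label            # load one example
              run_instructions(algorithm['predict'], mem)   # prediction left in s0 by convention
              loss += (mem['s0'] - label) ** 2
              run_instructions(algorithm['learn'], mem)     # may adjust the "weights" in v1, etc.
          return -loss                                   # higher fitness = lower loss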

    • by skids ( 119237 )

      I guess the new part is that decades-old genetic programming has managed to invent neural networks and some newer AI features.

      Current gen tech P0wned yet again by ancient graybeard magicks, then repackaged as something new to save face.

    • by Tablizer ( 95088 )

      Indeed. I have a book by Melanie Mitchell published in 1999, first edition 1996, called "An Introduction to Genetic Algorithms". Page 36 describes John Koza's work from the early 1990s, which used GAs to evolve Lisp programs*. One example evolves mathematical curve-matching functions, kind of a GA version of regression formulas.

      It's a well-written book. The only notable fuzzy part was the description of a stock (investment) picking system. I suspect intellectual property concerns prevented her from going i

  • by rminsk ( 831757 ) on Tuesday April 14, 2020 @05:03AM (#59944654)
    Holland, J. H., Adaptation in Natural and Artificial Systems, Ann Arbor, MI: University of Michigan Press, 1975.
  • How many iterations would it need to randomly combine/replace code to end up with a playable Doom version?

    • by gweihir ( 88907 )

      Probably a few orders of magnitude more years than this universe has lifetime left ;-)
      The whole thing is a worthless stunt of no practical value. Well, maybe some philosophers find it interesting.

    • "How many iterations would it need to randomly combine/replace code to end up with a playable doom versions?"

      Since it cannot distinguish a cat from a truck right now, don't hold your breath.

    • by sad_ ( 7868 )

      And how many more until it finds out it's more fun to play Doom IRL?

  • Granted, the nets were tiny, but call me when your neural nets can *actually* learn. I mean *while* doing their job! Fast enough to be adaptive. Not while in a learning phase, and frozen otherwise.

    Oh, and call me when you've finally upgraded to *spiking* neural nets, and actual simulations. Not just weight matrices.
    Because then, in 2040, I can tell you I did that 45 (so 15) years ago! :)

    And I hadn't even remotely been the first. Just a teen/student in his room.

    • by gweihir ( 88907 )

      But you cannot have replicated "decades of AI research", at least not the last two! You would have preempted them instead ;-)

      That is unless nothing noteworthy happened in the last 20 years of AI research, which given the BS coming out of the field at the moment, I will most certainly not rule out.

      • That is unless nothing noteworthy happened in the last 20 years of AI research

        AI is a class of engineering techniques, not a science, so there is no reason to expect that 20 years of teaching it to new students would produce anything "noteworthy," and that remains true even if you publish a bunch of papers and call them "research." Or, as in the story, you can not even have actually published a paper yet, and already not have anything noteworthy! lol

      • You could say that the last 15 years of AI research has been "old stuff done with faster computers," but compared to what was going on between 1995 and 2003, that is genius level (I remember one paper from that era that basically devolved to "I made the computer smile at me").

        The biggest actual research gains (beyond raw CPU power) have been related to how the neural network is set up. Should you use a sparse network with lots of layers, what kind of loss function should you use to measure the accuracy of
    • Fast enough to be adaptive. Not while in a learning phase, and frozen otherwise.

      That is equivalent to wanting proof that 1=2 or black = white.

  • by gweihir ( 88907 ) on Tuesday April 14, 2020 @05:40AM (#59944722)

    All they are doing is a bit of more general training. The result is not "evolved" with regards to expressive power, it is just more specialized for the task used to determine fitness.

    • I agree, but for the sake of conversation: so many people get "survival of the fittest" wrong. It does not mean "survival of the fiercest". Time for the deadly robots if they do that. XD
    • All they are doing is a bit of more general training. The result is not "evolved" with regards to expressive power, it is just more specialized for the task used to determine fitness.

      In a framework of induction espoused by Vanderbilt's "Golden Boy", Ken Dodge, who landed more than a few grants from federal sources through the 90s, the term was "robust".

  • Desperate writing trying to keep CS new, hip, and utopian-relevant. To call the method or process Darwinian is stretching it a bit. I really think that CS media needs to take a breather from AI for a while. It's starting to get stale and the misrepresentation is a tad tacky; think "Virtual Reality" in the mid 90's. Just because you use the term Artificial Intelligence over and over doesn't make it so. Also, points lost for trying to tie in relevance through the theory of evolution. One in proven science clingin
  • Rather than artificial intelligence, perhaps start with basic logic for artificial life, then expand the complexity for those models that succeed to a point where interesting patterns emerge. Conway's game has very basic rules that can very quickly become extremely complex and interesting, but why has nobody revisited that work with more modern systems to simulate a more realistic, small (but genuinely intelligent) set of lifeforms? Say, perhaps, just converge a handful of multicell-scale neural ecosystems, see
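
    For reference, the "very basic rules" amount to this: one tick of Conway's Game of Life over a set of live cell coordinates.

    from collections import Counter

    def life_step(live_cells):
        # live_cells: set of (x, y). A dead cell with exactly 3 live neighbours is born;
        # a live cell with 2 or 3 live neighbours survives; everything else dies.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(life_step(glider))   # the glider after one tick
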
    • "work with more modern systems to simulate a more realistic, small (but genuinely intelligent) set of lifeforms" So instead of Artificial Intelligence, you want to first simulate intelligence and then something something. Can you see the problem with that ?
    • I don't know the whole process, but I do know the end result of the calculation: 42.

    • Have a look at Thomas S. Ray. He used a soup of RAM and a bunch of virtual CPUs.... and a simple instruction set with the ability to spawn a new CPU. He found all kinds of analogies to biology, including vira
  • Do you want Skynet?

    Cause, that's how you get Skynet...

  • For some values of "all by itself", lol
    • I can make a baseball fly through the air all by itself! I just throw it, and look! After I let go, it keeps going! On its own!

      • I can make a baseball fly through the air all by itself! I just throw it, and look! After I let go, it keeps going! On its own!

        Which, just as with the AI 'breakthru' here, was already discovered in the late 60s. To wit:

        "This cat can fly across the studio"
        "By herself?"
        "No, I fling her"
        "Yeeeeooowwwwll!!!!"

  • Copies of the top performers are "mutated" by randomly replacing, editing, or deleting some of their code to create slight variations

    Isn't that how most debugging is done?

    • Debugging, and rebugging too!

    • Copies of the top performers are "mutated" by randomly replacing, editing, or deleting some of their code to create slight variations

      Isn't that how most debugging is done?

      Isn't that how they extend the patents on profitable drugs?

    • by gtall ( 79522 )

      If you are debugging using random operations like that, I doubt it is very effective.

  • by belthize ( 990217 ) on Tuesday April 14, 2020 @08:26AM (#59945120)

    Maybe they can use it to evolve automated slashdot editors capable of selecting articles that are relevant both technically and temporally.

  • The game of life evolves on its own. Go play it and move on.
  • I.e., how many pictures of cats do you have that you can train it on & then test this against? Then there are the definitions, e.g.: is a Sumatran tiger a cat?

    Even if the results are not an improvement, this could turn up 'better' in different ways, e.g. faster, uses less memory, ...

  • to secure funding over the next AI winter?
  • Hotdog - Not Hotdog

  • Eventually someone was going to figure out that we could create AI the way nature did. Genetic algorithms working with an environment of neural nets that improve with each generation.

    This is it, guys. After this runs for a bit, it'll be intelligent although its intelligence won't look much like ours.

    And keep the off switch handy. We'll want to evolve one that *really* *really* likes us. Even neutrality won't do here.

  • I think it may be survival of the Fit.
    That may have been more of what Darwin was saying, and yet you always hear it quoted the other way.
