Artificial Intelligence Is Evolving All By Itself (sciencemag.org) 89
sciencehabit shares a report from Science Magazine: Artificial intelligence (AI) is evolving -- literally. Researchers have created software that borrows concepts from Darwinian evolution, including "survival of the fittest," to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI. The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.
In each cycle, the program compares the algorithms' performance against hand-designed algorithms. Copies of the top performers are "mutated" by randomly replacing, editing, or deleting some of its code to create slight variations of the best algorithms. These "children" get added to the population, while older programs get culled. The cycle repeats. In a preprint paper published last month on arXiv, the researchers show the approach can stumble on a number of classic machine learning techniques, including neural networks.
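In rough pseudocode, the loop described above looks something like this (a minimal sketch: the population size, mutation choices, and instruction format are illustrative stand-ins, not details taken from the paper):

import random

OPS = ['add', 'sub', 'mul', 'div']   # basic mathematical operations

def random_program(length=8):
    # A candidate "algorithm" is just a random list of (op, in1, in2, out) instructions.
    return [(random.choice(OPS), random.randrange(8), random.randrange(8), random.randrange(8))
            for _ in range(length)]

def mutate(program):
    # Randomly replace, edit, or delete one instruction, as described above.
    child = list(program)
    i = random.randrange(len(child))
    action = random.choice(['replace', 'edit', 'delete'])
    if action == 'replace':
        child[i] = random_program(1)[0]
    elif action == 'edit':
        op, a, b, out = child[i]
        child[i] = (random.choice(OPS), a, b, out)
    elif len(child) > 1:
        del child[i]
    return child

def evolve(fitness, pop_size=100, generations=1000):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 10]                      # keep copies of the top performers
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]   # older programs are culled
        population = parents + children
    return max(population, key=fitness)

# Toy usage: evolve(lambda p: -len(p)) would simply select for shorter programs.

The fitness function would score each candidate on the toy task (cat vs. truck); everything else is just selection, mutation, and culling.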
Is this new? (Score:5, Insightful)
I was taught about 'genetic programming' as described in the summary during my computer science degree 15 years ago.
What's actually new about this?
Re: Is this new? (Score:1)
Re: (Score:1)
Re:Is this new? (Score:5, Funny)
1985 called, it wants its 'World Of The Future' fluff news article back.
Re: (Score:2)
1985 called, it wants its 'World Of The Future' fluff news article back.
I can has artificial? In the future?
Re: (Score:3, Informative)
Re:Is this new? (Score:5, Informative)
Re: (Score:2)
This is the correct answer and ought to be pinned right under the article. Note that the 1992 book on the subject is a culmination of a decade of research - it wasn't even new 30 years ago.
Re:Is this new? (Score:4)
"The answer is in the abstract of the paper. "
Are we now supposed to actually RTFA?
Re: (Score:1)
Re: (Score:2)
Re: Is this new? (Score:2)
The compiler used incredible amounts of processor time for the shortest functions. It also had a tendency to find short functions that used obscure processor features.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: Is this new? (Score:2)
So yes, you could check, but it would probably break with the first microcode update you get... still probably an interesting exercise for someone's master's degree...
Re:Is this new? (Score:5, Interesting)
No, it is an example of genetic programming, but just because genetic algorithms have been described elsewhere for other problems doesn't mean their application to ML algorithms is easy or straightforward. They've attempted to tackle a number of important challenges and the paper is actually quite interesting:
An early example of a symbolically discovered optimizer is that of Bengio et al. [8], who represent F as a tree: the leaves are the possible inputs to the optimizer (i.e. the xi above) and the nodes are one of {+, −, ×, ÷}. F is then evolved, making this an example of genetic programming [36]. Our search method is similar to genetic programming but we choose to represent the program as a sequence of instructions—like a programmer would type it—rather than a tree ... Both Bengio et al. [8] and Bello et al. [7] assume the existence of a neural network with a forward pass that computes the activations and a backward pass that provides the weight gradients. Thus, the search process can just focus on discovering how to use these activations and gradients to adjust the network’s weights. In contrast, we do not assume the existence of a network. It must therefore be discovered, as must the weights, gradients and update code.
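Concretely, the contrast being drawn is between a tree of operations and a flat list of instructions acting on a small memory. A rough sketch (the tuple-based instruction format is my own illustration, not the paper's):

# Tree representation (Bengio et al.-style genetic programming):
# nodes are operations, leaves are inputs.
tree = ('mul', ('add', 'x0', 'x1'), 'x2')          # computes (x0 + x1) * x2

# Sequence-of-instructions representation, "like a programmer would type it":
program = [
    ('add', 'x0', 'x1', 't0'),   # t0 = x0 + x1
    ('mul', 't0', 'x2', 't1'),   # t1 = t0 * x2
]

def run(program, inputs):
    # Interpret the instruction sequence against a small memory of named values.
    ops = {'add': lambda a, b: a + b, 'sub': lambda a, b: a - b,
           'mul': lambda a, b: a * b, 'div': lambda a, b: a / b}
    mem = dict(inputs)
    for op, a, b, out in program:
        mem[out] = ops[op](mem[a], mem[b])
    return mem

print(run(program, {'x0': 1.0, 'x1': 2.0, 'x2': 3.0})['t1'])   # 9.0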
Re: (Score:2)
I read the abstract, and I don't see anything really new. If the authors are doing something truly new, it isn't very clear.
Re: (Score:2)
Also, I wouldn't even bother calling a "billion monkeys banging on keyboards simulator" intelligent... That's just basic brute forcing of a problem, with perhaps a few heuristics thrown in to improve its chances.
I used this approach after giving up on my homework (Score:3)
> randomly combining mathematical operations
That reminds me of what I did a couple weeks ago after giving up on my cryptography homework. I was to take a number of inputs related in a certain way (basically a TLS public key and an encryption) and knew I needed to find a formula that turned some of the inputs into a function of the other ones (thus cracking the encryption). After many false starts, I realized I was basically trying mathematical operations at random by that point, hoping to luck upon an interesting output.
Re: (Score:3)
The answer is no. If it's an academic integrity violation, you're potentially screwed. If not, you're fine. Either way, there's no point in telling your professor. If you really want to discuss it, bring it up as something you thought of one time.
Re: (Score:1)
Realizing that computers can try random things a lot faster than I can ....
~raymorris
Pseudo-random. Adders arranged as a clock, or a clock arranged from adders, with a state, can only generate numbers that are apparently random. This is explained in manuals for a Zilog processor I read in 1981.
Re: (Score:2)
> randomly combining mathematical operations
That reminds me of what I did a couple weeks ago after giving up on my cryptography homework. I was to take a number of inputs related in a certain way (basically a TLS public key and an encryption) and knew I needed to find a formula that turned some of the inputs into a function of the other ones (thus cracking the encryption). After many false starts, I realized I was basically trying mathematical operations at random by that point, hoping to luck upon an interesting output. Realizing that computers can try random things a lot faster than I can ....
I made a list of "potentially interesting values" - the inputs, 0, 1, -1, 2, etc. Then I made a list of operations - multiplication, modular exponentiation, bitwise inverse, etc. I set my computer running overnight randomly choosing from the potentially interesting values, randomly choosing mathematical operations to perform on them, and checking to see if the result was another "potentially interesting value". After it ran overnight, I used 'sort | uniq -c | sort -n' to list which randomly generated formulas most often gave results that were in the "potentially interesting" list.
This week, I have to crack DSA for the case that the signer is using a low-quality random generator. I *think* I can do the math on this one by hand, but if not I'll use my random formula generator again. There are 8 inputs - the public key, the signature, the message, etc. There are only a few potential operations I might need to use, most importantly modular exponentiation. I may have my script randomly choose two of the inputs and raise one to the power of the other, do a few more random operations like addition and multiplication, and see which randomly generated formulas produce interesting results.
The question is, do I ever tell my professor that rather than figuring out the math myself, I just programmed my computer to try shit at random until it finds a formula that works? Next week's homework, for a different class, is machine learning. I kinda wish I had learned that part first, before taking the encryption course.
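For what it's worth, that whole search fits in a short script. A sketch of the general idea only (the value list, modulus, and operations below are placeholders, not the actual homework inputs):

import random

# Placeholder "potentially interesting values": the real ones were the crypto
# inputs plus small constants like 0, 1, -1, 2.
INTERESTING = [0, 1, 2, -1, 17, 65537]
MOD = 2**16 + 1                      # arbitrary modulus for the modexp operation

OPS = {
    'add':    lambda a, b: a + b,
    'mul':    lambda a, b: a * b,
    'modexp': lambda a, b: pow(a, abs(b), MOD),
}

def random_formula(depth=3):
    # Build a random expression over the interesting values, recording its recipe.
    value = random.choice(INTERESTING)
    recipe = [str(value)]
    for _ in range(depth):
        name, other = random.choice(list(OPS)), random.choice(INTERESTING)
        value = OPS[name](value, other)
        recipe.append(f'{name} {other}')
    return value, ' | '.join(recipe)

hits = {}
for _ in range(200_000):
    value, recipe = random_formula()
    if value in INTERESTING:                     # landed back on an interesting value
        hits[recipe] = hits.get(recipe, 0) + 1

# The moral equivalent of `sort | uniq -c | sort -n`: most frequent recipes last.
for recipe, count in sorted(hits.items(), key=lambda kv: kv[1]):
    print(count, recipe)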
When I asked a similar question in a CS class in '98, I was told that of course a clever programmer could find quicker ways to get many of the answers, but the questions aren't presented to the student because the answers are unknown; they're presented because somebody thought you would benefit from learning some method that can be practiced via these questions.
So it is irrelevant whether you committed a technical ethical violation or not; you clearly cheated yourself out of receiving the intended practice.
Re: (Score:2)
That's an interesting perspective, thanks.
> the student because of the answers being unknown; they're presented because somebody thought you would benefit from learning some method that can be practiced via these questions.
I'm actually a bit unclear just what we're supposed to be learning from these exercises, because there is no method to be practiced. They are pretty much brainteasers - try different things until you stumble upon the clever trick for this one. Basically riddles. I don't see that I'm learning much from them.
Re: (Score:2)
Cracking DSA (Score:2)
You can certainly read up on it. Start by finding k, based on the characteristics of the PRG. From there, with a message and its signature it's straightforward to calculate the private key, and you own it forever.
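For completeness, once k is known the algebra is one line. A sketch with assumed variable names (r and s are the signature, h the message hash, q the subgroup order):

def recover_dsa_private_key(r, s, h, k, q):
    # DSA signing computes s = k^-1 * (h + x*r) mod q,
    # so with k known:      x = (s*k - h) * r^-1 mod q
    return ((s * k - h) * pow(r, -1, q)) % q

If the weak generator only produces a handful of possible k values, you can simply try each candidate and keep the x that reproduces the public key.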
Re: (Score:2)
Re:Is this new? (Score:4, Interesting)
Re: (Score:3)
how that was done back in the '70s
Exactly. I seem to recall a chapter on this stuff in The Handbook of Artificial Intelligence (Barr and Feigenbaum, c. 1981). Of course, computers at that time tended to be mainframes and minis, and slower than my phone, so the problems they tackled tended to be trivially simple.
Re: (Score:1)
What's actually new about this?
According to the headline, "Artificial Intelligence" actually exists, which is pretty fucking cool if you ask me, considering it still doesn't.
Re: (Score:3)
I was taught about 'genetic programming' as described in the summary during my computer science degree 15 years ago. What's actually new about this?
Nothing other than actually getting results. Just like it's super easy to build a neural network if you don't need it to be useful.
Here's a brief history of meta-programming between a traditional programmer (TP) and meta-programmer (MP):
TP: We need to find a model and the parameter values for it.
MP: I can write an algorithm to find the optimal parameter values.
TP: We need to find a model and the parameters.
MP: I can write an algorithm to find the hidden parameters.
TP: Okay, but we still need to find the model.
Re: (Score:1)
Re: (Score:2)
Honestly, outside of computing power, what is new about any of the AI and NN material coming out? Some of the frameworks for coordinating and synchronizing distributed networks are pretty interesting, but I would also argue those are advances in distributed computing being applied to AI.
If anything it feels like people have rationalized away a true human or better level deep AI as a goal. Given that we ultimately define intelligence as thinking in a manner similar to ourselves and our highest intelligence is
Re: (Score:3)
Indeed, genetic programming is not new. Where did they claim to have invented genetic programming? The implementation of old concepts in new problem spaces can still be challenging, and that is exactly what they are describing:
Existing AutoML search spaces have been constructed to be dense with good solutions ... AutoML-Zero is different: the space is so generic that it ends up being quite sparse. The framework we propose represents ML algorithms as computer programs comprised of three component functions that predict and learn from one example at a time. The instructions in these functions apply basic mathematical operations on a small memory. The operation and memory addresses used by each instruction are free parameters in the search space, as is the size of the component functions. While this reduces expert design, the consequent sparsity means that RS cannot make enough progress; e.g. good algorithms to learn even a trivial task can be as rare as 1 in 10^12 ... Perhaps surprisingly, evolutionary methods can find solutions in the AutoML-Zero search space despite its enormous size and sparsity. By randomly modifying the programs and periodically selecting the best performing ones on given tasks/datasets, we discover reasonable algorithms.
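A loose sketch of that "three component functions" setup (the class layout, slot indices, and the particular update rule are my guesses at the general shape, not code from the paper):

import numpy as np

class EvolvedAlgorithm:
    # The program acts on a small memory of scalar and vector slots;
    # the three component functions read and write those slots.
    def __init__(self, dim):
        self.s = np.zeros(8)          # scalar addresses
        self.v = np.zeros((8, dim))   # vector addresses

    def setup(self):
        # Evolved initialization instructions, e.g. setting a step-size scalar.
        self.s[1] = 0.01

    def predict(self, x):
        # Evolved prediction instructions; here they amount to a dot product.
        self.v[0] = x
        self.s[0] = self.v[0] @ self.v[1]
        return self.s[0]

    def learn(self, x, label):
        # Evolved update instructions, run on one example at a time; this particular
        # sequence happens to be a linear-model SGD step, the sort of thing the
        # evolutionary search is reported to rediscover.
        error = label - self.predict(x)
        self.v[1] += self.s[1] * error * self.v[0]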
Re: (Score:2)
I guess the new part is that decades-old genetic programming has managed to invent neural networks and some newer AI features.
Current gen tech P0wned yet again by ancient graybeard magicks, then repackaged as something new to save face.
Re: (Score:2)
Indeed. I have a book by Melanie Mitchell published in 1999, first edition 1996, called "An Introduction to Genetic Algorithms". Page 36 describes John Koza's work from the early 1990's which used GA's to evolve Lisp programs*. One example evolves mathematical curve-matching functions, kind of a GA version of regression formulas.
It's a well-written book. The only notable fuzzy part was the description of a stock (investment) picking system. I suspect intellectual property concerns prevented her from going into detail.
Holland developed genetic systems back in 1975 (Score:5, Informative)
Re: (Score:2)
beat me to it. Yes, exactly. Evolution in software is an incredibly old dupe. Even for /.
Re:Holland developed genetic systems back in 1975 (Score:4, Interesting)
It's not particularly efficient either. Just like ML itself, you can easily get trapped in local extrema and then your monkeys bang out Twilight instead of Hamlet.
Re: (Score:3)
and then your monkeys bang out Twilight instead of Hamlet.
That explains a lot.
Re: (Score:1)
Re: Holland developed genetic systems back in 1975 (Score:2)
Don't forget: on a Raspberry Pi
Can it program DOOM for me? (Score:2)
How many iterations would it need to randomly combine/replace code to end up with a playable Doom version?
Re: (Score:2)
Probably a few orders of magnitude more years than this universe has lifetime left ;-)
The whole thing is a worthless stunt of no practical value. Well, maybe some philosophers find it interesting.
Re: (Score:2)
"How many iterations would it need to randomly combine/replace code to end up with a playable doom versions?"
Since it cannot distinguish a cat from a truck right now, don't hold your breath.
Re: (Score:2)
And how many more until it finds out it's more fun to play doom irl?
Oh please. I was doing this two decades ago. (Score:2, Informative)
Granted, the nets were tiny, but call me when your neural nets can *actually* learn. I mean *while* doing their job! Fast enough to be adaptive. Not while in a learning phase, and frozen otherwise.
Oh, and call me when you've finally upgraded to *spiking* neural nets, and actual simulations. Not just weight matrices. :)
Because then, in 2040, I can tell you I did that 45 (so 15) years ago!
And I hadn't even remotely been the first. Just a teen/student in his room.
Re: (Score:3)
But you cannot have replicated "decades of AI research", at least not the last two! You would have preempted them instead ;-)
That is unless nothing noteworthy happened in the last 20 years of AI research, which given the BS coming out of the field at the moment, I will most certainly not rule out.
Re: (Score:1)
That is unless nothing noteworthy happened in the last 20 years of AI research
AI is a class of engineering techniques, not a science, so there is no reason to expect that 20 years of teaching it to new students would produce anything "noteworthy," and that remains true even if you publish a bunch of papers and call them "research." Or, as in the story, you can not even have actually published a paper yet, and already not have anything noteworthy! lol
Re: (Score:2)
The biggest actual research gains (beyond raw CPU power) have been related to how the neural network is set up. Should you use a sparse network with lots of layers, what kind of loss function should you use to measure the accuracy of the results, and so on.
Re: (Score:2)
That is equivalent to wanting proof that 1=2 or black = white.
Re: (Score:2)
This is, of course, nonsense (Score:3, Informative)
All they are doing is a bit of more general training. The result is not "evolved" with regards to expressive power, it is just more specialized for the task used to determine fitness.
Re: (Score:1)
Re: (Score:1)
All they are doing is a bit of more general training. The result is not "evolved" with regards to expressive power, it is just more specialized for the task used to determine fitness.
In a framework of induction espoused by Vanderbilt's "Golden Boy", Ken Dodge, who landed more than a few grants from federal sources through the 90s, the term was "robust".
More fluff (Score:1)
Conway's game? (Score:1)
Re: (Score:1)
Re: (Score:2)
I don't know the whole process, but I do know the end result of the calculation: 42.
Re: Conway's game? (Score:1)
Skynet (Score:1)
Do you want Skynet?
Cause, that's how you get Skynet...
Re: (Score:2)
And so it begins...
Uh (Score:2)
Re: (Score:2)
I can make a baseball fly through the air all by itself! I just throw it, and look! After I let go, it keeps going! On its own!
Re: (Score:2)
I can make a baseball fly through the air all by itself! I just throw it, and look! After I let go, it keeps going! On its own!
Which, just as with the AI 'breakthru' here, was already discovered in the late 60s. To wit:
"This cat can fly across the studio"
"By herself?"
"No, I fling her"
"Yeeeeooowwwwll!!!!"
Learning from the best? (Score:2)
Copies of the top performers are "mutated" by randomly replacing, editing, or deleting some of its code to create slight variations
Isn't that how most debugging is done?
Re: (Score:2)
Debugging, and rebugging too!
Re: (Score:2)
Copies of the top performers are "mutated" by randomly replacing, editing, or deleting some of its code to create slight variations
Isn't that how most debugging is done?
Isn't that how they extend the patents on profitable drugs?
Re: (Score:2)
If you are debugging using random operations like that, I doubt it is very effective.
Practical application (Score:4, Funny)
Maybe they can use it to evolve automated slashdot editors capable of selecting articles that are relevant both technically and temporally.
Re: (Score:2)
Game of life (Score:2)
This is only as good as the test data (Score:2)
I.e., how many pictures of cats do you have that you can train it with and then test this against? Then there are the definitions, e.g.: is a Sumatran tiger a cat?
Even if the results are not an improvement, this could turn up 'better' in different ways, e.g. faster, uses less memory, ...
Will this AI be smart enough (Score:1)
Obvious (Score:2)
Hotdog - Not Hotdog
Obvious and inevitable (Score:2)
Eventually someone was going to figure out that we could create AI the way nature did. Genetic algorithms working with an environment of neural nets that improve with each generation.
This is it, guys. After this runs for a bit, it'll be intelligent although its intelligence won't look much like ours.
And keep the off switch handy. We'll want to evolve one that *really* *really* likes us. Even neutrality won't do here.
Survival of the fittest? or Survival of the Fit. (Score:2)
I think it may be survival of the Fit.
That may have been more of what Darwin was saying, and yet you always hear it quoted the other way.