CERN Tests First Artificial Retina Capable of Looking For High Energy Particles
KentuckyFC writes: Pattern recognition is one of the few areas where humans regularly outperform even the most powerful computers. Our extraordinary ability is a result of the way our bodies process visual information. But surprisingly, our brains only do part of the work. The most basic pattern recognition—edge detection, line detection and the detection of certain shapes—is performed by the complex circuitry of neurons in the retina. Now particle physicists are copying this trick to hunt for new particles. A team at CERN has built and tested an artificial retina capable of identifying particle tracks in the debris from particle collisions. The retina can do this at the same rate the LHC smashes particles together: about 800 million collisions per second. In other words, it can sift through the data in real time. The team says the retina outperforms any other particle-detecting device by a factor of 400v.
Load me up! (Score:1)
What about the US of A? (Score:2, Funny)
What are US scientists up to? Apart from teaching creationism, of course. lol
Re: (Score:2)
A few are working at CERN, the best place in the world to analyze particle collisions. :p
You know, CERN? That ultra cool science experiment that is truly a collaboration of a good many nations? The one that has no interest in who the current superpower is, because it's devoted to pure science?
Yeah, that one.
Re: (Score:2)
Quite a few US universities are heavily involved in CERN. And European universities. And Russian. And, to an increasing extent, Chinese. And also many others.
The people "teaching" (implying that there is something worthwhile to learn) creationism are not scientists - they are coming up with neither new data nor reasonable interpretations. Thus no US scientists are "teaching" that steaming pile of poo.
As a European, it would be great if /. would stop descending into the "USA sucks" vs "Muh freeduuum units
400v? (Score:5, Insightful)
Re: (Score:1)
Re: (Score:1)
if only I had mod points for you...
It's been so long since I've posted. It's the thought that counts!
Re:400v? (Score:5, Informative)
For what it's worth, this is an editorial failure - the linked paper properly cites a factor of "400" - no V anywhere.
Re: (Score:2)
That the increase in effectiveness is a factor of 400 is impressive enough by itself.
But 400V times the productivity sounds more imposing.
Re: (Score:1)
What the hell is a factor of 400 volts?
It's a shockingly high factor that electrified the scientists and that will surely galvanize the search for unknown particles to new life.
Re: (Score:2)
Re: (Score:2)
But (Score:1)
will this new retina help me find my missing socks?
Is this a step towards X-ray vision? Wouldn't the rest of the eye have to be modified too (lens, cornea, etc.)?
Re: (Score:2)
Heh.
I don't often mod ACs, but when I do, I mod them funny.
Re: (Score:2)
http://en.wikipedia.org/wiki/A... [wikipedia.org]
In the retina? (Score:2)
Really? Not in the visual cortex of the brain? It's actually done in the retina itself?
Re: (Score:2)
Really? Not in the visual cortex of the brain? It's actually done in the retina itself?
That's the point they were making, yes.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
There are a variety of processes that are applied to visual data before it comes to your awareness; the retina is the first level of screening. The tissue of the retina is not that different from neural tissue, and it is perfectly capable of comparing things and making decisions. A video camera will look at every pixel in its range equally and send off all its data uniformly. A living visual system is actively working to prune out anything y
Re: (Score:2)
Re: (Score:2)
Thanks for all the replies, folks. That's fascinating.
oblig (Score:1)
I see what they did there.
Sorry.
Summary (Score:5, Informative)
So, to summarize the paper
http://arxiv.org/pdf/1409.1565... [arxiv.org] :
They have developed an algorithm for quickly giving a rough interpretation of the raw data stream coming out from the detector, i.e. converting the information that "value of pixel A = 12, value of pixel B = 43, ..." into useful physics data like "a particle with momentum vector P and charge Q was probably created 2 m from the collision point". This algorithm is special in that it can be implemented on an FPGA, and is somehow inspired by the retina of our eyes (a rough sketch of the idea follows this post). Because it can run on an FPGA, it has the potential to be much faster, and can handle much larger data fluxes than current algorithms.
This is needed, because in a few years, we will upgrade the LHC such that it produces many more collisions per second, i.e. the data rates will be much higher. We do this to get more statistics, which may uncover rare physics processes (such as was done for the Higgs boson). Not all of this data deluge can be written to disk (or even downloaded from the detector hardware), so we use a trigger which decides which collisions are interesting enough to read out and store. This trigger works by downloading *part* of the data to a computing cluster that sits in the next room (yes, it does run on Linux), quickly reconstructing the event, and sending the "READ" signal to the rest of the detector if it fits certain criteria indicating that (for example) a heavy particle was created. If the data rate goes up, so must the processing speed, or else we will run out of buffers on the detector.
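To make this concrete, here is a minimal, self-contained sketch of a retina-style track search over a grid of (m, q) cells, in plain Python. This is an illustration of the idea, not the paper's code: the grid ranges, the sigma value and the toy hits are made up, and the real system evaluates the cells in parallel on an FPGA rather than in a loop.

import math

def cell_response(m, q, hits, sigma=0.05):
    """Response of one (m, q) cell: each hit contributes a Gaussian weight
    based on its residual from the candidate line y = m*z + q."""
    return sum(math.exp(-((y - (m * z + q)) ** 2) / (2 * sigma ** 2))
               for z, y in hits)

def find_best_track(hits, m_range=(-1.0, 1.0), q_range=(-1.0, 1.0), n=50):
    """Scan an n x n grid of (m, q) cells and return the cell with the
    largest response, i.e. the most likely track parameters."""
    best = (None, None, -1.0)
    for i in range(n):
        m = m_range[0] + (m_range[1] - m_range[0]) * i / (n - 1)
        for j in range(n):
            q = q_range[0] + (q_range[1] - q_range[0]) * j / (n - 1)
            r = cell_response(m, q, hits)
            if r > best[2]:
                best = (m, q, r)
    return best

# Toy event: five hits from a track with m = 0.3, q = 0.1, plus one noise hit.
hits = [(z, 0.3 * z + 0.1) for z in (0.2, 0.4, 0.6, 0.8, 1.0)] + [(0.5, -0.4)]
print(find_best_track(hits))  # roughly (0.31, 0.10, ~5); the noise hit contributes almost nothing

On the FPGA, each cell's response calculation is effectively its own piece of logic running concurrently with all the others, which is where the speed comes from.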
Summary is completely misleading (Score:3)
Reading the abstract, it is clear that what they did was image analysis using an algorithm (albeit on an FPGA) modeled on what happens in the retina. Other than the speed advantage, there is nothing special about this that makes it an artificial retina. If you take a picture with a cellphone and do edge detection using software, is that an artificial retina? I would argue no more or less than what is described here.
TFS makes it sound like the image detectors are actually doing edge detection like the ret
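For the sake of the analogy, this is roughly what "edge detection using software" on an already-captured image means: a toy Sobel filter in Python/NumPy (the image and filter choice are mine, purely illustrative), with the sensor itself doing nothing retina-like.

import numpy as np

def sobel_edges(img):
    """Return the gradient-magnitude image of a 2D grayscale array."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)   # horizontal gradient
            gy[i, j] = np.sum(ky * patch)   # vertical gradient
    return np.hypot(gx, gy)

# Toy image: a bright square on a dark background -> strong response along its border.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(sobel_edges(img).round(1))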
Re: (Score:2)
"The image sensors (CCD or CMOS or whatever) is doing no such thing."
What if you mounted the FPGA right on the back of the CCD? Say they power up together as one unit. It outputs direction, velocity and charge, instead of video screens. I think that might count as an artificial retina.
Then, when I consider that they just have longer wires and did a rather bad job of mounting, the line blurs, and maybe this does count...
Re: (Score:2)
Yes, I guess there is a spectrum of implementations of retina-like processing. On one side, there is the retina and on the other side, a digital camera followed by Photoshop. This is being done algorithmically in FPGA so is closer to the Photoshop end of the spectrum.
There are silicon models of retinal processing. See
http://authors.library.caltech... [caltech.edu]
And there is a book by Carver Mead (I think he was the thesis advisor for above dissertation) called "Analog VLSI and Neural Systems" with a chapter on in silic
Re: (Score:2)
The algorithm combines data from several sensors.
Re: (Score:2)
you mean like this?
http://www.inilabs.com/product... [inilabs.com]
Re: (Score:2)
Part of the method may very well be to put the clustering algorithm directly onto the same chip as is doing the digital readout of the sensor, i.e. bump-bonded on the back of the sensor, directly providing estimated (x,y) coordinates of the particle hits instead of raw pixel data with zero-suppression as is traditionally done.
However, this is not what this paper is discussing. It discusses mapping the parameter space (m,q) of the gradient and intercept of a particle track y=m*z+q into some kind of matri
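For concreteness, and tying this to the sketch posted further up: that matrix is a grid of responses over the (m, q) cells, something like R[i][j] = sum over hits k of exp( -(y_k - (m_i*z_k + q_j))^2 / (2*sigma^2) ), with track candidates taken as the local maxima of R. The exact weighting and coordinate conventions in the paper may differ; this is just the general shape of the retina transform, with sigma controlling how far a hit can sit from a candidate line and still contribute.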
Re: (Score:2)
"This algorithm is special in that it can be implemented on an FPGA"
Question: Are current FPGAs faster than 10-15 year old CPUs?
I'm thinking G4s or Athlons or something that's old enough to be easy and cheap to make today at any old fab, yet new enough that the dies and equipment are still around, and it can be ordered.
Take your final FPGA and burn chips from it (I know they do that). Run a hundred, and CERN might pay 20 grand a chip if they're good enough. I made that number up; I don't have a clue, but t
Re: (Score:2)
Question: Are current FPGAs faster than 10-15 year old CPUs?
Umm, I think you have things backwards. For certain tasks, FPGAs are phenomenally faster than any general purpose CPU. The correct question should be:
Are current FPGAs faster than CPUs 10-15 years from now?
Re: (Score:2)
> and they probably buy them fpga's and boards in industrial quantities anyway
Njaaa. Define "industrial quantities". Mostly I've seen people use a few 10s of them, not 100s or 1000s.
The really expensive part about ASICs is making the masks for lithography etc., not how many chips you make. Thus you don't want to make a new chip unless you *really* need to.
Re: (Score:2)
FPGAs are very different beasts from normal CPUs - as far as I understand, they are very well suited to doing relatively simple tasks ridiculously fast, and one chip can treat tons of data in parallel. However, they do not do so well on really complex algorithms, algorithms requiring lots of fast memory and branches, and they are harder to program than CPUs.
In this case, I would think each cell in the (m,q) parameter space is handled by one "block" of the FPGA, and you then feed all the blocks the data
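A rough software mock-up of that guessed-at structure: one independent engine per (m, q) cell, each fed the same stream of hits and keeping its own running response, the way per-cell logic blocks on an FPGA would all update on every clock. The cell spacing, sigma and toy hits below are invented; in Python the "parallel" updates are of course just a loop.

import math

class CellEngine:
    """One (m, q) cell: accumulates a Gaussian-weighted response per hit."""
    def __init__(self, m, q, sigma=0.05):
        self.m, self.q, self.sigma = m, q, sigma
        self.response = 0.0
    def accept(self, z, y):
        # A hit arrives; update this cell's accumulated response.
        res = y - (self.m * z + self.q)
        self.response += math.exp(-res * res / (2 * self.sigma ** 2))

# Bank of engines covering m and q in [-1, 1] in steps of 0.1.
engines = [CellEngine(m / 10.0, q / 10.0)
           for m in range(-10, 11) for q in range(-10, 11)]

# Stream hits from a track with m = 0.3, q = 0.1 through every engine.
for z, y in [(0.2, 0.16), (0.4, 0.22), (0.6, 0.28), (0.8, 0.34)]:
    for e in engines:        # on the FPGA these updates happen concurrently
        e.accept(z, y)

best = max(engines, key=lambda e: e.response)
print(best.m, best.q, round(best.response, 2))   # -> 0.3 0.1 4.0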
Re: (Score:2)
"When you "burn" a chip from a FPGA, what it means is that you take the VHDL (etc) code and compile it into a format which you can use to produce specialized chips, instead of a format for programming an FPGA."
Yes, and the problem now, and certainly in the future, is data overwhelming the computational resources. And as an above poster noted, the big cost in a custom ASIC is laying it out.
I had assumed that the gate layout (and thus the logic programming) in any FPGA was still far simpler than last decade's
Re: (Score:2)
They give away Athlons for 10 bucks nowadays. If you could burn your custom ASIC into that, even if you wasted most of what used to be the Athlon, it would be fast as shit running your FPGA program natively. Once you paid to lay it out, it seems to me it might be cheap as shit to run a few thousand of them, and saturate the area with these detectors. Which feed already vastly condensed data that we would be capable of capturing.
That is just not how it works. You can't convert an athlon to a custom ASIC. The part that is the Athlon is the hardware.
To make a custom ASIC you need to make different hardware. That means making masks (cost a few million $), testing, making new masks, testing, running a batch, testing, testing testing.
With this batch size it isn't really interesting unless they need the additional speed ASICs bring.
Re: (Score:2)
FPGAs and CPUs are different enough that it is hard to compare the speed. For some tasks FPGAs are way faster; for most tasks CPUs are way faster.
Think of the FPGA as a Hummer and the CPU as a Ferrari. Most driving is done on roads, where the Ferrari is faster. However, in rough country I would bet on the Hummer.
Take your final FPGA and burn chips from it (I know they do that). Run a hundred, and CERN might pay 20 grand a chip if they're good enough. I made that number up; I don't have a clue, but that's where I'm going with my question.
Converting FPGA programming to chips means you need to invest millions to produce masks. You ain't gonna do that for a few hundred if you can avoid it.
Those chips are called ASICs. They are usual
Re: (Score:2)
Oh, and 2 m should have been 2 um. Slashdot ate my alt-gr+m = \mu...
Bladerunner (Score:2)
Hidden content (Score:2)
Artificial eyes (Score:2)
Does anybody know how the bionic eyes which have been tested do this? Do they attempt to send their entire data stream, or do they know to do this part in silicon?