Science

CERN Tests First Artificial Retina Capable of Looking For High Energy Particles 60

KentuckyFC writes: Pattern recognition is one of the few areas where humans regularly outperform even the most powerful computers. Our extraordinary ability is a result of the way our bodies process visual information. But surprisingly, our brains only do part of the work. The most basic pattern recognition—edge detection, line detection and the detection of certain shapes—is performed by the complex circuitry of neurons in the retina. Now particle physicists are copying this trick to hunt for new particles. A team at CERN has built and tested an artificial retina capable of identifying particle tracks in the debris from particle collisions. The retina can do this at the same rate the LHC smashes particles together: about 800 million collisions per second. In other words, it can sift through the data in real time. The team says the retina outperforms any other particle-detecting device by a factor of 400v.
This discussion has been archived. No new comments can be posted.

  • Can it teach me kung fu?
  • by Anonymous Coward

    What are US scientists up to? Apart from teaching creationism, of course. lol

    • A few are working at CERN, the best place in the world to analyze particle collisions. :p

      You know, CERN? That ultra cool science experiment that is truly a collaboration of a good many nations? The one that has no interest in who the current superpower is, because it's devoted to pure science?

      Yeah, that one.

    • by kyrsjo ( 2420192 )

      Quite a few US universities are heavily involved in CERN. And European universities. And Russian ones. And, to an increasing extent, Chinese. And also many others.

      The people "teaching" (implying that there is something worthwhile to learn) creationism, are not scientists - they are coming up with neither new data or reasonable interpretations. So thus no US scientists are "teaching" that steaming pile of poo.

      As a European, it would be great if /. would stop descending into the "USA sucks" vs "Muh freeduuum units" flamewars.

  • 400v? (Score:5, Insightful)

    by Slagothor ( 1156549 ) on Friday September 12, 2014 @03:55PM (#47893231)
    "The team says the retina outperforms any other particle-detecting device by a factor of 400v." 400v! What the hell is a factor of 400 volts?
    • if only I had mod points for you...
      • if only I had mod points for you...

        It's been so long since I've posted. It's the thought that counts!

    • Re:400v? (Score:5, Informative)

      by Fwipp ( 1473271 ) on Friday September 12, 2014 @04:11PM (#47893351)

      For what it's worth, this is an editorial failure - the linked paper properly cites a factor of "400" - no V anywhere.

      • There are times when you can cool something up with the proper application of a crafty suffix at the end of an alphanumeric description, like the two-eighty zee that Datsun/Nissan used to sell.

        That the increase in effectiveness is a factor of 400 is impressive enough by itself.

        But 400V times the productivity sounds more imposing.

    • What the hell is a factor of 400 volts?

      It's a shockingly high factor that electrified the scientists and that will surely galvanize the search for unknown particles to new life.

      • by VAXcat ( 674775 )
        I can't stand this kind of disrespectful joking around, with puns no less. I'm positive that we can run the negative energy displayed by your post to ground.
  • by rossdee ( 243626 )

    Will this new retina help me find my missing socks?

    Is this a step towards X-ray vision? Wouldn't the rest of the eye have to be modified too (lens, cornea, etc.)?

  • is performed by the complex circuitry of neurons in the retina.

    Really? Not in the visual cortex of the brain? It's actually done in the retina itself?

    • is performed by the complex circuitry of neurons in the retina.

      Really? Not in the visual cortex of the brain? It's actually done in the retina itself?

      That's the point they were making, yes.

    • Comment removed based on user account deletion
    • Yes, yes it is. The retina does the edge detection and detects changes in intensity. What the brain gets is not full streaming video; it's more like lossy, highly compressed data. The brain fills in all the missing data as needed.
    • by Ken McE ( 599217 )
      volpe (58112): Really?... It's actually done in the retina itself?

      There are a variety of processes that are applied to visual data before it reaches your awareness; the retina is the first level of screening. The tissue of the retina is not that different from neural tissue: it is perfectly capable of comparing things and making decisions. A video camera will look at every pixel in its range equally and send off all its data uniformly. A living visual system is actively working to prune out anything y
    • Yep, the eye is basically a part of the brain.
    • by volpe ( 58112 )

      Thanks for all the replies, folks. That's fascinating.

  • I see what they did there.

    Sorry.

  • Summary (Score:5, Informative)

    by kyrsjo ( 2420192 ) on Friday September 12, 2014 @04:30PM (#47893457)

    So, to summarize the paper
    http://arxiv.org/pdf/1409.1565... [arxiv.org] :

    They have developed an algorithm for quickly giving a rough interpretation of the raw data stream coming out of the detector, i.e. converting the information that "value of pixel A = 12, value of pixel B = 43, ..." into useful physics data like "a particle with momentum vector P and charge Q was probably created 2 m from the collision point". This algorithm is special in that it can be implemented on an FPGA, and is somehow inspired by the retina of our eyes. Because it can run on an FPGA, it has the potential to be much faster, and to handle much larger data fluxes, than current algorithms.
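
    To make that concrete, here is a minimal toy sketch of a retina-style track finder in plain Python (the grid ranges, sigma, and hits are made up for illustration; the real thing runs as FPGA logic at far higher rates). Discretize the track parameters (m, q) of y = m*z + q into cells, let each cell accumulate a Gaussian-weighted "response" from every hit, and the strongest cell gives the track parameters:

    import numpy as np

    def retina_response(hits, m_grid, q_grid, sigma=0.1):
        """Response of each (m, q) cell to a set of (z, y) hits."""
        response = np.zeros((len(m_grid), len(q_grid)))
        for i, m in enumerate(m_grid):
            for j, q in enumerate(q_grid):
                # Residual of every hit from the track y = m*z + q
                d = hits[:, 1] - (m * hits[:, 0] + q)
                response[i, j] = np.exp(-d**2 / (2 * sigma**2)).sum()
        return response

    # Toy event: hits along a track with m = 0.5, q = 1.0, plus noise
    rng = np.random.default_rng(0)
    z = np.linspace(0, 10, 8)
    track = np.column_stack([z, 0.5 * z + 1.0 + rng.normal(0, 0.05, z.size)])
    hits = np.vstack([track, rng.uniform([0, -5], [10, 10], (5, 2))])

    m_grid, q_grid = np.linspace(-1, 1, 101), np.linspace(-5, 5, 101)
    R = retina_response(hits, m_grid, q_grid)
    i, j = np.unravel_index(R.argmax(), R.shape)
    print(m_grid[i], q_grid[j])   # prints roughly 0.5 and 1.0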

    This is needed, because in a few years, we will upgrade the LHC such that it produces many more collisions per second, i.e. the data rates will be much higher. We do this to get more statistics, which may uncover rare physics processes (such as was done for the Higgs boson). Not all of this data deluge can be written to disk (or even downloaded from the detector hardware), so we use a trigger which decides which collisions are interesting enough to read out and store. This trigger works by downloading *part* of the data to a computing cluster that sits in the next room (yes, it does run on Linux), quickly reconstructing the event, and sending the "READ" signal to the rest of the detector if it fits certain criteria indicating that (for example) a heavy particle was created. If the data rate goes up, so must the processing speed, or else we will run out of buffers on the detector.
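
    Schematically, the trigger decision is something like this (a toy Python stand-in with invented names and an invented 20 GeV cut; the real trigger is a multi-level hardware/software system):

    def quick_reconstruct(partial_data):
        """Stand-in for the fast partial reconstruction on the trigger farm."""
        return max(partial_data)   # e.g. the highest energy deposit seen

    def trigger(partial_data, threshold_gev=20.0):
        """True: send READ, store the full event. False: let it be discarded."""
        return quick_reconstruct(partial_data) > threshold_gev

    events = [[3.2, 7.1], [45.0, 12.3], [1.1]]   # toy energy deposits per event
    print(sum(trigger(e) for e in events), "of", len(events), "events kept")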

    • Reading the abstract, it is clear that what they did was image analysis using an algorithm (albeit in an FPGA) modeled on what happens in the retina. Other than the speed advantage, there is nothing special about this that makes it an artificial retina. If you take a picture with a cellphone and do edge detection using software, is that an artificial retina? I would argue no more or less than what is described here.

      TFS makes it sound like the image detectors are actually doing edge detection like the retina does. The image sensors (CCD or CMOS or whatever) are doing no such thing.
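
      The cellphone analogy in code, for what it's worth: a plain software edge detector run after the sensor (an ordinary 3x3 Sobel filter on a toy image; nothing here is specific to the paper), i.e. the "camera plus software" end of things:

      import numpy as np

      def sobel_edges(img):
          kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
          ky = kx.T   # vertical-gradient kernel
          h, w = img.shape
          out = np.zeros((h - 2, w - 2))
          for y in range(h - 2):
              for x in range(w - 2):
                  patch = img[y:y + 3, x:x + 3]
                  out[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
          return out

      img = np.zeros((8, 8))
      img[:, 4:] = 1.0                   # a vertical step edge
      print(sobel_edges(img).round(1))   # strong response along the edge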

      • "The image sensors (CCD or CMOS or whatever) is doing no such thing."

        What if you mounted the FPGA right on the back of the CCD? Say they power up together as one unit. It outputs direction, velocity and charge, instead of a video stream. I think that might count as an artificial retina.

        Then, when I consider that they just have longer wires and are very bad at mounting, the line blurs, and maybe this does count...

        • Yes, I guess there is a spectrum of implementations of retina-like processing. On one side there is the retina, and on the other a digital camera followed by Photoshop. This is being done algorithmically in an FPGA, so it is closer to the Photoshop end of the spectrum.

          There are silicon models of retinal processing. See
          http://authors.library.caltech... [caltech.edu]
          And there is a book by Carver Mead (I think he was the thesis advisor for the above dissertation) called "Analog VLSI and Neural Systems" with a chapter on the silicon retina.

        • by kyrsjo ( 2420192 )

          The algorithm combines data from several sensors.

        • You mean like this?
          http://www.inilabs.com/product... [inilabs.com]

      • by kyrsjo ( 2420192 )

        Part of the method may very well be to put the clustering algorithm directly onto the same chip that does the digital readout of the sensor, i.e. bump-bonded on the back of the sensor, directly providing estimated (x,y) coordinates of the particle hits instead of raw pixel data with zero-suppression, as is traditionally done (a toy sketch of this idea follows below).

        However, this is not what this paper is discussing. It discusses mapping the parameter space (m,q) of the gradient and intercept of a particle track y = m*z + q into some kind of matrix.
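
        To illustrate the clustering idea from the first paragraph, a toy Python sketch (the 4-connected flood fill and the pixel list are invented for illustration) of turning zero-suppressed raw pixels into one charge-weighted centroid per cluster:

        import numpy as np

        def clusters(pixels):
            """pixels: {(col, row): charge}. Yields a centroid per cluster."""
            todo = set(pixels)
            while todo:
                stack, cluster = [todo.pop()], []
                while stack:
                    p = stack.pop()
                    cluster.append(p)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        n = (p[0] + dx, p[1] + dy)
                        if n in todo:   # a 4-connected neighbour fired too
                            todo.remove(n)
                            stack.append(n)
                q = np.array([pixels[p] for p in cluster], float)
                xy = np.array(cluster, float)
                yield tuple((xy * q[:, None]).sum(0) / q.sum())

        hits = {(3, 3): 10, (3, 4): 30, (4, 3): 20,   # one 3-pixel cluster
                (9, 1): 15}                           # plus an isolated hit
        print(list(clusters(hits)))   # two centroids instead of four pixels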

    • "This algorithm is special in that it can be implemented on an FPGA"

      Question: Are current FPGAs faster than 10-15 year old CPUs?

      I'm thinking G4s or Athlons or something that's old enough to be easy and cheap to make today at any old fab, yet new enough that the dies and equipment are still around, and it can be ordered.

      Take your final FPGA and burn chips from it (I know they do that). Run a hundred, and CERN might pay 20 grand a chip if they're good enough. I made that number up; I don't have a clue, but that's where I'm going with my question.

      • Question: Are current FPGAs faster than 10-15 year old CPUs?

        Umm, I think you have things backwards. For certain tasks, FPGAs are phenomenally faster than any general purpose CPU. The correct question should be:

        Are current FPGAs faster than CPUs 10-15 years from now?

      • by kyrsjo ( 2420192 )

        FPGAs are very different beasts from normal CPUs - as far as I understand, they are very well suited to doing relatively simple tasks ridiculously fast, and one chip can treat tons of data in parallel. However, they do not do so well on really complex algorithms, algorithms requiring lots of fast memory and branches, and they are harder to program than CPUs.

        In this case, I would think each cell in the (m,q) parameter space is handled by one "block" of the FPGA, and you then feed all the blocks the data in parallel.
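
        Conceptually something like this (a pure-Python stand-in with invented names; on the FPGA each cell would be a hardware block, all of them updating in parallel on every incoming hit rather than in a loop):

        import math

        class RetinaCell:
            def __init__(self, m, q, sigma=0.1):
                self.m, self.q, self.sigma = m, q, sigma
                self.response = 0.0

            def accept(self, z, y):   # one hit, broadcast to every cell
                d = y - (self.m * z + self.q)
                self.response += math.exp(-d * d / (2 * self.sigma ** 2))

        cells = [RetinaCell(m / 10, q)
                 for m in range(-10, 11) for q in range(-5, 6)]
        for z, y in [(1.0, 1.5), (2.0, 2.0), (3.0, 2.5)]:   # stream hits in
            for cell in cells:   # in hardware: all cells at once
                cell.accept(z, y)
        best = max(cells, key=lambda c: c.response)
        print(best.m, best.q)   # recovers the track y = 0.5*z + 1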

        • "When you "burn" a chip from a FPGA, what it means is that you take the VHDL (etc) code and compile it into a format which you can use to produce specialized chips, instead of a format for programming an FPGA."

          Yes, and the problem now, and certainly in the future, is data overwhelming the computational resources. And as an above poster noted, the big cost in a custom ASIC is laying it out.

          I had assumed that the gate layout (and thus the logic programming) in any FPGA was still far simpler than last decade's CPUs.

          • They give away Athlons for 10 bucks nowadays. If you could burn your custom ASIC into that, even if you wasted most of what used to be the Athlon, it would be fast as shit running your FPGA program natively. Once you paid to lay it out, it seems to me it might be cheap as shit to run a few thousand of them and saturate the area with these detectors, which feed already vastly condensed data that we would be capable of capturing.

            That is just not how it works. You can't convert an Athlon to a custom ASIC. The part that is the Athlon is the hardware.
            To make a custom ASIC you need to make different hardware. That means making masks (which cost a few million dollars), testing, making new masks, testing, running a batch, testing, testing, testing.
            With this batch size it isn't really interesting unless they need the additional speed ASICs bring.

      • FPGAs and CPUs are different enough that it is hard to compare their speed. For some tasks FPGAs are way faster; for most tasks CPUs are way faster.
        Think of the FPGA as a Hummer and the CPU as a Ferrari. Most driving is done on roads, where the Ferrari is faster. However, in rough country I would bet on the Hummer.

        Take your final FPGA and burn chips from it (I know they do that). Run a hundred, and CERN might pay 20 grand a chip if they're good enough. I made that number up; I don't have a clue, but that's where I'm going with my question.

        Converting FPGA programming to chips means you need to invest millions to produce masks. You ain't gonna do that for a few hundred if you can avoid it.
        Those chips are called ASICs. They are usual

    • by kyrsjo ( 2420192 )

      Oh, and 2 m should have been 2 um. Slashdot ate my alt-gr+m = \mu...

  • "I only do eyes ..."
  • The fact that the retina actually has intelligent functions is very important. There has been a long-running belief that psychiatric patients who have visual hallucinations actually see something generated by the brain in the eye. If the retina is performing intelligent functions, then perhaps some components of mental disease really do reside in the eyes. Perhaps it explains the patients' rigid insistence that the visions are more real than normal sights.
  • Does anybody know how the bionic eyes that have been tested do this? Do they attempt to send their entire data stream, or do they know to do this part in silicon?
