Space

"Synthetic Tracking" Makes It Possible to Find Millions of Near Earth Asteroids 101

KentuckyFC writes "Astronomers think that near-Earth asteroids the size of apartment blocks number in the millions. And yet they spot new ones at a rate of only about 30 a year, because these objects are so faint and fast-moving. Now astronomers at the Jet Propulsion Laboratory have developed a technique called synthetic tracking for dramatically speeding up asteroid discovery. Instead of long exposures in which near-Earth asteroids show up as faint streaks, the new technique involves taking lots of short exposures and adding them together in a special automated way. The trick is to shift each image so that the pixels that record the asteroid are superimposed on top of each other. The result is an image in which the asteroid is a sharp point of light against a background of star streaks. They say synthetic tracking has the capability to spot 80 new near-Earth asteroids each night using a standard 5-metre telescope. That'll be handy for spotting rocks heading our way before they get too close, and for identifying targets for NASA's future asteroid missions."
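
For readers wondering what "adding them together in a special automated way" means in practice, here is a minimal sketch of the shift-and-add step in Python/NumPy. It is purely illustrative, not the JPL code: it assumes frames is a list of 2-D arrays taken dt seconds apart and that the asteroid's apparent velocity (in pixels per second) has already been guessed; the function and argument names are invented for this example.

    import numpy as np

    def shift_and_add(frames, velocity, dt):
        """Co-add short exposures after shifting each frame to follow an
        assumed asteroid velocity (pixels/second). Illustrative sketch only."""
        stacked = np.zeros_like(frames[0], dtype=float)
        for i, frame in enumerate(frames):
            # integer pixel shift that keeps the moving object on the same pixels
            dy, dx = np.rint(np.asarray(velocity) * i * dt).astype(int)
            stacked += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
        return stacked

If the guessed velocity matches the asteroid's true apparent motion, its photons pile up on one spot while the stars smear into streaks, exactly as the summary describes.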
  • by Anonymous Coward

    How many hedgehogs to an apartment block?

  • Is it just me? (Score:5, Interesting)

    by Score Whore ( 32328 ) on Thursday September 19, 2013 @12:35PM (#44895157)

    Or does the submitter not see the apparent logical flaw in the way they described this process? If you're going to line up each image so that the asteroid is a single sharp pixel and the stars are streaks, doesn't that suggest that you already know which pixel is the asteroid? In which case you don't really need to search for that particular asteroid, no?

    At a minimum, the submitter or the editors need to think about whether their description of the procedure is any good.

    • by TheCarp ( 96830 )

      Yes and no. I was thinking the same thing, but what if you do the process multiple times, shifting in different directions each time, and possibly again with different amounts of shift per frame? Then you have maybe 8 or even 80 different results to look at, which could be weeded out by algorithmically rejecting any that contain no bright spots.

      • by TheCarp ( 96830 )

        An apartment block would be about .04 furlongs across. You should be able to figure it out from there.

        • by Gilmoure ( 18428 )

          Ah, but I was wondering about volume in hogsheads [traditionaloven.com].

          • by bdwebb ( 985489 )
            Babies is the correct unit of measure, actually. Babies provides a unique unit of measure in that it is minimally variable but, when averaged and used as a constant, can provide a standard unit of measure for determining length, width, speed, time, weight, etc...the possibilities are endless! Almost anything that can be measured can be measured in babies. Imagine - no more confusion between metric or standard...Babies is the answer!

            All units below are based off an average newborn baby:
            B = Baby (Avg.
      • That's exactly what they're doing: brute-force analysis. All directions and all "reasonable" variations of speed for a given series of spot pictures, then filter for spot intensity. A decent use for supercomputing.

        • No, rather a complete waste of supercomputing, since there are already better algorithms for doing the same thing.

          • Care to share?

          • by Anonymous Coward

            No, rather a complete waste of supercomputing, since there are already better algorithms for doing the same thing.

            Yeah, because the scientists at JPL are all idiots compared to some random douchebag on Slashdot who thinks he knows everything.

            • Sad but true. Most anonymous cowards are experts in every field of art, math, and science. If only they spent their time using their knowledge, and not just faffing about on slashdot and complaining about the idiot "experts" that have spent their entire adult lives learning about one tiny little topic... and fail utterly to understand the brilliance of this untapped resource. So sad. I'm going to go drown myself in my research.
    • Re:Is it just me? (Score:5, Insightful)

      by edmudama ( 155475 ) on Thursday September 19, 2013 @12:47PM (#44895287)

      Article doesn't have a good description.

      My guess is you take a bunch of timelapse frames of the same sky.

      Then you overlay them at offsets in different directions which would keep any moving objects in the same place.

      Picture doing 36000 sequences of overlays:
      360 degree variation in 0.1 degree increments at 10 different radial velocities

      Most of those sequences will just show blurred gray washout, but if you happened to hit the right direction and speed for a moving object, your overlaid image sequence will effectively keep that object in the same spot of the frame, which will result in the average brightness for that pixel or pixels being higher than the surrounding blur.

      Just a guess...
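
      In code, that brute-force guess might look something like the sketch below (purely illustrative, not the JPL implementation; the 0.1-degree direction grid and the list of trial speeds come from the numbers guessed above, and all names are made up):

          import numpy as np

          def grid_search(frames, dt, speeds, n_angles=3600):
              """Try every direction (0.1 deg steps) at every trial speed and keep
              the shifted stack with the brightest peak. Illustrative sketch only."""
              best_peak, best_v = -np.inf, None
              for speed in speeds:                      # trial speeds, pixels/second
                  for k in range(n_angles):             # 0.1 degree direction steps
                      theta = 2 * np.pi * k / n_angles
                      v = np.array([speed * np.sin(theta), speed * np.cos(theta)])
                      stack = np.zeros_like(frames[0], dtype=float)
                      for i, frame in enumerate(frames):
                          dy, dx = np.rint(v * i * dt).astype(int)
                          stack += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
                      if stack.max() > best_peak:
                          best_peak, best_v = stack.max(), v
              return best_v, best_peak

      Any (direction, speed) guess that matches a real mover produces a stack with a bright point; the "blurred gray washout" stacks get rejected by the peak-brightness test.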

      • That was actually brilliant. That was several orders of magnitude more interesting than the summary.
        But you lose a point for not converting distance to international standard apartment block scale.
      • Interesting idea, I wonder if that's ever been tried? I guess the feasibility depends on what the angular motions of these objects are.

        In this case though, they "simply" take a lot of short-exposure images of the same region and add them together. From the abstract:

        The technique relies on a combined use of a novel data processing approach and a new generation of high-speed cameras which allow taking short exposures of moving objects at high frame rates, effectively "freezing" their motion. Although the signal to noise ratio (SNR) of a single short exposure is insufficient to detect the dim object in one frame, by shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object with a significantly higher SNR.
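
        For what it's worth, the gain from co-adding is the standard stacking result (textbook SNR scaling, not a number from the paper): if each of the $N$ short frames has the same per-frame signal-to-noise ratio and the noise is independent between frames, then

            $\mathrm{SNR}_{\mathrm{stack}} \approx \sqrt{N}\,\mathrm{SNR}_{\mathrm{frame}}$

        so a stack of 120 aligned frames buys roughly a factor of $\sqrt{120} \approx 11$ over a single short exposure.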

        • Actually, rereading the abstract, it seems that GP was spot on:P

        • Interesting idea, I wonder if that's ever been tried?

          See Inverse Synthetic Aperture Radar [wikipedia.org]. It's basically just taking it one step further. Where ISAR starts with the assumption that you know where something is, and thus know how to shift the captures to produce the synthetic image, this brute forces the computation and waits for a sufficiently strong signal to show up.

      • How does this compare to using regular edge detection to find faint streaks in a time lapse image? How about after detecting bright spots and deleting them followed by edge detection?
      • Re: (Score:2, Informative)

        by Anonymous Coward

        Article doesn't have a good description.

        Before the detection of the NEA, its velocity vector is unknown. However, we find this vector by conducting a search in velocity space. To do this we have developed an algorithm that simultaneously processes the synthetic tracking data at different velocities. The velocities searched initially have (x,y) components that are multiples of 1 pix/frame in each direction. This is a computationally intensive task: for example, the shift and add process for 120 images for 1,000 different velocity vectors requires over 10^11 arithmetic operations. However, with current off-the-shelf graphics processing units (GPU) with up to 2,500 processors and teraFLOPS peak speeds, we were able to analyze 30 sec of data in less than 10 sec. Once the NEA is detected in this initial search, an estimate of velocity becomes possible. Using this velocity we refine the astrometry relative to a reference star in the field and determine the velocity to a much higher precision. Elsewhere we plan to describe the details of the synthetic tracking algorithm and report its performance, including its false alarm rate.
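
        As a quick sanity check on that figure (the frame size is an assumption for illustration, not a number quoted in the excerpt): with frames of order a megapixel,

            $120\ \text{frames} \times 1{,}000\ \text{velocity vectors} \times 10^{6}\ \text{pixels/frame} \approx 1.2 \times 10^{11}$

        per-pixel shift-and-add operations, which is consistent with the "over 10^11 arithmetic operations" the authors quote.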

      • by Anonymous Coward

        For anyone who wants to look into this, the above is a description of a standard computer vision algorithm called the generalised Hough transform. It's one of those really old techniques that has stood the test of time and remains useful for real problems today.
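
        To make the analogy concrete, a toy Hough-style accumulator over velocity space might look like the sketch below. This only illustrates the analogy, not the paper's algorithm (the paper co-adds frames precisely because faint asteroids cannot be detected in individual frames, so per-frame detections like these may not exist); the data layout and names are invented.

            from collections import Counter

            def hough_velocity_votes(detections, dt):
                """detections: list (one entry per frame) of (y, x) bright-pixel positions.
                Each pair of detections in different frames votes for the velocity that
                would connect them; a real moving object piles its votes into one bin."""
                votes = Counter()
                for i, pts_i in enumerate(detections):
                    for j in range(i + 1, len(detections)):
                        for (yi, xi) in pts_i:
                            for (yj, xj) in detections[j]:
                                vy = round((yj - yi) / ((j - i) * dt), 1)  # pixels/second, binned
                                vx = round((xj - xi) / ((j - i) * dt), 1)
                                votes[(vy, vx)] += 1
                return votes.most_common(5)  # top candidate velocity bins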

    • Re:Is it just me? (Score:4, Interesting)

      by kruach aum ( 1934852 ) on Thursday September 19, 2013 @12:47PM (#44895289)

      "If it were moving at speed v, it would show up when I shifted the pictures by x pixels." Repeat for likely ranges of v, watch for bright spots. No contradictions required.

    • Re:Is it just me? (Score:5, Informative)

      by Baloroth ( 2370816 ) on Thursday September 19, 2013 @12:52PM (#44895361)

      From TFA:

      The difficult part of this is knowing which way to shift each image. Shao and co have solved this by brute force: they take consecutive images and examine all possible shifts to see which resolves the fast moving asteroid.

      That's a lot of computation (they try 1,000 different velocity vectors), but that's what computers are really, really good at.

      • Re:Is it just me? (Score:4, Interesting)

        by mdielmann ( 514750 ) on Thursday September 19, 2013 @03:49PM (#44896989) Homepage Journal

        Sometimes the best way to solve a big problem is to just get a bigger hammer.

        I had a problem once that I could probably have solved using some very pretty, complex, elegant formula. But after examining the problem space, I figured I could brute-force it, with a basic fitness algorithm, in about 2 seconds. The overall process it was a part of took between 90 and 300 seconds. The other benefits were that it was quite readable and didn't require any advanced math or deep knowledge of the problem to see what was being done. The fact that the pretty formula would have improved performance by at most about 2% made it an easy choice.

        • Congratulations. You have discovered how thinking beings solve problems. Consider this planet a computer brute-forcing the answer not only to the emergence of sentience, but also to the most ironic way to become extinct.

    • by Anonymous Coward

      As the material in the second link states, computationally they do a brute-force approach, trying all possible directions and speeds of drift of the object, then look at any resultant images that show a single object. A nightmare from a computational complexity approach, but with the amount of HPC horsepower out there these days, a workable one.

      • They probably just use a quad SLI setup with Nvidia GPUs to do this. Something they'd be perfectly adequate for.

        • No, they probably don't. You only use SLI because you need to share and merge graphical output onto a screen. When you're doing computational work, you just have a server with a bunch of compute cards.
        • Nope, separate GPUs work fine for this. And not 4. And without the useless graphical outputs.
          Something akin to this: https://www.trustedsec.com/february-2011/building-the-ultimate-bad-arse-cuda-cracking-server/
          There certainly are updated versions available for large operations; this was from 2010, but I couldn't find a more recent picture.
    • The thing you have to guess correctly (and try in large numbers) is the apparent motion vector, not the position.
    • Yes, the description may be flawed, but it seems to me that this is essentially a technique that has been in use for a long time, at least back to the 1950s, in systems like surveillance radar, where several ping round trips are superimposed (added together). It involves delaying/storing the received signal and adding it back together in a time-correlated manner. Noise tends to reduce and object reflections tend to reinforce, resulting in an effective improvement in the signal-to-noise ratio. In the early days,

      • It's basically an enhancement on the traditional [inverse] synthetic aperture methods. With SAR, you know where everything is, roughly, and you're using the technique to refine the image. In this, you use the technique blindly, and wait until something rises up out of the background noise.
  • by Anonymous Coward

    Wha...? Asteroids the size of apartment blocks?

    Can we please have this measurement in a standardized unit, like Volkswagen beetles?

    Man. I thought Slashdot was going downhill back when it was mostly a CueCat fansite, but this really takes the cake.

  • Seems like that would only work if you knew the speed and direction that the asteroid was moving in. And, of course, if you knew that, then that might not be the one that you needed to find.
    • by Anonymous Coward

      Yes. From the article:

      "Before the detection of the NEA, its velocity vector is unknown. However, we nd this vector by conducting a search
      in velocity space. To do this we have developed an algorithm that simultaneously processes the synthetic tracking
      data at dierent velocities. The velocities searched initially have (x,y) components that are multiples of 1 pix/frame
      in each direction. This is a computationally intensive task: for example, the shift and add process for 120 images for
      1,000 dierent velocity vect

    • by RichMan ( 8097 )

      You missed the "synthetic" part: they take lots of static pictures of the same point over time.
      Then they use a computer to shift the images in all the directions and speeds and do the search.

      • by TheCarp ( 96830 )

        Sounds like this http://en.wikipedia.org/wiki/Synthetic_aperture_radar [wikipedia.org]

        SAR is usually implemented by mounting, on a moving platform such as an aircraft or spacecraft, a single beam-forming antenna from which a target scene is repeatedly illuminated with pulses of radio waves at wavelengths anywhere from a meter down to millimeters. The many echo waveforms received successively at the different antenna positions are coherently detected and stored and then post-processed together to resolve elements in an image of the target region.

  • It seems like it would be much simpler to just use optical flow to find moving objects.

  • FISA court approved this message.
  • by Animats ( 122034 ) on Thursday September 19, 2013 @12:52PM (#44895357) Homepage

    "Medium.com" is one of those aggregator sites. Don't link to them. Link to the actual paper [arxiv.org]. Thank you.

    They had to use the Palomar 200 inch telescope to make this work. There aren't many big telescopes in the world, and they're booked months in advance. They got a few hours of observing for one night, and good results. But they'd need a lot more observing time on big scopes to do their survey.

  • by kryliss ( 72493 ) on Thursday September 19, 2013 @01:07PM (#44895495)

    It always amazes me that the people who complain all the time about Slashdot with dupes, bad articles, etc. come back every day just so they can tell everyone how bad it is. It's as though they sit and wait for it just so they can make long-winded comments on how bad Slashdot is. It is getting really old.

  • by Anonymous Coward

    Why would you brute-force the pixel shifts to get the asteroid in a single spot, when the fixed background stars are already a stable reference for co-aligning the images? It seems the simpler way (even with the short exposures) would be to cross-correlate and co-align to the background stars, then look for the "dotted" path of the moving asteroid. And isn't it already done this way?

    • I can only guess that in a single image, the asteroid is indistinguishable from noise, and even overlaying the images doesn't bring it out clearly enough to be spotted. Perhaps, once you do all the meta-math, it turns out to be easier just to do many overlays at multiple assumed velocities (which can be filtered down to what we expect of these asteroids) and then look for a bright spot.

      Stacking images is simple. Spotting lines of dots in amongst the noise could be fairly tricky to do with any kind of confidence.

  • Yes, like the kind you buy on amazon.com, right?
  • That this is cool. Anything that will help humans identify the source of their ultimate demise is a good thing.
    • It's cooler than when I read it as "Find millions of nearby assholes" and decided that Craigslist and OKCupid already do this. Or living in Boston or Baltimore, where you just have to look outside.
    • That given the size of many apartment blocks these days there is the nagging thought that it might be kinder to everyone concerned to just not look for these massively destructive objects until we have an absolutely foolproof way of deflecting them sufficiently to provide a significant engineering challenge for future generations. I sweep dirt under the mat too!
  • So now we can find a lot more very dangerous space rocks. That's excellent. However, we can't really do much about them unless we can mass-produce space shuttles, clones of Bruce Willis and Ben Affleck, and crappy Aerosmith songs. But if survival means a world with multiple Ben Afflecks and getting ear-spammed by more sappy Aerosmith power ballads whenever I turn on a radio, we'd be better off with the asteroid impacts.
  • I'm amazed they weren't doing something like this already. I think it's a pretty standard cross-correlation filtering method. They even say they're brute-forcing it with a GPU, which surprised me. Am I missing something, or could this be sped up quite a bit with an FFT?
    • by cusco ( 717999 )

      Up until this point it was too expensive computationally. Each set of images requires 10^11 computations to process, a non-trivial amount of processing even for a big cluster of GPUs. Asteroids are really, really dim, and small ones are even harder to detect. The signals that they're trying to process are below the noise threshold.

  • by Dr La ( 1342733 ) on Thursday September 19, 2013 @03:56PM (#44897069) Homepage
    The more sensitive camera and the algorithm to empirically find the correct direction and speed of movement of an as-yet-unknown asteroid are new.

    The method of overlaying multiple short images so that the asteroid is a pinpoint additive composite of multiple images and the stars become trails is not new.

    The latter technique is called "stacking" (a term that has been around for quite a long time and means the same thing as their "synthetic tracking"). It is regularly done to image and get astrometry on faint objects when speed and direction of movement are already known (e.g. in follow-up observations on a near-Earth asteroid that already has some observations over the previous hours/days and hence a preliminary orbit). That part is really not new, and there is no need to invent new terminology ("synthetic tracking") for it.

    Frankly, it is weird that the authors nowhere mention "stacking" as an existing technique that is often used in imaging faint asteroids. It suggests they did not investigate whether their "new" technique is really that new. Yes, they innovate on it, but they did not invent a completely novel technique.
    • by Dr La ( 1342733 )
      Small addition: the technique is called "track & stack" when employed on moving objects. Most existing astrometric software can do it.
    • This approach appears to be slightly different from traditional frame stacking, in that they are utilizing a low read noise (really, 1e- doesn't pass the smell test--there's got to be some tradeoffs) to take a large number of frames with short exposure times. The only other interesting approach they are taking is searching the velocity space, for which (given 1000 points) they need a 2500-node HPC cluster (you do have one of those in your closet, yes?). From their description, they are also only searching 1

      • by Dr La ( 1342733 )

        It's worth keeping in mind, though, that you don't publish CalTech papers and get time on the Palomar 200" by being a dim-witted slacker.

        True. I think their presentation of things is more the result of current publishing demands: useful or even innovative is not good enough anymore to get your paper through; it needs to be "new" and "never done before" instead of an innovation on an existing technique.

        Still, I find the complete lack of any reference to even the words "track & stack" weird, given that tracking & stacking is common practise in imaging faint asteroids. Maybe not when you use a 5-meter telescope, but with smaller

  • This is a simple and nifty idea and it works.

    Good, now we can find those small, easy-to-mine rocks nearby. Or use those rocks for other purposes... build your habitat directly in one.
