6.7 Meter Telescope To Capture 30 Terabytes Per Night

Lumenary7204 writes "The Register has a story about the Large Synoptic Survey Telescope, a project to build a 6.7 meter effective-diameter ground-based telescope that will be used to map some of the faintest objects in the night sky. Jeff Kantor, the LSST Project Data Manager, indicates that the telescope should be in operation by 2016, will generate around 30 terabytes of data per night, and will 'open a movie-like window on objects that change or move on rapid timescales: exploding supernovae, potentially hazardous near-Earth asteroids, and distant Kuiper Belt Objects.' The end result will be a 150 petabyte database containing one of the most detailed surveys of the universe ever undertaken by a ground-based telescope. The telescope's 8.4 meter mirror blank was recently unveiled at the University of Arizona's Mirror Lab in Tucson."
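
The quoted figures hold up to a quick back-of-the-envelope check. The sketch below assumes a roughly ten-year survey with about 300 observing nights per year; neither number appears in the summary, so treat them only as an order-of-magnitude sanity check against the quoted 150 petabyte total.

# Back-of-the-envelope check of the LSST numbers in the summary.
# ASSUMPTIONS (not from the summary): ~10-year survey, ~300 clear nights/year.
TB = 10**12          # decimal terabyte, in bytes
PB = 10**15          # decimal petabyte, in bytes

nightly_bytes = 30 * TB
nights_per_year = 300          # assumed
survey_years = 10              # assumed

raw_total = nightly_bytes * nights_per_year * survey_years
print(f"Raw image data over the survey: {raw_total / PB:.0f} PB")   # ~90 PB
# The quoted 150 PB database would then presumably also cover processed
# images and catalogs on top of the raw frames -- same order of magnitude.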

Comments Filter:
  • by Anonymous Coward on Saturday October 04, 2008 @03:15AM (#25254529)

    What does this story add that the following LSST stories didn't?

    http://science.slashdot.org/article.pl?sid=07/01/10/0111227

    http://science.slashdot.org/article.pl?sid=08/04/22/0116259

    http://science.slashdot.org/article.pl?sid=08/09/02/2346240

  • it will run on MySQL (Score:5, Informative)

    by datacharmer ( 1137495 ) on Saturday October 04, 2008 @03:49AM (#25254619) Homepage

    This project was presented at the MySQL Users Conference 2008 in a dedicated talk and a keynote.

    The storage will be organized in clusters based on MySQL databases.

    Astronomy, Petabytes, and MySQL [oreilly.com]

    The Science and Fiction of Petascale Analytics [oreilly.com]
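
    As a rough illustration of what "clusters based on MySQL databases" can mean in practice, here is a minimal sketch of hash-based sharding of catalog rows across several MySQL nodes. The node names, shard count, and objectId key are hypothetical and are not the actual LSST schema.

    # Minimal sketch: spreading catalog rows across a cluster of MySQL nodes.
    # Everything here (node list, shard key, table layout) is hypothetical.
    import hashlib

    MYSQL_NODES = ["db01", "db02", "db03", "db04"]   # hypothetical shard hosts

    def shard_for(object_id: int) -> str:
        """Pick the MySQL node that stores a given catalog object."""
        digest = hashlib.md5(str(object_id).encode()).hexdigest()
        return MYSQL_NODES[int(digest, 16) % len(MYSQL_NODES)]

    # Example: route a handful of (hypothetical) object IDs to shards.
    for oid in (101, 202, 303, 404):
        print(oid, "->", shard_for(oid))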

  • by bertok ( 226922 ) on Saturday October 04, 2008 @04:00AM (#25254641)

    30 TB per night sounds like a lot, but 1.5 TB drives are about AUD 350 each, retail. By 2016, I'd expect vendors to have released at least a 10 TB hard drive at that price point, and I wouldn't be surprised if we're using 30 to 50 TB drives.

    So it all boils down to about $1000 per night of operation, or about $350K per year. Not exactly expensive for a science project. A single Mars mission costs about $300M, but this telescope would generate more discoveries. That's not even considering that storage costs would continue to drop over the lifetime of the telescope, so the eventual total cost may be less than $100K per year. That's the salary of just one person!
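
    A quick sketch of that arithmetic, following the comment's own assumptions of roughly AUD 350 per 10 TB drive by 2016 (the drive size and price are the comment's guesses, not quoted figures):

    # Back-of-the-envelope storage cost, using the parent comment's assumptions.
    drive_capacity_tb = 10        # assumed 2016-era drive size (comment's guess)
    drive_price_aud = 350         # assumed price per drive (comment's guess)

    nightly_tb = 30
    drives_per_night = nightly_tb / drive_capacity_tb          # 3 drives
    cost_per_night = drives_per_night * drive_price_aud        # ~AUD 1,050
    cost_per_year = cost_per_night * 365                       # ~AUD 383,000

    print(f"~AUD {cost_per_night:,.0f} per night, ~AUD {cost_per_year:,.0f} per year")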

  • by dwater ( 72834 ) on Saturday October 04, 2008 @04:39AM (#25254747)

    The cost of the storage might be reasonable, but what about the performance aspect? 30 TB per night sounds like a lot to store in one night... being generous and calling a night 12 hours, that's 43,200 seconds, which works out to about 694 MB/s. Without looking up any performance stats for hard drives, that sounds fairly attainable too.
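
    The sustained write rate is easy to check; a minimal sketch of that arithmetic (12-hour night, decimal megabytes):

    # Sustained write rate needed to land 30 TB in a 12-hour night.
    night_seconds = 12 * 3600            # 43,200 s
    nightly_bytes = 30 * 10**12          # 30 TB (decimal)

    rate_mb_s = nightly_bytes / night_seconds / 10**6
    print(f"{rate_mb_s:.0f} MB/s sustained")   # ~694 MB/s -- achievable by
                                               # striping across several disks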

  • Nits and Grins (Score:4, Informative)

    by martyb ( 196687 ) on Saturday October 04, 2008 @09:05AM (#25255511)

    From TFA title: (emphasis added)

    6.7 Meter Telescope To Capture 30 Terabytes Per Night

    <nit>
    That's a 6.7 meter effective-diameter telescope. The primary mirror has a diameter of 8.4m, but the tertiary mirror (5.2m diameter) sits right in the middle of the primary, so its area needs to be subtracted. The area of the primary is pi*(8.4/2)^2, about 55.4m^2, and the area of the tertiary is pi*(5.2/2)^2, about 21.2m^2, leaving roughly 34.2m^2 of clear aperture; a single mirror with that collecting area would have a diameter of about 6.6m, close to the quoted 6.7m effective diameter (quick check after this comment).
    </nit>

    6.7 Meter Telescope To Capture 30 Terabytes Per Night

    <grin>
    Hey!! I thought information wanted to be free! And here they plan to go off and capture 30 TERAbytes? Each night? OMG!!!!11Eleventy!! Say it ain't so!!
    </grin>
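
    A quick numeric check of the nit above; the only inputs are the 8.4 m primary and the 5.2 m central obstruction from the comment:

    # Effective aperture: primary area minus the area blocked by the
    # 5.2 m tertiary sitting in the middle of the 8.4 m primary.
    import math

    primary_area  = math.pi * (8.4 / 2) ** 2     # ~55.4 m^2
    obscured_area = math.pi * (5.2 / 2) ** 2     # ~21.2 m^2
    clear_area    = primary_area - obscured_area # ~34.2 m^2

    effective_diameter = 2 * math.sqrt(clear_area / math.pi)
    print(f"{effective_diameter:.1f} m")         # ~6.6 m, close to the quoted 6.7 m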

  • Re:30TB raw? (Score:2, Informative)

    by mrsquid0 ( 1335303 ) on Saturday October 04, 2008 @09:32AM (#25255579) Homepage
    Back when I was using CFHT there was no high-pass filtering done on the data. That would change the noise properties of the data, which could render the data useless for certain types of analysis. The big space savings were done using lossless data compression. Depending on the type of data one can reduce the disk space required by up to about 90%. A second space-saving technique was to combine calibration data, such as bias frames and flats. In many cases combined calibration data is just as good as the individual frames, and in some cases better. Roughly half of the data collected each night is calibration data, so this can result in a big saving in space. I have not used CFHT since the 1990s, so I have no idea how they deal with their data now.
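
    A toy sketch of the two space-saving ideas in the comment above: median-combining calibration frames and losslessly compressing the result. It uses numpy and zlib on synthetic data purely for illustration; real observatory pipelines work on FITS files with astronomy-specific compressors, and actual compression ratios depend entirely on the data.

    # Toy illustration: combine calibration frames, then compress losslessly.
    # Synthetic data only -- not how CFHT or LSST actually store their frames.
    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    # Ten synthetic 512x512 16-bit "bias" frames: constant level plus read noise.
    biases = rng.normal(1000, 5, size=(10, 512, 512)).astype(np.uint16)

    # Space saving #1: keep one combined master bias instead of ten frames.
    master_bias = np.median(biases, axis=0).astype(np.uint16)

    # Space saving #2: lossless compression -- the round trip is exact.
    blob = zlib.compress(master_bias.tobytes())
    restored = np.frombuffer(zlib.decompress(blob), dtype=np.uint16)
    restored = restored.reshape(master_bias.shape)

    assert np.array_equal(master_bias, restored)   # bit-for-bit identical
    print(f"compressed to {len(blob) / master_bias.nbytes:.0%} of original size")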
  • Re:A waste (Score:2, Informative)

    by resignator ( 670173 ) on Saturday October 04, 2008 @01:15PM (#25256647)
    It would be far less cost effective to upgrade Hubble than to build the LSST. Shuttle launches cost a TON of money, not to mention that you are risking astronauts' lives to try to upgrade something we CAN build better on Earth. Hubble has had optical problems in the past, and the two flights to repair it cost about one billion dollars. Imagine what a complete rebuild would require. "Making the most advanced telescope, ON THE GROUND, seems like an oxymoron to me." And the reason it is considered such an advanced telescope: http://en.wikipedia.org/wiki/Adaptive_optics [wikipedia.org]
