
6.7 Meter Telescope To Capture 30 Terabytes Per Night

Lumenary7204 writes "The Register has a story about the Large Synoptic Survey Telescope, a project to build a 6.7 meter effective-diameter ground-based telescope that will be used to map some of the faintest objects in the night sky. Jeff Kantor, the LSST Project Data Manager, indicates that the telescope should be in operation by 2016, will generate around 30 terabytes of data per night, and will 'open a movie-like window on objects that change or move on rapid timescales: exploding supernovae, potentially hazardous near-Earth asteroids, and distant Kuiper Belt Objects.' The end result will be a 150 petabyte database containing one of the most detailed surveys of the universe ever undertaken by a ground-based telescope. The telescope's 8.4 meter mirror blank was recently unveiled at the University of Arizona's Mirror Lab in Tucson."
  • ... launch it into space? We need to replace the Hubble.
    • Re:Why not... (Score:5, Interesting)

      by Nyeerrmm ( 940927 ) on Saturday October 04, 2008 @03:23AM (#25254705)

      The basic problem is that a 6.7 meter aperture can't fit in a launch vehicle; even Ares V is only slated to be 5.5 meters.

      Hubble was built at the diameter it was (2.4 meters) because that's about the largest stiff mirror you could build that would hold its shape well enough through launch to remain optically sound on orbit. When you require 10-nm level precision, it takes a hefty structure to keep things that stiff.

      To go bigger, the methods being used for the James Webb Space Telescope roughly double the aperture while halving the weight, by using active controls and sensors to correct errors rather than trying to avoid all errors up front. But notice that James Webb targets the infrared, which is very hard to observe from Earth, while no comparable optical-band concept is on the table. That's because the new, big Earth-bound scopes use adaptive optics to eliminate seeing errors (the atmospheric variations that Hubble avoids entirely) and can potentially beat Hubble's image quality, since much larger mirrors can be used on the ground; the rough diffraction-limit numbers after this comment give a sense of the scale.

      Of course, if the money shows up, a space-based observatory still has other advantages, particularly observing time and not having to depend on the adaptive elements working well. So I'm sure we'll see a proper Hubble replacement eventually; it's just not as critical for scientific progress as some might think.
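
      As a rough sense of scale for the claim that bigger mirrors win once adaptive optics removes the seeing: the diffraction limit goes as 1.22*lambda/D. A minimal sketch in Python; the 550 nm wavelength is an illustrative choice, not a project spec.

        import math

        wavelength = 550e-9  # visible light, meters (illustrative value)

        # Diffraction-limited resolution: theta ~ 1.22 * lambda / D
        for name, aperture_m in (("Hubble, 2.4 m", 2.4), ("large ground scope, 8.4 m", 8.4)):
            theta_rad = 1.22 * wavelength / aperture_m
            theta_arcsec = math.degrees(theta_rad) * 3600
            print(f"{name}: {theta_arcsec:.3f} arcsec")  # ~0.058 vs ~0.016 arcsec

        # Uncorrected atmospheric seeing is typically ~1 arcsec, so without
        # adaptive optics a ground scope never gets near its diffraction limit.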

  • by ShadowFalls ( 991965 ) on Saturday October 04, 2008 @01:42AM (#25254427)
    Its not being in space might have something to do with the amount of data it would have to transmit and the bandwidth limitations... Besides, you can't replace Hubble; it's impossible to exactly replicate that many technical difficulties...
    • Yes, yes... the HST has had issues, but all in all, it's been in almost continuous operation for over 18 years. I don't think I've had anything for that long that didn't have problems (including the first wife).
  • 30TB raw? (Score:3, Interesting)

    by Jah-Wren Ryel ( 80510 ) on Saturday October 04, 2008 @01:57AM (#25254467)

    When I worked at the CFHT a few decades ago, they had a bunch of "data reduction" algorithms they ran on each night's data that cut the amount they needed to store by at least a factor of 10.

    • by $RANDOMLUSER ( 804576 ) on Saturday October 04, 2008 @02:05AM (#25254505)
      What, like storing the year as two digits, that sort of thing?
      • by ShadowFalls ( 991965 ) on Saturday October 04, 2008 @02:14AM (#25254527)
        No, removing any data that could prove the possibility of Extraterrestrial life :P
      • Honestly, I don't know what the data reduction algorithms did. If I had to guess, I'd say they included things like a high-pass filter to drop readings that were "in the noise," and maybe some sort of compression for long runs of repeated values over large areas like "empty space." Just guessing, though.

        • Re: (Score:2, Informative)

          by mrsquid0 ( 1335303 )
          Back when I was using CFHT there was no high-pass filtering done on the data. That would change the noise properties of the data, which could render it useless for certain types of analysis. The big space savings came from lossless data compression. Depending on the type of data, one can reduce the disk space required by up to about 90%. A second space-saving technique was to combine calibration data, such as bias frames and flats. In many cases combined calibration data is just as good as the individual frames.
          • It's actually a fascinating demonstration that information theory and compression really work. The calibration images, such as bias images (a readout of the CCD with no effective exposure time) or dark images (a readout of the CCD with the shutter closed but with an exposure time like those of the actual sky observations), indeed contain little information and so compress by factors of 4-5 with straightforward tools like gzip. Regular images of the sky compress by approximately a factor of 2.
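
            A minimal sketch of that effect using synthetic frames (assuming numpy; the offset and noise levels are made-up illustrative numbers): a near-constant bias frame has far less entropy than a noisy sky frame, so gzip squeezes it several times harder.

              import gzip
              import numpy as np

              rng = np.random.default_rng(0)

              # "Bias" frame: constant offset plus a little read noise -> low entropy.
              bias = (1000 + rng.normal(0, 2, (1024, 1024))).astype(np.uint16)

              # "Sky" frame: much larger noise -> far less redundancy to remove.
              sky = (1000 + rng.normal(0, 60, (1024, 1024))).astype(np.uint16)

              for name, frame in (("bias", bias), ("sky", sky)):
                  raw = frame.tobytes()
                  packed = gzip.compress(raw)
                  print(f"{name}: {len(raw) / len(packed):.1f}x compression")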
      • by Kjella ( 173770 )

        What, like storing the year as two digits, that sort of thing?

        I know you're trying to be a smartass, but BMP vs. PNG, for example? Both are lossless, but one takes much, much more space than the other. I'm sure there are other, probably better, ways of compressing the data that are specific to the application, the way FLAC is to music. Though I would guess that's already applied...

    • They still stored all of the raw data; they just provided the reduced data products as a convenience to the astronomers.
  • by Anonymous Coward

    What does this story add that the following LSST stories didn't?

    http://science.slashdot.org/article.pl?sid=07/01/10/0111227

    http://science.slashdot.org/article.pl?sid=08/04/22/0116259

    http://science.slashdot.org/article.pl?sid=08/09/02/2346240

  • by Harold Halloway ( 1047486 ) on Saturday October 04, 2008 @02:18AM (#25254541)

    30 Terabytes, consisting mainly of #000000.

    • Sometimes it contains nice #ffffff pixels too! But I agree, it's very compressible anyway. RLE compression? ;)
    • by NixieBunny ( 859050 ) on Saturday October 04, 2008 @03:02PM (#25257967) Homepage
      I used to think that until I started to work for astronomers. They actually take photos of noise, and then add noisy images together ("stacking") until pictures of interesting faraway things emerge from the noise. That's why a nightly sky survey is so useful - you can add together a few months of images and see stuff that you would never have seen in a single image.
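
      A minimal sketch of why stacking works (assuming numpy; the flux and noise values are made up): averaging N frames cuts the noise by sqrt(N), so a source far below the single-frame noise floor emerges after enough visits.

        import numpy as np

        rng = np.random.default_rng(1)

        true_flux = 0.5      # faint source, well below the per-frame noise
        noise_sigma = 10.0   # per-frame noise level (illustrative)
        n_frames = 3600      # e.g. many visits stacked over months

        # Repeated noisy measurements of the same pixel.
        frames = true_flux + rng.normal(0, noise_sigma, n_frames)

        print(f"single-frame SNR: {true_flux / noise_sigma:.2f}")  # ~0.05: invisible
        print(f"stacked mean:     {frames.mean():.2f}")            # scatter of ~0.17 around the true 0.5
        print(f"stacked SNR:      {true_flux / (noise_sigma / np.sqrt(n_frames)):.1f}")  # ~3, a sqrt(N) gain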
  • it will run on MySQL (Score:5, Informative)

    by datacharmer ( 1137495 ) on Saturday October 04, 2008 @02:49AM (#25254619) Homepage

    This project was presented at the MySQL Users Conference 2008 in a dedicated talk and a keynote.

    The storage will be organized in clusters based on MySQL databases.

    Astronomy, Petabytes, and MySQL [oreilly.com]

    The Science and Fiction of Petascale Analytics [oreilly.com]

  • by bertok ( 226922 ) on Saturday October 04, 2008 @03:00AM (#25254641)

    30 TB per night sounds like a lot, but 1.5 TB drives are about AUD 350 each, retail. By 2016, I'd expect vendors to have released at least a 10 TB hard drive at that price point, and I wouldn't be surprised if we're using 30 to 50 TB drives.

    So it all boils down to about $1000 per night of operation, or about $350K per year. Not exactly expensive for a science project. A single Mars mission costs about $300M, but this telescope would generate more discoveries. And that's before considering that storage costs would continue to drop over the lifetime of the telescope, so the eventual total cost may be less than $100K per year. That's the salary of just one person!
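
    The parent's arithmetic roughly checks out under its own assumptions (hypothetical 10 TB drives at AUD 350 by 2016):

      tb_per_night = 30
      drive_tb = 10     # assumed 2016-era drive size, per the parent
      drive_cost = 350  # AUD, assumed retail price

      drives_per_night = tb_per_night / drive_tb      # 3.0 drives
      cost_per_night = drives_per_night * drive_cost  # AUD 1,050
      cost_per_year = cost_per_night * 365            # AUD 383,250

      print(drives_per_night, cost_per_night, cost_per_year)

    Fewer usable observing nights per year would pull the total below the quoted $350K figure.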

    • by dwater ( 72834 ) on Saturday October 04, 2008 @03:39AM (#25254747)

      The cost of the storage might be reasonable, but what about the performance aspect? 30 TB per night sounds like a lot to store in one night... being generous and calling a night 12 hours, I make that 43,200 seconds, which works out to about 694 MB/s. Without looking up any performance stats for hard drives, that sounds fairly attainable too.
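
      That back-of-the-envelope rate is right, and a modest striped array covers it; the ~100 MB/s per-drive figure below is an assumed round number for 2008-era sustained sequential writes.

        bytes_per_night = 30e12
        night_seconds = 12 * 3600                  # 43,200 s, per the parent

        rate = bytes_per_night / night_seconds     # bytes per second
        print(f"{rate / 1e6:.0f} MB/s sustained")  # ~694 MB/s

        per_drive = 100e6                          # assumed per-drive write speed
        print(f"~{rate / per_drive:.0f} drives writing in parallel")  # ~7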

    • 30 TB per night sounds like a lot, but 1.5 TB drives are about AUD 350 each, retail

      Funny, but the idea of buying and installing twenty top-of-the-line new disks each day still sounds like big numbers to me...

      Not to mention that they need backups. How many tapes do they have to buy? And data transfer, too. All those bytes are worthless if no one gets to see them, so they need at least 30 TB / day data link capacity.
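
      For the link-capacity point, moving 30 TB every 24 hours is a sustained multi-gigabit feed (a quick check, assuming the transfer runs around the clock):

        bits_per_day = 30e12 * 8
        seconds_per_day = 86400

        print(f"{bits_per_day / seconds_per_day / 1e9:.1f} Gbit/s")  # ~2.8 Gbit/s sustained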

  • ... How many furlongs per meter? How many fortnights per night? I can't understand these eeeevil foreign units.
  • Nits and Grins (Score:4, Informative)

    by martyb ( 196687 ) on Saturday October 04, 2008 @08:05AM (#25255511)

    From TFA title: (emphasis added)

    6.7 Meter Telescope To Capture 30 Terabytes Per Night

    <nit>
    That's a 6.7 meter effective-diameter telescope. The primary mirror has a diameter of 8.4m, but the tertiary mirror (5.2m diameter) sits right in the middle of the primary, so its area needs to be subtracted. The area of the primary is pi*(8.4/2)^2, about 55.4m^2, and the area of the tertiary is pi*(5.2/2)^2, about 21.2m^2; a single mirror with the remaining 34.2m^2 of area would have a diameter of about 6.6m, roughly the 6.7m quoted.
    </nit>
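
    A quick reproduction of that arithmetic (mirror diameters as given above):

      import math

      primary_d = 8.4   # m
      tertiary_d = 5.2  # m, central obstruction per the comment above

      clear_area = math.pi * ((primary_d / 2) ** 2 - (tertiary_d / 2) ** 2)  # ~34.2 m^2
      effective_d = 2 * math.sqrt(clear_area / math.pi)

      print(f"effective diameter: {effective_d:.2f} m")  # ~6.60 m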

    6.7 Meter Telescope To Capture 30 Terabytes Per Night

    <grin>
    Hey!! I thought information wanted to be free! And here they plan to go off and capture 30 TERAbytes? Each night? OMG!!!!11Eleventy!! Say it ain't so!!
    </grin>

    • 6.7 Meter Telescope To Capture 30 Terabytes Per Night

      <grin> Hey!! I thought information wanted to be free! And here they plan to go off and capture 30 TERAbytes? Each night? OMG!!!!11Eleventy!! Say it ain't so!! </grin>

      What's the big deal? That's only 0.00000000003 yottabytes a night. :-)

  • Jennifer Gates [wikipedia.org] was right: 640TB ought to be enough for anybody.

  • This telescope is amazing. The three-mirror configuration gives sharp focus over a very wide field... the only problem is that the focus lies on a spherical surface.

    The LSST fixes this by having three relatively small (small compared to the mirrors) lenses [llnl.gov] to flatten the field, and they use a very large image sensor.

    I am curious whether they considered using a non-flat image sensor. It would be hard, but with e-beam or UV-laser lithography, I would think that you would be able to build a big sensor on a curved surface.

    • The detector surface is indeed effectively curved. It's made up of a large number of CCDs, each of which will be tangent to the focal surface at its location.
  • ... the p0rn industry for doing the groundbreaking research needed to manage this quantity of data.
  • The Large Hadron Collider, the Large Synoptic Survey Telescope, seismic data acquisition, and genome decoding all use as much data capacity as exists, and that now means terabytes per day. Video tapes now have that kind of capacity.
  • How much porn can one person save onto a 150 petabyte drive? That's so many pornabytes. Jeez.
