
IBM Designing Superman Servers For World's Largest Telescope

Posted by Soulskill
from the batman-servers-were-too-melancholy dept.
Nerval's Lobster writes "How's this for a daunting task? By 2017, IBM must develop low-power microservers that can handle 10 times the traffic of today's Internet — and resist blowing desert sands, to boot. Sound impossible? Hopefully not. Those are the design parameters of the Square Kilometer Array (SKA) Project, the world's largest radio telescope, located in South Africa and Australia amid some of the world's most rugged terrain. It will be up to the SKA-specific business unit of South Africa's National Research Foundation, IBM, and ASTRON (also known as the Netherlands Institute for Radio Astronomy) to jointly design the servers. Scientists from all three organizations will collaborate remotely and at the newly established ASTRON & IBM Center for Exascale Technology in Drenthe, the Netherlands. By peering into the furthest regions of space, the SKA project hopes to glimpse 'back in time,' to when the radio waves from some of the earliest moments of the universe — before stars were formed — are still detectable. The hardware is powerful enough to pick up an airport radar on a planet 50 light-years away, according to the SKA team."
Comments Filter:
  • by Anonymous Coward on Tuesday March 12, 2013 @08:43PM (#43154945)
    The LHC only records about 25 PB a year, though, as the raw data is heavily filtered by custom hardware before reaching the more off-the-shelf computers that record data for later use. SKA, on the other hand, needs to hold on to raw data for a couple of hours until a run is complete, requiring intermediate storage of about a PB an hour, which then gets reduced to about 1-5 PB a day for longer-term storage and analysis. The intermediate data will use conventional hardware for processing, but even ignoring that, the long-term data — that which needs to be stored and distributed — will outpace the LHC's yearly production in about a week. If you wanted a more apples-to-apples comparison to the LHC's raw data collection, you would need to look at the amount of raw data produced before it is filtered down to commodity computer hardware. And with a final goal of thousands of antennas collecting signals up to 30 GHz across nearly the full spectrum, that is a lot more than the roughly 10 terabits/s the LHC generates, and the intermediate 1 PB/hr of data for SKA is much more than the LHC's intermediate ~1 TB/hr.
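    The comparison above can be sanity-checked with simple arithmetic. This is a sketch using only the commenter's own figures (25 PB/year for the LHC, 1-5 PB/day and ~1 PB/hr for SKA), which are estimates, not verified values:

    ```python
    # Back-of-the-envelope check of the data-rate claims in the comment above.
    # All figures come from the comment itself; none are independently verified.

    LHC_YEARLY_PB = 25        # LHC long-term recorded data, PB per year
    SKA_DAILY_PB_LOW = 1      # SKA long-term data, low estimate, PB per day
    SKA_DAILY_PB_HIGH = 5     # SKA long-term data, high estimate, PB per day

    # How many days SKA needs to match one full year of LHC output
    days_at_high_rate = LHC_YEARLY_PB / SKA_DAILY_PB_HIGH  # 5 days at 5 PB/day
    days_at_low_rate = LHC_YEARLY_PB / SKA_DAILY_PB_LOW    # 25 days at 1 PB/day
    print(f"SKA matches the LHC's yearly output in "
          f"{days_at_high_rate:.0f}-{days_at_low_rate:.0f} days")

    # Intermediate-storage comparison: SKA ~1 PB/hr vs LHC ~1 TB/hr
    ska_intermediate_tb_per_hr = 1000  # 1 PB expressed in TB (decimal units)
    lhc_intermediate_tb_per_hr = 1
    ratio = ska_intermediate_tb_per_hr / lhc_intermediate_tb_per_hr
    print(f"SKA's intermediate rate is ~{ratio:.0f}x the LHC's")
    ```

    At the high end of the estimate (5 PB/day), the "about a week" claim holds; at the low end it is closer to a month.
    
    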

