Hugh Pickens writes: "The NY Times reports that researchers and workers in fields as diverse as biotechnology, astronomy, and computer science will soon find themselves overwhelmed with information, so the next generation of computer scientists will have to learn to think at Internet scale: petabytes of data. For the most part, university students have used rather modest computing systems to support their studies, but these machines fail to churn through enough data to really challenge and train a young mind meant to ponder the mega-scale problems of tomorrow. 'If they imprint on these small systems, that becomes their frame of reference and what they're always thinking about,' said Jim Spohrer, a director at IBM's Almaden Research Center. This year, the National Science Foundation funded 14 universities that want to teach their students how to grapple with big-data questions, and students are beginning to work with data sets like that of the Large Synoptic Survey Telescope, the largest public data set in the world; the telescope takes detailed images of large chunks of the sky and produces about 30 terabytes of data each night. 'Science these days has basically turned into a data-management problem,' says Jimmy Lin, an associate professor at the University of Maryland."