CERN Releases 300TB of Large Hadron Collider Data Into Open Access (techcrunch.com) 60
An anonymous reader writes: The European Organization for Nuclear Research, known as CERN, has released 300 terabytes of collider data to the public. "Once we've exhausted our exploration of the data, we see no reason not to make them available publicly," said Kati Lassila-Perini, a physicist who works on the Compact Muon Solenoid detector. "The benefits are numerous, from inspiring high school students to the training of the particle physicists of tomorrow. And personally, as CMS's data preservation coordinator, this is a crucial part of ensuring the long-term availability of our research data," she said in a news release accompanying the data. Much of the data comes from 2011 runs, with protons colliding at 7 TeV (teraelectronvolts). The 300 terabytes include both raw data from the detectors and "derived" datasets, and CERN is providing tools to work with the data, which is handy.
Pseudoscientists of the world, unite! (Score:5, Insightful)
I can just visualize a horde of crackpots using this data to fuel fringe theories, find messages from God, and prove the existence of aliens.
That being said, this is awfully cool from CERN. The raw data will be really useful in academic environments, and the Linux visualization tools are great.
Re: (Score:2, Funny)
I can just visualize a horde of crackpots using this data to fuel fringe theories
I heard from good authority that the LHC breached a planar dimension, and one of its red/white striped inhabitants escaped into the LHC data stream.
So now they're releasing the data to the public in the hope that someone will find this Wimpy Alien Lifeform Data Object (WALDO)...
Re: (Score:2)
"Once we've exhausted our exploration of the data, we see no reason not to make them available publicly,"
Actually they're just sifting and patching out the winning lottery numbers first. In these ~300TB dregs you'd be lucky to find a Pick 3. Best suggestion is to make a list of numbers absent from the data and play those.
There's also a lot of Quantum Space Spam in it, such as embedded 3D jpgs meant to be projected into 4D space showing reproductive attachments for higher dimensional beings.
Re:Pseudoscientists of the world, unite! (Score:4, Interesting)
Data from most NASA astronomy satellites is available after a specified amount of time.
For example, Hubble Space Telescope data are available after one year, and Fermi Gamma-ray Space Telescope data are available as soon as they're processed (within one day).
Software tools are also publicly available along with software support.
Nice to see particle physicists catching up with astronomers on data release!
Re: Pseudoscientists of the world, unite! (Score:1)
CERN has been releasing a lot of data since the 90s. The hard part is not deciding to release it and copy-pasting the data onto some web server; it's documenting it and making sure there are tools to work with the data. Both the particle physics and astronomy fields have put decades of time and money into developing these tools. It is now at the point where other smaller, data-heavy projects can take advantage of the same tools with a lot less investment of manpower. But it still
Re: (Score:2)
horde of crackpots using this data to fuel fringe theories
To be fair, most crackpots would manage to fuel their fringe theories just as well without this data.
Re: (Score:2)
I'm still expecting to see the crackpots doing the logical equivalent of reading from a book held upside down.
Re: (Score:3)
By the time you have downloaded the 300 TB, they'll have built another, bigger, particle collider, and released an even bigger tarball about that one.
No reason not to make them available publicly? (Score:5, Insightful)
Re: (Score:1)
And the miniature black hole they created, which has since wandered to the center of the Earth and will eat this planet from the inside over the next few decades. Wake up, sheeple, we must leave Earth before it's too late! Scientists' experiment gone mad!
Re:No reason not to make them available publicly ? (Score:5, Insightful)
If I'm not mistaken, the LHC has been publicly funded, so these data should have been public to start with. Anything else is bs.
It's standard practice in experimental particle physics to give those who put the time and effort into designing, building, and running the experiment the first chance to analyze the data and publish results. After that, it's not unusual to release the raw data publicly. Otherwise, there'd really be no incentive to do the work, since someone else could swoop in and publish results without having contributed to producing the data.
Re: No reason not to make them available publicly (Score:3)
It was available to all scientists of the funding and visiting countries. Now that the scientists are through with it, you can have a look too.
Re: No reason not to make them available publicly (Score:2)
It is now. Before that the people who developed the experiments got first access. I personally understand that perfectly. They invested decades of their lives.
Re: (Score:3)
Why? What interest does the general population have in access to the LHC data? They've already released a subset of the data [opendata.cern.ch] for educational purposes, in addition to this considerable data dump. It serves no public interest to make the whole data set available to everyone; in fact, it would run contrary to the public interest: the data set is absolutely massive (the LHC produces petabytes of data per day), and the costs associated with making that data available to the public would be non-negligible.
If a spe
Re: (Score:2)
Cool. Where's the torrent? It's not in TPB yet.
Re: (Score:2)
Unfortunately it's not simple. Scientists and the organizations they work for are judged on their publications, so a lab that spent a lot of money to build a new experiment needs to show a lot of publications from that experiment or it won't get future funding.
It's not a great system, but it's what's in place, and it's not obvious how to do better.
300TB (Score:2)
I understand there are tools to work with these data, but even so, 300TB is a lot. Wouldn't it be better, assuming they want to encourage future generations of particle physicists, to open-source the tools and provide better instruction on how to manage these data? That seems like half the problem. No way will anyone in high school download 300TB to play with. Even if they could, what would they use to play with it?
Re: 300TB (Score:2)
Nobody will use it in high school, where people already struggle with calculus. It might be helpful in college and university.
Re: (Score:2)
Yeah, certainly, building the largest particle collider in the world is way easier than copying 300 TB of data. And it will be way more fun, too!
Re: (Score:2)
You manage it on a thumb drive, DUH! Make sure it's USB 3.0! Load the data into Excel, and you can make pretty graphs.
Re: (Score:3)
Congratulations on not following TFLinks. They did open-source the tools and provide instructions.
You also don't need to download the entire 300 TB; the data is divided into batches.
Available on the CERN Open Data Portal - which is built in collaboration with members of CERN's IT Department and Scientific Information Service - the collision data are released into the public domain under the CC0 waiver and come in two types: The so-called 'primary datasets' are in the same format used by the CMS Collaboration to perform research. The 'derived datasets' on the other hand require a lot less computing power and can be readily analysed by university or high-school students, and CMS has provided a limited number of datasets in this format.
Notably, CMS is also providing the simulated data generated with the same software version that should be used to analyse the primary datasets. Simulations play a crucial role in particle-physics research and CMS is also making available the protocols for generating the simulations that are provided. The data release is accompanied by analysis tools and code examples tailored to the datasets. A virtual-machine image based on CernVM, which comes preloaded with the software environment needed to analyse the CMS data, can also be downloaded from the portal.
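For a sense of what poking at one of the derived datasets might look like, here's a minimal Python sketch. It assumes a CSV-style derived file downloaded locally; the file name (dimuon_derived.csv) and the column name (M, for an invariant mass in GeV) are placeholders for illustration, not the portal's actual schema. The primary datasets, by contrast, need the full software environment shipped in the CernVM image.

import pandas as pd
import matplotlib.pyplot as plt

# Assumed local copy of a CSV-format derived dataset; name and columns are placeholders.
events = pd.read_csv("dimuon_derived.csv")
print(events.describe())  # quick summary of whatever columns the file actually has

# Histogram an assumed invariant-mass column "M" (GeV) on a log scale.
plt.hist(events["M"], bins=200, range=(0, 120))
plt.xlabel("Dimuon invariant mass [GeV]")
plt.ylabel("Events")
plt.yscale("log")
plt.savefig("dimuon_mass.png")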
Library of Congresses? (Score:2)
300 TB?
How many Libraries of Congress is that?
Re: (Score:3)
US or metric LoCs?
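For the curious: using the oft-cited (and very rough) ~10 TB estimate for the Library of Congress's digitized print collection, a quick back-of-the-envelope in Python (the estimate itself is the shaky part here):

LOC_TB = 10               # rough, commonly cited figure for the LoC print collection, in TB
DATA_TB = 300
print(DATA_TB / LOC_TB)   # ~30 Libraries of Congress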
Re: (Score:2)
It's not like we have 300TB SANs in our homes or schools.
Not yet, anyway, but in the next five to ten years that might not be a problem any longer. Plus, I would hope the data is in a more manageable form than just one giant tarball (is there any file system that allows an individual file that big anyway?)
Re: (Score:1)
The place I work for has a 374TB SAN that's EoL. We got some quotes for resale, and the best offer we were given was £3000.
So a bit of a chunk of cash, but not crazy.
El Psy Congaroo (Score:2)
Maybe now we can unlock the mysteries of Steins;Gate! Mwahaha!
After reading through 300TB, you'll be at... (Score:2)
human is dead, mismatch.
300 TB. How many floppies? (Score:1)
Just curious how many floppy disks would it take to store 300 TB?
Re: (Score:2)
Why not use the 8 inch hard sectored floppy disks? Of course there's nowhere you could either read or write them... but they were a bit thinner, and I think they stored all of 100KB.
Re: (Score:1)
Why not use the 8 inch hard sectored floppy disks? Of course there's nowhere you could either read or write them... but they were a bit thinner, and I think they stored all of 100KB.
Because I am willing to bet there were more 3.5" floppies made than all other types of removable media put together. I have a DEC RX-02 somewhere, but haven't had to use it since 1998... ISTR they held about 500kB.
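Back-of-the-envelope in Python, assuming the nominal capacities mentioned above (1,474,560 bytes for a "1.44 MB" 3.5" disk, roughly 500 kB for an RX-02, and decimal terabytes):

TB = 10**12                 # decimal terabyte, as storage vendors count it
floppy_35 = 1_474_560       # actual byte capacity of a "1.44 MB" 3.5" floppy
rx02 = 500_000              # ~500 kB per RX-02 8" disk, roughly

data = 300 * TB
print(data // floppy_35)    # ~203 million 3.5" floppies
print(data // rx02)         # ~600 million RX-02 disks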
Download Link (Score:1)
http://opendata.cern.ch/about/CMS