
Open Data Needs Open Source Tools 62

Posted by Soulskill
from the stop-trying-to-fork-reality dept.
macslocum writes "Nat Torkington begins sketching out an open data process that borrows liberally from open source tools: 'Open source discourages laziness (because everyone can see the corners you've cut), it can get bugs fixed or at least identified much faster (many eyes), it promotes collaboration, and it's a great training ground for skills development. I see no reason why open data shouldn't bring the same opportunities to data projects. And a lot of data projects need these things. From talking to government folks and scientists, it's become obvious that serious problems exist in some datasets. Sometimes corners were cut in gathering the data, or there's a poor chain of provenance for the data so it's impossible to figure out what's trustworthy and what's not. Sometimes the dataset is delivered as a tarball, then immediately forks as all the users add their new records to their own copy and don't share the additions. Sometimes the dataset is delivered as a tarball but nobody has provided a way for users to collaborate even if they want to. So lately I've been asking myself: What if we applied the best thinking and practices from open source to open data? What if we ran an open data project like an open source project? What would this look like?'"


  • Well... (Score:3, Insightful)

    by fuzzyfuzzyfungus (1223518) on Tuesday March 09, 2010 @01:35PM (#31416166) Journal
    The organizational challenges are likely a nasty morass of situation-specific oddities, special cases, and unexpectedly tricky personal politics; but OSS technology has clear application.

    Most of the large and notable OSS programs are substantially sized codebases distributed and developed across hundreds of different locations. If only by sheer necessity, OSS revision control tools are up to the challenge. That won't change the fact that gathering good data about the real world is hard; but it will make managing a big dataset with a whole bunch of contributors, and keeping everything in sync, a whole lot easier. Any of the contemporary (i.e. post-CVS distributed) revision control systems could do that easily enough. Plus, you get something resembling a chain of provenance (at least once the data enter the system) and the ability to filter out commits from people you think are unreliable.
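
    The workflow described above can be sketched with plain git. This is a minimal, hypothetical example (the dataset, names, and paths are invented for illustration): each change to the dataset becomes a commit, so author and history travel with the data.

    ```shell
    # Sketch: tracking a small dataset in git so every change carries
    # author and provenance metadata. All names and paths are hypothetical.
    rm -rf /tmp/opendata && mkdir /tmp/opendata && cd /tmp/opendata
    git init -q
    git config user.name "Field Team A"
    git config user.email "team-a@example.org"

    printf 'id,species,count\n1,kiwi,12\n' > records.csv
    git add records.csv
    git commit -q -m "Initial survey data"

    printf '2,kea,7\n' >> records.csv
    git commit -q -am "Add kea observations"

    # Chain of provenance: who changed which line, and when.
    git log --oneline -- records.csv
    git blame records.csv
    ```

    Filtering out an untrusted contributor is then a matter of inspecting `git log --author` before merging their branch.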
  • Really? (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 09, 2010 @01:38PM (#31416224)

    it can get bugs fixed or at least identified much faster (many eyes),

    So then why were there all those buffer overflow issues and null pointer issues in the Linux kernel before Coverity ran its scan on the code? Why did that Debian SSH bug exist for over two years if this is true?

  • Already being done (Score:5, Insightful)

    by kiwimate (458274) on Tuesday March 09, 2010 @01:43PM (#31416288) Journal

    What if we ran an open data project like an open source project? What would this look like?

    Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

  • by bokmann (323771) on Tuesday March 09, 2010 @01:56PM (#31416490) Homepage

    Interesting problem. Several things come to mind:

    1) The Pragmatic tip "Keep Knowledge in Plain Text" (from The Pragmatic Programmer, the book that also brought us DRY). You can argue over whether XML, JSON, etc. count as 'plain text', but the spirit is simple: data is open when it is usable.

    2) Tools like diff and patch. If you make a change, you need to be able to extract that change from the whole and give it to other people.

    3) Version control tools to manage the complexity of forking, branching, merging, and otherwise dealing with all the many little 'diffs' people will create. Git is an awesome decentralized tool for this.

    4) Open databases. Not just SQL databases like Postgres and MySQL, but other database types for other data structures like CouchDB, Mulgara, etc.
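
    Points (1) through (3) compose naturally: keep the dataset in plain text, and changes can travel as ordinary patches. A minimal sketch, with hypothetical filenames:

    ```shell
    # A plain-text dataset whose changes travel as patches.
    # Filenames and contents are invented for illustration.
    cd /tmp
    printf 'id,city,pop\n1,Wellington,212000\n' > cities.csv
    cp cities.csv cities-edited.csv
    printf '2,Dunedin,126000\n' >> cities-edited.csv

    # diff extracts just the change (exit status 1 simply means "files differ")...
    diff -u cities.csv cities-edited.csv > cities.patch || true

    # ...and a collaborator applies it to their own copy.
    patch cities.csv < cities.patch
    cmp -s cities.csv cities-edited.csv && echo "copies now match"
    ```

    The same `cities.patch` file could be mailed around or attached to a tracker ticket, which is exactly how open source projects exchanged changes before distributed version control.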

    All of these tools have the power to help address this problem, but they come with a barrier to entry: using them requires skill not just with the tool, but in the problem space of 'data management'.

    The problem of data management, as well as the job of pointing to one set as 'canonical', should be in the hands of someone capable of doing the work. Perhaps there is a skillset worth defining here - some offshoot of library science?

  • by mikael_j (106439) on Tuesday March 09, 2010 @01:56PM (#31416494)

    I don't think kiwimate was saying that Wikipedia is an open source project, just that Wikipedia is a great example of an open data project run like an open source project.

    /Mikael

  • by musicalmicah (1532521) on Tuesday March 09, 2010 @02:19PM (#31416734)

    What if we ran an open data project like an open source project? What would this look like?

    Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

    Gee, you make it sound so terrible when you put it like that. It also happens to be an amazing source of information and the perfect resource for an initial foray into any research topic. It's a shining example of what happens when huge amounts of people want to share their knowledge and time with the world. Sure, it's got a few flaws, but in the grand scheme of things, it has made a massive body of information ever more accessible and usable.

    Moreover, I've seen all the flaws you've listed in closed collaborative projects as well. Like all projects, Wikipedia is both a beneficiary and a victim of human nature.

  • by Anonymous Coward on Tuesday March 09, 2010 @02:27PM (#31416852)

    What if we ran an open data project like an open source project? What would this look like?

    Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

    Right, because no one ever edits Wikipedia because some self-interested, self-proclaimed authority has written something erroneous?

  • by wastedlife (1319259) on Tuesday March 09, 2010 @03:35PM (#31417802) Homepage Journal

    Unlike most open source projects, Wikipedia accepts anonymous contributions and then immediately publishes them without review or verification. That seems like a very strong difference to me.

  • by jonaskoelker (922170) <jonaskoelker AT gnu DOT org> on Tuesday March 09, 2010 @04:21PM (#31418448) Homepage

    What if we ran an open data project like an open source project? What would this look like?

    Wikipedia. With all the inherent problems of self-proclaimed authorities

    Who do not have commit access.

    That is one of the keys to running an open source project well: you, being the giant with some source code, let everybody stand on your shoulders so they can see farther. And you let others stand on their shoulders so they can see even farther still.

    But you don't let just about anyone become part of your shoulders. Especially not if that would weaken your shoulders (i.e. bad code or citation-free encyclopaedia entries).

    That's the difference between Open Source projects and the Wikipedia project: Wikipedia lets the midgets stand on the shoulders of the giant, even if that makes the giant shorter rather than taller. Well-run open source projects don't let that happen. And poorly run open source projects don't exist due to survivor bias ;-)

  • by oneiros27 (46144) on Tuesday March 09, 2010 @08:23PM (#31421400) Homepage

    You're assuming that the differences are something that someone can keep up with in real time. If someone makes a change in calibration that results in a few months' worth of data changing, it might take weeks or even months to catch up (as you're still trying to deal with the ingest of the new data at the same time). As for bittorrent, p2p is banned in some federal agencies -- and as such, we haven't had a chance to find out how well it scales to dealing with lots (tens of millions) of small files (1 to 16MB).

    As for the low-level issues -- it's not even close. The problem is that people build their catalogs to handle the type of science they want to do; they often don't revolve around the same concepts, and they might have one or thousands of tables. See my talk Data Relationships: Towards a Conceptual Model of Scientific Data Catalogs [nasa.gov] from the 2008 American Geophysical Union.

    I've been working for years with people who want to search the data from the systems I maintain, but the way they want me to describe the data to make it searchable isn't easy to define -- even terms like 'instrument' mean something different between their system and mine. (And I have a paper submitted to the Journal of Library Metadata's special 'eScience' issue, dealing with issues in terminology and other problems that the library field doesn't typically run into, but that we have to deal with in science informatics.)

    Disclaimer : If it's not apparent from the message, I work in this field.

