Open Data Needs Open Source Tools

macslocum writes "Nat Torkington begins sketching out an open data process that borrows liberally from open source tools: 'Open source discourages laziness (because everyone can see the corners you've cut), it can get bugs fixed or at least identified much faster (many eyes), it promotes collaboration, and it's a great training ground for skills development. I see no reason why open data shouldn't bring the same opportunities to data projects. And a lot of data projects need these things. From talking to government folks and scientists, it's become obvious that serious problems exist in some datasets. Sometimes corners were cut in gathering the data, or there's a poor chain of provenance for the data so it's impossible to figure out what's trustworthy and what's not. Sometimes the dataset is delivered as a tarball, then immediately forks as all the users add their new records to their own copy and don't share the additions. Sometimes the dataset is delivered as a tarball but nobody has provided a way for users to collaborate even if they want to. So lately I've been asking myself: What if we applied the best thinking and practices from open source to open data? What if we ran an open data project like an open source project? What would this look like?'"
  • eclipse? (Score:4, Informative)

    by toastar ( 573882 ) on Tuesday March 09, 2010 @01:32PM (#31416114)

    Is Eclipse not open source?

    • return false;

      • by zuzulo ( 136299 )

        Just force everyone to use a versioning system. It wouldn't take a lot of tweaks to make an existing open source versioning system suitable for various types of data sets, after all. It mostly depends on whether the data you are using is compressed, but even so, the metadata and analysis associated with the raw data are unlikely to be compressed ... the hard part would be convincing everyone involved to actually use it. jmho and ymmv of course ... ;-)

        • by gnat ( 1960 )

          Hi, zuzulo. A versioning system would definitely be part of the solution, but there's more than git behind a successful open source project. In my post, I tried to sketch some of the tools that the data world is missing. Even if everyone just slapped the data into git, that implies it's stored in a format that makes it look like source code and so is amenable to diff and patch. What if we add a new column to the database? That affects every row, but should it be stored as a completely new version of the dataset?
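          A minimal sketch of the record-level approach being gestured at here: diff two snapshots of a dataset by key, and report a new column once, as a schema event, instead of as a change to every row. The CSV layout and field names are invented for illustration.

              import csv
              import io

              def record_diff(old_csv, new_csv, key="id"):
                  """Diff two CSV snapshots at the record level.

                  A new column surfaces once in schema_changes rather than
                  marking every row as modified."""
                  old_rows = {r[key]: r for r in csv.DictReader(io.StringIO(old_csv))}
                  new_rows = {r[key]: r for r in csv.DictReader(io.StringIO(new_csv))}

                  old_cols = set(next(iter(old_rows.values()), {}).keys())
                  new_cols = set(next(iter(new_rows.values()), {}).keys())
                  schema_changes = {"added": sorted(new_cols - old_cols),
                                    "removed": sorted(old_cols - new_cols)}

                  added = [k for k in new_rows if k not in old_rows]
                  removed = [k for k in old_rows if k not in new_rows]
                  shared = old_cols & new_cols   # compare only shared columns
                  modified = [k for k in new_rows if k in old_rows and
                              any(old_rows[k][c] != new_rows[k][c] for c in shared)]
                  return schema_changes, added, removed, modified

              old = "id,name\n1,ann\n2,bob\n"
              new = "id,name,email\n1,ann,a@example.org\n2,bobby,b@example.org\n"
              print(record_diff(old, new))
              # ({'added': ['email'], 'removed': []}, [], [], ['2'])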

    • Re:eclipse? (Score:4, Informative)

      by Monkeedude1212 ( 1560403 ) on Tuesday March 09, 2010 @01:38PM (#31416228) Journal

      Who modded him offtopic?
      Eclipse has an open source Data Tools Platform [eclipse.org]

      • Well, he should have mentioned that.

      • by epine ( 68316 )

        Eclipse has an open source Data Tools Platform

        For an extremely laid-back, Zen-like, stream-of-consciousness definition of "has". My stream-of-consciousness experience trying to grok this thing was extremely irritating.

        From Eclipse Data Tools Platform (DTP) Project [eclipse.org]

        "Data Tools" is a vast domain, yet there are a fairly small number of foundational requirements when developing with or managing data-centric systems. (What does it do?) A developer is interested in an environment that is easy to configure (what does it do?), one in which the challenges of application development are due to the problem domain (what does it do?), not the complexity of the tools employed. (What does it do?) Data management, whether by a developer working on an application (what does it do?), or an administrator maintaining or monitoring a production system (what does it do?), should also provide a consistent (what does it do?), highly usable environment that works well with associated technologies. (What does it do?)

        Three rules plucked from Ten rules for writing fiction [guardian.co.uk] by Elmore Leonard

        Never open a book with weather. If it's only to create atmosphere, and not a character's reaction to the weather, you don't want to go on too long. The reader^H^H^H^H^H^Hgeek is apt to leaf ahead looking for people^H^H^H^H^H^Hpurpose.

        Don't go into great detail describing places and things (or meta framework), unless you're Margaret Atwood and can paint scenes with language. You don't want descriptions that bring the action, the flow of the story, to a standstill.

        Try to leave out the part that readers tend to skip. Think of what you skip reading a novel: thick paragraphs of prose you can see have too many words in them.

        I generally get along well with Eclipse, but for the love of God:

        What does DTP do?

  • Well... (Score:3, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday March 09, 2010 @01:35PM (#31416166) Journal
    The organizational challenges are likely a nasty morass of situation-specific oddities, special cases, and unexpectedly tricky personal politics; but OSS technology has clear application.

    Most of the large and notable OSS programs are substantially sized codebases distributed and developed across hundreds of different locations. If only by sheer necessity, OSS revision control tools are up to the challenge. That won't change the fact that gathering good data about the real world is hard; but it will make managing a big dataset with a whole bunch of contributors, and keeping everything in sync, a whole lot easier. Any of the contemporary (i.e. post-CVS, distributed) revision control systems could do that easily enough. Plus, you get something resembling a chain of provenance (at least once the data enter the system) and the ability to filter out commits from people you think are unreliable.
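    As a minimal sketch of that last point, here is git driven from Python: list every commit that touched a data file, skipping authors you have decided not to trust. It assumes git is installed and the dataset already lives in a repository; the file and author names are made up.

        import subprocess

        def commits_touching(path, repo=".", exclude_authors=()):
            """Provenance via revision control: every change to the data
            file is attributable, and consumers can ignore authors they
            do not trust."""
            log = subprocess.run(
                ["git", "-C", repo, "log", "--format=%H|%an|%s", "--", path],
                capture_output=True, text=True, check=True).stdout
            commits = []
            for line in log.splitlines():
                sha, author, subject = line.split("|", 2)
                if author not in exclude_authors:
                    commits.append((sha, author, subject))
            return commits

        # e.g. commits_touching("stations.csv", exclude_authors={"mallory"})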
  • Really? (Score:1, Insightful)

    by Anonymous Coward

    it can get bugs fixed or at least identified much faster (many eyes),

    So then why were there all those buffer overflow issues and null pointer issues in the Linux kernel before Coverity ran its scan on the code? Why did that Debian SSH bug exist for over two years if this is true?

  • Open Street Map (Score:3, Informative)

    by Anonymous Coward on Tuesday March 09, 2010 @01:40PM (#31416260)

    A perfect example of collaboration with a massive dataset:

    http://www.openstreetmap.org/

    • by toastar ( 573882 )

      Gratz for actually reading the article to the end; I gave up after the first paragraph.

    • OpenStreetMap is good and useful if you don't want to fork out money. But it suffers from some vandalism and some bad data. It needs more quality control if I'm going to depend on it in a remote location or when a life may be at stake. It will probably get more QC, and then end up with some of the negative points that Wikipedia has.
  • Already being done (Score:5, Insightful)

    by kiwimate ( 458274 ) on Tuesday March 09, 2010 @01:43PM (#31416288) Journal

    What if we ran an open data project like an open source project? What would this look like?

    Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

    • by viralMeme ( 1461143 ) on Tuesday March 09, 2010 @01:50PM (#31416412)

      > Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about ..

      Wikipedia isn't an open source project; it's an online collaborative encyclopedia. Mediawiki [mediawiki.org], on the other hand, is the software project that powers Wikipedia.

      • by mikael_j ( 106439 ) on Tuesday March 09, 2010 @01:56PM (#31416494)

        I don't think kiwimate was saying that Wikipedia is an open source project, just that Wikipedia is a great example of an open data project run like an open source project.

        /Mikael

        • Re: (Score:3, Insightful)

          by wastedlife ( 1319259 )

          Unlike most open source projects, Wikipedia accepts anonymous contributions and then immediately publishes them without review or verification. That seems like a very strong difference to me.

    • by musicalmicah ( 1532521 ) on Tuesday March 09, 2010 @02:19PM (#31416734)

      What if we ran an open data project like an open source project? What would this look like?

      Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

      Gee, you make it sound so terrible when you put it like that. It also happens to be an amazing source of information and the perfect resource for an initial foray into any research topic. It's a shining example of what happens when huge numbers of people want to share their knowledge and time with the world. Sure, it's got a few flaws, but in the grand scheme of things, it has made a massive body of information ever more accessible and usable.

      Moreover, I've seen all the flaws you've listed in closed collaborative projects as well. Like all projects, Wikipedia is both a beneficiary and a victim of human nature.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      What if we ran an open data project like an open source project? What would this look like?

      Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

      Right, because no one ever edits Wikipedia because some self-interested, self-proclaimed authority has written something erroneous?

    • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Tuesday March 09, 2010 @02:50PM (#31417172)

      I've said this a thousand times before: Make Wikipedia a P2P project with no single point of control, and build a cascading network of trust relationships on top of it (think CSS rules, but on articles instead of elements, and one CSS file per user, perhaps including those of others), and you solve every problem that stems from central authorities, which would then no longer exist, including censorship.

      The only caveat: people would have to learn again whom to trust and whom not to. (Example of where this fails: political parties and other groups with advanced social engineering / rhetoric / mass psychology skills, like marketing companies.)
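      For what it's worth, here is a toy sketch of that cascading-trust idea: each user publishes a ranked trust list whose entries are either editors or, like a CSS import, another user's list; an article resolves to the revision by the first trusted editor found. All names and the data layout are invented.

          def resolve(article, revisions, trust_lists, user, seen=None):
              """revisions: {article: {editor: revision_text}}
              trust_lists: {user: [entries]}; an entry starting with '@'
              cascades into another user's list, like a CSS import."""
              seen = seen or set()
              if user in seen:                 # guard against trust cycles
                  return None
              seen.add(user)
              for entry in trust_lists.get(user, []):
                  if entry.startswith("@"):    # follow an imported list
                      hit = resolve(article, revisions, trust_lists,
                                    entry[1:], seen)
                      if hit is not None:
                          return hit
                  elif entry in revisions.get(article, {}):
                      return revisions[article][entry]
              return None

          revisions = {"Tungsten": {"alice": "v2 by alice",
                                    "spammer": "v9 spam"}}
          trust = {"me": ["bob", "@curator"], "curator": ["alice"]}
          print(resolve("Tungsten", revisions, trust, "me"))  # v2 by alice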

      • Just to play devil's advocate

        Make Wikipedia a P2P project with no single point of control, and build a cascading network of trust relationships on top of it, and you solve every problem that stems from central authorities, which would then no longer exist, including censorship

        Who in their right mind is going to set up a network where they themselves are not the central authorities of what goes onto it? Those people who get bored will create thousands upon thousands of useless articles to use up space on the server - and since no authority is there to restrict that, it'll work. And if you put any kind of countermeasure in place, that's opening itself to censorship, or centralized authority.

        The only caveat: people would have to learn again whom to trust and whom not to. (Example of where this fails: political parties and other groups with advanced social engineering / rhetoric / mass psychology skills, like marketing companies.)

        Sounds more like everything would become politically charged -

        • by lennier ( 44736 )

          Who in their right mind is going to set up a network where they themselves are not the central authorities of what goes onto it?

          Nobody, but fortunately that's precisely not the system being proposed. This system would allow everyone to be, themselves, the central authority over what goes into their own network.

          Those people who get bored will create thousands upon thousands of useless articles to use up space on the server - and since no authority is there to restrict that, it'll work.

          Sure. You've just described 'the web' (anyone can post anything, even useless trivia!), so this system won't be much different. But you seem to be under a misapprehension: there won't be any one 'server' - rather, there'll be a single unified data-storage/publishing fabric where you pay for raw information storage/transmission (upload/d

          • by lennier ( 44736 )

            Here's a bit more detail on what I think is an important but overlooked element of this vision:

            In order for people to be not only content creators but content editors, rebroadcasting/remixing other people's work (and this is hugely important - we can't have a unified wiki/blog/Xanadu/datasphere without it; the ability to publish a view of the world is essential), what we need is a unified global publishing and computing fabric.

            By 'computing' I mean we need the ability to publish arbitrary computable functio

      • Re: (Score:3, Interesting)

        by lennier ( 44736 )

        I've said this a thousand times before: Make Wikipedia a P2P project with no single point of control, and build a cascading network of trust relationships on top of it (think CSS rules, but on articles instead of elements, and one CSS file per user, perhaps including those of others), and you solve every problem that stems from central authorities, which would then no longer exist, including censorship.

        I agree wholeheartedly. If I understand correctly, this is very like what David Gelernter [edge.org] is saying with his datasphere/lifestreams concept: a fully distributed system with no centre, where any node can absorb and retransmit its own view of the data universe. Twitter and 'retweets' are a sort of lame, struggling, misbegotten attempt to shamble towards this idea.

        What would happen, I think, is that such a distributed Wikipedia would converge on a few 'trusted super-editors' who produced their own authorised ver

    • What if we ran an open data project like an open source project? What would this look like?

      Wikipedia. With all the inherent problems of self-proclaimed authorities

      Who do not have commit access.

      That is one of the keys to running an open source project well: you, being the giant with some source code, let everybody stand on your shoulders so they can see farther. And you let others stand on their shoulders so they can see even farther still.

      But you don't let just anyone become part of your shoulders. Especially not if that would weaken your shoulders (e.g. bad code or citation-free encyclopaedia entries).

      That's the difference between Open Source projects and th

    • by Yvanhoe ( 564877 )
      And a huge success.
      Face it: the problems you mention exist today, but they are hidden from the public's eye. Giving the public a way to correct them is what Wikipedia did, and it proved workable.
    • [Citation needed]

    • by grcumb ( 781340 )

      What if we ran an open data project like an open source project? What would this look like?

      Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

      So, basically just like any other large-scale, cooperative human enterprise, with the sole distinction that everyone gets to see the sausage being made (and to make it, if they choose)?

    • Wikipedia. With all the inherent problems of self-proclaimed authorities who don't know what they're talking about; bored trouble-makers who inject bad information because they're, well, bored; petty little squabbles which result in valid data being deleted; and so on.

      You misspelled slashdot.

  • Use Open Standards (Score:5, Informative)

    by The-Pheon ( 65392 ) on Tuesday March 09, 2010 @01:46PM (#31416332) Homepage

    People could start by documenting their data in standardized formats, like DDI 3 [ddi-alliance.org].

    • For other scientific data sets that are primarily tabular numeric data, I've always liked NetCDF [wikipedia.org] and, occasionally, HDF [wikipedia.org].
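      For the curious, a minimal sketch of writing and reading a self-describing NetCDF file. The netCDF4 Python package and the variable names are assumptions; the thread only names the formats.

          # Requires the third-party netCDF4 package (pip install netCDF4).
          import numpy as np
          from netCDF4 import Dataset

          with Dataset("temps.nc", "w") as nc:
              nc.title = "Hourly station temperatures"  # metadata travels with the data
              nc.createDimension("time", 4)
              temp = nc.createVariable("temperature", "f4", ("time",))
              temp.units = "degC"
              temp[:] = np.array([12.1, 12.4, 13.0, 12.8], dtype="f4")

          with Dataset("temps.nc") as nc:
              print(nc.title, nc.variables["temperature"][:])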
  • Anyone here use Madagascar?

    http://www.reproducibility.org/ [reproducibility.org]

  • by bokmann ( 323771 ) on Tuesday March 09, 2010 @01:56PM (#31416490) Homepage

    Interesting problem. Several things come to mind:

    1) The Pragmatic tip "Keep knowledge in Plain Text" (from the Pragmatic Programmer book, which also brought us DRY). You can argue whether XML, JSON, etc. are considered 'plain text', but the spirit is simple: data is open when it is usable.

    2) Tools like diff and patch. If you make a change, you need to be able to extract that change from the whole and give it to other people (see the sketch below).

    3) Version control tools to manage the complexity of forking, branching, merging, and otherwise dealing with all the many little 'diffs' people will create. Git is an awesome decentralized tool for this.

    4) Open databases. Not just SQL databases like Postgres and MySQL, but other database types for other data structures like CouchDB, Mulgara, etc.

    All of these things have the power to help address this problem, but they come with a barrier to entry: using them requires skill not just in the tool, but in the problem space of 'data management'.

    The problem of data management, as well as the job of pointing to one set as 'canonical', should be in the hands of someone capable of doing the work. Perhaps there is a skillset worth defining here - some offshoot of library sciences?
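    As a sketch of how points 1-3 compose: once the records live in plain text, ordinary diff machinery applies directly. This uses only Python's standard-library difflib; the file names and records are invented.

        import difflib

        old = ["id,species,count", "1,kakapo,86", "2,kiwi,68000"]
        new = ["id,species,count", "1,kakapo,91", "2,kiwi,68000",
               "3,tuatara,500"]

        # Plain-text data means ordinary diff/patch tooling just works:
        for line in difflib.unified_diff(old, new, "census-2009.csv",
                                         "census-2010.csv", lineterm=""):
            print(line)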

    • by GrantRobertson ( 973370 ) on Tuesday March 09, 2010 @03:58PM (#31418142) Homepage Journal

      Perhaps there is a skillset worth defining here - some offshoot of library sciences?

      That offshoot is called "Information Science." Most "Library Science" programs now call themselves "Library and Information Science" programs. There is now even a consortium of universities that call themselves "iSchools." [ischools.org] In my preliminary research while looking for a graduate program in "Information Science", it seems as if the program at Berkeley [berkeley.edu] has gone the farthest in getting away from the legacy "Library Science" and moving toward a pure "Information Science" program.

      I personally think that the field of "Information Science" is really where we are going to find the next major improvements in the ability of computers to actually impact our daily lives. We need entirely new models of how to look at dynamic, "living" data and track changes not only to the data but to the schema and provenance of that data. That is how "data" becomes "information" and then "knowledge." I won't write my doctoral thesis here, but suffice it to say that simply squeezing data into a decades-old model of software version control is not quite going to cut it. In software version control you don't have as much of a trust problem. Yes, you do care if someone inappropriately copies code from a proprietary or differently-licensed source. However, you don't have as much incentive for people to intentionally fudge the code/data one way or another. In addition, data can be legitimately manipulated, transformed, and summarized to harvest that "information" out of the raw numbers. This does not happen with code. Yes, there is refactoring, but with code it is not as necessary to document every minute change and how it was arrived at. With data, the equations and algorithms used for each transformation need to be recorded along with the new dataset, as do the reasons for those transformations and the authority of those who performed them.

      Throw into the mix that there will be many different sets of similar data gathered about the same phenomena, but with slightly different schemas and different actual data points, all with different provenances, which will need to be manipulated to bring their models into forms parallel to all the other data sets associated with those phenomena while still tracking how they differ ... and you will see that we don't just need a different box to think outside of, we need an entirely different warehouse. (You know, the place where we store the boxes, outside of which we will do our thinking.)

      Many of the suggestions posted here are a start, but only a start.
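      A minimal sketch of the kind of record being asked for here: every derived dataset carries a note saying what was done, to what, by whom, and why. All field names are hypothetical.

          import datetime
          import hashlib
          import json

          def provenance_record(parent_sha256, transform, params, author, reason):
              """A hypothetical provenance entry for a derived dataset."""
              return {
                  "parent": parent_sha256,    # hash of the input dataset
                  "transform": transform,     # e.g. "celsius_to_kelvin"
                  "params": params,           # the exact parameters used
                  "author": author,           # whose authority this rests on
                  "reason": reason,
                  "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
              }

          raw = b"12.1,12.4,13.0\n"
          rec = provenance_record(hashlib.sha256(raw).hexdigest(),
                                  "celsius_to_kelvin", {"offset": 273.15},
                                  "jdoe", "unit harmonisation across stations")
          print(json.dumps(rec, indent=2))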

      • by gnat ( 1960 )

        Could you post a link to your thesis? It sounds interesting. Thanks!

        • The thesis isn't written. I'm not even in graduate school yet. But that is likely what my thesis will be about when I finally write it. My head continuously swims with all the connections between information that need to be tracked. Maybe I'll get to be a pioneer. Woo hoo.

          You can find information about many of my ideas at www.ideationizing.com [ideationizing.com].

    • Semantic Web technologies (in particular RDF, a graph-structured data format) are ideally suited for publishing data. Also, these technologies facilitate the integration of separate pieces of information; integration is what you want to do if thousands of people start publishing structured data. Linked Data [w3.org] (RDF using HTTP URIs to identify things) is already used by the NYT and the UK government to publish data online.
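      A minimal sketch of publishing one such fact as RDF, assuming the third-party rdflib package; the namespace and property names are invented.

          # Requires the rdflib package (pip install rdflib).
          from rdflib import Graph, Literal, Namespace
          from rdflib.namespace import RDFS

          EX = Namespace("http://data.example.org/")  # hypothetical HTTP URIs
          g = Graph()
          station = EX["station/42"]
          g.add((station, RDFS.label, Literal("Rain gauge 42")))
          g.add((station, EX.dailyRainfallMm, Literal(3.5)))

          # Turtle is a common wire format for Linked Data:
          print(g.serialize(format="turtle"))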
  • by headkase ( 533448 ) on Tuesday March 09, 2010 @02:03PM (#31416562)
    High-level: Save your differences from day to day, bittorrent those differences to others, merge back in differences from others. Low-level: OMG, we used different table-names.
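    A toy sketch of that high-level cycle, with records as dicts keyed by id (the transport, bittorrent or otherwise, is out of scope). Note the merge silently presumes both sides agree on keys and schema -- which is exactly the low-level problem above.

        def diff(yesterday, today):
            """Records added or changed since yesterday, keyed by id."""
            return {k: v for k, v in today.items() if yesterday.get(k) != v}

        def merge(mine, theirs):
            """Fold a peer's diff into the local copy (last-writer-wins;
            real conflict handling is the hard part)."""
            merged = dict(mine)
            merged.update(theirs)
            return merged

        mine = {"1": {"name": "ann"}, "2": {"name": "bob"}}
        peer_diff = {"2": {"name": "bobby"}, "3": {"name": "cid"}}
        print(merge(mine, peer_diff))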
    • Re: (Score:3, Insightful)

      by oneiros27 ( 46144 )

      You're assuming that the differences are something that someone can keep up with in real time. If someone makes a change in calibration that results in a few months' worth of data changing, it might take weeks or even months to catch up (as you're still trying to deal with the ingest of the new data at the same time). As for bittorrent, p2p is banned in some federal agencies -- and as such, we haven't had a chance to find out how well it scales to dealing with lots (10s of millions) of small files (1 to 1

  • I just think it is not possible to build such a useful database. I work on parallel computing from a theoretical scheduling perspective.
    Each paper you see is interested in a slightly different model, which needs slightly different parameters, or looks at slightly different metrics.

    Much as I would love to have a database that provides the instances from all those papers, as well as their implementations and results, I do not believe it is going to happen. Since every scientist needs different paramet

  • What if we ran an open data project like an open source project? What would this look like?

    Every time someone asked about the data, they'd get a reply of RTFM.

    Whenever someone didn't like the data, they'd fork it with their own approved data.

    MS would issue a white paper saying why closed source data is better and cheaper

    Every time someone announced some new data, RMS would yell "That's GNU!!!!!"

  • What I've been saying for ages is that the biggest problems for the open data movement are mostly found inside government agencies. Until the open data promoters can establish a cohesive pitch, based around solving goals for the agency in question, these technical solutions are a waste of time. Nat's latest 'open source' model for open data will only excite those already sold on the idea.

    Most of the people who need convincing as to why they should get on board the open data train, need to be sold on

    • by gnat ( 1960 )

      Absolutely! All too often we're guilty of saying "open the data because that's what I believe you should do", not "open the data because this is how it will make your life easier" or "open the data because this is how it will help you do your job", etc. It's come from a technologist-centric pull, but it won't succeed until it becomes a bureaucrat-originated push.

  • The real problem is the lack of a standardized language between different scientists / agencies. It's really up to the funding sources (such as the NCI) to come up with the standards; otherwise you end up with standards that, while technically better, only a few follow, e.g. chembank.broad.mit.edu. Further, having multiple "standards
  • by Bazman ( 4849 ) on Tuesday March 09, 2010 @06:07PM (#31419936) Journal

    Looked at the CKAN software (www.ckan.net)? They run their own knowledge archive, and the software also powers the UK data.gov.uk site. RESTful API and Python client.
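    A minimal sketch of talking to that RESTful API from the Python standard library. The endpoint path follows CKAN's action API, which varies between CKAN versions, so treat it as an assumption.

        import json
        import urllib.request

        def list_packages(base="http://www.ckan.net"):
            """Fetch the names of the datasets a CKAN instance knows about."""
            with urllib.request.urlopen(base + "/api/3/action/package_list") as r:
                body = json.load(r)
            return body["result"]

        # e.g. print(list_packages()[:10])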

  • OpenDAP (Score:3, Informative)

    by story645 ( 1278106 ) <story645@gmail.com> on Tuesday March 09, 2010 @06:57PM (#31420580) Journal

    The main point of the openDAP [opendap.org] project is to facilitate remote collaboration on data, and there are already a few organizations that use it to share data. I've used the Python variant for NetCDF files and have been pretty happy with it, and the web interface is clean. The best part of the OpenDAP project is probably that the data doesn't need to be downloaded/copied to be processed, which is really important for anyone who can't afford the racks of hard drives some of these datasets need.
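    A minimal sketch of the Python client mentioned above; the server URL and variable name are hypothetical.

        # Requires the third-party pydap package (pip install pydap).
        from pydap.client import open_url

        # Only the slices you ask for cross the wire -- no need to mirror
        # the whole dataset locally.
        dataset = open_url("http://opendap.example.org/data/sst.nc")
        sst = dataset["sst"]        # a lazy handle onto the remote array
        print(sst.shape)
        print(sst[0, :10, :10])     # fetches just this subset over HTTP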

  • That's what we do on the fusor forum for amateur high-energy scientists. It's not perfect, but we basically share all that we do in the same manner as open source software, and it's working fine for us. We help the newbies when we can, or tell them to search the extensive archives for when a question has been asked and answered before, and we post data, pictures of our gear, and all that. It's a good crowd, but a small site, so don't all go there at once ... it won't take it, and this isn't funded by some large out
  • Perhaps /. could lead the way by providing an open database of their stories and comments (license changes would be needed with opt-out).

    Then again, I might just think that because I'd rather have a different interface to the same info than the one I'm stuck with.

  • They lost me when I read "Open source discourages laziness (because everyone can see the corners you've cut)".

    Whoever said that hasn't seen a lot of open source GUIs [wikipedia.org] lately. Then they had the nerve to say that open source products make bugs more likely to be identified because more people are looking at them. But how many of those people know what they're looking at? And is the core group that knows what they're looking at any bigger than some for-profit's programming team?

  • Where did it come from, and what is it supposed to represent?

    It's probably just because I'm an electronics geek with a fondness for "hollow state", but that thing sure looks like the business end of a "magic eye tube" to me.

    For those who have no idea what a magic eye tube is:

    http://www.magiceyetubes.com/eye02.jpg [magiceyetubes.com]
    http://en.wikipedia.org/wiki/Magic_eye_tube [wikipedia.org]
