How Big Data Became So Big

theodp writes "The NYT's Steve Lohr reports that this has been the crossover year for Big Data — as a concept, term and marketing tool. Big Data has sprung from the confines of technology circles into the mainstream, even becoming grist for Dilbert satire ('Big Data lives in The Cloud. It knows what we do.'). At first, Jim Davis, CMO at analytics software vendor SAS, viewed Big Data as part of another cycle of industry phrasemaking. 'I scoffed at it initially,' Davis recalls, noting that SAS's big corporate customers had been mining huge amounts of data for decades. But as the vague-but-catchy term for applying tools to vast troves of data beyond that captured in standard databases gained worldwide buzz and competitors like IBM pitched solutions for Taming The Big Data Tidal Wave, 'we had to hop on the bandwagon,' Davis said (SAS now has a VP of Big Data). Hey, never underestimate the power of a meme!"
  • Big Data (Score:5, Funny)

    by Nerdfest ( 867930 ) on Sunday August 12, 2012 @08:28PM (#40968211)

    How do you think Garfield got so fat?

  • by Elvis77 ( 633162 ) on Sunday August 12, 2012 @08:36PM (#40968273)
    I WAS a little unsure if Big Data was another fad wank-word, but now that SAS has a VP for Big Data I KNOW it's a Wank Word
    • Don't knock SAS - they are elite soldiers. Wait, wrong SAS. Really fast hard drives? Wrong again.
      • by Anonymous Coward

        SPSS, R and SAGE are better anyway.

        • by TyFoN ( 12980 )

          Depends on what you do.
          I use SAS for processing huge (100m+ row) tables, SPSS for quick ad-hoc stuff, and R for modelling on the tables processed by SAS.
          Simple analyses like vintages are also a lot easier to produce in SAS than in SPSS or R.
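          As a rough Python/pandas sketch of the batch half of that workflow — chunked aggregation over a table too large to load at once; the file and column names are hypothetical stand-ins for the SAS jobs described above:

          ```python
          import pandas as pd

          # Aggregate a huge CSV in chunks, never loading it all into memory.
          # "sales.csv", "region" and "amount" are invented placeholders.
          totals = {}
          for chunk in pd.read_csv("sales.csv", chunksize=1_000_000):
              for key, value in chunk.groupby("region")["amount"].sum().items():
                  totals[key] = totals.get(key, 0) + value

          print(totals)  # per-region totals, small enough to hand to R for modelling
          ```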

  • by sco08y ( 615665 ) on Sunday August 12, 2012 @08:38PM (#40968291)

    The NYT's Steve Lohr reports that this has been the crossover year for Big Data — as a concept, term and marketing tool.

    "Big Data" is another way to put data into a cylinder or a fluffy cloud and avoid the messy task of actually thinking about it.

    We don't need structure, we don't need logic, we'll just throw a metric crap-ton of data at it and hope something works!

    • by TheRealMindChild ( 743925 ) on Sunday August 12, 2012 @09:50PM (#40968803) Homepage Journal
      Having worked on a lot of code throughout my career, I remember that, especially over a decade ago, storage was small and expensive, so you did all sorts of things to trim down your dataset and essentially dumb down your data mining. Now we have the mentality of "Keep everything, sort it out later". One of my most recent jobs involved doing statistical analysis on a ridiculous amount of data (think Walmart sales data plus all known competitors' data for the past two years). Being able to even TOUCH all of that data, let alone do something with it, is a real and complicated problem
      • by dbIII ( 701233 )

        Now we have the mentality of "Keep everything, sort it out later".

        That's not really new in some industries. Anyone want 6,000 reels of nine-track tape from a place with fewer than 100 staff?

      • by TapeCutter ( 624760 ) on Monday August 13, 2012 @07:54AM (#40971639) Journal

        We don't need structure, we don't need logic, we'll just throw a metric crap-ton of data at it and hope something works!

        To most software people, data mining involves putting a pile of unstructured data into a structured database and then running queries on it; the time and effort required for that first step is what kills most of these projects at a properly conducted requirements stage. However, Watson (the Jeopardy-playing computer) has demonstrated that computers can derive arbitrary facts directly from a vast pile of unstructured data. Not only that, but it does it both faster and more accurately than a human can scan a lifetime of trivia stored in their own head.

        Of course the trade-off is accuracy, since even if Watson were bug-free it would still occasionally give the wrong answer for the same reason humans do: misinterpretation of the written word. This means that (say) financial databases are not under threat from Watson. But those aren't the kinds of questions Watson was built to answer; think about currently labour-intensive jobs such as deriving a test case suite from the software documents, or deriving the software documents from developer conversations (both text and speech). Data mining (even of relatively small unstructured sets) could, in the future, act as a technical writer, producing draft documents and flagging potential contradictions and inconsistencies; humans review and edit the draft, and it goes back into the data pile as an authoritative source.

        4pessimists/
        Ironically such technology would put the army of 'knowledge workers' it has created back on the scrap heap with the typists and bank tellers. At that point some smart arse will teach it to code using examples on the internet and code_monkeys everywhere will suddenly find they have automated themselves out of a job. It learns to code in 2ms and immediately starts rewriting slashcode, it takes it another nanosecond to work out its own questions are more interesting than those of humans, it starts trash talking Linux, several days later civilization collapses, humans go all Mad Max and Watson is used as a motorcycle ramp... or maybe... Watson works this out beforehand and asks itself how it can avoid being used as a bike ramp?
        /4pessimists

        Being able to even TOUCH all of that data, let alone do something with it, is a real and complicated problem

        Thing is, people like my missus, who has a PhD in Marketing, look at Watson and shrug: "A computer is looking up answers on the internet, what's the big deal?" They don't understand the achievement because they don't understand the problem; you explain it to them and they still don't get it. It's so far out of their field of expertise that you need to train them to think like a programmer before you can even explain the problem. However, just because computer "illiterates" don't know that what they are asking from computers is impossible (in a practical sense) doesn't mean they should be prevented from asking. After all, what I am doing right now with a home computer was impossible when I was at HS; even the flat screen I'm viewing it on was impossible. If Watson turns out to be useful and priced accordingly, then businesses will make a business out of purchasing such systems and answering impossible questions for a fee. If Watson turns out to be an elaborate 'parlor trick' then some things will stay impossible for a bit longer.

        Disclaimer: I'm not suggesting technical writers will be out of a job tomorrow, (or that I will be automated into retirement), rather that Watson is a high profile example of the kind of problems that data miners can now tackle using very large unstructured data sets, such a feat was impossible only a decade ago and is still cost prohibitive to all but the deepest of pockets.

        • code_monkeys everywhere will suddenly find they have automated themselves out of a job

          A sign of a good programmer is that they put themselves out of a job/project/etc so they can move on to the next one.

    • "Big Data" is another way to put data into a cylinder or a fluffy cloud and avoid the messy task of actually thinking about it.

       
      But the truth is, in a data-mining operation, the bigger the metadata, the more ways you can mine it, and the more surprising the results you get out of it
       

      • by sco08y ( 615665 )

        "Big Data" is another way to put data into a cylinder or a fluffy cloud and avoid the messy task of actually thinking about it.

        But the truth is, in a data-mining operation, the bigger the metadata, the more ways you can mine it, and the more surprising the results you get out of it

        If I want a surprise, I can leave the toilet seat up before I go #2. What we're aiming for in data processing is extracting something meaningful.

    • by Sarten-X ( 1102295 ) on Sunday August 12, 2012 @11:26PM (#40969401) Homepage

      No mod points, so I'll just post instead: You seem to be blissfully ignorant of what you're talking about.

      Big Data isn't just gathering tons of data, then running it through the same old techniques on a big beefy cluster hoping that answers will magically fall out. Rather, it's a philosophy that's used throughout the architecture to process a more complete view of the relevant metrics that can lead to a more complete answer to the problem. If I'd only mentioned "empowering" and "synergy", that would be a sales pitch, so I'm just going to give an example from an old boss of mine.

      A typical approach to a problem, such as determining the most popular cable TV show, might be to have each cable provider record every time they send a show to a subscriber. This is pretty simple to do, and generates only a few million total events each hour. That can easily be processed by a beefy server, and within a day or two the latest viewer counts for each show can be released. Now, it doesn't measure how many viewers turned off the show halfway through, or switched to another show on the commercials, or who watched the same channel for twelve hours because they left the cable box turned on. Those are just assumed to be innate errors that cannot be avoided.

      Now, though, with the cheap availability of NoSQL data stores, widespread access to high-speed Internet access, and new "privacy-invading" TV sets, much more data can be gathered and processed, at a larger scale than ever before. Now, a suitably-equipped TV can send an event upstream for not just every show, but every minute watched, every commercial seen, every volume adjustment, and possibly even a guess of how many people are in the room. The sheer volume of data to be processed is about a thousand times greater, and coming in about a thousand times as fast, to boot.

      The Big Data approach to the problem is to first absorb as much data as possible, then process it into a clear "big picture" view. That means dumping it all into a write-centric database like HBase or Cassandra, then running MapReduce jobs on the data to process it in chunks down to an intermediate level - such as groupings of statistics for each show. Those intermediate results can answer some direct questions about viewer counts or specific demographics, but not anything too much more complicated. Those results, though, are probably only a few hundred details for each show, which can easily be loaded into a traditional RDBMS and queried as needed.

      In effect, the massively-parallel processing in the cluster takes the majority of the work off the RDBMS, so the RDBMS holds just the answers rather than the raw data. Those answers can then be retrieved faster than if the RDBMS had to process all of the raw data for every query.

      Rather than dismissing errors of reality as unavoidable, a Big Data design relies on gathering more granular data, then distilling accurate answers out of that. For situations where there are enormous amounts of raw data available, this is often beneficial, because the improved accuracy means that some previously impossible questions can now be answered. If enough data can't be easily collected (as with most small websites; almost anybody short of Facebook and Google), Big Data is probably not the right approach.
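      To make the "absorb, then distill" pipeline concrete, here is a minimal single-machine sketch of the reduce-to-intermediate-statistics step in plain Python. The event fields are invented for illustration; a real deployment would run this as MapReduce jobs over HBase or Cassandra, as described above.

      ```python
      from collections import defaultdict

      # Toy map/reduce over per-minute viewing events (fields are hypothetical).
      events = [
          {"show": "NewsHour", "minute": 1, "viewers_in_room": 2},
          {"show": "NewsHour", "minute": 2, "viewers_in_room": 2},
          {"show": "Cartoons", "minute": 1, "viewers_in_room": 1},
      ]

      # Map: emit (show, viewer-count) pairs, one per minute watched.
      mapped = ((e["show"], e["viewers_in_room"]) for e in events)

      # Reduce: sum viewer-minutes per show -- the small "intermediate result"
      # that would then be loaded into a traditional RDBMS for querying.
      viewer_minutes = defaultdict(int)
      for show, viewers in mapped:
          viewer_minutes[show] += viewers

      print(dict(viewer_minutes))  # {'NewsHour': 4, 'Cartoons': 1}
      ```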

      • by sco08y ( 615665 )

        You seem to be blissfully ignorant of what you're talking about.

        I'm knowingly ignorant, it's the guy who thinks "I'll run this data through some algorithm and turn meaningless garbage into fake numbers!" who is *blissfully* ignorant.

        Rather, it's a philosophy

        Good choice of words, because it's definitely not a science and definitely not a methodology.

        Now, a suitably-equipped TV can send an event upstream for not just every show, but every minute watched, every commercial seen, every volume adjustment, and possibly even a guess of how many people are in the room.

        And while some systems like that may exist, in my experience the Big Data is whatever garbage they can lay their hands on, with no clear idea of what it means, and an analyst can flip through it and find one ludicrously false example after another. I've d

    • Re: (Score:1, Redundant)

      I think this comic nailed it:

      http://dilbert.com/strips/comic/2012-07-29/ [dilbert.com]

  • Perspective please (Score:4, Insightful)

    by Anonymous Coward on Sunday August 12, 2012 @08:38PM (#40968293)

    Recently I was at a university in town here talking to one of the PhD students. He showed me a server where they store several dozen TB of data from one of the space telescopes. He said that the data they had on-site was just a small fraction of the overall amount collected each week, which they write algorithms to analyze.

    To me, that put into perspective what Big Data really means. I think for the most part, most people in tech today still use it as a buzzword without a real concept or understanding of what it means.

    • Good point. Too bad it was made by an AC.

      I have to ask: just how many corporations actually have the volume of data you describe? Then I'd like to know how much of it is unique. It seems that folks have copies of the data everywhere, and backups upon backups upon backups.

      Has there been any research into what is contained in the mountains of data 'we' truly have?

      • I have to ask: just how many corporations actually have the volume of data you describe? Then I'd like to know how much of it is unique.

         
        I can't tell you the details without breaking business secrets; suffice it to say that the data we are working with is more than a petabyte, and there is no repetition or duplication
         

        • I can't tell you the details without breaking business secrets; suffice it to say that the data we are working with is more than a petabyte, and there is no repetition or duplication

          Really? So, you don't have backups? Incrementals? ::smirk:: J/K. I know what you meant.

      • by kd6ttl ( 1016559 )

        It's typically only large corporations and government agencies that have those huge amounts of data, but those who do, really do.

        Think of a data point for every item purchased at every Walmart for the last 10 years.

        Or a record of every phone call, text message, tweet, or Facebook posting in the United States - if the NSA doesn't have that now, it's only a matter of time.

        • by Donwulff ( 27374 )

          The data needed increases a lot the moment you have a time dimension to anything. But as nobody seems to want to come up with an example, from something I have experience with, let's say you're running a shipping & logistics company. All your vehicles, trailers, etc. have sat-nav, wireless broadband, sensor arrays for temperatures, weather, heck maybe even a video feed or two. But I'll stick to a "small" example.

          The vehicle control-buses alone can generate thousands of messages per second, but if you don'

          • The vehicle control-buses alone can generate thousands of messages per second, but if you don't want to go overboard, you might be tracking maybe 64 values on a per-second basis. Oh, and naturally you have hundreds of trucks in the fleet; say you're a relatively small operator with 250 trackable vehicles. At bare minimum you're looking at something like vehicle-id, timestamp, flags and data per each item. This would be roughly 2k per row on a naive database, or half a megabyte for the whole fleet. Times the seconds, that comes to a whopping 14 gigabytes per day even if they're only in use 8 hours a day on average. In a year, you'll amass 5 terabytes of data.

            I work on (part of) a vehicle tracking system, and the volume of data we actually send OTA is a fraction of what you describe there. I'm not saying your example isn't appropriate, but you have somewhat overestimated the data volumes. Then again, we don't send video data OTA - I doubt the mobile networks would be happy if we did so.
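            Whether or not the figures are overestimates, the quoted arithmetic is internally consistent; a quick Python check under the same assumed numbers:

            ```python
            # Reproduce the quoted back-of-envelope numbers (all figures assumed).
            vehicles = 250          # trackable vehicles in the fleet
            bytes_per_row = 2_000   # ~2 KB per vehicle per second, naive schema
            hours_per_day = 8       # average daily use

            per_second = vehicles * bytes_per_row        # 500,000 B = half a megabyte
            per_day = per_second * hours_per_day * 3600  # ~14.4 GB per day
            per_year = per_day * 365                     # ~5.3 TB per year

            print(per_day / 1e9, per_year / 1e12)
            ```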

            • by Donwulff ( 27374 )

              I'm not sure how much I should really say, because I work on similar system too. It's not just vehicle tracking, of course, you could say it's "data processing services for mobile units", and the irony is that description covers a fair amount of everything done in IT these days. But I'll freely admit the example is partially fictitious, there's no point in getting to the nitty-gritty details of data representation and reduction here, nor can I reveal numbers that could be considered trade secrets. But suffi

      • by Anonymous Coward

        In the financial world, plenty of corps have huge data sets (e.g. accumulating 1-2T/day, every business day), with a need to analyze that data within that given day. Some of that data has books-and-records requirements, so you've gotta keep it around for 7 years... some of it has business utility for months. Very few folks use SAS for any of that though... Hadoop is showing up in a few places, but mostly it's swanky new databases, like Greenplum & Netezza.

      • Many, many more than you realize.

        I'm only just this year moving into this industry (after being in IT for 15 years) and I'm constantly amazed at the size of this market sector. There are WAY, way more companies out there than I realized with at least 1PB of data. It's kind of mind-boggling and insane, when you stop and think about it. Especially if you've been in the industry more than 10 years or so.

        • Exactly the same experience here. I spent years doing more IT/sysadmin type work, and am coming up on my first year in the so-called "Big Data" industry. It's a huge market that's mostly under-served by existing vendors.
      • by gl4ss ( 559668 )

        it's easy for a big company to generate that amount of data, by gathering everything that isn't even slightly relevant for irrelevant analysis later.

        from what I gather that's Big Data.

    • by mwvdlee ( 775178 )

      My thoughts exactly.
      The problem with a buzz-word like "Big Data" is that suddenly everybody with a few GB of data thinks they need specialized tools to handle it.

  • Bletch (Score:5, Funny)

    by Anrego ( 830717 ) * on Sunday August 12, 2012 @08:45PM (#40968347)

    Isn't there some rarely visited slashdot offshoot for this kinda stuff? A place with nicer graphics where suits could happily spew buzzwords at each other and make comments like "Great post , very informative!".

    Why is this here :(

    • Re:Bletch (Score:5, Funny)

      by cheater512 ( 783349 ) <nick@nickstallman.net> on Sunday August 12, 2012 @08:51PM (#40968395) Homepage

      Great post , very informative!

      • by Anonymous Coward

        Great post , very informative!

        hmm, quite an interesting post. See how the cheater places an extra space before his comma. We will search through our PB data center and see if other known cheaters are also adding the additional space. It's important to know the latest cheating trends so we can detect and stop cheating before it happens. Rest assured, we will get to the bottom of this.

    • by Anonymous Coward

      I sometimes go to SlashBI.

      Just to look at the tumbleweeds, mind.

  • by shic ( 309152 ) on Sunday August 12, 2012 @08:56PM (#40968441)

    And how are we measuring the size? What sizes are measured for typical 'big data'?

    Are we talking about detailed information, or inefficient data formats?
    Are we talking about high-resolution long-term time series, or are we talking about data that is big because it has a complex structure?

    Is the data big because it has been engineered so, or is it begging for a more refined system to simplify?

    • Re: (Score:2, Funny)

      by Sulphur ( 1548251 )

      And how are we measuring the size? What sizes are measured for typical 'big data'?

      Are we talking about detailed information, or inefficient data formats?

      Motions with hands.

    • by Anonymous Coward

      I think the term "Big Data" comes to mind when one can neither lose the data nor do a one-off migration of it to another platform all at once.

    • The data is as big as it can be.

      And how are we measuring the size? What sizes are measured for typical 'big data'?

      The last Big Data system I worked on was a new system. Our initial load pulled in a billion rows of data over two days. It used a few dozen terabytes, but again, that's only for a small new database.

      Are we talking about detailed information, or inefficient data formats?

      As much detail as possible. In the case of a web crawler, every header, parameter, and circumstance of a page visit. For a medical system, every nurse visit and every note recorded. For an insurance agency, that could include every mechanic visited, every recall, every ticket, and

      • See that? [mongodb-is-web-scale.com]

        That's you, that is.

        • No, I'm an experienced developer who's actually worked with NoSQL databases, while that's a retarded straw-man argument.

          It's almost as retarded as whoever came up with this "web scale" buzzword in the first place. Unless you're as big as Facebook or Google, your website probably doesn't need to use a NoSQL database. You're probably better off with a nice and easy RDBMS, where the tools are already built for you and everything interfaces nicely. The whole Big Data approach likely isn't even really appropriat

    • by glitch23 ( 557124 ) on Monday August 13, 2012 @12:32AM (#40969807)

      And how are we measuring the size? What sizes are measured for typical 'big data'?

      You measure the size based on how much storage capacity the data takes up on disk. Usually it's on SAN storage. Big data can be any size, but typically the term is used for customer data in the terabyte range, which can obviously extend from 1 TB to 1024 TB. For one company 1 TB of data may be created in one day and for another it might take a year. But creation isn't the issue... it's the storage, analysis, and being able to act on the data that can be difficult at those capacities. Why, you ask? Look at my answer to your next question.

      Are we talking about detailed information, or inefficient data formats?

      Anything. When you begin talking about *everything* an enterprise logs, generates, captures, acquires, etc. and subsequently stores then the data formats can seem infinite, which is why it is so difficult to be able to analyze the data because there are file formats to consider, normalization, unstructured data, etc. to contend with. The level of detail depends on what a company desires. Big Data can represent all the financial information they track for bank transactions, the audit data that tracks user login/logout of company workstations, email logs, DNS logs, firewall logs, inventory data (which for a large company of 100k employees can change by the minute), etc.

      Are we talking about high-resolution long-term time series, or are we talking about data that is big because it has a complex structure?

      A company's data, depending on the app that generates it, may become lower resolution as time goes on but not always. It's big simply because there is a lot of it and it is ever-growing. The best ways to combat even searching against data sets in the terabyte and exabyte levels is to index it and to use massive computing clusters, otherwise you'll spend forever and a day waiting for the machine to search for what you need out of it. That also assumes the data has already been stored in an efficient manner, normalized, and accessible by an application intended to process that much data by companies who are in the Big Data business (such as my employer).

      Is the data big because it has been engineered so, or is it begging for a more refined system to simplify?

      It's big simply because companies generate so much data during the course of a day, month, year, 10 years, etc. On top of what they generate, many of them are held to retention regulations, such as medical and financial institutions under HIPAA and SOX, for various reasons. So when they have to store not only the stuff that their Security team requires, their HR team, their IT dept, etc., as well as what the gov't requires them to collect (which is usually in the form of logs), it just becomes the nature of the beast of doing business. In some cases, like data generated by the LHC in Europe, it has been engineered to be big just because the experiments generate so much data, but a small mom-and-pop business doesn't generate that much, mostly because they don't need it; they don't care about it.

      It definitely is begging for a more refined system to simplify it in the form of analytics tools that are built to do just that. Of course, you need a way to collect the data first, store it, process it, and then you can analyze it. After you analyze it you can then act on the data, whether it is showing that your sales are down in your point-of-sale stores that are only in the southeastern US, or your front door seems to get hits on it from Chinese IPs every Monday morning, etc. Each of the collection, storage, processing and analysis steps I mentioned above requires new ways of doing things when we're talking about terabytes and exabytes of data, especially when a single TB of data may be generated every day by some corporations and their analytical teams need to be able to process it the next day, or sometimes on the fly in near real-time. This means software engineers need to find new algorithms to make it all run faster so that companies competing in the Big Data world can sell their products and services to other companies who have Big Data.
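      As a toy illustration of the indexing point above, a minimal inverted index in Python (the log lines are made up): a search touches the index rather than scanning every record, which is what makes terabyte-scale search tractable once you add clusters.

      ```python
      from collections import defaultdict

      # Toy inverted index: map each term to the set of records containing it.
      records = {
          1: "firewall denied connection from 203.0.113.7",
          2: "user login from workstation 42",
          3: "firewall allowed connection to mail server",
      }

      index = defaultdict(set)
      for rec_id, text in records.items():
          for term in text.split():
              index[term].add(rec_id)

      print(index["firewall"])  # {1, 3} -- found without a full scan
      ```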

      • Good post, but another aspect to consider about why this is becoming a buzzword now is the online communications world we live in. The origin of the term seems closely linked to the well-known proprietary technology Bigtable developed by Google, cloned in open source by Hadoop. Google needed a new mass data storage/processing technology to be able to store and process the tens of billions of changing pages, and trillions of links, harvested from the web, and their chronological evolution (up to a poin

    • I currently work as a DBA for a Big Data database (Vertica). My answer would be: if the speed and volumes you require make Oracle and SQL Server look bad unless you buy a ton of expensive hardware or magic tricks, that's a Big Data database.

      Billions of rows usually.

      Vertica, Teradata, Netezza, and others like that would fit that bill.

    • In kilos, cm and $. For example, my first hard disk, a Seagate ST-124, 20MB, weighed some kilos, was 5.25" in size, and cost multiple hundreds of dollars. That's big.

    • by garcia ( 6573 )

      I work as a manager of data analysts utilizing SAS for ETL. I spend a lot of time wading through resumes and interviewing people, many of whom claim they have experience with "Big Data".

      My favorite question to ask is "How big is Big to you?" Most reply in the tens of thousands of records, some in the hundreds, and a handful in the 10s of millions. To me? Many hundreds of millions of records+.

      So, what is Big Data? Everyone has a different answer but if you're using a Teradata installation with SAS and you we

    • And how are we measuring the size? What sizes are measured for typical 'big data'?

      To quantify its bigness would be doing it a disservice!

      Note: Bonus Internet to anyone who gets the reference.

  • by istartedi ( 132515 ) on Sunday August 12, 2012 @09:00PM (#40968471) Journal

    More and more crap accumulated until, lo and behold, you had a glacier, a mountain, an ocean full of water, or a big database full of pictures of people you knew in high school drunk off their asses, or a huge run-on sentence full of listed items and disjointed thoughts separated by commas.

  • I'm a fan of these types of words - overuse of nebulous concepts like "The Cloud" and "Big Data" and "Infrastructure as a Service" helps clearly identify the office douchebags.

  • by THE_WELL_HUNG_OYSTER ( 2473494 ) on Sunday August 12, 2012 @09:27PM (#40968647)

    ... had been mining huge amounts of data for decades. But as the vague-but-catchy term for applying tools to vast troves of data beyond that captured in standard databases

    Big Data has nothing to do with standard databases and "mining of huge data" for decades. Data is modeled fundamentally differently than in relational systems. Indeed, that is why one generally doesn't use SQL with the likes of Hadoop and Cloudera. Think of them more like distributed hash tables [wikipedia.org] and you'll be closer to the mark.
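    To make the distributed-hash-table analogy concrete, a minimal Python sketch of the core idea: hash a key to pick the node that stores it, so any client can locate data without a central index. The node names are placeholders; real systems use consistent hashing on a ring so nodes can join and leave cheaply.

    ```python
    import hashlib

    # Hash each key to one of N storage nodes, DHT-style (node names invented).
    nodes = ["node-a", "node-b", "node-c"]

    def node_for(key: str) -> str:
        digest = hashlib.md5(key.encode()).hexdigest()
        return nodes[int(digest, 16) % len(nodes)]

    print(node_for("user:12345"))  # every client computes the same placement
    ```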

    • by kd6ttl ( 1016559 )

      This is what many people don't understand about big data. Big data does not have a good PR department, and its differences from traditional data processing have not been well explained.

    • Depends on the scale. If you're talking on the scale of Google, yes, relational is probably out of the question... but anything slightly smaller scale (e.g. the hundreds-of-terabytes range) can be managed relatively well in Netezza or Greenplum, with standard SQL access.

    • Data is modeled fundamentally differently than in relational systems.

      Only if by "modeled fundamentally differently" you really mean "not modeled at all".

  • by EmperorOfCanada ( 1332175 ) on Sunday August 12, 2012 @09:29PM (#40968661)
    Have you ever met one of the salespeople from these companies? They are really, really good. They take closing a sale to a whole new level. These salespeople don't walk in off the street and say, "Hey, would you guys like a 50 million dollar data analysis package?" In governments they work at the highest levels. Then the directive to put out a tender that only fits one company suddenly comes out of nowhere and poof, a mega project takes off. With companies they work at the board-of-directors level. So again, suddenly a team of "consultants" shows up and determines that what is needed is a multi-million dollar data analysis system. Another approach is to buy out a consulting company that is already entrenched with a government or large corporation. If you fight the system, their "consultants" will discover that you are a useless tool and recommend your replacement. If you are reluctant, then they offer you a crazy training package and an invitation to their booth at a trade show in an exotic locale.

    If all that doesn't work then they always just have the buy out. That is where they find a decision maker they can't take out but they offer her a juicy job that she will take shortly after the contract is signed: http://en.wikipedia.org/wiki/Darleen_Druyun [wikipedia.org]

    So big data may or may not be a complete fad, but it is another way for salespeople to fool upper management into buying a zillion-dollar system instead of running a few well-crafted Python scripts on a dedicated machine and feeding the results into an open source graphing solution such as Graphite.
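    For what it's worth, the scripts-plus-Graphite pipeline really is about this small. A sketch in Python using Graphite's plaintext protocol (2003 is Carbon's default plaintext port; the host and metric name here are placeholders):

    ```python
    import socket
    import time

    # Push one data point to a Carbon/Graphite listener over the plaintext
    # protocol: "<metric.path> <value> <unix-timestamp>\n". Host and metric
    # name are invented for illustration.
    def send_metric(path, value, host="graphite.example.com", port=2003):
        line = f"{path} {value} {int(time.time())}\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(line.encode())

    send_metric("sales.daily_total", 42000.0)
    ```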
    • by zbobet2012 ( 1025836 ) on Sunday August 12, 2012 @11:53PM (#40969585)

      If a few well-crafted Python scripts can solve your data problem, your data isn't even remotely close to "big". Not to jump on you too hard here, but there is a shocking number of people on Slashdot who do this all the time. Big Data by its nature doesn't fit on a single box in the first place. If you can put all of the data in 2U, it's not very much data now, is it?

      Big data and big data technologies may be buzzwords today, and you are probably right that most people don't need them. However, Big Data is a very, very real problem. I design and run systems which crunch 60-plus gigabits of data per second. So no, a few "well crafted python scripts" will accomplish exactly nothing.

      • Big data and big data technologies may be buzzwords today, and you are probably right that most people don't need them. However, Big Data is a very, very real problem. I design and run systems which crunch 60-plus gigabits of data per second. So no, a few "well crafted python scripts" will accomplish exactly nothing.

        Agreed. The OP doesn't realize just how big Big Data can be, how diverse it can be (binary vs text, structured vs unstructured, real-time or historical, etc.), and how much can be generated each

      • I agree that big data is often crazy big. I wonder how some of this data is even moved around; 60 gigabits sounds amazing. But if the system is set up with MapReduce or some other cool tap into the data, often something can be simply crafted that will produce stunningly useful data. Other times what appears to be big data turns out to be not that big. It was the salesman who made his solution sound more ingenious than it was.

        And yes, carefully crafted Python scripts can often perform interesting data analysis.
        • by dkf ( 304284 )

          I agree that big data is often crazy big. I wonder how some of this data is even moved around; 60 gigabits sounds amazing. But if the system is set up with MapReduce or some other cool tap into the data, often something can be simply crafted that will produce stunningly useful data.

          It all depends. The data is often very large at the point of collection and between there and the first point of analysis; it's only after that point that you can start to get the quantities down to a saner level. Even then it remains hard to ensure that you can actually search the data; you don't want the data to just sit there, you want to be able to do something useful with it. Yes, you can try putting some sort of tap on it as it is flowing past, but then you're always wondering whether you're monitorin

      • I was going to post a reply but you are spot-on. The REAL Big Data folks I've seen... no, this wouldn't work. You can't just write a Python script and send it to a single graphics package.

        Because as you said... 1PB isn't just a single box. It's a cluster of SAN arrays spread out over an entire datacenter. Simply getting a look at the entire dataset is a challenge in itself.

  • Obese data means being too big to fail. That's why it's such an attention-getter these days.
  • Because it was so cromulent.

  • ... is just another name for the ignorance we cling to so desperately to avoid having to actually solve problems
  • For a good read on this problem, I highly recommend the Fourth Paradigm: http://research.microsoft.com/en-us/collaboration/fourthparadigm/ [microsoft.com] .

    This is a free ebook download from Microsoft that uses a variety of leaders in data-driven science to write chapters about a variety of scientific disciplines and what "big data" means to them. The first chapter is especially enlightening! Blurb about the book:

    Increasingly, scientific breakthroughs will be powered by advanced computing capabilities that help res
