
Test Shows Big Data Text Analysis Inconsistent, Inaccurate

DillyTonto writes: The "state of the art" in big-data text analysis turns out to use a method of categorizing words and documents that, when tested, gave different results for the same data 20% of the time and was flat-out wrong another 10%, according to researchers at Northwestern. The researchers offered a more accurate method, but only as an example of how community detection algorithms can improve on the leading technique, latent Dirichlet allocation (LDA). Meanwhile, a certain percentage of answers from all those big data installations will continue to be flat wrong until they're re-run, which will make them wrong in a different way.
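The instability the story describes is easy to see with any off-the-shelf topic-model implementation. Below is a minimal, hypothetical sketch (scikit-learn rather than the researchers' code, with an invented toy corpus) showing how fitting LDA twice on the same documents, changing nothing but the random seed, can group the documents into different topics.

```python
# Hypothetical sketch: fit LDA twice on the same toy corpus with different seeds
# and compare which topic each document ends up in. Corpus and parameters are
# invented for illustration; this is not the paper's method or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market trading prices fell sharply",
    "team wins game season playoff score",
    "market investors stocks rally earnings",
    "coach players game defense championship",
]
X = CountVectorizer().fit_transform(docs)

def dominant_topic(seed):
    lda = LatentDirichletAllocation(n_components=2, random_state=seed)
    return lda.fit_transform(X).argmax(axis=1)  # most likely topic per document

print("seed 0:", dominant_topic(0))
print("seed 1:", dominant_topic(1))
# Topic labels are arbitrary, so what matters is the grouping -- and on small,
# noisy corpora the grouping itself can differ between runs.
```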
This discussion has been archived. No new comments can be posted.

  • In other words, when it comes to big data, you're doing it wrong - and if you change how you're doing it, you're still going to be doing it wrong.

    Big data fails to live up to hype - news at 11.

    • by Drethon ( 1445051 ) on Sunday February 01, 2015 @01:00PM (#48952381)
      This is what scares most people, or at least me, about ideas of using big data to predict criminals or otherwise mess up people's lives.
      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday February 01, 2015 @01:10PM (#48952451) Homepage Journal

        This is what scares most people, or at least me, about ideas of using big data to predict criminals or otherwise mess up people's lives.

        It's not a problem to use big data to try to figure out where to focus. But you have to subject the results to some sanity checking, and before you actually impact someone's life, perhaps even some common sense. Shocking idea, I know, and the reason why it's still a problem.

        • Just as we expect expert practitioners in medicine or civil engineering to bear liability for mistakes in their respective professions, can the notion of modeling malpractice be far behind? When will the first class-action suit be filed against a statistical model that incorrectly denies service or besmirches the credit ratings of thousands?
    • by jythie ( 914043 )
      Yeah, but this makes sense. The success criterion for analysis is not accuracy, but faith. As long as they can sell it to marketers, correctness is just something that needs a bit of spin.
  • Color me surprised (Score:5, Insightful)

    by Crashmarik ( 635988 ) on Sunday February 01, 2015 @12:53PM (#48952341)

    People thought you could skip the work of actually understanding what is going on and still get useful results.
    Turns out you can't.

    Or, put another way: if big data is so great, why didn't Watson see IBM's crash coming?

    • Or, put another way: if big data is so great, why didn't Watson see IBM's crash coming?

      You're assuming it didn't.

    • by BarbaraHudson ( 3785311 ) <barbara DOT jane ... T icloud DOT com> on Sunday February 01, 2015 @01:04PM (#48952407) Journal

      There are lies, damned lies, and statistics. Big data is just the third one repackaged: snake oil for people who (a) don't understand the business they're in (or they wouldn't need consultants telling them big data will tell them how to better run their business), (b) don't know which data is relevant, (c) don't know what questions are important, and (d) should be fired.

      Big data wouldn't have prevented GM from going bankrupt. GM head idiot Wagoner didn't understand that the nature of the business had changed (point a). He also didn't understand that those big sales figures for Hummer were irrelevant, because it was a product that would soon be answering the wrong question (point b). He failed to address the crunch others knew was coming, because he never asked "what happens when..." (point c). As for point d, he was finally fired, but too late.

      Big data is just a new twist on online dating. "Given enough people, we can match any two." Yeah, right.

      • by plover ( 150551 )

        When you're dealing with statistics, you ought to recognize that 92% accuracy is a huge improvement over a random distribution. You do not use big data to select a target for a sniper rifle, you use it to point a shotgun.

        And just like your faulty GM CEO analogy (I assume you felt the need to apply a car analogy for the benefit of the Slashdot crowd), only an idiot would send someone off into the woods blindfolded and have him fire his shotgun in a random direction, hoping to bring home some kind of food animal.

        • That's a nice strawman argument you got going there.

          People who have an understanding of their business will achieve far better results than a random distribution. The CEO of Ford saw it coming several years in advance, and prepared (by borrowing tens of billions against every company asset, including their logo) to have enough funds to weather it out, while changing their product line-up to match the new reality.

          There is no "silver bullet" to replace competence.

          • Exactly. There is actually an excellent book on the topic: Once Upon a Car by Bill Vlasic. In fact, GM even approached Ford for a potential merger. Ford, realizing that GM was screwed and they were, in fact, not...thought it over for the night...and then flatly rejected GM.
          • by plover ( 150551 )

            You seem to be belaboring this mistaken impression that analyzing Big Data somehow replaces thinking in the board room. It does not. Big Data is a tool that can help provide evidence of what people have done in the past, statistically correlated to potential causes. Big Data doesn't decide "hey, let's buy GM." People make those decisions, and they try to make them based on the information they have -- and Big Data can be a good source of that info. But people can be idiots, they can be talented, they c

            • As I pointed out, big data is being used by people who shouldn't be in the position to make decisions. You can't make right decisions if you ask the wrong questions.
              • Coincidentally, the Slashdot 'thought of the day' below reads: "There's no sense in being precise when you don't even know what you're talking about. -- John von Neumann"
  • by plover ( 150551 ) on Sunday February 01, 2015 @01:02PM (#48952397) Homepage Journal

    The difference between "92% accurate" and "accurate enough for my task" is profound.

    If you were using this kind of analytics to bill your customers, 92% would be hideously inaccurate. You'd face lawsuits on a daily basis, and you wouldn't survive a month in business. So the easy answer is, "this would be the wrong tool for billing."

    But if you're advertising, you know the rates at which people bite on your message. Perhaps only 0.1% of random people are going to respond, but of people who are interested, 5.0% might bite. If you have the choice between sending the message to 10000 random people, or to 217 targeted people (only 92% of whom may be your target audience), both groups will deliver the same 10 hits. Let's say the cost per message is $10.00 per thousand views. The first wave of advertising cost you $100. The second costs you $2.17. Big Data, with all of its inaccuracies, still improves your results by a wide margin.

    Way too often people like this point out that perfection is impossible. They presume that "because it's not perfect, it's useless." The answer is not always to focus on becoming more accurate, but to choose the right tool for the job, and to learn how to recognize when it's good enough to be usable. At that point you learn how to cope with the inaccuracy and derive the maximum benefits possible given what you have.
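    To make the parent's arithmetic concrete, a quick back-of-envelope sketch (all numbers are the hypotheticals from the comment above, not real campaign data):

    ```python
    # Back-of-envelope check of the advertising example above (hypothetical numbers).
    cost_per_message = 10.00 / 1000              # $10.00 per thousand views

    # Untargeted: 10,000 random people, 0.1% respond.
    random_hits = 10_000 * 0.001                 # ~10 responses
    random_cost = 10_000 * cost_per_message      # $100.00

    # Targeted: 217 people, 92% truly in the audience, 5% of those respond.
    targeted_hits = 217 * 0.92 * 0.05            # ~10 responses
    targeted_cost = 217 * cost_per_message       # ~$2.17

    print(f"random:   {random_hits:.1f} hits for ${random_cost:.2f}")
    print(f"targeted: {targeted_hits:.1f} hits for ${targeted_cost:.2f}")
    ```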

    • by hax4bux ( 209237 )

      +1, great response

    • by Jumperalex ( 185007 ) on Sunday February 01, 2015 @01:47PM (#48952671)

      All models are wrong, some are useful.

    • by Anonymous Coward

      Advertising is a scourge. How about an example using medical procedures instead?

      • by plover ( 150551 )

        That's a great question. Do you think 80% accuracy is good enough for medical use? If you're a doctor facing an unfamiliar situation, and your data says treatment X helped 40% of patients it was tried on, treatment Y helped 35% of them, and all other treatments (Z, W, etc.) helped no more than 30%, but you know the data might only be 80% accurate, what treatment do you choose? Are those ratios even meaningful in the presence of so many errors?

        Consider the case where the patient's condition is critical, and you don't have time for additional evaluation. Is X always the best choice? What if your specialty makes you better than average at treatment Y? Maybe that 20% inaccuracy works in favor of the doctor who has the right experience.

        • by sfcat ( 872532 )

          That's a great question. Do you think 80% accuracy is good enough for medical use? If you're a doctor facing an unfamiliar situation, and your data says treatment X helped 40% of patients it was tried on, treatment Y helped 35% of them, and all other treatments (Z, W, etc.) helped no more than 30%, but you know the data might only be 80% accurate, what treatment do you choose? Are those ratios even meaningful in the presence of so many errors?

          Consider the case where the patient's condition is critical, and you don't have time for additional evaluation. Is X always the best choice? What if your specialty makes you better than average at treatment Y? Maybe that 20% inaccuracy works in favor of the doctor who has the right experience.

          It could be used for ill, too. What if you know you'll get paid more by the insurance company for all the extra tests required to do treatment Y? You could justify part of your decision based on the uncertainty of the data.

          In the end, historical data is just one factor out of many that goes into each of these decisions. Inaccurate data may lead to suboptimal decisions, so it can't be the only factor.

          Great strawman, but your strawman happens to actually be a nuclear-powered, armor-plated tank... with sharks and laser beams! Turns out, way back in the '60s, when they started to think about what problems computers could one day solve, they listed many: beat the world champion at chess, drive cars, etc. One of them was medical diagnosis. It took decades longer than expected to solve the ones they have been able to solve, with one exception: medical diagnosis. By the early 80s we had "expert systems" that wer

    • Comment removed based on user account deletion
      • by plover ( 150551 )

        They could certainly send 50 times as many messages, but they'll improve their return on investment if they target all of them at people who are more susceptible to their message in the first place. Given the cost of the Big Data systems they may only be able to afford to send 10 times as many instead of 50 times, but as long as their message is 5% effective instead of 0.1%, it's still a vast improvement on ROI.

    • I always liked the HIV test example.
      If you have a test that is 99% accurate, you would think that's a pretty good test.
      However, if only 1 in 1000 people is actually HIV positive, you get about 10 false positives for every correct positive.
      So that's not a very good test: it falsely tells people they have HIV roughly 9 times out of 10!

      Actually, even 99.9% would be bad, since that means you're wrong about 1 time out of 2.
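      The base-rate arithmetic behind that example is short enough to write down; a sketch, assuming a 1-in-1,000 prevalence and treating "accuracy" as both sensitivity and specificity:

      ```python
      # Sketch of the base-rate effect in the HIV-test example (hypothetical numbers).
      def positive_predictive_value(prevalence, accuracy):
          # "accuracy" is treated as both sensitivity and specificity for simplicity.
          true_pos = prevalence * accuracy
          false_pos = (1 - prevalence) * (1 - accuracy)
          return true_pos / (true_pos + false_pos)

      for acc in (0.99, 0.999):
          ppv = positive_predictive_value(prevalence=1 / 1000, accuracy=acc)
          print(f"{acc:.1%} accurate -> {ppv:.0%} of positive results are real")
      # 99.0% accurate -> ~9% of positives are real (about 10 false alarms per true case)
      # 99.9% accurate -> ~50% of positives are real (wrong about 1 time in 2)
      ```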
      • As someone with a bit of experience in Big Data and medical technology...

        A test that falsely indicated HIV 9 times out of 10 is absolutely wonderful, if it actually catches that one correct positive reliably. A false negative is far more dangerous, and it's the job of the doctor to try multiple tests to confirm a diagnosis. If the initial screening comes back positive, the patient can be warned off intercourse for a while or start some initial therapy while another test is tried, without significant risk to

        • What big data brings to the table is that you can find "A" is strongly correlated with "B" without having even an inkling as to why.

          Suppose you scan all medical and personal records throughout history and find that everyone who has ever owned a yellow Camaro (no matter what year) comes down with liver cancer at 45. Sure, you can make up reasons all you want, but if there is no other correlation in the data, do you just ignore it? What if it's just 90% of the Camaro owners? Even 50% or 20% would

          • That's almost exactly what one aspect of my project was.

            My project was, in brief, allowing medical researchers to search through patient records for patients matching particular criteria. The system could recommend related criteria, as well, based on the correlation to the already-shown results.

            Early tests were particularly useless, as the system noticed a strong correlation between being pregnant and being female. It also suggested that if you included people who had smoked within the last six months, ther

  • This is actually good news. We always wanted computers to behave more like people, and in this case they are. The same question and data often get different results, just like with people. What a great technological breakthrough.

  • by lucm ( 889690 ) on Sunday February 01, 2015 @01:19PM (#48952529)

    The hype over big data comes from companies like Facebook or Amazon. It's a consequence of bad decisions made in the early days.

    It's easy to see how this happens. Some dude says: to hell with data models, data governance, or a formal approach to data warehousing; those are too "enterprisey", we are a nimble startup that needs to pivot and build MVPs quickly, so let's just serialize our java/python/php objects for now. A billion dollars and 20 petabytes later, the company has to rely on machine learning to sift through its digital garbage just to find out how many users it has. And if they need stuff that runs on thousands of commodity servers, like Hadoop or Cassandra, it's not because it's better, it's because IBM doesn't make a mainframe big enough to help them.

    In most organizations these solutions should not even be considered. That's like considering bariatric surgery to lose 10 lbs because it helped the morbidly obese lady next door lose 250 lbs.

    But it's cooler to say you work on a Spark project than on evolving an Inmon-inspired enterprise data warehouse using Netezza to crunch numbers. So let's all brush up on our graph theory and deliver unreliable answers to painstakingly formulated questions until the next fad kicks in.

  • "Companies that make products must show that their products work," Amaral said in the Northwestern release. "They must be certified."

    This researcher is completely out of touch with what's sold in the marketplace. No wonder he doesn't understand that flawed solutions can still be useful.

  • ...They seem to be quite proud of their massive warrantless text analysis ; )

  • by dorpus ( 636554 ) on Sunday February 01, 2015 @02:29PM (#48952985)

    I analyzed the free-text field on hospital surveys. A simple keyword search gave me very reliable results on what the patients were complaining about -- they fell into the categories of bad food (food, cafeteria, diet, tasted, stale), dirty rooms (dirty, rat, blood, bathroom), rude staff (rude, ignore, curt), noise (noise, loud, echo, hallway), TV broken (TV, Television, "can't see"). So if the context is narrow enough, even simple searches work.

    I agree that more broadly worded questions require more sophistication. I've looked at word combinations and so forth, though I haven't really needed them yet in analyzing health care data. We would not trust a computer to parse a full doctor's report, no matter how sophisticated the software; that will require manual inspection, often by multiple people who must agree on a consensus interpretation.
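    For what it's worth, the keyword-bucket approach described above fits in a few lines; a sketch using the categories listed (keyword lists abbreviated, sample comment invented):

    ```python
    # Sketch of the simple keyword-bucket categorization described above.
    CATEGORIES = {
        "bad food":    {"food", "cafeteria", "diet", "tasted", "stale"},
        "dirty rooms": {"dirty", "rat", "blood", "bathroom"},
        "rude staff":  {"rude", "ignore", "curt"},
        "noise":       {"noise", "loud", "echo", "hallway"},
        "tv broken":   {"tv", "television"},
    }

    def categorize(comment):
        words = set(comment.lower().split())
        return [cat for cat, kws in CATEGORIES.items() if words & kws] or ["other"]

    print(categorize("the food was stale and the hallway was loud all night"))
    # ['bad food', 'noise']
    ```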

  • I was a bit surprised reading the headline, as it would imply that text analysis with a Bayesian classifier or a linear SVM for classifying text would also be inconsistent or inaccurate. I personally have good experience with both.

    But after reading the article - yes, I know - it seems to focus on the LDA [wikipedia.org] model, which is only one of the techniques available for doing textual analysis or categorising documents.
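    For comparison, the supervised classifiers mentioned above look roughly like this in scikit-learn (toy documents and labels, purely illustrative):

    ```python
    # Sketch of supervised text classification (naive Bayes and a linear SVM).
    # Documents and labels are toy data for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    docs   = ["refund my broken order", "package arrived late again",
              "great product, thank you", "love it, works well"]
    labels = ["complaint", "complaint", "praise", "praise"]

    for clf in (MultinomialNB(), LinearSVC()):
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(docs, labels)
        print(type(clf).__name__, model.predict(["the item arrived broken"]))
    ```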
  • by iceco2 ( 703132 ) <{meirmaor} {at} {gmail.com}> on Sunday February 01, 2015 @03:11PM (#48953321)

    The first hint you get is when you notice this paper was published in a physics journal, not a great sign. Then you actually start reading, and you see they declare LDA as "state of the art". And when you actually read what they propose it is a bunch of standard text techniques which actually work quite well with LDA.
    So what they actually showed is that taking vanilla algorithms out of the box without even the most basic data processing under-performs compared to superior data processing attached to a simpler algorithm. Which anyone which did any sort of text processing or any other kind of data managling already new.
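    A sketch of the sort of basic data processing being alluded to: lowercase the text, strip stop words, and drop rare tokens before the corpus ever reaches the model (token lists and thresholds here are arbitrary):

    ```python
    # Sketch of minimal text preprocessing before modeling (illustrative only).
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "on"}

    def preprocess(docs, min_count=2):
        tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
        counts = Counter(tok for doc in tokenized for tok in doc)
        # Drop stop words and tokens too rare to carry signal.
        return [[t for t in doc if t not in STOP_WORDS and counts[t] >= min_count]
                for doc in tokenized]

    docs = ["The model is trained on the text.", "Text models need clean text input."]
    print(preprocess(docs))  # [['text'], ['text', 'text']]
    ```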

    • Which anyone which did any sort of text processing or any other kind of data managling already new.

      By text processing, did you mean "knew->new" , "which->who" and "managing->managling"?

  • Let me get this straight. You're saying that big data, and the tools used to analyze it are frequently inaccurate, or just plain wrong? To which I say,
    "Yes, but big data is 'web scale', so it has to be better."
    /sarcasm
  • We need a light-hearted romp, something that touches on our fear of Big Data while extolling its virtues. Something with a dash of romance. If only Spencer Tracy and Katharine Hepburn were still alive, they'd be perfect.
  • by fygment ( 444210 ) on Monday February 02, 2015 @07:55AM (#48957421)

    In text analysis the workhorse is LDA; in climate models it's PCA/SVD, used to reduce the dimensionality of (compact) large numbers of variables, i.e. a linear approximation is treated as almost as good as accounting for all the variables individually.
    The problem in both text analysis and climate (or any other) models is that PCA/LDA/etc. are linear, while the data they are applied to are generally nonlinear.
    The nonlinearity means the solution space has many (infinitely many?) suboptimal solutions.
    That in turn means PCA/LDA/etc. return a linear approximation to one of those solutions, and those solutions can be very different.

    So, yeah, there is a margin of error. And yeah, the reasons for that error vary. No surprise, because text understanding (and the climate) are hugely complex, nonlinear problems.

    BUT at least maybe more people will become aware that models are pretty much flawed ... so don't base legal or public policy on them.
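    A tiny illustration of the linearity point, using PCA on invented data: fit it to points lying on a curve and the first component still reports "explaining" most of the variance, even though any projection onto it is only a linear approximation of the nonlinear structure.

    ```python
    # Sketch: PCA summarizes nonlinear structure with a linear approximation.
    # Data are invented; this only illustrates the linearity caveat above.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=500)
    data = np.column_stack([x, x ** 2])   # points on a parabola: clearly nonlinear

    pca = PCA(n_components=2).fit(data)
    print(pca.explained_variance_ratio_)  # first component claims most of the variance
    # ...but projecting onto that component is still a straight-line summary of a curve,
    # and which line you get depends on where the data happen to sit.
    ```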

  • So this algorithm is consistent 80% of the time and correct 90% of the time... this is just a spin on numbers.

"All the people are so happy now, their heads are caving in. I'm glad they are a snowman with protective rubber skin" -- They Might Be Giants

Working...