Google Begat the End of the Scientific Method?
TheSauce writes "In a fairly concise one-pager at Wired, editor Chris Anderson posits that all of our current (or now previous) models for collecting data are dead. The content is compelling. It notes that we've entered the Age of the Petabyte — where one can collect immense amounts of data that are paradigm agnostic. It goes on to quote the head of Google's R&D, who offers an update to George Box's maxim: 'All models are wrong, and increasingly you can succeed without them.' Have we reached a time where all of our tool-sets are now made moot by vast clouds of information and strictly applied maths?"
Ahem (Score:5, Insightful)
WTF?
English, ---, do you speak it?
WTF indeed (Score:5, Insightful)
I saw the article yesterday, but it was so WTFey I just moved on...definitely not Slashdot submission material (especially being a Wired article).
Re:WTF indeed (Score:5, Funny)
I hadn't seen WTF adjective-ised before, but I love it... there's just so much I can use it with. In fact, I gotta go now and tell my boss how my project is going....
Re:WTF indeed (Score:5, Funny)
And I hadn't seen adjective verbed!
Re: (Score:3, Funny)
I didn't know you could turn "verb" into a verb, or, to verb a noun, verb verb.
Re:WTF indeed (Score:5, Funny)
This thread is cromulent.
Just to clarify (Score:5, Insightful)
To avoid the same fate as the GP, let me clarify that by WTFey I specifically meant that the article was full of fluff, light on details and generally pointless...which makes me think "WTF." The closest thing to a point I could get from the article was "Nice big blobs of data can be useful, and statistical data based on said blobs could replace the results of scientific research." Mmmkay.
A sensational headline leading to a rather pointless article consisting mostly of fluff: WTF.
Re:Just to clarify (Score:5, Interesting)
Well, I think the point they make is that by running these kinds of mathematical tools against huge data sets, you get models out that you couldn't have thought of yourself. This is real AI. Over the last few days we've had entries here on Slashdot about how AI is not advancing, but this kind of thing is very advanced AI, and it is new.
I'll explain myself. The biggest job that a brain does (let's not consider a human brain, so we don't get into the consciousness/mind type of conversation) is to find statistical correlations in the input data and extract from those correlations models that can be used to predict the future. This is exactly what these tools are doing.
Before these tools, you would look at the data and go: hmmm, this is interesting, let's check it out. That is, you would come up with a model and try to find out if it predicts the data. Then we started using computers to check our models, and from what this WTFey article says, it is now the computer that comes up with the model, starting from the raw data.
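To make that concrete, here is a minimal sketch of "the computer coming up with the model": dump a raw data matrix in, rank every pairwise correlation, and see what falls out (columns and numbers are invented for illustration):

```python
# Minimal sketch: mine a raw data matrix for its strongest pairwise
# correlations with no hypothesis chosen in advance. Column names and
# data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "heart_rate":  rng.normal(70, 10, n),
    "caffeine_mg": rng.normal(120, 40, n),
    "sleep_hours": rng.normal(7, 1.5, n),
})
df["heart_rate"] += 0.05 * df["caffeine_mg"]  # one real relationship

corr = df.corr()  # all pairwise Pearson correlations
pairs = corr.where(~np.eye(len(corr), dtype=bool)).stack()
print(pairs.abs().sort_values(ascending=False).head(3))
```

The caffeine/heart-rate link pops out on top without anyone having hypothesized it first, which is the whole pitch, and also (as posts below point out) the whole problem.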
Re:Just to clarify (Score:4, Insightful)
Quite so, the article was dead wrong.
Having that much data allows for science that wouldn't have happened otherwise, but it doesn't allow us to forget about sound scientific principles. I for one don't want to die because the pharmaceutical company and my doctor thought that a correlation with safety was enough, without doing the experiments to verify. I could die either way, but correlation just isn't enough in many cases. Statistics don't prove or disprove anything; ultimately, science is about understanding things the way they are. Statistics can't do that.
If you can collect and store 100 pieces of information about each of 200,000 test subjects at 150 points in time, you can do a huge amount with that. But the data still needs to be interpreted, verified, and placed into a verifiable model.
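A quick back-of-the-envelope on that example (a sketch assuming 8-byte values, not anyone's actual study design):

```python
# Rough scale of the study described above: 100 measurements for each
# of 200,000 subjects at 150 time points, 8-byte floats assumed.
values = 100 * 200_000 * 150            # 3,000,000,000 data points
print(f"{values:,} values, ~{values * 8 / 1e9:.0f} GB raw")
```

Petabyte age or not, that is three billion numbers before anyone has interpreted a single one of them.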
It doesn't really surprise me that Google would be handling search the way that they do, considering how borderline impossible it is to search for certain things unless you already know what you want. Searching for answers to software bugs ought to be straightforward, but Google seems completely incapable of sanely coping with version numbers without a lot of work.
Re: (Score:3, Informative)
I suggest you include the search terms -buy and -price. That works wonders in getting Google to show you the actually relevant pieces of information.
Re:WTF indeed (Score:5, Funny)
It reads like some sort of brain-damaged new-age technohippy tripe. Yeah, we don't need methodologies any more, because, maaaan, we've got tubes! Gimme a break.
Re:Ahem (Score:5, Insightful)
I used to think that I could translate most dialects of bullshit into English, but this one caught me off guard. The most reasonable explanation is that Chris Anderson is a tool and doesn't know what he is talking about.
For example, data is now "paradigm agnostic". Seriously, wtf? When was data ever not "paradigm agnostic", and when did we develop the need for a term to describe it? Data is data. It is raw and unanalysed, and as such the notion of a paradigm is completely irrelevant.
Re: (Score:2, Insightful)
"For example, data is now "paradigm agnostic". Seriously, wtf?"
Just look at the creation/evolution controversy to see how data is not 'paradigm agnostic'. Each side claims the other's data is unsound because of the paradigm's umbrella it falls under.
Re:Ahem (Score:5, Informative)
Each side claims the other's data is unsound because of the paradigm's umbrella it falls under.
No, each side claims the other's theory is wrong.
Nobody (sane) disputes the existence of ring species, or disputes microevolution, or other observable forms of data. The only thing in dispute in the controversy is "species are species because they were made that way" versus "species are species because after some really big N evolutionary steps they become that way".
Re:Ahem (Score:5, Informative)
If your methodology for evaluating a theory requires classifying it by abstract metaphysical concepts like "natural" and "supernatural", then you're a step away from the scientific method of "experiment".
Re:Ahem (Score:5, Funny)
Well, we already know it wants to be free, so maybe now it's just exercising its sentient status in other areas.
Re:Ahem (Score:5, Insightful)
Information doesn't want to be free. But when it isn't, neither are you.
The Paradigm is the Data Subset (Score:5, Insightful)
For example, to detect stress you might traditionally measure heartbeat, skin conductivity, and pupil dilation.
In the "petabyte age" you throw in the number of times the subject uses the letter 's'; how frequently they use the 'reload' button on the browser; what colour of pants they wore last Tuesday; Pepsi vs. Coca-Cola; the number of times they picked their nose in 1997; and any and every other bit of data you have on the subject.
In the "petabyte age", most of the data you sift through will show no correlation, but you have a much better chance of finding the unexpected if, indeed, there is some unknown factor out there.
Re:The Paradigm is the Data Subset (Score:5, Insightful)
Don't you run a much higher probability of finding high correlation by chance?
I can expect to find a result that matches my model to 95% certainty about 5% of the time in random data. You can correct for this, but it's against human nature because people like to see the face of Mary in toast.
Learning how to look for correlation in huge uncontrolled data sets will require a new paradigm... or it will ultimately be useless, perhaps even misleading.
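That 5% failure rate is easy to demonstrate with a quick simulation (illustrative numbers, nothing more):

```python
# Sketch: how many "significant" correlations pure noise produces.
# 1,000 random variables are tested against one random target at
# p < 0.05; roughly 50 false positives are expected by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_vars = 200, 1_000
target = rng.normal(size=n_subjects)

false_hits = 0
for _ in range(n_vars):
    r, p = stats.pearsonr(target, rng.normal(size=n_subjects))
    false_hits += p < 0.05

print(f"{false_hits} of {n_vars} noise variables look 'significant'")
```

Bonferroni-style corrections exist for exactly this, but they have to be applied deliberately; the toast won't correct itself.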
Re:The Paradigm is the Data Subset (Score:4, Interesting)
But that goes for any visualisation technique - look to Edward Tufte or Stephen Few for detailed examples of how even the simple xy-graph can be abused.
Re: (Score:3, Insightful)
Yes. The more data you collect, the more likely any two things will be correlated slightly. With millions or billions of data points, you would be shocked to find a variable that does NOT correlate significantly with everything else. That's why "correlation" or "significance" alone becomes less useful and we need to a) report effect size measures to get a better sense of how important the correlation actually is and b) continue to use our heads (and not always give blind trust to the cloud) to determine
Re: (Score:3, Insightful)
The paradigm is embedded in the quantity, or subset, of data you choose to analyse.
In addition, once you start to analyze something, you have already built the "model" ipso facto. I can't imagine how you could set out to analyze something without a model.
The example Anderson uses in fact shows this. Venter had to have a model of an ecosystem within which he posits the existence of organisms. Through testing (statistical analysis), he finds them. Thus 1) ecosystems house organisms and 2) there are organisms we don't yet know about.
Seems like the scientific method to me.
Re:Ahem (Score:5, Interesting)
Yeah, I don't know what "paradigm agnostic" means specifically, but I think it's a mistake to think that "data is data".
Not all data is created equally. You have to ask how it was collected, according to what rules, and with what purpose. I can collect all sorts of data by stupid means, and have it be unsuitable for proving anything. It's even possible that I could collect a bunch of data in an appropriate way, accounting for the variables which matter for my particular experiment, and have that data be inappropriate for other uses.
Of course, if what's intended by "paradigm agnostic" is that we no longer pay attention to those things, then I hope we're not becoming paradigm agnostic. I'm just bringing this up because I think some people think numbers don't lie, and that when you analyze data, either your conclusions will be infallible or your analysis is flawed. On the contrary, data can not only be bad, but it can be inappropriate.
Re:Ahem (Score:5, Funny)
Not all data is created equally. You have to ask how it was collected, according to what rules, and with what purpose
I wear a goatee as a result of a small study.
Several years ago, after my marriage unravelled and I got divorced and couldn't so much as get a dinner date, I decided "fuck it, why do I bother buying razors?" and simply stopped shaving.
Then one night in a bar a woman told me I should shave it into a goatee. So I started asking women "goatee or full beard?" and collecting the binary (y/n) data. Of seventeen randomly selected women aged 21 to 70, sixteen said "goatee". The one who said "full beard" was standing beside her boyfriend, who wore a full beard.
My losing streak ended, thanks to pseudoscience!
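For the record, even that pseudoscience clears a basic significance bar. A sketch of the test, assuming a no-preference null of p = 0.5 (and ignoring the obvious sampling issues of polling women in bars):

```python
# Sketch: are 16 "goatee" answers out of 17 distinguishable from a
# coin flip? Null hypothesis assumed: no preference (p = 0.5).
from scipy.stats import binomtest

result = binomtest(k=16, n=17, p=0.5)
print(f"p = {result.pvalue:.5f}")  # ~0.00027, well under 0.05
```

The seventeenth data point, collected next to the boyfriend, is of course exactly the kind of confound the rest of this thread is worried about.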
Re:Ahem (Score:5, Funny)
I used to think that I could translate most dialects of bullshit into english
Piping TFA to bs2english yields:
Google is a great place to work, and an even better place to invest money in. Go Google! P.S.: buy Google stock.
Re:Ahem (Score:5, Interesting)
It's simple really: The article seems to be saying that we have access to such a ludicrously large amount of data that trying to draw any real meaning from it is pointless. So, we employ a "shotgun" approach at reading the data, and voila, we get data that at least appears to be interesting.
Of course, since we have no particular purpose in mind when we do this, and no particular method other than "random", we end up with mostly useless data (in the example given, we have a bunch of random gene sequences that must belong to previously unknown species, but we know nothing about those species other than that we found some random DNA that probably belongs to them, and have no particularly good way of finding out more).
The article seems to be saying that since we have so much data, we can now draw correlations between different pieces of data and call it science. No reason is given why this is useful other than that we have so much of it, and Google is somehow involved. Apparently when you have enough data, "correlation does not equal causation" is no longer true. Again, no coherent reason is given for this stance.
I think the article makes the same mistake a lot of ill-informed people who get excited by big numbers make: it seems to believe that data is in and of itself an end goal, when really vast amounts of data are useless unless they help us as humans answer questions that we want answered. Yes, knowing that there are lots of species of organisms in the air that we didn't know about before is sort of interesting, I guess, but it doesn't really tell us anything useful.
Above all, the article proves that you can be almost entirely incoherent and still get your article published in Wired if it says something about how Google is changing the world.
Re:Ahem (Score:5, Insightful)
I'm glad slashdot linked it. I read this the other day and had no idea what to make of it. After the first 20 comments I see I'm not completely retarded.
Re:Ahem (Score:5, Funny)
I'm not completely retarded.
The data is inconclusive. Let me see what I turn up on a Google search.
Re: (Score:2, Insightful)
I saw it as saying "With so much data, you can use that as a base for preliminary research."
You then research those interesting things in traditional ways, but you have started with some sort of insight.
If you have enough images of the sky and stars, you can use the images to look for interesting things first, and then jump on a telescope or satellite when you have something
Re:Ahem (Score:4, Interesting)
It's an idiotic notion. We've had vast amounts of data for well over a century now, more than we can hope to fully measure and catalog in a lifetime. Everything from fossils to space probe readings to seismic measurements fills up data archives, in some cases literally warehouses full of data tapes, artifacts and paper. The way you deal with this sort of thing never changes. Provided the data is stored in a reasonable fashion, if you have a theory, you can go back and look at the old measurements, artifacts, bones, whatever, and test your theory against the data. The only difference is that rather than going out and making the observations yourself, you're using someone else's (or those of some computer that just transmitted its data).
Very WTFey (Score:2)
What is not useful about that?
Re: (Score:3, Insightful)
Old way: develop a physical model of how we think things work, test a few cases, refine the model. New way: collect a huge relevant data set, mine the data for interrelationships, make a correlation. Correlation models replace scientific models; no more need for hypothesis testing.
Re:Ahem (Score:5, Interesting)
Just undoing a slip-of-the-mouse moderation.
That's one disadvantage of the current mod system: no chance to fix mistakes.
Re: (Score:3, Interesting)
all of our current (or now previous) models for collecting data are dead.
I guess I have to R this FA. ALL the models for data collection? No more controlled double-blind studies?
It notes that we've entered the Age of the Petabyte — where one can collect immense amounts of data that are paradigm agnostic.
Science has always at least tried to be paradigm agnostic. It can't always succeed of course, but I don't see how... Ok, I guess I'd better RTFA.
OK, I'm back. The article is horseshit. It is a whole bunch of
Definitions (Score:3, Insightful)
They may lead from one to the other but they are not all the same thing.
Re: (Score:2, Funny)
Also, charisma and dexterity are very important.
It depends (Score:3, Funny)
Fighter classes generally stop at Con, whereas casters generally go for Int or Wis. No one cares about Cha.
Not quite (Score:4, Funny)
Re: (Score:3, Funny)
Quite.
And no matter the amount of data, no matter the computing power, I don't think pure statistics will ever be able to analyze human language efficiently.
Two words: selection bias (Score:3, Interesting)
So no. Even if everything he wrote is all true, you still apply science to study things, just in a different way. The internet doesn't make science obsolete any more than it made economics obsolete, and saying otherwise is as much hubris now as it was then.
Re: (Score:3, Funny)
I'm made out of those you insensitive clod.
Re: (Score:3, Funny)
TODAY: Feeling up.
humm - the hell? (Score:2)
what?
the current quote "Never frighten a small man -- he'll kill you." seems more relevant
How bout no (Score:5, Insightful)
Um, no. Claims like this demonstrate a lack of understanding of what a model is.
From the perspective of physics, the universe is just a massive amount of data--more data than any single human can comprehend at once. But thanks to the models of Newton we have a set of relatively simple equations that describe, generally, the way bodies in the universe interact. The model is not perfect, but it is useful.
Likewise, Google uses a very explicit model to describe the universe of the web: some pages are more relevant to a given search query than others, and these pages will generally be more 'popular' among other important pages. Again, the model is not perfect, but it is useful.
The fallacy is that somehow what Google is doing is a paradigm shift. It's not. It's just applying the same kind of scientific method to a type of data that hadn't existed before.
What I think the article is really trying to say is that Google's data is so massive and complex that we can't ascribe any explanation to the results it gives us. First of all, that is false, because the PageRank algorithm in its simplest form does give us a very explicit explanation (popular pages generally return better results). But even if it were true, Newton faced the same kind of accusations when people called his model of the universe 'Godless' and claimed, for example, that he described how gravity works without actually explaining "why" it works like it does. And that accusation is always with science. There are always more questions raised than answered. This is nothing new.
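The explicit model in question is famously compact. Here is a toy power-iteration sketch of the PageRank idea on a four-page web (the textbook formulation, not Google's production system):

```python
# Toy power-iteration sketch of PageRank: a page's score is fed by the
# scores of the pages linking to it. Four-page web with invented links;
# this is the textbook formulation, not Google's production system.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n, damping = 4, 0.85

# Column-stochastic matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1 / len(outs)

rank = np.full(n, 1 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * M @ rank

print(rank.round(3))  # page 2, the most linked-to, ranks highest
```

The scores it produces are exactly the "popular pages are fed by other popular pages" explanation, written down as a fixed point.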
Re: (Score:3, Insightful)
But thanks to the models of Newton we have a set of relatively simple equations that describe, generally, the way bodies in the universe interact. The model is not perfect, but it is useful.
You are aware that the Newtonian physics model breaks down when you are talking about traveling close to the speed of light?
Granted, most of the time we are dealing with things that aren't traveling so fast, but there are many scenarios in physics where we need a different model.
I think what the Googlite is advocatin
Don't rule science out yet. (Score:5, Insightful)
The article is utter nonsense. But it's such a rambling mess it's hard to know where to start picking it apart. Perhaps the best place is where he presents, as an example of this new "model-free" approach, a program which includes "simulations of the brain and the nervous system". Uh, hello... a simulation IS a model.
Re:Don't rule science out yet. (Score:5, Funny)
He didn't bother writing more than one rambling page because he figured someone said it better somewhere else on the internet and that we're all bound to find it.
Re:Don't rule science out yet. (Score:5, Interesting)
I suppose you could start where he, again, tries to present the argument that correlation really is "good enough" - causation be damned. What he is blathering on about is that you can infer lots of things via statistical analysis - even complex things. That's certainly true. Where he fails (and it's an EPIC fail) is his assertion that this method is a general phenomenon, suitable for everyday use.
The other major failure of TFA is that I can't find a car analogy anywhere.
Re: (Score:3, Insightful)
Once upon a time cars were pretty simple. The most effective way to fix a car that had broken was to find a mechanic. This was a man trained in the models of how cars work. He would sift through the collection of parts (data) in the car until he noticed an anomaly that he would charge you outrageously for.
Now cars have become so complex that these models are no longer needed. Instead you can just examine the millions of cars that either work or don't work right there on teh interweb. Once you find a correla
Re: (Score:3, Informative)
Man, I'm feeling old today. Whatever happened to "first principles"? And my slide rule.
Re:Don't rule science out yet. (Score:5, Insightful)
The article does not make a compelling point. It keeps saying that we can give up on models (and science), because now we just have lots of data, and "correlation is enough." What utter BS. Establishing a correlation is not enough. Even if it is predictive for the given trend, it doesn't allow us to generalize to new domains the way a well-established scientific model does. If an engineer is designing a totally new device, that goes above and beyond what any established device has done, what data can he draw upon? If there is no mountain of data, he must rely on the tried-and-true techniques of engineering/science: use our best models, and predict how the new device/system will behave.
The article actually makes this point perfectly clear itself. Indeed, merely having tons of data doesn't actually give you insight into what you have measured. You must distill the data, pull out trends, and construct models. I just don't see how having mountains of data about a species, but still being unable to answer simple questions about it, is superior to conventional science (which can answer questions about the things it has discovered).
A deluge of data and data-mining techniques is a boon to science. But I don't see the benefit of giving up on the remarkably successful strategy of constructing models to explain the phenomena we've observed. I somehow doubt that having 20 petabytes of data on electron-electron interactions is more useful than having a concise theory of quantum mechanics.
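A toy version of that electron-vs-petabytes point: a fitted model extrapolates beyond its data, while the data alone has nothing to say about a new regime (all numbers invented):

```python
# Toy contrast: a model fitted to data extrapolates; the raw data does
# not. Free fall measured over 0.1-2 s, then queried at 10 s. All
# numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.1, 2, 20)                     # measured range
d = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, 20)

# Model: fit d = 0.5 * g * t^2 (linear in t^2), then extrapolate.
g_est = 2 * np.polyfit(t**2, d, 1)[0]
print(f"fitted g = {g_est:.2f} m/s^2")
print(f"model's d(10 s) = {0.5 * g_est * 10**2:.0f} m")

# Data-only: interpolation just clamps to the last measurement, which
# is wildly wrong at t = 10 s.
print(f"lookup's d(10 s) = {np.interp(10, t, d):.1f} m")
```

The fitted constant keeps working far outside the measured range; the correlation table stops at the edge of its data.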
My Start menu has been Googled (Score:5, Insightful)
For example, for years I would pride myself on my well-tended Windows Start menu. I'd create base categories for my application folders like Hardware, Games, and Internet, and move applications into those folders to keep my Start menu manageable. I blogged about this procedure [demodulated.com] and included a screenshot.
Now that I'm using Vista I have little need to be so organized. I rarely have to navigate manually to an application folder thanks to the embedded search box on the Start menu. So now my Start menu is a huge clutter, but so what? I now see that exercise as being as futile as dusting the cardboard boxes in the attic.
Re: (Score:3, Insightful)
Now that I'm using Vista I have little need to be so organized. I rarely have to navigate manually to an application folder thanks to the embedded search box on the Start menu.
If you're going to take your hands off the mouse to run an app, why not just pop open a console and start it from there? I have no use for any sort of start menu; I have a console. It's certainly more flexible than a search bar: you can pass arguments or file names (with wildcards, even) to the application.
What question do you ask the data. (Score:5, Insightful)
Re:What question do you ask the data. (Score:4, Insightful)
Exactly. The "deluge of data" is a useful tool, no doubt about it. But Google doesn't make the job of collecting and analyzing data irrelevant any more than the advent of the telescope made the skills and knowledge of astronomers obsolete.
I particularly love this line from TFA:
For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn't pretend to know anything about the culture and conventions of advertising -- it just assumed that better data, with better analytical tools, would win the day. And Google was right.
(Applied) science at its best! "The culture and conventions of advertising" are basically folk wisdom, and folk wisdom is often right but more often wrong. Google took a scientific, unbiased view of how to move bits around and make money with them: start with as few preconceptions as possible, analyze the data, see what happens.
quality still as important as quantity (Score:3, Interesting)
Then there was Slashdot's retrospective of Artificial Intelligence a few days ago. Many of the interesting advances were made in the kilobyte and megabyte eras. It seems the gigabyte and terabyte eras have barely made a dent in progress.
Google =/= scientific method (Score:5, Informative)
That an incredible amount of data exists on any given topic does nothing to describe relationships, causality, precision, accuracy, distribution, correlation, or anything else. Data is information, and information must be processed in order to make it meaningful. Additionally, everything that's written, printed, published, etc, is not necessarily true, accurate, precise, etc.
If anything, the Google phenomenon demands more rigorous examination by accepted methods.
The preceding message has been brought to you by Captain Obvious and the letters O,R,L,Y.
Say what now? (Score:2)
Correlation supersedes causation
Since when?
I'm pretty sure I was told the opposite in [every stats class ever]
Crunching large amounts of data is useless if you don't sort out which results are meaningless.
Side Note: WTF is up with /.? /. was not behaving this way last week.
I always post using Plain Old Text and hitting enter twice (two line breaks) only displays as one line break.
<p> doesn't create a new paragraph.
<br><br> is the only thing that shows up correctly for me.
Re: (Score:3, Interesting)
Entropy forces causality to appear in physical systems. A boiled egg is highly correlated with a heated raw egg, but I challenge you to explain away the causation from one state to the other.
One way functions are quite similar, and probably a result of the same physical properties of matter. When a key is used to encrypt data, there is a high correlation between the original data, the key, and the encrypted data, but
No. (Score:5, Insightful)
Second, in my experience with large sets of data, you can do all kinds of math to them to bring out interesting relationships but someone with domain expertise is going to have a much better insight into what the data is saying than someone who doesn't. It seems the peak of hubris to think that the techniques taught in every science (social, hard, or otherwise) are worth nothing compared to massive amounts of data. How do you know where to get the data from? How do you apply the data?
I don't think it's quite time to throw out "correlation != causation". In fact, I think now more than ever we need to be able to understand underlying phenomena behind the data precisely because there is so much of it. With so much data, coincidental correlation is going to happen quite often I'm sure.
And, of course, the ultimate reason we need to understand things is for, you know, when the cloud's not there.
Interesting, ranty, and wrong (Score:5, Insightful)
Re: (Score:2)
I hadn't heard of Frank Tipler until you mentioned him. WOW! That man speaks crazy talk.
And he has a faculty position.
I'm starting to realize that academia is similar in some ways to pop-culture, in that name recognition is everything. It differs just in that the publicity stunts you do need to impress a different sort of person.
In a way, it's career advice. *Goes back to getting PhD*
Quite... (Score:4, Informative)
And since most Slashdot readers don't RTFA, most comments here have proven useless in trying to figure out what those kernels you mention are.
But this guy, who has read TFA (and commented on it on Wired's site), seems to have found them.
Re:Interesting, ranty, and wrong (Score:4, Insightful)
A thought-provoking piece written by someone who understands neither the scientific method nor Google. Who doesn't understand the difference between a Theory and a model. Who still doesn't get correlation!=causation. Who probably has never had to actually analyze any substantial amount of data before. And who has clearly been raised on a self-important intellectual diet consisting of too much Buckminster Fuller, Kurzweil, Frank Tipler, and Derrida.
And he works at Wired magazine? You don't say.
Biggest Data Collector LHC relies on Models (Score:5, Insightful)
I thought this was a joke at first. One thing to think about is that the biggest data collector of them all, the Large Hadron Collider - which fits the frame given perfectly, delivering terabytes of data in huge data sets - is just the opposite of the described scenario. Models are crucial to picking what data is actually recorded. In fact, a large part of how good the LHC data will be comes down to using models to select which events to capture. The way the data is captured is of course also based on long effort and knowledge from previous detectors. This isn't random, or even generically selective, data gathering followed by analysis. This is targeted data gathering based on complex scientific theories. There have been shouting matches over what to tag for collection based on what people think is important for a given theory - and these will happen again.
Our collection abilities are rising exponentially, while our storage and analysis abilities are not growing exponentially, even though they are increasing at a fast rate! I would argue exactly the opposite of what this article said. We are going to be more and more dependent on our current scientific theories to even be able to choose appropriately the rich data that new sensors and techniques will let us collect. That is, we are more and more dependent on our scientific theories when we get data, not less. Did we even know to get methylation data when sequencing a genome? How about some other "ylation"? Without background theory and experience we wouldn't even know some of that stuff was there to collect!
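A cartoon of that trigger logic: the theory decides what gets stored before any analysis happens (fields, thresholds, and events below are entirely hypothetical):

```python
# Cartoon of a model-driven trigger: theory defines what a candidate
# event looks like *before* anything is written to disk. Fields,
# thresholds, and events are entirely hypothetical.
def trigger(event: dict) -> bool:
    # Keep only events the model flags as interesting: high combined
    # energy plus at least two muon candidates.
    return event["energy_gev"] > 100.0 and event["n_muons"] >= 2

stream = [
    {"energy_gev": 45.0,  "n_muons": 0},
    {"energy_gev": 180.0, "n_muons": 2},   # kept
    {"energy_gev": 130.0, "n_muons": 1},
]
kept = [e for e in stream if trigger(e)]
print(f"stored {len(kept)} of {len(stream)} events")
```

Change the theory and you change what gets kept; the "raw data" was never theory-free in the first place.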
WTF, be serious (Score:3, Insightful)
This is nonsense pure and simple.
One needs to acquire facts. Now these "facts" can come from your own research or, in the age of the internet, someone else's data, but they still need to be collected and verified.
The *only* advantage that Google provides is a more efficient way of sharing and finding facts. And not even all facts: those that are popular and topical are what you'll most likely find.
Historical information, from when newspapers only used dead trees, can be very difficult to find on the internet unless someone else did the research first.
this one needs a "haha" tag (Score:2)
What could possibly go wrong?
Nonsense. (Score:2)
This is grade school stuff. Correlation is not causation.
Which means if you're approaching a region you haven't sampled, then you can't understand what's going to happen because you've thrown away your interest in 'why [something] does what it does, [because] it just does it.'
If you're only using models as correlations or proxies, what are you using models for anyways? There's nothing 'increasingly' true about that.
WHAT? (Score:2)
A scientist doing an experiment still relies on the scientific method to collect his own data to see if they support his hypothes[ie]s. I really don't see anyone publishing a paper and saying "Dudes! I used Google to find my data points!" How the hell is Google going to stop people from doing experiments and finding their own data?
This article is complete crap. I don't think this person even understands what the "Scientific Method" means.
To paraphrase Mark Twain... (Score:2)
There's lies, damn lies, and statistics. Now "Clouds" of information.
Feh.
No. Science Scales. (Score:4, Informative)
Have we reached a time where all of our tool-sets are now made moot by vast clouds of information and strictly applied maths?
No. And also no to the basic premise of the article.
Meteorologists have been doing this for decades (principal component analysis has been a crucial tool there since the 1960's, and correlation analysis has been used in some form since the 1920's if not earlier) and so have the astronomers. Oh, and the particle physicists have been sifting data in their own way on a big scale ever since World War II.
As one of many examples, if you have ever heard of an "El Niño event," that was discovered through correlation analysis and is best understood through principal component analysis. BTW, the original work predates electronic computers and was all done by hand. The vast quantities of meteorological data require statistical analysis to make any progress at all, but that certainly does not mean that you cannot use the scientific method.
So, no, this does not invalidate the scientific method. In the Internet jargon, science scales.
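For anyone curious what that looks like mechanically, here is a minimal principal-component sketch, with synthetic series standing in for, say, gridded sea-surface temperatures:

```python
# Minimal PCA sketch: pull the dominant shared pattern out of a set of
# correlated time series. Synthetic data stands in for real gridded
# observations such as sea-surface temperatures.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(240)                               # 240 monthly steps
signal = np.sin(2 * np.pi * t / 48)              # one shared oscillation
X = np.outer(signal, rng.uniform(0.5, 1.5, 30))  # 30 "stations"
X += rng.normal(0, 0.3, X.shape)                 # measurement noise

X -= X.mean(axis=0)                              # center each station
U, S, Vt = np.linalg.svd(X, full_matrices=False) # PCA via SVD
explained = S**2 / np.sum(S**2)
print(f"first component explains {explained[0]:.0%} of the variance")
```

The statistics surface the dominant pattern; it still took a model of ocean-atmosphere coupling to say what an El Niño actually is.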
Consequence of the Post-Modern Age (Score:3, Informative)
In a sense, what the article is proposing is the "simulation" of reality in a computer system based on the available "data". This simulation, as I will suppose in a moment, is merely a flawed model, since the data being related must in some sense be based on an algorithm which inherently MIMICS reality and is not a substitute for it (no matter how "accurate" the agreement). But nonetheless, the result of this, as Baudrillard observed, is not a simulation but a simulacrum of reality, and eventually it will take the place of reality. The implication is that reality is not created or manufactured by the interaction of people in a "real" sense but is actually led by the operation of the simulacrum!
Nonetheless, the fact is there is no possible way to store ALL the data of the entire world (since some data is not recordable by a binary machine, and no, a "quantum" computer is not a solution that changes this); however, this fact does not mean we cannot be misled by the simulacrum and led into a future where human interaction is, as I would call it, inhuman - though some who have (in some cases unknowingly) fallen for the post-modern myth would call it merely an evolutionary result of human interaction.
In the future the storage of data, the usage of data, and the power of data will have a huge impact on our humanity, as the past twenty years should already be evidence of. I am not an apocalyptic fear-monger, but the proof is in the pudding. For further reading, I recommend a highly prescient book written in 1990 by Mark Poster called The Mode of Information, which talks about some of these implications, which are in the process of becoming as we speak.
When people say shit like this... (Score:4, Funny)
It means there's about to be an explosion in models and theoretical sciences. Always beware the End of History ;)
Houses will fall down, Tumors will go unchecked (Score:2)
Obvious ones are medicines and housing materials.
Important ones are accurate global warming models and electric battery efficiency tests.
This Wired jerkoff and his band of know-little acolytes think that because they can accomplish everything in _their_ day without science, it will die out.
This myopic self-centredness would not have yielded them a clear signal on their iPhone. Science did that.
This is a lazy way to work (Score:2)
I'm concerned that placing too much trust in such a model-less paradigm is dangerous. Why? There could be several reasons. Data can be artificially manipulated, for example. This can cause us to draw erroneous conclusions, and consequently, to make poor decisions. I would still want to know "why".
Foundation Series (Score:3, Interesting)
Anyway, are you catching the parallels here? The "search engine" is a great tool for gathering existing data - but our current tools help us:
1. Analyze that data
2. Gather more data
Can you honestly say that those aren't important anymore? The summary seems pretty crazy to me.
Wired. (Score:3, Funny)
You know, this may be the most pure Wired article I've read in a long time. Reminds me of the magazine's layout when it first came out. Complete bull, unreadable, unstructured, but slick.
I've never heard something so ridiculous (Score:3, Insightful)
Google used reams of data to get good at advertising and marketing, so Wired is using this ability to predict the end of SCIENCE?
Do they not realize the difference between these things? Advertising is extremely hand-wavy and vague in the best of circumstances - I would argue that Google's offerings aren't really better than any other method; they're just cheaper for advertisers, and have a much larger base than normal.
I'm honestly astounded at this.
"Paradigm agnostic" (Score:3, Insightful)
An unknowable paradigm? Interesting.
Modeling is the core of science (Score:4, Interesting)
I must admit, as an applied mathematician who makes models of physical things for a living, this sort of research threatens to steal my bread and butter. It may be self-centered, but I think modeling is, alongside experiment, half of science.
Simplified models are so valuable to our understanding because they tell us what information we can remove, which parts of a problem are important and which parts may be ignored. They allow us to not just make predictions, but they guide future experimentalists as to what sorts of changes will impact the system and which won't.
To be fair, it's more of a cycle: experiments generate data, models are constructed to explain the data. These models make predictions (and hopefully useful simplifications) that can be checked by further experiments to validate them. At the end of the process, we've produced a clearer picture of how a system works: maybe enough information for someone building something slightly different to skip testing the aspects covered by the model.
I view these data-mining techniques like the scientific computing techniques of the last 30 years or so, only the inverse. Sci Comp nerds wanted to do away with experiments. They thought they could numerically simulate (relatively) exact models (like Navier-Stokes for fluid motion rather than one of its more tractable, understandable simplifications) and use the generated data instead of experimental data. The trouble was that no one will believe that the crazy new phenomenon discovered by your program is real until they see it in the lab, until they construct a simplified model that has the same behavior -- i.e. the same science as before.
The new data-mining idea is the same, but for the modeling end of things. "No models, please," they say. They'll just data-mine the experimental results and "discover" whatever the model missed. Except people will want to do experiments to verify the discovery. They'll want to build models so they can know they're doing the right experiments, and so on.
In the end, I think Sci Comp and data-mining are fantastic new tools that have a lot to offer science, but I don't think either eliminates the need for old-fashioned modeling.
Re: (Score:3, Funny)
Petaphile [urbandictionary.com]
1) Someone who loves their pets more than human beings or, at the extreme, someone willing to kill a human to save a lower animal's life.
2) Somebody who has sex with animals because they cannot attract any humans, or they are attracted to animals
(and the best one)
3) someone so caught up in his own egomaniacle conception of the world that he is compelled to spew vomit and blood on a strangers clothes to show his contempt for anybody's thought but his own.
Which sounds kinda like the summary for the a