Science

Cornell Team Says It's Unified the Structure of Scientific Theories

An anonymous reader writes "Cornell physicists say they've codified why science works, or more specifically, why scientific theories work – a meta-theory. Publishing online in the journal Science (abstract), the team has developed a unified computational framework they say exposes the hidden hierarchy of scientific theories by quantifying the degree to which predictions – like how a particular cellular mechanism might work under certain conditions, or how sound travels through space – depend on the detailed variables of a model."
  • by BenSchuarmer ( 922752 ) on Friday November 01, 2013 @01:46PM (#45302089)
    42
  • by Anonymous Coward

    What I want to know is the science behind how this new theory works. Have they done any research into that?

    • by gweihir ( 88907 )

      I hear they are still working on it and are looking forward to training their successors as well! (And then the successors will do the same. And so on... ;-)

  • ...are by definition metaphysics.

    So perhaps this belongs in a philosophy journal, not a scientific one?
    • Too bad I don't have mod points to give.

      My question is "how is this research more useful than a phone sanitizer?"

      • by Zordak ( 123132 )
        Hey, don't knock the phone sanitizers. You never know when a worldwide telephone-borne epidemic might strike.
      • by icebike ( 68054 )

        Not much from what I can see.
        The buzzword-laden title suggests a whole lot more than the (limited) information in the article or the summary, and it all boils down to:

        they find that in an impossibly complex system like a cell, only a few combinations of those variables end up predicting how a system will behave.

        Which translates to separating the wheat from the chaff:
        After evaluating every variable you can find, only a few will turn out to be important.

        Well DUH!
        The statisticians figured this out a hundred years ago. Just about every statistical test invented is designed to figure out precisely which variables matter.

        Now if the good professor could just predict which variables will be important in advance, we could skip all this messy data collection and analysis and simply leap to conclusions.

        • by Prune ( 557140 )
          As usual, the devil's in the details. http://arxiv.org/pdf/1303.6738v1.pdf [arxiv.org]
        • by blueg3 ( 192743 )

          The buzzword-laden title

          Don't forget: that's the title of the ScienceBlog article.

          The title of the paper? Parameter Space Compression Underlies Emergent Theories and Predictive Models

        • by fatphil ( 181876 )
          > Now if the good professor could just predict which variables will be important in advance, we could skip all this messy data collection and analysis and simply leap to conclusions.

          Obligatory XKCD - the green ones.

          (No, I can't be arsed to find a link.)
      • Possible answer (Score:5, Interesting)

        by Okian Warrior ( 537106 ) on Friday November 01, 2013 @02:48PM (#45302847) Homepage Journal

        My question is "how is this research more useful than a phone sanitizer?"

        I can't speak of the article because it's paywalled, but if you like I can answer your question from my impression of the abstract.

        Scientific theories are ultimately about data compression: they allow us to represent a sea of experiential data in a small space. For example, to predict the travel of a cannonball you don't need an almanac cross-referencing all the cannon angles, all the possible gunpowder charges, and all the cannonball masses. There's an equation that lets you relate measured numbers to the arc of the cannonball, and it fits on half a page.
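
        To make the compression point concrete, here is a minimal Python sketch - my own toy illustration, not anything from the paper - showing how one half-line equation replaces the whole cannonball almanac. Note that the cannonball's mass drops out entirely, exactly the kind of irrelevant detail TFA is talking about.

          import math

          def cannonball_range(v0, angle_deg, g=9.81):
              """Range of a projectile on flat ground, ignoring drag: R = v0^2 * sin(2*theta) / g."""
              theta = math.radians(angle_deg)
              return v0 ** 2 * math.sin(2 * theta) / g

          # One equation replaces a cross-referenced almanac of angles, charges,
          # and masses -- the mass cancels out of the trajectory entirely.
          print(cannonball_range(100.0, 45.0))  # ~1019 m at the maximum-range angle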

        Scientific models are the same: they allow us to predict results from a simplified description. The brain contains an id, an ego, and a superego which have their own goals and weaknesses, and from this we can predict the general behaviour of people.

        The problem is that we don't have any way to measure how good a theory is, or even whether it is any good at all; viz., the second example above. This, together with our society's desperate pressure to publish, has led to a situation where we cannot always tell whether some science finding is significant or even true.

        Some specific problems with science:

        .) There's no way to determine which observations are outliers that should be discarded: It's done "by eye" of the researcher.
        .) There's no way to determine whether the results are significant. Thresholds like "p<0.5" are arbitrary, and 5% of those results will be due to random chance.
        .) There's no way to determine whether the data is linear or polynomial. It's currently done "by eye" of the researcher.
        .) Linear and polynomial regression are based on minimizing least-squares error, which was chosen arbitrarily (by Laplace, IIRC) for no compelling reason. LSE regression is "approximately" right, but is frequently off and can be skewed by outliers &c. (see the toy sketch below).

        (Of course, there are "proposed" and "this seems right" answers to each of these problems above. A comprehensive "theory of theories" would be able to show *why* something is right by compelling argument without arbitrary human choice.)
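
        As a toy sketch of that last point (my own made-up data, not from the article): a single outlier can drag a least-squares fit well away from a trend that a cruder, median-based estimate still recovers.

          import numpy as np

          # Fabricated toy data: y = 2x plus small noise, with one gross outlier.
          rng = np.random.default_rng(0)
          x = np.arange(10.0)
          y = 2.0 * x + rng.normal(0.0, 0.1, 10)
          y[-1] = 50.0  # the outlier (the clean value would be near 18)

          # Ordinary least-squares fit (np.polyfit minimizes squared error):
          ls_slope, ls_intercept = np.polyfit(x, y, 1)

          # A crude robust alternative: the median of all pairwise slopes (the Theil-Sen idea).
          pairwise = [(y[j] - y[i]) / (x[j] - x[i])
                      for i in range(len(x)) for j in range(i + 1, len(x))]
          robust_slope = np.median(pairwise)

          print(ls_slope)      # dragged well above 2 by the single outlier
          print(robust_slope)  # stays close to the true slope of 2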

        To date, pretty much all scientific research is done using "this seems right" methods of correlation and discovery. This is not a bad thing; it has served us well for 450 years, and we've made a lot of progress this way.

        If we could tack down the arbitrary choices to a computable algorithm, it would greatly enhance and streamline the process of science.

        • by Prune ( 557140 )
          A momentary web search for the title immediately returns the free preprint version: http://arxiv.org/pdf/1303.6738v1.pdf [arxiv.org]
        • by blueg3 ( 192743 )

          I can't speak of the article because it's paywalled

          How do people not know about arXiv [arxiv.org]?

        • by Anonymous Coward

          You make some good points, but I think the article falls far short in answering them. It basically boils down to principal component analysis for some simple models that we can compute analytically. If that's the answer to science, I need to get out...

        • Re: (Score:1, Flamebait)

          As a cyberneticist and information theorist I was right with you on science (or signal processing in general) being a form of (de)compression until you went bat-shit insane:

          The brain contains an id, an ego, and a superego which have their own goals and weaknesses, and from this we can predict the general behaviour of people.

          Prove it! When I look in a head I see a complex neuronal network. I don't find "id" or "ego" or "superego" or any other unfalsifiable bullshit.

          The problem is that we don't have any way to measure how good a theory is, or even whether it is any good at all

          Fool. How accurately the theory predicts actual outcomes in reality is the measure of a theory. As for your other philosophical bullshit: Protip: That's not a science. It's not based in reali

          • by narcc ( 412956 )

            I see that you have absolutely no scientific background. What's it like to be so sure of yourself with little to no knowledge of the subject?

        • by Anonymous Coward

          There's no way to determine which observations are outliers that should be discarded: It's done "by eye" of the researcher.

          Not exactly. Proper researchers use objective ways of deciding whether something is an outlier (see "studentized residuals" and "leverage"); they don't "eyeball" it. Also, just because a sample is an outlier doesn't necessarily mean it should be discarded: many times the most interesting data points are outliers (though they often _can_ be discarded if they result from technical artifacts, for instance).

          There's no way to determine whether the results are significant. Thresholds like "p < 0.5" are arbitrary, and 5% of those results will be due to random chance.

          Not exactly. First, no one uses "p < 0.5" as a threshold for anything (more like "p < 0.05").
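
          For the curious, here is a minimal numpy sketch (my own toy example with fabricated data, not the parent's) of the two diagnostics named above - leverage and internally studentized residuals - flagging a planted outlier with no eyeballing involved:

            import numpy as np

            # Fabricated data: a clean line with one planted outlier at index 10.
            rng = np.random.default_rng(1)
            x = np.arange(20.0)
            y = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, 20)
            y[10] += 8.0

            X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
            H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
            leverage = np.diag(H)                        # h_ii, each point's leverage
            resid = y - H @ y                            # raw residuals
            s2 = resid @ resid / (len(x) - X.shape[1])   # residual variance estimate
            studentized = resid / np.sqrt(s2 * (1.0 - leverage))

            print(np.argmax(np.abs(studentized)))  # -> 10: the planted outlier is flagged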

    • by Anonymous Coward

      True, but scientists are largely the ones who need to understand exactly how their theories work as theories, in order to make and explain them better. Science is built upon Empiricism, which is purely a philosophy about how we can get answers from the world. This is very similar: a way to organize those answers better. Most of this is obvious, but it is nonetheless important for actual scientists to understand.

      • It's not that obvious, or I wouldn't see so damned many posts on Slashdot where people insist that you can prove the scientific method itself from within science, and other such fallacies. Empiricism is not really at all the same as Naive Realism, and yet there are plenty of people here who argue as though they were classic Realists, but think they are arguing "Scientifically".

      • I don't think scientists need to understand how theories work in order to come up with better theories.
        In reality, you discover exact places where your theory does NOT work in order to develop a better theory.

        What they have discovered is statistical regression, not basic science. Sure, there are just a few factors of a cell that will predict--WITHIN A REASONABLE RANGE OF ERRORS--what a cell will do in the future.
        That doesn't mean you can build a cell with only those parts and nothing else. If you want a
    • by Anonymous Coward

      Perhaps it works as a sort of Nash equilibrium. Understanding reality tends to produce the best results in the long term. In the short term, people can play other hands from other positions, but in the long term the reality remains and the rest changes. Eventually, it is reality that is acknowledged.

      That is why science wins eventually and every time over superstition and ignorance. Non-science can only win if no players remain ;)

      • by mbkennel ( 97636 )

        "That is why science wins eventually and every time over superstition and ignorance."

        Unless accompanied by massive barbarian hordes.

        "Non-science can only win if no players remain ;)"

        That's an accepted strategy: off with their heads.
    • by DriedClexler ( 814907 ) on Friday November 01, 2013 @02:30PM (#45302563)

      Scott Aaronson (of quantum computing fame) wrote a great paper on the implications of computational complexity theory [arxiv.org] for philosophy, and he addresses a related issue, about "why should science work at all", specifically Occam's Razor.

      He relates it to Valiant's PAC-learning model, which says that the more complexity your model allows (higher VC dimension), the lower the probability that any theory you match to the observed data will correctly generalize; hence less complex theories tend to be more correct when going outside the sample data.
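
      For reference, the textbook PAC sample-complexity bound behind that claim (the standard form, not quoted from Aaronson's paper): to learn a hypothesis class of VC dimension d to error at most epsilon with probability at least 1 - delta, it suffices to see roughly

        m = O\left( \frac{d \log(1/\epsilon) + \log(1/\delta)}{\epsilon} \right)

      training examples. Richer classes (larger d) need proportionally more data before their fits can be trusted outside the sample - the formal teeth behind Occam's razor here.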

      • Thank you Sir,

        this is now at the top of my "to read" list, once I manage to push aside the usual clutter a bit.

      • by Prune ( 557140 )

          > the lower the probability that any theory you match to the observed data will correctly generalize; hence less complex theories tend to be more correct when going outside the sample data

          So what's old is new again, eh? The probabilistic justification for Occam's razor is far older than Mr Aaronson, dating back decades to work on decision trees. I suggest that next time you give credit where credit is due.

    • More logic than philosophy.
      Philosophy is unfortunately a pseudo-science, and is often in conflict with logic.

      • by fatphil ( 181876 )
        There's plenty of logic to be found in the gamut of topics under the umbrella called "philosophy", but basically no philosophy in what's called "logic". However, I admit I'm biased. I remember at university the pure mathematicians (inc. me) used to get particularly wound up by the philosophy grads - they really were wackos (who were indulging in metaphysics most of the time).
    • by ljw1004 ( 764174 ) on Friday November 01, 2013 @03:16PM (#45303257)

      If you take as axiomatic that all science should go solely in a science journal, and all discussion about science should go solely in a philosophy journal, and there exists science which is also a discussion about science -- then where should it go?

      The authors are making the claim that science can be used to discuss science, and they back it up with a decent analysis. Either their claim is wrong, or your axioms are wrong. You can't make this go away just by waving your hands about definitions.

      PS. The original definition of metaphysics was "the chapter in the book that came after [Greek: "meta"] the chapter on physics". So no, not metaphysics by this definition either :)

      • by fatphil ( 181876 )
        One can contrive a middle ground that bridges the gap. There was bugger all posted to this thread, which I found boring, so I'm glad I managed to whip up some interesting discussion. In particular, getting the Aaronson link was a win.
  • by TechyImmigrant ( 175943 ) on Friday November 01, 2013 @01:58PM (#45302197) Homepage Journal

    The abstract is a heck of a lot more clear than the description posted:

    "We report a similarity between the microscopic parameter dependance of emergent theories in physics and that of multiparameter models common in other areas of science. In both cases, predictions are possible despite large uncertainties in the microscopic parameters because these details are compressed into just a few governing parameters that are sufficient to describe relevant observables. We make this commonality explicit by examining parameter sensitivity in a hopping model of diffusion and a generalized Ising model of ferromagnetism. We trace the emergence of a smaller effective model to the development of a hierarchy of parameter importance quantified by the eigenvalues of the Fisher Information Matrix. Strikingly, the same hierarchy appears ubiquitously in models taken from diverse areas of science. We conclude that the emergence of effective continuum and universal theories in physics is due to the same parameter space hierarchy that underlies predictive modeling in other areas of science."

    • "The abstract is a heck of a lot more clear than the description posted:"

      It also actually makes sense. Looking at OP, I found myself thinking, "So? What's new about that?"

      The abstract is indeed much more clear and coherent.

      • by mbkennel ( 97636 )
        | I found myself thinking, "So? What's new about that?"

        Quantification, a reasonably precise definition of predictive power, and empirical/observational results.
        • No, you missed the point.

          I know what the paper is about. But the explanations given in OP and on ScienceBlog (OP's first link) are vague and, in the latter case, actually confuse the issue.
      • Or rather, not OP but the first link in OP. ScienceBlog's "explanation" seemed to confuse the issue more than explain it.
    • Scientific theories only work when the minute details don't significantly affect the macro behavior (and vice versa). That is, if there is a hierarchy of behaviors where theories can match the observations with some small uncertainty, the illusion of science is created by the assumed emergent continuum between apparently self-consistent levels of the hierarchy.

      Example of a simple hierarchy: the earth going around the sun is a macro-behavior, and testing molecular motion in a test-tube is a micro-behavior.

  • by Anonymous Coward

    Ph.D. == Doctor of Philosophy, which is exactly what this article sounds like to me.

    Now they have applied their tools to physics theories, and found a similar division into stiff and sloppy: The former, stiff rules, comprise the useful information pertaining to a high-level or coarse description of the phenomenon being considered, whereas the latter, sloppy ones, hide microscopic complexity that becomes relevant at finer scales of a more elementary theory. In other words, Sethna says, “We’re comi

  • by sinij ( 911942 ) on Friday November 01, 2013 @02:04PM (#45302269)
    Can someone explain this with a car analogy?
    • Re: (Score:2, Funny)

      by Anonymous Coward

      A quantum mechanic is a person who works on really tiny cars.

    • Easy. Cornell physicists say they've codified why cars work, or more specifically, why car theories work – a meta-car-theory. Publishing online in the journal Science (abstract), the team has developed a unified driving framework they say exposes the hidden hierarchy of car theories by quantifying the degree to which predictions – like how a particular cellular battery might work under certain conditions, or how sound travels through the car's subwoofer – depend on the detailed variables of a model.
    • Can someone explain this with a car analogy?

      In what some deem a massive effort of Not Invented Here syndrome, a Cornell team completes its quest to re-invent the wheel by independently rediscovering information theory.

      Sorry, it's too straightforward to understand, so I could only fit the wheel of the car in there.

  • Sound travels in a medium, not empty space.
    • by hAckz0r ( 989977 )

      Sound travels in a medium, not empty space.

      In some theories empty space _is_ a medium, so does this mean that some theories are just a little more sound?

  • Damn! That article might be centrally relevant to my research right now, but I can't tell from the abstract (it might also be an unrelated specific corner of physics).

    It's behind a paywall; they want money just to find out.

    Can anyone find a free copy that we can examine?

    (I'm wondering how useful it is to post news articles about papers that the public can't read. We could, as a group [i.e. - Slashdot], help promote open science by not publicizing closed-source articles.)

  • by deathcloset ( 626704 ) on Friday November 01, 2013 @02:10PM (#45302341) Journal

    http://en.wikipedia.org/wiki/Double_pendulum [wikipedia.org]

    There is so much redundancy in the universe. It looks chaotic to us, but I think that really everything is just looping (orbiting/spinning) asynchronously, so it appears that all this complicated random stuff is happening, but really it's all just a crap-ton of super-simple systems interacting. I think that science and reality are so obvious sometimes that we just can't see them - like air. The ancients knew that there was wind, that they could blow paper off a table, and that it was hard to breathe at high altitudes, but they didn't know that these were truly the same thing until Empedocles (500–435 B.C.) used a clepsydra, or water-thief, to discover air.

    And gravity, the overused example, was thought by the ancients to be a set of unrelated actions and happenings - to quote Disney's "The Sword in the Stone":

    Merlin: Don't take gravity too lightly or it'll catch up with you.

    Arthur: What's gravity?

    Merlin: Gravity is what causes you to fall.

    Arthur: Oh, like a stumble or a trip?

    Merlin: Yes, it's like a stumble or a- No, no, no, it's the force that pulls you downward, the phenomenon that any two material particles or bodies, if free to move, will be accelerated toward each other

  • Long since documented by our buddy Randall: http://xkcd.com/927/ [xkcd.com]

  • All that goes through my head is Bad Religion's "The Answer [wikia.com]". And yes, I know the song is referring to religious zealotry, but it just happens any time I hear anything about "the answer to everything is...".
  • Does their theory explain how their own theory works?

    'Cause otherwise it's really something else that "works".

    Like an opinion or belief or something subjective like that.
    • If their own theory explains how it works, then perhaps they can fix Gödel's logic problem ;-)

      As far as real science is concerned, I think physicists miss the idea of looking for limits. Predict or derive Chemistry from classical physics.
      It can either be done or not (I think it is impossible). If it cannot be done, then physics does NOT describe the entire universe.
      Get that layering of emergent reality locked into your head and perhaps physicists will find ways to decide which of the layers of qu
      • Is your suggestion that physics is not granular enough to describe the entire universe?
        • No, I am not looking at the granularity. I am examining fundamental assumptions.
          If you are going to have the basic building block of the universe, then that means it can build the _entire_ universe, including Chemistry and Biology.
          imo, we cannot derive either of those from physics so there is a limit to what physics can build of the universe.
          My 'logic' is that if there is a limit at one end, I suspect there is a limit at the other, that there are more separate layers that cannot imply or be derived from o
          • by lennier ( 44736 )

            If you are going to have the basic building block of the universe, then that means it can build the _entire_ universe, including Chemistry and Biology.
            imo, we cannot derive either of those from physics so there is a limit to what physics can build of the universe.

            That statement would raise a lot of eyebrows among the materialists. The whole point of computer simulations of chemical molecules is that the behaviour of substances can be derived from physics.

            But interestingly, I was startled to find that even something as fairly basic as the chemical properties of atoms isn't actually derived from the lower-level quantum-mechanical equations for those atoms, the way I thought it worked in high school. Although theoretically science is a layered model, with physics at th

          • Sure, physics can explain chemistry. I challenge you to provide one example of a chemical phenomenon that physics cannot explain.
  • by Prune ( 557140 ) on Friday November 01, 2013 @03:15PM (#45303243)
  • Scientific models tend to express a common computational relationship. That's because we like to quantify things in scientific models, and perhaps unsurprisingly, we have a fairly standard paradigm for quantitative analysis in our mathematical models: algebraic, geometric, and topological.

    The physicists here are discussing a feature of using information theory to generalize how certain fixed parameters can take values at different scales while still preserving most of their predictive structure. That's all.

  • The point of science ought to be to train you to think deductively, if your intellectual interests lie in the natural world. I am glad that most of the variables in most scientific models are irrelevant, and as others have commented, statisticians make much hay of this fact. But the next time someone comes along and shows why some tiny discrepancy in calculated values is actually due to some effect that nobody understood before, there will be tremendous ramifications. The most famous example would be that t
