Science

Cornell Team Says It's Unified the Structure of Scientific Theories

An anonymous reader writes "Cornell physicists say they've codified why science works, or more specifically, why scientific theories work – a meta-theory. Publishing online in the journal Science (abstract), the team has developed a unified computational framework they say exposes the hidden hierarchy of scientific theories by quantifying the degree to which predictions – like how a particular cellular mechanism might work under certain conditions, or how sound travels through space – depend on the detailed variables of a model."

  • by deathcloset ( 626704 ) on Friday November 01, 2013 @02:10PM (#45302341) Journal

    http://en.wikipedia.org/wiki/Double_pendulum [wikipedia.org]

    There is so much redundancy in the universe. It looks chaotic to us, but I think everything is really just looping (orbiting/spinning) asynchronously, so it appears that all this complicated random stuff is happening when really it's just a crap-ton of super-simple systems interacting. I think science and reality are sometimes so obvious that we just can't see them - like air. The ancients knew there was wind, that they could blow paper off a table, and that it was hard to breathe at high altitudes, but it wasn't until Empedocles (500–435 B.C.) used a clepsydra, or water-thief, to show that air is a substance that they realized these were all the same thing.

    And gravity, the overused example, was thought by the ancients to be a set of unrelated actions and happenings - to quote Disney's "The Sword in the Stone":

    Merlin: Don't take gravity too lightly or it'll catch up with you.

    Arthur: What's gravity?

    Merlin: Gravity is what causes you to fall.

    Arthur: Oh, like a stumble or a trip?

    Merlin: Yes, it's like a stumble or a- No, no, no, it's the force that pulls you downward, the phenomenon that any two material particles or bodies, if free to move, will be accelerated toward each other.

  • by DriedClexler ( 814907 ) on Friday November 01, 2013 @02:30PM (#45302563)

    Scott Aaronson (of quantum computing fame) wrote a great paper on the implications of computational complexity theory [arxiv.org] for philosophy, and in it he addresses a related issue - why science should work at all - specifically Occam's Razor.

    He relates it to Valiant's PAC-learning model, which says that the more complexity your model class allows (higher VC dimension), the lower the probability that any theory you fit to the observed data will generalize correctly - hence less complex theories tend to hold up better outside the sample data.
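    To make that concrete, here's a toy sketch in plain numpy (my own illustration of the overfitting intuition, not Aaronson's or Valiant's construction): more flexible model families match the training sample at least as well, but extrapolate worse beyond it.

    import numpy as np

    # Fit polynomials of increasing degree to noisy samples of a simple law,
    # then check how each fit extrapolates outside the sampled range.
    rng = np.random.default_rng(0)

    true_law = lambda x: 2.0 * x + 1.0               # the "simple theory" behind the data
    x_train = np.linspace(0.0, 1.0, 12)
    y_train = true_law(x_train) + rng.normal(0.0, 0.2, x_train.size)

    x_test = np.linspace(1.0, 2.0, 50)               # outside the sampled range
    y_test = true_law(x_test)

    for degree in (1, 4, 8):
        coeffs = np.polyfit(x_train, y_train, degree)            # least-squares fit
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.3f}, extrapolation MSE {test_mse:.2f}")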

  • Possible answer (Score:5, Interesting)

    by Okian Warrior ( 537106 ) on Friday November 01, 2013 @02:48PM (#45302847) Homepage Journal

    My question is "how is this research more useful than a phone sanitizer?"

    I can't speak to the article because it's paywalled, but if you like I can answer your question from my impression of the abstract.

    Scientific theories are ultimately about data compression: they allow us to represent a sea of experiential data in a small space. For example, to predict the travel of a cannonball you don't need an almanac cross-referencing all the cannon angles, all the possible gunpowder charges, and all the cannonball masses. There's an equation that lets you relate measured numbers to the arc of the cannonball, and it fits on half a page.
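    To make the compression concrete (a toy sketch of my own, not anything from TFA): the whole almanac of angles, charges, and masses collapses into the drag-free textbook range formula R = v^2 sin(2*theta) / g, which regenerates any "almanac entry" on demand.

    import math

    def cannonball_range(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
        """Drag-free range of a projectile launched over flat ground.

        One short formula replaces a whole table of (angle, charge, mass) entries;
        in this idealized model the cannonball's mass drops out entirely.
        """
        theta = math.radians(angle_deg)
        return speed_m_s ** 2 * math.sin(2 * theta) / g

    # Regenerate a few "almanac entries" for a 120 m/s muzzle velocity:
    for angle in (15, 30, 45, 60):
        print(f"{angle:2d} deg -> {cannonball_range(120.0, angle):7.1f} m")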

    Scientific models are the same: they let us predict results from a simplified description. Freud's model, for example, says the mind contains an id, an ego, and a superego, each with its own goals and weaknesses, and from this we can predict the general behaviour of people.

    The problem is that we don't have any way to measure how good a theory is, or even whether it is any good at all - witness the second example above. This, combined with our society's desperate pressure to publish, has led to a situation where we cannot always tell whether a scientific finding is significant or even true.

    Some specific problems with science:

    .) There's no way to determine which observations are outliers that should be discarded: it's done "by eye" by the researcher.
    .) There's no way to determine whether the results are significant. Thresholds like "p < 0.05" are arbitrary, and by construction roughly 5% of tests where no real effect exists will still clear that bar by chance (see the simulation further down).
    .) There's no way to determine whether the data is linear or polynomial. That too is currently done "by eye" by the researcher.
    .) Linear and polynomial regression are based on minimizing squared error, a choice made (by Laplace, IIRC) for no compelling reason. LSE regression is "approximately" right, but it is frequently off and can be skewed by outliers &c. - see the sketch right after this list.
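    A minimal numpy sketch of that last point (my own toy data, nothing from TFA): a single bad observation drags a least-squares fit well away from the line the other points agree on.

    import numpy as np

    # How much can one outlier skew a least-squares fit?
    rng = np.random.default_rng(1)

    x = np.linspace(0.0, 10.0, 20)
    y = 3.0 * x + 2.0 + rng.normal(0.0, 0.5, x.size)   # true slope 3, intercept 2

    slope_clean, icept_clean = np.polyfit(x, y, 1)

    y_bad = y.copy()
    y_bad[-1] += 60.0                                   # one wild measurement

    slope_bad, icept_bad = np.polyfit(x, y_bad, 1)

    print(f"clean fit:        slope {slope_clean:.2f}, intercept {icept_clean:.2f}")
    print(f"with one outlier: slope {slope_bad:.2f}, intercept {icept_bad:.2f}")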

    (Of course, there are "proposed" and "this seems right" answers to each of these problems above. A comprehensive "theory of theories" would be able to show *why* something is right by compelling argument without arbitrary human choice.)
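    Here's a quick simulation of the p-value point above (my own sketch using scipy's standard two-sample t-test, not anything from TFA): run many experiments in which there is genuinely no effect, and about one in twenty still comes out "significant" at p < 0.05.

    import numpy as np
    from scipy.stats import ttest_ind

    # If the null hypothesis is true in every experiment, how often does the
    # conventional p < 0.05 threshold still declare a "significant" result?
    rng = np.random.default_rng(2)

    n_experiments, n_samples = 10_000, 30
    false_positives = 0
    for _ in range(n_experiments):
        a = rng.normal(0.0, 1.0, n_samples)   # both groups drawn from the same
        b = rng.normal(0.0, 1.0, n_samples)   # distribution, so there is no real effect
        _, p = ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    print(f"'significant' results with no real effect: {false_positives / n_experiments:.1%}")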

    To date, pretty much all scientific research is done using "this seems right" methods of correlation and discovery. This is not a bad thing; it has served us well for 450 years, and we've made a lot of progress this way.

    If we could pin those arbitrary choices down to a computable algorithm, it would greatly enhance and streamline the process of science.
