
Edward Lorenz, Father of Chaos Theory, Dies at 90

Posted by timothy
from the what-are-the-odds-his-middle-name-was-norton dept.
An anonymous reader writes "Professor Edward N. Lorenz, who discovered in 1961 that subtle changes in the initial conditions of a weather simulation program could cause very large differences in its results, died of cancer Wednesday at the age of 90. The contributions of the father of chaos theory, who coined the term 'the butterfly effect' and also discovered the Lorenz Attractor, are best summarized by the wording of the Kyoto Prize in 1991 which noted that his discovery of chaos theory 'profoundly influenced a wide range of basic sciences and brought about one of the most dramatic changes in mankind's view of nature since Sir Isaac Newton.'"
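An aside for anyone who wants to see the effect for themselves: Lorenz's three-variable convection model is simple enough to integrate in a few lines. This is only an illustrative sketch (the conventional parameters sigma=10, rho=28, beta=8/3 and a hand-rolled Runge-Kutta integrator, not Lorenz's original program), showing two runs whose starting points differ by one part in a hundred million ending up in completely different states:

```python
import math

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(p, q):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(p, q)))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # differs by one part in 10^8
dt, steps = 0.01, 3000        # integrate to t = 30
for _ in range(steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
print(separation(a, b))       # now grown by many orders of magnitude
```

By t = 30 the two trajectories are about as far apart as any two random points on the attractor, even though the rule generating them is fully deterministic.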
  • by DamienRBlack (1165691) on Sunday April 20, 2008 @06:37AM (#23133580)
    Why aren't they reporting that his cancer was caused by a zebra sneezing in the UK last fall under a fig tree? It seems quite relevant.
    • by dreamchaser (49529) on Sunday April 20, 2008 @06:39AM (#23133588) Homepage Journal
      You might get modded troll or flamebait by the people who didn't understand your subtle reference. It could have been a butterfly taking off in New Mexico though. We aren't quite sure.

      A great man whose contributions will be remembered for centuries to come has passed. I think I'm going to fire up a fractal generator and play with Lorenz Attractors now.
      • Re: (Score:3, Interesting)

        by Anonymous Coward
        And it would be warranted. The 'butterfly effect' is a horribly misleading statement. Mathematical chaos applies to only certain deterministic systems, not real life. There is no evidence that the real world is a simulation or even if it was that it falls into the narrow range of non-linear dynamics problems that exhibit mathematical chaos. Lorenz's attempt at modeling the weather certainly exhibited mathematical chaos, but the model wasn't the weather itself.
        • by smallfries (601545) on Sunday April 20, 2008 @07:22AM (#23133744) Homepage
          Besides, decades of research have improved those models to the extent that we can accurately predict the weather anywhere up to 20 minutes into the future.
        • by maxume (22995)
          You mean the Zebra sneezes and it doesn't affect the gigawatt scale prevailing wind?!
        • But Agent Smith... you say that now but yesterday when Neo kicked your butt it was like "it's only a simulation, it's not like this is reality"...
        • Except it was just a reference to a man and his work, not a discussion on the validity of chaos theory as applied to dynamic systems. While you are more or less correct you do not have to be pedantic about it nor imply that the original parent was trolling.
        • by whit3 (318913)
          There are many real physical systems where the 'butterfly effect' is very evident, and the history of science includes prior art. About 200 years ago, Lagrange found in his orbital calculations of planets that the sensitivity of the solutions to errors in observation could be extreme. A century ago, the excessive sensitivity of ill-conditioned matrices provided a good model of the problem.

          Today we call it 'chaos theory', but Lorenz is just a recent worker in the field, not really the father...
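The matrix sensitivity mentioned above comes from ill-conditioning, a large ratio between a matrix's largest and smallest eigenvalues. A toy sketch of the effect, solving a nearly singular 2x2 system by Cramer's rule:

```python
def solve2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Nearly singular matrix: the two rows are almost parallel,
# so the eigenvalue ratio (condition number) is huge.
A = (1.0, 1.0, 1.0, 1.0001)

x1 = solve2x2(*A, 2.0, 2.0001)   # solution is (1, 1)
x2 = solve2x2(*A, 2.0, 2.0002)   # b changed by 1e-4 in one entry
print(x1, x2)                    # solution jumps to (0, 2)
```

A change of one part in twenty thousand in the right-hand side moves the solution from (1, 1) all the way to (0, 2).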
        • by mbkennel (97636)

          The 'butterfly effect' is a horribly misleading statement. Mathematical chaos applies to only certain deterministic systems, not real life. There is no evidence that the real world is a simulation or even if it was that it falls into the narrow range of non-linear dynamics problems that exhibit mathematical chaos.

          There is ample experimental evidence that chaos occurs all over physical reality.

          Lorenz's attempt at modeling the weather certainly exhibited mathematical chaos, but the model wasn't the weather itself.

          The model wasn't weather, obviously.

        • by Darby (84953)
          Mathematical chaos applies to only certain deterministic systems, not real life. There is no evidence that the real world is a simulation or even if it was that it falls into the narrow range of non-linear dynamics problems that exhibit mathematical chaos.

          There is no evidence that the earth is a simulation, but there are experiments demonstrating that some aspects of reality do follow the model.

          Mitchell Feigenbaum's period doubling cascade into chaos started out as a mathematical curiosity, but Albert Libch
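The period-doubling cascade mentioned above is easy to reproduce with the logistic map x -> r x (1 - x), the standard textbook example (an illustrative sketch, not Feigenbaum's actual computation):

```python
def long_run_period(r, x0=0.2, transient=2000, window=256):
    """Iterate the logistic map past its transient, then count the
    distinct values the orbit settles onto (its observed period)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(window):
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))   # merge values equal up to rounding
    return len(seen)

print(long_run_period(3.2))   # period 2
print(long_run_period(3.5))   # period 4, after one doubling
print(long_run_period(3.9))   # chaotic: no short period
```

Raising r walks the map through periods 2, 4, 8, ... at ever-closer thresholds; the ratio of successive threshold gaps approaches Feigenbaum's constant 4.669...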
      • Overrated (Score:5, Informative)

        by 2.7182 (819680) on Sunday April 20, 2008 @07:20AM (#23133738)
        It's controversial that he was the first. A lot of people worked on this area. In fact, it is controversial that chaos will ever contribute to science in any way. The pure mathematical theory is very hard. See the work of Curt McMullen, for example. Many people I know are very skeptical, and there are a lot of bad papers purporting to use chaos theory.
        • Re:Overrated (Score:5, Informative)

          by 2.7182 (819680) on Sunday April 20, 2008 @07:58AM (#23133848)
          OK, this is going to get some people pissed, but it is my honest opinion and I am not doing this to troll. There are a lot of scientific areas that are promoted to get people's careers going. In fact, they are largely vaporware. Here are some examples:

          1. Robotics. Most of academic robotics is pretty lame. The good people go into industry. Consider for example Marc Raibert and BigDog. (Look on YouTube.) This guy is a true genius, so he left MIT. Most robotics that you see in the media are really bad, like Alan Alda talking to a robot that "has emotions".

          2. Wavelets. First of all, they were invented a long time ago. They're just another choice of basis, and it's not clear they are the best for compression or denoising. Look closely and you will see that classical harmonic analysis provides a good competing answer. JPEG 2000 may be better than JPEG, but it's not clear whether that is due to the use of wavelets or to the fact that they had something like 40 people working on the lossless coding scheme, which is an ad hoc heuristic. And besides, how many of us are using JPEG 2000? Finally, people I know who work in the field say "I just use the Haar basis." Haar found this basis in something like 1912.

          3. Chaos. By definition hard to apply to experimental science. As mentioned, the mathematical theory is super hard; McMullen won a Fields Medal for it. Work by Sullivan and Douady is awesome, but they aren't claiming to connect it to experiments.

          4. Catastrophe theory. This was the 60s and 70s version of wavelets. Hardly mentioned in the media anymore, and mostly the people who work on it are pure mathematicians.

          5. Artificial intelligence. Goedel, Escher, Bach had our hopes up, but nothing ever happened. It's too hard. People claim breakthroughs all the time, but where's the beef?

          6. Computer vision. A total mess. They don't even read each other's papers and are busy reproducing each other's work, which tends to be hacks that work only in limited conditions. Remember the MIT face recognition program deployed at the Statue of Liberty after 9/11? It failed.

          • 1. It was the academic robotics teams that did well in the 2005 DARPA grand challenge (the top finishers were from Stanford and Carnegie-Mellon). Note that this also incorporated your #6, computer vision.

            5. AI continues to improve at a steady pace. It's not that "nothing ever happened". For example, the DARPA Grand Challenge was won and Kasparov was beaten by Deep Blue. I think you might be referring to "strong AI" which we won't get until about 2030 because computers simply won't be fast enough until the

            • Re: (Score:2, Offtopic)

              by Metasquares (555685)
              What you are saying is that we won't get AI until 2030 because computers won't be fast enough to brute-force it until then (and that's only if you believe Kurzweil). We could get there earlier if we came up with some clever ideas, but I don't have any doubts that the first AI is going to be an exact neuron-for-neuron reconstruction of the brain.
              • Re: (Score:3, Insightful)

                by bunratty (545641)
                I know Kurzweil makes similar claims, but I am not merely regurgitating his ideas. The computational power of the brain is massive. In order to have similar computational power in a computer, we will need to wait for many generations of improvements in processors. We will almost certainly need to move away from very powerful cores based on semiconductors timed by a clock towards smaller asynchronous processors. Even with those advancements, there's no way we'll be able to simulate a human brain in real time
                • by DeadDecoy (877617)
                  I don't think it's the brain's computational power that's slowing us down so much as a lack of understanding of how the brain works. I doubt that the full chemical pathways that occur when someone experiences something (and has that information stored and processed) have been described anywhere. Sure, we know what some of the general components are for long/short term memory and cognitive thinking, but we're still a long way off from understanding how that stuff works. It would be like understanding how a everythi
          • by SkyDude (919251)

            4. Catastrophe theory. This was the 60s and 70s version of wavelets. Hardly mentioned in the media anymore, and mostly the people who work on it are pure mathematicians.

            This would explain the onslaught of mega-disaster movies in the 70s.

            5. Artificial intelligence. Goedel, Escher, Bach had our hopes up. But nothing ever happened. It's too hard. People claim breakthroughs all the time, but where's the beef?

            This may be miles above my head, but how could AI ever exist? OK, I can be sold on the idea that there's a clever algorithm at work in AI, but, just like a human mind, choices would be limited to what the person has learned about. Doesn't that limitation apply to a computer's programming? If so, where's the intelligence?

            • Re:Overrated (Score:5, Insightful)

              by Xest (935314) on Sunday April 20, 2008 @11:46AM (#23134784)
              The first step in producing an intelligent system is creating something that can constantly take in inputs and react to them in some intelligent way.

              People overlook how important Turing's original successes in producing the early computers were toward this goal. The fact that he was able to create a machine that could continuously take in inputs and respond to them far more dynamically than mechanical systems was a good first step; the fact that we even have computers at all puts a major hurdle out of the way in producing intelligent systems.

              AI suffers in that the more we understand about intelligence, the less we actually attribute to intelligence. Intelligence is too often treated as some mystical thing that is unexplainable and just is, but at the end of the day it does come down to sets of processes and knowledge, albeit extremely complex ones! The problem is how to produce something capable of performing processes on par with the human brain, when the brain is a massively powerful system that we just don't have the technology to recreate artificially at that scale yet.

              Of course there's also the question of defining intelligence in the first place; different people explain intelligence in different ways, and many even redefine their understanding of it several times in a single day. Person X may decide someone is stupid and unintelligent one minute because they failed a simple English exam, yet decide their dog is intelligent the next because it lifted its paw on command. These moving goalposts are why intelligence so often gets treated with contempt: we could build a robot that lifts a robotic paw on the same voice command as a real dog, yet when the robot does it, it's no longer classed as intelligent. It's hard if not impossible right now to create a system capable of passing the English exam, but you can guarantee that as soon as we could, it would no longer be seen as an intelligent task, for the very reason that it had been handed off to a machine.

              AI isn't impossible by any measure; we just have to have realistic expectations for it and recognise when we've created a machine that has actually performed an intelligent task. There is never going to be a mystical machine seen as an amazing AI robot because it can walk like us, talk like us and act like us, simply because by the time we can produce such a machine we will understand it well enough that the mysticism is gone, and it's just another machine performing a task we now understand machines can perform.
              • It's hard to quantify awareness, and until we can do that we have nothing to measure, and therefore no science. Best to stick with applications, at least that way we'll make steady advancement and perhaps approach the goal.

                Remember it's mostly a matter of interpretation, of semantics. To ask whether a machine can think is like asking whether a submarine can swim.

          • Re: (Score:2, Informative)

            by protobion (870000)

            OK, this is going to get some people pissed, but it is my honest opinion and I am not doing this to troll. There are a lot of scientific areas that are promoted to get people's careers going. In fact, they are largely vaporware. Here are some examples:

            1. Robotics. Most of academic robotics is pretty lame. The good people go into industry. Consider for example Marc Raibert and BigDog. (Look on YouTube.) This guy is a true genius, so he left MIT. Most robotics that you see in the media are really bad, like Alan Alda talking to a robot that "has emotions".

            2. Wavelets. First of all, they were invented a long time ago. They're just another choice of basis, and it's not clear they are the best for compression or denoising. Look closely and you will see that classical harmonic analysis provides a good competing answer. JPEG 2000 may be better than JPEG, but it's not clear whether that is due to the use of wavelets or to the fact that they had something like 40 people working on the lossless coding scheme, which is an ad hoc heuristic. And besides, how many of us are using JPEG 2000? Finally, people I know who work in the field say "I just use the Haar basis." Haar found this basis in something like 1912.

            3. Chaos. By definition hard to apply to experimental science. As mentioned, the mathematical theory is super hard; McMullen won a Fields Medal for it. Work by Sullivan and Douady is awesome, but they aren't claiming to connect it to experiments.

            4. Catastrophe theory. This was the 60s and 70s version of wavelets. Hardly mentioned in the media anymore, and mostly the people who work on it are pure mathematicians.

            5. Artificial intelligence. Goedel, Escher, Bach had our hopes up, but nothing ever happened. It's too hard. People claim breakthroughs all the time, but where's the beef?

            6. Computer vision. A total mess. They don't even read each other's papers and are busy reproducing each other's work, which tends to be hacks that work only in limited conditions. Remember the MIT face recognition program deployed at the Statue of Liberty after 9/11? It failed.

            And I trust you speak of this from your expertise in every one of those fields?

            I'll talk about only what I know. In biology, from ecosystems to cellular processes, one sees a lot of non-linear dynamics, much of it appearing to conform to chaos theory. Chaos theory is believed to be the closest thing we have to explaining those phenomena. And yes, it is challenging.

          • Huh? (Score:2, Offtopic)

            by Xest (935314)

            "5. Artificial intelligence. Goedel Escher Bach had our hopes up. But nothing ever happened. It's too hard. People claim breakthroughs all the time, but where's the beef?"

            You sound like you're a hook, line, and sinker victim of the AI effect [wikipedia.org]. I'm no expert in the other areas you mention, but in terms of AI you truly don't seem to understand the subject at all. There are plenty of examples out there of fields where AI has been extremely successful and is used on a daily basis - data mining, medical diagnosis, s

          • by genmax (990012)
            Wavelets - yes, it is "just" a choice of basis, but a good one for images because it admits a sparse representation of image features. Sharp edges, which are essentially very high frequency and would take a lot of significant Fourier coefficients to reconstruct, are much more compactly represented in the wavelet basis. Sparse representation implies better compression and a higher signal-to-noise ratio for de-noising.
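The sparsity point is easy to illustrate with a toy one-dimensional unnormalized Haar decomposition of a step edge: only a handful of coefficients come out nonzero, which is exactly why wavelet bases represent edges compactly. (A sketch only; the actual JPEG 2000 transform uses different, biorthogonal wavelets.)

```python
def haar(signal):
    """Unnormalized 1D Haar decomposition: recursively split the signal
    into pairwise averages and differences. Length must be a power of 2."""
    coeffs = []
    s = list(signal)
    while len(s) > 1:
        avgs = [(a + b) / 2.0 for a, b in zip(s[0::2], s[1::2])]
        diffs = [(a - b) / 2.0 for a, b in zip(s[0::2], s[1::2])]
        coeffs = diffs + coeffs   # prepend, so coarser scales come first
        s = avgs                  # recurse on the half-resolution averages
    return s + coeffs             # [overall average, coarse ... fine details]

edge = [0, 0, 0, 1, 1, 1, 1, 1]   # a sharp step
c = haar(edge)
nonzero = sum(1 for v in c if abs(v) > 1e-12)
print(c, nonzero)                 # only 4 of 8 coefficients are nonzero
```

At each scale, only the one average/difference pair that straddles the edge produces a nonzero detail coefficient, so the cost of an edge grows like log(n) rather than n.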

            I'm sorry, but your high-handed dismissal of all of these research areas (Computer Vision - hea

          • Re: (Score:1, Insightful)

            by Anonymous Coward
            By using the word 'vaporware' you are indicating that you fail to understand basic research. Basic research is about knowledge, not about products; it serves as a basis for many real-world applications and opens paths to new knowledge. For basic research to work, the researcher needs a goal, a focal point that might be reachable in the far future, to work toward. This 'dream' keeps them focused and interested in an otherwise boring topic.

            I don't think many people didn't see the use of G
          • by fm6 (162816)
            Basically, you're complaining that all the ideas you mention are overhyped and are dominated by inept hacks.

            You're absolutely right. But so what? It doesn't detract from the achievements of the brilliant people who founded these fields. Let's stop and appreciate Edward Lorenz's achievements, and save Fixing Academic Science for another day.
        • Re: (Score:3, Informative)

          by jmichaelg (148257)
          it is controversial that chaos will ever contribute to science in any way.

          I agree that a lot of chaos work produced not much more than chaos. But sometimes a paper can tell you what results to discard out of hand and that in itself is a contribution. From his seminal 1963 paper [allenpress.com],

          When our results concerning the instability of nonperiodic flow are applied to the atmosphere, which is ostensibly nonperiodic, they indicate that prediction of the sufficiently distant future is impossible by any met

          • Chaos is a process of descending down paths of probabilities, not a result.

            Quantum theory and the indeterminacy of states means that the actual state of a system has to be known, it cannot be calculated.
        • by Schemat1c (464768)

          It's controversial that he was the first. A lot of people worked on this area. In fact, it is controversial that chaos will ever contribute to science in any way.
          Yes, the whole history of this idea is very... chaotic?

          *ducks*
        • Re: (Score:3, Interesting)

          by DynaSoar (714234)
          > ... it is controversial that chaos will ever
          > contribute to science in any way. ... there are
          > a lot of bad papers purporting to use chaos
          > theory.

          From my own field:

          Supporting your assertion -- one characterization of chaos is measuring the fractional (i.e. 'fractal') dimensionality of the phenomenon. Someone estimated the dimension of the human cortex, with its convolutions embedded within convolutions. They plugged in numbers and got a result. But what's the point? What good does it do? There's
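For readers wondering what "measuring a fractional dimensionality" involves, the usual tool is box counting. A sketch on a set whose answer is known in closed form, the middle-thirds Cantor set (dimension ln 2 / ln 3), rather than on cortex data:

```python
import math

def cantor_left_endpoints(level):
    """Left endpoints of the 2**level intervals at the given stage of
    the middle-thirds Cantor set construction."""
    points = [0.0]
    width = 1.0
    for _ in range(level):
        width /= 3.0
        # each interval splits into its left third and right third
        points = [p for q in points for p in (q, q + 2.0 * width)]
    return points

level = 8
eps = 3.0 ** (-level)                  # box size matched to the construction
pts = cantor_left_endpoints(level)
boxes = {round(p / eps) for p in pts}  # each endpoint sits on a box boundary
dim = math.log(len(boxes)) / math.log(1.0 / eps)
print(len(boxes), dim)                 # 256 boxes; dimension = ln 2 / ln 3
```

Counting N(eps) boxes of size eps and fitting log N(eps) against log(1/eps) gives the box-counting dimension; here it comes out to ln 2 / ln 3, about 0.631, as theory predicts.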
        • by anilg (961244)
          Hehe.. yeah. Too bad he didn't patent his theory, he could have collected large royalties from MS over Vista :)
    • Re: (Score:3, Funny)

      by Digestromath (1190577)
      I think the cancer angle is all wrong for a headline.

      The sensationalist news should report "Butterfly goes on rampage, slays notable mathematician. Police seek public's help in finding strange attractor."

      Or is it really just too soon for jokes? How do we as a community honour someone who has made such great contributions? Contributions no doubt to be misremembered by pop culture (thanks a lot, Hollywood).

      Perhaps he would be fondly remembered through an internet meme.

    • ...if he doesn't die next year at the same time, we can finally be sure he was on to something. :)
  • by 26199 (577806) * on Sunday April 20, 2008 @06:37AM (#23133582) Homepage

    ...and also one that's fun to play with [exploratorium.edu] (needs java).

    • I'll say. [triumf.ca]
      In case you missed it, I'll say it again. [wikipedia.org]
      There are several programs out there, most are freeware, a few are open source, but the one that sticks out in my mind and easily the best and most powerful of the bunch is FractINT. For the past fifteen years I've been playing with this gem, and I'm still finding new stuff I can do on it. Some of it isn't even covered in the 580-page manual.
    • by kramulous (977841) *
      Woohoo! When I click on the canvas I only get a straight line (vertical, drawing towards the top). What's so special about that?

      Yes, I'll go now.
  • by Anonymous Coward
    How come this news gets on /. so late?
  • I don't like toot my own horn but I've studied chaos theory and made some significant findings over the years.

    My best work has been realised over nights of heavy intoxication, especially between 18 and 23 years of age. This work requires a lot of effort and is usually conducted on Friday and Saturday nights. I can't believe just how many gifted mathematicians there are on these nights. So much research, so many beers.

    However, these days I'm a bit more relaxed and allow the younger crowds to take over.
    • I believe that I may have worked with some of your collaborators.
    • by ericvids (227598)
      > I don't like toot my own horn

      I don't like toot either, but he's YOUR horn so I couldn't care less. /chaos ensues

      (It's funny. Laugh.)
  • He got his geography wrong. Butterflies in Brazil do not lead to hurricanes in Texas. As you can see on this [wikipedia.org] graphic on this page [wikipedia.org], there is practically no hurricane activity in the South Atlantic.

    Most hurricanes that would hit Texas all originate as storms over West Africa.

    I wonder why Lorenz didn't use Africa in the title of his paper instead of Brazil.
    • Re: (Score:1, Funny)

      by Anonymous Coward
      whoosh... that's the sound of a butterfly flying completely over your head...
  • by wickerprints (1094741) on Sunday April 20, 2008 @07:27AM (#23133758)

    Back in my college days, I visited the library and looked up Lorenz's paper, "Deterministic Nonperiodic Flow." On the face of it, the presentation was not particularly striking, nor did it seem significant on a superficial reading. That it was buried in a meteorology journal, rather than a mathematics or physics journal, only further obscured its importance.

    Lorenz's discovery was not so much about the specific nonlinear differential system (now named after him) that he discussed in the paper, nor was it about chaos theory as we now know it. The significance lay entirely in the notion that even simple dynamical systems can display sensitive dependence on initial conditions, and that when extrapolated to real-world phenomena, the intrinsic complexity of their behavior was all but inevitable.

    A chaotic system is not merely disordered, or random. There is an underlying structure. Call it a kind of orderly disorder. Prior to (and indeed, for some time after) Lorenz's work, physicists largely dismissed this possibility as absurd. We can, in such a system, model its state at some infinitesimal time t+dt after some given state at time t. We can do this quite accurately. But as Lorenz showed, the deterministic property is insufficient to imply that one can know the state of the system at any arbitrary time in the future. There is a difference between knowing how the future is calculated from the past, versus knowing what the future will actually be.
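The distinction drawn here (accurate one step ahead, hopeless far ahead) can be checked numerically: refine the integrator's step size and the short-horizon state barely moves, while the long-horizon state moves completely. A sketch with the Lorenz equations (conventional parameters sigma=10, rho=28, beta=8/3 and a hand-rolled RK4 integrator; the specific horizons are illustrative choices):

```python
import math

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def deriv(x, y, z):
    """Lorenz system right-hand side."""
    return SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z

def integrate(state, t_end, dt):
    """Classical RK4 from t=0 to t_end with fixed step dt."""
    x, y, z = state
    for _ in range(int(round(t_end / dt))):
        k1 = deriv(x, y, z)
        k2 = deriv(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], z + 0.5*dt*k1[2])
        k3 = deriv(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], z + 0.5*dt*k2[2])
        k4 = deriv(x + dt*k3[0], y + dt*k3[1], z + dt*k3[2])
        x += dt / 6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt / 6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        z += dt / 6.0 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return x, y, z

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

start = (1.0, 1.0, 1.0)
# Same start, two step sizes: compare where they disagree.
short = dist(integrate(start, 1.0, 0.01), integrate(start, 1.0, 0.005))
long_ = dist(integrate(start, 30.0, 0.01), integrate(start, 30.0, 0.005))
print(short, long_)   # short horizon agrees; long horizon does not
```

The rule from t to t+dt is known exactly, and the numerics confirm it; what is lost is only the distant future.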

    Hence the chosen title. "Deterministic" = future states are well-defined from a known prior state. "Nonperiodic" = does not display cyclical behavior. "Flow" = fluid dynamics, in Lorenz's case, atmospheric convection.

    He is truly missed.

    • by S3D (745318) on Sunday April 20, 2008 @08:59AM (#23134030)
      "Flow" in the title of Lorenz's paper is not a flow from fluid dynamics or physics. It's a purely mathematical term meaning a solution of a differential equation (the Lorenz equation in this case). In the more general sense, a flow is a group action of R on a manifold [wikipedia.org], that is, a solution of a differential equation on a curved surface. It's studied by specific branches of mathematics, differential (topological) dynamics [wikipedia.org], which in large part owe their origin to the Lorenz paper. So the title of the paper really means "Deterministic Nonperiodic Solutions".
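The group-action property behind the word "flow" (evolving for time t and then for time s is the same as evolving once for t + s) can be checked on the simplest equation with a closed-form flow, x' = -x, whose flow map is phi_t(x) = x * exp(-t):

```python
import math

def phi(t, x):
    """Exact flow map of the ODE x' = -x."""
    return x * math.exp(-t)

x0 = 3.7
t, s = 0.8, 1.5
composed = phi(s, phi(t, x0))   # flow for time t, then for time s
direct = phi(t + s, x0)         # flow once for time t + s
print(composed, direct)         # the two agree: phi_s . phi_t = phi_(t+s)
```

The same identity holds (up to discretization error) for numerical solutions of the Lorenz equations; it is what makes "solution of the ODE" and "action of R on the state space" two descriptions of one object.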
    • by jlcooke (50413)
      Poincaré knew of such systems when he worked on the n-body problem. Thing was, he died before Lorenz was ever in school. So Lorenz is given the title of "father of chaos" when it's not appropriate to do so, imho.
  • by Fallen Andy (795676) on Sunday April 20, 2008 @08:11AM (#23133888)
    Time travelling hunter strays off path and kills a butterfly ... A Sound of Thunder [wikipedia.org]

    Andy

  • Evidently dying on Wednesday does not result in a Slashdot post until Sunday? That is one slow effect...
  • It's a leaf in the forest.
  • I do not want to belittle Lorenz's major contribution to chaos theory, but the concept (if not the word) of chaos had long been fairly well grasped by Poincaré (1890) and Hadamard (1898).
    • Re: (Score:1, Informative)

      by Anonymous Coward
      No, they discovered chaotic systems but did not identify chaos theory. Their discoveries are analogous to Kepler's equations of planetary motion versus Newton's formal theory of gravity. Or Faraday's studies of the electric and magnetic fields versus Maxwell's formal electrodynamic theory. Hadamard, Lyapunov, and Poincaré made great contributions, but they did not found chaos theory. They observed individual problems but failed to piece it together in the more general mathematical theory. I don't
  • Doesn't anybody see the irony here?
    • by crossmr (957846)
      er.. no?

      You might be thinking of coincidence... or really something else entirely... lots of people die of lots of things..
  • An experiment (Score:5, Informative)

    by Coppit (2441) on Sunday April 20, 2008 @10:12AM (#23134332) Homepage
    Here's a simple chaos experiment you can do at home: turn on a faucet slightly so that it drips regularly. Then increase the flow slightly, and pretty soon the drips will come out in a non-regular way. Understanding the transition from regular to irregular is part of what chaos theory is about.
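A numerical analogue of the faucet experiment: take the logistic map as a stand-in for the drip dynamics (an illustration only, not a model of real faucets) and estimate its Lyapunov exponent, the orbit average of ln|r(1 - 2x)|. The exponent changes sign as the parameter grows, which is the regular-to-irregular transition; at r = 4 the exact value is known to be ln 2:

```python
import math

def lyapunov(r, x0=0.3, transient=1000, n=100000):
    """Estimate the Lyapunov exponent of the logistic map x -> r x (1-x)
    by averaging the log of the local stretching factor along an orbit."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov(2.5))   # negative: the 'dripping' is regular
print(lyapunov(4.0))   # positive, close to ln 2: chaotic
```

A negative exponent means nearby drip timings converge (regular dripping); a positive one means they diverge exponentially, the signature of chaos.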
  • Fractint (Score:2, Informative)

    by Alarindris (1253418)
    I saw a program on the Mandelbrot set and Lorenz attractor on PBS in the late 80's. Completely changed the way I thought about the world. Also where I discovered Fractint.

    http://www.fractint.org/ [fractint.org]
  • by rakzor (1198165)
    Famous scientists in their 90s have been dying lately.
  • There once was a man named Fisk
    whose thrust of the sword was so brisk
    that with the speed of the action
    the Lorentz-FitzGerald contraction
    reduced his rapier to a disk!
  • by thedrx (1139811)
    What is your Erdos-Bacon number?
  • by iluvcapra (782887) on Sunday April 20, 2008 @03:42PM (#23136124)
    Chaos: The Making of a New Science [amazon.com]. Tells the entire story of Lorenz's discovery, in gory detail, down to the fact that he used a Royal McBee computer to run his original weather simulation, the same computer as in the famous hacker "Story of Mel".
  • Whoa, talk about a coincidence.
  • If a butterfly can cause a hurricane, what can an entire air corridor of passenger liners do?
    Did we notice a change in the weather in the days after 9/11 when the planes were grounded?
    Has the last few decades of jet air travel caused the weather system to adapt such that reducing the number of flights (like 800 jets grounded for safety inspections) have a greater effect than leaving them flying?
    Could the rapid swings in weather, (higher highs/lower lows) be caused by the aircraft Giga-Butterfly Effect (a
  • I mean, seriously... cancer? At age 90? How predictable...

    I just wish he could have had a "Final Destination" style death... [youtube.com]
  • I got a first-class example of this in my game, where I have dumb bots following simple paths with nodes. That's all they do: follow paths. But in the engine they are subject to the vagaries of the physics and sim system. I have long noticed how even the slightest change in initial orientation or inertia can have wildly significant effects on the precise paths the bots take, or whether they can complete them at all. The system, in fact, is sufficiently complex, that even background
  • by AP31R0N (723649)
    OffTopic Snobbery:

    He dies at 90? Is that because it is going to happen in the future? "The train arrives at noon." Or is it dies habitually? "Tony Hawk skates."

    It seems more likely that "Lorenz died at the age of 90". Or "Lorenz is dead at age 90".

    Do people go stupid when someone dies and forget verb tense because they are so wracked with grief for a stranger?

