Science

Distributed Computing World Climate Simulation 287

Burnt Offerings writes: "The BBC reports that scientists at climateprediction.com are nearing completion and late-summer public release of a distributed computing project that simulates the world's climate from 1950-2050 AD. It seems that each user's simulation will have different initial conditions built into their runtime simulation, and a single completed simulation from 1950-2050 AD takes, on average, eight months (Doh!), assuming average household computing power. The results will be sent back to the project's team, where they will select the models that resulted in the 'real' climate patterns that have occurred from 1950-2000. I presume they will then use these validated models to help extrapolate the world's climate from 2000-2050. Pretty cool (or should I say warm? or hot?)."
This discussion has been archived. No new comments can be posted.

Distributed Computing World Climate Simulation

  • doesn't a year have 12, not 8 months?
  • ... for me. I couldn't vote on the latest Megahertz poll, my stately 33 MHz 486 didn't even have a category in which to put it.

    For me, a single completed simulation from 1950-2050 AD should take a little over 100 years. Can't wait to get started.

    With luck, however, I should get the right answer.
  • end result (Score:4, Funny)

    by Graspee_Leemoor ( 302316 ) on Tuesday May 07, 2002 @09:08PM (#3481663) Homepage Journal
    The end result of the project:

    "On 1st January, 2050, it will start rather cloudy with outbreaks of rain, mainly in the north. These will clear up by late afternoon, leaving it warm with mild breezes in most of the country."

    graspee

  • Oh boy.... (Score:2, Funny)

    by Ooblek ( 544753 )
    More one-liners to add to the fortune cookie!

    On this day in 1950, it was raining. The rain was as pure as Evian.

    On this day in 1980, it was raining. The rain was as pure as the innards of a Duracell battery.

  • If it wasn't, we'd have accurate forecasts up to a few months in advance. As it is, I find forecasts are routinely wrong about even tomorrow's weather. What happened to the whole "butterfly flapping its wings in Singapore affects the weather in Kansas" thing? I don't see how initial conditions would tell them much; I bet even random quantum events have a very strong influence on weather models over 50 years. I'd put the odds of success for this distributed computing project around the same as SETI's.

    • Weather != Climate (Score:5, Informative)

      by cperciva ( 102828 ) on Tuesday May 07, 2002 @09:20PM (#3481753) Homepage
      Weather is chaotic, but climate is ... well, ok, climate might be chaotic, but we really don't know -- and if it is chaotic, it is still only chaotic on timescales of more than 50 years.

      Predicting climate 50 years in the future is a computationally difficult task, but it isn't impossible the way that predicting weather would be.
      • What is climate but (basically dumbing it down) taking the average of the last x number of years of weather to define the norm? So, to define what the climate is fifty years into the future, one would have to look at the weather for each of those years. I agree that is no small task.

        I must take issue with the parent post, though. I agree that weather is a chaotic system, very much so. But all aspects of weather can be parameterized, even the most chaotic ones. The key here is a matter of scale. The mesoscale-type systems are extremely hard to model, but if you take a global system (long wave patterns), you will have a much better time modeling them. How? You throw out the small-scale stuff like your butterfly and such. On a global scale, something like that would quickly disappear into the larger scale. That is why global models (like the MRF, NOGAPS, and such) work better out farther (those models run out to 384 hours as opposed to smaller-scale models that run out 84). Verification rates are acceptable for those models out that far (numbers I cannot quote off the top of my head). They could do better, but they would require more time to process and would not be useful to the operational meteorologist.

        Since this distributed system will run over eight months and on such a large scale, the results will be useful.
        • Dumbing it down even further; Climate is what we expect, weather is what we get.

      • Climate - The condition of a place in relation to various phenomena of the atmosphere, as temperature, moisture, etc., especially as they affect animal or vegetable life.

        Sorry to fall back to dictionary definitions, but this sure sounds like weather to me. Maybe averaged on a longer time scale, but it's still quite obviously a chaotic system. We've found loose correlations with sunspots, deforestation, etc., but even very large trends like the "little ice age" of 1500 AD are unexplained and most likely chaotic. If we can't explain hundreds of years of pronounced trends, I don't see how we can do anything with the relatively uneventful last 50 years.

      • Predicting climate 50 years in the future is a computationally difficult task, but it isn't impossible the way that predicting weather would be.

        Perhaps it's not impossible, but no-one has been able to do it yet. That's why they're resorting to this...

        Can anybody read between the lines here? They're essentially saying, "Every climate model we have (that predicts global warming) wasn't able to accurately predict the global warming 1900-2000. We're fresh out of ideas so let's run a couple of million models with varying random values. When one of them (inevitably) comes pretty close we can cling to that as "proving" it to be a working model and use its results as convincing evidence that we must cut CO2 production or we will all die."

        I'm not giving these jokers a minute of my CPU time. They are guessing. They don't have a workable model so instead of trying to keep thinking they're in a rush to get a "verified" (by passed events) model within a year so they can try to use the results to push their political agenda. The fact that a few of the millions of models they run correctly guesses the last 50 years of climate change is no indication it will predict future climate change unless there is a reasonable belief that the model was based on some logic. These models are based on random guesses at chaotic values.

        Trust me, the results are already known. It will show global warming for 2000-2050. Can you imagine the coup if the random model that happened to guess 1950-2000 also showed global cooling of 5 degrees in the next 5 decades? How much you wanna bet that that result would NEVER see the light of day...

        Spend your CPU cycles on SETI...

    • Extrapolation is usually not very reliable. In most of these chaotic systems, the fact that a model accurately predicted what has happened in the last 50 years does not mean very much, because of the following factors:

      (a) there may not be enough models that have been run, so we may pick something that "seems close"
      (b) running a 50-year (or rather, 100-year) simulation in 8 months on small computers means that the model is not going to be very sophisticated
      (c) there are random parameters, such as volcanic eruptions, man-made emissions, deforestation/afforestation, etc., that won't get into the model properly

      A prof of mine told us this in class: In the good old days, many mechanical engineers came up with formulae for heat transfer in pipes under various conditions. The formulae matched experimental data almost perfectly. They started extrapolating the results. Eventually, they found out that *ALL* those extrapolations violated the second law of thermodynamics -- and they went back to just interpolating.

      S
  • Infeasible (Score:4, Insightful)

    by Enonu ( 129798 ) on Tuesday May 07, 2002 @09:09PM (#3481688)
    Blame the climate changes from 1950 to 2000 on the expanded use of the automobile and unregulated industrial waste. Do you think any scientist in 1950 could have known about our current situation? How can we in 2000 know about the new problems that'll creep up between now and 2050?

    Spend your extra CPU cycles computing the cure for cancer or finding ET. I doubt this will prove useful.
    • Re:Infeasible (Score:4, Insightful)

      by Papineau ( 527159 ) on Tuesday May 07, 2002 @09:52PM (#3481940) Homepage
      I think what they want to do is, given the state in 1950 and what we know about the inputs (use of automobile, etc.), find which models predict correctly what we see in 2000, so that those models can then be used, along with some new inputs, to forecast what 2050 will be. Either prospective inputs, to get a glimpse of our possible futures, or actual inputs, to further validate the model or get a sharper view of the future climate.

      That being said, 8 months is way too long to get something useful. I know a couple of friends who reinstall their OS (and apps) more often than that, and don't really bother with bringing all their data along, just some backup on CDR "in case I really want it again". I think they could at least chop it into periods of a few years, so that if you finish a "unit", somebody else can then pick up where you left off. I'd like to see the completion efficiency of whole units in a few months.
  • by Chayce ( 199487 )
    Now they will be able to prove with computer simulations what the weather will be like tomorrow and still be wrong... Gosh, what a marvel of modern technology. I wouldn't be such a cynic except there are so many variables that go into weather predictions that any attempt is still a guess...
  • I'm always a little sketchy on joining these distributed computing projects.

    Something about my computer time being put to work so that a bunch of scientists can invent a new drug and make lots of money; or put out a new study and get some fame. It just doesn't seem right.
    • I run NO Distributed Computing (DC) project unless it follows these rules:

      1. Must Be Non-Profit. If it is for Profit I Must get a cut.
      A. example: Seti@Home is run by the University of California, Berkeley.
      B. United Devices is for profit (think about it, Drug companies will make money). However, Easynews.com gives me 2 free Gigs of access a month for running it. Hey all I want is a piece, and I am getting it.

      2. A DC project must be bug free. This may seem like a bloody obvious sort of thing. But considering the state of software releases nowadays one might think I am asking for a miracle! Seriously I understand the point of Version 2 releases and stuff like that. As long as it is handled competently and professionally I probably will forgive them. But I will have zero patience for a DC project that crashes my machine or keeps me from running ANY app. And that leads me to rule 3...

      3. A DC must take a back seat to... everything. It must also be maintenance free.
      Does this require any explanation?

      4. Finally, it must be controversy free.
      I have yet to come across a /. article accusing a DC app of loading in spyware, or a trojan of any sort. But I have faith that it will come.

  • I'm not going to reprint the page [rl.ac.uk]
    unless it gets slashdotted, but none of the models (HadAM3, HadSM3, HadCM3, HadCM3L)
    in the simulation take into account the biological factor.


    It has been said that termites, cars, factories, cows, and Taco
    Bell all produce huge amounts of greenhouse gases, which contribute to global
    warming. How can this lead to an accurate prediction model if these factors
    aren't accounted for?

    • To quote from the Hadley Centre [met-office.gov.uk], (who represent the Had part in the named models), which is about 1 link away from the page you quoted:

      The specific aims of the research conducted by the centre are to:
      [...]
      understand physical, chemical and biological processes within the climate system and develop state-of-the-art climate models which represent them;[...](Emphasis mine)

  • My understanding of the El Niño weather pattern is simply "something pretty weird is going on lately", causing droughts where there shouldn't be any, and ditto with floods. Given that this oddity will be in only, say, 40% (the last 20 years) of the data they are fitting, won't any extrapolations be pointless? Add that to global warming and I am seeing wasted CPU cycles.
    • Actually, it looks as though El Niño may have taken out the Mayans, so it's not just 'lately'.

      (It appears from Mayan records that they had a few years off unseasonable weather that brought them to the edge---and redoubling the human sacrifice rate not only didn't work but ran into resource constraints).

  • So, let me get this straight: they're going to pick generated results that most closely resemble real, measured results, and then adjust their model to compensate.

    Those models wouldn't be "validated" as the poster claims, or would they? It seems to me that without identifying the reasons the computed models differed from the measured results, the selection is damn near arbitrary -- the difference may be something the scientists never considered.

    I've been wrong before.... once.
    • It's generally regarded as a Bayesian technique. Actually, there's far more to Bayesian statistics than bootstrapping, but it's the part I spend a lot of time working with. In fact, I suppose that bootstrapping isn't fundamentally a Bayesian process, but it is highly empirical, so it appeals to the same "crowd" as more decidedly Bayesian techniques. By the by, "Bayesian" statistics are statistics that make heavy use of Bayes' Rule to incorporate prior knowledge not included in your measured data.

      My background - you develop a program to predict something biological. Let us say, to pick a problem on the same order of difficulty as predicting the weather, that you're trying to predict the three-dimensional conformation that proteins assume, based on their sequence.

      Now, okay, you have a bunch of known sequences, which other people (personally, I do both the data mining and some crystallography) have attached to known structures. So, what do you do?

      Well, you could fiddle with your program until it predicts really well on those sequences, and announce that it was good. This is "Bad Science", as the parent poster points out, since the criteria are arbitrary - you have a tendency to "discover" random noise in the data, and you have no way of validating your results.

      So, second option. Instead, you split the data in half at random (actually into more than 2 pieces, but conceptually in half.) You take one half, and you make the model predict as well as you can on that data. Then, you VALIDATE ON THE OTHER HALF OF THE DATA. You *never* change the model on the basis of the second half of the data - that is arbitrary/bad/cheating. This is called "bootstrapping". It has nothing to do with compiler installation.

      So, as far as most scientists (as opposed to mathematicians) are concerned, the important question is - does this work? In the biological sciences, I can say categorically, yes, this bootstrapping technique has a proven track record. It does work. Obviously, you can screw up (using non-representative data is a good start) but the technique, when properly applied, is sound.

      So, I assume it would work for predicting the weather, as well. By work I mean - you would know how well your software predicted the weather. Bootstrapping is not a means of predicting the weather in and of itself, merely of honestly evaluating the effectiveness of a weather prediction mechanism you already have.
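The tune-on-one-half, validate-on-the-other scheme the comment describes can be sketched in a few lines of Python. The "model" (a training-set mean) and scoring function here are invented toy stand-ins purely for illustration:

```python
import random

def holdout_validate(data, fit, score, train_frac=0.5, seed=0):
    """Tune on one half of the data, report performance on the other.

    The model is never adjusted using the validation half -- doing so
    is the "arbitrary/bad/cheating" the comment warns about.
    """
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, valid = shuffled[:cut], shuffled[cut:]
    model = fit(train)            # fiddle freely with the training half
    return score(model, valid)    # honest estimate from held-out data

# Toy stand-ins: the "model" is just the training mean, scored by
# mean absolute error on the held-out half.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
fit = lambda xs: sum(xs) / len(xs)
score = lambda m, xs: sum(abs(x - m) for x in xs) / len(xs)
err = holdout_validate(data, fit, score)
```

The split must happen before any tuning; reusing the validation half to adjust the model silently turns it back into training data.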
      • Thanks for the explanation!

        I'm currently working on an application that monitors seemingly random data -- the stock market. I never stopped to consider that there may be statistical techniques above and beyond the standard technical indicators.

        Food for thought!
      • Well, the problem is that they are actually using non-representative data. 1950-2000 is far too small a sample to even begin forming a model for climate variation, something which varies over periods of centuries or millennia.

        They will probably get some form of result. It won't be valid, but it will nonetheless be a result which matches the earlier period.
        Of course, this will start breaking down as soon as natural climate variation changes cycle. Likely it would be invalidated even faster if they tried to apply the model to known data from the last 20k years (although if they could get the model to account for the earlier climate variations that far back, I'd tend to accept it as more valid).
      • I wish I could remember the exact details, but this was the basic idea:

        Some branch of the US military was trying to train a neural network to look at a photograph and recognize whether or not there was a tank there.

        The people designing the system had pictures of scenes without tanks, and pictures of scenes with tanks. Half of the pictures were sealed away in a safe for later testing. Then, a neural net was trained on the first half of the pictures until it could, with 100% accuracy, correctly identify if there was a tank, or not, in the picture. Finally, the second half of the pictures were presented to the algorithm, and it also correctly identified those pictures as tank/not-tank.

        However, when it was tried on another series of pictures, the neural net could only accurately identify about 50% - no better than chance. The engineers who trained the net were dumbfounded, so they went back and started studying exactly what the neural net was trying to use to recognize a tank.

        Finally, they found the answer - all the pictures with tanks were taken on an overcast day, and all the pictures without tanks were taken on a sunny day. The million dollar neural net had been trained to differentiate between blue and grey skies! Back to the drawing board...
    • I would trust a lot more if they ran the simulations against a period from, say, 1950 to 1980 for the fitness test, then could demonstrate that the simulations that passed the fitness test did a good job of "predicting" the climate from 1980 to 2000. If they can't do that, they're just picking the decks of cards that happened to match so far, with no reasonable guarantee that they will continue to match.
  • I may have missed it, but I didn't see any indication of what the PC requirements were? Windows only, or Linux, Mac etc?

    No doubt the odd geek has a room full of Alphas to add to the cause.

    • On their FAQ [rl.ac.uk] (dated 5 Oct 2000!), they state they will support Linux initially and are looking for sponsorship to port the client to Windows. Considering the "What's New" page was last updated on 17 Aug 2001, the actual status of ports for different clients is unclear.

  • It seems like there is a bit of professional dueling going on between this project and Seti@home, looking at their FAQ [rl.ac.uk] and the quote by Dr Myles Allen saying about their project, "It's not a stripped-down 'toy' version, so the runs take time."

    My favorite quote from their FAQ was in response to the possible effect the computers running the client might have on the environment:

    "Each day, about 23 times more energy will be spent boiling water for tea in the United Kingdom than would be used by the computers involved in the Casino-21 project."

  • checkpoints (Score:2, Insightful)

    by thelen ( 208445 )

    If this thing takes eight months to complete, I sure hope they plan on storing periodic checkpoints of progress for each test in a central location. What happens if my machine gets hosed at four months? Is all that data lost?
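For what it's worth, the kind of local checkpointing that would save a four-month run is straightforward, assuming the client can serialize its state. This is only a sketch; the file name, state contents, and "physics" below are all made up:

```python
import json
import os
import tempfile

STATE_FILE = "sim_state.json"   # hypothetical checkpoint file name

def save_checkpoint(step, state, path=STATE_FILE):
    # Write to a temp file, then rename: a crash mid-write can't
    # corrupt the previous checkpoint (os.replace is atomic).
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path=STATE_FILE):
    # Resume from the last checkpoint, or start fresh.
    if not os.path.exists(path):
        return 0, {"temp": 14.0}    # made-up initial conditions
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

start, state = load_checkpoint()
for step in range(start, start + 10):   # stands in for months of timesteps
    state["temp"] += 0.001              # dummy "physics"
    if step % 5 == 0:
        save_checkpoint(step, state)    # periodic local checkpoint
```

After a crash, the run resumes from the last saved step instead of step zero; uploading such checkpoints to the project's servers would cost the bandwidth the poster worries about.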

  • This is all well and good, but a much more effective distributed computing project would be one that would help the National Weather Service (where most meteorological outlets still get their info) predict the weather for the next few days. They use a series of computers for simulation and then make an educated guess based on the runs of those models. Imagine, however, if instead of four or five test runs, they've got thousands to choose from.....

    Just a thought.
    • Do you have a big RS/6000 or two sitting around, or a sizeable Linux cluster(s) connected via fiber to the National Climatic Data Center in Silver Spring, MD, that you can crunch a few dozen gigabytes of data a couple of times a day to help out with?

      Speaking as someone who builds clusters to run mesoscale atmospheric models, the amount of data that must be passed back and forth between the compute nodes of a cluster requires gigabit bandwidth to keep decent processors happy. I don't see how a WAN-based distributed computing project without massive bandwidth and nearly isochronous data transmission is going to be of any use in producing a working forecast. Most atmospheric models I've seen require frequent communication between the nodes to keep the processors busy. In an average run for an area the size of a couple of average states for a 36-hour forecast, the traffic on the network in a five-node cluster approaches a terabyte.
  • Uh...good luck (Score:3, Insightful)

    by Sinical ( 14215 ) on Tuesday May 07, 2002 @09:19PM (#3481747)
    The information on their website says the time step is 30 minutes and that their box is 3.75 degrees longitude by 2.25 degrees latitude (or vice versa: BIG, in any event).

    Therefore, how do they expect this to work, especially absent any outside changes in the environment?

    What I mean is, how do they know if they did a good job? Perhaps if the results are all very close to the current-day climate, I'd buy that they got it right, but if they have a reasonable distribution of results, how do you decide? I mean, we've been clear-cutting the hell out of forests left and right for years: do they somehow take this into account? Heck, how do they represent the geographic information about the Earth: this bit has forest, this bit is desert? I would think that this would make quite a bit of difference in the results (changes in albedo, for instance).

    I certainly wish them luck, but they're not getting my PC for that long without something more detailed, information-wise.
    • "What I mean is, how do they know if they did a good job?"

      Notice that the dates being simulated are 1950-2050. We have historical data for 1950 to the present. One of the big accepted checks for a climate model is to run a period for which you have historic data from the same initial conditions and check to see if you end up with similar answers to reality. Pretty simple, in theory. The real problem is that the cell size is just enormous. Do you have any idea what sorts of ocean current and landscape variables are contained in a 3.75x2.25 degree square? To get better results, you need small cell size and very detailed modeling of feedbacks. However, the sheer range of permutations that can be attempted with a seti@home-size user base is useful in and of itself.
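To put numbers on how coarse that is, a quick back-of-envelope sketch from the quoted cell size and time step (treating the grid as a uniform lat-lon mesh, which is an assumption on my part):

```python
# Scale of the model, using the 3.75 x 2.25 degree cell and the
# 30-minute time step quoted above.
lon_cells = 360 / 3.75               # 96 columns of longitude
lat_cells = 180 / 2.25               # 80 rows of latitude
cells = int(lon_cells * lat_cells)   # 7680 surface cells for the whole planet

steps_per_day = (24 * 60) // 30              # 48 time steps per day
steps = int(100 * 365.25 * steps_per_day)    # ~1.75 million steps per 100-year run
```

At the equator each cell spans roughly 400 km by 250 km, so entire mesoscale systems live and die inside a single grid box.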
  • by Wolfier ( 94144 ) on Tuesday May 07, 2002 @09:22PM (#3481764)
    Global warming accelerated by CPU heat as weather enthusiasts simulate climate with computer. Temperature for the next 2 years will rise by 2 degrees
    • Well, aren't we in an ice age anyhow?
    • Re:Next in news (Score:2, Interesting)

      by thelen ( 208445 )

      From the FAQ:

      Won't all these computers being left on for 24 hours a day have a detrimental impact on the Climate System?

      Assume a computer running 24hrs/day requires, on average, 50W of power. If 100,000 computers join the Casino-21 project, the project will require 5,000kW of power. There are 24 hours in a day, so each day the project will consume 120,000kW-hrs, or 432,000,000kJ of energy.

      That's a big number, so let's try and put it in perspective by calculating how much energy is necessary to boil water for a cup of tea. Assuming a specific heat of water of 4.19 kJ/(kg-K), 0.237kg/cup of water, a necessary temperature rise from 20 degrees Celsius to 100 degrees Celsius, and that only one cup of water is boiled for each cup of tea, then about 80kJ/cup of energy are necessary (assuming our kettle is 100% efficient). This means that running the Casino-21 project for one day is equivalent to boiling water for 5,400,000 cups of tea.

      Is 5,400,000 cups of tea a lot? According to the Tea Council, some 37 million people in the United Kingdom drink, on average, 3.4 cups of tea per day. That's nearly 126 million cups of tea per day in the UK alone!

      Each day, about 23 times more energy will be spent boiling water for tea in the United Kingdom than would be used by the computers involved in the Casino-21 project. More seriously, a rough calculation suggests that 100,000 computers running 24hrs/day for one year at a power consumption of 50W will contribute approximately 0.0001% of the total amount of CO2 generated in one year. This is not an insignificant amount, but seems (to us) a worthwhile investment to better understand the climate system.
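The FAQ's arithmetic checks out; as a sanity check it can be reproduced in a few lines, with every figure taken straight from the quote above:

```python
# Reproducing the FAQ's tea arithmetic.
computers = 100_000
watts = 50                                          # assumed average draw per machine
kj_per_day = computers * watts * 24 * 3600 / 1000   # 432,000,000 kJ/day

cup_kg = 0.237                                      # kg of water per cup
c_water = 4.19                                      # specific heat, kJ/(kg*K)
kj_per_cup = cup_kg * c_water * (100 - 20)          # ~79.4 kJ to boil one cup

cups_equiv = kj_per_day / kj_per_cup                # ~5.4 million cups/day
uk_cups = 37_000_000 * 3.4                          # ~126 million cups/day in the UK
ratio = uk_cups / cups_equiv                        # ~23x, as the FAQ claims
```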

      • I'm not sure about the average 50W. Let's take an Athlon XP 1800+:
        According to its datasheet, its typical power consumption is 59.2W. Add to this the rest of the cards in the system, RAM, chipset, HDD, CD, etc. Let's say the total is 80W, which is conservative. You don't put a fan on your NB or your GPU for nothing, although when you're not doing 3D the latter shouldn't be hot. Then you have the power supply, the efficiency of which is usually 70% at full load, and less than that if it's not fully loaded. Since the vast majority of PSUs shipped nowadays are 300W, and 80W is way lower (but you still need 300W for peaks, like when you boot or game, or if you have quite a few cards or HDDs), the efficiency is probably around 50%. The rest of the calculation seems correct, so the total for 100,000 computers is about 17.4 million cups of tea.

        All told, it's still lower than the energy needed to boil the number of cups of tea drunk in the UK in a day (I would have taken coffee rather than tea for the example, as it's more common internationally), but the starting figure of 50W per computer seems low. In my room, my 2 computers quickly heat the room more than a 100W light bulb does (although part of that is light, so it doesn't heat the room as much).

        Just doing this calculation is interesting: it shows the relative weight of some human activities. I'd also really like to have an accurate view of the electrical consumption of my computers.

        And since I prefer cold juice to tea or coffee, I don't take part as much as you to the global warming. My drink gives back more heat than it absorbs :)
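Redoing the FAQ's sum with the parent's revised assumptions (~80W of components behind a power supply taken to be only ~50% efficient at this load) does land near the 17.4 million figure:

```python
# Same tea arithmetic as the FAQ, but with the revised per-machine draw.
computers = 100_000
watts = 80 / 0.5                          # ~160W drawn from the mains per machine
kj_per_day = computers * watts * 24 * 3600 / 1000
kj_per_cup = 0.237 * 4.19 * (100 - 20)    # ~79.4 kJ per cup, as before
cups = kj_per_day / kj_per_cup            # ~17.4 million cups of tea per day
```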
        • As luck would have it, I have an Athlon 1800+ system, and I hooked an ammeter to it a while back. The numbers match your estimate pretty well:

          Idle: 107 watts
          Unreal Tournament: 132 watts

          • Unreal Tournament: 132 watts

            Ouch. You have an expensive computer. My 1GHz Toshiba laptop draws about 30watts finding prime numbers. I bet your air conditioner gets a workout.
            • Ouch. You have an expensive computer. My 1GHz Toshiba laptop draws about 30watts finding prime numbers. I bet your air conditioner gets a workout.

              I bet what you save on electricity bills, he saved at the time of purchase.
            • Ouch. You have an expensive computer. My 1GHz Toshiba laptop draws about 30watts finding prime numbers. I bet your air conditioner gets a workout.

              My laptop draws only 20 watts. I don't play unreal tournament on it though.

              Laptops aren't optimized for speed. I'm sure you're getting less than 1.0/1.8 of the performance of my system. Throw in consideration for the power wasted by my honking graphics card, and you're probably not getting more instructions per joule than I am.

              BTW, I only run that beast of a machine when I'm using it.

  • This is cool. Beyond being used to understand the current climate change that is happening, obscure weather phenomena could be modeled on a larger scale for a longer time.

    A perfect example would be an article out of the latest AMS Bulletin of the American Meteorological Society Earth Interactions [allenpress.com] that discusses plane contrails. It seems that the lack of air traffic after 9/11 allowed meteorologists to work on a long-held theory that plane contrails affect weather. The only problem was that the dataset covered only three days, which was just a small time sample.

    Using a system such as this, those weather conditions could be recreated over a longer period of time and the results could be realized. Too cool.
  • Pretty cool (or should I say warm? or hot?)
    You should wait until the results come in.
  • Pretty cool (or should I say warm? or hot?)

    Say "cool"; global warming could lead to an ice age. One theory predicts that warming can lead to too much fresh water being introduced into the North Atlantic and decrease salinity levels beyond a key threshold. This in turn "shuts off" a descending (vertical) current, which in turn disrupts the Gulf Stream (horizontal) that currently sends warm water north, which ultimately results in cooling in North America and Europe.

    FWIW, there is evidence that the above occurs fairly regularly on a geological time scale. Man's efforts may or may not have much of an impact, it may or may not be egotistical to think we can change weather patterns with our SUVs. Perhaps if we have an impact the system was teetering on the edge in the first place. Not that this justifies a push over the edge.
  • by MongooseCN ( 139203 ) on Tuesday May 07, 2002 @09:28PM (#3481800) Homepage
    In simulation A we set the Funding Amount variable to $0 and the Donating Corporation to NULL. The result was intense global warming in 2050.

    In simulation B we set the Funding Amount variable to $200,000 and the Donating Corporation to ExxonMobil. The result was no global warming at all in 2050.

    In simulation C we set the Funding Amount variable to $300,000 and the Donating Corporation to Amazon Lumber Harvesters. The result was an actual decrease in greenhouse gases by the year 2050, due to deforestation.

    In simulation D...
  • Pretty cool (or should I say warm? or hot?)

    Wait eight months and tell us!
  • Their system sounds interesting and I like the distributed part of it; however, this sounds like something a neural net would be good at. Basically, train the net on the climate patterns up to 2000 and compare the data. The ones that were the closest could carry the right weights and heuristics to somewhat predict weather in the future. The cool thing about this would be that we, for the most part, wouldn't understand why the hell the system said that eastern Europe's temperature was going to be 10 degrees higher in the 2040s until it possibly happened. We could also keep it updated daily, so that it can progressively adapt its weights to new changes.

  • Reminds me of a friend of mine. He tried to use neural nets to predict trends on commodities by training different models with tons of data, and then using the best fitted time series to predict future trends. It seems pretty obvious to some people, but it just doesn't work. The models may be simplifying the data, but they are not basing it on true relationships. For systems that have well-defined dynamics and precise measurements, you do have a fighting chance with this approach, but I doubt that the weather falls into either of these categories.

    • ...is that all you have to do is circle several stock quotes in the newspaper, connect them in a shape resembling the golden spiral, enter them into your ghetto home-built mainframe, and you've got it. Of course, be warned that your computer may print out a 215-digit number and then blow up, and you'll have some very angry Rabbis to deal with once you've gotten your stock predictions...
  • ----SNIP FROM FAQ----

    Won't all these computers being left on for 24 hours a day have a detrimental impact on the Climate System?

    Thanks to Craig Greenock for this one, and several others since. Assume a computer running 24 hrs/day requires, on average, 50 W of power. If 100,000 computers join the Casino-21 project, the project will require 5,000 kW of power. There are 24 hours in a day, so each day the project will consume 120,000 kWh, or 432,000,000 kJ of energy.

    That's a big number, so let's try and put it in perspective by calculating how much energy is necessary to boil water for a cup of tea. Assuming a specific heat of water of 4.19 kJ/(kg-K), 0.237kg/cup of water, a necessary temperature rise from 20 degrees Celsius to 100 degrees Celsius, and that only one cup of water is boiled for each cup of tea, then about 80kJ/cup of energy are necessary (assuming our kettle is 100% efficient). This means that running the Casino-21 project for one day is equivalent to boiling water for 5,400,000 cups of tea.

    Is 5,400,000 cups of tea a lot? According to the Tea Council, some 37 million people in the United Kingdom drink, on average, 3.4 cups of tea per day. That's nearly 126 million cups of tea per day in the UK alone!

    Each day, about 23 times more energy will be spent boiling water for tea in the United Kingdom than would be used by the computers involved in the Casino-21 project. More seriously, a rough calculation suggests that 100,000 computers running 24hrs/day for one year at a power consumption of 50W will contribute approximately 0.0001% of the total amount of CO2 generated in one year. This is not an insignificant amount, but seems (to us) a worthwhile investment to better understand the climate system.

    Assuming you are convinced this experiment needs to be done, there are basically two options: to buy a hangar-full of PCs and run it ourselves (not even an option right now, since the climate research community doesn't have the resources); or to recycle spare CPU out in the community, as we propose to do under the Casino-21 experiment. Since the main environmental impact of a PC is in manufacture and disposal, not the power consumed in running it (never mind the air-conditioning costs and visual impact of that hangar on some innocent rural community), environmentalists will, we hope, approve of our strategy.

    ----/SNIP FROM FAQ----

    I found this interesting. I've always worried about leaving my computer on 24/7/365 because I feel so wasteful of electricity. Not that I won't, but that puts it in perspective. Especially when I look around my school and see all those CPUs idling (or even just running a word processor).
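    For what it's worth, the FAQ's arithmetic checks out. A quick sketch reproducing the numbers from the figures quoted above (all inputs come straight from the FAQ):

```python
# Project-side energy: 100,000 machines at an average 50 W, 24 hrs/day.
computers = 100_000
watts = 50
kwh_per_day = computers * watts * 24 / 1000    # -> 120,000 kWh/day
kj_per_day = kwh_per_day * 3600                # -> 432,000,000 kJ/day

# Energy to boil one cup of tea in a 100%-efficient kettle.
c_water = 4.19          # specific heat, kJ/(kg*K)
cup_kg = 0.237          # mass of water per cup
delta_t = 100 - 20      # temperature rise, K
kj_per_cup = c_water * cup_kg * delta_t        # ~79.4 kJ, i.e. the FAQ's ~80

cups = kj_per_day / kj_per_cup                 # ~5.4 million cups/day
uk_cups = 37_000_000 * 3.4                     # ~126 million cups/day in the UK
print(f"{kwh_per_day:,.0f} kWh/day ~= {cups:,.0f} cups of tea "
      f"({uk_cups / cups:.0f}x less than UK tea consumption)")
```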
  • Microsoft has developed a very similar distributed simulation software package. Last I heard, it would only take 3-5 months to receive the results, too. A savings of 3 or more months. Rumor is, they plan on using it so that a person can run Office XP. Finally enough CPU power to run it quickly... I'm sorta jealous.
  • Coincidence? I think not!

    Seriously, this sort of modeling will take less time as processors scale up and Internet connectivity proliferates. I would like to participate, but it would be nice if I didn't have to run an MS OS to do so. I can, do, and probably will, but if they would just release the source ...

  • by blair1q ( 305137 ) on Tuesday May 07, 2002 @10:11PM (#3482030) Journal
    They're starting with different initial conditions and hoping that some subset results in 50 years of weather?

    Shouldn't they use the last 50 years of weather as initial conditions and vary parameters of the model instead?

    What they're doing is like flipping an imaginary coin 500 times, hoping to match the first 250 flips of a real coin to predict the last 250 flips (albeit in a system with non-independent trials). But then they're taking those 500 flips, matching the first 250 to weather reports (might as well be coin flips), and then imagining the next 250 flips will approximate the future weather reports. What they need to do is fix the initial conditions and modify the model (coin flips vs. rolls of a die vs. an LCRNG, etc.) to find a model that approximates the dynamics of the system.

    Am I making sense here? How are these bozos not just going to apply their effective innumeracy to waste a few trillion CPU hours that could otherwise have been used to do protein folding or cancer-killing molecule matching?

    --Blair
    • by astroboy ( 1125 ) <ljdursi@gmail.com> on Wednesday May 08, 2002 @12:18AM (#3482500) Homepage
      They're starting with different initial conditions and hoping that some subset results in 50 years of weather?

      No. The term `starting conditions' appears in the BBC article, but if you go to the website [rl.ac.uk] it says:

      The only systematic way to estimate future climate change is to run hundreds of thousands of state-of-the-art climate models with slightly different physics in order to represent uncertainties.

      In large-scale simulations such as these, there are often bits of physics/chemistry/weather that have to be put in by hand because, usually, the relevant bits of science would be too expensive to calculate, or couldn't be seen on the resolution of the simulation. While it's usually pretty doable to come up with reasonable models for the unresolved effects, there are often parameters in the models that could take a range of values.

      This ensemble of models allows for the calibration of the model parameters against 50 years of data; this gives some confidence in the predictive power of the models for the next 50 years.

      This sort of parameter estimation based on calibration is very common for models of complex systems, and not just for computer models. Ideally, one wants to get to the point where such things aren't necessary and you can directly calculate all the science a priori of course, but these model calibrations are often useful steps along the way.
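      To make the perturbed-physics idea concrete, here is a deliberately cartoonish sketch (the "model", its single feedback parameter, and every number in it are invented for illustration, not taken from the project): run an ensemble over an uncertain parameter, score each member against the first half of the record, and keep the best-fitting members for the projection.

```python
import random

random.seed(1)

def toy_model(feedback, years=100):
    """Caricature 'climate': temperature drifts under a slowly growing
    forcing, scaled by one uncertain feedback parameter, plus noise."""
    temp, out = 14.0, []
    for year in range(years):
        temp += feedback * 0.002 * year + random.gauss(0, 0.05)
        out.append(temp)
    return out

# Pretend 50-year observed record, generated with a "true" feedback of 0.8.
observed = toy_model(feedback=0.8, years=50)

# The ensemble: same model, "slightly different physics" in each member.
ensemble = []
for _ in range(500):
    feedback = random.uniform(0.0, 2.0)
    run = toy_model(feedback)
    misfit = sum((a - b) ** 2 for a, b in zip(run, observed))
    ensemble.append((misfit, feedback, run))

# Calibrate: keep the best 5% of members, average their projections.
ensemble.sort()
best = ensemble[:25]
fb_mean = sum(f for _, f, _ in best) / len(best)
proj = sum(run[-1] for _, _, run in best) / len(best)
print(f"calibrated feedback ~ {fb_mean:.2f}, "
      f"projected year-100 temperature ~ {proj:.1f}")
```

      The calibrated feedback lands near the "true" 0.8 because the first 50 years discriminate between parameter values; the spread of the surviving members is then an (informal) measure of the remaining uncertainty in the projection.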

      • I'm glad they're running a lot of different models.
        It will be interesting to see how divergent the predictions for the next 50 years are from the best fits to the past 50 years.
        It will also be interesting to see how badly the best fits for the next 50 years fit the past 50 years. (There's gotta be a better way to phrase that)
        There's also the long term effects that we have no good means to capture, like what turns off and on the various ocean currents.
      • That sounds a little better. I did go to their website, and saw that they were going to use one of their four models, but I didn't dig farther to see that the journalists (as per usual) didn't understand what they were copying into their notebooks.

        But what the researchers should be doing first is back-testing by using the first 25 years as calibration and the second 25 as a check on the extrapolation. Then doing it the other way around. Or maybe the distributed software does that, and all the permutations in-between.

        At any rate, where it should fall on its ass is in the prediction of weather that actually makes a difference: hurricanes and tornadoes, which have crucial features that won't be well modeled, if at all, by the large differential boxes they selected. It will also run afoul of interference from random volcanic eruptions on a Pinatubo-Mount St. Helens ashfall scale, which happen on a decade or so time scale, the timing and location of which would be critical to the rest of the test run.

        So I'm going to stick with my attitude that this is a tragic waste of CPU cycles that might actually go towards developing a drug that might actually save a life.

        --Blair

        P.S. SETI is likewise a waste; if we do hear a beep in the darkness, our only logical reaction will be to band together 6 billion of us as one to build the biggest, nastiest zero-time-of-flight weapon we can create, then hunker down in the sweaty dark to wait to fire it. Anyone coming that far is going to be wanting to make a buck off of it, taking chunks of the planet or slaves, and they're going to be ready for casual resistance.
        • hurricanes and tornadoes, which have crucial features that won't be well modeled, if at all, by the large differential boxes they selected.
          I agree the grid resolution is coarse, but you've missed the point. The whole *idea* is to find which parameterisation of small-scale effects (eddy viscosity, ground friction, mesoscale vortices, sea ice production, add-your-favourite-here) leads to the most accurate predictions. Even if the models are flawed, this is still worth doing.

          PS: IMHO, volcanic ash effects are overrated.
  • So why do you think the distributed results will be faulty in the end?
    1. Intel Pentium 1 FDIV bug
    2. SETI@Home changes its code to interfere with other dormant programs so that it dominates idle CPU time.
    3. People start wagering on predictions of the weather and hackers break into the mainframe to tamper with the results.
    4. CowboyNeal.
  • My CPU hours will remain dedicated to searching for a cure for diseases. If you would like to help check out the Folding@Home [stanford.edu] project that uses distributed computing to model protein folding to find possible cures.
  • TKOE perhaps? (Score:2, Insightful)

    by DarkRecluse ( 231992 )
    Why don't we quit wasting time trying to predict major climate change and start taking action to clean up our act?

    Have you ever thought of how much garbage the world population puts out, trees we cut down, pollutants we flush, and general mayhem we induce?

    Maybe we should be putting our excess computing time into projects that might actually affect our environment in a positive way, rather than waiting to see what it is going to be like down the road... we all know what is going on here, and I'm not talking about global warming.

    It's not the effect of global warming that is our problem right now, but the effect of our blatant misuse of resources and obvious disregard for the earth. Do we not live on this planet with the environment we are destroying? I don't think you need to be a very good scientist to realize that when the environment is decimated, we will be hard pressed to survive...

    I guess everyone has some idea that God is going to come and fix everything for us, so we don't have to worry about cleaning up... hey, why don't we all call our mommies and see if they will do our work for us... why don't we own up and say, "Holy shit, I don't want to take the chance that my children are not going to grow up because I ruined their world for them." What is our general purpose in life besides taking up space, making money, and destroying the environment?

    The world is a big place, but eventually our actions are going to reach around to spank us, just like our moms did when we were bad... except it won't be a spanking we live through :/

    I invite everyone to spend their 8 months attempting to enact reform of our environmental policies and personal resource use, rather than hoping your computer will somehow figure it out for you.

  • It seems that this could make for a real headache, splitting the workload across all of these different computers. It's not like SETI@home's data, where you can parcel out independent pieces, is it? All of the information needs to be there to simulate the planet. It sounds like it would be more effective to just get the fastest supercomputer they can get their hands on and start work on a more thorough level, like Japan is doing. Otherwise...

    "How's the global climate simulation going?"

    "We're still waiting on the data from Australia. We sent it out to 5 people but we haven't gotten anything back yet."

    In the meantime, the Earth's atmosphere bursts into flames and makes the whole point moot. ;)
  • As one poster has pointed out, weather is a chaotic system (and climate is also chaotic by definition).

    Chaos is gravely misunderstood, though, so let me quickly throw in my explanation of why this experiment will just generate FUD.

    Chaotic equations are chaotic not because of the number of variables involved but because of their dependency on themselves (each iteration requires the former iteration). This leads to extreme sensitive dependence on initial conditions (a.k.a. the Butterfly Effect). I should probably emphasize the word extreme, because even the slightest deviation will produce dramatically different results.

    Even the best climate prediction algorithm would be crap if the initial condition were off by 10^(-20). The fact that we cannot measure temperatures exactly means that we could never feed in a perfect initial condition.

    Chaotic equations do have a given period before divergence gets extreme when initial conditions are altered. The original equations that Lorenz (the pioneer of weather forecasting and the father of chaos theory) used showed divergence after about three days (which is why five-day forecasts still suck to this day).

    I find it very hard to believe that these folks have developed an equation that doesn't show divergence for 100 years. Not to mention the fact that the number of initial conditions is much larger than the project makes it out to be.

    Summary: Some PhD is looking for research money and figures that mixing "scientific" proof for global warming, chaos, and SETI-style distributed computer has to be good for a couple million at least.
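    The divergence Lorenz observed is easy to reproduce numerically. A minimal sketch (forward-Euler integration of the classic Lorenz '63 system; the step size and initial points are arbitrary choices for demonstration) shows two trajectories that start one part in 10^8 apart ending up macroscopically different:

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz '63 system.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)     # identical except for one part in 10^8

max_sep = 0.0
for _ in range(40_000):         # integrate for 40 time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(f"initial separation 1e-08, maximum separation {max_sep:.2f}")
```

    The separation grows by many orders of magnitude until it saturates at the size of the attractor, which is exactly the sensitivity being described; whether that dooms *climate* (as opposed to weather) prediction is what the replies below dispute.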
    • Climate is not necessarily chaotic if it is considered to be a moving average of weather. It is entirely possible, and indeed quite likely, that the nonlinear fluctuations which make weather so difficult to predict are in fact damped out over longer time periods. Or, to put it in chaos terms, that the fractal dimension of the attractor for weather varies inversely with the sampling frequency.
      • Or to put it in chaos terms, that the fractal dimension of the attractor for weather varies inversely with the sampling frequency.

        But does any chaotic system exhibit such behavior? The mere fact that climate is the study of average weather is irrelevant to the system at hand. It is equivalent to studying the trends of a graph between [0, 10] and [0, 10000]. A chaotic system will by definition exhibit divergence either way with a slight change in initial conditions.
        • But does any chaotic system exhibit such behavior?

          Yes. In fact, any system which displays locally nonlinear disturbances in a globally linear function will do so.

          The mere fact that climate is study of average weather is irrelevant to the system at hand.

          No it isn't. It should immediately alert you to the possibility that climate might be more predictable than weather. Averages always have lower variance than the underlying data.

          A chaotic system will by definition exhibit divergence either way with a slight change in initial conditions

          This isn't a rigorous definition you're talking about here, and your definition doesn't prove your point. A chaotic system might exhibit divergent behaviour, but that doesn't necessarily require that the divergence be either permanent or large in relation to an underlying linear trend. For example, if I take the output of a nonlinear oscillator and add it to the signal for Radio Luxembourg, I can make a system which is "chaotic" in the sense that its local behaviour is divergent in a nonlinear way dependent on small variations in initial conditions. But I can still extract a useful signal from my system by applying the right filter.

          • But I can still extract a useful signal from my system by applying the right filter.

            But that is because part of the system isn't chaotic. This argument relies on the assumption that part of the climate system is not chaotic. The article points to El Nino as an example of a part of the system that may exhibit predictability, but even El Nino is not predictable in any way other than an educated guess. It is also still dependent on localized prediction.

            This experiment is trying to make a prediction 100 years ahead in such a system. Extrapolating from 50 years of data doesn't seem to make any sense, because the system is nonlinear to begin with and therefore doesn't lend itself to extrapolation.
          • The reason your filtering is so easy is that the operation of addition is linear! Try a non-linear operation, then try tuning in Radio Luxembourg as its carrier is shifted chaotically.

            Your example has no dynamics at all, so it hardly qualifies as a useful example of a non-linear dynamical system.

            Furthermore, if your example is to be applied to weather/climate, you seem to be suggesting that weather is a small effect compared to long-term climate variation, but the daily fluctuations in temperature, not to mention the annual differences between summer and winter are *larger* or comparable to the long-term climate variations. The longer-term secular trends are much smaller (a few degrees per century, say) compared to the chaotic portion (tens of degrees daily departure from "average")!
            • The reason your filtering is so easy is that the operation of addition is linear! Try a non-linear operation, then try tuning in Radio Luxembourg as its carrier is shifted chaotically.

              You are quite wrong if you think it's impossible to filter out nonlinear disturbances.

              Your example has no dynamics at all, so it hardly qualifies as a useful example of a non-linear dynamical system.

              I think you're out of your depth here. I've described a system with nonlinear disturbances to a linear system. My whole point was that you can't argue from the local instability of the data to the conclusion that the whole system has nonlinear dynamics.

              Furthermore, if your example is to be applied to weather/climate, you seem to be suggesting that weather is a small effect compared to long-term climate variation, but the daily fluctuations in temperature, not to mention the annual differences between summer and winter are *larger* or comparable to the long-term climate variations.

              You completely misunderstand my point. I simply suggested that the effect of weather on the *average* climate might be small and non-persistent. For example, the daily and monthly fluctuations in the stock market are large compared to the long term return, but it's well known that stock returns are more predictable over long holding periods than short ones.

              The longer-term secular trends are much smaller (a few degrees per century, say) compared to the chaotic portion (tens of degrees daily departure from "average")!

              Yes, but, you fool, if you're concerned with melting icecaps, a change of a few degrees in the 100-year average temperature is much more important than ten degrees for a day.

              • You've done nothing to respond except introduce another spurious example: the stock market is not analogous because the "long"-term trends are monotonic. [Which has something to do with economic growth, and also to do with the fact that stock markets are a relatively recent invention. Consider something like the price of grain or day labor to get something more like climate.] In the distant past, we have had climates that are both warmer and cooler than the present. The long term effects appear to be no more predictable than the short term effects.

                Your comment about the melting icecaps makes my point for me. That the longer-term climate can shift between qualitatively different regimes (ice age vs. tropical) with small quantitative changes indicates that the longer-term trends are indeed chaotic. The climate signal you are looking for in the presence of large short-term effects is both small and likely chaotic---exactly opposed to the examples you present.

                To further explain the "dynamics" comment I made; the presence of the large radio signal does not feedback in any way to affect the nonlinear oscillator, and the nonlinear oscillator does not affect Radio Luxembourg [Neglecting the unfortunate fact that RTL is now off the air.] To use the linear combination of the two as an analogy to the weather only works if the melting of ice doesn't depend on the daily temperature.
    • What I'm about to emphasise has already been pointed out by another poster, but I'll elaborate a bit. What you have written about the butterfly effect etc is correct. And irrelevant. Nobody is trying to predict the weather for the next 50 years, but rather the climate.

      Here's the difference. To predict the weather would mean to give the exact distribution of temperature, rainfall, wind etc at a certain date. This is what the weather report after the news is all about. This cannot be done reliably more than about 3 days into the future, because the system is so chaotic.

      The climate is a different matter. It's basically an average of the weather. What they want to predict is things like the average temperature for period 2000-2010 in North America, for example. Over long periods (centuries or more), climate seems to be chaotic, too. It is certainly at least partially chaotic on smaller timescales, but there should be trends that are more or less predictable on medium timescales (decades?).

      For example, if there are more greenhouse gases in the atmosphere, then this has an effect on the average weather, so one might expect average temperatures to rise. But even this is not yet completely understood. For example, increased levels of CO2 might increase cloud formation, which might increase albedo, and hence decrease the temperature. This is not yet completely understood, not because the subject matter is inherently chaotic and thus impossible to understand, but rather because the science of climatology is not yet sufficiently advanced. This is precisely the point behind this project - to advance our understanding in climatology, so that we can better understand the effects of greenhouse gases, for example. By no means does this justify the American energy policy of sitting back and happily burning fossil fuels with gusto, until the scientists are 99.8% sure that it was a bad thing and now it's too late. That's a bit like Russian roulette: "The scientists can't yet prove that this chamber is loaded, so we might as well pull the trigger".

      To sum up what is known so far: increased levels of CO2 (and other "greenhouse" gases) have a very real effect on the climate. Exactly what this effect is, is not yet 100% sure, but it seems most likely to raise the temperature. On the other hand, the world average temperature has increased dramatically over the last few decades, correlating strongly with rising CO2 levels. Of course, there are natural climate fluctuations, so this could still be a coincidence. We haven't proved with 100% certainty that our increased emissions are responsible for global warming, but it seems very likely. That is why we should try to do something about it.

      In summary: Global warming is a very real threat, and not just to some unheard-of third world countries. It affects you, Americans, too. Yes, you! Hence this project is very important and potentially very useful. I hope they get a lot of support.
      • Nobody is trying to predict the weather for the next 50 years, but rather the climate.

        Which is still chaotic. See my response to the first reply in this thread.

        It is certainly at least partially chaotic on smaller timescales, but there should be trends that are more or less predictable on medium timescales (decades?).

        No, not in a chaotic system. The most that could be accomplished is that a small sampling of today's climate could be used to understand what possible climatic regime we are in, but this is not what the experiment is attempting to do.

        What that would entail is collecting data in order to generate an attractor for the system. This would involve calculating phase-space coordinates. This project is attempting to extrapolate a system that can't be extrapolated.

        I won't get into a global warming debate here as that wasn't my intention behind my original post but please note that bad science hurts everyone on either side of the issue.


    • [...] this experiment will just generate FUD.


      No, you're mistaken. You seem to be making the elementary mistake of confusing climate modelling with weather forecasting. Curiously enough, you've fallen victim to the very thing you accuse the experimenters of: you made a (relatively) small blunder before you started writing, which has rendered the rest of your comment utterly irrelevant.

      Further research is left as an exercise for the original poster. If you can't be bothered to read the article, or the detailed write up on the project's site, I can't be bothered to point out all the places you're wrong.

    • The parent of this comment is very inaccurate in many of its details. I don't want to do a point-by-point, but it is full of all of the popular misconceptions about mathematics. Actually, if you believed the exact opposite of everything that post asserts, you'd be pretty close.

      Don't get me wrong, either... I'm not trying to dog on this particular guy, since there are a bunch of other crap posts in reply to this story.

      So, before we start, I am a mathematician, and I pretty much do applied dynamical systems (applied chaos theory, in lay-terms). (Always wanted to drop that N... Woo)

      If we buy the argument above (essentially: weather is chaotic, therefore cannot be modeled, so let's quit), then there would be almost no science done on nonlinear systems. But people are studying chaotic systems all of the time, and doing good science.

      Yes, it is true that the weather exhibits sensitive dependence on IC, but so does just about every physical system, even linear ones. (Think of standing a pencil on its end. Let it go. Which way will it fall? Repeat 100 times.) Just because something exhibits SDIC does not mean it cannot be modelled and does not mean no prediction is good. For example, consider a mixing fluid (say milk in your coffee). There's no question that there is chaos in the coffee if you look at it, but, no matter what you do, you expect to see a homogeneous light-brown mixture eventually. To say that I cannot predict the eventual state of my coffee is wrong.

      I want to make three points:

      1. Just because a system is chaotic does not mean that its average, or other statistical quantities, are chaotic. The coffee is a good example of that. In the case of the weather, it may be true that the day-to-day temperature in NYC is completely chaotic, but the average temperature in the U.S. on a yearly cycle could be very well-behaved. No one knows whether or not this is true. There is a popular misconception that "if you take a simple system which is chaotic, and embed it in a larger system, the larger system will be worse". This is very much untrue.
      2. Just because a system is chaotic does not mean we cannot understand it. The Lorenz oscillator is a good counterexample. Although the Lorenz system is complicated at first glance, we find that it has a chaotic attractor, but that this attractor has a relatively low dimension. Thus we need only understand the dynamics on this low-dimensional object. They're bad, but not so bad. Of course, the LO comes from a model of fluid convection in a cube, and you're taking an infinite-dimensional system, keeping only three variables, and finding chaos. Thus the full system must be much crazier, right? No. There's a lot of evidence that the full fluid system that the LO comes from is not qualitatively more complicated than the LO itself. We really may understand this system pretty well.
      3. Just because we don't have a rigorous, detailed mathematical model to describe a physical process does not mean it is completely unintelligible. I know that most physics that /.'ers have seen is at a relatively basic level (say college undergrad), and, in this case, almost always the systems are mathematically understood very well. This is the exception in science. Most physical (and forget biological) systems are not understood at the variables level. No one can actually solve the fluid equations for the interior of the sun. But scientists know a hell of a lot about sunspots. As another example, if you go to the doctor, he does a lot of good science without understanding anything at a basic level. For example, one could apply the previous poster's argument to the human body, which is probably much more complicated than the weather, and certainly AS complicated. You walk into the doctor's office coughing up blood, he's going to do something to stop that... and he's not going to worry about whether his model of the interaction between your pancreas and liver is exactly correct. The previous poster's argument is, well, the body is a chaotic system with tons of variables, any model the doctor uses will break down, therefore he can't say anything useful about my health. If he tells me to stop sucking down the greaseball McDonald's burgers because he found a heart murmur, it doesn't matter, because some completely nonlinear interaction between my toe and my ear could counteract it, therefore the doctor is no more likely to be correct than chance. You buy that argument? I don't. The bottom line is, we can make inferences based on data which is observed. No, this is of course not as good as a mathematical theory with all variables accounted for, but it does pretty damn well. This is the way most science is done.
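      Point 1 can be demonstrated with the simplest chaotic system around. The logistic map at r=4 has exponentially diverging orbits, yet its long-run time average comes out essentially the same (about 0.5, the mean under its invariant density) from any typical starting point. A minimal sketch:

```python
def time_average(x0, n=200_000, burn=1_000):
    # Iterate the fully chaotic logistic map x -> 4x(1-x), discard a
    # transient, then average the remaining orbit.
    x = x0
    for _ in range(burn):
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

# Wildly different initial conditions, nearly identical averages.
avgs = [time_average(x0) for x0 in (0.1, 0.2, 0.33, 0.47, 0.71)]
print(["%.3f" % a for a in avgs])
```

      The individual orbits from these starting points are completely uncorrelated after a handful of iterations, yet every average lands near 0.5, which is the map's analogue of a chaotic weather system having a stable climate.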

      I also don't want to get into the debate about global warming which inevitably comes up, but some of the above fallacies always crop up in the arguments against it. It is certainly true that we don't understand the climate completely, not by a long shot. But the amount of evidence that the earth is warming, and that we are playing a role in this warming, is becoming very large. It is certainly not sure one way or the other, but, anyone who says that they are sure it is not happening (as I have seen other posters in this thread do) is simply completely full of shit.

      Well, ok, that's enough rambling for one night... just wanted to get that off my chest

  • by guanxi ( 216397 ) on Tuesday May 07, 2002 @11:38PM (#3482350)

    ... or at least the best science has come up with so far, are downloadable from the Intergovernmental Panel on Climate Change (IPCC) [www.ipcc.ch].

    I'd start with the Summaries for Policy Makers, as a way of becoming very well informed in just ~20pp.

    AFAIK: It's a UN organization that is the center of research. Their reports are a consensus of almost all the leading scientists from every country on the globe, and their policy statements are approved line-by-line by governments. Even with all that, there are pretty strong statements.

    Here's better background [ucsusa.org].


    • AFAIK: It's a UN organization that is the center of research.


      Close... the IPCC was designed to collate all well-reviewed, reliable, statistically sound studies done around the world, and describe the consensus of opinion amongst researchers in the field.


      RANT MODE = "ON"

      The idea was to prevent scum-sucking American corporations from buying the US Government (by convincing the typical Merkin in the street) and preventing the measures required to help alleviate the threat from being introduced. Of course we (rational humans, that is) reckoned without the extraordinary phenomenon of Gee Dubya. The US is now storing up /vast/ amounts of resentment around the world, even in places like Europe where we have traditionally been sympathetic to their values. Since the US started bullying respected heads of world bodies out of office -- well, let's just say I don't have ANY respect for the current Administration, and I just hope the rest of the world isn't confusing the actions of a handful of corrupt, hyper-rich elite types who run America with the actions of those unfortunate enough to live there and get brainwashed by all the anti-science propaganda. You see this here on Slashdot whenever a climate change story comes up. It's sad to see otherwise intelligent people talking *complete bollocks*, seemingly completely unaware that they've been brainwashed by oil companies.

      Better luck in 2004.

  • Yet another attempt to model a multi-billion year old climate based on a short data stream.

    Let's estimate the average income of everyone in the US over time by looking at people in Rhode Island for the last three days. Same sampling scale, or close.

    Useless experiment to hype up the global warming debate again. Gee, I wonder if they'll pick any of the initial conditions that say "things aren't so bad after all". Nope, the only starting conditions that will ever see the light of day are the ones that back up their theory.

    Not that the science on the other side is any better. I'm getting tired of the entire debate because, guess what kids, this is supposed to be SCIENCE. Not prognostication. There is a difference. Come up with a theory, build a series of experiments to test it, and see if it sticks to the fridge or not. All I'm seeing here is "come up with a theory, pick the data points that will support it, and then publish it in the NY Times".

  • This silly experiment is a waste of time. Everyone with a time machine already knows that my massive Weather Altering Device (WAD) will come online in 2008 with the sole purpose of ruining the results of this trial...
  • ...especially considering the nature of distributed computing, where participants might sign up on a whim and then drop out a little bit later because they have to reinstall everything, upgrade their system, or change jobs, or because they simply get tired of the project, it conflicts with some other program, or it degrades their system's performance.

    I don't know how much intermediate data needs to be stored, but there should definitely be a mechanism for periodically sending back progress dumps so that somebody else can take over from wherever you were. This would at least shorten the overall run time, since participant drop-outs would be noticed earlier and the rest of the calculation could be handed over to another participant.

    It could also be used to sort out really bad seeds at an earlier stage, where the system discovers, say, after only 10 or 20 simulated years, that your run is way off and hands you another seed instead.
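    A toy sketch of what such a checkpoint mechanism might look like, in Python (the state fields, the step rule, and the JSON "upload" format are all invented for illustration; this is not how the project's client actually works):

    ```python
    import json

    def step(state):
        # Toy stand-in for advancing the climate model by one simulated year.
        state["year"] += 1
        state["temp"] += 0.01 * (state["seed"] % 7 - 3)
        return state

    def run(state, until, checkpoint_every=10):
        """Advance the simulation to `until`, recording a serialisable
        snapshot every `checkpoint_every` simulated years so that another
        participant could resume from the last uploaded checkpoint."""
        checkpoints = []
        while state["year"] < until:
            state = step(state)
            if state["year"] % checkpoint_every == 0:
                checkpoints.append(json.dumps(state))  # would be sent to the server
        return state, checkpoints

    # First participant runs 1950-1980, then drops out...
    first, dumps = run({"seed": 42, "year": 1950, "temp": 13.8}, until=1980)

    # ...and a second participant resumes from the last uploaded checkpoint.
    resumed, _ = run(json.loads(dumps[-1]), until=2050)
    print(resumed["year"])  # 2050
    ```

    Because the resumed run replays exactly the same sequence of operations from the snapshot onward, it ends up bit-identical to a run that never dropped out -- which is also exactly what lets the server spot a hopeless seed at an early checkpoint and reassign the work.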
  • Given the arithmetic errata most desktop processors have and the cross-platform nature of distributed computing, I'm wondering how anyone can possibly hope to gain accurate results - especially if there's any floating point math involved.

    And with this specific project - isn't the earth's climate largely dependent on the amount of solar output, and isn't that amount relatively variable? How are they gonna know the slight variations in solar output over the next 50 years?
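    The cross-platform worry is real: floating-point addition isn't associative, so the same sum evaluated in a different order (different compiler, FPU, or optimisation flags) can give a bitwise-different answer -- and in a chaotic system like climate, a last-bit difference doesn't stay small. A minimal Python illustration (the logistic map here is just a generic stand-in for a chaotic iteration, not anything the project actually computes):

    ```python
    # Floating-point addition is not associative: the same three terms
    # summed in a different order give bitwise-different results.
    a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
    b = 0.1 + (0.2 + 0.3)   # 0.6
    print(a == b)           # False

    # A chaotic iteration then amplifies that last-bit difference
    # until the two trajectories are completely unrelated.
    def iterate(x, r=3.9, steps=50):
        for _ in range(steps):
            x = r * x * (1.0 - x)  # logistic map, chaotic at r = 3.9
        return x

    print(iterate(a) == iterate(b))  # False: trajectories have diverged
    ```

    This is presumably why projects like this ship identical binaries per platform and compare ensembles statistically rather than expecting bit-for-bit agreement across machines.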
