The Scientific Internet

ManicDeity writes "'Forget the Matrix. It's time for the Grid.' From the good people at CERN who brought us the WWW comes a new network designed for scientific use named GriPhyN (Grid Physics Network). Loosely described as 'a Napster for scientists', it is being developed to handle the massive amounts of information created by the highest-energy experiments to date. The article can be found at Space.com." It was my understanding that this was part of what I2 was supposed to do - but this looks cool as well.
  • by Anonymous Coward
    This is also called zero sum gaming in which everyone wins a little if they all play fair, but the first one to cheat will win everything and the rest will lose.

    I think you are overestimating the effect of the free-rider problem. Obviously, just like Napster or Gnutella, people can use more resources than they provide, but as long as each participant has priority on their own machines, they are likely to gain more than they lose overall, because you are likely to have access to more computing resources when you need them, and you only give up resources that you don't need.

    (Offtopic digression into Middle Ages)
    Actually, from what I've learned of the Middle Ages, the commons were used as a place to grow crops for the entire village. They were established after northern farmers developed a heavy plow that could work through the rougher soil of northern Europe. The heavy plow required a large team of oxen, and not only could every farmer not afford such a team, but it had an enormous turning radius, so it was much more difficult to individually plow each farmer's small plot. The commons system began to fail as horses and cavalry became more popular. (People were only considered free if they could fight, and few people could afford their own horse, so they had to pledge to a Lord and become his vassal.) Perhaps herders used the communal system as well, but it was, afaik, developed out of agrarian interests.

  • http://www.globus.org/ [globus.org]
    http://grid.web.cern.ch/grid/ [web.cern.ch]

    And yes, it'll run on Linux (at CERN anyway, they're quickly getting rid of all the "legacy RISC" platforms here)

    It's not really about having fast pipes all over Europe; it's more about having software that lets your applications run on thousands of nodes around the world, and that manages all of it.

  • by fb ( 10330 )
    There is one essential caveat: grid-type projects are meant - at this stage - to be essentially private networks linking supercomputing centers for scientific data exchange only. Scientists - having now experienced bandwidth limitations - think that giving indiscriminate Internet access to the public at large has been a big mistake.

    Grids are basically meant to be wide area networks allowing services that were previously available in local area networks only, such as load balancing between different supercomputing servers and real time analysis and visualization of several terabytes of simulation data.

    I am aware of two such projects, one in the US, and the other one ... NDA ... (I think I'm allowed to say it isn't even in the planning stage yet, let's say I'm working very, very near one of the possible centers of operation :-).

    Of course the technology will be there to be made available - at a later stage - to the general public, but don't hold your breath.
    --
  • Does this mean that I can get other scientist's work without paying for it, then give that information to others without proper credit?
  • They don't ask for the data; it just gets sent to them. Definitely not Napster-like.

    I work on a related project, the Particle Physics Data Grid, and can correct this a little.

    It can be true that the data gets pushed (on PPDG the clients do actually pull quite directly, but that's an unimportant distinction), but only after a definition of the data that would be useful for that system has been supplied (here we mean system as either a box, or a cluster, or whatever... some set of resources). So it's a bit like Napster in that you say what you are interested in, but with pretty complex access to meta-data and, of course, vastly larger return sets.

    So while the data pushing seems a bit less Napster-like, the fact is it's really only confusing because of the timing of events. The idea of data caches and drawing datasets from wherever they might be available is a fair analogy to draw.
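    A rough sketch of the flow just described, in throwaway Perl (all names and numbers below are invented for illustration - this is not PPDG code): a site registers a "definition" of the data it cares about, and only matching datasets are then pushed or pulled to it.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical catalogue of datasets and their metadata.
    my @catalogue = (
        { name => 'higgs_candidates_run12', experiment => 'CMS',  size_gb => 800  },
        { name => 'minbias_run12',          experiment => 'CMS',  size_gb => 4000 },
        { name => 'pulsar_scan_2000_09',    experiment => 'LIGO', size_gb => 1200 },
    );

    # The "definition of the data that would be useful" for this system.
    my %interest = ( experiment => 'CMS', max_size_gb => 1000 );

    # The metadata match happens first (the Napster-like step); bulk transfer follows.
    my @matches = grep {
        $_->{experiment} eq $interest{experiment}
            && $_->{size_gb} <= $interest{max_size_gb}
    } @catalogue;

    print "would be delivered to this site: $_->{name}\n" for @matches;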

  • by the_tsi ( 19767 ) on Monday October 02, 2000 @05:44AM (#739591)
    It's been around for years. Pete Townshend designed it in 1970-1971.

    We won't get fooled again.

    -Chris
  • In any chaotic system (complex system) small changes in one place can effect huge changes elsewhere. Our environment (both microscopically and macroscopically) is such a system.
    One would surmise that the good lord would build something a little more tolerant of change ;)

    How that proves there is a deity... I dunno..

    A study shows that over 90% of people who died of cancer in the last 20 years ate carrots throughout their lives. So carrots cause cancer?

  • It's been a rough day so far, so bear with me here, but this crap sickens me (yes, I know all about dumbing down for the audience... and I don't agree with it in the slightest in this case).

    Third Paragraph: "In late 1990, Tim Berners-Lee, a scientist at CERN, the European Laboratory for Particle Physics near Geneva, Switzerland, invented the World Wide Web."

    Uh... pardon me, but... no... he didn't. He invented http, which, combined with tcp/ip, hardware and software out the wang, and a small pinch of timing, allowed the WWW to come into being over the next several years.

    $WWW ne 'http';
    my $WWW = "A vast number of sites, on a vast number of computers, all strung together over miles of cable, all glued together by http and tcp/ip, and allowing for communication, collaboration, and personal expression on a scale never before seen.";

    But that's just me. I could be wrong. If Gore has anything to do with it, I guess I am.

    Paragraph Seven: "Petabytes of data means a thousand trillion bytes. This is the amount of data that can be stored by a million personal computer hard drives." Oh really. A million, eh? I guess the size of the drive doesn't matter then? W/o doing the actual math, I suppose I could fit a petabyte on a million of my old 120MB drives, just as easily as on a million of my new 75GB drives...

    Audience, schmaudience...
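    For what it's worth, here is the arithmetic behind the gripe (assuming the article means a decimal petabyte, 10^15 bytes): a million drives only add up to a petabyte if each drive holds about a gigabyte, so drive size very much matters.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $petabyte = 1e15;   # "a thousand trillion bytes", per the article
    my $drives   = 1e6;    # "a million personal computer hard drives"
    printf "each drive would need to hold %.0f GB\n", $petabyte / $drives / 1e9;   # prints 1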
  • It's basically a distributed system, like Gnutella or Freenet, from what I read. I think this will end up being a pretty big deal with educational institutions - being able to share this amount of data is going to benefit everyone. The only concern for me is the amount of bandwidth this is going to consume - can you imagine downloading a few petabytes of data? Sheesh. Hopefully, though, this system will get worked out and develop into something quite useful.

    Colin Winters
  • Fair enough, we can set up a farm to do something - that's the easiest bit - but how many problems can be solved with coarse-grained parallelism, such as seti@home? I'm not convinced that the real problem is lack of brawn; sometimes it's more about brains.

    For instance, the human genome project, which is often cited as a potential application of the GRID, suffers more from lack of knowledge than lack of number crunching. We only need to assemble the bits of the genome once, and when it's done, it's a few gig of data lying around on our hard drive. The problem is more that we don't fully understand how the body gets from a DNA sequence to a properly folded protein, let alone what the resulting lump of atoms actually does. More crunch won't help - deeper insights will.

    Secondly, what generally happens when you do a lot of processing of discrete bits of data, is that you end up with another database holding the results. Then, you have to sift through that database.... Crunch is all very well, but unless you know what to do with the results, it's a waste of time.

  • The Slashdot article only mentions networks. This project is distributed computing: having a pool of computers and assigning various data and programs to them. Like Seti@Home, but more general.
  • by habib23 ( 33217 ) on Monday October 02, 2000 @05:50AM (#739598)
    Internet 2 is alive and well (and having a conference in Atlanta at the end of October). But the organisation is really part of UCAID. The physical network used as the national research and education backbone is now called Abilene, and was built by Qwest. This replaces vBNS, which was built by MCI/Worldcom under the original 1995 grant (expired in 2000) and has since been sold off to the highest bidder. As I said, this is alive and well, and peers with the European (TEN-155) and Asian (APAN) as well as Canadian (CA*Net3) equivalents. There is a worldwide interconnected series of research networks (where do you think the grid traffic is running over, the commodity internet?). I just got back from NorduNet 2000 in Helsinki (I live in the states) and the EU is already working on upgrading TEN-155 and all the sub-networks it encompasses. More info at http://www.internet2.edu and

    http://www.ucaid.edu/abilene/

  • This is part of the whole range of applications that the Internet2 project was designed to support... GriPhyN needs to run on Internet2. The commercial net doesn't allow enough bandwidth to transfer the huge amounts of data around.

    If the article had listed the partner sites for GriPhyN we would have seen that they are all Internet2 institutions...

    If you want more information, here is the GriPhyN homepage:

    http://www.phys.ufl.edu/~avery/mre/

    --Dan
  • 'Forget the Matrix. It's time for the Grid'

    Those of us who've taken math know that a grid is a matrix.
  • When reading mass-media news articles, apply this rule:

    s/napster/peer-to-peer/;
    --

  • I am assuming nuclear weapons secrets, etc will NOT be distributed/shared on here?
  • Next time read the article - the author specifically refers to it as a 'Napster for scientists'...
  • People seem to be confused about whether this is a Napster clone or a Gnutella clone or a SETI clone, etc. This is rather reasonable, since as near as I can tell it is a combination of Napster and SETI. SETI works on a small amount of data that is stored locally on your computer. This will work on a huge amount of data, which is stored in a distributed fashion as well. So, when a work request is submitted to a central computer, it gets sent to an idle machine, which accesses the relevant data through the network, performs the desired calculations, returns the results to the scientist, and probably also stores the results in the massive distributed database. It is more similar to Napster than Gnutella, I think, due to the centralization necessary to manage it.
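    A toy sketch of that round trip, in Perl (the node names, the dispatcher, and the result store are all invented for illustration - this is not GriPhyN code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @idle_nodes    = ('node-cern-07', 'node-anl-03', 'node-ufl-11');
    my %data_location = ('run12_muons' => 'node-anl-03');
    my %result_store;   # stands in for the shared, distributed results database

    sub submit_job {
        my ($dataset, $analysis) = @_;
        my $node = shift @idle_nodes or die "no idle nodes available\n";
        print "dispatching to $node (data lives on $data_location{$dataset})\n";
        my $result = $analysis->($dataset);    # the idle node runs the calculation
        $result_store{$dataset} = $result;     # result kept for later reuse
        push @idle_nodes, $node;               # node goes back into the idle pool
        return $result;                        # and the answer returns to the scientist
    }

    my $answer = submit_job('run12_muons', sub { "histogram of $_[0]" });
    print "returned to scientist: $answer\n";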
  • It seems to me that this will be the equivalent of the world's greatest SQL server. They talk about datasets that are the results of analysis runs, deciding which to keep/toss based on how often the dataset would be needed. This sounds exactly like query results, etc. The scale of this, and the distributed nature of the computing resources, is completely new. To put things into perspective, Napster currently tells me there are 2,716 Gigabytes available on line, but this project intends to work with datasets over 1000 times larger than that! (See the sketch after this comment.)

    On a more personal level, it's simply staggering how well everyone has managed to keep Moore's law chugging along since I got into things back in the early 1980s. I wrote a web page to compute disk prices [tsrcom.com] back in 1991, and I keep having to revise it to be more optimistic every 2-4 years!

    --Mike--
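    A guess at what the keep/toss decision mentioned above might boil down to (the datasets, numbers, and threshold are entirely made up - a sketch of the policy, not project code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %derived_datasets = (
        # name             requests per month, cost to regenerate (CPU-hours)
        'jet_et_histos' => { hits => 40, recompute_hours => 2000 },
        'debug_skim_17' => { hits => 1,  recompute_hours => 5    },
    );

    for my $name (sort keys %derived_datasets) {
        my $d = $derived_datasets{$name};
        # keep a cached result when regenerating it often would cost more than storing it
        my $keep = $d->{hits} * $d->{recompute_hours} > 100;   # arbitrary cutoff
        printf "%-15s %s\n", $name, $keep ? 'keep on disk' : 'toss, regenerate on demand';
    }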

  • I couldn't even set up a nationwide backbone capable of routing gigabytes of data with 11.9 million dollars - let alone petabytes of data, which not only have to be routed but also carry the additional overhead of abstracting the jobs to be completed and returning the output to the original user.

    Even if this happens, what if some scientist from Geneva wants to use all this distributed power to give his distributed.net scores a boost? It's supposed to be as easy as using the electrical grid. "When scientists submit a processing job to this worldwide network of computers, the only thing they care about is that the job gets done. They don't know which machine (or machines) the work gets farmed out to." Who's going to pay for all this processor time? I guess they will have to install meters on the side of the scientist's building, and someone will have to come check it monthly.

    Something tells me the author of this article, Katherine Freese, had to sit down and listen to a lot of frustrated but free-publicity-loving people give metaphors so she could begin to comprehend.

    Scientist says : "No, if we do this the ship won't fall over the edge of the earth, the earth is round."

    She writes : "Scientists develop new technology to allow ships to stick to the bottom of the earth."

  • I couldn't even set up a nationwide backbone capable of routing gigabytes of data with 11.9 million dollars - let alone petabytes of data, which not only have to be routed but also carry the additional overhead of abstracting the jobs to be completed and returning the output to the original user.

    The 11.9 million bucks is for R&D only - it will pay mostly for people. There will be some clusters purchased with this dough, but just for development. The money to purchase the big iron will come from a separate grant, which is just beginning to be negotiated.

    Even if this happens, what if some scientist from Geneva wants to use all this distributed power to give his distributed.net scores a boost? It's supposed to be as easy as using the electrical grid. "When scientists submit a processing job to this worldwide network of computers, the only thing they care about is that the job gets done. They don't know which machine (or machines) the work gets farmed out to." Who's going to pay for all this processor time? I guess they will have to install meters on the side of the scientist's building, and someone will have to come check it monthly.

    I believe there is no answer yet to the questions "how do we guarantee that only `grid` jobs run on the network?" or "how do we bill the grid customers (if at all)?".

    Cheers,
    Craig

  • It's nice to see that someone finds a use for perfectly good existing technology instead of rushing to push everyone into the next new thing. By using a peer-to-peer or other existing technologies the scientists are giving themselves credit for knowing what they need instead of believing in hype.

    As much as I'd like to say that scientists are naturally skeptical of the gee-whiz hype associated with new technologies, it really isn't true.

    I'm not associated with GriPhyN, but my boss is one of the principal investigators. There was a distributed computing project called Nile [cornell.edu] which did suffer (IMHO) from pushing some new technologies where they weren't really needed. The result was slowed development; the initial prototype came out a lot later than it should have, and the whole project suffered.

    So I think the decision to go with proven tech was due to a lesson learned the hard way.

    ---
    #include "disclaimer.h"

  • ManicDeity writes "'Forget the Matrix. It's time for the Grid.' From the good people at CERN who brought us the WWW comes a new network designed for scientific use named GriPhyN (Grid Physics Network)...

    Not true - CERN is neither responsible for GriPhyN [griphyn.org] (which is an NSF project), nor the invention of the data grid concept. The experiments at CERN (and other places) "merely" drive the need for something like GriPhyN.

    That said, there is a European project similar in scope to GriPhyN, which CERN is a part of: the DataGrid Project [web.cern.ch].

    For a book about Grids, you can look for "The Grid: Blueprint for a New Computing Infrastructure", edited by Ian Foster and Carl Kesselman. Both are on the GriPhyN project: Dr. Foster is a principal investigator on GriPhyN, and Dr. Kesselman is one of the Senior Personnel.

  • Looks like this is another SETI. This is basically the same deal, right?
  • Here's a link to the Internet2 website: http://www.internet2.edu/
  • Wow. You're incredibly self-righteous. You're also living in a dream, "Oh, ow, the man is stepping on me" world.

    The "Napster for scientists" thing is probably a dumb thing for someone to say in the first place. But even beside that, I think what bothers me is "Note how scientists are encouraged/enabled to share ideas/data/information. Note how the hoi polloi are not."

    This may shock you, but COPYRIGHTED MUSICAL WORKS AND SCIENTIFIC INFORMATION ARE NOT THE SAME THING. Believe it or not, your making "Uncle John's Cabin" available on the web does not help anybody, unless by help you mean, "allow people to pirate Grateful Dead tunes." There is no reason for you and a thousand other people to go over it and try to figure out what brand of guitar is being played. The sharing on Napster is a way to get around paying for music, and nothing more. No matter how you sugar coat it, that's what it is.

    On the other hand, this "Napster for scientists" has a couple of big differences. First off, the people who generate the data are the ones putting it up. It's not like someone got ahold of someone else's data and thought, "You know. This is really good data. I want some more," went out, found pirated copies of the data on Napster, and downloaded it. This is a fundamentally different situation, and it didn't bear the comparison in the first place. The comparison having been made, though, it certainly didn't require your comments about how this is another example of the intelligentsia beating up the proles.

  • The sharing of information has been around for a while. In the automobile industry, open standards, and the absence (to a degree) of secrecy, have made the industry what it is.

  • What an interesting phrase. Note how scientists are encouraged/enabled to share ideas/data/information. Note how the hoi polloi are not.

    I see. So how many petabytes of spare capacity have you got? This is serious data here. All we are talking about is a community of people using the Internet to do their job.

    So, some Universities have got together and are planning on using it for research, as was originally intended. What's the problem with that? They'll publish when they're ready, else the funding will dry up.

  • The difference is that when scientists started publicizing their ideas for the purposes of testing and priority, they also unwittingly tapped into the REAL benefit (to society): knowledge is power. Sure, you can make money off of things that you know that I don't. But that's chump change compared to what we can do if we share what we know. Non-scientific fields have yet to figure this out.

    Actually, although it is true that scientists publish their results in peer-reviewed publications available to the general public as well as the scientific community, most scientists I know who compete in a particular field generally hold off on sharing data amongst themselves as long as they can, to avoid being "scooped" by their colleagues. This is IMHO a major problem with the current funding model for publicly funded research grants. The lack of cooperation between researchers, due to the intense competition for research funding, leads to wasted repetition of effort and IMHO generally holds back progress, as well as making science less fun for all involved.

    Scientists do share more than industry does; but they could share more than they do.

  • I was under the impression that I2 was simply a separate network from the Internet for scientific purposes. This network was being developed because of the "bandwidth/bottleneck" problems that were starting to pop up on "the net".

    I2 had nothing to do with the distributed computing model described in this article.

  • 1-13: This is exactly why the fleeing aliens chose this planet to hide and then conduct their life-forming experiments on, yes.
    Don't assume that 1-13 are effects of a god. Merely causes for the effect of life.
  • Hehehe.. Nice troll. Sad thing about it, though, is that there are plenty of people out there who _do_ think like that.
  • Many villages in the Middle Ages tried an experiment in which the entire village shared a single large green space between dozens of herds. In the end the experiment failed because a few of the herd owners overgrazed the commons, leaving no food for the rest.

    This is also called zero sum gaming in which everyone wins a little if they all play fair, but the first one to cheat will win everything and the rest will lose.

    If all universities equally share their computers, then they all win a little. But if my university scrimps on its computer budget, then I can save a lot of money and still get all my computing done using other people's computers. My university wins a lot.

    Also, people will be lazy in writing their algorithms because they know that their code will still run; it will just need more processors on more machines to execute.

    Eventually it is possible that all universities stop paying for a computer budget, each thinking that it can save money by using other people's computers, but in the end there is not enough computing power to share.

    A fairer system would be similar to how your electric meter works. The university reimburses those other computer systems based on how many cycles of processing are used. This would also tend to force people who want to do this work to use efficient algorithms, thus saving even more money overall. (A toy version of such a meter is sketched after this comment.)

    Was it Heinlein that said There Is No Such Thing As A Free Lunch (TINSTAAFL)?
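    The meter proposed above could be as simple as netting out hours used against hours donated; a minimal sketch, with every site name, number, and rate invented:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %cpu_hours_used    = ( BigU => 12_000, SmallU => 90_000 );   # cycles burned on other sites
    my %cpu_hours_donated = ( BigU => 80_000, SmallU => 1_000  );   # cycles given to the grid
    my $rate_per_hour     = 0.05;                                   # dollars per CPU-hour, invented

    for my $site (sort keys %cpu_hours_used) {
        my $net = ($cpu_hours_used{$site} - $cpu_hours_donated{$site}) * $rate_per_hour;
        printf "%-8s %s \$%.2f\n", $site, $net >= 0 ? 'owes' : 'is owed', abs $net;
    }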

  • Got petabytes of potentially secure data? No problem.. Get microsoft.net!
  • Whatever did happen to 'Internet2'? I know some American universities were testing something along those lines, but what happened to it?
    (Sorry if it's offtopic, but if I2 did take off it would be ideal for everyone...)
    Richy C. [beebware.com]
    --
  • It's just that ManicDeity called it a 'Napster for Scientists', Hemos likened it to 'Internet2', but it seems it's more like distributed.net [distributed.net]. Sorry about that, I forgot to read the article before posting - but I'm sure I heard of something like this a few months (even a year) ago. Anyone else?
    Richy C. [beebware.com]
    --
  • Looks like the author of the article is confused about the difference between networks and applications. Research and Education networks, such as Internet2, are there to facilitate existence of advanced applications, such as various data grids, teleimmersion, LBE bulk data transfers, etc.

    Appearance of new applications reinforces the need for advanced networks, not the other way around. In fact, we (Internet2) work with the U.S. counterparts of the described European project.

    Perhaps a lot of students don't realize this, but all traffic between Internet2 participating Universities goes over Abilene (Internet2 backbone).

    More information about Internet2 and its activities can be found at http://www.internet2.edu/

    --Stanislav Shalunov (Internet Engineer at Internet2)
  • And you'd never be able to get published in a refereed journal, so grants for doing this sort of thing are unlikely, and no one will trust your results.
  • It'll be turned to plasma.
  • Real science, much to the horror of the Intellectual Property Industry, operates best in an environment where ideas are openly shared and there is no excuse for censorship. It breaks down when researchers refuse to communicate with each other or are prohibited from bouncing their ideas off one another. The best ideas in science were given to humanity for free: Relativity, Evolution, the double helix, etcetera. No one asks for a royalty payment when you use E=mc2. It will be interesting to see if the land-rush to legislate IP in the new millennium doesn't end up killing the goose that laid the golden egg.
  • If you mean the scientific underpinnings of how nuclear weapons work, there are no secrets in that regard. The physics of fission and fusion are taught in introductory courses to high-school and college students. When they were described by Einstein they were only secret because only a handful of his peers had done the necessary background to understand what he was talking about, not to mention that the Russians and Germans vehemently denied, for political reasons, that anything he said was true.

    For that matter, there's a very detailed description of how to build a fusion bomb that was published in one of Tom Clancy's novels (I think it was called "An Ominous-Sounding Title" but I'm not sure). At any rate, only the technological implementation of nuclear fusion or fission can be considered a secret. As if it matters; our current stockpile of missiles is still fully capable of wiping humanity from the face of the earth. What difference is one or two more going to make, honestly?

  • Did you know that FIFTY PERCENT of all Americans are of BELOW AVERAGE intelligence? What a national tragedy!

  • I'd have to say this is something other than science. You're appealing to special circumstances, but ignoring one simple fact: if none of these conditions were met, the Universe as a whole would be essentially unchanged, and the only difference would be the absence of US. In which case we would not be here to argue about it.

    But wait, you're ignoring two simple facts: the other explanation for this amazing set of coincidences is that we have evolved within the conditions you describe and are therefore perfectly comfortable.

    Who says that life must have liquid water?

    Who says that life must have 70% nitrogen?

    Who says that planets must have perfectly circular orbits?

    Who says that cosmic rays are bad for life? They're bad for us. But that's just us.

    What characteristics of water are essential for life?

    How is the thickness of the crust designed? How do you know what it would be like if it were thinner, or thicker?

    You have no other examples of life, intelligent or otherwise, to display. So none of what you say means much to me. There are billions of other star systems, around which there may be billions of other planets. Your string of amazing coincidences is an appeal to a very small sample. We do not know what conditions are like in most of the rest of the universe. More specifically, YOU do not know. I am certainly not going to listen to your opinion on a matter of which you have displayed such profound ignorance.

    Your argument is that of a four-year-old child who cannot see past the end of his own nose. Because I have a four-year-old I am familiar with the condition and imagine that someday you'll mature enough to realize that there is more to the universe than you are prepared to understand. I don't particularly care if you do, of course. Evolution will take care of those who cannot adapt to reality.

  • It's a joke. Fifty percent of all Americans would fall below the line, and fifty percent would fall above it. I believe that some President was presented with this statistic (Johnson? Maybe Nixon) and professed shock, not understanding that with any statistic of that nature it would always be so.
  • Not really. I class all who build such devices as lunatics. Ourselves included. A device like that is no more or less dangerous in the hands of a supposed friend than an enemy.
  • Ooooo, things like this just irritate the hell out of me.......

  • Loosely described as 'a Napster for scientists'

    psssss... No one tell Larz...pass it on....

  • Hmmm. cool idea, wonder if someone could write a slick GUI in VB for it.... yah... could work.....

    BTW, get over the IP thing - most scientists will be the first ones to tell you that information should be free.

  • Distributed hamster testing maybe?

  • Yes, of course you can. If you do it too much, the community will notice and start being rude to you, though. Nothing to stop you but that.
  • From the way the article describes this, it won't be much like Napster at all. With Napster you share data with people all over the Internet. This, however, sends data to computers for processing and storing during their idle time. They don't ask for the data; it just gets sent to them. Definitely not Napster-like.
  • Damn. I know I'm gonna get (-1, Redundant) on my original article, but I stupidly hit the wrong button and Submitted it instead of Previewing it. Nevertheless, here's what I wanted to write after thinking it through:

    You could also read the original project summary [griphyn.org] for the Grid Physics Network.

    Although the site linked by the story (or click here [ufl.edu] for your One-Link (tm) GriPhyN info, in case you're too lazy to check the article out) has more new info than the original one, and it's more easily understood.
    (Grrr. It's Preview, not Submit. It's Preview, not Submit).
  • You can read the project summary [griphyn.org] for the Grid Physics Network.

    Here's what they're proposing to use the GriPhyN for: The four physics experiments are about to enter a new era of exploration of the fundamental forces of nature and the structure of the universe. The CMS and ATLAS experiments at the Large Hadron Collider will search for the origins of mass and probe matter at the smallest length scales; LIGO (Laser Interferometer Gravitational-wave Observatory) will detect the gravitational waves of pulsars, supernovae and in-spiraling binary stars; and SDSS (Sloan Digital Sky Survey) will carry out an automated sky survey enabling systematic studies of stars, galaxies, nebulae, and large-scale structure.

  • The real race now is how you will find a way to make money off this new medium - it will probably involve p0rn, and create several millionaires...... Of course, the scientists will not see a dime....
  • We really should make the stories moderatable too. The lame comment about Napster is offtopic. Maybe a comparison to SETI@Home would make sense, but what on earth has Napster to do with it?
  • It's not an excuse for lame comments.
  • by 64.28.67.48 ( 217783 ) on Monday October 02, 2000 @05:54AM (#739643)
    And more like SETI@Home [berkeley.edu] -- using the net to do distributed computing, but the article seems to indicate that storage will be distributed as well. And I think that all the computers that are part of the network will be dedicated to this task; you won't be able to hook up your home machine to GriPhyN. If you're going to spread out your storage, you will need to be able to rely on it being there (i.e., Joe Homeuser shut off his PC and now we can't get access to the secrets of the atom).

    -------------
  • It's nice to see that someone finds a use for perfectly good existing technology instead of rushing to push everyone into the next new thing. By using a peer-to-peer or other existing technologies the scientists are giving themselves credit for knowing what they need instead of believing in hype.

    It's a lesson millions of Microsoft ME and 98 buyers could have done well to learn.
  • So will Dow Corning be suing all the freeloading scientists for "violating their intellectual property"?

    "Remember, corporate greed comes before ethics!" -TBWA/Chiat/Day board meeting

  • Was it Heinlein that said There Is No Such Thing As A Free Lunch (TINSTAAFL)?

    Actually, it was TANSTAAFL (There Ain't No Such Thing As A Free Lunch). But close enough. And certainly closer than /.'s fortune cookie attributing it to Friedman.
  • In the last century tens of millions of people lost their lives in wars.
    Millions of women were raped.
    Every year thousands of children are shot.
    etc etc...

    Yeah, praise the Lord!

  • 1. In a couple of billion years, the sun will be too cold to support the existing life here on earth. A couple of billion years ago, it was warm enough to allow liquid water on Mars.
    2. If the rotation of the earth took 4 more hours, I could get 10 hours of sleep each night and still spend 18 hours working/partying.
    3. Um... nearly circular orbits are the norm for any forming solar system. Take any cloud of gas, start to condense it in free space, and bingo!
    4. These same gases cause Venus to be a raging cauldron of heat.
    5. And if it were only 60 feet away, it would plow over all tall buildings, making construction of anything more than mud huts very dangerous.
    6. Damn. I just dug a hole in my back yard. Guess we're all gonna die now.
    7. The tropics experience no appreciable change in climate due to the tilt of the axis. They always receive direct sunlight. Stuff still grows there.
    8. Venus also has an atmosphere. So does Jupiter, so does Saturn...
    9. Well, who put those meteors out there in the first place, eh?
    10. What is to say that a different sized planet would not simply give rise to a different form of life?
    11. Again, see previous comment. Essential to all forms of life that we know. Seeing as we have a very limited experience, this isn't really fair.
    12. Sorry, I'm running out of snarky comments.
    13. Again, life as we know it.

    You're probably bored of reading this by now, but if you are not, consider this: In an infinite universe of infinite possibilities, assume there is a finite chance that the conditions necessary for life will arise. This leads to an infinite number of worlds which will support and contain life, and of course, at least one person who will insist that HIS world is special, that it was manually created by some greater being specifically for HIM.
  • they think of new things like 'the new internet' and all that jank about once a week. when will they actually deliver? I remember hearing a couple years ago about a 'new internet' just for universities. what happened to that? I don't think it's going to happen. too much money is currently invested in the current 'old internet'.


    BTW... I KNOW IT'S DAN QUAYLE. SHUT UP

    "sex on tv is bad, you might fall off..."
  • Very true - most programmers are so overawed by their own achievements that they forget other people's achievements, and therefore their rights over what they achieve. If I create something great and someone else steals it without a thank you - which in these times means money - that is very ungenerous, I feel.
  • by OlympicSponsor ( 236309 ) on Monday October 02, 2000 @05:44AM (#739651)
    What an interesting phrase. Note how scientists are encouraged/enabled to share ideas/data/information. Note how the hoi polloi are not. What's the diff? Not money--we all know how much money a good scientific discovery can make.

    The difference is that when scientists started publicizing their ideas for the purposes of testing and priority, they also unwittingly tapped into the REAL benefit (to society): knowledge is power. Sure, you can make money off of things that you know that I don't. But that's chump change compared to what we can do if we share what we know. Non-scientific fields have yet to figure this out.

    The first society to allow (mandate?) free sharing of ideas to everyone (while still giving scientific-priority-style credit to the originator) is going to look back at us and laugh: "They thought they could advance by keeping secrets! What fools!"
    --
  • what we really need is to clear all the pornographers/sex sites off the web completely; then we don't need a 'new' I2.

    I'll be happy with the 'old' one.
    what do you think ?
  • Heinlein: Job, a comedy of justice. And in case you did not know, that's a book. You know, the funny analog paper like thingy?
  • www.popularpower.com is already doing this, and they've solved that problem by distributing some of the money that their customers pay them to those that allow use of their systems. They've got clients for NT and Linux. Minty-fresh.
  • I think the biggest thing that puts this in a different category than distributed computing projects like Seti@home is that with Seti@home, you use a package which includes all the software used for the analysis, then simply exchange uninterpreted data and the results back and forth. With GriPhyN, scientists send both the data and the code for the analysis program. The potential impact this could have, not just on computing but on open source style projects, is incredible!
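    A toy way to picture that difference (hypothetical - not GriPhyN's actual mechanism): the job bundle carries the analysis code along with a pointer to the data, instead of every node running one fixed, pre-installed program the way Seti@home does.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The 'job' ships both the data reference and the code to run on it.
    my %job = (
        dataset => 'ligo_pulsar_scan_2000_09',
        code    => sub { my ($data) = @_; return "candidate events found in $data" },
    );

    sub run_on_remote_node {
        my ($job) = @_;
        # a Seti@home-style node only ever runs its one built-in analysis;
        # here it executes whatever code arrived with the job
        return $job->{code}->($job->{dataset});
    }

    print run_on_remote_node(\%job), "\n";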
