Science

Teragrid: Massive Grid Computing

onyxcide writes: "Envision is running a quick article on a new national grid of computing resources called TeraGrid. Half a petabyte of disk storage, a 40-gigabyte-per-second national optical backbone, and 13 teraflops of computing power will make up this monster. It will allow "lavish amounts of online data to be continually available for instantaneous analysis, data mining, and knowledge synthesis." There's another article in the same magazine here: Transforming Research with High-Performance Grid Computing" LighthouseJ adds some details: "C|Net's news.com has a story about a new Compaq supercomputer named Terascale. It uses 3,000 Alpha EV68 processors distributed over 750 servers using networking systems from Quadrics. They say it can perform as fast as 10,000 desktop PC's combined in one second. The massive computer will make its official debut on Monday at the Supercomputing Center in Pittsburgh, PA."
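
To put the headline numbers in rough perspective, here is a back-of-envelope sketch; the per-desktop compute and disk figures below are assumptions about circa-2001 hardware, not numbers from the articles:

    # Back-of-envelope scale check (assumed figures marked as such).
    teragrid_flops = 13e12                     # 13 teraflops of aggregate compute
    desktop_flops = 1.3e9                      # ASSUMPTION: ~1.3 GFLOPS for a typical 2001 desktop
    print(teragrid_flops / desktop_flops)      # ~10,000 desktops' worth of peak compute

    storage_bytes = 0.5e15                     # half a petabyte of disk
    desktop_disk_bytes = 40e9                  # ASSUMPTION: ~40 GB per desktop hard drive
    print(storage_bytes / desktop_disk_bytes)  # ~12,500 desktop drives' worth of storage
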
  • Who gets to use this monster? There are a lot of projects that could use these flops, but who gets them? Government projects or private ones? And imagine a Beowulf cluster of these!
  • ...it will take us to /. a site running on this!
  • Yes, yes, Moore was right again, and we can build an even bigger and faster network of supercomputers and throw them at the seven types of problems they're really good at solving. Of course, we've diddled this configuration a bit--massively distributed on the WAN level (instead of just sticking them all in Kansas and putting fat pipes to them directly).

    So?
  • Wow! (Score:3, Interesting)

    by haxor.dk ( 463614 ) on Saturday October 27, 2001 @06:03PM (#2488257) Homepage
    Wow, some part of my mind is drooling!

    The groundwork for a Matrix/Johnny Mnemonic-style cyberspace, anyone? ;)
    • What would be the advantage of a cyberspace modeled around Keanu Reeves anyway?

      Dude, there goes virtual So-crates! Eeeexcelent! (play wailing air guitar riff)
  • ...someone *not* making a lame Beowulf cluster joke about these? No, I couldn't either.
  • Haven't we already seen a few stories about this system?
  • I tried combining 10,000 PCs once. Took me, my friends, and a bulldozer all weekend.
    • You've got to wonder how fast 10,000 PCs combined in one second really is. I mean, they're not really designed to withstand high-speed impact.
  • TeraGrid at SC2001 (Score:5, Informative)

    by shalunov ( 149369 ) on Saturday October 27, 2001 @06:08PM (#2488271) Homepage
    TeraGrid [teragrid.org] will be present [npaci.edu] at SC2001 [sc2001.org] (a yearly conference and expo for supercomputing and high-performance networking). Just to give you a hint of what it is like, the show floor will have more than 10Gb/s of total outgoing Internet capacity (plus more private/non-IP circuits).

    If you're going to be in Denver the week of Nov 12, 2001, consider stopping by. If nothing else, the place will have free and open 802.11b!

  • As fast as...? (Score:1, Redundant)

    by Glock27 ( 446276 )
    They say it can perform as fast as 10,000 desktop PC's combined in one second.

    I bet it can perform as fast as 10,000 desktop PC's combined in one year, too! (WHATEVER the hell that means!)

    I presume the author meant it was "10,000 times faster than a desktop PC".

    I wonder if Hammer will be faster than those Alphas per processor...I'd think so.

    299,792,458 m/s...not just a good idea, it's the law!

  • ... it is the social (read: human political) barriers. How often do you get organisations with disjoint cultures agreeing to "share" resources? Scientists are no different, as there's only a limited pot of government funds ... ask the Department of *ENERGY* why they're doing genome research. Corporate cultures make it difficult to merge; just putting together a backbone and lots of honey pots (CPU resources) doesn't automatically lead to an automagic collaboration of trust. Grid computing doesn't address the social issues ... will my work be safe, can I get a fair cut of the machine, will commercialisation contaminate standards, etc.?

    My point is that it takes a while for *HUMAN* systems to adjust to new technology waves. I would point out that in the early 1900s, factories were driven by belt-pulleys and machines (lathes/drills/press/etc) were contained in small 3-story buildings. Once electric motors got small enough and eliminated the physical requirement of being mechanically linked to the power source, then we could suddenly build whole acres of assembly plants and skyscrapers.

    I see a necessary transition for software ... someone in a distant /. post noted that the GPL promoted a weird form of trust ... because you knew the viral nature would eventually force publishing of any improvements, you had some confidence that the effort you put into developing software would (potentially) be amplified, giving you improved software down the track. The Sun Community Source License (SCSL) and Microsoft End-User License (MSFU) don't exactly inspire the same confidence and level of trust.

    Currently TeraGrids are the beowulf of ASPs ... but nothing different from a fancy queuing system. Other systems such as Globus are seriously researched, but writing apps is still difficult. As for Microsoft's .NET, it is still an unknown factor (and RPCs over a low-latency internet don't exactly promote radically new killer apps). What does it take for a radically new level of trust (integrity, availability, confidentiality) to engineer the new killer apps? Chucking money at hardware without solving the human issues seems a little like an indirect government subsidy to the chip companies to me.

    LL

  • Wow (Score:1, Funny)

    Counterstrike would run at, like, a billion frames a second!

    I bet Square's pissed they didn't come up with that until _after_ they'd spent all that time rendering the FF movie. 10 megs a frame or some silliness like that? Sheesh.
  • by Anonymous Coward
    They're both NSF-funded systems. Any scientist in the country can get time on them just by writing a proposal. A peer-review committee then decides who gets time and who doesn't, or if there's a better machine to use. They're both part of the NSF PACI (Partnerships for Advanced Computational Infrastructure). See www.paci.org [slashdot.org] for more info on getting time.

    The money for the TeraScale machine was awarded last year, and it went to the Pittsburgh Supercomputing Center. The follow-on to the TeraScale machine was an award made two months ago, the Distributed TeraScale Facility, or the DTF. The DTF award went to NCSA in Illinois, SDSC in San Diego, Cal Tech, and Argonne National Lab. The winners decided to rename the DTF the TeraGrid. They've got a web page about the new system at www.teragrid.org [teragrid.org]

  • by Skim123 ( 3322 ) on Saturday October 27, 2001 @06:31PM (#2488307) Homepage
    Got to hear a talk from Henri Casanova [ucsd.edu], one of the top dogs working on distributed application scheduling and simulation software for The Grid. Neat stuff, but, as he addressed in his talk, we're really looking at a network of computers that only people needing massively intensive computations done on highly parallelizable problems would find useful (see the quick sketch below). Translation: only researchers in certain fields need this.
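
    To make "highly parallelizable" concrete, here is a quick Amdahl's-law sketch (my own illustration, not from the talk): even a small serial fraction caps how much a few thousand processors can help.

      # Amdahl's law: speedup is limited by the fraction of the work that stays serial.
      def amdahl_speedup(parallel_fraction, n_procs):
          serial = 1.0 - parallel_fraction
          return 1.0 / (serial + parallel_fraction / n_procs)

      for p in (0.99, 0.999, 0.9999):
          print(p, round(amdahl_speedup(p, 3000)))
      # 0.99   ->   ~97x  (1% serial work leaves most of 3,000 CPUs idle)
      # 0.999  ->  ~750x
      # 0.9999 -> ~2308x
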
  • by anzha ( 138288 ) on Saturday October 27, 2001 @06:33PM (#2488309) Homepage Journal

    FYI, this will be updated after Supercomputing, but this [top500.org] is the list of the fastest computers in the world.

    Pittsburgh's super'puter will rank up there with LANL's new one (also a Compaq-based one). Pittsburgh's will be the fastest SC for nonclassified work.

    I'm not sure whether it'll dethrone LLNL's ASCI White or not. It does knock seaborg @ NERSC from the fastest unclassified SC spot, though.

  • It sounds like this monster will require a lot of wattage to run and generate a lot of heat. I suspect that the system could be used as a computer AND a furnace, i.e. the heat output could be cycled through the location where it's housed, reducing natural gas (or oil?) or electrical heating costs.
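
    For a rough sense of the wattage involved, here is a hedged estimate; the per-CPU and per-server figures are order-of-magnitude assumptions, not published numbers for this machine:

      # Very rough power/heat estimate; wattage figures are ASSUMPTIONS.
      cpus, watts_per_cpu = 3000, 75      # assume ~75 W per Alpha EV68 processor
      servers, overhead_w = 750, 200      # assume ~200 W per server for memory, disks, fans, interconnect
      total_kw = (cpus * watts_per_cpu + servers * overhead_w) / 1000.0
      print(total_kw)                     # ~375 kW drawn, essentially all of it ending up as heat
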
  • Hmm... think long and hard about that one! I'm not even stoned, and it's messing with my mind.

    "Knowledge Synthesis"... doesn't that defeat the point of "knowledge"? whoa.

    >/dev/null

  • by Anton Anatopopov ( 529711 ) on Saturday October 27, 2001 @07:46PM (#2488418)
    If this kind of project gets off the ground, I think we may yet see a futures and options market develop for that valuable commodity - CPU Cycles.

    There would be plenty of room for speculation, and participants in the market would basically be betting on Moore's law, in addition to the other economic factors common to all derivatives markets.

    The problems I foresee are to do with the standardization of the contracts. We would need to agree on an architecture and a delivery method for the CPU cycles. All in all though, this could be a really lucrative business, especially with the demand for GHz from Hollywood movie studios set to explode in the near future due to actors being replaced with CGI animation.

    Sometimes I feel like I am living in a Bruce Sterling or William Gibson novel, the pace of technology just seems to get faster and faster.

    • Keep in mind that only certain applications would work on a massively distributed basis. Things like Seti@home and Distributed.net are good because they deal with small chunks of data that can be processed by PCs. Things like CG rendering probably wouldn't work in a broad sense because of the kinds of bandwidth and storage needed to deal with frames. I have no idea how big (in bytes) a single frame of a motion picture is, but I would guess that the costs in bandwidth just to send back the finished product would negate any benefit.

      At the same time, I'm working on some artificial intelligence research, and I could definitely benefit from having computers spread around doing my work. I'd probably even pay for it. :-)
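
      A minimal sketch of the Seti@home-style split described above (the names are hypothetical; the point is that each work unit is small and independent, so the bandwidth spent shipping it is tiny compared with the compute it buys):

        # Master/worker split into small, independent work units (illustrative only).
        def make_work_units(data, chunk_size):
            """Cut a big dataset into small, self-contained chunks for remote workers."""
            for i in range(0, len(data), chunk_size):
                yield i, data[i:i + chunk_size]

        def analyze(chunk):
            """Stand-in for hours of CPU work on a few hundred KB of input."""
            return sum(chunk)

        results = {}
        for unit_id, chunk in make_work_units(list(range(1000000)), 10000):
            results[unit_id] = analyze(chunk)   # in reality, shipped to an idle PC and returned later
        print(len(results), "work units done")
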
  • gigabit / gigabyte (Score:3, Interesting)

    by jonbrewer ( 11894 ) on Saturday October 27, 2001 @08:13PM (#2488464) Homepage
    All please read that announcement as having said "40 gigabit," 'cause that's what it is. Still fast... 4x OC-192. (A quick unit-conversion sketch follows at the end of this thread.)

    God knows how the research people pay for this. Impoverished corporations like my employer still dick around with multiples of T1.

    Avaki [avaki.com] were in peddling their grid [avaki.com] computing solution, and I had to say to the guy... "do you have any idea how little bandwidth we have?"

    Grid computing will affect the rest of us when everyone can get high speed network connections.
    • Yeah, it's really nice to see that now even the submitters on /. don't read the stories anymore.

      • it seems you're the one who didn't read the story... you just read the headline on /.

        "We will share a unified grid with more than half a petabyte of disk storage, a 40-gigabit-per-second national optical backbone"

        You don't measure bandwidth in GBps, you measure it in Gbps...

        I thought they were still testing OC-768 ?
        • I read both, and was especially interested in how they would finance 8 OC-192s, as mentioned in the headline:

          "40-gigabyte-per-second national optical backbone"

          Then I saw 40 Gbit in the story, and hence was disappointed that there is no quality check on the headline...

          Now how did you deduce that I did not read the story, dear Watson?
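
    For the record, the unit arithmetic behind this thread, taking OC-192 as roughly 10 Gb/s:

      # Gigabit vs. gigabyte, and how many OC-192 circuits each figure implies.
      OC192_GBPS = 10.0                               # roughly 10 gigabits per second per OC-192
      article_gbps = 40.0                             # the article: 40 gigabits per second
      print(article_gbps / 8)                         # = 5 gigabytes per second
      print(article_gbps / OC192_GBPS)                # = 4 OC-192 circuits
      headline_gigabytes_ps = 40.0                    # the headline: 40 gigaBYTES per second
      print(headline_gigabytes_ps * 8)                # = 320 gigabits per second
      print(headline_gigabytes_ps * 8 / OC192_GBPS)   # = 32 OC-192 circuits
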

  • by hattig ( 47930 )
    I have a friend who's currently trying to think of a business case for Grid Computing ... but is having trouble. Apart from academics and researchers, can The Grid ever become mainstream? Why should companies invest in it, i.e., your average medium-to-large corporation? The books often seem to cite creation of virtual companies and vertical integration of companies (i.e. from the component manufacturers to the end retailers), but these situations don't seem particularly realistic ... and you'd have to agree on policies over data sharing, for a start!
    • Massively Multiplayer Online Game.

      One that requires more than extremely simple calculations. If you can't make a better game universe with this baby, then the problem is your game designers.
    • [warning: I am participating in GRID activities and research with my current job. These opinions are my own.]

      It is important to note that the GRID currently is not aiming to satisfy a "business case". Specifically, it is a research tool that is designed to help scientists address problems that cannot (easily) be solved using existing solutions. There are many good explanations of the GRID around the net; one of them is at the Globus site.

      There are a few examples of where the GRID is being used or will soon be used in the research community:

      - NEES [neesgrid.org]: National Earthquake Engineering Simulation GRID
      - HENP [bnl.gov]: High Energy Nuclear Physics working group
      - Internet2 [internet2.edu]: How the Internet2 infrastructure is being used in the development of various GRID projects.

      Now, there are additional reasons as to why businesses might be interested in projects such as the GRID. I come from an FEA background, and it would be useful to many organizations to be able to harness multiple systems to complete some of the CPU-, data- and time-intensive tasks that the GRID proposes to address.

      Further, the GRID's long-term goal is to provide the ability to offer compute cycles and storage in the way that the current electrical power grid offers electricity. I am sure we can all imagine personal uses for this sort of power. Creating a viable business end for this is the question that I cannot answer (and that you are asking). However, creating this system will help researchers. Once it is available, creating consumer-level benefits should not be difficult.

      Finally, you mention some of the policy issues, particularly concerning data storage. One of the key parts of the GRID work involves ACLs, distributed directory services, and the like. It is important to note that organizations in GRID projects (and corporations of the future who might use GRID like services) will have the ability to grant/deny access to their systems. There is a great deal of effort currently under way to make sure that the grid is not going to become a general purpose storage system for everyone's generic data. Some of the work on this type of middleware is available at the Internet2 [internet2.edu] middleware site.

      • The "big science" projects are the obvious target of Grid Computing; but there the grid is only useful for research institutions and R&D labs of corporations.

        I also wonder if the oft-cited electricity analogy breaks down. Consumers and businesses pay for having electricity on tap - and they don't have their own power generators onsite. However, many people (and certainly businesses) have computer resources on location - these will get more powerful, so by the time The Grid starts becoming more ubiquitous, why would they have any need to use it?

        However, IBM seem to think it's the next big thing to happen to modern-day computing, so I'm obviously missing something here...!

  • Can it run Tribes 2?
  • in a few years my wristwatch will be more powerful ;D
  • Data mining and "knowledge synthesis" algorithms are NOT scalable to that many nodes, especially for such a clumsy topology.

    I suggest they get their facts straight.

    All you can do is hold stupid matrices and do y=s.axb. And _rather_ slowly.


  • What will this network be like when all of the pcs in the world are linked together?

    Trapper-Keeper......

    Be afraid, be very afraid.
  • Just imagine a beowulf cluster of these clusters!
  • I find this area of research particularly interesting because of my own research and the high amounts of computing power that it requires.

    But to answer some of the previous posts about the sharing of resources, one of the larger problems is to figure out a method of saying this:

    Run program X at site Y under policy P providing access to Z under policy Q.

    So it's not like you'll just be able to tap in; there will be policies for program execution and data access (a rough sketch of such a statement follows this comment). But it's coming faster than you think.

    One of the coolest concepts is that of process migration, which will probably be integrated into a ubiquitous computing grid, whereby a process running on Processor A, Architecture X can migrate to Processor B, Architecture Y and preserve state. I've seen this work with some DECs and Sparcs swapping processes and it's most impressive, but it still needs some work.

    I would suggest reading The Grid: Blueprint For a New Computing Infrastructure [amazon.com] if you'd like to learn more about the general idea of the grid. It's light on technical details, but a good high-level view.
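
    To make the "run program X at site Y under policy P, providing access to Z under policy Q" statement above concrete, here is a toy sketch; every name and field in it is invented for illustration, and real grid middleware (Globus and friends) would express this quite differently:

      # Toy model of a grid request: run X at Y under policy P, touching Z under policy Q.
      from dataclasses import dataclass

      @dataclass
      class GridRequest:
          program: str        # X: what to run
          site: str           # Y: where it runs
          run_policy: dict    # P: constraints on the run (CPU hours, priority, ...)
          data: str           # Z: what data it may touch
          data_policy: dict   # Q: constraints on that access (read-only, quotas, ...)

      def authorized(req, site_acl):
          """Toy check: the site must explicitly allow both the program and the data access."""
          return (req.program in site_acl.get("programs", ()) and
                  req.data in site_acl.get("readable_data", ()))

      req = GridRequest(
          program="climate_model",                  # hypothetical application
          site="ncsa.example.edu",                  # hypothetical site
          run_policy={"max_cpu_hours": 5000},
          data="/archive/satellite_obs",
          data_policy={"mode": "read-only"},
      )
      acl = {"programs": ["climate_model"], "readable_data": ["/archive/satellite_obs"]}
      print(authorized(req, acl))                   # True
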
  • Imagine a Beowulf cluster of these things!
