Super Computing 2000
Stephen Adler of Brookhaven National Laboratory has written a fine account of the Super Computing 2000 conference in Dallas, Texas. He covers supercomputing, venture capital, some fascinating info about SETI, and open source software, and even includes some geek porn.
That's not Tux!!! (Score:1)
I didn't notice any references to UD in the article, so I'm somewhat confused by this. And really, a blue chicken with the UD logo on its chest looks almost entirely unlike a penguin.
--Charlie
Re:Geek Porn (Score:2)
pragmatic definition of supercomputing (Score:2)
Re:The vector pr0n was missing ! (Score:2)
The fun part is that Cray has been moving in the direction of ``kludging together 4000 4-way SMP Linux boxes with 100BaseT and some duct tape'', as you so elegantly put it. I know they're not building Beowulf clusters yet, but they are using Alpha processors (instead of Cray vector processors) in all of their machines that are actually worth mentioning (T3E etc.). It is a big step in the commodity-hardware direction (more, cheaper CPUs in favour of fewer expensive ones).
Yes, it will be interesting to see what happens next. I do not understand why American companies have been moving away from the vector market - any ideas? At least IBM should have the resources to produce a vector CPU. I think it's Hitachi that's licensing the Power3 core, using it in a heavily modified Power3-like CPU with vector registers. Kind of interesting...
Oh, and about the clusters: the ASCI project is funding the development of hardware and software for very high speed computing. It is an American initiative (so American-only vendors, meaning no vector machines, because Cray cannot deliver anything remotely reasonable in the high end) with the goal of producing computing systems powerful enough to simulate nuclear weapon tests (in order to eliminate or reduce the need for actual live testing). Simulating that kind of physics is a very real-world problem (not your average distributed.net/seti@home embarrassingly parallel problem). However, the four fastest supercomputers in the world (ASCI White, ASCI Red, ASCI Blue Pacific and ASCI Blue Mountain) are clusters. Yes, it's not 100BaseT, but the machines are certainly not SMP/ccNUMA either. You will notice that the 4.9TFlops/sec for #1 is much less than the 12.3TFlops/sec theoretical peak for that machine. The Hitachi and the Crays come much closer to their theoretical peaks, of course, because as you pointed out their architectures are so different from the clusters. But the clusters are real nonetheless.
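As a back-of-envelope illustration of that gap, here is a minimal C sketch using the approximate Rmax/Rpeak figures quoted above for the #1 machine (nothing here is measured, it just divides the two numbers):

```c
#include <stdio.h>

int main(void)
{
    /* Approximate figures quoted above for the #1 machine:
       sustained Linpack (Rmax) versus theoretical peak (Rpeak). */
    double rmax  = 4.9;    /* TFlops/sec, sustained Linpack */
    double rpeak = 12.3;   /* TFlops/sec, theoretical peak  */

    printf("Linpack efficiency: %.0f%% of peak\n", 100.0 * rmax / rpeak);
    /* ~40% -- and real applications with irregular communication
       patterns typically land far below even that. */
    return 0;
}
```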
Re:The vector pr0n was missing ! (Score:2)
The main point I was trying to make from the very beginning of all this was something like:
1) Vector computing gets very little attention in the US. (I couldn't find a single vector machine in the top 100 that was located in the US.)
2) I find it amusing that Japanese vector machines are not in use in the US because of politics.
3) I'm wondering why no US company is building vector machines that are used for *high*end* computing (meaning it would at least appear in the Top500, which no Cray vector machine does).
I don't want to bash Cray, and I'm not on Hitachi's payroll.
Let me know what you think about 1, 2, and 3.
Re:That's not Tux!!! (Score:1)
--Charlie
Cray-6 debut (Score:2)
Re:The vector pr0n was missing ! (Score:2)
So on a 1GHz ``ordinary'' CPU you do
multiply a, b
and get 1GFlop/sec, because you can run ~1 instruction per cycle.
On a 300MHz vector machine you would do
multiply a, b
and get 4.8GFlops/sec, because each register holds 16 scalars instead of one.
The usual characteristics of vector machines are:
Clock speed is low
The work-per-cycle that can be done is high
Memory bandwidth and access time is extremely good
They're expensive, but for some problems the price/performance is better than what you can get from a non-vector machine
Some vector machines have no cache - or, put another way, all their memory is the cache. Access times are constant no matter how you access your memory. You pay for this with a very long pipeline, which in turn means that the CPU is virtually unusable for some problems. On the other hand, the cache-based CPUs are virtually unusable for other problems. It's a compromise.
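To make that concrete, here is a minimal C sketch of the kind of loop vector hardware is built for. The array size is arbitrary, and the peak figure in the comments just restates the 300MHz x 16 example above:

```c
#include <stdio.h>

#define N 1048576   /* arbitrary length, long enough to keep a vector unit busy */

/* Classic vectorizable kernel (DAXPY): y = a*x + y.  A vectorizing
   compiler turns this into vector loads, vector multiply-adds and
   vector stores, so per-iteration loop overhead disappears and the
   limit becomes memory bandwidth rather than clock speed. */
static double x[N], y[N];

static void daxpy(int n, double a, const double *xv, double *yv)
{
    int i;
    for (i = 0; i < n; i++)
        yv[i] = a * xv[i] + yv[i];
}

int main(void)
{
    int i;
    for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    daxpy(N, 3.0, x, y);

    /* Peak from the example above: a 300MHz vector CPU whose registers
       hold 16 elements, completing one element per lane per cycle,
       peaks at 0.3e9 * 16 = 4.8 GFlops/sec. */
    printf("y[0] = %g (expect 5)\n", y[0]);
    return 0;
}
```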
Vectors vs (cc)NUMA (Score:2)
To sum up my aborted post, Cray has been evolving from the single processor Cray-1, to the Multi-processor Cray YMP, to the massively distributed T3E. Seymour and Cray Computer Corp. (spun off of CRI in the late 80's or so) failed because they couldn't push as much performance through a smaller number of processors. Eventually the physical laws of silicon (Seymour even tried GaAs to get more performance) take over, and you must expand the number of processing units to get greater and greater performance.
The T3E is a 3D torus. SGI's Origin uses a hypercube. Sun uses whatever Cray's Business Unit [sun.com] did back in the day, before SGI sold the Starfire (renamed the Sun UltraEnterprise 10000) to Sun - plain SMP, I guess. The model works. That's not disputed.
The high-end market that Cray and Hitachi serve is fairly stagnant (growing slightly faster than inflation) at around 1 billion USD/year (IIRC). It doesn't grow 40% per year like standard PCs [dell.com], handhelds [palm.com], or the streaming video & porn [sgi.com] market. The pie is only so big, so IBM and Sun choose the bigger market; they're exploiting the internet. ASCI projects don't make much money. They're done for the press they receive. I've heard of companies exploiting Cray's extreme I/O bandwidth for file archivers feeding tape robots, but that's about it for general-purpose use. You wouldn't buy an SV1 or a Hitachi to run Apache, that's for sure.
As Durinia pointed out elsewhere in this discussion, Ford and other auto companies still use Cray vector machines, as do various research labs. Vector use isn't dead in the US. It's just not the centerpiece, I guess.
Re:The vector pr0n was missing ! (Score:1)
Based on your figures and gross assumptions (such as the system data path running at full processor speed), to achieve 40 TB/s memory bandwidth would require in the neighborhood of a 10 MB wide data path to memory.
I won't even comment on your perception that the US lacks vector computing.
Re:The vector pr0n was missing ! (Score:2)
They mainly build Alpha-based machines these days. And if you look at their vector machine, you'll notice that they brag about 1.8GFlops/(sec*cpu). For a vector CPU, that would have been fast five years ago. Look at the other numbers as well... It scales to 1.8TFlops/sec, versus 4TFlops/sec for the Hitachi, which gets there on far fewer CPUs - which again means you would usually get much closer to the theoretical peak on the Hitachi. 40GBytes/sec memory bandwidth, versus 40TBytes/sec...
A 1 GHz Athlon can sustain a little more than one GFlop/sec on a matrix multiply - compare that to your 1.8GFlops/sec ``vector'' CPU... Oh, and the Athlons are made in Dresden (Germany).
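For reference, a sustained-GFlops figure like that is just the 2*N^3 floating point operations of a matrix multiply divided by wall-clock time. A minimal, untuned C sketch of such a measurement (a real test would use a tuned, blocked BLAS routine, so this naive loop will land well below what the Athlon can actually sustain; the matrix size is arbitrary):

```c
#include <stdio.h>
#include <time.h>

#define N 512

/* Naive matrix multiply: C = A*B.  Only here to show how a sustained
   GFlops number is derived: 2*N^3 flops / elapsed seconds. */
static double A[N][N], B[N][N], C[N][N];

int main(void)
{
    int i, j, k;
    clock_t t0, t1;
    double secs, flops;

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            A[i][j] = 1.0;
            B[i][j] = 2.0;
            C[i][j] = 0.0;
        }

    t0 = clock();
    for (i = 0; i < N; i++)
        for (k = 0; k < N; k++)
            for (j = 0; j < N; j++)
                C[i][j] += A[i][k] * B[k][j];
    t1 = clock();

    secs  = (double)(t1 - t0) / CLOCKS_PER_SEC;
    flops = 2.0 * N * N * N;   /* one multiply + one add per inner iteration */
    printf("%.2f GFlops/sec sustained (naive)\n", flops / secs / 1e9);
    return 0;
}
```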
For real evidence, rather than marketing numbers, see the Top500:
1) ASCI White - 8192 Power3 CPUs (IBM)
2) ASCI Red - 9632 Intel CPUs (Intel)
3) ASCI Blue Pacific - 5808 604e CPUs (IBM)
4) ASCI Blue Mountain - 6144 MIPS CPUs (SGI)
5) ?? 1336 Power3 CPUs (IBM)
6) ?? 1104 Power3 CPUs (IBM)
7) SR8000-F1/112 112 Hitachi CPUs
...
10) T3E1200 1084 Alpha CPUs (Cray)
Notice that the Hitachi, which is somewhat faster than the Cray, has an order of magnitude *fewer* CPUs than the Cray. And the fastest Cray in use today is *NOT* a vector machine.
See www.top500.org for more info.
I'll grant you that vector computing is still alive in the US. I am not going to agree that it is ``well'', though.
But thanks for the information.
Re:SIMD (Score:1)
Re:The vector pr0n was missing ! (Score:2)
2) I find it amusing as well, actually. They (the gov't) do this for two reasons - they don't really "trust" overseas SC vendors to put SCs in government sites (i.e. they don't want to call a Japanese service guy to fix an SC at CIA HQ or Army labs, etc). Also, they want to have some say in the design of the machines - because they help fund the R&D for Cray, they can be really up front about what kind of machine they need, even before it's designed.
3) I think I pointed this out before - the top500 list is not an accurate measure of performance, by any means. I don't know how the application performance of the Hitachi and the Cray might line up, but the reason the Vector machines don't show up very well is that all of these other shared mem/superclustered-type machines are overrated. Companies are still buying Crays. Why? They're not stupid. (well, not all of them!) I'm guessing Ford bought 5 (I checked the numbers :) because their applications just ran fastest on that machine. I think an SV1 set a record recently for NASTRAN.
Re:Nobel laureates reviewing business proposals? (Score:2)
> Just because someone is a top notch physicist, for instance, doesn't mean that they know anything about good software or hardware design.
I'm glad to see someone else reflected on this passage in Stephen Adler's otherwise very cool review of this geekfest. (& I wish I had made the effort to go to last year's conference here in Portland.)
Adler describes the situation very well: take one guy who wants to play with big toys, works long hours for less-than-market-scale pay in exchange for the privilege, & knows he is close to burn-out; hold before him the temptation to sell it all for a chance to grab the brass ring . . . & watch him catch the start-up bug. This is what all of those VC suits are hoping for.
This -- & the hope that all of the stars in the geek's eyes will blind him to the fact that he's signing away most of his idea for a few million dollars that will melt away during the start-up phase.
Geoff
Re:Geek Porn (Score:2)
Supercomputing Isn't Dead (Score:4)
Though there is still a place in the world, in my mind, for mid-to-large SIMD systems (SGI/Crays, Starfires, RS/6000s, etc.), this conference and other events are showing that cluster supercomputing and widely distributed computing (à la SETI@Home, Distributed.Net, WebWorld, etc.) are also being taken seriously.
What drives advances in hardware is applications that can take advantage of them. Supercomputing is not dead because, on top of the usual uses (fluid dynamics modeling, codebreaking, etc.), the Internet and new algorithms in IR and AI are combining to push people toward information retrieval and understanding problems that genuinely require supercomputing resources.
The company I work for (www.webmind.com) is building a hybrid AI system for IR and understanding (among other things) which is optimized to run not on a vector supercomputer but on a cluster of independent servers (though a MIMD supercomputer would be nice). With commodity hardware we have managed to get over 100GB of RAM and 200GHz of aggregate CPU for less than $500,000 for our first prototype of a large installation. A 6-64way IBM RS/6000 starts at $420,000.
Missing from the write-up (and possibly from the conference) are the folks at Starbridge Systems who are working on a "Hypercomputer" which has a field-reprogrammable topology. If any supercomputer company that needs it deserves venture capital funding, it's this one. The idea of allowing an application to change the network topology of the processors to optimize for its own data representation is extremely powerful - it's the next "big thing" in both MIMD supercomputing and networks for cluster supercomputing, IMNSHO.
SETI@Home is a very cool project, but it would be nice to see more about the applications driving supercomputing at such conferences. Distributed.Net seems to get too little time since their client isn't as pretty as SETI's. Cluster supercomputing that doesn't run over the Internet is also being used for all sorts of cool stuff, and it would be especially interesting to hear more results about commodity clusters vs. proprietary large systems in areas like fluid dynamics modeling, where the proprietary systems usually rule.
Also, more of a focus on evolutionary computing, FPGAs in supercomputer design, and information retrieval and understanding applications would be nice.
But overall, a good writeup of what looks like it was a pretty interesting and refreshingly diverse (not just "big iron" focused) SC conference...
Sigh (Score:1)
Re:Isn't supercomputing dead yet? (Score:1)
I know what I want for Christmas! (Score:1)
Hot-Cha-Cha!
Re:Supercomputing Isn't Dead (Score:2)
Isn't this inherent in MPI? Is there something more to this that I'm missing? I'm not too well versed in super computing but I thought that with MPI you could write your programs to change the topologies to optimize things. Just wondering...
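For reference, the MPI calls I had in mind are the "virtual topology" ones - a minimal C sketch below (as I understand it, these only remap ranks logically onto the existing network; nothing gets rewired, which is what makes the Star Bridge idea different). The 2-D periodic grid is just an illustration:

```c
#include <mpi.h>
#include <stdio.h>

/* MPI "virtual topology" sketch: describe the communication pattern
   (here a 2-D periodic grid) and let the library renumber ranks to
   match the physical network.  This is a logical remapping only;
   unlike a reconfigurable machine, no wires actually move. */
int main(int argc, char **argv)
{
    int nprocs, rank, coords[2];
    int dims[2]    = {0, 0};   /* 0 = let MPI_Dims_create choose   */
    int periods[2] = {1, 1};   /* wrap around in both directions   */
    MPI_Comm grid;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Dims_create(nprocs, 2, dims);
    /* reorder = 1 allows the library to place grid neighbours on
       physically adjacent nodes, if it knows how. */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);

    MPI_Comm_rank(grid, &rank);
    MPI_Cart_coords(grid, rank, 2, coords);
    printf("rank %d -> grid position (%d,%d)\n", rank, coords[0], coords[1]);

    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}
```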
Wow, a conference worth seeing (Score:1)
Comdex and all its spin-offs have not been good in so many years I've lost count. I suspect there's a critical point at which the geek-to-marketing-slime ratio causes these things to suck.
-- RLJ
Network Challenge competition (Score:1)
One of the interesting things that went on at SC2000 [sc2000.org] was the Network Challenge competition [anl.gov], with cash prizes.
Its winners [anl.gov] have all shown something of interest to the public (disclaimer: our application is one of them, so I am not a neutral party).
Our demo included, we think, the first ever demonstration of interdomain operation of DiffServ-based IP QoS (more on this to come on the QBone web site [internet2.edu]), as well as an interesting application developed by Computer Music Center [stanford.edu] folks from Stanford. The application (audio teleportation) allows one to be teleported into a space with different acoustics (with CD-quality sound). During SC2000, two musicians gave a live performance from two different places, with their sound mixed in the air with the acoustics of a marble hall at Stanford.
itanium. (Score:1)
Distributed Computing (Score:1)
SETI@Home keeps popping up as a success story of distributed computing, but alas, it is above all a success of sci-fi literature and movies.
The project itself is largely a failure: it produced completely useless results during its first months of operation, it has wasted hundreds of CPU-years on inefficient code, and it has not yet yielded anything found to be useful for science. Meanwhile, other scientific projects with more merit and less Hollywood appeal don't get a chance to be realized on distributed platforms. I'd call that a disservice to science, not a success.
I submitted a story earlier about Distributed Science having signed the first contract for paid work (with both a paying company and suppliers involved), but it got rejected (like all six Distributed Science stories I submitted this year), which is doubly bitter considering that it is the only commercial distributed effort openly supporting Linux and FreeBSD, and it pays a coder to work on that end.
It makes me wonder if the Linux community, for which
Spending the time on Mac clients might be more promising and would free a coder to worship open source for naught instead of pay.
Sacrifices have to be made to fight "the man", right?
Error: Permission Denied (Score:2)
This site has been blocked by your administrative team
Content: Sex, pornography
If you have a business need to view this material, please contact the Information Technology Manager.
Re:Geek Porn (Score:1)
Did you check out the Argentinian physicist?
Wow! Smart and cute, too! Truly a (male) geek's fantasy!
Re:The vector pr0n was missing ! (Score:3)
Top vector machines in the world: (from www.top500.org again)
9) Hitachi - 917GFlops/sec - In Japan
12) Fujitsu - 886GFlops/sec - In the UK
13) Hitachi - 873GFlops/sec - In Japan
18) Hitachi - 691.3GFlops/sec - In Japan
24) Hitachi - 577GFlops/sec - In Japan
33) Fujitsu - 492GFlops/sec - In Japan
35) Fujitsu - 482GFlops/sec - In Japan
37) Hitachi - 449GFlops/sec - In Japan
59) Fujitsu - 319GFlops/sec - In Japan
63) Fujitsu - 296.1GFlops/sec - In Japan
65) Fujitsu - 286GFlops/sec - In France
67) NEC - 280GFlops/sec - In France
76) NEC - 244GFlops/sec - In Japan
77) NEC - 243GFlops/sec - In Australia
78) NEC - 243GFlops/sec - In Canada
79) NEC - 243GFlops/sec - In Japan
...
I found lots of Crays, but all T3Es, meaning Alpha-based. In the first 100 entries, I could not find a single vector machine located in the US.
Please point me to it if you can find it.
Re:Supercomputing Isn't Dead (Score:2)
-_Quinn
Vector computing in the US ? Hehe.. (Score:2)
Look at www.top500.org/lists/2000/11/trends.html and you will see that:
Of the 500 fastest supercomputers in the world today:
All vector based systems are of Japanese origin.
Even I had not expected that
Nobel laureates reviewing business proposals? (Score:1)
These guys may mean business, but I'm glad that they aren't doing business with my money. Just because someone is a top notch physicist, for instance, doesn't mean that they know anything about good software or hardware design.
Looks to me like these VCs are using this as a marketing ploy. Look at our business! Approved by a Nobel Laureate! Make Money Fast!
Apples and oranges (was Re:itanium.) (Score:1)
sgi _may_ have several gigaflops/node next year? puh-leez! g4s have been doing this since _last_ year!
Maybe, maybe not. The G4's greatly hyped AltiVec vector unit can only do 32-bit floating point arithmetic. "Real" supercomputer sites tend to care much more about 64-bit floating point, and the G4's peak on that is one op per clock cycle (i.e. 400 MHz G4 peaks at 400 MFLOPs 64-bit). On the other hand, the Itanium can issue two 64-bit FP instructions every cycle, and both can be multiply/adds -- you can get up to 4 FP ops every cycle (i.e. 800 MHz Itanium peaks at 3.2 GFLOPS 4-bit). There's also the question of memory bandwidth; the G4 uses the same crappy PC100 memory system that current IA32 boxes use: 800 MB/s, 300-350MB/s sustained.
Disclaimer: I work at a "real" supercomputer site that has prerelease Itanium systems, and my code was one of the ones shown in the SGI booth. I'm not allowed to talk about Itanium performance numbers yet, unfortunately.
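A rough sanity check of the theoretical peaks above, as a minimal C sketch - these are just the claimed clock rates and issue widths multiplied out, not measured numbers (which I can't share anyway):

```c
#include <stdio.h>

/* Theoretical 64-bit FP peak = clock rate * FP operations per cycle.
   Figures below are the claimed ones from the post above, not
   measurements. */
int main(void)
{
    double g4_hz        = 400e6;  /* 400 MHz G4                      */
    double g4_flops_clk = 1.0;    /* one 64-bit FP op per cycle      */
    double it_hz        = 800e6;  /* 800 MHz Itanium                 */
    double it_flops_clk = 4.0;    /* two FMA issues = 4 FP ops/cycle */

    printf("G4 peak:      %.1f GFlops/sec (64-bit)\n", g4_hz * g4_flops_clk / 1e9);
    printf("Itanium peak: %.1f GFlops/sec (64-bit)\n", it_hz * it_flops_clk / 1e9);
    /* 0.4 vs 3.2 GFlops/sec -- before the memory system (PC100 on the
       G4) has any say in what is actually sustained. */
    return 0;
}
```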
Re:pragmatic definition of supercomputing (Score:1)
More pointedly, with the current rise of ultra-networked commodity PCs being used for certain types of problems, should they be any less qualified as supercomputers because they cost a tenth of what Big Iron does? I see the price of the system as a common symptom of typical supercomputers, not a cause - and one for which a cure exists.
A parallel for the Brave Gnu World would be to ask what Enterprise-class software is. Ten years ago a good answer would've included "costs a bloody fortune and comes with a team of people to maintain it", simply because that was a common factor in all the software at the time. Recent changes in the software market have made cost secondary to function.
The vector pr0n was missing ! (Score:3)
One CPU:
256 registers
each register holds 64 double precision floats
330MHz
32 FLOPS per clock-cycle
One CPU will yield 9.something GFLOPS. It has 16GB of memory in the processor module. And they sell MP machines ranging from a few GFLOPS to 4TFLOPS. Both Fujitsu and Hitachi had stripped-down processor modules on display - they were *sweet*!
So, the 760 chipset has a 200MHz FSB? Well, Hitachi has 40 TeraBytes/second of memory bandwidth within the local machines, and some ~8 GigaBytes/second between machines.
Friends, vector computing is not dead - unfortunately only Japan produces vector machines, so only Asia and Europe can use them. US national laboratories are not allowed to buy them, not because of export regulations, but because of import regulations... Go figure.
Bloody typo! Re:Apples and oranges (Score:1)
you can get up to 4 FP ops every cycle (i.e. 800 MHz Itanium peaks at 3.2 GFLOPS 4-bit)
That should read "64-bit", not "4-bit". We talking about the Itanium and not Star Bridge's FPGA-based machine, after all. :)
Not Only The Japanese... (Score:1)
SGI just sold the Cray unit to Tera (http://www.tera.com/), which appears to have, wisely, taken on the Cray brand in toto.
SIMD (Score:1)
Re:The vector pr0n was missing ! (Score:2)
Re:The vector pr0n was missing ! (Score:3)
To Cray's defense, however, I'm going to have to argue a couple of points. First off, the fact that the SV1 is the "budget" vector box follow-on to the J90, while the SV2 (2002) is the follow-on to the T90 and the T3E, has to say something. The J90's architecture was defined in 1994. It's a bit long in the tooth. The SV1 might not achieve the absolute performance, but the GFLOPS/$$ aren't too shabby these days. Secondly, SGI's assimilation of CRI in the mid 90's didn't bode well for the vector roadmap. SGI was more interested in stripping the MPP parts from Cray (you would be too if everyone kept buying T3{D/E}s instead of Origin 2000s) and casting the rest off. Now that they've been cast off, I'm hoping everything gets righted again.
You've got to understand that Cray's not in the business of kluging together 4000 4-way SMP Linux boxes with 100BaseT and some duct tape. That cluster model accounts for the first 6 Top500 "supercomputers". #7 is the first 'honest' supercomputer, in that it was designed for that performance rather than just being a cluster of high-theoretical-peak boxes connected by bottlenecks. Cray @ #10 is the next one. Ironically, the T3E1200 at #10 was also originally released to the public in 1994. That's a while ago. For the historical record, that's pre-SGI. Let's see what happens post-SGI with the totally redesigned SV2, as well.
Linpack == Theoretical, not sustained. (Score:2)
So, basically, Linpack looks at raw performance assuming the processors/vector pipes aren't starved. Cray's non-vector box, the T3E (again, 1994), holds the highest sustained real-world benchmark, at just over 1 TFlops/sec. Don't preach theoretical. That's for academics. Most supercomputer applications don't consist of small loops of code that run everything out of registers. You're kidding yourself if you think they do.
hype and performance. (Score:1)
from apples website:
...Researchers at the Ohio Aerospace Institute in Brookpark did exactly that, linking 15 Apple Macintosh G4 personal computers to tackle chores once the realm of Cray supercomputers ... Overall performance beats a commercial cluster of Silicon Graphics computers costing twice as much.
so the prerelease itaniums promise to do better in march? apple/moto aren't sitting still either. faster g4s on faster buses are due in january. but i'd love to see performance numbers and an explanation when you can post them. post at appleinsider [appleinsider.com]. thanks
more fun stuff at apple.com/g4 [apple.com]
Re:The vector pr0n was missing ! (Score:2)
1) Yes, the GFLOPS per "CPU" figure is lower for the SV1. That is because they're made to run in highly-coupled groups of 4 called an "MSP". That's why they prefer to say it's really 7.2 GFLOPS as opposed to 1.8. It's not the same as just having 4 CPUs in parallel - it's really more like 4 CPUs acting as 1 CPU. Let's also remember that this is Cray's "budget" line. Did you look at the cost of the Hitachi? Ouch.
2) Where the heck do you get 40TB/sec of memory bandwidth?!? Each node-to-node link in the Hitachi is 1 to 1.2 GB/s each way. See their site [http].
3) Calling the Top500 list "real evidence rather than marketing numbers" is a big joke. One of the biggest discussions at the conference was about a new benchmark suite being created to rank the Top500 list. Why? Because Linpack is really a peak-speed measurement. These huge parallel ASCI machines run it at 30-50% of peak. On real applications, they'd be damn lucky to get 1% of peak, if that. In fact, a Cray T3E still holds the world record for sustained performance on a real application. Its interconnect is that good. I'm sure Hitachi is behind the new benchmarks as well (Cray is)... anything which emphasizes real application speed will make any vector machine look MUCH better.
4) Cray does not "mainly sell Alpha-based machines" these days. They sell them once in a while, but if you're counting by number of systems shipped, they sell a lot more SV1s. I think Ford just bought 2 or 3. I think they've maybe sold 4 or 5 T3Es all year.
5) I don't have any numbers on this, but I really, really don't think the Hitachi has an MTTI of over a year.
Lastly, did you see who won the "Supercomputing product of the year" award at the conference? The SV1 did. For the second straight year.
Now, I'm not knocking the Hitachi - it's a good machine - I'm just pointing out that vector computing in the US is doing just fine, thank you. And it will be doing WAY better when the SV2 is out in 2002...it's going to be a doozie!
Re:Geek Porn (Score:2)
Re:The vector pr0n was missing ! (Score:1)
Geek Porn (Score:3)
Sorry... Couldn't help it...
Re:Ho hum, that looks boring. (Score:1)
Dude, did you even look at the pictures?
I didn't even see a woman in any of the pictures...
Tho, there were some scantily clad processors...
Disappointing (Score:1)
Especially odd to me was Compaq, because they are following Digital's (RIP) practice of letting just about anybody test drive [compaq.com] their boxes over the net, so they must be pretty confident that their security is tight enough to prevent at least bulk mischief.
Re:Geek Porn (Score:2)
I don't know about you, but I need some "cooling down" just looking at this geek "porn"
Open Source Panel (Score:4)
Let me preface this by saying that I like Linux, I run it on all my desktop machines, and have admin'ed a cluster of 300 Linux boxen and loved it. However ...
I was not actually at SC this year, but I have attended the two previous ones and have actually had a chance to speak with Todd Needham and other people from MS Research. The legal and marketing departments there may be full of idiots, but never think that someone is automatically an idiot just because they're an MS employee; they may have misplaced loyalties, but they are not stupid. I have to admit that Todd Needham from MS had one or two good points in the panel discussion. Open Source is not the magic "pixie dust" that automatically fixes everything (*snicker* Todd quoted jwz [jwz.org]). As someone who's worked in a business environment, I can say with confidence that some level of hierarchical organization is necessary. The Cathedral/Bazaar analogy may be good, but it is not a black-or-white issue; projects can fall somewhere in between, and those that strike the right balance will be successful.
With respect to security, many eyes may make all bugs shallow, provided some of those eyes are willing to partake in a formal security analysis. A compiler researcher at OSDI this year revealed that in the Linux kernel there are ~80 places where interrupts are disabled and never re-enabled. How did he discover this? By modifying a compiler to be a little "smarter": it reads a formal specification tailored to the application you're compiling and performs more rigorous static analysis on your code. It's pretty interesting reading. Many eyes are not a substitute for formal analysis, but something to augment it; conversely, formal analysis of software is still a young field, and many eyes are still a necessity for improving robustness in large projects.
The bottom line: seek the middle ground. Just because "their" way doesn't work doesn't mean everything about it was wrong. And yes, I'm ready to be moderated down as "flamebait". :)
-jdm
Re:Ho hum, that looks boring. (Score:2)
Re:Geek Porn (Score:1)
~luge
Re:Ho hum, that looks boring. (Score:1)
Check that, I found one. Kinda like the "Where's Waldo?" of the techgeek convention... "Where's Woman?"...
(Hint: Look on page 4)
Yes, I know I'm being silly. It's that kind of day.
Re:Isn't supercomputing dead yet? (Score:5)
Supercomputer applications can be ordered by the inherent parallelism of the underlying problem. Some problems are inherently parallel (e.g. SETI, HEP, etc.); some require communication (weather, NP, etc.). Wherever communication is needed, old-fashioned "supercomputers" - shared-memory SIMD, MIMD, ccNUMA etc. machines - will be superior to clusters, i.e. distributed-memory MIMD machines.
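A minimal sketch of the difference, using MPI and a 1-D decomposition purely as an illustration (the sizes and update rule are made up): an embarrassingly parallel job just churns through its own chunk, while a stencil-style code has to trade boundary values with its neighbours every step, so interconnect latency and bandwidth start to matter:

```c
#include <mpi.h>
#include <stdio.h>

#define LOCAL 1000   /* points owned by each rank (illustrative size) */
#define STEPS 100

/* 1-D heat-equation-style update: each rank owns a slice of the domain
   plus two ghost cells (u[0] and u[LOCAL+1]) holding its neighbours'
   boundary values.  The halo exchange every step is the communication
   that separates this from an embarrassingly parallel workload. */
int main(int argc, char **argv)
{
    double u[LOCAL + 2], unew[LOCAL + 2];
    int rank, nprocs, left, right, step, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    for (i = 0; i < LOCAL + 2; i++)
        u[i] = (rank == 0 && i == 1) ? 1.0 : 0.0;   /* a single hot spot */

    for (step = 0; step < STEPS; step++) {
        /* Halo exchange: send my boundary points, receive my neighbours'. */
        MPI_Sendrecv(&u[LOCAL], 1, MPI_DOUBLE, right, 0,
                     &u[0],     1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[1],         1, MPI_DOUBLE, left,  1,
                     &u[LOCAL + 1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        for (i = 1; i <= LOCAL; i++)                /* local compute */
            unew[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);
        for (i = 1; i <= LOCAL; i++)
            u[i] = unew[i];
    }

    if (rank == 0)
        printf("done after %d steps on %d ranks\n", STEPS, nprocs);
    MPI_Finalize();
    return 0;
}
```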
Re:Geek Porn (Score:1)
Re:Isn't supercomputing dead yet? (Score:1)
Supercomputing is clustering. Beowulf is supercomputing. At least that's why there was some representation of Scyld Beowulf at the Supercomputing show.
Distributed licenses have nothing to do with parallel apps and scientific computing.
I wish people would try not to make themselves appear ignorant, by posting only about what they actually know.
That's sure not to happen though...
I have given up on trying to get an intellectual discussion of issues on /.
There is too much evangelism and preaching going on to see any real arguments backed by verifiable claims.
Too bad.
9-Megapixel IBM Monitor pix (Score:1)
http://ssadler.phy.bnl.gov/adler/sc2k/sc2kpg2.htm
Milinar
why doesn't anyone mention supercomputing? (Score:2)
Re:The vector pr0n was missing ! (Score:2)
The funny thing is that the SV1 line is a continuation of the J90, Cray's "budget" line (these start at $700k USD). The next no-holds-barred vector machine out of Cray Inc. will be the SV2 due out in 2002.
I assure you, Vector computing in the US is not dead.
/. to the next level! (Score:3)
I can just see it now, sadist troll cults that enjoy taking brutal beatings, bitter kung-fu flame wars and noble /. vigilante enforcers!
Imagine being a newbie in that type of system? "Why is everyone punching me?"