The Potential of Science With the Cell Processor
prostoalex writes "High Performance Computing Newswire is running an article on a paper by computer scientists at the U.S. Department of Energy's Lawrence Berkeley National Laboratory. They have evaluated the processor's performance in running several scientific application kernels, then compared this performance against other processor architectures. The full paper is available from the Computer Science department at Berkeley."
Cell + Linux = success (Score:3, Funny)
Linux is the only free OS. Yes, the BSD licenses may appear more free, but as they have no restrictions, they are actually less free than the GPL. You see, restricting the end user more actually makes them more free than not putting restrictions on them. You must be a dumb luser for not understanding this.
And you obviously don't have a real job. A real job involves being a student or professional academic. You see, academics are the ones who know all about productivity - if you work for a commercial organisation you obviously do not know anything about computers. Usability is stupid. What's wrong with the command line? If you can't use the command line then you shouldn't be using a computer. vi should be the standard word processor - you are such a luser if you want to use Word. Installing software should have to involve recompiling the kernel of the OS. If you don't know how to do this, you are a stupid luser who should RTFM. Or go to a Linux IRC channel or newsgroup. After all, they are soooo friendly. If you don't know how the latest 2.6 kernel scheduling algorithm works then they will tell you to stop wasting their time, but they really are quite supportive.
Oh, and M$ is just as evil as Apple. Take LookOUT for instance. You could just as easily use Eudora. Who needs groupware anyway? A simple email client should be all we use (that's all we use as academics; why can't businesses be any different).
And trend setters - Linux is the trend setter. It may appear KDE is a ripoff of XP, but that's because M$ stole the KDE code. We all know they have GPL'ed code hidden in there somewhere (but not in the things that don't work; only the things that work could possibly have GPL'ed code in them).
And Apple is the suxor because they charge people for their product. We all know that it's a much better business model to give all your products away for free. If you charge for anything, then you are allied with M$ and will burn in hell.
What about the compiler? (Score:2, Insightful)
What about the programmer? (Score:5, Insightful)
But not to programmers who do science.
"What gcc -O3 does is way more importent then what an assembly wizard can do for most projects."
Not an insurmountable problem.
Re:What about the programmer? (Score:3, Insightful)
the last thing 85% of these people want to do. They don't want to be computing experts to do their science/research.
When you're talking about reusable modules like an FFT or matrix multiplication, many scientists doing simulations would love to have a hand-optimized FFT or matrix module to plug in as a simulation component. Even if they don't know a drop of assembly themselves, having the optimized module available can make a large difference.
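For instance, here is a minimal sketch of what "plugging in" such a module looks like, using the real-world FFTW library (the array size and test signal below are illustrative):

/* Sketch: using a pre-optimized FFT library (FFTW) from simulation code,
   without the scientist writing a line of assembly.
   Compile with e.g.: gcc fft_demo.c -lfftw3 -lm */
#include <complex.h>
#include <fftw3.h>

int main(void)
{
    const int N = 1024;                   /* illustrative problem size */
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

    for (int i = 0; i < N; i++)
        in[i] = i % 16;                   /* illustrative test signal */

    /* The planner picks a machine-tuned strategy at runtime. */
    fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(p);                      /* the hand-optimized kernel */

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}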
Re:What about the compiler? (Score:5, Insightful)
Re:What about the compiler? (Score:3, Insightful)
Re:What about the compiler? (Score:5, Insightful)
Re:What about the compiler? (Score:1)
Re:What about the compiler? (Score:1, Insightful)
And I know from uni that many profs WILL hand-optimize code for complex, much-used algorithms. Then again, some will just use MATLAB.
Re:What about the compiler? (Score:2)
Well, if they hire the typical contractor to do the work, $10 million goes towards the computer, $90 million goes towards a coffee service, and $300 million goes towards per diem.
Re:What about the compiler? (Score:4, Informative)
It makes sense for a game developer - and even more for an embedded developer. You spend the time to optimize once, and then the code is run on hundreds of thousands or millions of systems, over years. The time you spend can effectively be amortized over all those customers.
For scientific software the calculation generally changes. You write code, and that code is typically used in one single place (the lab where the code was written), and only run a comparatively few times, indeed sometimes only once.
For a game developer to spend three months extra to shave a few seconds off one run of a piece of code makes perfect sense. For an embedded developer, using a couple of months' worth of development cost to be able to use a slower, cheaper chip, shaving a dollar off the production of perhaps tens of millions of gadgets, makes sense.
For a graduate student (cheap as they are in the funny-mirror economics of science) to spend three months to make one single run of a piece of software run a few hours faster does not make sense at all.
In fact, disregarding the inherent coolness factor of custom hardware, in most situations it just doesn't pay to make custom stuff for science when you can just run the off-the-shelf version a little longer to get the same result. Indeed, I have not infrequently heard about labs spending the time and effort to make custom stuff, only to find that by the time they were done, off-the-shelf hardware had already caught up.
Re:What about the compiler? (Score:2)
Wrong. A programmer hour is much more valuable than a machine hour.
And this hasn't been lost on scientists and engineers--hence the popularity of software like MATLAB.
Re:What about the compiler? (Score:3, Informative)
Re:What about the compiler? (Score:3, Informative)
You forgot to take into account the team of scientists waiting for the machine to produce a result.
Re:What about the compiler? (Score:2)
Re:What about the compiler? (Score:2)
Re:What about the compiler? (Score:2)
You know the saying, "You can write Fortran in any language?" Scientists judge a language by how easy it is to write Fortran in it. That's why C is their second-favorite language.
Re:What about the compiler? (Score:3, Interesting)
Actually, bullshit. We're talking scientific applications here, and it's not uncommon that programs written to run on supercomputers *are* optimized by an assembly wizard to squeeze every cycle out of them.
Re:What about the compiler? (Score:1)
Re:What about the compiler? (Score:2)
Say what? Um, that type of optimization doesn't exist, unless the programmer is really untalented. Most of the big opportunities should stand out like a sore thumb on a trace. Once you know what's taking all the time in the code, you can look at the way it's put together to catch the low-hanging fruit. Generally, the first 10% of the work gets you 90% of the way there. Then there's
Re:What about the compiler? (Score:4, Informative)
Actually, it's not bullshit. Simple C intrinsics code is the way to go to program the Cell... there's just no need for hand-optimized asm. Intrinsics have a poor rep on x86 because SSE sucks: 8 registers, a source operand must be overwritten on each instruction, no MADD, MSUB, etc.
But Cell has 128 registers and a full set of vector instructions. There's no danger of stack spills. As long as the compiler doesn't freak out about aliasing (which is easy), and it can inline everything, and you present it enough independent execution streams at once... the SPE compiler writes really, really nice code.
The thing that still does need to be hand-optimized is the memory transfer. DMA can be overlapped with execution, but it has to be done explicitly. In fact, algorithms typically need to be designed from the start so that accesses are predictable and coherent and fit within ~180 KB. (Generally, someone seeking performance would do this step long before asm code on any platform anyway...)
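To make that concrete, here is a minimal sketch of what SPE intrinsics code looks like (assuming IBM's SPU C intrinsics from spu_intrinsics.h, compiled with spu-gcc; the function and its arguments are illustrative):

/* Sketch: a multiply-accumulate kernel written with SPU intrinsics
   instead of hand-written asm. The compiler does the scheduling and
   register allocation across the 128 vector registers. */
#include <spu_intrinsics.h>

/* y[i] = a[i] * b[i] + y[i], on data already DMA'd into local store.
   n is a count of floats and must be a multiple of 4 (one 16-byte vector). */
void vmadd(vector float *a, vector float *b, vector float *y, int n)
{
    int nvec = n / 4;                       /* 4 floats per 128-bit vector */
    for (int i = 0; i < nvec; i++)
        y[i] = spu_madd(a[i], b[i], y[i]);  /* one fused multiply-add each */
}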
Re:What about the compiler? (Score:1)
Re:What about the compiler? (Score:2, Informative)
Scientific users code to the bleeding edge. You give them hardware that blows their hair back and they will figure out how to use it. You give them crappy painful hardware (Maspar, CM*) that is hard to optimize for, then they probably won't use it.
Assembly language optimization is not a big deal. Right now the biggest thing bugging me is that I have to rewrite a core portion of a code to use SSE, since SSE is so limited for integer support. As this is a small amount of work, and th
Re:What about the compiler? (Score:1)
Re:What about the compiler? (Score:2)
How much do these new machines cost?
How much does a competent programmer cost?
Which one is the best option?
--jeffk++
Re:What about the compiler? (Score:5, Insightful)
"According to the authors, the current implementation of Cell is most often noted for its extremely high performance single-precision (32-bit) floating performance, but the majority of scientific applications require double precision (64-bit). Although Cell's peak double precision performance is still impressive relative to its commodity peers (eight SPEs at 3.2GHz = 14.6 Gflop/s), the group quantified how modest hardware changes, which they named Cell+, could improve double precision performance."
So the Cell is great because there's going to be millions of them sold in PS3's so they'll be cheap. But it's only really great if a new custom variant is built. Sounds kind of contradictory.
Re:What about the compiler? (Score:3, Informative)
So the Cell is great because there's going to be millions of them sold in PS3's so they'll be cheap. But it's only really great if a new custom variant is built. Sounds kind of contradictory.
Did you not read the last bit?
On average, Cell is eight times faster and at least eight times more power efficient than current Opteron and Itanium processors, despite the fact that Cell's peak double precision performance is fourteen times slower than its peak single precision performance. If Cell were to include
Re:What about the compiler? (Score:2, Interesting)
>custom variant is built. Sounds kind of contradictory.
No, the Cell is great because, as the pdf shows, it has an incredible Gflops/Power ratio, even in its current configuration.
For example, here are the Gflops (double precision) obtained in 2d FFT:
        Cell+   Cell   X1E    AMD64   IA64
1K^2    15.9    6.6    6.99   1.19    0.52
2K^2    26.5    6.7    7.10
Re:What about the compiler? (Score:1)
The HPC world is substantially different from either gaming or "normal" application programming. The strong draw of the cell is that it is a production core with characteristics that are important to High Performance Computing, particularly power dissipation per flop. While conventional applications target getting th
Re:What about the compiler? (Score:2)
Re:What about the compiler? (Score:2)
Cell had a specific problem domain to address during the design of the initial product. If Cell really is all that, there will be future revisions. These researchers are pointing out what is necessary to make Cell more viable to a broader base of users. They are putting themselves at the head of the line.
They have evaluated the existing Cell, added their guesswork as to what could be done with modest changes and quantified the result relative to
Re:What about the compiler? (Score:5, Interesting)
Irrelevant to people doing serious high-performance computing, not hardly.
I am currently doing embedded audio digital signal processing. On one of the algorithms I am working on, even with maximum optimization for speed, the C/C++ compiler generated about 12 instructions per data point, whereas I, an experienced assembly language programmer (although having no previous experience with this particular processor), did it in 4 instructions per point. That's a factor of 3 speedup for that algorithm. Considering that we are still running at high CPU utilization (pushing 90%), and taking into account the fact that we can't go to a faster processor because we can't handle the additional heat dissipation in this system, I'll take it.
I have another algorithm in this system. Written in C, it is taking about 13% of my timeline. I am seriously considering an assembly language rewrite, to see if I can improve that. The C implementation as it stands is correct, straightforward, and clean, but the compiler can only do so much.
In a previous incarnation, I was doing real-time video image processing on a TI 320C80. We were typically processing 256x256 frames at 60 Hz. That's a little under four million pixels per second. The C compiler for that beast was HOPELESS as far as generating optimal code for the image processing kernels. It was hand-tuned assembly language or nothing. (And yes, that experience was absolutely priceless when I landed on my current job.)
Re:What about the compiler? (Score:4, Informative)
Some of my best VU routines, which I had spent a couple of weeks hand-optimizing, I rewrote with SPE intrinsics in an afternoon. After some initial time figuring out exactly how the compiler likes to see things, it was a total breeze. My VU code ran in 700 usec while my SPE code ran in 30 usec (@ ~1.3 IPC! Good work, compiler).
The real worry now is becoming DMA-bound. For example, assume you're running all 8 SPEs full-bore and you write as much data as you read. At 25.6 GB/s, you get 3.2 GB/s per SPE, i.e. 1.6 GB/s in each direction (assuming perfect bus utilization), so at 3.2 GHz that's 0.5 bytes/cycle. So for a 16-byte register, you need to execute at least 32 instructions per transfer or you're DMA-bound!
Food for thought.
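For reference, the explicit overlap mentioned above looks roughly like this (a double-buffering sketch assuming the MFC intrinsics from spu_mfcio.h; the chunk size and the process() routine are illustrative):

/* Sketch: double-buffered DMA on one SPE, so the next chunk streams into
   local store while the current chunk is being computed on. */
#include <spu_mfcio.h>

#define CHUNK 4096                          /* illustrative bytes per DMA */
static volatile char buf[2][CHUNK] __attribute__((aligned(128)));

extern void process(volatile char *data, int nbytes);  /* your kernel */

void stream(unsigned long long ea, int nchunks)
{
    int cur = 0;
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);         /* prime first buffer */

    for (int i = 1; i < nchunks; i++) {
        int nxt = cur ^ 1;
        mfc_get(buf[nxt], ea + (unsigned long long)i * CHUNK,
                CHUNK, nxt, 0, 0);                   /* prefetch next chunk */

        mfc_write_tag_mask(1 << cur);                /* wait only for cur */
        mfc_read_tag_status_all();
        process(buf[cur], CHUNK);                    /* compute overlaps DMA */

        cur = nxt;
    }
    mfc_write_tag_mask(1 << cur);                    /* drain the last one */
    mfc_read_tag_status_all();
    process(buf[cur], CHUNK);
}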
No, this is why we have subroutine libraries (Score:5, Interesting)
The key observation is that certain core routines:
(a) lend themselves extremely well to optimisation
(b) lend themselves extremely well to incorporation in subroutine libraries
(c) tend to isolate the most compute-intensive low-level operations used in scientific computation
SGEMM
If you read the article, you will find (among others) a reference to an operation called "SGEMM". This stands for Single precision General Matrix Multiplication. These are the sort of routines that make up the BLAS library (Basic Linear Algebra Subprograms; see e.g. http://www.netlib.org/blas/ [netlib.org]). High performance computation typically starts with creating optimised implementations of the BLAS routines (if necessary handcoded at assembler level), sparse-matrix equivalents of them, Fast Fourier routines, and the LAPACK library.
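To illustrate, this is roughly all the scientist's own code has to say to invoke such a routine (a sketch against the standard CBLAS interface; the matrix dimensions are illustrative):

/* Sketch: calling the (possibly assembler-tuned) SGEMM from a BLAS
   library, computing C = alpha*A*B + beta*C in single precision.
   Link against any BLAS implementation, e.g.: gcc demo.c -lcblas */
#include <cblas.h>

void matmul(int n, const float *A, const float *B, float *C)
{
    /* Row-major n x n multiply; the library does the fast part. */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,
                1.0f, A, n,    /* alpha, A, lda */
                      B, n,    /* B, ldb */
                0.0f,  C, n);  /* beta, C, ldc */
}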
ATLAS
There is a general movement away from optimised assembly language coding for the BLAS, as embodied in the ATLAS software package (Automatically Tuned Linear Algebra Software; see e.g. http://math-atlas.sourceforge.net/ [sourceforge.net]). The ATLAS package provides the BLAS routines but produces fairly optimal code on any machine using nothing but ordinary compilers. How? If you run the makefile for the ATLAS package, it may take about 12 hours to compile (depending on your computer, of course; this is a typical number for a PC). In this time the makefile simply runs through multiple combinations of compiler switches for the BLAS routines and runs test suites for all its routines for varying problem sizes. Then it picks the best combination of switches for each routine and each problem size for the machine architecture on which it's being run. In particular, it takes account of the size of the caches. That's why it produces much faster subroutine libraries than those produced by simply compiling e.g. the reference BLAS routines with an -O3 optimisation switch thrown in.
Specially tuned versus automatic?: MATLAB
The question is of course: who wins? Specially tuned code or automatic optimisation? This can be illustrated with the example of the well-known MATLAB package. Perhaps you have used MATLAB on PCs and wondered why its matrix and vector operations are so fast? That's because for Intel and AMD processors it uses a special vendor-optimised subroutine library (see http://www.mathworks.com/access/helpdesk/help/techdoc/rn/r14sp1_v7_0_1_math.html [mathworks.com]). For Sun machines, it uses Sun's optimised subroutine library. For other processors (for which there are no optimised libraries) MATLAB uses the ATLAS routines. Despite the great progress and portability that the ATLAS library provides, carefully optimised libraries can still beat it (see the Intel Math Kernel Library at http://www.intel.com/cd/software/products/asmo-na/eng/266858.htm [intel.com]).
Summary
In summary:
- large tracts of scientific computation depend on optimised subroutine libraries;
- hand-crafted assembly-language optimisation can still outperform machine-optimised code.
Therefore the objections that the hand-crafted routines described in the article distort the comparison or are not representative of real-world performance are invalid.
However ... it's so expensive and difficult that you only ever want to do it if you absolutely must. For scientific computation this typically means that you only consider handcrafting "inner loop primitives" such as the BLAS routines, FFT's, SPARSEPACK routines etc. for this treatment, and that you just don't attempt to do that yourself.
Re:What about the compiler? (Score:2)
Re:What about the compiler? (Score:2)
Sometimes people DO use published research results to construct compilers.
-O3 is more important when the optimization is just a means to an end - however, when optimization is an end in itself, it's easy to see the value of disciplined hand tuning.
Doesn't it easily scale up? (Score:2, Interesting)
I'm not entirely sure of this, can someone corroborate/disprove?
Re:Doesn't it easily scale up? (Score:2)
Hmm (Score:2)
'designed', nothing (Score:2)
Re:Hmm (Score:2)
Re:Doesn't it easily scale up? (Score:2)
The code will be the tricky bit.
Not likely to be low cost CPUs (Score:2)
Moreover, the scientific community is very likely to push for the Cell+ architecture, and I'm sure IBM would be more than happy to help... for a massive price.
So, when building an HPC system, you're likely to work around the best architecture (the more expensive cell+), and
WTF? (Score:5, Insightful)
If IBM were the maker of the chip, they would most certainly not sell them at a loss. Why should they? Sony might sell the console at a loss and recoup it through game sales, but IBM has no way to recoup any losses.
Then again, IBM is in a partnership with Sony and Toshiba, so the chip is probably owned by this partnership and Sony will just be making the chips it needs itself.
So any idea that IBM is selling Cells at a loss is insane.
Then again, the cost of the PS3 is mostly claimed to be in the Blu-ray drive tech. That's not going to be of much interest to a science setup, is it? Even if they want to use a Blu-ray drive, they need just one in a 1000-Cell rig. Not going to break the bank.
No, the Cell will be cheap because when you run an order for millions of identical CPUs, prices drop rapidly. There might even be a very real market for cheap Cells. Regular CPUs always have lesser-quality versions. That's not a problem for an Intel or AMD, who just badge them Celeron or whatever, but you can't do that with a console processor. All Cell processors destined for the PS3 must be of similar spec.
So what to do with a Cell chip that has one of the cores defective? Throw it away, OR rebadge it and sell it for blade servers? That is where Celerons come from (defective cache).
We already know that the Cell processor is going to be sold for other purposes than the PS3. IBM has a line of blade servers coming up that will use the Cell.
No, I am afraid that it will be perfectly possible to buy Cells, and they will be sold at a profit just like any other CPU. Nothing special about it. They will, however, benefit greatly from the fact that they already have a large customer lined up. Regular CPUs need to recover their costs as quickly as possible because their success is uncertain. This is why regular top-end CPUs are so fucking expensive. But the Cell already has an order for millions, meaning the costs can be spread out in advance over all those units.
Re:WTF? (Score:4, Insightful)
Use it. Seriously, that's why there's the central core + 7 of them, not 8. One is actually a spare, so unless the chip is flawed in the central logic or in two separate cores, it's still good. Good way to keep the yields up...
Re:WTF? (Score:2)
Actually, the Cell has 8 SPUs on die. It only utilizes seven, specifically to handle the possibility of defective units. They throw the extra SPU on there to increase yields.
Re:WTF? (Score:2)
Re:Not likely to be low cost CPUs (Score:2)
Re:Not likely to be low cost CPUs (Score:2)
Re:Not likely to be low cost CPUs (Score:2)
Well, I'm just saying, I wouldn't bet money on IBM producing the Cell in high enough volume to provide ultra-low pricing as in the PowerPC or, obviously, x86 markets. History has shown time and time again that innovation is not what is important; dominance is. Alpha had a superb chip but was in no way marketable. Apple has always had a better design in computer hardware, but will likely nev
Lattice QCD people: (Score:2)
Re:Lattice QCD people: (Score:1)
Re:Lattice QCD people: (Score:2)
Not the real issue (Score:1, Offtopic)
When can we start Folding with it? (Score:1)
Re:When can we start Folding with it? (Score:1)
Re:When can we start Folding with it? (Score:1)
The ball is in the hands of developers. (Score:2, Insightful)
Re:The ball is in the hands of developers. (Score:4, Informative)
Indeed, most scientists. They also know very little about profiling, but since the simulation is used only maybe a hundred times, that hardly matters.
The cases we're talking about here are where thousands of processors grind the same program (or evolved versions of it) for years as the terabytes of data roll in. Such is the situation in weather modelling, high energy physics and several other disciplines. That's not a "program" in the usual sense, but rather a "research program" occupying a whole department, including everyone from "domain-knowledge" scientists down to some very long-haired programmers who will not shy away from a bit of ASM. If you're a developer good at optimization and parallelism, there might just be a job for you.
Re:The ball is in the hands of developers. (Score:2)
Femlab? (Score:2)
Re:The ball is in the hands of developers. (Score:2)
I do a bit of HPC. I wouldn't touch Matlab with a ten foot pole. Of course, I wouldn't touch Matlab with a ten foot pole for non-HPC stuff either.
Ease of Programming? (Score:3, Interesting)
The Cell processor may be faster, but how easy is it to implement an optimizing development system that eliminates the need to hand-optimize the code? Isn't programming productivity just as important as performance? I suspect that the Cell's design is not as elegant (from a programmer's POV) as it could have been, only because it was not designed with an elegant software model in mind. I don't think it is a good idea to design a software model around a CPU. It is much wiser to design the CPU around an established model. In this vein, I don't see the Cell as a truly revolutionary processor because, like every other processor in existence, it is optimized for the algorithmic software model. A truly innovative design would have embraced a non-algorithmic, reactive, synchronous model, thereby killing two birds with one stone: solving the current software reliability crisis while leaving other processors in the dust in terms of performance. One man's opinion.
Re:Ease of Programming? (Score:2)
It's possible that this is the case; however, IBM is actively working on compiler technology [ibm.com] to abstract the complexity of the unshared memory architecture from developers whose goal isn't to squeeze the processor:
When compiling SPE code, the compiler identifies data references in system memory that have not been optimized by using ex
Re:Ease of Programming? (Score:2)
That's why Fortran really doesn't matter here (Score:2)
Re:Ease of Programming? (Score:2)
When you're talking about scientific computations which can sometimes take a month or more to do one run, then suddenly it can become worth it to sacrifice a bit of programmer time if it can make a substantial increase in performance. If you can do a run in a week instead of a month, then that makes a huge difference in what you can investigate. Often it's not a question of just buying more machines because sometimes you need to know the ans
Re:Ease of Programming? (Score:2)
Not much payoff optimizing development systems for slow hardware. Cray tout the X1E as offering "Unrivalled Vector Processing and Scalability for Extreme Performance" [cray.com]. These guys smoked one for dinner, woke up the next day, rebuilt their code from the ground up a completely different way and smoked it again for lunch.
It took them a month to figure out how to do that, on maybe $3K worth
Re:Ease of Programming? (Score:2)
Hunh? From an (assembler) programmer's POV we have something close to AltiVec/VMX vs. x86 and EPIC - and you ask which is easier?
And why Apple going Intel was so sad (Score:1, Insightful)
The tacked-on FPU, MMX, and SSE SIMD stuff, whilst welcome, still leaves few registers for program use.
The PowerPC, on the other hand, has a nice collection of regs, and as good if not better SIMD - the Cell goes a big step further.
More regs = more variables in the CPU = higher bandwidth of calculation, be they regular regs or SIMD regs.
That plus the way it handles cache
Could be a pig to program without the right kind o
bang, buck, effort (Score:4, Informative)
One thing Cell has that previous processors didn't: the PS3 tie-in and IBM's backing may convince people that it's going to be around for a while. Most previous efforts suffered from the problem that nobody wanted to invest time in adapting their code to an architecture that was not going to be around in a few years anyway.
single threaded vs multithreaded (Score:1)
Re:single threaded vs multithreaded (Score:2)
2) Most simulations are highly parallel. There are lots of cases where you can simulate many parts of the system simultaneously, and only synchronize state at certain points.
Ran simulations, not code (Score:5, Insightful)
By the end of the article, I was looking for their idea of a hypothetical best-case pony.
Re:Ran simulations, not code (Score:2)
Simulation makes computer architecture research possible because researchers don't have access to prototype hardware. If we insisted that all experiments run on real hardware, the only people who could possibly do research would be Intel, AMD, and IBM, because they have access to the fab and mask
Re:Ran simulations, not code (Score:3, Insightful)
Researchers are very good at simulating things that have little or nothing to do with reality. It all looks good in theory according to their formulas, but they fail to take something into account. As an example, take the defunct Elbrus E2K computer chip. It was supposed to be an aw
Re:Ran simulations, not code (Score:2)
1) The Cell is actually a pretty simple architecture. Once memory is transferred to SPE local store, performance is deterministic within a fraction of a percent.
Re:Ran simulations, not code (Score:2)
Re:Ran simulations, not code (Score:2)
The RAM is XDR. The IOIF (to talk to other Cells) connection is 2 FlexIO ports. The bus itself (called the EIB [ibm.com]) is something like 300 GB/s. I agree that peak is never achievable, but it should be possible to get around 18 GB/s or so.
Re:Ran simulations, not code (Score:2)
Re:Ran simulations, not code (Score:2)
The LBNL guys started with a simple model. Their model generally predicted performance within 2% of what the full emulator said. It
Re:Ran simulations, not code (Score:2)
Re:Ran simulations, not code (Score:2)
- Interleaving
- NUMA (one memory controller per CPU)
- Wider memory bus
Re:Ran simulations, not code (Score:2)
Bye egghat.
14 times slower vs 8 times faster (Score:2)
So, that means that the Cell in its current design is 14/8 = 1.75x slower for double precision than an Opteron/Itanium is for single precision. I searched around but couldn't find a good answer on what is the ratio between an Opteron/Itanium s
Re:14 times slower vs 8 times faster (Score:2)
And you misread the statement. It said that Cell was 8 times faster than Opteron in DP.
Re:14 times slower vs 8 times faster (Score:2)
Still, it would seem that Cell is 1.75x (14/8) slower for double precision (although on average it's 8x faster, which makes sense, because its single precision speed is enough to raise the average).
Benchmark (Score:1)
Cell has 8 vector processors and something like a PPC to "control" all of them; it's designed specially for FP operations. It's like comparing a GPU with a CPU; it doesn't make much sense.
Ignore everything important? (Score:3, Interesting)
Scream "my computer beats your abacus" all you want.
But then it is from Berkeley, so that's normal.
Re:Ignore everything important? (Score:2)
not a fair comparison (Score:2, Insightful)
Re:Xbox 2 is a "commodity" (Score:1, Offtopic)
John Carmack on PS3 vs 360 [youtube.com]
Metal Gear 4 demo vid.. 8 or 9 mins long, very cool. [youtube.com]
Re:Xbox 2 is a "commodity" (Score:2)
To be honest I question the validity of this study anyway, I seem to recall lots of pape
Re:Xbox 2 is a "commodity" (Score:1, Offtopic)
(Although I dunno if it is still worth the price tag.)
Re:Xbox 2 is a "commodity" (Score:3, Informative)
Have you seriously never seen anything like this before? As a professional PS2/360/PS3 developer, I have to say that I was seriously underwhelmed by this demo. Every one of the effects has been used before. The original Xbox has every effect he mentioned. And HL2 has a significantly more complex lighting system and postprocessing effects.
The demo appears to be a single high-poly character in a texture mapped box. The demoer admi
Re:Xbox 2 is a "commodity" (Score:2)
> You assumed that the MGS4 trailer was pre-rendered cutscene,
> that obviously shows that you have little knowledge of the PlayStation
> and MGS. MGS has NEVER used pre-rendered cutscenes.. blah blah blah
I never said it was prerendered. You simply misunderstood the way these things work. In-game cut scenes use different models than the regular game. That is because the artists need more detailed control of the animations. They can be much more compl
Re:Xbox 2 is a "commodity" (Score:3, Insightful)
High performance computing is that which you'd want to throw a huge Beowulf cluster at, or possibly a supercomputer or twenty. Not three small pathetic cores.
Re:Xbox 2 is a "commodity" (Score:1)
Whether or not HPC is something you'd want to throw 20 or more supercomputers at in a Beowulf cluster, at least you know that the PS3 is really the only next-generation video game system, because nobody concerned with raw performance and power efficiency would want to use the Xbox 2 in an HPC environment.
Re:Xbox 2 is a "commodity" (Score:2)
Not quite. What they're saying is that the Cell is better suited to parallel applications, like physics simulations, and that it is more scalable - i.e., easier to build supercomputers or distributed computing nodes from.
However, that has no bearing upon what 'generation' the host console is - largely be
Re:PS3 will rule in 2008 (Score:1, Offtopic)
Whether this is worthwhile for users will depend a lot on how open the console is to third-party software. Usually consoles are designed to run only software licensed by the cons