Stanford Uses Million-Core Supercomputer To Model Supersonic Jet Noise
coondoggie writes "Stanford researchers said this week they had used a supercomputer with 1,572,864 compute cores to predict the noise generated by a supersonic jet engine. 'Computational fluid dynamics simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be. And yet, despite the additional computing horsepower, the difficulty of the calculations only becomes more challenging with more cores. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.'"
Pfft. I can simulate supersonic jet noise just by (Score:5, Funny)
Pfft. I can simulate supersonic jet noise just by overclocking my Radeon 7970.
Re:Pfft. I can simulate supersonic jet noise just (Score:3, Funny)
Pfft is my simulation of jet noise
Re:Pfft. I can simulate supersonic jet noise just (Score:2)
I can get it by flipping the switch on my amp and running a pick down the low E string on my Ibanez. They seriously needed a million processors? Sounds like government had a hand in that one.
Re:Pfft. I can simulate supersonic jet noise just (Score:0)
you're working in the analog domain. They're making a digital simulation of it. This makes it possible to simulate with many different configurations and find one that minimizes the noise while preserving the thrust.
I know, I know you were joking, wooosh and all that ...
Re:Pfft. I can simulate supersonic jet noise just (Score:0)
Put it in her mouth.
That minimizes noise, while maximizing thrust.
Re:Pfft. I can simulate supersonic jet noise just (Score:2)
My first tower desktop left dark dust-marks against the wall where the fans were. Told my parents that I forgot to turn the after-burners off after take-off.
Beowolf? (Score:1)
I don't know. The word just popped into my head.
Wake me when they reach 4444444 cores (Score:3)
everything is in the subject
Re:Wake me when they reach 4444444 cores (Score:2)
Re:Wake me when they reach 4444444 cores (Score:3)
Well, do they count CUDA cores as fully-fledged CPU cores ?
Re:Wake me when they reach 4444444 cores (Score:0)
slashdot uids are not random, they're sequential, so i bet he just registered a few accounts each day to gauge the rate of uid progression, calculated when his desired uid would likely be up for grabs, and then just had like 40 or 50 tabs, all ready to create a new account, waiting for the right moment to click.
not that difficult.
For those who can't afford that type of equipment (Score:2)
Re:For those who can't afford that type of equipme (Score:2)
Re:For those who can't afford that type of equipme (Score:3)
Slashdotters don't have sex, and so they cannot have slashdaughters. Ergo, slashdaughters do not exist. QED.
Re:For those who can't afford that type of equipme (Score:3)
Re:For those who can't afford that type of equipme (Score:1)
Slashdotters don't have sex, and so they cannot have slashdaughters. Ergo, slashdaughters do not exist. QED.
Slash has had sex with many, many women over the years.
I'm sure he has at least a few Slashdaughters.
One million cores? (Score:0)
Gosh, I'll bet it could load up Windows 8 in less than 15 seconds or handle a 3000 ship EVE Online battle.
Re:One million cores? (Score:0)
I for one welcome our One Million Core overlords !!!
Imagine a Beowulf Cluster of a One Million Core cluster !!!!!
Re:One million cores? (Score:1)
Is there a system that can handle a 3000 ship EVE Online battle with no lag?
Pfft (Score:0)
Will it blend?
Re:Pfft (Score:0)
Yes, but don't breathe the smoke. You'll get a coniosis that's, like, A MILLION TIMES worse than a regular coniosis.
That many ... (Score:2)
five-dimensionally connecting the cores (Score:5, Interesting)
But searching for "5-d torus interconnect" gets you nothing on wikipedia. Here's the 2-dimensional version explanation: http://en.wikipedia.org/wiki/Torus_interconnect [wikipedia.org]
and the K computer by Fujitsu [wikipedia.org] at Riken uses a 6-d (six dimensional) torus network. So how does the 5-d torus interconnect lead to the 2**19 + 2**20 cores or possibly 2**17+2**18 cpus? I'm not seeing it in my head clearly. Off to a paper-napkin to sketch it out!
Each core connects 5-dimensionally; going forward or back in each dimension gives 10 interconnects from one core to its 10 five-dimensional neighbors one step away. But the number of cores factors as 3 * 2^19 (only twos and a single three), so I'm not seeing the construct...
Re:five-dimensionally connecting the cores (Score:5, Informative)
See Hardware Section 8, BG/Q Networks
Re:five-dimensionally connecting the cores (Score:2)
Re:five-dimensionally connecting the cores (Score:2)
is the relevant picture, showing a "Midplane, 512 nodes, 4x4x4x4x2 Torus". The five torus dimensions need not all be equal, so when midplanes are ganged together, one dimension of the full machine can carry the factor of three.
Re:five-dimensionally connecting the cores (Score:0)
And here's a typical implementation:
http://i42.photobucket.com/albums/e309/dred1367/16.jpg
Re:five-dimensionally connecting the cores (Score:2)
"Spaghetti code, I'd like you to meet the spaghetti interconnect."
Re:five-dimensionally connecting the cores (Score:1)
Well a quick google and we get this
https://asc.llnl.gov/computing_resources/sequoia/configuration.html [llnl.gov]
16 cores per CPU/chip (or, according to Wikipedia, 18, but one is used for the OS and one is kept as a spare).
Note also that each dimension of the torus does not have to be the same, so the constraint is
No of cores = 16*A*B*C*D*E
That is assuming each node on the torus has 16 cores (potentially could be a multiple of 16).
Anyway according to
http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]
The system is 96 racks
Each rack is 32 node cards, each node card has 32 modules, and each module has 16 cores: 96 x 32 x 32 x 16 = 1,572,864 total cores.
I would guess there are two dimensions between the racks (12 x 8) and the remaining dimensions are inside the racks: potentially 12x8x16x2x32, but probably split a bit more evenly so the longest dimension is not 32. Maybe 12x8x16x4x16, with two separate links inside the node cards splitting them into two groups of 16 modules.
There are a couple of different ways they could wire up the three dimensions in the cabinets, so that part is hard to be sure about.
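The arithmetic above can be checked in a few lines. Note the 5-D split below is the same speculation as in the comment, not anything official:

```python
# Total cores from the rack breakdown described above.
racks, node_cards_per_rack, modules_per_card, cores_per_module = 96, 32, 32, 16
total_cores = racks * node_cards_per_rack * modules_per_card * cores_per_module
print(total_cores)  # 1572864

# One *guessed* 5-D torus of modules consistent with that total
# (the 12x8x16x4x16 split is speculation, as above).
dims = (12, 8, 16, 4, 16)
modules = 1
for d in dims:
    modules *= d
print(modules * cores_per_module == total_cores)  # True
```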
Re:five-dimensionally connecting the cores (Score:2)
I'm more interested in how the headline writer got from "1,572,864 cores" to "million core".
why round down "1,572,864 cores" to "million core" (Score:2)
Re:why round down "1,572,864 cores" to "million co (Score:2)
Shouldn't that be in ratios of 3 to 1 approximately? Responding to myself to catch the error of leap year frequency!
Re:five-dimensionally connecting the cores (Score:2)
It's the same topology as the state space of a Rubik's cube.
A 1D torus is simply a ring. Imagine a ring made from eight points. Translate that ring to the side a bit, then spin it 360 degrees in steps of 45 degrees: that gives you a 2D torus. Now once again move that torus off to one side and spin it with the same number of iterations. That's a 3D torus. Another tasty visualisation is a ring of donuts resting on their sides, stacked top to bottom in a closed circle. Every node then has six neighbors. Every additional dimension adds two neighbors, until you have 10 connections for each node. Obviously, you also need some means of getting data in and out, and that gives you the data connections.
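A toy enumeration of those neighbors, assuming plain nearest-neighbor wraparound in each dimension (not BG/Q's actual routing):

```python
def torus_neighbors(coord, shape):
    """All one-hop neighbors of `coord` on a torus of the given shape."""
    neighbors = []
    for dim, size in enumerate(shape):
        for step in (-1, 1):
            n = list(coord)
            n[dim] = (n[dim] + step) % size  # wrap around the ring
            neighbors.append(tuple(n))
    return neighbors

# In 5 dimensions each node has 2 links per dimension, i.e. 10 in total
# (along a dimension of length 2, both links reach the same node).
print(len(torus_neighbors((0, 0, 0, 0, 0), (4, 4, 4, 4, 2))))  # 10
```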
How many cores does it take to (Score:2)
simulate the Matrix?
Re:How many cores does it take to (Score:3)
simulate the Matrix?
One. Actually, you could do it with rocks [xkcd.com].
And this is why the West is in decline (Score:0)
"We've finished the prediction!" (Score:0)
"It's loud. Really loud. Now let's mine some bitcoin with this baby."
Wikipedia article out of date. (Score:2)
Makes the sound 'boom!' (Score:1)
But what was the question?
Accurate? (Score:0)
So.. what I'd like to know is:
How much more accurate was this vs. a scaled-down model that runs on a single desktop PC via mathematical approximation,
and how much compute hardware / electricity per decimal point of precision did it cost to achieve this one?
Re:Accurate? (Score:1)
Re:Accurate? (Score:0)
I imagine that this was done using direct numerical simulation, which is considerably more accurate than any other method. Since turbulent fluid flow is inherently transient, and also involves very tiny wiggles, the only way to fully resolve what is happening is by massive computer simulations such as this one.
There is no point to a mathematical approximation, since they are trying to gain some insight into the foundations of the physics of sound. This wasn't just some for-the-hell-of-it simulation.
That is an excellent question - is it a direct numerical simulation, or RANS, LES...? Even with that computing horsepower, DNS is limited to moderate Reynolds numbers and probably simple geometries. That's why turbulence is so cool - one of the remaining (stubborn) classical physics problems, with only limited approximate solutions.
Re:Accurate? (Score:1)
Such simulations use double precision for accuracy. You get precision problems if you just use 32-bit floating point: the tiny differences between approximated values amplify over every time step. The goal of this project was to model turbulence and how it could be reduced by adding grooves to the engine exhausts. Turbulence is almost fractal in nature: the closer you look at any volume of space, the smaller the vortex tubes get, right down to atoms spinning round each other. Because there is more turbulence closer to the surface of the aircraft, they use multigrid methods where the volume of space right next to the aircraft has the highest grid resolution, down to the nearest centimetre or lower.
So they needed over 1 million processing cores to model a volume of space that would contain the entire airframe down to the engines and wheels (250 metres x 100 metres x 20 metres) at centimetre resolution. You just wouldn't be able to get a desktop PC to store all that data - it would run to several hundred gigabytes.
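The precision point is easy to demonstrate with a toy example: rounding a Python double down to float32 (via the standard `struct` round-trip) makes a small increment vanish entirely.

```python
import struct

def to_f32(x):
    """Round a Python float (IEEE-754 double) to the nearest float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

tiny = 1e-8
print((1.0 + tiny) - 1.0)        # ~1e-8: double precision keeps the difference
print(to_f32(1.0 + tiny) - 1.0)  # 0.0: single precision swallows it entirely
```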
More cores = interesting problems (Score:3)
You get some pretty interesting problems, when you increase the number of cores in your computer.
A couple of years ago, we replaced a 4-core IBM P5 with a 32-core HP DL 580. We tested it for a couple of months with just a user, or two, at a time. Then, we took a day and tested with the entire company (roughly 250 users). Thank goodness we did before we put it into production because, for some people, it was actually slower than the P5. It looked like it was going to be a disaster.
Fortunately, I had seen this problem before (on a Sequent Symmetry, of all things). I ran "strace" on the offending process, and sure enough, we were having problems with lock contention. We talked to our software vendor and, while it took a while for them to admit it was their problem (and probably cost us multiple thousands of dollars to have them fix it), they rewrote the code to use fewer locks. Problem solved.
Re:More cores = interesting problems (Score:2)
"Using fewer locks" often means "data integrity goes down the drain".
That race condition could never happen to us, right?
Re:More cores = interesting problems (Score:2)
Of course not. It's closed-source software.
Thankfully, they were only reading the file. Why they were locking it, in the first place, I'll never know.
1M Cores!?! (Score:0)
OK, OK... I'm switching over to Windows 8 too!
"Compute" (Score:0)
Off-topic: since when is "compute" a grammatically acceptable replacement for "computing" or "computational"?
Re:"Compute" (Score:2)
It's cooler. Look up "compute server".
No cores needed? (Score:2)
I was able to calculate the noise from the jet *inside the cabin* without so much as a calculator...
Strange implications? (Score:1)
"The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication."
This statement seems to imply the outcome of the simulation depends somehow on the tuning of the system hardware. That has dire implications for whatever method they are using.
If a simulation becomes non-deterministic depending on how the hardware communicates, and gives different solutions to the same problem because of that, then I would say it is not a good approach to computational bogodynamics.
Re:Strange implications? (Score:0)
Re:Strange implications? (Score:2)
I think the way the sentence is constructed is slightly confusing. They are not talking about the simulated sound waves but about the computation waves. This type of code is not monolithic, but runs through various phases during computation (as in map-reduce for instance). To remain efficient, you have to orchestrate the nodes to remain in sync to avoid costly idling locks.
Typically, some parts of CFD or other scientific simulations may include non-deterministic steps; for example, mathematical optimization often benefits from some randomization to speed up convergence. Overall this does not change the outcome of the computation, because the converged result is what matters.
Re:Strange implications? (Score:2)
It means they have to match the speed at which calculations are performed on chunks of data against the speed that these chunks can be propagated to and from neighbors. Then every now and again they need to save checkpoints or saves of the entire simulation, so they don't lose months of calculations.
Physics is on their side. (Score:5, Insightful)
Re:Physics is on their side. (Score:0)
I agree.
Re:Physics is on their side. (Score:2)
Thank you for this info.
Do you have some example of physics related to other classes of equations ?
Wikipedia [wikipedia.org] confirms this and also tells us that heat equations are parabolic, but there aren't much examples.
Re:Physics is on their side. (Score:2)
I'm not extremely experienced with the details of solving these equations numerically in parallel, but generally the solution of an elliptic equation at a given grid point depends on the values at surrounding grid points. Since spatial domains are often broken into smaller tiles for parallel computing, this may complicate matters somewhat compared to a problem that depends only on values at the previous time step.
Re:Physics is on their side. (Score:2)
Wave-equation problems are propagation problems; the theory for solving the corresponding PDEs is well understood and can rely on explicit finite-difference or finite-element schemes which are known to be stable, for instance under the Courant–Friedrichs–Lewy condition (look this up if you want). The corresponding code "only" requires matrix additions and multiplications, which is why the Linpack benchmark is relevant for these problems.
On the other hand, the elliptic Laplace and Poisson equations can only be solved by matrix inversion. Most often they are well-conditioned, sparse problems; nonetheless, matrix inversion is a harder problem which is not so simple to parallelize. It is still a very active field of research. Look up fast Poisson solvers or adaptive multigrid solvers, for instance.
Simple examples of Poisson-like problems are heat transfer and image denoising.
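As a minimal sketch of the first point (nothing to do with the actual Stanford code), an explicit finite-difference step for the 1-D wave equation u_tt = c^2 u_xx uses only local additions and multiplications, and is stable when the CFL number c*dt/dx <= 1:

```python
c, dx, dt, n = 1.0, 0.01, 0.005, 200
cfl = c * dt / dx            # 0.5, safely under the CFL stability limit
assert cfl <= 1.0

u_prev = [0.0] * n
u = [0.0] * n
u[n // 2] = 1.0              # initial displacement bump in the middle

for _ in range(100):         # leapfrog time stepping, purely local updates
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + cfl**2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next
# The bump splits into pulses travelling outward; amplitudes stay bounded.
```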
Re:Physics is on their side. (Score:2)
On the other hand the elliptic Laplace and Poisson equations can only be solved by matrix inversion.
You can, but you don't have to. A common undergrad PDE computing task is to solve Laplace's equation using finite differencing. If at each iteration you set the value of the current point to the average of the 4 surrounding points from the previous iteration, you converge to the solution.
All you're doing is solving Ax=b, so any style of solver will do, including but not limited to direct solvers; iterative ones work just fine.
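The exercise described above, sketched in a few lines: Jacobi iteration on a small grid, with the top edge held at 1 and the other boundaries at 0, each interior point repeatedly replaced by the average of its four neighbors:

```python
n = 10
grid = [[0.0] * n for _ in range(n)]
grid[0] = [1.0] * n                    # top boundary fixed at 1

for _ in range(2000):                  # iterate until (near) convergence
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    grid = new

# The interior settles into a smooth potential between the boundaries:
# values near the hot top edge end up larger than those near the bottom.
```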
Re:Physics is on their side. (Score:2)
With 1.5 million nodes, space could be partitioned into 64x64x384 cubical regions (from the piccy in the article they are simulating a non-cubic region). Not having *any* idea how large each of those regions is in their simulation (as that would be technical, and therefore of no use in a supposedly scientific publication for the masses, sigh), it's probably best to just give some example numbers for what the overhead is:
dimensions    overlap  ratio
32x32x32      1        0.09
32x32x32      2        0.17
64x64x64      2        0.09
128x128x128   2        0.046
128x128x128   3        0.069
Which means that if each node is given a 128x128x128 region of space and it needs to exchange its outermost 2 layers in each dimension, then just under 5% of the data will need to be transferred to another node every time step. Clearly this overhead increases for smaller cells, and is far from negligible in very small ones. Fewer big-grunt boxes do make things easier (compare the Earth Simulator, when that first came out).
The formula used above is borderratio(s,w) = (s^3 - (s-w)^3) / s^3, where s = the side of the cube and w = the width of the data you need to share. It's a slight under-estimate, as some data needs to be transmitted in multiple directions.
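That formula reproduces the table above directly:

```python
def border_ratio(s, w):
    """Fraction of an s^3 block lying in the outermost w layers."""
    return (s**3 - (s - w)**3) / s**3

# The (s, w) pairs from the table above.
for s, w in [(32, 1), (32, 2), (64, 2), (128, 2), (128, 3)]:
    print(f"{s}x{s}x{s}, halo {w}: {border_ratio(s, w):.3f}")
```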
How was it simulated? (Score:0)
If I had enough PC fans to cool all of those cores, the computer wouldn't have to simulate anything: The fans alone would provide the jet engine noise.
Amdahl's law (Score:3)
At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.
When they say this, they mean it. To put this in perspective: with 1,572,864 cores, an application which is 99.9999% scalable will use LESS THAN HALF of the hardware! Over 60% of the hardware will be tied up waiting for that 0.0001% of serial code to execute.
This problem is explained by Amdahl's law [wikipedia.org], an important (yet depressing) observation which shows just how difficult writing an effective parallel algorithm actually is -- even when you're only writing for 4 cores.
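Plugging those numbers into Amdahl's law directly:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Classic Amdahl: serial part runs at 1x, parallel part at `cores`x."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

cores = 1_572_864
s = amdahl_speedup(0.999999, cores)    # 99.9999% parallel code
print(f"speedup:    {s:,.0f}")         # roughly 611,000x
print(f"efficiency: {s / cores:.0%}")  # roughly 39%: under half the hardware
```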
Re:Amdahl's law (Score:2)
You can use Amdahl's law in the reverse direction, going from the observed speedup to a figure for the "parallelisability" of the implementation, but that's just a meaningless number. You're really interested in the speedup you achieved; there's no need to bring Amdahl's law into the discussion at all.
I'm not saying Amdahl's law is useless, I frequently have to bring it to the attention of newbs who don't have a clue, it's just that it's not always useful either.
Re:Amdahl's law (Score:3)
There's Gustafson's Law [slashdot.org] exactly for this. Amdahl's law is not appropriate in this case; in fact, even the Wikipedia page on Amdahl's law mentions this. You are never going to use a computer with 1 million cores to do something that can be done in manageable time on a 4-core CPU or whatever. If the portion of the code that is serial is consistently small (say, just reading the initial conditions from a text file), then you make sure you are applying the 1-million-cpu machine to a large enough job.
People don't want to compute fixed-size jobs (which is what Amdahl's law refers to). People want to calculate the biggest job possible and parallelization helps for that... otherwise no one would make a 1 million cpu machine.
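For contrast, a sketch of Gustafson's scaled speedup with the same serial fraction: grow the problem with the machine, and the speedup stays essentially linear in the core count.

```python
def gustafson_speedup(serial_fraction, cores):
    """Scaled speedup: the parallel workload grows with `cores`."""
    return cores - serial_fraction * (cores - 1)

cores = 1_572_864
print(f"{gustafson_speedup(1e-6, cores):,.1f}")  # ~1,572,862: nearly linear
```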
Just the fans will do it (Score:0)
Just turning it on, the cooling fans will be as noisy as a supersonic jet.
Cooling one million cores takes a lot of air!