Europe Plans Exascale Funding Above U.S. Levels

dcblogs writes "The European Commission last week said it is doubling its multi-year investment in the push for exascale computing, from €630 million to €1.2 billion (the equivalent of about $1.58 billion). It is making this a priority even as austerity measures are imposed to prevent defaults. China, meanwhile, has a five-year plan to deliver exascale computing between 2016 and 2020 (PDF). The Europeans announced the plan the same week the White House released its fiscal year 2013 budget, which envisions a third year of anemic funding to develop exascale technologies. Last year, the U.S. Department of Energy science budget asked for nearly $91 million for the effort in the current fiscal year; it received $73.4 million. DOE science is seeking about $90 million for exascale in 2013. There is more funding tucked into military and security budgets. The U.S. wants exascale around 2018, but it has yet to deliver a plan or the money for it."

  • by toygeek ( 473120 ) on Wednesday February 22, 2012 @02:50AM (#39121651) Journal

    I didn't know what it was, since I don't follow supercomputing very closely, so I looked it up. From http://en.wikipedia.org/wiki/Exascale_computing [wikipedia.org]:

    "Exascale computing refers to computing capabilities beyond the currently existing petascale. If achieved, it would represent a thousandfold increase over that scale."

    To define Petascale:

    "In computing, petascale refers to a computer system capable of reaching performance in excess of one petaflops, i.e. one quadrillion floating point operations per second." http://en.wikipedia.org/wiki/Petascale [wikipedia.org]

    A petascale computer, the Cray XT5 Jaguar, can do 1.75 petaflops. To reach an exaflops, you would need almost 600 installations of that supercomputer.

    So yeah, Exaflop is pretty big. http://en.wikipedia.org/wiki/Orders_of_magnitude_(computing) [wikipedia.org]
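    A quick back-of-the-envelope check of those numbers, as a minimal Python sketch (the 1.75 petaflops figure is the Jaguar number quoted above; the rest is just unit arithmetic):

        # Rough scale comparison: petaflops vs. exaflops.
        PETA = 1e15   # peta = 10^15 floating point operations per second
        EXA = 1e18    # exa = 10^18

        jaguar_flops = 1.75 * PETA   # Cray XT5 Jaguar, as quoted above

        # How many Jaguar-class installations would it take to reach one exaflops?
        installations = EXA / jaguar_flops
        print(f"Jaguar-class systems needed for 1 exaflops: {installations:.0f}")   # ~571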

  • by ldobehardcore ( 1738858 ) <steven...dubois@@@gmail...com> on Wednesday February 22, 2012 @03:02AM (#39121713)

    One big reason an exa-scale installation is generally better than an exa-scale distributed project is data transfer.

    Distributed computing is plagued by data-transfer bottlenecks. If it's an internet project, the cumulative effect of combined bandwidth does add up, but serving out project segments at exa-scale rates is very expensive, and receiving the solution chunks back is just as expensive. There's also the problem of "internet climatology" (I'm not sure what it's really called), where the connections aren't uniform. While the internet does "self-heal", that takes time, and it adds up as well.

    Basically, when you scale up the computing power of a distributed project, the problems scale too. Out-of-order processing of problem chunks also causes trouble when peers join and drop out in unpredictable ways. Often the same chunk ends up consuming many times more cycles than it actually needs, because peers get bored with the work, or just try out the system and then abandon the piece they're working on.

    An exa-scale supercomputer would remove that collaboration overhead, or at least significantly reduce it. Scheduling is much more efficient, and in the end FLOPS alone is not a reliable measure of useful performance anyway: a distributed project can run at an exaFLOPS rate and still do no productive work if the participants never finish any of the chunks they are tasked with.
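    A toy illustration of that last point (a hedged sketch: the drop rates are made up, and it assumes a dropped chunk simply has to be reissued and redone from scratch):

        import random

        # Toy model: expected number of attempts per work unit when peers
        # abandon a chunk with probability p_drop before finishing it.
        def expected_attempts(p_drop, trials=100_000):
            total = 0
            for _ in range(trials):
                attempts = 1
                while random.random() < p_drop:   # peer drops the chunk; reissue it
                    attempts += 1
                total += attempts
            return total / trials

        for p in (0.1, 0.3, 0.5):
            print(f"drop rate {p:.0%}: ~{expected_attempts(p):.2f} attempts per chunk")
        # Analytically this is 1 / (1 - p): at a 50% drop rate every chunk costs
        # about twice the cycles it actually needs, before any scheduling overhead.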

  • by Anonymous Coward on Wednesday February 22, 2012 @03:35AM (#39121891)

    With regard to the radio telescope, the US doesn't have any suitably noise-free sites. There's a reason the SKA is being placed in the least populated part of the least populated continent on the planet. Most of the equipment for the telescope is going to be Intel Inside.

  • by timbo234 ( 833667 ) on Wednesday February 22, 2012 @04:56AM (#39122369) Journal

    Let's go through your examples:
    "NASA went to space, so Europe made the ESA .. a weak form of NASA"
    Ok, the ESA has got nothing on NASA (no surprise, since its total funding is sadly only about 1/5th of what NASA gets). But the only reason NASA was able to get to the moon so quickly back in the day was that it 'stole' German rocket technology and scientists after the war. Everything NASA has done since then has built on that rocket technology.

    "The US starting building the supercollider (which Reagan cancelled) so they built the LHC -- a weaker supercollider ... they only win cause supercollider funding got cut"
    Nonsense, the LHC is a machine that is literally at *the edge* of what current science and technology can do, which is why it has taken so long to get it working. You can't compare that to a collider that was cancelled 20 years ago for being unrealistically expensive to build.

    "The US has Boeing so Europee made Airbus -- most of their planes are uninspired boeing clones"
    Airbus pioneered the use of technologies like fly-by-wire and composite fuselages long before Boeing dared. It has also introduced new aircraft that changed the economics on certain routes, such as the A380. Not to mention that the first commercial jetliner in production was the de Havilland Comet from the UK, although it proved unsafe and was eventually overtaken by the American 707.

    "The US built the National Ignition Facility to study nuclear fusion, so Europe is building Laser Megajoule"
    The NIF and ITER are two different approaches to achieving viable nuclear fusion. Europe has committed the majority of its funding to the ITER approach, but it would be stupid not to have some smaller-scale experiments that use the approach NIF uses, just as I'm sure the US has some experiments that try the ITER torus approach.

    Oh and BTW the National Ignition Facility was 5 years behind schedule and almost 4 times more expensive than originally budgeted when completed.

    There are areas of science and technology where the US is ahead and some where Europe is, but it's always annoying in these discussions to have some jingoistic American spread the myth that all technological development comes from the US and that Europe (and everyone else) just copies. It's a myth not supported by history, recent history included.

  • by stevelinton ( 4044 ) <sal@dcs.st-and.ac.uk> on Wednesday February 22, 2012 @06:07AM (#39122683) Homepage

    Most of the applications on big supercomputers are simulations. In the basic case, each node sits there simulating its cube of atmosphere, or its bit of the airflow around an aircraft or car design, or its bit of a nuclear weapon in a simulated warehouse fire. Every time step it needs to exchange the state of the borders of its region with its neighbouring nodes (see the sketch below). In some other applications, all the nodes work together to invert a huge matrix or do a massive Fourier transform in several dimensions; these need even more communication.

    The demand is genuine, and can't be met by wide-area distributed computing using any algorithms we know.
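    The border exchange described above is usually called a halo (ghost-cell) exchange. Below is a minimal single-process Python/NumPy sketch of the pattern, with a simple 1-D diffusion stencil standing in for the real physics; an actual code would do the copy step with MPI messages between nodes rather than in-memory copies:

        import numpy as np

        # Halo-exchange sketch: split a 1-D domain across "nodes", copy one
        # ghost cell to/from each neighbour per time step, then let each node
        # update only its own interior cells.
        n_nodes, cells_per_node, steps, alpha = 4, 10, 100, 0.1

        # Each subdomain carries two extra ghost cells (indices 0 and -1).
        domains = [np.zeros(cells_per_node + 2) for _ in range(n_nodes)]
        domains[0][1:-1] = 1.0    # put some initial "heat" in the first subdomain

        for _ in range(steps):
            # Communication phase: copy border values into neighbours' ghost cells.
            for i in range(n_nodes - 1):
                domains[i][-1] = domains[i + 1][1]    # my right ghost <- neighbour's left border
                domains[i + 1][0] = domains[i][-2]    # neighbour's left ghost <- my right border
            # Compute phase: explicit diffusion update on interior cells only.
            for d in domains:
                d[1:-1] += alpha * (d[:-2] - 2 * d[1:-1] + d[2:])

        # The heat has spread across subdomain boundaries via the ghost cells.
        print([round(float(d[1:-1].sum()), 3) for d in domains])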

  • by Anonymous Coward on Wednesday February 22, 2012 @06:13AM (#39122713)

    Resolving the turbulent flow around an airfoil with a Direct Numerical Simulation (DNS, i.e., without a turbulence model) requires an exascale computer to be practical (i.e., to take only weeks).

    At the moment there is a whole science of creating turbulence models to approximate turbulent behavior. However, because turbulence is one of the most important unresolved problems of classical mechanics, none of the models works in all cases, and in some cases none of them works at all.

    We are still far from having "exascale on the desktop", but even a few practical DNS simulations would give a lot of insight into turbulence, allowing us to develop better turbulence models, with corresponding improvements in energy efficiency (e.g. aerodynamics, combustion, lubrication) for applications in combustion engines, wind turbines, cars, trains, ships, airplanes, weather forecasting, and so on.
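    To put rough numbers on the cost of DNS (a back-of-the-envelope sketch: the scaling exponents are the standard textbook estimates for resolving all turbulent scales, but the flops-per-point constant and the Reynolds numbers below are purely illustrative):

        # Order-of-magnitude DNS cost estimate. Grid points scale roughly as
        # Re**(9/4) and time steps as Re**(3/4), so total work grows like Re**3.
        def dns_estimate(reynolds, flops_per_point_step=1e3):   # 1e3 flops/point/step is a guess
            grid_points = reynolds ** 2.25
            time_steps = reynolds ** 0.75
            total_flops = grid_points * time_steps * flops_per_point_step
            return grid_points, total_flops

        for re in (1e5, 1e6, 1e7):
            pts, flops = dns_estimate(re)
            days_at_exa = flops / 1e18 / 86400
            print(f"Re={re:.0e}: ~{pts:.1e} grid points, ~{flops:.1e} flops, "
                  f"~{days_at_exa:.3g} days at 1 exaflops")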

  • by serviscope_minor ( 664417 ) on Wednesday February 22, 2012 @07:05AM (#39122903) Journal

    Why is everyone pushing for exascale computing?

    Well, we all want more computing power.

    What is such a super computer used for?

    Solving very large systems of linear equations, for one. Many (but by no means all) scientific problems ultimately boil down to solving linear systems over and over again: basically, anything that can be described by partial differential equations (see the sketch below).

    Sometimes people want to find eigendecompositions of them too.

    But there are also things like molecular dynamics, and so on.
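    To make the linear-systems point concrete, here is a minimal NumPy sketch of the kind of kernel involved: discretising a 1-D Poisson equation into a linear system and solving it. A real simulation does essentially this at every time step, except with billions of unknowns, a sparse distributed matrix, and iterative solvers spread across thousands of nodes:

        import numpy as np

        # 1-D Poisson problem -u'' = f on (0, 1) with u(0) = u(1) = 0,
        # discretised by finite differences: the PDE becomes a linear system A u = b.
        n = 100
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)

        # Tridiagonal second-difference matrix (dense here only for brevity).
        A = (np.diag(2.0 * np.ones(n))
             - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        b = np.sin(np.pi * x)          # right-hand side f(x) = sin(pi*x)

        u = np.linalg.solve(A, b)      # exact solution is sin(pi*x) / pi**2
        print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))   # small discretisation error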

    Couldn't a massive distributed system work just as well?

    Yes: that's exactly what supercomputers are these days. They're usually a collection of decent to very good CPUs (BlueGene series aside) in a bunch of 4-socket rack-mount boxes with a top-end network.

    The key is generally the network, which allows microsecond-latency communication between neighbouring nodes.

    Exactly what kind of network works well depends very much on the problem.

"Life begins when you can spend your spare time programming instead of watching television." -- Cal Keegan

Working...