Europe Plans Exascale Funding Above U.S. Levels

dcblogs writes "The European Commission last week said it is roughly doubling its multi-year investment in the push for exascale computing, from €630 million to €1.2 billion (the equivalent of $1.58 billion). It is making this a priority even as austerity measures are imposed to prevent defaults. China, meanwhile, has a five-year plan to deliver exascale computing between 2016 and 2020 (PDF). The Europeans announced their plan the same week the White House released its fiscal year 2013 budget, which envisions a third year of anemic funding for developing exascale technologies. Last year, the U.S. Department of Energy science budget asked for nearly $91 million in exascale funding for the current fiscal year; it received $73.4 million. DOE science is seeking about $90 million for exascale in 2013. There's more funding tucked into military and security budgets. The U.S. wants exascale around 2018, but it has yet to deliver either a plan or the money for it."
  • by luminousone11 ( 2472748 ) on Wednesday February 22, 2012 @01:24AM (#39121525)
    Who knows what kind of toys they have buried in the NSA and DoD. And given the amount of money that flows through defense contracts, it wouldn't be hard to hide that in a small line item somewhere (likely next to that Wayne Tech Justice League space station *wink*).
    • by Anonymous Coward

      And if they don't, the NSA can just steal it!

  • by Anonymous Coward

    Why is everyone pushing for exascale computing? What is such a supercomputer used for? Couldn't a massive distributed system work just as well?

    • If it could, do you think they'd be trying to develop exascale computers in the first place? The only tasks that work on distributed systems (distributed meaning random people installing your app on their computers, not distributed-memory machines) are pretty much trivially parallel in the first place, since your node-to-node bandwidth is practically nil and latency is massive.
    • by ldobehardcore ( 1738858 ) <steven.dubois@gm ... com minus distro> on Wednesday February 22, 2012 @02:02AM (#39121713)

      One big reason an exascale installation is generally better than an exascale distributed project is data transfer.

      Distributed computing is plagued by data-transfer bottlenecks. For an internet project, the cumulative combined bandwidth does add up, but serving out project segments at exascale levels is very expensive, and receiving the solution chunks back is equally expensive. There's also the problem of "internet climatology" (I'm not sure what it's really called), where the connections aren't uniform. The internet does "self-heal," but that takes time, and the time adds up as well.

      Basically, when you scale up the computing power of a distributed project, the problems scale too. Out-of-order processing of problem chunks also causes trouble when peers join and drop out in unpredictable ways. The same chunk often ends up consuming many times the cycles actually required, because peers get bored with the work, or just test out the system and then drop the piece they're working on.

      An exascale supercomputer would remove the problem of collaboration overhead, or at least significantly reduce it. Scheduling is much more efficient, and in the end a FLOPS figure doesn't measure performance in any reliable qualitative way: a distributed project can run at an exaFLOPS rate and still do no productive work if the participants never finish any of the work they are tasked with. A rough back-of-envelope comparison is sketched below.
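
      To make the compute-versus-communication tradeoff concrete, here is a minimal back-of-envelope sketch. Every number in it (node speed, work per step, halo size, latencies, bandwidths) is an illustrative assumption rather than a measurement; the point is the ratio between the two regimes, not the absolute figures.

      ```python
      # Back-of-envelope: seconds per timestep for a domain-decomposed job,
      # modelled as local compute plus one border ("halo") exchange per step.
      # All constants are illustrative assumptions, not measurements.

      NODE_FLOPS = 1e12        # assumed per-node compute rate: 1 TFLOPS
      WORK_PER_STEP = 1e10     # assumed FLOPs each node does per timestep
      HALO_BYTES = 1e6         # assumed border data exchanged per step: 1 MB

      def step_time(latency_s, bandwidth_bps):
          """Time per timestep: local compute plus one halo exchange."""
          compute = WORK_PER_STEP / NODE_FLOPS
          comms = latency_s + HALO_BYTES / bandwidth_bps
          return compute + comms

      # Supercomputer interconnect: ~1 us latency, ~10 GB/s links (assumed).
      cluster = step_time(1e-6, 10e9)
      # Volunteer nodes over the internet: ~50 ms, ~10 MB/s (assumed).
      internet = step_time(50e-3, 10e6)

      print(f"interconnect: {cluster * 1e3:7.2f} ms/step")
      print(f"internet:     {internet * 1e3:7.2f} ms/step")
      print(f"slowdown:     {internet / cluster:.0f}x")
      ```

      Under these assumptions the internet version takes roughly 16 times longer per step, and the gap only widens as the work per step shrinks or the exchanges per step multiply.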

      • Seems to work well for SETI@home.

        Surely the expense of exascale couldn't be justified merely by a data-transfer limitation affecting clustering and BOINC.

        BOINC is enjoying organic growth in performance, including in data-transfer rates, as consumer hardware and telco infrastructure are upgraded, and scientific organisations can access this power at a fraction of the cost of establishing a supercomputing facility.

        The other bottleneck is I/O, and this will not change with exascale. Sensors can on
        • by stevelinton ( 4044 ) <sal@dcs.st-and.ac.uk> on Wednesday February 22, 2012 @05:07AM (#39122683) Homepage

          Most of the applications on big supercomputers are simulations. In the basic case, each node sits there simulating its cube of atmosphere, its bit of the airflow around an aircraft or car design, or its bit of a nuclear weapon in a simulated warehouse fire. Every timestep, it needs to exchange the state of the borders of its region with its neighbour nodes. In some other applications, all the nodes work together to invert a huge matrix or do a massive Fourier transform in several dimensions; these need even more communication.

          The demand is genuine, and it can't be met by wide-area distributed computing using any algorithms we know. A minimal sketch of the border-exchange pattern follows.
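
          To show what that border exchange looks like in code, here is a minimal sketch using mpi4py. The array sizes and step count are arbitrary placeholders, and the actual physics update is stubbed out as a comment; only the communication pattern is real.

          ```python
          # 1D halo exchange: each rank owns N cells plus two ghost cells
          # mirroring its neighbours' borders. Run with e.g.
          #   mpiexec -n 4 python halo.py
          import numpy as np
          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          rank, size = comm.Get_rank(), comm.Get_size()

          N = 1000                    # interior cells owned by this rank
          u = np.zeros(N + 2)         # +2 ghost cells for neighbour data
          u[1:-1] = rank              # dummy initial state

          left = rank - 1 if rank > 0 else MPI.PROC_NULL
          right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

          for step in range(10):
              # Swap border values with both neighbours, once per timestep.
              comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
              comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
              # ... update u[1:-1] here using the received ghost cells ...
          ```

          On a real machine this exchange happens on every node at every step, which is why microsecond-latency interconnects matter so much more than raw FLOPS.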

          • I can imagine things like Navier-Stokes/CFD being supercomputer territory. Maybe, rather than making the idle process just a message-processing loop, they could have it do BOINC work as well, since the cost of idling an exascale computer would be much higher than that of a consumer PC.
    • by serviscope_minor ( 664417 ) on Wednesday February 22, 2012 @06:05AM (#39122903) Journal

      Why is everyone pushing for exascale computing?

      Well, we all want more computing power.

      What is such a super computer used for?

      Solving very large systems of linear equations, for one. Many (but by no means all) scientific problems come down, under the hood, to solving linear systems over and over again: basically, anything that can be described by partial differential equations.

      Sometimes people want to find eigendecompositions of them too.

      But there are also things like molecular dynamics, and so on.

      Couldn't a massive distributed system work just as well?

      Yes: that's exactly what supercomputers are these days. They're usually a collection of decent-to-very-good CPUs (BlueGene series aside) in a bunch of 4P rack-mount boxes with a top-end network.

      The key is generally the network, which allows microsecond-latency communication between neighbouring nodes.

      The nature of exactly what networking works well is very dependent on the problem. A toy example of the linear-solve workload mentioned above follows.
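
      To ground the "PDEs become linear systems" point, here is a toy sketch with scipy: a 1D Poisson problem discretised into a sparse system and solved directly. The grid size and right-hand side are arbitrary choices for illustration; production codes solve the same kind of system at vastly larger scale, in parallel, with iterative solvers.

      ```python
      # Discretise -u'' = f on [0,1] with u(0) = u(1) = 0, then solve Au = b.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000                                  # interior grid points
      h = 1.0 / (n + 1)
      x = np.linspace(h, 1.0 - h, n)

      # Second-order finite-difference Laplacian: tridiagonal (-1, 2, -1)/h^2.
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
      b = np.pi**2 * np.sin(np.pi * x)          # f chosen so u = sin(pi*x)

      u = spla.spsolve(A, b)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())
      ```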

    • Why is everyone pushing for exascale computing? What is such a supercomputer used for? Couldn't a massive distributed system work just as well?

      No, not for what they really intend such a system for.

      How exactly do you think governments are going to perform threat/intelligence analysis of all the data, video, and audio they're collecting, both on the internet and from all the CCTV cams, cellphones, and those 30,000 new government drones that will be patrolling US domestic skies, especially given all the recent data-retention and snooping laws nearly all Western governments have implemented or are trying to implement? Especially for analysis do

      • You just presented a point of view in the lightest, easiest-to-read way. Ever tried comedy? (No joke here.)

        It is true that such supercomputers could be used to analyze huge amounts of data about the activities of civilians. And this is both amazing and scary: amazing because we've gotten far enough that we have the data AND the power to process it in a timely manner; scary because of what could be done with the results of such analysis. I understand the pros and cons of the government (or any company or ind

        • You just presented a point of view in the lightest, easiest-to-read way. Ever tried comedy? (No joke here.)

          Seriously, thanks! Comes from decades of playing in club/bar bands and having to keep crowds entertained in between sets/songs/broken-string replacements/equipment failures/band changes/etc. The ability to make people laugh is a matter of survival when playing a gig in a bar full of drunk and rowdy patched-up outlaw bikers, where the smallest biker still resembles Mongo's bigger and angrier brother packing a semi-auto pistol.

          Apparently, however, someone with mod points doesn't appreciate my sense of humor.

    • Distributed computing works great when you have a large number of chunks that must be independently processed: samples of telescope data in SETI@home, potential proteins to test against a receptor, that sort of thing. If all those chunks are interdependent, then distributed computing over the internet isn't going to work; you need a super. A sketch of the independent-chunk case follows.
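
      As a concrete contrast, here is the independent-chunk case in miniature, sketched with Python's multiprocessing. The score_chunk function is a hypothetical stand-in for real per-chunk work (say, testing one candidate protein), and the chunk contents are placeholders.

      ```python
      # Embarrassingly parallel: each chunk depends only on its own input,
      # never on another chunk's result, so any machine can take any chunk
      # in any order. That is exactly the shape that suits volunteer computing.
      from multiprocessing import Pool

      def score_chunk(chunk):
          # Hypothetical work unit; stands in for real per-chunk analysis.
          return sum(x * x for x in chunk)

      if __name__ == "__main__":
          chunks = [range(i * 1000, (i + 1) * 1000) for i in range(100)]
          with Pool() as pool:
              results = pool.map(score_chunk, chunks)  # any order is fine
          print(sum(results))
      ```

      The simulations described upthread break this pattern: every chunk needs its neighbours' latest results at every timestep, so the cheap scatter-and-gather structure disappears.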
  • More waste (Score:1, Insightful)

    by Anonymous Coward

    Such government "grand" plans are good to distract the crowds, entertain the peons, and prop politicians and their friend's pet projects and corporations up. But the fact that such project requires forcing people to "invest" in them is proof that these resources are misaligned to the current needs and preferences of the people.

    I'm sure that we'll get to exascale at some point, but trying to push it too early (before investors find ways to fund it voluntarily) means wasted opportunities. Unfortunately, as Ba

  • by toygeek ( 473120 ) on Wednesday February 22, 2012 @01:50AM (#39121651) Journal

    I didn't know what it was; I don't follow supercomputing very closely. So I looked it up. From http://en.wikipedia.org/wiki/Exascale_computing [wikipedia.org]

    "Exascale computing refers to computing capabilities beyond the currently existing petascale. If achieved, it would represent a thousandfold increase over that scale."

    To define Petascale:

    "In computing, petascale refers to a computer system capable of reaching performance in excess of one petaflops, i.e. one quadrillion floating point operations per second." http://en.wikipedia.org/wiki/Petascale [wikipedia.org]

    A petascale computer, the Cray XT5 Jaguar, can do 1.75 petaflops. To reach an exaflops (1,000 petaflops), you would need roughly 570 installations of this supercomputer.

    So yeah, an exaflops is pretty big; the quick arithmetic is worked below. http://en.wikipedia.org/wiki/Orders_of_magnitude_(computing) [wikipedia.org]
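
    For completeness, the conversion in a couple of lines (1.75 petaflops is the Jaguar figure quoted above):

    ```python
    # One exaflops expressed in Jaguar-sized (1.75 petaflops) machines.
    PETAFLOPS_PER_EXAFLOPS = 1000
    jaguar_petaflops = 1.75

    machines = PETAFLOPS_PER_EXAFLOPS / jaguar_petaflops
    print(f"{machines:.0f} Jaguars per exaflops")   # ~571
    ```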

  • by j. andrew rogers ( 774820 ) on Wednesday February 22, 2012 @01:51AM (#39121661)

    The US is awash in privately funded technology R&D pointed toward exascale computing. While there is government funding, it is somewhat superfluous, to the extent that the US has a huge, well-funded private sector obsessed with massively scaling just about everything even vaguely related to computing. That whole Internet-scale computing thing.

    The US is hardly disadvantaged by the government not spending money on exascale computing. The US government does not need to compensate for the absence of private investment.

    • The US is hardly disadvantaged by the government not spending money on exascale computing. The US government does not need to compensate for the absence of private investment.

      It's not so much the absence of private investment as it is the absence of a cohesive direction for all that private investment.
      Through either funding or regulation, the government can focus private investment in constructive ways in order to achieve national priorities.

      Of course, the free marketeers don't like such ideas, but if they get their way, we'll be buying our solutions from China.
      Just like we now have to buy all our rare earth minerals from China.

      The US is awash in privately funded technology R&D pointed toward exascale computing. While there is government funding, it is somewhat superfluous, to the extent that the US has a huge, well-funded private sector obsessed with massively scaling just about everything even vaguely related to computing. That whole Internet-scale computing thing.

      This. Huge.

      A billion dollars can be a single contract for a large-scale server farm/cluster in the US. Rare, yes, but imagine: that can be a single contract between only two companies. If you think one of those companies isn't trying to make its product faster and better than the other guy's, you're nuts.

      Intel in 2010 spent over 6 billion dollars [tellingtechtales.com] on R&D alone. You think none of that went toward getting faster?

      I named the biggest (probably), but that's just one of a LOT of companies. The nation's spending on t

  • by Required Snark ( 1702878 ) on Wednesday February 22, 2012 @02:22AM (#39121813)
    The USA is well on its way to 3rd-world status. We will fall behind because we are not funding fundamental research.

    We have no ability to put humans in space.

    We no longer host any major subatomic research facility. The generation of facilities after CERN will not be in the US. We're not even in the running.

    The next big ground based radio telescope will not be in the US.

    The NASA planetary exploration budget is being diverted to fund private launch companies. If there were a viable economic model for space transport, private-sector equity funding would be available. It's not. Many of the commercial space ventures are funded by individuals who made fortunes in software (Musk, Carmack, Bezos, Allen; Branson's fortune came from music and transportation). Wall Street is not betting on making money in the launch sector. Putting NASA money into launch ventures is not basic science R&D.

    We are, however, teaching creationism and climate-change denial in schools. Most of the Republican presidential candidates are anti-evolution. Santorum just said that he is "pro-science" and that the Democrats are anti-science. This is squarely in 1984 territory: Ignorance Is Strength.

    Most Slashdot readers will experience the slide into 3rd world status during the course of their lives.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      With regard to the radio telescope, the US doesn't have any suitably noise-free sites. There's a reason the SKA is being placed in the least populated part of the least populated continent on the planet. Most of the equipment for the telescope is going to be Intel Inside.

      • by dkf ( 304284 )

        With regard to the radio telescope, the US doesn't have any suitably noise-free sites.

        It's also in the northern hemisphere, which has already been well studied. Yet the majority of really interesting objects (notably including the center of our galaxy) are in the southern sky. There's no particular reason for this (it's pure happenstance), but it does mean that we want a good radio telescope well south of the equator. (To be fair, Hawaii is probably far enough south for many of the interesting targets, and the ocean surrounding it really isn't that radio-noisy, but it's not large e

    • Bullshit doomism (Score:5, Insightful)

      by SuperKendall ( 25149 ) on Wednesday February 22, 2012 @02:52AM (#39121999)

      I'll just address one point:

      We have no ability to put humans in space.

      Temporarily, because we have MULTIPLE private companies working to that end. In just a few years we'll have multiple private companies that can put far more people in space than any government ever has, which is a far superior situation to be in.

      Do not mistake transition for defeat.

    • by loufoque ( 1400831 ) on Wednesday February 22, 2012 @04:48AM (#39122615)

      You know what's worse about this?

      The fact that it matters to you.

      The US doesn't need to be the best at everything to be a good country to live in. You should be happy about technological improvement wherever it happens.

    • by argStyopa ( 232550 ) on Wednesday February 22, 2012 @10:22AM (#39124897) Journal

      Instead of saying "welcome to 3rd-world America," say "welcome to prewar America."

      Seriously - the ongoing wailing about "the US is falling behind" is getting a little tiresome.

      First, let's dispense with US exceptionalism. I love my country, and there are a number of notably special things about its situation geographically, culturally, and historically that make it a unique place, but Americans are not (and have never been) intrinsically smarter, prettier, faster, stronger, or in any way different from any other cross-section of humanity. We have the same proportions of brilliant scientists and racist a-holes as pretty much any other random bunch of 330 million people you'd gather in the world.

      Secondly, and more directly to my point - to fear the US 'falling behind' speaks to a staggering level of ignorance of the last 100 years of world history.

      In 1912, a mere 100 years ago, the list of great powers in the world would have been Britain, France, Germany, Russia, Austria-Hungary, and, only marginally, the USA. The US was a largely agrarian country of mostly first-generation immigrants, late to industrialize and largely disconnected from Old World affairs.

      Yet after two catastrophic continent-spanning conflicts in 25 years (and a not-insignificant influenza epidemic), the three leading European states were prostrate: two from their almost-Pyrrhic victories (the UK and France), one dismembered and occupied after being pummeled nearly into dust (Germany). One of the powers entirely ceased to exist (Austria-Hungary), one emerged from civil war at least superficially changed (Russia, now the USSR), one emerged from nowhere (Japan), and only one was basically unscathed: the United States.

      Across the two conflicts, total deaths among the powers listed came to something more than 50 million; US fatalities were approximately 500,000. Possibly more significantly, the wars had completely devastated the industrial, technological, and even cultural infrastructure of the Old World, with the subsequent Cold War arguably contributing further by paralyzing truly independent European development for four-plus decades.

      The US was in the historically-unique position of being a superpower by default, not by inclination. US armies had not marched all over the world subjugating enemies, conquering colonies, and gathering territory for the motherland. (Certainly the US had engaged in its own efforts in colonialism like other Powers of the day, much of it naked military conquest barely cloaked as 'liberatory' exercises.) But it's clear that even the burgeoning jingoism of the early-20th-century US wasn't posed as a challenge to the Great Powers, except insofar as it was competitive to Old World efforts to colonize and dominate the largely-unexploited Western Hemisphere. Instead, the US was largely aimed at internal development, a patronizing benevolence toward other peoples of the Western Hemisphere, and essentially (even as late as the early 20th-century) a *revolutionary* geopolitical stance vis a vis the Old World states and their efforts to "lock down" most of the undeveloped world into agreed-upon exclusionary spheres of influence.

      For a country that emerged in 1945 as the dominant superpower on the planet, it is astonishing that the US began the 20th century with a second-rate navy and almost no army to speak of.

      In fact, as a superpower, one might point out that the US has been particularly clumsy. Certainly, many anti-Americans (and we've generated many of them) would point to the scores of bad US foreign policy decisions as clear signs of its essentially-malignant nature; in point of fact, most if not all were simply colossal blunders born of a government run by unsophisticated and unsubtle men born and raised in a country that was (in their day) fairly irrelevant. Wilson's naivete in insisting on national boundaries in post-WWI Europe almost guaranteed non-self-sufficient states vulnerable to Caesarist populism. Read about the WW2 conferences between Stalin, Churchill, and FDR - FDR, for

  • Did anybody else first interpret the headline as commentary on the national debt?
  • From the article:

    "As for China, 'the Chinese are very practical in this regard,' said Joseph. 'They are very interested in how they use their machines to make their industries stronger.'"

    LOL

    This fits my picture of Europe, the US, and Japan calculating weather, earthquakes, and nuclear explosions, while the Chinese let their industrial sector use it to improve their products.

  • by Anonymous Coward on Wednesday February 22, 2012 @05:13AM (#39122713)

    Resolving the turbulent flow around an airfoil with a Direct Numerical Simulation (DNS, i.e., without a turbulence model) requires an exascale computer in order to be practical (i.e., to take only some weeks).

    At the moment there is a whole science of creating turbulence models to approximate turbulent behavior. However, because turbulence is one of the most important unresolved problems of classical mechanics, none of the models works in all cases, and in some cases none of them works at all.

    We are still far from having "exascale on the desktop," but some practical DNS simulations will give a lot of insight into turbulence, allowing us to develop better turbulence models, with corresponding improvements in energy efficiency (e.g. aerodynamics, combustion, lubrication... for applications in combustion engines, wind turbines, cars, trains, ships, airplanes, weather forecasting...). A back-of-envelope cost estimate follows.
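
    To see why DNS is exascale territory, here is a back-of-envelope cost estimate. The scalings (grid points ~ Re^(9/4), timesteps ~ Re^(3/4)) are the standard textbook ones; the Reynolds number, per-point cost, and simulation length are illustrative assumptions only.

    ```python
    # Rough DNS cost: combining the grid (Re^(9/4)) and timestep (Re^(3/4))
    # scalings gives work per simulated flow-through ~ Re^3. The constants
    # below are assumptions chosen only to show orders of magnitude.
    RE = 1e7                       # assumed airfoil-scale Reynolds number
    FLOPS_PER_POINT_STEP = 1e3     # assumed cost of one grid-point update
    FLOW_THROUGHS = 10             # assumed useful simulation length

    work = FLOW_THROUGHS * FLOPS_PER_POINT_STEP * RE**3   # ~1e25 flops

    for name, rate in [("petascale (1e15 flop/s)", 1e15),
                       ("exascale  (1e18 flop/s)", 1e18)]:
        days = work / rate / 86400
        print(f"{name}: ~{days:,.0f} days")
    ```

    Under these assumptions a single run takes centuries at petascale but a few months at exascale, consistent with the "only some weeks" claim given the large uncertainty in the constants.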

  • They'll be running Windows.

  • There is GPU-based computing; if your problem fits that paradigm, then you're set: http://blogs.nvidia.com/2011/11/exascale-an-innovator%E2%80%99s-dilemma [nvidia.com] (No, I don't work for NVIDIA.)
