
The Ultimate Limit of Moore's Law 418

BuzzSkyline writes "Physicists have found that there is an ultimate limit to the speed of calculations, regardless of any improvements in technology. According to the researchers who found the computation limit, the bound 'poses an absolute law of nature, just like the speed of light.' While many experts expect technological limits to kick in eventually, engineers always seem to find ways around such roadblocks. If the physicists are right, though, no technology could ever beat the ultimate limit they've calculated — which is about 10^16 times faster than today's fastest machines. At the current Moore's Law pace, computational speeds will hit the wall in 75 to 80 years. A paper describing the analysis, which relies on thermodynamics, quantum mechanics, and information theory, appeared in a recent issue of Physical Review Letters (abstract here)."
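For what it's worth, the "75 to 80 years" figure checks out if you assume an 18-month doubling period (the summary doesn't state one, so that's an assumption): closing a 10^16 gap takes about 53 doublings.

    import math

    # Sanity check on the summary's timeline. Assumption: an 18-month
    # (or 24-month) doubling period; the summary does not state one.
    doublings = math.log2(1e16)              # ~53 doublings to close a 10^16 gap
    for period_years in (1.5, 2.0):
        print(f"{period_years} yr/doubling -> wall in ~{doublings * period_years:.0f} years")
    # 1.5 yr/doubling -> wall in ~80 years
    # 2.0 yr/doubling -> wall in ~106 years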

Comments Filter:
  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Tuesday October 13, 2009 @05:41PM (#29737671) Journal

    Intel co-founder Gordon Moore predicted 40 years ago that manufacturers could double computing speed every two years or so
    by cramming ever-tinier transistors on a chip.

    That's not exactly correct. Moore's Law (or, more accurately, Moore's observation) reads in the original article as [intel.com]:

    The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

    All he's concerned with is how many components can fit on a single integrated circuit. One can see this propagated to processing speed, memory capacity, sensors, and even the number and size of pixels in digital cameras, but the observation itself is about component counts -- not speed.

    The title should be "The Ultimate Limit of Computing Speed," not "The Ultimate Limit of Moore's Law."

    Furthermore, we've always had the Planck time [wikipedia.org] as a lower bound on the time of one operation, with our smallest measured interval of time so far being about 10^26 Planck times. So essentially they've raised that lower bound, and it's highly likely that further discoveries will raise it again. I guess our kids and grandchildren have their work cut out for them.
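    Both of those numbers are easy to sanity-check; a quick sketch (the ~64-component 1965 starting point and the few-attosecond shortest measured interval are my assumptions):

        PLANCK_TIME = 5.39e-44                  # seconds

        # Moore's 1965 extrapolation: annual doubling from ~64 components
        print(64 * 2**10)                       # 65536 -- his "65,000" for 1975

        # The "10^26 Planck times" claim: attosecond-scale measurements
        shortest_measured = 5e-18               # seconds (assumed, ~attosecond scale)
        print(shortest_measured / PLANCK_TIME)  # ~9e25, i.e. roughly 10^26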

  • by line-bundle ( 235965 ) on Tuesday October 13, 2009 @06:07PM (#29738109) Homepage Journal

    I RTFA, but there is nothing in the article -- only talk of 75 years...

    I remember that one way to get an upper limit on frequency is the equation E = hf, the Planck-Einstein relation: for a given amount of energy you can only get so high a frequency. But this was a million years ago in my physics class.
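    To put rough numbers on that bound (the energy values are arbitrary choices for illustration):

        H = 6.626e-34                           # Planck's constant, J*s
        EV = 1.602e-19                          # one electron-volt in joules

        # Planck-Einstein relation E = h*f, so f_max = E / h
        for energy_ev in (1, 1e3, 1e9):         # 1 eV, 1 keV, 1 GeV
            print(f"{energy_ev:g} eV -> f_max ~ {energy_ev * EV / H:.1e} Hz")
        # 1 eV -> ~2.4e14 Hz, 1 keV -> ~2.4e17 Hz, 1 GeV -> ~2.4e23 Hz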

  • PDF on arxiv (Score:3, Informative)

    by sugarmotor ( 621907 ) on Tuesday October 13, 2009 @06:16PM (#29738201) Homepage

    Thermodynamic cost of reversible computing
    thermo-arxiv
    February 1, 2008
    Lev B. Levitin and Tommaso Toffoli

    http://arxiv.org/pdf/quant-ph/0701237v2 [arxiv.org]

    Not sure whether it is the same as the Phys. Rev. Lett. 99, 110502 (2007) paper linked from the article.

    Stephan

  • by anonymousbob22 ( 1320281 ) on Tuesday October 13, 2009 @06:22PM (#29738291)
    Actually, many of the newer components available are far more efficient than their predecessors in terms of power usage.
    Compare Intel's newer processors to the Pentium 4 and you'll see gains in both computing power and power efficiency [wikipedia.org]
  • by SchroedingersCat ( 583063 ) on Tuesday October 13, 2009 @06:26PM (#29738351)
    2.56 × 10^47 bits per second per gram. Ref: Bremermann's limit [wikipedia.org]
  • by SchroedingersCat ( 583063 ) on Tuesday October 13, 2009 @06:50PM (#29738627)
    From Wikipedia [wikipedia.org]: "a computer the size of the entire Earth, operating at the Bremermann's limit could perform approximately 10^75 mathematical computations per second. If we assume that a cryptographic key can be tested with only one operation, then a typical 128 bit key could be cracked in 10^37 seconds. However, a 256 bit key (which is already in use in some systems) would take about a minute to crack. Using a 512 bit key would increase the cracking time to 10^71 years, but only halve the speed of encryption."

    You see - the system that threatens to reduce your lifespan is a much faster way to acquire that key.
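    The quoted figures are easy to re-derive (Earth's mass of ~5.97e27 g is my assumed value); note that the 128-bit number comes out to 10^-37 seconds, not 10^37 -- see the nitpick further down:

        BREMERMANN = 2.56e47                    # operations per second per gram
        EARTH_MASS_G = 5.97e27                  # grams (assumed value)

        ops_per_sec = BREMERMANN * EARTH_MASS_G # ~1.5e75, the "10^75" figure
        for bits in (128, 256, 512):
            seconds = 2**bits / ops_per_sec     # one operation per key tested
            print(f"{bits}-bit key: {seconds:.1e} s = {seconds / 3.15e7:.1e} years")
        # 128-bit: ~2.2e-37 s (the quote dropped the minus sign)
        # 256-bit: ~76 s ("about a minute"); 512-bit: ~2.8e71 years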
  • by Anonymous Coward on Tuesday October 13, 2009 @07:13PM (#29738883)

    What he means is this:

    Logical operations can be composed of other operations.

    For instance, it's possible to build all the Boolean logical operations (AND, OR, NOT, etc.) out of combinations of the NAND (not-AND) operator.

    In 1980, one of the two physicists here showed that there is one such quantum operation that you can use to construct all other logical operations.

    This paper is about the fundamental speed at which you can do that particular quantum operation.

    So, "one computation per second" means one application of his quantum operator.

  • by Anonymous Coward on Tuesday October 13, 2009 @07:23PM (#29738957)

    I checked out the pdf of the paper, and didn't see any numerical limit stated, just equations.

    That's because the linked PRL paper in the article summary is incorrect. It is two years old, not "published today" like TFA states. The actual PRL paper, published today, can be found right here [aip.org].

  • Re:WHAT!! (Score:4, Informative)

    by Straterra ( 1045994 ) on Tuesday October 13, 2009 @07:29PM (#29739009)

    looks like we've almost reached that point now. We've had Xeon 3.0GHz cpus for over 5 years now [archive.org], and they're still coming out with brand new 3ghz processors [slashdot.org]. That's a long time to not see a jump in speed, what happened to "doubling every 18 months"? We should be around 24ghz by now.

    Sorry, Performance != Clockspeed

    I, for one, am glad Intel went away from naming their processors after their clock speed; they went to model numbers for this reason. If you want an example where they didn't, and lower-clocked processors kept up, just go back and look at the Socket 423/478 Pentium 4s vs. the Socket A Athlons (XP, etc.)

  • by physburn ( 1095481 ) on Tuesday October 13, 2009 @07:41PM (#29739113) Homepage Journal
    The paper referenced above is at arXiv [arxiv.org], and doesn't give a maximum computer speed per see. It just proves that a quantum computer running at R computational steps per second will generate heat at a rate Q = hR^2, where h is Planck's constant. The other limit was that R < 4E/h, where E is the average energy of the system. You might get a maximum computing speed out of this, but only if you have a fundamental limit on how fast you can cool the computer (see the quick plug-in of numbers below). Not sure where their fundamental limit comes from, if not from the above paper.

    ---

    Personally, I think Moore's Law will run out somewhere in the early 2020s, and I have blogged [blogspot.com] about what such a computer might be specced as.
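    Plugging numbers into the two expressions from the comment above (the one-joule energy budget is an arbitrary assumption):

        H = 6.626e-34                   # Planck's constant, J*s
        E = 1.0                         # average system energy, joules (assumed)

        R_max = 4 * E / H               # rate bound R < 4E/h, steps per second
        Q = H * R_max**2                # dissipation at that rate, watts
        print(f"R_max ~ {R_max:.1e} steps/s")   # ~6.0e33
        print(f"Q ~ {Q:.1e} W")                 # ~2.4e34 -- hence the cooling problem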

  • by khayman80 ( 824400 ) on Tuesday October 13, 2009 @08:04PM (#29739301) Homepage Journal
    Nitpick: that's 10^(-37) seconds, or ~2M Planck times.
  • by ciderVisor ( 1318765 ) on Tuesday October 13, 2009 @08:09PM (#29739337)

    I see now why so many people are creationists and climate-change deniers.

    Very, very few people are actually denying climate change; change is the norm, stasis the exception. Is it right, however, to lump together those who are skeptical of evolution with those who are skeptical of AGW, particularly CO2-driven AGW?

  • Re:WHAT!! (Score:4, Informative)

    by Chris Burke ( 6130 ) on Tuesday October 13, 2009 @09:01PM (#29739729) Homepage

    To be fair, back in the day of the 386 and 486, AMD processors were essentially clones of their Intel counterparts. The only real difference between the AMD and Intel offerings was bus speed, processor speed, and external clock multiplier.

    Well, at the time AMD was literally a "second source" supplier, because back then companies like IBM actually cared about that sort of thing. I don't think AMD vs. Intel really mattered a lot, since nobody knew of AMD's existence. Hell, I owned two computers with AMD processors and didn't know it until over a decade later.

    When the Pentium was eventually launched, AMD no longer produced a direct clone, and started releasing their processors with 'Performance Rating' (PR) numbers instead of clock speed, effectively claiming that their K5 processors were as efficient as a higher clocked Pentium.

    I've heard AMDers call the K5 "the highest IPC x86 part ever", with a wry smirk because just like how MHz isn't everything, neither is IPC, and it never lived up to those Markethertz numbers in terms of real performance.

    Funnily enough the Pentium showed the weakness of MHz as a raw metric without taking AMD into account. When the Pentium 60 and 75 were released, there were 486 DX4 100s (okay these were actually made by AMD but like I said who knew or cared?), and the P-75 was a better performer. Granted most of my friends were nerds, but many of them understood this concept back then.

    I'd say AMD and the 486-compatible market had as much responsibility for the MHz war as Intel.

    To be sure, AMD only started talking about overall performance instead of just MHz when it started to hurt them. Forget the 486, in the P3 vs K7 days it was all about clock speed and the race to 1GHz. Sure, the architectures were still fairly similar and thus somewhat comparable by clock speed, but that hardly told the whole story.

    It's taken a while for the market to get past comparisons based on clock speed, and I'm glad to see the performance rating numbers dropped.

    They're only semi-dropped. They were still used as frequency ratings relative to Intel's offerings from the K7 Palomino through most of the life of the K8. Now that the P4 has been dropped and Intel has switched to a completely opaque numbering scheme, AMD has switched to a non-comparison-based numbering scheme as well.

    MHz is still a valuable comparison tool, but people seem to understand that you can only compare clock speed within a family of processors.

    Yeah if you don't count the person I first replied to. :P

  • by Anonymous Coward on Tuesday October 13, 2009 @09:29PM (#29739957)

    per se

    http://en.wikipedia.org/wiki/Per_se [wikipedia.org]

    Parent made a typo; you have a more fundamental ignorance.

  • by canadian_right ( 410687 ) <alexander.russell@telus.net> on Wednesday October 14, 2009 @01:25AM (#29741337) Homepage

    The limit he is talking about assumes perfectly operating quantum computers that use the minimum amount of energy possible per operation in the shortest time that a single quantum of information takes to change. It is NOT physically possible to compute faster than this, according to current quantum mechanics theory. To reach this theoretical limit will take a lot of amazing engineering.

    When he said he was talking about the same sort of limit as the speed of light, he wasn't exaggerating. Unless there is some breakthrough in our understanding of the physical universe, this theory really does spell out the fastest possible rate for the basic operation of flipping a single bit.

    This is like the Shannon–Hartley theorem - something you can reach in theory, but likely are not going to in practice.

    Now, I just wish TFA actually gave a bit more information on the limit instead of just saying 10 quintillion times faster than today's computers. Lots of room left for improvement there!
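    "Ten quintillion times faster" is at least easy to turn into a concrete scale (taking a 2009-era petaflop machine, ~10^15 ops/s, as the baseline is my assumption):

        todays_fastest = 1e15           # ops/s, roughly a 2009 petaflop machine (assumed)
        limit = todays_fastest * 1e16   # the article's "10 quintillion times faster"

        print(f"ultimate limit ~ {limit:.0e} ops/s")     # ~1e31 operations per second
        print(f"per-operation time ~ {1 / limit:.0e} s") # ~1e-31 s per bit flip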

