
The Ultimate Limit of Moore's Law

BuzzSkyline writes "Physicists have found that there is an ultimate limit to the speed of calculations, regardless of any improvements in technology. According to the researchers who found the computation limit, the bound 'poses an absolute law of nature, just like the speed of light.' While many experts expect technological limits to kick in eventually, engineers always seem to find ways around such roadblocks. If the physicists are right, though, no technology could ever beat the ultimate limit they've calculated — which is about 10^16 times faster than today's fastest machines. At the current Moore's Law pace, computational speeds will hit the wall in 75 to 80 years. A paper describing the analysis, which relies on thermodynamics, quantum mechanics, and information theory, appeared in a recent issue of Physical Review Letters (abstract here)."
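A back-of-the-envelope check of the 75-to-80-year figure, as a sketch: the 10^16 speed gap comes from the summary above, while the 18-month doubling period is an assumed reading of Moore's Law.

```python
from math import log2

# Back-of-the-envelope check of the summary's 75-80 year figure.
# The 1e16 speed gap is from the summary; the 18-month doubling
# period is an assumption about the pace of Moore's Law.
speed_gap = 1e16
doubling_period_years = 1.5

doublings_needed = log2(speed_gap)                      # ~53 doublings
years_to_wall = doublings_needed * doubling_period_years

print(f"doublings needed: {doublings_needed:.1f}")      # ~53.2
print(f"years until the wall: {years_to_wall:.0f}")     # ~80
```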
  • Efficiency (Score:5, Insightful)

    by truthsearch ( 249536 ) on Tuesday October 13, 2009 @05:45PM (#29737763) Homepage Journal

    So we'll have to wait another 75 years before management lets us focus on application efficiency instead of throwing hardware at the performance problems? Sigh...

  • by History's Coming To ( 1059484 ) on Tuesday October 13, 2009 @05:49PM (#29737813) Journal
    This isn't like the speed of light; it is quite possibly the reason for it.
  • by jopet ( 538074 ) on Tuesday October 13, 2009 @05:51PM (#29737843) Journal

    No exponential growth can continue for more than a comparatively short time. This should be self-evident, but for some reason people seem to ignore it, especially people who call themselves journalists or economists.
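    A rough illustration of that point; both the starting transistor count and the 2-year doubling period below are assumptions, as is the commonly quoted atom count.

```python
from math import log2

# Even modest exponential growth exhausts any physical budget quickly.
# Starting transistor count, doubling period, and the atom count are
# all rough assumptions used only for illustration.
transistors_now = 1e9                   # order of magnitude for a 2009-era chip
atoms_in_observable_universe = 1e80     # commonly quoted rough estimate
doubling_period_years = 2.0

doublings = log2(atoms_in_observable_universe / transistors_now)
print(f"{doublings:.0f} doublings (~{doublings * doubling_period_years:.0f} years) "
      "until one chip would need every atom in the observable universe")
# ~236 doublings, i.e. only a few centuries of sustained doubling.
```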

  • by wb8wsf ( 106309 ) <steve@wb8wsf.org> on Tuesday October 13, 2009 @05:52PM (#29737865)

    Though there might be a limit on how fast a computation can go, I would think that parallel systems will boost that far beyond whatever limit there may be. If we crash into a boundary, multiple systems--or hundreds of thousands of them--will continue the upward trend.

    I suppose there is also the question of whether 10^16 more computing power "ought to be enough for anybody". ;-)

  • by phantomfive ( 622387 ) on Tuesday October 13, 2009 @05:55PM (#29737929) Journal
    Basically he is assuming that eventually we will develop quantum computing, and bases his calculation on the theory of how fast a quantum event can take place. The problem is, given all we don't actually know about quantum mechanics, and all we don't know about super-small things, all it would take is a single observation to throw this minimum out the window.

    In theory, it is nice to make theoretical limits. In practice, the limits are sometimes nothing more than theoretical. We don't know how to make smaller-than-quantum computers yet, but we also don't know how to make quantum computers yet. So this could be a prediction like every other prediction of the end of Moore's law, some of which were based on stronger reasoning than this argument. Interesting argument to make, though.
  • by Anonymous Coward on Tuesday October 13, 2009 @05:58PM (#29737979)

    Parallel computing won't help.
    There's a limit to how fast your compute subsystems can exchange data as well.

  • by imgod2u ( 812837 ) on Tuesday October 13, 2009 @06:17PM (#29738231) Homepage

    We've been at roughly 200 ps per circuit operation for quite some time, and yet processors are still getting faster. Parallel computation: what a novel idea.

  • by nick_davison ( 217681 ) on Tuesday October 13, 2009 @06:22PM (#29738277)

    And were the engineer a hacker, he'd pick up the scientist, carry him half way across the room, set him down and say, "Your turn."

    The game changing hackers are the ones who don't listen to the conventional logic of the time and figure out how to wander along a totally different axis that the "experts" hadn't thought of yet.

    Look at Wolfenstein/Doom. 3D graphics "weren't possible" on home computers at the time. John Carmack turned it into a 2D solution and solved it anyway. Perhaps not perfect in every regard, but still a hell of a lot better than what anyone else was managing.

    Nick's law: at least every 18 months, someone else will declare a limit to Moore's law [and turn out to be wrong].

    With our current understanding of transistor science, I'm sure their point is a wonderful one. Problem is, with enough money behind finding the solution, someone'll come up with another axis to wander along that'll continue the advances. But don't feel bad, I'm sure plenty of people thought cart science had reached its theoretical peak and man would never move faster than horses were capable of, too.

  • by jonbryce ( 703250 ) on Tuesday October 13, 2009 @06:29PM (#29738397) Homepage

    Given that brute force attacks scale to multiple processors better than just about any other task, I don't think there is a limit.
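    Brute-force search does parallelize about as well as anything can, since the slices are completely independent. A minimal sketch of carving a keyspace across worker processes; the toy keyspace size and the stand-in key check are purely illustrative.

```python
from multiprocessing import Pool

KEYSPACE = 2**24          # toy keyspace size, purely illustrative
TARGET = 0xC0FFEE         # the "secret" key the workers are searching for

def check_range(bounds):
    """Scan [start, stop) and return the key if it is in this slice."""
    start, stop = bounds
    for key in range(start, stop):
        if key == TARGET:              # stand-in for a real trial decryption
            return key
    return None

if __name__ == "__main__":
    workers = 8
    step = KEYSPACE // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        hits = [k for k in pool.map(check_range, slices) if k is not None]
    print(f"found: {hits[0]:#x}")      # near-linear speedup: slices never interact
```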

  • by phantomfive ( 622387 ) on Tuesday October 13, 2009 @06:35PM (#29738449) Journal

    You are generating the latter. Kill yourself.

    Great argument, you're a regular Cyrano there.

    How can you have stronger reasoning than something that's based on the limits of what modern physics can understand

    Does this even need to be said? Einstein did it: he took some observations and extrapolated them to show that modern physics was not entirely correct (that is, what was modern physics at the time). Indeed, all scientific theory can only be based on what we've observed. Thus, new observations make for new theory, or corrections in old theory. As we continue to make more observations, for example with the LHC, theory will continue to evolve. Surely even someone of your eloquence can see this.

  • by WH44 ( 1108629 ) on Tuesday October 13, 2009 @06:36PM (#29738467)

    How can you have stronger reasoning than something that's based on the limits of what modern physics can understand (thermodynamics and quantum mechanics)? We have developed quantum computers.

    The previous limits he is referencing were also based on the limits of what modern physics could understand - just making a faulty assumption. He's questioning the assumptions here, too.

    There's skepticism, and then there is metaphysical woowoo babble. You are generating the latter. Kill yourself.

    He is generating the former. Take your own advice.

  • Re:WHAT!! (Score:2, Insightful)

    by iamhassi ( 659463 ) on Tuesday October 13, 2009 @06:55PM (#29738693) Journal
    "so in 80 years my computers processors wont be able to get any faster"

    Looks like we've almost reached that point now. We've had 3.0 GHz Xeon CPUs for over 5 years now [archive.org], and they're still coming out with brand new 3 GHz processors [slashdot.org]. That's a long time to not see a jump in speed; what happened to "doubling every 18 months"? We should be around 24 GHz by now.
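    The 24 GHz figure is just compounding the clock the way the "doubling every 18 months" reading implies; a quick sketch, taking the 3 GHz starting point and the doubling period from the comment above as given.

```python
# Project the clock speed if MHz had kept doubling every 18 months.
# The 3 GHz baseline and the doubling period are taken from the comment above.
base_clock_ghz = 3.0
doubling_period_years = 1.5
years = 5

doublings = years / doubling_period_years            # ~3.3 doublings in 5 years
projected = base_clock_ghz * 2 ** doublings
print(f"projected clock: {projected:.0f} GHz")        # ~30 GHz
# Three full doublings alone would already give 3 -> 6 -> 12 -> 24 GHz.
```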
  • There Is No Limit (Score:1, Insightful)

    by Anonymous Coward on Tuesday October 13, 2009 @07:00PM (#29738753)

    After you have built the machine that cannot possibly be made any faster, then you build more and distribute your problems among them.
    "Reports of my demise have been greatly exaggerated." - Moore

  • by Glasswire ( 302197 ) on Tuesday October 13, 2009 @07:03PM (#29738789) Homepage

    Yup, back in the 80s the physicists said it would be physically impossible to provide switching and encoding that would allow phone-line communication to exceed 2400 baud in modems. Yet before we gave up on phone lines, the modem builders were giving us 56,000 bps connections.
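    The hard ceiling on an analog phone line comes from Shannon's channel capacity rather than from any particular switching technology; a rough sketch, where the bandwidth and SNR figures are typical assumptions for a good voice channel, not measurements.

```python
from math import log2

# Shannon capacity C = B * log2(1 + S/N) for an idealized analog phone line.
# Bandwidth and SNR are assumed values typical of a clean voice channel.
bandwidth_hz = 3400.0
snr_db = 38.0
snr_linear = 10 ** (snr_db / 10)

capacity_bps = bandwidth_hz * log2(1 + snr_linear)
print(f"~{capacity_bps / 1000:.0f} kbit/s")        # roughly 40 kbit/s
# V.34 modems (33.6 kbit/s) sat near this analog ceiling; 56k modems worked
# around it by keeping the downstream path digital until the local loop.
```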

  • by DarkOx ( 621550 ) on Tuesday October 13, 2009 @07:04PM (#29738793) Journal

    The physics folks might have worked out some interesting details here, but that's all it is: interesting. The engineers have already moved on. Getting smaller and going faster has largely passed the point of diminishing returns already. There are few applications that the digital logic we have today can't perform within its time constraints. Even our jet fighters are practically flying themselves. In fact, our computing machines are so fast that we're starting to struggle to justify dedicating them to any one task, not because they're too expensive this time, but because they're so fast that they're just idle most of the time anyway. Virtualization is more or less going back to time-sharing without the pain. It's about doing more at the same time now, hence all the multi-core chips.

  • by Chris Burke ( 6130 ) on Tuesday October 13, 2009 @07:25PM (#29738975) Homepage

    All he's concerned about is quoting how many components can fit on a single integrated circuit. One can see this propagated to processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras but his observation itself is about the size of transistors -- not speed.

    The title should be "The Ultimate Limit of Computing Speed" not Moore's Law.

    Meh.

    While technically correct, the performance corollary of Moore's Law -- which is roughly "more transistors generally means smaller and thus faster transistors rather than exploding die sizes, plus more to do computation with, so performance also increases exponentially, and we observe that this is the case" -- is strong enough that it's often simply called Moore's Law even among the engineers in the chip design industry. It's just understood what you're talking about, even though the time constant is different.

    You'll occasionally see Intel (the company Moore founded) show charts with historical performance and future projections, and they'll include a line labeled "Moore's Law" to show how they're doing relative to the observation. Because technically it is just an observation, and it holds true only to the extent that engineers of the computer, electrical, and material science variety bust their asses to make it true.

    So maybe the layman thinks Moore's Law is about performance, and that's not technically true, but it's correct enough that even the engineers directly affected by it refer to it as if it meant performance. So I say the title is fine.

  • Re:WHAT!! (Score:5, Insightful)

    by Chris Burke ( 6130 ) on Tuesday October 13, 2009 @07:31PM (#29739027) Homepage

    Looks like we've almost reached that point now. We've had 3.0 GHz Xeon CPUs for over 5 years now, and they're still coming out with brand new 3 GHz processors. That's a long time to not see a jump in speed; what happened to "doubling every 18 months"? We should be around 24 GHz by now.

    Performance != MHz.

    Those 3GHz Pentium 4 Xeons suck balls compared to even a Core 2, forget about an i7.

    The only way the P4 got to what were at the time extremely high frequencies was by having a craptastic architecture. It was driven by marketing, which when the P4 was released was all about MHz. People thought MHz == Performance, so they cranked up the MHz for minimal gain in performance. AMD tried like hell to convince people otherwise, but fat lot of good it seemed to do. And now Intel is suffering for their previous emphasis on MHz over all.

  • by linux_geek_germany ( 1079711 ) on Tuesday October 13, 2009 @07:44PM (#29739133)
    please ship me the replicator then :)
  • Two things (Score:4, Insightful)

    by RingDev ( 879105 ) on Tuesday October 13, 2009 @08:17PM (#29739383) Homepage Journal

    1) If you don't approach science as a whole, from the angle of challenging expectations, you're doing it wrong. We don't prove that theories are "right", we fail to disprove them. So if you find the concept of disproving theories to be personally insulting, you have no business in a lab.

    2) Given the attitude you've shown in this thread, you appear to have the interpersonal skills of a Hymenoepimecis argyraphaga wasp. If you behave so improperly when not behind a computer, I would venture a guess that you are all but unemployable, regardless of how intelligent you feel you are. If you are gainfully employed, I would appreciate it if you could conduct yourself in a professional manner when participating in a public debate.

    -Rick

  • Re:WHAT!! (Score:3, Insightful)

    by Burning1 ( 204959 ) on Tuesday October 13, 2009 @08:36PM (#29739521) Homepage

    To be fair, back in the day of the 386 and 486, AMD processors were essentially clones of their Intel counterparts. The only real difference between the AMD and Intel offerings was bus speed, processor speed, and external clock multiplier.

    When the Pentium was eventually launched, AMD no longer produced a direct clone, and started releasing their processors with 'Performance Rating' (PR) numbers instead of clock speed, effectively claiming that their K5 processors were as efficient as a higher clocked Pentium.

    I'd say AMD and the 486-compatible market had as much responsibility for the MHz war as Intel.

    It's taken a while for the market to get past comparisons based on clock speed, and I'm glad to see the performance rating numbers dropped.

    MHz is still a valuable comparison tool, but people seem to understand that you can only compare clock speed within a family of processors.

  • Re:Amaze us (Score:3, Insightful)

    by Cal27 ( 1610211 ) on Tuesday October 13, 2009 @09:19PM (#29739891)
    You'll be able to run Office and watch a flash video in Firefox...
    At the same time.
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Tuesday October 13, 2009 @09:53PM (#29740107)
    Comment removed based on user account deletion
  • In other words... (Score:5, Insightful)

    by jonaskoelker ( 922170 ) <jonaskoelkerNO@SPAMyahoo.com> on Tuesday October 13, 2009 @10:05PM (#29740193)

    In other words, quoting a fortune cookie:

    Progress means replacing a theory that's wrong by a theory that's more subtly wrong.

  • Re:WHAT!! (Score:5, Insightful)

    by Chris Burke ( 6130 ) on Tuesday October 13, 2009 @10:12PM (#29740233) Homepage

    Instead of going faster, cores became more optimized and doubled, quadrupled, and octocores are around the corner if not here already. However, the "Turbo" mode in the i5/i7 shows that cranking up the clock frequency still helps for single/low threaded applications.

    For any given architecture, yes, higher clock speed will mean more performance. That's a given. But that doesn't mean it's worth chasing after by modifying the architecture, which is why there's been a "ceiling" on frequency in favor of more efficient architectures and yes Multi-core Mania.

    So, why don't we have 8, 16, or 24 GHz clock frequencies? Is this only because of limitations in (memory) bus speeds, or is it because of silicon heat-dissipation problems?

    Not so much memory speeds, since memory bandwidth/latency can be a bottleneck for any high-performance design whether it goes the "speed demon" or "brainiac" route.

    As you surmise, power is important. Dynamic power is proportional to clock frequency, and having to add extra flip-flops to store intermediate values in a long pipeline only exacerbates the issue. Those flops also burn static power, which has become a significant portion of the overall chip power budget (part of what doomed the P4 architecture). Power budgets are also much more constrained, and the manufacturers are trying to target fixed power budgets for different market segments. This means extra power burned may actually hurt your clock frequency, partially negating the gains of a high-frequency design. With performance-per-watt becoming a major metric for customers, and yes heat dissipation also being an issue, it doesn't make a lot of sense to chase high performance by also burning lots of power as high frequency designs naturally do.

    So with that in mind, the "speed demon" vs "brainiac" debate leans towards the brainiac side, though the number of gates per pipe stage is already pretty low. Getting it down further means substantially hurting IPC without necessarily gaining tons of frequency. Branch mispredicts still happen. Spending slightly more gates per stage to do a better job of predicting branches or shuffling data around, and having larger caches and TLBs and smarter schedulers, ends up being a better idea.

    But as you say, frequency still matters. So I'd look for future chips to try to eke out as much frequency as possible within a given power envelope, not just by looking at the number of active threads/cores, but also by looking at the actual dynamic power situation and adjusting frequency accordingly. TDP values are worst-case scenarios for OEMs to design cooling solutions around. When the actual power usage is less than the worst case, when you have e.g. an integer-only app where the floating-point units are unused, you can afford to crank up the frequency some and get extra performance.

    It's all about being smart with your power budget these days. That's why 24GHz processors don't make any sense. Intel had very convincing data showing they could scale the P4 up that high and get good performance, but if you've seen the cooling solutions for the P4 Prescott, then you know why that ended up being a dead end.
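    To put rough numbers on the dynamic-power point above, here is a sketch using the standard P ≈ C·V²·f relation; the capacitance, voltage, and frequency values are illustrative assumptions, not figures for any real chip.

```python
# Dynamic CMOS power scales as P ~ C * V^2 * f, and pushing frequency usually
# also requires more voltage, so power grows much faster than clock speed.
# All numbers below are illustrative assumptions.
def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts ** 2 * f_hz

c = 1e-9                                        # effective switched capacitance (assumed)
baseline = dynamic_power(c, 1.0, 3e9)           # 3 GHz at 1.0 V
overclocked = dynamic_power(c, 1.3, 6e9)        # 6 GHz, assuming it needs ~1.3 V

print(f"baseline:  {baseline:.1f} W")
print(f"doubled f: {overclocked:.1f} W ({overclocked / baseline:.1f}x the power)")
# Doubling frequency here costs ~3.4x the power, which is why fixed power
# budgets push designs toward IPC and more cores instead of raw GHz.
```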

  • by theshowmecanuck ( 703852 ) on Tuesday October 13, 2009 @10:14PM (#29740251) Journal
    Get this through your head: If man were meant to fly, God would have given him wings. What? Oh. Never mind.
  • by pclminion ( 145572 ) on Tuesday October 13, 2009 @10:31PM (#29740371)

    I'm just waiting for a peta-hertz computer with a 500 exabyte hard-drive able to do universe simulations in real time that will fit in my pocket

    It is impossible to simulate the universe. This is pretty easy to prove. If it was possible, using some device, to simulate the universe, then it is not actually necessary to simulate the universe -- we only need simulate the device which simulates the universe, since the device is necessarily contained within the universe. This should be easier, because the device itself is much smaller than the entire universe.

    But if simulating the device which simulates the universe, is equivalent to simulating the universe, then that would mean that the complete set of states which define the universe can actually be represented by some subset of those very same states -- the subset of states which describe the device which is being used to simulate the universe. In other words, the universe is a set such that if you remove some subset of states you end up with the same set again. I hope you can see how this is a logical impossibility.

  • Re:Two things (Score:4, Insightful)

    by iris-n ( 1276146 ) on Tuesday October 13, 2009 @11:16PM (#29740697)

    Even though he can't say it politely, he's right. You have to have some healthy skepticism, but at some point it just becomes stupid.

    Can you honestly conceive of "technological advances" that would make FTL communication possible?

    Or some engine more efficient than Carnot's cycle?

    Or a computer that could compute the entire evolution of the universe in a second?

    There are things that just don't make sense. These things are so fundamental that to give up on them you would have to give up all of modern physics, and any hope of being able to correctly describe nature.

  • by Undead NDR ( 1252916 ) on Wednesday October 14, 2009 @04:20AM (#29742051) Homepage Journal

    In my example you have chip A that computes an integer at the maximum speed. But if we develop super parallel computing [...] you could simply feed your problem split up into a couple of billion of the suckers while taking up the same space as a laptop now and to the user it would feel like the machine is fast enough to compute the giant problem instantly, when really it is slicing that problem up into a couple billion pieces and each 'core" is just doing its teeny tiny bit of the pie.

    How fast will the problem be sliced?
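    That question is Amdahl's law in a nutshell: whatever fraction of the job is the serial slicing and merging bounds the speedup no matter how many cores you throw at it. A quick sketch, where the 1% serial fraction is an assumed figure for illustration.

```python
# Amdahl's law: speedup(N) = 1 / (serial + (1 - serial) / N).
# The serial fraction (slicing the problem up and merging results) is assumed.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

serial = 0.01   # assume 1% of the work is the unavoidable slicing/merging
for cores in (2, 8, 64, 1024, 2_000_000_000):
    print(f"{cores:>12} cores -> {amdahl_speedup(serial, cores):8.1f}x")
# Even with a couple of billion "cores", a 1% serial slice caps the
# speedup at roughly 100x.
```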

  • by maxume ( 22995 ) on Wednesday October 14, 2009 @05:49AM (#29742453)

    Well sure, but things aren't likely to get to the point where that program becomes possible while "somebody" is still limited to a few people, so it will become increasingly difficult to stop everyone capable of creating it. And if things don't reach that point, you don't have to stop anyone.

  • Re:Two things (Score:3, Insightful)

    by bkr1_2k ( 237627 ) on Wednesday October 14, 2009 @06:59AM (#29742721)

    We can theorize ways to travel faster than light but not to communicate faster than light? Really? Those two don't mesh, for me. Obviously if there are ways we can travel faster than light we'll be able to communicate at such speeds as well.

  • by BlueParrot ( 965239 ) on Wednesday October 14, 2009 @09:35AM (#29743757)

    The problem is, given all we don't actually know about quantum mechanics, and all we don't know about super-small things, all it would take is a single observation to throw this minimum out the window.

    You make it sound as if all of quantum mechanics is unreliable, stuff we don't know. In reality there are plenty of quantum mechanical limitations that are likely on as solid a footing as other fundamental limits like the conservation of energy and momentum and the second law of thermodynamics.

    Take Heisenberg's uncertainty relation as an example. There may be things we don't fully understand about quantum mechanical phenomena (such as wave function collapse), but I would not hold my breath waiting for a breakthrough that allows accurately measuring a particle's position and momentum at the same time. Likewise, I would not expect to see two fermions occupy the same quantum state (and thereby violate the Pauli exclusion principle) any time soon, nor would I expect the de Broglie wavelength of a particle to be anything other than h/p.

    I think part of the reason a lot of people seem to think QM is some unreliable theory that we don't really understand is simply ignorance of how fundamental it is to modern physics. Put it this way: without QM we would not have solid-state physics, which is what chip designers rely on to make the CPU I'm using to write this. We would not have LEDs or lasers for the optical communications used in many internet backbones, and we would not have nuclear reactors to power the whole thing. (The stability of a nuclear reactor relies on two phenomena, Doppler broadening and the tendency of neutron cross sections to change with neutron energy; both are QM phenomena.)

    Basically, saying "it's just a theory" is as much a naive criticism of Quantum Mechanics as it is a naive criticism of Evolution. It may not be absolute truth (physical theories in general are not), but it very much is the best available description of nature we have, and it is certainly more reliable than assuming without good reason that theory will not agree with practice.
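    As a concrete illustration of how tightly those relations pin things down, a quick numerical check of the de Broglie wavelength and the Heisenberg bound for an electron; the chosen electron speed and the 1% momentum precision are arbitrary assumptions.

```python
# Constants in SI units (CODATA values, rounded).
h = 6.626e-34                       # Planck constant, J*s
hbar = h / (2 * 3.141592653589793)
m_e = 9.109e-31                     # electron mass, kg

v = 1e6                             # assumed electron speed, m/s (illustrative)
p = m_e * v

# de Broglie wavelength: lambda = h / p
wavelength = h / p
print(f"de Broglie wavelength: {wavelength * 1e9:.2f} nm")       # ~0.73 nm

# Heisenberg: delta_x * delta_p >= hbar / 2.
# If momentum is pinned down to 1%, the position uncertainty is at least:
delta_p = 0.01 * p
delta_x_min = hbar / (2 * delta_p)
print(f"minimum position uncertainty: {delta_x_min * 1e9:.1f} nm")  # ~5.8 nm
```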

  • by hardwarefreak ( 899370 ) on Wednesday October 14, 2009 @12:25PM (#29746217)

    You're dead wrong. Quantum theory has been around for over 50 years, and nothing observed in the universe sparked the idea of quantum theory. Its roots are purely mathematical. It was 'discovered' in the human mind and on paper. The first experimental observations of properties of quantum theory weren't made until the 2000s. You seem to be stuck in the realm of Newtonian physics, where everything can be seen, touched, tasted, and smelled. The great physicists come up with original shit in their own minds without influence from the physical world. Einstein and Hawking are in this group. With both men, the theories started in the mind, and were not influenced or guided by the physical world around them. If their theories had been, we'd not have ever had general relativity or Hawking radiation.

    You really need to go read some basic history of physics and cosmology. No great theories are based on previous observation. This is why some people are called "Geniuses" and others aren't. One last note: can you point me to an experimental observation of string theory? [laughs]

  • by Anonymous Coward on Wednesday October 14, 2009 @02:40PM (#29748011)

    Knowns:

    A: Device is finite

    B: Device can simulate the universe completely.

    C: Device exists within the universe

    Conjecture based on postulates:

    1) The universe is finite (because the device is finite and can simulate the universe).

    2) The device is a subset of the universe (because it is inside the universe).

    3) That subset of the universe (the device) is the same size as the universe itself (it has to represent every state of the universe).

    4) From 3, the universe must be infinite, since only an infinite set can be the same size as a proper subset of itself. This contradicts 1, which means one of the "Knowns" is false.
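    Spelled out, the step from 3 to 4 is the Dedekind characterization of infinite sets, reading "simulates completely" as an injection of universe states into device states (an interpretation of the argument above, not something the original comment states).

```latex
% U = set of states of the universe, D \subsetneq U = states of the device.
% "Simulates completely" is read here as an injection f : U \hookrightarrow D.
\[
  f : U \hookrightarrow D \subsetneq U
  \;\Longrightarrow\; U \text{ injects into a proper subset of itself}
  \;\Longrightarrow\; U \text{ is Dedekind-infinite,}
\]
% which contradicts (1), so at least one of the "knowns" A--C must be false.
```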
