The Ultimate Limit of Moore's Law
BuzzSkyline writes "Physicists have found that there is an ultimate limit to the speed of calculations, regardless of any improvements in technology. According to the researchers who found the computation limit, the bound 'poses an absolute law of nature, just like the speed of light.' While many experts expect technological limits to kick in eventually, engineers always seem to find ways around such roadblocks. If the physicists are right, though, no technology could ever beat the ultimate limit they've calculated — which is about 10^16 times faster than today's fastest machines. At the current Moore's Law pace, computational speeds will hit the wall in 75 to 80 years. A paper describing the analysis, which relies on thermodynamics, quantum mechanics, and information theory, appeared in a recent issue of Physical Review Letters (abstract here)."
Transistors Per IC and Planck Time (Score:5, Informative)
Intel co-founder Gordon Moore predicted 40 years ago that manufacturers could double computing speed every two years or so by cramming ever-tinier transistors on a chip.
That's not exactly correct. Moore's Law (or rather, Moore's observation) reads in the original article [intel.com] as:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
All he's concerned about is quoting how many components can fit on a single integrated circuit. One can see this propagated to processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras but his observation itself is about the size of transistors -- not speed.
The title should be "The Ultimate Limit of Computing Speed" not Moore's Law.
Furthermore, we've always had Planck Time [wikipedia.org] as a lower bound on the time of one operation, with our smallest measurement of time so far being about 10^26 Planck times. So essentially they've bumped that lower bound up, and it's highly likely more discoveries will bump it even further. I guess our kids and grandchildren have their work cut out for them.
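For a rough sense of the gap, here's a back-of-the-envelope sketch (the 3 GHz clock is just a typical current figure and the 10^16 factor is the one from the summary; everything is order-of-magnitude only):

    # How many Planck times fit in one clock cycle today, and after
    # the claimed ultimate 1e16 speedup? Order-of-magnitude only.
    PLANCK_TIME = 5.39e-44        # seconds
    clock_hz = 3e9                # ~3 GHz, typical current CPU
    cycle = 1.0 / clock_hz        # ~3.3e-10 s per cycle

    print(f"Planck times per cycle today:        {cycle / PLANCK_TIME:.1e}")          # ~6e33
    print(f"Planck times per cycle at the limit: {cycle / 1e16 / PLANCK_TIME:.1e}")   # ~6e17

So even the claimed ultimate limit still sits a comfortable ~10^17 Planck times above that floor.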
Re:Transistors Per IC and Planck Time (Score:5, Insightful)
In theory, it is nice to make theoretical limits. In practice, the limits are sometimes nothing more than theoretical. We don't know how to make smaller-than-quantum computers yet, but we also don't know how to make quantum computers yet. So this could be a prediction like every other prediction of the end of Moore's law, some of which were based on stronger reasoning than this argument. Interesting argument to make, though.
Re:Transistors Per IC and Planck Time (Score:5, Funny)
Re: (Score:3, Insightful)
Re:Transistors Per IC and Planck Time (Score:4, Insightful)
You make it sound as if all of quantum mechanics is unreliable and stuff we don't know. In reality there are plenty of quantum mechanical limitations that are likely on as solid footing as other fundamental limits like conservation of energy, momentum and the second law of thermodynamics.
Take Heisenberg's uncertainty relation as an example. There may be things we don't fully understand about quantum mechanical phenomena (such as wave function collapse), but I would not hold my breath waiting for a breakthrough which allows accurately measuring a particle's position and momentum simultaneously. Likewise I would not expect to see two fermions occupy the same quantum state (and thereby violate the Pauli exclusion principle) any time soon, nor would I expect the de Broglie wavelength of a particle to be anything other than h/p.
I think part of the reason a lot of people seem to think QM is some unreliable theory that we don't really understand is simply ignorance of how fundamental it is to modern physics. Put it this way: without QM we would not have solid state physics, which is what chip designers rely on to make the CPU I'm using to write this. We would not have LEDs or lasers for the optical communications used in many internet backbones, and we would not have nuclear reactors to power the whole thing. (The stability of a nuclear reactor relies on two phenomena, Doppler broadening and the tendency of neutron cross sections to change with neutron energy; both are QM phenomena.)
Basically, saying "it's just a theory" is as much a naive criticism of Quantum Mechanics as it is a naive criticism of Evolution. It may not be absolute truth (physical theories in general are not), but it very much is the best available description of nature we have, and it is certainly more reliable than assuming without good reason that theory will not agree with practice.
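To put a number on the lambda = h/p the parent mentions, a quick sketch (the 10 eV electron is an arbitrary example, nothing from TFA):

    import math

    # de Broglie wavelength lambda = h / p for a non-relativistic electron.
    h  = 6.626e-34      # Planck constant, J*s
    me = 9.109e-31      # electron mass, kg
    eV = 1.602e-19      # joules per electron-volt

    E = 10 * eV                        # pick a 10 eV electron
    p = math.sqrt(2 * me * E)          # non-relativistic momentum
    print(f"lambda = {h / p:.2e} m")   # ~3.9e-10 m, i.e. a few angstroms

A few angstroms is atomic scale, which is exactly why wave mechanics becomes unavoidable once transistors get that small.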
Re:Transistors Per IC and Planck Time (Score:5, Insightful)
You are generating the latter. Kill yourself.
Great argument, you're a regular Cyrano there.
How can you have stronger reasoning, than something that's based on the limits of what modern physics can understand
Does this even need to be said? Einstein did it: he took some observations and extrapolated them to show that modern physics was not entirely correct (that is, what was modern physics at the time). Indeed, all scientific theory can only be based on what we've observed. Thus, new observations make for new theory, or corrections in old theory. As we continue to make more observations, for example with the LHC, theory will continue to evolve. Surely even someone of your eloquence can see this.
In other words... (Score:5, Insightful)
In other words, quoting a fortune cookie:
Progress means replacing a theory that's wrong by a theory that's more subtly wrong.
Two things (Score:4, Insightful)
1) If you don't approach science as a whole, from the angle of challenging expectations, you're doing it wrong. We don't prove that theories are "right", we fail to disprove them. So if you find the concept of disproving theories to be personally insulting, you have no business in a lab.
2) Given the attitude you've shown in this thread you appear to have the interpersonal skills of a Hymenoepimecis argyraphaga wasp. If you behave so improperly when not behind a computer, I would venture a guess that you are all but unemployable, regardless of how intelligent you feel you are. If you are gainfully employed, I would appreciate it if you could conduct yourself in a professional manner when participating in a public debate.
-Rick
Re:Two things (Score:4, Insightful)
Even though he can't talk, he's right. You have to have some healthy skepticism, but at some point it just becomes stupid.
Can you honestly conceive of "technological advances" that would make FTL communication possible?
Or some engine more efficient than Carnot's cycle?
Or a computer that could compute all the evolution of the universe in a second?
There are things that just don't make sense. These things are so fundamental that to give up on them you would have to give up all of modern physics, and any hope of being able to correctly describe nature.
Re: (Score:3, Insightful)
We can theorize ways to travel faster than light but not to communicate faster than light? Really? Those two don't mesh, for me. Obviously if there are ways we can travel faster than light we'll be able to communicate at such speeds as well.
Re: (Score:2, Insightful)
How can you have stronger reasoning, than something that's based on the limits of what modern physics can understand (thermodynamics and quantum mechanics)? We have developed quantum computers.
The previous limits he is referencing were also based on the limits of what modern physics could understand - just making a faulty assumption. He's questioning the assumptions here, too.
There's skepticism, and then there is metaphysical woowoo babble. You are generating the latter. Kill yourself.
He is generating the former. Take your own advice.
Comment removed (Score:4, Insightful)
Re: (Score:3, Informative)
The limit he is talking about assumes perfectly operating quantum computers that use the minimum amount of energy possible per operation in the shortest time that a single quantum of information takes to change. It is NOT physically possible to compute faster than this, according to current quantum mechanics theory. To reach this theoretical limit will take a lot of amazing engineering.
When he said he was talking about the same sort of limit as the speed of light he wasn't exaggerating. Unless there is some change to quantum mechanics itself, no amount of clever engineering gets you past it.
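For the curious, this kind of bound can be sketched with the Margolus-Levitin theorem: a system with average energy E above its ground state needs at least t = pi*hbar/(2E) to evolve to an orthogonal (i.e. distinguishable) state. The 1 joule below is purely illustrative, not a number from the paper:

    import math

    hbar = 1.0546e-34   # reduced Planck constant, J*s

    def max_ops_per_second(energy_joules):
        # Margolus-Levitin: at most 2E/(pi*hbar) orthogonal state changes per second.
        return 2 * energy_joules / (math.pi * hbar)

    print(f"{max_ops_per_second(1.0):.1e} ops/s per joule")   # ~6e33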
Re: (Score:3, Funny)
Furthermore, you will never get to
Re:Transistors Per IC and Planck Time (Score:4, Interesting)
Planck Time lives in just four dimensions. Imagine opening up, say, the sixth dimension, figuring out the vectors for the next WoW move, then plugging them into real time in the four we live in.
Information exists in so many states at once. Von Neumann liked to talk about Hilbert space.... I think that's where my data lives.
Re:Transistors Per IC and Planck Time (Score:5, Insightful)
All he's concerned about is quoting how many components can fit on a single integrated circuit. One can see this propagated to processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras but his observation itself is about the size of transistors -- not speed.
The title should be "The Ultimate Limit of Computing Speed" not Moore's Law.
Meh.
While technically correct, the performance corollary of Moore's Law -- which is roughly "more transistors generally means smaller and thus faster transistors rather than exploding die sizes, plus more to do computation with, so performance also increases exponentially, and we observe that this is the case" -- is strong enough that it's often simply called Moore's Law even among the engineers in the chip design industry. It's just understood what you're talking about, even though the time constant is different.
You'll occasionally see Intel (the company Moore founded) show charts with historical performance and future projections, and they'll include a line labeled "Moore's Law" to show how they're doing relative to the observation. Because technically it is just an observation, and it holds true only to the extent that engineers of the computer, electrical, and material science variety bust their asses to make it true.
So maybe the layman thinks Moore's Law is about performance, and that's not technically true, but it's correct enough that even the engineers directly affected by it refer to it as if it meant performance. So I say the title is fine.
WHAT!! (Score:2, Funny)
Re:WHAT!! (Score:5, Funny)
I plan on setting up server farms in parallel dimensions
Re:WHAT!! (Score:5, Funny)
Then it's finally time for One Dimension Per Child.
Re: (Score:2)
Re:WHAT!! (Score:5, Funny)
Then it's finally time for One Dimension Per Child.
I do hope you mean one _extra_ dimension per child.
Re: (Score:3, Funny)
Re: (Score:3, Funny)
A little too lengthy for my tastes.
Re: (Score:2)
I would expect that (if this is true) the exponential rate of performance increase from one processor generation to the next will decrease; or, to put it another way, you'll see diminishing returns on processor improvements that keep pushing that 80-year figure further into the future.
Is anyone who understands this sort of stuff commenting? This sounds like it'll be regarded as quite an important discovery.
Re: (Score:2, Insightful)
Looks like we've almost reached that point now. We've had Xeon 3.0GHz CPUs for over 5 years now [archive.org], and they're still coming out with brand new 3GHz processors [slashdot.org]. That's a long time to not see a jump in speed; what happened to "doubling every 18 months"? We should be around 24GHz by now.
Re:WHAT!! (Score:4, Informative)
Looks like we've almost reached that point now. We've had Xeon 3.0GHz CPUs for over 5 years now [archive.org], and they're still coming out with brand new 3GHz processors [slashdot.org]. That's a long time to not see a jump in speed; what happened to "doubling every 18 months"? We should be around 24GHz by now.
Sorry, Performance != Clockspeed
I, for one, am glad Intel moved away from naming their processors after their clock speed. They went to actual model numbers for this reason. If you want an example where clock speed didn't tell the story and lower clock speed processors kept up, just go back and look at the Socket 423/478 Pentium 4's vs. the Socket A Athlons (XP, etc.).
Re:WHAT!! (Score:5, Insightful)
Looks like we've almost reached that point now. We've had Xeon 3.0GHz CPUs for over 5 years now, and they're still coming out with brand new 3GHz processors. That's a long time to not see a jump in speed; what happened to "doubling every 18 months"? We should be around 24GHz by now.
Performance != MHz.
Those 3GHz Pentium 4 Xeons suck balls compared to even a Core 2, forget about an i7.
The only way the P4 got to what were at the time extremely high frequencies was by having a craptastic architecture. It was driven by marketing, which when the P4 was released was all about MHz. People thought MHz == Performance, so they cranked up the MHz for minimal gain in performance. AMD tried like hell to convince people otherwise, but fat lot of good it seemed to do. And now Intel is suffering for their previous emphasis on MHz over all.
Re: (Score:3, Insightful)
To be fair, back in the day of the 386 and 486, AMD processors were essentially clones of their Intel counterparts. The only real difference between the AMD and Intel offerings was bus speed, processor speed, and external clock multiplier.
When the Pentium was eventually launched, AMD no longer produced a direct clone, and started releasing their processors with 'Performance Rating' (PR) numbers instead of clock speed, effectively claiming that their K5 processors were as efficient as a higher clocked Pentium.
Re:WHAT!! (Score:4, Informative)
To be fair, back in the day of the 386 and 486, AMD processors were essentially clones of their Intel counterparts. The only real difference between the AMD and Intel offerings was bus speed, processor speed, and external clock multiplier.
Well at the time AMD was literally a "second source" supplier because back then companies like IBM actually cared about that sort of thing. I don't think AMDvsIntel really mattered a lot, since nobody knew of AMD's existence. Hell, I owned two computers with AMD processors and didn't know it until over a decade later.
When the Pentium was eventually launched, AMD no longer produced a direct clone, and started releasing their processors with 'Performance Rating' (PR) numbers instead of clock speed, effectively claiming that their K5 processors were as efficient as a higher clocked Pentium.
I've heard AMDers call the K5 "the highest IPC x86 part ever", with a wry smirk because just like how MHz isn't everything, neither is IPC, and it never lived up to those Markethertz numbers in terms of real performance.
Funnily enough the Pentium showed the weakness of MHz as a raw metric without taking AMD into account. When the Pentium 60 and 75 were released, there were 486 DX4 100s (okay these were actually made by AMD but like I said who knew or cared?), and the P-75 was a better performer. Granted most of my friends were nerds, but many of them understood this concept back then.
I'd say AMD and the 486-compatible market had as much responsibility for the MHz war as Intel.
To be sure, AMD only started talking about overall performance instead of just MHz when it started to hurt them. Forget the 486, in the P3 vs K7 days it was all about clock speed and the race to 1GHz. Sure, the architectures were still fairly similar and thus somewhat comparable by clock speed, but that hardly told the whole story.
It's taken a while for the market to get past comparisons based on clock speed, and I'm glad to see the performance rating numbers dropped.
They're only semi-dropped. They were still used as relative-frequency indicators relative to Intel's offering from the K7 Palomino through most of the life of K8. Now that the P4 was dropped, and Intel switched to a completely opaque numbering scheme, AMD has switched to a non-comparison-based numbering scheme as well.
MHz is still a valuable comparison tool, but people seem to understand that you can only compare clock speed within a family of processors.
Yeah if you don't count the person I first replied to. :P
Re:WHAT!! (Score:5, Insightful)
Instead of going faster, cores became more optimized and doubled, quadrupled, and octocores are around the corner if not here already. However, the "Turbo" mode in the i5/i7 shows that cranking up the clock frequency still helps for single/low threaded applications.
For any given architecture, yes, higher clock speed will mean more performance. That's a given. But that doesn't mean it's worth chasing after by modifying the architecture, which is why there's been a "ceiling" on frequency in favor of more efficient architectures and yes Multi-core Mania.
So, why don't we have 8, 16, or 24 GHz clock frequencies? Is this only because of limitations in (memory) bus speeds, or is it because of silicon heat dissipation problems?
Not so much memory speeds, since memory bandwidth/latency can be a bottleneck for any high-performance design whether it goes the "speed demon" or "brainiac" route.
As you surmise, power is important. Dynamic power is proportional to clock frequency, and having to add extra flip-flops to store intermediate values in a long pipeline only exacerbates the issue. Those flops also burn static power, which has become a significant portion of the overall chip power budget (part of what doomed the P4 architecture). Power budgets are also much more constrained, and the manufacturers are trying to target fixed power budgets for different market segments. This means extra power burned may actually hurt your clock frequency, partially negating the gains of a high-frequency design. With performance-per-watt becoming a major metric for customers, and yes heat dissipation also being an issue, it doesn't make a lot of sense to chase high performance by also burning lots of power as high frequency designs naturally do.
So with that in mind, the "speed demon" vs "brainiac" debate leans towards the brainiac side. Though the number of gates per pipe stage is already pretty low. Getting it down further means substantially hurting IPC, without necessarily gaining tons of frequency. Branch mispredicts still happen. Having slightly more gates per stage, doing a better job of predicting branches or shuffling data around, having larger caches and TLBs, smarter schedulers, ends up being a better idea.
But as you say, frequency still matters. So I'd look to future chips trying to eke out as much frequency as possible within a given power envelope, not just by looking at the number of active threads/cores, but also by looking at the actual dynamic power situation and adjusting frequency accordingly. TDP values are worst-case scenarios for OEMs to design cooling solutions around. When the actual power usage is less than the worst case, when you have e.g. an integer-only app where the floating point units are unused, you can afford to crank up the frequency some and get extra performance.
It's all about being smart with your power budget these days. That's why 24GHz processors don't make any sense. Intel had very convincing data showing they could scale the P4 up that high and get good performance, but if you've seen the cooling solutions for the P4 Prescott, then you know why that ended up being a dead end.
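For anyone who wants the first-order math behind "dynamic power is proportional to clock frequency": the usual CMOS approximation is P = C * V^2 * f, and the catch is that chasing frequency usually means raising voltage too. The capacitance and voltage numbers below are made-up round figures, not any real chip's:

    # First-order CMOS dynamic power: P = C * V^2 * f.
    def dynamic_power(c_farads, v_volts, f_hz):
        return c_farads * v_volts**2 * f_hz

    C = 1e-9   # effective switched capacitance (illustrative)
    print(dynamic_power(C, 1.0, 3e9))   # 3 GHz at 1.0 V -> ~3 W
    print(dynamic_power(C, 1.4, 6e9))   # 6 GHz at 1.4 V -> ~11.8 W

Doubling the clock while bumping the voltage roughly quadruples the dynamic power, which is a big part of why "just clock it higher" stopped being the plan.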
Re: (Score:3, Insightful)
At the same time.
Efficiency (Score:5, Insightful)
So we'll have to wait another 75 years before management lets us focus on application efficiency instead of throwing hardware at the performance problems? Sigh...
Re: (Score:2)
Meh, I'll be dead by then anyway.
Thank God, I don't think I could deal with kids preening about their new zaptastic whiz bang they just bought off New Egg 70+ years from now as if they had invented the damn thing.
If by some chance I am alive then, I better have made a killing off of NewEgg's IPO.
Re: (Score:2)
In the meantime just keep putting redundant statements in endlessly nested loops, etc.
Re: (Score:3, Interesting)
So we'll have to wait another 75 years before management lets us focus on application efficiency instead of throwing hardware at the performance problems? Sigh...
No, you still won't be doing performance optimizations if that's not what makes the most money...
Re: (Score:2)
Note to parent: don't show this article to management.
Might Prove A Vinge novel correct? (Score:3, Interesting)
about the nature of computation and lightspeed and the like as explored in the wonderful novel A Fire Upon The Deep (Zones of Thought) [amazon.com]
in which the universe has depth and the depth determines how fast things can go including neural tissue, computation, and intergalactic travel. I have long suspected that Earth is towards the shallow end ...
JUST like the speed of light. (Score:5, Insightful)
Passing the buck (Score:5, Funny)
Eh, let's let the singularity happen first, then we'll let the robots take care of the problem.
Re:Passing the buck (Score:5, Funny)
Eh, let's let the singularity happen first, then we'll let the robots take care of the problem.
You mean us?
Re: (Score:3, Funny)
But the singularity might accidentally the world.
No growth can go on forever (Score:4, Insightful)
and exponential growth can only go on for a comparatively very short time. This should be self-evident, but for some reason, people seem to ignore that. Especially people who call themselves journalists or economists.
Re: (Score:3, Interesting)
and exponential growth can only go on for a comparatively very short time. This should be self-evident, but for some reason, people seem to ignore that. Especially people who call themselves journalists or economists.
As far as we know the expansion of the universe and entropy will go on forever.
I doubt that is what you mean though...
Self-contained systems do have limits, unless of course they are self-recursive and holographic.
Like fractals and information...
Economies and ecosystems are not.
Form over function (Score:2)
I figure it will be sort of like the netbook war of today. Manufacturers will realize that there isn't much of a way to get faster, so they will start concentrating on design, reliability and lifespan. It will probably be a golden age in computing.
I'm just waiting for a peta-hertz computer with a 500 exabyte hard-drive able to do universe simulations in real time that will fit in my pocket, go 100 years on a charge and be indestructible.
Re: (Score:3, Funny)
Re:Form over function (Score:5, Insightful)
I'm just waiting for a peta-hertz computer with a 500 exabyte hard-drive able to do universe simulations in real time that will fit in my pocket
It is impossible to simulate the universe. This is pretty easy to prove. If it was possible, using some device, to simulate the universe, then it is not actually necessary to simulate the universe -- we only need simulate the device which simulates the universe, since the device is necessarily contained within the universe. This should be easier, because the device itself is much smaller than the entire universe.
But if simulating the device which simulates the universe, is equivalent to simulating the universe, then that would mean that the complete set of states which define the universe can actually be represented by some subset of those very same states -- the subset of states which describe the device which is being used to simulate the universe. In other words, the universe is a set such that if you remove some subset of states you end up with the same set again. I hope you can see how this is a logical impossibility.
What is the limit? (Score:4, Interesting)
So what is that limit? What units would you express such a limit in? The fundamental unit of information is a bit; what is the fundamental unit of computation? Would you state that rate in "computations per second"? "Computations per second per cm^3"? "Computations per second per gram"?
I checked out the pdf of the paper, and didn't see any numerical limit stated, just equations.
Re:What is the limit? (Score:5, Interesting)
A more practical question: how many bits does my encryption key need now to make brute force cracking impractical for the fastest computer possible in this Universe (i.e. probability of finding the key within my remaining lifespan at most 0.0001%, or 1 in a million)?
And not involving a system that reduces my lifespan, such as one failed attempt kills me, smart-ass.
Re:What is the limit? (Score:4, Funny)
Re:What is the limit? (Score:4, Insightful)
Given that brute force attacks scale to multiple processors better than just about any other task, I don't think there is a limit.
Re:What is the limit? (Score:4, Informative)
You see - the system that threatens to reduce your lifespan is a much faster way to acquire that key.
Re:What is the limit? (Score:5, Informative)
Re:What is the limit? (Score:5, Funny)
Pointing out that someone was off by 74 orders of magnitude isn't a nitpick :-P
Re: (Score:3, Interesting)
The Von Neumann-Landauer limit [wikipedia.org] suggests that each bit of information lost requires ln(2)*kT of energy, which is released as heat. Assuming that, as a minimum, it is necessary to flip through all the bits in the key to brute force it, then a 512 bit key requires 2^512 - 1 bit flips, corresponding to an energy of 4*10^133 Joules [google.co.uk] if done at room temperature, or around 6*10^121 Joules [google.co.uk] if done at the coldest temperature yet achieved, which is 450 picokelvin. Given a universe of mass 1.6*10^55 kg [hypertextbook.com], Einstein's relation E=mc^2 puts the total mass-energy at only around 1.4*10^72 Joules, nowhere near enough to brute force a 512 bit key this way.
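The parent's numbers check out; here's a sketch reproducing them from the Landauer bound (temperatures are the ones given in the comment):

    import math

    # Landauer limit: erasing one bit costs at least k*T*ln(2) joules.
    # Lower-bound the energy to step through all 2^512 keys, as the parent does.
    k = 1.381e-23   # Boltzmann constant, J/K

    def brute_force_energy(bits, temp_kelvin):
        return (2 ** bits) * k * temp_kelvin * math.log(2)

    print(f"{brute_force_energy(512, 300):.1e} J at 300 K")        # ~3.8e133 J
    print(f"{brute_force_energy(512, 450e-12):.1e} J at 450 pK")   # ~5.8e121 J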
Re: (Score:2)
Speed of light.
Re: (Score:2, Informative)
Re: (Score:2)
So a bit is a unit of computation as well as a unit of information? How do you determine how many bits you need to do a computation? Let's say 1 + 1 = 2 (or 10); how many bits is that?
Re: (Score:3, Informative)
Are there really limits? (Score:3, Insightful)
Though there might be a limit on how fast a computation can go, I would think that parallel systems will boost that far beyond whatever limit there may be. If we crash into a boundary, multiple systems -- or hundreds of thousands of them -- will continue the upward trend.
I suppose there is also the question of whether 10^16 more computing power "ought to be enough for anybody". ;-)
Re:Are there really limits? (Score:4, Insightful)
Parallel computing won't help.
There's a limit to how fast your compute subsystems can exchange data as well.
Re: (Score:2)
The limit is on the amount of computation you can do per gram per second. Unless your computer were VERY VERY dense and compact (close to being a black hole, in fact), it would have to be parallel to achieve this limit.
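To put a number on "computation per gram per second": the usual back-of-the-envelope is the Margolus-Levitin bound with all of the rest mass counted as energy (E = mc^2). This is the "ultimate laptop" style estimate, not anything buildable:

    import math

    hbar = 1.0546e-34   # reduced Planck constant, J*s
    c = 2.998e8         # speed of light, m/s

    mass = 1.0                       # 1 kg, illustrative
    E = mass * c**2                  # ~9e16 J of mass-energy
    print(f"{2 * E / (math.pi * hbar):.1e} ops/s per kg")   # ~5.4e50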
Parallel processing... (Score:2)
We're already hitting clock speed "brownouts", and using parallel processing to get around them. To really tell where the limits are you need to look at how small you can make a processor (best case, something like say one bit per Planck length) and how much latency you can afford as information propagates from processor to processor at the speed of light or less.
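As a feel for the latency side of that, here's how far a signal can get per clock tick even at the speed of light (the clock frequencies are arbitrary examples):

    # Distance light travels in one clock cycle, ignoring wire delay entirely.
    c = 2.998e8   # m/s
    for f_hz in (3e9, 30e9, 300e9):
        print(f"{f_hz / 1e9:5.0f} GHz -> {c / f_hz * 100:.2f} cm per cycle")
    # 3 GHz: ~10 cm, 30 GHz: ~1 cm, 300 GHz: ~1 mm -- and real signals are slower.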
Reminds me of a joke (Score:5, Interesting)
A scientist and an engineer are led into a room. They are asked to stand on one side. On the opposite side is Treasure (or delicious cake, if you please).
They are told that they may have the prize if they can reach it; however, they may never go more than half the distance between them and it.
The scientist balks, claiming it is obviously impossible since he can NEVER reach the prize, and leaves the room. The engineer shrugs, walks halfway to the prize 10 times or so, says "close enough", and takes it.
So I guess we'll just see, eh?
Re: (Score:3, Funny)
And a mathematician would stand for a moment, calculate the limit, and then run fullspeed into the wall.
Re: (Score:2)
Re: (Score:2)
... though every version I have ever heard of the joke/tale replaced "Treasure" with "attractive woman". This was partially due to the (sadly) male-only content of my graduate-level EE classes.
Re: (Score:3, Insightful)
And were the engineer a hacker, he'd pick up the scientist, carry him half way across the room, set him down and say, "Your turn."
The game changing hackers are the ones who don't listen to the conventional logic of the time and figure out how to wander along a totally different axis that the "experts" hadn't thought of yet.
Look at Wolfenstein/Doom. 3D graphics "weren't possible" on home computers at the time. John Carmack turned it into a 2D solution and solved it anyway. Perhaps not perfect in every regard, but more than good enough.
You mean mathematician instead of scientist (Score:2)
The scientist would not give up so easily.
The scientist would simply say that the wave function of the cake already overlaps with his wave function and take the cake.
Re: (Score:2)
Doesn't this joke originally have a third person? He is a mathematician, and dies inside the room. They find a piece of paper with him, with "Let's first assume I can reach the treasure/cake..." written on it.
Re: (Score:2)
Re:Reminds me of a joke (Score:5, Funny)
(As an engineer...)
Nah, that's not breaking the rules. After ten "moves", the eleventh move is simply to reach out and grab the treasure. If you average out his body's movement, you'll find that he has not, actually, traversed farther than half way to the treasure. Only a mathematician would consider the leading edge to be representative of the body, whereas an engineer would consider the centre of gravity to be representative (assume a spherical body... hey, no assumption required!), and thus there'd be no problem in reaching out to grab the treasure as long as his centre of gravity hasn't proceeded more than halfway between his previous location and the treasure. Mind you, if it's very heavy treasure, this may be more difficult.
Electricity cost comes first... (Score:4, Interesting)
At the current rate of progress, so to speak, no one will be able to afford a computer that runs 10^16 times faster than current systems. Even as a gamer, I'm already leery of buying any of the newer video cards and CPU setups, after reviewing the cost in electricity needed to run them for a year compared to my existing system - they use somewhere around 4 times the electricity!
I can understand fitting more transistors onto a chipset, and more chipsets onto a system, but even with nanotech and similar technologies, I don't see much chance for each transistor to use proportionally less electricity to allow 10^16 more of them to be running at once. You'd have to run a conductance cable to the sun to get that kind of power.
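The "cable to the sun" quip isn't far off if energy per operation didn't improve at all; a crude scaling sketch (the 100 W CPU and the zero-efficiency-gain assumption are both mine):

    # Naive scaling: today's CPU power times a 1e16 speedup, assuming no
    # improvement whatsoever in energy per operation (deliberately pessimistic).
    cpu_watts = 100.0             # a typical desktop CPU, illustrative
    scaled = cpu_watts * 1e16     # 1e18 W

    sunlight_on_earth = 1.7e17    # W, roughly all sunlight intercepted by Earth
    print(f"{scaled:.0e} W, about {scaled / sunlight_on_earth:.0f}x all sunlight hitting Earth")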
Ryan Fenton
Re: (Score:2, Informative)
Compare Intel's newer processors to the Pentium 4 and you'll see gains in both computing power and power efficiency [wikipedia.org]
Anyone else get the feeling... (Score:3, Interesting)
that the ultimate limit is the processes that the universe itself uses to "compute" its own state? That we can only ever asymptotically approach this limit? Once we hit the limit, our computations cease being simulations and become reality.
Re:Anyone else get the feeling... (Score:5, Funny)
Lay off the bong hits.
Does this mean no warp drive? (Score:2)
I guess it is all just fiction after all.
Re: (Score:2)
Nothing interesting in article. (Score:3, Informative)
I RTFA but there is nothing in the article. Only talk of 75 years...
I remember one way to get an upper limit on frequency is using the equation E=hf, the Planck-Einstein relation. For a given amount of energy you can only get so much frequency. But this was a million years ago in my physics class.
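The parent's E=hf argument in numbers (the energies are just familiar reference points, nothing from the paper):

    # Planck-Einstein relation: f = E / h. A given energy per "event" caps
    # the corresponding frequency.
    h  = 6.626e-34    # Planck constant, J*s
    eV = 1.602e-19    # joules per electron-volt

    for energy_ev in (0.025, 1.0, 1000.0):   # ~kT at room temp, ~1 eV, ~1 keV
        print(f"{energy_ev:8.3f} eV -> {energy_ev * eV / h:.1e} Hz")
    # 1 eV works out to about 2.4e14 Hz.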
PDF on arxiv (Score:3, Informative)
Thermodynamic cost of reversible computing
thermo-arxiv
February 1, 2008
Lev B. Levitin and Tommaso Toffoli
http://arxiv.org/pdf/quant-ph/0701237v2 [arxiv.org]
Not sure it is the same as in the Phys. Rev. Lett. 99, 110502 (2007) -- linked from the article -- which is from 2007
Stephan
Re: (Score:2)
Not sure it is the same as in the Phys. Rev. Lett. 99, 110502 (2007) -- linked from the article -- which is from 2007
The abstract is the same, so it appears to be the same paper.
I haven't read the article all the way through yet. Do the authors take into account the possibility of optical or quantum computers?
Fundamental time unit (Score:3, Insightful)
We've been at roughly 200 ps per circuit operation for quite some time, and yet processors are still getting faster. Parallel computation, what a novel idea.
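Parallelism has its own well-known ceiling, though; for reference, Amdahl's law quantifies it (the 5% serial fraction is an arbitrary example):

    # Amdahl's law: with serial fraction s, speedup on n processors is
    # 1 / (s + (1 - s) / n), which saturates at 1/s no matter how many cores.
    def amdahl_speedup(serial_fraction, n_procs):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    for n in (2, 8, 64, 1024):
        print(f"{n:5d} cores -> {amdahl_speedup(0.05, n):5.2f}x")
    # Tops out near 20x for a 5% serial fraction.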
Too bad speed is just a byproduct (Score:2)
of Moore's law. Moore's law has to do with the cost of a number of transistors in a given area of silicon.
There are practical limits we are running into that are getting harder and harder to solve.
We are approaching the point where 1 particle of metal per billion can ruin a fab process.
In order to bypass that, we will need self-contained fabs, which would have an even more limited lifecycle than current fabs.
This means the cost of chips could rise dramatically. I don't think many people are going to spend that kind of money on a CPU.
No Zen (Score:2)
As usual, Zen is ignored. They don't take into account that when nothing happens that can also be your computation (accuracy -> oo).
Stephan
Constrained Freedom (Score:3, Interesting)
per TFabstract: "errors that appear as a result of the interaction of the information-carrying system with uncontrolled degrees of freedom must be corrected."
Would not quantum teleportation via entanglement provide a means of distributing computation, including massively parallel computation? Quantum teleportation would provide a constraint that would redefine the problem by redefining the environment (i.e. uncontrolled degrees of freedom). Replace Moore's Law with Bell's Theorem.
And does not quantum computing operate on all possible states, with the answer inherent in the wave function? Spew out the entangled qubits as needed and let them fight it out as a quantum form of Swarm.
If a result can be obtained this way, you may still have a problem with simultaneity -- the answer may arrive "before" the question, making it impossible to decode. However the problem then becomes a limitation of spacetime's ability to pass definitive information, and the limit of computation itself if such exists and/or can be measured in this context becomes moot. Being able to error trace via backtrack is similarly hampered but for the same reason and would still be possible post hoc.
But if a computational system is devised that can operate on such principles, and it is to be used for practical calculations, be aware that any defining of arguments will be restricted to the input end and results for comparison and decision making may not yet be available for such decisions (assuming a reasonable latitude of autonomous action). In which case, make sure you teach it phenomenology *before* putting it to work.
10^16 times faster than today's fastest machines (Score:2)
Yeah, until I hit the Turbo(tm) button! 11^16, baby! 11, because that's one more, isn't it?
If by "immediately" there is a limit, then yes. (Score:2)
It is imaginable that through some of that quantum black magic all possible answers are calculated instantaneously and the correct one selected at the same time and delivered upon query. The bottleneck in that would be the speed in which the question can be presented.
Subspace FTL field (Score:5, Funny)
So the solution is very obvious. Just put the entire computer in subspace field that creates a pocket of reality where the speed of light is faster (many times faster). Course you then have to have some mechanism for speeding up and slowing down data coming in the ODN conduits. It's been commonly done since the early 24th century. All of these pesky "limits" can be worked around with some fancy level-three diagnostics.
Re: (Score:3, Funny)
I recall a short story of a "US vs USSR" style chess championship (or "Deep Blue" vs another computer...). Each side put up their best computer for the contest.
One side had an ace in the hole, though... they had developed a field that sped up the passage of time. Set a computer in it, and it could calculate all possible moves from a given position in a reasonable amount of time.
So:
Our heroes' computer made an opening move.
The foe's computer, able to calculate all possible moves from that position, resigned.
Yay, we did it again! (Score:2)
I hate it when they do that. "This is the limit" really means "based on our current understanding of what's available, this is the limit" -- that's great for the present tense, and it's total garbage for the future tense -- not that English has a future tense, and this is precisely the reason.
But that's fine. 75 years, huh? There's a very slim possibility that I'll still be here to point and laugh in their face in 75 years. But there's a virtual guarantee that I'll be able to point and laugh in their face well before then.
Even when you hit the limit you can add more cpus (Score:2)
Even when you hit the limit you can add more CPUs / other helper chips.
Right now a lot of stuff is starting to be coded to use many cores / CPUs / GPU + CPU.
Physicists said we could not exceed 2400 baud too (Score:2, Insightful)
Yup, back in the 80s the physicists said it would be physically impossible to provide switching and encoding which would allow phone line communication to exceed 2400 baud in modems. Yet before we gave up on phone lines, the modem builders were giving us 56,000 baud connections.
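For what it's worth, the bound in question there was channel capacity rather than a hard physical limit; a Shannon-limit sketch for an analog voice line (bandwidth and SNR are ballpark assumptions):

    import math

    # Shannon capacity: C = B * log2(1 + SNR).
    def shannon_capacity(bandwidth_hz, snr_db):
        snr = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1 + snr)

    print(f"{shannon_capacity(3100, 35) / 1000:.1f} kbit/s")   # ~36 kbit/s for ~3.1 kHz at ~35 dB

56k modems got past that by exploiting the mostly-digital trunk on the downstream side, not by beating Shannon.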
The theory people arrive a bit late to the party (Score:3, Insightful)
The physics folks might have worked out some interesting details here, but that's all it is: interesting. The engineers have already moved on. It's not about getting smaller, and going faster has largely passed the point of diminishing returns already. There are few applications the digital logic we have today can't perform within time constraints. Even our jet fighters are practically flying themselves. In fact our computing machines are so fast we're starting to struggle to justify their application to any one task, not because they are too expensive this time but because they are so fast that they're just idle most of the time anyway. Virtualization is more or less going back to time sharing without the pain. It's about doing more at the same time now, hence all the multi-core chips.
Re: (Score:2)
There's a corollary somewhere that the speed of software halves every 18 months, it's been around for years.
Re: (Score:2)
There's a corollary somewhere that the speed of software halves every 18 months, it's been around for years.
"Somewhere" is probably some article that mentioned Wirth's law [wikipedia.org], possibly under the name "Gates' law" or "Page's law" or "Reiser's law".
Re: (Score:2)
The further away your neighbor's blob of idle computronium is, the higher the latency you incur by using it.
Re: (Score:2)
Finally, Windows will run fast enough to be useful.
Yeah, but Windows 79 is releasing next week, and I hear it's kind of slow on only 500 exabytes of RAM...
Re: (Score:2)
Re: (Score:2)
Re:The problem is... (Score:4, Interesting)
As countless such laments throughout recorded history have shown, worries about the intellectual demise of the youth are greatly overblown.
Re: (Score:2)
Complain that it was paid for with your tax dollars and demand a free copy.
To which the typical response will be "only partially, that's why we have these fees, to recoup the cost of the investment. Ergo, as your tax dollars didn't pay for the entirety of the cost of publication, you're not entitled to it gratis".