MEMS-Actuated On-Chip Water Cooling
Epoch of Entropy writes "Electronics Cooling has this article: "A Stanford research team is using MEMS technology to explore the lower bound volume for the heat sink. The technology combines two-phase convection in micromachined silicon heat sinks (microchannels) with a novel electroosmotic pump to achieve minimal heat sink volume at the chip backside. ...features a novel and compact electroosmotic pump, forced two-phase convection in the heat sink, and a remote heat rejecter." This translates into: We've figured out a way to put a water cooling system right on the CPU."
MEMS class (Score:3, Informative)
U Gotz 2 h4v3 th3 Flow, Yo (Score:1, Troll)
130W (Score:1)
Why the hell does a chip need to use 130W in the first place?
I mean, I am sorry, but what is going on here?
How soon before I need a heavy duty circuit just to run a new computer?
Sure, I understand some information theory: more computation ==> greater entropy ==> heat.
One guy I used to know used to reheat his coffee by leaving it on top of the CPU box of a CSIRO computer in the 70's. Will companies getting rid of big air-conditioned ($$$ in rent and electricity) server rooms turn out to have been a mistake in a few years?
Is this why IBM is going to computing on demand - because only they can afford to air-condition the new chips being built?
At this rate Transmeta will actually have a market.
Re:130W (Score:3, Informative)
It has more to do with today's modern silicon having more, and faster, gate logic than with information theory. Each gate requires a certain amount of power to maintain its state, and a certain amount to change its state. This is where the dissipated-watts number comes from. The faster you want each one to switch (higher MHz), the more current is consumed in the switch. Multiply this by the number of gates and you get values like 130W. This is, however, a number that often refers to the power requirement when most of the chip is in operation. Different computations exercise different parts of a modern microprocessor and therefore will require various levels of power.
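To make that gate-count arithmetic concrete, here is a rough back-of-the-envelope sketch; every figure in it (gate count, energy per switch, activity factor, clock) is a made-up round number for illustration, not data from the article:

```python
# Toy estimate of dynamic power from gate count, switching energy, and clock.
# All numbers are assumptions chosen only to illustrate the scaling.
num_gates = 50_000_000        # hypothetical number of logic gates on the die
energy_per_switch = 2e-15     # assumed energy (joules) to toggle one gate
activity = 0.2                # assumed fraction of gates switching each cycle
clock_hz = 3.0e9              # assumed 3 GHz clock

# power = (gates switching per cycle) * (energy per switch) * (cycles per second)
dynamic_power = num_gates * activity * energy_per_switch * clock_hz
print(f"Estimated dynamic power: {dynamic_power:.0f} W")   # ~60 W with these toy numbers
```

Real chips add leakage, I/O drivers, and the clock tree on top of this, which is part of how you end up at figures like 130W.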
The computers of yesteryear, as you confirm, had much higher levels of power consumption. This, I believe, is mostly due to larger, less efficient gates and more discrete logic (less functional consolidation). Also, the equipment of yesterday had to spin larger hard drives (more energy required) and big tape motors, etc.
My point is this. We have come _sooo_ far. 130W is nothing compared to the power requirements of the huge machines that people used to have plugged into 240V circuits. These boxen [compaq.com] didn't do a tenth of what today's machines can do.
The [computational power]:[electrical power] quotient has risen, and the people we have to thank are those who wear bunny suits all day or those who design these wonderful pieces of technology. The smaller these gates get, the less power they consume. We are getting smaller and less power hungry every day. As we get smaller, we can get faster, and therefore we move back up the curve. Either way, the quotient rises, efficiency grows, and mankind stands only to benefit...
That is, until the chips start designing themselves and make us irrelevant in their world, at which time we will all move back into the trees to begin the cycle anew.
computation - entropy (Score:1)
Time to start improving the software so the hardware does not have to work so hard, though there is a theoretical limit on the minimum amount of energy required (==> heat produced) to do ANY calculation.
Re:computation - entropy (Score:2)
The amount of power dissipated by current microprocessors is many orders of magnitude higher than the minimum required due to entropy arguments.
Thus, entropy arguments aren't terribly useful when trying to figure out how much power a chip will dissipate.
There are a few interesting papers on the subject floating around; the ones that discuss the limits of transistor technology usually touch on this.
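To put rough numbers on "many orders of magnitude" (my own figures, not from the posts above): the Landauer limit of kT ln 2 per bit erased at room temperature is about 3e-21 J, while a 130W chip spends vastly more energy on every operation it performs.

```python
import math

# Rough comparison of the Landauer limit (thermodynamic minimum energy to
# erase one bit) against the energy a current CPU actually spends per
# operation. Power, clock, and ops-per-cycle are assumed round figures.
k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # ~2.9e-21 J per bit erased

power = 130.0                       # assumed chip power, W
clock_hz = 3.0e9                    # assumed clock, Hz
ops_per_cycle = 1e4                 # rough guess at bit-level operations per cycle

energy_per_op = power / (clock_hz * ops_per_cycle)
print(f"Landauer limit:    {landauer:.2e} J per bit")
print(f"Actual (approx.):  {energy_per_op:.2e} J per op")
print(f"Ratio: ~{energy_per_op / landauer:.1e}x above the limit")
```

Even with a generous guess at operations per cycle, the chip sits roughly nine orders of magnitude above the limit, which is why entropy arguments tell you little about real dissipation.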
Re:computation - entropy (Score:2)
Guess we didn't learn much about computational arguments in thermo when I took it a few years back. Lots of PV graphs tho... Thanks for the theory.
ObNitPick - Source of power dissipation. (Score:3, Informative)
ObNitPick - it's much simpler to think of this in terms of the total capacitance of the chip, as opposed to counting gates (which have capacitance that radically changes as devices scale).
Layout rules have been more or less the same for a while, so regardless of device size, you'll have roughly the same proportion of the chip being gate, reverse-biased diffusion region (for non-SOI chips), metal that's near the substrate, and so forth. Multiply the area of the chip by this fraction and by the capacitance per unit area for the region type in question, and you get the total capacitance. Assume some fraction of this is switched on each clock, and use CV^2/2 to get the energy lost per clock (it's dissipated resistively in the charging/discharging transistors).
Summary: power loss is (mostly) proportional to the square of the core voltage, times the core's area, times a capacitance-per-area value, times the clock frequency, times an activity (switching) factor.
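A minimal sketch of that estimate, with assumed round figures for die area, capacitance density, activity factor, and core voltage (illustrative inputs, not measurements):

```python
# Dynamic power estimated from total chip capacitance rather than gate count,
# following the CV^2/2-per-clock approximation above. All inputs are assumptions.
die_area_cm2 = 1.5            # assumed core area, cm^2
cap_per_cm2 = 2e-7            # assumed effective capacitance, F per cm^2
activity = 0.15               # assumed fraction of capacitance switched per clock
v_core = 1.3                  # assumed core voltage, V
clock_hz = 3.0e9              # assumed 3 GHz clock

total_cap = die_area_cm2 * cap_per_cm2                    # total switchable capacitance
energy_per_clock = activity * total_cap * v_core**2 / 2   # CV^2/2, per clock
power = energy_per_clock * clock_hz
print(f"Estimated dynamic power: {power:.0f} W")           # ~110 W with these inputs
```

The point is not the particular numbers but that the estimate needs only area, capacitance density, voltage, activity, and clock rate, with no gate counting.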
The computers of yesteryear, as you confirm, had much higher levels of power consumption. This, I believe, is mostly due to larger, less efficient gates and more discrete logic (less functional consolidation). Also, the equipment of yesterday had to spin larger hard drives (more energy required) and big tape motors, etc.
The computers of yesteryear (for the last decade or so, at least) had high power dissipation due to much higher supply voltage (the "V^2" in CV^2/2).
We've kept cores at the same size or larger (due to fancier implementation designed to improve performance per clock), and we've driven up the clock speed. Something has to give to keep power sane, and so far it's been voltage (though SOI helps by decreasing some of the capacitance).
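As a quick illustration of how much the V^2 term alone matters (the voltages are examples I picked, not from the thread):

```python
# Same capacitance, same activity, same clock: only the supply voltage differs.
v_old, v_new = 5.0, 1.3              # assumed old 5 V logic vs. a modern ~1.3 V core
ratio = (v_old / v_new) ** 2
print(f"The 5 V part burns roughly {ratio:.0f}x the power")   # ~15x
```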
There are other factors that change too (leakage is a problem in large, fast SRAM arrays [cache], and the capacitance per area shifts for several reasons), but as an approximation the analysis above holds quite well.
We're actually in for a bit of stickiness soon, as we're approaching the useful limits to the supply voltage for silicon (though we still have quite a ways to go, and there are biasing tricks you can play to make the swing lower for a given supply voltage).
Many of the papers on the subject are online, and make quite interesting reading.
on chip heat pipe? (Score:2, Insightful)
/\/\Ust 0v3r(l0(K (Score:1, Funny)
Re:/\/\Ust 0v3r(l0(K (Score:1)
Finally a use for the old IBM technology! (Score:1)
smotic? (Score:1)