MEMS Actuated On Chip Water Cooling

Epoch of Entropy writes "Electronics Cooling has this article: 'A Stanford research team is using MEMS technology to explore the lower bound volume for the heat sink. The technology combines two-phase convection in micromachined silicon heat sinks (microchannels) with a novel electroosmotic pump to achieve minimal heat sink volume at the chip backside. ...features a novel and compact electroosmotic pump, forced two-phase convection in the heat sink, and a remote heat rejecter.' This translates into: we've figured out a way to put a water cooling system right on the CPU."
  • MEMS class (Score:3, Informative)

    by GigsVT ( 208848 ) on Wednesday November 13, 2002 @09:50PM (#4665554) Journal
    For those who are on Dish Network, look in the 9000-level channels for UWTV; they are running a MEMS class. Pretty interesting stuff. I was really surprised at how far they have gotten in this field.
  • Be sure you filter your water well, or you could give your processor a stroke [stroke-site.org]!
  • by tqft ( 619476 )

    Why the hell does a chip need to use 130W in the first place?

    I mean, I am sorry, but what is going on here?

    How soon before I need a heavy duty circuit just to run a new computer?

    Sure, I understand some information theory: more computation ==> greater entropy ==> heat.

    One guy I used to know used to reheat his coffee by leaving it on top of the CPU box of a CSIRO computer in the 70s. Will companies getting rid of big air-conditioned ($$$ in rent and electricity) server rooms turn out to have been a mistake in a few years?

    Is this why IBM is going to computing on demand: because only they can afford to air-condition the new chips being built?

    At this rate Transmeta will actually have a market.
    • Re:130W (Score:3, Informative)

      by QuietRiot ( 16908 )
      I'm not sure exactly what you mean by ...more computation ==> greater entropy ==> heat.

      It has more to do with today's silicon having more, and faster, gate logic than with information theory. Each gate requires a certain amount of power to maintain its state, and a certain amount to change its state. This is where the dissipated watts number comes from. The faster you want each one to switch (higher MHz), the more current is consumed in the switching. Multiply this by the number of gates and you get values like 130W (a rough back-of-the-envelope version of this estimate is sketched below). This is, however, a number that often refers to the power requirement when most of the chip is in operation. Different computations exercise different parts of a modern microprocessor and therefore require various levels of power.

      The computers of yesteryear, as you confirm, had much higher levels of power consumption. This, I believe, is mostly due to larger, less efficient gates and more discrete logic (less functional consolidation). Also, the equipment of yesterday had to spin larger hard drives (more energy required) and big tape motors, etc.

      My point is this. We have come _sooo_ far. 130W is nothing compared to the power requirements of the huge machines that people used to have plugged into 240V circuits. These boxen [compaq.com] didn't do a tenth of what today's machines can do.

      The [computational power]:[electrical power] quotient has risen, and the people we have to thank are those who wear bunny suits all day and those who design these wonderful pieces of technology. The smaller these gates get, the less power they consume. We are getting smaller and less power hungry every day. As we get smaller, we can get faster, and therefore we move back up the curve. Still, the quotient rises, efficiency grows, and mankind stands only to benefit...

      That is, until the chips start designing themselves and make us irrelevant in their world, at which time we will all move back into the trees to begin the cycle anew.
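
      As a rough illustration of the "energy per switch, times gates, times clock" reasoning above, here is a minimal sketch. Every number in it (per-switch energy, gate count, activity factor, clock rate) is an assumed, illustrative value, not a figure for any real processor.

      ```python
      # Back-of-the-envelope sketch of the "energy per switch x number of gates x clock"
      # estimate described above. All values are illustrative assumptions, not
      # datasheet numbers for any real chip.

      SWITCH_ENERGY_J = 2e-15   # assumed energy dissipated per gate transition (~2 fJ)
      GATE_COUNT = 50e6         # assumed number of gates on the die
      ACTIVITY = 0.2            # assumed fraction of gates switching on a given clock
      CLOCK_HZ = 3e9            # assumed 3 GHz clock

      power_w = SWITCH_ENERGY_J * GATE_COUNT * ACTIVITY * CLOCK_HZ
      print(f"Estimated dynamic power: {power_w:.0f} W")   # ~60 W with these numbers
      ```

      Doubling the clock, or the fraction of the chip kept busy, doubles this estimate, which is the point above about faster switching costing more power.
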
      • every irreversible computation creates a net increase in entropy (the 2nd law of thermodynamics in action), and unless something really weird is going on (e.g. a supernova producing neutrinos just before she blows) you will see it as heat.

        Time to start improving the software so the hardware does not have to work so hard, though there is a theoretical limit on the minimum amount of energy required (==> heat produced) to do ANY calculation (a rough comparison against that floor is sketched below).
        • every irreversible computation creates a net increase in entropy (the 2nd law of thermodynamics in action), and unless something really weird is going on (e.g. a supernova producing neutrinos just before she blows) you will see it as heat.

          The amount of power dissipated by current microprocessors is many orders of magnitude higher than the minimum required due to entropy arguments.

          Thus, entropy arguments aren't terribly useful when trying to figure out how much power a chip will dissipate.

          There are a few interesting papers on the subject floating around; the ones that discuss the limits of transistor technology usually touch on this.
        • every irreversible computation creates a net increase in entropy (the 2nd law of thermodynamics in action), and unless something really weird is going on (e.g. a supernova producing neutrinos just before she blows) you will see it as heat.

          Guess we didn't learn much about computational arguments in thermo when I took it a few years back. Lots of PV graphs tho... Thanks for the theory.
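
        To put rough numbers on the two claims above (there is a thermodynamic floor per irreversible bit operation, and real chips sit far above it), here is a minimal sketch of the Landauer bound, kT·ln 2 per erased bit. The clock rate and bits-erased-per-cycle figure are illustrative assumptions, not measurements.

        ```python
        import math

        # Rough comparison of the Landauer floor (k*T*ln 2 per irreversibly erased bit)
        # against a 130 W processor. Clock rate and erasure rate are illustrative guesses.

        K_BOLTZMANN = 1.380649e-23   # J/K
        T_KELVIN = 300.0             # assumed operating temperature, roughly room temp

        landauer_j_per_bit = K_BOLTZMANN * T_KELVIN * math.log(2)   # ~2.9e-21 J

        clock_hz = 3e9               # assumed clock
        bits_erased_per_cycle = 1e9  # assumed irreversible bit operations per cycle
        floor_w = landauer_j_per_bit * clock_hz * bits_erased_per_cycle

        actual_w = 130.0
        print(f"Landauer floor: {floor_w:.1e} W")                   # ~8.6e-03 W
        print(f"Actual / floor: {actual_w / floor_w:.0f}x higher")  # ~15000x
        ```

        Even with generous assumptions about how many bits are erased per cycle, the floor comes out in the milliwatt range, roughly four orders of magnitude below a 130W part, which is why entropy arguments say little about real chip power.
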
      • Each gate requires a certain amount of power to maintain its state, and a certain amount to change its state. This is where the dissipated watts number comes from. The faster you want each one to switch (higher MHz), the more current is consumed in the switching. Multiply this by the number of gates and you get values like 130W. This is, however, a number that often refers to the power requirement when most of the chip is in operation. Different computations exercise different parts of a modern microprocessor and therefore require various levels of power.

        ObNitPick - it's much simpler to think of this in terms of the total capacitance of the chip, as opposed to counting gates (which have capacitance that radically changes as devices scale).

        Layout rules have been more or less the same for a while, so regardless of device size, you'll have roughly the same proportion of the chip being gate, reverse-biased diffusion region (for non-SOI chips), metal that's near the substrate, and so forth. Multiply the area of the chip by this fraction and by the capacitance per unit area for the region type in question, and you get the total capacitance. Assume some fraction of this is switched on each clock, and use CV^2/2 to get the energy lost per clock (it's dissipated resistively in the charging/discharging transistors).

        Summary: power loss is (mostly) proportional to the square of the core voltage, times the core's area, times a capacitance-per-area value, times the clock frequency, times a scaling factor for the fraction of capacitance switched per clock (see the rough numeric sketch at the end of this comment).

        The computers of yesteryear, as you confirm, had much higher levels of power consumption. This, I believe, is mostly due to larger, less efficient gates and more discrete logic (less functional consolidation). Also, the equipment of yesterday had to spin larger hard drives (more energy required) and big tape motors, etc.

        The computers of yesteryear (for the last decade or so, at least) had high power dissipation due to much higher supply voltage (the "V^2" in CV^2/2).

        We've kept cores at the same size or larger (due to fancier implementation designed to improve performance per clock), and we've driven up the clock speed. Something has to give to keep power sane, and so far it's been voltage (though SOI helps by decreasing some of the capacitance).

        There are other factors that change too (leakage is a problem in large, fast SRAM arrays [cache], and the capacitance per area shifts for several reasons), but as an approximation the analysis above holds quite well.

        We're actually in for a bit of stickiness soon, as we're approaching the useful lower limit of the supply voltage for silicon (though we still have quite a ways to go, and there are biasing tricks you can play to make the swing lower for a given supply voltage).

        Many of the papers on the subject are online, and make quite interesting reading.
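
        A minimal numeric sketch of the capacitance-per-area estimate described above. Every constant here (die area, effective capacitance per cm², switched fraction, core voltage, clock) is an assumed, illustrative value, and the C·V²/2-per-clock figure follows the approximation used in this comment.

        ```python
        # Numeric sketch of the capacitance-per-area estimate described above.
        # Every value is an assumed, illustrative number, not data for a real die.

        DIE_AREA_CM2 = 1.5           # assumed die area
        CAP_PER_CM2_F = 2e-7         # assumed effective capacitance per cm^2 (200 nF/cm^2)
        SWITCHED_FRACTION = 0.1      # assumed fraction of capacitance switched each clock
        CORE_VOLTAGE_V = 1.5         # assumed core supply voltage
        CLOCK_HZ = 3e9               # assumed clock

        c_total = DIE_AREA_CM2 * CAP_PER_CM2_F                      # total capacitance, farads
        c_switched = c_total * SWITCHED_FRACTION
        energy_per_clock = 0.5 * c_switched * CORE_VOLTAGE_V ** 2   # C*V^2/2 per clock
        power_w = energy_per_clock * CLOCK_HZ

        print(f"Estimated dynamic power: {power_w:.0f} W")   # ~100 W with these numbers
        ```

        With these made-up numbers the estimate lands around 100W, and halving the core voltage would cut it to a quarter, which is the "V^2" point above about supply voltage being the thing that has had to give.
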
  • on chip heat pipe? (Score:2, Insightful)

    by Hadlock ( 143607 )
    that's what this sounds like to me...
  • Largo: I must overclock! I must overclock! It can be clocked to amazing levels!
    Piro: Calm down!
    Largo: It can be so l33t now that they water cool the chip itself!
    Piro: How did you afford the chip and that watercooling kit?
    Largo: Do not ask questions! Must overclock to be l33t!
    • That's funny. I don't quite understand it, but I actually know people who go out and buy a cheap processor and then spend $100+ on a cooling system... why not just buy the faster processor to begin with?
  • From the article:
    The technology combines two-phase convection in micromachined silicon heat sinks (microchannels) with a novel electroosmotic pump to achieve minimal heat sink volume at the chip backside [1, 2].
    Wow! Somebody finally found an advantage of Microchannel Architecture!

  • elec-troo-smotic pump
    A what? Oh, that's electro-osmotic. Makes perfect sense now.
