Nanotube Transistors

orn writes: "Reuters is reporting that IBM has taken a step toward using carbon nanotubes as the next best thing since sliced silicon wafers. They created a process that could potentially be turned into a manufacturing method to make extremely small transistors. Yahoo has a copy of the article. We all know that smaller transistors mean higher speeds! Yum!" The NY Times has another article about the same technology, which we've mentioned before.
This discussion has been archived. No new comments can be posted.

  • That would be great, if only the wiring of the motherboard and the programming of the chipset on the motherboard didn't become exponentially more difficult as you add CPUs.

    Can you imagine how hard it would be to program the firmware for the memory controller chip!! It would have to balance the load of memory access requests from peripherals plus millions of CPUs!!

    If you want to have a massively parallel proc. machine, build a Beowulf cluster; it will be easier to manage.
  • There are three points to address:

    Heat: Straight (non-semiconducting) carbon nanotubes conduct heat far better than copper or any other conventional conductor. Moreover, carbon nanotubes can withstand far higher temperatures than silicon.

    Yield: This problem grows with the number of transistors, with little effect from the way you stack them.

    Interference: This, I believe, is the problem that is actually solved by carbon nanotubes. It appears that natively they conduct well along the tube and insulate in the perpendicular direction.

    Petrus
  • Hey, dude! Come on, use YOUR head! My comment about the money was a JOKE!

    Did you actually READ this interview of Gordon Moore [technologyreview.com] that I referenced? Do you understand the term 'sarcasm'? I don't think so.

    I'll try not to flame you, because that's not what computer engineers do.

  • Scaling down does make for shorter channel lengths (less distance to travel) and smaller capacitive loads (quicker to charge), making for faster logic. So that will help bring faster computers. And making smaller gates does inherently space them more closely (shrinking wires), so you get some speed gain there too (though scaling down wires can increase resistance and parasitic capacitance).

    The real problem, though, is inter-module communication. The modules are still spaced about as far apart as they ever were... and there are always going to be more and more of them on die (thanks to increasingly complex architecture tricks). So it's that wire that runs essentially "all the way across the chip" which keeps a pretty constant delay. Communication tricks and/or smaller (less complicated) dies are really the only way to get around that one.
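    (A back-of-the-envelope Python sketch of why that cross-chip wire hurts - every number here is made up purely to show the scaling, not real process data:)

        # gate delay shrinks with feature size; a wire spanning the die does not
        def gate_delay_ps(feature_um, base_ps=100.0, base_um=0.18):
            # assume delay roughly proportional to feature size
            return base_ps * feature_um / base_um

        def cross_die_delay_ps(die_mm=10.0, mm_per_ps=0.1):
            # on-chip signals cover very roughly 0.1 mm/ps; die size stays constant
            return die_mm / mm_per_ps

        for f in (0.18, 0.13, 0.09):
            print(f, "um:", gate_delay_ps(f), "ps gate,", cross_die_delay_ps(), "ps cross-die")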
    ---------
  • They said 500 times smaller? I want 500 G4s on one chip.

    Who needs a Cray when you've got one of these.

    Combine this with the optical memory in an earlier article [slashdot.org], and you're set.

    Give me MacOS X on one of these and I'll be a really happy man.

  • Hybrid chips would be extremely dense, allowing substantial gains in processor speeds or the amount of data a memory chip can hold.

    Wee! We'll (consumers) reach the 4GB memory limit for 32-bit processors after all! And it'll be cheap!

    **Checking pricewatch** "Hmm, Corsair 2048MB PC4200 DDR SDRAM at $159? Nah, I'll go for the 4096MB stick at $221!"

    Can you say cheap, volatile solid-state storage? It might be a bad thing though, if the novel you've been working on for 2 years suddenly gets lost when your computer has to reboot. =)
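    (For the record, the 4GB figure is just the 32-bit address space - a two-line Python check:)

        # a 32-bit pointer can address 2**32 distinct bytes
        print(2**32)             # 4294967296 bytes
        print(2**32 // 2**30)    # = 4 GiB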

    -Cyc

  • I'm sure a Connection Machine II won't run you too much these days. 10,000 processors, but it won't fit on your desk. ah well.

    Something else you're not addressing about the brain is that the brain is not silicon. They don't work the same way. Parallelism concepts are wildly divergent amongst computers, let alone amongst biological processes.

  • There is a Michael Sims [slashdot.org] post [slashdot.org] that explains that, in fixing moderation, they gave a whole lot of people mod points.

    I myself blame it on an overreaction to the spork crapflooders, though.
  • I'll remember this the next time I design fictional circuits. Sorry, unless your whole world is SPICE simulations, the voltage ramps down.
  • Cooler, lower-power chips made by IBM, eh? I happen to be particularly fond of the IBM G3 PowerPC chip that resides in my PowerBook. No fan, up to 4 realistic hours of battery life, and it can play DVDs on the train.
  • Faster transistors aren't going to help a lot of things; there's just more time wasted waiting for data from slow disks. I'd like to see multi-GB solid-state storage at the same price as disk drives (or at least close).
  • I want 2GHz that runs on low power and low heat, and all the time.

    No you don't. Admit it - you want several low-power, low-heat, SMALL 2GHz processors running in parallel. This technology may provide it.

    The article quotes Moore's Law and how in 18 to 20 years we will have reached the physical limits of silicon. Does this imply that this technology will not be commercially available for the majority of that time?
    --

  • If you read the article, really the whole point of it is that the IBM researchers have devised a NEW process by which these nanotubes can be mass-produced.
  • I'm actually very surprised that no one has mentioned this here, but to me it's not just the increased speed of these that is interesting. What most people apparently aren't aware of is that these same tubes are what are responsible for passing current through the synapses in brain cells.

    Seems to me these types of materials could lead to new breeds of computers that have the type of power that our brains have. It wouldn't surprise me in the least if the first REAL steps toward developing AI involve the use of nanotubes.

  • You seem to be unable to grasp that what's important here is not that ONE transistor is faster than another, but that in the future with this kind of scale reduction two things are possible -- reduced power consumption and MORE transistors working at the same time in a much much smaller space.
  • Dig it:

    - Smallest working transistor -- way WAY smaller than a silicon device could ever be (a human hair is 50,000X thicker -- we're talking small).
    - The next step that scientists are already working on is REDUCING the size of the gate. Do you know what that means for performance? Can you guess?
    - The paper in Science also shows that this device has excellent electrical properties.
    - Also, carbon nanotubes are not only good for this: the carbon-carbon bond creates the strongest material on Earth -- no debate -- there is no stronger thing on Earth. Imagine a car or a building made of this stuff...
  • My electric bill is $25/mo, and I split it w/my roommate. $12/mo to get 9.9 Mkeys/s in RC5 is worth it. Sorry.
  • I'll remember this the next time I design fictional circuits. Sorry, unless your whole world is SPICE simulations, the voltage ramps down.

    *sigh*.

    Look up high-voltage CMOS processes offered by your local foundries.

    The reason why you're only encountering low-voltage processes is that high-voltage processes presumably aren't useful for the applications you deal with. Arguing that you can't build deep submicron transistors that use high voltages is silly (you almost certainly know yourself what would be changed to do it).
  • You could perfectly well run a 0.18 or 0.13 micron process at 5V

    Sorry, that's just not true. If you power a process with a supply voltage that is too high, you will break the gate oxide.

    For low-voltage 0.18 or 0.13 processes, you are quite correct. However, there's nothing fundamental that prevents you from using a 0.18 or 0.13 process with a thicker oxide and some doping tweaking that would work acceptably at 5V.
  • When I started designing circuits for pay, Vgs was approximately 1.8 volts; now I'm designing with a Vgs of approximately 1.2 volts. Things are exacerbated by the fact that Vt hasn't decreased proportionally.

    ...And this is orthogonal to the point I was shooting down. You could perfectly well run a 0.18 or 0.13 micron process at 5V. You'd just melt down the chip if you were trying to do this with a square centimetre of silicon. You could also run a 1.2 micron process at 1.2 volts if you felt like it, and you'd get current-driving performance just as horrible as that which you're complaining about for 0.13.

    Shrinking feature size isn't responsible for reduced current. The desire to run a constant area of silicon (and hence more-or-less constant parasitic capacitance) at ever-increasing speed without melting it is responsible.
  • And then they mate the other side of the nano-tube device to a neuron coupler to get a direct brain to computer interface. I wonder where it would be best to connect the interface? And how much computing power should be placed in the nanotube modules within the skull. (I suppose that one might be able to use bluetooth to get the signal out, instead of depending on a wired connection. The bandwidth requirements shouldn't be too high.)

    This is sounding more and more like "True Names", et al.


    Caution: Now approaching the (technological) singularity.
  • Moderate this up.

    With the current planar layout of silicon chips, the wiring becomes a real problem. With spatial chips, wiring problems will still be there but will enter a different dimension.

    For instance, a 2D 256Mbit 0.15um RAM uses about 8x9 mm of space, of which 75% is wires.

    On the same technology, a 3D 256Mbit RAM would take up 0.5x0.5x0.125 mm. In other terms, an 8x9x4 mm 3D RAM would have 1Tbit capacity. This needs a 37-bit address to be byte-addressable.
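    (Checking those numbers with a few lines of Python: the volume math actually comes out a bit over 2 Tbit, the same ballpark, and the 37-bit address is exact:)

        import math
        block_mm3 = 0.5 * 0.5 * 0.125          # one 3D 256Mbit block, per the above
        blocks = (8 * 9 * 4) / block_mm3       # blocks fitting in an 8x9x4 mm stack
        bits = blocks * 256 * 2**20            # total capacity in bits
        print(bits / 2**40)                    # ~2.25 Tbit - same ballpark as the 1Tbit above
        print(math.log2(2**40 // 8))           # 37.0 address bits for 1Tbit, byte-addressable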

    Petrus
  • I also saw the part about them being strong fibers, and I thought 'hmm, what about biological uses in repairing nerves?' What about embedding them in muscles or using them as tendons?

    It would be the one nerve somebody would get on, and you could tell them where to get off!

    Maybe now I'll get the ultimate twitch I've wanted for Quake and Unreal.

  • In my dream world it would be built into the OS, making the programming yokel-easy. IIRC, some of the Pentium designs used some chip logic that would guess ahead a certain amount to find shortcuts for processing. If you have a bunch of processors, and one monitoring the guesses, you could get above-optimal performance. Call it the Intuition Engine ©®
    --
  • You should, of course, point out some of the problems you have to confront when going "3-D": the three biggies are thermal dissipation, product yield, and device crosstalk/interference. Heat can be dissipated in (rough) proportion to the area of the package, but heat is generated in (rough) proportion to the volume; as you make it thicker, the heat you generate rises faster than the capacity to eliminate heat. The second problem is that every layer you build up on a chip results in lower yields (because every step in the process incurs a few percent error rate). Finally, when you stack devices, you have to deal with crosstalk/interference between vertically stacked devices (including EM and electron migration). This really isn't my field so I can't comment more, but there is much work that needs to be done in order to make this work.
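    (The area-vs-volume point is easy to see numerically - a rough Python sketch with a made-up 10mm-square die:)

        # heat out scales with surface area, heat generated with volume
        def area_over_volume(side_mm, thickness_mm):
            area = 2 * side_mm**2 + 4 * side_mm * thickness_mm
            volume = side_mm**2 * thickness_mm
            return area / volume

        for t_mm in (0.5, 1.0, 2.0, 4.0):
            print(t_mm, "mm thick:", round(area_over_volume(10.0, t_mm), 2))
        # the ratio falls as the stack thickens: less cooling capacity per unit of heat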

  • >> And I'm not talking about piddly 2/4/8 way or even 64 way SMP machines. I'm talking thousands to millions of independent CPUs all fitting inside a desktop sized PC.

    Well... if you don't really care about the size of the chip... tell me how you plan to put a million chips of today's size inside a desktop PC... it's sure gonna fill your desk :)

    Parallel processing is gaining ground, of course. SMP (Symmetric MultiProcessing) motherboards are not that expensive, and modern OSes support more than one CPU.
    But don't forget today's CPUs are parallel already. They consist of lots of different parts, all working together.

    I say we need smaller chips. I say we need more chips :-)

  • What the hell? Have you ever even done parallel programming? For most programs out there it will be difficult to derive benefit from even 4 processors... much less 1000... I really don't see what gains you expect to occur from having that degree of parallelism...
  • I've been studying this in far too much detail recently :).
    Apparently not enough detail, or not enough time, though. Ignoring short-channel effects (which really can't be ignored, unfortunately), the current that a transistor can provide is also proportional to the square of the difference between the gate-to-source voltage and the threshold voltage:

    I = K(Vgs - Vt)^2, in the saturation region

    When I started designing circuits for pay, Vgs was approximately 1.8 volts; now I'm designing with a Vgs of approximately 1.2 volts. Things are exacerbated by the fact that Vt hasn't decreased proportionally. Still, ignoring Vt, I've got 44% of the drive strength that I had a couple of years ago given the same width/length ratio. The feature size has shrunk from 0.18 to 0.13 microns, which offers some relief on the capacitance but has the opposite effect on resistance.

    Yes, I can make faster circuits, but it's not nearly as easy as just shrinking the technology.

    A huge number of other problems are being ignored here, but this really was just a top-of-my-head ramble based on what I'm seeing right now with real circuits.
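    (The 44% figure drops straight out of the square-law equation above - a quick Python check, with a made-up Vt of 0.4V for the second case:)

        # I = K * (Vgs - Vt)**2 in the saturation region (long-channel approximation)
        def drive(vgs, vt=0.0, k=1.0):
            return k * (vgs - vt)**2

        print(drive(1.2) / drive(1.8))                  # ~0.44: 44% of the old drive strength
        print(drive(1.2, vt=0.4) / drive(1.8, vt=0.4))  # ~0.33: worse when Vt doesn't scale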

  • to a significant extent because that's where there are trained people.

    Okay, Gordon, then why did you lobby to get an extra half-million H-1 visas issued in this country over the next few years?

    Gordon is being disingenuous. He built plants in places like Malaysia and Ireland because that's where the depressed economies and low taxes were. The major design work is still being done in the U.S., using engineers imported from other countries. Minor developments in other countries have come only after intense internal lobbying by engineers formerly imported from those countries. This "because that's where the talent is" is nothing more than post hoc ergo propter hoc hogwash.

    And if Intel wanted American engineers, it'd pay more than AMD or Sun or IBM, which it hasn't, doesn't, and never will. And it would take the initiative to sponsor computer engineering labs at universities, which it specifically refused to do, like it was the stupidest thing it could spend money on, when I suggested they give us a little help at my school. But thanks for the Academic Discount all the same, Gordon.

    When you can rent four foreign engineers for the price of educating and running one local...

    --Blair
  • In my dream world it would be built into the OS, making the programming yokel-easy.

    Unfortunately, this doesn't make it easy.

    To run efficiently in parallel, a program must be written to use a parallel algorithm. The best parallelized OS and hardware in the world won't save you if the algorithm is serial. Writing parallel algorithms is more difficult than writing serial algorithms (usually), and turning a serial program into a parallel program automatically is extremely difficult (some optimizing compilers try to do this when building for parallel platforms, and usually don't do very well).

    To make matters even worse, many problems are intrinsically serial - impossible to reduce to parallel form. As long as even part of your program is irreducibly serial, there will be a limit to how much you can boost performance (Amdahl's Law).

    In summary, trying to use a massively parallel machine to boost performance doesn't work as well as you're assuming (though it's good for many special cases).
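    (Amdahl's Law is worth seeing in numbers - a minimal Python sketch, with an assumed 95%-parallelizable program:)

        # Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
        def speedup(p, n):
            # p = parallelizable fraction of the work, n = number of processors
            return 1.0 / ((1.0 - p) + p / n)

        print(speedup(0.95, 4))       # ~3.5x
        print(speedup(0.95, 1000))    # ~19.6x, already bumping the 1/(1-p) = 20x ceiling
        print(speedup(0.95, 10**6))   # ~20x: a million CPUs buys almost nothing more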
  • by Christopher Thomas ( 11717 ) on Friday April 27, 2001 @07:48AM (#262369)
    The transistor itself is smaller, but again the capacitance isn't necessarily decreasing but the ability of the transistor to drive current is.

    I'm afraid this isn't the case. As you make the gate of a transistor shorter, its ability to drive current increases. Shrink the entire transistor, and as the W/L ratio of the gate stays the same, the transconductance of the transistor stays (roughly) the same.

    Yes, process differences and short-channel effects muck this up somewhat, but the effect is still there. This is why we're bothering to move to finer linewidths at all.

    Until recently, shrinking circuits like this produced a *big* speed boost, because most of the parasitic capacitances scaled directly with device area. Same drive current with 1/x^2 the capacitance means a speedup of x^2 for a linewidth shrink of factor x. Recently, as you point out, wire delays have become relevant, but while that does limit signal propagation speed, it doesn't affect a transistor's ability to drive current.

    I've been studying this in far too much detail recently :).
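    (The x^2 claim in numbers - a tiny Python sketch with a hypothetical 0.18um-to-0.13um shrink:)

        # same drive current into 1/x^2 the capacitance: delay ~ C*V/I falls by x^2
        def relative_delay(shrink_factor):
            return 1.0 / shrink_factor**2

        print(relative_delay(0.18 / 0.13))   # ~0.52: nearly twice as fast for one linewidth shrink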
  • by Brento ( 26177 ) <brento@brentozar ... minus herbivore> on Friday April 27, 2001 @05:48AM (#262370) Homepage
    What I would like to see is cooler chips so I don't toast my lap on the train. If new chips can do that, well, then it may be time to pick up some Big Blue stock...

    You mean like Transmeta [transmeta.com] chips? They don't even need CPU fans. You *DO* own some Transmeta stock, don't you? Surely you're not just shooting off at the mouth about buying IBM's stock?

    Sorry to be grumpy about this, but I can't believe how far Transmeta has fallen, and how much geeks like us have let them down. They built exactly what they said they were going to build - a cooler, lower-power CPU that's 100% Intel-compatible, and nobody seems to be buying the product that we all got so excited about. Everybody says they want cooler chips, lower-power chips, but then it's the ridiculously hot, power-sucking Athlons that get all the press. Wonder why that is?
  • by Matt2000 ( 29624 ) on Friday April 27, 2001 @06:26AM (#262371) Homepage
    I've been reading about all sorts of uses for these carbon nanotubes, the most recent being hyperdense packaging of gases for energy use.

    A good summary of some of those uses, and other resources can be found here. [rdg.ac.uk]

  • by stiefvater ( 101844 ) on Friday April 27, 2001 @06:04AM (#262372) Homepage

    my understanding is that the REAL importance of nanotubes is that they can be built in 3d - not constrained to the 2d of current chip tech.

    -k

  • IBM's Press Release [ibm.com], as well as a neat article [usatoday.com] at USA Today (believe it or not) and, finally, a very informative article [techreview.com] at TechReview.com [techreview.com].
  • by DigitalDreg ( 206095 ) on Friday April 27, 2001 @06:47AM (#262374)
    You know, I disagree.

    Unless you've got a real use for that 2GHz processor, anything else is just wasted electricity/heat.

    Let's look at my house for a minute - a fairly modest setup. The old 486 running Linux to protect my Windows machines from the 3l33t h4x0rz is idle most of the time. Like, as in less than 2% load. I probably should replace it with a little LinkSys box, but I need Linux access every once in a while at home. That would be a great use for a processor whose rate could be increased/decreased dynamically.

    Look at my Pentium 233. It's idle most of the time I spend composing email, browsing, and programming. It's only when I want to do a brain sucking compile or image manipulation that I need more juice.

    Here at work it's even more wasteful. We've got tons of machines that idle all night, spinning hard drives and running stupid screen savers. Power management is often set incorrectly, and the screen savers can burn quite a bit of needless CPU. Wouldn't it be great if they could slow down and save some juice?

    Energy is too cheap .. I wish things were built more like my PalmPilot - the damn thing is so stingy, it's wonderful!
  • Yes, you're describing branch prediction. This is universally used in all modern processors; it gives probably a 2x speedup or so in general. What you're describing is a way to follow multiple branch paths when you reach a branch that is hard to predict. Actually, my PhD advisor did a paper on this before: http://www-cse.ucsd.edu/users/tullsen/ISCA98.ps

    I don't remember the exact numbers, but they probably saw speedups of 15% or less. As you increase the number of paths you follow simultaneously (in your massively parallel processor), the gains in speedup grow, but only so far. Hell, even with perfect branch prediction (rendering your optimization unnecessary) you would probably only get a 50% speedup maximum. That's a hell of a lot less than the 1000x you'd get from a perfect speedup.

    Actually, my current research involves ways to use processors similar to what you describe (but much less parallel - 8-way at the most) to improve the performance of programs. But trust me, it's not as simple as just waving your hands and saying 'the xxxxx will take care of it' - we've been waiting for compilers to automatically take care of it for more than 10 years and they still aren't very good at it.
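    (For anyone who hasn't seen it, the 'guess ahead' logic being described is branch prediction - here's a toy Python sketch of the textbook two-bit saturating counter, not any real CPU's implementation:)

        # two-bit saturating counter: states 0,1 predict not-taken; 2,3 predict taken
        class TwoBitPredictor:
            def __init__(self):
                self.state = 0
            def predict(self):
                return self.state >= 2
            def update(self, taken):
                self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

        pred = TwoBitPredictor()
        pattern = [True] * 8 + [False] + [True] * 8   # a loop branch with one exit miss
        hits = 0
        for taken in pattern:
            hits += (pred.predict() == taken)
            pred.update(taken)
        print(hits, "/", len(pattern))   # 14/17: good, but never perfect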
  • by Chakat ( 320875 ) on Friday April 27, 2001 @06:22AM (#262376) Homepage
    I know you're being silly, but there was talk a few years back about actually using microscopic vacuum tubes for certain very high-speed applications. IIRC, the tubes were supposed to be less susceptible to interference, hence allowing them to be driven faster. Of course, this was about 2-3 years ago, and I haven't heard a lick about it since, so it was probably just some nostalgic hacker's pipe dream.
  • by Invisible Agent ( 412805 ) on Friday April 27, 2001 @07:13AM (#262377)
    And look, Intel is already moving all of its lines to .13 micron, and their existing low-heat chips are already in the same ballpark as Transmeta's in terms of cooling characteristics (albeit still with a thin fan package).

    Meanwhile, Transmeta is still struggling to break through to >700MHz speeds (necessary to compete with even slower Intel chips, considering that the instructions are emulated, Transmeta propaganda aside). So net-net, Intel is going to roll over Transmeta like a Mack truck.

    Which is sad; Transmeta has a lot of promise, but results need to live up to hype. Brento asks why geeks have abandoned Transmeta - didn't we want cooler chips? Yes, but first we want them to be fast.

    Invisible Agent
  • by Conare ( 442798 ) on Friday April 27, 2001 @05:13AM (#262378) Journal
    "IBM researchers said they've found a process by which they can form batches of nanotube transistors, which are as small as 10 atoms across. Until now, nanotubes had to be positioned one at a time or by random chance, IBM said."

    batches != individually manufactured
  • by garcia ( 6573 ) on Friday April 27, 2001 @06:01AM (#262379)
    b/c the chips are just not fast enough. Low power devices are all well and good, but most of us want to compete w/the other geeks and have a system that just fucking screams!

    I love the work that they did, but I have no use for it. I don't want a 700MHz processor that isn't running at 700 all the time. I want 2GHz that runs on low power and low heat, and all the time.

    I am sure that most agree w/me.
  • by Petrus ( 17053 ) on Friday April 27, 2001 @06:15AM (#262380)
    Carbon structures can withstand far higher temperatures than silicon. A carbon nanotube will probably generate less heat for the same performance, because it moves smaller charges via better conductors over a shorter distance. However, since your processor is only part of your laptop, your processing requirements will increase, and your laptop will only get thinner, not smaller (for the screen size), it will dissipate perhaps only half of its current energy.

    The carbon nanotubes will get much hotter, just because they can. The CPU core will probably easily be able to reach between 400 and 600 deg C. That can be effectively distributed using a carbon nanotube heatsink, so there will be no burning hot spots on your laptop. BTW, that is another application of carbon nanotubes - very effective heatsinks. What conducts electricity efficiently often conducts heat very well too. Heatsinks will probably be the first carbon nanotube technology in your laptop.

    Petrus Vectorius
  • Stamping 2N2222 on these things, just terrible.

    --

  • "Researchers have already found a way to form the nanotubes into transistors 500 times smaller than today's silicon based transistors, IBM said."

    The researchers they are referring to are, amongst others, from the "Delft University of Technology" (Delft, The Netherlands), better known as "de TU-Delft"

    I know they did a Science article on this subject, but since www.sciencemag.org doesn't even have abstracts on-line, I can't verify that this is the one:

    Kouwenhoven L, Science Vol. 275 (28 March 1997) pages 1896-1897. Title: "Single-Molecule Transistors"

    There are also a few other articles about Nanowires in Science by the same guy. There's also a (very) short piece [tudelft.nl] on the "TU-Delft" homepage.

  • by krugdm ( 322700 ) <slashdot.ikrug@com> on Friday April 27, 2001 @05:21AM (#262383) Homepage Journal

    So would a chip made with nanotube transistors run cooler than current chips do? While I'm all for smaller chips, my laptop really can't get much smaller than it is now due to keyboard size and the fact that I like bigger, not smaller screens.

    What I would like to see is cooler chips so I don't toast my lap on the train. If new chips can do that, well, then it may be time to pick up some Big Blue stock...

  • By themselves, perhaps, nanotubes are not going to make computers faster. But if one can pack the current equivalent of 8 CPUs into the space currently occupied by one, and run them SMP, or even (perhaps!) MPP, that would in fact be a speed increase in terms of end use.

    What I think makes this even more exciting is the article I read via /. yesterday about a 1x1x1 cm glass cube holding 1TB of data. Add together the space of a current IDE drive (say, 8x10x21 cm, or 1680 cm^3/1680 TB) and a nanotube CPU running the equivalent of 8 CPUs as SMP, and all in the space of a current small desktop machine.

  • by Anonymous Coward on Friday April 27, 2001 @05:40AM (#262385)
    Give up this quest for ever smaller and faster single CPUs. We are just about at a dead end with this tech.

    Where are the massively parallel machines? And I'm not talking about piddly 2/4/8 way or even 64 way SMP machines. I'm talking thousands to millions of independent CPUs all fitting inside a desktop sized PC.

    Speed isn't the end all answer to computing. Look at the human brain and how insanely slow large ion synapses are yet look what the brain can do. It's not the speed. It's the parallelism.

  • by Anonymous Coward on Friday April 27, 2001 @05:36AM (#262386)
    Your understanding is basically correct - buckyballs and carbon nanotubes are generated in what amounts to a low-pressure carbon arc plasma. This helps to prevent normal chemistry from interfering and pulling out carbons before it is "too late" and a stable structure has been produced.

    However, this doesn't require that the transistors be individually manufactured. Many molecules are produced in the plasma, and the next step of the work is to separate out the ones that are desired.

    Right now the hard part of the process is moving the molecule into the right place to make the transistor. That is what, so far, must be done individually. However, attaching a type of molecule to a particular place is not something that must necessarily be done by hand. If you think about it, lithography is all about getting the etchant (or modern equivalent) to deliver itself to the right place. Also, recent 'designer' medicines have useful properties of self-delivery. So although this is a tricky problem, there is every reason to believe the engineers will find a solution.

    -posting anonymously due to unjustified "bitchslap" thanks to the fascist editors
  • by grub ( 11606 ) <slashdot@grub.net> on Friday April 27, 2001 @05:12AM (#262387) Homepage Journal
    "My understanding of the process is that the nanotubes form in very hot furnaces by passing lasers and high current electric arcs through the carbon vapour in the furnace. This is not an easy process, and essentially requires that these transistors are individually manufactured. "

    Certainly right now it's cost-ineffective to make large batches of these; the article says as much. Remember, though, that the first transistors were hand-soldered devices.

    It's just a matter of time.

  • by wass ( 72082 ) on Friday April 27, 2001 @06:56AM (#262388)
    Transistors and the interconnections between them are three-dimensional constructs. All of the dimensions have scaled downwards, though not all at the same rate. There are a number of things that impact real-world circuit performance as a result of this. Wire, for instance, has a smaller cross-section; the resistance of a wire is inversely proportional to this cross-section (think of how water flows through a straw vs. how water flows through a garden hose). The capacitance is inversely proportional to its distance from ground, and this distance has shrunk.

    One of the largest factors inhibiting the speed of microelectronics is parasitic reactances (inductive and capacitive), which you've casually mentioned, but not fully.

    Every length of wire has some finite inductance, and the longer the wire, the more inductance it has. This is one of the hardest parasitic effects to kill. To see how this limits speed, picture how an inductor works. It basically stores energy in a magnetic field around itself, dependent on the current through the device. Magnetic fields don't like to change instantly, and when the current does change too quickly, it takes time for the magnetic field to collapse (or build up). This causes these parasitic inductances to act as virtual low-pass filters all over the circuit. Thus large inductances prevent quickly-changing currents, which limits quickly-changing voltages, which ultimately limits quickly-changing logic states of the device.

    This only covers self-inductance; the inductance really depends on the location of all components in the circuit, as their magnetic fields all interact and they'll exhibit mutual inductances too.

    The other reactive effect is parasitic capacitance, in which an electric field is created between various components, usually between circuit-board traces and ground planes, or between layers within the lithographed silicon devices. You mention capacitance, but the largest effect comes from the surface area in common, not just the distance between traces. These parasitic capacitances don't like it when the electric field between them changes too quickly, so they also act as virtual low-pass filters all over the place, and also limit quickly-changing voltages. I.e., with a single-pole low-pass filter, it takes time for the voltage to ramp up, which means the time to ramp up to the threshold for a high/low transition will be limited.

    Sometimes, the parasitic inductances and capacitances work together in nasty ways such that they cause oscillations (or ringing), which is a real pain in the butt to try to kill off.

    This is probably the primary reason engineers want to shrink circuits - to kill the parasitics. That's why leads are kept as short and as narrow as possible on boards.

    I always hear overclocking enthusiasts talk about cooling their CPUs to lower and lower temperatures as if there were no limit to their clock speeds. But they don't seem to acknowledge these inherent low-pass filters that the parasitics put all over the place.
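    (A concrete feel for those 'virtual low-pass filters', with made-up parasitic values in Python:)

        # a single-pole RC low-pass needs ~2.2*R*C to swing from 10% to 90%
        R = 50.0       # ohms, assumed trace resistance
        C = 10e-12     # farads, assumed parasitic capacitance
        rise = 2.2 * R * C
        print(rise)               # ~1.1e-9 s edge time
        print(1 / rise / 1e9)     # ~0.9 GHz-ish ceiling before the edges smear together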
    __ __ ____ _ ______
    \ V .V / _` (_-<_-<
    .\_/\_/\__,_/__/__/

  • by Alpha State ( 89105 ) on Friday April 27, 2001 @05:16AM (#262389) Homepage
    Which is the biggest reason why the description, at least, is inaccurate. Faster transistors can help to a point - but by making the entire chip die smaller, and reducing the distance between the logic gates - that is where the speed gain is to be found.

    Which is one reason why smaller == faster. Smaller transistors means less distance between them, which means shorter signal paths.

    The other big factor is the energy involved in switching the transistor, which must be dissipated as heat. Smaller transistors also mean less capacitance (i.e., less charge to alter the state of the transistor), which means less current and less waste heat.

    Unfortunately, these devices are getting to the point where they are triggered by only a few electrons, which means they will be easily affected by thermal noise. This will make designing a processor extremely difficult.
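    (The usual rule of thumb for that switching energy is P ~ alpha*C*V^2*f - a Python sketch with made-up numbers, mainly to show the V^2 term:)

        # dynamic power: activity factor * switched capacitance * V^2 * clock frequency
        def dynamic_power(alpha, c_farads, v, f_hz):
            return alpha * c_farads * v**2 * f_hz

        print(dynamic_power(0.1, 50e-9, 1.8, 1e9))   # ~16.2 W
        print(dynamic_power(0.1, 50e-9, 1.2, 1e9))   # ~7.2 W: drop the voltage, power plummets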

  • by sl3xd ( 111641 ) on Friday April 27, 2001 @05:04AM (#262390) Journal
    Unfortunately, we're getting to the point that the actual speed of the transistor/gates is nearing the speed of the traces between the different gates. Simply having faster transistors won't help bring faster computers.

    We've begun to reach the point where the distance between the gates is the limiting factor; the amount of time it takes a signal to traverse the distance between gates being the critical part.

    Which is the biggest reason why the description, at least, is inaccurate. Faster transistors can help to a point - but by making the entire chip die smaller, and reducing the distance between the logic gates - that is where the speed gain is to be found.
  • by micromoog ( 206608 ) on Friday April 27, 2001 @05:23AM (#262391)
    I thought the whole point of transistors was to get away from tubes! And isn't it going to be a pain to find and replace those tiny little tubes when they burn out?
  • by hillct ( 230132 ) on Friday April 27, 2001 @05:32AM (#262392) Homepage Journal
    For more deep background, check out the Nanotube website at: http://www.pa.msu.edu/cmp/csc/nanotube.html [msu.edu]


    ---
  • by Fatal0E ( 230910 ) on Friday April 27, 2001 @06:08AM (#262393)
    "Intel has technical operations in China, Russia, India and so forth, and we have them there frankly to a significant extent because that's where there are trained people." (See! It has nothing to do with money!)

    I'll do my best not to flame you, but come on, use your head. Even if the number of technically competent people were the same here (in the US) as in India, China, Russia and so forth, they'd STILL have all those plants and research centers abroad.

    It's all about the money, because they don't have to pay as much in those places. Take GM, or whoever it was that moved a whole car manufacturing plant to Mexico. Are American plant workers any more skilled at manning a production line than their Mexican counterparts (after being trained)? In Intel's case, I'm sure there aren't as many American engineers as in Russia, China, etc., but you have to admit saying it's not about the money (esp. in Intel's case) is kinda ridiculous.
  • by ishrat ( 235467 ) on Friday April 27, 2001 @05:27AM (#262394) Homepage
    Nanotechnology would allow materials to be produced on a very small scale but in much higher volumes and at lower costs than traditional materials and manufacturing processes.

    "It will take several years and additional breakthroughs before nanochips become as prevalent as silicon chips are today," said Harvard Professor Charles Lieber.

    A possible shortcut, Lieber said, is the development of hybrid silicon-nanotechnology chips. Hybrid chips could be ready in about five years, he said, while pure nanochips will likely come within a decade.

    Hybrid chips would be extremely dense, allowing substantial gains in processor speeds or the amount of data a memory chip can hold.

    Looks like a substantial achievement from all this.

  • by Bonker ( 243350 ) on Friday April 27, 2001 @05:14AM (#262395)
    Wasn't there an article just yesterday about a new litho technology that allowed chipmakers to etch at the molecule-size scale?

    Faster transistors may give you a few more FpS in the FPS of your choice (I made a funny!) but Nanotube-transistors have a much greater application than speeding up existing technology. This kind of logic device can be used in small and nano-sized robots, increasing the range of tasks they can do and making them more controllable.

    Ultimately, this is going to be the kind of advance we cite as making the artery-scraper nanites and nerve-tissue replacements possible. Don't limit yourselves to thinking *solely* about data processing when new logic devices come along.

  • by eXtro ( 258933 ) on Friday April 27, 2001 @05:38AM (#262396) Homepage
    Most speed improvement comes through architectural techniques and not smaller transistor size. Smaller transistors might enable the improvements, since you can put more in a given area, but by and large a smaller feature size at this point in the game doesn't win much, if any, speed.

    Transistors and the interconnections between them are three-dimensional constructs. All of the dimensions have scaled downwards, though not all at the same rate. There are a number of things that impact real-world circuit performance as a result of this. Wire, for instance, has a smaller cross-section; the resistance of a wire is inversely proportional to this cross-section (think of how water flows through a straw vs. how water flows through a garden hose). The capacitance is inversely proportional to its distance from ground, and this distance has shrunk. It's also proportional to the surface area, which would tend to lower the capacitance. The neighbouring wires are also closer, however, which means more capacitance. What this means is that signals are travelling over a highly resistive, capacitive network. Making sharp transitions (required for 'fast' digital circuits) is very difficult - sort of like trying to shake a skipping rope and make a square wave.

    The transistor itself is smaller, but again the capacitance isn't necessarily decreasing but the ability of the transistor to drive current is. The current required from a transistor is proportional to the transition rate of the signals. Since we're trying to run faster we need more current, but our devices are physically smaller. This is why in high speed circuits, such as in the front side bus on the PIII or PIV, you'll typically see very large transistors and much wider wire.

    There are other effects as well, such as higher electric fields in the drain and source of the transistor. There is a voltage difference between the drain or source of the transistor and the body; since this difference occurs over less 'distance', the electric field is much stronger. This is why voltages keep dropping (which in turn makes the gain harder to get, too!)

    Sorry if this is a bit of a ramble.
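    (The straw-vs-garden-hose point, in Python - the copper resistivity is real, the wire dimensions are made up:)

        # R = rho * L / A: shrink the cross-section and resistance climbs
        RHO_CU = 1.7e-8   # ohm*m, resistivity of copper
        def wire_ohms(length_um, width_um, thickness_um):
            area_m2 = (width_um * 1e-6) * (thickness_um * 1e-6)
            return RHO_CU * (length_um * 1e-6) / area_m2

        print(wire_ohms(1000, 0.5, 0.5))    # a 1mm run at 0.5um x 0.5um: ~68 ohms
        print(wire_ohms(1000, 0.25, 0.5))   # halve the width: ~136 ohms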

  • by gus goose ( 306978 ) on Friday April 27, 2001 @05:07AM (#262397) Journal
    My understanding of the process is that the nanotubes form in very hot furnaces by passing lasers and high current electric arcs through the carbon vapour in the furnace. This is not an easy process, and essentially requires that these transistors are individually manufactured.

    This is by no means a commercially viable process.... yet. Don't hold your breath.
  • by science geek ( 447130 ) on Friday April 27, 2001 @05:12AM (#262399)
    This is way cool. More deep science stuff about nanotubes is at the homepages at http://www.research.ibm.com/nanoscience/nanotubes.html and http://www.research.ibm.com/nanoscience/index1.html

    Can I get these with filters?
